Bigger AIs Aren’t Better AIs

Turns out that when large language models (“LLMs”) get larger, they get better at certain tasks and worse at others.

Researchers in a group called BigScience found that feeding LLMs more data made them better at solving difficult questions – likely those that required access to that greater data and commensurate prior learning – but at the cost of delivering reliably accurate answers to simpler ones.

The chatbots also got more reckless in their willingness to tee up those potentially wrong answers.

I can’t help but think of an otherwise smart human friend who gets more philosophically broad and sloppily stupid after a few cocktails.

The scientists can’t explain the cause of this degraded chatbot performance, as the inner workings of ever more complex LLMs make such cause-and-effect assessments more inscrutable. They suspect it has something to do with user variables like query structure (wording, length, order), or maybe with how the results themselves are evaluated, as if a looser definition of accuracy or truth would improve our satisfaction with the outcomes.

The happyspeak technical term for such gyrations is “reliability fluctuations.”

So, don’t worry about the drunken friend’s reasoning…just smile at the entertaining outbursts and shrug at the blather. Take it all in with a grain of salt.

This sure seems to challenge the merits of gigantic, all-seeing and all-knowing AIs that will make difficult decisions for us.

It also raises questions about why the leading tech toffs are forever searching for more data to vacuum into their ever-bigger LLMs. There’s a mad dash to achieve artificial general intelligence (“AGI”) because it’s assumed there’s some point of hugeness and complexity that will yield a computer that thinks and responds like a human being.

Now we know that the faux person might be a loud drunk.

There’s a contrarian school of thought in AI research and development that suggests smaller is better, because a simplified and shortened list of tasks requires less data, uses less energy, and spits out far more reliable results.

Your smart thermostat doesn’t need to contemplate Nietzsche; it just needs to sense and respond to the temperature. It’s also less likely to decide one day that it wants to annihilate life on the planet.

We already have this sort of AI distributed in devices and processes across our work and personal lives. Imagine if development were focused on making these smaller models smarter, faster, and more efficient, or on finding new ways to clarify and combine tasks so that networks of connected AIs could find big answers by asking ever-smaller questions.

Humanity doesn’t need AGI or ever more garrulous chatbots to solve even our most seemingly intractable problems.

We know the answers to things like slowing or reversing climate change, for instance, but we just don’t like them. Our problems are social, political, economic, psychological…not really technological.

And the research coming from BigScience suggests that we’d need to take any counsel from an AI on the subject with that grain of salt anyway.

We should just order another cocktail.

AI And The Dancing Mushroom

It sounds like the title of a Roald Dahl story, but researchers have devised a robot that moves in response to the wishes of a mushroom.

OK, so a shroom might not desire to jump or walk across a room, but it does possess neuron-like branch-things called hyphae that transmit electrical impulses in response to changes in light, temperature, and other stimuli.

These impulses can vary in amplitude, frequency, and duration, and mushrooms can share them with one another in a quasi-language that one researcher believes yields at least 50 words that can be organized into sentences.

Still, to call that thinking is probably too generous, though a goodly portion of our own daily cognitive activity is no more, er, thoughtful than responding to prompts with the appropriate grunt or simple declaration.

But doesn’t it represent some form of intelligence, informed by some type of awareness?

The video of the dancing mushroom robot suggests that the AI sensed the mushroom’s intention to move. That’s not necessarily true, since the researchers had to make some arbitrary decisions about which stimuli would trigger which actions, but the connection between organism and machine is still quite real, and it suggests stunning potential for the further development of an AI that mediates that interchange.
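To make that stimulus-to-action idea concrete, here’s a hypothetical sketch (in Python) of the kind of hand-picked mapping involved: features of a recorded electrical spike get bucketed into a few fixed robot commands. The thresholds, units, and command names are my own illustrative assumptions, not the researchers’ actual setup.

```python
# Hypothetical sketch of a stimulus-to-action mapping, NOT the actual
# experimental setup: spike features recorded from the fungus (amplitude
# in millivolts, frequency in hertz) are bucketed into robot commands.
# All thresholds and command names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Spike:
    amplitude_mv: float  # peak amplitude of the electrical impulse
    frequency_hz: float  # how often similar impulses are occurring


def choose_action(spike: Spike) -> str:
    """Map a spike's features onto one of a few fixed robot actions."""
    if spike.amplitude_mv < 0.2:
        return "hold_still"      # treat weak signals as noise
    if spike.frequency_hz > 1.0:
        return "walk_forward"    # rapid firing -> sustained movement
    return "shift_weight"        # occasional strong spikes -> small motion


if __name__ == "__main__":
    print(choose_action(Spike(amplitude_mv=0.8, frequency_hz=1.4)))  # walk_forward
```

The “arbitrary decisions” live entirely in those thresholds; the mushroom supplies the signal, but the researchers decide what it means.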

Much is written about the race to make AI sentient so that we can interact with it as if we were talking to one another, and so that it could then go on to resolve questions as we would, only better, faster, and more reliably.

Yet, like our own behavior, a majority of what happens around the world doesn’t require such higher-level conversation or contemplation.

There are already many billions of sensors in use that capture changes in light, temperature, and other stimuli, and then prompt programmed responses.

Thermostats trigger HVAC units to start or stop. Radars in airplanes tell pilots to avoid storms, and sensors in your car trigger a ping when you drift over a lane divider. My computer turned on this morning because the button I pushed sensed my intention and translated it into action.
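For what it’s worth, that whole sense-and-respond pattern fits in a few lines of code. Here’s a minimal, hypothetical thermostat sketch; the setpoint, deadband, and the read_temperature()/set_hvac() hooks are placeholders I made up, not any real product’s API.

```python
# A minimal sketch of the sense-and-respond pattern described above:
# a thermostat that toggles an HVAC unit around a setpoint.
# SETPOINT_F, DEADBAND_F, read_temperature(), and set_hvac() are
# hypothetical placeholders, not any particular product's API.

SETPOINT_F = 70.0   # desired temperature
DEADBAND_F = 1.5    # hysteresis band to avoid rapid on/off cycling


def read_temperature() -> float:
    """Stand-in for a real sensor read."""
    return 72.3


def set_hvac(cooling_on: bool) -> None:
    """Stand-in for a real actuator command."""
    print("cooling", "ON" if cooling_on else "OFF")


def step(cooling_on: bool) -> bool:
    """One sense-and-respond cycle: no model of the world, just a threshold."""
    temp = read_temperature()
    if temp > SETPOINT_F + DEADBAND_F:
        cooling_on = True
    elif temp < SETPOINT_F - DEADBAND_F:
        cooling_on = False
    set_hvac(cooling_on)
    return cooling_on


if __name__ == "__main__":
    step(cooling_on=False)
```

No contemplation of Nietzsche required: a threshold, a trigger, a programmed response.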

Big data reads minds, of a sort, by analyzing enough external data that a predictive model can suggest what we might internally plan to do next. It’s what powers those eerily prescient ads or the social media content that somehow has a bull’s-eye focus on the topics you love to get angry about.

The mushroom robot research suggests ways to make these connections – between observation and action, between internal states of being and the external world – more nuanced and direct.

Imagine farms where each head of lettuce manages its own feeding and water supply. House pets that articulate how they feel beyond a thwapping tail or sullen quiet. Urban lawns that can flash a light or shoot a laser to keep dogs from peeing on them.

AI as a cross-species Universal Translator.

It gets wilder after that. Imagine the complex systems of our bodies being able to better manage their interactions, starting with a bespoke vitamin prescribed to start every day and leading to more real-time regulation of water intake and the like (or microscopic AIs that literally get inside of us and encourage our organs and glands to up their game).

Think of how the AI could be used by people who have infirmities that impede their movement or even block their interaction with the outside world. Faster, more responsive exoskeletons. Better hearing and sight augmentation. Active sensing and responses to counter the frustrating commands of MS or other neurological diseases.

Then, how about looking beyond living things and applying AI models to sense the “intentionality” of, say, a building or bridge to stay upright or resist catching fire, and then empowering them to “stay healthy” by adjusting how weight and stress are distributed.

It’s all a huge leap beyond a dancing mushroom robot, but it’s not impossible.

Of course, there’s a downside to such imagined benefits: The same AI that can sense when a mushroom wants to dance will know, by default, how to trigger that intention. Tech that better reads us will be equally adept at reading to us.

The Universal Translator will work both ways.

There are ethical questions here that are profound and worthy of spirited debate, but I doubt we’ll ever have them. AI naysayers will rightly point out that a dancing mushroom robot is a far cry from an AI that reads the minds of inanimate objects, let alone people.

But AI believers will continue their development work.

The dance is going to continue.