“I don’t know how it works” is the last thing you want to hear from an AI developer.
We’ve all heard stories about the amazing accomplishments of ChatGPT and other large language models (LLMs). They figure out how to cheat to complete tasks or win games. Translate complex physics into metered poetry. Try to convince a user that his marriage is ruined. Pass the bar exam.
It turns out nobody can explain exactly how they did it.
The story gets technical pretty quickly, which is probably the reason we Neanderthals aren’t rioting in the streets about it.
Here’s the rub in plain English: AI models are going beyond their expected ability to learn from existing data and are inventing new ways to assess and apply it. These are called “emergent abilities,” and they include the capacity to learn in real time as users enter queries, not just process them, and to make novel associations between concepts.
Only the AI haven’t been coded to do it.
Some LLMs have shown the ability to run code, as if they have a memory (which they don’t) and a computational component (also not there). They figure out workarounds, as if there’s some sort of “mind” at work, even though no programmer gave them that ability (nor can anyone find evidence of where, let alone how, such a “mind” operates).
What’s even scarier is that neural networks, the models on which many of today’s AI tools are based, have befuddled experts for decades.
One explanation is that their early developers used the human brain as a template for imagining and then designing how neural networks might work. They wanted neural networks to be able to work on their own, the way human beings do.
The rub is that nobody can explain how human brains work, at least not exactly. One researcher working on neural networks said back in 1998, “our major discovery seems to be an awareness that we really don’t know what’s going on.”
For instance, neural networks form “layers” of operations. Some handle inputs and outputs, but others sit somewhere in between; these “hidden layers” morph over time based on the network’s experience.

This makes how they think, or what they’re thinking about, all but opaque to the experts who coded them in the first place.
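To make that concrete, here is a minimal sketch in Python with NumPy of a tiny network with one hidden layer. It isn’t anyone’s production system, and every number in it is illustrative; the point is that the “hidden” part is nothing more than grids of weights that shift a little with each training example, and nothing in those numbers announces what the network has learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 3 inputs -> 4 hidden units -> 1 output.
# The "hidden layer" lives entirely in these weight matrices.
W_hidden = rng.normal(size=(3, 4))   # input -> hidden weights
W_output = rng.normal(size=(4, 1))   # hidden -> output weights

def forward(x):
    """Run one input through the network and return its output."""
    hidden = np.tanh(x @ W_hidden)   # hidden-layer activations
    return hidden @ W_output, hidden

# One training example: an input and the answer we want.
x = np.array([[0.5, -1.0, 2.0]])
target = np.array([[1.0]])

prediction, hidden = forward(x)
error = prediction - target

# Backpropagation for this tiny case, worked out by hand.
grad_output = hidden.T @ error
grad_hidden = x.T @ ((error @ W_output.T) * (1 - hidden**2))

# One gradient-descent nudge (learning rate 0.1).
W_output -= 0.1 * grad_output
W_hidden -= 0.1 * grad_hidden

# After millions of such nudges the weights encode whatever the
# network has "learned" -- but printing them just shows a grid of
# floating-point numbers, with no label saying what any of it means.
print(W_hidden)
```

Scale that same structure up to hundreds of layers and billions of weights and you have a modern model, which is why even the people who built it can’t point to the spot where a particular behavior lives.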
Another tech phrase is “deep learning,” which seems to be a broad label for all of those capabilities we can’t quite understand. It’s emerging as an antonym for “transparency.”
The problems that can arise from this opacity are many, but can be distilled into a simple statement:
If we can’t see how AI make decisions, how can we predict them?
It isn’t a problem if we want smart machines to be as capricious and risky as human beings. It would probably be the ultimate realization of the dream for neural networks.
Maybe that’s the direction we go for government regulation: forget reviewing systems in development or trying to build responsible or ethical AI, and instead simply rely on catching systems when they do wrong.
Maybe intentions don’t matter, whether rendered in electronic circuits or biological blobs.
Stay tuned for the first recruiting posters for The Robot Police.