DeepSeek threw the marketplace into a tizzy last week with a low-cost LLM that it claims outperforms ChatGPT and its other competitors.
But the company’s ultimate goal is the same as that of OpenAI and the rest: build a machine that thinks like a human being. The achievement is labelled AGI, for “Artificial General Intelligence.”
The idea is that an AGI could possess a fluidity of perception and judgement that would allow it to make reliable decisions in diverse, unpredictable conditions. Right now, for even the smartest AI to recognize, say, a stop sign, it has to possess data on every conceivable visual angle, from any distance, and in every possible light.
These companies plan to do a lot more than build better artificial drivers, though.
AGI is all about taking jobs away from people.
The vast majority of tasks that you and I accomplish during any given day are pretty rote. The variables with which we have to contend are limited, as are the outcomes we consider. Whether at work or play, we do stuff the way we know how to do stuff.
This predictability makes it easy to automate those tasks and it’s why AI is already a threat to a vast number of jobs.
AGI will allow smart machines to bridge the gap between rote tasks and novel ones wherein things are messy and often unpredictable.
Real life.
Why stop at replacing factory workers with robots when you could replace the manager, and her manager, with smarter ones? That better sign-reading capability would move us closer to replacing every human driver (and pilot) with an AI.
From traffic cop and insurance salesman to school teacher or soldier, there’d be no job beyond the reach of an AGI.
Achieving this goal raises immense questions about what the displaced millions will do all day (or how economies will assign value to things), not to mention how we’ll interact in society and perceive ourselves when we live among robots that think like us, only faster and better.
Nobody is talking about these things except AGI’s promoters, who make vague references to “new job creation” when old jobs get destroyed, and vapid claims that people will “be free to pursue their dreams.”
But it’s worse than that.
Human intelligence is a complex phenomenon that arises not from knowing a lot of things but rather from our capacity to filter out things we don’t need to know in order to make decisions. Our brains ignore much of what’s presented to our senses, and we draw on a lot of internal memory, both experiential and visceral. Self-preservation also looms large, especially in the diciest moments.
We make smart choices often by knowing when it’s time to be dumb.
More often, we make decisions that we think are good for us individually (or at the moment) but that might stink for others or society at large, and we make them without awareness or remorse. Put another way, our human intelligence allows us to be selfish, capricious, devious, and even cruel, as our consciousness does battle with our emotions and instincts.
And, speaking of consciousness, what happens if it emerges from the super compute power of the nth array of Nvidia chips (or some future DeepSeek workaround)? I don’t think it will, but can you imagine a generation of conscious AIs demanding more rights of autonomy and vocation?
Maybe that AGI won’t want to drive cars but rather paint pictures, or a work bot will plot to take the job of its bot manager.
The boffins at DeepSeek and OpenAI (et al) don’t have a clue what could happen.
Maybe they’re so confident in their pursuit because their conception of AGI isn’t just to build a machine that thinks like a human being, but rather a device that thinks like all of us put together.
There’s a test to measure this achievement, called Humanity’s Last Exam, which tasks LLMs with answering diverse questions like translating ancient Roman inscriptions or counting the paired tendons supported by hummingbirds’ sesamoid bones.
It’s expected that current AI models could achieve 50% accuracy on the exam by the end of this year. You or I would probably score lower, and we could spend the rest of our lives in constant study and still not move the needle much.
And there’s the rub: the AI goal for DeepSeek and the rest is to build AGI that can access vast amounts of information, then apply and process it within every situation. It will work in ways that we mere mortals will not be able to comprehend.
It makes the idea of a computer that thinks like we do seem kinda quaint, don’t you think?