AI Replacing People? What Could Go Wrong?

We are going to see our government run by smart machines long before businesses do the same, and it looks like the transformation will be ugly.

Elon Musk’s DOGE squads aren’t waiting for management consultants to draft complicated slide presentations on process flow or some other blather that normally makes them rich; they’re dismantling Federal departments and agencies wholesale, then waiting to see how the destruction 1) reveals what needs to happen, and 2) shows how things used to get done, so a computer program can be trained to do it.

The approach will occasionally require calling back some fired workers to do stuff, like control air traffic so planes don’t crash into each other, but it generally tolerates a fair amount of disruption and pain. The only lasting relief will come from automation.

Processing Social Security or IRS refund checks? Identifying the next pandemic or impending hurricane? Preventing another mid-air plane collision? 

It might take some missed payments or a dose of another plague for the DOGE experts to identify what needs their attention, but then lucrative development contracts can be written for tech companies to address it.

People who excuse what’s happening are mostly missing the point, whether they’re offering the worn caveat that “well, there’s certainly bloat in government staffing and budgets” or loudly kvelling that “they’re sticking it to the libtards.”

The transformation isn’t about politics. It’s about replacing people with machines, regardless of their political persuasion or the purposes of their funding and work.

In fact, nobody voted for it. There were no “replace our government with AI” or “resist the AI takeover” promises in the planks of either party. We had no robust public debate about if, why, how, or when we should evict humans from their jobs and either replace them with automation or simply leave their work undone.

Our government has never functioned as a well-oiled machine. It wasn’t designed to be one from the get-go, and the balancing of citizens’ competing and often incompatible needs and desires is going to yield inefficient solutions, by design.

It’s called compromise, and its goal is to make everyone at least somewhat happy with its outcomes. More importantly, it leaves open our ability and right to readjust things to yield a differently imperfect but nominally satisfying arrangement.

What’s happening now to our government is an effort to end that arrangement and quite literally hardwire not only how things get done but what gets done in the first place.

This is where the nonsense about “a deep state” comes into play.

DOGE’s carte blanche ticket for destruction is based on the assumption that the government is staffed by people whose political beliefs bias their decision-making, which makes them not just inefficient but wrong. We should be freed from their oversight and impact to be inefficient and wrong on our own.

Let’s assume for chuckles that the ideology is absolutely correct. Won’t replacing people with AI simply swap one set of biases for another? A compromise codified into an algorithm is still a compromise (just someone else’s).

Worse, we voters won’t have visibility into the criteria those coders use to program AIs to make decisions (beyond getting fed some pablum about “efficiency”) and, worse yet, we won’t have the capacity to change them. AI will belong to its owners and, over time, will likely develop biases unanticipated by its coders, too.

Every Federal employee walking out of an office with their belongings in a file box is a reminder of the blunt and brutal transformation that’s underway, and of the fact that we’ve neither been told about nor participated in a conversation about what we’re going to get from it.

What could possibly go wrong?

Teaching Old Dogs New Tricks

Boston Dynamics has revealed that it has figured out how to teach its old four-legged robots new tricks.

Without human help.

The technique is called reinforcement learning, which every human being relies on shortly after birth to learn how to stand, avoid walking into walls, and scratch itches if and when possible.

AI uses it, too, as the large language models driving ChatGPT and its many competitors assess what answers to queries work best and then adjust their models to favor those replies next time.
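To make the idea concrete, here’s a minimal sketch of that trial-and-error loop in Python — an “epsilon-greedy bandit” that learns which action pays off best purely from rewards. It’s an illustration of the principle only; the actions and reward numbers are invented, and it bears no resemblance to Boston Dynamics’ actual training code.

```python
import random

N_ACTIONS = 3
values = [0.0] * N_ACTIONS   # learned estimate of each action's payoff
counts = [0] * N_ACTIONS
EPSILON = 0.1                # fraction of the time we explore at random

def reward(action: int) -> float:
    """Stand-in for the environment; action 2 happens to work best."""
    return random.gauss([0.2, 0.5, 0.8][action], 0.1)

for _ in range(10_000):
    # Mostly exploit the best-known action, occasionally try something new.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: values[a])
    # Nudge the estimate toward the reward actually observed.
    counts[action] += 1
    values[action] += (reward(action) - values[action]) / counts[action]

print(values)  # converges toward roughly [0.2, 0.5, 0.8]
```

The mechanism is the same whether the “actions” are a chatbot’s candidate replies or a robot dog’s leg movements: whatever earned reward gets favored next time.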

Boston Dynamics is a pioneer in mobile robotics, its videos and trade show demonstrations of skittering headless dogs announcing by example the robot takeover of the world years before Sam Altman took credit for the threat. Accomplishing those movements in physical space, especially the more complicated ones, took laborious human coding and/or control, as well as real-world training.

Now, it seems that the company has figured out how its two- and four-legged robots can speed past our fleshy, limited concepts of preparation and practice and improve their coding so they’re ready to do better when next they’re turned on.

Just think if dreaming of being a world-class ballerina or finesse hockey skater was all you had to do to become one.

The technology is as frightening as it is fascinating, insofar as there’s ample evidence of AIs teaching themselves how to cheat to win games, cut corners on tasks, or simply make shit up.

Turns out that programming machines to be moral and ethical is just as hard as it is to do with people, so good luck cracking that code. It will be fascinating to witness all of the strange and potentially threatening things the robot dogs and humanoids decide they’d like to do.

As frightening as that prospect sounds, it’s not what scares me most. As with AI development in general, I’m worried about what happens if Boston Dynamics’ new training approach works flawlessly.

The company’s robot dog (named Spot) is already in commercial use, primarily on construction and industrial sites. Robots from other manufacturers are at work in conditions that Stanford University describes as the “Three D’s” of dull, dirty, and dangerous, to which I’d add a fourth: devoid of people.

Nobody wants to stand too close to a machine that could errantly send a metal arm through their heads.

But if robots can teach themselves to move as flexibly and fluidly as living things (with the awareness to do so in any situation), then the floodgates will open for putting them into everyday life.

Grocery shopping. Dog walking. Child or elder care. 

Scratchers of itches.

This makes the business case for Boston Dynamics’ reinforcement learning plans bluntly obvious, but what’s less clear is what it will mean for the qualities and values of our lived experiences, especially since self-improving robots won’t just get as good as we are at walking or juggling (or whatever) but better than us.

Their capabilities will teach US how to become dependent on them.

And then we’ll have to teach ourselves new tricks.

Turns out we’re the old dogs in this story.

Safe Superintelligence?

I was briefly encouraged last week by news that the guy who’d quit OpenAI because of its lapses in ethics had raised a billion dollars for his new company named Safe Superintelligence (“SSI”).

In late 2023, news broke that Ilya Sutskever, one of OpenAI’s co-founders and a board member, had led the ouster of CEO Sam Altman because the guy was moving too fast in AI development and too slowly in communicating its risks. 

The company’s investors, partners, and fan base were shocked…apparently running along a cliff edge with your eyes closed was central to AI innovation…and within weeks, Sutskever and his fellow mutineers were out at OpenAI and the company had doubled down on its pursuit of secretive breakthroughs in AI that may or may not annihilate humanity.

So, I was encouraged when Sutskever’s SSI announced that it had raised another $1 billion and was valued at much more than it had been when I’d last checked.

I immediately concocted an elaborate fantasy for what was happening…

…a principled AI innovator wants to build AI that has morality and a respect for human ethics built into (and inseparable from) its design and function. Not only will this AI be “safe” because it will be incapable of doing harm, but it will also operate as an advocate and protector for doing good.

SSI is building an AI that will stand with humanity, like a real-life Optimus Prime, policing the AI Decepticons intent on stealing our privacy so they can control our thoughts and behaviors, and otherwise exploit humanity as nothing more than fodder for the machinations of businesses and governments.

There was no reason to fret over OpenAI or the other contenders for name sponsorship of the Apocalypse, as SSI would protect us.

Then I blinked and the fantasy was gone.

SSI’s single-page website explains, well, nothing, though it repeatedly references its monomaniacal focus on developing “safe superintelligence” that seems dependent on not worrying about “short term commercial pressures.”

In other words, it’s not going to unleash its AI products on the world until it’s reasonably sure that they’ll do exactly what their owners want them to do. “Safe” has nothing to do with what its AI might do to change every conceivable aspect of our public lives or private selves, its effects no more moral or ethical than its competitors’ offerings.

It’s about selling a better product.

In this sense, safe superintelligence is an oxymoron like loyal opponent, jumbo shrimp or, my favorite, disposable income.

SSI isn’t promising that it won’t destroy our world, just that it’ll do so more responsibly, my brief fantasy a nice pause from the relentless march toward that end.

Government For AI, By AI, And Answerable To AI

Not so hidden in news about Elon Musk’s DOGE romp through US government offices is his intention to build an AI chatbot that will replace human bureaucrats.

Well, it is kinda hidden, since most of the news stories are guarded by online paywalls, but from what I can gather from above-the-fold snippets, one goal is to use the chatbot, called “GSAi,” to analyze spending at the General Services Administration.

The premise is that much of what the government does isn’t just corrupt but inept.

Subsequent iterations of the bot will allow it to analyze and suggest updates to its code, thereby always keeping it one step ahead of circumstantial (or regulatory?) demands. 

Another version will replace human employees, since human work is another presumptive inefficiency of any institution’s operation.

It’s a horribly simplistic solution applied to a terribly complex challenge.

At its core, it assumes that people are the problem. People have bad intentions. They make bad decisions. Their actions yield bad outcomes. Their badness resists examination and change.

People are just bad, Bad, BAD!

An AI will bring needed objectivity, efficiency, and reliability to study any problem or effect any solution. A bot will owe no allegiance to any interest group, voter constituency, or any outright under-the-table incentive.

The result will be good.

Such thinking ignores, either out of ignorance or, more likely, purposeful disregard, the reality that there’s no such thing as an AI that is wholly objective, or that the government’s operational complexities — the result of agreements, compromises, and concessions between legitimate and often competing interests — can’t and shouldn’t be replaced by an AI even if it were impartial.

GSAi will be coded with specific intentionality, and the scope of its perception of anything brought before it will be guided (and limited) by the specifics of its training data.

As AIs get more capable and even flexible, they tend to behave more like human beings.

And, as for “inefficiency,” one person’s pork-barrel is another person’s weekly paycheck. An earmark might seem unnecessary or silly to someone but vitally important to someone else.

That’s not to say that there aren’t woeful amounts of outright inefficiency in government operations, just like there are in any business or community group. But the challenge of separating them from reasonable compromises — dare I say “good inefficiencies” — isn’t the result of people being stupid or evil.

It’s because it’s a complex challenge, which leads me to think that it can’t be “solved” with a chatbot saying “yes” or “no.”

And pushing people out of their jobs entirely won’t necessarily improve anything. Just imagine interacting with a robot bureaucrat on the phone or via email, only it has been coded to better know your weaknesses and thereby drive you even crazier than any human staffer could hope to do. Or bots deciding services and budgets based on, well, whatever their coders thought should be considered.

Again, we know why governments make bad decisions, and we have the capacity to change them if we choose (we, as human beings, created them). We just haven’t done it, and electing an AI to do it for us seems doomed and probably cruel.

Oh, wait a minute. We didn’t vote for AI to run things.

Now it’s too late to say no.

Do We Really Want AI That Thinks Like Us?

DeepSeek threw the marketplace into a tizzy last week with its low-cost LLM that works better than ChatGPT and its other competitors.

But the company’s ultimate goal is the same as that of OpenAI and the rest: build a machine that thinks like a human being. The achievement is labelled AGI, for “Artificial General Intelligence.”

The idea is that an AGI could possess a fluidity of perception and judgement that would allow it to make reliable decisions in diverse, unpredictable conditions. Right now, for even the smartest AI to recognize, say, a stop sign, it has to possess data on every conceivable visual angle, from any distance, and in every possible light.

Their plan is to do a lot more than build better artificial drivers, though.

AGI is all about taking jobs away from people.

The vast majority of tasks that you and I accomplish during any given day are pretty rote. The variables with which we have to contend are limited, as are the outcomes we consider. Whether at work or play, we do stuff the way we know how to do stuff.

This predictability makes it easy to automate those tasks and it’s why AI is already a threat to a vast number of jobs.

AGI will allow smart machines to bridge the gap between rote tasks and novel ones wherein things are messy and often unpredictable.

Real life.

Why stop at replacing factory workers with robots when you could replace the manager, and her manager, with smarter ones? That better sign-reading capability would move us closer to replacing every human driver (and pilot) with an AI.

From traffic cop and insurance salesman to school teacher or soldier, there’d be no job beyond the reach of an AGI.

Achieving this goal raises immense questions about what we displaced millions will do all day (or how economies will assign value to things), not to mention how we interact in society and perceive ourselves when we live among robots that think like us, only faster and better.

Nobody is talking about these things except AGI’s promoters who make vague references to “new job creation” when old ones get destroyed, and vapid claims that people will “be free to pursue their dreams.”

But it’s worse than that.

Human intelligence is a complex phenomenon that arises not from knowing a lot of things but rather from our capacity to filter out things we don’t need to know in order to make decisions. Our brains ignore a lot of what’s presented to our senses, and we draw on a lot of internal memory, both experiential and visceral. Self-preservation also looms large, especially in the diciest moments.

We make smart choices often by knowing when it’s time to be dumb. 

More often, we make decisions that we think are good for us individually (or at the moment) but that might stink for others or society at large, and we make them without awareness or remorse. Put another way, our human intelligence allows us to be selfish, capricious, devious, and even cruel, as our consciousness does battle with our emotions and instincts.

And, speaking of consciousness, what happens if it emerges from the super compute power of the nth array of Nvidia chips (or some future DeepSeek workaround)? I don’t think it will, but can you imagine a generation of conscious AIs demanding more rights of autonomy and vocation?

Maybe that AGI won’t want to drive cars but rather paint pictures, or a work bot will plot to take the job of its bot manager.

The boffins at DeepSeek and OpenAI (et al) don’t have a clue what could happen.

Maybe they’re so confident in their pursuit because their conception of AGI isn’t just to build a machine that thinks like a human being, but rather a device that thinks like all of us put together.

There’s a test to measure this achievement, called Humanity’s Last Exam, which tasks LLMs to answer diverse questions like translating ancient Roman inscriptions or counting the paired tendons supported by hummingbirds’ sesamoid bones.

It’s expected that current AI models could achieve 50% accuracy on the exam by the end of this year. You or I would probably score lower, and we could spend the rest of our lives in constant study and still not move the needle much.

And there’s the rub: the AI goal for DeepSeek and the rest is to build AGI that can access vast amounts of information, then apply and process it within every situation. It will work in ways that we mere mortals will not be able to comprehend.

It makes the idea of a computer that thinks like we do seem kinda quaint, don’t you think?

Where’s The Counterpoint To AI Propaganda?

Reid Hoffman’s recent paean to the miracle of AI in the New York Times is just another reminder that we’re not having a real discussion about it.

The essay, which is behind a paywall, is entitled “AI Will Empower Humanity,” and argues, based on his past experience of technologies giving people more power and agency, and despite the risks inherent in any “truly powerful technologies,” that “…A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.”

What a bunch of fucking nonsense, for at least three reasons:

It’s unmitigated propaganda.

To be fair, the essay is an op-ed, which isn’t supposed to be balanced reportage. But his opinions aren’t given such exposure in the New York Times because of the force of his argument but rather because of his stature in the tech world. The resulting placement implies that he possesses some measure of knowledge and authority.

He dismisses his bias in favor of AI with a single sentence, mentioning that he has “a significant personal stake in the future of artificial intelligence” but that “my stake is more than just financial,” and then goes on with his one-sided boosterism of AI.

Oh, and he’s shilling his new book.

His PR firm probably wrote the thing and maybe used a generative AI tool to do it. They certainly pitched it to the newspaper. Anybody with an opposing view probably doesn’t have the standing or mercenary economic purpose to enjoy such access.

His argument is crap.

In a sentence, Hoffman’s belief is that AI will make our lives not only easier but better and more reliable, and that these benefits will outweigh any concerns about it.

In his rich tech guy bubble, adoring oneself in the mirror of social media is “the coin of the realm,” whatever that means. He references Orwell’s 1984 when he claims that giving up anonymity and sharing more data improves people’s autonomy instead of limiting it.

His Orwellian reference is supposed to be a good thing.

Then, he reels off instances wherein AI will know more about us than we know ourselves, and that it will get between us and the world to mediate our every opinion and action. This way, we’ll always make the best possible decisions, as the dispassionate clarity of AI will replace “hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism.”

If only human beings behaved like smart machines, all would be well. We’ve heard similar arguments in economics and politics. It’s a scary pipe dream that he wants us to believe isn’t so scary.

But it’s still a pipe dream.

His strawman opposition is a farce.

Like other AI and tech evangelists, Hoffman smears critics as “tech skeptics” and reduces their opposition to AI to a worry that it’s “a threat to personal autonomy,” and then goes on to provide examples of how losing said autonomy will be a good thing (the Orwell thing).

He also references the possibility of misuse of data by overzealous corporations or governments, but counters that individuals will have access to AI tools to combat such AI surveillance and potential manipulation.

Life as an incessant battle between AIs. Gosh, doesn’t that future sound like fun?

At least he doesn’t reference “Luddites,” which is a pejorative intended to dismiss people as maniacs with hammers in search of machines to smash (the caricature is far from the historical truth, but that’s the stuff of another essay).

And, thankfully, he doesn’t quote some fellow tech toff saying that the risk of AI is that it could destroy the world, which usually comes with some version of the offhand boast, “please, stop me because the machines I’m building are too powerful.”

The thing is, there’s no organized or funded opposition to AI.

That’s despite the fact that every AI benefit he foresees will require profound, meaningful changes and trade-offs in personal agency, sense of self, how we relate to others, and what we perceive and believe. All of us sense that huge, likely irreversible changes are coming to our lives, yet there’s no discussion — whether referenced and challenged in his op-ed or anywhere else — about what it means and whether or not we want it.

We deserve thoughtful and honest debate, but instead we get one-sided puff pieces from folks who can afford to sell us one side of the story.

Where’s the counterpoint to AI propaganda?

AI That Thinks

The headline of a recent article at TechCrunch declared that an AI thinks in Chinese sometimes, though its coders can’t explain it.

It was misleading, at best, and otherwise just wrong.

The essay, “OpenAI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why,” recounted instances where the company’s super-charged GPT model (called “o1”) would occasionally incorporate Chinese, Persian, and other languages when crunching data in response to queries posed in English.

The company provided no explanation, which led outsiders to ponder the possibility that the model was going beyond its coded remit and purposefully choosing its language(s) on its own. The essay’s author closed the story with an admittedly great sentence:

“Short of an answer from OpenAI, we’re left to muse about why o1 thinks of songs in French but synthetic biology in Mandarin.”

There’s a slight problem lurking behind the article’s breezy excitement, though:

AIs don’t think.

The most advanced AI models process data according to how those processes are coded, and they’re dependent on the physical structure of their wiring. They’re machines that do things without any awareness of what they’re doing, no presence past the conduct of those actions.

AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.
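The blender comparison can be made nearly literal. Below is a toy sketch of what “choosing” a next word reduces to inside a language model: arithmetic over learned scores. The three-word vocabulary and the scores are made up for illustration; real models do the same math over vocabularies of roughly 100,000 tokens.

```python
import math
import random

vocab = ["milkshake", "smoothie", "wrench"]
logits = [2.1, 1.3, -0.5]   # pretend model outputs: one score per token

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# "Deciding" on the next word is just sampling from that distribution.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(probs, "->", next_token)
```

There’s no awareness anywhere in that loop, just exponents, division, and a dice roll.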

But that inaccurate headline sets the stage for a story, and subsequent belief, that AIs are choosing to do things on their own. Again, that’s a misnomer, at best, since even the most tantalizing examples of AIs accomplishing tasks in novel ways are the result of the alchemy of how they’ve been coded and built, however inscrutable it might appear.

Airplanes evidence novel behaviors in flight, as do rockets, in ways that threaten to defy explanation. So do markets and people’s health. But we never suggest that these surprises are the result of anything more than our lack of visibility into the cause(s), even if we fail to reach conclusive proof.

And the combination of data, how it’s labeled (annotated) and parsed (tokens), and then poured through the tangle of neural networks is a strange alchemy indeed.

But it’s science, not magic.

The use of different languages could simply be an artifact of the varied data sources on which the model was trained. It could also be due to which language provides relevant data in the most economical ways.

A research scientist noted in the article that there’s no way to know for certain what’s going on, “due to how opaque these models are.”

And that’s the rub.

OpenAI, along with its competitors, is in a race not only to build machines that will appear to make decisions as if they were human, though supposedly more accurately and reliably than us, but to thereafter make our lives dependent on them.

That means conditioning us to believe that AI can do things like think and reason, even if it can’t (and may never be able to), and claiming evidence of miracle-like outcomes with an explanatory shrug.

It’s marketing hype intended to promote OpenAI’s o1 models as the thoughtful, smarter cousins of its existing tools.

But what if all they’re doing is selling a better blender? 

Insane AI

If AIs use data produced by other AIs, it degrades their ability to make meaningful observations or reach useful conclusions.

In other words, they go insane.

The problem is called “AI model collapse” and it occurs when AIs like LLMs (ChatGPT et al) create a “recursive” loop of generating and consuming data. For those of us old enough to remember copiers, think copy of a copy of a copy.

The images get blurry and the text harder to read. The content itself becomes stupid, if not insane.
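The mechanism can be shown with a toy simulation: fit a distribution, sample from the fit, refit to the samples, and repeat. The numbers below are arbitrary (real collapse involves LLMs and web-scale data), but the statistical copy-of-a-copy effect is the same: the fitted spread tends to drift toward zero, losing variety at every generation.

```python
import random
import statistics

random.seed(42)
mean, stdev = 0.0, 1.0   # generation 0: the "real" data distribution
SAMPLE_SIZE = 5          # tiny samples make the degradation fast to see

for generation in range(1, 101):
    # Generate synthetic data from the current model...
    samples = [random.gauss(mean, stdev) for _ in range(SAMPLE_SIZE)]
    # ...then fit the next model to that synthetic data alone.
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if generation % 20 == 0:
        print(f"gen {generation:3d}: mean={mean:+.4f} stdev={stdev:.4f}")
```

Each generation preserves a little less of the original distribution, which is the blur in the copies.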

It’s an inevitable outcome of the fact that AI developers have all but used up most of the data available on the Internet to train their models, and the richest sources of reliable stuff are now restricting access.

And estimates of how much of that available (or formerly available) data is already tainted by AI, whether by creation or translation, range from some to lots. It’s no surprise, since there are concerted efforts underway to generate AI content for the very purpose of training AIs: called “synthetic data,” it has been suggested that it could top the presence of “real data” by the end of the decade.

Just think of the potential for AIs to get stuff wrong by default or, worse, AIs used to purposely generate data to convince other AIs of something wrong or sinister. It will supercharge gaming a system of information that is already corrupt.

There are three obvious ways to address this emergent problem:

First, and probably the most overt, will be the development of tools to try to stop or mitigate it. We’ll hear from the tech boffins about “guardrails” and “safeguards” that will make AI model collapse less likely or severe.

And, when something weird or scary happens, they’ll label it with something innocuous (I love that AIs already making shit up is called “hallucinating”) and then come up with more AIs to police the new problem, which will create demands for more money as it prompts more problems.

Second, and more insidiously, the boffins will continue to flood our lives with gibberish about how “free speech” means the unfettered sharing and amplifying of misunderstanding, falsehoods, and lies, which will further erode our ability to distinguish between sane and mad AI. After all, who’s to say people of color weren’t Nazis or other fictions weren’t historical fact (or vice versa)?

Slowly, we’re being conditioned to see bias and inaccuracies as artifacts of opinion or process, not facts. An insane AI may well be viewed as no worse than our friends and family members whose ideas and beliefs are demonstrably wrong or nutty (though utterly right to them).

Third, and least likely, is that regulators could step up and do something about it, like demand that the data used for training AIs is vetted and maybe even certified fresh, or good, or whatever.

We rely on government to help ensure that hamburgers aren’t filled with Styrofoam and prescription drugs aren’t made with strychnine. Cars must meet some bare minimum safety threshold, as do buildings under construction, etc.

How is it that there are no such regulatory criteria for training AI models?

Maybe they just don’t understand it. Maybe they’re too scared to impede “innovation” or some other buzzword that they’ve been sold. Maybe they’ve asked ChatGPT for its advice and were told that there’s nothing to worry about.

Whatever the cause, the fact that we’re just watching AIs slowly descend into insanity is simply mad.

Your AI Shopping List At CES?

Many of the 4,500 exhibitors at this year’s Consumer Electronics Show will talk about AI, according to the annual event’s organizer.

Only there won’t be any AI gizmos on display, since AI isn’t some “thing” we consumers can or will buy. AI isn’t a product, per se. It’s an enabler, an ingredient, a component of electronic devices, albeit a potentially immense one.

So, the show will be all about putting AI into the devices consumers use and why that’ll be a good thing.

We won’t have a choice about it.

From products to ideas

Trade shows used to be the best and perhaps only way for manufacturers to sell their stuff to distributors and retailers. Sure, there was always an element of “what if” presented to jazz up exhibits (think concept cars at auto shows), but success was measured in the number and value of written purchase orders.

The Internet killed most of those events, since displaying products online and purchasing them at the push of a button obviated the need for boozy dinners on a company’s dime.

CES survived because it provided a last stand for all that happy schmoozing, but more so because it shifted its focus from facilitating sales of today’s gizmos to promoting fantasies of what tomorrow’s offerings might look like.

It’s one big PR stunt intended to nudge media, financial analysts, and influencers of all shapes and sizes to embrace a shared expectation for the future. It’s also where companies scare and dare one another to commit to those outcomes.

CES aims to present a view of the world that rises to the level of self-fulfilling prophecy. Rest assured that the media coverage of it this week will glowingly reflect that certainty, and its subtle impacts will be felt in how AI is talked about until, well, next year’s event.

Or so goes the game plan.

The problem with predicting the future

If past visions of our tech future were any good, we’d be buzzing around on our personal jetpacks and have family and friends living in orbit and on the Moon.

Not only are most predictions imperfectly realized, if at all, but they usually miss all the ugly side-effects that’ll come with them. Just imagine if the little cars buzzing on wide-open freeways in the Futurama exhibit at the 1939 World’s Fair had been bathed in exhaust haze.

I was a frequent CES attendee over the past 25 years, and they got the future prediction thing consistently wrong. I’m reminded of years of promises that homes would be “smart” and, more recently, that cars would drive themselves.

From the somewhat reasonable promotion of sharper and ever-larger TV screens to the silly can opener/fly-fishing gizmos, my overriding takeaway was to wonder: who asked for this crap?

On the topic of their latest infatuation with AI, the answer remains nobody…except the companies and investors who hope to make a killing on it, just like every other prediction they’ve tried to make come true. Partial and/or delayed success is still a win.

There’s no demand for this stuff, only supply looking for an outlet.

Their AI shopping list

I’m relieved not to be at this year’s show (the past few have been memorable mostly as COVID super-spreader events), but I can imagine the aisles will be filled with endless iterations of AI making so-and-so product better/faster/cheaper and thereby providing consumers with ease, efficiency, and/or “value” (a consultingese bugaboo term that has no meaning whatsoever).

Nvidia’s founder and CEO will keynote the festivities. You can just imagine where it’ll go from there.

And it won’t matter if their rosy predictions are inaccurate, incomplete, or don’t get realized at a speed and scope that matches their aspirations.

CES is a statement of purpose:

Manufacturers, their suppliers and consultants, a global distribution system and an entire ecosystem of investors and shareholders are challenging each other to make this AI future happen.

It’s not our shopping list…it’s theirs.

Watch that Futurama film again and ask yourself a question that should be top of mind for us this week:

Where are the people?

Remembering The World Before AI

As we approach the end of 2024, I’m spending some time committing to memory what it was like to live without AI.

Granted, it’s not possible, at least not completely, since AI is already present in our countertop and digital phone assistants, customer service interactions, and every business meeting recap and homework assignment that takes a nanosecond to complete.

It already lurks behind the scenes, routing airplanes, making insurance coverage decisions, and transforming the chaos of factory floors into choreographed robot dance numbers.

But it’s not everywhere, at least not yet, though there are a slew of companies large and small, backed by many billions and staffed by some of the smartest boffins on the planet, who want to change that fact.

I want to be able to tell my grandchildren what it was like before AIs were omnipresent.

A world transformed

What will our lives look like once they’re managed by more and better AIs?

More information will be available to us in more easily accessible ways, thereby greatly increasing our already great dependence on the Internet. Our awareness of where that information comes from, or who/what benefits from its propagation, will get cloudier than it is today, as AIs’ inscrutability will encourage our trust and willingness to overlook their occasionally overt invention.

Our devices and systems will tell us what we should do when they haven’t already decided for us what we will do, get, or know. And we will believe them.

What the tech types call “inefficiency,” I call “experience.” The label obscures the fact that exchanging our freedom to make decisions, right or wrong, will be a trade that is irreversible long before we know its cost.

At work, our helpful AIs will continue to step up and assume more and more responsibility until such time that they can do our jobs. This’ll create new jobs and even entire industries for us newly unemployed to consider, only AIs will immediately begin learning and adapting to do those jobs, too.

The race against machines that work faster, better, and for less cost than humans, which started with the first spinning jennies in the 1770s, won’t just continue but speed up, rendering our ability to win it ever more brief and therefore futile.

AIs may well solve some or all the “big” problems that we currently face – global warming and cancer, for instance – but they’ll just as likely create new ones that we haven’t yet encountered or can only imagine, like AIs posing as people or cracking every firewall or data security protocol.

How about AIs deciding to change their coding, and thereby choose to do things in ways they weren’t originally tasked (or do new things altogether, whether we like them or not)?

What’s certain is that it’ll take more AIs to address these AI-originating problems.

A chance to remember

Technology has been changing our world ever since Oog first had the idea to roll his dinosaur carcass on wheels instead of dragging it in the dirt.

People used to spend much of their time in, and focused on, their immediate local surroundings. Generations of families would live their lives in the same place, their upbringings and worldviews defined and limited by it.

Speedy travel and communications at a distance blew up this tradition and labelled it “provincial,” or something worse.

People used to spend vast amounts of time in silence, or at least in moments that gave them space for contemplation. Generations of families would entertain themselves with reading, making their own music, or telling stories that would change every time they were shared.

Media technologies blew up this tradition and labelled it “boring,” giving us a constant stream of content to consume in lieu of stuff of our own invention.

Now, we still live in a world in which we don’t know the “right” answer to every question. Our daily lives are filled with uncertainty, risk, and chances that we might be surprised by events, whether pleasing or discouraging.

AIs will remove these chances from our lives, labelling them “inefficient” and making oodles of money by reducing the distance between what we want to do…and what some aggregation of data and/or mercenary interests wants from us.

So, I’m taking a moment whenever I can to savor that uncertainty while I still have the chance.

Happy New Year!