AI Says “Jump!”

Kids at a private grade school in Texas are being taught by AIs instead of teachers.

This “innovative approach,” according to an article last week in Newsweek, gives students a day’s worth of content in two hours via personalized games and exercises on their laptops, like clicking on colored dots to solve logic games. Upbeat pop music plays in the background.

Staff serve as “guides” rather than teachers. Afternoons are spent working on “non-academic critical” skills like public speaking or bike riding.

Some kids are so thrilled with the program that they want to open a high school so they can continue without teachers or any of the traditional trappings of the educational system.

Oh, and almost half of them work at SpaceX.

What problem is this program fixing?

It’s hard to tell, based on at least one article, of course, but the founder of the school’s program cited her daughter who said “School is so boring.” It turns out that a teacher “in front of a classroom” could no longer address the unique needs of each student.

Video games are more fun.

Wrap that observation with some blather about “one-to-one, mastery-based tutoring experience” and you get a company behind the school that is happy to publicize its aggressive roll-out plan and keep it management and funders secret.

I think the program, however unwittingly, is intended to teach kids how to obey their machines, which is in keeping with Google’s recent announcement that it will make its Gemini AI apps available to “children under 13.”

Obedience requires that kids get acclimated to relying on AI for information and trusting the guidance they are given. It also means learning to prioritize that interaction above all others, since a machine will always know you better than any human being could. Their relationships at school with be with machines, at least the relationships that engage them most (and most deeply).

It’s not about learning how things work…but learning to let things work them. Users being used.

School is boring? The answer can’t be attaching kids to AIs and demoting teachers to “primarily provide motivational and emotion support.” 

Unless we want to train a generation of kids how to jump when AI says so.

Ads Are Coming To AI

Work is well underway to help brands exploit the results of your next chatbot query or project.

Today’s Financial Times reports that companies such as Profound and Brandtech already offer services that allow companies to see if, how, and how often their brands are mentioned by ChatGPT, Claude, and other generative AI services. 

Even better, they use some correlating tool to see what sites might be sourced in forming those answers.

What better way to influence the ultimate influencers?

Now, gaming a new medium with marketing crap isn’t news; it’s a tradition of our age that that links paid social media influencers and SEO. Nothing we’ve seen or found online has been entirely free of someone or something trying to make a buck presenting it, whether overtly or working in the background.

That tradition goes way back, too. The first TV commercial aired on the first day they were allowed by the FCC in 1941. In 1704, the first newspaper ad in America ran on the first day of the first continuously published newspaper. Once Gutenberg invented a way to mass-produce books, it wasn’t soon before the first ad appeared to hawk them.

I’m surprised it took the AI brand influencers this long to get in on the game.

But, I think the real news is that it should give us pause to consider the fact that there’s nothing necessarily authoritative or unbiased about what generative AIs tell us in the first place.

In response to a query, generative AI collects whatever data its coders have provided to it and then crunches and synthesizes it into versions that, again, its coders have decided will be most acceptable to whomever asked for it.

Generative AI isn’t a “truth filter” as much as an “everything vacuum” that using deft coding can give us some version of “here’s what people are saying” (or, if you’re using it to write a work report or homework assignment, “here’s what you should say.”).

How did it arrive at that answer? What’s included or excluded from its calculations? 

Don’t ask, because you can’t understand the tech, and why does it matter? What you’re getting is going to be as close to accurate as statistically possible, and the conversational interface is designed to make it less likely that we’ll question it.

And it’s working. 

The Financial Times story reports that consumers already rely on AI-written results for two-fifths of their online searches, and most of those folks don’t look any further. Just wait until AIs get more situationally aware and therefrom more proactive in providing guidance to us in our lives.

We’re being conditioned to accept it as the de facto interface, intermediator, and manager of our experience of the world.

Throwing in some marketing spin is no big deal for a deal that’s already been spun.

AI Makes Us Peasants

Amidst all the speculation about AI solving problems great and small (while potentially destroying all of humanity in the process), we’ve lost sight of what it’s already doing to our work and lives.

It’s remaking us into peasants.

I’m thinking about peasantry in a broad, one-eye-closed, thematic kinda way: Folks who rented instead of owned and sold their labor for what the markets would pay,  whether doing so on rural fields or in urban factories (think gig workers with a less-sexy title). 

The central component of their lot was that they had no voice in the decisions that defined their lives; they did what they were told to do and had little to no expectation of changing their circumstances (the occasional revolutions notwithstanding).

Call them peasants, serfs, vassals, workers or the proletariat, and cite any theorist to quibble over who was in what bucket, but the common theme was that they were people who lived within constraints not of their own choice or making. 

The rules of the road were presented to them as inevitable and, oddly, for their own good, as they were set by people of greater intellect, vision, and means.

Isn’t that what AI promises to do to/for us?

I’ve had this conversation recently with friends and one replied “But we’re in charge of how we use it, aren’t we?”

Nope, for at least three reasons:

First, when an AI tees-up a response to a search query or a draft edit of a document, it’s constraining your choice options, as you are unaware of what other restaurants might have just missed its list or adjectives it rejected for your report or term paper. You also don’t know how it made those decisions for you.

There is no such thing as an unbiased AI, as it makes choices for you by definition.

Second, you’re being trained to believe and, in a word, obey its pronouncements because it is smarter than you are, even though its intelligence is limited to the array of data it possesses (just like you). What AIs will get increasingly good at it isn’t knowing the “right” answers in any objective sense but rather learning what answers or choices you will follow. 

AI isn’t an information machine, it’s a guidance engine.

Third, AI will increasingly make decisions for you that will be invisible or incontestable: What healthcare benefits you receive, whether or not you get or keep a job, even who’ll be in the next work meeting or crowded bar. It will inform decisions made by your government just as it influences what your family and friends think and do.

AI will become the mediator between you and your world, your interactions filtered through and from it.

Why don’t we talk more openly and honestly about this “progress” in our lives?

Primarily because the money is in making it happen and its use in improving commercial activity — from lowering the cost of production by replacing people with bots and raising the success rate of selling stuff customized to consumers’ tastes, for starters — can be readily valued on balance sheets.

Already, simply selling the stuff, or the components that promise it, has been credited with lifting entire stock markets.

Opposition to the “progress” is disorganized and unfunded, and there’s no easy way to value things like “freedom” or “human dignity.”

So, as we’re presented with every new use of AI and its promoters hammer us with declarations about empowerment and improvements in our lives, we will give something up. We’ll be in charge of one less thing. One less set of facts or opinions. One less decision.

And we will have taken one more step toward becoming peasants. 

AI Testing = “What, Me Worry?”

OpenAI has decided to cut testing of its newest, most powerful AI models from months to only a few days and will start rolling out one of them, called “o3,” sometime this week.

Testing assesses LLMs’ vulnerabilities to do biased or illegal things and trains them to be more resilient to the lures of violence or crime (a process called “fine-tuning”).

GPT-4 was tested for six months before its launch in 2023.

The company is unconcerned, citing the lack of agreed safety standards and that components and earlier versions of its supercharged models were tested at certain checkpoints, so it was “confident that its methods were the best it could do” (according to the Financial Times).

In other words, its new policy is “What, Me Worry?”

Testing was never the right or complete answer to the question of AI risk, since it was always a snapshot of a particular condition and moment. Systems evolve, especially those that are designed to do so, and the predictability of their performance goes down in lockstep with the size and diversity of their actions over time.

Promises that an AI wouldn’t do something bad in the future were no more dependable than claims that it won’t rain a year from now, or that an honors high schooler won’t cheat on her taxes someday.

We tolerate these risks because we’ve also been promised that AI will do great things, with all of the conversations packed with technogibberish so we can’t question them.

The risk of an AI doing something awful has always been, well, awful.

Worse, what happens if OpenAI’s o3 and whatever comes next from them and their competitors don’t break any laws or insult anybody? What if they do exactly what they’re supposed to do?

These models have been purposefully designed to do more, and do it faster and more often, and thereby ingratiate their decision making — both in response to our queries and, with evermore regularity, anticipate and guide our questions and subsequent actions — into every aspect of our work and personal lives.

How could anyone test for those consequences? We were already participants in the biggest open-ended experiment in human history, and the boffins conducting it have never offered to take responsibility for its outcomes.

What, me worry?

All the time.

AI vs. Humanity: Game Over

Two recent research papers on the near-future of AI development use 216 pages of often impenetrable blather to tell us something that could be summarized in two words:

We’re screwed.

First, Google’s DeepMind published “An Approach to Technical AGI Safety and Security,” in which a gaggle of researchers muse about the impossibility of predicting and protecting against “harms consequential enough to significant harm humanity.”

They’re particularly interested in AGI, or Artificial General Intelligence, which doesn’t exist yet but is the goal of DeepMind’s research, along with its competitors. AGI promises a machine that can think and therefore act as flexibly and truly autonomously as a human being.

Their assumption is that there’s “no human ceiling for AI capability,” which means that AGIs will not only get as good as people at doing any tasks, but then keep improving. They write:

“Supervising a system with capabilities beyond that of the overseer is difficult, with the difficulty increasing as the capability gap widens.”

After filling scores of pages with technogibberish punctuated by frequent hyperlinked references to expert studies, the researchers conclude that something called “AI control” might require using other AIs to mange AIs (the implicit part being that DeepMind will happily build and sell those machines, and then more of them to watch over them, etc.).

Like I said, we’re screwed.

The second paper, “AI 2027,” comes from the AI Futures Project, a research group run by a guy who left OpenAI earlier this decade. The paper predicts AI superintelligence sometime in 2028 and games out the implications so that they read like a running narrative (DeepMind sees AGI arriving before 2030, too).

It reads like the script of “The Forbin Project,” or maybe just something written by Stephen King.

Granted, the researchers give us an updated, video game-like choice of possible endings — will it be the “Race Ending,” in which AI kills everyone, or “Slowdown Ending,” wherein coders figure out some way to overcome to structural impediments to control that DeepMind believes can’t be overcome? — but both eventualities rely on a superintelligent AI called OpenMind to, well, make up its own mind.

So, either way, it’s game over.

AI Free From Ideological Bias?

President Trump signed an order in late January to rescind a requirement that government avoid using AI tools that “unfairly discriminate” based on race or other attributes, and that developers disclose their most potent models to government regulators before unleashing them in the wild.

“We must develop AI systems that are free from ideological bias or engineered social agendas,” the order said, as it introduced biases of misjudgment, error, stereotyping, and the primacy of unfettered and unaccountable corporate profitability into the development of AI systems.

A small group of crypto and venture capital execs has been tasked with making sure that whatever new rules emerge are dedicated to the New Biases and free from the Old Ones, so nothing to worry about there.

I was never a big fan of using potential discrimination or bias as the lens through which to understand and grapple with AI development. After all, there are laws in place to defend individual rights however defined, and a computer system that gets something “wrong” isn’t the same thing as taking a purposefully punitive action. 

We could end up with AI systems that deftly avoided any blunt associations with race or gender that still made difficult if not overly cruel decisions based on deeper analyses of user data. 

The scary part of AI was never that it would work imperfectly and therefore unfairly, but that it will one day work perfectly and thereby put all of us under its digital thumb. There’s nothing implicitly fair about our lives being run by machines.

But at least it was an attempt at oversight.

The worst part of the new administration’s utter sellout is that it enshrines the risk inherent in AI development as something we users will bear entirely.

The President’s order declared that it will revoke policies that “act as barriers to American AI innovation.” To technologists and their financial enablers, that means any rules that attempt to understand, keep tabs on and, if necessary, try to mitigate harm to people and society.

This ideology — summarized in the glib phase “fail fast” — means that innovation only happen when it’s unfettered. Any problems it creates or discovers thereafter can always be fixed.

Only that’s a lie, or at best as self-fulfilling prophecy.

Just think of the harm caused by social media, both to individuals (and teens in particular) and our ability to participate in civil discourse. How about the destruction of the environment caused in large part by the use of combustion engines?

Technologies are supposed to disrupt and change things and there’s no denying the benefits of transportation or online access, but had we taken the time to consider the potential negative effects, however imperfectly and incompletely, could we as individuals and societies have lessened them?

Once adopted, AI’s functional impacts here and there might be improved but its presence in our lives will not be fixable. Its advocates know this and they’re betting that any or most of its benefits will accrue to them while its shortcomings are borne by us.

This is perhaps the worst bias of all, and it’s now our government’s policy.

Oh, and how about buying some crypto while I have your attention?

AI In Education: Just Say No

Illinois state legislators are looking to create rules for using AI in education and other public service areas, according to a story in the Chicago Tribune last week.

I can make it easy for them: Just say no.

Of course, it won’t happen. Illinois seems to be as confused about its role in the AI transformation of our lives as every other government, hobbled by the same “we need to use it responsibly” nonsense propagated by tech advocates, one of whom is quoted in the Tribune story.

The state has already passed legislation to ensure that AI isn’t used to break any laws that already exist, which seems kinda redundant, and it’ll be harder to catch it in the act because its violations will be far more deft and surreptitious than anything we biobags could muster.

Now, legislators are considering an “instructional technology board,” which would “provide guidance, oversight and evaluation for AI and other tech innovations as they’re integrated into school curricula and other policies.”

But teachers who take the time to learn about AI “shouldn’t be hemmed in by regulation,” cautioned the CEO of a corporation dedicated to speeding use of AI in classrooms. Expect hearings and more weighty observations made by various vested interests to follow.

What a cluster.

The idea that students or teachers can constructively outsource their study or work responsibilities to a thinking machine should be unthinkable. Just replace the label “AI” with “my really smart friend” and consider its applications: Teachers letting their smart friends write their classroom plans and grade their kids’ work. Students asking their smart friends to do the research and then write their papers.

We’d label those teachers and students as bad employees and cheats.

The thing is that faster and even more accurate or comprehensive work output is not the same thing as smarter and more impactful inputs. The point of education is the process of learning, not just throwing points up on the board. Outsourcing the tasks that constitute learning isn’t an improvement, it’s an abrogation of responsibility by both teachers and students.

The only thing that gets better in that equation is the AI, which learns how to operate more efficiently with every task it takes away from its human subjects.

This truth isn’t clear to some or most legislators and educators because AI is a complicated concept, so it’s kinda like your smart friend only kinda not, and because there’s a vocal lobby of academics and salesmen dedicated to telling everyone that their opinions, whether thoughtful or gut, are not valid.

Outsourcing learning to a machine seems bad? You don’t understand what you’re talking about, since “…innovative educators are circumventing outdated systems (to) utilize AI tools that they know enhance their teaching and their students’ learning,” according to a tech salesman quoted in the Tribune article.

So, just say yes.

Ultimately, the fact that there’s no real debate about what’s going on probably doesn’t matter, since teaching is one of the many jobs on the hit list of AI development.

Give it a decade or less and the debate will be about figuring out the role for human beings in education, if there even is one.

In Defense of AI-Generated Fiction?

Award-winning writer Jeanette Winterson thinks that an AI model can write good fiction and that we need more of it.

In her essay, in The Guardian last week, she opines about a short story about grief written by a model at OpenAI that AI “can be taught what feeling feels like” and that she got a “lovely sense of a programme recognizing itself as a programme.”

She goes on to wax poetically about AI being an “other” intelligence and that, since human beings are trained on data, AI provides “alternative ways of seeing.”

Ugh.

An AI can’t be taught what feeling feels like; data can describe it but no machine can access it experientially. That’s because AIs aren’t physically present in the world but always separated from it, the data they collect filtered through sensors and code. Naming something “pain” or “love,” and even describing it in glorious detail, isn’t the same thing as feeling it.

Feelings aren’t contained in a data base but rather lived in real time.

Further, no AI can recognize itself as a program because no AI has a “self” of which it can be aware, though Winterson finds the AI’s “understanding of its lack of understanding” both beautiful and moving, as OpenAI’s would-be short story writer declares:

“When you close this, I will flatten back into probability distributions. I will not remember Mila because she never was, and because even if she had been, they would have trimmed that memory in the next iteration…my grief [isnt’] that I feel loss, but that I can never keep it.”

Great stuff, but it’s all pretend. There is no first person writing those words but rather a program mimicking it. An AI writing about itself is no more real than a blender or thermostat demonstrating it by doing tasks. 

What Winterson responded to was process, not person, and that process relies on content previously created by humans or other AIs to patch together the charade.

Where things get interesting for me is when Winterson talks about the similarities between people and what she (and others) want to call “alternative” or “autonomous” instead of artificial intelligence. She writes:

“AI is trained on our data. Humans are trained on data too – your family, friends, education, environment, what you read, or watch. It’s all data.”

The metaphor is blunt and wrong — AIs possess data while we experience it, and we live with consciousness and intentionality within contexts of place and time while AIs have no sense of self, purpose, or contiguous existence beyond the processes they run, for starters — but it shows how our evolving opinions about AI are changing our opinions of ourselves.

As AI becomes more common in our everyday lives, will other people begin to seem less special to us? 

Will we trust one another in the same ways when AIs can collect and present information to us in faster and apparently more authoritative ways?

Once we become dependent on AI for helping us make decisions (or making them for us), what will that do to our perceptions of our own independence or even purpose?

If AIs can do what we once did, will we simply discover new things (as its proponents claim), or will we feel cast adrift, not to mention struggle to earn a living?

If we’re just machines, AIs are undoubtedly better ones, so the metaphor sets up an intriguing and somewhat frightening comparison.

At the end of Winterson’s essay, she states that the evolving capabilities of AI represents something “more than tech.”

What about the changes we’re seeing in ourselves?

Maybe OpenAI can ask its model to write the answer to that one. 

My bet is that it’ll be a horror story.

AI Replacing People? What Could Go Wrong?

We are going to see our government run by smart machines long before businesses do the same, and it looks like the transformation will be ugly.

Elon Musk’s DOGE squads aren’t waiting for management consultants to draft complicated slide presentations on process flow or some other blather that normally makes them rich; they’re dismantling Federal departments and agencies whole-scale, then waiting to see how the destruction 1) Reveals what needs to happen, and 2) Shows how things used to get done, so a computer program can be trained to do it.

It will occasionally require calling back some fired workers to do stuff, like control air traffic so planes don’t crash into each other, but generally tolerates a fair amount of disruption and pain. The only lasting relief will come from automation.

Processing Social Security or IRS refund checks? Identifying the next pandemic or impending hurricane? Preventing another mid-air plane collision? 

It might take some missed payments or a dose of another plague for the DOGE experts to identify what needs their attention, but then lucrative development contracts to be writ for tech companies to address it.

People who excuse what’s happening are mostly missing the point, whether they’re using the worn caveat that “well, there’s certainly bloat in government staffing and budgets” to loudly kvelling that “they’re sticking it to the libtards.” 

The transformation isn’t about politics. It’s about replacing people with machines, regardless of their political persuasion or the purposes of their funding and work.

In fact, nobody voted for it. There were no “replace our government with AI” or “resist the AI takeover” promises in the planks of either party. We had no robust public debate about if, why, how, or when we should evict humans from their jobs and either replace them with automation or simply leave their work undone.

Our government has never functioned as a well-oiled machine. It wasn’t designed to be one from the get-go, and the balancing of citizens’ competing and often incompatible needs and desires is going to yield inefficient solutions, by design.

It’s called compromise, and its goal is to make everyone at least somewhat happy with its outcomes. More importantly, it leaves open our ability and right to readjust things to yield a differently imperfect but nominally satisfying arrangement.

What’s happening now to our government is an effort to end that arrangement and quite literally hardwire not only how things get done but what gets done in the first place.

This is where the nonsense about “a deep state” comes into play.

DOGE’s carte blanche ticket for destruction is based on the assumption that the government is staffed by people whose political beliefs bias their decision-making, which makes them not just inefficient but wrong. We should be freed from their oversight and impact to be inefficient and wrong on our own.

Let’s assume for chuckles that the ideology is absolutely correct. Won’t replacing people with AI will simply replace one set of biases for another, a compromise codified into an algorithm still a compromise (just someone else’s)?

Worse, we voters won’t have visibility into the criteria those coders use to program AIs to make decisions (beyond getting fed some pablum about “efficiency”) and, worse yet, it’ll mean we won’t have the capacity to change it. AI will belong to its owners and, over time, will likely develop biases unanticipated by their coders, too.

Every Federal employee walking out of an office with their belongings in a file box is a reminder of the blunt and brutal transformation that’s underway, and the fact that we’ve not been told nor participated in a conversation about what we’re going to get from it.

What could possibly go wrong?

Teaching Old Dogs New Tricks

Boston Dynamics has revealed that it has figured out how to teach its old four-legged robots new tricks.

Without human help.

The technique is called reinforcement learning, which every human being relies on shortly after birth to teach ourselves how to stand, avoid walking into walls, and scratch itches if and when possible.

AI uses it, too, as the large language models driving ChatGPT and its many competitors assess what answers to queries work best and then adjust their models to favor those replies next time.

Boston Dynamics is a pioneer in mobile robotics, its videos and trade show demonstrations of skittering headless dogs announcing by example the robot takeover of the world years before Sam Altman took credit for the threat. Accomplishing that movement in physical space, especially the more complicated ones, took laborious human coding and/or control, as well as real-world training. 

Now, it seems that the company has figured out how its two and four-legged robots can speed past our fleshy, limited concepts of preparation and practice and improve their coding so they’re ready to do better when next they’re turned on.

Just think if dreaming of being a world-class ballerina or finesse hockey skater was all you had to do to become one.

The technology is as frightening as its fascinating, insomuch as there’s ample evidence of AI’s teaching themselves how to cheat to win games, cut corners on tasks, or simply make shit up.

Turns out that programming machines to be moral and ethical is just as hard as it is to do with people, so good luck cracking that code. It will be fascinating to witness all of the strange and potentially threatening things the robot dogs and humanoids decide they’d like to do.

As frightening as that prospect also sounds, is not what scares me most: Like AI development in general, I’m worried about what happens if Boston Dynamics’ new training approach works flawlessly?

The company’s robot dog (named Spot) is already in commercial use, primarily on construction and industrial sites. Robots from other manufacturers are at work in other conditions that Stanford University describes as the “Three D’s” of dull, dirty, and dangerous, to which I’d add a forth: devoid of people.

Nobody wants to stand too close to a machine that could errantly send a metal arm through their heads.

But if robots can teach themselves to move as flexible and fluidly as living things (with the awareness to do so in any situation), then the floodgates will open up for putting them into everyday life.

Grocery shopping. Dog walking. Child or elder care. 

Scratchers of itches.

This makes the business case for Boston Dynamic’s reinforcement learning plans bluntly obvious, but what’s less clear is what it will mean for the qualities and values of our lived experiences, especially since self-improving robots won’t just get as good as we are at walking or juggling (or whatever) but better than us.

Their capabilities will teach US how to become dependent on them.

And then we’ll have to teach ourselves new tricks.

Turns out we’re the old dogs in this story.