Open Source Is AI Plutonium

An open source AI based on Meta’s LLaMA model is being used to create sexbots.

Its creator is thrilled to get to experiment with state-of-the-art tech, according to this story. He feels that commercial chatbots are “heavily censored.”

And that’s the argument in favor of open source development. Entrepreneurs and artists need freedom to experiment. The ugliness of a chatbot that engages in graphic rape fantasies is a small price to pay for all of the wonderful and beautiful things that might emerge from fooling around freely with AI.

After all, there’d be no Internet goodness without porn badness, especially in the early days. I’d wager that most innovators of any sort have never been particularly comfortable working with the constraints of regulations or propriety.

But calling open source AI “code” or “a model” along with a cute name or acronym doesn’t do it justice.

Open source is AI plutonium. We’re being told that we must tolerate the possibility of deadly weapons in order to enjoy power generation. 

It’s not true. Sure, the strides made with open source tools like LLMs gave developers the easiest path to the quickest results. Online customer service will never be the same. A generation of kids can cheat better on their homework assignments. AI in government and businesses is culling data to find more patterns and make better predictions.

But we can be sure that development is underway on applications that are illegal, possibly deadly, and which certainly promise/threaten to change the ways we work and live. And there’s no way to find those bad actors among the good ones until their badness appears in public. 

It could even impact us and we wouldn’t know that AI was responsible.

So, we might never figure out that an errant AI has been quietly manipulating stock prices or skewing new drug trials. It could sway elections, entertainment reviews, and any other crowdsourced outcome. Bad actors, or an AI acting badly, could encourage social media ills among teens and start fights between adults.

It might even start a war, or decide that nuclear weapons were the best way to end one.

Unlike plutonium, there’s no good or reliable way to track or control such outcomes, no matter how transparent the inputs might have been.

In true Orwellian fashion, the CEO of a site that promotes open source argues that the real risk is from businesses that are “secretive” and take at least some responsibility for their AI models, like Google and OpenAI. A VC exec who promotes AI worries that relegating development to big companies means “they’re only going to be targeting the biggest use-cases.”

It’s a false dilemma. I’d happily “censor” a porn application in exchange for a cure for cancer, especially if it came with the likelihood that the world wouldn’t get blown up along the way.

Move Over AI, We’re Robots Too?

Earlier this year, brain researchers found that spots in the brain region that governs physical movement intersect with the networks responsible for thinking.

Mind-body duality questions solved for good. The mind is what the brain does. It controls our bodies with commands like a CPU controls robot arms and legs.

Not.

It’s a recurring delusion in neuroscience these days, probably because it’s so powerful. We can only study and understand what we can perceive and test. When it comes to human beings and our minds, all we have to look at is what’s physically contained in our skulls and the filaments connecting it to the rest of our bodies.

Mind is therefore a product of brain. It’s a self-evident, a priori truth. Machine references are more than analogies; they’re descriptive. Genetic code is computer code.

We’re all machines. It means that AI minds will one day equal if not exceed ours because those machines can think faster and have greater memories.

And, if there’s anything we can’t explain about our minds, such as our ability to have a sense of “self” or a presence in space and time that we feel as much as perceive, well, that’s just tech we can’t yet explain.

One of the study’s authors explained that the discovery “provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”

And there’s the rub. There is no explanation in the study, or in any scientific research. 

More broadly, science provides descriptions. It reports the ways things work. It’s repeatable and reliable, and it has given us cars on our roads, electricity in our homes, and meds in our bloodstreams (among a zillion other aspects of modern life). 

It is undeniably accurate and real. But it doesn’t tell us how things work, let alone why.

Just spend a few minutes on the Wikipedia page for electricity and you’ll see what I mean. Electricity has charge, force, and current, but these are all descriptive qualities, not explanations. We don’t know why electrons move from one spot to the next; we just know that they do.

Same goes for all of the building blocks of reality. Atoms are attracted to one another by various forces, but the WHY of that attraction is a mystery. Gravity is mutual attraction between things that possess mass or energy, yet the best description of how it works — it’s the curvature of spacetime — still lacks a description of how that description works, let alone an explanation.

Science can describe life but can’t explain it. It can identify individual components of our genetic code and attach them to propensities for specific outcomes, but has no explanation for how or why. Locations for mental functions have been mapped in our brains, but there’s no such thing as a biological computer program that operates them. 

We don’t have a clue how the squishy blob in our heads records music or can count time in hours and even minutes. We can only report that it does.

The scientific delusion that conflates description with explanation has at least two implications for AI:

First, it overstates the potential for AI consciousness. Since self-awareness popped out of our biological machine minds, the thinking goes, it’s only a matter of time before the same thing happens to AI. We don’t know how, or when, or why, but simply that it will.

When researchers can’t describe how a generative AI made a decision, they claim it’s proof that such evolution is already taking place.

Sounds like wishful thinking to me.

Second, it understates the complexity of human consciousness. What if there are material limits to what brain functions science can describe? Already, we’re told we live in a Universe that is infinite, which defies explanation, and that the matter of which we’re made isn’t tangibly real as much as the uncertain possibility of existence, which doesn’t even fully work as a description.

So, while researchers might be able to provide ever-finer pinpoints of the physical whats for every mental quality we associate with being alive, they could still leave us wanting an explanation of how or why.

I think this conundrum reveals the mechanistic model of the human brain/mind as less model than metaphor. I’m not suggesting that the limits of scientific description mean we need to invent magical explanations instead.

Rather, I wonder if some things are simply unknowable, and that a little humility would give us a better shot at coming to terms with how and why things are the way they are. That includes what we expect from AI.

If I’m right, the hope that continued scientific research will prove every question is answerable is probably a delusion.

AI Deepfakes Are A Distinction Without A Difference

The European Union has demanded that online content faked by AI be labeled as part of its fight against “AI generated disinformation.”

Hell yeah. I want my lies and half-truths produced by real people. Hand-crafted disinformation. With hands.

Technology has been misleading us for a long time.

Story length and placement in newspapers were dictated in large part by the technologies of printing and distribution. Headlines were conceived as analog clickbait, often with a tenuous connection to the truth.

Radio allowed broadcasters to fake baseball sounds as they narrated games they were reading about on a telegraph tape. It let Orson Welles recreate all of the sounds of an on-the-spot radio newscast describing a Martian invasion. The uncontrolled laughter on comedy shows wasn’t real.

Television (and video more broadly) has always been a “funnel” technology, showing us what’s within the frame its cameras can capture, and nothing else. We might see a crowd gathered right in front of a stage in an otherwise empty venue. A snippet of an angry encounter between two people will lack the context of what immediately preceded it or followed.

Visuals are immediate but they are incomplete, which often leads to misunderstanding. It turns out a picture requires a thousand words rather than replacing them.

Computers have been delivering fantasies to us for decades.

Spoiler Alert: Leo wasn’t really standing on the deck of the Titanic and the planet Pandora doesn’t exist. Most pop stars can’t sing with perfect pitch, and Internet influencers don’t have sparkly eyes and perfect skin.

It’s all fake, at least somewhat, thanks to the creative and corrective tools of technology. 

The EU isn’t talking about labelling all of these computer-generated falsehoods, of course, so its actions raise the question: what is it really trying to fix?

Because when it comes to misinformation, we human beings put technology, and AI in particular, to shame.

Very little of what we hear from politicians is wholly true. There’s always a spin or selective omission. Businesses report what they’re required to disclose to regulators while conveniently staying mum on what’s not, or claim to be saving the planet from [insert global catastrophe here]. 

Brand marketers make promises that beer or makeup will make us attractive and slow the flow of time.

Where’s the labelling for all of this misinformation?

Sadly, it’s not necessary because any reasonable consumer of media already assumes that everything we hear or see is not entirely true.

Faked content is just another form of false content, and we already bathe in the latter.

Deepfakes of President Biden joining the Shining Path or Paul and Ringo reuniting with dead bandmates for a new album would be no worse or more convincing than what is already searchable online.

Are we more uncomfortable with AI producing it than with humans doing so?

There’s another insight into the EU’s thinking: a government task force has also decided that customer service chatbots must be clearly identified as AI.

Why? How many of us have interacted with human beings who acted just like robots?

Will the EU require them to behave differently? What if the chatbot can act with more empathy and operational latitude than some minimum-wage human gig worker?

Labelling something as originating from “AI” is kinda insisting on making a distinction without a difference. And it’s not even entirely true, since there’s likely a human being (with an agenda) behind the content.

We should be thankful that the EU even cares about trying to regulate AI, especially since its developers have outsourced responsibility for keeping us safe from the existential risk of their creation to, well, anybody other than themselves.

The US Congress is grossly uninformed on the topic, though it’s rumored that it will soon introduce its initial thinking on regulating laserdiscs.

But the real threat of AI is how it is changing business, culture, and us.

How about a label that reveals how many human beings were put out of work by AI? Here’s product X that comes with so-and-so number of people rendered obsolete. Where’s the financial reporting disclosure on that impact?

Why not insist on disclosure of energy use of AI, so we could decide if we wanted to buy something that required an extra ton of carbon to get spewed into the atmosphere instead of simply buying a human coder a lunch now and then?

Where’s the testing affirmation of mission-critical AI? Robots already do most of the work flying airplanes, but only after intensive training and testing (and they still fail, as evidenced by the MCAS automation that downed two Boeing 737 MAXs).

How are governments certifying the safety of AI inserted into cars or our homes? What about ensuring that they’re not biased when assessing health care claims, deciding college admissions, or auditing tax returns?

Again, we’re used to people doing this work imperfectly, but maybe the point should be to figure out how to make AI an improvement on the status quo?

And where’s the massive, world-wide, shockingly deep and responsible research on how AI will impact our senses of personal identity and well-being? We’re only now seeing the effects of what social media has done and is doing to us, and most of it isn’t good (or reversible, but technologists have outsourced responsibility for dealing with that consequence to parents).

There’s so much that governments could do to better frame and oversee what’s happening, but trying to catch deep fakes or forcing AI to reveal itself should be pretty low on the list.

The Internet is already filled with crap and lies. 

Marketer, Meet Your AI Replacement

A recent survey of marketers found that over half of them are currently using generative AI, yet few of them realize that it’s going to put them out of work.

The research, sponsored by Salesforce, revealed that marketers estimated AI would save them over five hours per work week, which adds up to at least a month every year.

Since there were 1,029 respondents to the survey, the savings they cited add up to the equivalent of 85 full-time employees, or nearly one in ten of them.
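Here’s the back-of-the-envelope math as I reconstruct it (Salesforce doesn’t show its work, so the assumptions are mine): if each of the 1,029 respondents saves roughly a month of work per year, that’s 1,029 ÷ 12 ≈ 85 person-years of labor, and 85 ÷ 1,029 ≈ 8 percent, or nearly one in ten.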

Their primary concern? “Accuracy & Quality.” Two-thirds of them say that their employers aren’t providing enough quality data or AI training for them to fully exploit the technology that will render them unemployed.

Of course, this research is part of a propaganda campaign from Salesforce, which has been selling a sales and marketing automation platform for many years (and quite successfully). Its announcement of the survey findings is filled with blather from its executives touting the transformative potential of AI and, without naming it, the importance of its offering.

As for the details of the marketers’ use of AI, they’re pretty much applying it to producing the content they’re paid to produce. They call it “busy work,” oddly, and then go on to say that they think AI will someday soon “transform the way they analyze data, personalize messaging content, and build marketing campaigns,” among other benefits.

This will allow them to “focus on more strategic work,” whatever that means. There’s a reference that many of them think AI lacks “human creativity and contextual knowledge” and will require human oversight, so maybe they think that they’ll get new managerial jobs to watch robots do their old ones.

But that’s just wishful thinking doing the talking.

We don’t know who these marketers are, and any survey is limited to getting answers to the questions it asks. It’s intriguing that most of them are eager to give AI a greater share of their workloads.

But this isn’t research, it’s sales promotion. And if the marketers who responded to the survey are as clueless as they appear, they probably deserve to get replaced by robots.