AIs will increasingly make normative judgments about what things mean, relying more and more on what other AIs have concluded and using opaque code. What could possibly go wrong?
AI Is Harvesting Our Humanity
Generative AI is able to mimic how we think and communicate because we make it possible for it to analyze how we think and communicate.
We’re Clueless About AI
To say that most people are conflicted about AI would be an understatement. It’s on purpose.
So Much For Slowing Down
What about concerns about deepfakes, election manipulation, privacy, massive economic or social upheaval, and even the existential threat of AI?
Open Source Is AI Plutonium
An open source AI based on Meta’s LLaMA model is being used to create sexbots.
Its creator is thrilled to get to experiment with state-of-the-art tech, according to this story. He feels that commercial chatbots are “heavily censored.”
And that’s the argument in favor of open source development. Entrepreneurs and artists need freedom to experiment. The ugliness of a chatbot that engages in graphic rape fantasies is a small price to pay for all of the wonderful and beautiful things that might emerge from fooling around freely with AI.
After all, there’d have been no Internet goodness without porn badness, especially in the early days. I’d wager that most innovators of any sort have never been particularly comfortable working within the constraints of regulations or propriety.
But calling open source AI “code” or “a model” along with a cute name or acronym doesn’t do it justice.
Open source is AI plutonium. We’re being told that we must tolerate the possibility of deadly weapons in order to enjoy power generation.
It’s not true. Sure, open source tools like LLMs gave developers the easiest path to the quickest results. Online customer service will never be the same. A generation of kids can cheat better on their homework assignments. AI in government and businesses is culling data to find more patterns and make better predictions.
But we can be sure that development is underway on applications that are illegal, possibly deadly, and that certainly promise/threaten to change the ways we work and live. And there’s no way to find those bad actors among the good ones until their badness appears in public.
Such applications could even impact us without our knowing that AI was responsible.
So, we might never figure out that errant AI has been quietly manipulating stock prices or skewing new drug trials. It could sway elections, entertainment reviews, and any other crowdsourced outcome. Bad actors, or an AI acting badly, could encourage social media ills among teens and start fights between adults.
It might even start a war, or decide that nuclear weapons were the best way to end one.
Unlike plutonium, there’s no good or reliable way to track or control such outcomes, no matter how transparent the inputs might have been.
In true Orwellian fashion, the CEO of a site that promotes open source argues that the real risk is from businesses that are “secretive” and take at least some responsibility for their AI models, like Google and OpenAI. A VC exec who promotes AI worries that relegating development to big companies means “they’re only going to be targeting the biggest use-cases.”
It’s a false dilemma. I’d happily “censor” a porn application in exchange for a cure for cancer, especially if it came with the likelihood that the world wouldn’t get blown up along the way.
Who’s Responsible For AI?
Responsible AI wouldn’t rely on good intentions or mere compliance with regulations, but on designing responsibility into the technology builds themselves and then sharing those details fully and regularly over time.
Should AI Preach?
What if any religion’s deity speaks through AI? It’s not inconceivable: in fact, it should be hard to deny.
Move Over AI, We’re Robots Too?
Earlier this year, brain researchers found that spots in the area of the brain that governs physical movement intersect with networks for thinking.
Mind-body duality questions solved for good. The mind is what the brain does. It controls our bodies with commands like a CPU controls robot arms and legs.
Not.
That conclusion is a recurring delusion in neuroscience these days, probably because it’s so powerful. We can only study and understand what we can perceive and test. When it comes to human beings and our minds, all we have to look at is what’s physically contained in our skulls and the filaments of its connections to the rest of our bodies.
Mind is therefore a product of brain. It’s a self-evident, a priori truth. Machine references are more than analogies, they’re descriptive. Genetic code is computer code.
We’re all machines. It means that AI minds will one day equal if not exceed ours because those machines can think faster and have greater memories.
And, if there’s anything we can’t explain about our minds, such as our ability to have a sense of “self” or a presence in space and time that we feel as much as perceive, well, that’s just tech we can’t yet explain.
One of the study’s authors explained that the discovery “provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”
And there’s the rub. There is no explanation in the study, or in any scientific research.
More broadly, science provides descriptions. It reports what things do. It’s repeatable and reliable, and it has given us cars on our roads, electricity in our homes, and meds in our bloodstreams (among a zillion other aspects of modern life).
It is undeniably accurate and real. But it doesn’t tell us how things work, let alone why.
Just spend a few minutes on the Wiki page for electricity and you’ll see what I mean. Electricity has charge, force, current, but these are all descriptive qualities, not explanations. We don’t know why electrons move from one spot to the next, we just know that they do.
Same goes for all of the building blocks of reality. Atoms are attracted to one another by various forces, but the WHY of that attraction is a mystery. Gravity is mutual attraction between things that possess mass or energy, yet the best description of how it works — it’s the curvature of spacetime — still lacks a description of how that description works, let alone an explanation.
Science can describe life but can’t explain it. It can identify individual components of our genetic code and attach them to propensities for specific outcomes, but has no explanation for how or why. Locations for mental functions have been mapped in our brains, but there’s no such thing as a biological computer program that operates them.
We don’t have a clue how the squishy blob in our heads records music or can count time in hours and even minutes. We can only report that it does.
The scientific delusion that conflates description with explanation has at least two implications for AI:
First, it overstates the potential for AI consciousness. Since self-awareness popped out of our biological machine minds, the thinking goes, it’s only a matter of time before the same thing happens to AI. We don’t know how, or when, or why, but simply that it will.
When researchers can’t describe how a generative AI made a decision, they claim it’s proof that such evolution is already taking place.
Sounds like wishful thinking to me.
Second, it understates the complexity of human consciousness. What if there are material limits to what brain functions science can describe? Already, we’re told we live in a Universe that is infinite, which defies explanation, and that the matter of which we’re made isn’t tangibly real as much as the uncertain possibility of existence, which doesn’t even fully work as a description.
So, while researchers might be able to provide ever-finer pinpoints of the physical whats for every mental quality we associate with being alive, they could still leave us wanting for an explanation for how or why.
I think this conundrum reveals the mechanistic model of the human brain/mind as less model than metaphor. I’m not suggesting that the limits of scientific description mean we need to invent magical explanations instead.
Rather, I wonder if some things are simply unknowable, and that a little humility would give us a better shot at coming to terms with how and why things are the way they are. That includes what we expect from AI.
If I’m right, the hope that continued scientific research will prove every question is answerable is probably a delusion.
Are AI Lies Worse Than Human Lies?
Why don’t social media influencers already require a big badge that says they’ve been hired by marketers and get paid when people follow their recommendations?
Sorry Google, AI Is Your Problem
If you want to read a summary of what’s wrong with the conversation about AI risk, read “Google’s Policy Agenda For Responsible Progress In Artificial Intelligence.”