Prove You’re Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Credentials,” or PHCs, would help people protect themselves from privacy and security threats, not to mention the proliferation of falsehoods online, all of which will almost certainly come from a tidal wave of bots that get ever better at impersonating people.

Call it a Turing test for people.

Of course, whatever the august body of researchers comes up with won’t be as onerous as a multiple-choice questionnaire; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes and finds proof of who we say we are in the cloud (or something).
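To make the hand-waving slightly more concrete, here’s a minimal sketch, in Python, of what a credential check might look like behind the scenes. Everything in it is an assumption for illustration: the issuer, the claim format, and the shared-secret HMAC signature are hypothetical stand-ins, not anything the researchers have actually specified (their proposals lean on fancier cryptography, like zero-knowledge proofs).

```python
# Purely illustrative sketch of a signed "personhood credential" check.
# The issuer, claim format, and shared-secret HMAC are hypothetical;
# real PHC proposals rely on asymmetric keys and zero-knowledge proofs.
import hashlib
import hmac
import json

ISSUER_SECRET = b"hypothetical-issuer-key"  # stand-in for a real issuer's signing key


def issue_credential(holder_id: str) -> dict:
    """The issuer attests that this holder passed some proof-of-personhood check."""
    claim = {"holder": holder_id, "attribute": "verified-human"}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def verify_credential(credential: dict) -> bool:
    """A website checks the issuer's signature without learning anything else about the holder."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])


if __name__ == "__main__":
    cred = issue_credential("user-123")
    print(verify_credential(cred))  # True -- but it proves possession of a credential, not humanity
```

Even in this toy version you can see the rub: the check only proves that somebody, or something, holds a valid credential, not that a human is actually at the keyboard.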

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

For starters, PHCs probably won’t work, or won’t work consistently, because they’ll always be in a race with computers that get better at hacking security and pretending to be human. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though they don’t exist yet, the future security threats of quantum computers are real enough that the US government has already issued standards to counter the capabilities it imagines hackers might have in, say, a decade, because once those machines are invented, the bad guys will be able to retroactively decrypt the data they’re harvesting today.

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence, which wouldn’t just mimic human cognition but possess something equal to or better than it. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implications of PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical bill issues. PHCs could make us look back fondly on them.

Anybody who claims that such inanities couldn’t happen because some inherent quality of technology, whether extant or planned, will prohibit it is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best ones.

Never say never.

Plus, there’s a side effect: making online users prove that they’re human will become a litmus test for accessing services, sort of like CAPTCHA on steroids. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

AI’s Latest Hack: Biocomputing

AI researchers are fooling around with using living cells as computer chips. One company is even renting compute time on a platform that runs on human brain organoids.

It’s worth talking about, even if the technology lurks at the haziest end of an industry already obscured by vaporware.

Biocomputing is based on the fact that nature is filled with computing power. Molecules perform tasks and organize into systems that keep living things alive and responsive to their surroundings. Human brains are complex computers, or so goes the analogy, but intelligence of varying sorts is everywhere, even built into the chemistry and physics on which all things living or inert rely.

A company called FinalSpark announced earlier this month that it is using little snippets of human brain cells (called organoids) to process data. It’s renting access to the technology and live streaming its operation, and claims that the organic processors use a fraction of the energy consumed by artificial hardware.

But it gets weirder: In order to get the organoids to do its bidding, FinalSpark has to feed them dopamine as positive reinforcement. And the bits of brain matter only live for about 100 days.

This stuff raises at least a few questions, most of which are far more interesting than whether or not the tech works.

For starters, where do the brain cells come from? Donations? Fresh cadavers? Maybe they’re nth-generation cells that have never known a world beyond a Petri dish.

The idea that they have to be coaxed into their labors with a neurotransmitter that literally makes them feel good hints at some awareness, however vague, of their own existence. If they can feel pleasure, can they experience pain?

At what point does consciousness arise?

We can’t explain the how, where, or why of consciousness in fully formed human beings. So, even if the clumps of brain matter are operating as simple logic gates, who’s to say that some subjective sense of “maybe” won’t emerge in them along the way?

The study of nature’s smarts and systems is still an emerging and fascinating field. Integrative thinking about how ants build colonies, trees care for their seedlings, and octopuses think with their tentacles offers just hints of what we could learn about intelligence, and perhaps thereafter adapt to improve our own condition.

But human brain cells given three months to live while sentenced to servitude calculating chatbot queries?

Don’t Fall In Love With Your AI

You’re probably going to break up with your smart assistant. Your future life partner has just arrived.

OpenAI’s new ChatGPT comes with a lifelike voice mode that can talk as naturally and fast as a human, throw out the occasional “umms” and “ahhs” for effect, and read people’s emotions from selfies.

The company says the new tech comes with “novel risks” that could negatively impact “healthy relationships” because users get emotionally attached to their AIs. According to The Hill:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

Where will that investigation take place? Your life. And, by the time there’s any conclusive evidence of benefit or harm, it’ll be too late to do anything about it.

This is cool and frightening stuff.

For those of us who are I/O nerds, the challenge of interacting with machines is a never-ending process of finding easier, faster, and more accurate ways to get data into devices, get it processed into something usable, and then push it out so that folks can use it.

Having spent far too many hours waiting for furniture-sized computers to batch process my punchcards, I find the promise of using voice interaction to break down the barriers between man and machine thrilling. The idea that a smart device could anticipate my needs and intentions is even more amazing.

It’s also totally scary.

The key word in OpenAI’s promising and threatening announcement (they do it all the time, BTW) is dependence, as The Hill quotes:

“[The tech can create] both a compelling product experience and the potential for over-reliance and dependence.”

Centuries of empirical data on drug use proves that making AI better and easier to use is going to get it used more often and make it harder to stop using. There’s no need for “continued investigation.” A ChatGPT that listens and talks like your new best friend has been designed to be addictive.

Dependence isn’t a bug, it’s a feature.

About the same time OpenAI announced its new talking AI, JPMorgan Chase rolled out a generative AI assistant to “tens of thousands of its employees”; “more than 60,000 employees” are already using it.

You can imagine that JPMorgan Chase isn’t the only company embracing the tech, or the only one that will benefit from using its most articulate versions.

Just think…an I/O that enables us to feed our AI friends more data and rely on them more often to do things for us until we can’t function without them…or until they have learned enough to function without us.

Falling in love with your AI may well break your heart.

Bias In AI

The latest headlines about AI are a reminder that the most egregious biases relating to AI are held by the people talking about it.

AI will improve the world. It may destroy it. My favorites are the presumably thoughtful positions that say it’s a little bit of both, and therefore demand additional thoughtful positions.

Last week’s news was dour about AI not transforming businesses fast enough, so the stock markets reacted with a big selloff. Prior news of AI’s transformative promise got the EU’s bureaucracy to react with lots of regulations.

There’s a biased interest behind all of it.

I know, that sounds like I’m afflicted, too, with something called “intentionality bias” or, worse, a penchant for tin foil hats.

Most things that happen in life aren’t the result of a conspiracy or ulterior motive. But some things are, and I think what we’re told about AI is one of them.

When someone building AI comments about its potential to do great harm, they’re also promoting its promise and pumping the value of their business. Investors who deride AI are looking to make money when tech stocks fall. Academics who blather about the complexities of AI are interested in more funding to pay for more blather, and management consultants who talk about those complexities hope to make money promising to resolve them. Bureaucrats tend to build more bureaucracy after claiming to foster innovation (or whatever).

There’s no better poster child than the AI conversation for the inanity of believing in some online “public square” whereat ideas are fairly vetted and conclusions intelligently reached.

The conversation about AI is a battle of biases, and its winners are determined by the size of their megaphones and the time of day.

It’s too bad, because we’d all benefit from a truly honest and robust dialogue, most notably when it comes to making decisions about if, where, and when we want AI to play a role in our lives.

But we’re not being empowered to ask important questions or get reliable answers. The conversation about AI sees us as users, potential victims, and always the data-generating fodder for its continued rollout.

The ultimate bias of AI is against us.