Sam Altman, the exec behind OpenAI’s ChatGPT, told Congress yesterday that he wanted it to regulate his industry. He “charmed the socks off” legislators who are used to sparring with dodgy and disingenuous tech execs.

Imagine if he’d concocted a highly transmissible virus and unleashed it on the world, and then appeared in Washington and declared his support for finding a cure.

Or if he’d handed out an addictive drug outside schools across America and then called for better rules for playground access. 

They’d have arrested him at the door.

Our leaders still don’t know how to talk about technology with technologists.

Much has been made of the fact that Altman’s appearance came “earlier in OpenAI’s lifecycle” than those of social media leaders who’d been at work for a decade or more before anybody thought to question them.

Only it’s not early in OpenAI’s lifecycle (see virus and drug similes above).

It’s too late.

No matter how chatty or friendly the conversations were between lawmakers and Altman — one guy playfully asked Altman if he’d quit his job and run a would-be government agency tasked with regulating AI — the hearing served only to codify the status quo.

Congress, and we, learned nothing new. Whatever regulation gets developed won’t address the real issues AI presents to our lives and world, for two basic reasons:

First, every regulatory scheme mentioned during the hearing presumes that an AI is a thing with established parts and defined functions, like a drug or an atomic bomb.

But even the most rudimentary AI isn’t set in stone the moment it’s switched on. It learns, adapts, gets better, does more. That’s the point. And AI feeds on information from any number of sources. Plutonium is easy to regulate. Data isn’t.

Altman knows this, even if the curmudgeonly legislators don’t.

Second, references to such dangerous things also fit into the narrative that we need to worry about similarly big, existential risks arising from AI. That framing focuses the conversation on far-off scenarios like a driverless car plowing into a crowd of pedestrians or an AI launching nuclear missiles.

What about the data on humanity that AIs were sucking up every nanosecond that Altman spoke and the legislators cooed? AI is used to make gazillions of decisions for people every day, usually without their knowledge and often without their active participation.

It will likely put many millions out of work in the near future, and limit the opportunities for human employment thereafter.

Altman reassured Congress that the AI industry designs “…systems that do not maximize for engagement…we’re not trying to get people to use it more and more.” I can imagine the sigh of relief in the room since that’s exactly what social media companies did when the government wasn’t looking.

It’s worse. The AI industry doesn’t want to engage people. It wants to replace them.

That’s why IBM paused hiring for thousands of back-office positions and Dropbox fired hundreds of staff to get the work done with machines instead. One study finds that LLMs like Altman’s ChatGPT could soon put people in 20 professions out of work, including teachers, judges, and psychologists.

There won’t be any regulations affecting that transformation.

This week’s lovefest did nothing to address the real risks of AI. And maybe that was the point. Maybe the AI industry doesn’t want Congress regulating those drugs and schoolyards. Altman might be a smart businessperson after all, and not a virtuous philosopher king.

And maybe it was all kabuki.