Leading companies creating or using AI have fired staff responsible for evaluating the tech’s ethical issues, according to a story in the Financial Times last week.
The function, loosely known as “Responsible AI,” still exists, but the layoffs and the continued development of generative AI like ChatGPT suggest a retreat from the practice.
Though I can’t tell you what it is, exactly.
Responsible AI sounds like a marketing slogan: less an operating unit within a business than an idea or vague aspiration. Consultants like Accenture have produced pithy briefings on it that amount to a whole lot of nothing.
Helping ensure that AI doesn’t rely on biases of race, gender, economic status, or other qualities is often cited as the reason it needs to be responsible, but that’s a business issue, not a matter of justice or ethical judgment. The better AI works for the greatest number of people, the more money the folks who sell it should make.
Specify the functional aspects that will lead to that outcome in a marketing or design brief, and even an engineer who hasn’t seen daylight in recent memory could follow it, if not fully understand why.
The business operators can and should make those decisions.
Also, AI is responsible for complying with the laws that regulate its operation, which usually come into play through the data on which it voraciously feeds. It’s why companies publish lengthy privacy statements reassuring users that, while they have no control over what AI will do with their personal information (or even know or understand what that is), the data will be collected and used lawfully.
Legal departments have responsibility for that oversight, right?
So, what’s Responsible AI? To whom or what is it responsible?
Does responsible AI fool people into thinking they’re chatting with another human being? Can responsible AI produce deepfakes or other deceptions? Would responsible AI nudge consumers into buying things they otherwise might have skipped? Could responsible AI be used to monitor behaviors and dictate, say, how fast someone can drive or what unhealthy foods they can eat without losing their insurance coverage?
What’s responsible about AI putting millions of people out of work?
It would be great if those Responsible AI departments had the authority to adjudicate these ethical questions, and even better if they possessed the power to kill a feature or an entire project.
Recent experience suggests no such thing. My bet is that they produce lots of thought-provoking presentations and go to conferences.
Still, it’s bad news that there’ll be fewer of them working at the companies that are forging ahead with AI technologies for which nobody has the slightest clue what the social, economic, or even religious implications might be.
The worse news is that I fear Responsible AI was, and is, responsible only to those companies’ bottom lines.