Chipmaker Nvidia’s CEO Jensen Huang said last week that his company will stop selling GPUs (its chips) to companies engaged in unethical projects.
“We only sell to customers that do good,” Huang said in this article, which went on to explain that the recently celebrated ChatGPT chatbot was trained using many thousands of Nvidia chips.
The statement is nonsense, for at least two reasons:
First, as Nvidia’s CEO admits, the company is a parts maker and not an operator of AI programs. Like a pen maker that doesn’t want bad actors using its products to sketch out plans for crimes, Nvidia has little control over what its customers actually do with its chips.
Second, very little of what AI has accomplished or will accomplish will be obviously good or bad.
Is it unethical to use AI to put millions of people out of work? If an AI can chart a course for a company to expend the bare minimum of effort combating climate change or empowering its employees, is that unethical or just good business?
What about AI that better understands a consumer’s interests and nudges their subconscious in the direction of buying products or services? AI could immerse every individual in a world of information and choices curated to include only things that can be sold to them…all in the name of convenience.
Are chatbots that assess and then tee up political rhetoric intended to inflame or raise money unethical, or just smart politics? Is the lie of a deepfake video any different from the lies and falsehoods that politicians level at one another all the time? One person’s fervent belief is another person’s propaganda, so how would AI tell the difference?
To his credit, Huang notes in the article that he supports government regulation but then adds the typical technologist refrain that politicians need to understand the tech before setting any rules.
Which means he doesn’t believe in any meaningful regulation whatsoever.
Do you think members of Congress understand how heart valves work? How about telephony? Forget tech and ask yourself whether they grasp how complex financial instruments function. How about defining pornography? A US Supreme Court justice once said he couldn’t define it, but he knew it when he saw it.
Governments need to understand impacts and effects, and do their best to identify both intended and unintended consequences. The tech is simply the prompt for those analyses and debates. We could regulate the railroads without members of Congress understanding how steam engines worked.
Ultimately, nobody responsible for making AI tech, or looking to make zillions of dollars from selling or operating it, has any interest in seeing it limited in any way, and certainly no interest in suffering meaningful constraints on its development or use.
Don’t hold your breath for Nvidia to call out an unethical AI customer. Same goes for any substantive AI legislation coming out of Congress.
Everyone is paralyzed by the complexity of the topic and transfixed by the money it’s going to make.
Sounds like a pretty unethical project to me.