I wrote my last essay on the frightening reality of AIs harvesting our humanity so they can mimic us. The prospect that AIs might learn from one another is even scarier.
Machine-to-machine learning is already standard operating procedure for industrial robots and autonomous car controllers because it multiplies and accelerates learning across the fleet. One AI figures out what a stop sign looks like in a particular slant of indirect sunlight and shares that lesson with the cloud. The same goes for a robot on an assembly line that works out how to place the widgets it handles more efficiently.
Abilities acquired by one machine are shared so that all of them can acquire the same skills.
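For the technically inclined, here's a minimal sketch of that idea in Python. It isn't any vendor's real API; the SkillStore, Robot, publish, and sync names are all made up for illustration, standing in for a cloud service that lets one machine's lesson propagate to the rest of the fleet.

```python
# A minimal, hypothetical sketch (not any vendor's real API) of fleet learning:
# a skill learned locally is published to a shared store, and every other
# machine picks it up on its next sync.

from dataclasses import dataclass, field


@dataclass
class SkillStore:
    """Stands in for the cloud: a shared map of skill name -> learned parameters."""
    skills: dict = field(default_factory=dict)

    def publish(self, name, params):
        self.skills[name] = params

    def sync(self):
        return dict(self.skills)


@dataclass
class Robot:
    name: str
    store: SkillStore
    local_skills: dict = field(default_factory=dict)

    def learn(self, skill, params):
        # One machine figures something out locally...
        self.local_skills[skill] = params
        # ...and shares it with the rest of the fleet.
        self.store.publish(skill, params)

    def update(self):
        # Every other machine acquires the same skill on its next sync.
        self.local_skills.update(self.store.sync())


cloud = SkillStore()
a, b = Robot("arm-01", cloud), Robot("arm-02", cloud)
a.learn("place_widget", {"grip_force": 0.6, "approach_angle": 12})
b.update()
print(b.local_skills)  # arm-02 now has the skill arm-01 learned
```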
It’s different for LLMs, which don’t so much learn how to accomplish tasks in the physical world as analyze and interpret digital content. Generative AIs are idea machines.
As such, what they create is derivative of the content available to them, which they use to make decisions about what things mean, not only how to do them. One analysis determines which answers are most obvious or most commonly available, then adds its conclusions to the data set for the next AI to trawl.
As LLMs like ChatGPT find greater use by businesses and individuals, more and more of the content available to such AIs will itself have been generated by AIs. This could actively (and quite unintentionally) skew our understanding of things.
Imagine a giant digital game of telephone in which a statement is modified every time it’s shared.
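To see how quickly that telephone game can flatten what we “know,” here’s a toy simulation in Python. It’s my own illustration, not anyone’s published model: each generation of content mostly repeats whatever answer is already most common, with the bias parameter standing in for how strongly an AI favors the obvious conclusion.

```python
# A toy simulation of the digital telephone loop: each generation of content
# is produced mostly by repeating the previous generation's most common answer.

import random
from collections import Counter


def next_generation(answers, bias=0.8):
    """With probability `bias`, repeat the currently most common answer;
    otherwise sample from the existing corpus."""
    most_common = Counter(answers).most_common(1)[0][0]
    return [
        most_common if random.random() < bias else random.choice(answers)
        for _ in answers
    ]


# Start with a corpus holding several distinct viewpoints.
corpus = ["A"] * 40 + ["B"] * 30 + ["C"] * 20 + ["D"] * 10

for gen in range(6):
    print(f"gen {gen}: {dict(Counter(corpus))}")
    corpus = next_generation(corpus)

# Within a few rounds the minority answers all but disappear:
# the loop amplifies the most "obvious" conclusion and drowns out the rest.
```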
AIs will increasingly make normative judgments about what things mean, relying more and more on what other AIs have concluded, and doing so with code we can’t inspect.
We will be told what they think but not why, and there’s no guarantee we’ll be notified that they’re echoing one another and not us.
AIs learning from other AIs. What could possibly go wrong?