Early research results from OpenAI suggest that its GPT-4 LLM won’t reliably help people create biological weapons, but that it will give them access to information bearing on the accuracy and “completeness of tasks.”
In other words, it won’t work as a biological bomb machine but rather as a helpful evil henchman. At best.
Its study, overviewed here, asked 100 participants with varying degrees of biology and lab experience to complete tasks related to “the end-to-end process” for creating biological threats.
Of course, bad actors in the future may not follow established development processes, or even the ones we can imagine today. Every passing moment brings new opportunities to discover new data and new pathways to a desired conclusion. Ditto for the functionality of GPT-4, which will likely be superseded by GPT-5, not to mention put to shame by some dark horse competitor.
Reporting that the AI can’t facilitate the “end-to-end” creation or collection of some superbug is like describing the weather at a particular place and moment in time. Tomorrow’s or even next week’s forecast might be somewhat dependable, but looking out further than that is anyone’s guess.
The study found that its pretend bioterrorists’ progress wasn’t “statistically significant,” which raises as many questions as it presumes to answer, starting with the definition and uses of statistical significance.
I’m no mathematician, but I understand statistical significance in this study to mean a measurably greater likelihood of people creating a bioweapon using AI than without it; the assumption that AI makes no difference is the “null hypothesis.” The statistical part of the research is determining the threshold at which the observed difference is large enough to reject that null hypothesis (and not merely be the result of randomness).
So when the study failed to find statistical significance, it doesn’t mean the test didn’t show participants successfully building components of, or entire processes for, creating biological weapons, but rather that those accomplishments didn’t clear the researchers’ threshold for being attributed to the AI rather than to chance.
In other words, the bad things that participants did with AI may or may not have been possible without it. Or they might have been flukes.
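To make that idea concrete, here’s a minimal sketch of how such a significance test works, using a simple two-proportion z-test and made-up numbers (these are illustrative only, not OpenAI’s actual data or method): compare the success rate of a group working with AI against a group working without it, and check whether the gap is big enough to reject the assumption that the AI made no difference.

```python
# Minimal sketch of a two-proportion z-test (hypothetical numbers, not OpenAI's data):
# did the AI-assisted group succeed at a task significantly more often than the
# group working without AI?
from statistics import NormalDist

def two_proportion_z_test(success_ai, n_ai, success_no_ai, n_no_ai):
    """Return the one-sided p-value for 'the AI group did better than the control'."""
    p_ai = success_ai / n_ai
    p_no = success_no_ai / n_no_ai
    # Pooled success rate under the null hypothesis that AI makes no difference.
    p_pool = (success_ai + success_no_ai) / (n_ai + n_no_ai)
    se = (p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_no_ai)) ** 0.5
    z = (p_ai - p_no) / se
    # Probability of seeing a gap at least this large if the null hypothesis were true.
    return 1 - NormalDist().cdf(z)

# Hypothetical example: 12 of 25 AI-assisted participants complete a task,
# versus 8 of 25 without AI. The gap looks real, but the p-value comes out
# around 0.12, above the usual 0.05 cutoff, so the result is "not statistically
# significant." That is not the same as saying nobody succeeded.
p_value = two_proportion_z_test(12, 25, 8, 25)
print(f"p-value: {p_value:.3f}")
```

In other words, a “not significant” verdict only says the researchers can’t rule out chance at their chosen threshold; it says nothing about whether individual successes happened.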
The really scary part is that the risk of an AI-assisted bio-terror event doesn’t hinge on the weapon-building process being reliable or replicable, but on the chance that a single attempt could succeed. This reality is addressed in the conclusions section of the researchers’ report on OpenAI’s website, which states that GPT-4 may help evildoers get their work done (emphasis is theirs).
To their credit, they go on to list some of the limitations of their research design and declare an interest in pursuing more research.
But the problem is that the risk of bio-terror, or of an AI-assisted Armageddon from any other weapon of mass destruction, is greater than zero, and a research framework built on nothing more than common sense would suggest that the number is going up every day.
This makes OpenAI’s research just another entry in the corpus of blather intended to make us think that they care about ethics and safety as they (and other tech firms) madly race to develop ever more powerful and therefore more threatening AI.
It leaves us users and potential victims with nothing more than hope that AI won’t destroy us.
It’s Hopewashing.