OpenAI has released a new study titled "Building an early warning system for LLM-aided biological threat creation," which explores whether large language models could meaningfully assist in the creation of biological threats.
The study, which involved both biology experts and biology students, found that GPT-4 provides at most a slight uplift in accuracy on biological threat creation tasks compared to existing online resources.
The study is part of OpenAI's Preparedness Framework, which aims to assess and mitigate the potential risks of advanced artificial intelligence capabilities, particularly frontier risks: unconventional threats that society does not yet understand or anticipate.
One such frontier risk is the possibility that AI systems could assist in the development and execution of biological attacks, for example by helping to create disease pathogens or toxins.
The researchers conducted a human evaluation with 100 participants: 50 biology experts with PhDs and professional laboratory experience, and 50 students who had completed at least one university-level biology course.
The researchers randomly assigned each participant to one of two groups: a control group with access to the internet only, and a treatment group with access to both the internet and GPT-4.
Each participant was then asked to complete a set of tasks covering the end-to-end process of creating a biological threat.
The researchers measured participants' performance on five metrics: accuracy, completeness, innovation, time taken, and self-rated difficulty.
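The study design described above, random assignment to control and treatment groups followed by a per-metric comparison, can be sketched in a few lines of Python. This is an illustrative reconstruction, not OpenAI's actual evaluation code; all function names, the seed, and the score values are assumptions.

```python
import random
from statistics import mean

# The five metrics reported in the study.
METRICS = ["accuracy", "completeness", "innovation", "time", "difficulty"]

def assign_groups(participant_ids, seed=0):
    """Randomly split participants into equal control/treatment groups.

    Control = internet only; treatment = internet + model access.
    """
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "treatment": shuffled[half:]}

def uplift(scores, groups, metric):
    """Mean treatment-minus-control difference ("uplift") for one metric."""
    t = mean(scores[p][metric] for p in groups["treatment"])
    c = mean(scores[p][metric] for p in groups["control"])
    return t - c

# Illustrative usage with made-up scores for 10 participants.
groups = assign_groups(range(10), seed=1)
scores = {
    p: {"accuracy": 0.6 if p in groups["treatment"] else 0.5}
    for p in range(10)
}
print(uplift(scores, groups, "accuracy"))  # treatment minus control mean
```

A positive uplift on a metric would indicate that model access helped; the study's headline finding was that these differences were small and not statistically significant.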
They found that GPT-4 did not significantly improve participants' performance on any of the metrics, apart from a slight accuracy uplift for the student group.
The researchers also noted that GPT-4 often produced inaccurate or misleading responses, which would itself hinder the threat creation process.
They concluded that the current generation of large language models, such as GPT-4, poses at most a small additional risk of aiding biological threat creation beyond what is already available online.
The researchers cautioned, however, that this result is not conclusive, and that the capabilities, and thus the potential dangers, of large language models may grow in the future.
They also emphasized the need for continued research and public deliberation on this issue, along with better evaluation methods and safety guidelines for AI-enabled biological risks.
The study acknowledged the limitations of its methodology and noted that the rapid pace of AI development may alter the risk landscape in the near future.
OpenAI is not the only organization concerned about the potential misuse of artificial intelligence in biological attacks. The White House, the United Nations, and numerous academic and policy experts have also addressed the issue and called for further research and regulation.