OpenAI Forms “Preparedness” Team to Mitigate AI Risks

On Thursday, OpenAI announced the formation of a new team to assess and investigate artificial intelligence models with the aim of safeguarding against what it terms “catastrophic risks.”

The team, named “Preparedness,” will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning. Madry joined OpenAI in May as its “Head of Preparedness,” according to his LinkedIn profile.

The Preparedness team’s primary responsibilities are to track and forecast the risks posed by future artificial intelligence systems and to protect against dangers ranging from their ability to persuade and deceive humans (as in phishing attacks) to their capacity to generate malicious code.

Some of the risk categories the Preparedness team is charged with studying seem more far-fetched than others. For example, OpenAI’s post lists “chemical, biological, radiological, and nuclear threats” among its areas of top concern regarding artificial intelligence models.

Sam Altman, the CEO of OpenAI, is known for his pessimism about the technology, often voicing fears that artificial intelligence “could cause human extinction.” The company also says it is willing to study “less obvious,” more grounded areas of emerging technological risk.

In conjunction with the launch of the Preparedness team, OpenAI is soliciting ideas for risk studies, with a $25,000 prize and a job on the Preparedness team on offer for the top ten submissions.

OpenAI states that the Preparedness team will also be responsible for formulating a “risk-informed development policy,” which will outline the company’s approach to building evaluations and monitoring tools for AI models, its risk-mitigation procedures, and its governance structure for overseeing model development.

This work is intended to complement the company’s other efforts in AI safety, with a focus on both the pre-deployment and post-deployment phases of its models.

OpenAI stated, “We believe that advanced AI systems, which surpass the capabilities of even the most advanced current models, have the potential to benefit humanity as a whole. However, they also pose increasingly severe risks. We need to ensure that we have the understanding and infrastructure necessary for the safety of high-capability AI systems.”

