Anthropic CEO calls for compulsory safety evaluations for AI models

Dario Amodei, the CEO of Anthropic, said that AI companies, including his own, should be required to undergo safety testing before releasing their technologies to the public.

“We absolutely need mandatory testing, but we also must approach it with caution,” Amodei remarked at an AI safety summit in San Francisco, organized by the US Departments of Commerce and State.

His comments followed the release of test results by the US and UK AI Safety Institutes, which evaluated Anthropic’s Claude 3.5 Sonnet model in various categories, including cybersecurity and biological risks. Anthropic and OpenAI had both agreed to submit their AI models for government testing.

Amodei noted that while companies like Anthropic and OpenAI have established voluntary safety guidelines, such as Anthropic’s responsible scaling policy and OpenAI’s preparedness framework, more concrete regulatory measures are needed.

“There’s no effective system to ensure these guidelines are being followed. Companies just say they will,” Amodei explained. “While public scrutiny and employee concerns provide some pressure, I believe it won’t be sufficient in the long run.”

Amodei’s viewpoint is influenced by his belief that advanced AI systems, potentially more intelligent than humans, could emerge as early as 2026. Although AI companies are currently testing for hypothetical risks, such as biological threats, he emphasized that these risks could soon become real.

At the same time, Amodei cautioned that any testing requirements must remain “flexible” to keep pace with the rapidly evolving technology. “This will be a difficult socio-political challenge,” he concluded.
