Anthropic Drops Flagship Safety Pledge
In an abrupt shift, the company may release future AI models without ironclad safety guarantees
TIME (time.com)
"In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate."
"We felt that it wouldn't actually help anyone for us to stop training AI models,"
Am I reading this correctly — that Anthropic already can't guarantee its own safety measures are adequate?
https://tonysull.co/notes/anthropic-drops-flagship-safety-pledge
