Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Nevertheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.
As The Midas Project noted in a blog post on Tuesday, however, the draft applied only to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy “within three months,” that is, by May 10. The deadline came and went without acknowledgement on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and thus potentially more dangerous, than ever.