OpenAI offers US AI Safety Institute early access to next model

August 1, 2024

Photo credit: SeongJoon Cho/Bloomberg

OpenAI is collaborating with the US AI Safety Institute to provide early access to its next generative artificial intelligence model for safety evaluations.

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations,” OpenAI CEO Sam Altman wrote in an Aug. 1 X post.

While Altman offered few details, the move appears similar to a deal with the UK's AI safety body reported in June last year. At the time, British Prime Minister Rishi Sunak said that OpenAI, along with research lab Google DeepMind and AI startup Anthropic, had committed to giving priority access to models for research and safety purposes.

Controversy surrounds OpenAI, with critics arguing that the company has deprioritized AI safety in pursuit of more powerful generative AI technologies. In May, it disbanded its unit focused on the existential dangers of AI. Differences over the company's direction, including safety taking a "backseat," led to the subsequent resignations of OpenAI co-founders Ilya Sutskever and Jan Leike. Both executives have since shifted their focus to new ventures, with Sutskever starting Safe Superintelligence and Leike leading Anthropic's safety research.

Amid the criticisms, the company said it would eliminate restrictive non-disparagement clauses, create a safety commission, and dedicate 20% of its computing power to safety research.

“We want current and former employees to be able to raise concerns and feel comfortable doing so,” Altman wrote, explaining that the move is part of the company’s safety plan.

He also confirmed that non-disparagement terms “that gave OpenAI the right to cancel vested equity” were voided for current and former employees in May.


Earlier this week, OpenAI voiced its support for the Future of Innovation Act, a proposed Senate bill that would formally establish the AI Safety Institute as an executive body responsible for setting standards and guidelines for AI models.

The US AI Safety Institute, operating within the Commerce Department's National Institute of Standards and Technology, collaborates with a consortium of AI companies, including Anthropic and tech giants like Google, Microsoft, Meta, Apple, Amazon, and Nvidia. This industry group is responsible for implementing actions outlined in President Joe Biden's October AI executive order, such as developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security measures, and watermarking synthetic content.

