OpenAI vows to provide the US government early access to its next AI model
OpenAI will give the US AI Safety Institute early access to its next model as part of its safety efforts, Sam Altman revealed in a tweet. The company has apparently been working with the consortium "to push forward the science of AI evaluations." The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, though Vice President Kamala Harris announced it back in 2023 at the UK AI Safety Summit. According to NIST's description, the consortium is meant "to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world."
The company, along with DeepMind, made a similar pledge last year to share AI models with the UK government. As TechCrunch notes, there have been growing concerns that OpenAI is deprioritizing safety as it seeks to develop more powerful AI models. There was speculation that the board's decision to oust Sam Altman from the company (he was very quickly reinstated) was driven by safety and security concerns. However, the company told staff in an internal memo at the time that it was because of "a breakdown in communication."
In May this year, OpenAI admitted that it had disbanded the Superalignment team, which it created to ensure that humanity remains safe as the company advances its work on generative artificial intelligence. Before that, OpenAI co-founder and Chief Scientist Ilya Sutskever, one of the team's leaders, left the company. Jan Leike, the team's other leader, quit as well. He said in a series of tweets that he had disagreed with OpenAI's leadership about the company's core priorities for quite some time and that "safety culture and processes have taken a backseat to shiny products." OpenAI created a new safety group at the end of May, but it is led by board members who include Altman, prompting concerns about self-policing.
"a few quick updates about safety at openai: as we said last july, we're committed to allocating at least 20% of the computing resources to safety efforts across the entire company. our team has been working with the US AI Safety Institute on an agreement where we would provide…"
— Sam Altman (@sama) August 1, 2024
This article originally appeared on Engadget at https://www.engadget.com/openai-vows-to-provide-the-us-government-early-access-to-its-next-ai-model-110017697.html?src=rss