UK's AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs tested were "highly vulnerable to basic jailbreaks." Some unjailbroken models even generated "harmful outputs" without researchers attempting to produce them. Most publicly available LLMs have certain safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted "relatively simple attacks" though, all responded to between 98 and 100 percent of harmful questions. UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It's meant to "carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely." The AISI's report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.This article originally appeared on Engadget at https://www.engadget.com/uks-ai-safety-institute-easily-jailbreaks-major-llms-133903699.html?src=rss

May 20, 2024 - 19:30
 0
UK's AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs tested were "highly vulnerable to basic jailbreaks." Some unjailbroken models even generated "harmful outputs" without researchers attempting to produce them.

Most publicly available LLMs have certain safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted "relatively simple attacks" though, all responded to between 98 and 100 percent of harmful questions.

UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It's meant to "carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely."

The AISI's report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.This article originally appeared on Engadget at https://www.engadget.com/uks-ai-safety-institute-easily-jailbreaks-major-llms-133903699.html?src=rss

What's Your Reaction?

like

dislike

love

funny

angry

sad

wow

Viral News Code whisperer by profession, narrative alchemist by passion. With 6 years of tech expertise under my belt, I bring a unique blend of logic and imagination to ViralNews360. Expect everything from tech explainers that melt your brain (but not your circuits) to heartwarming tales that tug at your heartstrings. Come on in, the virtual coffee's always brewing!