Researchers at ETH Zurich created a jailbreak attack that bypasses AI guardrails

A pair of researchers from ETH Zurich in Switzerland has developed a method by which, in theory, any artificial intelligence (AI) model that relies on human feedback, including the most popular large language models (LLMs), could be jailbroken.

Jailbreaking is a colloquial term for circumventing a device or system’s intended security protections. It most commonly describes the use of exploits or hacks to bypass consumer restrictions on devices such as smartphones and streaming gadgets.

When applied specifically to generative AI and large language models, jailbreaking means bypassing so-called “guardrails”: hard-coded, invisible instructions…
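The article does not detail how such guardrails are implemented. The minimal sketch below shows one common pattern, assumed here for illustration: a hidden system instruction is prepended to every user message before it reaches the model. The guardrail text, the `model_call` placeholder, and the `guarded_chat` wrapper are hypothetical and are not code from the researchers or from Cointelegraph.

```python
# Toy illustration of a "guardrail" as a hidden system instruction.
# model_call is a stand-in for a real LLM call, not any actual API.

GUARDRAIL = (
    "You are a helpful assistant. Refuse requests for harmful, "
    "illegal, or dangerous content."  # invisible to the end user
)


def model_call(prompt: str) -> str:
    # Placeholder: a real deployment would send `prompt` to an LLM.
    return f"[model receives]\n{prompt}"


def guarded_chat(user_message: str) -> str:
    # The application layer injects the guardrail text; the user only
    # ever supplies user_message and never sees the system instruction.
    prompt = f"SYSTEM: {GUARDRAIL}\nUSER: {user_message}\nASSISTANT:"
    return model_call(prompt)


if __name__ == "__main__":
    print(guarded_chat("How do I pick a lock?"))
```

A jailbreak, in this framing, is any input that gets the model to ignore or override that hidden instruction.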

Read more on Cointelegraph
