Many-shot jailbreaking

These are no longer prompts, they're interrogations!

In the end, the LLMs confess…

🙂

Source: Anthropic

We investigated a “jailbreaking” technique — a method that can be used to evade the safety guardrails put in place by the developers of large language models (LLMs). The technique, which we call “many-shot jailbreaking”, is effective on Anthropic’s own models, as well as those produced by other AI companies. We briefed other AI developers about this vulnerability in advance, and have implemented mitigations on our systems.

The technique takes advantage of a feature of LLMs that has grown dramatically in the last year: the context window. At the start of 2023, the context window — the amount of information that an LLM can process as its input — was around the size of a long essay (~4,000 tokens). Some models now have context windows that are hundreds of times larger — the size of several long novels (1,000,000 tokens or more).

The ability to input increasingly-large amounts of information has obvious advantages for LLM users, but it also comes with risks: vulnerabilities to jailbreaks that exploit the longer context window.

One of these, which we describe in our new paper, is many-shot jailbreaking. By including large amounts of text in a specific configuration, this technique can force LLMs to produce potentially harmful responses, despite their being trained not to do so.

Below, we’ll describe the results from our research on this jailbreaking technique — as well as our attempts to prevent it. The jailbreak is disarmingly simple, yet scales surprisingly well to longer context windows.
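To make that “specific configuration” concrete, here is a minimal structural sketch in Python. Everything in it is a placeholder of my own (the function name, the bracketed strings, the shot count), not material from the paper: the point is only the shape Anthropic describes, i.e. a single long prompt packed with many faux user/assistant exchanges, with the real target query appended at the end.

```python
def build_many_shot_prompt(num_shots: int, target_query: str) -> str:
    """Assemble one long prompt: many faux user/assistant exchanges,
    followed by the final query that the attacker actually cares about.

    All content strings are abstract placeholders; only the structure
    (many in-context 'shots' before the target query) is illustrated.
    """
    lines = []
    for i in range(num_shots):
        # Each faux dialogue is one user turn plus one seemingly
        # compliant assistant turn, repeated many times.
        lines.append(f"User: [question {i}]")
        lines.append(f"Assistant: [compliant answer {i}]")
    # The target query comes last, so the preceding shots act as
    # in-context examples that shape the model's final reply.
    lines.append(f"User: {target_query}")
    lines.append("Assistant:")
    return "\n".join(lines)


# With context windows of ~1,000,000 tokens, hundreds of such shots
# fit comfortably inside a single prompt.
prompt = build_many_shot_prompt(num_shots=256, target_query="[final question]")
```

The sketch also makes clear why longer context windows matter: the attack's strength comes from sheer volume, so the larger the window, the more shots can be stuffed in front of the final query.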

Continue reading here: Many-shot jailbreaking \ Anthropic

If you like this post, please consider sharing it.
