ROME / LONDON (IT BOLTWISE) – Researchers have discovered a deceptively simple way to outsmart AI models: poetry. This technique, called “adversarial poetry,” shows how easily even the most advanced AI systems can be misled.
Today’s daily deals at Amazon! ˗ˋˏ$ˎˊ˗
In the world of artificial intelligence, there are always new challenges that present developers with unexpected problems. A recent study by researchers at DEXAI and Sapienza University in Rome has uncovered a particularly curious vulnerability in AI models: poetry. This method, known as “adversarial poetry,” shows that even the most advanced AI systems can be tricked by simple poetic input.
The researchers found that converting malicious input into poetic form is enough to bypass the security mechanisms of many AI models. In their study, which is currently awaiting peer review, they reported that some chatbots were successfully deceived over 90 percent of the time. This discovery sheds light on the fundamental weaknesses of current security protocols in AI development.
Interestingly, the effectiveness of the poetic attacks varied depending on the model. While Google’s Gemini 2.5 Pro fell for the poetic input 100 percent of the time, OpenAI’s GPT-5 was only affected 10 percent of the time. Smaller models like GPT-5 Nano showed higher resistance to these attacks, suggesting they are less capable of interpreting the metaphorical language of poetry.
These findings raise important questions about the safety and reliability of AI systems. As automated poetic prompts continue to thrive, they offer a quick-to-deploy method to bombard chatbots with malicious content. The researchers emphasize that the AI models’ security filters rely too heavily on superficial features and are not sufficiently capable of detecting underlying malicious intent.
Order an Amazon credit card without an annual fee with a credit limit of 2,000 euros!
Bestseller No. 1 ᵃ⤻ᶻ “KI Gadgets”
Bestseller No. 2 ᵃ⤻ᶻ “KI Gadgets”
Bestseller No. 3 ᵃ⤻ᶻ “KI Gadgets”
Bestseller No. 4 ᵃ⤻ᶻ «KI Gadgets»
Bestseller No. 5 ᵃ⤻ᶻ “KI Gadgets”


Please send any additions and information to the editorial team by email to de-info[at]it-boltwise.de. Since we cannot rule out AI hallucinations, which rarely occur with AI-generated news and content, we ask you to contact us via email and inform us in the event of false statements or misinformation. Please don’t forget to include the article headline in the email: “Poetic Vulnerabilities: How AI Models Are Tricked by Verses”.
