Securing GenAI: A Comprehensive Report on

Prompt Attack Taxonomy, Risks, and Solutions

A New Frontier of GenAI Threats


Generative AI is revolutionizing productivity, but it is introducing critical security vulnerabilities that can compromise your sensitive data and information. Get a comprehensive understanding of prompt-based threats and develop proactive defense strategies.


Prompt-based attacks can have a success rate as high as 88%. Three vectors subject to attack are:


  • Guardrail bypass attacks exploit model flaws by overwhelming them, breaking security controls.
  • Information leakage attacks trick systems into revealing private data that should be kept secret.
  • Goal hijacking attacks craft inputs to make LLMs deviate from intended goals, breaking rules.


Future-proof your GenAI security strategy. Discover best practices and strategies for strengthening your security defenses against emerging adversarial prompt attacks


Please complete the form to access/download the content


    I want to receive related communications from Palo Alto Networks and acknowledge their Privacy Statement.