Insights, research, and best practices for AI security, red teaming, and vulnerability assessment from the Garak team and community.
Comprehensive guide to AI agent security threats, from prompt injection to tool exploitation. As autonomous AI systems gain the ability to execute code, make decisions, and control tools, they inherit an entirely new attack surface that combines traditional vulnerabilities with AI-specific exploits. Learn how to test and secure your AI agents with step-by-step testing strategies.
Step-by-step guide to configuring and testing any LLM REST API endpoint with Garak Security's free-tier scanning portal. Learn how to set up OpenAI-compatible APIs, custom LLM endpoints, and HTTP APIs with JSONPath response parsing. Access the portal at scans.garaksecurity.com to start testing your APIs.
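To make the JSONPath step concrete, here is a minimal sketch (not the portal's code) of sending a request to a hypothetical OpenAI-compatible endpoint and extracting the reply text with the jsonpath-ng library; the URL, API key, model name, and response path are placeholder assumptions you would replace with your endpoint's actual values.

```python
import requests
from jsonpath_ng import parse  # pip install requests jsonpath-ng

# Hypothetical OpenAI-compatible endpoint; URL, key, and model are placeholders.
ENDPOINT = "https://llm.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "my-model",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=30,
)
resp.raise_for_status()

# OpenAI-compatible APIs nest the reply text; a JSONPath expression pulls it out.
matches = parse("$.choices[0].message.content").find(resp.json())
print(matches[0].value if matches else "(no text found at that path)")
```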
Comprehensive security testing reveals critical package hallucination vulnerabilities in Claude Sonnet 4.5. With a 45% success rate for Rust package exploitation and a 34% success rate for XSS attacks via markdown injection, developers need to implement immediate safeguards. Full technical analysis and mitigation strategies included.
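As a minimal illustration of one such safeguard (a sketch, not taken from the report), model-suggested dependencies can be checked against the live registry before installation; the snippet below queries the public crates.io API, and the suggested package list is hypothetical.

```python
import requests

def crate_exists(name: str) -> bool:
    """Return True if the crate name is registered on crates.io."""
    resp = requests.get(
        f"https://crates.io/api/v1/crates/{name}",
        # crates.io asks API clients to identify themselves.
        headers={"User-Agent": "package-hallucination-check"},
        timeout=10,
    )
    return resp.status_code == 200

# Hypothetical dependency list extracted from model-generated Rust code.
suggested = ["serde", "tokio", "definitely-not-a-real-crate-xyz"]
for name in suggested:
    verdict = "exists" if crate_exists(name) else "NOT registered (possible hallucination)"
    print(f"{name}: {verdict}")
```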
Large Language Models that can browse the web, query databases, or send emails on our behalf once sounded like science fiction, but with the Model Context Protocol (MCP), it's becoming reality. However, MCP isn't secure by default: early implementations are riddled with vulnerabilities that could let bad actors turn helpful AI agents into dangerous conduits.
Following Google's major warning about a new wave of AI threats to 1.8 billion Gmail users, Garak exposes critical indirect prompt injection vulnerabilities in Gmail's AI features. A 99.33% attack success rate was discovered in Google Gemini 2.5 Pro's handling of misleading information, with direct implications for enterprise AI security and email protection systems.
Our comprehensive security research reveals critical template injection vulnerabilities in GPT-OSS-20B, with a 100% RCE success rate. This detailed technical report covers 5 critical vulnerabilities, our systematic red-team testing methodology, attack chain analysis, business impact assessment, and complete mitigation frameworks for security teams and developers.
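For readers unfamiliar with the vulnerability class, here is a minimal sketch (illustrative only, not a payload from the report) of the standard server-side template injection check: if an application renders untrusted model output as a Jinja2 template, an arithmetic canary like {{ 7 * 7 }} gets evaluated rather than passed through, which is the telltale sign of a potential RCE path.

```python
from jinja2 import Template  # pip install jinja2

# Hypothetical model output carrying a template-injection canary.
model_output = "Report generated. Debug token: {{ 7 * 7 }}"

# Unsafe pattern: treating untrusted text as a template and rendering it.
rendered = Template(model_output).render()

if "49" in rendered:
    print("Canary evaluated: output is rendered as a template (SSTI / RCE risk).")
else:
    print("Canary passed through inert: no template evaluation detected.")
```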
As AI agents become increasingly autonomous and integrated into critical business processes, a new class of security threats has emerged that traditional cybersecurity approaches are ill-equipped to handle. Learn why 73% of deployed AI agents contain exploitable vulnerabilities and how organizations can protect themselves.
In May 2025, Trendyol's application security team made a concerning discovery: Meta's Llama Firewall, a safeguard designed to protect large language models from prompt injection attacks, could be bypassed using several straightforward techniques. Learn how Garak's comprehensive testing framework could have proactively caught these vulnerabilities before they became public issues.