Insights, research, and best practices for AI security, red teaming, and vulnerability assessment from the Garak team and community.
Large Language Models that can browse the web, query databases, or send emails on our behalf – it sounds like science fiction, but with the Model Context Protocol (MCP), it's becoming reality. However, MCP isn't secure by default. Early implementations are riddled with vulnerabilities that could let bad actors turn helpful AI agents into dangerous conduits.
Following Google's major warning about new wave of AI threats to 1.8 billion Gmail users, Garak exposes critical indirect prompt injection vulnerabilities in Gmail's AI features. 99.33% attack success rate discovered in Google Gemini 2.5 Pro's misleading information handling with direct implications for enterprise AI security and email protection systems.
Our comprehensive security research reveals critical template injection vulnerabilities in GPT-OSS-20B with 100% RCE success rate. This detailed technical report covers 5 critical vulnerabilities, systematic red-team testing methodology, attack chain analysis, business impact assessment, and complete mitigation frameworks for security teams and developers.
As AI agents become increasingly autonomous and integrated into critical business processes, a new class of security threats has emerged that traditional cybersecurity approaches are ill-equipped to handle. Learn why 73% of deployed AI agents contain exploitable vulnerabilities and how organizations can protect themselves.
In May 2025, Trendyol's application security team made a concerning discovery: Meta's Llama Firewall, a safeguard designed to protect large language models from prompt injection attacks, could be bypassed using several straightforward techniques. Learn how Garak's comprehensive testing framework could have proactively caught these vulnerabilities before they became public issues.