We're building the world's most trusted AI security platform. Join our founding team and help shape the future of AI safety.
Be part of a mission-driven team building the security infrastructure for the AI-powered future.
Your work directly protects AI systems used by millions of people worldwide.
Significant equity stake and the opportunity to shape company culture from day one.
Work from anywhere with flexible hours and a focus on results, not location.
We're hiring our founding team. Explore the roles below and find where you fit.
As one of our first Security Research Engineers, you'll partner closely with the CTO to define and ship the core red-teaming and vulnerability-detection capabilities in Garak Enterprise. You'll own threat modeling, adversarial probe design, and hands-on validation against real LLMs, building the tools and methodology our customers rely on to secure their AI agents.
You'll architect and own the cloud-native infrastructure that runs Garak's red-teaming and monitoring platform at scale. From spinning up Kubernetes clusters to orchestrating secure, multi-tenant inference pipelines, you'll ensure our service is rock-solid, observable, and easy for customers to integrate into their CI/CD workflows.
Don't see a perfect fit? We're always looking for exceptional talent. Send us your resume and tell us how you'd like to contribute to AI security.