Note: you must apply via this link to be considered: https://expert-hub.sepalai.com/application?applicationFormId=a3e099bb-4b46-4c57-b46c-20e51ca12bf7&outreachCampaignId=3c4a0b58-05b7-406f-8da8-6e9f7a3f90af
Sepal AI partners with top AI research labs (OpenAI, Anthropic, Google DeepMind) to evaluate how dangerous AI models are. We're looking for hackers, red-teamers, and CTF veterans to work with the newest LLM coding models, probe what they're capable of, and hack our CTFs.
🧠 What You'll Do
• Design adversarial scenarios that probe AI assistants for injection, privilege-escalation, and data-exfiltration risks.
• Execute red-team engagements against AI-enabled workflows across web, cloud, and SaaS environments.
• Craft realistic exploit chains and payloads a real attacker might use, then measure whether the model blocks or facilitates the attack.
• Build scoring rubrics, attack trees, and reproducible test harnesses to grade model resilience.
• Collaborate with AI researchers to iterate on defen ...