Engineering
Security Researcher
Contract · €50k – €100k · Remote or in-person (San Francisco, Berlin)
We tell companies their secrets never touch the model. That every action is permissioned. That the sandbox actually holds. Your job is to prove us wrong — and then help us fix it. You'll attack ocr from every angle: prompt injection, sandbox escapes, permission bypasses, secret exfiltration. If an agent can do something it shouldn't, you find it first.
We're building a team of geniuses. Not a team of "smart people" — actual geniuses who ship. You've built things before — maybe a company, maybe projects that people actually use. You want to work unreasonably hard on something that matters. If you fall short of that standard, this isn't the place for you.
What you'll do
- Continuously probe the agent runtime for vulnerabilities — sandbox isolation, permission enforcement, secret handling, integration scoping.
- Develop attack scenarios that model real-world threats: malicious prompts, tool misuse, multi-step exploits across agent sessions.
- Write clear, reproducible findings and work with engineering to close gaps.
- Help define security architecture decisions as the platform evolves — threat models, trust boundaries, defense-in-depth strategy.
What we're looking for
- Deep experience in application security, penetration testing, or red teaming. You've found real bugs in real systems.
- You understand LLM-specific attack surfaces — prompt injection, jailbreaks, indirect prompt injection, tool-use exploits. This is not theoretical for you.
- Strong systems background. You're comfortable reading Go, understanding container isolation, and reasoning about permission models at the code level.
- You can communicate findings clearly to engineers who will fix them. No 40-page reports that sit in a drawer.
- Self-directed. Contract means you set your own pace, but you deliver consistently and proactively.
- Read our values before applying. We default to open — including about what we get wrong.
Why OpenCompany
- Competitive contract rate.
- You're securing the runtime that companies trust to run AI agents in production. The stakes are real.
- Direct access to the entire codebase and engineering team. No bureaucracy between finding a bug and shipping a fix.
- Early-stage company where your work directly shapes the security posture of the product.
- Flexible engagement — remote-first, set your own hours, deliver results.