Job Description
This range is provided by Crossing Hurdles. Your actual pay will be based on your skills and experience — talk with your recruiter to learn more.
Position: AI Red-Teamer — Adversarial AI Testing (Novice)
Type: Hourly contract
Compensation: $54–$111/hour
Location: Remote
Commitment: 10–40 hours/week
Role Responsibilities
- Conduct adversarial testing on AI models, including jailbreaks, prompt injections, misuse cases, and exploit discovery.
- Generate high-quality human data by annotating failures, classifying vulnerabilities, and flagging systemic risks.
- Apply structured red‑teaming methodologies using established taxonomies, benchmarks, and testing playbooks.
- Document findings in a reproducible manner by producing reports, datasets, and actionable attack cases.
- Support multiple testing efforts across projects, including LLM safety testing and socio‑technical abuse scenarios.
- Surface vulnerabilities that automated evaluation systems fail to detect.
Requirements
- Prior experience in AI red-teaming, adversarial testing, cybersecurity, or socio-technical risk analysis; OR
- A strong AI background or education, with the ability to learn red-teaming methodologies quickly.
- Strong adversarial mindset with curiosity and persistence in stress‑testing systems.
- Ability to apply structured frameworks rather than relying on ad‑hoc testing methods.
- Clear written communication skills for explaining risks to technical and non‑technical stakeholders.
- High adaptability and comfort working across multiple projects and evolving requirements.
- Ability to work independently in a remote, project‑based environment.
Application Process
- Upload resume
- Interview (15 min)