Measure AI-first fluency.

Your potential hires are given role-realistic tasks and guide AI agents to complete them; we analyze each session for AI-native thinking and agentic best practices.

Now in private trial for AI-native backend roles.

Includes 5 free evals.

Target pricing: $20+ per candidate, all-inclusive (agent costs included).

This is not AI-assist. This is an AI-first benchmark.

How it works

Candidates receive role-realistic tasks in a controlled sandbox. They work exclusively through an AI agent interface: no direct code editing or terminal access.

What you get

Identify candidates who can take vague requirements and systematically break them down into actionable agent prompts. The assessment reveals their ability to think in terms of delegation, not keystrokes.

Agent-only interface

We plan to support multiple AI providers, chosen to mirror the tools and workflows teams already use. Candidates interact with agents as they would in real work.

Anchors and benchmarks

We measure autonomy time, context management, thrash, and swarming. These aren't vanity metrics; they're indicators of how effectively someone leverages AI to ship.

Prototype sample. Scales and labels may change. Candidates do not see pass or fail.

GradeAgentic
gradeagentic.com/report/abc123
Role: Senior Product Engineer
ID: #A7K2M9
Completed: Signed Upload URL Webhook Verification

Agent Collaboration Score

76%

Measures how effectively the candidate collaborates with AI agents: provides clear direction, allows autonomous execution, maintains clean workflows, and verifies outputs.

For illustration only.

Agent Timeline (25.4 min total)

Agent 1: ⌨️ 🔍 📈 ⌨️ 📈 ⌨️ 😤 ⌨️ ⌨️ ⚙️ 📈
Agent 2: ⌨️ 🔍 ⌨️ 📈 ⌨️ ⌨️ 😤 ⌨️ 😤 📈

Legend: ⌨️ Instruction · 🔍 Research · Verification · ⚙️ Execution · 😤 Thrash · 📈 Improvement

Collaboration Metrics

Direction: 72%
Autonomy: 68%
Efficiency: 84%
Verification: 78%
Concurrency: 91%

Excellent multi-agent orchestration: worked with both agents in parallel efficiently.
Strong verification discipline throughout most of the session.

Join the AI-native hiring movement

Get early access, new assessments, insights on agentic proficiency, and invites to the future of engineering hiring.

Early access includes 5 free assessments

FAQ

Who is building this prototype?

Daniel Toye

Product engineer, architect, and tech lead for 10+ years. I've been using AI-assisted development since day one (2+ years) and building AI-first since it became possible. I watched teams struggle to identify who truly gets this paradigm shift. Now I'm building the assessment I desperately needed when hiring.

LinkedIn Profile

Can candidates type or paste code?

No. The only interface is an AI agent. All interactions are recorded; there is no editor, shell, or manual access.

Is this an AI-assist test?

No. Candidates only orchestrate agents via prompts. We measure planning, delegation, and verification, not typing.

What's the pricing model?

Target pricing is $20+ per candidate, all-inclusive: agent costs and the controlled sandbox environment are covered. Your first 5 assessments are free.

When will this launch?

We're currently at the prototype stage. Apply for the private beta to get early access.

Get early access