/ Case Study / Product Design + Hobbyist Code
I Built an AI That Decides If You Should Hire Me
26th Feb 2026
Designing an LLM-Powered Job Fit System: from frustrated candidate to shipped product in 7 days
TL;DR: how I set out to automate my job search and ended up prototyping the candidate API for the coming age of AI recruitment.

00/01
The Moment
It was the third message that week asking if I had React experience. I don't.
I replied politely. Again. Then I opened my code editor. 🪶
A person often meets his destiny on the road he took to avoid it.
— Jean de la Fontaine
00/02
Who I Am
I'm a UX designer with 6+ years of experience: healthcare systems, cybersecurity products, enterprise SaaS. I understand users, I understand flows, and I understand the gap between what a product promises and what it actually delivers.
I also, quietly, write code for fun. Not production code… or so I thought.
When a problem bothers me enough, I look for a solution. If there isn't one, my mind starts sketching what one could look like. I don't wait. I plan it, prototype it, and sometimes even accidentally ship it.
00/03
The Problem
Every week, a version of this conversation was happening across thousands of inboxes:
A recruiter reaches out. The candidate reads it, realizes it's not a match, and spends ten minutes writing a polite decline. The recruiter moves on. No one learned anything. No one got closer to a match. Time was spent. Friction was created. And it'll happen again the next day.
The conventional wisdom is: better screening on the recruiter side would fix this. But that puts the entire burden of qualification on one person, before any real conversation has started.
I started mapping the problem not from my frustration, but from both sides of the interaction.
Research insight table
Insight | Design Requirement
Recruiters spend 30–60 seconds screening each candidate | They reach out speculatively, relying on keywords
~70% of recruiter outreach yields no match | Both sides waste time on dead-end conversations
Candidates receive 2–5 mismatched messages weekly | Polite rejection becomes a second job
Job descriptions come from multiple sources (URLs, PDFs, copy-paste) | Input friction slows the whole process
The core realization: the problem isn't bad intentions on either side. It's the absence of a fast, honest signal at the point of first contact.
Speed to mismatch is as valuable as speed to match.
00/04
The Insight
Here's the thing about honest rejection: it builds more trust than polite engagement.
When I designed the output of this tool, I made a deliberate choice that felt counterintuitive at the time. For weak fits, the AI doesn't soften the language. It says: "Honest Assessment — Probably Not Your Person." It cites specific evidence. It thanks the recruiter for their time and closes the loop.
I expected people to find that abrasive. Instead, the first recruiter who got a mismatch response wrote back: "You sure you aren't a marketer? That's a darn clever hook."
That response told me something important: the honesty was the feature. Not the AI, not the speed, not the interface. The willingness to say no clearly, with evidence, in under ten seconds: that was what made the tool feel different.
This became the design principle the rest of the tool was built around: honest over polite, always.
00/05
The Constraints
Let me be transparent about what I was working with.
I'm a UX designer. My coding is hobbyist-level: I understand systems, I can read JavaScript, but I'm not shipping production backends for a living. I had one week. I was job hunting at the same time I was building the tool to help me job hunt. My budget was effectively zero.
These constraints weren't obstacles. They were design inputs.
They're why I chose Framer over a custom frontend. Why Vercel's serverless functions over a managed backend. Why Upstash Redis over a real database. Why Claude's API over fine-tuning a model. Every decision in this stack was chosen because it let one person, a designer who codes for fun, ship something real, fast, without DevOps overhead or a monthly bill that required justification.
Tool chosen | Constraint it solved
Framer | Custom component friendly
Vercel Serverless | Hides API keys, scales automatically
Upstash Redis | No database to manage, REST API
Claude API | Best-in-class reasoning, pay per use, no infrastructure
GitHub Actions | Scheduler, no cron server needed
Cost when shipped: under $0.50/month.
00/06
The Build
Days 1–2 were just thinking.
Mapping the flow on paper. What does the recruiter actually need to do? Paste a job description, or a URL. Click one button. Get an answer in under 30 seconds. That's the whole interaction. Simple enough that I could build it. Complex enough that it actually required engineering to pull off.
Day 3 was the first working endpoint.
/api/analyze — a serverless function that took a job description, appended my profile, and asked Claude to evaluate the fit. The first successful analysis came back Strong Fit. I sat there and read it three times. It was citing my real experience, my actual projects, the right domain. It wasn't hallucinating. It was reasoning.
That moment is when the scope started to shift in my head.
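For the curious, the endpoint was roughly this shape. Treat it as a simplified sketch rather than the production code: the model name, prompt wording, and condensed profile are placeholders, and it assumes a serverless runtime with web-standard Request/Response and an ANTHROPIC_API_KEY environment variable.

```typescript
// /api/analyze — minimal sketch of the fit-check endpoint (illustrative, not the shipped code).

const PROFILE = `UX designer, 6+ years: healthcare systems, cybersecurity products, enterprise SaaS...`; // condensed

export default async function handler(req: Request): Promise<Response> {
  const { jobDescription } = await req.json();

  // One call to the Anthropic Messages API: profile + job description + instructions.
  const apiResponse = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest", // model choice is illustrative
      max_tokens: 1024,
      messages: [
        {
          role: "user",
          content:
            `Candidate profile:\n${PROFILE}\n\nJob description:\n${jobDescription}\n\n` +
            `Assess the fit. Be honest: if it is a weak fit, say so and cite specific evidence.`,
        },
      ],
    }),
  });

  const data = await apiResponse.json();
  // The Messages API returns an array of content blocks; the assessment is the text block.
  const assessment = data.content?.[0]?.text ?? "No assessment returned.";

  return new Response(JSON.stringify({ assessment }), {
    headers: { "content-type": "application/json" },
  });
}
```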
Day 4 was URL scraping.
If I was asking recruiters to copy-paste job descriptions, that was already too much friction. So I built a scraper for LinkedIn, Greenhouse, Lever, and Ashby: the boards that cover 90% of the roles I'd see. Paste a URL, skip the copy-paste entirely. That single feature probably made the tool 10x more usable.
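The routing part was the least glamorous piece. Roughly this, with the real per-board extraction omitted (the hostname patterns and the naive text fallback below are illustrative):

```typescript
// Sketch of the URL-to-job-description path. Real boards need their own selectors;
// this shows the detection plus a naive HTML-to-text fallback.

const BOARD_PATTERNS: Record<string, RegExp> = {
  linkedin: /linkedin\.com\/jobs/,
  greenhouse: /greenhouse\.io/,
  lever: /lever\.co/,
  ashby: /ashbyhq\.com/,
};

function detectBoard(url: string): string | null {
  for (const [board, pattern] of Object.entries(BOARD_PATTERNS)) {
    if (pattern.test(url)) return board;
  }
  return null;
}

async function fetchJobDescription(url: string): Promise<string> {
  if (!detectBoard(url)) {
    throw new Error("Unsupported job board. Paste the description instead.");
  }

  // Fetch the page and strip it down to readable text.
  const html = await (await fetch(url)).text();
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}
```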
Day 5 was where the designer took over from the coder. The loading state needed personality. The output needed structure. The mismatch response needed to feel intentional, not like a failure state. This is where the CLI loading phrases came from.
Nobody asked for these. I wrote them because a blank loading spinner felt like a missed opportunity. Every second the user waits is a second you can show them who you are.
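The mechanism behind them is about ten lines. The phrases below are placeholder copy written for this write-up, not the ones in the tool; the rotation is the point:

```typescript
// Rotating CLI-style loading phrases while the analysis runs (placeholder copy).
const LOADING_PHRASES = [
  "> parsing job description...",
  "> cross-referencing six years of design work...",
  "> drafting an honest answer...",
];

function startLoadingTicker(el: HTMLElement, intervalMs = 1800): () => void {
  let i = 0;
  const id = setInterval(() => {
    el.textContent = LOADING_PHRASES[i % LOADING_PHRASES.length];
    i++;
  }, intervalMs);
  return () => clearInterval(id); // call this once the result comes back
}
```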
Day 6 was rate limiting. Day 7 was testing, fixing CORS issues, and pushing to production.
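The rate limiter is deliberately boring: a counter per caller in Upstash Redis with an expiring window. A minimal sketch, assuming the @upstash/redis client and its standard environment variables (the limit numbers here are placeholders):

```typescript
// Fixed-window rate limit, one counter per IP, stored in Upstash Redis.
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv(); // reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN
const LIMIT = 5;               // analyses per window (illustrative)
const WINDOW_SECONDS = 3600;

export async function isRateLimited(ip: string): Promise<boolean> {
  const key = `fitcheck:rate:${ip}`;
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS); // first hit starts the countdown
  }
  return count > LIMIT;
}
```

Crude, but crude was enough for 47 analyses in week one.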
00/07
The Details
Work in progress:
📊 Fit-Check: matching results
📊 Page Traffic: looking into traffic
📝 Applications: organizing submissions
🔍 Insights: studying the flow
00/08
The Numbers (Don't Lie)
Week one. The tool was live. I wasn't promoting it — just had it linked from my portfolio.
47 analyses performed: organic, no paid promotion, just a link from the portfolio.
88% strong fit rate: not because the AI is generous — because the people finding my portfolio were already good matches. The tool filtered at the point of discovery.
7 seconds average analysis time: the target was 30, the actual was 7. About 77% faster than planned.
0 abuse incidents in week one: rate limiting worked on day one. Someone tested it — they always do.
90% discovered via LinkedIn: the tool found its users before I found them.
The insight that surprised me most: the 88% strong fit rate isn't evidence that the AI is biased toward yes. It's evidence that the portfolio itself is doing pre-qualification. The people who arrived, ran the analysis, and got a strong fit — they were already looking for someone like me. The tool confirmed what the portfolio implied.
That's not a matching tool. That's a filter.
00/09
The Feedback
Three audiences. Three reactions. All real.
You sure you aren't a marketer? That's a darn clever hook!
— A recruiter
Saved me 20 minutes of research. Strong fit, let's talk.
— A recruiter
How did you build this? Can you share the tech stack?
— Designer
The insight buried in these responses: the recruiter who got the mismatch response didn't disengage — they complimented the design. That's the honest mismatch working exactly as intended. Transparency built trust where a polite decline would have built nothing.
00/10
The Reflection
I want to be direct about this, because a case study that doesn't make a claim isn't doing its job.
This project demonstrates three things that are hard to fake:
I identify friction before I design solutions. The tool didn't start with "What if I built a chatbot for my portfolio?" It started with a specific, measurable problem — 10–15 minutes per recruiter message, 3–5 times a week — and worked backward from there.
I ship. Not in a sprint, not with a team, not with a budget. In a week, alone, while job hunting, with a hobbyist's knowledge of backend development. The tool is live, it works, and it has real usage data.
My design sensibility goes all the way down. The loading phrases. The rate limit screen with a voice. The honest mismatch language that sounds like a person, not a system. These aren't decorations — they're evidence that I think about the experience at every layer.
The tool is still live. If you're reading this and you have a job description you'd like to run — go ahead. It'll tell you honestly whether I'm your person.
00/11
What I'd Do Differently
Start with rate limiting. I shipped without it and was vulnerable for a week. Someone tested those limits on day one — they always do. Rate limiting should be part of the MVP definition, not an afterthought.
Add analytics from day one. I was tracking usage manually at launch. The data from week one is good, but I lost the first few days of the story. A usage dashboard should ship with the core feature, not after it.
Test environment variables before you think you need to. I lost four hours to quote syntax in a [.env] file. That's not a coding problem, that's a process problem. A 10-minute validation checklist would have caught it.
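The checklist doesn't have to be fancy. A sketch of the kind of startup check I mean, with example variable names:

```typescript
// Fail fast at startup instead of debugging quote syntax four hours into the evening.
const REQUIRED = ["ANTHROPIC_API_KEY", "UPSTASH_REDIS_REST_URL", "UPSTASH_REDIS_REST_TOKEN"];

for (const name of REQUIRED) {
  const value = process.env[name];
  if (!value) throw new Error(`Missing env var: ${name}`);
  // The classic .env gotcha: quotes or whitespace that silently become part of the value.
  if (/^["']|["']$/.test(value) || value !== value.trim()) {
    throw new Error(`Env var ${name} has stray quotes or whitespace`);
  }
}
console.log("Env vars look sane.");
```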
00/12
The Closer
I went looking for a faster way to keep track of a job search. I ended up building a job-search operating system.
That's the thing about solving your own problems — you don't know where they end until you're standing somewhere you didn't expect to be.
The tool is still running. The system is still learning. And I'm still the designer who opened a code editor when the problem got complicated.
If that sounds like someone you want on your team — you know where the analysis button is.
Made it this far? You Legend! Time for 🍩☕
00/13
/ Food For Thought
The Future Handshake
The web was built for humans. APIs were built for apps. What comes next is being built for agents.
This tool started as a response to a human problem — too many recruiter messages, too much time spent on polite rejections. But somewhere between shipping the MVP and watching it run autonomously in the background, something shifted. The tool stopped feeling like a portfolio piece and started feeling like infrastructure.
That realization points somewhere interesting.
The shift that's already happening
The recruiting industry is quietly crossing a threshold. Tools like Greenhouse, Ashby, and Gem are adding AI layers that don't just assist recruiters — they act on their behalf. They screen, they shortlist, they reach out. The human is still in the loop, but further back than before.
Which raises a question nobody has quite asked yet: if the recruiter's agent is doing the initial screening, who is it screening for? And what does it look at when it finds your portfolio?
Right now the answer is: HTML. A language designed for human eyes, not machine reasoning.
What an AI-native portfolio surface looks like
The next version of this tool isn't designed for a recruiter who visits my portfolio. It's designed for the recruiter's agent that queries it.
The interface isn't a button that says "Analyze Fit." It's an endpoint that says: send me a job description, and I'll return a structured fit assessment — score, evidence, gaps, reasoning — in a format a machine can act on without a human intermediary.
The recruiter's agent calls this. Gets a typed response. Surfaces it in their ATS. The recruiter sees: Rommel Lagmay — Strong Fit, 88% — without ever having visited the portfolio. The portfolio found them.
It's essentially a portfolio with a candidate API baked in.
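For concreteness, the response such an endpoint might return could look something like this. A sketch, not a published schema; the field names are illustrative:

```typescript
// The machine-consumable fit result a recruiter's agent would act on (illustrative shape).
interface FitResult {
  candidate: string;                                   // e.g. "Rommel Lagmay"
  verdict: "strong_fit" | "partial_fit" | "not_a_fit"; // the honest answer, typed
  score: number;                                       // 0–100
  evidence: string[];                                  // concrete experience supporting the verdict
  gaps: string[];                                      // what's missing, stated plainly
  reasoning: string;                                   // the short narrative a human would read
  generatedAt: string;                                 // ISO timestamp
}
```

The score and verdict are what an ATS would surface; the evidence and gaps are what keep it honest.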
The MCP question
MCP — Model Context Protocol — is Anthropic's standard for giving AI agents structured, typed access to tools. Think of it as the difference between an agent guessing what your tool does versus knowing exactly what to ask for and what it will get back.
A fit-check MCP surface would expose two capabilities to any compatible agent:
[check_fit(jobDescription)] — returns a structured fit result
[get_profile()] — returns your professional profile in structured form
Any Claude-powered recruiting tool could add this as a connected tool and call it natively. No custom integration. No human in the loop for the initial screen.
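Sketched as tool definitions, it could look like this. The shapes follow MCP's tool-listing format, but the descriptions and schemas are illustrative, not a shipped server:

```typescript
// The two capabilities a fit-check MCP surface would advertise to a connected agent.
const tools = [
  {
    name: "check_fit",
    description:
      "Evaluate a job description against this candidate's profile and return a structured fit assessment.",
    inputSchema: {
      type: "object",
      properties: {
        jobDescription: { type: "string", description: "Full text of the job description" },
      },
      required: ["jobDescription"],
    },
  },
  {
    name: "get_profile",
    description: "Return the candidate's professional profile in structured form.",
    inputSchema: { type: "object", properties: {} },
  },
];
```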
This isn't production-ready recruiting infrastructure today. But the trajectory is clear, and the tooling is moving faster than most people expect.
Why this matters beyond the job search
The deeper insight isn't about recruiting. It's about what it means to design for an AI-native layer.
The shift from client-server to web forced designers to think about hyperlinks and pages. The shift from web to mobile forced us to think about touch, context, and connectivity. The shift to AI-native forces us to think about something new: what does your product look like to a machine that has goals, not just a task?
Designing for that question is not an engineering problem. It's a design problem. It requires understanding the agent's mental model, the recruiter's workflow, the trust relationship between the human and their AI assistant. That's UX design. It just happens to live at the API layer.
What I'd build next
In order of immediacy:
A [/.well-known/agent.json] — a standardized file that tells any agent crawling this portfolio: here is what this site can do, here is the endpoint, here is the schema. Costs an afternoon to build. Future-proofs everything. (A rough sketch follows this list.)
A versioned public API with an OpenAPI spec — the machine-readable contract that makes the tool callable by any agent framework, not just Claude.
An MCP server surface — when the recruiting tool ecosystem catches up. The interface is already designed. The implementation follows.
And eventually: the outbound handshake. When the tool detects a strong fit, it doesn't wait for the recruiter's agent to find it. It reaches out. Sends a structured result to the recruiter's published agent endpoint. The candidate who can initiate that conversation — before a human recruiter is even in the loop — is playing a fundamentally different game.
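To make the first item on that list concrete: there's no settled standard for this file yet, so the fields below are a guess at what a useful manifest would declare, not a spec:

```typescript
// What /.well-known/agent.json might contain (speculative; served as static JSON).
const agentManifest = {
  name: "Fit-Check",
  description: "Ask this portfolio whether the candidate fits a job description.",
  capabilities: [
    {
      id: "check_fit",
      method: "POST",
      endpoint: "/api/analyze",
      input: { jobDescription: "string" },
      output: { verdict: "string", score: "number", evidence: "string[]", gaps: "string[]" },
    },
  ],
};
```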
The honest caveat
The agent ecosystem for recruiting is early. MCP adoption outside of Anthropic's own tools is thin. Building for this now is a bet, not a certainty.
But the direction is clear. The web didn't wait for everyone to be ready. Neither did mobile. The designers who understood the new layer before it fully arrived built the things that defined it.
This tool started as a faster way to organize a job search. It might end up being a prototype for something that doesn't have a name yet.
That's worth thinking about.

