Zen Talent Platform V1
1) Core objects (data model)
Candidate / CV
Raw CV file storage + parsed structured profile
Source (inbound job, old pool, manual upload)
Job Description (JD)
Text + structured requirements (skills, years, location, etc.)
Match (JD ↔ Candidate)
Deterministic match score + score breakdown
Elo Battle
Battle session per JD, candidates involved, outcomes, Elo updates
Shortlist
Per JD shortlist with status tags and notes
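The core objects above could be sketched as a minimal schema. A sketch only; field names like `raw_cv_url` and the `source` values are assumptions for illustration, not the final schema:

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    # Parsed structured profile; the raw CV file lives in object storage
    id: int
    email: str
    skills: list[str]
    source: str            # "inbound" | "old_pool" | "manual_upload"
    raw_cv_url: str = ""

@dataclass
class JobDescription:
    id: int
    text: str              # raw JD text
    required_skills: list[str] = field(default_factory=list)
    min_years: int = 0
    location: str = ""

@dataclass
class Match:
    # Deterministic match score + per-dimension breakdown
    jd_id: int
    candidate_id: int
    score: float                            # 0-100 total
    breakdown: dict = field(default_factory=dict)

@dataclass
class EloBattle:
    jd_id: int
    candidate_a: int
    candidate_b: int
    winner: int            # candidate id of the battle winner

@dataclass
class ShortlistEntry:
    jd_id: int
    candidate_id: int
    status: str = "New"    # New / Contacted / Interview / Offer / Hired / Rejected
    notes: str = ""
```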
2) Candidates / CV database (4000+)
Bulk upload/import CVs into a database
Uploaded CVs will be backed up in Google Drive / AWS S3.
Fast list view with filters (Education, Location, Tenure, Highlights, Skills)
Candidate profile page: overview + write notes + raw CV preview/download
Candidate fields (contact info, links, skills, etc.) should be editable
Deduplication basics (same email/phone/LinkedIn URL)
The latest uploaded CV is treated as up-to-date and overrides the existing one
The user is informed and must confirm before an override is persisted
Tag normalization (skills taxonomy, company/university lists)
Missing-data detection (empty location, no skills, etc.)
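The dedup basics could be implemented with normalized identity keys. A sketch under assumed normalization rules (lowercase email, digits-only phone, canonical `/in/<handle>` LinkedIn path):

```python
import re

def dedup_keys(candidate: dict) -> set[str]:
    """Build normalized identity keys; two candidates that share
    any key are treated as duplicates."""
    keys = set()
    if email := candidate.get("email"):
        keys.add("email:" + email.strip().lower())
    if phone := candidate.get("phone"):
        digits = re.sub(r"\D", "", phone)  # keep digits only
        if digits:
            keys.add("phone:" + digits)
    if url := candidate.get("linkedin"):
        # Normalize to the /in/<handle> path, dropping scheme and query
        m = re.search(r"linkedin\.com/(in/[^/?#]+)", url.lower())
        if m:
            keys.add("li:" + m.group(1))
    return keys

def is_duplicate(a: dict, b: dict) -> bool:
    return bool(dedup_keys(a) & dedup_keys(b))
```

In practice the keys would be indexed in the database so a new upload can be checked against all 4000+ candidates in one lookup rather than pairwise.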
3) CV parsing
CV parsing is already done and the structure is ready to be used as a separate service.
Parse CV on upload (queue/async)
Parsing status per CV (pending/success/failed) + retry
Parser config + mapping into your schema (education, roles, dates, skills)
English-language support only
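The pending/success/failed lifecycle with retry could look like the sketch below; `parse_cv` stands in for the existing parsing service, and the in-memory `status_store` is a placeholder for the real status column:

```python
from enum import Enum

class ParseStatus(str, Enum):
    PENDING = "pending"
    SUCCESS = "success"
    FAILED = "failed"

def parse_with_retry(cv_id, raw_text, parse_cv, status_store, max_attempts=3):
    """Run the parser, recording status per CV and retrying on failure."""
    status_store[cv_id] = ParseStatus.PENDING
    for attempt in range(1, max_attempts + 1):
        try:
            profile = parse_cv(raw_text)
        except Exception:
            if attempt == max_attempts:
                status_store[cv_id] = ParseStatus.FAILED
                return None
        else:
            status_store[cv_id] = ParseStatus.SUCCESS
            return profile
```

In the real system this would run inside the background-job queue (per section 10's "parsing should not block UI"), with the manual "retry" button simply re-enqueuing failed CVs.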
4) JD creation / editing
Critical: the provided JD must be parsed into structured JSON (per the diagram below) so it can be matched against candidates.
Create a new JD from scratch or paste text (the LLM parser asks for confirmation to handle missing/unparsed fields such as "no skills")
Edit and version JDs (at least “last edited” + optional version history)
Structured fields derived from the JD (skills, seniority, location, green flags/red flags/must-haves/nice-to-haves) as manual inputs (since scoring is not LLM-based)
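The structured-JSON shape of a parsed JD, plus a check that surfaces missing/unparsed fields for user confirmation, might look like the sketch below. Field names are assumptions; the authoritative schema is the diagram referenced above:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ParsedJD:
    title: str = ""
    skills: list[str] = field(default_factory=list)
    seniority: str = ""
    location: str = ""
    must_haves: list[str] = field(default_factory=list)
    nice_to_haves: list[str] = field(default_factory=list)
    green_flags: list[str] = field(default_factory=list)
    red_flags: list[str] = field(default_factory=list)

def missing_fields(jd: ParsedJD) -> list[str]:
    """Fields the LLM parser could not fill; these are shown to the
    user for confirmation before the JD is saved."""
    return [name for name, value in asdict(jd).items() if not value]

jd = ParsedJD(title="Backend Engineer", skills=["python", "fastapi"])
payload = json.dumps(asdict(jd))  # structured JSON to persist and match on
```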
5) JD Evaluation
There is an evaluation pipeline per JD: the candidate pool is scored against the JD to produce a candidate shortlist for the leaderboard (5.1 Deterministic scoring). The shortlisted candidates are then run through Elo battles, and the resulting list, ranked by Elo score, is the final leaderboard. The final leaderboard is persisted.
The owner can edit the JD after results; the system re-evaluates asynchronously and produces new results
When a JD is opened, the system ensures scores are computed/refreshed (with caching to avoid unnecessary recomputation)
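The "refresh on open, but don't recompute unnecessarily" behavior can be keyed on the JD's version (e.g. last-edited stamp) plus the scoring-engine version, a sketch with hypothetical names:

```python
def get_scores(jd_id, jd_version, scorer_version, cache, compute):
    """Return cached scores unless the JD or the scoring engine changed.
    Editing the JD (new jd_version) or bumping the deterministic
    scorer (new scorer_version) invalidates the cache entry."""
    key = (jd_id, jd_version, scorer_version)
    if key not in cache:
        cache[key] = compute(jd_id)  # expensive batch scoring over the pool
    return cache[key]
```

Including `scorer_version` in the key also covers the "deterministic scoring versioning" requirement in section 10: old results stay attributable to the engine version that produced them.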
5.1) Deterministic scoring (no LLM)
Scoring engine that produces:
Total match score (0–100)
Score breakdown (e.g., skills, years, location, education, company tier)
Explainability: “Matched 7/10 required skills”, “Missing must-have X”
Weight configuration (per JD or global):
Weights for skill match, tenure, location, education, company tier, English level, etc.
Hard filters
Must-have skills, minimum years, location required, etc. (pass/fail)
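A minimal sketch of how the scoring engine could combine hard filters with weighted dimensions; the weights, dimensions, and field names are illustrative, not the final configuration:

```python
def score_candidate(candidate, jd, weights):
    """Deterministic 0-100 score with breakdown and explainability.
    Hard filters (must-have skills) short-circuit to a zero score."""
    missing_must = [s for s in jd["must_haves"] if s not in candidate["skills"]]
    if missing_must:
        return {"score": 0, "breakdown": {},
                "explain": [f"Missing must-have {s}" for s in missing_must]}

    required = jd["required_skills"]
    matched = [s for s in required if s in candidate["skills"]]
    skill_part = len(matched) / len(required) if required else 1.0
    years_part = min(candidate["years"] / jd["min_years"], 1.0) if jd["min_years"] else 1.0
    location_part = 1.0 if candidate["location"] == jd["location"] else 0.0

    # Per-dimension breakdown, weighted per JD or globally
    breakdown = {
        "skills": skill_part * weights["skills"],
        "years": years_part * weights["years"],
        "location": location_part * weights["location"],
    }
    total = round(100 * sum(breakdown.values()) / sum(weights.values()), 1)
    explain = [f"Matched {len(matched)}/{len(required)} required skills"]
    return {"score": total, "breakdown": breakdown, "explain": explain}
```

Because the function is pure (no LLM, no randomness), the same candidate/JD/weights always produce the same score, which is what makes the caching and the pairwise battle rule well-defined.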
5.2) Elo battles (Top 10–20)
For each JD:
Select Top N candidates by deterministic match score (configurable 10/20)
Run Elo battles among them
Persist Elo rating per (JD, candidate) or per candidate with JD-specific context (choose one; JD-specific is usually safer)
Battle rules (deterministic, not LLM):
Use the score breakdown to decide the winner of each pairwise comparison consistently
Show:
Elo leaderboard
5.3) Leaderboard:
Name-Surname
Sort by match score / Elo
Filters (location, skills, highlights, tenure)
Open candidate profile from results (email, LinkedIn, phone number)
6) Shortlist / select candidate for JD (selected manually from the JD leaderboard by Elo score)
Shortlists are persisted across JD edits and are never reset
Create shortlist per JD/Project
Add/remove candidate
Status per shortlisted candidate (e.g., New, Contacted, Interview, Offer, Hired, Rejected)
Notes field + “why shortlisted” quick tags
Notes are attached to the candidate profile, scoped to a JD (candidate_notes has a foreign key to candidate_id (NOT NULL) and job_id (nullable))
Export shortlist (CSV)
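The note ownership described above (candidate_id NOT NULL, job_id nullable so a note can be general or JD-scoped) maps to a schema like the following, sketched with SQLite for illustration; table/column names beyond candidate_notes are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE candidates (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE jobs (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE candidate_notes (
    id INTEGER PRIMARY KEY,
    candidate_id INTEGER NOT NULL REFERENCES candidates(id),
    job_id INTEGER REFERENCES jobs(id),  -- nullable: general note if NULL
    body TEXT NOT NULL
);
""")
conn.execute("INSERT INTO candidates VALUES (1, 'Jane')")
conn.execute("INSERT INTO jobs VALUES (10, 'Backend Engineer')")
# One JD-scoped note and one general note on the same candidate
conn.execute("INSERT INTO candidate_notes (candidate_id, job_id, body)"
             " VALUES (1, 10, 'strong match')")
conn.execute("INSERT INTO candidate_notes (candidate_id, job_id, body)"
             " VALUES (1, NULL, 'met at conference')")
```

Because notes hang off the candidate rather than the leaderboard, they survive JD re-evaluation, matching the "won't be reset" requirement above.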
7) Candidate search (without job matching)
Candidates can be listed and filtered (location, skills, tenure, etc.) independently of any JD
8) Non-functional requirements (important for freelancer scope)
Performance: scoring 4000+ CVs should be batchable + cached
Background jobs: parsing and scoring should not block UI
Observability: logs for parsing/scoring/battles
Data consistency: deterministic scoring versioning, candidate deduplication
9) Technical requirements
Backend: Python / FastAPI
Frontend: React
Railway deployment (DB / Redis)
Storage: Google Drive / AWS S3