
AI-Orchestrated Frontend Development: A Case Study in LLM-Assisted System Architecture

December 31, 2025
React · TypeScript · AI Development · Frontend · LLM Orchestration

Resume Helper Professional - Main Interface

Why This Was Possible: Clean Architecture

The open-source Resume Helper was built with clean architecture from the start: separating the presentation layer (UI) from the business logic (backend workflows). This architectural decision proved invaluable:

  • Gradio (Open Source): Perfect for proof of concept and fast iteration. I could validate the core workflows quickly without getting bogged down in UI details.
  • React (Professional): When ready for a production UI, I simply swapped the frontend layer. The backend API, workflows, LLM integration, and all business logic remained completely unchanged.

The core logic lives in my open-source version and did not change. Only the presentation layer moved: from Gradio's form-based interface to a modern React application. This is the real value of clean architecture: you can evolve the UI without touching the business logic, which allowed rapid iteration on the proof of concept while preserving the investment in core functionality.


A. System Design: Understanding the Data Flow

How the Job Description Payload Flows Through the System

flowchart TB
    A["1️⃣ User Input<br/>AIResumeHelper.tsx<br/>User pastes job description + enters job URL"]
    B["2️⃣ State Management<br/>Zustand Store aiStore.ts<br/>Holds: provider, model, apiKey<br/>jobDescription, jobAnalysis<br/>matchScore, matchSummary"]
    C["3️⃣ API Call<br/>api.ts → Axios POST<br/>/api/ai/analyze-job<br/>job_description, resume_data, model"]
    D["4️⃣ Router Layer<br/>ai.py → @router.post<br/>Validates with Pydantic models<br/>Calls ResumeHelper.analyze_job_description"]
    E["5️⃣ Workflow Layer<br/>resume_workflows.py<br/>Creates cache key MD5 hash<br/>Sanitizes PII via PrivacyManager<br/>Constructs prompt 23 fields"]
    F["6️⃣ LLM Provider Layer<br/>litellm_provider.py<br/>Unified interface to 7 providers:<br/>OpenAI, Anthropic, Google, Ollama<br/>Groq, Perplexity, xAI<br/>Sends with JSON response format"]
    G["7️⃣ Response Processing<br/>Validates 23 required fields<br/>Caches result prevent duplicates<br/>Tracks cost via cost_tracker.py"]
    H["8️⃣ Backend Response<br/>success: true<br/>analysis: position_title, match_score<br/>skills_match, experience_match<br/>estimated_salary_range<br/>usage: tokens"]
    I["9️⃣ Frontend State Update<br/>aiStore.ts → set<br/>jobAnalysis<br/>matchScore<br/>matchSummary"]
    J["🔟 UI Rendering<br/>AIResumeHelper.tsx<br/>Renders: Match Score gauge<br/>Salary estimate, Progress bars<br/>ResultPalette for artifacts"]
    K["🎨 Tailwind CSS<br/>Utility Layer<br/>Styling engine - no separate CSS files<br/>Utility classes: flex, p-4, bg-navy-900<br/>Provides Lego bricks for styling<br/>Every visual detail gauge color<br/>progress bars, dark background"]
    L["🧩 shadcn/ui<br/>Component Layer<br/>Built on Radix UI primitives<br/>Copy-paste source code not npm install<br/>Provides high-level logic:<br/>Dialog open/close behavior<br/>Dropdown interactions<br/>Match Score Gauge component<br/>Salary Estimate card"]

    A -->|User Input| B
    B -->|HTTP POST| C
    C -->|API Request| D
    D -->|Workflow Trigger| E
    E -->|LLM API Call| F
    F -->|AI Response| G
    G -->|HTTP Response| H
    H -->|State Update| I
    I -->|Render| J
    J -.->|Styling| K
    J -.->|Components| L

    style A fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style B fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style C fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style D fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style E fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style F fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style G fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style H fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style I fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style J fill:#1e3a5f,stroke:#64ffda,color:#e6f1ff
    style K fill:#112240,stroke:#233554,color:#e6f1ff
    style L fill:#112240,stroke:#233554,color:#e6f1ff
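
To make steps 3 and 8 of this flow concrete, here is a minimal sketch of the Axios call that api.ts could make. The endpoint, payload keys, and response fields are taken from the diagram above; the function name and TypeScript types are illustrative assumptions, not the actual source.

```typescript
import axios from "axios";

// Response shape per step 8 of the diagram; anything beyond the fields
// listed there is an assumption for illustration.
interface JobAnalysisResponse {
  success: boolean;
  analysis: {
    position_title: string;
    match_score: number;
    skills_match: number;
    experience_match: number;
    estimated_salary_range: string | Record<string, unknown>;
  };
  usage: { tokens: number };
}

// Step 3: POST the job description, resume data, and chosen model;
// the router layer (ai.py) validates the payload with Pydantic.
export async function analyzeJob(
  jobDescription: string,
  resumeData: unknown,
  model: string
): Promise<JobAnalysisResponse> {
  const { data } = await axios.post<JobAnalysisResponse>("/api/ai/analyze-job", {
    job_description: jobDescription,
    resume_data: resumeData,
    model,
  });
  return data;
}
```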

The Component: AIResumeHelper.tsx

This is the command center of the entire application: a multi-modal interface that orchestrates 4 distinct AI workflows, manages complex state, and provides real-time visual feedback.

Technical Constraints I Provided to Claude:

State Architecture

  • Zustand State Management: Use Zustand for global state with separate concerns: aiStore for AI operations, resumeStore for resume data, applicationStore for tracker data (see the store sketch after this list).
  • Optimistic UI Updates: Show loading states immediately and update the store on success.
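
As a concrete illustration of these constraints, here is a minimal sketch of the aiStore slice. The state fields mirror step 2 of the flow diagram; the action names and the isAnalyzing flag are assumptions added to show the optimistic-update pattern.

```typescript
import { create } from "zustand";

interface AIState {
  // Fields from step 2 of the flow diagram.
  provider: string;
  model: string;
  apiKey: string;
  jobDescription: string;
  jobAnalysis: Record<string, unknown> | null;
  matchScore: number | null;
  matchSummary: string;
  // Assumed loading flag supporting the optimistic-UI constraint.
  isAnalyzing: boolean;
  setJobDescription: (jd: string) => void;
  startAnalysis: () => void;
  finishAnalysis: (analysis: Record<string, unknown>, score: number, summary: string) => void;
}

export const useAIStore = create<AIState>((set) => ({
  provider: "openai",
  model: "",
  apiKey: "",
  jobDescription: "",
  jobAnalysis: null,
  matchScore: null,
  matchSummary: "",
  isAnalyzing: false,
  setJobDescription: (jd) => set({ jobDescription: jd }),
  // Optimistic update: show the loading state before the request resolves.
  startAnalysis: () => set({ isAnalyzing: true }),
  // Update on success: commit the analysis and clear the loading state.
  finishAnalysis: (analysis, score, summary) =>
    set({ jobAnalysis: analysis, matchScore: score, matchSummary: summary, isAnalyzing: false }),
}));
```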

Data Flow Constraints

  • Job Analysis Dependency: Job analysis MUST complete before the Tailor/Cover Letter/Skill Gap buttons enable (see the gating sketch after this list).
  • Data Passing: Pass job_analysis_data to all downstream operations to avoid duplicate analysis.
  • Auto-collapse: Collapse the job description to a pill after 1 second of inactivity.
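
A sketch of the dependency gating, assuming the store from the previous sketch; the component, import path, and handler names are hypothetical, but the rule is the one stated above: downstream buttons stay disabled until jobAnalysis exists, and the cached analysis is passed along to avoid a second LLM call.

```tsx
import { useAIStore } from "./aiStore"; // import path is an assumption

export function DownstreamActions() {
  const jobAnalysis = useAIStore((s) => s.jobAnalysis);
  const ready = jobAnalysis !== null; // gate on completed job analysis

  const run = (operation: string) => {
    // Pass job_analysis_data downstream so Tailor / Cover Letter /
    // Skill Gap never re-run the analysis.
    console.log(operation, { job_analysis_data: jobAnalysis });
  };

  return (
    <div>
      <button disabled={!ready} onClick={() => run("tailor")}>Tailor</button>
      <button disabled={!ready} onClick={() => run("cover-letter")}>Cover Letter</button>
      <button disabled={!ready} onClick={() => run("skill-gap")}>Skill Gap</button>
    </div>
  );
}
```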

UX Requirements

  • Match Score Display: The match score is displayed as a circular gauge with color coding: green (80+), yellow (60-79), red (<60); the helpers after this list sketch the thresholds.
  • Progress Bars: Show progress bars for the skills_match, experience_match, and education_match breakdown.
  • Salary Range Handling: The AI-estimated salary range must handle both string and object formats, since different LLMs return different shapes.
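
Two small helpers sketch these requirements. The color thresholds are exactly the ones stated above; the hex values and the object keys for the salary shape are assumptions, since the post does not pin down what each LLM returns.

```typescript
// Gauge color coding: green (80+), yellow (60-79), red (<60).
export function gaugeColor(score: number): string {
  if (score >= 80) return "#22c55e"; // green (hex values assumed)
  if (score >= 60) return "#eab308"; // yellow
  return "#ef4444"; // red
}

// Normalize the salary estimate, which may arrive as a string or as an
// object depending on the LLM; the min/max keys are an assumption.
export function salaryLabel(range: string | { min?: number; max?: number }): string {
  if (typeof range === "string") return range;
  if (range.min != null && range.max != null) return `$${range.min} - $${range.max}`;
  return "N/A";
}
```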

Business Logic

  • Add to Tracker: The Add to Tracker button must validate that a job URL exists, parse the salary range into min/max integers (a parsing sketch follows this list), and extract the company/position from the analysis.
  • LocalStorage Persistence: User prompt persists to localStorage across sessions but is NOT included in cache keys.
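
For the salary parsing, a hedged sketch: the function name, regex, and supported formats (optional "$", thousands separators, a "k" suffix) are assumptions about what the AI-estimated strings look like.

```typescript
// Parse a salary string such as "$90,000 - $120,000" or "$90k-$120k"
// into the integer min/max the tracker expects.
export function parseSalaryRange(raw: string): { min: number; max: number } | null {
  const matches = [...raw.matchAll(/(\d[\d,]*(?:\.\d+)?)\s*(k)?/gi)];
  const values = matches.map(([, num, k]) => {
    const n = parseFloat(num.replace(/,/g, ""));
    return k ? n * 1000 : n; // expand "90k" to 90000
  });
  if (values.length < 2) return null; // need both bounds
  return { min: Math.round(values[0]), max: Math.round(values[1]) };
}
```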

B. The "Engineer's Secret Sauce"

PII Sanitization

Before any resume data reaches an LLM, the system strips all personally identifiable information (PII) via PrivacyManager.sanitize_resume_data(). Names, phone numbers, email addresses, and physical addresses are removed, processed separately, and merged back only during document generation. This ensures that sensitive personal data never leaves the user's control during AI processing.
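
The real sanitizer is PrivacyManager.sanitize_resume_data() in the Python backend; purely to illustrate the split-and-merge idea, here is a TypeScript sketch. The field names are assumptions.

```typescript
interface ResumeData {
  name: string;
  email: string;
  phone: string;
  address: string;
  experience: string[];
  skills: string[];
}

type PII = Pick<ResumeData, "name" | "email" | "phone" | "address">;

// Split the resume into PII (kept local) and a sanitized body that is
// safe to send to the LLM.
export function sanitizeResume(resume: ResumeData): { pii: PII; sanitized: Omit<ResumeData, keyof PII> } {
  const { name, email, phone, address, ...sanitized } = resume;
  return { pii: { name, email, phone, address }, sanitized };
}

// Merge the PII back only at document-generation time.
export function mergeBack(sanitized: Omit<ResumeData, keyof PII>, pii: PII): ResumeData {
  return { ...sanitized, ...pii };
}
```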

MD5 Caching

Job analyses are cached using MD5 hash keys generated from the combination of job description and resume content. The same job + resume combination results in zero API cost on repeat operations. This caching layer prevents unnecessary LLM calls and significantly reduces operational costs while maintaining instant response times for previously analyzed jobs.
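
The cache itself lives in the Python workflow layer (resume_workflows.py); as a sketch of the keying idea in TypeScript, using Node's built-in crypto module:

```typescript
import { createHash } from "node:crypto";

// Same job description + same resume content => same key => cache hit.
// The separator prevents ("ab", "c") and ("a", "bc") from colliding.
export function cacheKey(jobDescription: string, resumeContent: string): string {
  return createHash("md5").update(`${jobDescription}\n---\n${resumeContent}`).digest("hex");
}

// Minimal in-memory wrapper; the real layer persists its results.
const analysisCache = new Map<string, unknown>();

export async function analyzeWithCache(
  jd: string,
  resume: string,
  analyze: () => Promise<unknown>
): Promise<unknown> {
  const key = cacheKey(jd, resume);
  if (analysisCache.has(key)) return analysisCache.get(key); // zero API cost on repeat
  const result = await analyze();
  analysisCache.set(key, result);
  return result;
}
```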

Cost Tracking

Every LLM operation logs token usage to cost_tracking.json with real-time market pricing from LiteLLM's database. This provides complete transparency into API costs, allowing users to track spending per operation, compare provider costs, and optimize their usage patterns. The cost tracking system supports all 7 integrated providers and updates automatically as pricing changes.
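
The actual tracker is cost_tracker.py, which resolves live pricing from LiteLLM; this TypeScript sketch only shows the shape of a per-operation log entry. The hardcoded prices are placeholder assumptions, not LiteLLM's data.

```typescript
import { appendFileSync } from "node:fs";

interface CostEntry {
  timestamp: string;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

// Placeholder per-1K-token prices; the real system resolves these from
// LiteLLM's pricing database at call time.
const PRICE_PER_1K_INPUT: Record<string, number> = { "gpt-4o-mini": 0.00015 };
const PRICE_PER_1K_OUTPUT: Record<string, number> = { "gpt-4o-mini": 0.0006 };

export function logCost(provider: string, model: string, inputTokens: number, outputTokens: number): void {
  const costUsd =
    (inputTokens / 1000) * (PRICE_PER_1K_INPUT[model] ?? 0) +
    (outputTokens / 1000) * (PRICE_PER_1K_OUTPUT[model] ?? 0);
  const entry: CostEntry = {
    timestamp: new Date().toISOString(),
    provider,
    model,
    inputTokens,
    outputTokens,
    costUsd,
  };
  // Append one JSON line per operation to the tracking file.
  appendFileSync("cost_tracking.json", JSON.stringify(entry) + "\n");
}
```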


C. Results


📹 Video Demo


Summary: What This Project Demonstrates

  1. Clean Architecture Value: The open-source version's clean separation between presentation and business logic allowed a seamless UI swap (Gradio → React) without touching any core functionality. The backend API, workflows, and LLM integration remained unchanged.

  2. AI Orchestration: I defined the constraints, data flows, and edge cases. Claude executed the React implementation. This is the new paradigm: orchestrating AI to build complex UIs while focusing on high-value logic.

  3. Production Awareness: I understand where SQLite breaks, why caching matters, and when to add complexity (vector DB) vs. keeping it simple.

  4. Process Engineering Mindset: The goal wasn't "write beautiful CSS" - it was "build a system that helps job seekers compete against AI-generated applications." Clean architecture made it possible to iterate quickly on the proof of concept (Gradio) and then swap to a professional UI (React) when ready.


Tech Stack Reference

Frontend

  • Framework: React 18 + TypeScript
  • State Management: Zustand (lightweight, no boilerplate)
  • UI Components: shadcn/ui (Radix UI primitives + Tailwind CSS)
  • Form Handling: React Hook Form + Zod validation
  • HTTP Client: Axios with interceptors
  • Build Tool: Vite

Backend

  • Framework: FastAPI (async Python)
  • ORM: SQLAlchemy 2.0
  • Database: SQLite (PostgreSQL-ready)
  • AI Integration: LiteLLM (unified LLM interface)
  • PDF Generation: Playwright + Jinja2 templates
  • Validation: Pydantic v2

This document is part of my "AI Applied & Process Engineering Portfolio." It demonstrates how clean architecture enabled a seamless UI swap: from Gradio (proof of concept) to React (professional UI), while preserving all core business logic from the open-source version.