Free during early access · 33 modules live now

Refactor your skills. Not your career.

The senior engineer who ships an agent can make 40% more. The PM who specs the eval runs the AI initiative. The platform lead who built the gateway leads the AI team. Become that person — in 8 weeks. For your current role, or your next interview.

  • Walk into the new AI system design interview at FAANG, Anthropic, or any AI-leaning startup
  • Ship a real AI feature your team can run in production
  • Be the person your company turns to when AI work shows up
250+ lessons live
Updated monthly
No credit card
Who Refactor4AI is for

If three of these sound like you, you're in the right place.

We didn't build this for absolute beginners or AI researchers. We built it for the senior tech professional who already knows their craft and wants AI fluency layered on top.

You have an AI system design interview coming up — and no framework to prep with.

You read 12 AI tutorials a week and still feel behind.

Your team is shipping AI features — and you're the one who isn't.

Your job description has an AI section that didn't exist 18 months ago.

You want to lead AI work at your company, not ask questions in AI meetings.

You've shipped real software for 5+ years and want depth, not hype.

Why now

AI fluency is the new dividing line.

The engineer who ships a working agent in two days makes more than the one who can't — sometimes 40% more. The platform engineer who built the company's AI gateway is now running the AI platform team. The PM who can credibly spec an eval is the one leading the AI initiative.

The gap isn't closing. It compounds with every quarter.

And the path to that fluency is broken. Generic AI courses teach a little of everything, badly. Tutorials and blog posts are everywhere — but the half-life of AI best practice has dropped to weeks. What was sharp last quarter is mid this quarter.

Refactor4AI is built around the one thing that actually matters: becoming the most AI-fluent person at your company in the seat you already hold.

Not a researcher. Not a generalist. The person whose role is exactly what yours is — who can build, ship, and review AI work in your stack, in your meetings, in your code reviews.

We tailor the curriculum to your role. We meet your skills where they actually are. We update it monthly so what you learn this week is what shipped this week.

You don't refactor your skills once. You refactor them continuously.

How the curriculum is built

Calibrated to what you'll actually do — at work, and in your next interview.

We built the curriculum backwards from two questions: what is a senior engineer, platform lead, or PM actually being asked in 2026 AI interviews — and what do they ship in their first 90 days on the job? Every module exists because the answer pointed to it.

Backwards from real interviews
The capability map, RAG, agent design, evals, EU AI Act risk classification, FinOps for AI — every topic is here because hiring managers are screening for it. AI system design is the new system design interview, and we teach it as one.
Sources: hiring rubrics from FAANG, Anthropic, Stripe; public PM and platform interview reports.
Built from production code
Every lesson references real patterns shipping in production: Notion's prompt caching, Klarna's hybrid agents, Stripe's tool-calling architecture, Anthropic's MCP rollout, Google's A2A v1.2. No abstract theory — just what's working at scale.
Sources: vendor docs · Anthropic, OpenAI, AWS, Microsoft, Google · engineering blogs · postmortems.
Refreshed every month
The half-life of AI best practice is weeks. Every month we add what's new — new models, new compliance lines, new tools — and prune what's stale. You see the public changelog so you know exactly what changed and why.
Latest refresh covers reasoning models, MCP universal adoption, A2A, EU AI Act enforcement, and FinOps for AI.
What this gets you
Promoted in your current role
The engineer who shipped the company's first agent. The PM who wrote its first eval plan. The platform lead who built its AI gateway. That can be you — without changing jobs.
Ready for AI-leaning interviews
Senior interviews now include AI system design, agent orchestration, eval-driven dev, AI cost engineering. The capstone gives you a portfolio piece you can walk into an interview with.
Future-proofed for 2027 and beyond
The half-life of AI best practice is short. Refreshing your skills monthly through Refactor4AI keeps you sharp without burning weekends on tutorials.
Browse the full curriculum

Every module.
Every lesson.
No login needed.

Three role-specific tracks · 33 modules · 250+ lessons. Pick your role, scan the journey, click any module to see what's inside.

Track overview

From AI-assisted development to shipping production agents with MCP, evals, and OWASP-aligned safety. Closes with case-driven AI system design. Calibrated for senior engineers who already ship code and need to layer AI fluency on top — including concrete patterns from the AWS Bedrock, Microsoft Foundry, and Google Vertex ecosystems.

~62h total · 11 modules · 86 lessons · portfolio capstone · Updated monthly
Your journey
P1
Foundations
What every AI engineer needs in their bones
P2
Build the stack
RAG, MCP, agents, evals — the production stack
P3
Ship & master
Voice, production, architecture, AI system design
Capstone
Portfolio-grade project
Ship an AI feature into a real codebase
P1
Phase 1 · 3 modules
Foundations
What every AI engineer needs in their bones
    Module 1
    AI foundations for engineers

    How modern LLMs work — transformers, attention, MoE. Tokens and context windows. Reasoning vs base models (when to use o-series / Claude reasoning vs Haiku-class). 1M+ context tradeoffs. Multimodal capabilities. Cost & latency models. The capability map you need before writing any prompt.

    LLMs · Tokens · Reasoning models · Multimodal · 4h · 6 lessons
    On your cloud
    Bedrock model catalog · Foundry Models · Vertex Model Garden (200+ curated)
    What's new this quarter
    Reasoning models routine; 1M+ context windows; multimodal-by-default; OpenAI now on AWS Bedrock alongside Claude.
    Case study: Claude Sonnet 4.6, Gemini 3 Pro, GPT-5 reasoning capability map.
    Lessons in this module
    ~74 min total
    1. How modern LLMs actually work
    2. Tokens and context windows
    3. Reasoning vs base models — when to use which
    4. Multimodal capabilities
    5. Cost-latency-quality tradeoffs
    6. Capability map for May 2026

    All lessons free during early access — sign up to start.
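
To make the cost-and-latency lesson concrete: per-request cost is just tokens times price. A minimal sketch in Python; every price below is a placeholder assumption for illustration, not any provider's actual rate.

```python
# Back-of-envelope LLM request cost. Prices per million tokens are
# PLACEHOLDER assumptions -- check your provider's pricing page.
PRICE_PER_MTOK = {
    # (input, output) in USD per 1M tokens -- hypothetical tiers
    "frontier-reasoning": (15.00, 75.00),
    "mid-tier": (3.00, 15.00),
    "small-fast": (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request."""
    in_price, out_price = PRICE_PER_MTOK[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A 4k-token prompt with a 500-token answer, at 100k requests/day:
per_request = request_cost("mid-tier", 4_000, 500)
print(f"${per_request:.4f}/request -> ${per_request * 100_000:,.0f}/day")
```

Under these assumed prices, the same request on the frontier tier costs about five times as much, which is why the capability map matters before you write a single prompt.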

    Module 2
    AI-assisted development as a discipline

    Cursor, Claude Code, Windsurf, Devin, Copilot Workspace — when each wins. Effective prompting for code. Spec-driven development (the rising 2026 pattern). AI-generated PR review (blocking vs advisory). Refactoring at scale. When NOT to trust AI output.

    Cursor · Claude Code · Devin · Spec-driven dev · Code review · 4.5h · 7 lessons
    On your cloud
    Amazon Q Developer · GitHub Copilot Enterprise · Gemini Code Assist Enterprise
    What's new this quarter
    90% of Fortune 100 use Copilot. Devin runs sandboxed cloud envs. Spec-driven development is the senior-engineer pattern.
    Case study: Duolingo: 25% dev-speed boost, 70% more PRs with Copilot.
    Lessons in this module
    ~86 min total
    1. The 2026 AI dev tool landscape — Cursor, Claude Code, Windsurf, Devin
    2. Effective prompting for code generation
    3. Spec-driven development — the senior-engineer pattern
    4. AI-generated PR review — blocking vs advisory
    5. Refactoring at scale with AI
    6. When NOT to trust AI output — review heuristics
    7. Building your personal AI dev workflow

    All lessons free during early access — sign up to start.

    Module 3
    Building with LLM APIs

    Anthropic, OpenAI, Gemini APIs. Cloud SDKs (Bedrock, Foundry, Vertex). Structured outputs / JSON mode. Tool calling as the integration pattern. Streaming. Prompt caching (~90% cost cut on repeated prefixes). Batch processing. Multimodal inputs.

    Anthropic SDK · Tool calling · Structured outputs · Prompt caching · Streaming · 6h · 8 lessons
    On your cloud
    Bedrock InvokeModel + Converse · Foundry Responses API · Vertex Gemini API
    What's new this quarter
    Prompt caching now standard; structured outputs reliable; Foundry's wire-compatibility with OpenAI Responses API.
    Case study: Stripe customer support; Notion AI's prompt-caching savings.
    Lessons in this module
    ~109 min total
    1. Anthropic, OpenAI, Gemini SDKs — what's interchangeable
    2. Calling models via Bedrock, Foundry, and Vertex
    3. Structured outputs and JSON mode
    4. Tool calling — the integration pattern
    5. Streaming responses for chat UX
    6. Prompt caching for 90% cost cuts
    7. Batch processing for non-realtime workloads
    8. Multimodal inputs — images, audio, PDFs

    All lessons free during early access — sign up to start.
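
A taste of the tool-calling pattern at the heart of this module: a minimal sketch against the Anthropic Python SDK. The `get_weather` tool and the model string are illustrative assumptions; swap in your own.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One tool, described by a JSON schema. The description does most of the
# work: the model decides when to call the tool based on this text.
tools = [{
    "name": "get_weather",  # hypothetical tool, for illustration
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute whatever model you're on
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Do I need an umbrella in Oslo?"}],
)

# If the model chose the tool, the reply contains a tool_use block with
# parsed arguments; your code runs the tool and sends the result back.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # e.g. get_weather {'city': 'Oslo'}
```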

P2
Phase 2 · 4 modules
Build the stack
RAG, MCP, agents, evals — the production stack
    Module 4
    RAG and knowledge systems

    Embeddings (model selection in 2026). Chunking strategies — semantic, hierarchical, parent-document. Vector DBs (pgvector, Turbopuffer, Pinecone, Qdrant). Hybrid search (BM25 + vector + re-ranker). Query rewriting & decomposition. Long-context vs traditional RAG. Retrieval evaluation with RAGAS.

    Embeddings · pgvector · Hybrid search · Re-rankers · RAGAS · 7h · 9 lessons
    On your cloud
    Bedrock Knowledge Bases + S3 Vectors · Azure AI Search · Vertex AI Search · AlloyDB pgvector
    What's new this quarter
    AWS S3 Vectors (90% cheaper, GA late 2025); Turbopuffer rising; long-context-as-RAG patterns.
    Case study: Klarna's AI assistant retrieval layer; the chunking-first debugging discipline.
    Lessons in this module
    ~123 min total
    1. Embeddings — what they are and which to pick in 2026
    2. Chunking strategies — semantic, hierarchical, parent-document
    3. Vector DBs in 2026 — pgvector, Turbopuffer, Pinecone, Qdrant
    4. Hybrid search — BM25 + vector + re-ranker
    5. Query rewriting and decomposition
    6. Long-context vs traditional RAG — when each wins
    7. Re-rankers and the second-pass ranking layer
    8. Retrieval evaluation with RAGAS
    9. Building a production RAG pipeline end-to-end

    All lessons free during early access — sign up to start.
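
One pattern from this module in miniature: reciprocal rank fusion (RRF), a common way to merge a BM25 ranking with a vector ranking before the re-ranker pass. A self-contained sketch; the document IDs are made up.

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.
    Each doc scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the conventional damping constant."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["doc_7", "doc_2", "doc_9"]  # keyword (BM25) ranking
vector_hits = ["doc_2", "doc_5", "doc_7"]  # embedding-similarity ranking

print(rrf_fuse([bm25_hits, vector_hits]))
# ['doc_2', 'doc_7', 'doc_5', 'doc_9'] -- docs in both lists rise to the top
```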

    Module 5
    MCP and the integration layer

    Model Context Protocol deep dive. Building MCP servers (TypeScript + Python). Consuming them across providers. Tool design ergonomics — descriptions matter more than names. Authentication (bearer tokens; OAuth for remote MCP). Governance and rate limits. The 5,800+ public server ecosystem.

    MCP · Tool design · Auth · Governance · 6h · 8 lessons
    On your cloud
    MCP works identically against all three clouds — pick by what your tool surface needs
    What's new this quarter
    MCP universal across Anthropic, OpenAI, Google, Microsoft. 78% enterprise adoption. The integration default.
    Case study: Anthropic's official MCP servers; Sourcegraph's MCP for code search.
    Lessons in this module
    ~110 min total
    1. The Model Context Protocol — what and why
    2. Anatomy of an MCP server
    3. Building an MCP server in TypeScript
    4. Building an MCP server in Python
    5. Consuming MCP servers across providers
    6. Tool design ergonomics — descriptions matter more than names
    7. Authentication for MCP — bearer tokens and OAuth
    8. Governance, rate limits, and the public ecosystem

    All lessons free during early access — sign up to start.
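
So you know the shape of what you'll build, here's a minimal MCP server sketch using the FastMCP helper from the official `mcp` Python SDK. The docs-search tool is a toy stand-in.

```python
# pip install "mcp[cli]"  -- the official Model Context Protocol Python SDK
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-search")  # server name shown to connecting clients

@mcp.tool()
def search_docs(query: str, limit: int = 5) -> list[str]:
    """Search internal documentation and return matching titles.

    The docstring matters more than the name: it's the description the
    model reads when deciding whether to call this tool.
    """
    # Toy implementation -- swap in your real search backend.
    corpus = ["Onboarding guide", "Deploy runbook", "Incident playbook"]
    return [t for t in corpus if query.lower() in t.lower()][:limit]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; clients launch it as a subprocess
```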

    Module 6
    Agentic systems

    Single-agent, multi-agent, supervisor patterns. Framework choice — LangGraph (production reliability), CrewAI (fast workflow), PydanticAI (type-safe). State and memory. Human-in-the-loop checkpoints. A2A protocol for cross-agent communication. Computer use, browser agents, voice agents.

    Agent loops · LangGraph · A2A · Computer use · Memory · 7h · 10 lessons
    On your cloud
    Bedrock AgentCore (runtime + memory + identity) · Microsoft Agent Framework + Agent Service · Vertex Agent Builder + A2A
    What's new this quarter
    A2A at v1.2 (signed cards, 150+ orgs). Sub-agent networks. Framework choice now well-differentiated.
    Case study: Cognition's Devin architecture; Anthropic's Computer Use; Google's A2A v1.2 in production.
    Lessons in this module
    ~143 min total
    1. The agent loop — what makes a system 'agentic'
    2. Single-agent vs multi-agent vs supervisor patterns
    3. Choosing a framework — LangGraph, CrewAI, PydanticAI
    4. Building your first agent with LangGraph
    5. State and memory across turns
    6. Human-in-the-loop checkpoints
    7. The A2A protocol for cross-agent communication
    8. Computer use and browser agents
    9. Voice agents and real-time UX
    10. Production agent patterns and failure modes

    All lessons free during early access — sign up to start.
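
Strip the frameworks away and the loop underneath is small: call the model, run whatever tool it asked for, feed the result back, repeat until it answers. A framework-free sketch; `call_model`, its reply shapes, and the `lookup_order` tool are all hypothetical.

```python
# Framework-free agent loop. `call_model` stands in for any chat API that
# returns either {"type": "final", "text": ...} or
# {"type": "tool", "tool": name, "args": {...}}. Real SDKs differ in shape,
# not in structure. The single tool here is a toy.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def run_agent(user_message: str, call_model, max_turns: int = 8):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):        # hard cap -- agents must terminate
        reply = call_model(messages)
        if reply["type"] == "final":  # model answered the user directly
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["args"])  # run requested tool
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("Turn limit hit -- a classic production failure mode")
```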

    Module 7
    Evals and reliability

    Eval-driven development as a discipline. Building golden datasets (100+ cases). RAGAS metrics. LLM-as-judge with human calibration (sample 30+ pass cases manually). Binary PASS/FAIL > 1-5 ratings. Regression testing prompts in CI. Production observability (Langfuse / Langsmith / Braintrust). Hallucination detection. Drift monitoring.

    Evals · RAGAS · Golden sets · LLM-as-judge · Langfuse · 5.5h · 8 lessons
    On your cloud
    Bedrock Evaluations · Foundry evaluation toolkit · Vertex AI Eval Service · Langfuse over all three
    What's new this quarter
    Binary judgments now norm (Descript + Bolt AI confirmed). Eval-as-monitor in production. RAGAS standardized.
    Case study: Descript's editing-agent eval suites; Bolt AI's 3-month eval system.
    Lessons in this module
    ~106 min total
    1. Eval-driven development as a discipline
    2. Building a golden dataset (100+ cases)
    3. RAGAS metrics for retrieval quality
    4. LLM-as-judge with human calibration
    5. Binary PASS/FAIL vs Likert ratings
    6. Regression testing prompts in CI
    7. Production observability — Langfuse, Langsmith, Braintrust
    8. Hallucination detection and drift monitoring

    All lessons free during early access — sign up to start.
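
The eval-driven loop this module drills, reduced to a sketch: run the golden dataset through your pipeline, grade each case binary PASS/FAIL, gate CI on the pass rate. `run_pipeline` and `judge` are hypothetical stand-ins for your system and your grader.

```python
import json

def run_eval(golden_path, run_pipeline, judge, threshold=0.9):
    """Regression-gate a prompt change against a golden dataset.
    run_pipeline: the system under test (hypothetical callable).
    judge: returns True/False -- binary PASS/FAIL, not a 1-5 rating; often
    an LLM-as-judge you've calibrated by hand-checking 30+ of its verdicts."""
    with open(golden_path) as f:  # JSONL: {"input": ..., "expected": ...}
        cases = [json.loads(line) for line in f]
    failures = []
    for case in cases:
        output = run_pipeline(case["input"])
        if not judge(case["input"], case["expected"], output):
            failures.append({"input": case["input"], "got": output})
    pass_rate = 1 - len(failures) / len(cases)
    print(f"{pass_rate:.1%} pass on {len(cases)} golden cases")
    assert pass_rate >= threshold, f"Regression: {len(failures)} cases now fail"
    return failures  # triage these -- confirmed bugs become new golden cases
```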

P3
Phase 3 · 4 modules
Ship & master
Voice, production, architecture, AI system design
    Module 8
    Voice and computer-use agents

    Real-time speech-to-speech (Foundry Voice Live; Anthropic + ElevenLabs; OpenAI Realtime). Browser agents. Computer use (Anthropic Computer Use, OpenAI Operator). UX patterns for non-text AI. Latency budgets and how they shape architecture. Failure modes specific to voice — interruptions, partial transcripts.

    Voice Live · Browser agents · Multimodal UX · Realtime · 5h · 7 lessons
    On your cloud
    Foundry Voice Live · Polly + Transcribe + Bedrock · Vertex conversational + WaveNet
    What's new this quarter
    Voice Live in Foundry; STT→LLM→TTS collapsed into single APIs; computer use mainstream.
    Case study: Anthropic's Computer Use; OpenAI's Operator; voice agents for support.
    Lessons in this module
    ~93 min total
    1. Voice agent architectures in 2026
    2. Foundry Voice Live and OpenAI Realtime
    3. Anthropic + ElevenLabs voice patterns
    4. Browser agents — what works, what breaks
    5. Computer Use — Anthropic's and OpenAI's models
    6. Latency budgets and architecture tradeoffs
    7. Failure modes for voice — interruptions, partial transcripts

    All lessons free during early access — sign up to start.
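
Why latency budgets shape voice architecture: the arithmetic leaves no slack. The per-stage numbers below are illustrative assumptions; measure your own stack.

```python
# Voice turn, time-to-first-audio. ~800 ms is a common conversational
# target; every stage figure here is an ASSUMPTION for illustration.
BUDGET_MS = 800
pipeline = {
    "speech-to-text (final transcript)": 250,
    "LLM time-to-first-token": 400,
    "text-to-speech (first audio chunk)": 150,
}
total = sum(pipeline.values())
print(f"{total} ms of {BUDGET_MS} ms budget -> headroom: {BUDGET_MS - total} ms")
# 800 ms used, 0 ms headroom: this is why speech-to-speech APIs that
# collapse STT -> LLM -> TTS into one model exist, and why everything streams.
```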

    Module 9
    Production AI

    Cost engineering and token economics. Prompt caching strategies (~90% savings). Semantic caching at the gateway. Model routing (cheap → expensive). Security: OWASP LLM Top 10 — prompt injection defense (constrain behaviour, segregate content, adversarial testing). Output validation. Distillation. Model gateways (LiteLLM, Portkey).

    Cost · Caching · OWASP LLM · Distillation · LiteLLM · 6h · 8 lessons
    On your cloud
    LiteLLM + Bedrock · LiteLLM + Foundry · LiteLLM + Vertex · Portkey commercial gateway
    What's new this quarter
    Prompt caching ~90% cost reduction. AI cost engineering as a named discipline. OWASP LLM Top 10 in CI.
    Case study: Notion's caching savings; the rising AI cost engineering job market.
    Lessons in this module
    ~111 min total
    1. AI cost engineering as a discipline
    2. Prompt caching strategies — 90% savings
    3. Semantic caching at the gateway
    4. Model routing patterns — cheap to expensive
    5. The OWASP LLM Top 10
    6. Prompt injection defense
    7. Output validation and safety filters
    8. Distillation and model gateways — LiteLLM, Portkey

    All lessons free during early access — sign up to start.
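
Cheap-to-expensive model routing in miniature: try the small model first, escalate only when it says it's unsure. The model names, the `call_model` client, and the confidence heuristic are assumptions for illustration; production gateways also route on task type, token count, and tenant.

```python
CHEAP, EXPENSIVE = "small-fast-model", "frontier-model"  # placeholder names

ROUTER_SYSTEM = (
    "Answer the question. If you are not confident your answer is "
    "correct, reply with exactly: UNSURE"
)

def route(prompt: str, call_model) -> str:
    """call_model is a hypothetical client: call_model(model, system, prompt) -> str."""
    draft = call_model(model=CHEAP, system=ROUTER_SYSTEM, prompt=prompt)
    if draft.strip() != "UNSURE":
        return draft  # the cheap model handled it -- most traffic ends here
    return call_model(model=EXPENSIVE, system=None, prompt=prompt)  # escalate
```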

    Module 10
    AI-native software architecture

    Designing systems AI-first vs bolt-on. Persistent memory and personalization. Conversational UX vs traditional UX. Failure modes and graceful degradation. Human-AI collaboration patterns (Klarna's reintroduction-of-humans lesson — hybrid > full automation). Reading the next 3 years.

    Architecture · Memory · AI UX · Future bets · 5h · 7 lessons
    On your cloud
    Memory primitives in AgentCore · Foundry agent state · Vertex Memory Bank (GA)
    What's new this quarter
    Architecture patterns settling; hybrid (humans + AI) outperforming full automation.
    Case study: Klarna's $60M saved + reintroduction of humans for emotional queries.
    Lessons in this module
    ~85 min total
    1. AI-first vs bolt-on architecture
    2. Persistent memory and personalization
    3. Conversational UX vs traditional UX
    4. Designing for failure modes — graceful degradation
    5. Human-AI collaboration — Klarna's lesson
    6. Hybrid systems beating full automation
    7. Architectural bets for the next 3 years

    All lessons free during early access — sign up to start.
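
Graceful degradation at its simplest: wrap the AI path in a hard timeout so a provider outage degrades to a deterministic fallback instead of an error page. A sketch; the timeout value and the fallback are assumptions.

```python
import concurrent.futures

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def with_fallback(ai_call, fallback, timeout_s: float = 3.0):
    """Serve the AI path with a hard deadline; degrade, don't error.
    ai_call and fallback are zero-argument callables (hypothetical)."""
    future = _pool.submit(ai_call)
    try:
        return future.result(timeout=timeout_s), "ai"
    except Exception:  # timeout, rate limit, provider outage -- same handling
        # Note: a timed-out ai_call keeps running on its worker thread.
        return fallback(), "degraded"

# Usage sketch (names hypothetical):
#   results, mode = with_fallback(lambda: ai_rerank(q), lambda: keyword_search(q))
```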

    Module 11
    AI system design

    Case-driven, interview-grade AI system design. Design a customer-support agent at fintech scale (RAG + tools + escalation + audit). Multi-tenant AI features for SaaS — per-tenant RAG isolation, cost attribution, noisy-neighbour problems. Agent orchestration patterns (queues, retries, idempotency, dead-letter). Real-time vs batch agentic systems. Multi-step agent capacity planning under cost ceilings. Failure modes you only see at scale — cascade hallucinations, eval drift, prompt regression. The exercise senior interviews actually test.

    System design · Multi-tenancy · Capacity planning · Orchestration · Scale · 6h · 8 lessons
    On your cloud
    Bedrock at scale (provisioned throughput patterns) · Foundry PTU + multi-region · Vertex agent capacity planning
    What's new this quarter
    AI system design now its own interview category at FAANG; agent orchestration patterns standardising; cost-aware design is the new performance tuning.
    Case study: GitHub Copilot's request-routing architecture; Klarna's per-customer isolation; Notion AI's per-workspace RAG.
    Lessons in this module
    ~136 min total
    1. The AI system design interview format
    2. Case: customer-support agent at fintech scale
    3. Multi-tenant AI features for SaaS — per-tenant RAG
    4. Agent orchestration — queues, retries, idempotency
    5. Real-time vs batch agentic systems
    6. Capacity planning under cost ceilings
    7. Failure modes at scale — cascade hallucinations, drift
    8. Practice case — design an AI feature for your product

    All lessons free during early access — sign up to start.
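
A flavor of the capacity-planning math from this module: model calls per task, tokens per call, tasks per day, against a monthly cost ceiling. Every figure below is an assumption for illustration.

```python
# Multi-step agent capacity planning. ALL numbers are placeholders --
# substitute your own measurements and your provider's pricing.
steps_per_task  = 6        # model calls per agent task
tokens_per_step = 3_000    # avg input + output tokens per call
usd_per_mtok    = 5.00     # blended price per 1M tokens
tasks_per_day   = 20_000

cost_per_task = steps_per_task * tokens_per_step / 1e6 * usd_per_mtok
monthly = cost_per_task * tasks_per_day * 30
print(f"${cost_per_task:.3f}/task -> ${monthly:,.0f}/month")
# $0.090/task -> $54,000/month. Against a $20k ceiling you're 2.7x over --
# hence routing, caching, and step pruning before provisioned throughput.
```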

Final capstone · your portfolio piece
Ship an AI feature into a real codebase

Multi-week project, full evals + observability, employer-style rubric, public artifact for your portfolio.

Multi-week project · Employer-style rubric · Public artifact · Goes on your CV
Ready to start the Software Engineer track?

The 10-minute skill assessment routes you to the right starting point.

Sign up free
What you'll actually build

Not a certificate. A thing you can ship.

Every track ends with a portfolio-grade capstone. Multi-week, employer-style rubric, public artifact. The work you do in Refactor4AI is the work you walk into your next interview with.

Software Engineer
Ship an AI feature into a real codebase

Multi-week project where you pick an actual feature, write the spec, build the agent, wire MCP tools, ship evals, and deploy with observability. Employer-style rubric.

Deliverables
  • Working agent with tool-calling + memory
  • Golden eval dataset and regression test
  • Production observability dashboard
  • Public portfolio artifact + tech write-up
DevOps / Platform
Build an internal AI gateway

End-to-end build of the thing platform teams ship at scale: auth, rate limits, cost controls, prompt-injection defense, observability, self-service onboarding for product teams.

Deliverables
  • Gateway with LiteLLM-class routing
  • Per-team quotas + cost attribution
  • Prompt injection / OWASP defenses
  • Self-service docs + admin dashboard
Product Manager
Spec a real AI feature end-to-end

PRD + eval plan + GTM brief + EU AI Act risk classification for an actual AI feature. Built to be the thing you walk into a senior PM interview with.

Deliverables
  • PRD with eval-driven success metrics
  • Golden dataset of 50+ cases (yours)
  • Pricing + margin model
  • EU AI Act risk classification doc
Interview prep · calibrated to 2026 hiring rubrics

The new system design interview is AI system design.

Senior interviews at Anthropic, OpenAI, Google DeepMind, FAANG and AI-first startups now include AI system design as its own dedicated round. Refactor4AI is built backwards from those rubrics — so you walk in with the framework, the vocabulary, and a portfolio piece to talk about.

Questions you'll be ready for
SWE

"Design a customer-support agent at fintech scale (RAG + tools + escalation + audit)."

SWE

"Design an AI code-review system for GitHub-scale. Walk me through cost, latency, evals, failure modes."

DevOps

"Design the AI gateway for a 5,000-engineer company. Multi-tenant, cost-attributed, prompt-injection-defended."

DevOps

"Your AI bill is $2M/year. Get it to $200K without losing quality. Talk me through it."

PM

"Pick an AI feature and walk me through the PRD, eval plan, success metric, and EU AI Act risk classification."

PM

"Pricing — your AI feature costs $0.40 per session. Per-token, per-action, or value-based? Defend."

The 5-phase framework

Clarify, architect, AI-specific layers, non-deterministic concerns, scale & failure modes. The structure that lets you handle any AI design prompt without freezing.

The vocabulary they grade you on

Prompt caching. Model routing. OWASP LLM Top 10. RAGAS. LLM-as-judge. EU AI Act risk classification. Cost-attribution at multi-tenant scale. You'll know them all cold.

The portfolio piece

The capstone you ship through Refactor4AI becomes your talking point. Real working artifact, employer-style rubric, walkthrough doc — you don't show up empty-handed.

Interview in 2–4 weeks? Start with the AI System Design module.

Module 11 in every track is built backwards from the FAANG / AI-lab hiring rubric. Or read the standalone interview-prep guide.

Compared honestly

What you actually save by not piecing it together yourself.

We're not the only way to learn AI. But for the senior tech professional with limited evenings, here's why we built Refactor4AI.

How the options compare: YouTube tutorials · generic AI course · Refactor4AI

  • Role-specific (you don't see other roles' content)
  • Updated for what's shipping this quarter
  • Production patterns from real companies
  • Portfolio-grade capstone you can interview with
  • MCP, evals, agents, EU AI Act covered in depth
  • Sequenced learning path (not a random playlist)
  • Free

Time to ship something real: YouTube tutorials, years; generic AI course, 30+ hours scattered; Refactor4AI, 8 weeks structured.

We've tried each. We still recommend a great YouTube channel for any specific concept — but for "I want to be the AI person at my company" there isn't a substitute for a sequenced curriculum.

The shifts that matter

Tech work doesn't look like it did 18 months ago.

If your day-to-day still feels familiar, it's because the change around you is happening faster than your habits can catch up. These are the shifts our curriculum is calibrated to.

Code
Reviewed before it's typed.
Cursor, Claude Code, and Windsurf agents write the first draft of most PRs. Senior engineers now spend their day reviewing AI output, not hand-rolling boilerplate.
Integration
MCP is the connective tissue.
Model Context Protocol is the standard way LLMs talk to tools, databases, and APIs. If you're building an agent and not using MCP, you're rewriting infrastructure.
Architecture
Agents replaced point automations.
Where teams used to wire a single API call into a workflow, they now ship multi-step agents with memory, tool use, and supervisors. The new system design interview is agent design.
Reasoning
Models split tasks by depth.
Reasoning models (o-series, Claude reasoning) handle planning and complex problem-solving; fast non-reasoning models do the rest. Knowing when to switch is the new performance tuning.
Quality
Evals are the new tests.
Eval-driven development has replaced "ship it and watch logs." Golden datasets, LLM-as-judge harnesses, and regression evals on every prompt change are now table stakes.
Compliance
EU AI Act is in force.
High-risk AI systems require risk classification, documentation, and audit trails. Whether you ship to the EU or not, the framework is becoming the global default.
Stories

From "I should learn AI" to "I'm running the AI initiative."

A small selection of what's happened in the last six months.

"Six weeks of Refactor4AI and I shipped my first agent into production. I'm now the AI lead on the payments team. The curriculum caught gaps I didn't know I had — eval coverage was the thing that pushed me from prototype to production."

Marcus O.
Senior Backend Engineer · Fintech

"I went from being the PM that asks questions in AI meetings to the one running them. Refactor4AI teaches AI as something you ship through a real product process."

David C.
Senior PM · Data infrastructure

"The platform-engineering track is the only curriculum I've found that takes the infra side of AI seriously. Cost engineering, prompt injection, EU AI Act — all in one place."

Priya R.
Platform Engineer · SaaS
Pricing

Free during early access.

The whole platform — every track, every lesson, every walkthrough, your public portfolio. No credit card. We'll add paid tiers when we know what people actually use.

Every track
Every lesson + walkthrough
Public portfolio
Updated monthly
FAQ

Common questions

Who is Refactor4AI for?

Mid- to senior-level tech professionals — engineers, platform/DevOps, PMs — who already know their craft and need to layer AI fluency on top. If you've shipped real software and feel like AI is slipping past you, you're exactly who we built it for.

How is this different from a Coursera or Udemy course?

Three things. It's role-specific — engineers, platform people, and PMs all see different curricula. The curriculum is updated monthly because in AI, anything older than six months is outdated. And every module is anchored on a real shippable build, not abstract theory.

How current is the curriculum?

Updated monthly. The current version covers MCP, reasoning models, agent orchestration, the EU AI Act, eval-driven development, prompt caching, and AI cost engineering. We publish a public changelog so you can see what changed and why.

What if I'm a complete beginner with AI?

Module 1 of every track assumes zero AI background — it covers how modern LLMs actually work, capabilities, limits, and where the field is. You skip what you've already mastered.

Can my company sponsor or buy this for me?

Yes. The Team plan (when it launches) includes admin dashboards and skills reporting. We have a one-pager you can forward to L&D — request it via the team plan link.

Will this actually help me get hired?

The capstone is a real, shippable project that becomes the centerpiece of your portfolio. The structure is built so the work you do here is the work you can show in interviews.

Refactor4AI takes 10 minutes to start.

Find your gaps. Build something real. Ship it on your resume.

Start free
Free during early access · No credit card · Cancel any time