Interactive coding interview preparation platform with step-by-step algorithm visualisations.
Most interview prep platforms give you a text editor, a problem, and a submit button. Learning happens (or doesn't) in the gap between reading the problem and figuring out the approach. CodeTutor's thesis is that pattern recognition matters more than memorisation: watch the algorithm work before writing code, and build transferable intuition you can apply to problems you've never seen.
The platform has an in-browser Python editor powered by Pyodide (CPython compiled to WebAssembly), 400+ problems tagged by difficulty, category, and algorithmic pattern, and step-by-step visualisations for 19 patterns. The stack is Next.js on the frontend, FastAPI on the backend, and PostgreSQL for structured problem content.
Nineteen patterns: Two Pointers, Sliding Window, Binary Search, BFS/DFS, Dynamic Programming, Backtracking, and more. Each visualisation renders data structures and animates state changes step by step, with forward and backward navigation so you can replay any transition.
Built with Framer Motion. Each step is a discrete state snapshot: pointer positions, highlighted cells, current values. Framer interpolates between snapshots for smooth transitions, but logical state advances atomically. The tricky part was keeping animation state and algorithm state locked together; early attempts that drove both from the same mutable state had subtle bugs where the visual lagged behind the algorithm by a step.
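A minimal sketch of the snapshot idea, in Python for brevity (the production visualisations are TypeScript components driven by Framer Motion, and the names here are illustrative): the algorithm run emits one immutable snapshot per step, and the renderer only ever interpolates between adjacent snapshots, so visual state can never drift from logical state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    left: int               # left pointer index
    right: int              # right pointer index
    highlighted: tuple      # cells the renderer should highlight
    note: str               # label shown alongside the step

def two_sum_sorted_snapshots(nums, target):
    """Run a two-pointer search and record one snapshot per step."""
    steps = []
    left, right = 0, len(nums) - 1
    while left < right:
        total = nums[left] + nums[right]
        steps.append(Snapshot(left, right, (left, right),
                              f"{nums[left]} + {nums[right]} = {total}"))
        if total == target:
            break
        if total < target:
            left += 1
        else:
            right -= 1
    return steps

steps = two_sum_sorted_snapshots([1, 3, 4, 6, 8, 11], 10)
```

Because each snapshot is frozen, stepping backward is just indexing into the list; nothing has to be "undone".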
Watching a two-pointer converge from both ends of an array, or a BFS wavefront expand level by level, builds intuition that pseudocode and static diagrams can't match.
Pyodide compiles CPython to WebAssembly, so code execution happens entirely in the browser. No server round-trip, no backend compute cost, and it works offline once cached. The trade-off is a ~10MB initial download. The editor UI renders immediately while the runtime lazy-loads in the background; by the time the user finishes reading the problem description, it's usually ready.
Test cases execute locally with instant pass/fail feedback. This was a deliberate architectural choice: it simplifies deployment, eliminates per-user compute costs, and means the platform scales with zero backend load for code execution.
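Conceptually, the local runner is simple: call the user's function on each case and compare against the expected output. A hypothetical sketch of that loop (the real runner executes inside Pyodide; the names here are not the actual API):

```python
def run_tests(solution, cases):
    """Execute a user-submitted function against test cases locally.

    Each case is (args, expected); returns per-case pass/fail results.
    Exceptions are caught so one crashing case doesn't abort the run.
    """
    results = []
    for args, expected in cases:
        try:
            got = solution(*args)
            results.append({"args": args, "expected": expected,
                            "got": got, "passed": got == expected})
        except Exception as exc:  # surface the error as a failed case
            results.append({"args": args, "expected": expected,
                            "got": repr(exc), "passed": False})
    return results

cases = [(([3, 1, 2],), [1, 2, 3]), (([],), [])]
report = run_tests(sorted, cases)
```

Since everything runs in the user's browser, "submitting" is just calling this loop; there is no queue and no sandbox server to wait on.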
Over 400 problems tagged by difficulty, category, and algorithmic pattern. Each includes intuition explanations, common pitfalls, and multiple solution approaches with complexity analysis. The content model separates problem statements, hints, solutions, and test cases so the UI can progressively reveal each layer: problem first, hint on request, then solution. That data model decision drove the UX.
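The separation can be sketched as a layered record plus a reveal gate; the field names below are hypothetical, not the actual schema:

```python
PROBLEM = {
    "statement": "Given a sorted array and a target, return the indices "
                 "of two numbers that sum to it.",
    "hints": ["Think about what sortedness buys you.",
              "Move a pointer inward from each end."],
    "solution": "Two pointers from both ends; step the side that is off-target.",
    "tests": [],  # executed by the runner, never shown in the reveal flow
}

# Layers the UI may show, in order; "tests" is deliberately absent.
LAYERS = ["statement", "hints", "solution"]

def visible(problem, revealed):
    """Return only the layers the user has unlocked so far."""
    allowed = LAYERS[:max(0, min(revealed, len(LAYERS)))]
    return {key: problem[key] for key in allowed}

first_view = visible(PROBLEM, 1)  # statement only
```

Because the layers are separate fields rather than one blob of prose, the UI can't accidentally leak a solution: it only ever receives what `visible` returns.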
Problems are stored in PostgreSQL and served via FastAPI REST endpoints. Pattern filtering is the key feature: five sliding window problems in a row reinforces the pattern more than five random problems at the same difficulty.
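Stripped of the FastAPI routing and SQLAlchemy query plumbing, the filtering endpoint reduces to something like this (problem fields and titles are illustrative):

```python
PROBLEMS = [
    {"title": "Max Sum Subarray of Size K", "difficulty": "easy",
     "pattern": "sliding-window"},
    {"title": "Longest Substring Without Repeats", "difficulty": "medium",
     "pattern": "sliding-window"},
    {"title": "Two Sum II", "difficulty": "easy", "pattern": "two-pointers"},
]

def filter_problems(problems, pattern=None, difficulty=None):
    """Mirror of the endpoint's optional query-param filtering:
    omitted parameters match everything."""
    return [p for p in problems
            if (pattern is None or p["pattern"] == pattern)
            and (difficulty is None or p["difficulty"] == difficulty)]

sliding = filter_problems(PROBLEMS, pattern="sliding-window")
```

In production the same predicate is a SQL `WHERE` clause; the point is that pattern is a first-class filter alongside difficulty, not an afterthought tag.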
The core learning loop: pick a pattern, watch the visualisation, then solve problems that use it. The visualisation and editor sit side by side so you can reference the animation while coding. Nineteen patterns cover arrays, trees, graphs, dynamic programming, backtracking, greedy algorithms, intervals, and stacks. The goal is to see a new problem and immediately recognise which pattern applies.
Three-tier architecture: Next.js 15 frontend with static generation for problem pages and server-side rendering for filtered listings, FastAPI REST backend with SQLAlchemy ORM, and PostgreSQL for structured problem content. Algorithm visualisations are entirely client-side, built with Framer Motion and driven by discrete state snapshots. Python execution runs in the browser via Pyodide (WebAssembly), with no server involvement. The content model separates problem statements, hints, solutions, and test cases so the frontend can progressively reveal each layer.
Interactive algorithm visualisations for 19 patterns (Two Pointers, DP, Trees, etc.)
In-browser Python execution via Pyodide with instant test feedback
400+ curated problems with detailed explanations and multiple solutions
Pattern recognition training with visual walkthroughs
Filter by difficulty, category, and algorithmic pattern
Keeping Framer Motion animations in sync with algorithm state meant treating each step as a discrete snapshot (pointer positions, highlighted cells, current values) and letting Framer interpolate between them. Trying to drive both animation and logic from the same mutable state caused subtle bugs where the visual lagged behind the algorithm.
Pyodide's ~10MB download is fine once cached, but the first visit is painful without lazy loading. Loading it in the background while the user reads the problem description hides most of the latency. The key was making the editor UI fully interactive before the runtime is ready.
Building one algorithm visualisation is fun. Building nineteen that share navigation controls, consistent styling, and forward/backward stepping means you need a shared visualisation framework, not nineteen standalone components. Extracting that framework after the first three were built cost more time than building those three.
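The shared piece is small: every pattern produces a snapshot list, and a single stepper owns the navigation. A sketch of that contract in Python (the real framework is a React component; class and method names here are hypothetical):

```python
class Stepper:
    """Forward/backward navigation over any pattern's snapshot list.

    Pattern visualisations only supply the snapshots; stepping,
    bounds-checking, and replay live here once, not nineteen times.
    """
    def __init__(self, snapshots):
        if not snapshots:
            raise ValueError("need at least one snapshot")
        self.snapshots = snapshots
        self.index = 0

    @property
    def current(self):
        return self.snapshots[self.index]

    def forward(self):
        self.index = min(self.index + 1, len(self.snapshots) - 1)
        return self.current

    def back(self):
        self.index = max(self.index - 1, 0)
        return self.current

s = Stepper(["init", "compare", "move-left", "done"])
```

With this split, adding a twentieth pattern means writing only the snapshot generator and a renderer; navigation and replay come for free.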
Showing the problem, hints, and solution all at once trains users to scroll to the answer. Separating them in the data model and gating the UI changed how people actually used the platform.
Grouping problems by algorithmic pattern instead of just difficulty made the learning path obvious. Working through five sliding window problems in sequence builds more intuition than solving the same problems in random order.
Step-by-step visualisations of common algorithmic patterns. Built to help developers understand the intuition behind each approach.