A physics-based simulation platform for Design Verification Testing (DVT) and Automatic Test Equipment (ATE).
DVT engineers need expensive shared ATE racks to validate test procedures. That creates bottlenecks: you book time on the rack, wait your turn, and if something in your test sequence is wrong, you've wasted a slot. This platform lets engineers develop and validate characterisation test sequences at their desk, against a physics simulation that behaves like real hardware.
The idea came from building the TDK-Lambda ATE system. I'd written the real test framework and kept thinking about how much faster development would go if engineers could iterate on test procedures without waiting for rack time. So I built a simulation that models the same physics, speaks the same instrument protocol, and runs locally.
The core is a coupled thermal-electrical simulation for LDO voltage regulators. It models the chain from chamber temperature to case temperature to junction temperature (including self-heating from power dissipation) to temperature-dependent electrical parameters like output voltage, quiescent current, and dropout voltage.
The coupling is what makes it interesting. Power dissipation through the LDO causes self-heating, which raises junction temperature, which shifts the electrical parameters, which changes power dissipation. A naive mock with fixed values can't capture this feedback loop. The engine resolves it iteratively at 100 Hz: each timestep uses the current case temperature to calculate power dissipation, then updates the thermal state including the self-heating contribution, letting the system converge naturally over successive steps.
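The coupled timestep can be sketched as follows. This is a minimal illustration of the scheme described above, not the project's actual code: parameter names, values, and the linear tempco model are all assumptions.

```python
# Sketch of one coupled thermal-electrical timestep and its convergence.
# All names and values here are illustrative, not the project's actual API.

DT = 0.01         # 100 Hz timestep (s)
TAU_CASE = 5.0    # case thermal time constant (s), per the thermal model
THETA_JC = 15.0   # junction-to-case thermal resistance (degC/W), assumed
THETA_CA = 30.0   # case-to-ambient thermal resistance (degC/W), assumed
V_NOM = 3.3       # nominal LDO output voltage (V), assumed
TEMPCO = -0.0002  # output voltage tempco (V/degC), assumed

def step(t_case, t_chamber, v_in, i_load):
    """Electrical state from the current thermal state, then a thermal
    update that includes the self-heating contribution."""
    v_out = V_NOM + TEMPCO * (t_case - 25.0)   # tempco-shifted output
    p_diss = (v_in - v_out) * i_load           # LDO headroom loss (W)
    # First-order case response toward chamber temperature plus the
    # self-heating rise across the case-to-ambient resistance.
    target = t_chamber + p_diss * THETA_CA
    t_case += (target - t_case) * DT / TAU_CASE
    t_junction = t_case + p_diss * THETA_JC    # instantaneous junction temp
    return t_case, t_junction, p_diss

# Run long enough for the feedback loop to settle: the case converges to
# ambient plus p * theta_ca, the junction a further p * theta_jc above that.
t_case = 25.0
for _ in range(100_000):  # 1000 simulated seconds at 100 Hz
    t_case, t_junction, p_diss = step(
        t_case, t_chamber=25.0, v_in=5.0, i_load=0.5)
```

Because each step reads the thermal state before recomputing dissipation, the loop never has to solve the feedback algebraically; it simply settles over successive timesteps, which is the behaviour a fixed-value mock cannot reproduce.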
The thermal model uses first-order response with configurable time constants: 30 seconds for the chamber (mimicking a real thermal chamber's slew rate) and 5 seconds for the DUT case. Junction temperature is calculated instantaneously from case temperature plus the power-times-thermal-resistance product. All the DUT parameters (tempco, dropout voltage scaling, quiescent current drift) come from a YAML configuration, so modelling a new device means writing config, not code.
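A DUT definition under this scheme might look like the fragment below. The keys and numbers are purely illustrative, not the project's actual schema; they just show how device physics parameters become configuration rather than code.

```yaml
# Hypothetical DUT config sketch; all keys and values are illustrative.
dut:
  name: ldo-3v3-example
  nominal_output_v: 3.3
  tempco_v_per_c: -0.0002       # output voltage drift with junction temp
  dropout_v_25c: 0.35           # dropout voltage at 25 degC
  quiescent_ua_25c: 55          # quiescent current at 25 degC
  quiescent_drift_ua_per_c: 0.3
thermal:
  chamber_tau_s: 30             # chamber slew time constant (from the model)
  case_tau_s: 5
  theta_jc_c_per_w: 15
```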
The simulation exposes three virtual instruments (thermal chamber, power supply, multimeter), each running on its own TCP port and speaking industry-standard SCPI (IEEE 488.2). These are the same commands real lab instruments understand: TEMP:SETPOINT, VOLT, MEAS:VOLT:DC?, OUTP ON. A test script that talks SCPI to the simulator can talk SCPI to real hardware by changing a connection string.
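A client for such an instrument is just a line-oriented TCP connection. The sketch below is an assumed minimal client, not the project's code; host, ports, and the newline framing are illustrative, though the SCPI command strings are the ones quoted above.

```python
# Minimal line-oriented SCPI-over-TCP client sketch. Ports and framing are
# assumptions; swap host/port to point the same code at real hardware.
import socket

class ScpiClient:
    """Sends newline-terminated SCPI commands; queries read one reply line."""
    def __init__(self, host: str, port: int):
        self.sock = socket.create_connection((host, port), timeout=5)
        self.rfile = self.sock.makefile("r")

    def write(self, cmd: str) -> None:
        self.sock.sendall((cmd + "\n").encode())

    def query(self, cmd: str) -> str:
        self.write(cmd)
        return self.rfile.readline().strip()

# Hypothetical usage against the simulator's ports:
#   chamber = ScpiClient("localhost", 5025)
#   chamber.write("TEMP:SETPOINT 85")
#   dmm = ScpiClient("localhost", 5027)
#   reading = float(dmm.query("MEAS:VOLT:DC?"))
```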
The simulation server runs as a separate process from the test application, which mirrors real ATE architecture where instruments are external devices on a bench. Test code and simulation communicate exclusively over TCP sockets. There's no shared memory or direct function calls. This separation caught real integration bugs during development (timing issues, command ordering assumptions) that in-process mocks would have silently hidden.
Tests are defined against abstract instrument interfaces using Python Protocol classes, not concrete implementations. The framework handles instrument discovery (scanning the test package for ITest subclasses), condition setup, and measurement collection. A TestRunner orchestrates the lifecycle: create a run record, build the test context with instruments and logger, execute, evaluate limits, and store results.
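The Protocol-based abstraction can be sketched like this. Method and class names are illustrative, not the project's actual interfaces; the point is that the concrete SCPI-backed class satisfies the Protocol structurally, without inheriting from it, so test code stays ignorant of whether the far end is simulated or real.

```python
# Sketch of Protocol-based instrument interfaces; names are illustrative.
from typing import Protocol

class Multimeter(Protocol):
    def measure_voltage_dc(self) -> float: ...

class ScpiMultimeter:
    """Concrete implementation backed by any SCPI link (sim or hardware)."""
    def __init__(self, link):
        self.link = link  # anything with a .query(cmd) -> str method

    def measure_voltage_dc(self) -> float:
        return float(self.link.query("MEAS:VOLT:DC?"))

def run_point(dmm: Multimeter) -> float:
    # Test code depends only on the Protocol, never the concrete class.
    return dmm.measure_voltage_dc()
```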
The included TempCo test demonstrates the full workflow. It sweeps the chamber from -40°C to +85°C, waits for thermal stability at each point, takes averaged voltage measurements, then calculates the temperature coefficient via linear regression (dV/dT normalised to ppm/°C). Results go to SQLite for metadata and pass/fail verdicts, and Parquet for columnar time-series measurements, keeping structured query data separate from bulk measurement storage.
A Streamlit dashboard provides three views: a live lab bench with real-time temperature curves, power dissipation, and electrical measurements updating at 10 Hz; a test execution panel where you configure and run the TempCo characterisation; and a results viewer with measurement charts and PDF export.
The lab bench tab has sidebar controls for chamber setpoint, input voltage, load current, and a time multiplier (1x to 100x) for accelerating the simulation. You can watch the self-heating feedback loop play out in real time: enable the output, increase the load, and see junction temperature climb as power dissipation and thermal response chase each other to steady state.
The architecture has five layers: a physics engine (coupled thermal-electrical simulation at 100 Hz), virtual SCPI instruments served over TCP, a hardware abstraction layer with Protocol-based interfaces, a test framework with YAML-driven configuration, and presentation layers (Streamlit dashboard, FastAPI). The key insight is that test code talks to instruments through the same SCPI commands whether they're simulated or real, and because the simulation server is a separate process reached only over TCP sockets, the architecture mirrors a real ATE bench where instruments are external devices.
Coupled thermal-electrical physics with self-heating feedback between power dissipation and junction temperature
Industry-standard SCPI protocol over TCP, same command language as real lab instruments
Process-separated simulation server mimics real ATE bench architecture
Abstract instrument interfaces (Python Protocols) so the same test code works with simulated or real hardware
YAML-driven configuration for DUT models, physics parameters, and test conditions
SQLite + Parquet storage: metadata with pass/fail verdicts, columnar time-series for measurements
Streamlit dashboard with live temperature, power, and voltage charts at 10 Hz refresh
TempCo characterisation test: automated temperature sweep with linear regression analysis
Modelling coupled physics taught me to think in feedback loops, where changing one variable ripples through the whole system
SCPI turned out to be the right abstraction boundary: it's what hardware engineers already know, and it forced clean separation between simulation and test logic
Process separation felt like overkill at first, but it caught integration bugs that in-process mocks would have hidden
Vertical slice delivery (physics engine, virtual instruments, test runner, dashboard) meant every phase produced something demonstrable