Floating-gate silicon for the next wave of analog machine learning.
We pair decades of floating-gate research with a modern tool stack to deliver programmable analog compute blocks that drop into today's supply chain. The result is a power-efficient ML chip that stays programmable long after deployment.
Floating gates give us precise, reconfigurable analog behavior without burning power.
Each floating-gate device stores charge inside a fully insulated gate, letting us program bias currents, offsets, and weights with subthreshold-level accuracy. The same mechanisms that power Flash memory become tunable analog parameters once we pair them with the right injection and tunneling controls.
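As a rough illustration of that relationship, the Python sketch below models how charge trapped on the floating gate shifts the effective gate voltage and, through the exponential subthreshold law, sets a programmable bias current. The constants (KAPPA, U_T, I_0, C_TOTAL) are illustrative placeholders, not characterized device values.

```python
# Minimal sketch (not a production device model): how stored floating-gate charge
# maps to a programmable subthreshold bias current. All constants are illustrative.
import math

KAPPA = 0.7          # gate coupling coefficient, typical subthreshold value
U_T = 0.0258         # thermal voltage at room temperature, volts
I_0 = 1e-15          # illustrative pre-exponential current, amps
C_TOTAL = 10e-15     # total capacitance seen by the floating gate, farads

def floating_gate_current(v_gate: float, q_fg: float) -> float:
    """Subthreshold drain current for a device with charge q_fg on its floating gate.

    The trapped charge shifts the effective floating-gate voltage, so programming
    q_fg (via injection and tunneling) tunes the bias current with no refresh cycle.
    """
    v_fg = v_gate + q_fg / C_TOTAL             # charge-induced shift of the gate voltage
    return I_0 * math.exp(KAPPA * v_fg / U_T)  # exponential subthreshold law

# Example: adding positive charge raises the effective gate voltage and the current.
for q in (0.0, 1e-16, 2e-16):
    print(f"q_fg = {q:.1e} C  ->  I_d = {floating_gate_current(0.3, q):.3e} A")
```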
Programmable analog memory
Floating-gate transistors trap charge without a refresh cycle, giving each cell precise, non-volatile analog weights that stay put for field deployments.
Consistency across silicon generations
We validated floating gates from mature 350 nm CMOS through 16 nm FinFET, confirming matched behavior and tooling portability across six production nodes.
Analog-first efficiency
Measured systems routinely deliver over 1000× energy improvements versus digital baselines, unlocking battery-friendly inference without sacrificing accuracy.
Fabrication confidence
Six process nodes, one behavior profile.
Floating-gate arrays built with our libraries and calibration flow stay stable from mature nodes to advanced FinFET processes. That continuity keeps device physics predictable, simplifies qualification, and lets designers choose the node that fits their volume and cost targets.
Design once, deploy from lab benches to production wafers.
ASHES synthesis flow
Describe architectures in Python and compile directly to FPAA bitstreams or ASIC layouts. One codebase handles both exploration and production.
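To make the describe-then-compile idea concrete, here is a hypothetical Python sketch of such a flow. It does not reproduce the real ASHES API; every name in it (AnalogNet, VectorMatrixMultiply, WinnerTakeAll, compile_design) is invented purely for illustration.

```python
# Hypothetical sketch only: the real ASHES API is not shown here. All class and
# function names below are invented to illustrate a describe-then-compile flow
# that targets either an FPAA bitstream or an ASIC layout from one description.
from dataclasses import dataclass, field

@dataclass
class AnalogNet:
    """Container for an analog signal-flow graph built from library blocks."""
    blocks: list = field(default_factory=list)

    def add(self, block):
        self.blocks.append(block)
        return block

@dataclass
class VectorMatrixMultiply:
    rows: int
    cols: int

@dataclass
class WinnerTakeAll:
    inputs: int

def compile_design(net: AnalogNet, target: str) -> str:
    """Pretend compiler: map the same description to either deployment target."""
    assert target in ("fpaa_bitstream", "asic_layout")
    return f"{target} for {len(net.blocks)} blocks"

# One description, two targets.
net = AnalogNet()
net.add(VectorMatrixMultiply(rows=16, cols=8))
net.add(WinnerTakeAll(inputs=8))
print(compile_design(net, "fpaa_bitstream"))
print(compile_design(net, "asic_layout"))
```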
Unified floating-gate libraries
Standard-cell libraries capture bias points, routing patterns, and calibration data, so every design starts with proven analog building blocks.
Prototype without detours
Our 600k-device FPAA platform mirrors the ASIC fabric, letting teams evaluate neural primitives, calibrate weights, and script measurement loops before tapeout.
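A scripted measurement loop might look like the sketch below. The instrument interface (program_weight, measure_output) is a stand-in invented for illustration, not the actual FPAA tooling.

```python
# Hypothetical sketch of a scripted measurement loop on the FPAA. The functions
# program_weight and measure_output are invented placeholders, not real tooling.

def program_weight(cell: int, target_current: float) -> None:
    """Placeholder for the injection/tunneling routine that programs one FG cell."""
    print(f"program cell {cell} to {target_current:.2e} A")

def measure_output(cell: int) -> float:
    """Placeholder for a source-meter readback of the programmed current."""
    return 1.0e-9  # stand-in value

# Sweep a few target currents and log the programming error for each cell.
targets = [0.5e-9, 1.0e-9, 2.0e-9]
for cell, target in enumerate(targets):
    program_weight(cell, target)
    measured = measure_output(cell)
    error = (measured - target) / target
    print(f"cell {cell}: target {target:.1e} A, measured {measured:.1e} A, error {error:+.1%}")
```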
Machine learning, analog end-to-end
Field demonstrations prove the workflow before tapeout.
Our team has already delivered multilayer perceptrons and audio inference front ends on the FPAA using the exact cells and compilation flow that map to ASICs. The outcome: a verified training-to-silicon loop where analog ML performance is measured, not just simulated.
MLP inference without ADCs
Custom floating-gate activations and current-mode winner-take-all blocks keep the entire multilayer perceptron analog, trimming power-hungry conversions.
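The behavioral sketch below shows the idea in plain Python: currents summed on shared wires implement the vector-matrix multiply, and an idealized winner-take-all picks the output, so no conversion step sits between layers. The sizes, weight ranges, and idealized WTA are illustrative assumptions, not measurements from our silicon.

```python
# Behavioral sketch (numbers, not silicon): an MLP as current-mode vector-matrix
# multiplication followed by a winner-take-all stage, with no ADC between layers.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(0.1, 1.0, size=(8, 4))   # FG weights modeled as current gains
W2 = rng.uniform(0.1, 1.0, size=(4, 3))   # set by the charge programmed on each cell

def current_mode_layer(i_in: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Sum weighted currents on each output wire (Kirchhoff does the addition)."""
    return i_in @ W

def winner_take_all(i: np.ndarray) -> np.ndarray:
    """Idealized WTA: the largest input current wins, the rest are suppressed."""
    out = np.zeros_like(i)
    out[np.argmax(i)] = i.max()
    return out

i_in = rng.uniform(0.0, 1.0e-9, size=8)   # input currents from the previous stage, amps
hidden = current_mode_layer(i_in, W1)
logits = current_mode_layer(hidden, W2)
print("winning class:", int(np.argmax(winner_take_all(logits))))
```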
Audio feature pipeline
A ladder-filter front end paired with on-chip classifiers captures time and frequency structure simultaneously, proven on our FPAA silicon.
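As a rough digital stand-in for what the analog front end computes, the sketch below runs a small bank of band-pass filters and reports a smoothed energy per band as the feature vector handed to a classifier. The band edges, filter order, and sample rate are assumptions chosen for illustration.

```python
# Behavioral sketch in Python of a filter-bank feature front end: band-pass
# channels followed by per-band energy detection. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000                                                   # sample rate, Hz
BANDS = [(100, 300), (300, 700), (700, 1500), (1500, 3000)]   # illustrative band edges, Hz

def band_energies(audio: np.ndarray) -> np.ndarray:
    """Return one energy value per band, mimicking the analog filter bank."""
    feats = []
    for lo, hi in BANDS:
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="bandpass")
        band = lfilter(b, a, audio)
        feats.append(np.mean(band ** 2))   # energy detection per channel
    return np.array(feats)

t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 440 * t)         # 440 Hz test tone
print(band_energies(tone))                 # energy concentrates in the 300-700 Hz band
```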
Calibrated deployment
Automated measurement scripts translate digital weights into floating-gate biases, tighten corners, and prepare data for the production assembly line.
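A minimal sketch of that translation step, assuming a simple subthreshold model: scale each trained weight onto a usable current range, then invert the exponential law to find the floating-gate voltage the programming routine must reach. All constants are illustrative, not measured values.

```python
# Minimal sketch, assuming a simple subthreshold model: map a trained digital
# weight to a target floating-gate bias current, then to the effective gate
# voltage the programming routine must hit. Constants are illustrative.
import math

I_UNIT = 1.0e-9     # current representing a weight of 1.0, amps (assumed scale)
I_0 = 1e-15         # illustrative pre-exponential current, amps
KAPPA = 0.7         # gate coupling coefficient
U_T = 0.0258        # thermal voltage, volts

def weight_to_target_current(w: float) -> float:
    """Scale a trained weight onto the usable current range of one FG cell."""
    return abs(w) * I_UNIT

def target_current_to_vfg(i_target: float) -> float:
    """Invert the subthreshold law to get the floating-gate voltage to program."""
    return (U_T / KAPPA) * math.log(i_target / I_0)

for w in (0.12, 0.5, 0.93):
    i_t = weight_to_target_current(w)
    print(f"w = {w:.2f} -> target {i_t:.2e} A -> V_fg ~ {target_current_to_vfg(i_t):.3f} V")
```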