Role: Senior 3D Engineer
Company: Cubitts
Duration: 2025 (6 months)
Stack: Three.js, TypeScript, MediaPipe, iOS, Blender, Python, Node.js, AWS
Heru 2 fitting workflow
Related Projects:
- Heru 1 - The original consumer app (2019-2022)
- Frame Up - Automated 3D modeling pipeline
- Tartare - The Three.js framework powering the visualization
From Consumer App to Clinical Tool
I returned to Cubitts in 2025 to rebuild the platform I originally created three years prior. The mission: transform Heru from a consumer recommendation app into a precision instrument for professional opticians.
While Heru 1 was built for speed (<2s recommendations), the bespoke team needed clinical accuracy. I rebuilt the core physics engine and landmark system from scratch, cutting fitting error by 75% (from ±2mm to ±0.5mm) while preserving real-time performance.
The Accuracy Gap
The original system used approximated geometry to scan 60+ frames in seconds. This was great for finding a “look,” but insufficient for manufacturing custom eyewear.
Opticians needed:
- Sub-millimeter precision (±0.5mm vs the old ±2mm)
- Interactive control to slide and adjust frames manually
- Real-time feedback on pressure points
- Support for non-standard faces that the old heuristic rejected
I had to ditch the approximations and solve the full geometric constraints in real time.
Rebuilding the Physics Engine
I replaced the old ray-casting system with a custom grid-based collision engine.
1. Spatial Partitioning

Instead of testing every triangle, I implemented a uniform grid subdivision.
- Broad-phase: Instantly filters out 90% of non-colliding geometry.
- Narrow-phase: Performs precise triangle-triangle intersection only where needed.
This allowed us to use high-fidelity production meshes instead of simplified proxies, achieving a 33x speedup (<300ms per fit) with zero loss in precision.
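Conceptually, the broad phase hashes every face-scan triangle into fixed-size cells once, and each frame query then only visits the cells its own bounding boxes overlap. Below is a minimal TypeScript sketch of that idea using three.js math types; the cell size, class name, and string-keyed buckets are illustrative choices, not the production engine.

```typescript
import { Box3, Triangle } from 'three';

// Uniform grid broad phase: hash static face-scan triangles into cells once,
// then answer "which triangles might this box touch?" cheaply per query.
class UniformGrid {
  private cells = new Map<string, number[]>(); // cell key -> triangle indices

  constructor(private cellSize: number, triangles: Triangle[]) {
    const box = new Box3();
    triangles.forEach((tri, i) => {
      box.setFromPoints([tri.a, tri.b, tri.c]);
      this.forEachCell(box, (key) => {
        const bucket = this.cells.get(key) ?? [];
        bucket.push(i);
        this.cells.set(key, bucket);
      });
    });
  }

  // Broad phase: indices of face triangles whose cells overlap the query box.
  query(box: Box3): Set<number> {
    const candidates = new Set<number>();
    this.forEachCell(box, (key) => {
      for (const i of this.cells.get(key) ?? []) candidates.add(i);
    });
    return candidates;
  }

  // Visit every grid cell overlapped by an axis-aligned bounding box.
  private forEachCell(box: Box3, visit: (key: string) => void): void {
    const min = box.min.clone().divideScalar(this.cellSize).floor();
    const max = box.max.clone().divideScalar(this.cellSize).floor();
    for (let x = min.x; x <= max.x; x++)
      for (let y = min.y; y <= max.y; y++)
        for (let z = min.z; z <= max.z; z++) visit(`${x},${y},${z}`);
  }
}
```

Only the candidate indices returned by `query()` move on to the exact triangle-triangle tests of the narrow phase, which is what keeps high-poly production meshes affordable.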
2. Constraint-Based Solver
Opticians don’t just want a recommendation; they want to tweak it. But moving a frame on a 3D face is complex—changing one axis breaks the fit on others.
I built a real-time constraint solver (sketched after this list). When an optician slides the frame forward (Z-axis):
- The system automatically recalculates height (Y) and tilt (pantoscopic angle).
- It ensures the frame “glides” along the nose bridge while maintaining proper ear contact.
- It visualizes pressure points instantly (red zones in the video).
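The heart of that behaviour can be shown with a heavily simplified model: reduce the nose bridge to a depth-to-height profile and the ear to a single contact point, then re-derive height and tilt from the requested Z offset. Everything below (the FramePose shape, the analytic bridge profile, the example numbers) is an illustrative stand-in for the production solver, which works against the scanned mesh.

```typescript
import { Vector3 } from 'three';

interface FramePose {
  position: Vector3;        // frame-front origin in head space (mm)
  pantoscopicAngle: number; // tilt about the X axis, in radians
}

// Stand-in for "how high does the nose bridge sit at depth z?"
// In production this is sampled from the scanned face mesh.
type BridgeProfile = (z: number) => number; // z (mm) -> y (mm)

// Head space here: +Y up, +Z out of the face toward the viewer.
function solveSlide(zOffset: number, bridge: BridgeProfile, earContact: Vector3): FramePose {
  // 1. The frame glides along the bridge: height follows the nose profile at the new depth.
  const position = new Vector3(0, bridge(zOffset), zOffset);

  // 2. Re-tilt so the temple still reaches the ear: assuming the temples meet the
  //    frame front at a right angle, the pantoscopic angle equals the dip of the
  //    hinge-to-ear line below horizontal.
  const dy = position.y - earContact.y; // frame front sits above the ear contact point
  const dz = position.z - earContact.z; // ...and in front of it
  return { position, pantoscopicAngle: Math.atan2(dy, dz) };
}

// Example: slide the frame 2mm toward the face on a hypothetical bridge profile.
const pose = solveSlide(-2, (z) => 18 - 0.4 * z, new Vector3(0, 8, -85));
console.log(pose.position, (pose.pantoscopicAngle * 180) / Math.PI); // ≈ 7° of tilt
```

In the real workflow the bridge profile and ear point come from the scan and its landmarks rather than an analytic curve.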
Replacing the Eyes: MediaPipe Pipeline
The original app relied on Bellus3D for facial landmarks. When they shut down, we lost our data source.
I built a new pipeline using Google’s MediaPipe Face Mesh:
- Rotation Solving: The system analyzes the scan from multiple angles to find the optimal orientation.
- Landmark Extraction: Detects 468 facial points in under a second.
- 3D Projection: Maps 2D landmarks back onto the 3D scan topology.
The result was 99.9% detection success, handling even extreme head rotations that broke the old system.
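The projection step can be illustrated with one straightforward approach: cast a ray from the scan camera through each normalized landmark and keep the hit point on the scan mesh. The sketch below assumes the landmarks were detected on a render taken with that same three.js camera; the names are illustrative.

```typescript
import { Mesh, PerspectiveCamera, Raycaster, Vector2, Vector3 } from 'three';

interface Landmark2D { x: number; y: number } // normalized [0,1], origin at top-left

// For each 2D landmark, cast a ray from the camera through the image point and
// return where it lands on the scan mesh (or null if it misses the scan).
function projectLandmarks(
  landmarks: Landmark2D[],
  camera: PerspectiveCamera,
  scanMesh: Mesh
): (Vector3 | null)[] {
  const raycaster = new Raycaster();
  return landmarks.map(({ x, y }) => {
    // Image coordinates -> NDC: x in [-1, 1], y flipped so +1 is the top of the frame.
    const ndc = new Vector2(x * 2 - 1, -(y * 2 - 1));
    raycaster.setFromCamera(ndc, camera);
    const hit = raycaster.intersectObject(scanMesh, false)[0];
    return hit ? hit.point.clone() : null;
  });
}
```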
Asset Pipeline Automation
To support this new level of precision, our 3D assets needed to be perfect. I wrote a Blender addon (Python) for the asset team that:
- Automates the cleanup of CAD files
- Generates the collision-optimized GLB files
- Validates geometry before export (see the sketch below)
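The addon itself runs inside Blender's Python API. Purely as an illustration of the kind of validation gate it applies, here is a TypeScript sketch that flags missing normals and degenerate triangles on an imported three.js BufferGeometry; the area threshold is a made-up value.

```typescript
import { BufferGeometry, Triangle, Vector3 } from 'three';

// Return a list of problems found in a triangle mesh; an empty list means it passes.
function validateGeometry(geometry: BufferGeometry, minArea = 1e-8): string[] {
  const issues: string[] = [];
  if (!geometry.hasAttribute('normal')) issues.push('missing vertex normals');

  const pos = geometry.getAttribute('position');
  const index = geometry.getIndex();
  const tri = new Triangle();
  const a = new Vector3(), b = new Vector3(), c = new Vector3();
  const triCount = (index ? index.count : pos.count) / 3;

  for (let i = 0; i < triCount; i++) {
    const [ia, ib, ic] = index
      ? [index.getX(i * 3), index.getX(i * 3 + 1), index.getX(i * 3 + 2)]
      : [i * 3, i * 3 + 1, i * 3 + 2];
    tri.set(a.fromBufferAttribute(pos, ia), b.fromBufferAttribute(pos, ib), c.fromBufferAttribute(pos, ic));
    if (tri.getArea() < minArea) issues.push(`degenerate triangle at index ${i}`);
  }
  return issues;
}
```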
Results
By focusing on physics and tooling rather than just “AI magic,” we turned a fun consumer app into a serious clinical tool.
- Accuracy: Improved from ±2mm to ±0.5mm.
- Speed: Full fit calculation in <300ms on M1 iPads.
- Workflow: Saved opticians 2-3 minutes per consultation by automating manual measurements.