Geometry-first targeting for counter-drone defense. No radar. No RF. No emissions. Deterministic 3D coordinates at the sensor node.
THEIA is KUHAKEN's stereo vision engine: the first passive optical system to deliver deterministic fire-control coordinates without radar, without lidar, and without a centralized command link.
No radar. No RF. No laser. No emissions of any kind. The defender cannot be located. The system cannot be jammed.
Centimeter precision at ranges deemed impossible until now. Not a probability cloud: a deterministic 3D coordinate, computed from geometry alone.
Runs on the edge. No fusion dependency. No centralized C2. Detection, lock, and aimpoint output in under one second.
THEIA completes the full engagement chain autonomously, from first pixel to intercept coordinate, without active emissions, radar cueing, or operator hand-off.
Both cameras must agree. THEIA cross-validates every signal across the stereo pair using epipolar geometry: if a contrast pixel in motion doesn't appear at the correct position in both frames simultaneously, it is discarded before it enters the pipeline. False alarms are rejected at the geometry level, not filtered after the fact.
Contrast pixels in motion → confirmed by both cameras → detection event.
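The cross-check above can be sketched in a few lines, assuming a known fundamental matrix F between the two cameras. The function names and the 1.5-pixel tolerance are illustrative assumptions, not THEIA internals:

```python
import numpy as np

def epipolar_residual(F, x_left, x_right):
    """Distance (pixels) from the right-image point to the epipolar line
    induced by the left-image point. Small residual => geometrically consistent."""
    xl = np.array([x_left[0], x_left[1], 1.0])
    xr = np.array([x_right[0], x_right[1], 1.0])
    line = F @ xl  # epipolar line a*x + b*y + c = 0 in the right image
    return abs(xr @ line) / np.hypot(line[0], line[1])

def cross_validate(F, left_candidates, right_candidates, tol_px=1.5):
    """Keep only left/right detections that satisfy the epipolar constraint;
    everything else is discarded before it enters the tracking pipeline."""
    confirmed = []
    for xl in left_candidates:
        for xr in right_candidates:
            if epipolar_residual(F, xl, xr) < tol_px:
                confirmed.append((xl, xr))
    return confirmed
```

For a rectified stereo pair, epipolar lines are horizontal, so the residual reduces to the row difference between the two detections.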
Once confirmed, the target is assigned an asymmetric tracking volume: tight lateral boundaries, with the depth axis elongated to absorb the noise physics imposes at range. A fresh 3D position is computed every frame. Even when depth error spikes by tens of meters, the lock holds: the track never drops on geometry the system was designed to expect.
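The elongated depth axis follows directly from stereo geometry: triangulation error grows quadratically with range (sigma_Z = Z² · sigma_d / (f · B)), so a gate sized by that formula absorbs exactly the noise described above. A minimal sketch; the baseline, focal length, disparity-noise, and lateral-bound values are illustrative, not the product's:

```python
def depth_sigma(range_m, baseline_m, focal_px, disparity_sigma_px=0.25):
    """1-sigma depth error of stereo triangulation at a given range.
    sigma_Z = Z^2 * sigma_d / (f * B): error grows quadratically with Z."""
    return range_m**2 * disparity_sigma_px / (focal_px * baseline_m)

def tracking_gate(range_m, baseline_m=1.0, focal_px=4000.0, k=3.0):
    """Asymmetric tracking volume: fixed tight lateral bounds, depth bound
    scaled to k standard deviations of the expected triangulation error.
    All numeric parameters here are assumptions for illustration."""
    return {
        "lateral_m": 0.5,  # tight, range-independent
        "depth_m": k * depth_sigma(range_m, baseline_m, focal_px),  # elongates with range
    }
```

With these example numbers, at 500 m the depth sigma is about 15.6 m, so a 3-sigma depth gate spans tens of meters while the lateral bounds stay sub-meter: exactly the asymmetry the text describes.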
No RF receiver. No trained model. Classification emerges from the trajectory itself: speed profile, directional change rate, altitude behavior, and kinematic consistency over time. A military UAS, a commercial drone, and a bird produce distinct signatures in 3D space. THEIA reads them without needing to hear the target transmit anything.
How it moves is what it is. The 3D track is the classification signal.
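One way to read "the 3D track is the classification signal": reduce the track to kinematic statistics and threshold them. A hedged sketch; the feature set and names are illustrative, not the actual discriminator:

```python
import numpy as np

def kinematic_features(track_xyz, dt):
    """Summary statistics of a 3D track sampled at fixed interval dt.
    track_xyz: (N, 3) array of positions. Returns mean speed, speed
    variability, heading-change rate, and altitude jitter: the kinds of
    signatures that separate a UAS, a commercial drone, and a bird."""
    v = np.diff(track_xyz, axis=0) / dt          # per-frame velocity vectors
    speed = np.linalg.norm(v, axis=1)
    # angle between consecutive velocity vectors (radians)
    cos = np.einsum("ij,ij->i", v[:-1], v[1:]) / (speed[:-1] * speed[1:] + 1e-9)
    turn = np.arccos(np.clip(cos, -1.0, 1.0))
    return {
        "mean_speed": speed.mean(),                    # m/s
        "speed_cv": speed.std() / (speed.mean() + 1e-9),
        "turn_rate": turn.mean() / dt,                 # rad/s
        "alt_jitter": np.std(np.diff(track_xyz[:, 2])),  # m per frame
    }
```

A fixed-wing UAS yields low turn rate and low speed variability; a bird yields high altitude jitter and erratic heading changes. Whether THEIA thresholds such features or gates them differently is not specified in the source.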
Candidates converge through iterative probabilistic gating: hundreds of possible positions reduced, frame by frame, to a single 10 cm voxel. The output is not a bounding box center. It is a weighted spatial coordinate plus a velocity vector, computed from the optical density inside the volume. Effector-ready. Every frame.
Live outdoor proof-of-concept. Real camera hardware. Uncontrolled environment.
Fire-control-grade 3D lock on zero-emission drones. The only passive system that delivers an aimpoint without a radar cue.
Continuous passive surveillance for critical infrastructure. No emitter signature. No maintenance window for active sensors.
Replaces lidar with stereo cameras that work at highway distances. Same geometry, different scale.
Delivers a grip point, not a probability cloud, at the moment of grasp.
Replaces structured light and active depth sensors on the production line. Passive, reconfigurable, no distance limit.
Delivers the precise coordinates GPS cannot provide at the quayside. Removes manual spotting from crane and berth operations.
If you are building or deploying next-generation defense systems, let's talk.