Deploy native spatial
computing to the DOM.
Beautifully designed hand-gesture components that you can copy and paste into your apps. Accessible. Customizable. Fully Open Source.
Components are completely headless: you copy the raw source code into your project and you own the execution. You can tweak the logic, styling, and framer-motion physics directly.
Powered by Google's incredibly fast MediaPipe Tasks Vision API. The entire hand-tracking pipeline runs in deeply optimized WebAssembly, entirely on the client.
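As a rough sketch of how a per-frame tracking loop consumes that pipeline, the snippet below models a detector with a MediaPipe-style detectForVideo() shape and maps the index-finger tip (landmark 8 in MediaPipe's hand model) into viewport pixels. The interface and function names here are illustrative, not Kine UI's actual API; MediaPipe emits landmark coordinates normalized to [0, 1].

```typescript
// Landmark coordinates are normalized to [0, 1] by MediaPipe.
type Landmark = { x: number; y: number; z: number };

// Illustrative stand-in for a MediaPipe-style detector:
// one inference call per video frame, keyed by timestamp.
interface HandDetector {
  detectForVideo(timestampMs: number): { landmarks: Landmark[][] };
}

// Map the index-finger tip (landmark 8) of the first detected hand
// into viewport pixels; null when no hand is visible this frame.
function indexTipToViewport(
  detector: HandDetector,
  timestampMs: number,
  viewportWidth: number,
  viewportHeight: number,
): { x: number; y: number } | null {
  const result = detector.detectForVideo(timestampMs);
  const hand = result.landmarks[0];
  if (!hand) return null;
  const tip = hand[8];
  return { x: tip.x * viewportWidth, y: tip.y * viewportHeight };
}
```

In a real component this would run once per requestAnimationFrame tick, feeding the result into the spring layer described below.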
Raw landmark data from the webcam is inherently jittery. We pass every spatial coordinate through Framer Motion spring physics to smooth it before it ever reaches the UI.
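To show what that smoothing does numerically, here is a minimal sketch of one semi-implicit Euler step of a damped spring, using the same damping/stiffness/mass parameters that framer-motion springs accept. This is an illustration of the physics, not framer-motion's internal integrator.

```typescript
type SpringState = { position: number; velocity: number };

// One semi-implicit Euler step of a damped spring pulling
// `position` toward `target`. dt is seconds per frame (e.g. 1/60).
function springStep(
  state: SpringState,
  target: number,
  dt: number,
  { damping = 20, stiffness = 300, mass = 0.5 } = {},
): SpringState {
  const displacement = state.position - target;
  const force = -stiffness * displacement - damping * state.velocity;
  const velocity = state.velocity + (force / mass) * dt;
  return { position: state.position + velocity * dt, velocity };
}
```

Feeding a jittery landmark coordinate in as `target` every frame yields a trajectory that lags slightly but never jumps, which is exactly the trade the spring parameters tune.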
Components degrade gracefully. If users do not grant camera access, or if they are on mobile devices without ideal viewing angles, the components fall back cleanly to standard mouse/touch interaction.
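The fallback decision can be sketched as a tiny pure function over two capability flags. The flag names below are illustrative, not Kine UI's actual API; in practice the camera flag would come from a getUserMedia() attempt.

```typescript
type InteractionMode = "spatial" | "pointer";

// Hedged sketch: any missing capability drops the component
// back to standard mouse/touch handling.
function resolveInteractionMode(opts: {
  cameraGranted: boolean;  // did the getUserMedia() request succeed?
  hasWebAssembly: boolean; // is the WASM tracking backend available?
}): InteractionMode {
  return opts.cameraGranted && opts.hasWebAssembly ? "spatial" : "pointer";
}
```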
Just like shadcn/ui, you don't install a heavy package. You run npx @opendevsociety/kine-ui add to fetch the exact files you need.
Spatial Canvas
Deploy native spatial components directly into the DOM.
Air Cursor
Index-finger spatial tracking. Pinch velocity thresholding for ultra-precise DOM clicks.
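A pinch "click" can be sketched as the Euclidean thumb-index gap plus the speed at which it closes. The thresholds below are made-up example values, not Kine UI's tuned defaults; landmarks 4 and 8 are the thumb and index fingertips in MediaPipe's hand model.

```typescript
type Point3 = { x: number; y: number; z: number };

const dist = (a: Point3, b: Point3) =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// A click fires only when the gap is small AND closing fast,
// which rejects slow, accidental finger drift.
function isPinchClick(
  prevGap: number,           // thumb-index distance last frame
  thumbTip: Point3,          // landmark 4
  indexTip: Point3,          // landmark 8
  dt: number,                // seconds since last frame
  gapThreshold = 0.05,       // normalized units (illustrative)
  closeSpeedThreshold = 0.5, // units per second (illustrative)
): { click: boolean; gap: number } {
  const gap = dist(thumbTip, indexTip);
  const closingSpeed = (prevGap - gap) / dt; // positive while closing
  return {
    click: gap < gapThreshold && closingSpeed > closeSpeedThreshold,
    gap,
  };
}
```

Returning the new gap lets the caller feed it back in as `prevGap` on the next frame.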
Swipe Area
High-velocity palm tracking. Triggers only on fast, sweeping lateral motion.
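The velocity gate can be sketched as a single comparison on the palm's horizontal speed between frames. The threshold is an illustrative example value, not Kine UI's default.

```typescript
// Classify one frame of palm motion as a swipe, or null if the
// lateral velocity stays under the threshold.
function detectSwipe(
  prevX: number,           // palm x last frame, normalized [0, 1]
  currX: number,           // palm x this frame
  dt: number,              // seconds between frames
  velocityThreshold = 1.5, // screen-widths per second (illustrative)
): "left" | "right" | null {
  const vx = (currX - prevX) / dt;
  if (vx > velocityThreshold) return "right";
  if (vx < -velocityThreshold) return "left";
  return null;
}
```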
Air Scroll
Wrist Y-axis velocity mapping. Hands-free page scrolling with native pinch-protection lockout.
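The mapping from wrist velocity to scroll distance, including the pinch lockout, can be sketched as one pure function. The gain and deadzone values are illustrative, not Kine UI's defaults.

```typescript
// Convert one frame of wrist motion into a scroll delta in pixels.
function scrollDelta(
  prevWristY: number,   // wrist y last frame, normalized [0, 1]
  wristY: number,       // wrist y this frame
  dt: number,           // seconds between frames
  pinchActive: boolean, // pinch-protection lockout
  gain = 1200,          // px of scroll per unit of velocity (illustrative)
  deadzone = 0.05,      // ignore tiny tremors, units/s (illustrative)
): number {
  if (pinchActive) return 0; // never scroll mid-pinch
  const vy = (wristY - prevWristY) / dt;
  if (Math.abs(vy) < deadzone) return 0;
  return vy * gain * dt;
}
```

The deadzone keeps a resting hand from creeping the page, and the lockout keeps a pinch drag from doubling as a scroll.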
Pinch to Zoom
Bimanual Euclidean scaling. The distance between your hands maps linearly onto the DOM scale factor.
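The linear mapping reduces to one ratio: the current Euclidean span between the two hands over the span when the gesture began. A minimal sketch, with illustrative names:

```typescript
type Point2 = { x: number; y: number };

// Scale factor = current inter-hand distance / distance at gesture start.
// Spreading the hands to twice the initial span doubles the element.
function zoomScale(leftHand: Point2, rightHand: Point2, initialSpan: number): number {
  const span = Math.hypot(rightHand.x - leftHand.x, rightHand.y - leftHand.y);
  return span / initialSpan;
}
```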
The Physics Engine
Raw landmark data from the webcam is inherently jittery. Kine UI pipes every spatial coordinate through framer-motion spring physics to keep UI updates fluid.
// Wrap a raw landmark coordinate in a spring (illustrative names)
const smoothX = useSpring(rawX, {
  damping: 20,
  stiffness: 300,
  mass: 0.5
});
// Hardware-accelerated repaint
Start Building
> Two commands to go from zero to spatial computing. The init script scaffolds the engine, installs dependencies, and mounts the tracking core automatically.