CodingHCI2025

Embodied — Kinect v2 × TouchDesigner

An interface-free installation where your body is both the controller and the canvas — Kinect v2 skeletal tracking feeding four TouchDesigner rendering pipelines.

01 · Premise

Body as the only interface.

The brief was open: explore embodied interaction in a course rooted in HCI fundamentals. I narrowed it to a question — what happens when there's no screen to touch, no buttons, no menus? Just a person, a room, and a sensor. The Kinect v2 streams full skeletal data — 25 joints per tracked body — at 30 frames per second; TouchDesigner gives you a node graph to do almost anything with it. The hard part wasn't the tech — it was deciding which mappings between movement and image were worth making.

02 · Four pipelines

Four ways to be seen.

I built four distinct visualisation modes, each interrogating a different relationship between body and image. Time Slice samples your silhouette across moments and stacks them as overlapping ghosts, making your past visible. Cloudy Trail maps motion velocity into a fluid simulation, so faster gestures bloom into denser smoke. Two more modes — Generative Texture and Temporal Propagation — explore mapping joint positions into procedural patterns. The point of having four was to argue that there's no single 'right' way to translate motion into image.
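The Time Slice idea can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the actual TouchDesigner network: the class name, the `history`/`decay` parameters, and the list-of-lists frame format are all assumptions made for the example.

```python
from collections import deque

class TimeSlice:
    """Stack the last N silhouette frames as exponentially fading ghosts.
    Frames are 2D grids of floats in [0, 1] (1.0 = body pixel)."""

    def __init__(self, history=8, decay=0.6):
        self.frames = deque(maxlen=history)  # newest frame appended last
        self.decay = decay                   # per-frame ghost attenuation

    def push(self, silhouette):
        self.frames.append(silhouette)

    def composite(self):
        """Blend ghosts newest-to-oldest; older frames are dimmer."""
        if not self.frames:
            return []
        h, w = len(self.frames[0]), len(self.frames[0][0])
        out = [[0.0] * w for _ in range(h)]
        for age, frame in enumerate(reversed(self.frames)):  # age 0 = newest
            weight = self.decay ** age
            for y in range(h):
                for x in range(w):
                    out[y][x] = max(out[y][x], frame[y][x] * weight)
        return out
```

Pushing one silhouette per sensor tick and compositing once per display frame yields the 'temporal corridor' effect: each past pose survives as a progressively dimmer ghost. In TouchDesigner the same thing is more naturally done with a feedback TOP, but the math is the same.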

Time Slice mode — overlapping silhouettes of the body across recent moments
Time Slice — every frame leaves a faint ghost. Walking through the space draws a temporal corridor behind you.
Cloudy Trail mode — fluid simulation following body motion
Cloudy Trail — joint velocity drives a fluid solver. Slow movement is sparse; sudden gestures bloom.
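The velocity-to-density mapping behind Cloudy Trail can also be sketched in plain Python. The function names and the `v_min`/`v_max` thresholds below are illustrative assumptions, not the values tuned in the installation; in the real patch this drives a fluid solver's emitter rather than returning a number.

```python
import math

FPS = 30.0  # Kinect v2 skeletal stream rate

def joint_speed(prev, curr, dt=1.0 / FPS):
    """Speed (m/s) of one joint between consecutive frames.
    prev/curr are (x, y, z) positions in metres."""
    return math.dist(prev, curr) / dt

def emission_density(speed, v_min=0.2, v_max=2.5):
    """Map joint speed onto [0, 1] smoke-emission density.
    Below v_min nothing is emitted; at v_max the emitter saturates."""
    t = (speed - v_min) / (v_max - v_min)
    return min(1.0, max(0.0, t))
```

The clamp at `v_min` is what makes slow movement sparse: resting joints jitter by a few centimetres per frame, and without a floor the smoke never clears.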
03 · Process

Trial, error, and a lot of cabling.

I'd never used TouchDesigner before this course, so the work was front-loaded with learning. I prototyped each mode in isolation — got skeletal data flowing in, then built the visual layer on top — before composing them into a single switchable installation. A lot of the time was spent tuning thresholds: what counts as 'fast' motion, how long ghosts should persist, when noise becomes pleasant texture vs. visual chaos. The HCI literature on direct manipulation and embodied cognition shaped which compromises I made.
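The threshold tuning described above comes down to two tricks: smoothing the raw speed signal and adding hysteresis, so the visuals don't flicker when a value hovers near a single cutoff. A minimal sketch, with illustrative numbers rather than the values actually tuned for the installation:

```python
class MotionGate:
    """Classify motion as 'fast' using an exponential moving average
    plus hysteresis (separate enter/exit thresholds)."""

    def __init__(self, on=1.5, off=0.8, alpha=0.3):
        self.on, self.off = on, off  # m/s: enter / exit the 'fast' state
        self.alpha = alpha           # EMA smoothing factor (1.0 = none)
        self.smoothed = 0.0
        self.fast = False

    def update(self, speed):
        # The moving average tames per-frame sensor jitter.
        self.smoothed = self.alpha * speed + (1 - self.alpha) * self.smoothed
        if self.fast:
            self.fast = self.smoothed > self.off  # stay 'fast' until well below
        else:
            self.fast = self.smoothed > self.on   # only trip well above
        return self.fast
```

The gap between `on` and `off` is exactly the tuning question from the text: too narrow and the ghosts stutter, too wide and the installation feels unresponsive.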

Outcome

A working installation with four switchable modes, demonstrated live with a Kinect v2 and a projector, and documented in a course report covering the system architecture, a mode-by-mode algorithmic breakdown, and a UX analysis.

Reflection

I'd push further on the social dimension next time — what happens with two bodies in the space, not one? The current pipelines collapse multiple skeletons into a single silhouette; making them dialogue with each other would be a much richer brief.