The audience's movement and presence in the space are detected by a ceiling-mounted depth camera and processed by the main software running on a laptop.
The software then decides on the robot's kinetic behaviours and sends commands to a microcontroller driving four stepper motors, one per axis of actuation for the top and bottom sections of the robot.
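
The motor command protocol isn't detailed here, but a minimal sketch of what that firmware could look like, assuming DRIVER-style stepper drivers, placeholder pin assignments, and a made-up "motor index, target position" serial format, might be:

```cpp
// Hypothetical motor firmware: four steppers, one per axis of the top and
// bottom sections. Pins and the serial command format are assumptions.
#include <AccelStepper.h>

AccelStepper steppers[4] = {
  AccelStepper(AccelStepper::DRIVER, 2, 3),   // top section, axis 1
  AccelStepper(AccelStepper::DRIVER, 4, 5),   // top section, axis 2
  AccelStepper(AccelStepper::DRIVER, 6, 7),   // bottom section, axis 1
  AccelStepper(AccelStepper::DRIVER, 8, 9),   // bottom section, axis 2
};

void setup() {
  Serial.begin(115200);
  for (auto &s : steppers) {
    s.setMaxSpeed(2000.0);      // steps/s, tuned per mechanism
    s.setAcceleration(1000.0);  // acceleration ramps for smooth motion
  }
}

void loop() {
  // Parse "index target" pairs sent by the laptop software.
  if (Serial.available()) {
    int idx = Serial.parseInt();
    long target = Serial.parseInt();
    if (idx >= 0 && idx < 4) steppers[idx].moveTo(target);
  }
  // Non-blocking stepping keeps all four axes moving concurrently.
  for (auto &s : steppers) s.run();
}
```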

To add a finer level of interaction, I designed and installed a PCB in each vertebra of the robot; each board uses an ESP32 to expose four capacitive touch inputs and four LED outputs.
Together these boards form a high-resolution 3D map of touch inputs and LED outputs across the robot's body, enabling more granular touch-based interactions and light-based behaviours.
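
A minimal sketch of one vertebra board's firmware follows; the pin choices and touch threshold are placeholders, and the local touch-to-LED loop is just for illustration (in the installation, the light behaviours are decided by the main software). On the ESP32, touchRead() returns lower values as a hand approaches, so a touch registers when the reading falls below a threshold:

```cpp
// Hypothetical vertebra board: an ESP32 exposing four capacitive touch
// inputs and four LED outputs. Pin numbers and threshold are assumptions.
const int touchPins[4] = {T0, T3, T4, T5};  // touch-capable GPIOs
const int ledPins[4]   = {16, 17, 18, 19};
const int TOUCH_THRESHOLD = 30;  // touchRead() drops below this when touched

void setup() {
  for (int i = 0; i < 4; i++) pinMode(ledPins[i], OUTPUT);
}

void loop() {
  for (int i = 0; i < 4; i++) {
    // Lower reading = closer hand on the ESP32's capacitive touch pins.
    bool touched = touchRead(touchPins[i]) < TOUCH_THRESHOLD;
    digitalWrite(ledPins[i], touched ? HIGH : LOW);  // local test feedback
  }
  delay(20);  // ~50 Hz poll rate
}
```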

Each PCB's microcontroller communicates with the main computer wirelessly over OSC, avoiding an excess of data cabling running throughout the system.
A lot of care went into the distinctive shape of the PCBs so that a board can be installed or replaced in any vertebra without disassembling the whole robot.
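
A minimal sketch of that wireless link, using the CNMAT OSC library over UDP; the network credentials, the laptop's address and port, and the /vertebra/touch address pattern are all assumptions rather than the project's actual values:

```cpp
// Hypothetical OSC reporting: each vertebra's ESP32 sends its four touch
// readings to the main computer as one UDP/OSC message.
#include <WiFi.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>   // CNMAT OSC library

WiFiUDP udp;
const IPAddress hostIp(192, 168, 1, 10);  // laptop running the main software
const int hostPort = 9000;
const int vertebraId = 3;                 // set per board

void setup() {
  WiFi.begin("robot-net", "password");    // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
}

void loop() {
  // One message per vertebra: its ID, then the four raw touch values.
  OSCMessage msg("/vertebra/touch");
  msg.add(vertebraId);
  for (int pin : {T0, T3, T4, T5}) msg.add((int)touchRead(pin));

  udp.beginPacket(hostIp, hostPort);
  msg.send(udp);      // serialize the OSC message into the UDP packet
  udp.endPacket();
  msg.empty();        // release the message's argument buffer
  delay(33);          // ~30 Hz update rate
}
```

OSC over UDP suits this kind of installation: the messages are small and fire-and-forget, and OSC is natively supported by most creative-coding environments on the receiving laptop.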

