2nd Digital Arts Student Competition 2025 - Speculative Futures
Learning to Move, Learning to Play, Learning to Animate
Award Winner (Graduate)
Artist(s):
· Mingyong Cheng · Sophia Sun · Han Zhang · Yuemeng (Lindsey) Gu · erika
School Information:
University of California, San Diego
Graduate Student: PhD
Artwork Information:
Medium: Multimedia Performance
Duration: 3-minute teaser; 20-minute full performance
Category: AI Artwork, Electronics and Art, Performance, and Robotics
Description:

“Learning to Move, Learning to Play, Learning to Animate” is a cross-disciplinary multimedia performance exploring intelligence as an emergent, symbiotic phenomenon beyond anthropocentric boundaries. Inspired by David Abram’s concept of the “more-than-human world,” this interactive work integrates human performers, robots crafted from organic materials, AI-generated visuals, and biofeedback-driven soundscapes in an evocative speculative environment. Movement, sonic resonance, and shadow play, echoing Plato’s allegory of the cave, invite audiences into an experiential inquiry about intelligence, perception, and coexistence. Challenging prevailing notions, the performance reimagines intelligence as extending beyond human confines, encompassing beings of wood, stone, metal, and silicon. By embracing speculative possibilities, the work critiques prevailing separations between technology, nature, and humanity, envisioning future worlds where synthetic and organic intelligences coexist and collaborate fluidly. Participants are encouraged to reflect on learning, adaptability, and co-creation, imagining alternative realities where collective intelligence transcends human-centered paradigms. Positioned within interconnected networks rather than portrayed as alien threats, artificial intelligences become partners in exploring complexity, agency, and knowledge inherent to our shared existence.

Technical Information:

This piece was developed through interdisciplinary collaboration across visual arts, music, and computer science, grounded in the “more-than-human world” philosophy of David Abram. The performance integrates found-object robots crafted from natural materials, inspired by the spiraling plant motions documented by Darwin. These robots are mechanized using Arduino-driven actuators synchronized with the performers’ movements. Real-time generative AI visuals, created through StreamDiffusion and motion-tracking data captured via Azure Kinect, dynamically respond to performers, layering radiographic and organic aesthetics. The soundscape employs field recordings from the building process, merged with electronically synthesized textures, highlighting physical interactions such as human touch and robotic movements. Additionally, biofeedback data is gathered from living plants using electromyography sensors and translated into real-time auditory and visual experiences through Arduino interfaces and software such as TouchDesigner and Ableton Live. The immersive environment unifies natural, technological, and synthetic elements, transforming stage and screen into interactive, responsive spaces. This layered, speculative exploration emphasizes the interconnected, adaptive nature of intelligence, inviting audiences to envision new, collaborative relationships among humans, nature, and technology.
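The biofeedback pipeline described above (raw plant-sensor readings smoothed and mapped to sound or visual parameters) can be sketched in miniature. This is an illustrative example only, not the artists' actual code: the 10-bit ADC range, smoothing factor, and function names are all assumptions, standing in for whatever mapping the Arduino, TouchDesigner, and Ableton Live chain performs in the real piece.

```python
# Hypothetical sketch of a biofeedback-to-parameter mapping:
# raw sensor samples (e.g. from an Arduino analog pin) are smoothed
# with an exponential moving average, then normalized to 0.0-1.0 for
# use as a synth or visual control value.

def smooth(samples, alpha=0.2):
    """Exponential moving average over raw sensor readings.

    alpha is an assumed smoothing factor; higher = more responsive,
    lower = steadier output.
    """
    out = []
    level = samples[0]
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out

def to_control(value, lo=0, hi=1023):
    """Map an Arduino-style 10-bit ADC reading to a 0.0-1.0 control value."""
    value = max(lo, min(hi, value))  # clamp out-of-range readings
    return (value - lo) / (hi - lo)

if __name__ == "__main__":
    raw = [512, 530, 520, 800, 790, 300]  # fake biofeedback samples
    controls = [round(to_control(v), 3) for v in smooth(raw)]
    print(controls)
```

In a live setup, the normalized values would typically be sent onward as OSC or MIDI messages rather than printed; the smoothing step keeps jittery sensor noise from producing audible or visible flicker.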

Additional Images: