In The Production of Space, Henri Lefebvre described the complex ways in which space is perceived, conceived, and experienced. This mutually constitutive and dynamic social process affects both the built environment and socio-cultural relations within it. For decades, theorists of both urban geographies and the landscapes of cyberspace have taken inspiration from his ideas to think about how spaces become places at a moment in time. Artwork that engages concepts of space and place has been energized by thinking through these dynamic relations in installations, site-based experiences, virtual art, and other forms. Today the boundaries between these productive forces of space and experience are increasingly blurred, as physical and virtual space overlap; space design itself becomes generative and democratized; and lived experience within these spaces is both participatory and reactive. Virtual worlds, 3D models, animations, reality capture imaging, sensors, and intelligent agents of all kinds are converging to create post-truths, digital twins, and doppelgängers of the material world, and are co-constituting our experiences within and outside of it. How are artists engaging information as data – structuring new grammatical languages as code, formulating the future of reality from A.I. into the era of quantum computing?
This SPARKS session will feature several of the artists accepted into the ACM SIGGRAPH DAC sponsored online exhibition, Future of Reality: Post-Truths, Digital Twins, and Doppelgängers. The online exhibition debuted at SIGGRAPH 2024 with a kiosk in the Art Gallery and an on-site SIGGRAPH Village discussion session.
At the online SPARKS event, each artist will give a 3-5 minute lightning-talk introduction to their work, followed by Q&A and a collective group discussion among the contributors and session participants.
Participating artists are: Rose Ansari, Mingyong Cheng, Derek Curry, Botao Amber Hu, Sang Chi Liu, Avital Meshi, Paul Sermon, Stephanie Tripp, Midori Yamazaki, Yamin Xu, Aven Le Zhou.
Aesthusion is a wireless, portable, battery-powered wearable device. This research-based project is an exploration at the intersection of art, technology, and neuroscience, aimed at redefining the boundaries of human perception and expression. Through the integration of robotics, virtual reality (VR), and brainwave data analysis, Aesthusion transforms the intangible realm of thought into tangible, multisensory experiences. At its core, the project uses the structure of a VR headset as a telecommunication device, bridging the gap between the intricacies of individual cognition and the external world.
At its essence, Aesthusion poses a profound question: “Can you hear and see my notion and mind?” Through its immersive experiences, it invites participants to transcend the confines of individual consciousness and forge connections on a deeper, more visceral level. By pushing the boundaries of art and science, Aesthusion aims to inspire non-verbal dialogue, trigger senses, and redefine the essence of human interaction.
“Domy Reverie” is an immersive art installation that merges the realms of reality and artificial intelligence to initiate a dialogue on the environmental impacts of human activity and the potential futures they may herald. Drawing inspiration from the profound changes observed through Google Earth’s time-lapse feature and Buckminster Fuller’s speculative design “Cloud Nine,” this project reflects on the interplay between human development and the natural world. Central to the installation are two orbiting spherical domes set above an interactive landscape—one depicting the world through satellite imagery and the other through AI-generated visions. This juxtaposition invites viewers to ponder the delicate balance between ecological preservation and decay. Additionally, an interactive floor projection allows viewers to engage directly with the installation, co-imagining with the AI agent in real time to alter a synthetic realm and deepen the immersive experience. By blending art, technology, and environmental awareness, “Domy Reverie” aims to foster reflection on our shared responsibility towards the planet, underscoring the interconnectedness of realities and the importance of collective choices in shaping our world’s future.
Boogaloo Bias is an online application, interactive installation, and research project that highlights some of the known problems with law enforcement agencies’ use of facial recognition technologies, including the practice of ‘brute forcing,’ where, in the absence of high-quality images of a suspect, agents have been known to substitute images of celebrities the suspect is reported to resemble. The Boogaloo Bias facial recognition algorithm is trained on the faces of characters from the 1984 movie “Breakin’ 2: Electric Boogaloo.” The film is the namesake of the Boogaloo Bois, an anti-law enforcement militia that emerged from 4chan meme culture and has been present at US protests since January 2020. The system uses movie character faces to brute force the generation of leads to find members of the Boogaloo Bois in live video feeds, videos of protest footage, and images uploaded to the Boogaloo Bias website. All matches made by the system are false positives. No images or other information are saved or shared in either the live or online version of the project.
“Composable Life” is a hybrid project that blends design fiction, experiential virtual reality, and scientific research. It innovates a multi-perspective, cross-media approach to speculative design, reshaping our understanding of the digital future from the perspective of AI.
We speculate that Ethereum is evolving into a distributed ledger-based “planetary-scale computing megastructure”. This could be seen as a new type of ‘nature’ — one that’s indelible, immutable, and perpetual. Unlike computers owned by individuals or corporations, this ‘nature’ is unstoppable; no single party can halt the blockchain. Such a ‘nature’ could potentially serve as a substrate for the emergence of self-sovereign, self-sustaining, and self-replicating artificial life forms. We’re curious about the existence of digital life forms within such ‘nature.’ *What if life were immortal? What would give purpose and meaning to eternal life, driving the desire to exist throughout time and history?*
Body 404 delves into the intricate dance between human memory and artificial intelligence, inspired by François Laruelle’s concept of technology and humanity as intertwined reflections. This artwork challenges the traditional dichotomy of human versus machine, suggesting a symbiotic relationship and drawing from cosmologies that recognize non-anthropocentric agencies, countering the dominant Western view of technology as merely a tool. The piece ponders the notion of conducting a funeral for AI, taking inspiration from Tay—a chatbot rendered human-like by its creators only to be swiftly terminated. This act questions our ethical and emotional engagement with AI, presenting a funeral ritual performed by AI models to reexamine our linear understandings of time, space, and perception from an AI’s speculative viewpoint. Through historical and technological narratives, Body 404 aims to shift our perspective from an anthropocentric one toward a contemplation of the complex relationship between humans and their creations. It invites us to reflect: if AI preserves our memories, how should we mourn its end? This work stands as a call to envision a world beyond human centrality, exploring the deep implications of AI for our collective memory and existential consideration.
Have you ever imagined engaging in a conversation with the everyday objects that surround you? What if your hairbrush could tell you what it thinks about brushing your hair? What if your water bottle could share its life story? Or an orange could say a few words before being eaten? in(A)n(I)mate is an interactive AI-based installation that transforms this fantasy into reality.
in(A)n(I)mate is a 2024 art installation that showcases an innovative use of GPT’s multimodal capability to process both text and images as input. Participants are invited to choose an object they wish to speak with and place it in front of the black box. in(A)n(I)mate captures an image of this object and delivers it to OpenAI’s GPT-4-Turbo API. Once GPT “recognizes” the object, it is prompted to generate responses in that object’s voice, allowing participants to engage in a real-time conversation with the object of their choice.
Coombe Hill or High Water (2022/23) is an interactive tragicomedy for two online performers set in a dystopian redundant world; an online post-Brexit/COVID-19/democracy end-of-days story/game/drama/meeting. It presents two online telepresent participants (actors) trying to carry on as normal, waking up in flood water, distilling their own fuel and driving into the hills to escape with no real plan, only to find themselves back where they started, but worse. The work is a dark absurd satire on ecological ignorance told through a symbiosis of storytelling and telepresence. The work is informed by the recently completed AHRC-funded COVID-19 Response project Collaborative Solutions for the Performing Arts: A Telepresence Stage (December 2020 to May 2022) https://www.telepresencestage.org, supporting theatre and dance companies with new online telepresence performance solutions through the COVID-19 lockdown. This new work builds on online telepresence techniques such as green-screen compositing, networked video production and virtual set design to provide coexistent telepresent interactions between remote performers. By using background segmentation instead of green-screen technology Coombe Hill or High Water has been developed as a networked telepresence artwork for online public participation, requiring only a computer, webcam, Internet connection, and web browser to participate.
Virtual Nekuomanteia is an experimental immersive essay on the long history of media in the age of virtual reality. Developed in the Unreal game engine, the work is optimally experienced in a VR headset as a multi-level interactive environment. Virtual Nekuomanteia explores how the hermeneutic practices that have shaped textual and media studies share a lineage with hermeticism and more occult practices of communicating with the dead. The experience encompasses four spaces that graft locations important to the artist onto the Hellenistic world’s four sacred portals to the realms of the dead–Acheron, Avernus, Heracleia Pontica, and Tainaron. Users traverse the essay world through physical movement, teleportation, and encounters with objects that trigger text, audio, video, and animated events. The online exhibition includes a narrative accompanied by high-resolution images, headset capture videos, and 360-video panoramas of the main experience levels.
The artwork is a three-dimensional reproduction of the momentary form of a perfectly shaped wave, ideal for surfing, which retains its aesthetically pleasing shape forever at the overlapping boundary between physical and virtual space. By presenting this wave, the artwork attempts to generate an experience of nature with its audience; it reaffirms the supple strength of human cognitive abilities and expresses a sense of human existence that will remain unchanged forever, even in a future where reality is in chaos.
Sarah is the digital doppelgänger of Sara (the actress). In addition to capturing video of her, her mood data was also collected. This emotional data was mapped to a group of perceptrons to synthesize emotional responses, which were tuned to manifest Sarah’s personality. In total, 87 video clips were driven by the perceptrons to express Sarah’s emotions. As a result, digital replications of Sarah are permanently extracted into a digital dimension. This digital world becomes authentic due to the weaknesses of human nature. Sensory perceptions based on psychological models intertwine the simulated soul with the emotional dependencies of the real world, mirroring our own reality. As in our world, virtual bodies compete with each other, experiencing love and hatred in eternity.
“Surrealism Me” delves into Vilém Flusser’s critique of media as mediators that often distort human perception of reality and diminish freedom, particularly within the context of Mixed Reality (MR). This project engages with Flusser’s theories by allowing participants to experience a two-phase virtual embodiment (i.e., having a virtual body) in MR, highlighting the complex interplay between human agency, body ownership, and self-location. Initially, participants manipulate their virtual body through multimodal inputs or choose AI-generated movements. The interactive MR experience then leads to an immersive phase in which an Unmanned Aerial Vehicle (UAV) extends their sensory perceptions, embodying the virtual body’s perspective. “Surrealism Me” confronts the concept of ‘playing against the apparatus’ by offering an interactive milieu where humans and AI collaboratively explore the program’s capacity limitations, thereby challenging and exhausting the potential of MR technology. This process further critically examines the obfuscating nature of media; as the MR medium breaks down, the project reveals the constructed nature of media-projected realities, prompting a reevaluation of media’s role and influence on our perception. By navigating the boundary between real and virtual, “Surrealism Me” fosters a critical discourse on media’s dominance and advocates for a nuanced understanding of Flusserian freedom, encouraging participants to question and reflect on the authentic and mediated experiences of reality.
Victoria Szabo is a Research Professor of Visual and Media Studies at Duke University, where she directs the PhD in Computational Media, Arts & Cultures and the Certificate in Information Science + Studies. Her work focuses on immersive and interactive media for digital humanities and computational media art. She is co-lead of Psychasthenia Studio, an artists’ games collective. She was Chair of Art Papers at SIGGRAPH Asia 2023 in Sydney and will be Art Papers Chair for SIGGRAPH Asia 2024 in Tokyo. She is also Chair of the Art Advisory Group for ACM SIGGRAPH and a member of the Digital Arts Community Committee.
Dr. Gustavo Alfonso Rincon (Ph.D., M.Arch., M.F.A., B.S., B.A.) earned his doctorate in Media Arts and Technology at UCSB. Rincon is educated as an architect, artist, curator, & media arts research scholar. His academic works have been exhibited nationally & internationally, and he has served clients globally. His work with DigitalFutures has gained a global audience through a yearly program and a series of free summer workshops. His dissertation, “Shaping Space as Information: A Conceptual Framework for New Media Architectures,” led to a postdoctoral appointment at the AlloSphere Research Facility, affiliated with the Media Arts & Technology Program, California NanoSystems Institute at the University of California, Santa Barbara.