
This survey of AI, computation, fabrication, information (data), and robotics speculates on future trends in built form, looking at how research, practice, and experimentation come together to blur the lines between the creative imagination and the real. It poses an aesthetic challenge to the existing formal languages of material form, redefining New Mediated Architectures. Humans have long built environments inspired by nature; what new research practices have extended the potential of AI and robotics, imagining new ways of thinking across physical, virtualized, and mixed paradigms? Information is intertwined with our states of humanity. We ask our community of artists, scientists, and researchers to share visions, as “proposals” for a better world, by revealing their research as a paradigm shift engaging current technologies. This session will explore the conceptual implications of new visions for change in contemporary research practice, combining the Arts, Design, and Sciences in AI, (AR/VR/XR/Real) Worlds & Verses, Robotics, and Speculative Design/Arts Futures.

When we get sick, we can a) constantly take painkillers (fight the symptoms), b) take curative medication until drug tolerance kicks in (a non-sustainable intervention), or c) stop doing what is making us sick (sustainable action). Technology today too often fights the symptoms. Interactive maps on smartphones help us navigate, but overreliance on mapping software can erode our spatial skills. Automatic translation services help us communicate with people from other countries, but blind reliance on them will prevent us from ever learning a foreign language. Could some smart technology make us better at spatial reasoning while helping us navigate? Teach us a foreign language while translating for us? I am interested in technologies that improve our innate human skills and conditions in a sustainable way, so that we will have gained something even when we next find ourselves without the technology at hand.

Architectonic Media Interventions (AMI) is a research-creation project, articulated through a series of works, that lies at the intersection of computational art and architecture. Experimentation in this research area focuses on building media-centric architectural components within existing built environments. Developed in the n-D::StudioLab (York University, Toronto), artworks are the primary driver of innovation in AMI. AMI outputs include hardware, software, and design-workflow strategies that facilitate artistic expression in this work. Computational art provides a unique vantage point for exploring the impact that ubiquitous digital technology has on the built environment and our everyday lives.

We understand the world we inhabit through storytelling. At the same time, the stories we tell are only compelling to the extent that they immerse us in worlds that we believe in.
Through World Building, Experimental designs speculative architectures and human-scale narratives that invite people to step into these possible worlds and immerse themselves in believable futures. Using game engines and other digital tools, we can prototype and test new spatial systems and human journeys at varying scales and resolutions. Each world is grounded in deep research into the environment, culture, economy, resources, emerging technologies, and infrastructure, so that the result reads as plausible future reality rather than science fiction.

This talk explores the potential of AI for artistic expression and storytelling through video. Inspired by projects like “Anticipating 2020s,” I present a novel approach to video generation using deep learning. Unlike generative models, my research focuses on analyzing and rearranging existing film footage, specifically from science fiction movies. By categorizing this footage based on its content using deep learning algorithms, I create a database that allows for the algorithmic generation of new narratives. This approach aligns with the concept of “Database Aesthetics,” shifting the focus from understanding the inner workings of AI to its potential for artistic creation.
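
As a rough illustration of this pipeline (not the author's actual implementation): the sketch below tags existing clips with an off-the-shelf CLIP model and then assembles a sequence by querying the resulting tag database, in the spirit of “Database Aesthetics.” The clip directory, the content categories, and the narrative arc are all placeholder assumptions.

```python
"""Sketch: tag existing film clips by content, then assemble a new sequence.

Hypothetical pipeline only; the talk's actual model and taxonomy are not
specified in the abstract.
"""
import glob
import cv2  # pip install opencv-python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor  # pip install transformers

# Content categories are illustrative placeholders, not the author's taxonomy.
LABELS = ["a city skyline", "a spaceship interior", "a close-up of a face",
          "a crowd scene", "a desolate landscape"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def middle_frame(path):
    """Grab one representative frame from the middle of a clip."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) // 2)
    ok, frame = cap.read()
    cap.release()
    return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) if ok else None

# Build the database: each clip is stored with its best-matching content tag.
database = []
for path in glob.glob("clips/*.mp4"):  # hypothetical clip directory
    frame = middle_frame(path)
    if frame is None:
        continue
    inputs = processor(text=LABELS, images=frame, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    database.append({"clip": path, "tag": LABELS[int(probs.argmax())]})

# "Database aesthetics": a narrative is a query order over tags, not new pixels.
arc = ["a city skyline", "a close-up of a face", "a desolate landscape"]
sequence = [e["clip"] for tag in arc for e in database if e["tag"] == tag]
print(sequence)  # the edit list for the generated narrative
```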

As learning-based methods increasingly supplant rule-based approaches across computational applications, they are defining a new paradigm in visual media computation. This presentation examines the principal challenges of employing learning-based techniques in image processing and outlines three strategies designed to address them, each blending automated processes with human interaction. By leveraging the strengths of both machine automation and human creativity, we anticipate a richer and more diverse ecosystem of visual media generated through learning-based methods.
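
The three strategies themselves are not detailed in this abstract; as a loose, generic illustration of blending automation with human interaction, a tool might pair an automated proposal stage with a human selection stage, as in the sketch below. The contrast-enhancement stand-in for a learned model and the file names are assumptions.

```python
"""Sketch: a generic human-in-the-loop loop for image processing tools.
Illustrates only the general automation + human-choice pattern."""
from PIL import Image, ImageEnhance  # pip install pillow

def candidates(img, strengths=(0.6, 1.0, 1.6)):
    """Automated stage: propose several processed variants of one image.
    A learned model would go here; contrast enhancement is a stand-in."""
    return [(s, ImageEnhance.Contrast(img).enhance(s)) for s in strengths]

def human_pick(variants):
    """Interactive stage: show the variants and let a person choose one."""
    for i, (s, v) in enumerate(variants):
        v.show(title=f"variant {i} (strength {s})")
    return variants[int(input("pick variant index: "))]

img = Image.open("input.jpg")  # hypothetical input
strength, chosen = human_pick(candidates(img))
chosen.save("output.jpg")
print(f"kept strength {strength}; it could seed defaults for the next image")
```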

Aesthusion = Aesthetic + Visual + Fusion. Aesthusion is a portable, wireless wearable device merging art, technology, and neuroscience. It explores human perception and expression through robotics, VR, and brainwave analysis. The VR headset serves as a communication bridge between cognition and reality, translating brainwave data into visual and auditory experiences. Drawing from synesthesia and Gestalt theory, it captures individual cognitive states. Using AI, it creates mesmerizing visual fluctuations. Aesthusion challenges art and perception norms by integrating dissonance in color and sound. It introduces the concept of the “third ear,” extending beyond auditory perception. Through immersive experiences, it fosters deeper connections and non-verbal dialogue. By blending art and science, Aesthusion redefines human interaction, aiming to provoke senses and inspire introspection.
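
As a loose illustration of what “translating brainwave data into visual and auditory experiences” can mean in practice (not Aesthusion's actual mapping): the sketch below computes relative EEG band powers and maps them to a color and a tone frequency. The band ranges are standard; the synthetic signal and the mapping are invented for demonstration.

```python
"""Sketch: map EEG band power to a color and a tone frequency.
Generic demonstration only; Aesthusion's real pipeline is not public here."""
import numpy as np  # pip install numpy

FS = 256  # sample rate in Hz (typical for consumer EEG headsets)
BANDS = {"alpha": (8, 12), "beta": (13, 30), "theta": (4, 7)}  # standard bands

def band_powers(signal, fs=FS):
    """Relative power per band via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power.sum()
    return {name: power[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def to_audiovisual(p):
    """Hypothetical mapping: calm (alpha) -> blue, focus (beta) -> red + higher pitch."""
    peak = max(p.values())
    rgb = (int(255 * p["beta"] / peak),
           int(255 * p["theta"] / peak),
           int(255 * p["alpha"] / peak))
    pitch_hz = 220 + 660 * p["beta"]  # more beta -> higher tone
    return rgb, pitch_hz

# Demo with a synthetic one-second "recording" (stand-in for headset data).
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
rgb, pitch = to_audiovisual(band_powers(eeg))
print(f"color {rgb}, tone {pitch:.0f} Hz")
```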

Visitors to Cultural Heritage (CH) sites are often only able to observe the current, degraded state of these locations, without the understanding of their history and the personal connection possessed by local inhabitants. To facilitate expression of these intangible aspects of CH, we collected images of the past and future of CH sites created with Generative AI (GenAI) by workshop participants, and displayed them both in Augmented Reality (AR) and as physical drawings made by a drawing robot. The collected imagined images are shown in an AR app that uses markers placed on a large-scale 3D map model of the city. The images are also drawn physically by the robot, and these in-progress drawings themselves serve as markers for the AR. The experience illuminates intangible connections between people and CH sites. This work highlights how GenAI-created images can be presented in AR and physical forms to empower imagination and expression for social purposes.

How does a spatial medium convey non-textual languages through the depth of musical patterns? How do we survive the chaotic environment of today’s sensory overload and embrace a future of uncertainty? Maelstrom is a spatial instrument and audio-visual installation performed by the audience. It offers a means to navigate the whirlpool of overwhelming messages by providing an accessible, audio-driven pattern recognition tool. Drawing on habits and memories of the past, the tool helps users reorient themselves through recursive patterns of sound and visuals in space, and, further, to make decisive, mutual impacts on the ever-evolving environment. By challenging the capabilities and perceptual processes of the human brain, it critiques the politics of art and technology and the sensory hierarchy of the prevailing ocular-centric culture.
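
As a hedged sketch of what an accessible audio-driven pattern recognition tool could look like at its simplest (the installation's actual analysis is not described in the abstract): detect the dominant repeating period in a sound's onset envelope and drive visuals at that period. The file path and the visual mapping are assumptions.

```python
"""Sketch: surface recursive rhythmic structure in audio.
A minimal stand-in for Maelstrom-style pattern recognition."""
import numpy as np
import librosa  # pip install librosa

# Load any audio file (hypothetical path); mono, native sample rate.
y, sr = librosa.load("input.wav", sr=None, mono=True)

# Onset strength envelope: where "events" happen in the sound.
env = librosa.onset.onset_strength(y=y, sr=sr)

# Autocorrelation of the envelope exposes recursive (repeating) patterns.
ac = librosa.autocorrelate(env)
lag = int(np.argmax(ac[1:]) + 1)   # strongest non-zero lag, in envelope frames
period_s = lag * 512 / sr          # onset_strength's default hop is 512 samples

print(f"dominant repeating period: {period_s:.2f} s")
# A visual layer could pulse or spiral at this period, letting the audience
# reorient inside the "whirlpool" through a recognizable recursion.
```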

Bhavleen Kaur is an interdisciplinary architect, researcher, and educator exploring the multiplicities of computational design and robotics in architectural education. Currently pursuing her Doctor of Design at FIU, she serves as a senior research assistant for the NSF’s Research on Innovative Technologies for Enhanced Learning (RITEL) program at the Robotics and Digital Fabrication Lab. Bhavleen is an AAUW International Fellow and a recipient of the AUTODESK-ACADIA 2023 BIPOC scholarship. She is also a steering member of DigitalFUTURES, an independent, volunteer-run online platform for architectural education. Her work explores the future of autodidactic learning to address technical and societal challenges, generating new architectural visions that push the limits of what is considered probable, desirable, and even imaginable. Bhavleen earned her postgraduate degree from ISEC Lisbon and ELISAVA Barcelona, and was an assistant professor of computational design at BSSA, Mumbai, from 2015 to 2022.

Virginia Ellyn Melnyk is a computational architectural designer and researcher exploring textiles, craft, and deployable structures. Pursuing a PhD in the DigitalFUTURES International program at Tongji University, she focuses on soft structures through material techniques and computational methods. Melnyk holds a Master of Architecture from the Weitzman School of Design at the University of Pennsylvania and a Bachelor of Science in Architecture from the University at Buffalo. Her work spans projects at Studio Pei Zhu and volunteer work with DigitalFUTURES and Architecture is Free. Currently, as the inaugural Foundations Fellow at Virginia Tech, she explores deployable structures and soft materials for performative designs.

Dr. Gustavo Alfonso Rincon (Ph.D., M.Arch., M.F.A., B.S., B.A.) earned his doctorate in Media Arts and Technology at UCSB. Rincon is trained as an architect, artist, curator, and media arts researcher. His academic work has been exhibited nationally and internationally, and he has served clients globally. His dissertation, “Shaping Space as Information: A Conceptual Framework for New Media Architectures,” led to a postdoctoral appointment at the AlloSphere Research Facility, CNSI@UCSB.