
Friday, December 9, 2022 – 4PM EST/21:00 GMT
Watch a recording of the presentations and discussion.
Session Description
What are the imagined dimensional flows, shapes, and spaces within contemporary Arts, Design, and Science research? How are creativity, intelligence, practice, and theory measured against the systemic societal narratives controlling our world today? Our spaces can be conceptually qualified through a formalization of space and time, while also being quantified by measuring observations of behaviors and exchanges of information within the many complex systems that make up our various concatenated modern experiences.
Our awareness of time can be reimagined within, outside of, and through non-Western modes of thought. Will we as researchers have the immediate courage to address the future by engaging the grand challenges (climate) that limit our full evolutionary human potential right now? What are the limiting factors that stop our local and global communities from banding together and addressing the urgent questions of access, education, and systemic inequality?
This session will explore the conceptual implications of the potentiality of new visions for change in contemporary research practice combining the Arts, Design, and Sciences in A.I., (AR/VR/XR/Real) Worlds & Verses, and Speculative Design Futures.
What are the foundational systemic elements, adapting to our existing societal changes, that make up our world today? How will myriad technological languages of experimentation affect socially mediated paradigms of experience in the age of quantum computing?
This dialogue brings together leading researchers in Media Arts, Design (Architectures), and Computational Science to discuss and challenge the existing canonical definitions and conceptual frameworks of the Arts, Humanities, and Sciences.
Presenters
The Collaborative XR Design Lab
Yara Feghali
We simultaneously occupy a multitude of virtual worlds, whether through our social media profiles, online games, discussion forums, or extended reality avatars. Since 2020 we have entered a new digital era of internet technology: we are now in Web 3.0, or the Spatial Web, where interfaces, computation, and data have drastically changed. Far from the antiquated concept of a physical/virtual binary, we now live in a technocultural society that encourages cyberfeminist pluralism and indefiniteness. The Collaborative XR Design Lab investigates the future of the spatial web and immersive social spaces. The Lab is divided into four user groups: the Traders design the Marina and the Bazaar; the Explorers design the beaches and cultural spaces; the Dwellers design housing and parks for their dogs; and the Farmers design land and algae farms. We model each user group’s avatars, buildings, infrastructure, asset packs, and mood boards through extensive research.
Yara is a French and Lebanese architectural designer and faculty member at UCLA working at the intersection of architecture education, transmedia, and immersive technologies. She is the creative director of Folly Feast Lab, which she co-founded with Viviane El Kmati. Her studio is based in Santa Monica and creates visually led immersive and interactive experiences to address present social and urban themes using AI, game engines, and 3D scanning technology.
https://yarafeghali.com/The-Collaborative-XR-Design-Lab-The-Island-2022
Making an immersive virtual environment: How we perceive space
Kon Hyong Kim
Virtual reality (VR) technology is becoming increasingly prevalent in fields ranging from art, design, and entertainment to engineering and science. Some applications require immersive virtual environments (IVEs), in which the user perceives the virtual scene as a real environment. These include spatial art exhibitions, virtual tours of architectural designs, virtual mirrors of facilities, and spatial data exploration. However, head-mounted displays are not always the best VR system for the situation, as they do not support multiple users at the same time. Projection and large displays offer alternative advantages, but they are not easy to use effectively.
In this talk, I will give an overview of what immersion and presence are, how a person perceives the space around them, and how to make an effective immersive virtual environment.
My research focuses on easily creating highly immersive virtual environments, mainly through projection-based VR systems. I have designed, constructed, and operated multiple VR environments and am actively researching how to enhance immersive experiences. I currently work with the AlloSphere Research Group.
Extended Digital Twins: Location-Based Theater Experience with Projected Dancers
You-Jin Kim
Augmented reality via the Integrated Visual Augmentation System (IVAS) has the ability to optimize training and operations by presenting the user with accurate, real-time information.
While a few projects exploring the cognitive impacts of ambulatory AR on user locomotion and interactability in physical space have been presented in recent years, the understanding of digital content within these inherited physical layouts has not yet been explored. With the help of a complete digital twin of the space, a LiDAR-scanned physical environment, we constructed a proof-of-concept 3D visualization system in which we can further imagine and visualize our digital content layer on top of the physical layout.
Both the physical layer of the buildings and the digital stage layer will be rendered in the system, allowing artists to visualize their content in relation to the physical layout and providing key insights into content creation for specific locations, historic sites, and architectures.
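As a rough illustration of the idea of layering digital content onto a scanned physical frame, the sketch below places a hypothetical virtual "stage" into the coordinate frame of a LiDAR-derived model. It is a minimal, assumption-laden example, not the project's actual pipeline; the offsets, dimensions, and geometry are invented for the sake of the example.

```python
# Illustrative sketch (not the project's code): placing a digital "stage" layer in the
# coordinate frame of a LiDAR-scanned physical model, assuming both layers are point
# sets expressed in meters. All values below are hypothetical.
import numpy as np

def place_layer(digital_points: np.ndarray, offset_m=(0.0, 0.0, 0.0), yaw_deg=0.0) -> np.ndarray:
    """Rotate the digital layer about the vertical axis, then translate it into the
    scanned building's frame so both layers can be rendered together."""
    yaw = np.radians(yaw_deg)
    rotation = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [0.0,          0.0,         1.0]])
    return digital_points @ rotation.T + np.asarray(offset_m)

# Example: a 2 m x 2 m virtual stage corner set 5 m into the scanned courtyard,
# rotated 30 degrees to align with a wall.
stage = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 2.0, 0.0], [0.0, 2.0, 0.0]])
placed = place_layer(stage, offset_m=(5.0, 5.0, 0.0), yaw_deg=30.0)
```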
You-Jin Kim is a Ph.D. student in Media Arts and Technology at the University of California, Santa Barbara. Their primary research interests lie in Virtual Reality, Mixed Reality, and HCI, and they are interested in exploring new ways of interaction in immersive mixed reality.
https://www.yujnkm.com/
https://sites.google.com/view/artx22/home
Multimodal Interactive Granular Synthesis Design
Myungin Lee
AlloThresher is a multimodal instrument for audiovisual granular synthesis controlled by a gestural interface. Granular synthesis is a sound synthesis method that creates complex tones by combining and mixing simple micro-sonic elements called grains. With a smartphone in each hand, the gestural interface, interpreted from the phones’ sensors, enables the performer to precisely and intuitively set and play the parameters of the granular synthesis in real time. Graphically, corresponding visuals are generated simultaneously for each granule, based on the spectrogram of the sound, and morph and blend dynamically with the gesture.
By breaking away from conventional interfaces such as knobs and sliders, this seamless connection between modalities exploits the profound advantage of the gestural interface. Moreover, the performer’s presence and gestures become part of the space and the performance, so that the audience can observe and cohesively connect the audio, visuals, and interface simultaneously.
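For readers unfamiliar with the technique, the sketch below shows granular synthesis in its most minimal form: short windowed grains taken from a source signal are scattered in time and summed. It is an illustrative Python/NumPy example only, not the AlloThresher implementation, and the grain parameters that a gestural interface would control in real time are fixed constants here for simplicity.

```python
# Minimal granular synthesis sketch (illustrative only; not the AlloThresher code).
# A "grain" is a short, windowed slice of a source signal; many grains scattered
# in time and mixed together produce a complex, evolving texture.
import numpy as np

def granulate(source, sr=44100, grain_ms=50, n_grains=400, out_seconds=4.0, seed=0):
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)            # smooth the grain edges to avoid clicks
    out = np.zeros(int(sr * out_seconds))
    for _ in range(n_grains):
        start = rng.integers(0, len(source) - grain_len)   # where to read a grain
        onset = rng.integers(0, len(out) - grain_len)      # where to place it in time
        out[onset:onset + grain_len] += source[start:start + grain_len] * window
    return out / np.max(np.abs(out))          # normalize to avoid clipping

# Example: granulate a 2-second sine sweep into a 4-second texture.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
source = np.sin(2 * np.pi * (220 + 440 * t) * t)
texture = granulate(source, sr)
```

In a performance setting, parameters such as grain size, density, and read position would be driven continuously by the smartphone sensor data rather than held at fixed values.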
Myungin Lee is a researcher designing multimodal instruments based on scientific theory, composition, signal processing and machine learning, and gestural interfaces. He pursues cohesive multimodal instrument design and composition by establishing crossmodal correspondences between different modalities, including physics, audio, graphics, and interface. He is currently a Ph.D. candidate affiliated with the AlloSphere Research Group at UCSB, and he previously held an internship in the Experiments in Art and Technology (E.A.T.) program at Nokia Bell Labs in 2020.
https://www.myunginlee.com/allothresher
Equivalence (2022/2023): Interactive Real-time Space Construction Through Speech
George Legrady, Yixuan Li
In this talk, I will introduce “Equivalence,” a microphone-based installation that takes speech from audiences as input and, using language processing, transforms the signal into visual structures in 3D space. It is a collaborative project with George Legrady, Dan Costa Baciu, and Yixuan Li. This interactive installation is an artistic work that explores the potential of cross-modal interaction in producing images. The intent is to investigate to what degree the syntactic structure of language can be a means by which to build a stream of varying emergent visual forms. The project is a contemporary, updated version of “Equivalents II,” a pioneering new media text-generating visualization installation realized in 1992 by Prof. George Legrady and featured in numerous museums. Three levels of language processing are conducted on the captured speech: Word-Level, Sentence-Level, and Document-Level, representing how language is interpreted.
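As a rough sketch of how word-, sentence-, and document-level passes over captured speech might be layered, the example below computes simple features at each level. It is a hypothetical illustration in plain Python, not the Equivalence system, whose actual mapping from language structure to 3D visual form is not reproduced here.

```python
# Illustrative three-level text analysis pass (not the Equivalence code).
# Word level: tokens per sentence; sentence level: per-sentence token counts;
# document level: crude aggregate statistics. An installation would map such
# features onto parameters of 3D visual structures.
import re

def analyze(transcript: str) -> dict:
    sentences = [s.strip() for s in re.split(r"[.!?]+", transcript) if s.strip()]
    word_level = [re.findall(r"\w+", s) for s in sentences]      # tokens per sentence
    sentence_level = [len(tokens) for tokens in word_level]      # sentence lengths
    document_level = {
        "n_sentences": len(sentences),
        "n_words": sum(sentence_level),
        "mean_sentence_length": sum(sentence_level) / max(len(sentences), 1),
    }
    return {"words": word_level, "sentences": sentence_level, "document": document_level}

features = analyze("Speech becomes structure. Structure becomes space in three dimensions.")
print(features["document"])
```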
The Experimental Visualization Lab (ExpVisLab) is one of seven dedicated research labs in the Media Arts & Technology arts-engineering program at UCSB. The lab focuses on computationally based creative explorations in the fields of data visualization, visual language, machine vision, computational photography, interactive digital installations, and related directions addressing the impact of computation on visualization. The experiments, projects, and courses we conduct and offer contribute to the arts-engineering and scientific communities in various ways.
Building and Using Distributed Media Systems Based on the Creative Process of Music Composition and Performance
Dr. JoAnn Kuchera-Morin
As an orchestrally trained composer who has been building computer music studios and media arts facilities since 1984, I have used the model of composing and performing complex musical systems to build distributed computational platforms. What do music composition and performance have to do with building and developing distributed multimedia computation platforms? Think of the orchestra as a distributed server/client system: the performers are intelligent interfaces, using devices that perform a score, the program or complex mathematical model. An ensemble without a conductor is a client-to-client distributed system. I have used this concept to build my large computational platform, the AlloSphere instrument, a three-story immersive cylinder in a near-to-anechoic chamber, with 26 immersive projectors, 54.1 channels of sound, and a device server that allows any number of interactive devices to control the multimedia system through a 14-node compute rendering cluster.
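To make the server/client analogy concrete, the sketch below shows a "conductor" process broadcasting score events to "performer" render nodes over UDP. It is a minimal, hypothetical illustration of the architecture described above, not the AlloSphere's actual software stack; the hostnames, port, and message format are placeholders.

```python
# Minimal sketch of the orchestra-as-distributed-system analogy (not the AlloSphere
# software): a "conductor" process broadcasts score events over UDP, and any number
# of "performer" (render-node) processes interpret them. Hostnames/ports are
# hypothetical placeholders.
import json
import socket

RENDER_NODES = [("render01.local", 9010), ("render02.local", 9010)]  # hypothetical nodes

def conduct(event: dict) -> None:
    """Server role: send one score event to every performer node."""
    payload = json.dumps(event).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host, port in RENDER_NODES:
            sock.sendto(payload, (host, port))

def perform(port: int = 9010) -> None:
    """Client role: receive score events and act on them (render audio/visuals)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        while True:
            payload, _ = sock.recvfrom(4096)
            event = json.loads(payload)
            print("rendering", event)   # a real node would update its projectors/speakers

# Example score event: a tempo change broadcast to all nodes.
# conduct({"type": "tempo", "bpm": 72})
```

A conductor-less ensemble, in this analogy, would drop the central `conduct` role and have nodes exchange events with one another directly.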
JoAnn Kuchera-Morin is Director and Chief Scientist of the AlloSphere Research Facility and Professor of Media Arts and Technology and Music (Composition) in the California NanoSystems Institute at the University of California, Santa Barbara (UCSB). She is also the Founding Director of the Center for Research in Electronic Art Technology at UCSB.
https://allosphere.ucsb.edu/kuchera-morin/
https://allosphere.ucsb.edu/
“Alloplex XR: Hydnum” Extending Transmodal Worldmaking beyond Russell’s Circumplex Model of Affect
Marcos Novak, Iason Paterakis, Diarmid Flatley, Nefeli Manoudaki, Pau RosellĂł, Alan Macy
Hydnum is an extended reality interactive installation that immerses the visitor in a symbiotic transmodal continuum of scent, music, form, poetry, space, and light. This data-driven piece is based on research connecting virtual and physical elements to emotional affect and cross-sensory stimuli through human biodata. Hydnum extends James Russell’s “Circumplex Model of Affect” into transmodal worldmaking in XR. Tangibly, Hydnum is a transmodal/symbiotic system consisting of a prototype medical scent delivery system, a sculptural cyber-physical artifact, an XR experience including VR, and a multi-channel generative audio component. The biodata visualizations extend into the surrounding area by using projection mapping. The interactive spatial system can transmit scents based on the user’s psychophysiological measures. The installation hardware is activated and animated by multiple interdependent algorithmic “species” that, together, help form a vibrant “rainforest” symbiotic ecology.
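Russell's circumplex model places affect on a two-dimensional plane of valence (pleasant to unpleasant) and arousal (activated to deactivated). The sketch below shows one way a valence/arousal reading might be expressed as a point on that plane; the mapping from biodata to valence and arousal is a hypothetical placeholder, not the Hydnum system's model.

```python
# Minimal sketch of placing a reading on Russell's circumplex of affect
# (valence on x, arousal on y). The choice of input values below is a
# hypothetical placeholder, not the Hydnum system's biodata mapping.
import math

def to_circumplex(valence: float, arousal: float) -> dict:
    """Map valence/arousal in [-1, 1] to polar coordinates on the circumplex."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360   # direction of the affect
    intensity = min(math.hypot(valence, arousal), 1.0)         # distance from neutral
    return {"valence": valence, "arousal": arousal, "angle_deg": angle, "intensity": intensity}

# Example: mildly pleasant, highly activated (roughly "excited").
print(to_circumplex(valence=0.4, arousal=0.8))
```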
The Transvergent Research Group consists of a growing community of like minds focused on exploring the concept of Transvergence and working in or with the transLAB. It comprises current PhD candidates working directly with Marcos Novak, MAT students interested in Transvergence and the transLAB mission, and an extended international community of collaborators and friends.
El Retablo Digital / The Digital Altarpiece
Giovanna Elizabet Pillaca Morote
During 2020, rituals that gathered families were prohibited and the process of paying homage to the dead was broken, opening possibilities to reinterpret the ‘Ayacucho altarpiece’ in the metaverse. This hybrid approach seeks to explore the recreation of a traditional mortuary ritual in Ayacucho (Peru), using data extracted from social media through deep learning algorithms to uncover a collective perspective. Deep learning algorithms are used to extract images and remap visual features. Leveraging the generative aspect of the models, a collection of 2D features was sorted and transformed into 3D elements, using technologies that emulate analog, traditional methods of craft and art making. The Digital Altarpiece proposes expanding personal memory into collective memory, created to revive the mortuary ritual and commemorate life through spatial, symbolic, and collective sound memories in AR, VR, and virtual environment experiences.
Giovanna is a professor at the University of Sciences and Arts of Latin America (UCAL) and manager of South America for DigitalFUTURES. She also publishes illustrations on her Instagram account. Her academic and professional experience in architecture, urban design, bio-digital design, computational design, immersive environment design, artificial intelligence, and 3D printing has allowed her to grow by connecting disciplines and different scales of design.
https://giovannapillaca.com/elretablodigital/
https://vimeo.com/751432077
Instagram @mundodeotrouniverso
Envisioning and creating new narratives of inclusion through reciprocity and care between different cultures through the convergence of art and science
Audrey Rangel Aguirre
Through artistic intuition, scientific research can be enhanced, developing by imagination new systems and scenarios that belong to the future of human civilisation. Through creative thinking, artists have the potential to envision future narratives that solve problems in society, thanks to one characteristic: artists have no limits at the intellectual or imaginative level. This imagination without limits helps artists transcend the limits of scientific research, allowing the artist to create innovations that depart from scientific research. In the convergence of art and science, the artist’s new role is to create the new narratives that will emerge from this convergence and the innovations that will shape future societies. By linking scientific knowledge with the audience, the artist will help humanity reach new levels of knowledge and of intellectual and spiritual evolution through artistic synthesis, enhancing science through artistic intuition and envisioning and reaching the next phases of civilisation.
Audrey Rangel Aguirre is an interdisciplinary Mexican artist based in England, researching at the intersection of art and science and focusing on artistic intuition to create new systems that belong to the future of human civilization. She is currently developing the research project Terras Lux, which focuses on the relation between the energy of microbial micro-ecosystems in soil and the energy of the human body.
https://www.interaliamag.org/articles/audrey-aguirre-terras-lux-project/
Context Futures: A Multi-perspective Approach to Generative Context
Bill Seaman
This talk will cover the following concepts: creating a context for anticipating, generating, and understanding new contexts. I will present Ranulph Glanville’s ideas related to attributing intelligence. I will explore how we can create new contexts that help us tackle our problems of sustainability and environmental collapse. I will discuss the assembling of disparate bodies of functional knowledge to form new contexts. I will lay out a set of ideas concerning the development of systems that enable the exploration of Contextual Intelligence, bridging the informational with the physical. Central will be reverse-engineering context toward the development of new generative systems. The long-term need for the generation of a compendium of relationalities will be discussed. Semantics and epistemologies of context will be outlined. Anticipating generative contexts that will themselves anticipate will be key.
Seaman (media researcher and artist) often explores an expanded media-oriented poetics through various technological means, an approach he calls Recombinant Poetics. Such works often explore the combination and recombination of media elements and processes in interactive and generative works of art. Seaman enfolds image/music/text relations in these works, often creating all the media elements and articulating the operative media processes involved. Seaman is Co-director of The Emergence Lab, Computational Media, Arts and Cultures, and Professor in the Department of Art, Art History & Visual Studies at Duke University.
Metaverse Washington Square Park – VR WSPark Project
Snow Yunxue Fu
VR WSPark is designed as a metaverse VR exhibition space resembling NYC’s Washington Square Park on the Social VR Platform Sansar by Snow Yunxue Fu – Artist, Curator, and Professor at NYU Tisch School of the Arts, co-hosted by DSLCollection. The virtual show space is devoted to the curated 3D Imaging artworks of Fu and her students, as well as other curatorial digital projects. It aims to function as a virtual cultural hub for additional events, lectures, and showings in the metaverse.
Snow Yunxue Fu is a New Media Artist, Curator, and Assistant Arts Professor at NYU Tisch School of the Arts, Institute of Emerging Media. Working with imaging technologies such as 3D simulation, AR, XR, and the Metaverse, she creates computer-rendered images, moving images, interactive projects, installations, and more, merging sociological, anthropological, philosophical, and interdisciplinary explorations into the universal aesthetic and definitive nature of the techno-sublime.
https://snowyunxuefu.com/home.html
https://snowyunxuefu.com/section/509146-VR%20WSPark%20Metaverse%20Project%20by%20Snow%20Yunxue%20Fu%2c%20Co-hosted%20by%20the%20DSLCollection.html
Moderators: Gustavo Alfonso Rincon and Liliana Conlisk Gallegos
Dr. Gustavo Alfonso Rincon (Ph.D., M.Arch., M.F.A., B.S., B.A.) is educated as an architect, artist, and media arts research scholar. Rincon is a Senior Associate Post-Doctoral Fellow for the AlloSphere. With research exhibited nationally and internationally, he works as an educator, practitioner, and thought leader in the fields of Art, Architecture, Media Arts/Computational Design, and Speculative Design. Rincon creates curatorial projects, curricula, educational and community outreach programs, exhibitions, events, and research works, traversing the domains of Art, Architecture, Computational Design, Engineering, and Science. He served as a Director and Curator at the Foundation for Art Resources in Los Angeles, CA.
Rincon was awarded his doctorate by the Media Arts and Technology Program at the University of California, Santa Barbara, for his dissertation “Shaping Space as Information: A Conceptual Framework for New Media Architectures.”
Rincon is an International Senior Organizing Member for DigitalFUTURES World at https://digitalfutures.international/.
Liliana Conlisk Gallegos
With the goal of advancing the certain decolonial turn, Dr. Machete’s (Liliana Conlisk Gallegos) live, interactive media art production and border rasquache new media art pieces/performances generate culturally specific, collective, technocultural creative spaces of production that reconnect Chicana/o/x Mestiza Indigenous wisdom/conocimiento to their ongoing technological and scientific contributions, still “overlooked” through the logic of a decaying Eurocentric project of modernity. In her transfronteriza (perpetual border crosser) perspective, the current perceptions of what research, media, and technology can be are like a yonke (junkyard) from which pieces are upcycled to amplify individual and collective expression, community healing, and social justice. She has organized and curated 14 community-centered, interactive, decolonial, environmentalist, research-based multimedia artivism and critical intervention performances, and her work has been exhibited locally and internationally. She is Associate Professor of Decolonial Media and Communication Studies at CSU San Bernardino and a member of the ACM SIGGRAPH Digital Arts Committee.
