New York, USA Fri, Dec 9, 2022 at 4:00 pm EST
Chicago, USA Fri, Dec 9, 2022 at 3:00 pm CST
Los Angeles, USA Fri, Dec 9, 2022 at 1:00 pm PST
UTC Fri, Dec 9, 2022 at 9:00 pm
What are the imagined dimensional flows, shapes, and spaces within contemporary Arts, Design, and Science research? How are creativity, intelligence, practice, and theory measured against our systemic societal narratives controlling our world today? Our spaces can be conceptually qualified through a formalization of space and time while also being quantified by measuring observations of behaviors and exchanges of information within the many complex systems that make up our various concatenated modern experiences.
Our awareness of time can be reimagined in, out of, and through Non-Western modes of thought. Will we as researchers have the immediate courage to address the future by engaging the grand challenges (climate) that limit our full evolutionary human potential right now? What are the limiting factors that stop our local and global communities from banding together and addressing the urgent questions of access, education, and systemic inequality?
This session will explore the conceptual implications of new visions for change in contemporary research practice combining the Arts, Design, and Sciences in A.I., (AR/VR/XR/Real) Worlds & Verses, and Speculative Design Futures.
What are the foundational systemic elements, adapting to ongoing societal changes, that make up our world today? How will myriad technological languages of experimentation affect socially mediated paradigms of experience in the age of quantum computing?
This dialogue brings together leading researchers in Media Arts, Design (Architectures), and Computational Science to discuss and challenge the existing canonical definitions and conceptual frameworks of the Arts, Humanities, and Sciences.
We simultaneously occupy a multitude of virtual worlds, whether through our social media profiles, online games, discussion forums, or extended reality avatars. Since 2020 we have entered a new digital era of internet technology: Web 3.0, or the Spatial Web, in which interfaces, computation, and data have drastically changed. Far from the antiquated concept of a physical/virtual binary, we now live in a technocultural society that encourages cyberfeminist pluralism and indefiniteness. The Collaborative XR Design Lab investigates the future of the spatial web and immersive social spaces. The Lab is divided into four user groups: the Traders design the Marina and the Bazaar, the Explorers design the beaches and cultural spaces, the Dwellers design housing and parks for their dogs, and the Farmers design land and algae farms. We model each user group’s avatars, buildings, infrastructure, asset packs, and mood boards through extensive research.
Virtual reality (VR) technology is becoming increasingly prevalent in fields ranging from art, design, and entertainment to engineering and science. Some applications require immersive virtual environments (IVEs), in which the user perceives the virtual scene as if it were the real environment. These include spatial art exhibitions, virtual tours of architectural designs, virtual mirrors of facilities, and spatial data exploration. However, head-mounted displays aren’t always the best VR system for the situation, as they don’t support multiple users at the same time. Projection-based and large-format displays offer complementary advantages, but they are not easy to use effectively.
In this talk I’ll give an overview of what immersion and presence are, how a person perceives the space around them, and how to make an effective immersive virtual environment.
Augmented reality via the Integrated Visual Augmentation System (IVAS) can optimize training and operations by presenting the user with accurate, real-time information.
While a few projects exploring the cognitive impacts of ambulatory AR on user locomotion and interactability in physical space have been presented in recent years, how digital content is understood within these existing physical layouts has not yet been explored. With the help of a complete digital twin of the space, a LiDAR-scanned physical environment, we constructed a proof-of-concept 3D visualization system in which we can further imagine and visualize a digital content layer on top of the physical layout.
Both the physical layer of the buildings and the digital stage layer are rendered in the system, allowing artists to visualize their content in relation to the physical layout and providing key insights into content creation for specific locations, historical sites, and architectures.
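As a rough sketch of this layered rendering, the example below composites a scanned mesh with placeholder stage geometry using Open3D; the file name, colors, and placement are illustrative assumptions rather than the project’s actual pipeline.

```python
# Minimal sketch: compositing a LiDAR-scanned digital twin (physical layer)
# with a virtual "stage" layer using Open3D. Paths and transforms are
# hypothetical placeholders.
import numpy as np
import open3d as o3d

# Physical layer: the LiDAR scan of the real environment (hypothetical file).
physical = o3d.io.read_triangle_mesh("site_scan.ply")
physical.compute_vertex_normals()
physical.paint_uniform_color([0.6, 0.6, 0.6])  # neutral gray for the twin

# Digital layer: placeholder stage content positioned in the scan's frame.
stage = o3d.geometry.TriangleMesh.create_box(width=4.0, height=0.5, depth=3.0)
stage.compute_vertex_normals()
stage.paint_uniform_color([0.9, 0.3, 0.2])     # highlight virtual content
stage.translate(np.array([2.0, 0.0, -1.5]))    # hypothetical placement

# Rendering both layers together lets an artist preview site-specific work.
o3d.visualization.draw_geometries([physical, stage])
```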
AlloThresher is a multimodal instrument performing audiovisual granular synthesis through a gestural interface. Granular synthesis is a sound synthesis method that creates complex tones by combining and mixing simple micro-sonic elements called grains. With a smartphone in each hand, a gestural interface interpreted from the phones’ sensors enables the performer to precisely and intuitively set and play the parameters of the granular synthesis in real time. Graphically, corresponding visuals are generated simultaneously for each grain based on the spectrogram of the sound, morphing and blending dynamically with the gesture.
By breaking with conventional interfaces like knobs and sliders, this seamless connection between modalities exploits the profound advantage of the gestural interface. Moreover, the performer’s presence and gestures become part of the space and the performance, so the audience can observe and cohesively connect the audio, the visuals, and the interface simultaneously.
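For readers unfamiliar with the technique, here is a minimal granular synthesis sketch in Python; the grain parameters (density, pitch, duration) stand in for the values a gesture would control, and none of this is AlloThresher’s actual code.

```python
# A minimal sketch of granular synthesis: complex tones are built by
# summing many short, windowed "grains". Parameter values are illustrative.
import numpy as np

SR = 44100  # sample rate in Hz

def grain(freq_hz, dur_s):
    """One Hann-windowed sine grain."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.hanning(t.size) * np.sin(2 * np.pi * freq_hz * t)

def granular_cloud(length_s=2.0, density_hz=80, freq_hz=440.0, dur_s=0.03):
    """Scatter grains at random onsets; density, pitch, and duration are
    the kinds of parameters a gestural interface could drive in real time."""
    rng = np.random.default_rng(0)
    out = np.zeros(int(SR * length_s))
    for onset in rng.uniform(0, length_s, int(density_hz * length_s)):
        g = grain(freq_hz * rng.uniform(0.95, 1.05), dur_s)  # slight detune
        i = int(onset * SR)
        out[i:i + g.size] += g[: out.size - i]  # add grain, clip at the end
    return out / np.max(np.abs(out))            # normalize to [-1, 1]

cloud = granular_cloud()  # e.g., write to disk with scipy.io.wavfile
```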
In this talk, I will introduce “Equivalence,” a microphone-based installation that takes speech from audiences as input and transforms the signal into visual structures in 3D space based on language processing. This is a collaborative project with George Legrady, Dan Costa Baciu, and Yixuan Li. This interactive installation is an artistic work that explores cross-modal interaction’s potential in producing images. The intent is to investigate to what degree the syntactic structure of language can be a means by which to build a stream of varying emergent visual forms. The project is a contemporary update of “Equivalents II,” realized in 1992 by Prof. George Legrady, a pioneering new media text-generating visualization installation featured in numerous museums. Three levels of language processing are conducted on the captured speech: Word-Level, Sentence-Level, and Document-Level, representing how language is interpreted.
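As a hedged illustration of such a three-level analysis, the sketch below uses spaCy to derive word-, sentence-, and document-level features from a transcript; the library choice and the idea of mapping each level to visual parameters are assumptions, not the installation’s pipeline.

```python
# A minimal sketch (not the installation's actual pipeline) of three-level
# language processing on captured speech, using spaCy.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def analyze(transcript: str):
    doc = nlp(transcript)
    # Word level: part-of-speech tags, which could drive per-form attributes.
    words = [(tok.text, tok.pos_) for tok in doc if not tok.is_space]
    # Sentence level: syntactic units that could group forms into structures.
    sentences = [sent.text for sent in doc.sents]
    # Document level: aggregate statistics that could set a global layout.
    lemma_freq = Counter(tok.lemma_ for tok in doc if tok.is_alpha)
    return words, sentences, lemma_freq

words, sentences, lemma_freq = analyze(
    "Images emerge from speech. Syntax becomes structure."
)
print(len(words), len(sentences), lemma_freq.most_common(3))
```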
The Experimental Visualization Lab (ExpVisLab) is one of seven dedicated research labs in the Media Arts & Technology arts-engineering program at UCSB. The lab focuses on computationally based creative explorations in the fields of data visualization, visual language, machine vision, computational photography, interactive digital installations, and related directions addressing the impact of computation on visualization. The experiments and projects we carry out, and the courses we offer, contribute to the arts-engineering and scientific communities in various ways.
As an orchestrally trained composer who has been building computer music studios and media arts facilities since 1984, I have used the model of composing for and performing with complex systems to build distributed computational platforms. What do music composition and performance have to do with building and developing distributed multimedia computation platforms? Think of the orchestra as a distributed server/client system: the performers are intelligent interfaces using devices to perform a score, the program or complex mathematical model. An ensemble without a conductor is a client-to-client distributed system. I have used this concept to build my large computational platform, the AlloSphere instrument, a three-story immersive cylinder in a near-to-anechoic chamber, with 26 immersive projectors, 54.1 channels of sound, and a device server that allows any number of interactive devices to control the multimedia system through a 14-node compute/rendering cluster.
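The sketch below illustrates the conductor analogy under stated assumptions: a device server receives OSC control messages from performer devices and relays them to render nodes. Ports, addresses, and message names are hypothetical, not the AlloSphere’s actual protocol.

```python
# A minimal sketch of the orchestra-as-distributed-system idea: a "device
# server" (the conductor) receives control messages from any number of
# performer devices via OSC and fans them out to render nodes (the ensemble).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# The "ensemble": render nodes that realize the score (hypothetical hosts).
RENDER_NODES = [SimpleUDPClient("127.0.0.1", 9001),
                SimpleUDPClient("127.0.0.1", 9002)]

def on_control(address, *args):
    """A performer's gesture arrives; the conductor relays it to every node."""
    for node in RENDER_NODES:
        node.send_message(address, list(args))

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_control)  # relay every incoming address

# The device server: listens for performer devices on one well-known port.
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```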
Hydnum is an extended reality interactive installation that immerses the visitor in a symbiotic transmodal continuum of scent, music, form, poetry, space, and light. This data-driven piece is based on research connecting virtual and physical elements to emotional affect and cross-sensory stimuli through human biodata. Hydnum extends James Russell’s “Circumplex Model of Affect” into transmodal worldmaking in XR. Tangibly, Hydnum is a transmodal/symbiotic system consisting of a prototype medical scent delivery system, a sculptural cyber-physical artifact, an XR experience including VR, and a multi-channel generative audio component. The biodata visualizations extend into the surrounding area using projection mapping. The interactive spatial system can transmit scents based on the user’s psychophysiological measures. The installation hardware is activated and animated by multiple interdependent algorithmic “species” that, together, help form a vibrant “rainforest” symbiotic ecology.
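As an illustrative sketch (not Hydnum’s implementation), the code below maps valence and arousal estimates onto Russell’s circumplex and derives per-modality parameters; the sensor scaling and the modality mappings are assumptions.

```python
# Russell's Circumplex Model of Affect places emotion in a 2D space of
# valence (unpleasant..pleasant) and arousal (calm..activated). Here each
# output modality reads its parameters from that point. Values are
# illustrative assumptions.
import math

def circumplex(valence: float, arousal: float):
    """Map a point in [-1, 1]^2 affect space to the polar (angle, intensity)
    form in which the circumplex is usually drawn."""
    angle = math.atan2(arousal, valence)           # e.g., ~45 deg = "excited"
    intensity = min(1.0, math.hypot(valence, arousal))
    return angle, intensity

def modality_params(valence, arousal):
    angle, intensity = circumplex(valence, arousal)
    return {
        "scent_channel": int((angle + math.pi) / (2 * math.pi) * 8) % 8,
        "audio_tempo_bpm": 60 + 80 * (arousal + 1) / 2,  # calmer -> slower
        "light_warmth": (valence + 1) / 2,               # pleasant -> warmer
        "intensity": intensity,
    }

# e.g., elevated heart rate plus positive skin-conductance-derived valence:
print(modality_params(valence=0.4, arousal=0.7))
```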
The Transvergent Research Group consists of a growing community of like minds focused on exploring the concept of Transvergence and working in or with the transLAB. It comprises current PhD candidates working directly with Marcos Novak, MAT students interested in Transvergence and the transLAB mission, and an extended international community of collaborators and friends.
https://translab.mat.ucsb.edu
During 2020, rituals that gathered families were prohibited and the process of paying homage to the dead was broken, opening possibilities to reinterpret the ‘Ayacucho altarpiece’ in the metaverse. This hybrid approach seeks to explore the recreation of a traditional mortuary ritual in Ayacucho (Peru), using data extracted from social media through deep learning algorithms to uncover a collective perspective. Deep learning algorithms are used to extract images and remap visual features. Leveraging the generative aspect of the models, a collection of 2D features was sorted and transformed into 3D elements using technologies that emulate traditional analog methods of craft and art making. The Digital Altarpiece proposes expanding personal memory into collective memory, created to revive the mortuary ritual and commemorate life through spatial, symbolic, and collective sound memories in AR, VR, and virtual environment experiences.
Scientific research can be enhanced through artistic intuition: developing, by imagination, new systems and scenarios that belong to the future of human civilisation. Through creative thinking, artists have the potential to envision future narratives that solve problems in society, thanks to one characteristic: artists have no limits at the intellectual or imaginative level. This imagination without limits helps artists transcend the limits of scientific research, allowing them to create innovations departing from scientific research. In the convergence of art and science, the artist’s new role is to create the new narratives that will emerge from this convergence and the innovations that will shape future societies. By linking scientific knowledge with the audience, the artist will help humanity reach new levels of knowledge and of intellectual and spiritual evolution through artistic synthesis: enhancing science through artistic intuition, envisioning and reaching the next phases of civilisation.
This talk will cover the following concepts: creating a context for anticipating, generating, and understanding new contexts. I will present Ranulph Glanville’s ideas related to attributing intelligence. I will explore how we can create new contexts that help us tackle our problems of sustainability and environmental collapse. I will discuss the assembling of disparate bodies of functional knowledge to form new contexts. I will lay out a set of ideas concerning the development of systems that enable the exploration of Contextual Intelligence, bridging the informational with the physical. Central will be reverse-engineering context toward the development of new generative systems. The long-term need for the generation of a compendium of relationalities will be discussed. Semantics and epistemologies of context will be outlined. Anticipating generative contexts that will themselves anticipate will be key.
VR WSPark is a metaverse VR exhibition space resembling NYC’s Washington Square Park, designed on the social VR platform Sansar by Snow Yunxue Fu, Artist, Curator, and Professor at NYU Tisch School of the Arts, and co-hosted by DSLCollection. The virtual show space is devoted to the curated 3D imaging artworks of Fu and her students, as well as other curatorial digital projects. It aims to function as a virtual cultural hub for additional events, lectures, and showings in the metaverse.
Dr. Gustavo Alfonso Rincon (Ph.D., M.Arch., M.F.A., B.S., B.A.) is educated as an architect, artist, and media arts research scholar. Rincon is a Senior Associate Post-Doctoral Fellow for the AlloSphere. With research exhibited nationally and internationally, he works as an educator, practitioner, and thought leader in the fields of Art, Architecture, Media Arts/Computational Design, and Speculative Design. Rincon creates curatorial projects, curricula, educational and community outreach programs, exhibitions, events, and research works, traversing the domains of Art, Architecture, Computational Design, Engineering, and Science. He served as a Director and Curator at the Foundation for Art Resources in Los Angeles, CA.
Rincon’s doctoral dissertation, “Shaping Space as Information: A Conceptual Framework for New Media Architectures,” was awarded by the Media Arts and Technology Program at the University of California, Santa Barbara.
Rincon is an International Senior Organizing Member for DigitalFUTURES World at https://digitalfutures.international/.
With the goal of advancing the decolonial turn, Dr. Machete’s (Liliana Conlisk Gallegos) live, interactive media art productions and border rasquache new media art pieces/performances generate culturally specific, collective, technocultural creative spaces of production that reconnect Chicana/o/x Mestiza Indigenous wisdom/conocimiento to its ongoing technological and scientific contributions, still “overlooked” through the logic of a decaying Eurocentric project of modernity. In her transfronteriza (perpetual border crosser) perspective, current perceptions of what research, media, and technology can be are like a yonke (junkyard) from which pieces are upcycled to amplify individual and collective expression, community healing, and social justice. She has organized and curated 14 community-centered, interactive, decolonial, environmentalist, research-based multimedia artivism and critical intervention performances, and her work has been exhibited locally and internationally. She is Associate Professor of Decolonial Media and Communication Studies at CSU San Bernardino and a member of the ACM SIGGRAPH Digital Arts Committee.