New York, USA Fri, Oct 29, 2021 at 5:00 pm EDT
Chicago, USA Fri, Oct 29, 2021 at 4:00 pm CDT
Los Angeles, USA Fri, Oct 29, 2021 at 2:00 pm PDT
Paris, France Fri, Oct 29, 2021 at 11:00 pm CEST
UTC Fri, Oct 29, 2021 at 9:00 pm
Data has become one of the most valuable resources of the digital age. Indeed, it is estimated that the Internet's more than 5 billion users (https://www.internetlivestats.com/) generate more than 59 trillion gigabytes of information (per the State of Data column by Gil Press in Forbes). However, while we all participate in creating data, it is far from transparent how our information is processed, stored, and exchanged across the platforms, services, software, and devices that we use.
The 8th SPARKS session is interested in discussing innovative ways in which artists, technologists, researchers, and practitioners express their visions of the impact and effects of data in art and science. With this goal in mind, the primary materials that the session seeks to highlight are visual media and visual methods that help, amplify, or challenge our visual perception and human interpretation.
Visual media are, for us, the means to make sense of visual information. If direct perception consists of discerning surfaces, textures, objects, places, and other visual information in the natural world, then visual images act as media because they already contain a pre-figured point of view and disposition of elements. Moreover, different kinds of images allow diverse modes of interaction: from static to dynamic, from projection to immersion. In this respect, visual interfaces – such as data visualizations, maps, diagrams, and graphics – are types of visual media that convey action and create a dynamic space where multiple actors interact at the same time (human users, computational procedures, digital media, computer devices).
This session welcomes the presentation of artworks, projects, prototypes, models, and datasets. We aim to gather a variety of perspectives that consider and explore the past, present, and future of visual data, data art, data visualization, data representation, visual analytics, and data criticism, among other fields. Overall, we hope to discover interrelations between people, events, places, and techniques.
In this talk, I will introduce my newly created data visualization projects about various environmental issues such as drought, wildfire, and plastic pollution. The projects explore new ways of visualizing and sonifying data in conjunction with art and technology, and raise viewers' awareness of climate change.
The projects aim not only to explore new, aesthetically meaningful visualizations at the intersection of art and technology but also to allow users to learn about the causes and impacts of climate change by examining the past, present, and future of the data. The data representations also lead to a new-media interactive interface utilizing audio synthesis, visualization, 3D-printed sculptures, and real-time interaction.
Know Thyself as a Virtual Reality (KTVR) is an art & science project that focuses on the ethics and aesthetics of the use of medical data and virtual reality. MRI- and CT-scanned bodies offer striking opportunities for producing new medical knowledge, but they also render bodies transparent, accessible, and manipulable in potentially exploitative ways, particularly during an era of increasing surveillance. Central to the KTVR project is the creation of two VR artworks – My Data Body and Your Data Body. My Data Body is constructed from a single individual’s data, and Your Data Body is constructed from both open source data and donated data. This project aims to create virtual reality artworks that make visible our many data corpora, allowing viewers to be immersed in multiple layers of personal data, hold them in their virtual hands, and dissect them.
Our human experience of materialised data can be amplified and made visceral through the interactive elements of scenography – space, bodies, objects, movement, and the images, atmosphere, and experience that are subsequently created. This talk will look at two recent projects that materialise data. I will discuss ways in which ‘THE UNCANNY VALLEY OF BREATH’ reveals the felt human experience of biased data from AI and natural language processing, and how kinaesthetically understanding genetic data through the BIOLUMENLAB offers a unique window into how genetics make us who we are. THE UNCANNY VALLEY OF BREATH is an immersive installation that shows the innate biases in AI by exposing the subtle boundaries in human speech. Breath, our most intimate connection to life, is, for an AI, a marker of an inefficient system. An AI can only perceive breathing as an interruption of data. Because of this, it is inherently biased to never understand the subtleties that make us human. THE UNCANNY VALLEY OF BREATH is a collaboration between two creative practitioners and two AI/data scientists. UVOB has been commissioned for BIAS at the Science Gallery, Dublin, in collaboration with Ireland’s ADAPT Centre.
This talk addresses how interactive elements of scenography can amplify and challenge our visual perception and human interpretation of data. By creating kinaesthetic, immersive, and interactive environments that highlight tangible, felt experiences of data, perception is shifted from an intellectual to a felt experience. These projects have created a framework for wider exploration of the nature and role of knowledge generated by working across communication design, data visualisation, biomedicine, genetics, and AI – particularly in regard to ways in which data can be explored as a medium for creative practice, and how the socially scenographic can be employed as a framework to highlight and challenge the impact and effects of data.
The Science Visualization Lab of the University of Applied Arts has been treading new paths for many years to make science tangible and visible. For this purpose, the possibilities of computer animation, documentary film, and immersive experiences are used in a variety of ways to convey scientific information. Size scales that are invisible without technical aids can be made tangible through digital data from our very physical world. The projects briefly introduced will be LIFE and NOISE AQUARIUM, which deal with the effects of plastic and noise pollution on plankton; CRISPR/Cas9-NHEJ: ACTION IN THE NUCLEUS, which deals with genetic manipulation; and VIRUS DICE, which presents the SARS-CoV-2 virus and our existence as a function of relationships with different probabilities.
Despite the apparent inanimation of a still image, its space presents an unsettled manifold from which arrays of visual moments are gathered. While its composition and figural components may suggest a privileged narrative, its perception is ultimately realized by personal encounter, and the biases that provoke the connection of one visual fragment to another. Through the meandering of its space, the stability of an image’s composition is deconstructed through the discrete moments of its reading. My work seeks to emphasize this manifold of perception by reconstructing stimuli using eye-tracking data. My ambition is to lay bare the inherently active, constructive, and unique means by which we make sense of the visual world, recognizing the image as a field of encounter that is always assembled anew, as a precarious horizon whose definition can only be approached but never fully resolved.
In this talk I will introduce my PhD research, which follows on from my Autoencoding Blade Runner project (SIGGRAPH ‘17 Art Papers). I will give an overview of three projects from my PhD research which seek to use generative deep learning to go beyond the imitation of data. The projects I will introduce are: searching for an (un)stable equilibrium – training generative models without data, amplifying the uncanny – divergent fine-tuning of deepfakes, and my network bending framework for manipulating generative models.
Imagine an alternate computer history where our monitors were triangular or hexagonal, with pixels arranged in those geometries. The Tri-Dithers project rebuilds the most fundamental photographic algorithm, dithering, in this configuration. Dithering is the breaking down of an image of many colors into one that can be represented with a smaller set. Most dithering kernels (the coefficient sets that distribute quantization error) were developed in the 1970s, with the aim of representing images on the black-and-white monitors of the time. In the Tri-Dithers, a machine learning technique generates dithering kernels for other geometries.
This is an extension of the Dither Studies project:
https://danieltemkin.com/DitherStudies (the web app),
https://danieltemkin.com/DitherStudies/HandRenders (painted works)
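To make the error-diffusion idea concrete, here is a minimal sketch (not the Tri-Dithers code) of classic dithering on an ordinary square pixel grid using the hand-designed Floyd-Steinberg kernel from the 1970s; the Tri-Dithers project instead machine-learns kernel coefficients for triangular and hexagonal geometries.

```python
def dither(gray, levels=2):
    """Quantize a 2D grayscale image (values in [0, 1]) to `levels`
    shades, diffusing each pixel's quantization error to its
    not-yet-visited neighbors (Floyd-Steinberg error diffusion)."""
    h, w = len(gray), len(gray[0])
    img = [row[:] for row in gray]          # working copy that accumulates error
    out = [[0.0] * w for _ in range(h)]
    step = 1.0 / (levels - 1)               # spacing between representable shades
    # The kernel: the coefficient set that distributes error to neighbors.
    # Offsets are (dx, dy, weight): right, below-left, below, below-right.
    kernel = ((1, 0, 7 / 16), (-1, 1, 3 / 16), (0, 1, 5 / 16), (1, 1, 1 / 16))
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = round(old / step) * step  # snap to nearest representable shade
            out[y][x] = new
            err = old - new
            for dx, dy, wgt in kernel:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return out
```

Dithering a uniform 50%-gray patch to two levels yields a checkerboard-like mix of black and white pixels whose average stays near the original gray, which is the property the learned kernels must preserve on other grid geometries.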
Visualizations allow analysts to rapidly explore and make sense of their data. The ways we visualize data directly influence the conclusions we draw and the decisions we make; however, our knowledge of how visualization design influences data analysis is largely grounded in heuristics and intuition. By instead empirically modeling how people interpret visualized data, we can understand limitations in current visualization systems and drive new systems for creating, communicating, and exploring data. I will show how results from perceptual and cognitive models lead to novel visualization systems that support accurate analysis of complex data and better scale to the needs of modern analytics challenges by incorporating interactive statistical analytics and immersive display technologies. These approaches increase the agency, accessibility, and expressivity of data-driven reasoning.
Everardo Reyes is an Associate Professor in the Information and Data Sciences Department at the Université Paris 8—Vincennes-Saint-Denis, France. He is a permanent researcher at Laboratoire Paragraphe and member of the Cultural Analytics Lab. He investigates relationships between humanities, arts, and computer sciences, particularly visual forms such as graphical interfaces, data visualization, media art, digital text, Web design, and hypermedia systems. He has authored, edited and translated several books in digital culture and organized various conferences and exhibitions. He served as the Art Papers Chair for SIGGRAPH 2019 in Los Angeles.
Jan Searleman taught Computer Science at Clarkson University for 37 years, retired in 2015, and since retirement has been an Adjunct Research Professor at Clarkson. Her research areas are Virtual Environments, Human-Computer Interaction, and Artificial Intelligence, and she created and supervised an undergraduate Virtual Reality lab at Clarkson. In addition, Jan taught in Clarkson’s Robotics Academy and was a coordinator in a number of Clarkson’s FIRST Robotics Championships (FIRST Lego League and FIRST Tech Challenge). As a member of the SIGGRAPH Digital Art committee since 2015, Jan co-directs the ACM SIGGRAPH Digital Art Archive with Bonnie Mitchell. Also a member of the ACM SIGGRAPH History Committee, Jan co-directs the ACM SIGGRAPH History Archive with Bonnie Mitchell. In addition, she co-directs the ISEA Symposium Archive with Bonnie Mitchell and Wim van der Plas.