Robotics, Electronics and Artificial Intelligence
Moderated by Hye Yeon Nam and Jan Searleman
Los Angeles, USA Fri, Apr 30, 2021 at 1:00 pm PDT
Denver, USA Fri, Apr 30, 2021 at 2:00 pm MDT
Chicago, USA Fri, Apr 30, 2021 at 3:00 pm CDT
New York, USA Fri, Apr 30, 2021 at 4:00 pm EDT
London, UK Fri, Apr 30, 2021 at 9:00 pm BST
Vienna, Austria Fri, Apr 30, 2021 at 10:00 pm CEST
Tokyo, Japan Sat, May 1, 2021 at 5:00 am JST
Sydney, Australia Sat, May 1, 2021 at 6:00 am AEST
The 4th SPARKS online Zoom session will feature presenters who create art using robotics, electronics, or artificial intelligence. In popular culture, robots and artificial intelligence are often depicted as menacing, unethical, and sinister threats to humanity. Yet these technologies have the potential to positively impact social and cultural systems and to transform life as we know it. Artists have been working with electronics, robotics, and AI to explore, speculate, and reflect on the broader relationship between humans, machines, and the environment.
This session features presentations of artworks and research that stimulate people to view the world in different ways, provoke innovative interactions, spark creative responses, and challenge our notion of technology’s potential.
Hye Yeon Nam is a digital media artist working on interactive installations and robotics. She foregrounds the complexity of social relationships by making the familiar strange and interpreting everyday behaviors in performative ways. Hye Yeon's art has been showcased at the Smithsonian National Portrait Gallery in Washington, D.C., Times Square, the art galleries Eyebeam and The Tank, Conflux, and the D.U.M.B.O. Art Festival in New York, FILE, SIGGRAPH, CHI, ISEA, E3 Expo, the Lab in San Francisco, and several festivals in China, Istanbul, Ireland, the UK, Germany, Australia, Denmark, and Switzerland. She is currently an associate professor of digital art at Louisiana State University.
Jan Searleman taught Computer Science at Clarkson University for 37 years, retired in 2015, and since retirement has been an Adjunct Research Professor at Clarkson. Her research areas are Virtual Environments, Human-Computer Interaction, and Artificial Intelligence. In addition, Jan taught in Clarkson's Robotics Academy and was a coordinator in a number of Clarkson's FIRST Robotics Championships (FIRST Lego League and FIRST Tech Challenge). A member of the SIGGRAPH Digital Art Committee since 2015 and of the ACM SIGGRAPH History Committee, Jan co-directs the ACM SIGGRAPH Digital Art Archive and the ACM SIGGRAPH History Archive with Bonnie Mitchell, and co-directs the ISEA Symposium Archive with Bonnie Mitchell and Wim van der Plas.
- What is your motivation for creating robotic artwork?
- Robots are ubiquitous throughout popular culture. Where did you find ideas that influenced your works?
- What are the challenges when you work with robots?
- Do you sometimes collaborate with scientists or engineers?
- Where do you find funding?
- What do you envision the future of robotic art will be?
- What venues/communities do you recommend for sharing ideas and techniques?
- Do you have any recommendations for students who would like to develop their careers in robotics, electronics, and AI?
An art-science study of the body ownership illusion through body-swapping experiments between humans and robots.
In this artistic research I was permitted to use the Geminoid HI-2 and Telenoid robots, developed by the Advanced Telecommunications Research Institute International (ATR). These two different android robots were chosen based on their technical and physical specifications. Androids are human-like machines, though a distinction must be made between robots and androids: robots stand for a capitalist engineering of optimized economic interests, while androids introduce a new meaning and an added value to the field of robotics. The research is carried out as a series of studies at the intersection of cognitive science, art, and engineering. This tight bond between science, engineering, and art is called android science, and one of its visionaries is the Japanese scientist Professor Hiroshi Ishiguro.
Breaking the Frame
Since the start of the pandemic, we've all had a little too much screen time, from Zoom birthday parties to binge-watching burnout. In this talk, Neil will share some of his thoughts on using technology to create artworks that live outside the screen, blurring the line between the physical and the digital.
Generative Emotion for Robotic Media Art
Emotional intelligence is a form of artificial intelligence, encompassing both the ability to express emotions and the ability to manipulate them. Robotic media art is an ideal place to explore these boundaries and question our relationship with technology. Algorithmically generated beeps and chirps can be used by robots to express emotion. I've mapped the way people touch a robot to the audio qualities of emotive sound, exploring our touch-mediated, cyclical relationship with technological and biological agents. I'll discuss using decision trees to build emotionally manipulative robots that probe embodiment, agency, gender, and social interaction. For Cacophonic Choir, an interactive art installation about sexual assault survivors' experiences, I trained a neural network to generate emotionally charged text from firsthand accounts. These works exemplify how robotic art can be used to probe the ethics, embodiment, and agency of artificially intelligent systems.
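A decision tree mapping touch to emotive sound, as described above, can be sketched minimally. All feature names, thresholds, and sound parameters here are hypothetical illustrations, not values from the artist's actual system:

```python
# Hypothetical sketch: a hand-rolled decision tree mapping touch
# features (pressure, duration) to parameters for an emotive beep.
# Thresholds and emotion labels are illustrative assumptions.

def emotive_response(pressure: float, duration: float) -> dict:
    """Return emotion label and sound qualities for a touch event."""
    if pressure > 0.7:                 # firm touch
        if duration > 1.0:             # firm and sustained
            return {"emotion": "distress", "pitch_hz": 880, "tempo_bpm": 160}
        return {"emotion": "surprise", "pitch_hz": 660, "tempo_bpm": 140}
    if duration > 1.0:                 # gentle and sustained
        return {"emotion": "contentment", "pitch_hz": 220, "tempo_bpm": 60}
    return {"emotion": "curiosity", "pitch_hz": 440, "tempo_bpm": 100}

print(emotive_response(0.9, 1.5)["emotion"])  # prints "distress"
```

Each leaf of the tree pairs an emotion label with pitch and tempo, so the robot's synthesized chirp can be modulated directly from the classification.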
Hyun Ju Kim
Poetic Machine: Machine Perspective
This talk will present the idea and process behind the artist's robotic artwork series <Poetic Machine>. The series proposes a poetic, alternative attention toward machines in the Anthropocene era, and grew from a personal, contemplative stance toward the non-human artificial agents, such as machines, artificial intelligence, and robots, that pervade our techno-society. <Poetic Machine: Pause>, first released in 2017, reflects on 'pause', 'non-production', 'inefficiency', and 'non-function' in a contemporary techno-society where automation and optimization are considered ultimate values. In <Poetic Machine no2: Prologue for Machine Perspective> (2020), the artist ironically illuminates the mechanical gaze: the work shows, in real time, how the machine recognizes the objects around it, together with the confidence of each estimate, using Google's Vision API. Facing an era of aggressive and indiscriminate mechanical gazes and epistemologies, the artist asks: what should human beings continue, change, or adapt in response to this techno-landscape?
Physical Interfaces for Sensing and Expressing Air Quality
Invisible and irreplaceable, the Earth's atmosphere is a complex, constantly changing mass of gases, particles, and liquids that is not immediately visible to the human eye. Motivated to provide an experience of the air, we introduce physical interfaces for human-atmosphere interaction. These interfaces aim to augment the human senses by providing visual-auditory feedback on air components in the physical world (e.g., particles and CO2). We also present the results of a participatory workshop that transferred knowledge about atmospheric issues through prototyping methods and speculative design.
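An interface of this kind typically maps a sensor reading to a perceivable output. As a hedged sketch (the thresholds loosely follow common indoor-air guidance and are assumptions, not values from the presented interfaces), a CO2 concentration in ppm could drive an LED color and a tone:

```python
# Illustrative sketch only: map a CO2 reading (ppm) to visual-auditory
# feedback. Thresholds are assumptions for illustration, not the
# workshop's actual design values.

def co2_feedback(ppm: float) -> tuple[str, float]:
    """Return (LED color, tone frequency in Hz) for a CO2 level."""
    if ppm < 800:
        return ("green", 220.0)    # fresh air: calm low tone
    if ppm < 1200:
        return ("yellow", 440.0)   # ventilation advisable
    return ("red", 880.0)          # poor air: urgent high tone

color, tone = co2_feedback(1500)
print(color, tone)  # prints "red 880.0"
```

In a physical build, the returned color would drive an RGB LED and the frequency a small speaker, letting the invisible gas become something seen and heard.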
Artist and Machine
“Artist and Machine” is a performance series that examines the complex dialogue between AI, its creators, and its users. In each live performance, an Artist and a Machine draw portraits of their viewers. The Artist draws one person at a time, honing her skills over a lifetime. The Machine, trained by the Artist for mimicry, draws hundreds of portraits every minute and improves with every performance. Most participants choose only one staring session with the Artist, but repeatedly return to the Machine. They transform into users, treating the Machine like a mirror, a service, a tool. This talk will use these performances as a lens for discussing the achievements and implications of an AI-dependent world.
From Experimental Human Computer Interaction to Machine Cohabitation: New Directions in Art, Technology, and Intimate Life
How do we prepare for a future living, working, and learning with machines? What new possibilities arise from the advent of always-on intelligent assistants, affordable co-robotic platforms, and ubiquitous AI? Now that we have invited the machines into our homes, our workplaces, our intimate everyday, how can we reimagine the terms of our human-computer interactions?
Through the presentation of a series of experimental arts projects, this talk addresses our machine cohabitant future. I will show key previous works building affective surrogates, developing inhabitable smart spaces, and situating machine observers with varying degrees of agency within shared environments. These projects lead to the discussion of my current work building embodied interfaces and staging experimental Human-Robot Interactions. I will raise critical concerns with language and communication, embodied intelligence, and the dynamics of model-limited experience within these contexts.
Predrag K. Nikolić
Syntropic Counterpoints: Metaphysics of The Machines
Creativity and the act of creating art are among the most significant challenges for the new generation of artificial intelligence models. In this presentation, we will present three artworks that are part of the art research project Syntropic Counterpoints. Conceptually, we place philosopher AI clones into debate and create authentic automated AI content through novel Robot-Robot and Human-AI interactions. Through the project, we aim to raise some fundamental questions about the impact of artificial intelligence agents on a future human-AI society. Metaphysics is supposed to deal with knowledge at the highest level of abstraction, the universal rather than the particular. We use the philosophical corpus to train our four Philosopher AI Clones of Aristotle, Nietzsche, Machiavelli, and Sun Tzu. The question we raise is: can we explain and understand the ideas and principles of humanity that we expect artificial intelligence to interpret, follow, and base decisions upon?
Scottie Chih-Chieh Huang
An Exploration of Biologically Inspired Computing for Generative Design and Robotic Performance
This talk will share three art projects that use biologically inspired computing to explore novel mechanisms applied to form and behavior. In “Dandelion,” the artist codes generative rules of symmetry into a fractal tree-like algorithm, altering the geometric description of leaves and branches through parameters and the proportions between them, resulting in diverse and complex morphologies. In “Oyster City,” the artist develops an interconnected tensegrity structure system that generates transformable structural shapes, using recursive features to remain feasible within the logic of manufacturing. In “Parameter, Algorithm and Nature,” a robot communicates in real time with an artificial-life creature through its mechanical movement, interacting with virtual content via an embedded computational mechanism of distributed behavior that diffuses swarm-based geometric animation.
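The fractal tree-like rule described for “Dandelion” can be sketched in a few lines. The branching angle, length ratio, and recursion depth below are hypothetical parameters for illustration, not the artwork's actual values:

```python
import math

# Hedged sketch of a symmetric fractal tree generative rule: each
# branch spawns two children, rotated by `spread` and scaled by
# `ratio`. Parameter values are illustrative assumptions.

def grow(x, y, angle, length, depth, ratio=0.7, spread=math.radians(25)):
    """Recursively generate branch segments as (x1, y1, x2, y2) tuples."""
    if depth == 0:
        return []
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments = [(x, y, x2, y2)]
    # Two symmetric child branches, mirrored about the parent's axis.
    segments += grow(x2, y2, angle - spread, length * ratio, depth - 1, ratio, spread)
    segments += grow(x2, y2, angle + spread, length * ratio, depth - 1, ratio, spread)
    return segments

tree = grow(0.0, 0.0, math.pi / 2, 100.0, depth=5)
print(len(tree))  # 2^5 - 1 = 31 segments
```

Varying `ratio` and `spread`, the proportion between leaf and branch geometry, is exactly the kind of parameter play that yields the diverse morphologies the abstract describes.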