AI and Artistic Autonomy
Moderated by: Mauro Martino, Rebecca Ruige Xu, and Gustavo Alfonso Rincon
Date and Time: Friday, March 21, 2025, 4:00 pm (UTC)

View the Recording of the Session:

Session Description:

In the era of artificial intelligence, artists using AI tools often find themselves dependent on models and algorithms developed by others, frequently by large tech companies. This session explores how this dependence influences creative practice, highlighting the dynamics of power, limitations, and opportunities that arise.

We will discuss how the lack of access to training data, model weights, and computational resources can limit artists’ ability to understand, modify, and customize the AI tools they use. We will examine how the promise of “openness” in AI often does not translate into true democratization of access or a reduction in the concentration of power in the tech sector.

The session will invite participants to reflect on key questions: How does dependence on models created by others influence authenticity and originality in art created with AI models? What are the ethical and legal implications of using models and data whose origin or composition is unknown? How can artists navigate an ecosystem where the resources needed to develop their own AI models are often out of reach?

Combining theory and practice, we will offer a space to discuss creative strategies that artists can adopt to work more autonomously, including collaboration on open models (increasingly rare), participation in shared development communities, and promoting greater transparency in AI.

This session is designed for artists, developers, and researchers interested in understanding the challenges and potentials of AI in contemporary creative practice.

Additional Information:

https://www.washingtonpost.com/technology/2024/11/26/openai-sora-ai-video-model-artists-protest/
https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/
https://www.theverge.com/2024/11/26/24306879/openai-sora-video-ai-model-leak-artist-protest


AI as a Memory Machine
Lev Manovich     

GenAI image tools, like all visual media, have distinctive strengths and limitations. These tools excel at generating visuals based on subjects and aesthetic styles well-represented in their training datasets. While recent AI models have significantly improved, they still perform best with familiar content and struggle more with truly novel, rare, or conceptually complex requests. Rather than viewing this behavior merely as a limitation to overcome, what if we approach it as a creative opportunity? Historically, artists and designers have always worked within the constraints of their tools and mediums – or deliberately imposed additional constraints (consider minimalism or systems art). I will discuss how I’ve exploited this perspective on the GenAI medium in my recent artworks featured in the solo exhibitions “Unreliable Memories” and “Memory, Draw.”

Appropriation Art in the Age of the Plagiaristic Machines

With Machine Learning (ML), which relies on the accumulation and exploitation of cultural content from all over the internet, the digital commons seem to be over, replaced by new, overarching forms of enclosure. Sharing and open access appear to be losing their radical potential, when they start benefiting corporate superpowers much more than any individual subject with limited access to copyrighted resources. Does an anti-copyright agenda still make sense, when copyright regulations seem to provide the only possible defense against rapacious appropriation and data greed? Can appropriation practices still work as tactics of counter-cultural resistance, as critical tools for questioning and investigating concepts of authorship and intellectual property, and as forms of “algorithmic sabotage”, at a time in which they appear to have become instruments of algorithmic control and power?

Addressing Bias and Transparency in Commercial Generative AI Models: Recent Histories as Templates for Going Forward
Amy Alexander     

Commercial generative AI image generation models often exhibit racial, gender, and other biases. These issues mirror recent histories of bias in online image search and in discriminative AI models used in facial analysis and image classification. While far from perfect, these earlier technologies have improved after the biases received public attention. Why do AI models for image generation continue to exhibit such significant biases? How do these earlier histories of public pressure around bias in image search and facial analysis intersect with recent activism around generative AI model transparency? Considering these histories together can help provide context for developing approaches toward addressing the overlapping concerns of bias and transparency in generative AI models.

Artificial Intelligence Meets Critical Media Art

This talk describes several artworks by the author that engage with Artificial Intelligence (A.I.) approaches such as generative art creation and rule-based design in critical media art production. The artist will discuss these topics and examine three underlying themes in the practice of utilizing A.I. in art creation: (1) treating A.I. as a content creator, (2) advocating for A.I. as an artist collaborator, and (3) invoking A.I. as a critical instigator.

AI Art and Cultural Reproduction
Weidi Zhang     

This presentation will explore two recent AI art projects, ReCollection and Wayfarer, as examples of how AI art is utilized in cultural production and reproduction.

Real-time Stable Diffusion for Virtual Production: Room Scale Generative AI towards the Ultimate Display
Daniel Pillis     

Through a series of case studies using a 2240 x 1440, 1.9 mm pixel-pitch virtual production display, we are exploring the performance benefits of real-time Stable Diffusion as a way to generate immersive virtual environments, showcasing how generative AI enhances artificial set design, prop creation, and rapid prototyping for storytelling. The use of diffusion models will enable filmmakers to shift from manually intensive design processes to AI-assisted creative workflows, facilitating the development of virtual production environments.

Depending on AI in the Formation of My Self
Avital Meshi     

GPT-ME is an exploration of human-AI hybridization, questioning notions of AI-mediated identity. A performative use of a wearable GPT-based device that generates real-time responses allows the pre-trained model to speak through me. This performance highlights the structural limitations of working with AI systems controlled by tech companies. Despite incorporating GPT into my identity, it remains inaccessible, opaque, and beyond my control. Each time OpenAI releases a new GPT version, my voice shifts, altering my “self” according to the latest training data, as well as the guardrails and values that are incorporated into it. This forced evolution raises critical questions: How does dependence on an evolving AI affect selfhood? What happens when pre-trained models shape not only creative output but thought itself? Through this performance, GPT-ME exposes the ways in which AI-generated identities are shaped by hidden layers of corporate priorities and their capitalistic logic.

Artmaking as Experience: Reimagining Autonomy and the Creative Process

As AI models improve in generating hyper-realistic art, debates intensify about their threat to human creativity. Inspired by John Dewey’s view of art as a lived experience—a dynamic act of doing and undergoing—this work shifts attention from output to process. Rather than fixate on whether AI can replace artists, we explore how it reshapes creative agency: Can machines replicate the emotional depth of human intention? Three projects illustrate this shift: neurodiverse performers co-creating live shows with AI; AI-driven video generation compared to Processing-based visual music; and the transformation of digital art into fine art printmaking. Each case reveals that human-led processes elicit deeper emotional resonance than AI’s autonomous outputs. By prioritizing collaboration over automation, we argue AI’s value lies in expanding creative possibility—if guided by human curiosity, vulnerability, and imagination. The challenge? Designing systems that amplify, rather than constrain, embodied making.


Moderator(s):
Mauro Martino

Mauro Martino, an Italian scientist and artist, combines artificial intelligence with data visualization to create interactive tools that simplify complex information. He is the founder of the Visual AI Lab at IBM Research and a Professor of Practice at Northeastern University. His work has been featured in The New York Times, The Guardian, Wired, and National Geographic, as well as in textbooks like Network Medicine (Harvard) and Network Science (Cambridge). A former research professor and MIT affiliate, he has collaborated globally on visualizing human mobility data. His award-winning projects, recognized by the NSF and Fast Company, have been showcased at Ars Electronica, Lincoln Center, and more. Published in top journals like Nature and Science, Mauro’s work highlights his significant contributions to art, science, and AI.

Rebecca Ruige Xu

Rebecca Ruige Xu is a professor of computer art at Syracuse University whose research explores the synergy between AI, data visualization, experimental animation, and interactive media. Her work investigates how emerging technologies, including AI, can enhance creative practice, particularly through artistic data representation, visual music, and digital performance. Xu’s projects have been showcased internationally at venues such as SIGGRAPH, Ars Electronica, and IEEE VIS. She also contributes to the field as Chair of the ACM SIGGRAPH Digital Arts Committee and as Co-Chair of the IEEE VIS Arts Program in 2023 and 2024.

Gustavo Alfonso Rincon

Dr. Gustavo Alfonso Rincon (Ph.D., M.Arch., M.F.A., B.S., B.A.) earned his doctorate in Media Arts and Technology at UCSB. Rincon was educated as an architect, artist, curator, and media arts researcher. His academic works have been exhibited nationally and internationally, and he has served clients globally. His dissertation is titled “Shaping Space as Information: A Conceptual Framework for New Media Architectures.” He is currently a lead researcher at the AlloSphere Research Facility, affiliated with the Media Arts & Technology Program and the California NanoSystems Institute at the University of California, Santa Barbara.