How AI-Human Collaborations in Art Deepen Audience Engagement

An essay on AI and machine hallucinations by renowned artist Refik Anadol and Pelin Kivrak, a scholar at his studio. The Museum of Modern Art in New York recently acquired one of his installations.

Refik Anadol Studio

Part of a special issue on AI and customer engagement in Management and Business Review, a publication backed by top business schools worldwide.

The metaverse, with its immersive virtual reality environment, promises a dynamic canvas for the bold and imaginative. It is a new territory in which to explore our desire for collaborations between machine and human, whether by building a new virtual universe with its own intrinsic values or a parallel world where presence and perception have deeper meanings.

Using data as pigment

Refik Anadol Studio (RAS) in Los Angeles has been experimenting with hybrid forms of AI-based art-making, combining the creative possibilities of the metaverse with the studio’s decade-long vision of embedding new media arts in architecture. Anadol’s renowned body of work takes publicly available datasets that represent the human experience in various ways, ranging from millions of photographs of bustling urban centers to a comprehensive image and video dataset capturing the glaciers of the world.

"I use data as a pigment and paint with a thinking brush that is assisted by artificial intelligence. Using architectural spaces as canvasses, I collaborate with machines to make buildings dream and hallucinate." -- Refik Anadol, TED Talks 2020


The primary thread that runs throughout the studio’s visualizations of the unseen world is the use of data as pigment to create enriched immersive environments. Taking the data that surrounds us as the primary material and the neural network of a computer as a collaborator, the studio’s site-specific artificial intelligence (AI) data paintings and sculptures, live audiovisual performances, and environmental installations encourage us to rethink our engagement with the physical world, decentralized networks, collective synesthesia, and the quantifiable healing power of art.

Since 2012, RAS has been conducting interdisciplinary research on the interconnection of the human mind, architecture, and aesthetics in an effort to answer this question: If machines can learn or process individual and collective memories, can they also dream or hallucinate about them? After experimenting with digital paintings and sculptures of architectural data and exhibiting data-based immersive projections in public spaces, RAS started working with AI programs and machine learning algorithms using a diversity of data, be it visual, auditory, seismic, geographic, cultural, or institutional.

An ambitious beginning

The studio’s journey toward this unique machine intelligence approach to creating art began with the 2016 project titled Archive Dreaming. Refik Anadol was invited to join the Google Artist-in-Residence program for Artists and Machine Intelligence (AMI). There, engineers and thinkers in the field, including Google’s own Blaise Aguera y Arcas, Kenric McDowell and Mike Tyka, helped the entire RAS team to learn and use AI to create Archive Dreaming.


To complete the large-scale work, RAS collaborated with SALT Research collections in Istanbul, which provided access to 1.7 million documents from the 17th to 20th century archives of the Ottoman Bank, a dominant financial institution during the Ottoman Empire. We employed machine learning algorithms to search and sort relationships among these data points while translating interactions of the multidimensional data in the archives into an immersive and interactive media installation.
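The studio has not published its pipeline, but the core idea, relating archive documents through the similarity of precomputed feature vectors, can be sketched in a few lines. The function name, toy data, and vector dimensions below are all illustrative, not the studio's actual code:

```python
import numpy as np

def nearest_documents(embeddings: np.ndarray, query_idx: int, k: int = 5):
    """Return indices of the k documents most similar to the query,
    using cosine similarity over precomputed feature vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    # Rank by similarity, excluding the query document itself.
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]

# Toy archive: 8 documents, each reduced to a 4-dimensional feature vector.
rng = np.random.default_rng(0)
docs = rng.normal(size=(8, 4))
print(nearest_documents(docs, query_idx=0, k=3))
```

At archive scale (1.7 million documents), an approximate nearest-neighbor index would replace the brute-force ranking, but the relationship being computed is the same.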

Archive Dreaming, presented in Istanbul in 2017 as part of The Uses of Art: Final Exhibition, supported by the Culture Programme of the European Union, was user-driven. Yet when idle, the installation ‘dreamt’ of unexpected correlations between documents, yielding generative, aesthetic, and fresh visuals representing those serendipitous links and overlaps.


Refik Anadol Studio translated the resulting high-dimensional data and interactions into an immersive architectural space, projected onto a circular area at 7680 × 1200 resolution. The installation – at times displaying swirling points of light in a rotating tunnel, or flashes of brightness with shapes that shimmer and disappear, or cloud clusters and showers of illumination – allowed the audience to browse the millions of documents and images within the archive. Curious art lovers could peek inside this universe of data and zoom in on any file at their pleasure. After its debut at SALT Galata, an extension of the project was exhibited at the Ars Electronica Festival 2017 in the section titled Artificial Intelligence.

RAS rendered these images as three-dimensional forms determined by the similarities between documents found by machine learning algorithms. Thousands of Istanbulites and researchers from around the world visited the installation in the heart of bustling Istanbul. Kenric McDowell, who leads Google’s AMI program, described the generative potential that this groundbreaking project offered at the intersection of culture and technology: “The experience is visually and architecturally impressive, but what’s most exciting about it is the way it reframes the archive as a multidimensional space of images connected by features, that is to say, visual relatedness. The piece also uses the machine learning model to hallucinate new images that could exist in the archive but don’t. To my knowledge this is the first time a museum archive has become generative in this way.”

Architectural ‘consciousness’

Another turning point in the studio’s visualizations of institutional archives was Walt Disney Concert Hall Dreams (WDCH Dreams), a collaboration with the Los Angeles Philharmonic (LA Phil) to celebrate its 2018/19 centennial season.

For this project, RAS developed a unique software to process the orchestra’s digital archives – 45 terabytes of data containing 40,000 hours of audio from 16,471 performances. The AI parsed the files into millions of data points that it then categorized by hundreds of attributes using deep neural networks with the capacity to both record the totality of the LA Phil’s memories and create new connections between them.
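The essay does not specify which attributes the networks extracted, but a minimal illustration of turning raw audio into a handful of sortable attributes, here RMS loudness and dominant frequency, might look like the following. The clip and sample rate are toy values, and the real system used deep networks rather than these hand-coded features:

```python
import numpy as np

def clip_attributes(waveform: np.ndarray, sample_rate: int = 22_050) -> dict:
    """Derive coarse attributes from a mono audio clip:
    overall loudness (RMS) and the dominant frequency from the FFT."""
    rms = float(np.sqrt(np.mean(waveform ** 2)))
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    dominant = float(freqs[int(np.argmax(spectrum))])
    return {"rms": rms, "dominant_hz": dominant}

# Toy clip: one second of a 440 Hz tone.
t = np.linspace(0, 1, 22_050, endpoint=False)
attrs = clip_attributes(0.5 * np.sin(2 * np.pi * 440 * t))
print(attrs)  # dominant_hz should be close to 440
</antml_code_unused>```

Run over millions of parsed segments, attributes like these become the axes along which an archive can be sorted and reconnected.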

This ‘data universe’ generated something new in image and sound by awakening the metaphorical consciousness of Walt Disney Concert Hall. The visual art was projected onto the building’s curved exterior walls, a swirl of moving images ranging from a rainfall of data, to musicians playing, to waves of white blocks surging like a digital ocean, and other abstract imagery. The result was a radical visualization of the LA Phil’s first century and an exploration of the synergies between art and technology, architecture, and institutional memory. The work presented the philharmonic’s entire digital archives in a non-linear way. It also featured an interactive companion installation in a U-shaped room in which two-channel projection provided multiple experiences for visitors.

RAS also collaborated with the Philadelphia Orchestra in the spring of 2022 on a project titled Beethoven: Missa Solemnis 2.0. In Verizon Hall, a new AI-based artwork unfolded for an audience as they listened to the composer’s 1823 masterwork. The research question for this unique artwork’s dataset was: “Why not use artificial intelligence to try to reconstruct the reality of what Beethoven could imagine?” With this motivation, the RAS team started compiling a dataset of 12 million images of buildings that Beethoven could have encountered in Europe during his life. We then used this archive to train a custom algorithm that generated a new data universe of artificial architectural images for us to curate into the final artwork.

The dynamic visual artwork, consisting of the machine’s hallucinations of alternative European architectures, alluded to religious spaces and iconography, generating the feel of a virtual cathedral for the performance of this sacred music piece. We created another software tool that allowed the AI to listen to the orchestra performing Beethoven, simultaneously generating and projecting its dreams for the audience. This addition further manifested the vision of the borderlessness of art that the studio espoused with the WDCH Dreams project.

These early works of interdisciplinary and international collaboration paved the way for the global impact of RAS’ immersive artworks to gradually expand. Infinity Room, an artwork developed in 2015, combines the boundlessness of space with the endless permutations of machine intelligence, inviting visitors to step into a 360-degree mirrored room that uses light, sound, scent, and technology to offer a seismic perception shift.

Originally presented in collaboration with the 2015 Istanbul Biennial, Infinity Room began traveling the world. More than two million people have now experienced it. In 2021, the National Gallery of Victoria exhibited Quantum Memories, the studio’s epic scale investigation of the intersection of architecture, machine learning, and the aesthetics of probability on the largest LED screen it had ever used. The exhibit has welcomed 1.7 million visitors, the largest audience to experience a digital artwork in Australia.

Machine Hallucinations: Nature Dreams, a solo exhibit of AI data sculptures and paintings based on nature-themed datasets in Berlin’s Koenig Gallery, drew 200,000 visitors in five weeks, the largest audience to visit a gallery in Europe. Machine Hallucinations: Space was the product of our collaboration with NASA’s Jet Propulsion Laboratory, beginning in 2018. It is an immersive art piece rooted in publicly available photographs of space taken by satellites and spacecraft. Ninety thousand visitors in three weeks gave it the largest audience ever to view an artwork in Hong Kong.

Engaging a diverse audience

RAS’s broad experience of creating such large-scale, immersive, multi-sensory installations around the world motivated the team to take these ideas to the metaverse. In the spring of 2021, RAS had the opportunity to explore its potential for creative production with a collaborative artwork centered around Barcelona’s iconic 1906 Gaudí building, Casa Batlló.

We created Living Architecture as an example of how an artwork could engage a diverse audience by existing in multiple and dynamic virtual forms while still exhibiting a strong connection to the physical world. This long-term project began when RAS was commissioned to create In the Mind of Gaudí, an AI-based immersive experience presented in an LED cube room, lined with six screens, in the basement of Casa Batlló.

For this initial 360-degree experience, RAS collected approximately one billion images consisting of Gaudí’s sketches, visual archives of the building’s history, academic archives, and publicly available photos of Casa Batlló found on various internet and social media platforms. We processed them with machine learning classification models, which sorted images into thematic categories.
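As a rough sketch of this sorting step: once images are reduced to feature vectors, each can be assigned to whichever thematic centroid it falls closest to. The category names, centroids, and two-dimensional features below are invented purely for illustration; the studio's models and categories are not public:

```python
import numpy as np

# Hypothetical thematic centroids (e.g. "facades", "mosaics"),
# assumed to have been learned in an earlier training pass.
CENTROIDS = {
    "facades": np.array([0.0, 0.0]),
    "mosaics": np.array([5.0, 5.0]),
}

def classify(feature: np.ndarray) -> str:
    """Assign an image feature vector to the nearest thematic centroid."""
    return min(CENTROIDS, key=lambda name: np.linalg.norm(feature - CENTROIDS[name]))

print(classify(np.array([0.2, -0.1])))  # facades
print(classify(np.array([4.8, 5.3])))   # mosaics
```

The same nearest-centroid logic scales to deep image embeddings; only the dimensionality and the number of categories change.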

With the help of our custom-generated software and fluid simulation models, making the images dynamic, we then transformed this data universe, inspired by Gaudí, generated by machine, and curated by humans, into an AI data sculpture. We used subsets of the data to train an AI model, causing the machine to hallucinate new aesthetic images and color combinations through algorithmic connections. We then clustered the images into thematic categories to better understand the semantic context of the data universe.

This expanding data universe was not just the interpolation of data as synthesis but a latent cosmos in which the hallucinations of the AI were the main creative and artistic currency. To capture these hallucinations, we used NVIDIA’s StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA) to build an AI model through which the machine could process the archive. The model was trained on subsets of the sorted images, creating embeddings in 1024 dimensions.
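The trained generator itself is far too large to reproduce here, but the way new "hallucinations" are typically drawn from a GAN, by walking smoothly between latent codes, can be sketched as follows. A 512-dimensional latent is the common StyleGAN2 default used in this toy example (the studio reports 1024-dimensional embeddings), and the call into the generator network is omitted:

```python
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two latent vectors: the usual way
    to move smoothly through a GAN's latent space without leaving the
    region the generator was trained on."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two random latent codes; feeding each intermediate point to the generator
# would yield a smooth morph between two hallucinated images.
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=512), rng.normal(size=512)
midpoint = slerp(z_a, z_b, 0.5)
print(midpoint.shape)  # (512,)
```

Curating which latent paths to render is where the human side of the collaboration re-enters the process.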

The next stage was data pigmentation. We combined visual elements from a range of sources into single images. For more than 10 years, we have been experimenting with custom software and fluid simulation models to give these visuals a unique dynamism. Fluid simulation allows a computer to emulate and generate the visual qualities and behavior of a fluid. It is one of our studio’s signature visual effects in data visualizations, and we have used it at various levels of complexity, including real-time and interactive animation.
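A production solver is considerably more involved, but the basic building block of grid-based fluid simulation, letting each cell's value spread toward its neighbours step by step, can be shown in miniature. Grid size and diffusion rate here are illustrative, and real solvers add advection, pressure projection, and boundary handling:

```python
import numpy as np

def diffuse(field: np.ndarray, rate: float = 0.25) -> np.ndarray:
    """One diffusion step on a 2-D grid (wrapping edges): each cell moves
    toward the average of its four neighbours."""
    up    = np.roll(field,  1, axis=0)
    down  = np.roll(field, -1, axis=0)
    left  = np.roll(field,  1, axis=1)
    right = np.roll(field, -1, axis=1)
    return field + rate * (up + down + left + right - 4 * field)

# A single bright "drop" of data pigment spreading over a 5x5 grid.
grid = np.zeros((5, 5))
grid[2, 2] = 1.0
for _ in range(3):
    grid = diffuse(grid)
print(round(grid.sum(), 6))  # total pigment is conserved: 1.0
```

Iterating steps like this at high resolution, with velocity fields driving the motion, produces the flowing, liquid look of the studio's data paintings.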

Redefining the museum experience for visitors

For Living Architecture: Casa Batlló, we synthesized the vast data from Gaudí archives into ethereal data pigments and eventually into fluid-inspired movements. In the Mind of Gaudí was part of Casa Batlló’s monumental re-opening after the pandemic, a series of immersive and interactive tours that created new connections between the masterpiece and its visitors.

The installation used a 360-degree art experience in the LED cube, billed as the world’s first 10D experience, to offer unprecedented insight into the Catalan architect’s mind, drawing on the world’s largest digital Gaudí library. The technology we used to represent Gaudí’s eccentric aesthetic, a dynamic multi-sensory space with more than 1,000 digital projections, 21 audio channels, binaural sound, and scent, redefined the museum experience and attracted audiences of all ages and backgrounds from around the globe.

We went on to build a second artwork from the same Gaudí archive, one that can be experienced and celebrated both physically and in the metaverse as an NFT (non-fungible token). An NFT records ownership of a digital artwork as a unique, non-duplicable token on a blockchain, giving the work value as a one-of-a-kind item. In Living Architecture, RAS radically interpreted Gaudí’s building and its memories, making it the first UNESCO World Heritage Site to be represented in a dynamic NFT.

Using the building’s iconic façade as a canvas, we displayed an algorithmically generated, dynamic abstraction. The digital piece was also shown at Rockefeller Plaza in Manhattan, New York. To amplify the visual and physical impact of the work, we created a custom scent and audio soundtrack for the pre-sale of the exhibition which complemented the urban environment.

The work was sold at Christie’s 21st Century Evening Sale on May 10, 2022, for $1.38 million, and we donated 10% of the proceeds to institutions that work with neurodiverse adults and children. Three days prior, we had projected a mapped version of the piece onto the facade of Casa Batlló before 47,000 people. Living Architecture embodied how advanced our experimentation with machine learning tools and blockchain technology has become: sensors in Barcelona collected real-time environmental data that caused both the NFT piece and the art projected on the façade to change along with the city’s weather.
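The studio's actual sensor pipeline is not public. As an illustrative sketch, live weather readings might be mapped onto rendering parameters along these lines; every mapping, range, and parameter name below is invented for the example:

```python
def visual_params(temperature_c: float, wind_kmh: float, humidity: float) -> dict:
    """Map live weather readings onto rendering parameters so the artwork
    shifts with the city's conditions. All ranges are illustrative."""
    def clamp(x: float) -> float:
        return max(0.0, min(1.0, x))

    return {
        # Warmer air pushes the palette from blue (0.6) toward red (0.0).
        "hue": round(0.6 * (1 - clamp((temperature_c + 10) / 50)), 3),
        # Stronger wind drives more turbulent motion in the fluid field.
        "turbulence": round(clamp(wind_kmh / 100), 3),
        # Higher humidity softens the image with more diffusion.
        "blur": round(clamp(humidity), 3),
    }

print(visual_params(temperature_c=25.0, wind_kmh=30.0, humidity=0.6))
```

Feeding such parameters into the renderer on every frame is what lets a single artwork remain perpetually unfinished, changing with its environment.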

An ongoing experiential artwork

Technologist and startup CEO Sagi Eliyahu defines customer engagement as an ongoing emotional relationship, best described as the sum of multiple moments: the customer’s overall emotional connection resulting from the totality of the experience. The Casa Batlló project offered exactly this connection, culminating in the final sale of the NFT artwork. As Gary Gautier, Casa Batlló’s director, put it, “The landmark was seen around the globe thanks to an unprecedented audiovisual show that was watched live from Barcelona, New York, the metaverse, and thousands of homes. Those who had the chance to watch it witnessed a unique moment in art history: the bringing together of a World Heritage Site and digital artwork from Refik Anadol.”

This NFT project perfectly reflected RAS’s vision of reaching a diverse audience through multiple channels of sensory engagement. We see such meaningful collaborations between machine and human not only as aesthetic representations but also as productive experiments that can create multiple forms of psychological fulfillment and empowerment in both the physical and virtual worlds.

The NFT sale allowed us to contribute to a worthy cause, raising further awareness of such work as a meaningful merging of science, technology, and the arts to address global issues. It also demonstrated the potential of metaverse architectures to reach a broader audience through decentralized creativity, and of blockchain-based fundraising to provide tangible support for charities tackling diverse global problems.


About the Author(s)

Refik Anadol

Internationally renowned artist

Refik Anadol is an internationally renowned artist whose data-driven machine learning algorithms create abstract art in motion. He is the director of Refik Anadol Studio in Los Angeles and a former Google Artist-in-Residence.

Pelin Kivrak

Pelin Kivrak is a senior researcher at Refik Anadol Studio and a lecturer at Tufts University's Department of English.
