Examining the Impact of Artificial Intelligence in Museums

Brendan Ciecko, Cuseum, USA


Artificial Intelligence. It’s a concept that holds much promise, generates endless buzz, and is starting to make its way into everyday life. In 2015, artificial intelligence went mainstream, and undoubtedly, in 2017, we will begin to see an increase in experimentation within the cultural space. In this presentation, we’ll explore some of AI’s most powerful uses related to machine learning and its impact on galleries, libraries, archives, and museums in the areas of collections, ticketing, and attendance data. We’ll also examine machine vision: a computer’s ability to understand what it is seeing. Machine vision can be used to inspect and analyze images. Imagine being able to classify all of your visual objects with the flip of a switch (actually, a few lines of code). We’ll explore real examples of machine learning on the following topics:

- Identifying subject matter
- Extracting color composition
- Sentiment analysis
- Text/character recognition
- Recognizing similarity and patterns
- Art authentication

Machine learning and vision are very powerful tools and are more accessible than ever before. In the hands of museums, these technologies will inevitably lead to interesting discoveries, rich data, and new paths into your collection.

Keywords: Artificial Intelligence, AI, Machine Learning, Machine Vision

Artificial Intelligence. It’s a concept that holds much promise, generates endless buzz, and is starting to make its way into everyday life. “In 2015, artificial intelligence went mainstream” (Frankel & Hammond, 2016), and undoubtedly, in 2017, we will begin to see an increase in experimentation within the cultural space.

We will explore several of AI’s most powerful uses related to machine learning and machine vision, focusing on their impact on galleries, libraries, archives, and museums.

What is machine learning?
“Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.” (SAS, 2016)

Machine learning’s impact on collections
It comes as no surprise that museums have tremendous amounts of data. Strides have been made over the past decade towards structuring collections data and making it available for the public to access and experiment with. While still highly untapped, this valuable metadata holds power and yields interesting new ways to analyze collections, objects, and creators. But it also requires significant resources, tools, time, and expertise.

In an ideal world, GLAM (galleries, libraries, archives, and museums) collection data would be structured and well classified, but given that “more than 90 percent of (enterprise) data is unstructured, human-generated and sourced from various disparate entities” (IDC, 2015), we can assume that museum collection data would benefit from some clean-up, perhaps even an overhaul.

Could AI come to the rescue, even helping museums make new discoveries about their collections? Those working with a museum’s collections management system could “train” a system to effectively clean up, classify, and further understand their data.

In one large-scale initiative, machine learning has become a recurring theme in the Search Strategy of Europeana, the EU’s digital platform for cultural heritage, published in 2016 (Hill et al., 2016).

Do you want to quickly run a sentiment analysis across the title and didactic text of every object in the collection? You can, and it’s becoming exceedingly easy to do with the tools that are currently available.
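To make this concrete, here is a minimal sketch of lexicon-based sentiment scoring; the word lists and object labels are invented for illustration, and a production workflow would instead use a cloud NLP service or a library such as NLTK’s VADER.

```python
# Minimal lexicon-based sentiment scoring (illustrative only).
# The tiny word lists below are stand-ins for a real sentiment lexicon.

POSITIVE = {"joy", "serene", "triumph", "radiant", "celebration"}
NEGATIVE = {"grief", "war", "despair", "ruin", "mourning"}

def sentiment_score(text):
    """Return a score in [-1, 1]: average of positive/negative word hits."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    hits = [1 if w in POSITIVE else -1 if w in NEGATIVE else 0 for w in words]
    scored = [h for h in hits if h != 0]
    return sum(scored) / len(scored) if scored else 0.0

# Score every object label in a (hypothetical) collection export:
labels = {
    "obj-001": "A radiant celebration of spring",
    "obj-002": "Study of grief and mourning",
}
scores = {oid: sentiment_score(text) for oid, text in labels.items()}
```

The same loop would run unchanged over thousands of records pulled from a collections management export.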

Museums including SFMOMA have already experimented with sentiment analysis and shared their findings publicly (Higgins, 2016; Villaespesa, 2013).

Machine learning’s impact on ticketing and attendance
Imagine taking those massive sets of ticket and visitor traffic data and using AI to look for clear correlations between them and social media activity, weather, advertising spending, and other variables.
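As a sketch of what such an analysis might look like, the snippet below computes the Pearson correlation between a daily attendance series and one external variable; both series are invented for illustration, and real data would come from the ticketing system and a weather service.

```python
# Pearson correlation between daily attendance and an external variable.
from math import sqrt

def pearson(xs, ys):
    """Correlation coefficient in [-1, 1] between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

attendance = [820, 940, 1105, 1300, 1210]   # visitors per day (made up)
temperature = [41, 48, 55, 63, 60]          # degrees F (made up)
r = pearson(attendance, temperature)        # near +1: strong positive link
```

An AI system would repeat this across many variables at once (weather, social media activity, ad spend) and flag the strongest correlations automatically.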

Research at Pennsylvania State University has investigated methods of predicting attendance, as outlined in the report “Who Will Attend? – Predicting Event Attendance in Event-Based Social Network” (Zhang, Zhao, & Cao, 2015).

It is reasonable to expect that museum departments could discover new, insightful information that makes predicting crowd flow, allocating staffing resources, and overall planning more efficient.

Machine learning’s impact on membership and fundraising
Pattern recognition could easily help museums identify members who are most likely to renew, upgrade, or lapse. New tools can assist development teams on their fundraising campaigns by deciphering trends, navigating through the social graph, and automating aspects of the donor outreach.
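A rough illustration of the idea: a tiny nearest-neighbor classifier that votes on whether a member is likely to renew. The features (visits, years of membership, emails opened) and the training records are invented for this sketch; a real system would train on thousands of member histories.

```python
# Toy nearest-neighbor sketch for spotting members likely to lapse.
# Each record: ((visits last year, years as member, emails opened), renewed?)
known = [
    ((12, 5, 30), True),
    ((8, 3, 22), True),
    ((1, 1, 2), False),
    ((0, 2, 1), False),
]

def predict_renewal(features, k=3):
    """Majority vote among the k known members with the closest features."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(known, key=lambda m: dist(m[0], features))[:k]
    votes = sum(1 if renewed else -1 for _, renewed in nearest)
    return votes > 0

predict_renewal((10, 4, 25))  # engaged member: likely to renew
predict_renewal((0, 1, 0))    # disengaged member: likely to lapse
```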

Although relatively new to the market, software companies such as Gravyty and Affectly have used some of the aforementioned techniques to help nonprofits fundraise more effectively.

Machine learning’s impact on e-commerce
Major e-commerce sites like Amazon, eBay, and Zappos have been using recommendation and personalization engines for as long as anyone can remember. By analyzing your behavior (e.g., pages you visit, products you look at, and categories you explore), online retailers make recommendations to provide a more personalized experience for each visitor.
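The core of such a recommendation engine can be sketched in a few lines as item-based collaborative filtering with cosine similarity; the product names and purchase vectors below are invented for illustration.

```python
# Item-based collaborative filtering over made-up purchase histories.
from math import sqrt

purchases = {  # product -> which of 5 past customers bought it
    "monet-print":   [1, 1, 0, 1, 0],
    "waterlily-mug": [1, 1, 0, 0, 0],
    "sculpture-kit": [0, 0, 1, 0, 1],
}

def cosine(a, b):
    """Cosine similarity between two purchase vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(product):
    """The other product most often bought by the same customers."""
    others = (p for p in purchases if p != product)
    return max(others, key=lambda p: cosine(purchases[product], purchases[p]))

most_similar("monet-print")  # → "waterlily-mug"
```

A museum shop would use the same idea over its own order history, recommending the mug to visitors who viewed the print.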

Major museum online stores, such as those of The Met, MoMA, and dozens of others, already use recommendation engines. On the horizon is the concept of conversational commerce. Chris Messina of Uber said, “2016 will be the year of conversational commerce.” (Messina, 2016)

What is machine vision?
Machine vision is a computer’s ability to understand what it is seeing.

“We’re going from computers with cameras, that take photos, to computers with eyes, that can see”
– Benedict Evans, Andreessen Horowitz

Back in 2014, the Museum of Arts and Design in New York hosted a panel examining the “Cultural Impact of Computer Vision” through the eyes of artists (Museum of Arts & Design, 2014). Fast forward to the present, and we will take a look from the perspective of museums.

Impact of machine vision on identifying subject matter
Machine vision has become advanced enough to detect the subject matter and objects depicted in an image. What is depicted in this painting, photo, video, or sculpture?

Figure 1: image of “The Grand Canal in Venice from Palazzo Flangini to Campo San Marcuola” by Canaletto, J. Paul Getty Museum

Using the Google Vision API, we tested Canaletto’s The Grand Canal in Venice from Palazzo Flangini to Campo San Marcuola, located at the J. Paul Getty Museum in Los Angeles. (see figure 2)

The results were positive. The four terms returned (watercraft, rowing, gondola, and painting) were all accurate descriptions of the subject matter and objects depicted.

Figure 2: image of Terminal running script to analyze the aforementioned Canaletto painting.

There is still a long way to go with object classification, but it’s worth noting that the more you “train” a machine vision engine, the more accurate it becomes.
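For readers who want to work with such results themselves, the snippet below shows one way to filter labels by confidence from a response shaped like the Vision API’s labelAnnotations output. The JSON here is hand-written for illustration, not a real API result.

```python
# Filtering labels by confidence from a Vision-API-style response.
# This dict is a hand-written stand-in shaped like labelAnnotations output.
sample_response = {
    "labelAnnotations": [
        {"description": "watercraft", "score": 0.95},
        {"description": "rowing", "score": 0.93},
        {"description": "gondola", "score": 0.90},
        {"description": "painting", "score": 0.88},
        {"description": "vacation", "score": 0.41},
    ]
}

def confident_labels(response, threshold=0.75):
    """Keep only labels the engine is reasonably sure about."""
    return [a["description"]
            for a in response["labelAnnotations"]
            if a["score"] >= threshold]

confident_labels(sample_response)
# → ['watercraft', 'rowing', 'gondola', 'painting']
```

Tuning the threshold is one simple lever for trading recall against the noise of low-confidence tags in a collection database.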

Museums such as the Harvard Art Museums, the Minneapolis Institute of Art, and the Norwegian National Museum are among the first to experiment with this approach and share their findings publicly (Westvang, 2016).

Machine vision’s impact on sentiment analysis
If there are unobstructed human faces in an image, machine vision can determine the emotional state of those portrayed by analyzing their facial characteristics.

To put this process to the test, we ran a few portraits through the Emotion API of Microsoft Cognitive Services. (see figures 3-5)
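Once an emotion API returns per-emotion confidence scores, picking the dominant emotion is straightforward; the scores below are invented for illustration, loosely shaped like the Emotion API’s output for a laughing portrait.

```python
# Picking the dominant emotion from an Emotion-API-style score dict.
def dominant_emotion(scores):
    """Return the (emotion, score) pair with the highest confidence."""
    emotion = max(scores, key=scores.get)
    return emotion, scores[emotion]

rembrandt_scores = {  # hypothetical scores, invented for this example
    "happiness": 0.92, "neutral": 0.05, "surprise": 0.02, "sadness": 0.01,
}
dominant_emotion(rembrandt_scores)  # → ('happiness', 0.92)
```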

Figure 3: Image of “Bust of a Laughing Young Man” (1629) by Rembrandt (circle of), Rijksmuseum.


Figure 4: Image of “Femme aux Bras Croisés” (1901) by Pablo Picasso, Private collection.


Figure 5: Image of “Self-Portrait” (1912) by Otto Dix, Detroit Institute of Arts.

Machine vision’s impact on text/character recognition
The ability to extract text from every object in your collection has existed for many years. The technology, commonly known as “optical character recognition” (OCR), has recently become more accessible and faster to use via cloud APIs.

Figure 6: Image of “California Grapeskins” (2009) by Ed Ruscha.

While this might not be absolutely necessary for pieces by Lawrence Weiner (as the title and text displayed in his works are usually the same), this function’s greatest value could come from extracting text from written documents (historical letters, etc.) so that it’s searchable and easy to classify.

In this Ed Ruscha piece titled California Grapeskins, the full text can be successfully extracted, providing additional information that may not be available in its collection data record. (see figure 6)
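As a sketch of the “searchable” half of that workflow, the snippet below builds a tiny inverted index over text that has already been extracted by OCR; the object IDs and texts are invented examples.

```python
# Making already-extracted OCR text searchable with an inverted index.
from collections import defaultdict

ocr_texts = {  # hypothetical OCR output, keyed by object ID
    "obj-101": "california grapeskins",
    "obj-102": "letter from california, dated 1851",
}

index = defaultdict(set)  # word -> set of object IDs containing it
for oid, text in ocr_texts.items():
    for word in text.replace(",", " ").split():
        index[word.lower()].add(oid)

def search(word):
    """Return the IDs of all objects whose extracted text contains word."""
    return sorted(index.get(word.lower(), set()))

search("california")  # → ['obj-101', 'obj-102']
```

A production system would hand the extracted text to the collection’s existing search engine, but the principle is the same.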

Machine vision’s impact on extracting color composition
Color composition is one meta-tag that you are unlikely to find in most museums’ collections databases. Running an object’s image through a computer vision tool can extract and output data on its color clusters, partitions, and histogram.
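A minimal sketch of the clustering step, assuming the image’s pixels have already been read into a list of RGB tuples (a real pipeline would use a library such as Pillow or OpenCV for that): a tiny k-means pass that finds the dominant colors.

```python
# Tiny k-means over RGB pixels to find an image's dominant colors.
import random

def dominant_colors(pixels, k=2, iterations=10, seed=0):
    """Cluster pixels and return k representative (R, G, B) centers."""
    random.seed(seed)
    centers = random.sample(pixels, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in pixels:  # assign each pixel to its nearest center
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [tuple(sum(ch) // len(cl) for ch in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers

# A mostly-blue canvas with a patch of gold (synthetic pixels):
pixels = [(20, 40, 200)] * 80 + [(220, 180, 40)] * 20
dominant_colors(pixels)
```

Tagging every object with its top colors this way is what enables the browse-by-color interfaces described below.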

Cooper Hewitt, Smithsonian Design Museum and Google Arts & Culture have implemented this process to offer a new approach to discovery. (see figures 7-8)

Figure 7: Screenshot of Cooper Hewitt’s collections website where visitors can browse objects by color.


Figure 8: Screenshot of Google Arts & Culture’s app where visitors can browse objects by color.

Machine vision’s impact on recognizing similarity and patterns
Are there other works in your collection that are very similar, not just in subject matter but in visual composition? A computer can see these relationships and quantify the differences and similarities.
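One simple way a computer can put a number on such a difference is a mean pixel-by-pixel comparison. The sketch below uses invented grayscale values and ignores real-world complications such as aligning and resizing the two scans.

```python
# Quantifying how different two images are, pixel by pixel.
# Images are flat lists of grayscale values (0-255), invented here.
def percent_difference(img_a, img_b):
    """Mean absolute pixel difference as a percentage of full scale."""
    assert len(img_a) == len(img_b)
    total = sum(abs(a - b) for a, b in zip(img_a, img_b))
    return 100 * total / (255 * len(img_a))

a = [10, 200, 30, 255]
b = [10, 190, 40, 250]
round(percent_difference(a, b), 2)  # → 2.45
```

More robust similarity measures (perceptual hashes, deep feature embeddings) follow the same pattern: reduce two images to comparable numbers, then measure the distance.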

For example, these two Clyfford Still “replica” paintings are slightly different: 5.58 percent different, to be exact. (see figure 9)

Figure 9: Image of Clyfford Still paintings (L to R) PH-225, (1956). Oil on canvas. Collection of the Modern Art Museum of Fort Worth; PH-1074, (1956–9). Oil on canvas. Clyfford Still Museum © City and County of Denver

I was personally inspired to uncover this after visiting the Clyfford Still Museum in October of 2015 for Repeat/Recreate, a fascinating exhibition in its own right. The museum’s director of digital media, Sarah Wambold, has also written about this concept in an article titled ”Twinsies!” (Wambold, 2016)

Walking into the Impressionism Gallery at the Museum of Fine Arts, Boston, you’ll find these two paintings by Claude Monet, side-by-side. According to computer analysis, the two works are 96.81% similar. (see figure 10)

Figure 10: Image of Claude Monet paintings (L to R) Water Lilies, 1905. Oil on canvas; Water Lilies. 1907. Oil on canvas. Collection of Museum of Fine Arts, Boston.

Machine vision’s impact on art authentication
Back in 2008, PBS NOVA covered the case of computers helping distinguish forged art from original masterpieces. This project was in cooperation with the Van Gogh Museum and challenged computer scientists to build tools to analyze brush strokes and identify forgeries. (see figure 11)

Figure 11: Image of brush stroke analysis by computer program.

Recent revelations: Tate Britain IK Prize
In September 2016, artificial intelligence was the core topic of a museum exhibition and project at Tate Britain. The winner of the 2016 IK Prize utilized various aspects of machine vision, such as subject matter identification, composition analysis, and facial recognition. (see figure 12)

Figure 12: Images of photos and painting from Tate IK Prize. From left: Eduardo Munoz/Reuters. Stephen McKenna, via Tate.

In response to the project and exhibition, The New York Times published the story “Artificial Intelligence as a Bridge for Art and Reality” with voices from the museum community: “James Cuno, president of the J. Paul Getty Trust and an evangelist for the use of technology by art historians, assessed ‘Recognition’ as a ‘well-meaning and an interesting experiment.’ Then he added, ‘It shows that we are in the early stages of the development of this technology and that there’s still a long way to go.’”

Another notable position, found in artnet News’ “Art World Predictions for 2017,” stated that “the burgeoning field of Artificial Intelligence will finally figure out curating” and pointed to a project called HUO 9000. Without question, we can expect more projects like this to emerge that further challenge the status quo of art curation.

Artificial intelligence is being lauded as “the future.” There is untapped value to be unleashed across sectors exploring its commercial, scientific, and educational potential. With machine learning and vision tools more accessible than ever before, museums have the opportunity to innovate and optimize in areas that were previously too costly or resource-prohibitive to pursue.

Regarding broader applications of AI, we must acknowledge that creative bots are already creating paintings, writing screenplays, and composing music. In the future, will AI write object labels, script audio guides, and assist with interpretation? Should we allow machines to do this?

Stephen Hawking predicts “computers will overtake humans with AI within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” This may sound ominous, but we can be (almost) certain that museums and cultural institutions will have mankind’s best interests in mind.

This is just the beginning.


Ciecko, Brendan. (2016) Exploring Artificial Intelligence in Museums. last updated February 25, 2016. Consulted February 2017. http://blog.cuseum.com/post/139971318568/exploring-artificial-intelligence-in-museums

Ciecko, Brendan. (2016) 6 Ways that Machine Vision can Help Museums. last updated March 10, 2016. Consulted February 2017. http://blog.cuseum.com/post/140786158798/6-ways-that-machine-vision-can-help-museums

Davis, Ben. (2017) Google Sets Out to Disrupt Curating With “Machine Learning”. artnet News. last updated January 14, 2017. Consulted February 2017. https://news.artnet.com/art-world/google-artificial-intelligence-812147

Dobrzynski, Judith. (2016). Artificial Intelligence as a Bridge for Art and Reality. New York Times. https://www.nytimes.com/2016/10/30/arts/design/artificial-intelligence-as-a-bridge-for-art-and-reality.html

Frankel, S., & Hammond, K. (2016). 5 Predictions for Artificial Intelligence in 2016. Time Magazine. http://time.com/4175663/5-predictions-for-artificial-intelligence-in-2016/

Hill, T., Haskiya, D., Isaac, A., Manguinhas, H., & Charles, V. (2016) Europeana Search Strategy. last updated April 23, 2016. Consulted February 2017. http://pro.europeana.eu/files/Europeana_Professional/Publications/EuropeanaSearchStrategy_whitepaper.pdf

Higgins, John. (2016) Sentiment Analysis. last updated February 2015. Consulted February 2017. https://www.sfmoma.org/read/sentiment-analysis/

Markoff, John. (2015). A Learning Advance in Artificial Intelligence Rivals Human Abilities. New York Times. https://www.nytimes.com/2015/12/11/science/an-advance-in-artificial-intelligence-rivals-human-vision-abilities.html

Messina, Chris. (2016) 2016 will be the year of conversational commerce. last updated January 19, 2016. Consulted February 2016. https://medium.com/chris-messina/2016-will-be-the-year-of-conversational-commerce-1586e85e3991

Museum of Arts & Design. (2014) Cultural Impact of Computer Vision. last updated November 2014. Consulted February 2016. http://madmuseum.org/events/cultural-impact-computer-vision

SAS. (2016) Machine Learning: What it is and why it matters. Consulted February 2017. https://www.sas.com/en_us/insights/analytics/machine-learning.html

Villaespesa, Elena. (2013). “Diving into the Museum’s Social Media Stream. Analysis of the Visitor Experience in 140 Characters.” In N. Proctor & R. Cherry (eds). Museums and the Web 2013. Silver Spring, MD. Consulted February, 2016. http://mw2013.museumsandtheweb.com/paper/diving-into-the-museums-social-media-stream/

Wambold, Sarah. (2016) Twinsies!. last updated January, 2016. Consulted January 2016. https://clyffordstillmuseum.org/twinsies/

Westvang, Evan. (2016) Deep Learning at the Museum. last updated January 2016. Consulted January 2016. http://bengler.no/blog/deep-learning-at-the-museum

Zhang, X., J. Zhao, & G. Cao. (2015) Who Will Attend? – Predicting Event Attendance in Event-Based Social Network. Consulted February 2017. http://ieeexplore.ieee.org/iel7/7263115/7264280/07264306.pdf

Cite as:
Ciecko, Brendan. "Examining the Impact of Artificial Intelligence in Museums." MW17: MW 2017. Published February 1, 2017.
