Completing the picture: how a graphical interface for understanding Modern art is meeting the touch of non-sighted visitors

Sina Bahram, Prime Access Consulting, Inc., USA, Peter Samis, SFMOMA, USA, Thomas Ryun, Belle & Wissell, Co., USA, Scott Thiessen, Belle & Wissell, Co., USA

Abstract

Building on the landmark work done by the Canadian Museum for Human Rights (Wyman et al., 2016), the San Francisco Museum of Modern Art (SFMOMA) engaged the services of Sina Bahram from Prime Access Consulting (PAC) as the museum prepared digital interpretive programs for the re-opening of its new and expanded building. Together with interactive design partners Belle & Wissell, Co., we asked a critical question: how can we make the fundamentally visual experiences we were designing for our new Painting & Sculpture Interpretive Gallery equally available and engrossing to a non-sighted audience? Our goal was twofold: (1) to make large touchscreen graphical user interfaces accessible to the blind and vision-impaired; and (2) to convey a sense of the artworks in the galleries, both in their visual aspect and the stories that inform them, to this same audience. By adding a bespoke digital talking overlay on top of the visual interface and following best practices laid out by Apple, Microsoft, and others on their touch platforms, we created an approach that allows for an equitable experience for eyes-free audiences while still allowing us to achieve the full design vision for the experience. This paper describes the following:

  • Which design and pre-implementation considerations were made to help ensure an inclusive experience;
  • How a complex, visually rich graphical user interface (GUI) was transformed into a series of single-finger interactions in an equivalent and parallel audio environment;
  • What was required in both the technology implementation and the authoring platform within the content management system (CMS);
  • Implications of eyes-free use for physical design;
  • Implications of low-vision use, and use in tandem with a sighted companion, for visual design;
  • The content-related task of producing over 100 audio descriptions for artworks and visual ephemera in the wall and table stories;
  • Potential benefits to all visitors and to the museum’s institutional knowledge about its collection thanks to this initiative.

Keywords: inclusive design, touchscreen overlay, accessibility, eyes-free users, audio description, modern art

Introduction and motivation

We in museums know that many of our everyday visitors feel insecure when faced with artworks in our galleries. In fact, we have our work cut out for us just to entice sighted people who feel “art museums are not for them” to visit for the first time. The problem is even more acute for blind and low vision audiences: why would they even consider coming to an art museum, the “Palace of the Visual” par excellence? What’s in it for them? Well, for one thing, art museums enjoy enormous prestige in our culture. The visual expressions they contain are equated with the greatest achievements in other creative fields such as literature, music, and theater—which are all, arguably, far more accessible to them. Traditionally, blind and low vision people have avoided our halls unless they had friends or families willing to guide them and capable of describing the visual sensations on tap in some detail. Otherwise, we may as well have been black holes to them.

What, though, is the quality of experience that we can aspire to offer? Might it be that in serving blind and visually impaired visitors, we can actually model a more inclusive approach that could be mapped to other individuals who feel as though museums have nothing to offer them—like the general public? It was with this question in mind that the San Francisco Museum of Modern Art (SFMOMA) decided to design interpretive galleries in its new building that would be universally inclusive and offer rich, meaningful experiences built around our collection to sighted and non-sighted visitors alike. To do so, we enlisted two partners: Sina Bahram, principal of Prime Access Consulting (PAC), and Belle & Wissell, Co. (BWCo), an interactive digital design firm based in Seattle that signed onto our new project with its inclusive design mandate before they—or we at the museum for that matter—fully understood what it would entail. It was a long journey from good intentions to a finished, accessible product, with moments of uncertainty along the way. But the Canadian Museum for Human Rights team had shown it was possible (Timpson & Trevira, 2013; Wyman et al., 2016), and we were determined to apply their lessons to the art museum context.

A photo inside SFMOMA's Painting and Sculpture Interpretive Gallery shows three visitors interacting with the Wall Experience. Two are seated on cushions while a third visitor uses both hands to interact with a large touch screen.
Figure 1: SFMOMA’s Painting & Sculpture Interpretive Gallery; Touch-Wall showing three adjacent user-stations each two screens wide. From left to right: Theme Story “The Eyes Have It;” Theme Story “Found Objects;” and Artworks Up Close; “Frieda and Diego Rivera” (Photo © Henrik Kam)

The site for the museum’s initial foray into inclusive design was Gallery 218, the Painting and Sculpture Interpretive Gallery at the end of the second floor’s gallery loop (or the beginning, if you chose to enter from there) (figure 1). While the model of interaction that we chose for sighted visitors was highly visual—a 4 x 12-foot touch-wall experience divided into three large, immersive, graphical user interfaces (GUI) (figure 2), and a set of smaller touch-tables with another GUI comprising three parallel streams of text and image icons flowing across them (figure 3)—the determination to follow inclusive design principles into the gallery’s digital interfaces meant that someone who is blind or low vision would still be able to appreciate the content and quality of these digital experiences; that someone who is deaf or hard of hearing would be able to read the words spoken in an embedded video; and that a visitor in a wheelchair would be able to access the content they desired from seated height. With PAC and BWCo as partners, SFMOMA was ready to tackle the daunting task of making visual artworks presented in a highly visual GUI accessible, and far more importantly, delightful.

A screenshot of the Wall Experience interface shows a large artwork covered by several scattered circular plus icons, each indicating a touchable hotspot region. The menu overlay extends over the top of a fifth of the right hand side of the interface.
Figure 2: Wall Experience GUI for an “Artwork Up Close” story with menu column overlay extended on the right hand side.

A screenshot of the Table Experience GUI shows an arrangement of tiles—some containing a few words, some containing images—in three horizontal rows.

Figure 3: Table Experience “River” GUI of three parallel “streams” of tiles bearing keywords and images that lead to stories once selected or combined

This paper lays out our approach from the initial collaborations among all members of the team to the evaluations carried out to assess how well the system works and to gather feedback for future iterations. The remainder of the paper is organized as follows: (1) we explore related work; (2) we review our process for collaboration, while sharing some practical tips and tricks that helped us work together remotely; (3) we explain the implementation of the system while considering various trade-offs and advantages within the context of design and execution; (4) we share the results of our informal formative and developmental evaluations; and (5) we conclude.

Background and related work

The work we present in this paper relies heavily on eyes-free access to a graphical user interface (GUI), with a primary input modality of touch and a primary output modality of audio. When considering eyes-free access to touch-based interaction, it is important to realize that prior work exists in both computer science as well as museum literature. Previous work also exists on guidelines that codify digital accessibility best practices.

Within computer science, work by Kane et al. (2008) on Slide Rule and Access Overlays (2011) lays the foundation upon which commercial touch-based screen reading systems are built. From Microsoft’s Narrator in Windows 10 and beyond (Microsoft Narrator) to the ubiquitous VoiceOver screen reader from Apple (Apple VoiceOver) to Android’s Talkback from Google (Google Talkback), a common thread of design decisions can be observed. An example of such a design decision is the use of an access overlay, e.g., the ability for the eyes-free user to lay a finger down on the screen and learn what is there, either by sliding the finger around or by performing swipe gestures. Another common thread includes various affordances for activating an item, e.g., double-tapping to activate the last component spoken aloud. Such design decisions are further discussed later in this paper as they relate to the work being presented, and much effort was made to align as closely as possible with accepted digital accessibility best practices. The primary departure from modern mobile-based screen reading technology is a self-imposed constraint that the entire system be usable with only a single finger. We discuss this constraint further in the implementation section.

Within the museum space, work by Wyman et al. (2016) lays out a strategy of inclusive design that served as the inspiring principles for the work discussed in this paper. More specifically, Wyman et al. lay out specific insights centered around eyes-free navigation of an interface—in their case, a physically actuated keypad-based one—that had a direct impact on the touch-based work we present. Wyman et al. claim that their suggestions and guidelines are equally extensible to touch-based environments, and the work presented in this paper is an instantiation of one such realization of that claim.

In terms of accessibility best practices, guidelines from the World Wide Web Consortium (W3C) such as the Web content accessibility guidelines (WCAG 2.0, 2008) played an important part in influencing the overall approach to systemic accessibility. Though the interface we present is not a website, the same insights centering around perceivable, operable, understandable, and robust (POUR) content that underpin WCAG 2.0 also helped guide the design and execution of the accessible overlay mode in our work. Furthermore, the mobile accessibility guidelines from the BBC (BBC Mobile Guidelines) informed our thinking about various accessibility considerations.

Collaboration and design choices

BWCo and SFMOMA initially spent some time collaborating to define what the experiences could be, focusing first on creating unique experiences that were meaningful and memorable, but primarily so for sighted visitors. Once these ideas had some substance, visualized in the form of wireframes, we started conversations with PAC about what we were planning to create and how we could not only make it more accessible, but also how we could preserve the uniqueness these experiences offered.

Initial conversations centered around talking through functionality and how users interacted with different content types. What options did they have? How were stories organized? How many story templates were available? These distinctions and methods of interaction were really what made these experiences unique in a gallery setting (in addition to scale and ergonomics). “Translating” them to PAC required lots of nuanced discussion. Because Sina Bahram from PAC is blind, our process, not just the technical and content-based artifacts therein, needed to be, by definition, accessible. This internal commitment to accessibility stayed with us throughout the design and implementation of these experiences.

Once we reached a common ground on what these experiences were striving to achieve, we started discussions about how to make said experiences accessible to non-sighted visitors. Some important learnings from these discussions are reported out in the implementation section.

One realization we quickly came to was that best practices alone are not sufficient to create a seamless experience. Visitors, regardless of functional ability, bring expectations from previous interactions to new experiences. The a priori expectations of non-sighted visitors primarily originate from their use of mobile devices such as those popularized by Apple. As accessibility design has developed across the now-ubiquitous touchscreen landscape, a certain design language and set of standard interaction patterns for non-sighted interfaces have emerged. Mobile operating systems in particular have worked through many problems in this area: how touch gestures can be used on a flat, featureless touchscreen that provides no tactile feedback; how to allow non-sighted users to navigate freely through complex systems of information; and so on. Where we found the standard and expected solutions effective and applicable, we were quick to adopt them, giving non-sighted visitors a familiar user experience similar to that of their personal touchscreen devices.

Separately from on-screen interaction, we had many conversations about ergonomics and how to leverage these experiences to establish an institutional standard for accessible interactive experiences at SFMOMA. Because of the different interfaces employed in these experiences—and anticipating a similar variety in the future—we decided that Eyes-Free Mode would be activated by an actual physical button. It would be “stateful”—either on or off. Furthermore, we decided to always locate that button in the bottom left corner of a station in an experience. For the vision impaired, Bahram pointed out, “edges and corners are very significant areas.” The reason for prioritizing edges and corners is that they can be found easily through touch, and then followed, or trailed along, to reach a new destination. An additional benefit of choosing the lower left-hand corner is that visitors who use wheelchairs can also reach this button, satisfying yet another criterion of physical accessibility and inclusion. More on the importance of edges and corners can be found in Wobbrock et al. (2003). Once Eyes-Free Mode is engaged with the physical button, a visual overlay appears, masking some of the screen. This allows visitors at neighboring stations to understand what they’re seeing and why it’s different from their experience. We explain the physical layout of the wall and table stations further in the implementation section.
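As a concrete illustration of this stateful design, the following sketch shows how such a toggle might be wired up, assuming a web-based kiosk application written in TypeScript; the key code reported by the physical button and the overlay element ID are hypothetical, since the real wiring is hardware-specific.

  // Minimal sketch of a stateful Eyes-Free Mode toggle (hypothetical names).
  type ModeListener = (enabled: boolean) => void;

  class EyesFreeMode {
    private enabled = false;
    private listeners: ModeListener[] = [];

    onChange(listener: ModeListener): void {
      this.listeners.push(listener);
    }

    toggle(): void {
      this.enabled = !this.enabled;
      // Notify the UI so it can show or hide the overlay that tells sighted
      // bystanders the station is currently in Eyes-Free Mode.
      this.listeners.forEach((l) => l(this.enabled));
    }
  }

  const eyesFree = new EyesFreeMode();
  eyesFree.onChange((on) => {
    document.getElementById("eyes-free-overlay")!.hidden = !on;
  });

  // Hypothetical: the kiosk hardware surfaces the corner push-button as a key.
  window.addEventListener("keydown", (e) => {
    if (e.key === "F13") eyesFree.toggle();
  });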

Audio was also a critical discussion amongst the team. For sighted visitors, the wall doesn’t have audio for any content, while the table contains a number of videos that possess an audio track. The gallery has active visitor traffic and ambient noise in addition to audio coming from the tables. Eyes-Free Mode, by contrast, relies heavily on audio, namely speech output through a text-to-speech (TTS) engine, which is how the screen reader in Eyes-Free Mode vocalizes the information on the screen for someone who cannot see it. Sounds are also employed deliberately to serve as audio cues when navigating items, changing screens, starting or stopping Eyes-Free Mode, and much more. We discussed the potential for the screen reader in Eyes-Free Mode to play through the system speakers to the entire gallery—the benefits of drawing positive attention to engaging with content in this manner—but ultimately decided to utilize headphones to deliver audio content and provide a more focused audio experience. The requirement of headphones, therefore, is not presented here as a best practice, but simply reported out as what worked well for this particular project.
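To illustrate the layering of sound cues and speech, here is a minimal sketch assuming a browser-based kiosk and the standard Web Speech API; the production TTS engine, audio routing, and file paths may well differ.

  // Play a short audio cue, then speak the announcement through TTS.
  function playCueThenSpeak(cueUrl: string, text: string): void {
    speechSynthesis.cancel(); // interrupt any speech still in progress
    const cue = new Audio(cueUrl);
    cue.addEventListener("ended", () => {
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = 1.0; // reading speed can be tuned per audience testing
      speechSynthesis.speak(utterance);
    });
    cue.play();
  }

  // Example: announce a screen change through the visitor's headphones.
  playCueThenSpeak("/sounds/screen-change.mp3", "Theme story: Found Objects.");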

Implementation

When considering the eyes-free implementation strategy, we sought to design the interactives in a way that is easy and comfortable to use, maximizes freedom and access to content, and provides as analogous an interaction experience to the sighted interface as possible. Designing an experience that was simple, comfortable, and delightful for non-sighted visitors was a central objective. Principles from universal design and other accessibility best practices, as well as related work in the field, proved essential in these considerations. We present a description of the system, some selected decisions, and implementation considerations below.

Physical layout

In the case of the wall, we have three contiguous “stations” (figure 1); each station is composed of two side-by-side vertically-oriented 48 x 27 inch displays. Because all six displays are butted together, we designed a vertical fin to run the height of the left edge between each pair; blind or low vision visitors who touch the fin can follow it down to the button and headphone jack, which sit on a raised plate at the base of the screen. Similar plates are positioned at the bottom left corner of each station in the Table Experience. The button toggles Eyes-Free Mode on and off. The plate was machined to create a cone that descends around the headphone jack; this detail helps visitors locate the jack and guides their cable into the port. In a further attempt to follow inclusive design, the plate has both braille labeling and printed text (figure 4).

A photo of the lower left corner of each station shows, from left to right, the push-button and headphone jack in a raised plate; the label “Eyes-Free Mode. Insert headphones and push button to activate” in SFMOMA’s font and Braille.
Figure 4: Physical design; controls to activate Eyes-Free Mode are clustered in the lower left corner of each station

Eyes-Free Mode

When the button is pressed for Eyes-Free Mode, the behavior of the interface changes. Touches and gestures have a specific meaning that is derived from the accepted best practices in modern touch-aware screen readers. Our approach to accepting gestural input mirrors a typical smartphone and tablet interface in which the entire screen is treated as a single input for touch gestures, indifferent to where on the screen the gesture originates. Our mapping of these gestures matches a standard touchscreen device as well. Content and menus were mapped to a two-dimensional grid, and specific items are accessed by moving the current focus via simple swipe and tap gestures. A vertical swipe moves the current focus between top-level items (in this case, stories), while a horizontal swipe moves the focus between items within the selected story. A double-tap gives access to the details of a selected item, and a single-tap repeats a description of the location of the current focus. For further discussion around and explanation of these gestures, the reader is invited to investigate the user documentation for the popular screen readers discussed in the related work section.
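The sketch below shows one way the gesture-to-grid mapping just described might be implemented; the type and method names are illustrative assumptions rather than the production code, and gesture recognition itself (distinguishing swipes from taps) is assumed to happen elsewhere.

  type Gesture = "swipeUp" | "swipeDown" | "swipeLeft" | "swipeRight"
               | "singleTap" | "doubleTap";

  interface Item { title: string; body: string; }
  interface Story { title: string; items: Item[]; }

  class EyesFreeNavigator {
    private storyIndex = 0;
    private itemIndex = 0;

    constructor(private stories: Story[],
                private speak: (text: string) => void) {}

    handle(gesture: Gesture): void {
      switch (gesture) {
        case "swipeDown":  this.moveStory(+1); break; // next top-level story
        case "swipeUp":    this.moveStory(-1); break;
        case "swipeRight": this.moveItem(+1); break;  // next item within story
        case "swipeLeft":  this.moveItem(-1); break;
        case "singleTap":  this.speak(this.whereAmI()); break;
        case "doubleTap":  this.activateCurrentItem(); break;
      }
    }

    private moveStory(delta: number): void {
      this.storyIndex = clamp(this.storyIndex + delta, this.stories.length);
      this.itemIndex = 0; // entering a story always starts at its first item
      this.speak(`${this.stories[this.storyIndex].title}. ${this.whereAmI()}`);
    }

    private moveItem(delta: number): void {
      const items = this.stories[this.storyIndex].items;
      this.itemIndex = clamp(this.itemIndex + delta, items.length);
      this.speak(`${items[this.itemIndex].title}. ${this.whereAmI()}`);
    }

    private activateCurrentItem(): void {
      this.speak(this.stories[this.storyIndex].items[this.itemIndex].body);
    }

    private whereAmI(): string {
      const count = this.stories[this.storyIndex].items.length;
      return `Item ${this.itemIndex + 1} of ${count}. ` +
             `Double-tap for more information.`;
    }
  }

  // Keep an index within [0, length - 1] instead of wrapping around.
  function clamp(index: number, length: number): number {
    return Math.max(0, Math.min(index, length - 1));
  }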

One important takeaway, and the most crucial guiding principle from our approach, is that at any point, the eyes-free user is able to answer the following three questions from Wyman et al. (2016):

  • Where am I?
  • Where can I go from here?
  • How can I get there? (Or, how can I make that happen?)

The second guiding principle is that it should be possible to make every gestural input with a single finger. This ensures maximum flexibility for differently abled users.

Getting audio feedback right was also paramount. A visual interface allows a certain forgiveness in precision, as visual feedback can subtly indicate to a user whether their interactions are successful, effectively training the user quickly to use an unfamiliar application. Without the leeway afforded by this modality, the language of our non-sighted interface had to be clear and exact. We didn’t always get this right the first time. In early prototypes, for instance, lists of content were navigable by a sliding gesture, while spoken directions prompted the user to swipe to navigate across different content types. The discrepancy—the difference between a continuous pan through content and a discrete gesture to move through one item at a time—went initially unnoticed by the sighted testers of the experience, who saw the visual feedback of their interactions and therefore immediately understood what the system was asking of them. After an accessibility review with PAC in which cognitive walkthrough and think-aloud techniques were employed, we understood the importance of front-loading key information, a process sometimes referred to as semantic prioritization. Subsequently, the correct sequencing of information and prompts underwent a rigorous process of iteration in order to get it right.

A spoken interface cannot be quickly scanned in the same way as a visual interface, so the linear order in which information is presented is critical. Our spoken interface reads a selected item of content in the following order (a brief sketch of assembling this readout follows the list):

  1. Item or context-specific sound cue
  2. Item’s title (e.g., “Sculpture and the Body Abstracted”)
  3. Ordinality (e.g., “item 1 of 5”)
  4. Body content (e.g., “how do you evoke the body when a body isn’t there? …”)
  5. Instruction prompts (e.g., “double-tap for more information”)
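A minimal sketch of assembling that readout is shown below; the field names are hypothetical, and the sound cue would be played first before the assembled string is handed to the TTS engine.

  interface SpokenItem {
    cueUrl: string;  // item or context-specific sound cue, played first
    title: string;   // e.g., "Sculpture and the Body Abstracted"
    index: number;   // zero-based position within the current story
    count: number;   // total items in the current story
    body: string;    // main descriptive text
    prompt: string;  // e.g., "Double-tap for more information."
  }

  // Assemble the utterance in the documented order: title, ordinality,
  // body content, then instruction prompts.
  function buildUtterance(item: SpokenItem): string {
    return [
      item.title,
      `Item ${item.index + 1} of ${item.count}.`,
      item.body,
      item.prompt,
    ].join(" ");
  }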

Content-based accessibility

In spite of the dimensionally constrained interaction modality that a non-sighted interface necessitates, it was important to provide visitors access to the same full range of content available in the sighted experience. Thus our accessibility investments needed to focus not only on the non-sighted interaction design patterns, but also on making as much of the visual content accessible as possible. This split between programmatic and content-based accessibility exists in any digital project with inclusive design goals. Providing a means to manipulate the interface and access the content of the interactive (the programmatic accessibility) is far less valuable if the majority of the content is by nature inaccessible to non-sighted visitors (the content-based accessibility).

The 50 stories created for the interactives consisted largely of text and images, with occasional videos included when they were particularly revealing. Both the voiced navigation prompts and the prose in the stories could be read by the computer’s built-in TTS engine, but that left the images out—and the images were, after all, the raison d’être of the museum. So, a parallel effort was required: investing in a full campaign of content creation to replace or augment all visual content. After all, our goal was not only equitable access, but enjoyment and delight. That effort began at the Content Management System level, where “Eyes-Free Description” fields were built in for every image and video file. To help achieve this goal, the CMS also needed an extra “Eyes-Free Override” field for every standard text field to allow for phonetic spelling of artist names, artwork titles, etc. TTS technology does not always pronounce words as a native speaker would expect. Common examples would be the two pronunciations of “read” as in “you should read a book” vs. “she read the book.” This problem is heavily exacerbated, however, when discussing the variety of proper nouns that regularly appear in the arts (figure 5).
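To illustrate how such fields might be consumed at run time, here is a small sketch of an override-resolution pattern; the record shape is hypothetical and not the actual SFMOMA CMS schema.

  interface ArtworkRecord {
    artist: string;                    // display text, e.g. "Henri Matisse"
    artistEyesFreeOverride?: string;   // phonetic form, e.g. "Ann-ree Ma-teess"
    title: string;
    titleEyesFreeOverride?: string;
    imageEyesFreeDescription?: string; // verbal description of the image
  }

  // The visual interface always shows the display text; the TTS path prefers
  // the phonetic override whenever one has been authored.
  function spokenArtist(record: ArtworkRecord): string {
    return record.artistEyesFreeOverride ?? record.artist;
  }

  function spokenImageDescription(record: ArtworkRecord): string {
    // Fall back to an explicit notice rather than silence if no description
    // has been written yet.
    return record.imageEyesFreeDescription ?? "No description available.";
  }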

A screenshot of the CMS editing panel for a story shows an image upload field and 6 text entry fields below it. An "Artist" field is filled out, "Henri Matisse," and below that, an "Artist (Eyes Free Override)" field spells out the artist's name phonetically as "Ann-ree Ma-teess." Below that, a "Description (Eyes Free)" field contains a large paragraph of text describing the relevant image.
Figure 5: the CMS includes fields to input Eyes-Free Overrides for title and artist name pronunciation and an Eyes-free Description for the artwork or related image

Even as the code for the wall and table experiences and their access overlays were being developed, and the stories were being researched, edited and authored, we undertook a parallel stream of content development: the writing of audio descriptions for the artworks on view in the museum—the true subjects of the stories. As Bahram pointed out at the time, to a non-sighted person, saying that the stories consisted predominantly of images and text means “it’s all text.”

To produce the texts that would be voiced in lieu of the images, PAC suggested working with J.J. Hunt in Toronto (a collaborative formula that PAC has followed with many others to great success), and after a test set of ten images, SFMOMA enthusiastically embraced that solution. On the assumption that people would be sitting at a touch-table while consuming these stories, target description length was set at 200 words per artwork—or roughly 90 seconds at average TTS reading speed. The feeling was that in developing these descriptions, we were investing in something that would be of enduring value to the museum. It is ironic that art museums have all kinds of information about the objects in their collection—medium and dimensions, exhibition histories and bibliographies—but they rarely have detailed verbal descriptions of what the artworks actually look like. As with curb cuts, that singular early triumph of the disability rights movement, it feels like the dividends from investing in a solution that aids the blind to apprehend visual content will in the end redound to the benefit of many other audiences as well.

The collaboration proceeded at a distance; Samis assembled a Google spreadsheet linking URLs or JPGs of images with the following background data, column by column:

  • Table Experience story text (if finished);
  • Wall Experience story text (if finished);
  • Latest extended object label commentary;
  • Past extended object label commentary (drawn from the archives);
  • Current Mobile app commentary;
  • Past audio tour commentary (in some cases, also from the archives);
  • Artwork-specific commentary from recent and past catalogs;
  • When internal references were lacking, relevant commentary from an outside source.

In retrospect, it seems this may have been overkill; it is certainly not a requirement for every museum contemplating an accessibility initiative. But the museum’s temporary closure during the construction phase for the new building presented an opportunity to pull together an audit of previous text-based material regarding key works in the collection. Furthermore, even though the work of audio description is fundamentally fresh and visual, being able to peruse the words that have been used to frame and present an artwork in the past helps ensure that features salient to its interpretation are reliably highlighted.

Once Hunt drafted his descriptions, Bahram vetted them to ensure their clarity for a non-sighted listener (sometimes engaging in multiple iterations with Hunt before moving to the next step). Then Samis did a final pass, editing in situ, checking texts against the artworks in the galleries as needed. This sometimes led to consultations with conservators and fellow curators to make subtle discriminations—a chance to increase our own visual acuity and understanding of these works.

It was easier to justify budget expenditures for expert descriptions of the museum’s own collection works, which were indubitably of enduring value to us, than for outside documentary images, videos, or comparison artworks coming from other sources. That said, for the launch of the interpretive galleries and this first set of stories, we finally decided to include all images and videos, with slightly shorter descriptions for the outside works. In the future, the museum hopes to build more in-house capacity for visual description, and to integrate it fully into the story development workflow. That said, it will be hard to match the acuity of vision and evocative prose of Mr. Hunt!

However, content accessibility does not stop with static strings of text that describe images. Video content must be described via a process known as audio description. Audio description allows a non-sighted visitor to understand the visual content in a video with brief but critical insertions of narration. Therefore, a recorded voice description track was added to all video content, describing the video scenes as they are playing. It did not hurt that Hunt is also a professionally trained audio describer and could therefore record these descriptions. Briefly turning back to programmatic accessibility, we made sure to make the video player fully accessible. This setup allows non-sighted visitors to jump forward and back through video content freely, analogous to the visual experience.
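The sketch below shows one plausible way to swap in an audio-described rendition of a video and expose simple seeking in Eyes-Free Mode, assuming an HTML5 video element and a separately produced described file; the file naming and the seek step are assumptions.

  // Choose the described rendition when Eyes-Free Mode is active.
  function prepareVideo(player: HTMLVideoElement, baseName: string,
                        eyesFree: boolean): void {
    player.src = eyesFree ? `/video/${baseName}-described.mp4`
                          : `/video/${baseName}.mp4`;
  }

  // Jump forward or back by a fixed number of seconds, clamped to the video.
  function seekBy(player: HTMLVideoElement, seconds: number): void {
    player.currentTime = Math.max(
      0, Math.min(player.duration, player.currentTime + seconds));
  }

  // Example: a gesture in Eyes-Free Mode might call seekBy(player, +15).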

Accessibly translating playful interfaces

Whenever possible, we wanted to go beyond simply providing access to content and instead aimed to match the sense of serendipitous magic and discovery that the visual experience provides. While certain constraints required us to simplify some aspects of the user experience, we wanted to be careful not to remove all the interesting or playful components. In some cases creating an analogous interaction experience to the sighted interface was not advantageous. In the “Artworks Up Close” stories on the wall experience, touchable hotspots are laid out at different points in each artwork and can be selected arbitrarily in any order. In the non-sighted experience, rather than forcing the visitor to guess where on the screen the hotspot items might be, we chose to arrange the content in a list that must be moved through sequentially.

Occasionally, though, we had the opportunity to create an experience that more closely matched the visual interface. On the table interactive, a “river” of touchable text and image tiles streams by horizontally, letting you in effect select filters by which to discover content stories. Rather than just providing a simple list of the stories for non-sighted visitors, we created an analogous interaction whereby a visitor can place their finger on the screen and the content of each “river tile” is spoken as it passes underneath. This non-sighted approach allows visitors to filter stories in the same way as sighted visitors for a heightened experience of magic and discovery. This approach is not one for which we had best practices to rely upon. It was just as much an experiment in Eyes-Free Mode as it was in the visual interface, and this striving towards equitable experience is why we feel that these experiences are inclusively designed. It is important to also point out that this “river of content” experience has a graceful fallback. If someone wishes to swipe through the content, they may do so secure in the knowledge that all tiles will be announced as if they were laid out in a giant list.
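One way to realize this announce-as-it-passes behavior is sketched below; the tile geometry, per-frame update loop, and announce callback are illustrative assumptions rather than the production implementation.

  interface RiverTile { id: string; label: string; x: number; width: number; }

  class RiverAnnouncer {
    private lastAnnouncedId: string | null = null;

    constructor(private tiles: RiverTile[],
                private announce: (text: string) => void) {}

    // Called every animation frame, after the scrolling animation has advanced
    // the tiles' x-positions, with the current finger position (if any).
    update(fingerX: number | null): void {
      if (fingerX === null) { this.lastAnnouncedId = null; return; }
      const under = this.tiles.find(
        (t) => fingerX >= t.x && fingerX < t.x + t.width);
      if (under && under.id !== this.lastAnnouncedId) {
        this.lastAnnouncedId = under.id;   // avoid re-announcing the same tile
        this.announce(under.label);
      }
    }
  }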

Including all audiences

A screenshot of the Wall Experience interface with Eyes-Free Mode enabled shows a translucent black overlay covering the lower fourth of the screen with the title "Eyes-Free Mode is Currently On" along with additional information and buttons to collapse or turn off the mode. Above the overlay, text appears over the top of the rest of the interface that displays the words being read by text-to-speech.
Figure 6: Wall Experience Eyes-Free Overlay band across the lower portion of the screen indicates to sighted visitors that the story is currently operating in accessibility mode; meanwhile, onscreen captioning displays a readout of the words being transmitted to the visitor’s headphones

Non-sighted visitors wouldn’t necessarily be the only audience to encounter Eyes-Free Mode; considerations needed to be made for a few other audiences—namely, the sighted companions of non-sighted visitors, visitors with low vision who may be using Eyes-Free Mode to augment their visual experience, and sighted visitors who may simply stumble upon the experience while Eyes-Free Mode is enabled. We added a number of visual features to Eyes-Free Mode especially for these audiences. First, when a visitor is interacting with Eyes-Free Mode, all of their interactions are mapped to the visual interface. In this way, the same stories, artworks, photographs, and videos they are digesting auditorily are also represented visually on screen. Captions that display the words heard through the headphones appear as well. These visual features allow sighted and non-sighted visitors to experience the interactive together, and engage meaningfully about the same content in real time (figure 6). For low-vision visitors, focus rectangles highlight the content that is currently being read via TTS. While Eyes-Free Mode will automatically close after a certain period of inactivity, visual elements were added to address the problem that could occur if a sighted visitor approaches the interactive before the mode has closed. In order to avoid having this visitor try in vain to interact with the elements shown on the screen, a conspicuous, collapsible overlay covers a lower portion of the screen and alerts them that the interactive is in this special mode. The overlay provides an on-screen button to turn the mode off, but requires a confirmation tap in order to prevent the button from inadvertently being pressed by a non-sighted user. Planning, discussion, and plenty of trial and error led to visual elements that struck just the right balance for accommodating both audiences.
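For illustration, here is a minimal sketch of two of the safeguards mentioned above, the inactivity timeout and the confirm-before-off button, assuming a browser-based kiosk; the timeout and disarm intervals are arbitrary placeholders.

  const INACTIVITY_MS = 2 * 60 * 1000; // assumed two-minute timeout
  let inactivityTimer: number | undefined;

  // Call on every touch in Eyes-Free Mode; closes the mode after inactivity.
  function recordActivity(turnOff: () => void): void {
    window.clearTimeout(inactivityTimer);
    inactivityTimer = window.setTimeout(turnOff, INACTIVITY_MS);
  }

  // The on-screen "turn off" button requires a second, confirming tap so a
  // non-sighted visitor cannot trigger it by accident while exploring.
  function makeConfirmedOffButton(turnOff: () => void): () => void {
    let armed = false;
    return () => {
      if (!armed) {
        armed = true; // first tap arms the button and prompts for confirmation
        window.setTimeout(() => { armed = false; }, 5000); // disarm after 5 s
      } else {
        turnOff();
      }
    };
  }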

Evaluation

There were four types of testing and evaluation conducted in the development of the user experience:

  1. Sighted testing of the interfaces for Wall and Table at BWCo in Seattle;
  2. Non-sighted remote testing of interfaces and program functionality by PAC;
  3. Non-sighted testing by PAC on-site prior to museum’s re-opening;
  4. Informal evaluation by blind and low vision focus groups after re-opening.

Takeaways from each of these are briefly described below. Subsequent experience with accessible audiences will be presented at Museums and The Web 2017.

Takeaways from on-site testing with sighted users at BWCo in Seattle, February 2016

Prototyping is a key component in the BWCo process; as soon as we had the A/V hardware to do so, we utilized prototyping space to build mockups of the wall and table experiences. In partnership with Lockwood & Sons—a design/build company that fabricated the enclosures for Gallery 218—we were able to pre-visualize both the wall and table experiences. With the experiences up and running (and being added to constantly during the fabrication phase), we were able to get a clear sense of how the experiences would function in the gallery and finesse ergonomic, hardware, and enclosure details in parallel with software development. Early on in the project, in collaboration with SFMOMA, we identified a core target audience and brought together a group of users who represented that group for testing. Some testers came alone; others came with their families. We went through a series of tasks and periods of free exploration with our usability tester on both individual and group experiences. Team members from BWCo and SFMOMA were present as observers and asked follow-up questions as they arose. The key insights here largely centered on pacing and on making sure content delivery for the wall was concise and well timed; we also gained insight into initial story selection and zooming controls for the table (figure 7).

A young girl tests a prototype version of the Table Experience, placing her finger on a touchscreen mounted inside a wooden box.
Figure 7: formative user testing at Lockwood & Sons/Belle & Wissell’s studio in Seattle

Non-sighted remote testing of interfaces and program functionality by PAC

Because Bahram does not live on the West Coast, we needed to devise a way to enable him to remotely test intermediate builds of the two experiences. Through a collaborative effort involving touch-screen computers, distribution of the code as a standalone binary application, and extensive communication via email and conference calls, we were able to iterate upon build after build, investigating interaction design, discovering bugs, being prompted to add features to enhance usability, and more. While the evaluation carried out in the subsequent on-site phase (described below) proved to be incredibly fruitful per unit time, the sustainable and reproducible methodologies we employed for remote testing pre-beta facilitated a fantastic asynchronous collaboration that is usually not possible.

At one point, Bahram was in Seattle for an unrelated conference and was able to visit BWCo’s studio with his friend and mentor Shaun Kane of Microsoft Research. Together they were able to test the hardware in situ a month before it was shipped down to San Francisco (figure 8).

A photo inside the Belle & Wissell, Co. studio shows three individuals talking around a touch-screen laid on a conference table. From left to right are Shaun Kane, Sina Bahram, and Sarah Trueblood.
Figure 8: Shaun Kane and Sina Bahram testing eyes-free interfaces with Sarah Trueblood at BWCo offices in Seattle.

Non-sighted testing by PAC on-site prior to museum’s re-opening

The week before the museum re-opened to the public, Sina Bahram came to San Francisco for two days and conducted extensive diagnostic tests of the touch-wall and touch-tables with Peter Samis acting as recording scribe. Both software developers, Scott Thiessen for the Wall and Nathan Selikoff for the Table, were on call in Washington State and Florida to provide immediate code patches as errors or non-intuitive functionality were detected. In this way, we were able to roll through a series of real-time iterations that reduced the bug list and resulted in at least one product deemed “almost ready for prime time” by the time of Bahram’s departure.

Informal evaluation by a blind and low vision focus group after re-opening

In July 2016, two months after the museum’s re-opening, with the accessible features implemented but not yet activated in the gallery or publicized, SFMOMA invited representatives from the local blind and disabled community, including the San Francisco Lighthouse for the Blind and the World Institute on Disability in Berkeley, to come to the museum after-hours and try out the wall and table experiences. The feedback was overwhelmingly positive and confirmed the value of a number of our assumptions, while indicating that others—e.g., one decision about swiping directionality—were not as intuitive as had been hoped. The value of focus rectangles was verified by our low vision test subject, and the onscreen display of spoken text as it was transmitted through the headphones proved a win for both companions of non-sighted users and others who were nearby and merely curious. This “verbose logging,” which had initially been used merely as a debugging tool, turned into a teachable moment (figure 9).

A screenshot of the Table Experience with Eyes-Free Mode enabled shows a collapsed red overlay covering the lower tenth of the screen with the title "Eyes-Free Mode is Currently On" along with additional information and buttons to expand the overlay or turn off the mode. Above the overlay, text appears over the top of the rest of the interface that displays the words being read by text-to-speech.
Figure 9: Table Experience Eyes-Free Mode visual captioning on the left; note the focus rectangle on the right, highlighting the quote currently being read; the Eyes-Free Overlay band has been minimized to facilitate low vision use or use with a sighted partner

One focus group member, who had enjoyed a lifetime of museum-going before losing his sight in middle age, said he came to the museum to be with friends and would prefer to hear the artwork descriptions via the mobile app rather than the isolation of sitting and absorbing stories in the interpretive gallery. “Just sitting is not like a museum experience to me,” he said (Samis, 2016).

A group member who was blind from birth and admittedly less extroverted said she was fine staying alone in the room while her sighted friends wandered around the museum.

Finally, a low vision member said, “This was so much more engaging than other museum experiences I’ve had. It made me want to go back into the galleries.”

At a second, larger focus group conducted in January 2017, the different needs of blind and low vision users again came to the fore. For instance, while low vision users prized access to zooms that could maximize their views, blind users praised the program for uniting audio descriptions of individual artworks with the power of story: “I’ve never used a system where not only was the art described but then a history about the painting, or the painter, or the family who purchased the art—where I had both in the same system. And I really liked that. I really felt like that was education—and I’ve never experienced that combination.” (Samis, 2017). 

Outcomes of subsequent user testing will be reported at Museums and the Web 2017.

Conclusion

On the one hand, the visually impaired may be art museums’ ultimate disenfranchised group: everything we store and value is invisible to them. On the other, in the highly charged area of museum interpretation, they may ironically also be among the easiest to serve, because meeting their needs doesn’t require us to change the look of a single gallery. It gets harder with sighted people: when art museums try to meet their needs for context within the physical space of the galleries by creating new forms of analog or digital interpretation, interdepartmental conflicts and turf wars can arise (Samis & Michaelson, 2016).

Of course for any solution to really be effective, it has to be part of a larger service design package. It is all well and good to have one destination gallery in a museum that has the capacity to flip from being a highly visual experience to a tactile and aural one, but that is just the thin end of the wedge in terms of understanding what the full set of needs of the blind and disabled community might be, and how to welcome them. To learn more, we will need to continue the work of inclusive design, welcoming new audiences to our galleries and learning from their frustrations as well as from their delights.

Such learning can come from journey-mapping the itineraries of blind, low vision, and other visitors with disabilities: from how they learn about the new resources that have been developed for them, to their decision to come to the museum, how they arrive and with whom, and how they pursue their visit, including finding the galleries or experiences that we take for granted.

Fortunately, the innovations in Gallery 218, the Painting & Sculpture Interpretive Gallery, also give SFMOMA opportunities to build on:

  • A set of physical design conventions including raised push buttons in lower left corners beside mounded headphone jacks and high contrast vinyl labels that pair text in the SFMOMA font and braille;
  • A CMS for storytelling with built-in fields for Eyes-free Descriptions and Eyes-free Override pronunciations of names or titles in foreign languages;
  • A raft of text descriptions of over 100 core artworks that can lead the way to a more accessible website and mobile app;
  • Perhaps most importantly, the beginning of a relationship with a community that had previously been largely ignored or at best underserved—one that has, conversely, felt excluded from our presentation of many of the essential objects and stories that define our culture.

It is by opening this relationship up, and the myriad dialogues it entails, that the museum will grow more generous and empathic, and learn unanticipated lessons in turn about the value of the objects it holds, seen or unseen.

References

Apple. (n.d.). Apple VoiceOver. Consulted February 11, 2017. Available http://www.apple.com/accessibility/iphone/vision/

BBC. (n.d.). BBC Mobile Accessibility Guidelines. Consulted February 11, 2017. Available http://www.bbc.co.uk/guidelines/futuremedia/accessibility/mobile

Google. (n.d.). Google Talkback. Consulted February 11, 2017. Available https://support.google.com/accessibility/android/answer/6283677?hl=en

Kane, S.K., J.P. Bigham, & J.O. Wobbrock. (2008). “Slide Rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques.” In Proceedings of ASSETS ’08: The 10th International ACM SIGACCESS Conference on Computers and Accessibility. New York, NY: ACM, 73-80.

Kane, S.K., M.R. Morris, A.Z. Perkins, et al. (2011). “Access Overlays: Improving non-visual access to large touch screens for blind users.” Proceedings of UIST ’11. New York, NY: ACM. Consulted September 25, 2016. Available http://research.microsoft.com/en-us/um/people/merrie/papers/access_overlays.pdf

Microsoft. (n.d.). Microsoft Narrator. Last Review: Aug 31, 2016 – Revision: 27. Consulted February 11, 2017. Available https://support.microsoft.com/en-us/help/22798/windows-10-narrator-get-started

“Museums and Accessibility.” Museum magazine special issue. 94, 5 (Sept-Oct. 2015).

Samis, P. (2016). Notes from the first eyes-free focus group for the Painting & Sculpture Interpretive Gallery. Internal document. San Francisco: SFMOMA.

Samis, P. (2017). Notes from the second eyes-free focus group for the Painting & Sculpture Interpretive Gallery. Internal document. San Francisco: SFMOMA.

Samis, P. & M. Michaelson. (2016). Creating the Visitor-Centered Museum. London and New York: Routledge.

Timpson, C. and J. Trevira. (2013). “Establishing Sound Practice: Ensuring Inclusivity with Media Based Exhibitions.” In Museums and the Web 2013, N. Proctor & R. Cherry (eds). Silver Spring, MD: Museums and the Web. Published February 11, 2013. Consulted September 24, 2016. Available http://mw2013.museumsandtheweb.com/paper/establishing-sound-practice-ensuring-inclusivity-with-media-based-exhibitions/

WCAG 2.0. (2008). Web Content Accessibility Guidelines (WCAG) 2.0. Consulted Feb. 11, 2017. Available http://www.w3.org/TR/WCAG20/

Wobbrock, J.O., B.A. Myers, & J.A. Kembel. (2003). “EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion.” In Proceedings of the 16th annual ACM symposium on user interface software and technology (UIST ’03). New York, NY: ACM, 61-70.

Wyman, B., C. Timpson, S. Gillam, & S. Bahram. (2016). “Inclusive design: From approach to execution.” In MW2016: Museums and the Web 2016. Published February 24, 2016. Consulted September 24, 2016. Available http://mw2016.museumsandtheweb.com/paper/inclusive-design-from-approach-to-execution/

 

