Building distributed online exhibitions with IIIF

Robert Sanderson, J. Paul Getty Trust, USA

Abstract

The creation of online exhibitions—delivered via the web, mobile applications, or kiosks—often requires the painstaking duplication of images and object descriptions that are stored in collection management and digital asset management systems. This task is further complicated by the inclusion of objects held by other institutions, whether they have been loaned for the exhibition or are present only in the online version. The end product of this effort is most often a digital exhibition that itself is managed within its own data silo and maintained in a format that does not facilitate reuse or discovery. In this presentation we will describe how the International Image Interoperability Framework (IIIF) can benefit curators and technologists as they work together to build online exhibitions. Using examples from the collections of the Getty and the Yale Center for British Art, we will discuss how the interoperability provided by IIIF allows exhibition creators to easily assemble collections of objects from many different institutions; bring display metadata into the online exhibition without manual data entry; and enrich the content with annotations, such as highlights of regions of interest within images or allowing feedback from either select participants or the public, as desired. We will also demonstrate how the content of the exhibition can be reused in other contexts, because of the interoperable nature of the descriptions. The presentation will conclude with an overview of the IIIF-compatible tools that are available to support online exhibitions, as well as a discussion of the ways in which vendors and the museum community can contribute to the advancement of interoperable software.

Keywords: IIIF, interoperability, exhibitions, linked data

Introduction

The Web is now more than 25 years old, and has fundamentally changed the way that communication occurs at a global level. One of the founding tenets of the Web is its distributed nature; originally intended to protect it from disruption, this has evolved into a participatory and inclusive environment through which content can be created, shared, and interlinked by anyone. This cross-linking of content is of vital importance, as it provides a pattern for the discovery of new, related content by both humans and machines, rather than leaving that content partitioned in separate databases that happen to be on a network.

In times of xenophobia and both financial and social uncertainty, it is a moral imperative of the cultural heritage sector to work together to promote and celebrate our unique diversities, instilling the wonder that art can bring in as wide an audience as possible. By exposing our visitors to content that both reassures and challenges them, allowing them to explore their understanding of their own culture and those of others, and by facilitating their interactions with institutional specialists and each other, we are instrumental in breaking down the partitioning walls of ignorance and fear.

For cultural heritage organizations such as museums, image content is the primary means of creating compelling online engagement with visitors. Digitization efforts ranging from photography to photogrammetry have been ongoing at many of our institutions for years, providing the content for online exhibitions. But these exhibitions are treated the same way as we have treated their printed catalogs, as static and stand-alone products to be produced and forgotten. This approach does not take advantage of the shared social experience that the Web can so easily provide. Beyond that, it does not take advantage of the shared content ecosystem which can improve both the consumer’s and producer’s experience.

Workflows for the creation of digital exhibitions revolve around manual data re-entry and image upload, despite the fact that this content is already online and accessible. This is particularly true for objects from other institutions, or even other departments within a single organization that use a different content management system. There is no interrelationship between the exhibition system, the catalogs (print or digital), the collection management systems, or even the institutions. There is no potential for the reuse of the data in analysis or research, and the results are ultimately not reproducible.

Enter the International Image Interoperability Framework (IIIF).

An overview of IIIF

IIIF (pronounced “triple eye eff”) is a community primarily composed of cultural heritage organizations with a goal of enabling interoperability and reuse of image-based content (http://iiif.io/). Initially started in the higher education domain, it was quickly adopted by the digital library world and is increasingly expanding into museums, galleries and archives. The community includes organizations from the Americas, Europe, Asia, Africa, and Australasia, and is supported financially by a consortium that funds the salaries of two full-time employees to manage communication and technical adoption.

To fulfill its goal of interoperability, the community works together to produce specifications of how systems should interact when requesting and using image content, and the descriptions of the context in which those images are used. These specifications (called APIs) work with the Web infrastructure to ensure that the content is part of the Web, not just on it. They allow systems to be built that can interact directly with the content, regardless of the host institution or specific technology product used. By using the Web as the communication channel, the interrelationships between the digital content and the objects they depict and describe become part of the same social framework.

As a community, IIIF also creates products that implement those specifications and encourages and facilitates their adoption and use. Specification without implementation is just a piece of digital paper, and worth about as much. The implementations range from commercial products built by Digital Asset Management System vendors to free and open-source projects in a variety of programming languages, with a variety of capabilities. The community welcomes experimentation and diversity of implementation, rather than selecting one product to endorse or develop. The requirement is that implementations follow the technical interaction patterns laid out in the API specifications, of which there are currently four:

  • The Image API (version 2.1) describes how to retrieve segments or complete images at different sizes, as well as a linked data description of the technical and rights metadata about the images that depict the objects.
  • The Presentation API (version 2.1) describes the context in which the image should be displayed to the user, including labels, ordering of the content, and relationships to other objects.
  • The Search API (version 1.0) describes a method of searching any available full text content or comments about the objects.
  • The Authentication API (version 1.0) describes how viewing applications should allow users from around the Web to authenticate with the hosting organizations to get access to controlled resources.

The APIs provide several key features of interest and relevance for digital exhibitions.

Flexible, cross-repository access to images

The IIIF APIs provide the information needed to deliver a wide range of online user experiences. From the perspective of a developer of a digital exhibition, the Image API capabilities are central. It is common practice today to copy digital images of collection objects into content management platforms, which then host the images and create derivatives at varying sizes in order to meet the requirements of the user interface design. Even in cases where an integration has been made with a local DAMS, the content management platform may still require a copy of the image in order to efficiently manage image requests at specific resolutions.  In addition, the local DAMS may not contain the images of objects from other institutions, necessitating the manual upload of these images as well as manual entry of any metadata required for management purposes.

The IIIF Image API addresses these issues by providing a consistent method of requesting images at varying sizes from any compliant image repository. Given the location of an image made available via the Image API, an application can request derivatives at any scale required by its user interface. An example of this is the Mirador viewer, which displays smaller thumbnails when browsing for an object, as an aid to identifying the object or distinguishing it from others.  When the user navigates to an individual object’s view, the application requests a larger thumbnail for each image. The application can query the size and aspect ratio of the image using the Image API and can request images scaled to the exact pixel dimensions required in each case.
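The request pattern described above can be sketched as a small URL builder. The path syntax ({region}/{size}/{rotation}/{quality}.{format}) follows the Image API 2.1 specification; the endpoint URL below is a hypothetical placeholder.

```python
def iiif_image_url(base, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Assemble an Image API 2.1 URL from its five path parameters."""
    return f"{base}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical Image API endpoint for a single image:
base = "https://images.example.org/iiif/obj1"

# A 150-pixel-wide thumbnail for a browse view ("150," scales by width):
thumb = iiif_image_url(base, size="150,")
# -> https://images.example.org/iiif/obj1/full/150,/0/default.jpg

# A larger derivative constrained to fit within 600x600 pixels ("!w,h"):
detail = iiif_image_url(base, size="!600,600")
```

Because every compliant server answers the same URL pattern, the same builder works against any institution's endpoint.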

An online exhibition that draws on content from multiple IIIF-enabled institutions can likewise forgo the local duplication and management of images that has traditionally been required. Use of a shared API allows the development of systems that are agnostic as to the location of the image provider, so long as the image servers conform to the Image API specification. That this approach is capable of scaling globally is evidenced by the Mirador viewer demonstration instance (http://projectmirador.org/demo/), which draws on images delivered via the Image API  from more than a dozen institutions in the United States, Europe, and Japan.

Descriptive text

The goal of “international interoperability” carries with it the assumption that the IIIF APIs will be capable of providing the information necessary to support internationalized software applications. The Presentation API defines a number of properties containing text intended to be displayed to users, and therefore specifies an internationalization pattern that allows the language of display strings to be declared when needed. Software applications that take into account the language preferences supplied by the user’s browser can deliver content in the appropriate language, even if the application itself has not been localized to that language.

The Presentation API also provides a metadata property that contains a list of label-value pairs. These are intended to provide a mechanism for the display of metadata fields that the content provider desires to make available. Note that these pairs carry no semantic meaning, are not bound to any specific metadata schema, and are not intended for any purpose other than display. As is the case with other descriptive properties, the strings can be supplied in alternative languages, which allows for labels and values to be translated separately. For example, the National Library of Wales in some cases publishes alternative labels (e.g., “Period” and “Cyfnod”), while providing a single value (e.g., “1400-1500”) to be displayed regardless of the user’s language preference.
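This pattern can be sketched in code, modeled on the National Library of Wales example; the selection helper is a hypothetical illustration, not part of the API itself.

```python
# A metadata pair whose label is supplied in English and Welsh while the
# value is language-neutral, following the Presentation API 2.1 pattern.
metadata_entry = {
    "label": [
        {"@value": "Period", "@language": "en"},
        {"@value": "Cyfnod", "@language": "cy"},
    ],
    "value": "1400-1500",
}

def pick_language(prop, preferred):
    """Return the string tagged with the preferred language, falling
    back to the first available value; plain strings pass through."""
    if isinstance(prop, str):
        return prop
    for item in prop:
        if item.get("@language") == preferred:
            return item["@value"]
    return prop[0]["@value"]

label = pick_language(metadata_entry["label"], "cy")  # "Cyfnod"
value = pick_language(metadata_entry["value"], "cy")  # "1400-1500"
```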

Collections of objects

At the core of the IIIF Presentation API is the concept of a manifest, which carries the information needed to display an object; but in many cases, content owners wish to organize these objects into lists or hierarchies. As a result, the Presentation API provides a collection resource that may contain both manifests and nested collections (manifests may be part of more than one collection at the same time). Collections admit the same descriptive properties as manifests: a label, description, thumbnail, display metadata, and rights information can all be applied to collections and used in the construction of hierarchical views. In addition, both collections and manifests can carry a timestamp to facilitate the creation of date-based browse features or timelines in the user interface. IIIF content providers have made use of collections to publish manifests grouped by creator, medium, curatorial department, and other characteristics. For the purposes of creating a digital exhibition, IIIF collections provide the mechanism for organizing, describing, and publishing the manifests of the exhibition contents.
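As a sketch, an exhibition's contents might be published as a collection along these lines; the URIs and labels are hypothetical, and navDate is the timestamp mentioned above.

```python
# A minimal Presentation API 2.1 collection grouping two manifests,
# each potentially hosted by a different institution.
collection = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/collection/exhibition1",
    "@type": "sc:Collection",
    "label": "A Hypothetical Exhibition",
    "description": "Objects drawn from two institutions.",
    "navDate": "1888-01-01T00:00:00Z",  # supports timelines and date browsing
    "manifests": [
        {"@id": "https://example.org/iiif/obj1/manifest",
         "@type": "sc:Manifest",
         "label": "Object from the host institution"},
        {"@id": "https://other.example.edu/iiif/obj2/manifest",
         "@type": "sc:Manifest",
         "label": "Object from a partner institution"},
    ],
}
```

A viewer that understands collections can render this hierarchy directly, regardless of where each manifest is hosted.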

Embedded data and services

The IIIF APIs make widespread use of the concept of services, which either embed data directly into a manifest or reference external APIs. Authentication and search functions are defined as services within a manifest, and even IIIF Image API endpoints are services. While the expression and behaviors of these core services are defined in relevant IIIF API specifications, the pattern is extensible and custom services can be defined by content providers, or, more productively, by wider agreement among the IIIF community. One simple but useful service is provided as an example: a service that embeds geographical coordinates within a manifest using a JSON-LD representation of data defined by the GeoJSON format (Butler et al., 2016). Just as the navigation date allows a single date to be associated with a manifest or collection for the purposes of rendering a timeline or browse function, the GeoJSON service allows a point to be associated with the object. Similar to the manifest’s metadata construct, the service does not specify the significance of the specified location: it could indicate the location depicted, or the find site of the object. A repository that consistently exposes location data via GeoJSON can lower the barrier to the development of map-based interfaces in software clients.
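A sketch of the pattern might look like the following; the manifest URI and coordinates are illustrative, and the exact context URI for GeoJSON-LD is an assumption here rather than a normative part of the IIIF specifications.

```python
# A manifest fragment carrying an embedded GeoJSON point as a service.
manifest_fragment = {
    "@id": "https://example.org/iiif/obj1/manifest",
    "@type": "sc:Manifest",
    "service": {
        # Assumed GeoJSON-LD context; coordinates are [longitude, latitude]
        "@context": "http://geojson.org/geojson-ld/geojson-context.jsonld",
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [-3.978, 52.414]},
    },
}
```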

Annotations

Annotations are another core component of the IIIF APIs, which employ a data model developed by the Open Annotation Collaboration (Sanderson et al., 2013) that has since evolved into a Web standard published by the W3C (Sanderson et al., 2017). These annotations are used to associate images, text, and other information with IIIF objects. An annotation typically consists of a target, such as a point or area on a IIIF Canvas, and a body, such as text that describes the target region. This data model has seen increasing adoption in applications designed to enrich collections through annotation by curators as well as crowdsourcing; the Rijksmuseum’s use of Accurator (http://annotate.accurator.nl/intro.html) is just one example. Open Annotation provides a simple yet highly flexible model for authoring and publishing annotations in custom exhibition software as well as Open Annotation-compatible viewers such as Mirador.
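In the IIIF 2.x serialization, such an annotation might be sketched as follows; the identifiers are hypothetical, and the target region is expressed with a media-fragment (xywh) selector on the canvas URI.

```python
# An Open Annotation with a textual body targeting a canvas region.
annotation = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/anno/1",
    "@type": "oa:Annotation",
    "motivation": "oa:commenting",
    "resource": {
        "@type": "cnt:ContentAsText",
        "format": "text/plain",
        "chars": "Underdrawing is visible in this area.",
    },
    # Target: the 300x200 pixel region at (100,100) on the canvas
    "on": "https://example.org/iiif/obj1/canvas/c1#xywh=100,100,300,200",
}
```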

Digital exhibitions

There is a wide range of potential definitions for what counts as a “digital exhibition”; however, since IIIF is a foundational technology, it can play a role in the creation and dissemination of any of the exhibitions encountered at the authors’ institutions. The intent or audience of the exhibition (the lines along which distinctions are typically drawn) does not alter the need to interact with images or with the descriptive context that enables the audience to engage with and appreciate the objects they depict.

To demonstrate this, we can distinguish between the following intents or forms of a digital exhibition. The list is not intended to be comprehensive, but only to demonstrate the broad spectrum.

  1. Digital Only. The digital exhibition is the sole form of the exhibition, which does not, and perhaps cannot, exist in a physical form. This sort of exhibition would allow objects that are impossible or prohibitively costly to move to be displayed next to each other, to reconstruct objects from their fragments, and to see views of objects that are impossible in real life such as both the front and back of a painting hung on a wall.
  2. Blended. The exhibition consists of both physical and digital components, where the digital components do not replicate the physical but are additive. For example, the Getty’s recent exhibition on the Cave Temples of Dunhuang  (http://www.getty.edu/research/exhibitions_events/exhibitions/cave_temples_dunhuang/index.html) had a traditional gallery experience, a physical reconstruction of several caves, and a virtual reality experience of other caves.
  3. Exhibition Memorial. The exhibition is primarily physical and bounded in time, and the digital part is intended to provide a simulacrum of the experience. As the complexity and completeness of the digital exhibition increases, it starts to blend into either a new digital-only exhibition or an exhibition catalog.
  4. Exhibition Advertisement. The exhibition is almost exclusively physical and bounded in time, and the digital part is intended to encourage the audience to attend it. The complexity and completeness of the digital exhibition is intentionally low so as not to  detract from the physical exhibition.

In all of these cases, there are requirements for displaying a series of images, potentially at different sizes or picking out a particular region of interest, along with descriptive information. This functionality is enabled by the same IIIF APIs over the same underlying data, providing economies of scale and ease of reuse.

Current technology

The Yale Center for British Art (YCBA) and the programs at the Getty do not have exactly the same technology stacks; however, the differences in workflow are minor. As such we present the current technology in use at YCBA as an exemplar of the place from which we collectively start.

The Yale Center for British Art has deployed a Drupal-based platform to create and publish digital exhibitions. The site employs responsive design techniques to publish content for mobile, kiosks, and the Web. The user interface provides a browse function for the objects contained in the exhibition, and map- and timeline-based displays of selected objects. Additional pages can be created to provide biographical information about artists and to highlight multimedia content.

While the platform has been used to host a number of exhibitions, it is tightly coupled to YCBA’s collection management and digital asset management infrastructure. The platform integrates with YCBA’s collection management system, Gallery Systems’ TMS, by consuming exported XML data that is also used to populate YCBA’s online catalog. A second integration is required to access content from an internal API exposing YCBA’s DAMS. The pair of integrations is sufficient to populate the application with the required tombstone information for each object, as well as thumbnail and larger images. Other data, such as geographic coordinates and date information, are entered manually. Of course, these integrations are only effective for content published from the YCBA’s collection: entries must be created manually for objects from beyond YCBA. Even records from the YCBA’s archives, which are managed using ArchivesSpace, and objects from Yale University’s libraries, must be entered manually.

The development of the exhibition platform did not anticipate a need to repurpose the exhibition content for other applications.  At the time of the system’s development, it was designed to serve content to mobile devices and kiosks as well as the Web. As YCBA explores other technology options for serving mobile users and developing new in-gallery experiences, the desirability of publishing digital exhibition content through multiple channels rapidly becomes  apparent. When viewed in the context of a broader ecosystem of applications, the standalone exhibition platform is revealed as an unfortunate data silo.

IIIF promises to ease the acquisition of digital images and metadata for online exhibitions by offering a mechanism for interoperability of internal museum, library, and archive systems. At the same time, it presents a mechanism for collecting and organizing the content of digital exhibitions in a format that can be shared with other IIIF-aware applications. This functionality will improve all steps of the exhibition’s lifecycle, from planning through publication and later reuse.

Digital exhibition workflow

The planning phase of an exhibition, whether physical or digital, necessarily involves the curation of a list of objects that are to serve as its focus. The selection process itself can entail significant discussions among curatorial staff, the exhibitions department, and other units within the museum. Logistical concerns, such as the availability of high resolution images for use online, can play a role in determining which objects are suitable for inclusion. The planning process is also likely to require many parties to work across institutional as well as departmental boundaries. Standard tools for online collaboration, such as e-mail and shared spreadsheets, are often used to coordinate this planning workflow.

IIIF was designed from the outset to enable the curation of lists of objects as well as collaborative annotation of objects and images. Objects under consideration for the exhibition can be grouped into IIIF collections. Annotations can be used to discuss the suitability of an object or a particular image, to highlight areas of interest within an image, or to create commentary to be used within the online presentation of the objects in the exhibition.

IIIF also promises to reduce the administrative overhead of obtaining images from other institutions. While institutions may apply access control under IIIF, the value of interoperability is realized when images are made available at sufficiently high resolution to be reused in other contexts on the Web. In order to encourage publication for reuse, IIIF provides dedicated fields within its data model for attribution, a link to information about the license under which the image is published, and a logo image.  Together these serve to inform the user of the originator of the content and the terms of use for content retrieved via the IIIF APIs. If the publishing institution has complied with best practices, it is likely that the curator can immediately make a determination of whether the IIIF image can be used in an online exhibition. This is in stark contrast with the many bilateral discussions required when such rights statements are not made available in a shared and commonly understood way, and stands to save a lot of time and money.
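The rights-related properties mentioned above sit directly on the manifest; a minimal sketch, with illustrative values:

```python
# Attribution, license, and logo properties as carried by a manifest.
manifest_rights = {
    "attribution": "Provided by the Example Museum",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "logo": "https://example.org/assets/museum-logo.png",
}
```

Compliant viewers are expected to display the attribution and logo alongside the content, so the terms of use travel with the image wherever it is reused.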

The construction of a digital exhibition can require a significant amount of manual data entry and file management. Metadata and descriptive information created using ad hoc processes during the planning phase must be transferred to the exhibition application. Images must be selected, cropped, and uploaded as well, usually using desktop applications to perform the image manipulation. These manual processes are indicative of a lack of integration between the exhibition software and tools used for planning, as well as the current inability to integrate content from external repositories.

The need for manual intervention can be greatly reduced through the use of IIIF. If IIIF manifests are available it may be possible to publish the provided metadata directly in the exhibition. An exhibition system can assemble objects into IIIF collections and use that data structure to provide additional information required to render the objects in context, such as the navigation date, geographic position, and thumbnail image. The exhibition system can also provide image cropping functionality through the Image API, which allows URL access to regions of an image directly, without the manual and time-consuming use of tools such as Photoshop.
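For example, a crop selected in the exhibition tool can be expressed directly as an Image API region, with no desktop editing step; the endpoint below is hypothetical.

```python
def crop_url(image_base, x, y, w, h, width=None):
    """Return an Image API URL for the pixel region x,y,w,h, optionally
    scaled to a target width (the "w," size syntax)."""
    size = f"{width}," if width is not None else "full"
    return f"{image_base}/{x},{y},{w},{h}/{size}/0/default.jpg"

url = crop_url("https://images.example.org/iiif/obj1",
               100, 100, 300, 200, width=300)
# -> https://images.example.org/iiif/obj1/100,100,300,200/300,/0/default.jpg
```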

Once the digital exhibition has been planned and the necessary content assembled, it is necessary to display it online to users. This rendering is most frequently done with one-off code, or requires extensive customization of a more generic tool. This raises the costs per exhibition significantly. IIIF does not obviate the need for specific customizations for the detailed and highly interactive virtual experience level of exhibition, but for advertising and memorials of physical exhibitions, this is unlikely to be necessary or justified. Instead, with some tidying, the collection and manifests used in the planning and construction can serve as the description for off-the-shelf viewers to render the exhibition online. For more detailed exhibitions, IIIF can still help with the structure and descriptive content, allowing callouts from a framework to individual object and exhibition-specific functionality.

Beyond rendering, there is minimal reuse of digital exhibitions as they are simply Web pages without machine-readable data backing them. The focus is on the presentation, not on the description; however, these two are combined in a developer- and publisher-friendly way in IIIF. Further, linking within and across exhibitions becomes straightforward using the IIIF paradigms, as it is built on top of Linked Open Data. The identity of the exhibition and its components are a native part of the Web, not just the HTML pages.

Community participation

While the groundwork is well set, several challenges have been identified with this approach. Those challenges can be overcome relatively easily with the participation of the community in the process of understanding and building exhibitions using IIIF and the implementation of viewers that support them.

In order to provide consensus around the range of activities that should be considered, an established pattern in the IIIF community is to spend time examining shared feature sets and whether it would be valuable to provide a common framework that implements them.  This focus on what is useful, combined with discussion about what new features could be implemented at the same time, keeps the work focused and practical. IIIF viewers to date have focused on single objects and paged objects, such as books and manuscripts, and would greatly benefit from this use case analysis of a wider range of viewing paradigms.

The development community across IIIF can also work to ease development effort of the viewing applications, to ensure that bespoke exhibition requirements can take place within a framework without significant overheads. In this scenario, extensions can be shared within the community rather than duplicated, even if they are not core IIIF functionality. However, it requires the core viewing applications to be sufficiently stable and extensible that every extension doesn’t need to be constantly rewritten to maintain the integration.

It would be greatly beneficial if vendors within the museum sector adopted the IIIF APIs natively within their offerings, rather than requiring organizations to add inefficient shims on top of them. With access to and understanding of the internal functionality, vendors are best positioned to do this work for their products. The YCBA workarounds for getting content out of TMS and digital asset management systems are echoed at the Getty, and doubtless elsewhere in as many flavors as there are museums. Vendor engagement would ease this, allowing museums to spend more time creating better exhibitions and less time worrying about software interoperability.

There are also a few areas of work needed for the IIIF specifications in order to ensure the best end user experience. The Presentation API assumes that there is a single set of owner-provided descriptive metadata for an object which is appropriate for use in any context, whereas an exhibition might wish to provide exhibition-specific information. Use cases for this include describing how the object fits into the sequence of objects that make up the exhibition, or simply standardizing the name used for an artist where different organizations have different cataloging and display rules. Secondly, best practices around the different ways of linking resources together within and across exhibitions should be assessed and enabled by the specifications. Finally, and with the most widespread demand, the integration of audio, video, and 3D material would have a significant impact on the presentation of digital exhibitions on the Web.

Conclusions

In this paper we first discussed the functionality of the IIIF specifications, and then related that functionality to requirements for several different types of digital exhibition; we then discussed the various steps in the workflow from planning the exhibition to delivering it on the Web for viewing and reuse in a wider environment.  The approach described has some significant benefits, such as the automation of repetitive tasks that are currently done manually and the facilitation of collaboration across departments and organizations at all stages within the lifecycle of the exhibition. These improvements will save both time and money, enabling smaller organizations to engage more easily and large organizations to do more.

While IIIF may have appeal at the level of the individual institution as a mechanism for improving internal processes, the publication of interoperable content has a positive global impact. The adoption of IIIF for exhibitions broadens the interoperable content ecosystem, providing greater access to art historical resources. The challenges and opportunities that this new use of IIIF has uncovered are better understood through engagement, and IIIF would benefit significantly from the participation of the broader museum community to ensure that all relevant, shared use cases are able to be met.

References

Butler, H., M. Daly, A. Doyle, S. Gillies, S. Hagen, & T. Schaub. (2016). The GeoJSON Format, RFC 7946.  Consulted January 20th, 2017. Available  http://www.rfc-editor.org/info/rfc7946

Sanderson, R., P. Ciccarese, & H. Van de Sompel. (2013).  Open Annotation Data Model. Consulted January 18th, 2017.  Available http://www.openannotation.org/spec/core/

Sanderson, R., P. Ciccarese, & B. Young. (2017).  Web Annotation Data Model, W3C Candidate Recommendation.  Consulted January 20th, 2017. Available https://www.w3.org/TR/annotation-model/


Cite as:
Sanderson, Robert. "Building distributed online exhibitions with IIIF." MW17: MW 2017. Published February 9, 2017.
https://mw17.mwconf.org/paper/building-distributed-online-exhibitions-with-iiif/

