1. Two API Specifications
Image API
• The Pixels
• (Just Enough) Technical Metadata
• Server Capabilities
Presentation API
• Metadata Labels and Values
• Ordering and Arrangement of Images and Other Content
• Relationships to Related Resources
7. Presentation API
Features
• Metadata Labels and Values
• Ordering and Arrangement of Images and Other Content
• Object Structure and Layout
• Including Links to the Image API
• Relationships to Related Resources
• Attribution and Licensing
23. Properties
Descriptive
label
Name of the resource
description
Textual summary
thumbnail
Image summary
metadata
Pairs of Label and Value
Metadata Example:
label:"Created", value:"1300"
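Written out as it would appear in a Presentation API response, the example above is just an array of label/value pairs alongside the other descriptive properties. A minimal sketch, with hypothetical values for everything except the "Created"/"1300" pair from the slide:

```python
import json

# Descriptive properties on a resource: plain labels and values, with no
# fixed semantics. All values except the metadata pair are hypothetical.
descriptive = {
    "label": "Book of Hours",                               # name of the resource
    "description": "A small 14th-century prayer book.",     # textual summary
    "thumbnail": "http://example.org/images/bh/thumb.jpg",  # image summary
    "metadata": [
        {"label": "Created", "value": "1300"},              # pairs of label and value
    ],
}

print(json.dumps(descriptive, indent=2))
```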
24. Properties
Rights
license
Link to license description
attribution
Text required to be displayed
logo
Image required to be displayed
Linking
service
Additional service endpoint
seeAlso
Semantic metadata resource
related
Resource to display to the user
28. Future Work
• Authorization / Authentication
• Search within (text and annotations)
• Discovery of Manifest and Image Identifiers
• CRUD
Editor's Notes
As you've heard already IIIF has published two API specifications:
The Image API: for getting at images and relevant metadata
The Presentation API: images with relevant descriptive properties, in the context of related content including text transcriptions, annotations, and other related images.
Without standards we can only have closed systems
Shared APIs make technologies interchangeable, giving us choices between different technologies in the different roles within our application stack
The Image API defines a URI syntax that packs all of the parameters into a clean, path-based form.
While one can carefully craft URIs (as I'll do while demonstrating), it is generally expected and intended that URIs will be built using rich web-clients, some of which we’ll demonstrate a bit later on.
That said, having a tidy persistent URL for citations, annotations, web exhibitions, emailing, and other means of sharing can be quite useful.
It is required that servers apply each transformation from left to right, i.e. in the order specified by the API.
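The notes above can be sketched as a tiny URI builder. The path order (region, then size, then rotation, then quality and format) is also the order in which the server applies the transformations. The base URL and identifier are hypothetical, and the defaults assume the Image API 2.x parameter names:

```python
# Build an Image API URI:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The path order is the processing order: the server extracts the region,
# scales it to the size, rotates it, then applies quality and format.
def image_uri(base, identifier, region="full", size="full",
              rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A hand-crafted URI like the ones in the demo (hypothetical server/identifier):
uri = image_uri("http://example.org/iiif", "page1",
                region="125,15,120,140", size="90,")
print(uri)  # http://example.org/iiif/page1/125,15,120,140/90,/0/default.jpg
```

Such a URI is exactly the kind of tidy, persistent URL that works well for citation and sharing.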
[SWITCH TO LIVE DEMO HERE]
Presentation API: What it is:
A bit more complex, but easy to sum up:
When you have a bunch of content that taken in aggregate represents a real-world object, you need to create relationships between those bits of content to make an accurate and useful representation.
A set of data structures that is focused on user experience
Enough to drive a rich client
Facilitates ordering/sorting, arranging, and transcribing/annotating
A syntax that is friendly to web developers
Native to JavaScript
They don’t need to understand, e.g., metadata semantics to draw a feature-rich user interface
Presentation API: What it is not!
Agnostic of content standards
No descriptive metadata semantics
Instead…
Middleware
There are five core Parts in the Presentation API
They’re best explained by example
We’re going to walk up this graph and use the IIIF Presentation model to build a collection of manuscripts.
It’s a little easier to talk about Content and Canvas together…
Canvas is the fundamental building block. It represents the notion of a physical unit. You might not have an image; maybe you just know it exists
Following the Shared Canvas data model and the canvas metaphor, any content is “painted” onto the Canvas.
You can think of it like a PowerPoint slide
The Content could be an image of the whole thing, or just a part of it, or multiple images positioned relative to each other…
…or text-based content in the form of transcriptions, OCR, or annotations.
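The painting metaphor can be sketched in JSON (shown here as a Python dict), roughly following the Presentation API 2.x / Shared Canvas vocabulary; every URL below is hypothetical. The canvas has its own dimensions, and an annotation with motivation `sc:painting` places the image onto it; a `#xywh` fragment on the target would place it onto just a part:

```python
# A canvas with one image painted onto its whole surface. Shapes follow
# Presentation API 2.x conventions; all URLs are hypothetical examples.
canvas = {
    "@id": "http://example.org/iiif/book1/canvas/f1r",
    "@type": "sc:Canvas",
    "label": "f. 1r",
    "height": 2000,   # the canvas has dimensions even if no image exists yet
    "width": 1500,
    "images": [
        {
            "@type": "oa:Annotation",
            "motivation": "sc:painting",  # "paint" the resource onto the canvas
            "resource": {
                "@id": "http://example.org/iiif/f1r/full/full/0/default.jpg",
                "@type": "dctypes:Image",
                "height": 2000,
                "width": 1500,
            },
            # Target the whole canvas; append #xywh=x,y,w,h for a region.
            "on": "http://example.org/iiif/book1/canvas/f1r",
        }
    ],
}
```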
Continuing our way up the model; so far we’ve painted a single image onto a canvas.
Presumably our manuscript has multiple leaves, and each leaf will have a canvas, so we’ll need a way to relate those to each other, put them in order, structure them etc. This is where Sequence comes in.
If we take this example from the Mirador viewer (you’ll see a live demo a bit later), Sequence allows us to do a few things…
Most notable in this case is Paging
The API distinguishes right-to-left, left-to-right, top-to-bottom, and bottom-to-top viewing directions
There are also features for, e.g. indicating that a page should be skipped
Filmstrips or reference strips
and pages of ordered thumbnails.
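A sequence sketch tying these features together, again using Presentation API 2.x names and hypothetical URLs; the canvases are listed by reference for brevity:

```python
# An ordered sequence of canvases. viewingDirection covers the four
# directionality cases; a viewingHint of "paged" asks viewers to show
# facing-page openings. All URLs are hypothetical.
sequence = {
    "@id": "http://example.org/iiif/book1/sequence/normal",
    "@type": "sc:Sequence",
    "viewingDirection": "right-to-left",  # or left-to-right, top-to-bottom, bottom-to-top
    "viewingHint": "paged",               # "non-paged" on a canvas marks it to be skipped
    "canvases": [
        "http://example.org/iiif/book1/canvas/f1r",
        "http://example.org/iiif/book1/canvas/f1v",
    ],
}
```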
It’s also worth noting that there is a slightly different way of arranging Canvases, using a feature called Ranges. I’m not going to cover it in more detail here, but you’ll note that this manuscript has a TOC along the left margin. Ranges enable this.
Finally we have Manifests. As its name suggests, the Manifest is the package of all of the content, canvases, sequences, ranges and metadata we have about an object. All of these constituent parts are either contained in a JSON-LD document that represents the Manifest, or are referenced via URIs in the Manifest.
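Putting the pieces together, a minimal manifest sketch (Presentation API 2.x shapes, hypothetical URLs): the manifest carries the descriptive and rights properties and contains, or references by URI, its sequences and canvases:

```python
import json

# A minimal manifest: properties at the top, one sequence, one canvas.
# All URLs are hypothetical examples.
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "http://example.org/iiif/book1/manifest",
    "@type": "sc:Manifest",
    "label": "Book of Hours",
    "attribution": "Provided by Example Library",
    "sequences": [
        {
            "@type": "sc:Sequence",
            "canvases": [
                {
                    "@id": "http://example.org/iiif/book1/canvas/f1r",
                    "@type": "sc:Canvas",
                    "label": "f. 1r",
                    "height": 2000,
                    "width": 1500,
                    "images": [],  # painting annotations would go here
                }
            ],
        }
    ],
}

print(json.dumps(manifest, indent=2))
```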
And collections, not surprisingly, are groups of manifests.
Again, back in the Mirador viewer we can see how collections and manifests relate to each other.
Moving on, there are a few properties that can be attached to most of the nodes in the model. These take the form of simple key-value pairs, and, as I said earlier, there are no content semantics attached; they’re just labels and values. We did not set out to create another metadata standard.
You can see how these properties are used in Mirador.
Just a quick word about serialization: like the Image API, the Presentation API uses JSON-LD, which is:
Easy for web developers to understand and consume
Without sacrificing the semantics of linked data.
Talk a bit about each, what we mean, scope and current use cases.