Artificially shifting the “zoom center” of a map

I’ve worked on a couple of projects recently that involved positioning some kind of container on top of a slippy map that blocks out a significant chunk of the screen. Usually, the motivation for this is aesthetic – it looks neat to bump down the opacity on the overlay container just a bit, allowing the tiles to slide around underneath the overlay content when the map is panned or zoomed:

doi-border

gemini-border

The problem with this, though, is that it creates a “functional” viewport that’s different from the actual height and width of the map container, which is scaled to fill the window. This causes trouble as soon as you need to dynamically focus the map on a given location. For example, in the Declaration of Independence project, Neatline needs to frame the viewport around the annotated signatures on the document when the user clicks on one of the names in the transcription. By default, we get this:

hancock-overlap

OpenLayers is faithfully centering the entire map container around the geometric centroid of the annotation – but, since we’ve covered up about a third of the screen with the text, a sizable chunk of the annotation is slipping under the text container, which looks bad. Also annoying is that the map will zoom around a point that’s displaced away from the middle of the visible region. If you want to zoom in on Boston, and put Boston in what appears to be the center of the map, Boston will actually scoot off to the right when you click the “+” button, since the map will zoom in around the “true” centroid in the middle of the window. This cramps your style when you want to zoom in a couple of times, and have to manually drag the map to the right after each step to keep the thing you care about inside the viewport.

Really, what we want is to shift the working center point of the map to the right, so that it sits in the middle of the un-occluded region on the map:

hancock-fixed

Mapping libraries generally don’t have any native capacity to do this, so I had to MacGyver up something custom. After tinkering around, I realized that there’s a really simple and effective solution to this problem – just stretch the map container past the right edge of the window until the center sits in the correct location. For simplicity, imagine the window is 1200px wide, and the overlay is 400px, a third of the width. We want the functional center point of the map to sit 400px to the right of the overlay, or 800px from the left side of the window. In order for the center to sit at 800px, the map has to be twice that, 1600px:

diagram5

Conveniently, this is always equal to the width of the window plus the width of the overlay, so the positioning logic is only slightly more complicated than the usual fill-the-window sizing.
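Something along these lines, as a bare-bones sketch (hypothetical element IDs, not the project’s actual code):

    // Stretch the map container past the right edge of the window so
    // that its center point falls in the middle of the un-occluded
    // region to the right of the overlay. (Element IDs are made up.)
    function sizeMap() {

      var overlay = document.getElementById('overlay');
      var map = document.getElementById('map');

      // Window width + overlay width puts the container's true center
      // at the midpoint of the visible region.
      map.style.width = (window.innerWidth + overlay.offsetWidth) + 'px';
      map.style.height = window.innerHeight + 'px';

      // If the map is an OpenLayers instance, it also needs to be told
      // that its container changed size, e.g. with map.updateSize().

    }

    window.addEventListener('resize', sizeMap);
    sizeMap();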

The one downside to this is that it adds a bit of wasted load to the tile server(s) that are feeding imagery into the map – OpenLayers has no way of knowing that the rightmost 400px of the map is invisible to the user, and will load tiles to fill the space. But, life is short, servers are cheap.

The Declaration of Independence – on a map, with a painting

Launch the Exhibit

declaration-of-independence

Way back in the spring of 2012, a couple months before we released the first version of Neatline, I drove up to Washington to give a little demo of the project to the folks at the Library of Congress. I had put together a couple of example exhibits for the presentation, but, the night before, I was bored and found myself brainstorming about Washington-themed projects. On a lark, I downloaded a scan of the 1823 facsimile of the Declaration of Independence from the National Archives website, and spent a couple hours tracing polygons around each one of the signatures at the bottom of the document. I showed the exhibit the next day, and had big plans to flesh it out and turn it into a real, showable project. But then I got swept up in the race to get the first release of Neatline out the door before DH2012 in Hamburg, and then sucked into the craziness of the summer conference season, and the project slipped down into the towering stack of things that I could never quite find time to work on.

For some reason, though, the idea popped back into my head a couple months ago – maybe because Menlo Park is submerged in a kind of permanent summer, and it pretty much always feels like a good time to eat ice cream and shoot off fireworks. After mulling it over for a couple weeks, I decided to resurrect it from the dead, spruce it up, and post it in time for the 4th of July. So, with two days to spare, here we go – an interactive edition of the Declaration of Independence, tightly coupled with three other “views” in an effort to add dimension to the original document:

  1. A full-text, two-way-linked transcription of the manuscript and the signatures at the bottom. Click on sentences in the transcription to focus on the corresponding region of the scanned image, or click on annotated blocks on the image to scroll the text.

    transcript

  2. An interactive edition of Trumbull’s “Declaration of Independence” painting, with each of the faces outlined and linked with the corresponding signature on the document.

    painting

  3. All plastered on top of a map that plots out each of the signers’ hometowns on a Mapbox layer, which makes it easy to see how the layout of the signatures maps on to the geographic layout of the colonies. Which, by extension, tracks the future division between Union and Confederate states in the Civil War – Georgia and the Carolinas look awful lonely over on the far left side of the document.

    map

Once I positioned the layers, annotated the signatures and faces, and plotted out the hometowns, I realized that I had painted myself into an interesting little corner from an information design standpoint – it was difficult to quickly move back and forth between the three main sections of the exhibit. In a sense, this is an inherent characteristic of deeply-zoomed interfaces. The ability to focus really closely on any one of the three visual grids – which is what makes it possible to mix them all together into a single environment – has the side effect of making the other two increasingly distant and inaccessible, more and more so the further down you go. For example, once you’ve focused in on Thomas Jefferson’s face in the Trumbull painting, it’s quite a chore to manually navigate to the corresponding signature on the document – you have to zoom back, pan the map up towards the scanned image, find the signature (often no easy task), and then zoom back down.

This is especially annoying in this case, since this potential for comparison is a big part of what’s interesting about the content. What I really wanted, I realized, was to be able to switch back and forth in a really simple, fluid way among the different instantiations of any individual person on the document, painting, and map – I wanted to be able to flip through them like a slideshow, to round up all the little partial representations of the person and hold them side-by-side in my head. So, as an experiment, I whipped up a little batch of custom UI components (built with the excellent React library, which fits in like a dream with Neatline’s Javascript API) that provide a “toggling” interface for each individual signer, and the exhibit as a whole.

By default, when you hit the page, three top-level buttons in the right corner of the window link to the three main sections of the exhibit – the hometowns plotted along the eastern seaboard, the declaration over the midwest, and the painting over the southeast. In addition to the three individual buttons, there’s also a little “rotate” button that automatically cycles through the three regions, which makes it easy to toggle around without looking away from the map to move the cursor:

exhibit-buttons
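For a flavor of how small these components are, here’s a hypothetical sketch of the “rotate” behavior in present-day React terms (the exhibit’s actual components were written against a much earlier version of the library, and all of the names here are illustrative):

    // Cycle the focus through an ordered list of exhibit regions.
    // `onFocus` stands in for a call into Neatline's Javascript API
    // that pans and zooms the map to the selected region.
    function RotateButton({ regions, onFocus }) {
      const [index, setIndex] = React.useState(0);

      const rotate = () => {
        const next = (index + 1) % regions.length;
        setIndex(next);
        onFocus(regions[next]);
      };

      return React.createElement(
        'button', { onClick: rotate }, 'Next: ' + regions[index].label
      );
    }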

More usefully, though, it’s possible to bind any of the individual signers to the widget by clicking on the annotations. For example, if I click on Thomas Jefferson’s face in the painting, the name locks into place next to the buttons, which now point to the representations of that specific person in the exhibit – “Text” links to Jefferson’s signature, “Painting” to his face, and “Map” to Monticello:

signer-toggle

Once you’ve activated one of the signers, click on the name to show an overlay with a picture and biography, pulled from a public domain book published by the National Park Service called Signers of the Declaration:

bio-overlay

This is pretty straightforward on the document and the painting, where there’s always a one-to-one correspondence between an annotation and one of the signers. Things get more complicated on the map, though, where it’s possible for a single location to be associated with more than one signer. Philadelphia, for example, was home to Robert Morris, Benjamin Rush, Benjamin Franklin, John Morton, and George Clymer, so I had to write a little widget to make it possible to home in on just one of the five after clicking the dot:

philadelphia

Last but not least, each sentence in the document itself is annotated and wired up with the corresponding text transcription on the left – click on the image to scroll the text, or click on the text to focus the image:

text

Happy Fourth!

NeatlineText – Connect Neatline exhibits with documents

[Cross-posted from scholarslab.org]

Download the plugin

nltext-detail

Today we’re pleased to announce the first public release of NeatlineText, which makes it possible to create interactive, Neatline-enhanced editions of text documents – literary and historical texts, articles, book chapters, dissertations, blog posts, etc. – by connecting individual paragraphs, sentences, and words with objects in Neatline exhibits. Once the associations are established, the plugin wires up two-way linkages in the user interface between the highlighted sections in the text and the imagery in the exhibit. Click on the text, and the exhibit focuses around the corresponding location or annotation. Or, click on the map, and the text scrolls to show the corresponding passages in the document.
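The mechanic is simple: spans of the text are tagged with the identifier of a Neatline record. A rough sketch of the pattern (the attribute and slug here are illustrative; the plugin’s documentation spells out the exact conventions):

    <!-- Hypothetical sketch: bind a sentence to the record with slug
         "launch-pad". Clicking the span focuses the exhibit on the
         record; selecting the record scrolls the text to the span. -->
    <p>
      <span data-neatline-slug="launch-pad">
        The rocket lifted off from the pad
      </span>
      just after dawn.
    </p>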

We’ve been using some version of this code in internal projects here at the lab for almost two years, and it’s long overdue for a public release. The rationale for NeatlineText is pretty simple – again and again, we’ve found that Neatline projects often go hand-in-hand with some kind of regular text narrative that sets the stage, describes the goals of the project, or lays out an expository thesis that would be hard to weave into the more visual, free-form world of the Neatline exhibit proper. This is an awesome combination – tools like Neatline are really good at displaying spatial, visual, dimensional, diagrammatic information, but nothing beats plain old text when you need to develop a nuanced, closely-argued narrative or interpretation.

The difficulty, though, is that it’s hard to combine the two in a way that doesn’t favor one over the other. We’ve taken quite a few whacks at this problem over the course of the last few years. One option is to present the text narrative as a kind of “front page” of the project that links out to the interactive environment. But this tends to make the Neatline exhibit feel like an add-on, something grafted onto the project as an afterthought. And this can easily become a self-fulfilling prophecy – when you have to click back and forth between different web pages to read the text and explore the exhibit, you tend to write the text as a more or less standalone piece of writing, instead of weaving the narrative in with the conceptual landscape of the exhibit.

Another option is to chop the prose narrative up into little chunks and build it directly into the exhibit – like the numbered waypoints we used in the Hotchkiss projects back in 2012, each waypoint containing a small portion of a longer interpretive essay. But this tends to err in the other direction, dissolving the text into the visual organization of the exhibit instead of presenting it as a first-class piece of content.

NeatlineText tries to solve the problem by just plunking the two next to each other and making it easy for the reader (and the writer!) to move back and forth between the two. For example, NeatlineText powers the interactions between the text and imagery in these two exhibits of NASA photographs from the 1960s:

nltext-gemini

nltext-saturn-v

(Yes, I know – I’m a space nerd.) NeatlineText is also great for creating interactive editions of primary texts. An earlier version of this code powers the Mapping the Catalog of Ships project by Jenny Strauss Clay, Courtney Evans, and Ben Jasnow (winner of the Fortier Prize at DH2013!), which connects the contingents in the Greek army mentioned in Book 2 of the Iliad with locations on the map:

nltext-ships

And NeatlineText was also used in this interactive edition of the first draft of the Gettysburg Address:

nltext-gettysburg

Anyway, grab the code from the Omeka add-ons repository and check out the documentation for step-by-step instructions about how to get up and running. And, as always, be sure to file a ticket if you run into problems!

FedoraConnector 2.0

fedora

[Cross-posted from scholarslab.org]

Hot on the heels of yesterday’s update to the SolrSearch plugin, today we’re happy to announce version 2.0 of the FedoraConnector plugin, which makes it possible to link items in Omeka with objects in Fedora Commons repositories! The workflow is simple – just register the location of one or more installations of Fedora, and then individual items in the Omeka collection can be associated with a Fedora PID. Once the link is established, any combination of the datastreams associated with the PID can be selected for import. For each of the datastreams, FedoraConnector proceeds in one of two ways:

  • If the datastream contains metadata (e.g., a Dublin Core record), the plugin will check to see if it can find an “importer” that knows how to read the metadata format. Out of the box, the plugin can import Dublin Core and MODS, but also includes a really simple API that makes it easy to add in new importers for other metadata standards. If an importer is found for the datastream, FedoraConnector just copies all of the metadata into the item, mapping the content into the Dublin Core elements according to the rules defined in the importer. This creates a “physical” copy of the metadata that isn’t linked to the source object in Fedora – changes in Omeka aren’t pushed back upstream into Fedora, and changes in Fedora don’t cascade down into Omeka.

  • If the datastream delivers content (e.g., an image), the plugin will check to see if it can find a “renderer” that knows how to display the content. Like the importers, the renderers are structured as an extensible API that ships with a couple of sensible defaults – out of the box, the plugin can display regular images (JPEGs, TIFs, PNGs, etc.) and JPEG2000 images. If a renderer exists for the content type in question, the plugin displays the content directly from Fedora; for example, if the datastream is a JPEG image, the plugin adds markup to the item show page that points straight back at the repository (see the sketch just after this list).

    Unlike the metadata datastreams, then, which are copied from Fedora, content datastreams pipe in data from Fedora on-the-fly, meaning that a change in Fedora will immediately propagate out to Omeka.
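For the JPEG example, that markup might look something like this (a hypothetical sketch with an invented PID and datastream ID; the point is just that the src attribute points directly at Fedora’s REST API, so nothing is copied into Omeka):

    <!-- The image is piped straight from the Fedora repository rather
         than copied into Omeka. (PID and datastream ID are made up.) -->
    <img
      src="http://fedora.example.edu:8080/fedora/objects/uva-lib:1234/datastreams/IMAGE/content"
      alt="Image served from Fedora" />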

(See the image below for a sense of what the final result might look like – in this case, displaying an image from the Holsinger collection at UVa, with both a metadata and content datastream selected.)

For now, FedoraConnector is a pretty simple plugin. We’ve gone back and forth over the course of the last couple years about how to model the interaction between Omeka and Fedora. Should it just be a “pull” relationship (Fedora -> Omeka), or also a “push” relationship (Omeka -> Fedora)? Should the imported content in Omeka stay completely synchronized with Fedora, or should it be allowed to diverge for presentational purposes? These are tricky questions. Implementations of Fedora – and the workflows that intersect with it – can vary pretty widely from one institution to the next. The current set of features was built in response to specific needs here at UVa, but we’ve been talking recently with folks at a couple of other institutions who are interested in experimenting with variations on the same basic theme.

So, to that end, if you use Fedora and Omeka and are interested in wiring them together – we’d love to hear from you! Specifically, how exactly do you use Fedora, and what type of relationship between the two services would be most useful? With a more complete picture of what would be useful, I suspect that a handful of pretty surgical interventions would be enough to accommodate most use patterns. In the meantime, be sure to file a ticket on the issue tracker if you find bugs or think of other features that would be useful.

fedora

SolrSearch 2.0

solr

[Cross-posted from scholarslab.org]

Today we’re pleased to announce version 2.0 of the SolrSearch plugin for Omeka! SolrSearch replaces the default search interface in Omeka with one powered by Solr, a blazing-fast search engine that supports advanced features like hit highlighting and faceting. In most cases, Omeka’s built-in searching capabilities work great, but there are a couple of situations where it might make sense to take a look at Solr:

  • When you have a really large collection – many tens or hundreds of thousands of items – and want something that scales a bit better than the default solution.

  • When your metadata contains a lot of text content and you want to take advantage of Solr’s hit highlighting functionality, which makes it possible to display a preview snippet from each of the matching records.

  • When your site makes heavy use of content taxonomies – collections, tags, item types, etc. – and you want to use Solr’s faceting capabilities, which make it possible for users to progressively narrow down search results by adding on filters that crop out records that don’t fall into certain categories. Stuff like – show me all items in “Collection 1”, tagged with “tag 2”, and so forth.
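Under the hood, that kind of progressive narrowing translates into Solr filter queries stacked on top of the keyword search. Hypothetically, something like this (SolrSearch assembles these parameters for you, and the field names are illustrative):

    q=ivanhoe&fq=collection:"Collection 1"&fq=tag:"tag 2"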

To use SolrSearch, you’ll need access to an installation of Solr 4. To make deployment easy, the plugin includes a preconfigured “core” template, which contains all the configuration files necessary to index content from Omeka. Once the plugin is installed, just copy-and-paste the core into your Solr home directory, fill out the configuration forms, and click the button to index your content in Solr.

Once everything’s up and running, SolrSearch will automatically intercept search queries that are entered into the regular Omeka “Search” box and redirect them to a custom interface, which exposes all the bells and whistles provided by Solr. Here’s what the end result looks like in the “Seasons” theme, querying against a test collection that contains the last few posts from this blog, which include lots of exciting Ivanhoe-related news:

solr-search

Out of the box, SolrSearch knows how to index three types of content: (1) Omeka items, (2) pages created with the Simple Pages plugin, and (3) exhibits (and exhibit page content) created with the Exhibit Builder plugin. Since regular Omeka items are the most common (and structurally complex) type of content, the plugin includes a point-and-click interface that makes it easy to configure exactly how the items are stored in Solr – which elements are indexed, and which elements should be used as facets:

solr-config

Meanwhile, if you have content housed in database tables controlled by other plugins that you want to vacuum up into the index, SolrSearch ships with an “addons” system (devised by my brilliant partner in crime Eric Rochester), which makes it possible to tell SolrSearch how to index other types of content just by adding little JSON documents that describe the schema. Registering Simple Pages, for example, takes just one small document.
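Roughly along these lines (a hypothetical sketch with illustrative field names; the addons bundled with the plugin show the exact schema):

    {
      "simple-pages": {
        "table": "SimplePagesPage",
        "id_column": "id",
        "title_column": "title",
        "fields": [
          { "name": "text", "is_indexed": true }
        ]
      }
    }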

And the system even scales up to handle more complicated data models, like the parent-child relationship between “pages” and “page blocks” in ExhibitBuilder, or between “exhibits” and “records” in Neatline.

Anyhow, grab the built package from the Omeka addons repository or clone the repository from GitHub. As always, if you find bugs or think of useful features, be sure to file a ticket on the issue tracker!