A poem, typeset inside its own title

Start the Poem


I’ve spent a lot of time recently thinking about really weird, experimental ways to represent text documents. This grows out of a number of other projects that I’ve had the good luck to work on in recent years, many of which have turned on the question of how to take a chunk of static text and find a useful and appealing way to put it on a computer screen. At first, I thought this was a coincidence, but then I started to wonder if it might actually be a kind of common denominator that binds together lots of different types of DH-y projects, even ones that aren’t directly interested in questions of textual representation. Neatline, for example, started out as just a mapping project, but then we realized that we almost always wanted to say something about the maps we were making, and started brainstorming about ways to wire them up with articles, primary texts, blog posts, book chapters, etc. Suddenly, it was as much about the texts as the maps – how do you tweak the functionality of a text document to make it play well with a digital map?

These kinds of questions are still very new, in the long view. We’ve had thousands of years to think about how to organize texts on physical pages, and really just about 20 years to figure out how to do it on computer screens. And, books are a hard act to follow – they’re extremely mature, and we’ve grown into very sophisticated users of them over time. I think this puts a lot of pressure on digital book design to be really functional and legible right out of the gate, lest it seem like a gimmick. In fact, in most cases, text on screens isn’t actually different from text on pages. This post, for example, is almost completely interoperable with analog text – I could print out a physical copy in just a couple seconds, and the experience of reading it wouldn’t be different in any kind of functional way. This premium on usability means that we haven’t spent much time mucking around with the low-level mechanics of the reading interface. What we’re really doing, a lot of the time, is using computers to migrate texts, to convert ink into pixels, to recreate physical pages on digital screens – but not really diving into the question of how (whether?) computers could be used to rethink the functionality of text documents from first principles.

And, really, this makes a lot of sense. The printed page is put together the way it is for good reasons. The basic building blocks of physical documents are probably governed by pretty low-level constraints – the physiology of vision, the cognitive structure of language understanding, the simple pragmatics of reading, etc. Having said that, I often find myself daydreaming about what kinds of totally crazy, impractical, ludic reading interfaces could be built if the requirements for usability and legibility were completely lifted. This lends itself to weird thought experiments. If I were an alien with no knowledge of books or human culture, but magically knew how to read English and program javascript – how would I lay out a piece of text?

The answers tend to be impractical, by definition, and they often seem most promising as some flavor of electronic literature. I was thinking about this recently, and decided to try my hand at “printing” a text on a screen in a way that would be difficult on a physical page. I played around with a couple of similar projects last fall, creating digital typesettings of isolated little fragments of poetry, but this time I wanted to try to do an entire work – a whole poem, a complete text that could actually be read from start to finish. I decided to try it out with a little poem called Polyphemus that I’ve been playing with for about four years – I wrote a long version of it back in 2010, right after finishing college, rewrote it a handful of times, and eventually typed out a really short version as a block comment in the source code of Exquisite Haiku, a web application for collaborative poetry composition that I built as a 20% project back at the Scholars’ Lab.

The idea was this – take the words in a poem, and physically place them inside the letter glyphs of the poem’s title:


Once all the words were in place, I realized that I wanted some way to “traverse” the poem, to slide through it, to smoothly read the entire thing without having to manually pan the display. This is ironic – really, what I wanted was to be able to read it more normally, which gets back to the question of just how much innovation is really desirable at the level of the basic reading mechanic. But, I decided to stick with it, because I liked something about the way the new layout of the poem jibed with the content. Or, really, the genre – dramatic monologues are always trying to characterize or define the speaker, to be the speaker, in a sense. Here, I realized, this gets encoded into the dimensional layout of the text – the poem is inscribed inside the name of the speaker, it literally conforms to the shape of the word that signifies the person being described.

To chip away at the problem of adding a sense of linearity, I added a little slider at the bottom of the screen, which can either be dragged with the cursor or “stepped” forward or backward by pressing the arrow keys. Since Neatline always positions the selected record exactly in the center of the screen, this results in something similar to commercial speed-reading interfaces like Spritz – you can just leave your eyes focused on the center of the screen, hold down the right arrow key, and “watch” the poem as much as read it:

The dream – the deep page

These projects are fun to make, but they’re brittle and can’t really scale much beyond this. OpenLayers does a good job of rendering the really high-density geometries that are needed to represent the letter shapes, but the content management is slow and difficult – Neatline is a GIS client at heart, not a text editor, and the workflow leaves a lot to be desired (Polyphemus is a brisk 94 words, and I bet I spent at least 10-15 hours putting this together). Really, this is a simple attempt to prototype a much larger and more interesting project, a generalization of this type of deeply-zoomable “page” that would be much easier to create, edit, and navigate.

I’m not totally sure what this would look like, but I think the crux of it would be this – a writing and reading environment that would treat depth or zoom as a completely first-class dimension. I’ve had a vague sense that something like this would be interesting and useful for a while, but I couldn’t really put a finger on exactly why. Then, a couple weeks ago, I stumbled across a fascinating paper abstract from DH2014 that surveys a number of different variations on exactly this idea, and frames it in a way that was really clarifying for me. In “The Layered Text. From Textual Zoom, Text Network Analysis and Text Summarisation to a Layered Interpretation of Meaning,” Florentina Armaselu traces this idea back to work by Neal Stephenson and Barthes, who propose the notion of a “z-text,” a collection of vertically stacked text fragments, each expanded on by the snippet that sits below it. Reading takes place on two dimensions – forward, along the conventional textual axis, and also down into any of the fragments that make up the text.

Poetry aside, this could be fantastically useful whenever you want to formalize a hierarchical relationship among different parts of a text, or create some kind of graded representation of complexity or detail. A footnote or annotation could sit “below” a phrase, instead of being pushed off into the margin or to the bottom of the document. Or think about the whole class of organizational techniques that we use to make it easy to engage with texts at different levels of detail or completeness. We start with a table of contents, which then feeds into an abstract, an introduction, an executive summary – each step providing a progressively more detailed fractal representation of the structure of the document – and then finally provide the complete treatments of the sections. A deeply zoomed page would formalize this structure and make it unnecessary to fall back on the “multi-pass” approach that regular, one-dimensional documents have to rely on, which seems inefficient and repetitive by comparison. This seems like a case where computers really could improve on a fundamental limitation of printed texts, which is exciting.
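As a rough sketch of the idea (a made-up structure, not anything from Armaselu’s paper), a z-text can be modeled as a tree of fragments – “forward” reading walks along each level, and “downward” reading descends into the children of a fragment:

```javascript
// A z-text fragment: its own text, plus an ordered list of deeper
// fragments that expand on it.
function zNode(text, children) {
  return { text: text, children: children || [] };
}

// Read "forward" through a list of fragments, descending only to the
// requested depth. Depth 0 is the table-of-contents view; larger depths
// interleave progressively more detailed expansions.
function readAtDepth(nodes, depth) {
  if (depth === 0) return nodes.map(function (n) { return n.text; });
  return nodes.reduce(function (out, n) {
    return out.concat([n.text], readAtDepth(n.children, depth - 1));
  }, []);
}
```

A reader’s zoom level just becomes the `depth` argument – the same document yields the summary or the full treatment depending on how far down you read.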

From this perspective, it actually makes a lot of sense that technologies that were originally designed for mapping (Neatline, OpenLayers) turn out to be weirdly good at mocking out rickety little approximations of this idea. If you think about it, digital maps do exactly this, just in a visual sense. You often start out with a broad, low-detail, zoomed-back representation of a place – say, the United States – and then progressively zoom downwards, losing the comprehensiveness of the original view but gaining enormous detail at one place or another – a state, a city, a block. The z-text is exactly the same idea, just applied to text. The question of how to build that interface in a way that’s as fluid and intuitive as a slippy map is totally fascinating and, I think, unsolved.

Neatline 2.3

[Cross-posted from scholarslab.org]

Today we’re happy to announce Neatline 2.3! This release includes a couple of nifty new features and, under the hood, a pretty big stack of bug fixes, performance tweaks, and improvements to the development workflow. The coolest new feature in 2.3 is a simple little addition that we’ve gotten a number of requests for in the last few months – the ability to “hard link” to individual records inside of an exhibit. In the new version, when you select a record in an exhibit, a little fragment gets tacked on to the end of the URL that points back to that record. For example, if the record has an ID of 16, the URL will change to something like:


Then, if someone goes directly to this URL, the exhibit will automatically select that record when the page loads, just as if the reader had manually clicked on it – the map will focus and zoom around the record, the popup bubble will appear, the timeline will scroll, and any other custom event bindings added by the exhibit’s theme will fire. This is nice because it makes it easier to use Neatline as a kind of geospatial “footnoting” system that can be referred to from external resources – sort of like the Neatline Text extension, except the text doesn’t have to be housed inside of Omeka. Imagine you’re working on an article that makes reference to some geographic locations, and you want to plot them out in Neatline. This way you could put the text of the article anywhere on the web (a WordPress blog, an online journal, etc.) and just link to the relevant parts of the Neatline exhibit using plain old anchor tags.
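The exact shape of the fragment is an assumption here, but the load-time behavior can be sketched as a tiny parser that recovers the record ID from the URL hash (the “#records/16” format is hypothetical):

```javascript
// Hypothetical fragment format: "#records/<id>". Returns the numeric
// record id, or null if the hash doesn't name a record.
function recordIdFromHash(hash) {
  var match = hash.match(/^#records\/(\d+)$/);
  return match ? parseInt(match[1], 10) : null;
}

// On page load, an exhibit could use this to auto-select the record:
//   var id = recordIdFromHash(window.location.hash);
//   if (id !== null) selectRecord(id); // selectRecord is hypothetical
```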

For example, check out this simple little Neatline exhibit, which just plots out the locations of eight US cities. Then, click on these links to open up the same exhibit, this time focused on the individual cities: New York, San Francisco, Chicago, Los Angeles, Seattle, Denver, Atlanta, and (but of course) Charlottesville.


Check out the change log for the full list of updates in 2.3, and grab the new production package from the Omeka addons repository. Thanks Jenifer Bartle, Jacki Musacchio, Rachel King, Lincoln Mullen, and Miriam Posner for helping us find bugs and brainstorm about features! As always, drop a note on the GitHub issue tracker if you run into problems or have ideas for new features.

Artificially shifting the “zoom center” of a map

I’ve worked on a couple of projects recently that involved positioning some kind of container on top of a slippy map that blocks out a significant chunk of the screen. Usually, the motivation for this is aesthetic – it looks neat to bump down the opacity on the overlay container just a bit, allowing the tiles to slide around underneath the overlay content when the map is panned or zoomed:



The problem with this, though, is that it creates a “functional” viewport that’s different from the actual height and width of the map container, which is scaled to fill the window. This causes trouble as soon as you need to dynamically focus the map on a given location. For example, in the Declaration of Independence project, Neatline needs to frame the viewport around the annotated signatures on the document when the user clicks on one of the names in the transcription. By default, we get this:


OpenLayers is faithfully centering the entire map container around the geometric centroid of the annotation – but, since we’ve covered up about a third of the screen with the text, a sizable chunk of the annotation is slipping under the text container, which looks bad. Also annoying is that the map will zoom around a point that’s displaced away from the middle of the visible region. If you want to zoom in on Boston, and put Boston in what appears to be the center of the map, Boston will actually scoot off to the right when you click the “+” button, since the map will zoom in around the “true” centroid in the middle of the window. This cramps your style when you want to zoom in a couple of times, and have to manually drag the map to the right after each step to keep the thing you care about inside the viewport.

Really, what we want is to shift the working center point of the map to the right, so that it sits in the middle of the un-occluded region on the map:


Mapping libraries generally don’t have any native capacity to do this, so I had to MacGyver up something custom. After tinkering around, I realized that there’s a really simple and effective solution to this problem – just stretch the map container past the right edge of the window until the center sits in the correct location. For simplicity, imagine the window is 1200px wide, and the overlay is 400px, a third of the width. We want the functional center point of the map to sit 400px to the right of the overlay, or 800px from the left side of the window. In order for the center to sit at 800px, the map has to be twice that, 1600px:


Conveniently, this is always equal to the width of the window plus the width of the overlay. So, the positioning logic is only slightly more complicated:
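A minimal sketch of that logic (pure arithmetic, with the actual DOM wiring left out):

```javascript
// Given the window width and the width of an overlay pinned to the left
// edge, return the width the map container needs so that its true
// center lands in the middle of the un-occluded region.
function mapWidth(windowWidth, overlayWidth) {
  // The visible region runs from overlayWidth to windowWidth, so its
  // center sits (windowWidth + overlayWidth) / 2 from the left edge;
  // the map must be twice that wide for its center to land there.
  return windowWidth + overlayWidth;
}

// With a 1200px window and a 400px overlay, the map stretches to 1600px:
//   mapWidth(1200, 400); // 1600
```

In practice the returned value would just be assigned to the map container’s CSS width on every window resize, and the map library told to re-measure itself.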

The one downside to this is that it adds a bit of wasted load to the tile server(s) that are feeding imagery into the map – OpenLayers has no way of knowing that the rightmost 400px of the map is invisible to the user, and will load tiles to fill the space. But, life is short, servers are cheap.

The Declaration of Independence – on a map, with a painting

Launch the Exhibit


Way back in the spring of 2012, a couple months before we released the first version of Neatline, I drove up to Washington to give a little demo of the project to the folks at the Library of Congress. I had put together a couple of example exhibits for the presentation, but, the night before, I was bored and found myself brainstorming about Washington-themed projects. On a lark, I downloaded a scan of the 1823 facsimile of the Declaration of Independence from the National Archives website, and spent a couple hours tracing polygons around each one of the signatures at the bottom of the document. I showed the exhibit the next day, and had big plans to flesh it out and turn it into a real, showable project. But then I got swept up in the race to get the first release of Neatline out the door before DH2012 in Hamburg, and then sucked into the craziness of the summer conference season, and the project slipped down into the towering stack of things that I could never quite find time to work on.

For some reason, though, the idea popped back into my head a couple months ago – maybe because Menlo Park is submerged in a kind of permanent summer, and it pretty much always feels like a good time to eat ice cream and shoot off fireworks. After mulling it over for a couple weeks, I decided to resurrect it from the dead, spruce it up, and post it in time for the 4th of July. So, with two days to spare, here we go – an interactive edition of the Declaration of Independence, tightly coupled with three other “views” in an effort to add dimension to the original document:

  1. A full-text, two-way-linked transcription of the manuscript and the signatures at the bottom. Click on sentences in the transcription to focus on the corresponding region of the scanned image, or click on annotated blocks on the image to scroll the text.


  2. An interactive edition of Trumbull’s “Declaration of Independence” painting, with each of the faces outlined and linked with the corresponding signature on the document.


  3. All plastered on top of a map that plots out each of the signers’ hometowns on a Mapbox layer, which makes it easy to see how the layout of the signatures maps on to the geographic layout of the colonies. Which, by extension, tracks the future division between Union and Confederate states in the Civil War – Georgia and the Carolinas look awful lonely over on the far left side of the document.


Once I positioned the layers, annotated the signatures and faces, and plotted out the hometowns, I realized that I had painted myself into an interesting little corner from an information design standpoint – it was difficult to quickly move back and forth between the three main sections of the exhibit. In a sense, this is an inherent characteristic of deeply-zoomed interfaces. The ability to focus really closely on any one of the three visual grids – which is what makes it possible to mix them all together into a single environment – has the side effect of making the other two increasingly distant and inaccessible, more and more so the further down you go. For example, once you’ve focused in on Thomas Jefferson’s face in the Trumbull painting, it’s quite a chore to manually navigate to the corresponding signature on the document – you have to zoom back, pan the map up towards the scanned image, find the signature (often no easy task), and then zoom back down.

This is especially annoying in this case, since this potential for comparison is a big part of what’s interesting about the content. What I really wanted, I realized, was to be able to switch back and forth in a really simple, fluid way among the different instantiations of any individual person on the document, painting, and map – I wanted to be able to flip through them like a slideshow, to round up all the little partial representations of the person and hold them side-by-side in my head. So, as an experiment, I whipped up a little batch of custom UI components (built with the excellent React library, which fits in like a dream with Neatline’s Javascript API) that provide a “toggling” interface for each individual signer, and the exhibit as a whole.

By default, when you hit the page, three top-level buttons in the right corner of the window link to the three main sections of the exhibit – the hometowns plotted along the eastern seaboard, the declaration over the midwest, and the painting over the southeast. In addition to the three individual buttons, there’s also a little “rotate” button that automatically cycles through the three regions, which makes it easy to toggle around without looking away from the map to move the cursor:
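The cycling behind the “rotate” button can be sketched like this (the region names and their ordering are just illustrative):

```javascript
// The three top-level regions of the exhibit, in cycling order.
var regions = ['map', 'document', 'painting'];

// Step to the next region, wrapping back to the start at the end.
function nextRegion(current) {
  var i = regions.indexOf(current);
  return regions[(i + 1) % regions.length];
}
```

Each press of the button (or tick of an auto-rotate timer) just calls `nextRegion` and asks the exhibit to focus on the result.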


More useful, though, it’s possible to bind any of the individual signers to the widget by clicking on the annotations. For example, if I click on Thomas Jefferson’s face in the painting, the name locks into place next to the buttons, which now point to the representations of that specific person in the exhibit – “Text” links to Jefferson’s signature, “Painting” to his face, and “Map” to Monticello:


Once you’ve activated one of the signers, click on the name to show an overlay with a picture and biography, pulled from a public domain book published by the National Park Service called Signers of the Declaration:


This is pretty straightforward on the painting and document, where there’s always a one-to-one correspondence between an annotation and one of the signers. Things get more complicated on the map, though, where it’s possible for a single location to be associated with more than one signer. Philadelphia, for example, was home to Robert Morris, Benjamin Rush, Benjamin Franklin, John Morton, and George Clymer, so I had to write a little widget to make it possible to hone in on just one of the five after clicking the dot:


Last but not least, each sentence in the document itself is annotated and wired up with the corresponding text transcription on the left – click on the image to scroll the text, or click on the text to focus the image:


Happy fourth!

NeatlineText – Connect Neatline exhibits with documents

[Cross-posted from scholarslab.org]

Download the plugin


Today we’re pleased to announce the first public release of NeatlineText, which makes it possible to create interactive, Neatline-enhanced editions of text documents – literary and historical texts, articles, book chapters, dissertations, blog posts, etc. – by connecting individual paragraphs, sentences, and words with objects in Neatline exhibits. Once the associations are established, the plugin wires up two-way linkages in the user interface between the highlighted sections in the text and the imagery in the exhibit. Click on the text, and the exhibit focuses around the corresponding location or annotation. Or, click on the map, and the text scrolls to show the corresponding sections in the text.

We’ve been using some version of this code in internal projects here at the lab for almost two years, and it’s long overdue for a public release. The rationale for NeatlineText is pretty simple – again and again, we’ve found that Neatline projects often go hand-in-hand with some kind of regular text narrative that sets the stage, describes the goals of the project, or lays out an expository thesis that would be hard to weave into the more visual, free-form world of the Neatline exhibit proper. This is an awesome combination – tools like Neatline are really good at displaying spatial, visual, dimensional, diagrammatic information, but nothing beats plain old text when you need to develop a nuanced, closely-argued narrative or interpretation.

The difficulty, though, is that it’s hard to combine the two in a way that doesn’t favor one over the other. We’ve taken quite a few whacks at this problem over the course of the last few years. One option is to present the text narrative as a kind of “front page” of the project that links out to the interactive environment. But this tends to make the Neatline exhibit feel like an add-on, something grafted onto the project as an afterthought. And this can easily become a self-fulfilling prophecy – when you have to click back and forth between different web pages to read the text and explore the exhibit, you tend to write the text as a more or less standalone piece of writing, instead of weaving the narrative in with the conceptual landscape of the exhibit.

Another option is to chop the prose narrative up into little chunks and build it directly into the exhibit – like the numbered waypoints we used in the Hotchkiss projects back in 2012, each waypoint containing a small portion of a longer interpretive essay. But this tends to err in the other direction, dissolving the text into the visual organization of the exhibit instead of presenting it as a first-class piece of content.

NeatlineText tries to solve the problem by just plunking the two next to each other and making it easy for the reader (and the writer!) to move back and forth between the two. For example, NeatlineText powers the interactions between the text and imagery in these two exhibits of NASA photographs from the 1960s:



(Yes, I know – I’m a space nerd.) NeatlineText is also great for creating interactive editions of primary texts. An earlier version of this code powers the Mapping the Catalog of Ships project by Jenny Strauss Clay, Courtney Evans, and Ben Jasnow (winner of the Fortier Prize at DH2013!), which connects the contingents in the Greek army mentioned in Book 2 of the Iliad with locations on the map:


And NeatlineText was also used in this interactive edition of the first draft of the Gettysburg Address:


Anyway, grab the code from the Omeka add-ons repository and check out the documentation for step-by-step instructions about how to get up and running. And, as always, be sure to file a ticket if you run into problems!

FedoraConnector 2.0


[Cross-posted from scholarslab.org]

Hot on the heels of yesterday’s update to the SolrSearch plugin, today we’re happy to announce version 2.0 of the FedoraConnector plugin, which makes it possible to link items in Omeka with objects in Fedora Commons repositories! The workflow is simple – just register the location of one or more installations of Fedora, and then individual items in the Omeka collection can be associated with a Fedora PID. Once the link is established, any combination of the datastreams associated with the PID can be selected for import. For each of the datastreams, FedoraConnector proceeds in one of two ways:

  • If the datastream contains metadata (e.g., a Dublin Core record), the plugin will check to see if it can find an “importer” that knows how to read the metadata format. Out of the box, the plugin can import Dublin Core and MODS, but also includes a really simple API that makes it easy to add in new importers for other metadata standards. If an importer is found for the datastream, FedoraConnector just copies all of the metadata into the item, mapping the content into the Dublin Core elements according to the rules defined in the importer. This creates a “physical” copy of the metadata that isn’t linked to the source object in Fedora – changes in Omeka aren’t pushed back upstream into Fedora, and changes in Fedora don’t cascade down into Omeka.

  • If the datastream delivers content (e.g., an image), the plugin will check to see if it can find a “renderer” that knows how to display the content. Like the importers, the renderers are structured as an extensible API that ships with a couple of sensible defaults – out of the box, the plugin can display regular images (JPEGs, TIFs, PNGs, etc.) and JPEG2000 images. If a renderer exists for the content type in question, the plugin will display the content directly from Fedora. So, for example, if the datastream is a JPEG image, the plugin will add markup like this to the item show page:

    Unlike the metadata datastreams, then, which are copied from Fedora, content datastreams pipe in data from Fedora on-the-fly, meaning that a change in Fedora will immediately propagate out to Omeka.
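As a rough illustration of that markup (host, port, PID, and datastream ID are all placeholders here, not values from the plugin), the image source points directly at Fedora’s datastream content URL:

```html
<!-- Placeholder values throughout; the src resolves to the live datastream. -->
<img src="http://localhost:8080/fedora/objects/example:123/datastreams/IMAGE/content" alt="example:123" />
```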

(See the image below for a sense of what the final result might look like – in this case, displaying an image from the Holsinger collection at UVa, with both a metadata and content datastream selected.)

For now, FedoraConnector is a pretty simple plugin. We’ve gone back and forth over the course of the last couple years about how to model the interaction between Omeka and Fedora. Should it just be a “pull” relationship (Fedora -> Omeka), or also a “push” relationship (Omeka -> Fedora)? Should the imported content in Omeka stay completely synchronized with Fedora, or should it be allowed to diverge for presentational purposes? These are tricky questions. Implementations of Fedora – and the workflows that intersect with it – can vary pretty widely from one institution to the next. The current set of features was built in response to specific needs here at UVa, but we’ve been talking recently with folks at a couple of other institutions who are interested in experimenting with variations on the same basic theme.

So, to that end, if you use Fedora and Omeka and are interested in wiring them together – we’d love to hear from you! Specifically, how exactly do you use Fedora, and what type of relationship between the two services would be most useful? With a more complete picture of what would be useful, I suspect that a handful of pretty surgical interventions would be enough to accommodate most use patterns. In the meantime, be sure to file a ticket on the issue tracker if you find bugs or think of other features that would be useful.


SolrSearch 2.0


[Cross-posted from scholarslab.org]

Today we’re pleased to announce version 2.0 of the SolrSearch plugin for Omeka! SolrSearch replaces the default search interface in Omeka with one powered by Solr, a blazing-fast search engine that supports advanced features like hit highlighting and faceting. In most cases, Omeka’s built-in searching capabilities work great, but there are a couple of situations where it might make sense to take a look at Solr:

  • When you have a really large collection – many tens or hundreds of thousands of items – and want something that scales a bit better than the default solution.

  • When your metadata contains a lot of text content and you want to take advantage of Solr’s hit highlighting functionality, which makes it possible to display a preview snippet from each of the matching records.

  • When your site makes heavy use of content taxonomies – collections, tags, item types, etc. – and you want to use Solr’s faceting capabilities, which make it possible for users to progressively narrow down search results by adding on filters that crop out records that don’t fall into certain categories. Stuff like – show me all items in “Collection 1”, tagged with “tag 2”, and so forth.

To use SolrSearch, you’ll need access to an installation of Solr 4. To make deployment easy, the plugin includes a preconfigured “core” template, which contains all the configuration files necessary to index content from Omeka. Once the plugin is installed, just copy-and-paste the core into your Solr home directory, fill out the configuration forms, and click the button to index your content in Solr.

Once everything’s up and running, SolrSearch will automatically intercept search queries that are entered into the regular Omeka “Search” box and redirect them to a custom interface, which exposes all the bells and whistles provided by Solr. Here’s what the end result looks like in the “Seasons” theme, querying against a test collection that contains the last few posts from this blog, which include lots of exciting Ivanhoe-related news:


Out of the box, SolrSearch knows how to index three types of content: (1) Omeka items, (2) pages created with the Simple Pages plugin, and (3) exhibits (and exhibit page content) created with the Exhibit Builder plugin. Since regular Omeka items are the most common (and structurally complex) type of content, the plugin includes a point-and-click interface that makes it easy to configure exactly how the items are stored in Solr – which elements are indexed, and which elements should be used as facets:


Meanwhile, if you have content housed in database tables controlled by other plugins that you want to vacuum up into the index, SolrSearch ships with an “addons” system (devised by my brilliant partner in crime Eric Rochester), which makes it possible to tell SolrSearch how to index other types of content just by adding little JSON documents that describe the schema. For example, registering Simple Pages is as simple as this:
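As a sketch of what such an addon document might look like (the field names here are illustrative guesses, not the plugin’s exact schema – check the SolrSearch documentation for the real format), the JSON maps a name to the table and columns to index:

```json
{
  "simple-pages": {
    "table": "SimplePagesPage",
    "id": "id",
    "title": "title",
    "public": "is_published"
  }
}
```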

And the system even scales up to handle more complicated data models, like the parent-child relationship between “pages” and “page blocks” in ExhibitBuilder, or between “exhibits” and “records” in Neatline.

Anyhow, grab the built package from the Omeka addons repository or clone the repository from GitHub. As always, if you find bugs or think of useful features, be sure to file a ticket on the issue tracker!

Project Gemini over Baja California

Launch the Exhibit


A couple weeks ago, somewhere in the middle of a long session of free-association link hopping on Wikipedia, I stumbled into a cluster of articles about Project Gemini, NASA’s second manned spaceflight program. Gemini, I quickly discovered, produced some spectacular photographs – many of them pointed downward towards the surface of the earth, capturing a dizzying opposition between the intelligible scale of the foreground (the 20-foot capsule, 100-foot tethering cords, 6-foot human bodies floating in space) and the completely unintelligible scale of the massive geographic entities below (peninsulas, continents, oceans).

As I started to click through the pictures, I found myself reflexively alt-tabbing back and forth between Chrome and Google Earth to compare them with the modern satellite imagery of the same geographic locations. Which made me think – why not try to actually combine the two into a single environment? Over the course of the next few days, I sketched out a little Neatline exhibit that plasters two photographs of Baja California Sur – taken about a year apart on Gemini 5 (August 1965) and Gemini 11 (September 1966) – around the Mapbox satellite imagery of the peninsula. Instead of lining up the coastlines to make the images overlay accurately on top of the satellite tiles, I just plunked them down on the map off to the side at a scale and orientation that makes it easy to compare the two. (We’ve played around with this before, and I like to think of it as faux – or just especially humanistic! – georectification.)


Then, using the drawing tools in Neatline, I blocked in some simple annotations that visually wire up the two sets of imagery – outlines around the four islands along the eastern coast of the peninsula, and arrows between the different instantiations of La Paz and San José del Cabo. I also wanted to find a way to visually formalize the difference in perspective between the Gemini photographs (oblique, wide-angle, deliberate) and the Mapbox tiles (flat-on, uniform). Using Illustrator, I created a long, ruler-like vector shape to label the ~200-mile distance between La Paz and the approximate position of the Gemini 5 capsule when the picture was taken, and then used the “Perspective Grid” tool to render the shape in three dimensions and place it on top of the Gemini photograph, as if the same shape were physically positioned in front of the lens. In Illustrator:


And placed in the Neatline exhibit, first to match the shallow angle of the Gemini shot:


And then to match the perpendicular angle of the Mapbox tiles:


I was also fascinated by the surreal opposition in scale between the Agena Target Vehicle (an unmanned spacecraft used for docking practice in orbit) and Isla San José, which sits serenely in the dark blue of the Gulf of California hundreds of miles below, but occupies almost exactly the same amount of space in the photograph as the 7-foot boom antenna on the Agena. In the space between the two, I dragged out two little shapes that map the sizes of things onto recognizable objects – a 6-foot person in the foreground, Manhattan in the background:


Perspective and Perspectivelessness

These images fascinate me because they roll together two types of imagery – both ubiquitous on the web – that are almost exact opposites of one another. On the one hand, you have regular pictures, taken by regular (non-astronaut) people. These photographs freeze into place one particular perspective on things. In a literal sense, the world recedes from the lens in three dimensions – walls, buildings, bridges, mountains, valleys, clouds. Close things are big, distant things are small. Some are in focus, others aren’t. And unlike other forms of art like painting, poetry, sculpture, or music, which can claim (overconfidently, maybe) to graft completely new material onto the world, photographs innovate at the level of stance and viewpoint, the newness of the perspective on things that already exist. It’s less about what they add, more about what they subtract in order to whittle the world down to one particular frame. Why that angle? Why that moment? Why that, and not anything else?

On the other hand, you have spatial photography – the satellite imagery used in Google Maps, Mapbox, Bing Maps, etc. – which is almost completely perspectiveless, in just about every sense of the word. The world becomes a flat, depthless plane, photographed from a distance at a perpendicular angle. Instead of trying to find interesting new ways to crop down the world, spatial tiles try to be comprehensive and standardized. Instead of showing one thing, in one way, at one moment in time, they try to show everything, in the exact same way, at the exact same moment – now. The companies that source and assemble the satellite imagery race to keep it as current as possible, right at the threshold of the present. Last year, Google announced that its satellite imagery had been purged of all clouds. No doubt this makes it more functional, but it also does away with those wispy, bright-white threads of cloud that used to hang over the rainforests in Peru and Brazil, which were lovely. What’s gained, of course, is the intoxicating grandeur of it all, the ability to hold in a single view a photograph of the entire world – which, if nothing else, is some sort of crazy affirmation of human willpower. I always imagine Whitman, scratching out “Salut au Monde”, panning around Google Maps for inspiration.

Photographs taken by astronauts, though, collapse the distinction in interesting ways. They’re literally “satellite” photography, but they’re also drenched in subjectivity and historical stance. The oceans and continents spread out hundreds of miles below, just like on Google Maps or Mapbox – but the pictures were snapped with regular cameras by the hands of actual people, stuffed into little canisters falling around the world at thousands of miles an hour, which were only up there in the first place due to a crazy mix of socio-political ambitions and anxieties that were deeply characteristic of that particular moment in history. The Gemini imagery is haloed with little bits of space-race technology that instantly historicize the frame – the nose cone of the capsule blocks out a huge swath of desert and ocean, the Agena vehicle hangs just a couple of hundred feet from the camera, tethered by a slight, ribbon-like cord that twists for hundreds of miles across the Gulf of California.

A Neatline-ified Gettysburg Address

Launch the Exhibit


This is a project that I’ve been hacking away at for some time, but only found the time (and motivation) to get it polished up and out the door over the weekend – a digital edition of the “Nicolay copy” of the Gettysburg Address, with each of the ~250 words in the manuscript individually traced out in Neatline and wired up with a plain-text transcription of the text on the right side of the screen. I’ve tinkered around with similar interfaces in the past, but this time I wanted to play around with different approaches to formalizing the connection between the digitally-typeset words in the text and the handwritten words in the manuscript. Your eyes tend to dart back and forth between the image and the text, and it’s easy to lose your place – how to reduce that cognitive friction?

To chip away at this, I wrote a little sub-plugin for Neatline called WordLines, which automatically overlays a little visual guideline (under the hood, a d3-wrapped SVG element) on top of the page that connects each pair of words in the two viewports when the cursor hovers on either of the instantiations. So, when the mouse passes over words in the transcription, lines are automatically drawn to the corresponding locations on the image; and vice versa. From a technical standpoint, this turns out to be quite easy – just get the pixel offsets for the element in the transcription and the vector annotation on the map (for the latter, OpenLayers does the heavy lifting with helpers like getViewPortPxFromLonLat, which maps spatial coordinates to document-space pixel pairs), and then draw a line connecting the two points. The one hitch, though, is that this involves placing a large SVG element directly on top of the page content, which, by default, will cover all of the underlying elements (shapes on the map, words in the text) and block them from receiving the cursor events that drive the rest of the UI – including, very problematically, the mouseleave event that garbage-collects old lines and prevents them from getting stuck on the screen.
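Stripped of the event-handling wrinkle, the drawing step itself is tiny – compute two pixel pairs and connect them. A rough sketch, with plain objects standing in for the DOM measurements and the OpenLayers conversion:

```javascript
// Sketch of the WordLines hookup, with the library calls stubbed out.
// In the real plugin, `textOffset` would come from the transcription
// element's bounding box, and `mapOffset` from OpenLayers'
// map.getViewPortPxFromLonLat(), which converts spatial coordinates
// into pixel coordinates inside the viewport.

// Compute the attributes for the SVG <line> that connects a word in
// the transcription to its annotation on the map.
function wordLineAttrs(textOffset, mapOffset) {
  return {
    x1: textOffset.x, y1: textOffset.y,
    x2: mapOffset.x,  y2: mapOffset.y
  };
}

// With d3, drawing the line on hover would then look roughly like:
//   d3.select('svg.wordlines').append('line')
//     .attr(wordLineAttrs(textOffset, mapOffset));
// ...and a mouseleave handler would remove it again.
```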


The work-around is to put pointer-events: none; on the SVG container, which causes the browser to treat it as a purely visual veneer over the page – cursor events drop through to the underlying content elements, and everything else behaves normally. This is just barely and only very recently cross-browser, but I’m not sure if there’s actually any other way to accomplish this, given the full set of constraints.
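Concretely, the rule is just a one-liner on the SVG container (the class name here is made up):

```css
/* Let cursor events fall through to the map and text underneath;
   the SVG layer becomes a purely visual veneer. */
svg.wordlines {
  pointer-events: none;
}
```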

Modeling intuitions about scale

Originally, I had planned to just leave it at that, but, as is almost always the case with these projects, I ended up learning lots of interesting things along the way, and I ended up going back and adding in another set of annotations that make note of some of the more historically noteworthy aspects of the manuscript. Namely, I was interested in the different types of paper used for the two different pages (Lincoln probably wrote the first page in Washington before departing, the second page after arriving in Gettysburg) and the matching fold creases on the pages, which some historians have pointed to as evidence that the Nicolay copy was perhaps the actual “reading copy” that Lincoln used when delivering the speech, since eyewitness accounts describe Lincoln pulling a folded piece of paper out of his coat pocket.

The other candidate is the Hay draft, which includes lots of changes and corrections in Lincoln’s hand, giving it the appearance of a working draft that was prepared just before the event. One problem with the Hay draft, though, is its size – it’s written on larger paper and has just a single fold down the center, which would seem to make it an unlikely thing to tuck into a coat pocket. When I read about this, I realized that I had paid almost no attention to the physical size of the manuscript. On the screen, it’s either extremely small or almost infinitely large – a tiny speck when you zoom far back, and an endless plane of beige-and-black when you zoom in. But, in this case, size turns out to be of great historical significance – the Nicolay copy is smaller than the Hay copy, especially when folded along the set of matching creases clearly visible on the pages.

So, how big is it? This involved a bit of guesswork. The resource page for the manuscript on the Library of Congress website doesn’t include dimensions, and direct Google searches didn’t turn up an easy answer, so I started poking around the internet to see if I could find other Lincoln manuscripts written on the “Executive Stationery” used for the first page. I rooted up a couple of documents for sale by rare book sellers, and in both cases the dimensions are listed at about 5 inches in width and 7-8 inches in height, meaning that the Nicolay copy – assuming the stationery was more or less standardized – would have folded down to a roughly 5 x 2.5-inch rectangle, which seems reasonably pocket-sized. (Again, this is amateur historical conjecture – if I’m wrong, please let me know!)

I sketched out little ruler annotations labeling the width of the page and the height of the fold segment, but, zooming around the exhibit, I realized that I still didn’t have any intuitive sense of the size of the thing. Raw numerical measurements, even when you’re beat across the head with them, become surprisingly abstract in the a-physical, point-of-reference-less netherlands of deeply-zooming digital landscapes. I dug out a ruler and zoomed the exhibit back until the first page occupied five real-world inches, and then held my hand up to the screen, imagining the sheet of paper in my hand. And then I thought – why not just bake some kind of visual reference directly into the exhibit? I hunted down a CC-licensed SVG illustration of a handprint, and, using the size of my own hand as a reference, used Neatline’s import-SVG feature to position the outline in the whitespace to the right of the first page of the manuscript:


Lazy-loaded Backbone models

Imagine you’re building some kind of travel application and you have a Backbone model called City, with fields like name, latitude, longitude, and population, which are just run-of-the-mill, static values tucked away in a database row off on the server. Then, you realize that you also want to display the current weather forecast for each city – you need, in other words, to do something like:


Which is rather more difficult. The weather changes constantly – the only way to get it is to query against an API endpoint on the server, which, let’s say, dials out to a third-party service that returns a little 2-3 sentence summary forecast for a given location. It’s not the kind of thing that you can bake into the database – it has to be fetched on-the-fly, at runtime. Obviously, this is the exception, not the rule – in almost all cases, Backbone models will have a one-to-one correspondence with rows or documents in a database. Once in a while, though, there are situations in which models on the front end need to have “dynamic” or “compiled” fields that have to be filled in by AJAX requests in real-time, a pattern for which Backbone doesn’t really provide much guidance.
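To make the gap concrete, here’s a bare-bones stand-in for the model (plain JavaScript rather than actual Backbone, with made-up field values) – the static fields come straight off the attributes hash, but nothing a synchronous get can do will conjure up the forecast:

```javascript
// A minimal stand-in for a Backbone-style model: get() just reads from
// an internal attributes hash that was hydrated from a database row.
class City {
  constructor(attributes) {
    this.attributes = Object.assign({}, attributes);
  }
  get(key) {
    return this.attributes[key];
  }
}

const city = new City({ name: 'San Francisco', population: 830000 });

city.get('name');    // static - comes straight off the row
city.get('weather'); // undefined - only an API round-trip can fill this in
```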

We ran into this problem recently with Neatline, in the process of reworking the interaction between records in Neatline exhibits and items in the underlying Omeka collection. In Neatline, a “record” is the basic unit of content in an exhibit – a piece of vector geometry on an interactive map, an image thumbnail, a plotting on a timeline, a clickable waypoint, a georectified historical map, etc. Records can either be free-floating, unaffiliated little atoms of content, confined to a single exhibit (usually cosmetic elements like arrows or text captions that don’t need to have any kind of formal metadata description), or they can be linked back to items in the Omeka collection. Once a record has been associated with an item, it becomes a kind of proxy or alias for the item, a visual avatar responsible for presenting the item’s metadata in the exhibit. Think of it as an elaborate hyperlink that gives the Omeka item a spatial or temporal anchoring inside of a specific environment – one Omeka item could have ten customized instantiations inside of ten different Neatline exhibits.

The Neatline record, then, needs to be able to access the compiled metadata output of its parent item in the same way that it would access any of its own, locally-defined attributes – a call like record.get('item') needs to work in basically the same way as record.get('title'), where title is just a standard-issue VARCHAR column on the neatline_records table. The difficulty, of course, is that there is no item field, at least not in the same way. Items are a hodgepodge of differently-structured components – individual metadata attributes (all stored separately as entity-attribute-value triples in the database), file attachments, images, and custom content threaded in by plugins. When it comes time to display an item, Omeka gathers up the pieces and glues them together into a final chunk of HTML, a process that, depending on where you draw the boundaries, involves many hundreds or thousands of lines of PHP. The item is an artifact generated by the application code, a customizable “view” on a bucket of related bits of information – not something that can just be selected straightforwardly from the database. This flexibility is what makes Omeka so powerful – it lets you represent almost any conceivable type of object.

But how to map all of it into a single Backbone model field?

Failed attempts

We’ve taken quite a few swings at this over the course of the last couple years. An obvious solution is just to fill in the item metadata at query-time on the server, right before pushing the results back to the client. So, query once to load the Neatline records and then walk through the results, calling some kind of loadItem() method that would render the item template and set the compiled HTML strings on the Neatline record objects. This is attractively simple, but it buckles under any kind of load – compiling the metadata output for each item involves touching the database at least once to gather up the element text triples (and often as many as three or four times, depending on the type of information requested by the templates), meaning that the Neatline API will generate at least one additional query for each record in the result set linked to an Omeka item. So, if a query matches 100 item-backed Neatline records, the MySQL server would get hammered with 101 queries, at minimum, and in practice more like 201 or 301, which slows things down pretty quickly.

In previous versions, we got around this by pre-compiling the item metadata and storing a copy of it directly inside the neatline_records table, thus making it immediately available at query-time. This was one of those good-on-paper ideas that dies by a thousand little cuts when it comes time to actually implement it. For starters, what happens if the user changes the template used to generate the metadata? All of the pre-compiled HTML snippets stored on the Neatline records suddenly become obsolete. This wasn’t a huge problem – you could always just click a button to re-import all of the item-backed records, which had the effect of re-rendering the templates. Still, an annoyance. More problematic was the whole set of obscure bugs that cropped up when trying to render the item templates inside of background processes, which Neatline uses when importing large collections of items into exhibits to keep the web request from timing out. For example, Omeka’s template rendering system depends on global variables (stuff like WEB_ROOT) that don’t get bootstrapped when code is run outside of the web container, which has the effect of mangling URLs in the templates, and so forth and so on.

Omeka as an API, not just a codebase

Then, a couple months ago, Jeremy had a great idea – why not wait until the item metadata is actually needed in the user interface, and then AJAX in the content from Omeka on an as-needed basis? Fundamentally, the insight here is to treat Omeka as an API, not just a codebase – circling back to the original example, the Neatline record becomes the City model, the item metadata snippets become the weather forecasts, and Omeka becomes the API that delivers them. Instead of trying to freeze away static snapshots of the item metadata, embrace the fact that the items are living resources that change over time (as metadata is updated, files added and removed, plugins installed and uninstalled) and turn Neatline into a client that pulls in final, presentational metadata produced by Omeka.

This turns out to be a really concise and low-code solution to the problem, but the implementation involves some interesting patterns to essentially trick Backbone models into supporting asynchronous getters – calls to the get method that need to spawn off an AJAX request to fetch the attribute value. First, though, I had to run the basic piping to get the data in place – I started by adding a little REST API endpoint on the server that emitted the metadata for a given item as a raw HTML string, and then added a simple method to the Neatline record Backbone model that pings the endpoint and sets the response on the model under an item key:
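In outline, the method looks something like this – a stand-in sketch rather than the actual Neatline source, with jQuery’s $.ajax stubbed out as a plain callback and the endpoint URL shown for illustration:

```javascript
// Stand-in sketch of the fetchItem pattern. `ajax` stands in for
// jQuery's $.ajax; in the real plugin this would be a normal request.
function makeRecord(attributes) {
  return {
    attributes: Object.assign({}, attributes),
    get(key) { return this.attributes[key]; },
    // On a real Backbone model, set() would also fire `change:item`.
    set(key, value) { this.attributes[key] = value; },
    fetchItem(ajax) {
      ajax({
        // Illustrative endpoint, in the spirit of /neatline/items/:id.
        url: '/neatline/items/' + this.get('item_id'),
        // Cache the compiled HTML on the model under the `item` key.
        success: (html) => this.set('item', html)
      });
    }
  };
}
```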

Once fetchItem has been called, the item metadata is fully hydrated on the model and ready to be accessed with regular get('item') calls. The problem, though, is that the extra step – needing to call fetchItem – breaks Backbone’s conventions for getting/setting data. Who’s responsible for calling fetchItem, and when? The model can’t really do it automatically in the constructor, which, for a collection of 100 records, would hammer the server with 100 simultaneous AJAX requests. But passing the buck to the calling code breaks things at various points downstream. For example, Neatline uses a library called Rivets to automatically bind Backbone models to Underscore templates, and Rivets assumes that the model attributes are all accessible from the get-go by way of the default get method. So, once again, record.get('item') needs to behave in exactly the same way as record.get('title'), even though the item value is actually being AJAXed in from Omeka, instead of just being read off of the internal attributes object.

Backbone.Mutators to the rescue

To do this, I used a library called Backbone.Mutators, which makes it possible to define custom getters and setters for individual attributes on a Backbone model – by folding a small bit of custom logic into a getter for the item field, it’s possible to automate the call to fetchItem in a way that preserves the basic get/set pattern:
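Here’s the shape of that pattern as a self-contained stand-in – the real code attaches the same logic to Backbone.Mutators’ mutators hash, but the guard logic is identical:

```javascript
// The lazy-getter pattern in miniature (vanilla stand-in, not the real
// Neatline source). First access triggers the fetch and returns
// undefined; once the response lands, every later get returns the
// cached metadata, and the fetch never fires again.
function makeLazyRecord(fetchItem) {
  return {
    attributes: {},
    fetching: false,
    getItem() {
      if (this.attributes.item === undefined && !this.fetching) {
        this.fetching = true; // guard: only ever fetch once
        fetchItem((html) => { this.attributes.item = html; });
      }
      return this.attributes.item;
    }
  };
}
```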

Basically, the getter just checks to see if the item key on the internal attributes object is undefined (which is only the case the first time the key is accessed, before any data has been loaded), and, if so, calls fetchItem, which fires off the request for the metadata. A few milliseconds later, the request comes back and the response gets set on the model under the item key, which, in turn, triggers a change:item event on the model. If the record is bound to a template, this will cause Rivets to automatically re-get the item key, which calls the custom getter again. This time, though, the newly-loaded metadata set by the first call to fetchItem is returned, but fetchItem isn’t called (ever) again, since the item metadata is already set.

In practice, this means that there will be a short blip when a record is first bound to a template while the initial request is in flight, but, once it finishes, any subscriptions to the item key will automatically synchronize, and the item metadata will be available immediately for the rest of the lifecycle of the model instance.

Conclusion: An omeka-html action context or API format?

I can imagine a number of other situations in which it would be useful to use Omeka as an API that emits the final, synthesized HTML metadata output for Omeka items – the item “show” pages, essentially, but just the item metadata HTML snippets, without the surrounding HTML document and theme markup. Neatline does this by grafting on a simple little Neatline-controlled API endpoint (essentially, /neatline/items/5) that renders an item.php partial template and spits back the HTML. I wonder, though, if it could be useful to actually bake some kind of omeka-html action context or API format into the Omeka core – something like /api/items/5?format=html.

The JSON API added in version 2.1 is ideal when you need to pluck out specific little chunks of data from the items for programmatic use, but less useful when you just want to show the item metadata. The API consumer could always pass the JSON representation of an item through some kind of templating system that would render it as HTML, but that just duplicates the same logic already implemented (no doubt much more robustly) in the Omeka core.

Neatline release-apalooza: Neatline 2.2.0, Neatscape, Astrolabe

[Cross-posted from scholarslab.org]

Today we’re excited to announce the release of Neatline 2.2.0! This is a big update that ships out a cluster of features and fixes that address a couple of rough spots identified by users over the course of the last couple months. 2.2.0 focuses on improvements in two areas – first, we’ve overhauled the workflows that connect Neatline records with Omeka items to make them more intuitive, flexible, and feature-rich, with the goal of making the overall integration between the two environments feel more seamless and low-friction. Second, we’ve added a system of interactive documentation to the editor that builds reference materials and tutorials directly into the interface, which should make it easier for new users to find their way around.

We’re also pushing out maintenance releases of the two extensions, NeatlineSimile and NeatlineWaypoints, which add compatibility for Neatline 2.2 and deal with a couple of minor bugs. As always, grab the code from the Omeka addons repository:

Neatline 2.2.0 | NeatlineSimile 2.0.1 | NeatlineWaypoints 2.0.2

What’s more, we’re also making release candidates available for two new Omeka themes designed to showcase Neatline exhibits: Astrolabe and Neatscape. Loyal readers may recall that a while back, we ran a theme naming contest, and we’re finally making good on our word! These are just release candidates, but we wanted to get them out in the open for testing and feedback before cutting off stable releases. Give them a spin, and be sure to file a ticket on the respective issue trackers (Astrolabe and Neatscape) if you find quirks or need new features.

Some highlights in Neatline 2.2:

  • Adds a new interface for linking Neatline records to Omeka items that makes it possible to browse the entire collection of items, run full-text searches, and instantaneously preview the Omeka content (metadata, images, etc.) as it will appear in the public Neatline exhibit.


  • Frees up the “Title” and “Body” fields for modification on Neatline records linked to Omeka items. Previously, these fields were automatically populated with the item content imported from Omeka, making it impossible to add custom information not contained in the Omeka metadata. Now, Neatline leaves these fields open for editing and displays them above the content synced in from Omeka, making it possible (though not required) to add exhibit-specific headings and text descriptions for imported items.
  • Makes it possible to import raw data from the Dublin Core “Coverage” field. When Omeka items are imported into Neatline exhibits, existing values in the Dublin Core “Coverage” field (either KML or WKT strings) are now automatically imported into Neatline and displayed on the map. Previously, this only worked if the coverage on the item was created with the Neatline Features plugin. With this functionality in place, it’s much easier to bulk-import existing spatial data sets – use the CSV Import plugin to populate a collection of items, and then push the new items to a Neatline exhibit.

  • Adds interactive documentation to the editor that builds reference materials for each individual control directly into the interface. Now, the heading for each input is followed by a little “?” button that, when clicked, overlays a document with information about what the control does, how to use it, and how it interacts with other functionality. The goal is to make the editor effectively self-documenting, so that it’s unnecessary to find separate documentation and toggle back and forth between different tabs as you work.


Last semester was a busy one for Neatline – we were supporting twelve classes here at UVa that were using Neatline for research assignments, and had the good fortune to collaborate with a number of folks at Harvard, Stanford, Northeastern, Duke, Indiana, and elsewhere who were using Neatline or gearing up for upcoming projects in the new year. We’ve also got a couple of exciting ideas brewing here in the lab for new, Neatline-powered projects – keep an eye on this space over the course of the next couple months.

As always, don’t hesitate to file bug reports on the issue tracker, post questions to the forums, or contact us directly. Happy new year!

Neighborhoods of San Francisco

Launch the Exhibit


Built on the Stamen Toner layer.

Back in October, about a month after moving from Scholars’ Lab HQ in Virginia out to Menlo Park (my partner started a PhD program at Stanford), I drove up the peninsula to San Francisco on a Saturday morning and set out on a long, rambling, 10-mile trek along the northwest shoulder of the city. It was a fantastic day – I walked west through Golden Gate Park, north along the Richmond beach, past the Cliff House, into the fog over Lincoln Park, through the mansions in Sea Cliff, past the abandoned artillery nests on the coast of the Presidio, and finally out onto the Golden Gate bridge. From there, I headed south through the trails in the Presidio, into Richmond, and eventually back to where I started, near the top right corner of the park.

From the middle of the Golden Gate Bridge, you can look out to the east over a large swath of the city – the skyscrapers of the financial district, the new span of the Bay Bridge hanging over Treasure Island, Alcatraz, and the faded outline of the East Bay, the Berkeley campus a little smudge at the base of the ridgeline. But, scanning my eyes over the rest of the city, I realized that I had very little sense of what I was actually looking at. I could attach labels onto all the touristy landmarks, but I didn’t have any kind of intuitive mental geography of the place – the names of all the little hills and neighborhoods, what connects to what, how to string the pieces of the city together into workable routes and itineraries.

So, over the course of the next few weeks, I slowly cobbled together a Neatline exhibit that plots out each of the neighborhoods in the city – 87 of them, by my count, although it’s somewhat a matter of interpretation as to how they should be sliced and diced. Working mainly from this image as reference, I started by tracing rough outlines of the boundaries (using Neatline’s standard-issue “Draw Polygon” tool) on top of Stamen’s Toner layer. Then, once the borders were in place, I used Neatline’s SVG import tool to place vector-geometry text labels on top of each of the individual neighborhoods, inspired by other spatial-wordcloud experiments like this and this.

Adventures in geospatial typesetting

This was great fun, and, interestingly, it ended up overlapping in unexpected ways with the interactive typesetting projects that I was playing with earlier in the semester. The process of positioning the labels becomes a kind of textual jigsaw puzzle, a game of trying to wrangle the raw, geometric instantiations of words into a coherent organizational scheme – except, this time, against the backdrop of actual geospatial coordinates and locations, not the abstract, featureless voids of the poetry experiments. Often, this is pretty straightforward – Noe Valley and the Inner Mission, for example, just get tagged as such:


In other places, though, the names of the neighborhoods overlap with one another in ways that make it possible to find interesting “economies” in how the words can be laid out on the map – when adjacent neighborhoods share the same words, it’s sometimes possible to essentially atomize the names into their component parts, and then rebuild them according to their own spatial logic, in a sense, by visually stringing together the pieces on the map. Take Richmond, for example, which is divided into three side-by-side segments: Outer Richmond, Central Richmond, and Inner Richmond. Instead of cluttering things up by repeating “Richmond” for each of the three sections, I just dragged out a single, all-encompassing “Richmond” across the entire width of the three sub-neighborhoods, and then blocked in the three modifiers as smaller words on top of the corresponding sections:


This worked much the same way for the Sunset and Parkside neighborhoods, which share the same cleanly partitioned spatial organization:


With the exception of the “middle” portion of Parkside, which is just the un-prefixed “Parkside,” meaning that the center piece doesn’t get a separate modifier:


In other cases, though, it gets much trickier, and much more interesting. Take the little cluster of neighborhoods at the southwest corner of the Presidio, the big park at the base of the Golden Gate Bridge. It’s a hodgepodge of repeated names, but in a much more scrambled and overlapping way – Presidio, Presidio Heights, Pacific Heights, Lower Pacific Heights. In this case, I had to take a bit more care to place the little black arrows in ways that didn’t connote incorrect linkages among the names. For example, the relationship between Presidio and Presidio Heights moves in just one direction – Presidio Heights (labelled with just “Heights” on the map) needs to “inherit” the “Presidio” modifier from the Presidio, but not the other way around, since the Presidio ceases to be the Presidio when “Heights” is tacked onto it:


To encode these relationships, I settled on a rule of thumb that the arrows would always be contained inside the neighborhoods that they modify. So, the arrow pointing from “Presidio” to “Heights” is fully contained inside of the geographic boundaries of Presidio Heights, in the sense that it pulls “Presidio” downward into the “Heights,” without also pushing “Heights” back in the other direction (which would effectively mislabel the Presidio). Likewise the link between “Pacific” and “Heights” is contained within the Pacific Heights outline, since otherwise Presidio Heights would be implicitly but incorrectly prefixed by “Pacific.”

Anyway, this is all completely useless as actual cartographic practice, but great fun as a kind of abstract étude of information design. It’s also incredibly useful as a mnemonic device – after untold hours in Palo Alto coffee shops sketching out all the outlines and positioning the labels, they’re all thoroughly burned into my mind. This is an interesting aspect of digital mapping projects that doesn’t get a lot of attention – we tend to focus on the final products, the public-facing visualizations and interactions (for good and obvious reasons), but much less on the process that goes into creating them, the personal acquisition of knowledge that takes place when you force yourself to spend dozens or hundreds of hours painstakingly positioning and repositioning things on maps. It gives you an incredible sense of cognitive intimacy with the space, the ability to load a little chunk of the world into working memory and reason about it in really complex and interesting ways.

“The Song of Wandering Aengus”

Launch the exhibit

One last little experiment with Neatline-powered interactive typesettings – this time with the ending of Yeats’ endlessly recitable “The Song of Wandering Aengus,” which, like many great poems, seems to somehow signify the entire world and nothing really in particular. I chose to use just the last three lines so that it would be possible to play with a more dramatic type of geometric nesting that, with more content, would quickly run up against the technical limitation that I mentioned in Wednesday’s post about “A Coat” – the vector geometry used to form the letters starts to degrade as the JavaScript environment runs out of decimal precision at around 40 levels of zoom, making it impossible to continue the downward movement beyond a certain point.

With just three lines, though, I was able to place each consecutive line completely inside of one of the dots above an “i” in the previous line. So, the “silver apples of the moon” are inscribed into the dot over the “i” in the “till” of “till time and times are done,” and the “golden apples of the sun” are then contained by the dot on the “i” in “silver.” Since the nested lines are placed literally inside the shaded boundaries of letters (as opposed to the empty spaces delineated by the “holes” in letters, as was the case with the first two experiments), the color of the text has to alternate in order to be legible against the changing color of the backdrop. What I didn’t expect (although in retrospect I guess it’s obvious) is that this shift in the color palette completely modulates the visual temperature of the whole environment – the backdrop swerves from bright white to solid black and then back to white over the course of the three lines, with the last transition mapping onto the night-to-day, moon-to-sun thematic movement in the final couplet.

Interestingly, this effect was almost thwarted by another unexpected quirk in the underlying technologies, although I managed to maneuver around it with a little hack in the exhibit theme. The problem was this – it turns out that OpenLayers will actually stop rendering an SVG geometry once the dimension of the viewport shrinks down below a certain ratio relative to the overall size of the shape. So, in this case, as the camera descends down into the black landscape of the dot over the first “i,” the black background supplied by the vector shape would suddenly drop away, as if the camera were falling through the surface, which of course had the effect of making the second-to-last line – typeset in white – suddenly invisible against the default-white background of the exhibit.

I thought this was a showstopper, but then I realized that I could programmatically “fake” the black background by directly flipping the CSS background color on the exhibit container. So, I just fired up the Javascript console, inspected the zoom attribute on the OpenLayers instance to get the zoom thresholds where the color changes needed to happen, and then dropped a little chunk of custom code into the exhibit theme that manifests the style change in response to the zoom events triggered by the map:
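The idea is easy to sketch. All the names and threshold values below are illustrative stand-ins, not the exact code from the exhibit theme:

```javascript
// Map a zoom level to the background color that the vector backdrop
// *would* supply if OpenLayers were still rendering it. The threshold
// values here are invented for illustration.
function backgroundForZoom(zoom) {
  if (zoom >= 14 && zoom < 28) return '#000'; // inside the dot over the first "i"
  return '#fff';                              // the default white backdrop
}

// Browser-only wiring (OpenLayers 2 API); `map` is the exhibit's
// OpenLayers.Map instance:
//
//   map.events.register('zoomend', null, function () {
//     document.getElementById('neatline-map').style.backgroundColor =
//       backgroundForZoom(map.getZoom());
//   });
```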

Weird, but effective. Whenever I work on projects like these I’m fascinated by the wrinkles that arise in the interaction between what you want to do and what the technology allows you to do. It’s very different from analog scholarship or art practice, where you have a more complete mastery over the field of play – you have a much more direct and unmediated control over the sound of your words, the shape of a line in a physical sketch, the pressure of a brush stroke. With digital objects, though, you’re building on top of almost unimaginably huge stacks of technology – the millions of man-hours of work that it took to create the vast ecosystem of Javascript and PHP libraries that Neatline depends on, the whole set of lower-level technologies that shape the underlying browser rendering engines and Javascript runtimes, which in turn are implemented in still lower-level languages, which eventually brush up against the dizzying rabbit hole of physical hardware engineering, which to my mind is about as close to magic as anything that people have produced.

That kind of deep, massively-distributed collaboration can definitely exist offscreen (eg., all of intellectual history, in a sense), but it’s more loosely coupled, and certainly less fragile – if I write an essay about Yeats, Yeats can’t break in the way that a code dependency literally can (will). At first this really bothered me, but I’ve come to peace with it – digital work is by definition a relinquishing of control, a give-and-take with the machine, a negotiation about what’s possible.

More fun with interactive typesetting: “A Coat,” by Yeats

Launch the exhibit

After spending the weekend tinkering around with an interactive typesetting of a couplet from Macbeth that tried to model reading as a process of zooming downward towards the end of the phrase, I became curious about experimenting with the opposite analogy – reading as an upward movement, a climb from the bottom (the beginning) to the top (the end), with each word circumscribing everything that comes before it. Really, this is just the flip side of the same coin. Meaning certainly flows “downhill” in a phrase – each word is informed by the previous word. But it also flows back “uphill” – each word casts new meaning onto what comes before it. What would it feel like to visualize that?

This time I decided to work with “A Coat,” Yeats’ wonderful little ode to simplicity (he renounces what he thinks to be the stylistic affectation of his work from the 1890s, and announces an intention to write “naked[ly]”). Originally, I planned to exactly invert the layout of the Macbeth couplet – start with the “I” at the bottom of the stack, and work upwards towards the end with “naked,” which, in the final frame, would geometrically contain each of the preceding words. I started to do this, but quickly ran into an interesting computational obstacle, which actually cropped up in the Shakespeare example as well.

Trip Kirkpatrick noticed the problem:

Indeed, the last two words – “way” and “comes” – are pixelated and malformed:



This wasn’t on purpose – I couldn’t figure out why it was happening when I was working on the exhibit, but decided against trying to fix it, half out of laziness and half because the visual effect had some satisfying affinities with the content of the line, especially when paired with the “descending” motif – a plunge down to hell, where order disintegrates, smooth lines are forbidden, etc. Anyway – after thinking it over, I’m pretty sure I know what’s going on, although I’m not certain. At the extremely deep zoom levels (far beyond anything you’d ever need for a regular map), I think that OpenLayers is actually losing the floating point precision that it needs to accurately plot the SVG paths for the letters – the computer is running out of decimal places, essentially.
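The arithmetic behind this is easy to demonstrate. JavaScript numbers are IEEE 754 doubles, which carry 52 fractional bits of precision – so once the offsets between coordinates fall roughly 2^52 times below the magnitude of the coordinates themselves, they round away entirely, which is more or less what happens to the letterform paths after ~40 doublings of magnification:

```javascript
// A coordinate on the map, and a letterform detail that sits 53
// binary orders of magnitude below it:
var coordinate = 1;
var offset = Math.pow(2, -53);

// The detail is unrepresentable at this magnitude -- adding it
// changes nothing:
coordinate + offset === coordinate; // true

// One doubling larger and it survives (2^-52 is Number.EPSILON,
// the gap between 1 and the next representable double):
coordinate + Math.pow(2, -52) === coordinate; // false
```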

I squeaked by with the Macbeth couplet, but this turned out to be a showstopper for the Yeats, since I was effectively trying to plot geometry about four and a half times deeper – 45 words versus 11. At that depth, the text becomes completely illegible, so I had to find a way to squeeze more content into fewer zoom levels. In the end, I managed to fit it all in by positioning each line into a geometric “notch” formed by the ascenders of two letters on the following line, which more or less preserves the philosophical rationale of the exhibit (each bit of text “envelops” the previous, if somewhat less completely than before) while limiting the zooming to just ten magnification contexts, one for each line.


To scan the poem, just zoom out by clicking on the “minus” button (or scrolling the mouse wheel or pinching the screen, if applicable), or click on the lines in the reference text at the top left to auto-focus on a particular part of the poem.

Experimental typesetting with Neatline and Shakespeare

Launch the exhibit

I’ve always been fascinated by the geometric structure of text – the fact that literature is encoded as physical, space-occupying symbols that can be described, measured, and manipulated just like any other two-dimensional shapes. There’s something counter-intuitive about this. When I look at a letter or a word, I see particles of sound and meaning, transcendental cognitive forms, not things that could be straightforwardly described as chunks of vector geometry. And there’s definitely a truth to this – I do think that texts have a kind of extra-physical cognitive essence that’s independent of their visual instantiations on pages or screens, and that it’s usually this common denominator that’s most interesting when we talk about literature with other people. And yet, in the context of any individual reading, the physical structure of documents – the set of pragmatic decisions that go into the design, layout, and formatting of text on the page – can have subtle but significant effects on how a text feels, on the imaginative dreamscape that surrounds it in your mind when you think back on it days or weeks or years after the fact.

This is definitely true, for example, at the level of something like font selection, which encodes a kind of “meaning metadata” about the text – where it comes from, who it’s intended for, how serious it imagines itself to be, etc. But I think it also holds at the level of more incidental, pseudo-random aspects of typesetting. For example, how does the vertical line traced out by the right margin of a paragraph or stanza color the reader’s affective reaction to the literary content? Does a jagged, unjustified border make the text feel more tumultuous and Dionysian? Would the same text, printed with a justified margin, become more emotionally controlled and orderly? I think of cases like Whitman’s long lines, which often have to be prematurely broken at the right edge of the page, resulting in a kind of clumpy, disorganized visual gestalt. I doubt that Whitman intended this, in the strong sense of the word (although I don’t know that he didn’t), but it has a kind of symbolic affinity with the poetry itself – sprawling, organic, uncontainable. When I think of Whitman, the image that appears in my mind consists largely of this. I wonder what it would be like to read an “unbroken” Leaves of Grass, printed on paper wide enough to accommodate the lines? Would it become more metaphysical, detached, ironical? Or would it be just the same?

Anyway, this kind of speculation fascinates me. I’ve been thinking a lot recently about experimental modes of digital typesetting that would be completely impossible on the analog page – new ways of presenting text on screens to evoke certain feelings or model intuitions about the structure of language. As a quick experiment, I decided to use Neatline to try to capture a certain aspect of my experience of reading Shakespeare. I’ve always been interested in the notion of language as a kind of progressive enveloping of words – they’re printed side-by-side on the page as equals, but the meaning of a syntagm grows out of the ordering of the tokens. Each exists in the context of the last and casts meaning onto the next; each word is contained, in a sense, inside the sum of its predecessors. I was taken by this idea when I read Saussure and company in college, because it seemed to map onto my own experience of reading poetry – the sensation of scanning a line always felt more like a descent than a left-to-right movement, a shift from the surface (the beginning) to the center (the end).

To play with this, I built a Neatline exhibit that typesets a single Shakespearean couplet in a kind of recursive, fractal, Prezi-like layout in which each successive word is “physically” embedded inside of one of the letters of the previous word. Reading the couplet literally becomes a matter of magnification, zooming, burrowing downwards towards the end of the syntagm. To scan the fragment, either pan and zoom the environment manually, as you would a regular, Google-like slippy map, or click on the words in the reference text at the bottom of the screen to automatically focus on individual slices of the text.

Neatline 2.1.0

[Cross-posted from scholarslab.org]

We’re pleased to announce the release of Neatline 2.1.0! This is a fairly large maintenance release that adds new features, patches up some minor bugs, and ships some improvements to the UI in the editing environment. Some of the highlights:

  • A “fullscreen” mode (re-added from the 1.x releases), which makes it possible to link to a page that just displays a Neatline exhibit in isolation, scaled to the size of the screen, without any of the regular Omeka site navigation. Among other things, this makes it much easier to embed a Neatline exhibit as an iframe on other websites (eg, a WordPress blog) – just set the src attribute on the iframe equal to the URL for the fullscreen exhibit view:

    Thanks coryduclos, colonusgroup, and martiniusDE for letting us know that this was a pain point.

  • A series of UI improvements to the editing environment that should make the exhibit-creation workflow a bit smoother. We bumped up the size of the “New Record” button, padded out the list of records, and made the “X” buttons used to close record forms a bit larger and easier-to-click. Also, in the record edit form, the “Save” and “Delete” buttons are now stuck into place at the bottom of the panel, meaning that you don’t have to scroll down to the bottom of the form every time you save. Much easier!


  • Fixes for a handful of small bugs, mostly cosmetic or involving uncommon edge cases. Notably, 2.1.0 fixes a problem that was causing item imports to fail when the Omeka installation was using the Amazon S3 storage adapter, as we do for our faculty-project installations here at UVa.
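To flesh out the fullscreen item above: the markup for an embed might look something like this (the URL and dimensions are placeholders – substitute the fullscreen URL of your own exhibit):

```html
<!-- Hypothetical embed: point the iframe at the exhibit's fullscreen view. -->
<iframe src="http://omeka.example.org/neatline/fullscreen/my-exhibit"
        width="960" height="600" frameborder="0"></iframe>
```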

Check out the release notes on GitHub for the full list of changes, and grab the new code from the Omeka add-ons repository. And, as always, be sure to send comments, concerns, bug reports, and feature requests in our direction.

In other Neatline-related news, be sure to check out Katherine Jentleson’s Neatline-enhanced essay “‘Not as rewarding as the North’: Holger Cahill’s Southern Folk Art Expedition,” which just won the Smithsonian’s Archives of American Art Graduate Research Essay Prize. I met Katherine at a workshop at Duke back in the spring, and it’s been a real pleasure to learn about how she’s using Neatline in her work!

Parsing BC dates with JavaScript

Last semester, while giving a workshop about Neatline at Beloit College in Wisconsin, Matthew Taylor, a professor in the Classics department, noticed a strange bug – Neatline was ignoring negative years, and parsing BC dates as AD dates. So, if you entered “-1000” for the “Start Date” field on a Neatline record, the timeline would display a dot at 1000 AD. I was surprised by this because Neatline doesn’t actually do any of its own date parsing – the code relies on the built-in Date object in JavaScript, which is implemented natively in the browser. Under the hood, when Neatline needs to work with a date, it just spins up a new Date object, passing in the raw string value entered into the record form:
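In sketch form, with hypothetical variable names (the exact result of parsing a string like “-1000” is implementation-defined, since it falls outside the ISO 8601 profile that engines are required to support):

```javascript
// Roughly what happens under the hood: the raw form value goes
// straight into the native Date constructor.
var rawValue = '-1000'; // what the user typed into "Start Date"
var startDate = new Date(rawValue);
// Depending on the engine, this comes back as 1000 AD, an Invalid
// Date, or something else -- but not as 1000 BC.
```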

Sure enough, though, this doesn’t work – Date just ignores the negative sign and spits back an AD date. And things get even funkier when you drift within 100 years of the year 0. For example, the year 80 BC parses to 1980 AD, bizarrely enough:
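The 1900-windowing at work here isn’t entirely mysterious – the ECMAScript spec actually mandates it for the numeric forms of the constructor, where years 0–99 are mapped into 1900–1999, and the legacy string parser appears to inherit the same behavior:

```javascript
// The spec-mandated two-digit windowing, visible in the numeric
// constructor: years 0-99 become 1900-1999.
var eighty = new Date(80, 0, 1);
eighty.getFullYear(); // 1980 -- not 80 AD, and certainly not 80 BC
```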

Obviously, this is a big problem if you need to work with ancient dates. At first, I was worried that this would be rather difficult to fix – if we were hitting up against bugs in the native implementation of the date parsing, it seemed likely that Neatline would have to get into the tricky business of manually picking apart the strings and putting together the date objects by hand. It always feels icky to redo functionality that’s nominally built into the programming environment. But I didn’t see any other option – the code was unambiguously broken as it stood, and in a really dramatic way for people working with ancient material.

So, grumbling at JavaScript, I started to sketch in the outlines of a bespoke date parser. Soon after starting, though, I was idly fiddling around with the Date object in the Chrome JavaScript terminal when I stumbled across an unexpected (and sort of inexplicable) solution to the problem. In reading through the documentation for the Date object over at MDN, I noticed that the constructor actually takes three different configurations of parameters. If you pass in a single integer, it treats it as a Unix timestamp; if you pass a single string, it treats it as a plain-text date string and tries to parse it into a machine-readable date (this was the process that appeared to be broken). But you can also pass three separate integers – a year, a month, and a day. Out of curiosity, I plugged in a negative integer for the year, and arbitrary values for the month and day:
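The experiment, roughly (the month and day values are arbitrary):

```javascript
// Year/month/day as separate integers -- no string parsing involved:
var bc = new Date(-1000, 0, 1); // January 1, 1000 BC, local time
bc.getFullYear(); // -1000
```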

Magically, this works. A promising start, but not a drop-in solution for the problem – in order to use this, Neatline would still have to manually extract each of the date parts from the plain-text date strings entered in the record forms (or break the dates into three parts at the level of the user interface and data model, which seemed like overkill). Then, though, I tried something else – working with the well-formed, BC date object produced with the year/month/day integer values, I tried casting it back to ISO8601 format with the toISOString method. This produced a date string with a negative sign and…

two leading zeros before the four-digit representation of the year. I had never seen this before. I immediately tried reversing the process and plugging the outputted ISO string back into the Date constructor:
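The round trip looks like this (sketched with Date.UTC so the ISO output doesn’t shift with the local timezone):

```javascript
// Build the BC date from integer parts, in UTC:
var bc = new Date(Date.UTC(-1000, 0, 1));

// Casting to ISO 8601 produces the "expanded year" form -- a sign
// followed by six year digits:
var iso = bc.toISOString(); // '-001000-01-01T00:00:00.000Z'

// And the string parser handles that form, even though it chokes
// on a bare '-1000':
var roundTrip = new Date(iso);
roundTrip.getTime() === bc.getTime(); // true
```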

And, sure enough, this works. And it turns out that it also fixes the incorrect parsing of two-digit years:
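Padding the year out to four digits puts the string inside the ISO 8601 format, so it’s parsed literally instead of being windowed into the twentieth century:

```javascript
// '0080' is taken at face value -- 80 AD, not 1980:
var yearEighty = new Date('0080-01-01T00:00:00.000Z');
yearEighty.getUTCFullYear(); // 80
```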

I am deeply, profoundly perplexed by this. The ISO8601 specification makes cursory note of an “expanded” representation for the year part of the date, but doesn’t go into specifics about how or why it should be used. Either way, though, it works in all major browsers. Mysterious stuff.

Why do we trust automated tests?

I’m fascinated by this question. Really, it’s more of an academic problem than a practical one – as an engineering practice, testing just works, for lots of simple and well-understood reasons. Tests encourage modularity; the process of describing a problem with tests makes you understand it better; testing forces you to go beyond the “happy case” and consider edge cases; they provide a kind of functional documentation of the code, making it easier for other developers to get up to speed on what the program is supposed to do; and they inject a sort of refactoring spidey-sense into the codebase, a guard against regressions when features are added.

At the same time, though, there’s a kind of software-philosophical paradox at work. Tests are just code – they’re made of the same stuff as the programs they evaluate. They’re highly specialized, meta-programs that operate on other programs, but programs nonetheless, and vulnerable to the same ailments that plague regular code. And yet we trust tests in a way that we don’t trust application code. When a test fails, we tend to believe that the application is broken, not the tests. Why, though? If the tests are fallible, then why don’t they need their own tests, which in turn would need their own, and so on and so forth? Isn’t it just like fighting fire with fire? If code is unreliable by definition, then there’s something strange about trying to conquer unreliability with more unreliability.

At first, I sort of papered over this question by imagining that there was some kind of deep, categorical difference between testing code and “regular” code. The tests/ directory was a magical realm, an alternative plane of engineering subject to different rules. Tests were a boolean thing, present or absent, on or off – the only question I knew to ask was “Does it have tests?”, and, a correlate of that, “What’s the coverage level?” (ie, “How many tests does it have?”) The assumption being, of course, that the tests were automatically trustworthy just because they existed. This is false, of course [1]. The process of describing code with tests is just another programming problem, a game at which you constantly make mistakes – everything from simple errors in syntax and logic up to really subtle, hellish-to-track-down problems that grow out of design flaws in the testing harness. Just as it’s impossible to write any kind of non-trivial program that doesn’t have bugs, I’ve never written a test suite that didn’t (doesn’t) have false positives, false negatives, or “air guitar” assertions (which don’t fail, but somehow misunderstand the code, and fail to hook onto meaningful functionality).

So, back to the drawing board – if there’s no secret sauce that makes tests more reliable in principle, where does their authority come from? In place of the category difference, I’ve started to think about it just in terms of a relative falloff in complexity between the application and the tests. Testing works, I think, simply because it’s generally easier to formalize what code should do than how it should do it. All else equal, tests are less likely to contain errors, so it makes more sense to assume that the tests are right and the application is wrong, and not the other way around. By this logic, the value added is proportional to the height of this “complexity cliff” between the application and the tests, the extent to which it’s easier to write the tests than to make them pass. I’ve started using this as a heuristic for evaluating the practical value of a test: The most valuable tests are the ones that are trivially easy to write, and yet assert the functionality of code that is extremely complex; the least valuable are the ones that approach (or even surpass) the complexity of their subjects.

For example, take something like a sorting algorithm. The actual implementation could be rather dense (never mind that a custom quicksort in JavaScript is rarely useful):
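For illustration, a sketch of the kind of implementation in question:

```javascript
// A recursive quicksort over an array of numbers: pick a pivot,
// partition the rest, and recur on each side.
function quicksort(list) {
  if (list.length <= 1) return list;
  var pivot = list[0];
  var lower = [];
  var higher = [];
  for (var i = 1; i < list.length; i++) {
    (list[i] < pivot ? lower : higher).push(list[i]);
  }
  return quicksort(lower).concat([pivot], quicksort(higher));
}
```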

The tests, though, can be fantastically simple:

These are ideal tests. They completely describe the functionality of the code, and yet they fall out of your fingers effortlessly. A mistake here would be glaringly obvious, and thus extremely unlikely – a failure in the suite almost certainly means that the code is actually defective, not that it’s being exercised incorrectly by the tests.

Of course, this is a cherry-picked example. Sorting algorithms are inherently easy to test – the complexity gap opens up almost automatically, with little effort on the part of the programmer. Usually, of course, this isn’t the case – testing can be fiendishly difficult, especially when you’re working with stateful programs that don’t have the nice, data-in-data-out symmetry of a single function. For example, think about thick JavaScript applications in the browser. A huge amount of busywork has to happen before you can start writing actual tests – HTML fixtures have to be generated and plugged into the testing environment; AJAX calls have to be intercepted by a mock server; and since the entire test suite runs inside a single, shared global environment (PhantomJS, a browser), the application has to be manually burned down and reset to a default state before each test.

In the real world, tests are never this easy – the “complexity cliff” will almost always be smaller, the tests less authoritative. But I’ve found that this way of thinking about tests – as code that has an imperative to be simpler than the application – provides a kind of second axis along which to apply effort when writing tests. Instead of just writing more tests, I’ve started spending a lot more time working on low-level, infrastructural improvements to the testing harness, the set of abstract building blocks out of which the tests are constructed. So far, this has taken the form of building up semantic abstractions around the test suite, collections of helpers and high-level assertions that can be composed together to tell stories about the code. After a while, you end up with a kind of codebase-specific DSL that lets you assert functionality at a really high, almost conversational level. The chaotic stage-setting work fades away, leaving just the rationale, the narrative, the meaning of the tests.

It becomes an optimization problem – instead of just trying to make the tests wider (higher coverage), I’ve also started trying to make the tests lower, to drive down complexity as far towards the quicksort-like tests as possible. It’s sort of like trying to boost the “profit margin” of the tests – more value is captured as the difficulty of the tests dips further and further below the difficulty of the application:


[1] Dangerously false, perhaps, since it basically gives you free license to write careless, kludgy tests – if a good test is a test that exists, then why bother putting in the extra effort to make it concise, semantic, readable?

Announcing Neatline 2.0.2

[Cross-posted from scholarslab.org]

Today we’re pleased to announce the release of Neatline 2.0.2! This is a maintenance release that adds a couple of minor features and fixes some bugs we’ve rooted up in the last few weeks:

  • Fixes a bug that was causing item-import queries to fail when certain combinations of other plugins were installed alongside Neatline (thanks Jenifer Bartle and Trip Kirkpatrick for bringing this to our attention).
  • Makes it possible to toggle the real-time spatial querying on and off for each individual exhibit. This can be useful if you have a small exhibit (eg, 10-20 records) that can be loaded into the browser all at once without causing performance problems, and you want to avoid the added load on the server incurred by the dynamic querying.
  • Fixes some performance issues with the OpenStreetMap layer in Chrome.

And more! Check out the release notes for the full list of changes, and grab the new code from the Omeka add-ons repository.

Also, watch this space for a couple of other Neatline-related releases in the coming weeks. Jeremy and I are working on a series of themes for Omeka specifically designed to display Neatline projects, including the NeatLight theme, which is currently used on the Neatline Labs site I’ve started playing around with (still a work in progress). We’re also just about ready to cut off a public release of the NeatlineText plugin, which makes it possible to connect records in Neatline exhibits to individual sections, paragraphs, sentences, and words in text documents (check out this example).

Until then, give the new code a spin, and let us know what you think!

Announcing Neatline 2.0.0! A stable, production-ready release

[Cross-posted from scholarslab.org]


It’s finished! Today we’re excited to announce Neatline 2.0.0, a stable, production-ready release of the new codebase that can be used to upgrade existing installations. If you’re starting fresh with a new project, just download the new version and install it like any other Omeka plugin. If you’re upgrading from Neatline 1.x, be sure to read through the 2.0 migration guide before getting started (most important – the 2.0 migration runs as a “background process,” which means that there could be a 10-20 second lag before your old exhibits are visible under the “Neatline” tab). Then, if you want to use the SIMILE Timeline widget and item-browser panel that were built into the first version of Neatline, download NeatlineSimile and NeatlineWaypoints, the two new sub-plugins that integrate those features seamlessly into the Neatline core. For more information, check out the (all new!) documentation, which walks through the installation process in detail.

Download the plugins: Neatline | NeatlineWaypoints | NeatlineSimile

Neatline 2.0 is a major update that significantly expands the scope of the project. Building on the core set of geospatial annotation tools from the first version, we’ve turned Neatline into a general-purpose visual annotation framework that can be used to create interactive displays of almost any type of two-dimensional material – maps, paintings, drawings, photographs, documents, and anything else that can be captured as an image. We’ve also made a series of changes to the user interface and code architecture that are designed to make Neatline more accessible for new users (college undergraduates working on class assignments) and, at the same time, more flexible for advanced users (professional scholars, journalists, and digital artists who want to use Neatline for complex projects).

Some of the highlights:

  • Improved performance and scalability, powered by a real-time spatial querying system that makes it possible to work with really large collections of records – as many as about 1,000,000 in a single exhibit;
  • A more sophisticated set of drawing tools, including the ability to import high-fidelity SVG documents created in programs like Adobe Illustrator or Inkscape and interactively drag them into position as geometry in Neatline exhibits;
  • An interactive, CSS-like stylesheet system, built directly into the editing environment, that makes it possible to quickly perform bulk updates on large collections of records using a simplified dialect of CSS;
  • A flexible user-permissions system, designed to make it easier to use Neatline for class assignments and workshops, that makes it possible to prevent users from modifying or deleting content they didn’t create;
  • Expanded support for non-spatial base layers that makes it possible to build exhibits on top of any web-accessible static image or non-spatial WMS layer – paintings, drawings, photographs, documents, etc.
  • A more powerful theming system, which makes it possible for designers to completely customize the appearance and interactivity of each individual Neatline exhibit. This makes it possible to host completely independent and thematically-distinct projects inside a single installation of Omeka.
  • A total rewrite of the front-end JavaScript applications (both the editing environment and the public-facing exhibit views) that provides a more minimalistic and responsive environment for creating and viewing exhibits;
  • A new programming API and “sub-plugin” system that makes it possible for developers to add custom functionality for specific projects – everything from simple user-interface widgets (sliders, legends, scrollers, forward-and-backward buttons, etc.) up to really extensive modifications that expand the core data model and add totally new interactions.

And much more! Over the course of the next week, leading up to our panel about Neatline at the DH 2013 conference in Lincoln, Nebraska (“Circular Development: Neatline and the User/Developer Feedback Loop,” Wednesday at 10:30), we’re going to be fleshing out the new documentation and building a set of Neatline-2.0-powered projects designed to put the new feature set through its paces.

Also, watch this space later in the week for another code release – we’ve built an extension called NeatlineTexts that connects Neatline exhibits with word-level annotations in long-format documents, which makes it possible to use Neatline as a publication platform for essays, blog posts, scholarly articles, monographs, etc., and built a special Omeka theme that’s specifically designed to frame these interactive editions.

Until then – grab the new code, give it a spin, and let us know what you think!

Announcing Neatline 2.0-alpha2!

[Cross-posted with scholarslab.org]

We’re pleased to announce Neatline 2.0-alpha2, a second developer-preview version that gets us one step closer to a stable 2.0 release! For now, this is still just a testing release aimed at engineers and other folks who want to experiment with the new set of features. Grab the code here:

Neatline-2.0-alpha2 | NeatlineWaypoints-2.0-alpha2 | NeatlineSimile-2.0-alpha2

This revision fixes a couple of bugs and adds two new features that didn’t make it into the first preview release:

  1. A user-privileges system, which makes Neatline much easier to use in collaborative, multi-user settings like classrooms and workshops. In a lot of ways, this feature reflects an expanded focus for Neatline. During the first cycle of development last year, we were mainly focused on building a tool designed for individual scholars and students working on focused projects. In that setting – when just a handful of trusted collaborators are working on a project – it’s often not necessary to assign “ownership” to individual pieces of content to protect them from being changed or deleted by other users.

    Over the course of the last year, though, we’ve realized that there’s a lot of interest in using Neatline in a classroom setting, which introduces a new set of requirements. When 50 students are all building their own Neatline exhibits inside a single installation of Omeka, it would be easy for someone to accidentally edit or delete someone else’s work – there need to be guard rails to prevent users from modifying content that doesn’t belong to them.

    In Neatline 2.0-alpha2, we’ve added an ACL (access control list) that makes it possible to enforce a three-level user privileges system:

    • Admin and Super users can do everything – they can create, edit, and delete all Neatline exhibits and records, regardless of who they were originally created by.

    • Contributor users can add, edit, and delete their own exhibits, but can’t make changes to exhibits or records that they didn’t create.

    • Researcher users are denied all Neatline-related privileges – they can’t create, edit, or delete any Neatline exhibits or records.

    This is a simple approach, but we think it addresses most of the basic patterns for classroom use that we’ve encountered here at UVa and elsewhere. If students are working on individual projects, each can be given a separate “Contributor” account, which allows them to create and update their own exhibits, but blocks them from changing anyone else’s work. If students are working together in groups, each group can be assigned an individual “Contributor” account, which allows group members to update each other’s work, but prevents them from making changes to other groups’ exhibits.

  2. An exhibit-specific theming system that makes it possible to create completely custom “sub-themes” for individual Neatline exhibits. Before, it was possible to customize the layout and styling of the Neatline exhibit views by editing the Omeka theme, which would change the appearance of all the exhibits on the site. In many cases, though, individual exhibits have specific requirements. Depending on the content, it might be useful for different exhibits to have different page headers, typography, or viewport layouts; and it’s also really useful to be able to load exhibit-specific JavaScript files, which can be used to define custom interactions for individual exhibits.

    In this release, every aspect of an exhibit’s public view can be completely customized by adding an “exhibit theme” that sits inside of the regular Omeka theme. For example, if I have an exhibit called “Testing Exhibit” with a URL slug of testing-exhibit, I can define a custom theme for the exhibit by adding a directory in the public theme at neatline/exhibits/themes/testing-exhibit. With the directory in place, Neatline will automatically load any combination of custom assets:

    • If a template.php file is present in the directory, it will be used as the view template for the exhibit in place of the default show.php template that ships with Neatline.

    • All .js and .css files in the directory will be loaded in the public view. This makes it possible to break additional styling and JavaScript functionality across multiple files, which makes it easier to break complex customizations into smaller units.

    This gives the theme developer full control over the appearance and behavior of each individual exhibit, making it possible to build an extremely diverse collection of Neatline projects inside a single installation of Omeka.
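Concretely, the layout for the exhibit theme described above might look like this (only template.php has a required name – the stylesheet and script file names here are arbitrary):

```
themes/my-public-theme/
  neatline/
    exhibits/
      themes/
        testing-exhibit/
          template.php   <- used in place of the default show.php
          exhibit.css    <- loaded automatically in the public view
          exhibit.js     <- loaded automatically in the public view
```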

Check out the change log for more details. And let us know what you think!

Testing asynchronous background processes in Omeka

I ran into an interesting testing challenge yesterday. In Neatline, there are a couple of controller actions that need to spawn off asynchronous background processes to handle operations that are too long-running to cram inside of a regular request. For example, when the user imports Omeka items into an exhibit, Neatline needs to query a (potentially quite large) collection of Omeka items and insert a corresponding Neatline record for each of them.

Jobs extend Omeka_Job_AbstractJob and define a public perform method:
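For example, a simplified sketch of what the item-import job looks like – the real class has more going on, and the option names here are invented for illustration:

```php
class Neatline_ImportItems extends Omeka_Job_AbstractJob
{

    /**
     * Import Omeka items into an exhibit.
     */
    public function perform()
    {
        // Query the (potentially large) collection of Omeka items...
        $items = get_db()->getTable('Item')->findBy(
            $this->_options['query']
        );

        // ...and insert a corresponding Neatline record for each one.
        foreach ($items as $item) {
            $record = new NeatlineRecord(null, $item);
            $record->exhibit_id = $this->_options['exhibit_id'];
            $record->save();
        }
    }

}
```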

And can be dispatched asynchronously by getting the job_dispatcher out of the registry and passing the job name and parameters to sendLongRunning:
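The invocation in the controller looks roughly like this (again, the parameter names are illustrative):

```php
// In the controller action that triggers the import:
Zend_Registry::get('job_dispatcher')->sendLongRunning(
    'Neatline_ImportItems',
    array(
        'exhibit_id' => $exhibit->id,
        'query'      => $query
    )
);
```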

It’s easy enough to directly unit test the perform method on the job, but, since actual execution of the process is non-blocking, the jobs can’t be tested at the integration level in the ordinary manner. For example, I’d like to just dispatch a request with a mock item query, and check that the correct Neatline records were created. This can’t be asserted reliably, though, since there’s no guarantee that the job will have completed before the testing assertions are executed.

The job itself is non-blocking, but the job invocation in the controller code is blocking, and can be tested pretty easily by replacing the job_dispatcher with a testing double and spying on the sendLongRunning method. Since this is a pattern that needs to be implemented in more than one test, I started by adding a mockJobDispatcher method to the abstract test-case class that mocks the job dispatcher and injects it into the registry:
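The helper on the abstract test case looks something like this, using PHPUnit’s mock builder (the concrete dispatcher class name is my assumption):

```php
/**
 * Inject a mock job dispatcher into the registry, so that tests
 * can spy on how jobs are invoked.
 *
 * @return Omeka_Job_Dispatcher_Default The mock dispatcher.
 */
protected function mockJobDispatcher()
{
    $jobs = $this->getMockBuilder('Omeka_Job_Dispatcher_Default')
        ->disableOriginalConstructor()
        ->getMock();
    Zend_Registry::set('job_dispatcher', $jobs);
    return $jobs;
}
```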

Then, in the test, we can just call this method to mock the dispatcher, assert that the dispatcher is expecting a call to sendLongRunning with the correct job and parameters, and then fire off a mock request to the controller action under test:
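A sketch of what that test looks like – the route and the fixture helper are made up for the example:

```php
public function testImportItemsJobIsDispatched()
{
    $exhibit = $this->_exhibit(); // hypothetical fixture helper

    // Mock the dispatcher and set the expectation on the invocation.
    $jobs = $this->mockJobDispatcher();
    $jobs->expects($this->once())->method('sendLongRunning')->with(
        'Neatline_ImportItems',
        array(
            'exhibit_id' => $exhibit->id,
            'query'      => array('tags' => 'precinct')
        )
    );

    // Fire off a mock request to the controller action.
    $this->request->setMethod('POST');
    $this->request->setPost(array('query' => array('tags' => 'precinct')));
    $this->dispatch('neatline/import/' . $exhibit->id);
}
```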

This is a pretty good solution, but not perfect: The integration test is really asserting an intermediate step in the implementation of the controller action, not the end result – it tests that the job was called with certain parameters, not the final effect of the request. This opens up the door to false positives. For example, in the future, I might make a breaking change to the public API of the Neatline_ImportItems job. Assuming I’ve changed the job’s unit tests to assert against the new API, the test suite would pass even if I completely forgot to update any of the job invocations, since the integration tests are just asserting the structure of the invocation, not the final effects.

I’ve encountered a version of this problem more than once, and I’ve never really found a good solution to it. Short of moving up to something like in-browser Selenium tests, or resorting to hacky execution pauses in the integration tests, has anyone ever come across a better way to do this?

Interactive CSS in Neatline 2.0

[Cross-posted with scholarslab.org]

Neatline 2.0 makes it possible to work with really large collections of records – as many as 1,000,000 in a single exhibit. This level of scalability opens up the door to a whole range of projects that would have been impossible with the first version of Neatline, but it also introduces some really interesting content management challenges. If the map can display millions of records, it also needs utilities to effectively manage content at that scale.

This often involves a shift from working with individual records to working with groups of records. With a million records on the map, it’s pretty unlikely that you’ll want to change the color of just one of them. More likely, that record will exist as part of a large grouping of related records (e.g., “democratic precincts” or “photographs from 1945”), all of which should share a certain set of attributes. There needs to be a way to slice and dice records into overlapping clusters of related records, and then apply bulk updates to the individual clusters.

Really, this is a familiar problem – it’s structurally identical to the task of styling web pages with CSS, which makes it possible to address groupings of elements with “selectors” and apply key-value styling rules to the groups. Inspired by projects like Mike Migurski’s Cascadenik, Neatline 2.0 makes it possible to use a Neatline-inflected dialect of CSS to update groups of records linked together with “tags,” which can be applied in any combination to the individual records.

Neatline Stylesheet Basics

Let’s take a look at how this works in practice. Imagine you’re plotting results from the last four presidential elections. You load in a big collection of 800,000 records (200,000 precincts for each of the four elections), each representing an individual polling place with a point on the map. Each point is scaled to represent the number of ballots cast at that location, and shaded red or blue according to which party won more votes. In this case, there are really seven different nested and overlapping taxonomies in the data. All of the records are precincts, but each falls into one of the four election seasons – 2000, 2004, 2008, or 2012. And each precinct went either democrat or republican, regardless of which election cycle it belongs to. Each record can be tagged with some combination of these tags:


Each of the groupings needs to share a specific set of attributes – and also not share some attributes that need to be assigned separate values on individual records. For example, all of the precincts – regardless of date or party – should share the same basic fill-opacity and stroke-width styles. All records in each of the groupings for the four election seasons need to share the same after-date and before-date visibility settings so that the records phase in and out of visibility in unison. And all republican and democratic records should share the same shades of red and blue. Meanwhile, none of the groupings should define a standard point-radius style, which is used on a per-record basis to encode the number of ballots cast at that location.

Neatline-inflected CSS makes it easy to model these relationships. To start, I’ll define some basic styles for the top-level precinct tag, which is applied to all the records in the exhibit:
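The stylesheet for that might look something like this – I’m hedging on the exact dialect, but the property names follow Neatline’s style vocabulary, and note that point-radius is deliberately left out so it can vary per record:

```css
.precinct {
  fill-opacity: 0.6;
  stroke-width: 0;
}
```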

Now, when I click “Save,” Neatline instantaneously updates the stroke-width and fill-opacity styles on all records tagged with precinct:


Next, I’ll set the before-date and after-date properties for each of the four election season tags, which ensure that the four batches of records phase in and out of visibility in unison as the timeline is scrolled back and forth:
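Assuming the season tags are just the bare years, the rules might look like this (illustrative values):

```css
.2000 {
  after-date: 2000;
  before-date: 2004;
}

.2004 {
  after-date: 2004;
  before-date: 2008;
}

/* ...and likewise for the 2008 and 2012 tags. */
```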

Now, when I open up any individual record, the before-date and after-date fields will be updated with new values depending on which election the record belongs to:


Last, I’ll define the coloring rules for the two political parties. First, the Democrats:
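Something along these lines – the specific hex value here is just a placeholder shade of blue:

```css
.democrat {
  fill-color: #206bba;
}
```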

Click “Save,” and all democratic precincts update with the new color:


Auto-updating stylesheet values

So far, we’ve just been entering hard-coded values into the stylesheet. This often makes sense for properties that have inherently semantic values (e.g., dates). For other attributes, though (namely colors), it’s much harder to reason in the abstract about what value you want. For example, I know that I want the republican precincts to be “red,” but I don’t know off-hand that #ff0000 is the specific hexadecimal value that I want to use. It makes more sense to open up the edit form for an individual record and use the color picker for the “Fill Color” field to find a color that looks good.

And even for styles that can be reasoned about in the abstract, it’s often easier and more intuitive to use the auto-previewing functionality on one of the record forms to tinker around with different values. Once you’ve decided on a new setting, though, it’s annoying to have to manually propagate the value back into the stylesheets so that all of the record’s siblings stay in sync – you’d have to copy the value, close the form, open up the stylesheet, find the right rule, and paste in the new value. To avoid this, Neatline also automatically updates the stylesheet when individual record values are changed, and immediately pushes out the new value to all of the record’s siblings.

Let’s go back to the election results. For the republican precincts, instead of pasting in a specific hex value for the fill-color style, we’ll just “register” fill-color as being one of the properties controlled by the republican tag by listing the style and assigning it a value of auto:
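The registration is just a rule with auto in place of a concrete value:

```css
.republican {
  fill-color: auto;
}
```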

When I click “Save,” nothing happens, since a value isn’t defined. Now, though, I can just open up any of the individual republican records, choose a shade of red, and save the record. Since we activated the fill-color style for the republican tag, Neatline automatically updates all of the other republican records just as if we had set the value directly on the stylesheet:


And now, when I go back to the stylesheet, the fill-color rule under republican is automatically updated with the value that we just set in the record form:


This also works for styles that already have concrete values. For example, say I change my mind and want to tweak the shade of blue used for democratic precincts. I can just open up any of the individual democrat-tagged records, pick a new value with the color picker, and save the record. Again, Neatline automatically replaces the old value on the stylesheet and propagates the change to all of the other democratic precincts.

Announcing Neatline 2.0-alpha1!


[Cross-posted with scholarslab.org]

It’s here! After much hard work, we’re delighted to announce the first alpha release of Neatline 2.0, which migrates the codebase to Omeka 2.0 and adds lots of exciting new things. For now, this is just an initial testing release aimed at developers and other brave folks who want to tinker around with the new set of features and help us work out the kinks. Notably, this build doesn’t yet include the migration to upgrade existing exhibits from the 1.1.x series, which we’ll ship with the first stable release in the next couple weeks once we’ve had a chance to field test the new code.

45 minutes of Neatline 2.0 alpha testing, compressed to 90 seconds, set to Chopin.

In the interest of modularity (more on this later), the set of features that was bundled together in the original version of Neatline has been split into three separate plugins:

  • Neatline – The core map-making toolkit and content management system.
  • NeatlineWaypoints – A list of sortable waypoints, the new version of the vertical “Item Browser” panel from the 1.x series.
  • NeatlineSimile – The SIMILE Timeline widget.

Just unpack the .zip archives, copy the folders into the /plugins directory in your Omeka 2.x installation, and install the plugins in the Omeka admin. For more detailed information, head over to the Neatline 2.0-alpha1 Installation Wiki, and take a look at the change log for a more complete list of changes and additions.

We’re really excited about this code. Since releasing the first version last summer, we’ve gotten a huge amount of incredibly helpful feedback from users, much of which has been directly incorporated into the new release. We’ve also added a carefully-selected set of new features that opens up the door to some really interesting new approaches to geospatial (and completely non-geospatial) annotation. It’s a leaner, faster, more focused, more reliable, and generally more capable piece of software – we’re excited to start building projects with it!

Some of the additions and changes:

  • Real-time spatial querying, which makes it possible to create really large exhibits – as many as 1,000,000 records on a single map;
  • A total rewrite of the front-end application in Backbone.js and Marionette that provides a more minimal, streamlined, and responsive environment for creating and publishing exhibits;
  • An interactive “stylesheet” system (inspired by projects like Mike Migurski’s Cascadenik), that makes it possible to use a dialect of CSS – built directly into the editing environment – to synchronize large batches of records;
  • The ability to import high-fidelity SVG illustrations created in specialized vector editing tools like Adobe Illustrator and Inkscape;
  • The ability to add custom base layers, which, among other things, makes it possible to annotate completely non-spatial entities – paintings, photographs, documents, and anything else that can be captured as an image;
  • A revamped import-from-Omeka workflow that makes it easier to link Neatline records to Omeka items and batch-import large collections of items;
  • A flexible programming API and “sub-plugin” system that makes it easy for developers to extend the core feature set with custom functionality for specific projects – everything from simple JavaScript widgets (legends, sliders, scrollers, etc.) up to really deep modifications that extend the core data model and add completely new interactions.

Over the course of the next two weeks, I’ll be writing in much more detail about some of the new features. In the meantime – let us know what you think! We’re going to be pushing out a series of alpha releases in pretty rapid succession over the course of the next couple weeks, and we’re really keen to get feedback about the new features before cutting off a stable 2.0 release. If you find a bug, or think of a feature that you’d like to see included, be sure to file a report on the issue tracker.

Restarting Marionette applications

Over the course of the last couple months, I’ve been using Derick Bailey’s superb Marionette framework for Backbone.js to build the new version of Neatline. Marionette sits somewhere in the hazy zone between a library and a framework – it’s really a collection of architectural components for large front-end applications that can be composed in lots of different ways. I use Marionette mainly for the core set of message-passing utilities, which make it easy to define interactions among different parts of big applications – pub-sub event channels, command execution, request-response patterns, etc. I’ve come to completely rely on these structures, and can’t really imagine writing non-trivial applications without them anymore.

The only big kink I’ve encountered was in the Jasmine suite. Since almost all of the integration-level test cases mutate the state of the application (trigger routes, open/close views, etc.), I needed to completely burn down the app and re-start it from scratch at the beginning of each test to ensure a clean slate. The top-level Marionette Application has a start method that walks down the tree of modules and runs the initializers. As it exists now, though, start can only be called once during the lifecycle of the application, and does nothing if it’s called again later on.

I was getting around this by defining independently-callable init methods for all of my modules and wiring them up to the regular Marionette start-up system:
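The pattern looked something like this – the module and view names here are made up, but the shape is standard Marionette 1.x:

```js
Neatline.module('Map', function(Map, Neatline, Backbone, Marionette) {

  // Independently-callable initializer, so the test suite can
  // re-run it to restart the module.
  Map.init = function() {
    Map.__view = new Map.View({ el: '#neatline-map' });
  };

  // Wire it into the regular Marionette start-up sequence.
  Map.addInitializer(Map.init);

});
```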

And then manually calling all of the init methods in my Jasmine beforeEach()‘s to force-restart the application:
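Which looks something like this (module names invented for the example):

```js
beforeEach(function() {
  // Recreate the start-up order by hand – brittle, since this has
  // to mirror the ordering that Marionette enforces automatically.
  Neatline.Map.init();
  Neatline.Presenter.init();
  Neatline.Editor.init();
});
```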

This is icky – I have to exactly recreate a specific start-up order that’s automatically enforced in the application itself by before: and after: initialization events. And it introduces lots of opportunities for false-negatives – if you add a module, and forget to explicitly start it in the test suite, everything falls apart.

Really, I wanted to just re-call Neatline.start() before every test. I realized tonight, though, that the application object can be tricked into restarting itself by (a) stopping all of the modules and (b) resetting the top-level Callbacks on the application:
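The restart helper ends up looking something like this. It leans on Marionette internals – the submodules hash and the _initCallbacks object – which could change between versions:

```js
Neatline.restart = function() {

  // (a) Stop all of the modules.
  _.each(this.submodules, function(module) {
    module.stop();
  });

  // (b) Reset the initializer callbacks so they can run again.
  this._initCallbacks.reset();

  // Re-run the initializers from scratch.
  this.start();

};
```

Then the Jasmine `beforeEach()`’s just call `Neatline.restart()`.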

Much cleaner. Assuming all state-holding components are started in the initializers, this has the desired effect of completely rebooting the application.

I’d imagine this is a pretty common issue – is there any philosophical reason for the prohibition against re-calling Application.start() more than once?

Neatline and Omeka 2.0


[Cross-posted with scholarslab.org]

We’ve been getting a lot of questions about when Neatline plugins will be ready for the newly-released Omeka 2.0. The answer is – very soon! In addition to migrating all of the plugins (Neatline, Neatline Time, Neatline Maps, Neatline Features) over to the new version of Omeka, we’re also using this transition to roll out a major evolution of the Neatline feature-set that incorporates lots of feedback from the first version.

Some of the new, Omeka-2.0-powered things on tap:

  • Real-time spatial querying on the map, which makes it possible to work with really large collections of data (as many as 1,000,000 records in a single exhibit);

  • The ability to import SVG documents from vector-editing programs like Adobe Illustrator, making it possible to render complex illustrations on the map;

  • A portable stylesheet system that allows exhibit-builders to use a CSS-like syntax to apply bulk updates to large collections of records;

  • An improved workflow for displaying Omeka items in Neatline exhibits – mix and match individual Dublin Core fields, entire metadata records, images, and other item attributes;

  • A flexible workflow for adding custom base layers in exhibits, which makes it possible to use Neatline to annotate non-spatial materials: paintings, drawings, abstract maps, and anything else that can be captured as an image.

  • A new set of hooks and filters – both on the server and in the browser – that make it easy for developers to write modular add-ons and customizations for Neatline exhibits – legends, sliders, record display formats, integrations with long-format texts, etc.

The new version is just about feature-complete, and we’re now in the process of tying up loose ends and writing the migration code to upgrade projects built on the 1.1.x releases. We’re on schedule for a public beta by the end of March, and a full release by the end of the semester.

Going forward, we’ll continue supporting the Omeka 1.5.x-compatible releases of Neatline from a maintenance standpoint, but we’re moving all new development efforts into the new versions of the plugins, which only work with Omeka 2.0.

As the final pieces fall into place over the course of the next couple weeks, we’ll start posting a series of alpha releases for developers and other folks who want to test-drive the new feature set. Between now and then, check out some of the feature-preview articles we’ve posted in the last couple weeks:

Neatline Feature Preview – 1,000,000 records in an exhibit
Neatline Feature Preview – Importing SVG documents from Adobe Illustrator

And watch this space for ongoing weekly updates!

Converting SVG into WKT

[Cross-posted with scholarslab.org]

Last week, I wrote about some of the new functionality in Neatline that makes it possible to take SVG documents created in vector-editing programs like Adobe Illustrator and drag them out as spatial geometry on the map. Under the hood, this involves converting the raw SVG markup – which encodes geometry relative to a “document” space (think of pixels in a Photoshop file) – into latitude/longitude coordinates that can be rendered dynamically on the map. Specifically, I needed to generate Well-Known Text (WKT), the serialization format used by spatially-enabled relational databases like PostGIS and MySQL.

It turned out that there wasn’t any pre-existing utility for this, so I wrote a little library called SVG-to-WKT that does the conversion.

The top-level convert method takes a raw SVG document and spits back the equivalent WKT GEOMETRYCOLLECTION:
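Usage looks like this – the exact output formatting is from memory, so treat it as approximate:

```js
SVGtoWKT.convert('<svg><polygon points="1,2 3,4 5,6" /></svg>');
// → roughly: 'GEOMETRYCOLLECTION(POLYGON((1 -2,3 -4,5 -6,1 -2)))'
```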

The library supports all SVG elements that directly encode geometry information, and exposes the individual helper methods that handle each of the elements:
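The signatures below are taken from the library’s README as I remember it, so double-check against the source:

```js
SVGtoWKT.line(1, 2, 3, 4);        // LINESTRING(1 -2,3 -4)
SVGtoWKT.polyline('1,2 3,4');     // LINESTRING(1 -2,3 -4)
SVGtoWKT.polygon('1,2 3,4 5,6');  // POLYGON((1 -2,3 -4,5 -6,1 -2))
SVGtoWKT.rect(0, 0, 10, 10);      // POLYGON((...)) for the rectangle
SVGtoWKT.circle(0, 0, 10);        // POLYGON approximated with short line segments
SVGtoWKT.path('M0,0 L10,10 Z');   // closed paths become POLYGON geometries
```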

If you look at the output strings, you’ll notice that the Y-axis coordinates in the WKT are inverted relative to the input: SVGtoWKT.polyline('1,2 3,4') returns LINESTRING(1 -2,3 -4), not LINESTRING(1 2,3 4). This is because the Y-axis “grows” in the opposite direction on maps as it does in document space. In Illustrator, the coordinate grid starts at the top left corner, and the Y-axis increases as you move down on the page; on maps, the Y-axis increases as you move “up,” to the north. SVG-to-WKT just flips the Y-axis coordinates to make the orientation correct on the map.
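The flip itself is trivial. Here’s a toy re-implementation of the polyline conversion – my own sketch, not the library’s code:

```javascript
// Parse an SVG-style "x,y x,y ..." points string and emit a WKT
// LINESTRING, negating each Y coordinate to flip the axis.
function polylineToWKT(points) {
  var coords = points.trim().split(/\s+/).map(function (pair) {
    var xy = pair.split(',');
    return Number(xy[0]) + ' ' + -Number(xy[1]);
  });
  return 'LINESTRING(' + coords.join(',') + ')';
}
```

Running `polylineToWKT('1,2 3,4')` reproduces the `LINESTRING(1 -2,3 -4)` output described above.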

A few things are still on the to-do list:
  • Make it work in Node.js. This is actually a bit trickier than I thought it would be, because Node doesn’t implement the browser-native methods that jQuery’s parseXML uses. It may make sense to move to a generic XML parser that works in Node, which would be lighter-weight than jQuery anyway.

  • Instead of just being purely functional (SVG in, WKT out), it might be useful to return some sort of SVGDocument object that could then be used to generate specific WKT strings at different density levels, orientations, etc. This would have come in handy while writing the custom OpenLayers handler that Neatline uses to actually position the generated WKT on the map (more on this later).

  • Get rid of the Underscore.js dependency.

Neatline Feature Preview – Importing SVG documents from Adobe Illustrator

[Cross-posted with scholarslab.org]

tl;dr – The new version of Neatline makes it possible to take SVG documents created in vector editing software like Adobe Illustrator and Inkscape and “drag” them directly onto the map, just like a regular polygon. This makes it possible to create really sophisticated illustrations that go far beyond the blocky, “sharp-edge” style that we usually associate with digital maps. Check out the screencast:

The first version of Neatline implemented a pretty standard set of GIS controls for sketching vector geometry onto maps – points, lines, and polygons. It was easy to sketch out simple shapes, but more difficult to create really intricate, complex illustrations.

Really, this is a sort of ubiquitous problem with digital maps, which tend to be good at representing points, but bad at representing curves. Under the hood, shapes on digital maps are represented by a series of X/Y coordinate pairs, wrapped up into different geometry types that store information about how the points should be displayed. For example, in Well-Known Text – the serialization format used by databases like PostGIS and MySQL – a line is represented by LINESTRING(1 2,3 4,5 6), a polygon by POLYGON((1 2,3 4,5 6,1 2)), and so on and so forth. At the end of the day, everything is just a series of hard-coded points, strung together to form shapes.

This low-level organization in the data tends to bubble up to the level of user interfaces in the form of map sketching tools that make it easy to draw jagged shapes but hard to draw smooth shapes. For example, in the first version of Neatline, drawing this is easy:


But this is much harder:


It’s still possible, but it’s time-consuming and brittle – if you change your mind later and want to adjust the curvature of the arrow, you have to manually reposition dozens of points. This is especially frustrating since, in other domains, this is a well-understood problem with lots of high-quality solutions: Vector graphics editors like Adobe Illustrator, Inkscape, and even in-browser tools like svg-edit make it easy to create smooth, complex vector-based geometries that can be serialized to a portable XML format called SVG (Scalable Vector Graphics).

In the upcoming release of Neatline, we’ve made it possible to take SVG markup created in any vector editing tool and place it directly onto the map. Just save off any vector graphic as an SVG document, open up the file in a text editor, and paste the raw markup into the Neatline editor. Then just drag out the shape to any position, dimension, and orientation on the map. Once the new geometry is in place, it behaves just like regular points and polygons added with the default controls – it can be styled and edited just like anything else on the map.

This also opens up a whole new front of high-fidelity text-based annotation on digital maps. Since vector editors can convert strings of text into SVG paths, this makes it possible to sketch out labels, snippets, or even little paragraphs of content directly onto the map itself.

Neatline Feature Preview – 1,000,000 records in an exhibit

[Cross-posted with scholarslab.org]

tl;dr – The upcoming version of Neatline makes it possible to build huge interactive maps with as many as 1,000,000 records in a single exhibit. It also introduces a new set of tools to search, filter, and organize geospatial data at that scale. Watch the screencast:

One of the biggest limitations of the first version of Neatline was the relatively small amount of data that could be loaded into any individual exhibit. Since the entire collection of records was loaded in a single batch on page-load, exhibits were effectively constrained by the capabilities of the browser JavaScript environment. Beyond a certain point (a couple hundred records), the front-end application would get loaded down with too much data, and performance would start to suffer.

In a certain sense, this constraint reflected the theoretical priorities of the first version of the project – small data over large data, hand-crafted exhibit-building over algorithmic visualization. But it also locks out a pretty large set of projects that need to be built on top of medium-to-large spatial data sets. In the upcoming version 1.2 release of the software (which also migrates the codebase to work with the newly-released Omeka 2.0), we’ve reworked the server-side codebase to make it possible to work with really large collections of data – as many as 1,000,000 records in a single Neatline exhibit. Three basic changes were needed to make this possible:

  1. Spatial data needed to be loaded “on-demand” in the browser. When the viewport is focused on San Francisco, the map doesn’t need to load data for New York. Huge performance gains can be had by loading data “as-needed” – the new version of Neatline uses the MySQL spatial extensions to dynamically query the collection when the user moves or zooms the map, and just loads the specific subset of records that fall inside the current viewport.

    As long as the exhibit creator sensibly manages the content to ensure that no more than a couple hundred records are visible at any given point (which isn’t actually much of a limitation – anything more tends to become bad information design), this means that the size of Neatline exhibits is effectively bounded only by the capabilities of the underlying MySQL database.

  2. The editor needed more advanced content management tools to work with large collections of records. In the first version of Neatline, all the records in an exhibit were stacked up vertically in the editing panel. If the map can display 1,000,000 records, though, the editor needs more advanced tooling to effectively manage content at that scale. Neatline 1.2 adds full-text search, URL-addressable pagination, and a “spatial” search feature that makes use of the map as a mechanism to query and filter the collection of records.

  3. There needed to be an easy way to make batch updates on large sets of records in an exhibit. Imagine you’re mapping election returns from the 2012 presidential election and have 20,000 points on neighborhoods that voted Democratic. If you decide you want to change the shade of blue you’re using for the dots, there has to be an easy way of updating all 20,000 records at once, instead of manually updating each of the records individually.

    In version 1.2, we’ve made it possible to assign arbitrary tags to Neatline records, and then use a CSS-like styling language – inspired by projects like Cascadenik – to define portable stylesheets that make it easy to apply bulk updates to records with a given tag or set of tags.
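The on-demand loading described in step 1 can be sketched roughly as follows. This is an illustration rather than Neatline's actual code – the `neatline_records` table and `coverage` column are placeholder assumptions – but the idea is to serialize the current viewport as a WKT polygon and let MySQL's `MBRIntersects` do the filtering:

```javascript
// Build a MySQL spatial query that selects just the records whose
// geometry intersects the current map viewport.
// Table and column names are hypothetical.
function viewportQuery(extent) {
  // extent: { minX, minY, maxX, maxY } in map coordinates.
  var ring = [
    extent.minX + ' ' + extent.minY,
    extent.maxX + ' ' + extent.minY,
    extent.maxX + ' ' + extent.maxY,
    extent.minX + ' ' + extent.maxY,
    extent.minX + ' ' + extent.minY // close the ring
  ].join(',');
  var wkt = 'POLYGON((' + ring + '))';
  return "SELECT * FROM neatline_records " +
         "WHERE MBRIntersects(coverage, GeomFromText('" + wkt + "'))";
}
```

Re-running a query like this every time the user pans or zooms means the browser only ever holds the handful of records that are actually on screen.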

These are big changes, and we’re really excited about the new possibilities that open up with this level of scalability. At the same time, all development carries an opportunity cost – working on features A and B means you’re not working on features C and D. Generally, Neatline is on a trajectory towards becoming a much more focused piece of software that homes in on a lean, extensible toolset for building interactive maps. We’re taking a hard look at features that don’t support that core competency.

In the coming weeks, we’ll release an alpha version of the new codebase and solicit feedback from users to figure out what works and what doesn’t. What’s essential? What’s expendable? What assumptions are we making that nobody else is making?

Populating MySQL tables with Node.js

Over the course of the last week or so, I’ve been working on implementing “as-needed” spatial geometry loading for Neatline – the map queries for new data in real-time as the user pans and zooms on the map, just loading the geometries that fall inside the bounding box of the current viewport. Using the spatial analysis functions baked into MySQL, this makes it possible to build out exhibits with many hundreds of thousands of spatial records, provided that the content is organized (in terms of spatial distribution and min/max zoom thresholds) so that no more than a couple hundred records are visible at any given point. I needed a way to build out a really big exhibit to run the new code through its paces.

Originally, mostly just because I was too lazy to write the SQL, I had been generating testing data using a temporary development controller that called out to helper functions that actually created the exhibits / records. These actions were invoked by Rake tasks that just spawned off GET requests to the controller actions. This works fine for relatively small data sets, but once I started trying to insert more than about 10,000 rows, the loop ran for so long that the request timed out and the process died (at least, I think this was the problem).

And, either way, this is just generally slow (all in PHP) and clunky (litters up the codebase). Instead, I decided to write a couple of little standalone scripts that would programmatically build out a big SQL insert and run it directly on the database. In the past, I might have done this with Python, but I remembered how difficult it was to get the Python <-> MySQL bindings working and decided to try it with Node.

This turns out to be easy and performant. The basic gist, using the standard node-mysql package:
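A minimal sketch of the approach – the connection parameters, the `neatline_records` table, and its columns are placeholder assumptions:

```javascript
// A sketch of the basic approach: build an INSERT for a single record.
// With node-mysql, the generated SQL would be executed with something like:
//
//   var mysql = require('mysql');
//   var connection = mysql.createConnection({
//     host: 'localhost', user: 'omeka', password: 'secret', database: 'omeka'
//   });
//   connection.query(sql, function (err) { if (err) throw err; });
//
// Table and column names below are placeholders.
function insertSql(title, coverage) {
  return "INSERT INTO neatline_records (title, coverage) VALUES " +
         "('" + title + "', GeomFromText('" + coverage + "'));";
}
```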

It’s inefficient to run a separate INSERT query for each row; better to clump them together into a single, massive query, which can be accomplished by stacking up a bunch of parentheticals after the VALUES:
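Sketched out with placeholder table and column names, the bulk version builds one giant statement and runs it once:

```javascript
// Build one massive INSERT with a value tuple per record, instead of
// issuing a separate query for every row. Names are placeholders.
function bulkInsertSql(records) {
  var tuples = records.map(function (r) {
    return "('" + r.title + "', GeomFromText('" + r.coverage + "'))";
  });
  return 'INSERT INTO neatline_records (title, coverage) VALUES ' +
         tuples.join(',') + ';';
}

// Generate 500,000 dummy point records and build the query.
var records = [];
for (var i = 0; i < 500000; i++) {
  records.push({ title: 'Record ' + i, coverage: 'POINT(' + i + ' ' + i + ')' });
}
var sql = bulkInsertSql(records);
// ...then hand `sql` to node-mysql's connection.query() and wait.
```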

This builds out a 500,000-record exhibit in about 10 seconds.


Full code here.

Using Neatline with historical maps :: Part 3 – GeoServer

[Cross-posted with scholarslab.org and neatline.org]

This is part 3 of a 3-post tutorial that walks through the process of georeferencing a historical map and using it in GeoServer and Neatline.

In part 1 of this series, we used ArcMap to convert a static image into a georeferenced .tiff file. In part 2, we post-processed the file with gdal to remove the black borders around the image. In this article, we’ll load the .tiff file into GeoServer and import the final web map service into a Neatline exhibit.

Generating the web map service on GeoServer

There are two ways to upload the .tiff file to GeoServer – the entire process can be performed through the Omeka interface using the Neatline Maps plugin, or the file can be uploaded directly onto the machine running GeoServer and the service created by way of the GeoServer administrative interface.

The first option is easier, but there’s a fundamental restriction that makes it unworkable in certain situations – since Neatline Maps has to upload the .tif file through Omeka before it can create the map service via the GeoServer API, it’s impossible to upload files through Neatline Maps that are larger than the file upload limit set by the upload_max_filesize and post_max_size settings in the php.ini file on your server.

Depending on the hosting environment, these values can be set to anywhere from 2-20 megabytes by default. If you have access to the php.ini file, you can bump up the limit, but beyond a certain point it makes more sense just to upload the file directly to the server running GeoServer and create the web services manually using the GeoServer administrative interface. Since high-resolution .tiff files can weigh in at hundreds of megabytes or even gigabytes, this is often a more controlled and reliable approach, especially in cases where you’re working with multiple files at once.

Regardless of how the file is uploaded, the final process of importing the map service into Omeka and Neatline works the same way.

Option 1: Upload through Neatline Maps

If your file is small enough to be uploaded through Omeka, the Neatline Maps plugin provides plug-and-play connectivity with GeoServer:

  1. With Neatline Maps installed, click on the “Neatline Maps” tab in the top toolbar of the Omeka administrative interface and click on “Create Server.” Fill in the URL, Username, and Password for your GeoServer. In the Name section, enter a plaintext identifier for the server (used for content management in Omeka) and use the Workspace field to specify the workspace on the GeoServer installation that will house the new stores and layers. Click “Save” to create the server record.

    (Note: If you want to upload files to more than one installation of GeoServer, you can create as many server records as you want. At any given point, though, only one of the records can be marked as the “Active” server – this is the server that the plugin will use to handle new .tif uploads).

  2. Create an item to associate the web map service with (or edit an existing item). In the Item add/edit form, click on the “Files” tab, click on “Choose File,” and select the .tiff file as you would for a regular file upload. When you save the item, Neatline Maps will automatically detect that you’re trying to upload a georeferenced .tif file and create a corresponding web map service by way of the GeoServer API.

    Once you’ve saved the file, if you go back into the Item edit form and click on the “Web Map Service” tab, you’ll notice that the “WMS Address” and “Layers” fields have been automatically updated to point to the new web map service. On the show page for the item, the map will be displayed in a small, interactive widget below the default metadata fields.

Option 2: Upload directly to GeoServer

  1. First, upload the file to the server running GeoServer with scp or another file transfer protocol. It’s usually a good idea to get the file out of the /tmp directory, but it doesn’t matter beyond that – GeoServer can read the entire file system. We’ve gotten into the habit of putting the source .tiff files in /var/geotiff.

  2. In the GeoServer administrative interface, click on “Stores” in the left column and then click “Add new Store.” On the next screen, click GeoTIFF under the “Raster Data Sources” heading.

  3. Select a workspace for the store and enter a name. Under “Connection Parameters,” click the “Browse..” link, and use the pop-up window to navigate to the file. Click “Save” to create the store.

  4. Next, we have to publish the store as a public-facing layer. On the next screen, click the “Publish” link.

  5. Now, the tricky part. We have to manually tell GeoServer to deliver the layer using a coordinate projection system that Neatline can use to layer the map on top of the real-geography base layers in OpenLayers. Scroll down to the “Coordinate Reference Systems” heading and enter EPSG:900913 into the “Declared SRS” field. Under “SRS handling,” select “Force declared.” Under the “Bounding Boxes” heading, click both the “Compute from data” and “Compute from native bounds” links.

Now, with the layer created, we can associate the new web map service with an item in your Omeka collection by manually filling in the two fields in the “Web Map Services” tab:

  1. Go back to the Omeka administrative interface and find the item that you want to associate the map with (or just create a new item). Open up the edit form for the item.

  2. Click the “Web Map Services” tab. Fill in the top-level WMS address for the GeoServer installation (this always ends with /wms, and might look something like localhost:8080/GeoServer/wms) and enter the list of comma-delimited layers that you want to be associated with the item. For example, if you have a workspace called “hotchkiss” with layers “chancellorsville” and “fredericksburg,” you could enter:

     hotchkiss:chancellorsville,hotchkiss:fredericksburg

  3. Save the item.

Use the map in a Neatline exhibit

Both methods have the same end result of filling in the two fields in the “Web Map Services” tab. The only difference is in whether the .tif file is uploaded through Omeka or directly into GeoServer.

Once an item is linked to a web map service, Neatline automatically detects the map and loads it into an exhibit when the item is activated on the map. With the item queried into the editing environment for an exhibit, just check the middle of the three checkboxes next to the listing for the item in the content management panel:

…and the WMS layer will appear on the map:

Using Neatline with historical maps :: Part 2 – Transparency

Update 8/27/12:

After posting this last week, a comment by KaCeBe on the Scholars’ Lab blog led me to go back and look for a way to get Geoserver to render transparent borders without having to manually add an alpha channel to the file. Although I still can’t find a way to make Geoserver do it automatically, I did find this thread on the OSGeo.org forums in which user bovermyer finds a solution that’s much faster than the Photoshop workflow described in this post.

With gdal installed (see below), open up the terminal and run this command:

gdalwarp -srcnodata 0 -dstalpha file1.tif file2.tif

…where file1.tif is the name of the original file generated by ArcMap and file2.tif is the name of the new, transparency-added copy of the file1.tif generated by gdal. Then (re)build the geotiff with this command:

gdal_translate -of GTiff -a_srs EPSG:4326 file2.tif file2_rebuilt.tif

…which we’ve found is necessary to avoid errors during the Geoserver upload process. At this point, file2_rebuilt.tif is ready to be loaded into Geoserver and brought into a Neatline exhibit.

Much faster than pointing-and-clicking in Photoshop!

[Cross-posted with scholarslab.org and neatline.org]

This is part 2 of a 3-post tutorial that walks through the process of georeferencing a historical map and using it in Geoserver and Neatline. Check out part 1, which covers rectification in ArcMap.

In the first part of this series, we brought a static image into ArcMap and converted it into a georeferenced .tif file. In this article, we’ll post-process the image in Photoshop to get it ready to be loaded into Geoserver.

The problem: Black borders around the image

If you open up the newly-generated .tif file in a regular image editing program, you’ll see that ArcMap added in regions of black around the actual map to make it fill the rectangular aspect ratio of the file. This happens almost every time, since the process of rectification usually involves rotating the image away from its original orientation.

In the context of a Neatline exhibit, this is problematic because the black borders will completely occlude the real-geography base layer (or underlying historical maps) immediately surrounding the image. Fortunately, former Scholars’ Lab GIS assistant Dave Richardson figured out how to strip out the borders in Photoshop by converting them into transparencies. This step is a bit of a nuisance, but we’ve found that it dramatically improves the final appearance of the map.

Here’s how to do it:

  1. Go to the directory that the file was originally saved to. You’ll notice that ArcMap actually generated four files – the .tif, along with a .tfw, .tif.aux.xml, and .tif.ovr. Leave all the files in place, since we’ll need them at the end of the process to rebuild the geospatial header information after we post-process the image. Open up the main .tif file in Photoshop.

  2. In Photoshop, right click on the starting background layer and click “Layer from Background.” This will delete the locked background and replace it with a regular layer with the same content.

  3. Use the “Magic Wand Tool” to select each of the borders by holding down the shift key and clicking inside the black areas. A dotted line will snap to the edges of the borders. If the wand tool is selecting parts of the actual map image, drop the “Tolerance” setting down to 1, which will limit the selection to the exact color value of the clicked location on the image. Once the borders are selected, press the delete key to clear out the selection. At this point, the image should be surrounded by the default, checkered background graphic.

  4. Add an alpha channel to the image by clicking on the “Channels” tab on the top toolbar of the layers window (If the “Channels” tab isn’t available by default, activate it by clicking Window > Channels). Click the dropdown icon at the right of the toolbar, and click “New Channel.” Check the “Masked Areas” radio button, and set the color to be pure black with 0% opacity. Click “OK” to create the channel.

  5. Now, activate the Magic Wand Tool again and select each of the checkered, transparent areas around the image (the regions that were originally filled with the black borders). Then, invert the selection by clicking on Select > Inverse. At this point, the selection should exactly frame the map itself (the portion of the image that should not be transparent).

  6. Back over in the Channels tab, click on the listing for the Alpha channel that was created in step 4 and hide the RGB channels by clicking the visibility checkbox next to the top-level RGB listing. This will cause the image to go totally black, with the selection of the map region still active on top of the alpha channel.

  7. Activate the Paint Bucket Tool and set the foreground color to pure white (If you don’t see the icon for the paint bucket in the Tools column, click and hold the icon for the “Gradient” tool and a drop-down select will appear with a listing for the Paint Bucket). Then apply the paint bucket on the selected area on the Alpha channel, creating a white area over the region occupied by the map.

  8. Make sure that both the Alpha channel and all of the RGB color channels are marked as visible in the Channels window. Then go to File > Save As. So as not to overwrite the original file, change the name to something like [original filename]_processed. Uncheck “Layers,” check “As a Copy” and “Alpha Channels,” and click “Save.”

  9. On the “Tiff Options” dialog box, leave “Save Image Pyramid” and “Save Transparency” unchecked and make sure “Discard Layers and Save a Copy” is checked.

Now, we have a second version of the .tiff file with an Alpha channel that converts the black borders into transparent regions. The problem, though, is that the process of re-saving the file strips out the critical geospatial information in the original .tiff – we’ll have to insert this data back into the processed file before it can be used in Geoserver and Neatline.

Rebuilding the geotiff

We’ll take care of this using a utility called gdal, a powerful command line library that can do a wide variety of transformations on geospatial files. Head over to gdal.org for full documentation on how to install the command line utilities. On Mac OSX, using the homebrew package manager, it should be as easy as brew install gdal. If you’re on Windows, a binary distribution of the tool can be found here.

With gdal installed, fire up the terminal and change into the directory with the original .tif, the processed .tif, and the .tfw file.

  1. First, create a copy the original .tfw file with a name that matches the processed .tif file that was created in step 8 above. So, if the original .tif was called hotchkiss.tif, and the processed file was saved as hotchkiss_processed.tif, copy hotchkiss.tfw as hotchkiss_processed.tfw (this can be done with cp hotchkiss.tfw hotchkiss_processed.tfw). The file names have to match in order for gdal to know where to pull information about the coordinate projection when we rebuild the header.

  2. Now, still assuming we’re working with files named hotchkiss_processed.tif and hotchkiss_processed.tfw, rebuild the header with this command:

    gdal_translate -of GTiff -a_srs EPSG:4326 hotchkiss_processed.tif hotchkiss_processed_rebuilt.tif

  3. (Note: It doesn’t actually matter what you call the derivative files at the various steps of the process. All that matters is that the .tfw file matches the name of the processed .tif file.)

This will create a new file called hotchkiss_processed_rebuilt.tif that contains the transparency channel and the reconstructed geospatial information. At this point, the file is ready to be uploaded to Geoserver and brought into a Neatline exhibit.

Using Neatline with historical maps :: Part 1 – Georeferencing

[Cross-posted with scholarslab.org and neatline.org]

Out of the box, Neatline (our recently-released framework for building geotemporal exhibits) can be used to create geo-temporal exhibits based on “modern-geography” base-layers – OpenStreetMap, Google satellite and street maps, and a collection of beautiful, stylized layers from Stamen Design. For historical and literary projects, though, one of Neatline’s most powerful features is its deep integration with Geoserver, an open-source geospatial server that can pipe georeferenced historical maps directly into Neatline exhibits. For some examples of this, check out these four demo exhibits built on Civil War battle maps by Jedediah Hotchkiss.

Geoserver is a pretty complex piece of software, and the process of assigning geographic coordinates to static image files (called “georeferencing” or “georectifying”) can be a bit tricky at first. This is the first post in a three-part series that will walk through the entire process of rectifying a historical map using ArcMap, post-processing the image, uploading it to Geoserver, and importing the final web map service into a Neatline exhibit.


To start, all you need is a static image file that can be positioned in some way or another on top of a real-geography base layer. Usually, this is a map of some sort, but it could also be aerial photography, or, in more experimental and interpretive use-cases, it could even be a totally non-geographic image that would gain some kind of meaning from being situated in a geospatial context (for example, see the georeferenced manuscript pages in the “My Dear Little Nelly” exhibit).

Since the final map will be presented in an interactive environment that lets the user zoom in and out at will, it’s best to try to find a high-resolution version of the image you want to work with, which will make it possible to zoom further in before the image starts to noticeably pixelate. That said, the images don’t need to be excessively large – as Kelly Johnston (one of the GIS specialists in the Scholars’ Lab) pointed out, extremely high-fidelity images (~10,000 pixels in height or width) often don’t really provide that much more value than somewhat smaller images, and can have the effect of choking up Geoserver and slowing down the speed with which the map is rendered in the final Neatline exhibit. For historical and literary use cases, I’ve found that images with dimensions in the 3000-5000 pixel range provide a good balance of resolution and speed.

In this tutorial, I’ll be working with map #124 in the Hotchkiss Map Collection at the Library of Congress (see the full list of maps here). To get the static image file, go to the view page for the map and right click on the “Download JPEG2000 image” link at the bottom of the screen and click “Save Link As…”

With the image in hand, let’s fire up ArcMap and get the environment set up:

  1. Add a base map by clicking on File > Add Data > Add Basemap. The base map is the real-geography foundation, the “true” map against which the image will be referenced. Select one of the nine options and click “Add.” This is largely just a matter of preference. For maps with a lot of human geography (roads, railroads, cities), I like the “Bing Maps Road” layer, and for maps with natural geography (rivers, mountains, coastlines) I like the “USA Topo Maps” layer. After you’ve added a base map, a listing for the layer will appear in the “Table of Contents” column on the left, which lists out all of the assets available in the environment. You can toggle layers on and off by clicking the checkbox next to the layer title.

  2. Add the static image that you want to rectify by clicking on File > Add Data > Add Data. Navigate to the location of the image, select it, and click “Add.” (Note: If the folder containing the image is not already available in the dropdown menu to the right of “Look in,” you may have to “connect” to the folder by clicking on the folder icon with the black “+” symbol in the toolbar to the right. Select the folder, click “OK,” and the folder should become available in the main dropdown menu.) If you get a popup asking if you want to generate pyramids, click “No,” and if you get an alert labeled “Unknown Spatial Reference,” click “OK” (ArcMap is just reacting to the fact that the image doesn’t have existing geo-coordinates).

  3. Enable the Georeferencing toolbar by clicking Customize > Toolbars > Georeferencing. The toolbar will appear at the top of the screen, and can be merged into the main top bar by dragging it upwards in the direction of the main navigation controls.

  4. Move to the rough location of the image that’s being rectified by using the navigation controls at the left of the top toolbar to zoom the base map to the approximate location and bounds of the historical map. In this example, since the image I’m working with shows the town of Fredericksburg and the course of the Rappahannock southeast of the town, I’ll center the viewport a bit below and left of Fredericksburg, maybe zoomed back a bit to show the whole area that will be covered by the image.

  5. Show the static image by clicking on Georeferencing > Fit To Display. This just plasters the map directly on top of the base layer, using the bounds of the current viewport (set in the previous step) to determine the position and scale of the image. Basically, this is just setting a crude starting set of geo-coordinates that can be refined by laying down point associations.

Now, the actual rectification. All this entails is creating a series of associations (at least two, as many as ~15-20) between points on the static image and points on the real-geography base layer. As you add points, ArcMap will automatically pan, rotate, scale, and ultimately “warp” the image to match the underlying base layer.

  1. Lay a positioning point: I like to start by picking the most obvious, central, easy-to-find point on the historical map. In this case, I’ll use the position at which the Richmond Fredericksburg Railroad crosses the west bank of the Rappahannock. To lay the first point, click on the “Add Control Points” button in the Georeferencing toolbar and click at the exact position on the historical map that you want to use as the starting point. Then, without clicking down on the map viewport again, move the cursor over to the “Table of Contents” pane and check off the historical map, leaving just the base layer visible. Then, click on the location on the base layer that corresponds to the original location on the historical map.

    Once you’ve clicked for a second time, the dotted line between the two clicks will disappear. Display the historical map again by checking the box next to its title in the “Table of Contents.” The image will now be anchored onto the base layer around the location of the first point association.

  2. Lay a scaling and rotation point: Next, pick another easily-mappable point on the historical map, this time ideally near the edges of the image, or at least some significant distance from the first point. Follow the same steps of clicking on the historical map, hiding the historical map, clicking on the corresponding location on the base layer, and then re-enabling the historical map to see the effect.

At this point, you already have a minimally rectified image – the second point will both scale the image down to roughly correct proportions and rotate the image to the correct orientation. From this point forward, adding more points will make the rectification increasingly accurate and granular by “warping” the image, like a sheet of rubber, to fit the lattice of points as accurately as possible.

How many points is enough? Really, it depends on the accuracy of the map and the objectives of the Neatline exhibit. In this case, Hotchkiss’ map is already quite accurate, and just the first two points do a pretty good job of orienting the map and showing how it fits into the larger geography of the region. For literary and historical projects that don’t gain anything from extreme precision, a handful of points (2-5) is often sufficient.

When a higher level of precision is required, though, or when the historical map is significantly inaccurate (as is the case for older maps), more points (10-20) can be necessary. It’s not an exact science – just lay points until it looks right.

As you work (especially in cases where you’re laying down a lot of points), experiment with different “transformation” algorithms by clicking Georeferencing > Transformations and selecting one of the five options (1st Order Polynomial, 2nd Order Polynomial, etc). Behind the scenes, these algorithms represent different computational approaches to “fitting” the image based on the set of control points – some of the transformations will leave the image roughly polygonal, whereas others will dramatically “warp” the shape of the image to make it conform more accurately to the point associations. Depending on the type of image you’re working with and its accuracy relative to the base layer, different transformations will produce more or less pleasing results. For now, I’ll just leave it at 1st Order Polynomial.

Once you’re done laying points, save off the image as a georeferenced .tiff file by clicking Georeferencing > Rectify. As desired, change the filename and target directory, and click “Save.”


ArcGIS georeferencing documentation
Quantum GIS georeferencing tutorial (open-source alternative to ArcMap)
Georeferencing – making historic maps spatial

Neatline and the framework challenge

[Cross-posted from scholarslab.org]

With the first public release of Neatline out the door, I’ve had the chance to take a short break from programming and think back over the nine-month development cycle that led up to the launch. In retrospect, I think that a lot of the exciting challenges we ran up against – the big, difficult questions about what to program, as opposed to how to program – emerged from tensions that are inherent in the task of creating frameworks as opposed to conventional applications.

What’s a framework? As an experiment, I’ll define the term broadly to mean applications that make it possible to create things, as opposed to applications that make it possible to accomplish tasks. Frameworks are generative in a way that normal applications are not. Instead of controlling systems, crunching numbers, automating processes, boosting efficiency, or providing entertainment, frameworks are set apart by the fact that they allow the user to spawn off new things that are independent of the software itself.

Microsoft Word is used to create documents; WordPress is used to create blogs and blog posts; Drupal is used to create websites; Ruby on Rails is used to build web applications; Illustrator is used to create vector graphics; Maya is used to create 3d models and animations.

Omeka and Neatline fit straightforwardly into this definition. Omeka is used to build online digital collections; Neatline, a framework-within-a-framework built on top of Omeka, is used to create interactive maps and timelines. In each case, the final unit of analysis is some sort of discrete, addressable thing that is generated with the assistance of the software. It can be viewed, visited, or printed. Frameworks empower users to create things that would be difficult or completely impossible to create without the assistance of the software.

The paradox, though, is that frameworks have to simultaneously constrict the user’s agency in the act of expanding it. Barring some kind of mythological ur-framework that would allow for direct, unmediated, and unbounded realization of thought (Prospero’s book of magic), all frameworks, whether implicitly or explicitly, have to define a range of final outputs that will be “supported” by the software. In practice, this means paring down the supported outputs to a vanishingly small subset of the original possibility space. Frameworks are defined as much by what they disallow as by what they allow.

For the developer, deciding on the “range” of the framework is a difficult and sometimes agonizing process because it involves a fundamental tradeoff between power and accessibility – and, by extension, the size of the potential audience. As a framework becomes more powerful and allows a wider range of possible outputs, it also becomes more complex and locks out users who aren’t willing to invest the effort to become proficient with the tool. As a framework becomes more narrow and focused, a larger number of people will be able and willing to use it, but the diversity of the final outputs drops, and the tool becomes suitable for a much smaller range of use cases. It’s a zero-sum game.

Over the course of the last couple months, I’ve realized that this opposition between power and ease-of-use provides an interesting vocabulary for defining Neatline and situating it in the ecosystem of existing geospatial tools. Up until now, it seems to me that existing frameworks have clustered around the two ends of the power / ease-of-use spectrum. Consumer web applications like the Google mapmaker allow the user to drop pins and annotate them with short captions. This is delightfully easy, but all of the end-products look the same, and the tool doesn’t really provide the critical mass of flexibility and opportunity for real intellectual ownership that’s required for serious scholarly use.

Meanwhile, at the other end of the spectrum, desktop GIS applications like ArcMap provide incredibly powerful and feature-rich platforms for analyzing geospatial data and creating visualizations. For projects that have access to custom software development, programming libraries like OpenLayers, Leaflet, PolyMaps, and Timeglider provide flexible, highly-customizable toolkits for creating interactive maps and timelines – but only at the level of code.

There’s been an underpopulated zone in the middle of the spectrum, though – not many spatio-temporal tools have tried to more evenly balance power and accessibility. Neatline tries to land in a “goldilocks” zone between the two poles. It tries to be simple enough out-of-the-box that it can be used by the large majority of scholars and students who do not have programming experience or advanced GIS expertise, but still complex enough to allow for significant diversity in the structure and style of the final output.

This means, of course, that Neatline could be more powerful and could be easier to use. My argument, though, is that it couldn’t be both – at least, not without tripping over itself and breaking apart into incoherence.

Instead of choosing one pole at the expense of the other, we decided to make a studious attempt to balance the two. This is difficult to do – perhaps more difficult than committing wholesale to one or the other, which can often have the effect of locking in a cascading series of almost automatic design decisions leading towards a more singular objective. Building “middle-ground” frameworks requires a constant (re-)calibration of the feature set over the course of the development process, a sort of gyroscopic vigilance to keep the software perched in the tricky zone between flexibility and accessibility.

Like all real challenges, though, this one was also fantastically exciting to tackle. Now that Neatline is out in the wild, I can’t wait to see what people create with it.

Neatline Sneak-Peek

[Cross-posted from scholarslab.org]

The R&D team here at the lab has been quiet over the course of the last couple weeks, but there’s been a flurry of activity under the surface – we’ve been hard at work putting the finishing touches on Neatline, a geotemporal exhibit-building framework that makes it possible to plot archival collections, narratives, texts, and concepts on interactive maps and timelines.

Neatline is built as a collection of plugins for Omeka, a digital archive-building framework developed by our partners at the Roy Rosenzweig Center for History and New Media at George Mason University. If you already have an Omeka collection, Neatline provides a deeply integrated, plug-and-play mapping solution that lets you create interpretive views on your archive. If you don’t have an Omeka collection, though (or if it doesn’t make sense to represent your material as a collection of archival objects), Neatline can also be used as an effectively standalone application from within the Omeka administrative interface.

If you haven’t been following the project, check out the podcast of the workshop that Eric Rochester and I gave at THATCamp Virginia 2012 and read the announcement about our partnership with RRCHNM.

So, what kinds of things can you do with Neatline? Here are a few:

  • Create records and plot them on interlinked maps and timelines with complex vector drawings, points, and spans. Set colors, opacities, line thicknesses, point radii, and gradients.
  • Add popup bubbles and define interactions among the map, timeline, and a record-browser viewport, which can display everything from short snippets and captions to long-format interpretive prose.
  • Connect your exhibits with web map services delivered by Geoserver, which makes it possible to create rich displays of historical maps.
  • Drag the viewports around to create custom layouts.
  • Set visibility intervals on a per-item basis, making it possible to create complex time-series animations.
  • Create hierarchical relationships among items, making it possible to curate “batches” of elements in an exhibit that can be manipulated as a group.
  • (Using the Neatline Editions plugin, which is still in alpha and won’t be ready until later in the summer) Create interactive editions of texts by connecting individual paragraphs, sentences, or words with locations on maps and timelines.

Watch neatline.org in the first week of July for the full public release with the dedicated website, code, documentation, and a hosted “sandbox” version of the application that will let you register and experiment with creating exhibits before downloading the software.

Future possibilities for Prism

[Crossposted from scholarslab.org]

It’s been incredibly exciting to watch Annie, Alex, Lindsay, Brooke, Sarah, and Ed work together over the course of the last two semesters to take Prism from idea to working software. Considering the fact that most of them had never written a line of Ruby, Javascript, or CSS when they started last semester, the end result is pretty remarkable.

One of the reasons that programming is so invigorating is that software is constantly leaning forward into further elaboration and complexity. Every feature is the precursor of a hundred possible new ones. Code is always the double-delight of what it is and what it could become.

Prism currently occupies the place in the evolution of a software project where there’s enough functionality in place to really engage with the shape of the idea, but still enough unfinished that there’s space for broad, exploratory thought about the future direction of things. Annie did a fantastic job implementing a core interface that allows users to apply concept verticals (“highlighters”) to texts. Looking forward, the motivating question is how this capability can be leveraged to produce concrete scholarly outcomes – ideally, new understandings of texts.

The first public release seems like an excellent opportunity to start trying to whittle down this question to a set of specific research goals to guide future development. To me, Prism points towards two general lines of inquiry:

  1. How can experiments in collaborative markup capture uncommon or dissenting readings? The concept of crowdsourcing – and, really, the social internet in general – has proven highly adept at extracting majority opinions, at taking the pulse of a group of people. What is “liked” by the community of participants? Where is there agreement? Always implicitly contained in the data that yields these insights, though, is information about how individuals and dissenting groups diverge from the majority consensus.

    Usually, in the context of the consumer web, these oppositions are flattened out into monolithic “like” or “dislike” dichotomies. Tools like Prism, though, capture structurally agnostic and highly granular information about how users react to complex artifacts (texts – the most complex of things). I think it would be fascinating to try to find ways of analyzing the data produced by Prism that would illuminate places where the experimental cohort profoundly disagrees about things. These disagreements could be interesting irritants for criticism. Why the disagreement? What’s the implicit interpretive split that produced the non-consensus?

  2. Continuing on the concept of the “experiment.” Prism points at the provocative possibility that literary study could literally take the form of experiments, similar in structure to the “studies” conducted in disciplines like research psychology, sociology, and experimental philosophy. Literary criticism generally asks questions about how texts can be read. The critic conjures highly creative statements of meaning that often stake their value claim on the extent to which they are unexpected, unanticipated, not obvious, or atypical. Of course, this mode of introspective exegesis is ancient, beautiful, and permanent – an important counterweight to modern modes of loud, fast thinking. Far from replacing these traditional efforts, tools like Prism hold the promise of extending and advancing them in profoundly new ways.

    Prism provides information about how texts are actually read. I think it would be fascinating to take this to the next level and stage formal experiments in which subjects are presented with a text and asked to mark it up with a small number (even just one) of carefully-selected, highly-controlled terms. Done responsibly and with a healthy aversion to the sugary siren call of “Data” in a field that’s fundamentally in the business of studying art, I think that this could provide fascinating insights about everything from the concept of Kantian “expertise” in the formation of aesthetic judgments to questions about how people of different ages, ethnicities, genders, and disciplinary affiliations engage with texts. How does a college freshman read differently than a 5th year English graduate student? How do physicists read differently from philosophers?

Either way, I can’t wait to see where it all goes.


After all the difficulty trying to test Require.js AMD modules with Jasmine, I ended up switching over to load.js, a barebones little script loader that basically makes it possible to formalize dependencies as an ordered chain of functions. Now, the entire load order is defined in a single file, loader.js, that sits above the application in the file structure:
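The post originally embedded the actual loader file here. Since the original is lost, here is a self-contained sketch of the load.js pattern, with a tiny stand-in for load() that just records order (the real library fetches the scripts), and hypothetical file names:

```javascript
// Stand-in for load.js's chainable loader: each .then() batch waits for the
// previous one, so the whole dependency stack reads top to bottom.
const loaded = [];

function load(...files) {
  loaded.push(...files); // files in one batch can load in parallel
  return {
    then: (...next) => load(...next), // next batch waits for this one
    thenRun: (fn) => fn()             // run a callback once everything is in
  };
}

// The entire load order, defined in one place:
load('vendor/jquery.js', 'vendor/underscore.js')
  .then('vendor/backbone.js')       // needs jQuery and Underscore
  .then('app/ov.js')                // defines the top-level Ov namespace
  .then('app/views/poem-view.js')   // hangs views off of Ov
  .thenRun(() => { /* boot the application */ });
```

The appeal is exactly what the surrounding paragraphs describe: the stack is scannable in a single file, at the cost of having to curate it by hand.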

From a theoretical/structural standpoint this is clearly somewhat less sophisticated, since the component files don’t map out dependencies as standalone units, meaning that they work on magic constructs that aren’t locally defined (for example, the top-level Ov object domain, and Backbone). It also means that the loader file has to be manually curated as files are added, renamed, or removed from the file system. This is annoying, and a source of errors – I frittered away about ten minutes last night trying to figure out why I couldn’t access some methods added by a Backbone add-on module, only to realize that I had dropped the file into the vendor directory but neglected to add it to the load sequence.

At the same time, this approach has major benefits. Jasmine can vacuum up the files directly into the test runner, and the components can be manipulated directly without having to do bizarre contortions to trick the test suite into indulging AMD. And part of me kind of likes having a single file that defines the entire load process in a single place. I can imagine that this would get less fun in a really large application with many dozens of files, but for the time being it’s nice to be able to quickly scan the whole dependency stack in one file.

Meanwhile, the front-end architecture is really starting to come together in an exciting way. I’ve had a series of unpleasant false starts during this cycle of the project, but I think I have finally figured out how to implement an actually scalable Javascript architecture. This is important. Most of the things that I want to build over the course of the next 4-5 years will require large, complex front-ends – in some cases, far more elaborate than what’s required for Oversoul.

Require.js difficulties

Frustrating day yesterday trying to get Require.js to play nice with Jasmine. As far as I can tell, this is essentially impossible without resorting to unacceptably aggressive hacks at various levels of the stack (this post by Chris Strom – the only actually functional solution I’ve been able to find – involves directly clobbering parts of the Rack server that spins up the testing suite). Pretty bad.

The problem, essentially, is that the Jasmine gem is designed to load all of the application source files directly into the test environment. This goes against the basic philosophy of Require.js, though, in which all dependencies are only ever loaded by way of the define() calls that wrap all of the application modules. Even if you can hack a special Require.js-compliant “runner” file for the test suite that manages to pull in the source files, the application modules themselves are locally scoped inside of their respective define() wrappers, and thus unavailable to the global namespace where the actual testing takes place. Strom gets around this by manually globalizing all of the application assets when they get loaded in (e.g., window.PoemView = PoemView;). Again, though, this doesn’t feel tenable.
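The scoping problem can be sketched with a miniature stand-in for define() – Require.js provides the real machinery, and the module and file names here are hypothetical:

```javascript
// A miniature AMD: define() registers a factory whose return value is only
// ever handed to other modules -- nothing leaks into the global namespace.
const modules = {};
function define(name, deps, factory) {
  modules[name] = factory(...deps.map((d) => modules[d]));
}

// A stub "backbone" module, just enough to make the example run:
define('backbone', [], () => ({ View: { extend: (o) => o } }));

// The application module: PoemView lives entirely inside the callback.
define('poem-view', ['backbone'], function (Backbone) {
  var PoemView = Backbone.View.extend({ el: '#poem' });
  return PoemView;
});

// Jasmine specs run in the global scope, so the only way to reach PoemView
// is to globalize it by hand -- the hack described above:
globalThis.PoemView = modules['poem-view'];
```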

Thus far I’ve been following Addy Osmani’s Backbone dependency boilerplate almost verbatim. In the application proper, I really like Require.js – it eliminates big stacks of script includes in server templates, and the separate-file templating by way of the text.js plugin really makes it feel like you’re writing a first-class application in the browser (usually, it feels kind of like building a ship in a bottle). Unfortunately, the testing problems are a deal breaker – I’m really determined this time around to make the front-end suite as robust as the server suite, and I don’t want to spend dozens of hours over the course of the next couple months wading through (no doubt ongoing) configuration chaos.

Tonight or tomorrow I’m going to strip out all of the Require.js code and try to drop in one of the more straightforward, lightweight Javascript loaders (this article is a good tour of the many options). The load.js syntax caught my eye; the tiny filesize is nice too. Also, since I can no longer use the text.js template file loading, I’ll need to do the Javascript partial templating in jade, and I’ll need to rig up an automated fixture-generator workflow like I did for Neatline.

I find that setting up testing workflows is always more difficult than I expect. I guess that kind of makes sense, in a way – you’re essentially trying to “use” the application in a way that it’s not really designed to be used.

When to test, when to wait

When you first start working on a difficult, open-ended problem, the codebase is like a jigsaw puzzle. The pieces are scattered and you’re not really sure how things connect. At first, your actions are haphazard. You shuffle and reshuffle the pieces into clumps by color, randomly try to fit them together.

There’s sort of a rich-get-richer phenomenon – when you really understand a codebase, you have immediate and correct intuition about how to proceed. When you’re roaming in the wilderness, though, the only real recourse is to fall back on a sort of evolutionary, quasi-intelligently-random process of trial and error that plays out over the course of dozens or hundreds of individual touches on the code. At the start, the only way to develop a meaningful understanding of a new, non-trivial problem is to try lots and lots and lots of different approaches.

It’s like the “burn-in” phase in machine learning algorithms – a period of volatility and randomness precedes the stable, workable result. As the codebase becomes more established, each iteration of changes becomes increasingly “shallow,” where the shallowest possible change is a commit that just adds new code, and doesn’t change anything that’s already there.

Anyway, though, my argument is this: Don’t write tests during an intense burn-in phase. There are two related risks here:

  1. Testing too early adds “latency” to each progressive iteration of experimental changes during the burn-in period. The burn-in is a series of fast, almost improvisational updates to the codebase. It’s like evolving bacteria – things change so fast because each generation only lives a couple minutes. Premature testing is risky because it drags out the lifespan of each of these updates at exactly the time when they need to be as short as possible. Assuming that development time is finite, this means that the burn-in phase will churn through fewer iterations. My theory here is that it’s just the total number that matters, not any metric of quality – you can’t compensate by being especially clever over the course of a smaller number of generations.
  2. The presence of tests can prematurely ossify the codebase before it’s had time to stabilize. Why? Because passing tests can make you numb to bad programming. The tests pass, right? Maybe, but it’s not hard to write passing tests for awful application code. When I block in tests too soon during a burn-in period, I think it can actually reduce the probability that I’ll go back and thoroughly refactor the rough areas. Never use tests as Febreze for smelly application code. All good code has passing tests; but not all passing tests are testing good code.

Of course, some projects are inherently stable and don’t require a burn-in period. I’ve written enough Omeka plugins at this point that I can sit down and write confident, well-organized code from the start.

When this is the case, I start testing immediately. Why wait? In the context of modern web development, this might actually be the rule, not the exception. I suspect that this is why test-driven development is so effective when paired with opinionated tools like Rails. If burning-in on a project is the process of building your own set of “rails” and structural conventions for the codebase, then Rails (or Django, or Zend, etc.) comes “pre-burned-in.”

So, to sum up – when you know what you’re doing, test first. But when you don’t know your tools, or when you’re trying to build something exotic or conceptually fuzzy, test early, test often – but maybe don’t test immediately.

The anxious keystroke

A couple weeks ago, Jeremy mentioned that John Flatness, the lead developer on Omeka at CHNM, likes to measure his productivity by the number of lines of code that he deletes in a day. This is fascinating, and, in a practical sense, deeply wise. But it also points at a fundamental question about programming:

Are programmers fundamentally creators or mitigators of complexity?

On the one hand, code is complexity – any formalization of process, no matter how simple, is always more complex than no process at all. Programmers exist to produce these expressions of process.

But process is problematic. Code is fundamentally volatile, and always on the cusp of incomprehensibility. How many lines of code are you really able to meaningfully hold in your head at once? Maybe lines of code is a bad metric. What’s the most complex intellectual apparatus that you’re able to “compile” in your mind and manipulate as a mirror of the codebase as you work? When I work, I imagine a vast physical structure, hanging in a black void – a bundle of tubing that wires up a constellation of colored nodes. But only so much complexity can be booted into working memory – sooner or later the stack overflows, things fall apart, and I lose control. The code spills over into a chaos that I’m not smart enough to master.

Programming is distilled complexity, but complexity is also a programmer’s great foe. The programmer is in the business of system-making, of creating a fresh, brand-new, whirring little mechanism out of nothing. But the programmer’s real objective, in the hoary thick of the codebase, is often to destroy complexity, to achieve the dream of a Perfect System.

What’s the Perfect System? It’s the completely intuitive, infinitely maintainable, and inevitable expression of what we intend. I use “inevitable” in the poetic sense of the concept. We dream of being conduits for applications that unfold themselves, line by line, to form sublime and un-refactorable codebases, final executions that can’t be changed without being degraded.

Of course, we always build fallen systems. They have bugs. They compromise. What’s the closest we can imagine, though? I immediately think of concise, direct implementations of clever algorithms. Really, though, this is an asymptotic thought – the closest thing I can imagine to the Perfect System is an infinitely small system. Take the limit of that sentiment, and the closest imaginable thing to the Perfect System is no system at all.

Freud thinks of life as a circular movement that loops from a state of non-life before birth back to a state of non-life after death. Forward movement along that line (everything we do as living things, essentially) is overdetermined – it’s both a move away from nothingness and towards nothingness. We quest through life, climbing further and further away from the horrible nullness of not being alive, but always dreaming of a final discharge of all the chaos and anxiety of the world, a return to the original state of zero-entropy.

Programming is much the same, I think – death is the black, empty vim buffer, and we strike outward from the void. We fill the screen, striving upwards and away towards an embodiment of purity and sustainability that only exists in the absence of code, in the non-system, in deleting all the lines.

Sooner or later, NoSQL == inconsistent data

Ever since I took the plunge and moved over to Mongo for Oversoul, I’ve been waiting to see if (when?) I would get bitten by the whole NoSQL ability to accommodate inconsistent data. In the world of SQL, the data schema is the deepest and most fundamental unit of structure in the application, a sacrosanct blueprint that comprehensively defines the arrangement of data that is allowed to enter or exit the system. Any attempt to insert or query data that doesn’t conform to the schema will be outright rejected, and, if the exception isn’t handled, the code will fail.

In Neatline, the data record schema looks like this:

As an experiment, I randomly deleted one of the lines in the create statement to see what would happen to the test suite. It would be easy here to totally bomb out the whole application by getting rid of something that’s fundamental to the relational structure (like one of the foreign keys, or even the id column on the data records). Instead, though, I just got rid of the “title” field, which is hardly central to the functionality of the application – it’s a little bucket for plaintext data, but nothing depends on it, in the sense that it’s never used as a carrier pigeon for any kind of important information through the system.

Right now, blog-post-inspired-mucking-around aside, the server-side Neatline test suite consists of 214 tests that make a total of 1185 assertions, all of which pass. When I took out the title field in the schema declaration, the test suite was only able to even make 626 assertions (about half), and recorded 87 errors during the run. In other words, the application is completely decimated. From the standpoint of a user, essentially nothing at all works.

Of course, the advantage to all of this is that you can be completely certain that any data that does, in fact, make it into the system will be completely normal and reliable – when you query for a collection of data records in Neatline, you can be completely confident that every single one of the records will have the exact structure that’s defined in the original schema.

NoSQL stores, by contrast, place no (native) restrictions on the structure of incoming and outgoing data, and no such assurance of structural consistency exists. Mongo, for instance, is essentially in the business of just persisting to disk little clumps of data that take the structural form of JSON strings. These strings, or “documents,” can be clumped together arbitrarily into “collections,” the hazy-to-the-point-of-not-even-being-analogous analog of a relational “table.” For example, you could have a Mongo collection called “records” and push in these two documents:
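The two documents from the original post are lost, so here is a self-contained stand-in: a plain array plays the part of the “records” collection, and the field names are hypothetical. The point survives – the two documents share no structure at all:

```javascript
// Two structurally inconsistent "documents" pushed into the same collection.
// Mongo accepts both without complaint; an array stands in for it here.
const records = [];

records.push({ name: 'Walt Whitman', living: false, born: 1819 });
records.push({ title: 'Leaves of Grass', type: 'book' }); // entirely different fields

// A query like db.records.find({ living: false }) just walks the collection
// and keeps the documents whose attribute exists and matches:
const matches = records.filter((doc) => doc.living === false);
// matches contains only the first document; the second is silently skipped
```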

Mongo, beautifully or horrifyingly depending on your perspective, couldn’t care less. You can even execute queries on the inconsistent data. I could issue a db.find({ living: false }), and Mongo will walk the collection, test each document to see (a) if it has a living attribute, and (b) if so, whether its value is false, and return the document in the result set if both checks pass.

From an application developer’s standpoint, this is all both fantastic and scary. Fantastic because the process of making evolutionary updates to the “schema” is much more straightforward. Just update the application code – no need to write a migration. Scary, though, because there’s a virtual certainty that inconsistent data will, in fact, be introduced into the system given a large and complex enough application, and especially when more than one developer is working on application code that touches on the same collection of data.

When I started working on Oversoul, I was curious to see how long it would take before some combination of application and testing code resulted in the presence of schematically inconsistent data in Mongo. Really, Oversoul is a comparatively low-risk application as far as this matter is concerned – it’s a studiously simple codebase written by a single person. And yet, yesterday as I was browsing through a test suite for a collection of custom form validators that I wrote back in February, I realized that there were dozens of tests that were working on mocked user documents that had as many as 3 extraneous fields that had long since been popped off of the User schema declaration. Originally, when I was writing the application with the idea that it could be used either as an “installable” application, like WordPress, or as a public-facing service with administrator registration, I had a couple of extra fields on the User model to track the different access levels:
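The declaration embedded in the original post is lost; here is a hypothetical reconstruction of the field map (the names beyond admin follow the prose above, not the actual Oversoul code):

```javascript
// Early User schema, Mongoose-style field map (hypothetical reconstruction):
var userFields = {
  username:  String,
  email:     String,   // public-facing administrator registration
  active:    Boolean,  // has the account been confirmed?
  superUser: Boolean,  // site-wide administrator?
  admin:     Boolean   // "owner" of this particular installation?
};
```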

Once I abandoned the idea of accommodating both use patterns (mainly because I realized that it’s totally impossible for a centralized server to handle more than one to two concurrently running poems…) I stripped all of the abstractions off of the model that were designed to handle the public-facing administrator registrations (active, superUser, etc.), and just went with a single boolean field called “admin” to keep track of whether the user is one of the “owners” of the installation or a publicly-registered poem participant. So, it changed to this:
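The trimmed-down declaration might have looked something like this (again a hypothetical sketch in the same Mongoose-style field-map form, not the actual code):

```javascript
// After the simplification: one boolean does all of the access-level work.
var userFields = {
  username: String,
  admin:    Boolean  // installation "owner", or public poem participant?
};
```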

But, littered copiously throughout one of the older test suites, were user document stubs that were still mocking out documents with the attributes that had been removed from the canonical prototype of the data schema (email, superUser, and active):
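One of those stale stubs might have looked like this (the values are hypothetical; the extraneous attribute names are the ones named above):

```javascript
// A test stub still mocking attributes that no longer exist on the canonical
// schema -- and Mongo persists it without complaint:
var mockUser = {
  username:  'david',
  email:     'david@example.com', // long since removed from the schema
  superUser: true,                // removed
  active:    true                 // removed
};
```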

These saved just fine, and the tests were passing silently. And, in fact, there was no actual problem here – the extra fields are really, truly extraneous in this case, and the application code isn’t affected by the presence of additional attributes on the documents. But the fact that it was non-breaking this time seems to be largely a matter of luck. And if I can get turned around in a small, surgical, two-month-old application of which I’m the sole developer, I can only imagine how hard it would be to keep things tidy and normalized with a large, sprawling, long-duration, multi-developer application.

On Javascript architecture

On Friday I spent some time reading through an article that Wayne sent me by Addy Osmani about how to organize non-trivial Javascript applications. This is a timely issue – it’s been really interesting to watch how a lot of my early architecture decisions in Neatline have panned out over the course of the last six months, and I’ve been trying to settle on an approach for the front end of Oversoul, which, to my delight, is becoming a more and more pressing issue as the server-side development rapidly approaches completion.

As I see it, there’s a fundamental (and to some extent indissoluble) tension in Javascript architecture between an impulse to modularize your code as much as possible and an effort to make the various components of your code communicate with one another in a straightforward, low-overhead, and low-maintenance way. By modularization, I mean, essentially, classes, in the actual sense of the concept. Even though most server-side development frameworks can claim to be “object oriented” to the extent that application code is generally subclassing the framework-provided master classes for the various pieces of the puzzle (controller, database table, database row, etc.), the actual classes that you write as a developer are, in fact, singletons – they’re concrete pieces of code designed to do a single thing (to be an application, and no other), and can’t be reused in any kind of meaningful way.

In Javascript development, meanwhile, there’s a strong sense in which you can actually practice classically-imagined object oriented programming, even though, paradoxically, the Javascript object model is so peculiar and warty. Think of the whole jQuery “widget” pattern – you can write completely abstracted little chunks of code (usually, taking some permutation of options or parameters) which then graft a functionality onto some minimally-patterned markup structure. Custom scrollbars, drop-down menus, form widgets, etc. There are really significant advantages to this development pattern – when written well, the widgets can be completely isolated from any surrounding application code while still being configurable enough to blend into the design and interaction patterns of the application. They can also be unit tested, although that almost never happens.

This pattern has been abstracted upwards to form a general development philosophy for Javascript applications – large programs are written as an assemblage of modularized chunks. Intuitively, this makes sense. When you look at a web application (or even just a wireframe of an application), it’s easy to slice and dice things into tidy little buckets. With Oversoul, there’s the poem, the rank stack, the churn stack, the word search box, and the countdown timer across the top of the screen. Each of these things begs to be broken out and written as modular code.

The problem with this, though, is that even though these chunks are conceptually modular, in practice they have to communicate with one another, often in really high-volume, complex, and functionality-critical ways. When a user mouseenters on one of the words in the rank or churn stacks, the word should appear, grayed out, in the blank space at the end of the poem that represents the position of the currently-being-selected word. So, when the mouseenter event gets triggered inside of the modularized code for the word stack, the stack needs to dial out and make the poem rendering module aware that a word has been highlighted, and needs to be rendered on the poem.

As a rule of thumb, the more modular and siloed-up your code is, the more complex and tangled the “wiring” code needed to glue everything together. Neatline is a good example of this. The Neatline front-end application is engineered as a series of jQuery widgets that are instantiated in a series of hierarchical “stacks.” The “item browser” widget manages the vertical pane of records in the editor; it, in turn, instantiates and manages an “item form” widget that takes care of the display, data loading, and data saving functionality of the form itself (which in turn instantiates and manages about 15 small, UX-focused widgets that add functionality onto the form – text editors, integer draggers, color pickers, the date ambiguity slider, etc.). This whole stack of nested modularization is kicked off in the constructor script for the editor, which acts as the base-level piece of code that turns the ignition key and gets everything started. This code also instantiates the Neatline exhibit itself that sits next to the data entry interface in the editor – the “neatline” widget instantiates separate component widgets for the map, timeline, and records browser pane, and each of these widgets in turn runs a series of still-smaller modules that manage editing interfaces, zoom controls, ad infinitum.

There’s a good reason for all of this – the Neatline exhibit application needs to work both inside and outside the context of the editor. It has to be totally cordoned off from – but still accessible by – the code that handles the editing functionality. What this means, though, is that the process of passing messages from, say, the item form widget to the map widget is excessively complex – the item form widget issues a _trigger('eventname') call, which trips a callback in the parent widget, the item browser; the item browser then immediately (re-)issues a _trigger('eventname') call, which trips a “bottom-level” callback in the constructor script. This callback then issues a chain of method calls that ascends back “up” the exhibit module stack – a “piping” method is called in the neatline widget, which immediately calls a terminal method in the map widget, which actually manifests the necessary change. This is chaos – mountains of callback code are necessary to pass messages through the system when the sender and receiver are both very “high” up in different module stacks.
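The relay can be caricatured like this (all names hypothetical; the real Neatline code routes through jQuery UI widget callbacks rather than plain objects):

```javascript
// One message, four hops: the form event descends the editor stack and then
// climbs the exhibit stack before the map ever hears about it.
const log = [];

const map         = { setFocus: (id) => log.push(`map focused on ${id}`) };
const neatline    = { focusMap: (id) => map.setFocus(id) };        // "piping" method
const editor      = { onSaved:  (id) => neatline.focusMap(id) };   // bottom-level callback
const itemBrowser = { onSaved:  (id) => editor.onSaved(id) };      // re-issued trigger
const itemForm    = { save:     (id) => itemBrowser.onSaved(id) }; // the original event

itemForm.save(42);
// Every hop in between is hand-written relay code that exists only to
// carry the message -- multiply by dozens of events and widgets.
```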

At the same time, this decoupling comes with enormous advantages – it means that the Neatline exhibit application is extremely flexible, and can be wired up for use in a huge range of environments. The challenge, as I see it, is to find a way of preserving this flexibility and meaningful modularity while still being able to pass messages along the structural “hypotenuses” in applications – from peak to peak, as it were, without having to descend into the valley. I’ve been playing with some ideas for how to go about this, and I plan to use the Oversoul front end as a sandbox to try them out.
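One candidate approach – just a sketch of the idea, not anything in the Neatline codebase – is a flat message broker that any module can publish to or subscribe on, so a message travels peak-to-peak in a single hop no matter how deeply the sender and receiver are nested in their respective stacks:

```javascript
// A minimal publish/subscribe broker. Widget stacks register handlers
// against named channels; senders publish without knowing (or caring)
// where in the hierarchy the receivers live.
var broker = {

  channels: {},

  // Register a callback on a named channel.
  subscribe: function(channel, callback) {
    (this.channels[channel] = this.channels[channel] || []).push(callback);
  },

  // Fire every callback registered on the channel.
  publish: function(channel, payload) {
    (this.channels[channel] || []).forEach(function(cb) { cb(payload); });
  }

};

// The map widget listens; the item form publishes directly to it.
broker.subscribe('record:save', function(record) {
  // ...update the map feature for the record...
});
broker.publish('record:save', { id: 1, title: 'Ghent' });
```

The modules stay ignorant of one another’s position in the hierarchy – the only shared knowledge is the channel name.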

Scoring routine optimizations

After implementing the new everything-in-memory approach to the scoring in Oversoul, I enlisted the help of Eric and Wayne on Friday morning and spent some time combing through the two methods that do the heavy lifting – vote.score() and poem.score() – and tried out a progression of small changes on the code that yielded really dramatic speed increases. These kinds of focused, algorithm-level optimizations can often be more academic than practical – more often than not, the pain points that really choke up performance tend to crop up at the integration level, in the plumbing code that wires up the various chunks of the system and makes the big, architectural choices about what gets stored where, how, and when. For example, the big shift away from writing the vote data into Mongo and towards leaving it in RAM was this class of optimization, and, indeed, it resulted in a far greater one-off performance gain than the sum effect of everything that follows here.

High-level interpreted languages (Javascript, Python, Ruby) generally supply carefully-chosen, native implementations of most speed-critical, algorithmically non-trivial processes that you need in workaday application development (sorting comes to mind), and it’s usually unlikely that a custom implementation would be faster. Oversoul is an interesting exception in this regard, though, because the scoring routine is both (a) pretty irregular from a computational standpoint and (b) incredibly central in the application – if there are 10,000 active votes in a given word round and the poem is running on a 500ms scoring interval, the vote.score() method is run 10,000 times every 500ms, or 1,200,000 times a minute. Changes that shave a couple thousandths of a second off the runtime of the method can add up to savings of dozens or hundreds of milliseconds in production, depending on the volume of data that’s being worked with.

The goal with Oversoul is to build it so that 1000 people can participate in a poem running on a 1000ms scoring interval with 5-minute word rounds. What does that require from a performance standpoint at the level of the scoring routine? It depends on the frequency with which people make point allocations in the voting phase, which is an unknown at this point. The voting rate could probably be regulated to some extent by cutting down the starting point allotments, which would probably make people a bit more trigger-shy on the allocations. But that’s a cop-out. My guess is that people would average, at most, about one vote per ten seconds over the course of the five-minute word round. So, 6 (votes per minute) x 5 (minutes in the word round) x 1000 (players) would add up to 30,000 active votes at the end of the round.

Assuming a 1000ms scoring interval, my (really, quite uninformed at this point) guess is that the scoring routine would need to max out at about 500ms in order for the application to stay on its feet and not fall behind its own rhythm. In the worst-case scenario, if the scoring slice takes longer than the scoring interval to complete, you’d have a situation where the application initiates a new scoring computation while the previous computation is still running, which would choke things up even more and likely send the whole affair into a sort of vicious circle of over-scheduled scoring. Honestly, I’m really curious about exactly how this will play out, and how resilient the whole system will be to severe overload.

So, the algorithmic efficiency of the scoring process matters here, a lot. I wrote a little benchmarking routine for the vote.score() method that just runs the function on a single vote X number of times. This is a simplistic approach to benchmarking that’s perhaps a bit dubious when used as a point of comparison across different runtimes, but all that matters here is the relative performance of different passes on the code in any given runtime.

The benchmark:
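(The original snippet didn’t survive the cross-posting; the harness amounts to something like this minimal sketch, where the function and variable names are my assumptions:)

```javascript
// Time n back-to-back calls of fn and return the elapsed wall-clock
// milliseconds. Crude, but fine for comparing passes on the same code
// in the same runtime.
function benchmark(fn, n) {
  var start = Date.now();
  for (var i = 0; i < n; i++) fn();
  return Date.now() - start;
}

// E.g., run the scoring method 10,000,000 times on a single vote:
// benchmark(function() { vote.score(Date.now(), 50000); }, 10000000);
```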

And here’s the original pass on vote.score():
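(The embedded listing is also missing from this version; here’s a hypothetical reconstruction inferred from the optimization list below – written as a standalone function, where the real method hangs off the Mongoose vote model, and where the exact decay formula is an assumption:)

```javascript
// Reconstructed sketch of the original vote.score(). `decay` is the
// mean decay lifetime in milliseconds.
function scoreVote(vote, now, decay) {

  // Time elapsed since the vote was applied.
  var delta = now - vote.applied;

  // Decay integral bounds at t = 0 and t = delta.
  var bound1 = vote.quantity * -decay;
  var bound2 = vote.quantity * -decay * Math.pow(Math.E, (-delta / decay));

  return {
    // Rank: area under the decay curve so far, scaled down from ms.
    rank: (bound2 - bound1) / 1000,
    // Churn: the instantaneous value of the decay function.
    churn: vote.quantity * Math.pow(Math.E, (-delta / decay))
  };
}
```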

10,000,000 executions run in 84,913ms. The poem benchmark, run with 100 discrete words each with 300 active votes (the 30,000-vote requirement needed for the 1000-person-5-minute-rounds-at-1000ms-slicing-interval threshold), executes in 540ms, a bit over where I want it. With Eric and Wayne’s help, I made these changes:

  1. Instead of dividing by 1000 in the rank scaling computation, multiply by 0.001.
  2. this.quantity * -decay is computed twice – once in the bound1 computation and again as a component of the bound2 computation. Reuse the bound1 value in the bound2 computation.
  3. Math.pow(Math.E, (-delta / decay)) is computed twice – once in the churn computation and again to get bound2. Break this out as an “unscaled” value of the decay function, and use that value in both computations.
  4. The division at -delta / decay would be faster as a multiplication that uses the inverse of decay, computed outside of the vote scoring method. I changed the method to take decayL (the mean decay lifetime) and decayI (the inverse of the decay lifetime), and changed the calling code in poem.score() to compute 1 / decayLifetime and pass the value into vote.score().

The method now looks like this:
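(Again, the embedded listing is gone; this is a hypothetical reconstruction with all four changes applied, under the same caveats about names and the exact formula:)

```javascript
// Reconstructed sketch of the optimized vote.score(). `decayL` is the
// mean decay lifetime; `decayI` is its inverse (1 / decayL), computed
// once in poem.score() and passed in.
function scoreVote(vote, now, decayL, decayI) {

  var delta = now - vote.applied;

  // Change 3 + 4: the unscaled decay value, computed once and reused,
  // with the division replaced by a multiplication by the inverse.
  var decayScale = Math.pow(Math.E, -delta * decayI);

  // Change 2: bound1 computed once and reused in bound2.
  var bound1 = vote.quantity * -decayL;
  var bound2 = bound1 * decayScale;

  return {
    rank: (bound2 - bound1) * 0.001,  // change 1: multiply, don't divide
    churn: vote.quantity * decayScale
  };
}
```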

This gets the 10,000,000-execution benchmark on vote.score() down to 58,801ms, almost exactly a 30% improvement. The poem benchmark at 100-words-300-votes-per-word drops to 431ms, a 20% improvement, and under the (sort of arbitrary) 500ms barrier.

Three-dimensional renderings of text

I’ve been thinking a lot recently about how to “augment” text. I looked back at all of the projects I’ve worked on over the course of the last two years, and realized that all of them can essentially be thought of as efforts to use software either (a) to graft some kind of additional functionality onto text or (b) to provide some kind of “view” or representation of text that wouldn’t be possible with analog technologies. Public Poetics tried to formalize the structure of New-Criticism-esque close readings by making it possible to “physically” attach threads of conversation onto the poem; as Neatline evolves, it’s increasingly starting to morph into a text-focused application, a kind of geo-spatial footnoting system that links portions of a text to locations on a map. Oversoul is much more radical and transgressive to the extent that it stops working with extant texts and instead tries to formalize, in the shape of interactive software, the classical semiotic model of language construction – choose words from a paradigmatic pool of signifiers; string them together into syntagms.

Really, all of these projects make pretty heavy-lift interventions, in the sense that they construct completely new systems and capabilities around texts (or create texts from scratch). This is why they’re interesting. Yesterday, though, I was trying to think about ideas for projects in this domain that would be simpler and more limited (by design) in the depth and complexity of the intervention. Instead of providing platforms for adding new content adjacent to texts (Public Poetics, Neatline), what remains undone in the realm of just presenting texts, of showing what’s inherently present in the thing itself?

I’ve always been interested in the physicality of language. I don’t really mean that in the sense of the history of the book as a technology, but more in the sense that printed language has a structural, dimensional, measurable embodiment on the page as a collection of letters – in the end, we’re dealing with matter, with a highly specific arrangement of ink that has mass and volume.

Volume. That got me thinking – imagine a dead simple web application where you could enter in a string of text (picture a basic text area that would allow characters and line breaks), and the application would use a Javascript engine like three.js to render the text inside of an endless, pure-white environment. Then, you could traverse the environment with standard mouse/WASD movement, perhaps also with E moving the perspective vertically up (perpendicular relative to the current view angle) and Q moving down (this would make it easier to float “downwards” to read vertically high texts without needing to “look” downwards and use W to move forward, which would interrupt the view of the text itself).
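To pin down that control scheme, here’s a sketch of the per-frame position update as a pure function – all of the names are mine, not three.js API, and for simplicity Q/E move along the world vertical rather than strictly perpendicular to the view angle:

```javascript
// One movement step. `yaw`/`pitch` are the view angles in radians,
// `keys` is a map of currently-held keys, `speed` is units per frame.
function step(position, yaw, pitch, keys, speed) {
  // Forward vector derived from the view angles.
  var fx = Math.cos(pitch) * Math.sin(yaw);
  var fy = Math.sin(pitch);
  var fz = Math.cos(pitch) * Math.cos(yaw);
  // Strafe vector: forward rotated 90 degrees in the horizontal plane.
  var sx = Math.cos(yaw);
  var sz = -Math.sin(yaw);

  var p = { x: position.x, y: position.y, z: position.z };
  if (keys.W) { p.x += fx * speed; p.y += fy * speed; p.z += fz * speed; }
  if (keys.S) { p.x -= fx * speed; p.y -= fy * speed; p.z -= fz * speed; }
  if (keys.A) { p.x -= sx * speed; p.z -= sz * speed; }
  if (keys.D) { p.x += sx * speed; p.z += sz * speed; }
  if (keys.E) { p.y += speed; }  // float straight up, view angle untouched
  if (keys.Q) { p.y -= speed; }  // float straight down
  return p;
}
```

In the real application this would just drive the camera position inside the three.js render loop.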

I imagine two sliders that would make real-time adjustments to the environment – one that would change the “depth” of the rendered letterset, and another that would change the scale of the user’s perspective relative to the size of the letterset (in other words, how large are the letters relative to your perspective? How long does it take you to float along a sentence? A couple seconds, or a couple minutes? How huge or minuscule is the text object?).

Even better, but much more complex and difficult: Imagine taking this idea and converting it into a freeform, “multiplayer,” three-dimensional, infinitely-large canvas where anyone could come and plot three dimensional text at any orientation, in any size, and in any location in the endless volume of the application. This could then be traversed in a similar way, a constantly-changing cloudscape of floating three-dimensional text.

Scrap the database?

Serious progress in the Oversoul performance melodrama. After talking with Wayne and Eric on Monday – and after flirting with the idea yesterday of completely scrapping Mongo and rebuilding the whole thing on a different store – I think I stumbled across a viable solution to the problem. After lunch in the fellows’ lounge yesterday, Wayne and I got talking about large-query performance in NoSQL databases, and which of the many options (Mongo/Redis/Riak/Couch/Cassandra) would be best for low-write high-read storage, and, more narrowly, the specific task of popping out a large collection of records (in the area of 10,000) really, really fast.

Wayne made the point that most of the NoSQL stores try to excel at a very specific task or suite of tasks – with NoSQL, there’s a move away from the monolithic, one-size-fits-all approach taken by established SQL stores like MySQL and Postgres. Redis tries for really fast read access; Couch makes it possible to define “views” on data that define in advance the structure it needs to have on retrieval; Riak focuses on extreme distribution.

Really, though, all of this is beside the point. Oversoul will almost certainly never be run as a public-facing “service” that would be in the position of accumulating hundreds of terabytes of data scattered across dozens of servers. There’s too much computation, too often, for that kind of deployment to be feasible – and beyond the question of whether or not it would be possible, it’s not really a priority for the project. Instead, the challenge here is just the matter of moving the core vote data over the wire from the database to the application code fast enough for the scoring routine to keep up with the slicer, which, ideally, would max out at about 1000ms. The problem is just the access issue – how to get the current vote data booted into memory and handed over to v8 for the scoring?

Wayne made a key insight – the vote data is completely static in the sense that it is never updated. The set of point allocations that need to be computed for a given slice in a given round is constantly expanding in size as more and more allocations are posted back from the clients, but the content of any given vote stays the same for the entire duration of the round once it is applied. There’s a big bucket of data that needs to be reliably cordoned off from other existing buckets of data in the application, but once you toss something into the bucket you never need to get it back out individually and work with it – you just need to walk the contents of the entire bucket. The application needs to work with the set of vote data for a round as a clump, but never with subsets.

So, if the hurdle is getting the constantly-expanding-but-never-changing lump of data into memory…why have it ever leave memory in the first place? If it all needs to be available in real-time in memory for the slicer, why go to the trouble of sending it off to the database, then only to have to pull it all back over the wire every few hundred milliseconds when the scoring routine executes?

The solution, I realized, is just to push the votes onto a big object, keyed by round id, sitting on the node global object, and work directly on the in-memory data. With this approach, all the scoring routine has to do is run a single findById query to get out the poem record (this is negligible from a time-budget standpoint, a handful of milliseconds at most), and then get the current round id off of the retrieved document to key into the votes object on the global namespace. In the end, I can get the same per-slice performance that I was seeing when I was trying to load all of the data into a big all-encompassing poem super-document, but without the atomicity chaos that results from trying to make lots of concurrent writes on a physically large document on the disk.
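The shape of the thing is roughly this – a sketch, with all of the names and the per-vote scoring formula being my assumptions rather than the actual Oversoul source:

```javascript
// Votes accumulate under their round id on the node `global` object;
// the scorer walks them in place, with no database round-trip.
global.Oversoul = { votes: {} };

// Called when a client posts back a point allocation.
function registerVote(roundId, vote) {
  (global.Oversoul.votes[roundId] =
    global.Oversoul.votes[roundId] || []).push(vote);
}

// One scoring slice: tally per-word rank and churn from memory.
// `decay` is the mean decay lifetime in ms.
function scoreRound(roundId, now, decay) {
  var words = {};
  (global.Oversoul.votes[roundId] || []).forEach(function(v) {
    var churn = v.quantity * Math.exp(-(now - v.applied) / decay);
    var w = words[v.word] || (words[v.word] = { rank: 0, churn: 0 });
    w.churn += churn;
    w.rank += (v.quantity - churn) * decay / 1000;  // decayed area so far
  });
  return words;
}
```

The round id acts as the bucket boundary – each round’s clump of votes is cordoned off from every other round’s, but the clump itself is only ever walked whole.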

Of course, if the power shuts off and the server dies in the middle of the round, all of the un-persisted vote data sitting in memory would be lost (the poem document itself, though, and all of the words locked up to that point, would be unaffected). But Oversoul is about as far from any kind of business- or life-critical system as could ever be imagined, and the theoretical risk of data loss is a small price to pay in order to achieve the kind of scale necessary to make the final language persuasively unattributable.

Mongo query performance

More difficulty today in my efforts to find a scalable way to do the scoring in Oversoul. I thought that I had this figured out over the weekend when I totally rewrote the models so that all of the data for a given poem – the current round, the set of unique words in play, and the votes for the words – was structured as one massive object with three levels of nesting. In other words, the only first-class model was Poem, and the other three (Round, Word, and Vote) were all just tacked on as embedded documents.

This results in fantastic benchmarks – since you only have to hit the database once at the start of the process to pop out the master Poem record, you can then just walk downwards on the object and retrieve all of the related word and vote data directly off of it. So, in other words, what would have required two more “sets” of queries (one to get the words, and then another, for each word, to get the votes) is effectively free, and speeds up the whole process by about an order of magnitude. When everything is chunked out as separate models that just reference each other by ObjectId (the original approach), a benchmark that creates 100 words each with 100 votes runs in about 1000ms; with the single, deeply-nested document, it falls to about 125ms.
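The “walk downwards” step, sketched against a hypothetical document shape (the real schema differs in detail):

```javascript
// Hypothetical shape: poem -> rounds -> words -> votes, all embedded.
// Once findById() has pulled the poem into memory, gathering the vote
// data is pure property access - no further queries.
function votesForCurrentRound(poem) {
  var round = poem.rounds[poem.rounds.length - 1];
  var votes = [];
  round.words.forEach(function(word) {
    word.votes.forEach(function(vote) { votes.push(vote); });
  });
  return votes;
}
```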

After lunch on Monday I ran this idea by Wayne and Eric, and they both immediately pointed out that this opens the door to concurrency problems that might not show up in the benchmark – multiple clients open up the document, make changes, try to commit conflicting versions, etc. This morning I tried to implement a middle ground – completely get rid of the “word” abstraction and push that information onto the vote schema. So, you have poem, round (as a set of embedded documents on poem), and then a collection of votes with ObjectId references to the current round, each with the word to which they pertain, the quantity, and the time that they were applied. This way, we avoid the concurrency problems with the poem super-object, and pare down the total number of queries that have to happen in the scoring process to just two – one findById() to pop out the poem record, and then a costly query for all of the votes in the active round. Then, we score each of the votes, tally up the per-word rank and churn scores, sort, slice, and return.

Unfortunately, this doesn’t really work. It’s faster than the original, totally-decomposed-models approach, but only by a bit. With 100 words, each with 100 votes, it’s still over 1000ms. Wayne pointed out that there would probably be a not-insignificant speed boost when the application is deployed on real hardware with faster disk access. But even if there’s something like a 20-30% speed increase on the query, I really don’t like the idea that a 10,000-vote round would take a full second to score on the server – that means that the slicing interval for a poem of that size would probably have to be at least 4-5 seconds, which is getting pretty glacial. Really, a 500ms slicing interval is the goal, which means that the per-slice scoring can’t take more than about 100ms at the 100-words-100-votes-per-word level.

The frustrating thing is that this speed is algorithmically possible in v8, as evidenced by the really high performance of the implementation that crammed all of the data onto a single document. But the whole thing is shut down by the massive disk I/O cost of the query. As much as I don’t like to admit it after about a month of work, I think Mongo might be the wrong medicine – I need an in-memory store that totally eliminates any touches on the hard disk during the scoring routine. Redis?

Generating Jasmine fixtures with Zend and PHPUnit

Cross-posted with edits from scholarslab.org

One of the annoyances of testing client-side Javascript is that it’s often necessary to maintain a library of HTML “fixtures” to run the tests against. A fixture is just a little chunk of markup that provides the “physical” DOM structure to run the code on. So, if you have a jQuery widget that adds extra functionality to a form input, your fixture could be as simple as just a single input tag. When the code is predicated on extremely simple markup, you can get away with just manually generating markup on the fly in the tests:
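(The original inline example didn’t survive the cross-posting; it amounted to something like this sketch, which assumes jQuery, Jasmine, and a hypothetical dragger widget:)

```javascript
describe('integer dragger', function() {

  var input;

  beforeEach(function() {
    // Build the "fixture" on the fly - a bare input is all the
    // widget needs to instantiate.
    input = $('<input type="text" />').appendTo('body');
    input.dragger();
  });

  afterEach(function() {
    input.remove();
  });

  it('should wrap the input', function() {
    expect(input.parent()).toHaveClass('dragger-container');
  });

});
```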


In practice, though, non-trivial applications usually require Javascript that makes touches on large, complex, highly-specific markup that can’t realistically be injected into the test suite as inline jQuery element constructors. Static HTML fixtures can be manually prepared in separate files, but this commits you to a labor-intensive, open-ended maintenance task – every time you make a change to a template, you have to remember to replicate the change in the fixture library. Over time – and especially as new developers come onto the project – there’s a high probability that the “real” HTML generated by the live application will start to diverge from your fixtures.

My search for a solution led me to this excellent post from JB Steadman at Pivotal Labs. He describes a clever method for automatically generating a library of HTML fixtures that uses the server-side test suite as a staging environment that prepares, creates, and saves the markup emitted by the application. That way, your fixtures library can only ever be as old as the last time you ran your back-end test suite, which should be a many-times-daily affair. I was able to implement this pattern in the Omeka/Zend + PHPUnit ecosystem with little difficulty. Basically:

  1. Create a special controller in your application that exists solely for the purpose of rendering the templates (and combinations of templates) that are required by the JavaScript test suite;
  2. Create a series of “testing cases” that issue requests to each of the actions in the fixtures controller, capture the responses, and write the generated HTML directly into the fixtures library.

Imagine you have a template called records.php that looks like this:
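(The template sample is missing from this version of the post; a hypothetical records.php along these lines fits the description – the markup structure and field names are my assumptions:)

```php
<div class="records">
  <?php foreach ($records as $record): ?>
    <div class="record" data-id="<?php echo $record->id; ?>">
      <span class="title"><?php echo $record->title; ?></span>
    </div>
  <?php endforeach; ?>
</div>
```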

And when it’s rendered in the application, the final markup looks like this:
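(The rendered sample is also missing; for a simple records template it would look something like this – illustrative markup, not the actual application output:)

```html
<div class="records">
  <div class="record" data-id="1"><span class="title">Record 1</span></div>
  <div class="record" data-id="2"><span class="title">Record 2</span></div>
</div>
```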

The goal is to create a controller action that populates the template with mock data and renders out the markup, which can then be captured and saved by an integration “test” that we’ll write in just a minute (test in scare quotes, since we’re essentially hijacking PHPUnit and using it as an HTML generator). First, add a new controller class called FixturesController and create an action that mocks any objects that need to get pushed into the template:
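(The controller listing is missing here; this is a sketch of the shape it takes in the Omeka/Zend Framework 1 idiom – the base class name, record fields, and template path are assumptions:)

```php
<?php
class FixturesController extends Omeka_Controller_Action
{
    public function recordsAction()
    {
        // Stub out two artificial records, with only the attributes
        // that the template actually uses.
        $record1 = new stdClass();
        $record1->id = 1;
        $record1->title = 'Record 1';

        $record2 = new stdClass();
        $record2->id = 2;
        $record2->title = 'Record 2';

        // Disable automatic template discovery - we want direct
        // control over which templates get rendered, and in what order.
        $this->_helper->viewRenderer->setNoRender(true);

        // Render the template directly as a partial.
        echo $this->view->partial(
            'records.php',
            array('records' => array($record1, $record2))
        );
    }
}
```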

Basically, we’re just stubbing out two artificial record objects (for simplicity, we add only the attributes that are used in the template) and directly rendering the template file as a “partial.” Note the call to setNoRender(true) – by default, Zend will try to automagically discover a template file with the same name as the controller action, but we’re just disabling that functionality since we want direct control over which templates get rendered and in what order.

Next, add a directory called “fixtures” in the /tests directory, and create a file called FixtureBuilderTest.php to house the integration test that will do the work of requesting the new controller action, capturing the markup, and saving the result to the fixtures library.

It should look like this:
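(The listing is missing here; a sketch of the builder “test” – the base class name and the fixtures path are assumptions, and the path should point at wherever your Jasmine suite loads fixtures from:)

```php
<?php
class FixtureBuilderTest extends Omeka_Test_AppTestCase
{
    // Path to the Jasmine fixtures library, relative to AllTests.php.
    private $_fixturesPath = 'spec/javascripts/fixtures/';

    public function testBuildRecordsFixture()
    {
        // Request the fixtures action and capture the response body.
        $this->dispatch('fixtures/records');
        $markup = $this->getResponse()->getBody();

        // Write the generated markup into the fixtures library.
        file_put_contents($this->_fixturesPath . 'records.html', $markup);
    }
}
```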

Note that you need to specify the location in the project directory structure that you want to save the fixtures to. In this case, I’m saving to the default location used by Jasmine, but you could point to anywhere in the filesystem relative to the AllTests.php runner file in /tests.

Make sure that the /fixtures directory is included in the test discoverer in AllTests.php, run phpunit, and the fixture should be saved off and ready for use in the front-end suite. Using Jasmine with the jasmine-jquery plugin:
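For instance, a sketch of a spec that consumes the generated fixture (the spec structure and fixture name are assumptions; loadFixtures() is the jasmine-jquery helper):

```javascript
describe('records browser', function() {

  beforeEach(function() {
    // loadFixtures() injects the markup generated by the PHPUnit
    // fixture builder into the test DOM.
    loadFixtures('records.html');
  });

  it('should render a row for each record', function() {
    expect($('div.record').length).toEqual(2);
  });

});
```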