Imagine you’re building some kind of travel application and you have a Backbone model called
City, with fields like
population, which are just run-of-the-mill, static values tucked away in a database row off on the server. Then, you realize that you also want to display the current weather forecast for each city – you need, in other words, to do something like:
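To make that concrete, here is a sketch – City is reduced to a bare object with a Backbone-style get method, and the attribute names are hypothetical:

```javascript
// Hypothetical stand-in for the City model: a bare object with a
// Backbone-style `get` method (attribute names are made up).
var city = {
  attributes: { name: 'Chicago', population: 2700000 },
  get: function (key) { return this.attributes[key]; }
};

// `population` is a static value tucked away in a database row:
console.log(city.get('population')); // 2700000

// ...but `forecast` has no local value. It would have to be fetched from a
// weather API at runtime before `get` could return anything:
console.log(city.get('forecast')); // undefined
```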
Which is rather more difficult. The weather changes constantly – the only way to get it is to query against an API endpoint on the server, which, let’s say, dials out to a third-party service that returns a little 2-3 sentence summary forecast for a given location. It’s not the kind of thing that you can bake into the database – it has to be fetched on-the-fly, at runtime. Obviously, this is the exception, not the rule – in almost all cases, Backbone models will have a one-to-one correspondence with rows or documents in a database. Once in a while, though, there are situations in which models on the front end need to have “dynamic” or “compiled” fields that have to be filled in by AJAX requests in real-time, a pattern for which Backbone doesn’t really provide much guidance.
We ran into this problem recently with Neatline, in the process of reworking the interaction between records in Neatline exhibits with items in the underlying Omeka collection. In Neatline, a “record” is the basic unit of content in an exhibit – a piece of vector geometry on an interactive map, an image thumbnail, a plotting on a timeline, a clickable waypoint, a georectified historical map, etc. Records can either be free-floating, unaffiliated little atoms of content, confined to a single exhibit (usually cosmetic elements like arrows or text captions that don’t need to have any kind of formal metadata description), or they can be linked back to items in the Omeka collection. Once a record has been associated with an item, it becomes a kind of proxy or alias for the item, a visual avatar responsible for presenting the item’s metadata in the exhibit. Think of it as an elaborate hyperlink that gives the Omeka item a spatial or temporal anchoring inside of a specific environment – one Omeka item could have ten customized instantiations inside of ten different Neatline exhibits.
The Neatline record, then, needs to be able to access the compiled metadata output of its parent item in the same way that it would access any of its own, locally-defined attributes. So, in other words, a call like record.get('item') needs to work in basically the same way as record.get('title'), where title is just a standard-issue VARCHAR column on the neatline_records table.

The difficulty, of course, is that there is no item field, at least not in the same way. Items are a hodgepodge of differently-structured components – individual metadata attributes (all stored separately as entity-attribute-value triples in the database), file attachments, images, and custom content threaded in by plugins. When it comes time to display an item, Omeka gathers up the pieces and glues them together into a final chunk of HTML, a process that, depending on where you draw the boundaries, involves many hundreds or thousands of lines of PHP. The item is an artifact generated by the application code, a customizable “view” on a bucket of related bits of information – not something that can just be selected straightforwardly from the database. This flexibility is what makes Omeka so powerful – it lets you represent almost any conceivable type of object.
But how to map all of it into a single Backbone model field?
We’ve taken quite a few swings at this over the course of the last couple years. An obvious solution is just to fill in the item metadata at query-time on the server, right before pushing the results back to the client. So, query once to load the Neatline records and then walk through the results, calling some kind of
loadItem() method that would render the item template and set the compiled HTML strings on the Neatline record objects. This is attractively simple, but it buckles under any kind of load – compiling the metadata output for each item involves touching the database at least once to gather up the element text triples (and often as many as three or four times, depending on the type of information requested by the templates), meaning that the Neatline API will generate at least one additional query for each record in the result set linked to an Omeka item. So, if a query matches 100 item-backed Neatline records, the MySQL server would get hammered with 101 queries, at minimum, and in practice more like 201 or 301, which slows things down pretty quickly.
In previous versions, we got around this by pre-compiling the item metadata and storing a copy of it directly inside the
neatline_records table, thus making it immediately available at query-time. This was one of those good-on-paper ideas that dies by a thousand little cuts when it comes time to actually implement it. For starters, what happens if the user changes the template used to generate the metadata? All of the pre-compiled HTML snippets stored on the Neatline records suddenly become obsolete. This wasn’t a huge problem – you could always just click a button to re-import all of the item-backed records, which had the effect of re-rendering the templates. Still, an annoyance. More problematic was the whole set of obscure bugs that cropped up when trying to render the item templates inside of background processes, which Neatline uses when importing large collections of items into exhibits to keep the web request from timing out. For example, Omeka’s template rendering system depends on global variables (stuff like
WEB_ROOT) that don’t get bootstrapped when code is run outside of the web container, which has the effect of mangling URLs in the templates, and so forth and so on.
Omeka as an API, not just a codebase
Then, a couple months ago, Jeremy had a great idea – why not wait until the item metadata is actually needed in the user interface, and then AJAX in the content from Omeka on an as-needed basis? Fundamentally, the insight here is to treat Omeka as an API, not just a codebase – circling back to the original example, the Neatline record becomes the
City model, the item metadata snippets become the weather forecasts, and Omeka becomes the API that delivers them. Instead of trying to freeze away static snapshots of the item metadata, embrace the fact that the items are living resources that change over time (as metadata is updated, files added and removed, plugins installed and uninstalled) and turn Neatline into a client that pulls in final, presentational metadata produced by Omeka.
This turns out to be a really concise and low-code solution to the problem, but the implementation involves some interesting patterns to essentially trick Backbone models into supporting asynchronous getters – calls to the get method that need to spawn off an AJAX request to fetch the attribute value. First, though, I had to run the basic piping to get the data in place – I started by adding a little REST API endpoint on the server that emitted the metadata for a given item as a raw HTML string, and then added a simple fetchItem method to the Neatline record Backbone model that pings the endpoint and sets the response on the model under an item key. Once fetchItem has been called, the item metadata is fully hydrated on the model and ready to be accessed with regular get('item') calls.

The problem, though, is that the extra step – needing to call fetchItem – breaks Backbone’s conventions for getting/setting data. Who’s responsible for calling fetchItem, and when? The model can’t really do it automatically in the constructor, which, for a collection of 100 records, would hammer the server with 100 simultaneous AJAX requests. But passing the buck to the calling code breaks things at various points downstream. For example, Neatline uses a library called Rivets to automatically bind Backbone models to Underscore templates, and Rivets assumes that the model attributes are all accessible from the get-go by way of the default get method. So, once again, record.get('item') needs to behave in exactly the same way as record.get('title'), even though the item value is actually being AJAXed in from Omeka instead of just being read off of the internal attributes object.
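To make the moving parts concrete, here is the fetchItem piping sketched in plain JavaScript – the model and the AJAX call are stubbed out, and all of the names are stand-ins rather than Neatline’s actual code:

```javascript
// Minimal stand-in for a Backbone model (not Neatline's actual code).
function Record(attrs) {
  this.attributes = attrs;
}
Record.prototype.get = function (key) { return this.attributes[key]; };
Record.prototype.set = function (key, val) { this.attributes[key] = val; };

// Stand-in for an AJAX GET against the items endpoint (something like
// /neatline/items/5), which responds with a compiled HTML string. The stub
// "responds" synchronously; a real request would not.
function getItemHtml(id, done) {
  done('<div class="item">Compiled metadata for item ' + id + '</div>');
}

// The extra step: ping the endpoint and set the response under `item`.
Record.prototype.fetchItem = function () {
  var self = this;
  getItemHtml(this.get('item_id'), function (html) {
    self.set('item', html); // in real Backbone, this fires `change:item`
  });
};

var record = new Record({ item_id: 5 });
record.get('item'); // undefined - nothing has been loaded yet
record.fetchItem(); // somebody has to remember to call this
record.get('item'); // now returns the compiled HTML string
```

With a real, asynchronous request, the value would only be available once the response lands – which is exactly the coordination problem that breaks the get/set conventions.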
Backbone.Mutators to the rescue
To do this, I used a library called Backbone.Mutators, which makes it possible to define custom getters and setters for individual attributes on a Backbone model – by folding a small bit of custom logic into a getter for the
item field, it’s possible to automate the call to
fetchItem in a way that preserves the basic get/set pattern:
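Neatline’s actual getter is written with Backbone.Mutators’ syntax; as a rough sketch of the same logic in plain JavaScript (the model stub and the manually-flushed fake AJAX queue below are my own stand-ins, not Neatline’s code):

```javascript
// Fake AJAX: responses are queued and delivered later, to mimic asynchrony.
var pending = [];
function getItemHtml(id, done) {
  pending.push(function () { done('<div>Metadata for item ' + id + '</div>'); });
}
function flushResponses() { // "a few milliseconds later, the request comes back"
  while (pending.length) pending.shift()();
}

// Minimal model stand-in with the custom getter folded into `get`.
function Record(attrs) {
  this.attributes = attrs;
  this.fetching = false;
}

Record.prototype.fetchItem = function () {
  var self = this;
  this.fetching = true;
  getItemHtml(this.attributes.item_id, function (html) {
    self.attributes.item = html; // in Backbone: set() -> `change:item` event
  });
};

Record.prototype.get = function (key) {
  // If `item` has never been loaded, kick off the request. The getter stays
  // synchronous and just returns whatever is currently on `attributes`.
  if (key === 'item' && this.attributes.item === undefined && !this.fetching) {
    this.fetchItem();
  }
  return this.attributes[key];
};

var record = new Record({ item_id: 5 });
record.get('item'); // undefined - but the request is now in flight
flushResponses();   // the response comes back and is set on the model
record.get('item'); // the compiled HTML; no new request is fired
```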
Basically, the getter just checks to see if the
item key on the internal
attributes object is undefined (which is only the case the first time the key is accessed, before any data has been loaded), and, if so, calls
fetchItem, which fires off the request for the metadata. A few milliseconds later, the request comes back and the response gets set on the model under the
item key, which, in turn, triggers a
change:item event on the model. If the record is bound to a template, this will cause Rivets to automatically re-get the
item key, which calls the custom getter again. This time, though, the newly-loaded metadata set by the first call to
fetchItem is returned, but
fetchItem isn’t called (ever) again, since the item metadata is already set.
In practice, this means that there will be a short blip when a record is first bound to a template while the initial request is in flight, but, once it finishes, any subscriptions to the
item key will automatically synchronize, and the item metadata will be available immediately for the rest of the lifecycle of the model instance.
omeka-html action context or API format?
I can imagine a number of other situations in which it would be useful to use Omeka as an API that emits the final, synthesized HTML metadata output for Omeka items – the item “show” pages, essentially, but just the item metadata HTML snippets, without the surrounding HTML document and theme markup. Neatline does this by grafting on a simple little Neatline-controlled API endpoint (essentially,
/neatline/items/5) that renders an
item.php partial template and spits back the HTML. I wonder, though, if it could be useful to actually bake some kind of omeka-html action context or API format into the Omeka core – something that would emit just the compiled metadata markup for a given item.
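Purely as a hypothetical illustration – neither this route nor the output parameter exists in Omeka – such a request might look like:

```
GET /items/show/5?output=omeka-html
```

and would respond with just the compiled metadata markup for the item, without the surrounding document and theme markup.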
The JSON API added in version 2.1 is ideal when you need to pluck out specific little chunks of data from the items for programmatic use, but less useful when you just want to show the item metadata. The API consumer could always pass the JSON representation of an item through some kind of templating system that would render it as HTML, but that just duplicates the same logic already implemented (no doubt much more robustly) in the Omeka core.