A poem, typeset inside its own title

I’ve spent a lot of time recently thinking about strange, experimental ways to represent text documents. This grows out of a number of other projects that I’ve had the good luck to work on in recent years, many of which have turned on the question of how to take a chunk of static text and find a useful and appealing way to put it on a computer screen. At first, I thought this was a coincidence, but then I started to wonder if it might actually be a kind of common denominator that binds together lots of different types of DH-y projects, even ones that aren’t directly interested in questions of textual representation. Neatline, for example, started out as just a mapping project, but then we realized that we almost always wanted to say something about the maps we were making, and started brainstorming about ways to wire them up with articles, primary texts, blog posts, book chapters, etc. Suddenly, it was as much about the texts as the maps – how do you tweak the functionality of a text document to make it play well with a digital map?

These kinds of questions are still very new, in the long view. We’ve had thousands of years to think about how to organize texts on physical pages, and really just about 20 years to figure out how to do it on computer screens. And, books are a hard act to follow – they’re extremely mature, and we’ve grown into very sophisticated users of them over time. I think this puts a lot of pressure on digital book design to be really functional and legible right out of the gate, lest it seem like a gimmick. In fact, in most cases, text on screens isn’t actually different from text on pages. This post, for example, is almost completely interoperable with analog text – I could print out a physical copy in just a couple seconds, and the experience of reading it wouldn’t be different in any kind of functional way. This premium on usability means that we haven’t spent much time mucking around with the low-level mechanics of the reading interface. What we’re really doing, a lot of the time, is using computers to migrate texts, to convert ink into pixels, to recreate physical pages on digital screens – but not really diving into the question of how (whether?) computers could be used to rethink the functionality of text documents from first principles.

And, really, this makes a lot of sense. The printed page is put together the way it is for good reasons. The basic building blocks of physical documents are probably governed by pretty low-level constraints – the physiology of vision, the cognitive structure of language understanding, the simple pragmatics of reading, etc. Having said that, I often find myself daydreaming about what kinds of totally crazy, impractical, ludic reading interfaces could be built if the requirements for usability and legibility were completely lifted. This lends itself to weird thought experiments. If I were an alien with no knowledge of books or human culture, but magically knew how to read English and program javascript – how would I lay out a piece of text?

The answers tend to be impractical, by definition, and they often seem most promising as some flavor of electronic literature. I was thinking about this recently, and decided to try my hand at “printing” a text on a screen in a way that would be difficult on a physical page. I played around with a couple of similar projects last fall, creating digital typesettings of isolated little fragments of poetry, but this time I wanted to try to do an entire work – a whole poem, a complete text that could actually be read from start to finish. I decided to try it out with a little poem called Polyphemus that I’ve been playing with for about four years – I wrote a long version of it back in 2010, right after finishing college, rewrote it a handful of times, and eventually typed out a really short version as a block comment in the source code of Exquisite Haiku, a web application for collaborative poetry composition that I built as a 20% project back at the Scholars’ Lab.

The idea was this – take the words in a poem, and physically place them inside the letter glyphs of the poem’s title:
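Neatline stores the layout as vector geometry, so the core geometric question is whether a candidate word position actually falls inside a letter's outline. A minimal sketch of that test – the standard ray-casting point-in-polygon algorithm, assuming each glyph has been approximated as a simple polygon (the function names and the toy glyph here are my own, not anything in Neatline or OpenLayers):

```javascript
// Ray-casting point-in-polygon test: cast a ray to the right of the
// point and count how many polygon edges it crosses. Odd = inside.
function pointInPolygon(point, polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      (yi > point[1]) !== (yj > point[1]) &&
      point[0] < ((xj - xi) * (point[1] - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// A letter shape crudely approximated as a rectangular outline.
const glyph = [[0, 0], [10, 0], [10, 20], [0, 20]];
pointInPolygon([5, 10], glyph);  // inside the letter shape
pointInPolygon([15, 10], glyph); // outside it
```

With a test like this, candidate positions for each word can be accepted or rejected until everything sits inside the glyphs.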


Once all the words were in place, I realized that I wanted some way to “traverse” the poem, to slide through it, to smoothly read the entire thing without having to manually pan the display. This is ironic – really, what I wanted was to be able to read it more normally, which gets back to the question of just how much innovation is really desirable at the level of the basic reading mechanic. But, I decided to stick with it, because I liked something about the way the new layout of the poem jibed with the content. Or, really, the genre – dramatic monologues are always trying to characterize or define the speaker, to be the speaker, in a sense. Here, I realized, this gets encoded into the dimensional layout of the text – the poem is inscribed inside the name of the speaker, it literally conforms to the shape of the word that signifies the person being described.

To give the poem back some sense of linearity, I added a little slider at the bottom of the screen, which can either be dragged with the cursor or “stepped” forward or backward by pressing the arrow keys. Since Neatline always pans the selected record to the exact center of the screen, this results in something similar to commercial speed-reading interfaces like Spritz – you can just leave your eyes focused on the center of the screen, hold down the right arrow key, and “watch” the poem as much as read it:
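Under the hood, the stepper mechanic reduces to very simple state: an index into the ordered list of word-records, clamped at the ends, nudged by the arrow keys, with the display recentering on the current record. A sketch of that logic, independent of the actual Neatline or DOM APIs (the words and names here are hypothetical):

```javascript
// A minimal "stepper" over an ordered list of records. Arrow-key
// handlers would call step(+1) / step(-1); the display then
// recenters on whatever current() returns.
function makeStepper(records) {
  let index = 0;
  return {
    step(delta) {
      // Clamp to the first and last record, so over-stepping
      // at either end just stays put.
      index = Math.min(records.length - 1, Math.max(0, index + delta));
      return records[index];
    },
    current() {
      return records[index];
    },
  };
}

const stepper = makeStepper(['first-word', 'second-word', 'third-word']);
stepper.step(1);  // advances to the second record
stepper.step(-5); // clamped back to the first record
```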

The dream – the deep page

These projects are fun to make, but they’re brittle and can’t really scale much beyond this. OpenLayers does a good job of rendering the really high-density geometries that are needed to represent the letter shapes, but the content management is slow and difficult – Neatline is a GIS client at heart, not a text editor, and the workflow leaves a lot to be desired (Polyphemus is a brisk 94 words, and I bet I spent at least 10-15 hours putting this together). Really, this is a simple attempt to prototype a much larger and more interesting project, a generalization of this type of deeply-zoomable “page” that would be much easier to create, edit, and navigate.

I’m not totally sure what this would look like, but I think the crux of it would be this – a writing and reading environment that would treat depth or zoom as a completely first-class dimension. I’ve had a vague sense that something like this would be interesting and useful for a while, but I couldn’t really put my finger on exactly why. Then, a couple weeks ago, I stumbled across a fascinating paper abstract from DH2014 that surveys a number of different variations on exactly this idea, and frames it in a way that was really clarifying for me. In “The Layered Text. From Textual Zoom, Text Network Analysis and Text Summarisation to a Layered Interpretation of Meaning,” Florentina Armaselu traces this idea back to work by Neal Stephenson and Barthes, and proposes the notion of a “z-text” – a collection of vertically stacked text fragments, each expanded on by the snippet that sits below it. Reading takes place on two dimensions – forward, along the conventional textual axis, and also down into any of the fragments that make up the text.
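As I understand the abstract, a z-text could be modeled as an ordered sequence of fragments, each optionally carrying another sequence below it – which makes the two reading axes easy to sketch. The data shape and function names here are my own guesses at the idea, not Armaselu's formalism:

```javascript
// A z-text fragment: a snippet of text, plus an optional deeper
// layer of fragments that expand on it.
function fragment(text, below = []) {
  return { text, below };
}

// Reading "forward": move along the snippets of a single layer.
function readAcross(layer) {
  return layer.map((f) => f.text).join(' ');
}

// Reading "down": descend into the layer beneath one fragment.
function readDown(f) {
  return readAcross(f.below);
}

const ztext = [
  fragment('The poem.', [
    fragment('A dramatic monologue,'),
    fragment('typeset inside its own title.'),
  ]),
  fragment('The map.'),
];

readAcross(ztext);  // the surface layer
readDown(ztext[0]); // the expansion below the first fragment
```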

Poetry aside, this could be fantastically useful whenever you want to formalize a hierarchical relationship among different parts of a text, or create some kind of graded representation of complexity or detail. A footnote or annotation could sit “below” a phrase, instead of being pushed off into the margin or to the bottom of the document. Or think about the whole class of organizational techniques that we use to make it easy to engage with texts at different levels of detail or completeness. We start with a table of contents, which then feeds into an abstract, an introduction, an executive summary – each step providing a progressively more detailed fractal representation of the structure of the document – and then finally provide the complete treatments of the sections. A deeply zoomed page would formalize this structure and make it unnecessary to fall back on the “multi-pass” approach that regular, one-dimensional documents have to rely on, which seems inefficient and repetitive by comparison. This seems like a case where computers really could improve on a fundamental limitation of printed texts, which is exciting.
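Formalized that way, the whole table-of-contents / abstract / full-text progression collapses into a single depth parameter: render the same layered document cut off at layer one for a contents page, layer two for a summary, and so on. A hedged sketch, using a hypothetical { text, below } fragment shape of my own invention:

```javascript
// Render a layered document down to a given depth: depth 1 yields a
// table of contents, deeper values reveal progressively more text.
function renderToDepth(layer, depth, indent = '') {
  if (depth === 0) return '';
  return layer
    .map((f) =>
      indent + f.text + '\n' +
      renderToDepth(f.below || [], depth - 1, indent + '  '))
    .join('');
}

const doc = [
  { text: '1. The Cyclops', below: [
    { text: 'In which the speaker introduces himself.', below: [] },
  ] },
  { text: '2. The Sea', below: [
    { text: 'In which the island comes into view.', below: [] },
  ] },
];

renderToDepth(doc, 1); // just the section headings
renderToDepth(doc, 2); // headings plus one layer of summary
```

The point is that the “multiple passes” are no longer separate documents to be written and kept in sync – they are just different cuts through one structure.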

From this perspective, it actually makes a lot of sense that technologies that were originally designed for mapping (Neatline, OpenLayers) turn out to be weirdly good at mocking out rickety little approximations of this idea. If you think about it, digital maps do exactly this, just in a visual sense. You often start out with a broad, low-detail, zoomed-back representation of a place – say, the United States – and then progressively zoom downwards, losing the comprehensiveness of the original view but gaining enormous detail at one place or another – a state, a city, a block. The z-text is exactly the same idea, just applied to text. The question of how to build that interface in a way that’s as fluid and intuitive as a slippy map is totally fascinating and, I think, unsolved.