Three-dimensional renderings of text

I’ve been thinking a lot recently about how to “augment” text. Looking back at the projects I’ve worked on over the last two years, I realized that all of them can essentially be thought of as efforts to use software either (a) to graft some kind of additional functionality onto text or (b) to provide some kind of “view” or representation of text that wouldn’t be possible with analog technologies. Public Poetics tried to formalize the structure of New-Criticism-esque close readings by making it possible to “physically” attach threads of conversation onto the poem; as Neatline evolves, it’s increasingly starting to morph into a text-focused application, a kind of geo-spatial footnoting system that links portions of a text to locations on a map. Oversoul is much more radical and transgressive to the extent that it stops working with extant texts and instead tries to formalize, in the shape of interactive software, the classical semiotic model of language construction – choose words from a paradigmatic pool of signifiers; string them together into syntagms.

Really, all of these projects make pretty heavy-lift interventions, in the sense that they construct completely new systems and capabilities around texts (or create texts from scratch). This is why they’re interesting. Yesterday, though, I was trying to think of ideas for projects in this domain that would be simpler and more limited (by design) in the depth and complexity of the intervention. Setting aside platforms for adding new content next to or adjacent to texts (Public Poetics, Neatline), what remains undone in the realm of just presenting texts, of showing what’s inherently present in the thing itself?

I’ve always been interested in the physicality of language. I don’t really mean that in the sense of the history of the book as a technology, but more in the sense that printed language has a structural, dimensional, measurable embodiment on the page as a collection of letters – in the end, we’re dealing with matter, with a highly specific arrangement of ink that has mass and volume.

Volume. That got me thinking – imagine a dead simple web application where you could enter a string of text (picture a basic text area that would allow characters and line breaks), and the application would use a JavaScript 3D library like three.js to render the text inside of an endless, pure-white environment. Then you could traverse the environment with standard mouse/WASD movement, perhaps also with E moving the perspective vertically up (perpendicular to the current view angle) and Q moving down – this would make it easier to float “downwards” along vertically tall texts without needing to “look” down and use W to move forward, which would interrupt the view of the text itself.
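The movement scheme sketched above boils down to a bit of vector math, independent of whatever library ends up doing the rendering. Here’s a minimal, framework-agnostic sketch of the per-frame position update – W/S along the view direction, A/D along the camera’s right vector, and Q/E along the camera’s up vector so that vertical drift never disturbs the view angle. All the function and parameter names here are invented for illustration, not taken from three.js:

```javascript
// Hypothetical sketch of WASD + Q/E "flight" movement.
// position, forward, up are {x, y, z} objects; forward is the view
// direction, up is the camera's local up vector.

function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function cross(a, b) {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}

// keys: { w, a, s, d, q, e } booleans for currently-held keys.
// speed is in world units per second; dt is the frame time in seconds.
function step(position, forward, up, keys, speed, dt) {
  const f = normalize(forward);
  const u = normalize(up);
  const right = normalize(cross(f, u));
  let dx = 0, dy = 0, dz = 0;
  const add = (v, s) => { dx += v.x * s; dy += v.y * s; dz += v.z * s; };
  if (keys.w) add(f, 1);
  if (keys.s) add(f, -1);
  if (keys.d) add(right, 1);
  if (keys.a) add(right, -1);
  if (keys.e) add(u, 1);   // float straight up; view angle untouched
  if (keys.q) add(u, -1);  // float straight down
  return {
    x: position.x + dx * speed * dt,
    y: position.y + dy * speed * dt,
    z: position.z + dz * speed * dt,
  };
}
```

In an actual three.js app this logic would run inside the render loop, feeding the result into the camera’s position each frame.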

I imagine two sliders that would make real-time adjustments to the environment – one that would change the “depth” of the rendered letterset, and another that would change the scale of the user’s perspective relative to the size of the letterset (in other words: how large are the letters relative to your perspective? How long does it take you to float along a sentence – a couple of seconds, or a couple of minutes? How huge or minuscule is the text object?).
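A rough sketch of how those two sliders might map onto scene parameters. Everything here is an assumption for illustration – the slider ranges, the `maxDepth` and `range` constants, and the function names are all invented. The one design choice worth noting: because the scale slider is meant to span “minuscule” to “huge,” an exponential mapping gives usable resolution at both extremes, where a linear one would cram the entire small end into the first sliver of the slider:

```javascript
// Hypothetical slider mappings; slider values assumed to be in [0, 1].

// Extrusion depth of the letters, in world units – linear feels fine here.
function letterDepth(slider, maxDepth = 10) {
  return slider * maxDepth;
}

// Relative scale of the text versus the viewer. Exponential so the
// midpoint (0.5) leaves the text at "life size" (factor 1), with the
// extremes reaching 1/range and range.
function viewerScale(slider, range = 1000) {
  return Math.pow(range, slider * 2 - 1); // range^-1 .. range^1
}

// Rough answer to "how long does it take to float along a sentence?"
// sentenceWidth in world units at scale 1; speed in units per second.
function traversalSeconds(sentenceWidth, scale, speed = 5) {
  return (sentenceWidth * scale) / speed;
}
```

Dragging the scale slider from the midpoint toward 1 would stretch a two-second float along a sentence into minutes, which is exactly the effect described above.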

Even better, but much more complex and difficult: imagine taking this idea and converting it into a freeform, “multiplayer,” three-dimensional, infinitely-large canvas where anyone could come and plot three-dimensional text at any orientation, in any size, and at any location in the endless volume of the application. This could then be traversed in a similar way – a constantly-changing cloudscape of floating three-dimensional text.
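For the multiplayer version, the core of the data model would be small: each plotted text is just a string plus a placement in the shared volume. A speculative sketch of what one such record might look like – all field names are invented, and a real app might prefer quaternions over Euler angles for orientation:

```javascript
// Hypothetical record for one piece of plotted text in the shared canvas.

function makeTextObject(content, position, rotation, size) {
  return {
    content,                // the string itself, line breaks included
    position,               // { x, y, z } location in world units
    rotation,               // { x, y, z } Euler angles, in radians
    size,                   // letter height in world units
    createdAt: Date.now(),  // for ordering the evolving cloudscape
  };
}

// Records are plain JSON, so syncing between clients reduces to a
// serialize / broadcast / deserialize loop over some transport
// (WebSockets, say).
function serialize(obj) { return JSON.stringify(obj); }
function deserialize(json) { return JSON.parse(json); }
```

Keeping the record this flat means the rendering side stays a pure function of the record set: every client that receives the same JSON reconstructs the same cloudscape.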