[Crossposted from scholarslab.org]
One of the reasons that programming is so invigorating is that software is constantly leaning forward into further elaboration and complexity. Every feature is the precursor of a hundred possible new ones. Code is always the double-delight of what it is and what it could become.
Prism currently occupies the place in the evolution of a software project where there’s enough functionality in place to really engage with the shape of the idea, but still enough unfinished that there’s space for broad, exploratory thought about the future direction of things. Annie did a fantastic job implementing a core interface that allows users to apply concept verticals (“highlighters”) to texts. Looking forward, the motivating question is how this capability can be leveraged to produce concrete scholarly outcomes – ideally, new understandings of texts.
The first public release seems like an excellent opportunity to start trying to whittle down this question to a set of specific research goals to guide future development. To me, Prism points towards two general lines of inquiry:
- How can experiments in collaborative markup capture uncommon or dissenting readings? The concept of crowdsourcing – and, really, the social internet in general – has proven highly adept at extracting majority opinions, at taking the pulse of a group of people. What is “liked” by the community of participants? Where is there agreement? Always implicitly contained in the data that yields these insights, though, is information about how individuals and dissenting groups diverge from the majority consensus.
Usually, in the context of the consumer web, these oppositions are flattened out into monolithic “like” or “dislike” dichotomies. Tools like Prism, though, capture structurally agnostic and highly granular information about how users react to complex artifacts (texts – the most complex of things). I think it would be fascinating to find ways of analyzing the data produced by Prism that would illuminate the places where the experimental cohort profoundly disagrees. These disagreements could serve as productive irritants for criticism. Why the disagreement? What is the implicit interpretive split that produced the non-consensus?
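One simple way to surface those disagreement hotspots is to score each word by the entropy of the highlighter categories applied to it: zero entropy means consensus, and higher values mean the cohort split across categories. The sketch below is purely illustrative – the data format and category names are hypothetical, not Prism's actual export.

```python
from collections import Counter
from math import log2

def disagreement(labels):
    """Shannon entropy (in bits) of the label distribution for one word.
    0.0 means total consensus; higher values mean more disagreement."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical export: word position -> the highlighter category each
# participant applied there (the format is invented for illustration).
markings = {
    0: ["rhetoric", "rhetoric", "rhetoric", "rhetoric"],   # consensus
    1: ["rhetoric", "emotion", "emotion", "description"],  # three-way split
    2: ["emotion", "rhetoric", "emotion", "rhetoric"],     # even two-way split
}

scores = {pos: disagreement(labels) for pos, labels in markings.items()}

# Word positions ranked from most to least contested.
hotspots = sorted(scores, key=scores.get, reverse=True)
```

Ranking by this score turns the raw markup into a reading list of contested passages – exactly the places where the interpretive split is worth asking about.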
- Continuing with the concept of the “experiment”: Prism points at the provocative possibility that literary study could literally take the form of experiments, similar in structure to the “studies” conducted in disciplines like research psychology, sociology, and experimental philosophy. Literary criticism generally asks questions about how texts can be read. The critic conjures highly creative statements of meaning that often stake their value claim on the extent to which they are unexpected, unanticipated, not obvious, or atypical. Of course, this mode of introspective exegesis is ancient, beautiful, and permanent – an important counterweight to modern modes of loud, fast thinking. Far from replacing these traditional efforts, tools like Prism hold the promise of extending and advancing them in profoundly new ways.
Prism provides information about how texts actually are read. I think it would be fascinating to take this to the next level and stage formal experiments in which subjects are presented with a text and asked to mark it up with a small number (even just one) of carefully selected, highly controlled terms. Done responsibly and with a healthy aversion to the sugary siren call of “Data” in a field that’s fundamentally in the business of studying art, I think that this could provide fascinating insights about everything from the concept of Kantian “expertise” in the formation of aesthetic judgments to questions about how people of different ages, ethnicities, genders, and disciplinary affiliations engage with texts. How does a college freshman read differently from a fifth-year English graduate student? How do physicists read differently from philosophers?
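With a single controlled highlighter, comparing two cohorts reduces to comparing how often each group marks each word. A minimal sketch of that comparison, with invented toy data (the participant identifiers and marking structure are assumptions, not Prism's format):

```python
def marking_rate(markings, n_participants):
    """For each word position, the fraction of participants who
    applied the (single) controlled highlighter there."""
    return {pos: len(users) / n_participants for pos, users in markings.items()}

# Hypothetical results: word position -> set of participants who marked it.
freshmen = {0: {"f1", "f2"}, 1: {"f1"},             2: {"f1", "f2", "f3"}}
grads    = {0: {"g1"},       1: {"g1", "g2", "g3"}, 2: {"g1", "g2", "g3"}}

f_rate = marking_rate(freshmen, 3)
g_rate = marking_rate(grads, 3)

# Words where the two cohorts diverge most in how often they mark them.
divergence = {pos: abs(f_rate[pos] - g_rate.get(pos, 0.0)) for pos in f_rate}
most_contested = max(divergence, key=divergence.get)
```

A real study would of course need proper sample sizes and significance testing, but even this crude per-word rate difference shows how the question “do these two groups read differently?” becomes an empirically answerable one.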
Either way, I can’t wait to see where it all goes.