[My first blog post for CCR 733]
Digital humanities, along with a good portion of academia at large, is a bit like RoboCop in RoboCop 3. For those unfamiliar with the brilliantly corny '90s flick, the plot is essentially this: a large, militarized corporation is trying to evict people for an international business deal. To get RoboCop on its side, the company tries to tone down his human elements and make him more susceptible to programmed orders. Of course, this doesn't work, and RoboCop joins the rebels, later backed by police and blue-collar Detroit citizens, to fight the corporate army.
“Humanity,” in the movie, seems at odds with programming. Or more precisely, humanity lets RoboCop morally resolve his conflicting programming when law enforcement is no longer on the side of the people. RoboCop, unlike the fully robotic assassin he fights in the film, is human. And through that humanity, he can act with compassion, violating immoral orders.
I think a similar fear is lurking in academia, especially in the humanities: the fear that emotions and all the human elements that speak to our “human condition” are being vacuumed out by technology and neoliberal policy.
As Alan Liu writes in “The Meaning of the Digital Humanities”:
The general crisis is that humanistic meaning, with its residual yearnings for spirit, humanity, and self—or, as we now say, identity and subjectivity—must compete in the world system with social, economic, science-engineering, workplace, and popular-culture knowledges that do not necessarily value meaning or, even more threatening, value meaning but frame it systemically in ways that alienate or co-opt humanistic meaning. (419)
The quote itself is pretty clear. And the erosion of the humanities through technology and workplace culture is not new. Matthew Arnold writes beautifully on such anxiety, as Alex Reid alludes to here.
But regardless of its age, the question remains urgent in DH and DH-related fields. And while I can’t speak for a whole field, as a PhD candidate with an interest in digital rhetoric, I often feel what Arnold describes in “Stanzas from the Grande Chartreuse”: “Wandering between two worlds, one dead, / The other powerless to be born.”
I’ve seen older systems of labor gutted and shifted. I’ve looked through the grim numbers for higher education hires in the humanities. Such a world may not be “dead,” but it is having some serious growing pains.
And the other world, “powerless to be born”? Here, I often run up against different sites of resistance. Sometimes the resistance is one of method or imagination: the “theoretical poverty” of humanities computing, in which people use computers simply to speed up more traditional research, as Willard McCarty describes. But such work is still productive. Sometimes traditional elements of academia also clash with segments of DH or related thinking, as in new materialist vs. historical materialist debates.
But, to evoke RoboCop again, I’m more concerned with neoliberal attitudes than anything else.
I think Willard McCarty puts it clearly: “Better techne should mean greater freedom to change the material conditions of life for better or worse, but discussions involving it so often imply an inevitable outcome, therefore a narrowing rather than expanding of possibilities” (41). To me, this “narrowing . . . of possibilities” evokes Heidegger’s fear of the “essence of technology,” a way of being that reduces everything to “stock” or “standing reserve” [Bestand]. As he describes it, a forest becomes a stock of lumber, a field becomes a vein of coal, a person becomes a “human resource.”
While Heidegger was skeptical of technology and the “essence” behind it, I’m more concerned with that essence itself, independent of the material technologies. We often do become the “tools of our tools,” as Thoreau complained. But we also make ourselves into tools through our own cultures and outlooks. And while it’s a contentious point, neoliberal policies, especially in certain settings, often tend toward exactly that result.
At least that is how my friends, colleagues, and I feel sometimes: like “human resources,” to use Heidegger’s term.
So what worries me about DH is not its current state but its emerging potential to further the sort of neoliberal “enframing” noted above. We see this in subtle ways, like the participation gaps in concrete, hands-on access to technology. We also see it in online spaces, with the rhetoric of users or the limits and “politics” imposed by interfaces, as this article notes. We see it in the sorts of skills that companies (and by extension, school administrations) value, like learning how to work office programs and social media without reflecting critically on their use.
So we are a bit like RoboCop these days, hybrids of humanity and technology pressured to turn down our “humanity.” But humanity and technology are not at odds. As Liu points out, “Knowledge is an ice-skater’s dance on a slippery epistemic surface, on which neither the human nor the machine—the dancer nor the skates—alone can stand” (416).
But when stepping from knowledge to policy, I think the issue cuts deeper, as questions of the humanities and technology get wrapped up in privilege, oppression, and the daily needs of students and faculty.
DH is in an important position to rally a tangled, unruly body of resources to address technology’s potential to exclude or oppress. And as groups like HASTAC or spaces like Hybrid Pedagogy show, such work is already being done.
To get more concrete: this debate forces us to examine what we want being “human(e)” to mean as we consider our materiality and the policies we can implement in classrooms or writing programs. Though such policies may (at times) be at odds with our institutions, with the cultures we encounter in different spaces, or with our own internal beliefs and “programming,” to be hybrid means to be aware of our hybridity and to steer it toward compassion, critical thinking, thoughtful inclusion, access, and safety.