As I’ve said in the past, the start of the semester feels more “New Year” than Jan. 1, and I want to rekindle the tradition of posting at the start to set things on a productive path and provide a space to reflect.
But first, an update.
“There is a pleasure in the pathless woods,
There is a rapture on the lonely shore,
There is society, where none intrudes,
By the deep sea, and music in its roar:
I love not man the less, but Nature more,
From these our interviews, in which I steal
From all I may be, or have been before,
To mingle with the Universe, and feel
What I can ne’er express, yet cannot all conceal.”
Starting off a reflection about social media with a quote from Byron about the solitude of nature seems counterintuitive. A “society, where none intrudes” clashes with the usual rhetoric surrounding the networked culture of digital spaces, and the “lonely shore” and “pathless woods” probably lack WiFi–or broadband.
But bringing in Byron highlights the paradox of place that the Internet and digital technology bring. We are networked selves, accessing the Internet in multiple ways from multiple places or portals, as our physical self continues to take up space and air “irl.” And much like the narrative locales of Romantic poetry, many digital spaces are constructed and emergent.
Byron’s saga traces the physical geography of Southern Europe, but Byron’s textual place–his “pathless woods” and roaring sea–arrives at us in ephemeral language through his poetry. They are authored locales. Phrased another way, one can visit the spaces where he allegedly traveled while writing Childe Harold’s Pilgrimage, but those irl locations—the rocks, the rivers, the trees, the moss-laced logs—all of these differ from the locations that we envision when reading or hearing his poetry—nor are they constant over time, like the printed word. Language both signifies and creates locales.
Similarly, I think that the quality of born-digital space forces us to look at space as an ephemeral, emergent gathering. Websites may have a URL pinning them down and servers in the world sucking up power and taking up space, but we largely experience them more subjectively. In his later work, Martin Heidegger discusses the notions of “location” (or “locale”) and “space.” As he writes in “Building, Dwelling, Thinking”:
“The location is not already there before the bridge is. Before the bridge stands, there are of course many spots along the stream that can be occupied by something. One of them proves to be a location, and does so because of the bridge. Thus the bridge does not first come to a location to stand in it; rather, a location comes into existence only by virtue of the bridge.”
The bridge in this example, by being constructed, is opening a “location,” a significant site where different elements can gather and be. One can look at the bridge as a concrete space of possibility, a site that can direct meaning at some level in ways that an unmarked, undeveloped area cannot. Before the bridge exists, the area is just a “spot.” Things are happening in it, but nothing is built there. And with no building–or inscribed significance, like a park or childhood memory–the place feels anonymous.
On the one hand, this is obvious, and Heidegger’s obscure thinking may over-complicate the matter. But I think it gets at something important: how construction creates a fundamentally new reality at a site. Before the bridge, the space was simply “nature” or a river bend. Now, the bridge may have a name. It serves a human purpose for commerce. Lovers add locks to it. It may be in a film. It may represent a certain style or culture. It interacts with the nonhuman environment, deflecting rain and providing shelter for animals.
In Heidegger’s thought, a “thing,” like a bridge, is not an inert site of stone and steel. Drawing from the older use of thing in Icelandic and Germanic languages, “Ting” and “Ding” respectively, a thing is a site for an assembly, a gathering of people to reach decisions. With thinkers like Bruno Latour and Thomas Rickert picking up on this use more recently, I think we can look at Internet architecture with a similar dynamism.
A site is often even more of a “thing,” in this sense, than Heidegger’s bridge. It is a place for gathering. And in that gathering, a fundamentally location-attuned way of being arises through the interplay of different forces. As Nancy Baym argues in “The Emergence of On-Line Community,” online communities are emergent rather than dictated. As she writes, “Social organization emerges in a dynamic process of appropriation in which participants invoke structures to create meanings in ways that researchers or system engineers may not foresee.” Participants inherit certain structures or systems, Baym points out, and users dwell in and add to these initial elements to construct social practices and communal spaces. Location emerges. The community of individual authors writes and is written by the location.
But I want to turn, particularly, to authorship.
As Jessica Reyman argues in “Authorship and Ownership,” such spaces are often “co-authored” by algorithms and multiple people. By drawing from user data—as they point, click, and browse the digital spaces—algorithms tailor ads, curate feeds, and allegedly cocoon users in “filter bubbles” of easy-to-consume content, all the while drawing metadata for marketing and research. Today, this data mining and site curation is commonplace, and though scandals involving Cambridge Analytica and others have brought renewed scrutiny, Reyman offers an important perspective. She argues that users have a right to this data: they are the ones creating it, while corporations profit off it. This sort of free labor, sometimes fit under the term “playbor,” abounds on the Internet. As Andrew Ross argues, “The social platforms, web crawlers, personalized algorithms, and other data mining techniques of recent years are engineered to suck valuable, or monetizable information out of almost every one of our online activities” (15).
The relationship between authorship and labor has a pronounced history leading back to the Statute of Anne in 1710 and the tensions of “intellectual property.” The image of the gentlemanly author plucking inspiration from muses and native genius to create new ideas, taken down in print, remains a sticky one. Today, if one follows Reyman’s argument, we are all authors at some level, as our being-in-the-(digital)-world adds to that world, co-authoring these spaces through our content creation and metadata. Considerable playbor takes place in the form of Instagram posts, linking to articles, fanfiction, videogame modding, and more. Indeed, part of the reason that videogame companies endure the cottage industry of streamers and walkthroughs is the free publicity it provides, and it has been commonplace since the 90s to collect and re-release content created by fans for company profit. Turnitin also owns student work, creating a financial empire from the labor of student writers.
In the more material sense, in terms of dollars and cents, this is a problem, but I want to take it to a somewhat deeper level–first addressing the authoring on the other side.
As philosopher Daniel Estrada wrote in a Medium article on filter bubbles, “in a very deep sense, you are your bubble. The process of constructing a social identity is identical to the process of deciding how to act, which is identical again to the process of filtering and interpreting your world.” While I would argue that identity is more than “the process of deciding how to act,” a point that I reckon Estrada would likely recognize, I think it definitely plays a central role. Sartre put it best: “We are our choices.” Our choices have echoes, and sometimes those echoes etch our being–or how others view our being.
But Estrada goes on: “any constraints imposed on your filter are also constraints on your possibilities for action, constraints on the freedom of your decisions and the construction of your world. If you are your bubble, then any attempt to control or manipulate your bubble is likewise an attempt to control you.” As technology ethicist Tristan Harris puts it, you may get to decide what you eat in these platforms, but they provide the menu.
Again, this has implications as we consider our selfhood or identities. While for Kant the self is largely insular, cognitive, sensory, and self-contained, thinkers continue to argue–from a Buddhist metaphysics of emptiness to Diane Davis in Inessential Solidarity and Thomas Rickert in Ambient Rhetoric–that the self is more osmotic or relational. It is permeable and messy, bundled and blurry, oozy and diffuse, yet localized by language and materiality. As Rickert puts it, we don’t just live in a world, we are enworlded.
And here come the algorithms. These too, if you want to go this way, are part of us, and so are the digital pathways they “co-author” from our metadata. To use Kant’s term, this digital world informs–or possibly is–our phenomenological experience and the self that this experience informs. In many cases our digital selves are ourselves—networked and saturated by technology and the nameless bots and programs in the background. And as both Reyman and Estrada point out, we don’t really own, or fully understand, these algorithms. Eusong Kim has argued about trending, for example: “We don’t know why something trends. The algorithm is a locked secret, a ‘black box’ (to the point where MIT professors have built algorithms attempting to predict trending tags). The Fineprint: Trending is visibility granted by a closed, private corporation and their proprietary algorithms.”
This leads me back to Reyman’s view on data and our ownership of it. As we live in a more English model of copyright, economics and law tend to steer the conversation. But as this digital composing infuses our lives, both the deliberate messages we send out and the co-authoring of our data, issues of ownership, autonomy, and originality come to the forefront—especially that of ownership. Who owns our data is not just an issue of privacy; it is an existential one. As our being-in-the-world co-authors and becomes entangled with our personas and places online, so do our selves. Just as England wrestled with the intellectual labor and textual ownership of traditional authors, we face a world in which our own ideas and our own digital being have become monetized and divested from our hands. Despite efforts by Facebook and others to allow us to see our data or have more input on our privacy and feed, a fundamental structure of black-boxing already exists, persistent through law and custom, to own and profit from our online meanders and statuses—and filter our own experience and online localities.
As we make paths in this pathless wood, Facebook profits and shapes the woods around us.
This afternoon, the Senate–after weeks of rancor and the bathetic hem-hawing of folks like Flake–will vote in Kavanaugh as the ninth justice of the current Supreme Court. I should technically say that they “likely” or “all-but-certainly” will, but precision devalues the sheer force pushing confirmation. So, unless God himself smites the Capitol Monty-Python style, hello Justice Kavanaugh.
I haven’t written here in a while, though I have been meaning to, and perhaps those now-unfinished posts may make their way up here. But like many, I have found my thinking consumed these past two weeks by the Kavanaugh confirmation, long-since troublesome, after Ford’s accusations and subsequent testimony. Considered a referendum on the #MeToo movement, the debate over Kavanaugh certainly represents a crucible-rupturing focus on gender politics in an already fraught era. It has also brought up issues of judicial impartiality and temperament, the institutional credibility of the court, and the stability of centrist and liberal provisions eked out over the past decades.
All of these are important conversations, as are the testimonies of Ford and Kavanaugh, the political background of Kavanaugh, the procedural issues of the confirmation, the veracity of his two other accusers, and many more issues. However, I mainly want to focus on the arguments of those in favor of the Kavanaugh vote, as I see them.
I want to take these at face value, though I suspect, like so much in this era, they lack the sincerity of their delivery. I do this knowing that it makes no difference. Having called public servants, donated to causes, talked with friends, and gone to protests–done all in my current power, in other words–I feel that it may at best be an intellectual exercise. Nevertheless, as a teacher and student of rhetoric, I think it’s important to look at the arguments that govern major political and policy decisions and define our country for our lifetimes and beyond.
As such, I see three main arguments, summarized and addressed below. And, yes, I am biased. I do not want Kavanaugh, but being biased does not preclude academic fairness. And frankly, I don’t think these arguments deserve that fairness, but many Americans (cough, Republicans) support him, so here we go.
My main purpose for today is to explore games as a type of media and use a new materialist focus—predominantly from Laurie Gries—to highlight their capacity as circulating interactive materials. In rhetoric, discussion of games has tended to focus on their procedural arguments or on a potential culture that they may help foster. Looking at games as dynamic objects as they circulate, however, showcases how their qualities as media, including their procedurality, interact with other actors to co-create ongoing media ecologies. Games represent a distributed potential, starting with participants in the design phase and spilling into their co-interaction with players and platforms out in the world. Focusing on their circulation helps one see this more expansive engagement.
In her tracing of the famous Obama Hope image, Gries pursues and outlines a new materialist approach to circulation, building from the work of rhetoric scholars, like Jenny Rice, and new materialists, like Jane Bennett. Her framework focuses on the futurity and consequences of rhetorical artifacts as they move through the world. As she writes, “Rhetoric . . . is a process that unfolds and materializes with time and space. We can thus learn a lot about rhetoric, I imagined, by focusing on the material consequences that unfold during futurity — those spans of time beyond the initial moment of production and delivery.” She describes how the Obama Hope image helped localize and form networks through its affective force. To that end, she outlines six principles, which I would like to briefly review:
While Gries uses this framework to engage primarily with visual rhetoric in her project, they also provide a flexible, illuminating approach to other media and modalities, including videogames.
Before getting into my example with The Sims, though, I wanted to briefly highlight some central tenets of video games as media. This is still an ongoing project, but I see these four qualities as central to games and part of their unique rhetorical capacities as they circulate:
To showcase these qualities, I want to use chess as an example. In The Grasshopper, game theorist Bernard Suits distinguishes between a game and its “institution,” using chess to clarify. The institution of chess, he theorizes, contains the abstracted rules and associations that carry across contexts and sessions. Excluding house rules and personal changes, pieces move the same way and win conditions remain largely codified, set down in “the rules,” but each session is different. Here, we see these four qualities in action. The game is active, requiring it to be “played,” such playing presents a complex system where a range of possibilities emerge, the session is procedurally mediated, and players are in dialogue with those procedures—and one another, in this case.
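Suits’ distinction can be loosely rendered in code: one shared, fixed rule set (the institution) instantiated in many distinct sessions. This is only an illustrative sketch, not anything from Suits; all the names and simplified “rules” here are my own invention.

```python
# Loose illustration of Suits' distinction: the "institution" of chess as one
# fixed, shared rule set, and each session as its own unfolding of those rules.

INSTITUTION = {
    # Abstracted rules that carry across contexts and sessions (simplified).
    "board": "8x8",
    "win_condition": "checkmate",
    "knight_move": "L-shape",
}

class Session:
    """One concrete playing of the game: same rules, different history."""
    def __init__(self, white, black):
        self.rules = INSTITUTION        # every session shares the institution
        self.players = (white, black)
        self.moves = []                 # but each session's record differs

    def play(self, move):
        self.moves.append(move)

a = Session("Ana", "Ben")
b = Session("Cy", "Dee")
a.play("e4")
b.play("d4")
```

Here `a.rules is b.rules` holds while `a.moves` and `b.moves` diverge: one codified institution, many emergent sessions.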
Reflecting these capacities, Tanja Sihvonen distinguishes between the game as it “comes off the shelf” or gets released as a title and the game as it circulates in the world, getting modified, producing derivative texts, etc., as “process.” As she writes, “I think of the COTS (commercial off-the-shelf) game title as ‘game-as-product’ and the game materialised in gameplay as ‘game-as-process’. . . . The duality of game-as-product and game-as-process reveals that a game does not only consist of a material aspect, the algorithm, but it also entails an embodied experience, the act of play. . . . In this sense, all games are constructed of rules and rulesets, which contain the potentiality of the game, game in potentia, but only the actual play of a game brings it to full existence, game in actio. The game has to be experienced by its player, interacting with the rules and the provided virtual environment, in order for it to achieve its actuality. The potentiality of a game can thus be considered as a designed formal system that is able to direct and predict certain experiences the player is likely to undergo without resorting to simplistic determinism.”
While the traditional uptake of procedural rhetoric has tended to focus on the former—games as rhetorical arguments designed to make largely claims-based arguments—games as process also prove rhetorical in unique ways, considering their Heraclitean qualities. Much like Gries’ Obama Hope image, games present objects that inspire a range of behaviors, regarding play and beyond. Her framework—geared toward circulating objects, consequences, futurity, and changing contexts—aligns with this more ongoing game-as-process framework. To showcase how this may look, I will use The Sims as an example, tracing its creation and uptake.
When looking at the creation of The Sims, I think it’s important to look at its origins and development, move to its design and mechanics, then look at its uptake. My materials for this work come from a range of secondary sources, including published interviews and work from other scholars, along with archival materials from The Strong Museum in Rochester, particularly Will Wright’s design notebooks. I also used the Internet Archive and my own childhood materials. While any of these phases could go more in depth, I just wanted to hit the highlights.
Will Wright originally envisioned The Sims from three main sources: his longstanding interest in human psychology and architecture, the dollhouse he and his daughter Cassidy played with, and a 1991 fire that swept through Oakland Hills, where Wright and his family lived, destroying his home and possessions. By 1993, ideas for the game had congealed into Wright’s working title, Home Tactics: The Experimental Home Simulator. He was not the first with a similar idea, with the 1985 Activision game Little Computer People already sporting a domestic focus and open play condition, but that game was not commercially successful. Maxis’ board of directors was reluctant to accept Wright’s project with its domestic focus, despite the success of some of Wright’s other games. Wright had to wait until EA bought Maxis in 1997, when Wright and his team of over 50 programmers got to work. The Sims came out on 4 February 2000, quickly surpassing Myst (1993) as the best-selling PC game of all time. The Internet also played an important role, with Maxis launching the site in advance and allowing press releases and fan excitement to flourish.
In his notebooks, Wright often comes back to two main elements: (1) the goal of happiness and (2) a people v. things conflict. He wanted the game to be a satirical critique of capitalism, with players fulfilling their needy Sims’ constant desire for more stuff. Mechanically, that satire is largely lost: buying stuff, not just building relationships, is central to Sims’ happiness, which rests on a four-part process. First, every Sim has a series of needs called “motives,” like hunger or hygiene, which dictate their happiness and degrade over time. Higher motives mean happier Sims. Second, players have their Sims use objects in the game to meet those needs. A shower, for instance, helps hygiene. Third, more money lets players buy objects that help these motives more efficiently. A more expensive shower, for example, provides more “hygiene” than a cheaper model. And fourth, one gets more money by advancing in a career across different career paths, themselves modeled after real-world versions.
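The first steps of this loop can be sketched in a few lines of toy code. To be clear, this is my own illustration, not the game’s actual code: the motive names, decay rates, and object stats are all invented for the example.

```python
# Toy sketch of the motive loop described above (all values invented).
SIM_MOTIVES = {"hunger": 100, "hygiene": 100}  # higher motives = happier Sim
DECAY = {"hunger": 8, "hygiene": 5}            # Step 1: degradation per tick

OBJECTS = {
    # Step 3 in miniature: the pricier object restores its motive more efficiently.
    "cheap_shower":  {"motive": "hygiene", "restore": 20, "price": 650},
    "deluxe_shower": {"motive": "hygiene", "restore": 40, "price": 1300},
}

def tick(motives):
    """Step 1: every motive degrades over time."""
    for name in motives:
        motives[name] = max(0, motives[name] - DECAY[name])

def use_object(motives, obj_name):
    """Step 2: using an object replenishes its associated motive."""
    obj = OBJECTS[obj_name]
    motives[obj["motive"]] = min(100, motives[obj["motive"]] + obj["restore"])

def happiness(motives):
    """Overall mood as the average of all motives."""
    return sum(motives.values()) / len(motives)

motives = dict(SIM_MOTIVES)
tick(motives)                         # hunger drops to 92, hygiene to 95
use_object(motives, "deluxe_shower")  # hygiene climbs back to 100
```

Steps three and four, earning career money to afford better objects, would simply feed back into which OBJECTS entries a player can buy, closing the loop that keeps players chasing stuff.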
As Jesper Juul (2010) argues regarding the franchise, the game does not force a specific goal or punish you for failure (in a direct sense, at least), but coaxes you down a path of “least resistance” most in line with the game’s values (137). This is a common tactic in more open-ended games, and this open-ended nature and low-stress, casual gameplay aided its success and uptake, especially among less hardcore players. Today, with casual gaming more common, this may not seem as significant, but at the time, the game proved revolutionary.
The open-ended nature of the game also gave considerable leeway for players to compose. Many of these works gained circulation through the official site, pictured here, on the “Exchange”. Fan sites, linked to the main site, also provided outlets.
Less prominent on the main site, but common with fan sites, The Sims had a robust modding community. Maxis provided a basic mod kit to help with creating new Skins and other basic changes, but some players changed deeper mechanics: creating new objects, shifting the game away from its more normative sexual and gender biases, and, in a case that drew Maxis intervention, erasing the censorship bar from naked Sims. Most of these communities found structure through popular fan sites, like The Sims Resource pictured here—one of the largest and most popular. Like most mod communities, this one was more of a gift economy, but The Sims Resource did allow premium content for a subscription, paying modders.
Importantly, all of these player interactions have two core parts worth scrutinizing. First, the qualities of the original game—especially its open-ended play, casual nature, easy moddability, and screen-capture function—helped this community flourish. These capacities, when put in motion through play, informed its participatory uptake. Second, the ongoing interaction of other participants through a range of networked platforms, tools, and communities—often with their own motivations and goals—also led to its ongoing popularity and healthy fan culture. In both its original invention and arrangement and in its ongoing delivery, The Sims illustrates how games don’t just make arguments or allow fun, but can provide the means to produce other composing spaces and compositions—themselves rhetorical in their own way.
Here is my introduction as part of a round table at the 2018 Computers and Writing Conference at George Mason:
As Bruce McComiskey describes in his recent Post-Truth Rhetoric and Composition, “fake news” has become another means to validate and circulate falsehoods, facilitated by social media and an audience’s desire to share and support this erroneous news. But it goes beyond this. As Collin Brooke argues in “How #Trump Broke/red the Internet,” many people critiquing articles share them, causing them to trend, and beyond human agents, bots share and comment. “The Spread of True and False News Online” by Soroush Vosoughi, Deb Roy, and Sinan Aral finds that fake news tends to spread faster than truthful sources on Twitter.
As an example, fake news offers a sticky paradox: opponents of “post-truth” are often hampered in their fight by broader histories of habit (especially in the media), infrastructure, and economic goals and models. While this brief introduction does not have the space to detail this, I want to describe what I mean, why it’s significant, and two approaches.
In terms of these histories of habit, Michael X. Delli Carpini argues in “Alternative facts,” “Rather than an exception, ‘Trumpism’ is a culmination of trends that has been occurring for several decades” (18). The blur between news and entertainment, the weakening of traditional gatekeepers, and the growth of what Carpini calls a “multiaxial” and “hyperreal” media landscape, where contradictory news co-exists and information often replaces the underlying material reality it represents—all of these represent long-standing trends contributing to Trump and post-truth rhetoric.
Mainstreaming fringe discourse also contributes. As Waisbord et al. argue in “Trump and the Great Disruption in Public Communication,” mainstream news offered platforms for fact-free, intolerant discourse from formerly fringe groups, and as Zeynep Tufekci argued in a recent New York Times op-ed, algorithms on sites like YouTube often draw viewers to more extreme content. Angela Nagle, in Kill All Normies, and a recent report from Whitney Phillips at Data and Society also point out this mainstreaming, highlighting the role of trolls. Furthermore, as Noble’s Algorithms of Oppression highlights, the digital infrastructure often enforces hegemony and racism.
As rhetoric has long been central to public deliberation, we need to teach what has become of this deliberation. While political enmity, fractured discourse, and fake news are not new—from Ancient Athens killing Socrates to the strife of Reconstruction—our media landscape is. And I think two points bear deeper scrutiny.
First, as Zizi Papacharissi argues in Affective Publics, we often underestimate the role of affect in public debate. This is especially true today, as her work with social media shows. Many of these point-and-click economies rely on affect, often stoking social change—or the means for it—through revenue models, forming “affective publics” as networks organize online and offline. Many legacy media outlets also rely on affect to draw and maintain viewers, informing coverage. While we, as a field, may often prioritize logos and ethos in writing, we need to recognize affect and its ability to circumvent other appeals—through humans and interfaces.
Second, much as the digital humanities has advocated working with computer science departments while developing computer literacies of our own, I think we need to connect with media and journalism. As public rhetoric often takes place through news—fake or otherwise, on television or through Facebook—we need to connect with those who do this work, how it is done, its history, and how it circulates. In other words, we need to interrogate the whole structure, not just consumer media habits and literacies.
Patricia Roberts-Miller argues in Demagoguery and Democracy that demagoguery comes from an underlying culture. Even as we fight the daily battles of post-truth rhetoric, we must also—per our energy’s allowance—combat the underlying war, as it pervades our media, politics, and daily lives.
Boczkowski, Pablo J., and Zizi Papacharissi, eds. Trump and the Media. Cambridge, MA: The MIT Press, 2018.
Brooke, Collin Gifford. “How #Trump Broke/red the Internet.” Skinnell 122-141.
Carpini, Michael X. Delli. “Alternative Facts: Donald Trump and the Emergence of a New U.S. Media Regime.” Boczkowski and Papacharissi 17-24.
McComiskey, Bruce. Post-Truth Rhetoric and Composition. Logan, UT: Utah State University Press, 2017.
Nagle, Angela. Kill All Normies: Online Culture Wars From 4Chan and Tumblr to Trump and the Alt-Right. Winchester, UK: Zero Books, 2017.
Papacharissi, Zizi. Affective Publics: Sentiment, Technology, and Politics. Oxford, UK: Oxford University Press, 2015.
Phillips, Whitney. “The Oxygen of Amplification.” Data and Society. 22 May 2018. Web.
Roberts-Miller, Patricia. Demagoguery and Democracy. New York, NY: The Experiment, 2017.
Skinnell, Ryan, ed. Faking the News: What Rhetoric Can Teach Us About Donald J. Trump. Exeter, UK: Imprint, 2018.
Tufekci, Zeynep. “YouTube, The Great Radicalizer.” The New York Times. 10 March 2018. Web.
Vosoughi, Soroush, Deb Roy, Sinan Aral. “The Spread of True and False News Online.” Science 359.6380 (2018): 1146-1151.
Waisbord, Silvio, Tina Tucker, and Zoey Lichtenheld. “Trump and the Great Disruption in Public Communication.” Boczkowski and Papacharissi 25-32.
I’ve been playing a lot of Stardew Valley lately. The pixel-graphics farm RPG marked its one-year anniversary this past Feb. 26, but mostly I’ve found the game to be a bit of an escape as Syracuse’s nickel grey March and school’s looming deadlines deepen a seasonal depression.
For those of you who have not played Stardew Valley, the plot is simple. Inheriting your grandfather’s rustic farm in the bucolic Stardew Valley, you start with some loose coins and tools and gradually nurture the farm back to health, interacting with the community and the surrounding countryside–from mysterious woods, to mines, to the ocean–as you plant and harvest seeds, forage, mine, and care for animals. Like any RPG, you level up your skills, from crafting to combat, and build relationships with NPCs by giving gifts and completing small quests. The player can eventually get married and raise a family.
The game has some overlap with the Harvest Moon and Animal Crossing series, placing the player as a caretaker enmeshed in a community. The simple music, pixel graphics, and winsome, quirky cut-scenes have their charm, and while the mechanics can get a bit grind-inducing (depending on one’s style and goals), the rhythm of rising, getting set for the day, working, and heading to sleep is a calming metronome that structures your daily actions, whether attending a community celebration, fighting “Slimes” in the mine, or simply fishing away a few hours.
More deeply, though, I kept coming back to what Stardew Valley teaches about Martin Heidegger (1889-1976), especially his notion of Sorge, or “caring,” as it’s often translated.
The poet Charles Olson wrote, “Whatever you have to say, leave/ The roots on, let them/ Dangle/ And the dirt/ Just to make clear/ Where they come from.” Words are grimed, caked, and clotted with decades of use and wrinkled with age. Some words and phrases become anachronistic, like “winding” a window down in a world of electric windows. Others carry an explosive politics. Many get bleached by the endless passing of palms, losing a clear meaning.
But at a deeper level, Olson’s line reminds me that we need to inspect our language in all its dirty history and daily use. To take it a step further: words impact our world, etching our reality like the steady run of water on rock or blowing it up like dynamite.
As George Orwell wrote, “if thought corrupts language, language can also corrupt thought.” His classic 1984 also stresses the coercive and meaning-making power of language through “newspeak,” the official language of Oceania that uses simplicity and structure to limit free thought. For example, “bad” no longer exists; instead, one has “ungood.” By limiting expression, one limits thought. This, among other reasons, hits at the danger of censorship and its popularity among totalitarian regimes.
This, of course, leads me to the recent reveal of the Trump administration’s censorship of seven words for the CDC: “vulnerable,” “entitlement,” “diversity,” “transgender,” “fetus,” “evidence-based,” and “science-based.” While the initial story seems overblown, the words being discouraged in the CDC’s budget documents to make them more palatable, it follows a larger pattern: the EPA’s censoring of scientists, the removal of “LGBT” and “climate change” from the White House site, Trump’s attacks on the media and use of “fake news” epithets, etc. Indeed, even if the Post’s story was overblown, the fact that the CDC needed to police its language along ideological lines for research funds troubles me.
With the tax bill passing Friday being touted as a "win" by Republicans, despite potential blowback, I keep coming back to an idea I've been nursing for a few weeks now: winning in politics. Phrases like Trump's "you'll be tired of winning" and reporters' "Republicans need a win" saturate public discourse, and I keep asking what "winning" means. Like most buzzwords, "winning" leaves much unsaid, and unthought, but it still exerts its influence. And in this case, "winning" isn't a good thing. What I call a "rhetoric of winning," this trend to frame things as "wins," feels like a significant danger for our current American politics.
Winning immediately brings up positive images. Triumph. Trophies. Confetti. The climax of a sports movie when our underdog protagonists finally overcome the big, mean team. But these positive feelings overlook two key things: opponents and win/loss binaries.
I was just reading Cathy O'Neil's (@mathbabedotorg) New York Times piece on the tech industry and academia, which argues that academics have not done enough to study issues caused by recent technology, including filter bubbles and big data. Others have already critiqued some of the tone and oversights of the piece, with varying degrees of sass, but I want to look at it as a rallying cry. While I think the piece could give more credit to current researchers, it recognizes a dangerous gap between this research and the tech industry.
A few of O’Neil’s points are especially key. For one, she notes how big data is often cloistered in companies, reducing access to academics. She also notes how private companies hire academics, and she describes how funding that drives engineering and computer science programs may not include more humanities-tinged concerns for the ethical, social dimensions of technology.
More contentiously, O'Neil also says, "There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives." While a distinct field of study may be harder to name and locate, plenty of subfields and interdisciplinary work hit at this exact issue. For example, in rhet-comp, Kevin Brock and Dawn Shepherd discuss algorithms and their persuasive power, and Jessica Reyman has analyzed issues of authorship and copyright with big data. Beyond rhet-comp, danah boyd continues to write on these issues, along with work from the University of Washington.
But a gap remains to some extent, despite this research.
Personally, I see two potential reasons: hubris and tech’s failure to consider social media more critically. Regarding hubris, George Packer’s “Change the World” (2013) explores Silicon Valley’s optimism and their skepticism of Washington. After describing how few start-ups invest in charity, for instance, Packer writes:
At places like Facebook, it was felt that making the world a more open and connected place could do far more good than working on any charitable cause. Two of the key words in industry jargon are “impactful” and “scalable”—rapid growth and human progress are seen as virtually indistinguishable. One of the mottoes posted on the walls at Facebook is “Move fast and break things.” Government is considered slow, staffed by mediocrities, ridden with obsolete rules and inefficiencies.
After Russia's propaganda push and amid ongoing issues, like Facebook's role in genocide, this optimism seems naive and dangerous. Zuckerberg's trip to the Midwest, the hiring of more fact checkers, and increasing government scrutiny seem to point to a change. But I'm not sure how much is actually changing in tech, or in larger structures like education and law.
This leads me to my second thought. In Being and Time, Martin Heidegger distinguishes between the ready-at-hand and the present-at-hand. The former refers to how we normally go through life, interacting with objects without much reflective thought, while the latter refers to the way a scientist or philosopher may look at objects. In his hammer example, Heidegger says that we normally use a hammer without much second thought, but once the hammer breaks, we reflect on what it is or does.
Similarly, with the ugly realities of social media surfacing more, we are more apt to examine and reflect. Before it "broke," we used it as a neutral tool to communicate and pontificate digitally. As long as we continue to see social media as a neutral tool, or a tool just needing tweaks or fixes, we miss considering what social media is within a broader context of culture, economics, and society. We may be waking up to these deeper questions now, but we can't fall back on ready-at-hand approaches to use and design.
As Lori Emerson (2014) argues, companies rush to intuitive designs and ubiquitous computing, but we must consider how these trends blackbox the values and potentials of our tools. As Emerson and others argue, we can challenge these trends with firmer technological understanding, more democratized development, and the resistance of hackers and activists.
But with tech having so much power, I am not optimistic for change without a broader attitudinal shift in tech and elsewhere. I only see incremental changes coming, like more fact checkers and algorithmic tweaks. These are good and may lead to significant change in time, but fundamental outlooks in tech, what philosophers may call instrumental rationality, will likely stay the same. Many critique the Ivory Tower for its obsession with present-at-hand abstraction, but the Silicon Tower seems just as dangerous with its ready-at-hand reduction.
[Image: “Hacker” by the Preiser Project, via Creative Commons]
It's been a while since I've blogged, and an especially long time since I've blogged "for fun" outside of a class requirement. But with the semester starting up again, I wanted to start off with positive habits, creating a space to think through things. For now, I've been thinking a lot about politics and what my own interest in play can bring.
Wary of becoming another "It's Time for Some Game Theory" guy or the writer of a naive think piece praising some creepy gamifying tactic, I nevertheless think that play, games, and the like have a lot to offer to how we consider politics.