Walking to my car today, I was thinking of Kant’s Critique of Pure Reason, particularly his idea of the “thing-in-itself.” Trying to reconcile the rationalism of Leibniz and Descartes with the empiricism of David Hume, Kant cut a line between the phenomena of our experience and the noumena of the thing-in-itself, of what reality is outside our experience.
For Kant, we cannot gain empirical knowledge without our own cognitive filters. We can never experience reality in-itself, so we cannot truly “know” the world, except through self-evident a priori analytic judgments.
I’ve never left that dichotomy between subjectivity and objectivity, between our experience and what “is” outside our a priori glasses. And as time went on, the layers of separation added up. Our senses. Our cognitive biases. Mediators, like my own glasses. Language. Culture and normative hubris. Geography. People around me. Prior readings and the general assemblage that constitutes “me,” or the signifiers and self I identify as. All filters. All filter bubbles.
As Daniel Estrada writes, “in a very deep sense, you are your bubble. The process of constructing a social identity is identical to the process of deciding how to act, which is identical again to the process of filtering and interpreting your world.” While I would argue that identity is more than “the process of deciding how to act”–a point that I reckon Estrada would likely recognize–I think it definitely plays a central role. Sartre put it best: “We are our choices.” Our choices have echoes, and sometimes those echoes etch our being–or how others view our being, as Sartre also argues.
But what I like about Estrada is that the key is not the act itself but the “deciding”: the cognitive pathways and heuristics–conscious or unconscious, affective or analytic, cultural and idiosyncratic–that form our actions. And at some level this deciding reflects our worldview, our prejudices, and the information we use to understand and then act in the world.
And here, things get interesting–at least for me. As Estrada goes on, “Thus, any constraints imposed on your filter are also constraints on your possibilities for action, constraints on the freedom of your decisions and the construction of your world. If you are your bubble, then any attempt to control or manipulate your bubble is likewise an attempt to control you.” For Kant, by contrast, the self is largely insular, cognitive, sensory, and self-contained. Expressing such a view of the self, Emerson, a foundation of our own American sense of self (and “self-reliance”), describes it bluntly: “the universe is composed of Nature and the Soul. Strictly speaking, therefore, all that is separate from us, all which Philosophy distinguishes as the NOT ME, that is, both nature and art, all other men and my own body, must be ranked under this name, NATURE.”
But, as thinkers continue to argue, from a Buddhist metaphysics of emptiness and many Native American epistemologies (and axiologies), to Diane Davis in Inessential Solidarity and Thomas Rickert in Ambient Rhetoric, the self is more osmotic or relational. It is permeable and messy, bundled and blurry, oozy and diffuse, yet localized by language and materiality.
And here come the algorithms. These too, if you want to go this way, are part of us, and so are the digital pathways they “co-author” from our metadata (to draw from Jessica Reyman). To use Kant’s term, this digital world informs–or possibly is–our phenomenological experience and the self that such experience shapes. And as both Reyman and Estrada point out, we don’t really own, or fully understand, these algorithms. As Eusong Kim has argued about trending, “We don’t know why something trends. The algorithm is a locked secret, a ‘black box’ (to the point where MIT professors have built algorithms attempting to predict trending tags). The Fineprint: Trending is visibility granted by a closed, private corporation and their proprietary algorithms.”
And, to make matters worse, as Brock and Shepherd argue, these algorithms are persuasive, constituting “procedural enthymemes”–drawing on Aristotle’s enthymeme, a syllogistic argument finished by the audience. Such an argument requires audience participation, as Bogost argues in Persuasive Games and as Richard Colby argues in his work in composition, where he says it creates a “rhetorical situation.” But the procedure still influences, as these scholars point out, both in games and in other digital and non-digital systems. There is an odd hybrid here between human and nonhuman, what Ted Friedman in his work on digital games (extended by Alexander Galloway) calls “cyborg consciousness,” in which the player learns to think with the algorithm. But the player may not be aware of how the algorithm is influencing their thinking.
And I think this leads me to three key points, on which I end. First, I would argue that these filters and filter bubbles are rarely a one-shot argument; they exhibit ontological rhetoric–or at least heftily influence what we see as “reality” or how we work through “reality.” To use Burke’s terms, they inform our identification and terministic screens. Second, it’s not just algorithms but people, mainstream media, interactions, upbringing, etc.–our filter bubble online and offline–that we co-create with other actants, with varying levels of control and agency. Believing we were somehow less permeable before social media feels wrong to me, though we didn’t have the same procedural influences.
And finally, I really don’t think we can improve without heavily challenging this propensity to think phenomenological experience is the noumenal world–and that this phenomenological experience is based on individual, non-relational agency. Or at the very least, we need to challenge the idea that one must impose one’s phenomena on others.
In particular, I think this cuts two ways. On the one hand, this involves people trying to force people into certain ways of being, regardless of their own agency–what Lévinas termed “totalizing.” And here we have the destructive role of identity politics, which Trump’s rhetoric and neoconservative policy (I think) have capitalized on: enacting an ideology and identity so strongly that you feel the need to threaten the identity of others at a material, ontological level.
It also cuts the other way. Here, I move back to Kant and his “On the Miscarriage of All Philosophical Trials in Theodicy,” in which he argues that once you abandon certain modes of verifying knowledge and regulating discussion (rules of discourse and epistemology), such a discussion becomes pointless. It’s like playing a game: if I play by the rules and you do not, we can’t play. And as Bakhtin argues, dialogue cuts both ways. If we are in different realities, phenomenologically speaking, we can’t do anything.
As Burke argues in “The Rhetoric of Hitler’s ‘Battle,’” a religious rhetoric, when corrupted, is one of the most damaging, as people lack a certain self-reflexivity. They believe on faith alone. And, even more dangerously, they may think that belief is universal–and needs enforcing. And, maybe somewhat paradoxically or provocatively, algorithms have the most blind faith of any rhetorical actant: their programming.
[Featured image: Robot! by Crystal, via Creative Commons]