Jathan Sadowski is a PhD student in the “Human and Social Dimensions of Science and Technology” program at Arizona State University. Broadly speaking, he researches critical technology studies, with a focus on social justice and political economy. More specifically, he is writing a dissertation on “smart cities,” composed of three complementary sections: a discourse analysis of corporate, government, and media sources; an examination of issues of social justice and political economy (in both actually existing and potential technologies/policies); and further development of a theory of cyborg urbanization. He also freelances, writing articles and op-eds, mostly about the politics and ethics of technology, for a number of magazines and newspapers (e.g. Slate, Wired, Al Jazeera America, The Baffler, The New Inquiry, and others).
From Mega-Machines to Mega-Algorithms: Digitization, Datification, and Dividualization
The critic Lewis Mumford described a prevalent form of organization he called “mega-machines”: giant socio-technical mechanisms, with humans acting like cogs in a machine, that used authority, hierarchy, and bureaucracy to structure, organize, and control people. Mumford’s insights are still relevant, but they need updating. In the age of networked computing and smart technologies, what I call the “mega-algorithm” is taking over, with people acting as information nodes, inputs, and outputs. People are atomized by digital technology, blown apart into streams of data fed into processors. They provide productive labor, and are incorporated into the mega-algorithm, just by existing on the network. The logic of the system is to create, collect, and extract value from data wherever possible.
Consider the following juxtaposition. On one hand, Google used old UN translations to fuel its Google Translate service: it mined data that could be usefully and cheaply processed. Imagine if the translators had kept their copyright and could negotiate for royalties on the income Google receives; after all, the translators performed a labor-intensive service. But inegalitarian capitalism has convenient amnesia. When we are all data streams, the ability to get paid for our data somehow disappears, even though the case for payment could actually be stronger. On the other hand, Coursera, the online education startup behind many MOOCs, also needs translators’ labor so it can sell courses in other languages. Rather than pay for such services (such a passé notion now), Coursera has another plan. Using the rhetoric of community and solidarity, it is actively recruiting volunteers to contribute to its “crowd-translating” project. While no money changes hands, “volunteers” must sign a “Translators Agreement” ensuring that all ownership of the work they produce transfers to Coursera. Sure, the volunteers enjoy this “playbor,” or else they wouldn’t do it. Whether they are aware of the insidious implications of deskilled and disempowered labor is another question. It’s obvious who really benefits, who gets to drink deeply from the data stream, and Coursera and Google want to keep it that way.