Digital Labor

Frank Pasquale


Frank Pasquale’s research addresses the challenges posed to information law by rapidly changing technology, particularly in the health care, internet, and finance industries. He is a member of the NSF-funded Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. His book The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) develops a social theory of reputation, search, and finance. Pasquale has been a Visiting Fellow at Princeton’s Center for Information Technology Policy, and a Visiting Professor at Yale Law School and Cardozo Law School. He has received a commission from Triple Canopy to write and present on the political economy of automation.


Automating the Automators
What if algorithmic science is now so good that the data scientists themselves aren’t needed? In an unguarded moment, Google’s Director of Research Peter Norvig conceded, “We don’t have better algorithms than anyone else; we just have more data.” Does that position open the door for an upward creep of automation? My paper will start by reviewing the usual alarming literature on the automation of professionals, from doctors to lawyers to counselors (remember ELIZA?). But I’ll try to move beyond it by asking: what algorithms describe or drive (a) the developers of automation technology, (b) their managers, and (c) the finance firms that decide what to invest in? What if they turn out to be simpler than the procedures of the fields they are trying to model and standardize? Perhaps then a case can be made for automating automation, subject, of course, to constraints that would automatically stop it if it began to distort a profession’s character.

Businesses pushing the automation of health, education, and medicine make judgments based on profit maximization, a decision procedure just as easily reduced to an algorithm as, say, deciding which teaching style in a short video works, or which clinical decision is best. Ben Ginsberg has argued that the automation of education via MOOCs could be accomplished by one MOOA (massively open online administration), so that one provost could make all the decisions (and free universities from the expense of provosting). You could make a similar argument about hospital CEOs, or law firm managing partners, who impose given agendas: just let one automated version of the decision maker take the lead. As Amar Bhide has suggested, that sounds suspiciously like the socialism of Cybersyn, but isn’t it inherent in the logic of automation now pursued by captains of industry?

For my “automate the automators” project, I want to figure out how investors, CEOs, and administrators resist automating their own process of automating other fields. Maybe they have a good argument that their decisions really are unique, singular, and incapable of comparison with hundreds of other similar judgments. But that assertion sounds a little suspicious when they assert that every job but theirs will eventually be done by robots, or automated, or standardized. If they can’t give an account of the inherently human aspect of executive decision making, then they, too, should be swept aside by machines.

Employee Surveillance in the Era of Big Data

Abstract 
tba

Taxing Data Labor & Labor in the Monetized Peer Economy
Sat, November 15
10:00 AM - 12:30 PM

Low-Wage Work: Getting By & Fighting Back
Sat, November 15
01:30 PM - 04:00 PM

Final Reflections
Sun, November 16
11:00 AM - 11:30 AM
