Why we must democratize AI to invest in human prosperity, with Frank Pasquale

Date: 2024-03-19


Frank Pasquale, author of New Laws of Robotics: Defending Human Expertise in the Age of AI, discusses the four most important ideas required to guide progressive AI policy.


Four years ago, Frank Pasquale, Professor of Law at Brooklyn Law School, catalyzed a debate over algorithmic and corporate power with the publication of his highly acclaimed and highly critical book, The Black Box Society: The Secret Algorithms That Control Money and Information.

On the one hand, Pasquale put us on notice by warning of the dire consequences of runaway algorithms. He highlighted their capacity to wreak havoc everywhere, by doing things like disrupting the stock market and ruining credit scores. On the other hand, because Pasquale is attuned to the dynamics of political economy and corporate malfeasance, he also cautioned us against focusing too much on technological constraints and too little on unsavory human tendencies. He found both Wall Street and Silicon Valley guilty of weaponizing opaque algorithmic processes to expand power, conceal hidden agendas, and deflect responsibility.

In short, the book’s bleak tone, negative focus, and overall skepticism of course-correcting governance measures, including heightened calls for transparency and accountability, were so pronounced that an apt alternative title would have been Algorithmic Affordances: Cracking the Code of Corruption.   

Pasquale’s new book, New Laws of Robotics: Defending Human Expertise in the Age of AI, takes a profoundly different approach towards challenging the status quo. Disenchanted with the dominant visions of progress (libertarian, neo-liberal, and posthuman alike), which offer anything but a progressive account of how automation is poised to shape the future of healthcare, education, military conflict, social inequality, and even our very understanding of what it means to be human, Pasquale fights back with the power of an unexpected resource: positive narrative.

To democratize artificial intelligence (AI), he contends, we need to imagine that an emancipatory politics is possible, one that can shape how automation is developed and deployed without restricting key decision-making processes to merely determining the best way to save money, enhance efficiencies, and encourage innovation—innovation that honest discourse would admit disproportionately benefits elites and displays apathy, if not contempt, for the rest. To conceptualize a world that not only values human dignity but actively protects human expertise and judgment, Pasquale champions new laws of robotics—laws that can redirect society away from dangerous proclivities and priorities.

The following is an edited version of a conversation between Evan Selinger, Professor of Philosophy at Rochester Institute of Technology, and Frank Pasquale.

 

Evan: What are your new laws of robotics?

 

Frank: They can be stated simply. First, robotic systems and AI should complement professionals, not replace them. Second, they should not counterfeit humanity. Third, they should not intensify zero-sum arms races. And fourth, they must always indicate the identity of their creators, controllers, and owners.

I hope that each law can help us use technology more humanely. For example, imagine you have a gravely ill relative in the intensive care unit. As you and your family are waiting outside to visit, a robot rolls out to give you some bad news. “The patient has a 95% likelihood of dying within ten days,” it announces. “Would you like to explore options for terminating treatment?”

The problem here should be obvious. It’s not the predictive AI itself, which could provide very useful information if interpreted and communicated properly. The problem is the lack of human connection. The technology should complement medical staff, not replace them, even if it would be cheaper and perhaps even more efficient for the hospital to just routinize such encounters.

Part of respecting the grieving is taking suffering seriously. This requires a personal touch. So that leads to my first law: Robots and AI should complement professionals, not replace them. Of course, few would mind if robots took over cleaning duties at their hotel or hospital, or did far more of the difficult and dangerous work of mining. Dirt and coal don’t have feelings, much less feedback to communicate. But people do, so direct, empathic identification is critical to success in services.

 

Evan: Your hypothetical example reminds me of a real tragedy that took place in 2019. That’s when a physician at a remote location told a hospitalized patient that he didn’t have long to live and communicated this grim prognosis through a screen. The situation was so disturbing that some media outlets misreported what happened. One misleading headline was: “Family horrified to learn loved one is dying as told by robot not a real doctor.” 

As Arthur Caplan and I argued, the real problem had nothing to do with robots. It’s that the patient didn’t consent to having this type of conversation, one that severely blunted the transmission of empathy, conducted with a human talking through the mediation of telemedicine. It’s all too easy for technology to be used to undermine professional communication norms.

 

Frank: Oh yes, absolutely. The problem here is quite acute, and something I try to address even more directly in my second law, that robots and AI should not counterfeit humanity. I’m very committed to this idea because I think that there are several research programs in AI that are essentially designed to deceive people into thinking that a robot or an avatar or some similar AI creation has the emotions and sensations of a human being. I think that’s fundamentally dishonest because to the extent that we study biology and social evolution we see that the feelings of anger, sympathy, loss, and all manner of subjective experiences are grounded in embodiment—in having a body.

And so, if we are going to be honest with ourselves about our relationship with technology, we need to acknowledge that if it has any experience at all, it is fundamentally different from our own. As a philosopher, you’re well aware of Nagel’s famous article, “What Is It Like to Be a Bat?”, which essentially says: that’s almost impossible to know! Or Wittgenstein’s classic line: “If a lion could speak, we could not understand him.” It’s even truer of contemporary AI, where there is of course some surface meaning coming out of bots and some humanoid robots, but almost no public understanding of the processes by which data, algorithms, and more came to serve what is so often a corporate interest behind the AI.

 

Evan: Before going forward, I’m surprised you haven’t yet mentioned Isaac Asimov, the author of the most famous “laws of robotics.” Why are you, a serious legal theorist and political economist, embracing Asimov’s science fiction framing as your guiding counter-cultural message?

 

Frank: I like the calculated irreverence of your question. Nobody else has said, “isn’t sci-fi basically beneath you?” Or, at least beneath the topic of the book! And I may have to concede Asimov's many faults—not a great stylist and a terrible person in important respects. But in terms of science fiction in general, I can’t put it better than Kim Stanley Robinson:  

It’s often struck me that the name “science fiction,” in some ways so inaccurate and wrong, is actually extremely powerful anyway, because the two words can be translated into “facts” and “values,” and the fact/value or is/ought problem is a famous one in philosophy, and often regarded as insoluble, so that if you call your genre “fact values” you are saying it can bridge a difficult abyss in our thinking. This means frequent failure, of course, as it is indeed a difficult abyss. But it is a strong claim for a genre to make, and I’ve come to love the name “science fiction”.

I do, too. I think the main reason that a serious scholar in law and policy should engage with science fiction and popular culture is that many policymakers and politicians and even judges and regulators are very influenced by the conception of technology that they get from popular entertainment. So, we have seen that dynamic at work in Charli Carpenter’s scholarship on killer robots, as well as in Desmond Manderson’s excellent work on 24.

 

Evan: Oh, sure, I agree with all this. To be clear, I wasn’t questioning why you engage with science fiction. And I wasn’t questioning the value of the genre. My more pointed question probed why the engagement is so strong that you frame your key message and even title your book with a nod towards Asimov. 

 

Frank: A nod and a farewell. The nod acknowledges what fiction does so well, painting images of alternative worlds to our own. Readers need some anchor for thinking about “robot ethics,” and sci-fi work like Asimov’s is just about the most widely known anchor out there.

As Iris Murdoch once said, “Man is a creature who makes pictures of himself, and then comes to resemble the pictures.” That’s why I explore fiction, film, and art in the last chapter of the book. It helps us see the long-term consequences of decisions made in the past and the potentially massive effects of small things we do today. 

Imagine, for instance, that Al Gore had become president in 2001 and, after 9/11, had responded not with a disastrous war but by insisting on the need for U.S. energy independence via renewables. The Supreme Court’s intervention sparked a disastrous “wrong turn” for the U.S., just as the Comey intervention in late October 2016 did, and the Trump Administration’s utter mishandling of COVID did. Each of those fateful turns generated enormous losses that are permanent, tragically irreversible. And by the same token, what seem like small changes in law or regulation now will set the technology industry on paths that may lead to virtuous cycles of productivity and service to humans, or toward zero-sum arms races, exploitation, and constant surveillance.

This is all critical to understanding the importance of thoughtful intervention now. But it’s hard to play out such complex narratives in a non-fiction setting. Fiction writers compress or expand the time of the plot like an accordion to make critical points. 

The key difference between Asimov’s and my view of the laws of robotics is that Asimov really is assuming the development of robots that are indistinguishable from persons, and then trying to make sure those robots can’t hurt anyone or hurt them too badly. Whereas, in my laws, I’m saying we generally shouldn’t have robots that look like persons—that’s a pretty bad research program. I just can’t see why we want to have human-looking things performing like machines. Even more importantly, Asimov’s laws are about ensuring robots promote human welfare, whereas mine are about ensuring that human beings, not just a few but a very distributed, larger group of humans, exercise democratized control of robotics. He’s all about human welfare. I want that and democratized human power as well.

 

Evan: For some time, your work has emphasized the importance of following the money, particularly when it comes to paying critical attention to the gap between what Big Tech companies say their goals are and how they actually respond to economic incentives. In your new book, you remind us that it’s far too simplistic to treat these companies as monolithic systems where everyone is imprisoned in the mental shackles of groupthink, particularly since pronounced disparities have arisen between management and workers.

For example, you write: “At some of the largest American AI firms, a growing number of software engineers are refusing to build killer robots—or even precursors to their development. Their resistance is part of a large movement by professionals and union members to assert control over how they work and what they work on. Recent developments at Google show both the virtues and limits of this approach.” 

What’s your view of the controversy surrounding Google management’s claim that renowned AI ethicist Timnit Gebru resigned from the company—a claim many see as gaslighting meant to obscure the company’s troubling relationship with diversity and the integrity of research?  

 

Frank: In thinking about the role of technologists and their own ethical perspectives in large tech firms, we have to applaud those who have done so much to bring ethical perspectives to tech workplaces, like Meredith Whittaker and Timnit Gebru. They deserve enormous credit for speaking truth to power. I also think there are probably many critical battles now going on inside firms to preserve or extend the independence of employees.

I also think we need to think more institutionally about creating spaces that will enable people like Whittaker and Gebru to work from within large tech firms. And to promote independent moral judgment generally. So, for example, we may want to see insulation of workers from certain financial pressures, particularly when they work in research departments at these firms.  If there isn’t proper insulation, then the research department just becomes a form of glorified PR.  

 

Evan: Is this a matter of professionalism, as you define it in your book?

 

Frank: Some forms of further professionalization would help. If software engineers had a professional association that was as active as that of lawyers or doctors, it could set standards of independence that would, in turn, give all workers in the industry the ability to push back when employers like Google move in ethically questionable directions.

Physicians can lose their licenses if they, say, push the interest of a pharmaceutical firm so aggressively as to harm their patients. What if software engineers had a license to lose, one that could be suspended or revoked if they played a role in a particularly egregious dark pattern or scam? That kind of vulnerability is also a form of power since it allows all the engineers at the firm to say, in response to an unethical project: “No, we can’t do that, we’d lose our licenses if we did, and then you’ll have no one to code for you!”

This type of power to push back is critical and has to be imposed by law because if it’s not imposed by law, if you’re the one person that’s pushing back, there’s always going to be somebody who’s willing to take your place. And even if you’re the twenty people, or the fifty people, or the thousand people fighting—well, without the backing of law, you’re in danger. You can be fired en masse, or punished, or just fall behind “reputationally,” thanks to AI systems I critique in the book.

  

Evan: This is going to be my last question. To formulate it right, I need to provide some context.

To prevent algorithms from being used inappropriately, we need clear rules, standards, and values that regulators can appeal to in order to identify behaviors society should deem unacceptable. In your chapter “Machines Judging Humans,” you identify three unethical practices that undermine a humane credit system: predatory inclusion, subordinating inclusion, and creepy inclusion.

An example of predatory inclusion is qualifying students for loans to be used at deceptive and exploitative for-profit colleges—colleges that, contrary to their rosy rhetoric, are not preparing students for a future of gainful employment in their chosen fields of study. An example of subordinating inclusion is penalizing credit applicants for exercising basic political liberties, such as engaging in political action. 

I’m wondering, however, if the third problem could be better named. You write: “Creepiness is an intuition of future threat, based on a deviation from the normal. Normal experience includes entitlement to a work life distinct from a home life and having judgments made about us on the basis of articulable criteria. Creepy inclusion disturbs the balance, letting unknown mechanical decision-makers sneak into our cars, bedrooms, and bathrooms.”

Of course, I wholeheartedly agree that ever-expanding surveillance that feeds increasing amounts of our personal information into automated systems undermines our ability to experience the breathing room that’s necessary for wellbeing. And you were right back in The Black Box Society to warn about the risks that opaque judgment poses to weighty procedures, such as due process. But feeling creeped out is an emotional experience where our metaphorical spider-sense warns us that something is amiss. By defining that perception of misalignment as an awareness that something abnormal is occurring, you might risk making it too easy for conservatives to subvert the ideal and for libertarians to object.

For example, conservatives steeped in traditional interpretations of religious values could object to policies that progressives view as promoting fairness but which they see as contradicting their natural law-based conception of how people should “naturally” behave. And the libertarians who advocate for permissionless innovation will complain that appeals to “normal” are textbook examples of status quo bias that express fear of change and adaptation. So, might there be a better way to object to practices like “nonstop cell phone tracking, archiving, and data resell”?

 

Frank: In terms of creepiness, I believe that this is a form of moral judgment, and as applied in the realms of data collection, analysis, and use, it can be quite helpful. I think this form of moral judgment only seems overly subjective if we’re not fairly comparing it to the other forms of policy discourse that are common in today’s society. Let’s drill down on what is often the gold standard for policy analysis: a cost-benefit calculation.

We could, in thinking about how data is collected, try to estimate, for the data collectors, for the user, and for society as a whole, what the benefits of that use would be and what the costs would be. Note that for a user who is about the U.S. median age, roughly 35 years old, we would probably have to estimate 50 years of life expectancy. So, you would have to think about what potential benefits could be drawn from this data over the next 50 years, and what potential discoveries could be made based on it.

We would then also have to look at costs. We could try to estimate: what are the risks of a data breach next year? What are the risks in five years? How about 25 years from now? What are all the threat scenarios arising out of a breach (“acquaintance purchases” and the like)? And you could try to think about whether privacy laws would be stricter or looser at that time, and would therefore have more or less of a deterrence effect on such data breaches and threat scenarios.
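To make concrete how many speculative inputs such a calculation demands, here is a minimal toy sketch in Python. The 50-year horizon echoes the scenario above, while the discount rate, annual benefit, breach probability, and harm figures are purely illustrative placeholder assumptions, not numbers from the conversation or the book.

```python
# Toy cost-benefit sketch for a data-collection practice.
# Every numeric input below is a placeholder assumption chosen only to
# illustrate how many guesses the exercise requires.

HORIZON_YEARS = 50                 # assumed remaining life expectancy of the data subject
DISCOUNT_RATE = 0.03               # assumed annual discount rate

annual_benefit = 40.0              # assumed yearly value of the data use, in dollars
annual_breach_probability = 0.02   # assumed chance of a breach in any given year
harm_if_breached = 5_000.0         # assumed harm to the user if a breach occurs, in dollars


def present_value(amount: float, year: int, rate: float = DISCOUNT_RATE) -> float:
    """Discount a future dollar amount back to today's dollars."""
    return amount / (1 + rate) ** year


# Sum the discounted benefits and expected breach costs over the horizon.
expected_benefit = sum(
    present_value(annual_benefit, year) for year in range(1, HORIZON_YEARS + 1)
)
expected_cost = sum(
    present_value(annual_breach_probability * harm_if_breached, year)
    for year in range(1, HORIZON_YEARS + 1)
)

print(f"Expected present value of benefits: ${expected_benefit:,.2f}")
print(f"Expected present value of breach costs: ${expected_cost:,.2f}")
```

Swap in different guesses for any of these parameters and the comparison can flip, which is the contestability Pasquale is pointing to.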

Of course, once we start talking about the possibility and effects of data breaches in the year 2050, we’ve already lost the thread, right? This is already a far more subjective and contestable exercise than simply asking, as a basic ethical matter, whether people find it creepy to have their cell phone data collected, analyzed, and used, and everything they’ve done on the phone shared, all thanks to some terms of service they barely noticed.

So, the problem here is that the ostensibly technical, neutral, and objective standards for evaluation resting upon quantitative economic analysis are not any more grounded or valid than ordinary individuals’ objections to practices they feel cross important lines. Note, too, that even quantitative predictions that seem much easier—like traffic patterns—are now coming into question.

And this is really important on several levels, one being the relative status of, say, economics and philosophy. I don’t think we want to live in a world where every policy has to be justified with some sort of economic rationale. That perspective has already had too much influence, relative to other social science and normative perspectives.

To get to your points about conservative and libertarian objections: certainly, yes, there are going to be conservative and libertarian objections, but I think each of those ideological affiliations has pro-privacy grounds as well. There are very strong pro-privacy trends within contemporary libertarian thought. Where libertarianism is primarily represented in the public policy space by people who think that the deepest freedom is to be able to contract away your rights—well, perhaps that’s an indication less of actual libertarians’ views than of who the wealthiest people funding policy discourse in that space are.

We need to acknowledge the importance of emotion to political life as well. If people feel that certain data practices, certain forms of surveillance, are creepy, are troubling, are disgusting, that’s worth listening to. And it’s worth translating into more “respectable” academic outlets.  It’s worth putting into the words of philosophers and it is worth translating into comments on rulemakings and other types of democratic contributions to policy discourse.

 

Evan: Are you making the case that emotions are and should be central to agonistic political discourse?  

 

Frank: Yes—I think it’s inevitable. Consider, for instance, the word ‘disgust,’ which might be even more objectionable than ‘creepy,’ given the work of Martha Nussbaum in her critiques of disgust as a political emotion. I would never call a person disgusting, of course. But can a corporate practice or governmental decision (such as the recent child-parent separation policy at the U.S. border, or potential killer robot attacks on soldiers or police tactics against protesters) be disgusting? Absolutely—and creepy, revolting, mind-boggling as well. 

Political emotions are important. If liberals and progressives just give up on such emotions entirely and opt for technocracy—well, as your work with Jathan Sadowski shows, that can be very damaging. We end up with large portions of society that want to feel a strong visceral reaction to social conditions that are rapidly deteriorating. And when social critics fail to deliver, that effectively leaves the field completely open to reactionary forces to claim the mantle of passion and emotion in political life.

So, we need to be extremely careful about getting bogged down in minutiae of tone policing and aspirations to pure rationality, because this is a form of politics that is becoming increasingly ill-suited for a fragmented and raucous public sphere.