Palantir and the Two Forms of Synthesis

by Howard Gardner

Until recently, only those “in the know” had heard of the corporation named Palantir. But of late, it has come into the spotlight. For investors, on September 30, 2020, Palantir went public on the New York Stock Exchange via a direct listing, at a market valuation in the neighborhood of twenty billion dollars. For newspaper readers, on October 25, 2020, Palantir was the cover story in the Sunday Magazine of The New York Times.

What is it? Palantir is a company that specializes in data analysis. It takes huge amounts of data, in almost any area, and, using artificial intelligence (AI) algorithms, organizes the data in ways that its clients find useful. According to The Economist of August 29, 2020, the company “sells programs that gather disparate data and organize them into something usable for decision-makers, from soldiers in Afghanistan to executives at energy firms.” Then, in typical Economist fashion, follows the wry comment: “More than a technology project, it is a philosophical and political one.”

To this point, most of Palantir’s work has been for governments—above all the United States government (particularly the CIA and the Defense Department), but also for other governments, though only those believed to be friendly to the interests of the United States. While Palantir’s actual work is kept secret, the company is widely believed to have helped locate sensitive targets (including Osama bin Laden, as well as undocumented immigrants and criminals on the run); to identify regions that are dangerous for US soldiers or local police; to trace the locations and spread of diseases (like COVID-19); and to locate markets for commercial products. Of course, approaches used for one purpose in one place can be repurposed for use elsewhere.

Palantir is the brainchild of two individuals. Peter Thiel, hitherto the better known of the two, was a co-founder of PayPal and is one of the few Silicon Valley executives to have publicly supported Donald Trump’s 2016 campaign for the presidency. Alex Karp, a law school graduate with a doctorate in political philosophy from Goethe University in Frankfurt, describes himself as a person on the left of the political spectrum.

Not surprisingly, given the mysterious work that it does and the apparently divergent political leanings of its co-founders, there is a lot of chatter about whether Palantir does good work. One is reminded of the debate about whether Google lives up to its famous motto, “Don’t be evil.”

But to ask whether a company does good work is to commit what philosophers call a “category error.” 

First of all, though the Supreme Court may consider a corporation to be a person (Citizens United v. Federal Election Commission, 2010), that characterization makes no sense in common language or—in my view—in common sense. Companies make products and offer services, but who asks for these and how they are used cannot be credited to or blamed on the company per se. For over a century, General Motors (GM) has built motor vehicles—but those vehicles could be ambulances that transport the injured to hospitals or tanks that are used to wage unjustified wars. For over half a century, IBM has sold computers, but those computers could be used to track health factors or to guide missiles.

Second, even determining precisely what a company does, and to or for whom, may not reveal whether the work itself is good or bad. That decision also depends on what we as “deciders” consider to be good—is the missile being aimed at Osama bin Laden or Angela Merkel or Pope Francis? Do we think that none, some, or all of these individuals should be so located and then murdered? Is the hospital being used to treat those with serious illnesses or to hide terrorists? Indeed, despite the red cross on display, is it actually a hospital?

This is not to invalidate the idea of corporate social responsibility—but even if the leadership of a corporation is well motivated, it can scarcely prevent abuses of its products.

So far, my examples pertain to cases that can be understood by lay persons (like me). This is decidedly NOT the case with the work that Palantir does—work that I would call “synthesizing vast amounts of data.” The means of synthesizing are very complex—for short, I will call them “AI syntheses.” These synthesizing programs have been devised because the actual “data crunching” is so complicated and time-consuming that human beings could not accomplish the task in any reasonable amount of time. Even more concerning, it is quite likely that no one fully understands how the patterns, the arrangements, “the answers” have been arrived at.

And so I think it is important to distinguish between two kinds of synthesizing—what I call AI Synthesizing and Human Synthesizing.  It’s the latter that particularly deserves scrutiny.

First, AI Synthesizing:

Think: How do we distinguish one face from another, or group different versions of the same face? “Deep learning” programs can do so reliably, even if we can’t explain how they accomplish this feat. So, too, winning at chess or Go—the program works even though we can’t state quite how. And, building up in complexity, there is the kind of synthesizing that Palantir apparently does—identifying markets for products, figuring out promising targets for attack or defense, or discerning the cause(s), the spread, or the cure(s) for a disease. The human mind boggles.
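To make the face-grouping case a bit more concrete for fellow lay readers, here is a minimal, hypothetical sketch. Nothing in it reflects Palantir’s actual (and secret) systems: I simply assume that a deep network has already converted each photograph into a numerical “embedding” vector, so that grouping versions of the same face reduces to measuring how similar those vectors are. The embeddings and the similarity threshold below are invented for illustration.

```python
# A minimal, hypothetical sketch: grouping face "embeddings" by similarity.
# In a real system, a deep network would map each photo to a vector; here the
# vectors are invented for illustration. Not Palantir's actual method.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings: two photos of person A, one of person B. These are 4-d
# for readability; real face embeddings are typically 128-d or larger.
embeddings = {
    "photo_1": np.array([0.9, 0.1, 0.0, 0.2]),  # person A, frontal view
    "photo_2": np.array([0.8, 0.2, 0.1, 0.3]),  # person A, in profile
    "photo_3": np.array([0.1, 0.9, 0.8, 0.0]),  # person B
}

THRESHOLD = 0.9  # similarity above this counts as "same face"

names = list(embeddings)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        sim = cosine_similarity(embeddings[x], embeddings[y])
        verdict = "same face" if sim > THRESHOLD else "different faces"
        print(f"{x} vs {y}: similarity {sim:.2f} -> {verdict}")
```

Note that even in this toy version, the threshold separating “same face” from “different faces” is not discovered by the program; it is a number a human being chose.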

Work of this sort generates a variety of questions:

What is the purpose and use of the synthesizing?

Who decides which questions/problems are to be addressed?

Which data are included for analysis and synthesis, and which ones are not?  How is that determination made?

By which algorithms are the data being clustered and re-clustered? 

Can the parameters of the algorithm be changed, and by whom, and under what circumstances? (See the brief sketch following these questions, which illustrates how much can hinge on a single such parameter.)

Will the data themselves (and the algorithms used thereupon) be kept secret or made public?  Will they be available for other uses at other times?

Importantly, who owns the data?

Which individuals (or which programs) examine the results/findings/patterns and decide what to do with them? Or what not to do? And where does the responsibility for consequences of that decision lie?

Who has access to the data and the synthesis? What is private, public, destroyable, permanently available?

What happens if no one understands the nature of the output, or how to interpret it?
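A brief sketch may make the algorithm-and-parameter questions vivid. The following toy example uses k-means, a standard clustering algorithm (again, nothing specific to Palantir, and the data are invented): the same records, run through the same program, yield quite different “patterns” depending on a single parameter, the number of clusters, that a human must choose.

```python
# A toy illustration of how one human-chosen parameter reshapes the "findings."
# The data are invented; k-means is a standard clustering algorithm, not
# anything specific to Palantir. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Invented "records": 100 points scattered around four hidden centers.
centers = np.array([[0, 0], [0, 5], [5, 0], [5, 5]])
data = np.vstack([c + rng.normal(scale=0.5, size=(25, 2)) for c in centers])

# The same data, clustered with two different values of k.
for k in (2, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(f"k={k}: cluster sizes = {np.bincount(labels).tolist()}")

# With k=2 the analyst "discovers" two groups; with k=4, four.
# The data never changed -- only a parameter a person selected.
```

The point is not that such choices are illegitimate, but that they are choices, made by people, and the answers to the questions above determine who gets to make them.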

These questions would have made little sense several decades ago; but now, with programs getting ever more facile and more recondite, they are urgent and need to be addressed.

Here’s my layperson’s view: I do not object to Palantir in principle. I think it’s legitimate to employ its technology and its techniques—to allow AI synthesis.

Enter Human Synthesis.

With regard to the questions just posed: I do not want decisions about the initial questions or goals of the enterprise, the relevant data, or the interpretation or uses of results to be made by a program, no matter how sophisticated or ingenious. Such decisions need to be made by human beings who are aware of, and responsible for, the possible consequences of these “answers.” The buck stops with members of our species and not with the programs that we have enabled. The fact that the actual data crunching may be too complex for human understanding should not allow human beings to wash their hands of the matter, or to pass responsibility on to strings of 0s and 1s.

And so, when I use the phrase “human synthesis,” I am referring to the crucial analyses and decisions about which questions to ask, which problems to tackle, which programs to use—and then, when the data or findings emerge, how to interpret them, apply them, share them, or perhaps even decide to bury them forever.

For more on human synthesis, and the need to preserve and honor it in an AI world, please see the concluding chapters of my memoir A Synthesizing Mind.

Reference

Michael Steinberger, “The All-Seeing Eye,” The New York Times Magazine, October 25, 2020.

© Howard Gardner 2020

I thank Shelby Clark, Ashley Lee, Kirsten McHugh, Danny Mucinskas, and Ellen Winner for their helpful comments.
