Palantir and the Two Forms of Synthesis

by Howard Gardner

Until recently, only those “in the know” had heard of the corporation named Palantir. But of late, it has come into the spotlight. For investors, on October 1, 2020, Palantir made its public debut on the New York Stock Exchange, with a market value in the neighborhood of twenty billion dollars. For newspaper readers, on October 25, 2020, Palantir was the cover story in the Sunday Magazine of The New York Times.

What is it? Palantir is a company that specializes in data analysis. It takes huge amounts of data, in almost any area, and, using artificial intelligence (AI) algorithms, organizes the data in ways that its clients find useful. According to The Economist of August 29, 2020, “The company sells programs that gather disparate data and organizes them for something usable for decision-makers, from soldiers in Afghanistan to executives at energy firms.” Then, in characteristic Economist fashion, comes the wry comment: “More than a technology project, it is a philosophical and political one.”

To this point, most of Palantir’s work has been for governments—most notably the United States government (particularly the CIA and the Defense Department), but for other governments as well—though only those believed to be friendly to the interests of the United States. While Palantir’s actual work is kept secret, the company is widely believed to locate sensitive targets (including Osama bin Laden, as well as undocumented immigrants and criminals on the run); identify regions that are dangerous for US soldiers or local police; trace the locations and spread of diseases (like COVID-19); and locate markets for commercial products. Of course, approaches used for one purpose in one place can be re-purposed for use elsewhere.

Palantir is the brainchild of two individuals. Peter Thiel, hitherto the better known of the two, was a co-founder of PayPal and is also one of the few Silicon Valley executives to have publicly supported Donald Trump’s 2016 campaign for the presidency. Alex Karp, a law school graduate with a doctorate in political philosophy from Goethe University in Frankfurt, describes himself as a person on the left of the political spectrum.

Not surprisingly, given the mysterious work that it does and the apparently different political leanings of the co-founders, there is a lot of chatter about whether Palantir does good work. One is reminded of the debate over whether Google lives up to its onetime motto, “Don’t be evil.”

But to ask whether a company does good work is to commit what philosophers call a “category error.” 

First of all, though the Supreme Court may consider a corporation to be an individual (Citizens United v. Federal Election Commission 2010), that characterization makes no sense in common language or—in my view—in common sense. Companies make products and offer services, but who asks for these and how they are used cannot be credited to or blamed on the company per se. For over a century, General Motors (GM) has built motor vehicles—but those vehicles could be ambulances that transport the injured to hospitals or tanks that are used to wage unjustified wars. For over half a century, IBM has sold computers, but those computers could be used to track health factors or to guide missiles.

Second, even determining precisely what a company does, and to or for whom, may not reveal whether the work itself is good or bad. That decision also depends on what we as “deciders” consider to be good—is the missile being aimed at Osama bin Laden or Angela Merkel or Pope Francis? Do we think that none, some, or all of these individuals should be so located and then murdered? Is the hospital being used to treat those with serious illnesses or to hide terrorists? Indeed, despite the red cross on display, is it actually a hospital?

This is not to invalidate the idea of corporate social responsibility—but even if the leadership of a corporation is well motivated, it can scarcely prevent abuses of its products.

So far, my examples pertain to cases that can be understood by lay persons (like me). This is decidedly NOT the case with the work that Palantir does—work that I would call “synthesizing vast amounts of data.” The means of synthesizing are very complex—for short, I will call them “AI syntheses.” These synthesizing programs have been devised because the actual “data crunching” is so complicated and time consuming that human beings could not accomplish the task in human time. Even more concerning, it is quite likely that no one quite understands how the patterns, the arrangements, “the answers” have been arrived at.

And so I think it is important to distinguish between two kinds of synthesizing—what I call AI Synthesizing and Human Synthesizing.  It’s the latter that particularly deserves scrutiny.

First, AI Synthesizing:

Think: How do we distinguish one face from another, or group different versions of the same face? “Deep learning” programs can do so reliably, even if we can’t explain how they accomplish this feat. So, too, with winning at chess or “Go”—the program works even though we can’t state quite how. And, building up in complexity, there is the kind of synthesizing that Palantir apparently does—identifying markets for products, figuring out promising targets for attack or defense, or discerning the cause(s), the spread, or the cure(s) for diseases. The human mind boggles.

Work of this sort generates a variety of questions:

What is the purpose and use of the synthesizing?

Who decides which questions/problems are to be addressed?

Which data are included for analysis and synthesis, and which ones are not?  How is that determination made?

By which algorithms are the data being clustered and re-clustered? 

Can the parameters of the algorithm be changed and by whom and under what circumstances? 

Will the data themselves (and the algorithms used thereupon) be kept secret or made public?  Will they be available for other uses at other times?

Importantly, who owns the data?

Which individuals (or which programs) examine the results/findings/patterns and decide what to do with them? Or what not to do? And where does the responsibility for consequences of that decision lie?

Who has access to the data and the synthesis? What is private, public, destroyable, permanently available?

What happens if no one understands the nature of the output—or how to interpret it?

These questions would have made little sense several decades ago; but now, with programs getting ever more facile and more recondite, they are urgent and need to be addressed.

Here’s my layperson’s view:  I do not object to Palantir in principle. I think it’s legitimate to employ its technology and its techniques—to allow AI synthesis.

Enter Human Synthesis.

With regard to the questions just posed: I do not want decisions about the initial questions or goals of the enterprise, the relevant data, or the interpretation or uses of results to be made by a program, no matter how sophisticated or ingenious. Such decisions need to be made by human beings who are aware of and responsible for the possible consequences of these “answers.” The buck stops with members of our species, not with the programs that we have enabled. The fact that the actual data crunching may be too complex for human understanding should not allow human beings to wash their hands of the matter, or to pass responsibility on to strings of 0s and 1s.

And so, when I use the phrase “human synthesis” I am referring to the crucial analysis and decisions about which questions to ask, which problems to tackle, which programs to use—and then, when the data or findings emerge, how to interpret them, apply them, share them, or perhaps even decide to bury them forever.   

For more on human synthesis—and the need to preserve and honor it in an AI world—please see the concluding chapters of my memoir A Synthesizing Mind.


Michael Steinberger, “The All-Seeing Eye,” The New York Times Magazine, October 25, 2020.

© Howard Gardner 2020

I thank Shelby Clark, Ashley Lee, Kirsten McHugh, Danny Mucinskas, and Ellen Winner for their helpful comments.


7 Comments on “Palantir and the Two Forms of Synthesis”

  1. Ethan Zuckerman November 13, 2020 at 5:30 pm #

    I am glad you’re taking on big questions like whether a Palantir can be a force for good in society and trying to draw distinctions between human and AI synthesis. The solution you are offering is close to one often referred to in the AI field as “human in the loop”. When the US military uses AI systems to identify targets, the process is not entirely automatic. First, humans decide to search for targets in particular places and ask the AI systems to identify candidates. Then a human signs off on the targeting decision before weapons are fired. It makes sense – as much as we might trust these systems, we want humans to evaluate and ensure that they understand the decisions that are being made and can justify them, for instance, if the US is accused of war crimes.

    But here’s the problem – systems with humans in the loop are often used irresponsibly. My favorite example of this was documented by my friend Julia Angwin for ProPublica. She and her team analyzed tools used for “judicial risk assessment”. These tools are designed for a classic human-in-the-loop scenario – they are intended to help judges set bail for criminal defendants. Based on statistical models, the systems estimate the risk that a defendant will reoffend, giving the judge data to inform her decision whether or not to grant bail.

    Angwin looked at the results of a system widely used in Florida and discovered that the tool had a decided racial bias. White defendants were almost always classified as unlikely to reoffend, and therefore good candidates for bail. Many more Black defendants were flagged as at risk to reoffend. Angwin follows the cases to demonstrate that these scores are not very accurate, that many of the predictions about the defendants did not come to pass, and notes that judges are likely to rely on these scores because the algorithm lends a sense of objectivity and fairness to an otherwise subjective process.

    So here’s the thing – the algorithms Angwin and her team audited might have been competently built. They may have accurately extrapolated from data that Black Americans are more likely to reoffend than White Americans. But “reoffending” means that you were arrested again for committing a crime. We know that Black communities are systematically overpoliced in the US. We know there are serious, valid concerns that Black men in particular are more likely to be charged with crimes that people in other demographic categories might be allowed to walk away from. (Consider the armed White men who’ve shown up to “patrol” BLM marches and received thanks and bottles of water from police.)

    AI systems extrapolate from data. Data about the criminal justice system reflects systemic racial biases. Contemporary AI systems encode and perpetuate these biases. In other words, they tell us that a defendant is a poor candidate for release because – within the existing policing system with its tendency to overpolice Black men in majority Black areas – he is likely to reoffend. But is he less deserving of bail than a comparable White defendant? Or does his treatment by the AI algorithm reflect the picture of the world as it is rather than a more just and fair world?

    That’s the complaint I and many others have about existing AI systems. Even when they work – and it’s very hard to disentangle reality from inflated claims – they may be working in ways that entrench existing power structures. I feel that it’s critical to do the sort of auditing work that Julia does, in part because we’re otherwise outsourcing exactly the sort of human synthesis you are favoring. A judge who sees an endless line of Black defendants is likely to develop a synthesis over time that something is terribly wrong in our communities of color and how they are policed. But a judge who is distanced from those defendants by algorithms that provide guidance may never develop that synthesis.

    • Howard Gardner November 13, 2020 at 7:56 pm #

      Thanks, Ethan, for that very thoughtful comment. It seems like there will need to be constant interplay between the human beings and the algorithms and that it’s risky to depend on one set of human judges and on one specific algorithm. What appears at first blush to be simple and straightforward turns out to be complex.

  2. Jonathan Zittrain November 13, 2020 at 5:31 pm #

    This jumped out to me in your blog entry: “Companies make products and offer services, but who asks for these and how they are used cannot be credited to or blamed on the company per se. For over a century, General Motors (GM) has built motor vehicles—but those vehicles could be ambulances that transport the injured to hospitals or tanks that are used to wage unjustified wars. For over half a century, IBM has sold computers, but those computers could be used to track health factors or to guide missiles.”

    One of the big technical transformations I’ve been fascinated with over the past thirty years has been the transformation of so many things from products to services. GM might build cars that can be used any number of ways once they’re driven off the lot, but/and Tesla today knows so much more about how and where people are driving — and could easily shut down the engine under any number of pre-specified (or just-in-time specified) conditions. I think we’re moving into an era in which companies will be praised or blamed much more on the basis of what their customers do.

  3. Howard Gardner November 13, 2020 at 8:00 pm #

    Thanks, Jonathan, for sketching that transformation in vivid terms. Elon Musk of Tesla can play the role of ‘big brother’–and the rules that Musk follows (or contrives) in that role add a third layer to the traditional ‘manufacturer-user’ dyad.

  4. Katie Davis November 16, 2020 at 4:40 pm #

    I think the points Ethan raises are spot on, and reflective of the arguments made in Ruha Benjamin’s book Race After Technology.

    I agree that human synthesis is important — it also matters which humans are doing the synthesis (and the particular biases, values, and blinders they bring) and at what point in the process they’re doing the synthesis (are they feeding data to the algorithm? interpreting the results?).

  5. Michael Carr December 23, 2020 at 3:44 am #

    Mr Gardner, first, a pleasure to meet a like mind!!

    I am here due to a curious mind and a school assignment regarding MI, which I find fascinating in its own right. I can’t swear to it, but as a DOS guru from the ’90s, I do believe I’ve run across the name Palantir sometime in the past couple of decades…
    Anyway, to join Mr Zuckerman’s party, I don’t really see what the big issue is… as always in life, it simply comes down to balance. The whole point of automation is the removal of the human factor, yet of course there are always situations where human judgment must decide, due to the stakes or potential outcomes—usually people’s lives. So why is it so hard to simply engage a system of integration in which the AI makes a call that is reviewed by a human, who has total power to contravene, but also total responsibility for justifying contravention? No human system is ever perfect, but I should think for the most part, problem solved…?
    This sort of approach would also, in all likelihood, increase the impartiality of any system it is applied to. Data doesn’t lie, making AI well positioned for analysis and recommendation; but hardware and software both glitch, data abnormalities can cause aberrant results, and simple oversight may generate an inappropriate result. Perhaps there are peripheral circumstances that warrant a more ‘human’ approach (a woman repeatedly runs over her spouse with a car; it is then discovered to be the result of years of serious abuse, mental distress, etc.). Since we are largely talking about affecting human lives, I see it as a base responsibility of a society to take at least that much responsibility!
    But to quote D. Miller, “that’s just my opinion, I could be wrong.”
    ~Michael C

  6. Howard Gardner December 23, 2020 at 7:55 pm #

    Thanks for reading and for your very reasonable comments. Looking back at my essay, the only thing I would add is that the interaction between humans and AI needs to continue indefinitely. In the best case, we humans will get ‘smarter’ about deploying AI syntheses, and those syntheses, in turn, will be informed and improved by earlier human contributions and critiques. Best wishes, Howard
