By Robert Folsom | May 17, 2012
The 2002 sci-fi movie Minority Report had a memorable plot: In the not-too-distant future, the government has the ability to foresee acts of crime. In turn, law enforcement efforts have shifted from investigation to prevention.
If you saw the movie you probably recall that, because the cops know who the criminals will be, they arrest and convict perpetrators before they commit their crimes.
Great idea for a storyline — and for a crime-free world, if it could be so.
Yet think with me for a moment about a “what if” scenario.
Instead of omniscient foreknowledge, let's suppose you could have the next closest thing: a system of "pre-crime" prevention that was 99.99% accurate, a tiny fraction short of perfect.
Would you want to live in that world? Or, if you were given the power to incorporate this near-perfect (0.01% margin of error) system in society, would you do it?
Let's take "what if" a step further. If you (or anyone) did impose the 99.99% pre-crime system on society, we can make credible estimates of what that 0.01% margin of error would include.
As we’ll see in a moment, a pre-crime predictive model in today’s world would seek to thwart terrorism. Governments would deploy it in transportation hubs around the globe. Millions of people would be screened each day.
If one person in every million is a terrorist, here is what our 99.99% model will do: flag roughly 100 people out of that million (0.01% of one million). One of them is the real terrorist, caught. The other 99 are INNOCENT individuals, arrested and convicted for that same crime.
Oops. Well, that's 0.01 percent for you. Sucks to be one of them. Check my math if it'll make you feel better.
This exercise reveals a counter-intuitive truth. But the math is simple. And it’s not exactly a stunning insight on my part. Any professional working on a big-data predictive model understands this problem. It’s known as the false positive paradox. That same professional also would be aware that to attain a 99.99% reliable forecasting model is as farfetched as the foresight depicted in Minority Report.
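For anyone who does want to check the math, the back-of-envelope arithmetic behind the false positive paradox can be sketched in a few lines. The numbers here are the article's hypotheticals, not figures from any real screening program: one terrorist per million travelers, and a model that misclassifies 0.01% of the people it sees.

```python
# Back-of-envelope check of the false positive paradox.
# Assumed (hypothetical) inputs: 1 terrorist per 1,000,000 travelers,
# and a screening model that is 99.99% accurate, i.e. it misclassifies
# 0.01% of the people it screens.

population = 1_000_000
true_terrorists = 1
accuracy = 0.9999
error_rate = 1 - accuracy  # 0.01%, or 1 in 10,000

innocents = population - true_terrorists
false_positives = innocents * error_rate     # innocent people wrongly flagged
true_positives = true_terrorists * accuracy  # the real terrorist, caught

# Of everyone the model flags, what fraction is actually guilty?
total_flagged = false_positives + true_positives
precision = true_positives / total_flagged

print(f"Innocents wrongly flagged: {false_positives:.0f}")
print(f"Chance a flagged person is guilty: {precision:.2%}")
```

Running this shows the core of the paradox: even a 99.99% accurate model flags about a hundred innocent people for every real terrorist, so the odds that any given flagged person is guilty are only around one percent. When the condition being screened for is rare, the error rate swamps the true positives.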
Not that any of this has deterred the Department of Homeland Security, which today is developing what it calls the “Predictive Screening Project.” According to its website, the program
“aims to derive observable behaviors that precede a suicide bombing attack and develop extraction algorithms to identify and alert personnel to indicators of suicide bombing behavior. The potential operational benefit is the increased ability to interdict Improvised Explosives Device (IED) threats further from the checkpoint with fewer resources.”
So if Homeland Security is not deterred by the false positives, the question becomes: Why not?
For starters, they know just how far law enforcement has already gone in recent years, in the shift from investigation to prevention — which is to say, a pre-crime mindset is already in place…
… As are pre-crime practices by law enforcement. The NYPD's stop-and-frisk policy I described in the May 8 Social Mood Watch is all about prevention, as Police Commissioner Raymond Kelly has said repeatedly. To be clear, cops on the street don't arrest (much less convict) every individual they stop and frisk. But the NYPD does accept an extraordinarily high number of false positives in order to apprehend a comparatively low number of lawbreakers.
Would Homeland Security’s big-data model be any different?
This pre-crime orientation is the perfect field of battle for the authoritarian and anti-authoritarian trends now unfolding in a time of negative social mood.
Other examples of the shift toward “pre-crime,” you ask? There are so many that in coming weeks I’ll be writing from a literal list (next up: The Supremes on Strip Search). Stay tuned.
Andrea Dibben contributed research.