posted by Joshua Foust on December 13, 2012 at 10:07 am
One of the biggest challenges in discussing the way the U.S. uses drones to target suspected terrorists is establishing basic data and agreeing on terms. The debate often rests on muddled and vague terminology, which produces a lot of assumption but very little analysis grounded in fact. In addition, the data supporting many public stances on drones is neither rigorous nor exhaustive, which makes drawing firm conclusions about the program difficult, if not impossible.
Put in the simplest terms, the drone debate is not actually about drones, per se – it is really about a broader policy of capturing or (far more often) killing suspected terrorists the U.S. government judges to be a threat. The drones used to carry out these targeted killings are just one weapons system the U.S. military and CIA use – others include helicopters, traditional piloted aircraft, cruise missiles, and even small teams of special operations forces (such as the SEAL team that killed Osama bin Laden).
By focusing so heavily on a single weapons system, many critics of drone strikes can’t see the forest for the trees. Referring to remotely piloted aircraft as “killer robots” (as some writers do) focuses attention on a single platform used to enact a policy, rather than on the policy itself. The policy of killing suspected terrorists is what matters, not the weapon used to carry it out.
Beyond the policy, however, is the issue of data. Few studies of drones and their effects use rigorous methods to collect and analyze data, and all studies of drones are severely hampered by the lack of useful data available: As a result of classification issues, the government will not publicly say when it launches a drone strike (or even a non-drone targeted killing).
Indirect measurements of the effects of drone strikes are incredibly difficult to carry out. In some areas, like the Federally Administered Tribal Areas, the Pakistani government will not permit researchers to visit strike locations directly. In others, such as Somalia, the environment is so volatile that physically getting to a strike location can be difficult.
Interviewing self-described drone victims is not a good way of researching drone effects, either. No published study interviewing supposed victims has included a weapons forensics expert who could correctly diagnose injuries as definitively coming from a weapon fired by a drone. Prominent studies like the NYU/Stanford report “Living Under Drones” rely on anti-drone activist groups in Pakistan to arrange interviews with supposed victims outside of the drone strike areas. There’s no way to know if their interviewees really experienced drone strikes or not.
Across the few case studies that exist, there are a few common elements: a reliance on the same databases and data points, the availability of only a limited range of metrics, and analytical weaknesses that potentially exacerbate data bias.
The lack of reliable data is arguably the biggest factor preventing experts from drawing even broad conclusions about lethal drone strikes. The most-cited public databases in studies, created by the New America Foundation, The Bureau of Investigative Journalism, and The Long War Journal, rely on media reports of drone strikes. Less well-known databases based at universities, like the UMass Drone project, also rely on media reports to assemble data.
Media accounts are a poor basis for generating rigorous data about drone strikes. Stories about drone strikes are almost never directly reported or confirmed (despite claims to the contrary). Depending on the perspective of the media outlet, reports on the same strike event can vary significantly: American media reports are likely to differ substantially from their Pakistani counterparts due to national perspectives alone, let alone willful manipulation for political and social purposes. In the absence of any government or official narrative, it is nearly impossible to reconcile the purported “facts” contained in any media report. Thus, such data are inherently biased, having been compromised at the source rather than during collection.
The U.S. government, in particular, is not open about its strike data. The drone program in Pakistan is so highly classified that the government cannot even acknowledge it exists. Because there are so few sources of data available, many analysts do their best to sift through the many incomplete, unverifiable, and often contradictory media reports of drone strikes in an effort to understand the effects of these weapons.
There is no reason to doubt the intentions of these analysts, but it is important to keep in mind that, ultimately, they are trading in guesswork. The lack of good data makes it difficult to understand the targeted killing program beyond polemics and advocacy.
One way to break the logjam is for the government to acknowledge the serious downsides of its program and respond to them. Whatever its effectiveness or utility, the targeted killing program in Pakistan has poisoned the bilateral relationship and spurred widespread anti-Americanism. In Yemen, too, this same program is creating anti-Americanism in some communities and alienating Yemenis from their government – the opposite of the program’s intended effects. When U.S. officials have publicly defended the use of drones, their comments have been as vague and inadequate as the many criticisms of the program.
Right now, the drone debate is more or less deadlocked: outside researchers try to piece together an incomplete picture of the program while the government issues ambiguous assurances that it is effective and legal. The uncertainty of the debate, its vague and unclear terms, and the severely incomplete data all hurt the public’s ability to understand and evaluate what its government is doing in its name abroad. More government transparency about data, processes, and methods would go a long way toward improving public dialogue and supporting the broad goals of U.S. foreign policy (namely, diminishing the ability of terror groups to strike).
This post originally appeared on OpenCanada.org.