IUS (10/26/13, 10am: Chicago)
The Palmer House Hilton in Chicago, Illinois, USA
October 25-27, 2013
The following paper figures as an entry into a series of reflections I’ve loosely titled “The Ethics of Algorithms.” In broad terms this title targets philosophical and ethical meditations on the problems and possibilities inherent in new technologies. To show my hand a bit, I’ll say that the major contours of these investigations have been shaped by two prongs of influence. On the one hand, I draw a good deal from the phenomenological critiques of technology and technization in Edmund Husserl (Crisis) and Martin Heidegger (The Question Concerning Technology), in which technology opens up new possibilities for human living but also covers over and (surreptitiously) obscures our distance from familiar ways of human fulfillment; on the other hand, I draw on the work of Emmanuel Levinas and others who have used both the possibilities and impossibilities of phenomenology to describe what is distinctive about the ethical relation and ethical experience. These two prongs have inspired a number of smaller-scale, topical investigations that use phenomenology to better understand the novelties of new technology for life, and how that novelty can be better incorporated into life -- or in some cases how that novelty must be censured -- in the interests of a wholesome, flourishing, and ethical existence for us all.
To tighten the scope for today’s paper, I will be speaking on the novelty of drone warfare and how that novelty reshapes the topology of just warfare. I’ll take the term ‘drone’ to signal the somewhat clumsy net of descriptors ‘lethal’, ‘unmanned’, and occasionally ‘autonomous’ ‘system’. The clumsier description reminds us that the novelty we are concerned with here need not be instantiated in the flying, bulbous headed vehicle that has become synonymous with the term ‘drone’ in the popular press -- a whole slew of design alternatives are possible. Each design presents a different set of functionality and a correlatively adjusted set of necessary ethical considerations. For the sake of time I will focus on the type of drone familiar to the popular press. That is, possessing the following core characteristics:
This set of characteristics will allow me to tighten my scope further to a contrastive phenomenology of human comportment in live combat scenarios and human-cum-robotic comportment in drone attack scenarios, wherein the soldier remotely operates a mobile, airborne, directly or indirectly lethal machine. The latter form of combat, as is widely reported, is on the rise. So our guiding question will be whether the ethical discourse of just warfare has kept pace with the novelty of drone warfare. And the groundwork for such a question will be laid with a phenomenological examination of the acts of consciousness involved in these different types of battlefield engagement.
A key insight of Husserlian phenomenology is that every act of consciousness is conscious of some thing. In Husserl’s Cartesian Meditations this insight is glossed in the following manner: “Each cogito, each conscious process, we may also say, “means” something or other and bears in itself, in this manner peculiar to the meant, its particular cogitatum” (CM, 14). This means, in simpler terms, that every subject (ego) in his or her conscious and more or less attentive experience is directed towards some correlative object (cogitatum) of his or her concerned and more or less attentive consciousness. And this directedness “means” the object in a manner peculiar to the act (the style of the cogito), constituting the object according to that style.
In schematic terms,
ego → cogito → cogitatum
I (ego) see (the style of cogito is perception) the bird (cogitatum).
My conscious awareness is directed towards the bird in the specific manner of a visual perceiving. And this act of consciousness conforms to the basic model illustrated above. We could vary several elements of the formula and still hold true to the model. Perhaps I was not perceiving a bird, but rather a plane: thus, one cogitatum is switched out for another. Or perhaps I was not perceiving the bird, but rather remembering it, or dreaming it, or expecting it: thus, one style of cogito is switched out for another. What cannot be varied, however, is the correlative relation between the ego and the cogitatum -- without such a correlation it could not be said that consciousness is conscious of any thing at all. Husserl’s insight, therefore, serves as a foundational and universal insight that reminds us of the minimum threshold qualifications for an act of consciousness to be conscious.
(It’s worth mentioning that Husserl defers any metaphysical or ontological questions regarding the ultimate reality of the cogitatum. He’s not interested in charting out “what there really is” in the world, by virtue of, or in spite of, what we think we know about the world in advance of a phenomenological investigation. The hope is that a frank look at what phenomena present themselves in acts of consciousness -- whether the phenomena are empirical attributes or ideal objectivities -- will provide the philosophical clarity requisite to ask substantive questions about the subject matter of any subsequent investigation. In other words, we want to get clear on what it is we’re asking about before we get too far in the asking.)
How does such a phenomenological approach get us further down the road in regard to the ethical questions I want to put to drone warfare? I propose that the mainstream discourse on drone warfare fails to make important phenomenological distinctions between different types of warfare and consequently uses an ill-fitting “one size fits all” approach that levels over the precise sort of differences that need to be acknowledged in order to conform as closely as possible to an ideal of ethical employment of drone warfare technology.
But why should I think there is a problem at all with the mainstream discourse of drone warfare? Let’s look to a popular author on the subject for an example of the kinds of phenomenological obscurities and slippages that could result in a drone-equipped military force not effectively self-monitoring its practices with respect to ethical and humanitarian concerns.
In his widely-cited article “The Case for Ethical Autonomy in Unmanned Systems” Ronald Arkin concludes with a bulletpoint list of obstacles that stand in the way of wider, more effective, and more ethically-sound employment of drone weaponry (or, more precisely, lethal autonomous unmanned systems). The list details an assortment of issues technical, ethical, political, and more. For our purposes we are interested in a consistent conceptual slippage, between what counts as an ethical problematic and what counts as a technological problematic, that ranges over the entire list. This slippage is variously expressed:
1) The transformation of International Protocols and battlefield ethics into machine-usable representations and real-time reasoning capabilities for bounded morality using modal logics. (Can such transformation take place without an essential loss of meaning?)
2) Mechanisms to ensure that the design of intelligent behaviors only provide responses within rigorously defined ethical boundaries.
3) The development of effective perceptual algorithms capable of superior target discrimination capabilities, especially with regard to combatant-noncombatant status.
This partial selection is adequately representative of the conceptual slippage between what amounts to a technical problem and what amounts to a problem of ethical decision making. The technical problems, on the one hand, are framed within the language of algorithms, code, and robotics. The technological solution to the technological problem will involve tweaking the code to more precisely target objects along X, Y, Z axes in a three-dimensional, mathematized space at the right point in time. But nothing involved in this bid for greater technological precision begins to broach the ethical concerns of, say, distinguishing between combatant and non-combatant status. It’s a fair question to ask whether coding ever more “machine-usable representations” brings the technological and ethical dimensions of the problem into greater alignment, or exacerbates the distance between them to a greater degree. Although the tone of the policy makers on this front has been one of general optimism of the former sort, anyone who has ever suffered the rage-inducing task of trying to conform their humanity to the brutalist outline demanded by automated telephone customer service exchanges -- and that’s most of us -- should be willing to entertain the latter. That is, that increased automation makes ethical action less probable, not more probable.
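The slippage can be made concrete with a deliberately crude sketch. Nothing below corresponds to any real targeting system; the function, its name, and its thresholds are all hypothetical, meant only to show how a question of ethical judgment is displaced into a question of parameter tuning:

```python
# A deliberately crude, hypothetical sketch of a "machine-usable representation"
# of combatant status. Everything here is invented for illustration; no real
# system is being described.

def classify_target(confidence: float, threshold: float = 0.9) -> str:
    """Label a detection 'combatant' or 'non-combatant' by a score alone."""
    return "combatant" if confidence >= threshold else "non-combatant"

# Greater technical "precision" here means adjusting numbers, not deliberating:
print(classify_target(0.95))                  # combatant
print(classify_target(0.80))                  # non-combatant
print(classify_target(0.80, threshold=0.75))  # combatant: the ethical
# distinction has migrated into a tunable parameter
```

Whatever values the threshold takes, the ethical question of who counts as a combatant has not been answered inside the code; it has been settled elsewhere, in advance, and encoded as a number.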
Problem: mainstream analyses of drone warfare misidentify the object of the intentional relation implicit in the remote human guidance of drone systems; consequently, they mis-describe the relation between the human operator and the mission target; and, in ultimate consequence, they misidentify the parameters within which ethical decisions can be meaningfully made.
To ameliorate such phenomenological misunderstanding, I propose two brief phenomenologies of human versus human-cum-robotic mission performance to better discern what is the proper object of each experience, and how that object is experienced. These two phenomenologies (which obviously could and should be expanded for greater levels of detail) will be followed by a gesture towards a perhaps impossible third phenomenology of “pure” robotic comportment.
First, normative human comportment in a battlefield scenario. We’ll choose the somewhat outdated exemplar of real physical combatants in an outdoor environment. (Outdated not because there are no longer such examples of live combat, but rather because such combat has become circumscribed within a much larger nexus of real-time command and control functionality that exceeds the immediate borders of the battlefield. However, this example will serve as a useful test case, as will become clear.) For every mission, there will be some sort of mission objective. The mission objective assumes the status of normative aim for all the soldiers involved in the mission. Each soldier may have a specific orientation towards the objective peculiar to his or her training -- that is, the separate duties of communications, navigation, or reconnaissance -- and in each case the individual soldier brings his or her training to bear on the situation at hand in view of attaining the general objective. The experience of the battlefield will, therefore, in each case, be conditioned by this habituated training: the communications officer will or will not find himself in a signal deadzone, the navigator will or will not find the terrain passable, the soldier charged with reconnaissance will or will not find a good vantage point, and so on. Thus, there seem to be several layers of conscious attentiveness operating simultaneously for the average live combat soldier, which can be divided into two main categories: (1) the baseline perceptual awareness of a soldier in a more or less alien environment, and (2) the habituated training the soldier brings to bear on the situation. In experience these two layers are synthesized into a unity held together by the general mission objective. For example, it makes no sense to say that the soldier charged with reconnaissance perceives the “high hilltop”, on the one hand, and separately “the good vantage point”, on the other hand.
He or she perceives both the “high hilltop” and “the good vantage point” in the same act of awareness, animated by the normative mission objective. In a sense, one “sees” the goal.
This unity of experience is important because if the motivation of the mission objective is embedded within the directed perceptual awareness of the soldier, then, by extension, the path to achieving the mission objective will also be found within acts of motivated perception. Obstacles that fall in the way of achieving mission success, for example, will or will not be mitigated by the real-world, kinaesthetically motivated actions of an embodied soldier. The communications officer that can’t get a signal will find higher ground, the navigator that judges a river impassable will seek an alternative route, and so on. In each case what constitutes effective negotiation of the battlefield in pursuit of the mission objective is a synthesis of habituation and perception. We have not yet mentioned an objective so serious as killing, but it’s clear that the objective of killing enemy combatants in a live combat scenario would follow the same pattern: the enemy combatants would have to be located, evaluated, eliminated, and verified as dead within real space and time by more or less proximate, real, embodied soldiers on the basis of motivated perception of the battlefield.
To summarize this according to the Husserlian model of conscious awareness we’ve referred to earlier, we can say that human live battlefield comportment is a complex constellation of acts that synthesizes real perceptual awareness with general and mission-specific training habituation, directed towards the overall aim of achieving the mission objective. Schematically, it might look like this:
Ego (soldier as subject) → cogito (soldier as real perceiver) → cogitatum (mission objective)
Clearly, there are a great many intermediate steps within the ‘cogito’. The soldier could not be so directly engaged upon the mission objective as ‘cogitatum’ lest the immediate exigencies of the unfolding mission pass by him or her unnoticed. Necessarily the soldier will more often be concerned with the proxy and intermediate goals achieved along the way of the mission, each of which possesses its own ego-cogito-cogitatum. Nevertheless, these intermediate goals are meaningful only within the overarching aim of the mission objective.
Second, the human-cum-robotic mission. I will be briefer here because the relevant lines of contrast between the human and the human-cum-robotic are already in place. In the first instance, we have a synthesis of perceptual awareness and whatever acts are required to instantiate training habituation, be it recollection, anticipation, and so forth. What do we have in the second instance? Taking the average work environment of a drone operator as an example, we are presented with an interface of the human and the automated. On the human side, there is a soldier specially trained for the unique demands of drone operation. On the machine side, there is a panoply of screens detailing essential information regarding mission status: information related to the structural integrity of the drone, its course, its missile complement, and screens relaying tight-cropped, real-time video feeds from the drone’s perspective, according to the spectrum of human vision, infrared, or otherwise. The relevant point of contrast is this: whereas in the case of human comportment real human perception, with all its advantages and disadvantages, figured heavily in the advance towards mission success, in the case of the human-cum-robotic comportment of the drone operator real human perception is reduced to an almost negligible contribution. To be sure, the military still needs the drone operator to possess a full and adequate range of sensibility in order to register a complete uptake of all the relevant information the screens have to offer -- and there may even be a unique talent or learned ability for adequately perceiving a diverse panoply of screens. Nonetheless the contribution of real human perception toward mission success has become so minimal as to be restricted to the deciphering of information, presented in a flat, two-dimensional format, which the operator has been trained to expect and respond to in a number of predetermined fashions.
What is the difference between negotiating battlefield scenarios in a three-dimensional real world environment, on the one hand, and negotiating battlefield scenarios according to a two-dimensional, formalized proxy for the same three-dimensional real world environment, on the other hand? Phenomenologically, the difference is quite pronounced. What attentive phenomenological description reveals (more needs to be done here) is that the experience of the drone operator shares more in common with a data analyst dealing with abstracted and absolute values than it does with the experience of our conventionally understood soldier who applies his or her training to the “real world”. The greater role of abstracted and absolute values is, to some degree, the guiding stimulus for the mechanization of warfare because it gives us solid metrics on which to make the most serious decisions regarding the use of lethal force. Compare the unwavering read on a situation that a bank of screens gives the drone operator versus the uncertainties involved with a soldier on a real battlefield, of whom we can never know if his or her decision-making ability will hold up to the stress of combat. Remote warfare grants us a reprieve from this sort of uncertainty -- but the phenomenological distance between human comportment and human-cum-robotic comportment should give us pause as to whether this sort of uncertainty has been in itself neutralized, or whether it has been merely obscured by the impressiveness of advanced technology. As for the latter option, we should not consider ourselves above being unduly charmed by the promises of technology to the detriment of our better reason. There are many well-intentioned individuals that continue to defend the decision of the U.S. military to characterize boys as young as 15 as “military age” and therefore acceptable targets of drone attacks.
Such defenses have various motivations -- for our purposes, I’d like to suggest one of those motivations is the following: it is impossible to conceive or accept that a drone strike that is carried out with such technical precision is premised on such imprecise criteria. Technology leads military decision making and justification, in this case, by the nose.
I’ll conclude this portion of the paper with two main bulletpoints of the phenomenological difference between human battlefield comportment and human-cum-robotic battlefield comportment, situating this difference in the Husserlian model of intentional awareness.
First, there is a qualitative difference in the transition from human to human-cum-robotic comportment. This difference manifests itself in the quality of the cogito that negotiates the relation of the subject and his or her mission objective. With the increasing role of automation in warfare, real experience begins to be overtaken by symbolic experience.
Human Comportment: ego → perceiving synthesis (real) → cogitatum
Human-cum-Robotic Comportment: ego → deciphering synthesis (symbolic) → cogitatum
Second, there is an agentive difference of the absented ego (or, difference in the experiential situatedness of the subject) in the transition, which follows from the qualitative difference. When the soldier is confronted not by real life, but rather a reductive and impoverished symbology of anticipated signs and predetermined responses, his or her subjective experience is situated at a remove from the battlefield: not merely a physical remove, but also a cognitive remove. That is, what the symbology of drone operation means has been determined at some prior point in time by the group of individuals that designed the informational array as well as the procedures for engaging the array -- but the drone operator is not a member of this group of individuals. The agentive impact of the real drone operator is reduced to the point that it’s not clear whether the ego of the intentional relation is a determination of the soldier or whether the ego of the soldier has been absented to a degree that he or she becomes a mere agentive proxy for others.
In its most radical formulation, the absented ego becomes a proxy for nothing and no one at all (compare to robotic comportment, the purely executed algorithm). This is the place for a pure phenomenology of robotic comportment. In another paper I constructed a phenomenology of lethal, unmanned automated systems to gain a better understanding of how these machines constitute and parse their world. The phenomenology was in some sense doomed to failure because it’s impossible to fathom the experience of that which has no subjectivity through which to experience experience. But the failure is instructive insofar as it illustrates what it would be like for a soldier whose agentive impact was completely absented to the point that his or her mission performance more closely resembled the execution of code rather than the execution of training.
These observations are meant to be purely phenomenological observations, not yet a judgment on the ethicality of vicarious warfare. I hope, however, that the ethical implications are becoming apparent. If, for example, these two differences pass unnoticed then we are apt to apply our ethical considerations to the wrong sort of model, thereby producing the wrong sort of results. The basic question that motivates this section is: have policy-makers adequately grasped how the landscape of violent conflict has changed in the transition from human to human-cum-robotic comportment? The lack of clarity regarding what counts as a technical dilemma and what counts as an ethical dilemma suggests that the answer is no.
The Continuous Spectrum of Human-Robotic Comportment
(And the Possibility of a Tipping Point)
In this section, I’d like to pull back from the extremes of our twin, idealized poles of “pure” human comportment and “pure” robotic comportment. Whatever the prospects for phenomenologically describing such “purity”, it’s clear that most, if not all, of contemporary warfare falls somewhere between these two poles. That is, the technological revolution has woven automation into almost all combat procedures, though it is still a rarity that the human element is ever denied some decision-making role in the circuit of robotic lethal functionality.
And if we construe automation more broadly, beyond the confines of roboticization, towards a more instrumental or administrative sense, then we can observe that combat has long occupied this middle ground between the spontaneity of human comportment and the boundedness of automated comportment.
In instrumental terms, any piece of equipment we use to progress towards the mission objective modifies and delimits the originary openness of the soldier to his environment. If a soldier sports a rifle, for example, he will negotiate his environment according to the capabilities of his firearm. He will not seek his enemy in close quarters, nor will he want to be too far away if his orders are to engage the enemy. He will seek the appropriate distance in order to maximize his odds of success. This is not only sound strategy, but also, phenomenologically, the very way he parses the lived experience of combat. As with the soldier on reconnaissance who “sees” the hilltop as a “good vantage point”, so does the rifleman “see” the battlefield according to the constraints of his firearm and his training in its use. This “seeing” is a certain type of automation that quickens the pace of his combat responses -- but it also closes off certain avenues of sensitivity that might otherwise prove valuable (in matters of ethical consideration). And the same could be said for any type of instrument: bazooka, catapult, bow and arrow, bludgeon (and fist?).
In administrative terms, the increasingly rational organization of command and control functionality has narrowed the range of acceptable performance for the average soldier. Like the autoworker positioned along the assembly line, or the business employee squirrelled away in one office of one department of one division of a multibillion dollar corporation, the experiential purview of the modern soldier has become highly specialized. As with instrumental advances, more efficient administration quickens the response time, and presumably the effectiveness, of the soldier; however, we also observe a correlative closing off of avenues of sensitivity that might have proved useful in ethical considerations.
The question, then, is not whether the automation of robotic and human-cum-robotic warfare inaugurates some unconscionable procedure never before witnessed in the history of humanity. Rather, it is a question of marking out where these different styles of warfare fall along a continuous spectrum, bounded at either end by idealized poles of free and bounded human experience. There’s no exact method for marking out degrees of this spectrum, but our preliminary attempt at a contrastive phenomenology above at least lets us say that the case of human comportment in a live combat environment provides a phenomenologically richer sampling of the possibilities of human experience than the case of the drone operator, whose real perceptual functions have been restricted to the deciphering of symbols flashing across a screen. The drone operator, thus, experiences a world that is less nuanced, more restrictive, and in which he or she has less flexibility to exercise his or her moral discretion. The exercise of such moral discretion is not guaranteed in the case of the live combat soldier, but there is at least its possibility. In comparison the drone operator is faced with a nearly all-or-nothing ethical dilemma: either the information and intelligence received is correct and the kill command justified, or the entire procedure of formalizing violence to this degree is suspect and immoral. Soldiers being soldiers, drone operators put their trust in the former, but the increasing anecdotal evidence (is there a formal study?) of very high PTSD rates, as well as burnout, among former drone operators suggests the truth of the latter is felt if not openly conceded. “War is hell”, as the saying goes, but it becomes all the more hellish when the human element in it has become so marginalized that the soldier can no longer affirm the overriding justice of the difficult tasks he is asked to perform.
Pushed to its limit, human-cum-robotic comportment in warfare would be a kind of comportment in which the individual soldier could no longer testify to the ethicality of his or her actions insofar as his or her agency had been usurped by automation. This is the end of the ethical soldier.
No Individual Ethics; What’s Left?
(Industrialized Ethics; Freedom and Limit)
As my research into this topic is ongoing, I’ll finish with two areas for further investigation rather than a hard and fast conclusion.
First, these ethical considerations need to be redrawn according to (what I’ll call) an “industrialized ethics”. The marginalization of live and present human agency in ethical decision-making -- already inaugurated in the automation of instrumentality and administration, and rapidly accelerated by roboticization -- simply does not align with an individualistic ethics. At this late-stage of roboticized automation, the key ethical decisions are made in advance at the industry-level of military technology. That leaves plenty of questions for politicians and policy-makers, who will craft the “International Protocols and battlefield ethics”; plenty of questions for the engineer and programmer, who will design machines that convert the Protocols into “machine-usable representations”; and plenty of questions for critics, phenomenologists or otherwise, critical of the entire process. The drone operator, on the other hand, is left with very little to mull over.
Second, does ethical decision-making require any specific phenomenological type of experience, which might be consonant with some varieties of military comportment and dissonant with other varieties? In this paper I’ve resisted privileging a single type of experience, although I have privileged the richness of experience, in terms of the variety of acts it involves and the freedom of the agent to responsibly negotiate among them, over more impoverished experiences that marginalize the soldier’s agency and reduce his or her freedom of moral discretion. This privileging is based on the assumption that a soldier with the freedom to range over a greater and deeper variety of conscious acts will be in a position to make sounder ethical choices. This is clearly the case when the soldier is marginalized to a mere “cog in the wheel” with no insight into the meaning of his or her moral alternatives. But on the other hand there is likely also a limit point at which the soldier, granted too much freedom, will again falter in his decision-making. Finding the line which separates too little from too much freedom is essential to reviving the possibility of individualistic military ethics and the ethical soldier.