Utilitarianism is the best-known theory in the consequentialist school of ethical thought. As such, it holds that actions can be judged moral or immoral based on their results. This contrasts with “deontological” theories of ethics, most commonly associated with religious thought on questions of “right” and “wrong,” which judge actions based on their nature in and of themselves.
The classic example given to illustrate the difference between these schools is decision-making in life-or-death scenarios. A utilitarian would be more likely to argue that killing one person to save many is the “right” thing to do, while someone arguing on deontological grounds would more likely forbid the act of killing regardless of the consequences.
Utilitarian thinking is generally traced back to the 18th-century thinker Jeremy Bentham, who laid the groundwork both for the utilitarian project as a whole and for “rule” utilitarianism, which aimed to create general rules to avoid the problems associated with individuals’ judgment and foresight. The term itself was popularized by John Stuart Mill, who further developed Bentham’s theories almost a century later. Mill distinguished between “higher” and “lower” pleasures to head off accusations that utilitarians would be satisfied with a world of simple, hedonistic pleasures over one in which people seek truly meaningful and fulfilling lives.
Many others have since sought to alter and improve on this form of ethical theory, and there have been numerous critiques of utilitarian thinking over the years. These include the huge margins of error in predicting future events; qualms about the heinous acts that utilitarians may be forced to permit under certain circumstances; and the question of what should be done if the “balance” of utility falls into the negative. This last concern carries the potential for a descent into anti-natalist philosophy, such as that of David Benatar: if there is inevitably more suffering than happiness in existence, the best thing to do is to end sentient existence.
All of the above are worthwhile ideas to explore, and to my knowledge there have been adequate responses to keep the utilitarian ideal at least partially alive so far. However, there are two major issues that I have not seen directly discussed elsewhere, and they seem to expose serious flaws in certain areas of the utilitarian project, especially if it is to serve as an all-encompassing ethical framework. These can be presented in the form of the following thought experiments.
A Never-ending Moment of Pure Bliss
The first of these two issues mostly concerns the concept of meaning. To be a utilitarian, one must hold that moral good or evil is necessarily linked to—as Sam Harris would likely put it—“the mental states of conscious beings.” However, this runs into trouble when we consider situations in which those conscious states might be manipulated, artificially generated, or otherwise bereft of content that we would deem meaningful. Such states could be induced, for instance, by some sort of magic or other supernatural force, such as Descartes’ “evil demon.” More relevant today, perhaps, is the prospect of generating them through artificial stimulation of our neural circuitry.
This issue has already been dealt with in some limited capacity by science fiction, with examples including The Matrix and Black Mirror. In The Matrix, the protagonists are willing to sacrifice a relatively comfortable simulated life, one that many of us would likely prefer over the hellscape that the “real” world has become in their reality. We are still, though, meant to agree that their decision to shake themselves out of the artificially generated dream/prison that is the Matrix, and their mission to “free” the rest of humanity from it, are just.
In Black Mirror, we see variants on the same theme across various episodes. “Men Against Fire,” for instance, has us hoping that the protagonist will choose to see the world for what it is rather than accept the sanitised version, despite the mental anguish this would cause him, though we hold this hope partly because it would stop him from continuing to take part in a genocide. In “San Junipero,” one of the few “techno-optimistic” episodes of the anthology series, the narrative leads us towards something like the opposite hope: that the characters will choose to live together for eternity in a sort of “cloud-based” paradise, their consciousnesses having been uploaded following the deaths of their organic bodies. This is despite a monologue earlier in the episode in which one of the characters challenges the meaning of such an existence.
The “San Junipero” episode described above starts us on the journey towards what eventually becomes a severe issue. The episode is geared towards having viewers take the utilitarian stance: that the characters’ continued existence in this new, blissful world is a desirable (“good”) outcome. There is already some conflict between this and the visions of the “good” that underpin the philosophies of The Matrix films and other Black Mirror episodes, including “Men Against Fire.” The utilitarian view seems to lose ground, though, when we take these scenarios to their potential extremes.
To examine these extremes, let us look at another relevant Black Mirror episode, “Black Museum,” in which a man is (falsely) convicted of a terrible crime and sentenced to death by electrocution. As in the world of “San Junipero,” in this episode’s version of reality, consciousness can be replicated in a synthetic form. Here, though, this allows the man’s final moments of agony to be repeated in a loop for all eternity, for the sake of making disturbing tourist trinkets. A horrible thought in and of itself, but the real problem for utilitarianism comes from the potential for this scenario to be inverted.
A utilitarian would be hard-pressed to find a problem with the “eternal paradise” scenario shown in “San Junipero.” However, one could make a utilitarian argument that it is not “perfect” enough: the consciousnesses would still suffer at times, simply because they have thoughts. So we could continue to narrow the scope of their experiences, especially if we allow for “looping.” Say the “San Junipero” simulation had an infallible ability to detect levels of happiness and decided that one specific year was someone’s best. It could simply repeat that year over and over, unbeknownst to them. Why stop at years? Or months? Weeks? Days? Hours? Seconds? Milliseconds? Nanoseconds?
The utilitarian would ultimately seem compelled to argue that the best possible outcome would be everyone living in a single moment (as short as a human can process) of the purest engineered bliss. If everything, including memories, can be engineered, how could we conclude anything else? It wouldn’t even need to be a moment unique to us as individuals. Thus a factory churning out an infinite number of inverted “Black Museum”-style trinkets must become, within the utilitarian framework, among the highest of positive outcomes to be sought after. This would, however, seem much closer to a dystopia than a utopia to most of us.
Sadists On An Island
The second major issue relates to the concept of justice and needs less explanation. Let us imagine that a group of pure sadists—with a history of murder, torture, and rape, and absolutely no remorse or desire for redemption—have somehow escaped captivity and washed up on a pleasant island where they can comfortably live out the rest of their lives. We know that they will never again encounter anyone from outside, nor does anyone know of their fate. Generally, our reaction will be that this scenario is unjust, especially when we start to really think about the details and populate the scenario with those we specifically abhor, whom we deem worthy of punishment.
However, a utilitarian would ultimately have to conclude that this scenario is positive—in fact, more positive than if these sadists had met whatever fate society had in store for them. Utilitarian approaches to justice usually involve arguments about the effects of policies on criminal behaviour or the wellbeing of victims, and so are focused on outcomes rather than pure retribution. They have a very hard time taking into account who is experiencing happiness, instead making the experience of it the moral good in and of itself. The problem is that retribution forms a core element of the concept of justice that most of us hold.
In the body of utilitarian thought, there seems to be more debate and variety in responses to this problem than to the first. One example might be to incorporate some sort of “virtue theory”-aligned version of “rule” utilitarianism, arguing that though the scenario is not “objectively” wrong, it is good that people think it is, because this generally leads them to act in ways that contribute to the greatest good overall. This still does not address the problem at its root, though. The issue is made even more pressing by the fact that it translates into real-world implications more readily than the first.
So where do we go from here? In short, I’m not sure. I would genuinely like to see a satisfactory answer to these two scenarios in utilitarian terms, but so far I have neither been able to generate one myself nor found one given elsewhere. The answer may therefore lie in a synthesis of utilitarian and other types of ethical thinking, which seems to have been the course taken in recent years by secular ethical philosophers, at least. Both Derek Parfit and Sam Harris have, for instance, attempted to incorporate utilitarian thinking as a component of a wider secular theory of ethics while identifying and acknowledging its flaws in their own works. What is certain is that if these issues remain unresolved—along with the various other criticisms that have not been satisfactorily answered over the years—utilitarianism cannot stand alone as a guide to the good.