
SHOULD WEAPONISED MORAL ENHANCEMENT REPLACE LETHAL AGGRESSION IN WAR?

In press. Israel Law Review. (watch talk here)

Some have proposed the development of technologies that improve our moral behaviour, moral enhancement, in order to address global risks such as pandemics, global warming and nuclear war. I will argue that this technology could be weaponised to manipulate enemy combatants’ moral dispositions. Despite being morally controversial, weaponised moral enhancement would be neither clearly prohibited nor easily prohibitable by the international law of war. Unlike previous psychochemical weapons, it would be relatively physically harmless. I will argue that when combatants are liable to lethal aggression to achieve an aim of war, they are also liable to weaponised moral enhancement to achieve that same aim. Weaponised moral enhancement would loosen just war requirements in both traditional and revisionist normative just war theories. It would particularly affect revisionist theories’ jus ad bellum requirements for humanitarian and preventive wars. For instance, weaponised moral enhancement could be more proportional and efficacious than lethal aggression in effecting institutional changes in preventive and humanitarian wars. I will conclude that despite evading international laws of war and loosening normative just war requirements, the intuition that weaponised moral enhancement would gravely harm combatants can be defended by arguing that it would severely disrupt personal identity, which could potentially ground future prohibitions.

2021. Journal of Value Inquiry, online 29 Oct, 1–20.

The assumption that human enhancement could produce beings with higher moral status than our own (supra-persons) is often accompanied by concerns that doing so will be catastrophic to unenhanced persons. Many continue to defend the creation of supra-persons, including several who accept the potential harm to the unenhanced. However, their justification for the permissibility of harms to the unenhanced is weakened once we consider that the enhancement of existing persons might undermine their individual interests. Enhancements that are substantial enough to increase moral status will likely be substantial enough to affect psychological continuity. Certain individual interests are sensitive to losses of psychological continuity. Although the creation of supra-persons might lead to beings much better off than we are, it may still be against our interests because such beings might not be our psychological continuants. This objection could be addressed by making our replacement by supra-persons incremental.

2021. AJOB Neuroscience, 12(2–3), 89–102. (watch talk here)

Our present moral traits are unable to provide the level of large-scale co-operation necessary to deal with risks such as nuclear proliferation, drastic climate change and pandemics. In order to survive in an environment with powerful and easily available technologies, some authors claim that we need to improve our moral traits with moral enhancement. But moral enhancement is prone to produce paradoxical effects, to be self-reinforcing, and to harm personal identity. These risks require the use of a safety framework; such a framework should guarantee practical robustness to moral uncertainty, empirical adequacy, a correct balance between dispositions, and preservation of identity, and it should be sensitive to practical considerations such as emergent social effects. A virtue theory can meet all these desiderata. Possible frameworks incorporate them to variable degrees. The social value orientations framework is one of the most promising candidates.

2020. Journal of Medical Ethics, 46(6), 405–411. [Selected for the Editor's choice list.]

A new argument has been made against moral enhancement by authors who are otherwise in favour of human enhancement. Additionally, they share the same evolutionary toolkit for analysing human traits, as well as the belief that our current morality is unfit to deal with modern problems such as climate change and nuclear proliferation. The argument, put forward by Buchanan & Powell, states that other paths to moral progress are enough to deal with these problems. Given the likely costs and risks involved in developing moral enhancement, this argument implies that moral enhancement is an unpromising enterprise. After mentioning proposed solutions to such modern problems, I will argue that moral enhancement would help to implement any of them. I will then detail Buchanan & Powell's new argument against moral enhancement and argue that it makes overly bold assumptions about the efficacy of traditional moral progress. For instance, it overlooks how difficult that progress was to achieve even in relatively successful cases such as the abolition of slavery. Traditional moral progress is likely to require assistance from non-traditional means in order to face new challenges.

2020. Neuroethics, 14, 269–281.

I will argue that deep moral enhancement is relatively prone to unexpected consequences. First, I argue that even an apparently straightforward example of moral enhancement, such as increasing human co-operation, could plausibly lead to unexpected harmful effects. Second, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects at the group level. Third, I contend that in so far as deep moral enhancement targets motivation, it is prone to be self-reinforcing and irreversible. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would be likely to see a decrease in co-operation between groups and an increase in the disposition to engage in further modifications, both of which are potential problems.

2017. Cambridge Quarterly of Healthcare Ethics, 26(3), 431–445. (with Anders Sandberg)

How individuals tend to evaluate the combination of their own and others' payoffs (their social value orientations) is likely to be a potential target of future moral enhancers. However, the stability of cooperation in human societies has been buttressed by evolved mildly prosocial orientations. If they could be changed, would this destabilize the cooperative structure of society? We simulate a model of moral enhancement in which agents play games with each other and can enhance their orientations based on maximizing personal satisfaction. We find that, given the assumption that very low payoffs lead agents to be removed from the population, there is a broadly stable prosocial attractor state. However, the balance between prosociality and individual payoff-maximization is affected by different factors. Agents maximizing their own satisfaction can produce emergent shifts in society that reduce everybody's satisfaction. Moral enhancement considerations should take the issues of social emergence into account.
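The kind of model described in this abstract can be illustrated with a minimal agent-based sketch. Everything specific below is an illustrative assumption, not the paper's actual simulation: the public-goods-style game, the angle representation of social value orientations (0 = purely selfish, π/2 = fully altruistic), the hill-climbing "enhancement" rule, and the removal of the lowest earner are all hypothetical choices made for the sake of the example.

```python
import math
import random

class Agent:
    def __init__(self, angle):
        self.angle = angle      # SVO angle: 0 = selfish, pi/2 = fully altruistic
        self.payoff = 0.0

    def satisfaction(self, own, other):
        # SVO-weighted satisfaction: orientation weights own vs other's payoff.
        return math.cos(self.angle) * own + math.sin(self.angle) * other

def interact(a, b):
    # Public-goods-style game: more prosocial agents contribute more;
    # contributions are multiplied and the pot is split evenly.
    ca, cb = math.sin(a.angle), math.sin(b.angle)
    pot = 1.5 * (ca + cb)
    pa = (1 - ca) + pot / 2
    pb = (1 - cb) + pot / 2
    a.payoff, b.payoff = pa, pb
    return pa, pb

def step(pop, rng, noise=0.1):
    rng.shuffle(pop)
    for a, b in zip(pop[::2], pop[1::2]):
        pa, pb = interact(a, b)
        # "Moral enhancement" as self-interested tuning: each agent adopts a
        # randomly perturbed orientation if it would have been more satisfying.
        for ag, own, oth in ((a, pa, pb), (b, pb, pa)):
            trial = min(max(ag.angle + rng.uniform(-noise, noise), 0.0), math.pi / 2)
            if math.cos(trial) * own + math.sin(trial) * oth > ag.satisfaction(own, oth):
                ag.angle = trial
    # Selection: the lowest earner is removed and replaced by a copy of the
    # highest earner, mirroring the removal of very-low-payoff agents.
    pop.sort(key=lambda ag: ag.payoff)
    pop[0] = Agent(pop[-1].angle)

rng = random.Random(0)
population = [Agent(rng.uniform(0, math.pi / 2)) for _ in range(40)]
for _ in range(200):
    step(population, rng)
mean_angle = sum(a.angle for a in population) / len(population)
```

Tracking `mean_angle` over time is one way to see whether the population drifts toward or away from a prosocial attractor under a given game and selection pressure; the interesting dynamics in the paper arise precisely because satisfaction-maximizing individual updates and population-level selection can pull in different directions.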

2013. Analysis and Metaphysics, 12, 105–115.

Certain problems with standard two-dimensional semantics are addressed, and cases in which these problems arise are explored. In such cases the primary intension cannot be univocally mapped onto one and only one indexical world, so standard two-dimensional semantics cannot adequately address the problems presented. Subsequently, a modified model is presented in which these problems are averted in the replicated cases. This modified model admits primary intensions that are not univocally mapped. The conclusion discusses the advantages and disadvantages of the modified model and analyzes its possible consequences for the philosophy of mind.
