
RESEARCH

I research the ethics of new technologies. Humanity has failed to handle several past technological advances ethically, nuclear and chemical weapons among them. As technology progresses further, ethical analysis becomes increasingly urgent. The central issue my research addresses is the tension between the high desirability of new technologies and their extreme risks.
For instance, if humanity ever develops a safe drug that dramatically increases co-operation between individuals, should we all use it? Technologies that improve moral behaviour – moral enhancement – seem desirable. However, my research indicates they could backfire. I currently investigate moral enhancement’s risks relating to personal identity, moral status, and complexity, while developing a virtue-oriented safety framework that addresses those risks. Next, I will extend my analysis to two other emerging technologies: worker enhancement and AI.


Past Research

My doctoral research focused on the long-term risks of attempting to improve human moral dispositions – that is, moral enhancement – and on developing frameworks to avoid those risks. During that period, I co-authored a publication in the Cambridge Quarterly of Healthcare Ethics investigating the societal effects of moral enhancement using social simulations.
In my current post-doctoral fellowship, I have published four articles in academic journals and have a fifth under review. A publication in the Journal of Medical Ethics addresses a new critique of moral enhancement that deems it unnecessary given that we already have traditional means of moral progress. I argued that this critique rests on overly bold assumptions about moral progress; for instance, it overstates the case for moral progress driving a decrease in violence. An article published in Neuroethics argues that moral traits are unusually fragile and, consequently, that moral enhancement is risky. An article in AJOB Neuroscience develops desiderata for a moral enhancement framework that deals with these risks, compares potential frameworks, and concludes that common proposals fare poorly. In the Journal of Value Inquiry, I argue that although the creation of drastically better persons might be desirable, this prospect is undermined by the fact that these individuals might not be our psychological continuants. Finally, the article under review formalises and defends the claim, which many find intuitive, that human morality is unusually complex.

Next Steps

I am working on three more articles; the first two will be submitted within a few months. The world, beset by a pandemic, stands more divided than before, and this division precludes not only an expedient crisis response but also the development of global and national social compacts that could address future threats. I am writing a manuscript analysing the past decade’s increase in social division, the erosion of the democratic social compact, and their interaction with the COVID-19 pandemic. I will propose two avenues for solutions informed by moral psychology: traditional institutional reform and technological interventions.
A second manuscript, whose draft was invited for submission to the Israel Law Review, will argue that weaponised enhancement technologies could be used in future wars against enemy combatants in morally abhorrent ways that would nevertheless elude current just war theories. Many transformative technologies were first developed by the military. If weaponised enhancements offer an effective and cheap way to achieve the aims of war, they are likely to be deployed once they become feasible. I will propose grounds for restricting their use that could motivate extensions to current war conventions.
Drastic enhancements can harm personal identity, but the reasons for enhancing can outweigh these harms: harms to personal identity affect individual interests alone, while many of the benefits of enhancement are impersonal. In a third manuscript, I will analyse a range of cases beyond enhancement, suggest how to balance harms to identity against impersonal benefits, and draw out the implications for normative ethics and metaphysics.


Long-Term Plans

After this first stage of research, I will work on the ethics of AI safety and worker enhancement. Developing powerful AI poses a plethora of design challenges, from formal architecture constraints to the selection of training data. Foremost among them is how to guarantee that these artificial agents will behave morally, especially if they come to surpass human capabilities. I will develop a virtue framework particularly suited to the problem of imbuing artificial agents with morality. Grounding these agents in virtues could prevent them from diverging from human values even if they undergo unguided learning and recurrent self-improvement. Focusing on virtues has three key advantages. First, virtues are inherently designed to inculcate agents with morality. Second, they are empirically grounded and fit well with neuroscientific data, so virtue-based agents would depart the least from human values. Finally, virtues carry modest theoretical commitments, so they do not exacerbate the risks of relying on a specific theory of the good as a guide under theoretical moral uncertainty. Additionally, my work on the risks of creating supra-persons can reveal unexplored issues in creating superintelligent artificial agents, even ones whose moral capacities radically surpass ours.
Various forms of enhancement are already used to improve work performance, with a prevalence of one in five among medical and academic professionals. Should these drugs be prescribed for worker enhancement? Should employers conduct drug tests to ban their usage? What are the consequences of the current unregulated market? These questions touch on issues ranging from the nature of the doctor-patient relationship in medical enhancement to equality of opportunity. I will argue that allowing, or even requesting, enhancements to meet job requirements that are also virtues is a superior alternative to the current unregulated market.
