I research the ethics of new technologies. Humanity has failed to handle several past technological advances ethically, as the histories of nuclear and chemical weapons show. As technology progresses, ethical analysis becomes increasingly urgent. The central issue my research addresses is the tension between the high desirability and the extreme risks of new technologies.
For instance, if humanity ever develops a safe drug that dramatically increases co-operation between individuals, should we all use it? Technologies that improve moral behaviour – moral enhancement – seem desirable. However, my research indicates they could backfire. I currently research moral enhancement’s risks related to personal identity, moral status, and complexity, while developing a virtue-oriented safety framework that addresses such risks. Next, I will turn my analysis to two other emerging technologies, worker enhancement and AI.
My doctoral research focused on the long-term risks of attempting to improve human moral dispositions – that is, moral enhancement – and on developing frameworks to avoid those risks. During my doctorate, I also co-authored a publication in the Cambridge Quarterly of Healthcare Ethics investigating the societal effects of moral enhancement using social simulations.
In the first year of my post-doctoral fellowship, I have produced five articles: two published, one accepted, and two submitted to academic journals. A publication in the Journal of Medical Ethics addresses a new critique of moral enhancement that deems it unnecessary given traditional means of moral progress. I argued that this critique rests on overly bold assumptions about moral progress; for instance, it overstates the case for moral progress driving a decrease in violence. An article published in Neuroethics argues that moral traits are unusually fragile, and consequently that moral enhancement is risky. An article accepted at AJOB Neuroscience develops desiderata for a framework for moral enhancement to deal with these risks, compares potential frameworks, and concludes that common proposals fare poorly. A submitted article argues that although creating drastically better persons might be desirable, this goal is undermined by the fact that such individuals might not be our psychological continuants. Another submitted article formalises and defends the intuitively plausible claim that human morality is unusually complex.
I am working on four more articles, which I will submit next semester. The pandemic-stricken world stands more divided than before, precluding not only an expedient crisis response but also the development of the global and national social compacts that could address future threats. I am writing a manuscript analysing the increase in social division over the past decade, the erosion of the democratic social compact, and their interaction with the COVID-19 pandemic. I will propose two avenues for solutions informed by moral psychology: traditional institutional reform and technological interventions.
A second manuscript will develop one of the most promising frameworks for moral enhancement in more substantive detail. I will offer a practical research agenda for scientists researching moral enhancement. Benefiting from parallel research using social simulations, I will develop a framework that adds group membership to existing models of social preferences.
A third manuscript addresses a tension within enhancement ethics: drastic enhancements can harm personal identity, yet the reasons for enhancing can outweigh these harms. Harms to personal identity affect individual interests alone, while many of the benefits of enhancement are impersonal. I will analyse a range of cases beyond enhancement and suggest how to balance harms to identity against impersonal benefits.
Lastly, I will write a follow-up to my co-authored article. This publication will investigate group-level effects of moral enhancers by refining my computer model to include group membership and reputation, thus producing a more realistic model. Some of the groundwork for this model was developed during my doctorate.
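To illustrate the kind of refinement described above, the following is a minimal sketch of an agent-based simulation in which cooperation depends on group membership and reputation. All names and the specific decision rule (an in-group bonus plus a reputation-weighted baseline) are hypothetical simplifications for illustration, not the actual model used in my research.

```python
import random

class Agent:
    """Toy agent with a group affiliation and a public reputation score."""
    def __init__(self, group):
        self.group = group
        self.reputation = 0.5  # start neutral, in [0, 1]

    def cooperates_with(self, partner, ingroup_bias, rng):
        # Hypothetical rule: cooperation is more likely with partners of
        # higher reputation, and receives a fixed bonus for in-group partners.
        p = 0.5 * partner.reputation
        if partner.group == self.group:
            p += ingroup_bias
        return rng.random() < min(p, 1.0)

def run_simulation(n_agents=40, n_groups=2, rounds=2000,
                   ingroup_bias=0.3, seed=0):
    rng = random.Random(seed)
    agents = [Agent(i % n_groups) for i in range(n_agents)]
    in_coop = out_coop = in_total = out_total = 0
    for _ in range(rounds):
        actor, partner = rng.sample(agents, 2)
        cooperated = actor.cooperates_with(partner, ingroup_bias, rng)
        # Reputation tracks the actor's past cooperation (exponential update).
        actor.reputation = 0.9 * actor.reputation + 0.1 * float(cooperated)
        if actor.group == partner.group:
            in_total += 1
            in_coop += cooperated
        else:
            out_total += 1
            out_coop += cooperated
    return in_coop / in_total, out_coop / out_total

in_rate, out_rate = run_simulation()
```

Even this toy version exhibits the qualitative pattern of interest: in-group interactions end up with a higher cooperation rate than out-group ones, so group-blind models would misestimate the aggregate effects of an intervention that shifts these parameters.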
After this first stage of research, I will work on the ethics of AI safety and worker enhancement. Developing powerful AI poses a plethora of design challenges, from formal architecture constraints to sample data selection. Foremost among them is how to guarantee that these artificial agents will behave morally, especially if they come to surpass human capabilities. I will develop a virtue framework particularly suited to the problem of imbuing artificial agents with morality. Grounding these agents in virtues can prevent them from diverging from human values even if they undergo unguided learning and recurrent self-improvement. Focusing on virtues has three key advantages. First, virtues are precisely the dispositions that inculcate morality in agents. Second, they are empirically grounded and fit well with neuroscientific data, so virtue-based agents would depart the least from human values. Finally, virtues carry modest theoretical commitments, so they do not exacerbate the risks of relying on a specific theory of the good under theoretical moral uncertainty. Additionally, my work on the risks of creating supra-persons can reveal unexplored issues in creating superintelligent artificial agents, even ones whose moral capacities radically surpass ours.
Various enhancements are already being used to improve work performance, with a prevalence of one in five among medical and academic professionals. Should these drugs be prescribed for worker enhancement? Should employers conduct drug tests banning their use? What are the consequences of the current unregulated market? These questions touch on issues ranging from the nature of the doctor–patient relationship in medical enhancement to equality of opportunity. I will argue that allowing, or even requesting, enhancements to meet job requirements that are also virtues is a superior alternative to the current unregulated market.