Dual use problems – Practical Ethics

Why Bioenhancement of Mathematical Ability Is Ethically Important


by Julian Savulescu

In a paper just released today (Cohen Kadosh et al., Modulating Neuronal Activity Produces Specific and Long-Lasting Changes in Numerical Competence, Current Biology (2010), doi:10.1016/j.cub.2010.10.007), Cohen Kadosh and colleagues describe how they increased the numerical ability of normal people by applying an electrical current to a part of the skull. So what? Most of us don’t do that much maths after leaving school and manage just fine.

Kadosh and colleagues highlight the importance of enhancing ability with numbers. Around 20% of normal people have trouble with numbers. They write, “The negative impact of numerical difficulties on everyday life is manifested in the lack of progress in education, increased unemployment, reduced salary and job opportunities, and additional costs in mental and physical health.”

Such research is obviously important for the prospects of people with poor numeracy. It shows how advances in the biosciences and neurosciences can increase the opportunities and well-being of normal people who fall at the lower end of the normal distribution of abilities.

But such research is important for at least two other reasons. First, Anders Sandberg has argued that a sense of proportion and numeracy matter more to energy savings than espoused green ethical commitments. Mathematical ability can have important general social effects.

Secondly, even those at the top end of mathematical ability might benefit from enhancement. Among people in the top 1% of the population by IQ, the top quarter of that 1% produce more than twice as many patents as the bottom quarter. So even if you are in the top 1%, enhancing your IQ might enhance your creativity and inventiveness. Kadosh and colleagues begin their article, “Dalton, Keynes, Gauss, Newton, Einstein, and Turing are only a few examples of people who have advanced the quality of human life and knowledge through their exceptional numerical abilities.” But if we were to enhance the ability of such geniuses by even a tiny percentage, problems would be solved that would otherwise be unlikely to be solved, including important problems that have so far remained unrecognized. Tiny improvements have great effect when spread over large numbers of people and significant periods of time. Cognitive enhancement, in short, is a significant social and economic issue.

All powerful technology is liable to the dual-use problem. Already we can enhance both memory and forgetting. The US military is looking into using beta-blockers to block the laying down of memories, in order to prevent post-traumatic stress disorder in its soldiers. In the present experiment, it was also possible to make normal people worse at numbers. What is the possible application of de-enhancing numeracy? It is hard to see one. But it is a potential problem, and one shared with all other technology. As I have argued in connection with this dual-use problem and synthetic biology (with Tom Douglas), the solution is not to ban or retard the acquisition of knowledge or the development of potentially hugely beneficial technology, but to regulate its development to prevent abuse.

Mathematics is not merely for boffins. It affects all of our lives, every day. Enhancing everyone’s mathematical abilities, even those of geniuses, is in everyone’s interests.

How zap of electricity could make you smarter at sums… or give you maths ability of a six-year-old | Mail Online

Electrical brain stimulation improves math skills – life – 04 November 2010 – New Scientist

Current Biology – Modulating Neuronal Activity Produces Specific and Long-Lasting Changes in Numerical Competence


If you’ve done nothing wrong, you’ve got nothing to fear: Wikileaks and RIPA


Governments around the world have condemned Wikileaks' recent release of US diplomatic cables, often while simultaneously denying that the cables matter; the reactions are tellingly similar to the US military's earlier claims that previous leaks were simultaneously highly illegal, dangerous and irrelevant. At the same time many have defended the release as helping transparency. As David Waldock tweeted: "Dear government: as you keep telling us, if you've done nothing wrong, you've got nothing to fear".

Is this correct?

The common good

In recent years we have seen an unprecedented expansion of government power to monitor citizens, driven both by technological change and by changed laws. Often these laws have been constructed rather haphazardly, a combination of past regulations and emergency measures in response to events such as terrorism. The updating and monitoring of such powers also shows a significant lag (a current UK example is the criticism by civil liberties groups of the brief consultation on the RIPA act, which appears set to exclude stakeholders outside government and business). A common defence of such sweeping surveillance powers is that the common good must be protected, and people who have done nothing wrong have nothing to fear.

The first problem is what constitutes the common good in an international setting. Even if one accepts domestic surveillance as ultimately benefiting oneself or one's community, it is not clear this applies to other communities. The 'common good' for the US might not be good for me or my community (e.g. it might benefit the US and its citizens to act against our interests or rights), and often national interests are clearly self-serving. It might be bad for the common good of the US to have its diplomatic relations revealed, but it could be helpful for the common good of other groups, even humanity as a whole. Whether the Wikileaks documents actually achieve this is a complex and uncertain matter, but the case is certainly not weaker than the case made by proponents of government surveillance – they too claim great common-good benefits, yet tend to be reticent about revealing any empirical support.

A key difference between Wikileaks and government surveillance is the limited filtering and oversight. In principle, government surveillance in most democratic societies should be possible only under proper oversight (e.g. the RIPA act), limiting misuse, balancing rights and public interests, and ensuring accountability. In practice this might fail, of course, but the aim ought to be to serve the common good by limiting the spread of private information. On the other hand, the data released through Wikileaks has been claimed to be partially filtered to reduce risks to individuals, but is then left open for dissemination, interpretation, and use. It is doubtful whether person-protecting filtering can be completely achieved, or even whether it is entirely desirable: there can be a strong public interest in revealing some information, such as major crimes, even when that may increase the risk of violent or unlawful action against people.

The safety of innocents is vulnerable to both government surveillance and Wikileaks-style disclosure. Without the right context, even truthful and innocuous information in a government database can be misinterpreted in ways that harm citizens (e.g. in some US states public urination convictions lead to people being placed in sex offender databases, with consequences far beyond the crime itself). As more information is gathered and used, the risk of accidental misinterpretation increases.

Stuck in the panopticon?

The Wikileaks idea is to serve the common good by revealing the activity of organisations and nations as widely as possible. Rather than restricting information on a need-to-know basis, the leak aims to provide information to anyone wishing to know. It can be seen as part of a wider transparency movement.

The ability of the public to disseminate and process information has increased significantly. Back in the 1980s the current leak would have been a heavy filing cabinet, hard to transport and copy, accessible in only a few places to a few people. Today it is not only accessible through the internet and hard to police, but also comes equipped with various attempts at improving browsing and visualisation and at setting up further data investigations. Data leaks have become far more global and irreversible, at least when they promise to contain some interesting material. There are good reasons to call for scientists to open up their data, since releasing it will likely improve the scientific process. Open data initiatives attempt to get government data into the open to help the deliberative process or encourage innovation.

One of the key arguments for transparency is that it improves accountability. If what we do becomes known, then we have rational reasons to act well. This underlies many procedural rules and freedom of information regulations in democratic societies. However, it can sometimes backfire. While the "climategate" affair may not have shown any serious scientific misconduct, it demonstrated the sordid state of everyday research. Ideally the risk of such a disclosure would lead to scientists behaving more responsibly. But one can easily imagine researchers instead avoiding emailing or writing down potentially damaging facts or opinions, leaving them in the ephemeral form of personal discussions. Increased freedom of information has apparently led many civil servants to avoid taking notes, since the notes could be subpoenaed. Some people have warned that Wikileaks might impair the work of future historians.

More seriously, as the Swedish foreign minister Carl Bildt argued, this kind of leak could limit trust and confidence in diplomatic communications. If governments cannot communicate confidentially, they would be forced to make public 'megaphone statements' that presumably would be more polarizing. This argument has merit, but it is partially based on the idea that diplomatic channels were a gentlemen's club in the first place. As some of the leaks have revealed, there has been a mixing of foreign policy and espionage, demanded by top officials. That is surely a worse problem for the diplomatic system than the possibility that confidences are leaked.

If the risk of leaks is a serious problem for governments, then it is also an argument against data retention initiatives. Large databases on private citizens are surely not better protected than diplomatic or military information. While the impact of disclosing government information might be more widespread than the impact of disclosing information about an individual, to the individual the effects can be personally more damaging. By collecting private information, governments make leaks more pervasive.

Similarly, if the argument that confidential discussions are necessary for the proper functioning of governments and international relations is accepted, then there is a strong reason to regard the confidentiality of discussions people have privately or as part of organisations as equally important. Trust is just as important for the functioning of civil society as for the international community.

Conclusions

There are indeed parallels, if imperfect, between the increased ability to scrutinize citizens and the increased ability to disclose government information. Both can serve the common good (in a local or global sense), improve accountability by raising the cost of misbehavior, and possibly help bring wrongdoers to justice. Both raise the issue of how to properly manage the costs of revealing private information and the reduction of confidentiality in transparent environments. They fundamentally differ in their specificity (at least as government surveillance regulations are officially formulated in democracies): need-to-know selective disclosure versus want-to-know public revelation. And both rest on different assumptions about how the information can be used, the likelihood of misuse and the benefit of innovative new uses.

Whether Wikileaks has acted morally in its 2010 releases is hard to tell without a deep analysis of its motivations, methods of work and consequences (depending on your favorite ethical system). But regardless of the morality of its actions, this release is unlikely to be the last.

We might indeed be witnessing the slow birth of a new world order in which governments are subjected to scrutiny even when they do not desire it, as unable to defend themselves from it as individuals are from government scrutiny. This has so far been driven more by technology than by legal or ethical principles (and is hence hard to prevent without sacrificing economic growth and many other benefits). However, there is also a strong countermove in which governments increasingly claim a right of global monitoring without easy citizen oversight. This, fortunately, is driven not so much by technology as by politics, and is in principle amenable to democratic and ethical control.

There are good reasons to watch governments closely. Historically, the largest anthropogenic losses of life have been due to state activities (wars, democides, famines caused by economic policies), and even fairly benign governments have engaged in deeply unethical behavior that has been hidden from citizens and has negatively impacted them. Two-way transparency might be the only way of surviving ever more powerful states.

The quixotic prohibition of attention-enhancing drugs in sport.


Amphetamines and major league baseball are in the news again, with a number of busts for the prescription drug Adderall, whose active ingredients include several amphetamine stimulants.

From the New York Daily News:

Thirteen players tested positive for the amphetamine-based drug Adderall in the past season, and 105 were granted exemptions for attention deficit disorder, for which Adderall is frequently prescribed. The exemptions excuse players in advance for banned substances they take on doctor’s orders.

The piece goes on to say that if 105 players in the league have diagnoses of attention deficit disorder (ADD), that represents 10% of the total population of players, while the highest estimates of the population rate are around 5%. The most reasonable explanation of the discrepancy is that a number of players are deliberately obtaining an ADD diagnosis so that they can use Adderall for its stimulant properties. Central nervous system stimulants like amphetamines reduce a player’s reaction time, among various other effects that are beneficial in a skill-based sport like baseball. That is why Adderall is on the prohibited list for baseball; a player needs a medical exemption in order to use the drug while playing.
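
A quick back-of-the-envelope check makes the discrepancy concrete. This is only a sketch using the figures already quoted (105 exemptions, a 10% share of players, a 5% population rate):

$$ \text{total players} \approx \frac{105}{0.10} = 1050, \qquad \text{expected diagnoses} \approx 0.05 \times 1050 \approx 52. $$

That is, the league records roughly twice as many ADD diagnoses as even the most generous population estimate would predict.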
In order to obtain a diagnosis of ADD, a player needs to demonstrate a number of things to a doctor, neurologist, psychiatrist or psychologist. They need to show six or more symptoms from a checklist of nine, evidence that the symptoms appear in two settings (work and home, for example) for at least six months, some signs of the symptoms before the age of seven, and that the symptoms cause problems in their occupation.

For example, I might obtain a diagnosis of ADD by saying this:

For as long as I can remember, I have been easily distracted. I get bored easily and have difficulty holding attention on my work. I procrastinate, and at school I sometimes failed to hand in my homework. I forget to do important things, I’m often indecisive, and I miss out on a lot of opportunities due to inaction. All this is holding me back from my maximum potential at work, and it is a source of friction with my spouse.

It is still controversial whether ADD exists. But by the more generous estimates, up to 5% of the adult population suffers from ADD. However high that estimate may be, it leaves the vast majority of the population unaffected by ADD — and yet a far higher proportion could truly attest to the foregoing description of symptoms. Human beings are easily distracted, forgetful procrastinators, and it does cause problems for them in work and in play.

In other words, if we are motivated to get an ADD prescription, most of us do not even need to lie to our doctors. If we simply describe the limits of our free agency in a particular way, we are candidates for an ADD prescription. It is not illegal to do this. It is a legal way to wilfully obtain a prescription-controlled substance.

My point here is not to criticize the diagnostic criteria for ADD — assuming ADD exists, and that the afflicted are worse off in the relevant ways than ordinary people, it is certainly possible that we should prefer diagnostic criteria that are over-inclusive rather than under-inclusive, so that people who are genuinely impaired by ADD are not prevented from obtaining effective therapy. However, the case of ADD brings into stark relief the absurdity of current doping policy.

If a player is prepared to tell the truth to a doctor in a certain way, he is allowed to enhance his performance with legal, prescription-only stimulants. But if he uses the same stimulant without visiting his doctor, he is deemed to be cheating and barred from playing in the league. However we construct our rationale for anti-doping schemes, nothing can justify the injustice of banning one player but not the other for taking the same performance-enhancing drug, when that drug is legal and — for all intents and purposes — freely available.

Can Liberals Support a Ban on Sex Selection?


Australia essentially bans sex selection, except to prevent babies being born with serious sex-linked disorders. The National Health and Medical Research Council also prohibits it in its guidelines.

A couple in the state of Victoria is currently appealing to the Victorian Civil and Administrative Tribunal for permission to access IVF and deliberately have a girl. The couple have had three boys naturally and lost a daughter soon after birth. They recently had IVF, which resulted in a twin pregnancy. The twins were boys. They aborted the pregnancy.

I argued over 10 years ago that there are no good reasons to oppose sex selection in countries like Australia.

According to the father of liberalism, John Stuart Mill, the sole ground for interference in liberty is to prevent harm to others. As Gab Kovacs pointed out in that article and I pointed out 10 years ago, nobody is harmed by this couple’s decision to have a girl. The ban on sex selection is a blatant abuse of state authority.

It is instructive to look at the objections that people did raise to this case. First, let’s start with the “expert” from the opposing side, Gene Ethics director Bob Phelps.

I’m sorry they lost their daughter but, in the interests of society as a whole, they should seek some counselling for their grief and look for another way of getting a daughter into their family.

Mr Phelps said he was concerned that making an exception in this case could open the floodgates and raised concerns about skewing the male-female balance of the population.

There is no evidence that the sex ratio would be changed in a country like Australia. People’s preferences are divided equally, and most requests are for “family balancing”, as in this case, where couples seek to have a child of the opposite sex to the ones they already have. If one were seriously concerned about the country’s sex ratio, one should adopt the least liberty-restricting option, not the most liberty-restricting one. The least liberty-restricting option would be to allow free selection and simply monitor the sex ratio. If it did show signs of worrying disturbance, this could easily be corrected by then allowing sex selection only for the minority sex. That is, if too many boys were being produced, sex selection could be restricted to choosing girls for a time. A slightly more liberty-restricting option would be to allow sex selection only for second and subsequent children, and only for the sex opposite to the existing children.

But this is not the reason why most people are concerned with sex selection. One has only to glance through the correspondence on this issue to see what the real concern is. Evidently, people employ what Leon Kass described as the “wisdom of repugnance” or the “wisdom of the gut”. I have again discussed this kind of “ethical reflection” and its invalidity. In the first 30 or so responses, I read, “Appalling”, “Sick”, “Disgusting”, “Horrifying”. Virtually every response expressed strong condemnation (except for a rather thoughtful response from an IVF mother.) Here is one typical response:

These people are disgusting. Grieving for there dead daughter, but happy to murder (oh, I mean ‘terminate’) two boys? IVF sex selection should be for medical reasons only, such as gender-specific genetic disease. Not for people who want to replace a lost child. Frankly I don’t think people this emotionally unstable should be having any more children. They need counseling, not help to medically manufacture a replacement baby.

In the background of virtually all the opposition was the view that this couple should not kill two healthy male foetuses for this reason. In a somewhat interesting twist on this argument, one journalist used this as an argument in favour of sex selection:

No, it’s the babies lost that must be our real concern here and for that reason alone the law must change so that no more will we see parents disposing of one child to make way for another.

Can the “babies lost” be a reason for liberals to oppose sex selection, or support it?

At the extreme end of this argument is the full-blooded pro-life position, held by the Catholic Church, that the fetus and embryo are persons with a full right to life. Obviously, if that view were correct, sex selection would be wrong. But so would the 100,000 social terminations which occur in Australia every year, IVF which involves disposing of spare embryos, laws like those in Victoria which require destruction of excess IVF embryos, the use of the IUD, and even the use of the oral contraceptive, which can result in failure of embryo implantation. Such a view is wildly inconsistent with Australian life, society and values. It is not the basis for any kind of legislation.

Rather, at the basis of most of these folk objections is the view expressed that IVF and embryo selection can be used for serious diseases but not for mild disorders or for selecting the sex of one’s offspring. Over 10 years ago, Lach De Crespigny and I surveyed the attitudes of Australian practitioners working in clinical genetics and obstetrical ultrasound on whether termination of pregnancy (TOP) should be available for conditions ranging from mild to severe fetal abnormality and for non-medical reasons. We compared these for terminations at 13 weeks and 24 weeks. One striking finding was that these professionals who were routinely involved in termination of pregnancy were much more prepared to facilitate a termination, even at 13 weeks, for a normal pregnancy than for one involving a cleft palate.

This displays the dominant tendency of both professionals and the public to evaluate the reasons for a person’s choice to have a termination or to destroy an embryo. As I argued there, such a view cannot be sustained by liberals. Either the fetus or embryo has a right to life, in which case all killing is wrong, for whatever reason, or the fetus/embryo does not have the moral status of a person, in which case killing it for any reason is permissible, just as killing other living things with no moral status is permissible.

In fact, the dominant community position that killing for disability is permissible but not for reasons of sex selection is an example of objectionable eugenics – devaluing the lives of some, that is, those with a sex linked or other disorder.

The public and professionals are confused about the value of life and moral status. Either the embryo has a moral status or it does not. At present, it is permissible to destroy an embryo if

  1. It has haemophilia, or
  2. It is normal but excess to requirements for IVF

But it is not permissible to destroy an embryo because its sex is the same as that of three preceding children. This represents a wholly unsupportable account of the status of human life.

People with haemophilia, a sex linked disorder, have lives which are very worthwhile and worth living. The reason why it is permissible to destroy embryos with haemophilia is because embryos are not the kinds of beings which are harmed by being destroyed.

Liberals will at some point have to bite the bullet and embrace the implications of the values that they adhere to.

It is time to revise our irrational, harmful and illiberal opposition to sex selection.

Ferretting out fearsome flu: should we make pandemic bird flu viruses?


Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?

The researchers were investigating what it would take for the H5N1 virus to become able to spread effectively. Normally it doesn’t spread well among humans, yet effective spread is a required trait for a pandemic virus. It could be that this is fairly easy to evolve through a few mutations, but it could equally well require some unlikely redesign: some researchers think that H5 viruses cannot become pandemic. Science writes:

Fouchier initially tried to make the virus more transmissible by making specific changes to its genome, using a process called reverse genetics; when that failed, he passed the virus from one ferret to another multiple times, a low-tech and time-honored method of making a pathogen adapt to a new host.

After 10 generations, the virus had become “airborne”: Healthy ferrets became infected simply by being housed in a cage next to a sick one. The airborne strain had five mutations in two genes, each of which has already been found in nature, Fouchier says; just never all at once in the same strain.

This airborne strain is also likely to work in humans. So now we know that it is not too hard for nature to evolve a dangerous bird flu, and we know which genes matter. The first fact is directly useful: we must not be complacent about bird flu, which exists in many bird populations, and we should keep monitoring it. The second is useful for science and vaccine design, but also for enterprising bioterrorists.

This is a fine example of an information hazard, a piece of true information that is potentially harmful.

Should this research be published? Or even done? Richard Ebright at Rutgers said that these studies “should never have been done”, and a colleague expressed the same view – it should never have been funded. On the other hand, the researchers seem to have the support of the flu research community and did consult with various institutions before the experiments. People clearly hold very different views, and being able to fully judge the risks and benefits requires specialist knowledge of virology, epidemics and bioterror risks – knowledge that might not even be possessed by anybody at present.

Right now the main issue is whether to publish the result, or more likely what methods to leave out. But it seems that evolving viruses in lab animal populations is hardly a secret that nobody else could figure out – it is a “time-honoured” method. The real question is whether there needs to be prior review for work on risky topics, perhaps an international risk-assessment system for pandemic viruses. At present there is hardly any even on a national level.

Paul Keim:

“The process of identifying dual use of concern is something that should start at the very first glimmer of an experiment,” he says. “You shouldn’t wait until you have submitted a paper before you decide it’s dangerous. Scientists and institutions and funding agencies should be looking at this. The journals and the journals’ reviewers should be the last resort.”

This is likely sensible. Although it also means that research on many topics that are likely important to mankind now gets an extra level of bureaucracy. In the US, new rules tightening restrictions on research into select agents with biological warfare potential have also stifled research into protection against them – the extra rules make researchers aim for topics with less hassle and cost. Researchers also have a motivation to downplay the risks of their research and play up the problems of regulation: we should expect a measure of bias. Research sometimes shows unexpected risks too, which cannot be prevented by prior review. It is unlikely that any reviewer would have caught the discovery of how to make mousepox 100% lethal, with its worrying implications for smallpox.

In situations like this, should one err on the side of caution or try to make an admittedly very uncertain risk/benefit estimation? There might be two cases: one is research that has the potential to do great harm, but not permanently damage the prospects of mankind. The second is research that could increase existential risk.

Nick Bostrom argues in a recent paper that reducing existential risk might be “a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories).” The amount of lost value is potentially staggering. This makes the “maxipok rule” (“Maximize the probability of an ‘OK outcome,’ where an OK outcome is any outcome that avoids existential catastrophe”) sensible: research that increases existential risk should not be done; conversely, research that reduces it should be strongly prioritized. Another point in the paper is that we should strive to keep our options open: increasing our knowledge of what to value, what risks there are and how to steer our future is also extremely valuable. This may include refraining from some areas of research until a later point in time when we are more likely to be wise or coordinated – or just because it gains us a few extra years of little risk while we think things through.

In the case of the flu virus things are less stark. Existential risks, because of their finality, pose different issues than merely bad risks. Even a human-made flu pandemic would not be the end of the world, just millions of lives. Doing the research now rather than later has given us relevant data on a risk that we can now reduce slightly. Maybe it could have been found some other way, but it is not obvious.  The information hazard is not enormous – the ferret method is not new, and the knowledge of the relevant genes is at most a part of the overall puzzle of how pandemics work. Bioterrorists are not that likely to aim for a flu as their doomsday weapon. The risk of release of the virus exists, but there are already millions of birds acting as a reservoir and evolutionary breeding ground: we know those ferrets are dangerous, unlike the doves on the street outside your window.

The real lesson seems to be that current coordination of dual use research, especially when it has interdisciplinary aspects, is still very bad. Hopefully this controversy will help foster the construction of a smarter international system of review that can take information hazards and existential risks into account. It will also have the unenviable job of making risk/benefit estimations where there is very little data and millions of lives are in the balance – on either side. But the enormity of that balance shows that such institutions would be important. They would not be frivolous wastes of taxpayer money or scientist time: even a slight improvement in our decisions about big risks nets a significant number of saved lives. We had better remember that in the future, when they occasionally make the wrong decision.

Experimenting with oversight with more bite?


It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stockings this year, sent from various parties who felt NSABB were either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.

The background is the potentially risky experiments demonstrating the pandemic potential of bird flu: NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But it can merely advise, and is fairly rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?

During the holidays I watched the film Contagion, recommended to me by an epidemiologist (high praise indeed). It is worth seeing and considering. The film depicts a plausible outbreak of a viral pandemic, ending with at least 26 million dead globally. In this fairly-bad-case scenario there is still no risk of human extinction or the end of civilization, but there are certainly major threats to life and liberty, hard practical and ethical decisions, and all societies will no doubt be permanently marked by the experience. It is not a bad benchmark against which to consider the work of NSABB (to reach the existential risk level we would need to be very unlucky or run into deliberate biotechnological genocide, something research biosecurity cannot do much about).

As I argued in my previous post on the topic, as long as the threat is not an existential threat to the survival of the species it is “merely” a risk-benefit issue: very weighty, but it does not overrule all other considerations. If we assume a 1% chance of a major pandemic per year (one 1918-scale flu per century) with 26 million dead (rather than the 50+ million of the 1918 case), that corresponds to an expectation of 260,000 dead per year – just between the mortality rates of pancreatic and cervical cancer. No matter the seriousness of a pandemic, there will be other things – liberty, economics, science, etc. – that have to be weighed against safety: the proper question (as noted by Alan in his previous post) is how the weighing is done (and by whom).
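To make the expectation explicit – a back-of-the-envelope sketch using only the figures assumed above:

$$ E[\text{deaths per year}] = 0.01 \times 26{,}000{,}000 = 260{,}000. $$

The estimate is linear in both inputs, so halving either the annual probability or the death toll halves the expectation; the weighing can only ever be as good as these very uncertain numbers.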

Right now the safety of biotechnology research, especially the information hazard aspect of publishing results, is handled by a hodgepodge of local and national rules, various voluntary guidelines and the goodwill of scientists. It is very likely that the weighing is done neither correctly nor effectively. No matter where one’s intuitions lie, it is clear that spending some effort on improving the system – preferably before something untoward happens that forces reform – would be effort well spent.

One constructive attempt at doing this can be found in a report from the Center for International and Security Studies at Maryland University, Controlling Dangerous Pathogens: A Prototype Protective Oversight System. It sketches out an oversight system on the local, national and international level aimed at improving biological security:

In an effort to encourage productive discussion of the problem and its implications, this monograph discusses an oversight process designed to bring independent scrutiny to bear throughout the world without exception on fundamental research activities that might plausibly generate massively destructive or otherwise highly dangerous consequences. The suggestion is that a mandatory, globally implemented process of that sort would provide the most obvious means of protecting against the dangers of advances in biology while also pursuing the benefits. The underlying principle of independent scrutiny is the central measure of protection used in other areas of major consequence, such as the handling of money, and it is reasonable to expect that principle will have to be actively applied to biology as well.

It does this by 1) instituting licensing of institutions and individuals that handle risky agents (and vetting of students, janitors and others not formally part of the system), and 2) establishing a system of independent peer review of proposed projects, with review boards at higher levels dealing with agents of greater concern (all the way up to a top international level considering agents of extreme concern) and deciding on the conditions for approval. Information disclosure practices would be well defined, as would risk-benefit decision criteria, accountability and verification.

It is a lovely armchair product with some serious thinking behind it.

The real problem is of course that building international institutions is hard. It is not impossible, and sometimes worth it. The proposed scheme is a not-too-outlandish extension of current practices and would no doubt be supported by many scientists and policy-makers, yet it would need funding and international agreements, have to overcome individual, institutional and national resistance, and build a base of credibility within the scientific community. It can be done, but it would take much dedicated effort.

The proposal recognizes this and points out a useful feature: even a partial implementation is better than nothing. Even if it is just Ruritania that sets up a proper oversight system, that means – assuming the oversight works – that the world has become slightly safer. The more who join, even with imperfect systems, the better.

Trying to implement proper oversight also has another beneficial effect: plenty of critics will be motivated to articulate potential problems. This is very useful, even if some critics are not themselves motivated by love of truth or safety. Trying to actually formulate acceptable professional standards, risk-benefit decision-making criteria, appropriate levels of disclosure and all the other elements of proper oversight will attract far more attention and intellectual capital than wishing for a pre-packaged solution: usually the best way of getting something done well is to create a bad first sketch and see everybody flock to outdo what is there on the canvas. The Maryland report is an excellent seed for this kind of process: the more competing projects it generates, the better.

A sceptic might worry that experimenting with new forms of oversight is in itself risky: they might waste resources or impair science if implemented badly, and like much bureaucracy they might be hard to dislodge once they come into existence. This is a valid input to the process – we had better make sure that whatever oversight eventually develops can be held accountable and has incentives to be useful rather than self-serving – but it does not mean it is wrong to try to understand what kind of oversight could work. Rather, we should avoid premature convergence on a fixed institution. At present we do not have the knowledge to properly answer this post’s question about where the proper balance between openness and precaution lies. Figuring it out is not just a nice exercise for the admin-heads out there, but an actual matter of (probabilistic) life and death for thousands or millions.

H5N1: Why Open the Stable Door?


Professor Paul Keim, who chairs the US National Science Advisory Board for Biosecurity, recently recommended the censoring of research that described the mutations which led to the transformation of the H5N1 bird-flu virus into a form that can be transmitted between humans through droplets in breath (in ferrets, the number of mutations required is frighteningly small – five). His reason is simple: the research would be a recipe book for bioterrorists.

Keim thinks, however, that such censorship will only delay the inevitable. The information will come out sooner or later, but at least governments might by then have developed and prepared sufficient stocks of vaccine and set in place other emergency measures to deal with a global pandemic.

This is not quite closing the stable door after the horse has bolted. It’s more like closing the farm gate, in the knowledge that eventually the horse will jump the gate and escape.

But this raises the question of why the stable door wasn’t bolted in the first place. In an article in Nature, the leader of one of the teams has said that the research was necessary to show that those experts who doubt the human transmissibility of H5N1 are wrong. But given that there is controversy here, governments should of course be doing what they have been doing: treating the possibility as a serious risk. In response to the charge that the research is dangerous, this same research leader’s response is that there is already a threat of mutation in nature. But threats don’t cancel one another, and nature is not revealing its secrets to bioterrorists. The researchers claim that their research was necessary for the development of a vaccine. Keim’s view is that this is quite implausible, since the drugs the scientists were using against their virus were the same ones used against others. If he’s right, a natural conclusion to draw is that the scientists should never have done the research in the first place. And, having done it, they should have kept quiet about its details and destroyed the virus. They might indeed have informed the media of their overall result, or some carefully restricted set of other researchers of the details of their research. But then of course they wouldn’t have been able to publish those details in top scientific journals.

Personalised weapons of mass destruction: governments and strategic emerging technologies


Andrew Hessel, Marc Goodman and Steven Kotler sketch, in an article in The Atlantic, a not-too-distant future in which the combination of cheap bioengineering, synthetic biology and crowdsourced problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as someone existed who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to sweep away traces of the President, but it is hard to imagine this being perfect, doable for old DNA left behind years ago, or applied by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data but DNA from foreign potentates. They might be friends right now, but who knows in ten years…

If personalised biowarfare done via the Internet is not enough to give post-Halloween nightmares, consider the US expansion of “kill lists” into a “disposition matrix” system of people to kill or capture, and the available means to do so. As noted by the Washington Post, this is an institutionalization of the practice of secret, targeted killing with very limited (if any) legal oversight. It is pretty obvious how future personalised biowarfare could be slotted into such a system right next to drone strikes, no doubt ably defended by a White House spokesperson as constitutional and certainly within the presidential remit, if it ever came to light.

Continuing to another domain, cyberwarfare is regarded by the US as a casus belli. Yet Obama appears to have ordered cyber-attacks against Iran’s nuclear program. Leaving aside the layers of “cyber-” hype, what is actually being discussed is remote, technologically empowered sabotage. It might or might not be usable in a widespread, society-disrupting fashion, but it certainly can be used against focused targets. It will also have collateral effects, not least that the exploit tools become available to the wider community of hackers, who can turn them to their own ends.

It would be trivial here to continue in a standard rant about the failings of the US government to uphold various ethical or humanitarian principles, but it would be rather redundant – that can be found anywhere on the Internet. It is also obvious that many other governments are moving in similar directions: the US just happens to be the biggest, most advanced and most scrutinized government.

I think a more interesting angle is how governments and other groups handle the security implications of new and disruptive technologies.

Do they get it?

One interesting criticism of Obama’s decision to promote digital sabotage against Iran is that it might have been based on a faulty understanding of the technology and its consequences. It does not appear that he regarded himself as giving Iran a potential casus belli, nor that he expected the technology of Stuxnet and Flame, once spread into the open, to be used by enemies of the US (which, after all, has the most sensitive and expensive infrastructure to lose). He or his advisers likely did not see a big problem because they considered their tool an ordinary tool or action (however sneaky). Normal tools don’t run away and become part of the threat ecosystem. But software is copyable, and once something is out it will remain out there: you need to protect yourself against it forever.

Similarly for drone technology. Since the US has demonstrated drone technology so well, it is now being copied by everybody. Not to be outdone by the Occupy movement’s occucopter, Hezbollah has launched its own drone. Given these results, maybe the demonstration of Boeing’s CHAMP drone, equipped to destroy electronics, is not such good news for the US. How long before a counterpart is in the hands of groups the US would not want to have it? Again, it is an excellent weapon against high-tech infrastructure and societies dependent on it, just the thing to even the odds in a conflict against the US.

While military forces can be protected against drones or anti-electronic weapons, it is unlikely that this would be feasible for an entire civilian infrastructure. The situation is very similar to the Secret Service defence of the president against bioweapons: they have a single person to protect, so they can focus on him and have a reasonable chance of success. The same mechanisms they use, whatever they are, would be unlikely to protect an entire society. Same thing with computer security: it is certainly possible to protect the president’s computer, but the real threat is the unsecured computers out there, running the backbone of society.

Slowing down the spread of disruptive technologies is hard, as many governments are discovering. One reason is that most of them have positive uses: the Internet is enormously empowering, drones allow us to monitor our environment better, biotechnology will help medicine and the environment, 3D printing will enable massive customization and garage innovation, and numerous toxic and explosive chemicals are essential parts of our industrial infrastructure. Making use of them and co-opting them is often a better solution than trying to prevent the bad uses, since bad uses can rarely be predicted beforehand. As noted in The Atlantic article, some systems would help both the president and everybody else. Widespread monitoring for new pathogens, transparency and data-sharing to boost response abilities, and constant pursuit of better biodefenses would make everybody safer. There are many more minds and far more resources out there interested in reducing risk than could ever be mustered by any government. It is just that we do not know if this is enough to counter the Moore’s law of mad science.

Recognizing there is a problem

The common point of biohacking, drones and cyberwarfare is that they are technologies that fundamentally change the nature of national and personal security. Yet they are currently not handled differently by governments: they are certainly seen as strategic technologies, but to decision-makers that merely implies that We should get them before They do, not that it might be risky to pursue them at all. It might be impossible to prevent them from eventually being invented and used, but it can still be rational for a government like that of the US to ask itself whether it wants a world with these technologies now rather than later.

Maybe the decision-makers are on top of things and do make sensible decisions about which strategic technologies to introduce. But past evidence speaks against it. The Nazi German military did not want computers for code-breaking. The Soviet establishment regarded radar stealth technology as irrelevant and allowed Pyotr Ufimtsev to publish his findings in the open literature, where they were used by the US stealth program. The potential of the Internet for changing economics and politics seems to have passed most governments by until the 2000s. Decision-makers still do not seem to have understood the subversive potential of digital currencies.

If, as I think, decision-makers have a hard time grasping the full implications of the technologies they launch, then recognizing that there is a problem is the key issue. Yes, the technologies should also be used for good ends, but if you do not understand the consequences of your tool, your intentions have a good chance of being swamped by unforeseen consequences. Yes, predicting technology is notoriously hard, especially for open-ended technologies like computers and biotechnology, but that doesn’t mean they are utterly beyond reason and foresight.

At the very least, decision-makers should consider whether they ought to be pushing for technologies or practices that are likely to damage their strategic interests. If you are more vulnerable than your competitors to biowarfare, EMP, cyberwarfare or assassination politics, it is irrational to promote them: you should attempt to slow their spread and development, and ideally cultivate technologies or institutions that reduce their impact.

We may need to end all war. Quickly.


Public opinion and governments wrestle with a difficult problem: whether or not to intervene in Syria. The standard arguments are well known – just war theory, humanitarian protection of civilian populations, the Westphalian right of states to non-intervention, the risk of quagmires, deterrence against chemical weapons use… But the news that an American group has successfully 3D printed a working handgun may put a new perspective on things.

Why? It’s not as if there’s a lack of guns in the world – either in the US or in Syria – so a barely working weapon, built from still-uncommon technology, is hardly going to upset any balance of power. But that may just be the beginning. As 3D printing technology gets better, as private micro-manufacturing improves (possibly all the way to Drexlerian nanotechnology), the range of weapons that can be privately produced increases. This type of manufacturing could be small scale, using little but raw material, and be very fast paced. We may reach a situation where any medium-sized organisation (a small country, a corporation, a town) could build an entire weapons arsenal in the blink of an eye: 20,000 combat drones, say, and 10,000 cruise missiles, all within a single day. All that you’d need are the plans, cheap raw materials, and a small factory floor.

It’s obvious that such a world would have a completely different balance of power to our own. Arms-control treaties would become pointless, and any single state or statelet, willing to run the risk, could challenge the world in the course of a week or a day.

It may be that the only way to prepare for this is to aim for a world entirely without war. If we can remove war as an instrument of state policy for good, then other means (surveillance, treaties, minimal deterrence) may be enough to contain the remaining risk.

A world without war! How utopian is that? Well, much less utopian than it has ever been before – the number of deaths from wars (of all types) has been on a steady downward trajectory for decades now. We have literally never been so peaceful, despite the television cameras that provide a steady diet of conflict from wherever the world is bleeding that day.

What does this mean for Syria? It means that we shouldn’t reach conclusions based simply on the current facts, but on what we think would lead to less war in the future. And there, it seems, the balance tilts strongly towards non-intervention. For an intervention would mean a large or medium power (USA, France, Turkey…) projecting power beyond its borders, invading a sovereign country, in opposition to quite a number of other world powers, and with a high risk of getting stuck there for years. This is the kind of situation we need to end, no matter how well intentioned or justified it may be in this instance. There will be other instances, less justified, less well intentioned, and harder to oppose if we intervene now.

If it is important for our future to get rid of war, we’re going to have to start opposing wars today – even “good wars”.

How much transparency?


By Dominic Wilkinson (Twitter: @Neonatalethics)

There are reports in the press this week that the remains of 86 fetuses were kept in a UK hospital mortuary for months or even years longer than they should have been. The majority were fetuses of less than 12 weeks’ gestation. According to the report, this arose because of administrative error and a failure to obtain the necessary permissions for cremation.

The hospital has publicly apologized and set up an enquiry into the error. They are planning to cremate the remaining fetuses. However, they have decided not to contact all of the families and women whose fetal remains were kept, on the basis that this would likely cause greater distress.

Is this the right approach? Guidelines and teaching in medical schools encourage health-care professionals and institutions to own up to their errors and disclose them to patients. Is it justifiable then to not reveal errors on the grounds that this would be too upsetting? How much transparency is desirable in healthcare?

This question arises commonly in medical practice. Doctors, nurses, physiotherapists and pharmacists all make mistakes. Indeed all of us in our professional lives make mistakes. The question is not whether mistakes happen – rather it is what we can do to prevent them and what we should do when they do happen.

One of the relatively unique features of the healthcare profession is the emphasis on transparency about error. Doctors and nurses are exhorted to come clean, to own up to mistakes when they occur. This is understandable (though not easy) when the mistakes have led to significant harm to the patient. However, most mistakes that occur in everyday medical care are small and unlikely to cause harm. For example, a nurse might miss, delay, or give an extra dose of medication. A doctor might prescribe the wrong dose of a drug (by a small margin). A test result may not be followed up in a timely manner. Do doctors need to own up to these mistakes? Some have argued that doctors should disclose errors even when the mistakes have caused no harm. The NHS constitution appears to support this: “The NHS also commits when mistakes happen to acknowledge them, apologise, explain what went wrong and put things right quickly and effectively.” Some have advocated for a statutory ‘duty of candour’.

However, it is difficult to think of any other profession that would behave in this way.* We do not expect our politicians, engineers, pilots, lawyers, teachers or mechanics to perform a mea culpa every time that they make any degree of error. So why should health professionals?

There are several arguments in favour of a philosophy of ultra-transparency for medical error. One reason is that identifying errors is important for preventing future mistakes. Although errors are inevitable, they can be reduced. Owning up to mistakes is necessary if lessons are to be learned. Although an individual error may not have led to harm, the same error in a different circumstance could well be harmful. Second, some argue that admitting to mistakes encourages trust between patients and doctors. There is some evidence to support this: a study from 2006 found that non-disclosure of an error that the patient later discovered reduced patient trust and satisfaction. Third, disclosure is often said to reduce the risk of patients seeking legal remedy.

These arguments provide strong reasons for doctors to disclose harmful medical errors. However, it is less clear that they apply to errors that have not led to harm. There is a need to have a medical culture of reporting, investigating and addressing errors whenever they occur. Yet, disclosure to patients is not necessary in order to report and address errors. Complete candour and openness could lead to patients trusting their doctors more. However, it also seems plausible that disclosure of minor errors would undermine trust in health professionals and in the profession generally. The frequency of apologies may render them meaningless. Finally, disclosure of non-harmful events would be unnecessary to reduce litigation, since in the absence of demonstrable harm, legal action would usually not be successful.

Existing systems in the NHS encourage a nuanced approach to the disclosure of medical errors. The national ‘Being Open’ framework notes that “prevented patient safety incidents and ‘no harm’ incidents” do not need to be disclosed. Professionals are advised to consider individual circumstances and the best interests of the patient. This sounds simple, but it will not always be straightforward. What does it mean for the case at the start of this post, where fetal remains were kept in a mortuary for too long? The hospital’s approach of not contacting women to notify them of the error seems, on the face of it, justified. There is no discernible harm to the fetuses. There is a potential harm to women who discover, and are distressed to learn, that their unborn children remained in a mortuary for a prolonged period. Yet this harm would be realized only if women learned of the incident through disclosure (or through media attention).

Medical care should aim to be translucent, but perhaps not always transparent. There are some things that we are better off not seeing.

*One obvious exception to this is journalists, who regularly publish corrections to material, even if the errors are small and apparently non-significant. We might think that the special importance of accuracy in published information means that journalists should be bound by ultra-transparency. However, it is not clear that errors of omission are necessarily disclosed by journalists in the same way.

The future of punishment: a clarification


By Rebecca Roache

Follow Rebecca on Twitter here

I’m working on a paper entitled ‘Cyborg justice: punishment in the age of transformative technology’ with my colleagues Anders Sandberg and Hannah Maslen. In it, we consider how punishment practices might change as technology advances, and what ethical issues might arise. The paper grew out of a blog post I wrote last year at Practical Ethics, a version of which was published as an article in Slate. A few months ago, Ross Andersen from the brilliant online magazine Aeon interviewed Anders, Hannah, and me, and the interview was published earlier this month. Versions of the story quickly appeared in various sources, beginning with a predictably inept effort in the Daily Mail, and followed by articles in The Telegraph, Huffington Post, Gawker, Boing Boing, and elsewhere. The interview also sparked debate in the blogosphere, including posts by Daily Nous, Polaris Koi, The Good Men Project, Filip Spagnoli, Brian Leiter, Rogue Priest, Luke Davies, and Ari Kohen, and comments and questions on Twitter and on my website. I’ve also received, by email, many comments, questions, and requests for further interviews and media appearances. These arrived at a time when I was travelling and lacked regular email access, and I’m yet to get around to replying to most of them. Apologies if you’re one of the people waiting for a reply.
I’m very happy to have started a debate on this topic, although less happy to have received a lot of negative attention based on a misunderstanding of my views on punishment and my reasons for being interested in this topic. I respond to the most common questions and concerns below. Feel free to leave a comment if there’s something important that I haven’t covered.

Do you want to torture prisoners?
No. I don’t endorse any of the punishment methods mentioned in the Aeon interview or in the other media coverage. Considering futuristic punishment methods is an exercise in philosophy, not a proposal for the reform of the criminal justice system. The dystopic punishment methods discussed in the interview interest me because of the philosophical questions they raise about punishment practices. These questions include: Is it important for punishment to be unpleasant? What makes a punishment inhumane? If we could use technology to ‘calibrate’ punishments to ensure that the subjective experience of a particular punishment is similar for anyone who receives this punishment, is this something we should do? Should prison sentences be increased as average lifespan increases, just as fines have increased as average wealth has increased? What are the legitimate aims of punishment? When is it acceptable to deny someone access to a particular technology as part of their punishment, and when is doing so an impermissible infringement of their rights? In what way is remorse important, and would pharmacologically-induced remorse be as good as spontaneous remorse?
The recent media coverage creates the impression that I’m interested solely in thinking up new methods of punishment, but the focus of the paper I’m writing with Anders and Hannah is much more general than this. We’re interested in how technology and punishment practices might come to interact in the future, and some of the ways this might happen could be unintentional. Imagine, for example, that mobile phone technology evolves so that instead of having a handset to carry around, many people have a chip implanted under their skin that performs roughly the same function that mobile phones perform today. Would it be ethically acceptable to remove such an implant from prisoners? On the one hand, future governments—like current ones—might have good reasons for believing it important to restrict prisoners’ ability to communicate with the outside world, and they might use these reasons to justify removing the implant. On the other hand, removing such an implant would involve surgery, probably without consent in most cases. This would be difficult to justify, and we might reasonably worry about where it might lead: if we accept that the state can perform surgery on a non-consenting criminal for the purpose of removing an implant, might this open the door to other uses of compulsory surgery or invasive treatments?
Another interesting issue is that, even if technology is harnessed to devise new punishment methods, it might not be clear how the new methods compare to old methods. Radical lifespan enhancement might enable us to send people to prison for hundreds of years, but would this be a more severe punishment than current life sentences, or a less severe one? On the one hand, longer prison sentences are more severe punishments than shorter prison sentences, so a 300-year sentence would be a more severe punishment than a 30-year one. On the other hand, consider that many prisoners sentenced to death in the US appeal to have their death sentences converted to life sentences. This suggests that a longer sentence is viewed by prisoners who are sentenced to death as less severe than a shorter sentence (followed by execution). I made this point in the Aeon interview, and some people took me to be rejecting the idea of technologically-extended life sentences on the ground that this would be too lenient, and therefore bad. In fact, my point was that it might not always be obvious how technologically-induced changes in a punishment affect its severity.

You’re only pretending to care about ethically assessing futuristic punishments. You don’t really care about this.
No, I really do think that it is extremely important to assess the ethics of future technologies before they are developed. Six years ago, I published an academic paper arguing for exactly this claim (‘Ethics, speculation, and values’, which you can read here if you’re interested).
Possibly this accusation arises from inaccurate reports in the media that I am a scientist, and/or that I’m in charge of a team of scientists. That might have given people the impression that I’m involved in actually developing technologies for the purpose of modifying punishments. In fact I’m a philosopher, not a scientist, and I’m not in charge of anyone.

Only an evil person could think up the punishments you describe.
I find the claim implied here, that it is morally wrong to entertain certain thoughts, both disturbing and implausible. But if thinking up dystopic punishment methods makes me evil, I am in good company. As several people have pointed out, various science fiction authors have described punishments very similar to the ones mentioned in the Aeon interview and in my original blog post. An episode of Star Trek deals with a method of punishment resembling the mind uploading scenario I described in the blog, in which prisoners’ minds could be ‘sped up’ to enable them to serve a subjectively long virtual sentence in just a few minutes or hours of real time. And Vernor Vinge’s novel Marooned in Realtime describes a method of ‘freezing’ criminals in time so that punishment can be delayed. I doubt that anyone seriously believes that the authors of such punishment scenarios are evil.

Why do you focus on retributive punishments? Are you a retributivist?
We don’t focus solely on retributive methods—even the Aeon interview discusses rehabilitation—but I think the most interesting philosophical questions are raised by retributive issues, so we devote more attention to considering those.
A bit of background: retribution is a reactive form of punishment. It aims to punish criminals by imposing on them a deprivation proportional to the seriousness of the crime they’ve committed. Traditionally conceived, pure retribution does not aim at bringing about a particular consequence, such as rehabilitation or deterrence. Because of this, we might call retributive punishments non-consequentialist. (To complicate matters, however, more recent theories of retribution hold that the purpose of punishment is to convey censure to criminals. This is not a purely non-consequentialist conception of retribution.)
In reality, punishments tend not to be purely retributive. They are usually partly consequentialist, in that they aim at producing a particular effect. The practice of imprisoning people for crimes is partly consequentialist in that it aims at deterring others from committing similar crimes. The practice of issuing community service orders (as, for example, when people caught spraying graffiti on public property are made to remove the graffiti as a punishment) is partly consequentialist in that it aims at having criminals make amends for their wrongs. Despite being consequentialist, these methods also have a retributive element: they involve imposing a deprivation on criminals that is proportional to the seriousness of the crime committed, and that is deserved by the criminal. The ideas of proportionality and desert are characteristic of retributive punishment.
Whilst it is the primary motivation of justice systems like that of the UK, retribution is controversial. There is a sense in which it captures the essence of punishment—particularly the ideas of proportionality and desert, which are central to our intuitions of what punishment is about—but since pure retribution, traditionally conceived, does not aim at producing any positive effect (or, indeed, any particular effect at all), its ‘eye for an eye’ approach can seem primitive and unjustifiable.
Now, if we are consequentialists about punishment in that we wish to use punishment primarily as a means for achieving certain ends (such as deterrence, rehabilitation, making amends), then the use of futuristic technology in punishment does not raise any particularly interesting ethical issues about punishment. If we’re just interested in consequences, then all we need to do in order to settle the ethical issues is to gather empirical data about which technologies help us to achieve the best consequences, and use methods that incorporate those technologies.
By contrast, non-consequentialist concerns raise a number of interesting questions, such as those mentioned at the beginning of my answer to the first question, above. These issues are not novel, but they are raised in an interesting new way by viewing punishment through the lens of the future. Considering non-consequentialist concerns is relevant not only for retributivists, but also for consequentialists, since even consequentialists about punishment are interested in some non-consequentialist issues such as proportionality and desert. For example, few consequentialists would think it acceptable to frame an innocent person for murder and punish them with life imprisonment, even if there was reason to believe that doing so would be a good way of effecting desirable consequences, such as deterring would-be murderers. That consequentialists would view this as unacceptable indicates that they think that desert is an important aspect of punishment, and desert is a retributive notion.
As to the extent to which I’m a retributivist, I’m still trying to decide. I’m certainly not a pure retributivist: I don’t believe that retribution is the only legitimate purpose of punishment, but I’m open to the idea that retribution is important. That’s not to say that I’m unsympathetic to the view that it seems more constructive to aim at positive ends than to punish reactively. But there is a sense in which the retributive aspect of punishment is more respectful of people and more affirming of their capacities as rational agents than consequentialist approaches: by not aiming primarily at bringing about a certain consequence, retribution (unlike consequentialist approaches) does not treat criminals as means to an end. On a more practical level, it could be that retributive punishment is important for social stability: we might see more vigilante activity if the state did not satisfy people’s desire for retribution by ensuring that criminals suffered appropriately for their crimes. The importance of such considerations, and the extent to which they justify retributive punishments, is something that I’m still considering.

Have you been misrepresented?
Media coverage has played a significant role in distorting my views: various re-hashings of the Aeon interview on other websites have created an impression that I’m in favour of implementing shockingly severe punishments—and, in some cases, that I’m a scientist actually involved in developing such punishments. What has been lost in this coverage is the distinction between philosophically evaluating an idea, and endorsing it. As I have said above, I don’t endorse any of the punishment methods I’m considering with Anders and Hannah. Indeed, the disturbing thought that futuristic technologies might be unthinkingly incorporated into our criminal justice practices without prior ethical assessment is one of my motivations for wanting to work on this topic.

Are you annoyed at the response you’ve received?
I’m delighted to have started a debate about this issue, and I welcome intelligent responses from people who either agree or disagree with me. Most of the responses I’ve received have fallen into this category. I’ve engaged with people who have written responses on Twitter and on their blogs, and I’m happy that so many people have found the issue stimulating.
I’m less happy to be misunderstood as endorsing the futuristic punishment methods mentioned in the Aeon interview. This post is an attempt to correct that misunderstanding.

Gene-Editing Mosquitoes at The European Youth Event 2018


By Jonathan Pugh

 

The below is a slightly extended version of my two 5-minute presentations at the European Youth Event 2018, at the European Parliament in Strasbourg. I was asked to present on the following questions:

  1. What are the ethical issues surrounding gene-editing, particularly with respect to eradicating mosquitoes?

  2. Should the EU legislate on gene-editing mosquitoes?

What are the ethical issues surrounding gene-editing, particularly with respect to eradicating mosquitoes?

We humans have been grappling with moral questions regarding genetic modification for many years now. However, the rise of the CRISPR/Cas9 gene-editing system has brought these questions back into the public sphere with a vengeance. The reason for this is that the CRISPR system is more precise, easier to use, and cheaper than preceding methods of genetic modification. It has hugely enhanced our ability to exert control over the genomes of various organisms.

Naturally, there has been a great deal of interest in the potential for editing the human genome, particularly following the publication of the first study successfully changing DNA sequences in human embryos. This study has prompted ethical reflection on the sorts of influence that we should be able to exert over future generations. Could it be permissible to use gene-editing to cure serious genetic diseases like Huntington’s disease and cystic fibrosis? Could it also be permissible to use the technology to change non-disease traits, like eye colour, looks, or even intelligence? Could such a technology be used without violating fundamental moral principles of justice and equality?

These are undoubtedly profound questions. However, they are perhaps not the most pressing ethical questions that gene-editing raises. One reason for this is that the technology cannot yet be used to edit the genes we want in human embryos reliably and safely, without significant risk of off-target mutations. As such, clinical uses are still a very long way off. In contrast, studies suggest that gene-editing can be, and already is being, used in other ways that raise a different set of ethical questions that are no less profound.

For one example, consider the use of gene-editing in the fight against mosquito-borne diseases. Such diseases represent a significant global disease burden; according to the WHO, there were 214 million cases of malaria in 2015, resulting in 438 000 deaths. Using pre-CRISPR techniques of genetic modification, scientists in the EU have already edited genes in the Aedes aegypti species of mosquito (responsible for spreading the Zika virus, dengue and yellow fever, amongst others) so that the mosquito’s offspring cannot survive to reproduce. These gene-edited mosquitoes are already being field tested in South America, with the modification leading to significant local reductions in both the mosquito population and human disease.

The CRISPR system opens up new opportunities for gene-editing strategies, and it might be used to target other genes in other species. For example, some teams have explored the use of CRISPR to modify the Anopheles stephensi mosquito so that it becomes resistant to the Plasmodium parasite that causes malaria in humans. More significantly, these researchers, and others investigating the use of CRISPR in this context, have harnessed the CRISPR system to develop gene drive systems for their genetic modifications. Such gene drive systems could potentially enable researchers to spread a chosen genetic mutation throughout an entire species, by stimulating preferential inheritance of the affected gene (a toy model of this dynamic is sketched below). As well as being deployed in the fight against vector-borne diseases, gene drive systems might also be employed for environmental causes, such as suppressing the population of an invasive and destructive species.
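To see why preferential inheritance makes gene drives so much more potent than ordinary genetic modification, here is a minimal sketch, assuming a deterministic random-mating model with no fitness costs and no resistance alleles; the conversion efficiency c is a hypothetical illustrative parameter, not a measured value. Under ordinary Mendelian inheritance (c = 0) a rare allele simply stays rare, whereas a drive allele can sweep to near-fixation from a small release.

    # Toy model of gene-drive spread under random mating (illustrative only).
    # Assumptions: no fitness cost, no resistance alleles, discrete generations.
    # c is a hypothetical germline conversion efficiency: the probability that
    # a heterozygote's wild-type allele is converted to the drive allele.
    # c = 0 recovers ordinary Mendelian inheritance.

    def next_freq(p, c):
        """Drive-allele frequency after one generation of random mating."""
        # Drive gametes come from D/D parents (frequency p^2) and from D/+
        # parents (frequency 2p(1-p)), who transmit D with probability
        # (1 + c) / 2 rather than the Mendelian 1/2.
        return p ** 2 + p * (1 - p) * (1 + c)

    def generations_until(p, c, threshold=0.99, cap=1000):
        """Count generations until the drive allele exceeds `threshold`."""
        gens = 0
        while p < threshold and gens < cap:
            p, gens = next_freq(p, c), gens + 1
        return gens

    # Release drive carriers at 1% of the population:
    print(generations_until(0.01, c=0.95))  # strong drive: about a dozen generations
    print(generations_until(0.01, c=0.0))   # Mendelian: hits the cap; the allele never spreads

On any such model the qualitative point is robust to the exact numbers: super-Mendelian inheritance turns a small local release into potentially species-wide spread, which is precisely why decisions about deployment are so difficult to contain or reverse.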

This is not in the realm of science fiction; the technology is here, detailed in respected scientific journals. As such, it is hugely important to assess the ethical implications of how we should use this technology.

Some commentators have criticized the technology as contrary to the principle of the sanctity of life, or as amounting to ‘playing God’. In my research, I have suggested that these objections are problematic for a number of reasons, and that they obscure what is really at stake in this debate. One problem is that both allegations could similarly have been made against our choice to eradicate the variola virus responsible for smallpox in the 1970s, through an extensive vaccination program. Yet this was one of modern medicine’s greatest triumphs.

The real ethical questions raised by the prospect of gene-editing mosquitoes are grounded in our scientific uncertainty about the technology: will the modification work? Would eradicating a species of mosquito adversely affect the ecosystem? Will the modification spread to other species? These are empirical questions about which there is significant debate, and we clearly need more data to answer them. However, these empirical questions must be the starting point for what is really the fundamental ethical question here: ‘How should we make decisions about whether or not to deploy a technology when we have only a limited understanding of its potential risks and benefits?’ Who should decide? Do we have the right to deploy a technology that could plausibly change the global ecosystem, if others object to its use?

The easy answer here is to say that we should not take any risks, and that in the light of any uncertainty we should simply maintain the status quo; perhaps it is better the devil you know. Of course, that is not the approach we have taken with other sorts of novel technology, such as IVF and the internet. But more importantly, in this case the status quo is one in which many hundreds of thousands of people are dying from diseases that this new technology could potentially prevent. As such, we need very strong moral reasons to maintain this situation. Rather than avoiding the ethical questions by simply adverting to the possibility of risk, we have an obligation to engage in moral reasoning about how to weigh the relevant risks and benefits, grounded in the best scientific evidence available to us.

 

Should the EU legislate on gene-editing mosquitoes?

 

There are two different questions here. The first is whether this technology is a legitimate target of EU legislation. The second is whether there is any existing EU legislation that already sufficiently regulates this technology. I shall discuss each question in turn.

It might be argued that this technology is not a legitimate target of EU regulation simply because mosquito-borne disease is not primarily a European problem; after all, there are comparatively few cases of mosquito-borne disease in Europe, and the technology is primarily intended for use in countries outside the EU. However, I want to urge against this view for three reasons.

First, the prevalence of mosquito-borne disease might plausibly increase in Europe as the effects of climate change take hold. Second, native animals that could be targeted with gene drives might become vectors for new pandemics; indeed, the most devastating pandemic in European history, the great plague of 1347-1351, was caused by a bacterium carried to Europe by fleas on the backs of rats. With this in mind, it is notable that scientists have now developed a population-suppressant gene drive that works in rodents. Further research in this area might allow us to develop gene drives that could limit the spread of future pandemics.

Finally, much of the research into the genetic modification of disease vectors is being carried out in the EU, whilst field trials of the products of this research are being performed in other parts of the world. If the EU stands to benefit from this research, it also has a moral responsibility to make sure this research is being performed responsibly.

In fact, I suggest that gene-drive technology is a paradigm example of a case in which there is a clear need to develop international agreement about a new technology. Mosquitoes do not respect national boundaries; accordingly, the consequences of nationally isolated decisions to employ gene drives would not be restricted to those nations. We all have a stake in getting the answers right. By the same token, the EU should have a global outlook here, making its decision on potential regulation of this technology in consultation with nations outside the bloc. As a leader in research on this technology, the EU has an opportunity to set an example of global moral leadership, by taking steps towards developing much-needed trans-national agreement on the specific regulation of gene-drive technologies.

So genetic modification and gene drives are a legitimate target of European legislation. Is there any existing EU legislation that already sufficiently regulates this technology? The EU already has a directive concerning the release of genetically modified non-human organisms (GMOs) into the environment. This is an important piece of legislation, but I believe that there are at least two ways in which the existing directive needs to be modified in order to appropriately regulate disease vectors modified using the CRISPR system and gene drives.

 

Too Lax? Community Consultation or Community Consent?

 

I believe that the existing directive is too lax in one important respect. Article 9 of the directive requires member states to consult the public prior to the deliberate release of GMOs and to disseminate relevant information about the trial.

This sort of community consultation and information disclosure is of course important, and perhaps even sufficient for the experimental release of agricultural GMOs. However, there is a strong case for claiming that the public should have greater moral protection in field trials of gene-drive mosquitoes. The reason for this is that such trials would include elements of both an environmental experiment and a public health clinical trial. After all, in order to assess the impact of the mosquitoes, it is likely that field trials will need to obtain public health data, and perhaps even data from individuals (for more on this, see David Resnik’s excellent discussions here and here).

In the case of public health trials, though, communities enjoy far more stringent consent protections than mere consultation, as evidenced in the clinical trials directive (and the clinical trials regulation that will succeed it). Whilst the informed consent of every individual is often not possible for public health interventions, adequate regulation in this area should aim for democratic procedures that can capture broad community consent to the intervention, rather than consultation and information dissemination alone. Indeed, researchers themselves have been promoting this model of “responsive science”, which champions not only transparency, but also gives affected people the final say on whether a trial goes ahead.

Accordingly, I recommend that the EU stipulate clear guidelines on the model of community consent required for permissible field trials of genetically modified disease vectors, and more broadly, how the GMO directive should be understood to interact with the clinical trials regulations in this context.

Of course, there are also broader challenges for democratic procedures in such trials. Following a 2015 revision to the initial directive, if an EU member state objects to the use of an approved GMO, other member states must establish ‘coexistence regulations’ to avoid cross-border GMO contamination. However, it is unclear how successful such coexistence measures can be in the case of genetically modified mosquitoes, particularly if a gene drive is employed. Accordingly, rather than relying on a simple model of majority rule, the EU may have to reconsider how to navigate fundamental democratic issues in deciding whether to implement field trials that may put even abstaining states at unavoidable risk.

 

Too Restrictive? The Perils of Precaution

 

In other ways though, the GMO directive is overly restrictive.

Under the existing directive, GMOs are subject to a risk assessment performed by the European Food Safety Authority (EFSA). One problem with this is that many of the risks and benefits of GM mosquitoes arguably go beyond the scope of the EFSA’s expertise. I recommend that risk assessments of GM mosquitoes should instead be performed by a collaborative body, consisting of members of both the EFSA and the European Centre for Disease Prevention and Control. Such a collaborative body is necessary to adequately assess the diverse risks and benefits at stake with gene-drive technology across epidemiology, public health and biodiversity. Indeed, these bodies have already produced a collaborative publication on combatting vector-borne disease.

The more significant problem, though, is that under the existing directive these risk assessments must be performed under the guidance of the precautionary principle. This principle gives priority to the avoidance of risk and places the burden of proof firmly on proponents of GMOs to prove that they are safe. Accordingly, risk assessments prioritise information pertaining to the potential risks posed by GMOs over information pertaining to their potential benefits.

The precautionary principle is often criticised by philosophers, because it plays into a number of cognitive biases that humans are prone to exhibit, including loss aversion and the assumption that nature is benign. More importantly, though, the principle offers only limited help in making adequate risk assessments in this context. The reason is that if vector-borne disease becomes more prevalent, or we face new pandemics, then a decision to maintain the status quo is itself a failure to take precautions against a very serious and foreseeable risk. Furthermore, the principle may also put gene-drive research into a catch-22 regarding its burden of proof: the onus is on researchers to prove that such organisms are safe before they can be released into the environment, but there will remain some scope for doubt about their safety until we conduct field tests by releasing the organisms into the environment.

Accordingly, my final recommendation is that the European Parliament should reconsider the role of the precautionary principle in the directive, and attend closely to the potential benefits, as well as the risks, associated with the different courses of action available. In this regard, I echo the spirit, if not the precise letter, of the House of Lords recommendation on genetically modified insects.

This is not to say that we should ignore the risks of gene-drive technology and carry out field tests regardless. Rather, given the high stakes involved, we should base our assessment on all of the available morally relevant information, including information about both the benefits and the risks of the technology. It may be that such analysis leads to the conclusion that we should not deploy GM mosquitoes in the EU, depending on the decision principles we employ in weighing the relevant facts. The point is that we have a moral responsibility to reach whatever conclusion we do on the basis of sound, scientifically grounded moral reasoning informed by all the morally relevant facts at our disposal.

 

Postscript

 

I was invited to speak in a panel discussion alongside Louison Charmoillaux (a volunteer for Greenpeace Lyon) and Adrien Pasquier (PhD student, Telethon Institute of Genetics and Medicine [TIGEM]). The panel was moderated by Jonathan Hendrickx (freelance journalist). You can find a video of the panel here (be sure to change the language settings to English to hear translations of the French speakers).

 

 
