Acknowledgments: I am most thankful to Regina Rini and Collin O’Neil, who offered
extensive and thoughtful comments at various stages in my writing process. I would also
like to thank Noushaba Rashid, Scott Briggs, Jennifer Treman and Isabella Knazeck for their
comments, criticisms and support.
Dual-Use Research: The Scientist’s Role
J. Kyle Treman
New York University Center for Bioethics
Regina Rini, PhD
Spring 2015
Abstract:
More than any other time in history, humans possess a great deal of
technological power. As this power increases, so too does the magnitude of its
risk. This is due to a characteristic of certain research projects known as dual-use.
Dual-use describes the capacity of research to enable both benefits and harms. Typically, the
precautionary principle or a cost-benefit analysis is used to judge these
situations. However, a lack of knowledge, public bias towards emerging
technology, and a psychological tendency to favor short-term over long-term
benefits may make regulatory bodies ill-suited for these cases. To
mitigate these risks, society must look to the individuals most responsible and
best placed to make decisions about dual-use research. The special position of a
scientist is perfectly suited for this responsibility; the special knowledge,
abilities and unique causal influence of the scientist all impart a role obligation
to safeguard society from risks in dual-use research. That obligation persists
despite strong countervailing reasons, and may require the scientist to withhold
information from investors, the public, and governments.
When a technology is used for harm, it is usually being used in a manner
for which it was not intended. When these harms are brought about, the scientist
who developed the technology is rarely held responsible. However, in certain
situations the scientist can uniquely enable an individual willing to do harm. If a
scientist published a study, and that study directly contributed to a bioterrorist
attack, would you hold that scientist responsible? Would you want to? I suspect
there is an opinion in our society that scientists are in some way responsible for
their part in enabling harms through research. This judgment comes from what I
believe the special position of a scientist entails and the magnitude of risk
inherent in research today. In what follows, I defend the claim that a scientist
has a special position that confers a strict role obligation to safeguard society
from the potential misuse of his research, and this obligation persists despite
strong countervailing reasons.
Dual-Use Research
In order to motivate my claim, these cases need more details. The
research projects that I am concerned with all possess similar characteristics; the
first is the property of dual-use. Dual-use describes the results of a research
project, namely that they can enable either benefits or harms. For example,
consider a scientist who discovered a way to excise portions of DNA without
damage and implant them into other organisms. This study is aimed at
providing more targeted therapies that can work at the genetic level. Through
the same method used to derive benefits a bioterrorist could create a
weaponized virus. The ambiguity of dual-use research often carries over into
our opinions about dual-use technology. We have intuitions that the project
should be pursued for the benefits, but avoided for the harms. What makes
these studies even more difficult to judge are the factors underlying the benefits
of the study. That is, the benefits outweigh the harms to a meaningful degree
and the scientists pursuing these projects have no intention of enabling the
harms.1
For example, cancer drugs are highly poisonous, but that toxicity is necessary
for them to be effective against cancer. One could easily predict that these poisonous
ingredients could be used in a murder. However, the benefits to millions of
cancer patients far outweigh the harms of potentially enabling a few murders,
thus the drugs are researched and produced. Further, the scientists who
research cancer drugs never intend them to be used for this purpose. They
intend these drugs to improve the quality of life of patients with cancer. The
1. "Meaningful degree" is meant to capture the threshold at which a society (or
whichever group in a society is charged with this determination) would deem a research
project worth pursuing. All studies have residual harms, but they are almost always
outweighed by the benefits.
intention behind these drugs, in addition to the great weight of their benefits, makes
this type of dual-use research worthwhile.
However, not all dual-use studies are so easily imagined, or so ubiquitous.
Oftentimes these projects represent a novel departure from scientific dogma,
enabling a new set of benefits and harms. The second characteristic of dual-use
research is the degree of novelty their results have. These research results may
be novel in degree or kind. For example, a new chemotherapy drug that is an
analog of an existing drug is novel in degree. We can correctly observe that this
degree is very low, which is evident in the justification of developing cancer
drugs. However, discovering a new kind of drug, or a new kind of treatment for
cancer altogether, possesses a high degree of novelty. Any harms or benefits
enabled in the dual-use projects were previously non-existent, or existent to a
very low degree. Thus, the studies specifically enable harms that may not have
been possible before. The dual-use research projects I am considering have
results that are unique to those individual projects and the scientists who
publish them.
The last characteristic that dual-use research shares is the magnitude of
risk. When considering the magnitude of dual-use research there are three
qualities to examine: the scope, the intensity, and the probability.i The scope
describes the number of people who stand to be affected; today many scientific
studies have a global reach. Terrorists need only to turn on a cell phone to
download the latest information to enable their ends. Thus, a dual-use research
study can enable harms around the world, harming anyone indeterminately.2
Stephen Kern, in The Culture of Time and Space, explored the effects new
technology can have on society. Kern notes that during World War I the
increased speed of communication and the shortened time for diplomacy led to
detrimental consequences.ii
New technologies, the telegraph and telephone,
were largely responsible for influencing the pace of diplomacy during the July
Crisis leading up to the First World War.iii
These rapid communications led nations to place deadlines on response times:
where a nation would normally have had weeks to determine a course of action,
it now had a matter of days, which inevitably led to escalation.
The intensity is the degree to which the risk will lower the well-being of
individuals. Risks can have effects at the personal, local or global level. Consider
the following chart:
2. By "indeterminately" I mean that it is not predictable who will be affected when a
study is published until the risks are actualized into harms. Combined with a global
reach, the individuals who will be affected are unpredictable, or indeterminate.
Figure 1: Intensity v. Scope of Risk.iv

                 Endurable                      Terminal
    Global       Pandemic                       Existential risk
    Local        Epidemic                       Genocide
    Personal     Infected with deadly virus     Death from virus

(Scope runs down the rows; intensity runs across the columns.)
Dual-use research rarely raises concern at the personal level, partly
because of its novelty and partly because of global technological infrastructure.
Most dual-use research will have effects at the global level.3 The harms that
come about as a result of dual-use studies fit into the endurable and terminal
categories: endurable harms can be overcome, whereas terminal harms are
insurmountable. Dual-use research generally falls into the upper categories
because of the reach of these technologies and their accompanying infrastructure.
As these projects increase in scope and intensity, dual-use research
enables a new type of risk.
With some dual-use technologies there exists existential risk.v
These are
risks that if actualized into harms will have an irreversible effect on the human
species, either eliminating it or drastically curtailing its potential.4
The threat of
a nuclear holocaust is probably the most visceral image for people alive today,
but even a nuclear holocaust may only fit into the endurable category, as
mankind may not be irreversibly changed. Dual-use research projects do not
always fit this category, but there is an increasing prevalence of these
technologies. Nanobots, gene therapy, research into deadly diseases and
machine-body interfaces are all examples of these. They are especially
3. Of course, this does not preclude other types of effect at the local or personal level.
A global endurable risk may still be terminal at the individual or local level.
4. Bostrom has identified four types of existential risk: bangs, crunches, shrieks, and
whimpers. For a deeper analysis see [i]
worrisome because a single mistake, such as releasing one self-replicating
nanobot into the environment, may be all it takes to cause catastrophe.
The probability is the chance that a given event will occur, but probability
of risk in dual-use research can further be broken down into subjective and
objective probability.vi
Since objective risk assessments are difficult to make precisely, we must operate
partly on subjective risk. This is due partly to the lack of available facts about
the consequences and partly to the novel nature of the risk. Any prediction
where novelty exists cannot be one hundred
percent certain. Some risk will be perceived rather than measured.5
A general trend in discussions of dual-use research is that the more subjective
the risk, the more the precautionary principle is invoked. The precautionary
principle favors erring on the side of safety when scientific evidence is lacking
or bias is overwhelming. Underlying the precautionary principle is a judgment
about the value of the current state of affairs.vii
The more novel the risks, the more people have the perception that the current
state of affairs is somehow superior, or that the new state of affairs is somehow
worse. By and large the public has this negative opinion about many of these
future technologies, such as cloning.viii
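One crude way to make the interaction of these three qualities concrete (an illustrative formalization of my own, not one that Bostrom or this paper commits to) is an expected-disvalue sketch:

```latex
% Illustrative sketch only:
%   p = probability the risk is actualized into harm,
%   s = scope (number of people affected),
%   i = intensity (average loss of well-being per affected person).
\mathbb{E}[\mathrm{harm}] \;\approx\; p \cdot s \cdot i
```

On this sketch, even a small probability can dominate a decision when the scope is global and the intensity terminal; and since the probability of a novel risk is partly subjective, the whole estimate inherits that subjectivity.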
Nick Bostrom has identified that one of the reasons for our attraction to
the precautionary principle might be that our coping strategies for these risks
have not evolved out of experience, as our strategies for dangerous animals,
diseases or wars have evolved.ix
Therefore, our typical strategies for thinking
about these risks, such as the precautionary principle, may be ill-suited to
determine the appropriate course of action. The intensity and scope of the risk
may be too difficult to rationally fathom. Further, as consumerism continues to
direct scientific pursuits, regulatory bodies might be more convinced by short-term
benefits than long-term harms.6
This is one problem of cost-benefit
analysis with regard to these technologies. Humans have a propensity to judge
short-term gains as more valuable than long-term gains, a tendency known as
hyperbolic discounting.x
One piece of evidence suggesting this affects judgments about dual-use research
is the introduction of a recent bill in the US House of Representatives that
would consider lowering FDA standards for
5. This is applicable to the scientist too: each individual scientist is certain to have
biases. However, there are two reasons to disregard these biases, or at least consider
them trivial. First, the position of the scientist is one where rationality dominates the
thinking, or at least theoretically should; of all positions in society, scientists should
be among those with the least bias regarding science. Second, biases can never be
eliminated completely, but scientists, as a position in society, can minimize bias. This
can be accomplished through procedural methods, such as having other scientists check
decision-making steps prior to publishing.
6. "Long-term harms" here can mean either low-probability harms or harms that will
occur at a future time.
crossover drug use.xi The short-term benefit of economic gains, or of benefits
from private interest groups, may be allowed to outweigh the long-term benefit
of keeping safety standards high for new drugs and uses. If this type of judgment is made
with technology that has existential risk, there is no second chance.
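The bias named above can be made precise. In the standard hyperbolic model (Mazur's formulation), the present value V of a benefit of amount A delayed by time D is:

```latex
% k is a fitted discount parameter in both models.
V_{\mathrm{hyperbolic}} = \frac{A}{1 + kD}
\qquad \text{vs.} \qquad
V_{\mathrm{exponential}} = A\, e^{-kD}
```

The hyperbolic curve drops steeply at short delays and flattens at long ones, so harms far in the future are discounted almost completely, which is exactly how a regulator can come to undervalue long-term risks relative to immediate benefits.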
The more problematic cases, however, will exist in the middle. Existential
risks that would end humanity will likely be heavily scrutinized because of the
perception of risk, or, at minimum, risks of that magnitude will arise less often
than the others. Thus, they may not be as difficult to safeguard against as global
endurable risks. The cases where the risks are global but endurable will still
carry a great deal of benefits, making judgments about that technology more
difficult.7
These borderline cases will have proponents on both sides of the aisle.
There will be less of a call for the precautionary principle (especially if the risks
are distant) and many different valuations of a cost-benefit analysis. This will
make it difficult to determine the appropriate course of action quickly and
decisively, which is needed for the magnitude of risk these projects possess.8
All the cases that I am concerned with possess these same characteristics:
the property of dual-use, a greater potential for benefit than harm, a lack of
intention of enabling harms on behalf of the scientist, novelty, and a high
magnitude of risk. Now consider an example of dual-use research that possesses
these characteristics that was published without safeguarding.
In June of 2012, seven separate studies were published in Nature detailing
experiments mutating H5N1, the most deadly strain of influenza. Since 2003
there have been 718 confirmed cases of H5N1 across 16 different countries, and
of those cases, 413 people have died.xii
The viruses, and their accompanying
mutations, have a low rate of transmissibility between human hosts. The low
transmissibility is a key factor in preventing the disease from becoming a
pandemic during outbreaks, like the 1918 Spanish flu that claimed close to 5%
of the world’s population.xiii Two of the studies detailed novel methods of
introducing mutations to the virus that would make it transmissible, potentially
between humans.xiv
In addition, the method behind mutating
7. At minimum, there will be more proponents of these technologies, or people who
evaluate the risk as "worth it". This is because global but endurable risks bring a
different psychology to these problems. Since human beings will not perish or become
diminished, different countries will be affected differently. Depending on the ethical
standards and dispositions of these countries, a global endurable risk may be outweighed
by a country's perception of the benefits of research. The appropriate action of the
international community in these cases is beyond the scope of this paper.
8. Even if existential risk is not actualized, a global endurable risk may become
actualized quickly and have terminal effects at the local and individual level.
the virus was incredibly easy to replicate. One involved using nasal spray and
ferrets as mediums for mutation.xv
Both studies showed that the mutated virus had diminished lethality; however,
none of the scientists denied that these experiments could be replicated and
the lethality retained through other mutations if that were the intent.xvi
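The case counts quoted earlier in this section make the stakes concrete. A quick back-of-the-envelope check of the implied case-fatality rate (a sketch using only the figures given above, not an epidemiological estimate):

```python
# Back-of-the-envelope check using the figures quoted in this section:
# 718 confirmed H5N1 cases since 2003, 413 of them fatal.
confirmed_cases = 718
deaths = 413

case_fatality_rate = deaths / confirmed_cases
print(f"H5N1 case-fatality rate: {case_fatality_rate:.1%}")  # prints 57.5%

# For comparison, the 1918 Spanish flu killed close to 5% of the world's
# population despite a far lower fatality rate, because it was highly
# transmissible -- which is why transmissibility-enhancing mutations are
# the focus of dual-use concern.
```

The point of the comparison is that lethality and transmissibility trade off: a virus combining H5N1's fatality rate with pandemic-level transmissibility is the harm the studies were feared to enable.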
The public was first alarmed by these studies when one of the scientists
stated that he and his team “had done something very stupid by mutating a
H5N1 virus that resulted in one of the most deadly viruses that could be
created.”xvii
He later walked his comments back by claiming that the media had
blown his statements out of proportion. However, the perception became that
this research was a danger to release since it had the possibility of enabling
bioterrorism. Proponents pointed out that the research could also be used to
counteract terrorism by providing a cure, vaccine, or countermeasures to an
attack. Even if the proponents were right, the scientist’s statements about his
research are troubling; at minimum they show a degree of callousness about the
potential consequences of his research. However, callousness does not
necessarily mean that he acted wrongly.9
None of the research teams intended to
publish a study that could be used in bioterrorism. They, like most scientists,
are focused on more pragmatic concerns: preserving their research grant, their
job, completing a sound scientific study, benefiting people through scientific
discovery, and so on. The study of H5N1 was aimed at providing cures and
countermeasures, rather than weapons. In the end, the studies were published
in their entirety and as of today there have been no attacks with a weaponized
strain of H5N1. However, it would be premature to conclude that the correct
decision was reached or that these concerns over dual-use research are
unfounded.
With all of these factors at play, it can be difficult to determine what we
ought to do when dual-use technologies are being developed. Rather than
attempt to use the precautionary principle or a cost-benefit analysis on a case-
by-case basis, society should question whether the scientists researching these
technologies have any obligation to safeguard society from the consequences. To
motivate this point, I will examine when a position possesses role obligations
and consider the position of the scientist under these circumstances.
Role Obligations
“… We have consultations, which of the inventions and experiences which we
have discovered shall be published, and which not: and take all an oath of
secrecy, for the concealing of those which we think fit to keep secret: though
some of those we do reveal sometimes to the state and some not.”xviii
- Francis Bacon, 1624

9. Wrong in the sense of the scientist being responsible for the research’s misuse.
In New Atlantis, Bacon saw the need for scientists to prevent certain types
of research from entering the public sphere, and the need of society to prevent
that research from becoming public knowledge. Bacon’s sentiments can be
viewed as a call for responsibility on the part of the scientist, specifically a
responsibility to safeguard society against certain types of research from
becoming public knowledge.
Positions that have special responsibilities can be said to have role
obligations. Role obligations are used generally to mark the existence of special
responsibilities of a position in society towards a specific individual or group.
The responsibilities are considered special because they are above and beyond
what an individual outside of that particular position ought to do. Generally,
role obligations function to protect vulnerable groups from harm. These harms
are also special in the sense that they would not be easily mitigated. The
imbalance of power usually comes from abilities within the position itself.
Consider the position of a doctor. Doctors have the power to improve
patients’ health, a very important function in our society. In order to make
patients better, doctors subject individuals to harm during the course of an
examination. These harms are necessary to bestow a greater benefit to the
patient. Since the doctor has the special power and ability to subject patients to
harm and the knowledge to determine how best to use those harms for the good
of the patient, doctors can be said to have a responsibility to minimize harms for
patients.10
For example, consider the ramifications if doctors did not have an
obligation to minimize harms. Doctors could use their power to subject patients
to whatever invasive and expensive tests they wished, taking advantage of people
in their most vulnerable state by exposing them to unnecessary physical and
psychological harms. In determining that no doctor can ethically act in these
ways, the obligation begins to determine the boundaries of the position of a
10. I appeal to the role obligations of doctors because of the generally uncontroversial
claim that the position carries inherent obligations to patients. I don't mean to advocate
for any particular formulation of "minimize harms for patients"; I assume only a common
intuition that this (or some derivative of "minimize harms") is one of the basic
obligations of a doctor.
doctor.11
However, there is an important feature of a doctor’s role obligation
that presents a problem for drawing similarities to the scientist, namely the
role the Hippocratic Oath plays. The oath can be seen as a voluntary
acceptance of the obligations of a doctor.12
The problem is there is no oath for
becoming a scientist. If an oath is what confers a doctor’s duties, then role obligations
may need to be voluntarily undertaken, but there are good reasons to doubt that
this has to be the case.
The position of a native-born citizen is one where the obligations to one’s
home country are not explicitly undertaken. Being born, and being born in a
specific country are not agreements any individual can enter into, but that does
not preclude native-born citizens from possessing the obligations and rights of
individuals who choose to be citizens. The obligations of citizens are always
present when that individual is identified in that role.13
A citizen of the United
States has obligations to pay taxes, and to follow the laws. In return that citizen
is granted certain powers that are protected (Bill of Rights), and has a more
advantageous status than non-citizens. However, initially this is not an explicit
agreement.14
Birthright citizenship is conferred at birth, without any prior agreement or
understanding of preexisting norms. Yet no one could
plausibly violate these obligations and claim ignorance.15
Further, no one could
deny a citizen his powers if he lacks knowledge of them. Therefore, if role
obligations can be conferred in explicit and implicit ways for a variety of reasons,
then it is worthwhile to analyze the position of a scientist to determine if it is
one with role obligations.
The Special Position of Scientists
11. This is meant generally. If a position is prohibited from a certain action (responsible
for not acting in a certain way), for example harming a patient, then that position must
operate around that prohibition when performing different functions or expanding its role.
12. There are a variety of versions across countries and medical schools, but I assume a
commonality of minimizing harms to patients among them. There may be specific differences
between oaths regarding euthanasia, abortion, etc., but while interesting, they are not
relevant to the discussion of the scientist's obligations.
13. I leave aside the specifics of how to determine citizenship (or degrees of citizenship)
and appeal to any case the reader deems unproblematic for what constitutes a citizen
(since there is no doubt that the individuals I am concerned with are scientists).
14. Infants have rights under the state despite not being able to agree to being citizens,
but in many cases their parents absorb these obligations.
15. Consider explaining to the IRS that you didn't know you had to pay taxes. Even if you
don't go to jail, you will still owe them money, no matter how much you disagree or protest
that you didn't undertake this obligation.
To motivate my claim I begin by analyzing the work of Sieghart et al.xix
They provide a good starting point as they sought to analyze whether the
position of the scientist has role obligations in general. Sieghart et al. began by
asking whether the position of a scientist is especially placed to benefit or harm
society.xx
They conclude that because of the potential to benefit or harm society,
in conjunction with the fact that scientists are able to more accurately predict
these consequences, scientists are in a special position that gives rise to role
obligations.xxi
I agree with their initial assessment, but there are more reasons to
judge the position of the scientist as being one with role obligations.
The special position of the scientist can be broken down into three main
components: special abilities, special knowledge, and a unique level of
complicity in the misuse of research. Recall that role obligations function to
protect vulnerable populations from harms stemming from power imbalances
that are not easily mitigated. Examples of the special abilities of scientists can
include use of restricted materials (such as strains of H5N1) and/or the use of
restricted procedures (experiments on human subjects). The imbalance of power
is one of the reasons that medical scientists have a myriad of obligations
regulating the way their studies can use human subjects. These abilities are
special because they are restricted from the public sphere and/or are held
secondarily by other agents.16
In addition, the special abilities of scientists are
important because of the magnitude of risk that dual-use research carries.17
Scientists also have the ability to first consider the consequences of their
research and are best placed to effect change. This is a practical consideration,
i.e., as their careers develop and scientists begin new studies, they are
usually the first individuals to decide what direction a project will take and
what materials and methods will be used. Therefore, they are best placed to implement any change to
their research, whether that is a change in methods, materials or the way the
results are released. The scientists who will be under the greatest obligation
are those who have the greatest ability to change the direction of the project. By
this I mean that no study occurs in a vacuum; there are multitudes of people
working even on small projects. However, there is a significant difference in
ability between the head scientist and everyone else. The top scientists on any
16. Restricted in the sense that an ordinary citizen would have to expend a great deal of
effort and/or break laws to achieve the same end that the scientist is capable of. "Held
secondarily by other agents" refers to abilities that other agents possess only
derivatively. A bird farmer in China might have access to H5N1, but without the methods to
manipulate it (provided by the scientist) there is not the same potential for misuse. These
agents always act secondarily to the scientist, because the scientist is responsible for
creating the research that might enable others.
17. See Figure 1.
project will generally have the most abilities (this also goes for special
knowledge and complicity). The special abilities of scientists (the use of
restricted materials and procedures, and the scientist’s placement to first
consider the consequences of studies), together with the global risk these
studies carry, are only one portion of what confers the scientist’s obligation;
special knowledge also plays an important role.
The special knowledge of a scientist is a very important consideration for
the existence of role obligations. I make this claim based on two justifications:
first, the special knowledge of a scientist enables a special degree of foresight
into the consequences; and second, that knowledge required dedication and
forethought to obtain. If a scientist is working on novel research then it is safe
to assume that the scientist is an expert in his field. By expert I mean to capture
the difference in knowledge between the project’s head scientist and an ordinary
individual. The head scientist may be conducting a study to obtain a specific
type of knowledge, but is still much more knowledgeable about the entirety of
what is involved when compared with an ordinary individual.18
The scientist is only able to become an expert through dedicated work over a
long period of time, obtaining the knowledge necessary to successfully run
experiments. That
expertise places the scientist in a unique position to work with special abilities.
More importantly, it allows him to have the knowledge necessary to foresee the
consequences of dual-use research. That knowledge is not always available to a
non-expert, and places the scientist in a unique position to determine the
consequences. That is because in order to design a dual-use study, one must
exercise forethought and develop a procedure. In thinking through the
procedure and the desired result, one would also be led to consider the
consequences. The scientist’s foresight is more likely to be accurate, because of
the special knowledge, and more likely to occur first because of the scientist’s
position.
An objection could be raised here: if an agent was ignorant of the consequences,
then that agent should not be held responsible (or should be held responsible to
a diminished degree) because he lacked sufficient foresight. In some instances
this objection has merit; however, there are limits to when this objection can be
used in the case of the scientist. If the
scientist is an expert in his field, but fails to consider the consequences of his
research project then he is either culpably ignorant, or the consequences are
unforeseeable. Culpable ignorance describes a case where an agent ought to
18. This does not mean that ordinary individuals cannot possess this knowledge, but I
doubt many possess it to the same degree as the scientist, and further, they do not possess
the other features necessary for the obligation.
have inquired or exercised effort to remove their ignorance. If the scientist does
not inquire as to how his results will be used, or fails to consider how they
might be, then it may not be controversial to hold him responsible on the
grounds of culpable ignorance. This would be analogous to a car rental agency
failing to inquire if a client had a valid license before that client caused a major
accident. The car rental agency has an obligation to check this, and it is
foreseeable that an unlicensed driver could cause a serious accident.
If the agency were to claim that it was ignorant of the driver not having a
license, the company would still be liable as that is part of their role. The fact
that they simply failed to inquire, resulting in foreseeable harms, does not
excuse their responsibility. Comparatively, if the scientist does not attempt to
consider the consequences, and the consequences are easily foreseeable then the
scientist is culpably ignorant.
If the agent was ignorant of the consequences because the consequences
were unforeseeable, then that agent may be excused from responsibility. For
example, it would be ludicrous to hold Alexander Fleming or Louis Pasteur
responsible for the rise of super-resistant strains of bacteria resulting from the
overuse of antibiotics. Therefore, unforeseeable consequences would excuse an
agent’s responsibility, but for the purposes of this paper I am not concerned
with unforeseeable consequences.
Another objection to my view might criticize the importance that
I place on a scientist’s foresight. This objection falls under the doctrine of
double effect. Generally, the doctrine seeks to differentiate an action that is
impermissible when intended but permissible when foreseen. One example
where the two are distinctly separable is in the instance of self-defense that ends
in the death of the attacker.xxii
Under normal circumstances, assume that an agent intending the death of a
person acts wrongly; yet in the circumstances of self-defense, when acting in a
manner where the outcome is the same (the death of the attacker), the agent is
absolved of wrongdoing. Of course, this example relies on
further moral principles, such as the permissibility to defend one’s life at the
cost of another, but we can identify that there is a potential for judging the
events differently. That potential difference resides in the intention of the
agent. Since the agent’s intention in self-defense is to
deter an attacker and not kill him, the foreseeable consequence of being forced
to kill that attacker lessens the responsibility of the agent.
The objection in terms of the scientist proposes that since the scientist
did not intend the harms, and since another agent supplied the wrongful
action, the scientist cannot be judged responsible, especially considering that
without the intervening agent the act would not have occurred. This objection,
however, does not take into account just how unique the scientist’s position of
foresight is. Consider the following analogous futuristic self-defense
case:
case:
The Aggressive Neighbor: You receive word from a
friend that a neighbor who you had an altercation
with is looking for you. You know from previous
experience that this man is prone to violence. Before
you walk home, you could make a police report that
would delay him, you could take another route
home to avoid him altogether, or you could take your
usual route home with the foreseeable knowledge
that the neighbor is looking for you and it could
potentially become violent.
In the original self-defense case (one without foresight or other options),
the difference between intending the man’s death and defending oneself with
the foreseeable knowledge that he will die makes a greater moral difference.
This is because of a lack of foresight about the situation and a lack of available
options. If the man from the neighbor case chooses not to avoid his attacker, the
judgment of the situation might differ. I am only claiming here that these cases
are not identical in degree of foreknowledge. That difference in degree has the
potential to change the judgment: the higher the degree of foreknowledge, the
more potential there is for accountability. If the scientist is one hundred percent
certain that a particular group will use a study, then the scientist could be held
responsible to a higher degree than if he is one percent certain.
Therefore, the objection in terms of foresight is reliant on degrees. This is a
corollary of the degree of foreseeable misuse that projects possess: the higher
the degree of foresight, the higher the degree of certainty, and the higher the
degree of responsibility for the scientist. The special abilities and knowledge
of the scientist begin to shape the motivation for a scientist’s obligations, but
the unique complicity of scientists in cases of dual-use research provides the
most compelling reason to accept my claim.
A Scientist’s Complicity
When an agent acts, his role can fit into one of two categories: principal or
accomplice. The principal brings about the outcome through direct action;
there is no intervening action by another agent. For my purposes, leave aside
the discussion of principal wrongdoers, because the scientist is never a
principal in the misuse of dual-use research; he is an accomplice. A principal
directly brings about the outcome, whereas an accomplice assists in the
principal’s action. There are a number of ways that an accomplice can assist a
principal. For example, the scientist who mutated an H5N1 virus would assist a
bioterrorist by enabling the construction of a weapon.
This assistance by an accomplice is referred to technically as complicity.
To be complicit in wrongdoing is to participate, to some degree, in a
principal’s wrongdoing. However, complicity defined in this way may not always
carry enough weight to determine blame. To be clear, an agent can be complicit
but not blameworthy. By blameworthy I do not mean to evoke images of
retribution. Blameworthy describes an agent whose complicity warrants moral
consideration: other agents are disposed to judge the complicit agent as
blameworthy, and thus as having committed a wrong.19 If the
agent’s complicity does not meet the criteria for blame, then that agent is
excused of wrongdoing. The difference between complicity and
blameworthiness can be difficult to determine in specific cases. What generally
appears to separate them in these cases of dual-use research is the triviality of
an agent’s complicity below the threshold of blameworthiness.20
The Triviality principle is used to describe factors that can be seen as
negligible when determining the appropriate course of action. Consider the
difference in complicity and blameworthiness of the following individuals in a
bank robbery. There are the men inside robbing the bank (the principals), a
getaway driver parked out front, a teller who hands over the money, and a
man who unknowingly (and unnecessarily) opens the bank’s door from the
outside as the robbers escape. Each of the three individuals who assist the bank
robbers is an accomplice to the robbery; they are complicit through their causal
contributions, but the getaway driver seems to be the only one who warrants
blame. The high degree of his assistance, and his intention of continued
assistance, lead to the conclusion that he is the most complicit and the most
blameworthy. Conversely, the man who unnecessarily opens the door for the
robbers is complicit to a trivial degree. He shares no intentions with the robbers,
possesses no knowledge of the events he is walking into and provides a trivial
degree of causal contribution. Therefore, he is not blameworthy. In cases where
contributions are trivial, it won’t be controversial to absolve the scientist of
blame.21
19 It is up to particular groups how to handle the consequences of moral judgments.
20 I use threshold in a general sense; I do not mean to imply that there is a fine line or a
range. Rather, there is a place or time of judgment, the rules of which I am not interested
in determining, where above the threshold the agent is considered blameworthy and
below it the agent is only considered complicit.
21 This is analogous to the discussion of a new type of cancer drug. The novelty is trivial
and therefore the scientist is complicit but not blameworthy.
Complicity is a function of an agent’s mind-state and expected causal
contribution, both of which come in varying degrees.xxiii Simply stated, an
agent’s expected causal contribution is the degree to which that agent
reasonably expects his actions to contribute to the outcome.22 Mind-state
captures both the notions of foresight and intention. Per the constraints, the
scientist pursuing dual-use research never intends the harms but does foresee
them (or is in a position where he is expected to). The moral importance of the
scientist’s special foresight has already been examined above; now I will
consider the unique causal contribution scientists can provide.
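The degree talk above can be made concrete with a toy numerical model. The function, its weighting, and the sample numbers below are all my illustrative assumptions, not claims made by the argument itself:

```python
def complicity_score(foresight, intention, expected_contribution):
    """Toy model: complicity as a function of mind-state and expected
    causal contribution, each expressed as a degree between 0.0 and 1.0.

    Mind-state combines foresight and intention; intending harm counts
    at least as heavily as merely foreseeing it (an assumption made
    here purely for illustration).
    """
    mind_state = max(foresight, intention)
    return mind_state * expected_contribution

# Per the constraints, the scientist never intends harm (intention = 0.0)
# but may foresee it to varying degrees.
near_certain = complicity_score(foresight=1.0, intention=0.0, expected_contribution=0.8)
faint = complicity_score(foresight=0.01, intention=0.0, expected_contribution=0.8)
assert near_certain > faint  # higher foreknowledge yields higher complicity
```

On this sketch a scientist who is one hundred percent certain of misuse scores one hundred times higher than one who is one percent certain, mirroring the claim that responsibility scales with degrees of foresight and contribution.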
When considering how agents can be judged in the bank robbery
example, it becomes easier to identify how causal contribution can differ within
the notion of complicity. However, none of the agents in the bank robbery
parallel the scientist’s causal contribution. None of them exhibit the same
degree of complicity when compared with the scientist and the bioterrorist. This
is because the causal contribution a scientist can provide is markedly unique
among the agents involved in the misuse of dual-use research. Recall two of the
conditions of dual-use research: the novel nature of the results and the requisite
expertise needed to obtain them. Under these conditions the results are
not easily obtainable by those lacking the requisite abilities and the precise level
of foresight needed to bring them about. The types of results themselves
also provide new consequences that may not be foreseeable to those unfamiliar
with that particular project or lacking sufficient knowledge. Therefore, the
causal contribution provided by the scientist is not easily replaceable and
markedly unique, in the same way that his foresight is. To further examine the
importance of the scientist’s causal contribution consider the following example
where the agent has special abilities, special knowledge (that required
forethought to obtain) as well as a unique causal role.
The Chef and the Assassin: A chef at a gourmet
restaurant in Washington, DC has perfected working
with a special ingredient that is highly poisonous. He
alone has the knowledge to nullify the poisonous
effects of the ingredient, but through those same
methods, the poisonous smell (that would have
22 I use ‘reasonably expects’ to capture the following intuition. If a scientist were to take
the necessary steps to fulfill his role obligation, and he found that publishing does not
violate the obligation to safeguard, which was independently confirmed by other
scientists, but his research still enabled a catastrophe, then he is complicit only to a very
small degree. In contrast, if a scientist ‘reasonably expects’ a study to contribute to a
high degree to a terrorist plot, and chooses to publish, he can still be held complicit to a
high degree even if no harms happen as a result.
allowed someone to detect it) can be masked. The
chef also spends his downtime on the deep web
tracking the exploits of political assassins. During his
extensive research he learns that an assassin
renowned for using poison is moving to Washington.
If he were to cook the special ingredient, the assassin
might be able to reverse-engineer it and utilize the
poison in a particularly difficult contract.
First, we can observe that the chef has zero intention of enabling the
assassin; it would be better if the assassin left the city so that the chef could
avoid this consideration altogether. Second, the chef can reasonably foresee
that the assassin could learn of his technique and choose to attempt to use it,
but the assassin will likely find another method eventually. However, there does
seem to be something special about the case above; the intuitive claim is that
the chef should resist serving the assassin. Perhaps to fully make a judgment
about the case, there need to be more details. If the assassin were to have a
better chance at evading capture using the chef’s ingredient, the chef may have
more cause to prevent the assassin from learning the method. If the ingredient
were as common as rat poison from a convenience store, then that case seems
less compelling than the case where the assassin uses the chef’s special method.
While still a unique causal contribution, the chef’s new method does not
possess some of the special properties of the original ingredient.23
The lack of
novelty makes the causal contribution less special. The unique causal role the
chef has is based on the chef’s special ability. The special ability is the unique
methodology of preparing the poison. In addition, the chef’s method leads to
unique harms. The poison, once prepared using the chef’s methodology, is
markedly unique among the poisons the assassin could use. Thus, the harms
brought about are also markedly unique. The chef also has special knowledge
about the assassin, conferring the direction of the duty, namely not to enable
the assassin. In addition, the unique causal role of the chef is also difficult to
replace. He is the only one available with that particular methodology and the
only one who could effect change. Combined with the foreknowledge of its
misuse, there is a more compelling argument to hold the chef responsible.
The scientist, like the chef, has zero intention of enabling any terrorist
and has the foreknowledge of his project’s misuse. There are contextual
differences between the methods of misuse, but all things considered, the chef
and scientist possess unique knowledge of the potential harms. The scientist
enables the terrorist to complete an act that would have been impossible
23 This is an example of how differences in degree and kind affect judgments.
without his contribution. The means by which the terrorist achieves his end is
unique to the causal contribution of the scientist. The terrorist would probably
find another means of carrying out his end; however, if he were to utilize the
method of the scientist, the situation changes. The causal contribution of the
scientist enables a harmful action that was previously impossible. Since the
scientist has particular foreknowledge of that misuse, and the ability to
safeguard against it, the scientist has a responsibility not to causally
contribute. Additionally, the position of the scientist is not one in which
society expects harms to be enabled; it is primarily a beneficial endeavor.
Therefore, when scientists place society in harm’s way by failing to recognize
the unique role they play (causal contribution, foresight) and the importance of
their position (with regard to how quickly indeterminate harms can come
about), they can be judged as complicit in the actions of terrorists and held
blameworthy for violating their role obligation.
Objections
Until now I have left out an important underlying assumption of my
claim: the role obligation I have proposed could make the state of affairs worse.
For example, in the case of the influenza studies, it seems that for now no harms
have come about as a result of them being published. An opponent of my view
might suggest that, by following the principle I have proposed, society would
have missed out on beneficial knowledge. This is in part due to the remaining
ambiguity in my use of ‘safeguard’. The obligation of safeguarding dual-use
research may not necessarily mean never publishing. In many instances the
appropriate action might be to restrict only the methods or delay the study until
other precautionary measures can be taken.24
However, in some instances, the
obligation will demand that scientists make the state of affairs worse by
withholding their results. Such an instance would be a case like the
following: publishing would benefit society more than abandoning the role
obligation would harm it, even accounting for the potential harms of dual-use
research; refusing to publish, by contrast, would harm society more than
following the role obligation and mitigating the harms of dual-use research
would benefit it. In the former case, by abandoning the role obligation, the
scientist could make the state of affairs better. In the latter case, by following
role obligations, a scientist would make the state of affairs worse. This does not
mean that the harms from dual-use research cannot occur in both cases; it only
means that at the time of publication the state of affairs will be worse if the
scientist follows his role obligation. If those
24 The appropriate action will vary from study to study. What is appropriate to safeguard
one study may have no effect on another type of dual-use study.
potential harms are actualized, this does not alter the judgment of the state of
affairs with regard to the scientist.25 Therefore, scientists are under an
obligation to safeguard even though the state of affairs could have been, by
comparison, better.26
The types of cases where this might occur generally have
countervailing reasons that affect the context of the scientist’s role obligations.
The countervailing reasons that can influence the scientist can take the
form of social pressure (such as pressure from employers), professional pressure
(from other scientists) and even societal pressure (for a new technology, or
against one). The reasons an agent himself can have to act against role
obligations are varied; they can range from needing money to support a drug
habit to needing money for medical treatment to prevent the death of a child.27
These reasons can affect the judgment of the scientist in both positive and
negative ways. The scientist who fails his obligation to support a drug habit may
face harsher judgment than the scientist supporting his dying child.
Regardless of the reasoning, scientists are under greater pressure to refuse
countervailing reasons as a justification for violating their role obligation. I
make this claim because the role obligation to safeguard society from dual-use
research mimics the ideals of the Solzhenitsyn Principle.
As Solzhenitsyn said in his Nobel Lecture, “And the simple step of a
courageous man is not to partake in falsehood, not to support false actions! Let
THAT enter the world, let it even reign in the world - but not with my help.”xxiv
Solzhenitsyn’s remarks break morality down into two parts: the difference one
makes through direct causal influence (acting as a principal) and the difference
one makes through the causal influence of someone else (acting as an accomplice,
i.e., being complicit).xxv The Solzhenitsyn Principle calls for a prohibition of
wrongdoing, not only as a principal, but also as an accomplice. Typically this is
thought of as an absolutist prohibition, but I do not wish to propose it as such.
Solzhenitsyn, a powerfully willed man who spent a long stretch of his life in a
Gulag for being a dissident, might disagree. However, it cannot be the case that
25 Recall our judgment of causal contribution relies on expected causal contribution,
which is determined at the time of complicity, not after the fact. Judging after the fact
leads to the implausible scenario of calling long-dead individuals complicit in the distant
misuse of their research.
26 ‘By comparison’ here denotes that when a scientist follows his role obligation and
safeguards society, that is not a ‘bad’ state of affairs. It is only when the state of affairs
of safeguarding is compared with the state of affairs of publishing that it becomes the
‘worse’ of the two options.
27 Similar to footnote 3: there can also be various personal reasons, such as deeply held
moral or religious beliefs, etc.
a scientist should refuse to publish under any circumstance.28
Rather, I use the
Solzhenitsyn Principle to evoke the idea that a scientist has a powerful
responsibility not to be complicit.
The principle can best be seen as a responsibility to resist countervailing
reasons, at least with regard to complicity. More specifically, the principle
demands that an agent resist in instances where the agent’s complicity would
benefit him. This is parallel to the case of the scientist who would bring about a
better state of affairs if he were to violate his role obligation.29 By following
the obligation to safeguard, the scientist is obligated to bring about a worse
state of affairs. The
connection between the scientist and the Solzhenitsyn Principle can be seen in
the phrasing of ‘courageous man’. If an agent is in a position to be courageous,
that agent is in a position to act rightly or wrongly. An agent in that type of
instance can be said to have a unique causal position (to bring about a better or
worse state of affairs). In the case of the scientist that position is a choice
between following role obligations or not. Further, another agent cannot easily
fill that causal position. The scientist also has the unique knowledge to foresee
misuse, and the unique ability and position to deter it. Therefore, no other
individual is better placed to carry out the scientist’s responsibility or to act
courageously. Further, no other individual may be able to have any impact. Once
the results of a dual-use study are released, it would be very difficult, if not
impossible, to recall that information. The more unique the causal position, and
the more complicit an agent is, the greater the obligation to resist side-constraints.
The Solzhenitsyn Principle not only requires that scientists resist
countervailing reasons to a higher degree, it also requires that they abstain from
participation when their causal contribution would make no difference to the
overall outcome. These types of cases are often grouped under the argument
for no difference; the objection is often stated as, “If I don’t, someone else will.”
In these cases there are multiple agents contributing to an outcome, and that
outcome is not dependent on any single agent providing a unique causal
contribution. Therefore, the objection asks: if the causal role of an agent is
important for determining blame, then in cases where that causal role is
diminished, can those individuals still be blameworthy?
These types of cases need more details. First, we can identify that the
uniqueness of a scientist’s causal contribution is a matter of degree. If the
28 The immediate threat of death could function as an instance where we could not hold
the scientist responsible for failing in his role obligation, especially when the harms of
publishing are minimal.
29 The better state of affairs could be a higher degree of benefit to society, or the positive
repercussions for the scientist after publishing.
scientist is certain another researcher will publish, his causal contribution is a
function of how similar their causal contributions will be. If each of the
scientists has found the exact same conclusion using the exact same method
then their causal contributions are, all things considered, equal in degree. If the
two scientists are three weeks apart, there is the potential for more causal
contribution by publishing first. The difference of three weeks could be
significant enough to warrant a greater degree of complicity. This type of
distinction will only apply to certain cases, but in those instances the difference
in degree has an effect on judgment. The more interesting and problematic case
is when the causal contributions are identical in degree.
In a case where multiple agents causally contribute to the same degree, it
is difficult to explain why, if an agent were certain of the outcome (someone
would take the same action regardless), it is worse for that agent to be the
one who brings it about. I concede that the causal role of the scientist is no
longer special to that specific scientist in these cases, but there are other reasons
to doubt the claim that no difference outweighs a scientist’s role obligations.
Would the scientist’s publishing actually make no difference to the
outcome? Are two worlds, one in which one scientist publishes and the other
where two scientists publish (using the excuse of no difference), actually
equivalent? With regard to the outcome of publishing the dual-use study, the
two worlds are identical. Both worlds are exposed to the same risk of misuse.
However, the world in which two scientists fail their role obligations is worse
than a world where only one scientist fails.30 This is because the unique position
of the scientist is more than causal influence and special knowledge; it is a
position in which an individual can make an impactful difference. By this I mean
to appeal to more than the notion of making the world a better state of affairs by
safeguarding it from harm or benefiting it through research. I mean to capture
the social consequences implicit in the perception of the role of the scientist.
These societal influences give weight to the idea that the argument for no
difference may not apply in the case of the scientist failing his role obligation.
The work of scientists influences policy, public opinion and the work of
other scientists. The special knowledge of the position also gives the perception
of authority. That is because when a scientist speaks about science he knows
what he is talking about, or at least the public perceives that the scientist does.
30 Even under a moral theory that places all of the weight on outcomes, so that neither
individual could be held blameworthy because of their identical contributions, a world
with the outcome of two scientists failing in their role obligation has a worse state of
affairs than a world where only one scientist fails, even though we cannot assign blame to
a single agent.
For the most part the authority in the position of the scientist can be considered
to come from the position’s special knowledge and the choice to become a
scientist.31
The public lacks this special knowledge but has a perception of the
complexity of the information. The public also has a perception about the path
to becoming a well-respected scientist, and those opinions give the position
authority. Under-explained, callous remarks or bad research can lead to terrible
consequences. The South African policy of refusing AIDS medication resulted in
the preventable (untreated) deaths of hundreds of thousands of people, and was
undertaken only after being influenced by a molecular biologist, Peter
Duesberg.xxvi
The difference between the scientist’s knowledge and the public’s creates
an area of trust between the two positions. The public trusts that what scientists
say is scientifically accurate to the best of their knowledge. The public is so
easily influenced because of a lack of intimate detailed knowledge of methods
and purpose, as well as a generally negative attitude towards new technologies
(such as animal cloning).xxvii
If the public has a disposition to be negative and
lacks knowledge, then what scientists say has an even greater impact, as
scientists are seen as authorities. Trust is vital to the profession, for its success and our
technological success. Without trust medical scientists cannot recruit
participants for studies of new drugs, etc. This would slow medical advances to
what we could learn by vivisection and computer modeling. Since the FDA
requires clinical trials on humans, a lack of trust would seriously inhibit the
growth of scientific knowledge.32
In addition to the influence the position of a scientist has with regard to
the general public, scientists also have a great deal of influence amongst other
scientists. As with other positions that possess authority and role obligations,
scientists determine some professional codes of conduct amongst themselves.33
These individuals come together to determine standards of research, methods
and other issues in order to determine the fit of the profession in society. Thus,
the opinions of members within the community will always be more influential
than the opinions of non-members. Members are of the same mind as other
members, or at least are well versed in similar issues. In addition, membership
becomes a mark of authority, without which one’s influence would not have the
31 Other factors may influence this, such as celebrity status.
32 This assumes that lack of trust would have a negative correlation with research
participation, which I do not believe would happen instantly if one researcher were
discredited, but under a system that operated on a lack of trust, I believe this is a possible
outcome.
33 There are other codes of conduct, such as those that govern the treatment and use of
human subjects.
same effect. Scientists who choose to disregard role obligations might influence
others to do the same, leading to more scientists disregarding their role
obligations. These types of side effects are known as spirals: side effects
where the quantity of the same action has a particular influence on
people.xxviii
An example of these types of effects can be seen in the stock market.
When a company reports a bad quarter, fires a CEO, or undergoes some type of
scandal, the perception of the company becomes that it is risky, and that risk is
reflected in the stock price. These pieces of information may have little to no
actual effect on the worth of the company. However, when these stories break
there is often a selling off of stocks by risk-averse investors. When those sell-offs
occur the stock price continues to decline, leading more risk-averse
investors (of a lower degree than the first) to sell their shares, feeding the
perception of more risk, a trend that can spiral with disastrous results. In the
case of the scientist, the influence of one individual publishing a study
concerning influenza may influence several other labs to pursue and publish
studies of the same kind. If each of these studies makes a small discovery
concerning influenza, this will generate more interest and more research into
the field. Each of these new studies carries with it the potential for misuse,
but as a result of the increased interest, every scientist plays an even more
diminished causal role. The more scientists take up the pursuit of a particular
piece of research, the more risk society comes under. Despite the lack of causal
contribution in these cases, the position of the scientist still warrants a high
degree of refusal when the effects of negative spirals are present. Of course,
spirals can also precipitate in a positive direction, so the most difficult case
appears to occur with multiple identical causal contributions and no apparent
side effects or spirals.
In cases where the causal contribution of agents is identical and the situation is
such that their contributions make no degree of difference with regard to side
effects, the special position of the scientist has been diminished to a trivial
degree. Therefore, in cases where that particular degree of causal
contribution was necessary for blameworthiness, what reason does the scientist
have to remain under the obligation?34
34 Being trivially complicit may not be enough to motivate the agent to follow a role
obligation or hold him accountable. For example, every individual (that actively
participates in society) is in some way complicit to a trivial degree in some type of
wrongful behavior. The world is too interconnected for this to be otherwise. Thus, on
reflecting on the trivial degree to which my driving contributes to global warming I
might be unmotivated to opt for public transportation every day (assuming that is the
In following the Solzhenitsyn Principle the scientist would still be
required to abstain from publishing; this can be observed in the phrase ‘let it
reign, but not with my help’. However, the scientist does not seem to have a
strong obligation to follow it if he is not considered blameworthy. For example,
imagine a terrorist group employed scientists to research influenza at the same
time the labs in 2012 were. Next, imagine that the scientists are certain the
terrorist group has results identical to their own (identical causal
contributions). Further, we can assume that the terrorist group would receive
one hundred percent of the attention for publishing (no negative side effects for
the scientists) and the scientists would benefit from being the first to publish. In
this situation it is still desirable to explain why the scientists ought to adhere to
their role obligation rather than accept the argument for no difference and act in
place of the terrorists. In this instance the judgment could be based on what
influence the scientists ought to have, namely opposite the terrorists. However,
with no unique causal contribution and a lack of negative side effects I doubt
there is any sufficient moral reasoning to condemn their publishing on the basis
of acting opposite the principal’s ideology.
It would seem that no scientist could be held blameworthy under the
argument for no difference in this situation. If no one can be complicit in the
outcome in a manner sufficient for blameworthiness, then no individual can be
held responsible. Those individuals have failed in their role obligation, but that
complicity can be considered trivial. However, while the causal contribution may
no longer be unique to any particular individual, it is unique to a group of
individuals in the same special position. One scientist may not hold a unique
causal contribution, but a group of scientists does. If there are multiple agents
ready and willing to be complicit in an identical manner, then those agents can
be said to share the same unique causal position. It does not matter which agent
acts towards the outcome because the outcome will be the same.
Each agent who has the ability to contribute the causal amount necessary
to achieve an outcome can be said to share the same unique causal
position. No individual can be said to have a unique causal role, but as a group
they share one. Each of the seven research teams that were poised
to publish results about influenza (assuming they are causally identical) holds
one-seventh of the unique causal position. I propose that while any individual
might not be held fully responsible when invoking the argument for no
difference, that individual can be held responsible to a degree equal to the
complicity that individual contributed to the group. If the group is held
blameworthy, then each agent receives a fraction of the blame. The
practicalities of how this claim could be implemented are beyond the scope of
this argument, but the motivation for holding these individuals
responsible as a group can be justified. This is only the case, however, if our
judgment is based solely on outcomes and there are no countervailing reasons to
affect it.
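The arithmetic behind this proportional claim can be given a brief sketch. What follows is a hypothetical formalization of my own, not part of the thesis's argument; the function name, the equal weights, and the choice to normalize the group's blame to a total of one are all illustrative assumptions:

```python
def shared_blame(contributions, total_blame=1.0):
    """Apportion a fixed quantum of group blame among agents in
    proportion to each agent's causal contribution to the shared
    unique causal position (an illustrative sketch, not a claim
    about how blame must actually be measured)."""
    total = sum(contributions.values())
    if total == 0:
        # No one contributed anything, so no one bears any blame.
        return {agent: 0.0 for agent in contributions}
    return {agent: total_blame * c / total
            for agent, c in contributions.items()}

# Seven causally identical research teams, as in the H5N1 example:
teams = {f"team_{i}": 1.0 for i in range(1, 8)}
blame = shared_blame(teams)
# Each of the seven teams bears one-seventh of the group's blame,
# and the fractions sum back to the whole.
```

On this sketch, diluting the pool of willing agents shrinks each individual share without diminishing the blame borne by the group as a whole, which is the intuition the paragraph above defends.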
Conclusion
Due to the special position of scientists, comprising special abilities, special
knowledge and a unique causal contribution, scientists have an obligation to
safeguard society from the consequences of their dual-use research. This
obligation persists despite strong countervailing reasons, and despite the
availability, in most cases, of other agents who could produce the identical outcome.
Bibliography
i. Nick Bostrom, “Existential Risks,” Journal of Evolution and Technology 9, no. 1 (2002): 1, http://www.jetpress.org/volume9/risks.html.
ii. Stephen Kern, The Culture of Time and Space, 1880–1918: With a New Preface (Harvard University Press, 2003).
iii. Ibid., 275.
iv. Bostrom, “Existential Risks,” 1.
v. Bostrom, “Existential Risks.”
vi. Ibid., 3.
vii. Carl F. Cranor, “Toward Understanding Aspects of the Precautionary Principle,” Journal of Medicine and Philosophy 29, no. 3 (January 1, 2004): 259–79, doi:10.1080/03605310490500491.
viii. William K. Hallman and Sarah C. Condry, Public Opinion and Media Coverage of Animal Cloning, FPI Research Report (Rutgers University: Food Policy Institute, n.d.); Matthew C. Nisbet, “Public Opinion About Stem Cell Research and Human Cloning,” Public Opinion Quarterly 68, no. 1 (March 1, 2004): 131–54, doi:10.1093/poq/nfh009.
ix. Bostrom, “Existential Risks,” 2.
x. Jess Benhabib, Alberto Bisin, and Andrew Schotter, “Hyperbolic Discounting: An Experimental Analysis,” New York University Department of Economics Working Paper, 2004, http://teaching.ust.hk/~bee/papers/Chew/04Benhabib-Bisin-Schotter-Discounting.pdf.
xi. Toni Clarke, “FDA Could Approve Drugs for New Uses on Less Data: Draft Law,” Reuters India, accessed April 30, 2015, http://in.reuters.com/article/2015/04/29/us-u-s-health-lawmakers-bills-idINKBN0NK2FA20150429.
xii. World Health Organization, Influenza at the Human-Animal Interface: Summary and Assessment as of 26 January 2015 (WHO, January 26, 2015).
xiii. “Influenza A Virus Subtype H5N1,” Wikipedia, the Free Encyclopedia, January 10, 2015, http://en.wikipedia.org/w/index.php?title=Influenza_A_virus_subtype_H5N1&oldid=641936712.
xiv. Donald G. McNeil Jr., “H5N1 Bird Flu Research That Stoked Fears Is Published,” The New York Times, June 21, 2012, sec. Health, http://www.nytimes.com/2012/06/22/health/h5n1-bird-flu-research-that-stoked-fears-is-published.html.
xv. Ibid.
xvi. Ibid.
xvii. Ibid.
xviii. Francis Bacon, New Atlantis (Clarendon Press, 1915).
xix. Paul Sieghart et al., “The Social Obligations of the Scientist,” The Hastings Center Studies 1, no. 2 (1973): 7, doi:10.2307/3527509.
xx. Ibid., 10.
xxi. Ibid., 11.
xxii. Alison McIntyre, “Doctrine of Double Effect,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Fall 2011, http://plato.stanford.edu/archives/fall2011/entries/double-effect/.
xxiii. Chiara Lepora and Joseph Millum, “The Tortured Patient: A Medical Dilemma,” Hastings Center Report 41, no. 3 (2011): 40, doi:10.1353/hcr.2011.0064.
xxiv. Alexander Solzhenitsyn, “Nobel Lecture,” 1970, http://www.nobelprize.org/nobel_prizes/literature/laureates/1970/solzhenitsyn-lecture.html.
xxv. John Gardner, “Complicity and Causality,” Criminal Law and Philosophy 1, no. 2 (May 1, 2007): 2, doi:10.1007/s11572-006-9018-6.
xxvi. Sarah Boseley, “Mbeki Aids Denial ‘Caused 300,000 Deaths,’” The Guardian, accessed April 13, 2015, http://www.theguardian.com/world/2008/nov/26/aids-south-africa.
xxvii. Hallman and Condry, Public Opinion and Media Coverage of Animal Cloning.
xxviii. Jonathan Glover and M. J. Scott-Taggart, “It Makes No Difference Whether or Not I Do It,” Proceedings of the Aristotelian Society, Supplementary Volumes 49 (1975): 171–209.

Mais conteúdo relacionado

Mais procurados

GVN Business Leadership Council News
GVN Business Leadership Council NewsGVN Business Leadership Council News
GVN Business Leadership Council NewsBipin Thomas
 
Towards a malaria-free world - Background information
Towards a malaria-free world - Background informationTowards a malaria-free world - Background information
Towards a malaria-free world - Background informationXplore Health
 
The 2014 MDigitalLIfe Social Oncology Project Report
The 2014 MDigitalLIfe Social Oncology Project ReportThe 2014 MDigitalLIfe Social Oncology Project Report
The 2014 MDigitalLIfe Social Oncology Project ReportW2O Group
 
Behavan00059 0057
Behavan00059 0057Behavan00059 0057
Behavan00059 0057Liana Vale
 
Elsi in genome studies
Elsi in genome studiesElsi in genome studies
Elsi in genome studiesSamruddhiKunte
 
Use of Social Media for your Journal
Use of Social Media for your JournalUse of Social Media for your Journal
Use of Social Media for your JournalIris Thiele Isip-Tan
 
Proactive COVID-19 testing to mitigate spread
Proactive COVID-19 testing to mitigate spreadProactive COVID-19 testing to mitigate spread
Proactive COVID-19 testing to mitigate spreadCarl Bergstrom
 

Mais procurados (12)

GVN Business Leadership Council News
GVN Business Leadership Council NewsGVN Business Leadership Council News
GVN Business Leadership Council News
 
Towards a malaria-free world - Background information
Towards a malaria-free world - Background informationTowards a malaria-free world - Background information
Towards a malaria-free world - Background information
 
The 2014 MDigitalLIfe Social Oncology Project Report
The 2014 MDigitalLIfe Social Oncology Project ReportThe 2014 MDigitalLIfe Social Oncology Project Report
The 2014 MDigitalLIfe Social Oncology Project Report
 
Behavan00059 0057
Behavan00059 0057Behavan00059 0057
Behavan00059 0057
 
Elsi in genome studies
Elsi in genome studiesElsi in genome studies
Elsi in genome studies
 
Tilting perspectives
Tilting perspectivesTilting perspectives
Tilting perspectives
 
Use of Social Media for your Journal
Use of Social Media for your JournalUse of Social Media for your Journal
Use of Social Media for your Journal
 
Categorie
CategorieCategorie
Categorie
 
Proactive COVID-19 testing to mitigate spread
Proactive COVID-19 testing to mitigate spreadProactive COVID-19 testing to mitigate spread
Proactive COVID-19 testing to mitigate spread
 
Epidemiology v1.2 unit 2
Epidemiology v1.2 unit 2Epidemiology v1.2 unit 2
Epidemiology v1.2 unit 2
 
Presentation_PhD
Presentation_PhDPresentation_PhD
Presentation_PhD
 
2010 06 17.enel’s scientific paper oxygen
2010 06 17.enel’s scientific paper oxygen2010 06 17.enel’s scientific paper oxygen
2010 06 17.enel’s scientific paper oxygen
 

Destaque

Diapositiva de tecnologia de la informacion y comunicacion i
Diapositiva de tecnologia de la informacion y comunicacion iDiapositiva de tecnologia de la informacion y comunicacion i
Diapositiva de tecnologia de la informacion y comunicacion ijazmin Rijo
 
Radio Information Doc4
Radio Information Doc4Radio Information Doc4
Radio Information Doc4Jim Kent
 
Escenario socioeconomico
Escenario socioeconomicoEscenario socioeconomico
Escenario socioeconomicoAnel Sosa
 
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkungan
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah LingkunganPertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkungan
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkunganraden pabelan
 
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS QuynhDaoFtu
 
Hazard analysuis food packaging manufacturing(2)
Hazard analysuis  food packaging manufacturing(2)Hazard analysuis  food packaging manufacturing(2)
Hazard analysuis food packaging manufacturing(2)Tom Dunn
 
Contoh Laporan Keuangan
Contoh Laporan KeuanganContoh Laporan Keuangan
Contoh Laporan KeuanganFerry Rinaldi
 
Food Safety and Flexible Packaging Material
Food Safety and Flexible Packaging MaterialFood Safety and Flexible Packaging Material
Food Safety and Flexible Packaging MaterialEric Tu
 
Overview on Azure Machine Learning
Overview on Azure Machine LearningOverview on Azure Machine Learning
Overview on Azure Machine LearningJames Serra
 
Should I move my database to the cloud?
Should I move my database to the cloud?Should I move my database to the cloud?
Should I move my database to the cloud?James Serra
 

Destaque (15)

Liberatore_Resume
Liberatore_ResumeLiberatore_Resume
Liberatore_Resume
 
2015 ATO Certs & Ops Specs
2015 ATO Certs & Ops Specs2015 ATO Certs & Ops Specs
2015 ATO Certs & Ops Specs
 
Unit iv functions
Unit  iv functionsUnit  iv functions
Unit iv functions
 
General ev
General evGeneral ev
General ev
 
Diapositiva de tecnologia de la informacion y comunicacion i
Diapositiva de tecnologia de la informacion y comunicacion iDiapositiva de tecnologia de la informacion y comunicacion i
Diapositiva de tecnologia de la informacion y comunicacion i
 
Radio Information Doc4
Radio Information Doc4Radio Information Doc4
Radio Information Doc4
 
Escenario socioeconomico
Escenario socioeconomicoEscenario socioeconomico
Escenario socioeconomico
 
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkungan
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah LingkunganPertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkungan
Pertamina Solusi Bahan Bakar Berkualitas dan Ramah Lingkungan
 
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS
EBOOK FOR RUNNING CPI - USING GOOGLE ADWORDS
 
Hazard analysuis food packaging manufacturing(2)
Hazard analysuis  food packaging manufacturing(2)Hazard analysuis  food packaging manufacturing(2)
Hazard analysuis food packaging manufacturing(2)
 
Contoh Laporan Keuangan
Contoh Laporan KeuanganContoh Laporan Keuangan
Contoh Laporan Keuangan
 
Contoh proposal pli
Contoh proposal pliContoh proposal pli
Contoh proposal pli
 
Food Safety and Flexible Packaging Material
Food Safety and Flexible Packaging MaterialFood Safety and Flexible Packaging Material
Food Safety and Flexible Packaging Material
 
Overview on Azure Machine Learning
Overview on Azure Machine LearningOverview on Azure Machine Learning
Overview on Azure Machine Learning
 
Should I move my database to the cloud?
Should I move my database to the cloud?Should I move my database to the cloud?
Should I move my database to the cloud?
 

Semelhante a Thesis (J. Kyle Treman)

Existential Risk Prevention as Global Priority
Existential Risk Prevention as Global PriorityExistential Risk Prevention as Global Priority
Existential Risk Prevention as Global PriorityKarlos Svoboda
 
Scientific Triage: How to make strategic choices about prioritizing basic sci...
Scientific Triage: How to make strategic choices about prioritizing basic sci...Scientific Triage: How to make strategic choices about prioritizing basic sci...
Scientific Triage: How to make strategic choices about prioritizing basic sci...nfefferman
 
The precautionary principle fragility and black swans from policy actions
The precautionary principle fragility and black swans from policy actionsThe precautionary principle fragility and black swans from policy actions
The precautionary principle fragility and black swans from policy actionsElsa von Licy
 
Type Essay Online. Type essay online - College Homework Help and Online Tutor...
Type Essay Online. Type essay online - College Homework Help and Online Tutor...Type Essay Online. Type essay online - College Homework Help and Online Tutor...
Type Essay Online. Type essay online - College Homework Help and Online Tutor...Theresa Paige
 
Law and Order: helping hospital and doctors recognize and manage risk
Law and Order: helping hospital and doctors recognize and manage riskLaw and Order: helping hospital and doctors recognize and manage risk
Law and Order: helping hospital and doctors recognize and manage riskSAMI EL JUNDI
 
A World United Against Infectious Diseases: Connecting Organizations for Regi...
A World United Against Infectious Diseases: Connecting Organizations for Regi...A World United Against Infectious Diseases: Connecting Organizations for Regi...
A World United Against Infectious Diseases: Connecting Organizations for Regi...The Rockefeller Foundation
 
Thinking About Risk - Denise Caruso - PICNIC '10
Thinking About Risk - Denise Caruso - PICNIC '10Thinking About Risk - Denise Caruso - PICNIC '10
Thinking About Risk - Denise Caruso - PICNIC '10PICNIC Festival
 
Black Hole Essay. The Universe of Black Holes - Free Essay Example PapersOwl...
Black Hole Essay. The Universe of Black Holes - Free Essay Example  PapersOwl...Black Hole Essay. The Universe of Black Holes - Free Essay Example  PapersOwl...
Black Hole Essay. The Universe of Black Holes - Free Essay Example PapersOwl...Shalonda Jefferson
 
Time Management Essay. Time Management Essay Ilustrasi
Time Management Essay. Time Management Essay  IlustrasiTime Management Essay. Time Management Essay  Ilustrasi
Time Management Essay. Time Management Essay IlustrasiJean Henderson
 
Epidemiology designs for clinical trials - Pubrica
Epidemiology designs for clinical trials - PubricaEpidemiology designs for clinical trials - Pubrica
Epidemiology designs for clinical trials - PubricaPubrica
 
reaction paper ppt on risk perception.ppt
reaction paper ppt on risk perception.pptreaction paper ppt on risk perception.ppt
reaction paper ppt on risk perception.pptDaniWondimeDers
 
Synthetic Biology beyond the bench: Biosafety & biosecurity
Synthetic Biology beyond the bench: Biosafety & biosecuritySynthetic Biology beyond the bench: Biosafety & biosecurity
Synthetic Biology beyond the bench: Biosafety & biosecurityDrew Endy
 
Technology Controversy essay
Technology Controversy essayTechnology Controversy essay
Technology Controversy essayJuan Lopez
 
Limitations of scientific activities
Limitations of scientific activitiesLimitations of scientific activities
Limitations of scientific activitiesAndrea Boggio
 
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...OECD Environment
 
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...Biocomplexity Institute of Virginia Tech
 

Semelhante a Thesis (J. Kyle Treman) (20)

Existential Risk Prevention as Global Priority
Existential Risk Prevention as Global PriorityExistential Risk Prevention as Global Priority
Existential Risk Prevention as Global Priority
 
Scientific Triage: How to make strategic choices about prioritizing basic sci...
Scientific Triage: How to make strategic choices about prioritizing basic sci...Scientific Triage: How to make strategic choices about prioritizing basic sci...
Scientific Triage: How to make strategic choices about prioritizing basic sci...
 
P2 lesson part two
P2 lesson part twoP2 lesson part two
P2 lesson part two
 
The precautionary principle fragility and black swans from policy actions
The precautionary principle fragility and black swans from policy actionsThe precautionary principle fragility and black swans from policy actions
The precautionary principle fragility and black swans from policy actions
 
Type Essay Online. Type essay online - College Homework Help and Online Tutor...
Type Essay Online. Type essay online - College Homework Help and Online Tutor...Type Essay Online. Type essay online - College Homework Help and Online Tutor...
Type Essay Online. Type essay online - College Homework Help and Online Tutor...
 
Law and Order: helping hospital and doctors recognize and manage risk
Law and Order: helping hospital and doctors recognize and manage riskLaw and Order: helping hospital and doctors recognize and manage risk
Law and Order: helping hospital and doctors recognize and manage risk
 
A World United Against Infectious Diseases: Connecting Organizations for Regi...
A World United Against Infectious Diseases: Connecting Organizations for Regi...A World United Against Infectious Diseases: Connecting Organizations for Regi...
A World United Against Infectious Diseases: Connecting Organizations for Regi...
 
Thinking About Risk - Denise Caruso - PICNIC '10
Thinking About Risk - Denise Caruso - PICNIC '10Thinking About Risk - Denise Caruso - PICNIC '10
Thinking About Risk - Denise Caruso - PICNIC '10
 
Black Hole Essay. The Universe of Black Holes - Free Essay Example PapersOwl...
Black Hole Essay. The Universe of Black Holes - Free Essay Example  PapersOwl...Black Hole Essay. The Universe of Black Holes - Free Essay Example  PapersOwl...
Black Hole Essay. The Universe of Black Holes - Free Essay Example PapersOwl...
 
Bucci_LIU_Thesis
Bucci_LIU_ThesisBucci_LIU_Thesis
Bucci_LIU_Thesis
 
Time Management Essay. Time Management Essay Ilustrasi
Time Management Essay. Time Management Essay  IlustrasiTime Management Essay. Time Management Essay  Ilustrasi
Time Management Essay. Time Management Essay Ilustrasi
 
Globaldarwin
GlobaldarwinGlobaldarwin
Globaldarwin
 
Epidemiology designs for clinical trials - Pubrica
Epidemiology designs for clinical trials - PubricaEpidemiology designs for clinical trials - Pubrica
Epidemiology designs for clinical trials - Pubrica
 
reaction paper ppt on risk perception.ppt
reaction paper ppt on risk perception.pptreaction paper ppt on risk perception.ppt
reaction paper ppt on risk perception.ppt
 
Synthetic Biology beyond the bench: Biosafety & biosecurity
Synthetic Biology beyond the bench: Biosafety & biosecuritySynthetic Biology beyond the bench: Biosafety & biosecurity
Synthetic Biology beyond the bench: Biosafety & biosecurity
 
Technology Controversy essay
Technology Controversy essayTechnology Controversy essay
Technology Controversy essay
 
Limitations of scientific activities
Limitations of scientific activitiesLimitations of scientific activities
Limitations of scientific activities
 
biofeedback-s14
biofeedback-s14biofeedback-s14
biofeedback-s14
 
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...
OECD Global Forum on the Environment dedicated to Per- and Polyfluoroalkyl Su...
 
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...
Computational Epidemiology tutorial featured at ACM Knowledge Discovery and D...
 

Thesis (J. Kyle Treman)

  • 1. Acknowledgments: I am most thankful to Regina Rini and Collin O’Neil, who offered extensive and thoughtful comments at various stages in my writing process. I would also like to thank Noushaba Rashid, Scott Briggs, Jennifer Treman and Isabella Knazeck for their comments, criticisms and support. Dual-Use Research: The Scientist’s Role J. Kyle Treman New York University Center for Bioethics Regina Rini, PhD Spring 2015 Abstract: More than any other time in history, humans possess a great deal of technological power. As this power increases, so to does the magnitude of its risk. This is due to a characteristic of certain research projects known as dual- use. Dual-use describes the ability to enable benefits and harms. Typically, the precautionary principle or a cost-benefit analysis is used to judge these situations. However, the lack of knowledge, a public bias towards emerging technology and a psychological tendency to favor short-term benefits over long- term benefits might make regulatory bodies ill-suited for these cases. To mitigate these risks society must look for the individuals most responsible and best placed to make decisions about dual-use research. The special position of a scientist is perfectly suited for this responsibility; the special knowledge, abilities and unique causal influence of the scientist all impart a role obligation to safeguard society from risks in dual-use research. That obligation persists despite strong countervailing reasons, and may require the scientist to withhold information from investors, the public as well as governments.
  • 2. 2 When a technology is used for harm, it is usually being used in a manner that it was not intended for. When these harms are brought about, the scientist who developed the technology is rarely held responsible. However, in certain situations the scientist can uniquely enable an individual willing to do harm. If a scientist published a study, and that study directly contributed to a bioterrorist attack, would you hold that scientist responsible? Would you want to? I suspect there is an opinion in our society that scientists are in some way responsible for their part in enabling harms through research. This judgment comes from what I believe the special position of a scientist entails and the magnitude of risk inherent in research today. In what follows, I defend the claim that a scientist has a special position that confers a strict role obligation to safeguard society from the potential misuse of his research, and this obligation persists despite strong countervailing reasons. Dual-Use Research In order to motivate my claim, these cases need more details. The research projects that I am concerned with all possess similar characteristics; the first is the property of dual-use. Dual-use describes the results of a research project, namely that they can enable either benefits or harms. For example, consider a scientist who discovered a way to excise portions of DNA without damage and implant them into other organisms. This study is aimed at providing more targeted therapies that can work at the genetic level. Through the same method used to derive benefits a bioterrorist could create a weaponized virus. The ambiguous use of dual-use research, often translates into our opinions about dual-use technology. We have intuitions that the project should be pursued for the benefits, but avoided for the harms. What makes these studies even more difficult to judge are the factors underlying the benefits of the study. 
That is, the benefits outweigh the harms to a meaningful degree and the scientists pursuing these projects have no intention of enabling the harms.1 For example, cancer drugs are highly poisonous, but that is necessary to be effective against cancer. One could easily predict that these highly poisonous ingredients could be used in a murder. However, the benefits to millions of cancer patients far outweigh the harms of potentially enabling a few murders, thus the drugs are researched and produced. Further, the scientists who research cancer drugs never intend them to be used for this purpose. They intend these drugs to improve the quality of life of patients with cancer. The 1 Meaningful degree is meant to capture the place where a society (or whatever group in a society is in charge of this determination) would deem a research project worth pursuing. All studies have harms that remain, but they are almost always outweighed by the benefits. Meaningful degree is meant to capture this idea.
  • 3. 3 intention of these drugs, in addition to the high weight of the benefits, makes this type of dual-use research worthwhile. However, not all dual-use studies are so easily imagined, or ubiquitous. Often times these projects represent a novel departure from scientific dogma, enabling a new set of benefits and harms. The second characteristic of dual-use research is the degree of novelty their results have. These research results may be novel in degree or kind. For example, a new chemotherapy drug that is an analog of an existing drug is novel in degree. We can correctly observe that this degree is very low, which is evident in the justification of developing cancer drugs. However, discovering a new kind of drug, or a new kind of treatment for cancer all together, possesses a high degree of novelty. Any harms or benefits enabled in the dual-use projects were previously non-existent, or existent to a very low degree. Thus, the studies specifically enable harms that may not have been possible before. The dual-use research projects I am considering have results that are unique to those individual projects and the scientists who publish them. The last characteristic that dual-use research shares is the magnitude of risk. When considering the magnitude of dual-use research there are three qualities to examine: the scope, the intensity and the probability.i The scope describes the amount of people that stand to be affected; today many scientific studies have a global reach. Terrorists need only to turn on a cell phone to download the latest information to enable their ends. Thus, a dual-use research study can enable harms around the world, harming anyone indeterminately.2 Stephen Kern, in The Culture of Time and Space, explored the phenomenon new technology can have on society. Kern notes, that during World War I the increased speeds of communication and shortened time for diplomacy led to detrimental consequences. 
ii New technologies, the telegraph and telephone, were largely responsible for influencing the pace of diplomacy during the July Crisis leading up to the First World War.iii These rapid communications led to nations placing deadlines on response times; where a nation would normally have had weeks to determine a course of action now they had a matter of days, which inevitably led to escalation. The intensity is the degree to which the risk will lower the well-being of individuals. Risks can have effects at the personal, local or global level. Consider the following chart: 2 By indeterminately I mean it is not predictable who will be affected when a study is published until the risks are actualized into harms. Combined with a global reach, the individuals that will be affected are unpredictable, or indeterminate.
  • 4. 4 Figure 1: Intensity v. Scope of Riskiv Scope Global Local Personal Endurable Terminal Intensity Dual-use research rarely raises concern at the personal level, partly because of the novelty, and global technological infrastructure. Most dual-use research will have effects on the global level.3 The harms that come about as a result of dual-use studies fit into endurable and terminal. Where the endurable harms can be overcome as opposed to the terminal harms, which are insurmountable. Dual-use research generally falls into the upper categories, because of the reach of these technologies and its accompanying infrastructure. As the reach of these projects increase in scope and intensity, dual-use research enables a new type of risk. With some dual-use technologies there exists existential risk.v These are risks that if actualized into harms will have an irreversible effect on the human species, either eliminating it or drastically curtailing its potential.4 The threat of a nuclear holocaust is probably the most visceral image for people alive today, but even a nuclear holocaust may only fit into the endurable category, as mankind may not be irreversibly changed. Dual-use research projects do not always fit this category, but there is an increasing prevalence of these technologies. Nanobots, gene therapy, research into deadly diseases and machine-body interfaces are all examples of these. They are especially 3 Of course this does not preclude any other type of effect on a local or personal level. A global endurable risk may still be terminal at the individual or local level. 4 Bostrom has identified four types of existential risk, bangs, crunches, shrieks, and whimpers. For a deeper analysis see [i] Pandemic Existential Risk Epidemic Genocide Infected with deadly virus Death from virus
  • 5. 5 worrisome because a mistake, such as releasing one self-replicating nanobot into the environment, may only need one breach to cause catastrophe. The probability is the chance that a given event will occur, but probability of risk in dual-use research can further be broken down into subjective and objective probability.vi Since objective risk assessments are difficult to precisely determine, we must operate partially on subjective risk. This is due to the lack of available facts about the consequences and partially due to the novel nature of the risk. Any type of prediction, where novelty exists, cannot be one hundred percent certain. Some risk will be perceived rather than measured.5 A general trend when discussing dual-use research is the more subjective risk the more the Precautionary Principle is invoked. The precautionary principle basically favors safety when there is a lack of scientific evidence or an overwhelming bias. Underlying the PP is a judgment about the value of the current state of affairs.vii The more novel the risks, the more people have the perception that the current state of affairs is somehow superior, or that the new state of affairs is somehow worse. By and large the public have this negative opinion about many of these future technologies, such as cloning.viii Nick Bostrom has identified that one of the reasons for our attraction to the precautionary principle might be that our coping strategies for these risks have not evolved out of experience, as our strategies for dangerous animals, diseases or wars have evolved.ix Therefore, our typical strategies for thinking about these risks, such as the precautionary principle, may be ill-suited to determine the appropriate course of action. The intensity and scope of the risk may be too difficult to rationally fathom. 
Further, as consumerism continues to direct scientific pursuits, regulatory bodies might be more persuaded by short-term benefits than by long-term harms.6 This is one problem with cost-benefit analysis in regard to these technologies. Humans have a propensity to judge short-term gains as more valuable than long-term gains, a tendency known as hyperbolic discounting.x One piece of evidence that this is affecting judgment about dual-use research is the recent introduction of a bill in the US House of Representatives that would consider lowering FDA standards for crossover drug use.xi The short-term benefit of an increased economy, or benefits to private interest groups, may outweigh the long-term benefit of keeping safety standards high for new drugs and uses. If this type of judgment is made with a technology that carries existential risk, there is no second chance.

The more problematic cases, however, will exist in the middle. Existential risks that would end humanity will likely be heavily scrutinized because of the perception of risk, or at minimum, risks will reach that magnitude less often than the others. Thus, they may not be as difficult to safeguard against as global endurable risks. The cases where the risks are global but endurable will still contain a great deal of benefits, making the perception of that technology more difficult.7 These borderline cases will have proponents on both sides of the aisle. There will be less of a call for the precautionary principle (especially if the risks are distant) and many different valuations in a cost-benefit analysis. This will make it difficult to determine the appropriate course of action quickly and decisively, which the magnitude of risk these projects possess demands.8

5. This applies to the scientist too: each individual scientist is certain to have biases. However, there are two reasons to disregard these biases, or at least to consider them trivial. First, the position of the scientist is one where rationality dominates the thinking, or at least theoretically should; of all positions in society, scientists should be among those we consider least biased toward science. Second, biases can never be eliminated completely, but scientists, as a position in society, can minimize bias. This can be accomplished through procedural methods, such as having other scientists check decision-making steps prior to publishing.
6. Long-term harms here can mean either low-probability harms or harms that will occur at a future time.

All the cases I am concerned with share the same characteristics: the property of dual-use, a greater potential for benefit than harm, a lack of intention on the scientist’s part to enable harms, novelty, and a high magnitude of risk. Now consider an example of dual-use research possessing these characteristics that was published without safeguarding. In June of 2012, seven separate studies were published in Nature detailing experiments mutating H5N1, the most deadly strain of influenza. Since 2003 there have been 718 confirmed cases of H5N1 across 16 different countries, and of those cases, 413 people have died.xii The viruses, and their accompanying mutations, have a low rate of transmissibility between human hosts.
The low transmissibility is a key factor in preventing the disease from becoming a pandemic during outbreaks, like the 1918 Spanish flu that claimed close to 5% of the world’s population.xiii Two of the studies, published in Nature, detailed novel methods of introducing mutations to the virus that would make it transmissible, potentially to humans.xiv In addition, the method behind mutating the virus was incredibly easy to replicate. One involved using nasal spray and ferrets as mediums for mutation.xv Both studies showed that the virus had diminished lethality; however, none of the scientists denied that these experiments could be replicated and the lethality retained through other mutations, if that were the intent.xvi The public was first alerted to these studies when one of the scientists stated that he and his team “had done something very stupid by mutating a H5N1 virus that resulted in one of the most deadly viruses that could be created.”xvii He later walked his comments back by claiming that the media had blown his statements out of proportion. However, the perception became that this research was a danger to release, since it had the possibility of enabling bioterrorism. Proponents pointed out that the research could also be used to counteract terrorism by providing a cure, vaccine or countermeasures to an attack. Even if the proponents were right, the scientist’s statements about his research are troubling; at minimum they show a degree of callousness about the potential consequences of his research. However, callousness does not necessarily mean that he acted wrongly.9 None of the research teams intended to publish a study that could be used in bioterrorism. They, like most scientists, were focused on more pragmatic concerns: preserving their research grant, their job, completing a sound scientific study, benefiting people through scientific discovery, etc. The study of H5N1 was aimed at providing cures and countermeasures, rather than weapons. In the end, the studies were published in their entirety, and as of today there have been no attacks with a weaponized strain of H5N1. However, it would be premature to think that the correct decision was reached or that these concerns over dual-use research are unfounded.

7. At minimum, there will be more proponents of these technologies, or people who evaluate the risk as ‘worth it’. This is because global but endurable risks will bring a different psychology to these problems. Since human beings will not perish or become diminished, different countries will be affected differently. Depending on the ethical standards and dispositions of these countries, a global endurable risk may be outweighed by a country’s perception of the benefits of research. The appropriate action of the international community in these cases is beyond the scope of this paper.
8. Even if an existential risk is not actualized, a global endurable risk may become actualized quickly and have terminal effects at the local and individual level.
With all of these factors at play, it can be difficult to determine what we ought to do when dual-use technologies are being developed. Rather than attempt to use the precautionary principle or a cost-benefit analysis on a case-by-case basis, society should ask whether the scientists researching these technologies have any obligation to safeguard society from the consequences. To motivate this point, I will examine when a position possesses role obligations and consider the position of the scientist under these circumstances.

9. Wrong in the sense of the scientist being responsible for the research’s misuse.

Role Obligations

“… We have consultations, which of the inventions and experiences which we have discovered shall be published, and which not: and take all an oath of secrecy, for the concealing of those which we think fit to keep secret: though some of those we do reveal sometimes to the state and some not.”xviii - Francis Bacon, 1624

In New Atlantis, Bacon saw the need for scientists to prevent certain types of research from entering the public sphere, and the need of society to prevent that research from becoming public knowledge. Bacon’s sentiments can be viewed as a call for responsibility on the part of the scientist, specifically a responsibility to safeguard society by keeping certain types of research from becoming public knowledge. Positions that have special responsibilities can be said to have role obligations. Role obligations are used generally to mark the existence of special responsibilities of a position in society toward a specific individual or group. The responsibilities are considered special because they are above and beyond what an individual outside of that particular position ought to do. Generally, role obligations function to protect vulnerable groups from harm. These harms are also special in the sense that they would not be easily mitigated. The imbalance of power usually comes from abilities within the position itself. Consider the position of a doctor. Doctors have the power to improve patients’ health, a very important function in our society. In order to make patients better, doctors subject individuals to harm during the course of an examination. These harms are necessary to bestow a greater benefit on the patient. Since the doctor has the special power and ability to subject patients to harm, and the knowledge to determine how best to use those harms for the good of the patient, doctors can be said to have a responsibility to minimize harms for patients.10 For example, consider the ramifications if doctors did not have an obligation to minimize harms.
Doctors could use their power to subject patients to whatever invasive and expensive tests they wished, taking advantage of people in their most vulnerable state by exposing them to unnecessary physical and psychological harms. In determining that no doctor can ethically act in these ways, the obligation begins to determine the boundaries of the position of a doctor.11 However, there is an important feature of a doctor’s role obligation that presents a problem for drawing similarities to the scientist, namely the influence of the Hippocratic Oath. The oath can be seen as a voluntary acceptance of the obligations of a doctor.12 The problem is that there is no oath for becoming a scientist. If an oath confers a doctor’s duties, then role obligations may need to be voluntarily undertaken; but there are good reasons to doubt that this has to be the case. The position of a birthright citizen is one where the obligations to one’s home country are not explicitly undertaken. Being born, and being born in a specific country, are not agreements any individual can enter into, but that does not preclude birthright citizens from possessing the obligations and rights of individuals who choose to become citizens. The obligations of citizens are always present when the individual is identified in that role.13 A citizen of the United States has obligations to pay taxes and to follow the laws. In return, that citizen is granted certain protected powers (the Bill of Rights) and has a more advantageous status than non-citizens. However, initially this is not an explicit agreement.14 Birthright citizenship is conferred at birth, without any prior agreements or understanding of preexisting norms. Yet no one could plausibly violate these obligations and claim ignorance.15 Further, a citizen cannot be denied his powers merely because he lacks knowledge of them. Therefore, if role obligations can be conferred in explicit and implicit ways for a variety of reasons, then it is worthwhile to analyze the position of the scientist to determine if it is one with role obligations.

11. This is meant generally. If a position is prohibited from a certain action (responsible for not acting in a certain way), for example harming a patient, then that position must operate around that prohibition when performing different functions or expanding its role.
12. There are a variety of versions between countries and medical schools, but I assume a commonality of minimizing harms to patients among them. There may be specific differences between oaths in the cases of euthanasia, abortion, etc., but while interesting they are not relevant to the discussion of the scientist’s obligations.
13. I leave aside the specifics of how to determine citizenship (or degrees of citizenship) and appeal to any case the reader deems unproblematic for what constitutes a citizen (since there is no doubt the individuals I am concerned with are scientists).
14. Infants have rights under the state despite not being able to agree to being citizens, but in many cases their parents absorb these obligations.
15. Consider explaining to the IRS that you didn’t know you had to pay taxes. Even if you don’t go to jail, you will still owe them money, no matter how much you disagree or protest that you didn’t undertake this obligation.

The Special Position of Scientists
To motivate my claim I begin by analyzing the work of Sieghart et al.xix They provide a good starting point, as they sought to analyze whether the position of the scientist has role obligations in general. Sieghart et al. began by asking whether the position of a scientist is especially placed to benefit or harm society.xx They conclude that because of the potential to benefit or harm society, in conjunction with the fact that scientists are able to predict these consequences more accurately, scientists are in a special position that gives rise to role obligations.xxi I agree with their initial assessment, but there are further reasons to judge the position of the scientist as one with role obligations. The special position of the scientist can be broken down into three main components: special abilities, special knowledge, and a unique level of complicity in the misuse of research. Recall that role obligations function to protect vulnerable populations from harms stemming from power imbalances that are not easily mitigated. Examples of the special abilities of scientists include the use of restricted materials (such as strains of H5N1) and/or the use of restricted procedures (experiments on human subjects). This imbalance of power is one of the reasons that medical scientists have a myriad of obligations regulating the way their studies can use human subjects. These abilities are special because they are restricted from the public sphere and/or are held only secondarily by other agents.16 In addition, the special abilities of scientists are important because of the magnitude of risk that dual-use research carries.17 Scientists also have the ability to consider the consequences of their research first and are best placed to effect change. This is a practical consideration: as their individual careers develop and scientists begin new studies, they are usually the first individuals to decide what direction the project will take and what will be used.
Therefore, they are best placed to implement any change to their research, whether that is a change in methods, materials or the way the results are released. The scientists who will be under the greatest obligation are those who have the greatest ability to change the direction of the project. By this I mean that no study occurs in a vacuum; there are multitudes of people working on even small projects. However, there is a significant difference in ability between the head scientist and everyone else. The top scientists on any project will generally have the most abilities (this also goes for special knowledge and complicity). The special abilities of scientists (the use of restricted materials and procedures, and the scientist’s placement to consider studies first), together with the global risk these studies contain, are only one portion of what confers the scientist’s obligation; special knowledge also plays an important role.

16. Restricted in the sense that an ordinary citizen would have to expend a great deal of effort and/or break laws to achieve the same end the scientist is capable of. ‘Held secondarily by other agents’ refers to the ability of other agents to hold these abilities secondarily. A bird farmer in China might have access to H5N1, but without the methods to manipulate it (provided by the scientist) there is not the same potential for misuse. These agents will always act secondarily to the scientist, because the scientist is responsible for creating the research that might enable others.
17. Figure 1

The special knowledge of a scientist is a very important consideration for the existence of role obligations. I make this claim based on two justifications: first, the special knowledge of a scientist enables a special degree of foresight into the consequences, and second, that knowledge required dedication and forethought to obtain. If a scientist is working on novel research, then it is safe to assume that the scientist is an expert in his field. By expert I mean to capture the difference in knowledge between the project’s head scientist and an ordinary individual. The head scientist may be conducting a study to obtain a specific type of knowledge, but is still much more knowledgeable about the entirety of what is involved when compared with an ordinary individual.18 The scientist is only able to become an expert through dedicated work over a long period of time, obtaining the knowledge necessary to successfully run experiments. That expertise places the scientist in a unique position to work with special abilities. More importantly, it gives him the knowledge necessary to foresee the consequences of dual-use research. That knowledge is not always available to a non-expert, and it places the scientist in a unique position to determine the consequences. That is because in order to design a dual-use study, one must exercise forethought and develop a procedure. In thinking through the procedure and the desired result, one would also be led to consider the consequences.
The scientist’s foresight is more likely to be accurate, because of his special knowledge, and more likely to occur first, because of his position. An objection could be raised here: if an agent was ignorant of the consequences, then that agent should not be held responsible (or should be held responsible to a diminished degree) because he lacked sufficient foresight. In some instances this objection has merit; however, there are limits to when it can be used in the case of the scientist. If the scientist is an expert in his field but fails to consider the consequences of his research project, then he is either culpably ignorant or the consequences are unforeseeable. Culpable ignorance describes a case where an agent ought to have inquired or exercised effort to remove his ignorance. If the scientist does not inquire as to how his results will be used, or fails to consider how they might be, then it may not be controversial to hold him responsible on the grounds of culpable ignorance. This would be analogous to a car rental agency failing to inquire whether a client had a valid license before that client caused a major accident. The car rental agency has an obligation to check this and the foreseeable knowledge that an unlicensed driver could cause a serious accident. If the agency were to claim that it was ignorant of the driver’s not having a license, the company would still be liable, as that check is part of its role. The fact that it simply failed to inquire, resulting in foreseeable harms, does not excuse its responsibility. Comparatively, if the scientist does not attempt to consider the consequences, and the consequences are easily foreseeable, then the scientist is culpably ignorant. If the agent was ignorant of the consequences because the consequences were unforeseeable, then that agent may be excused from responsibility. For example, it would be ludicrous to hold Alexander Fleming or Louis Pasteur responsible for the rise of super-resistant strains of bacteria resulting from the overuse of antibiotics. Therefore, unforeseeable consequences would excuse an agent’s responsibility; but for the purposes of this paper I am not concerned with unforeseeable consequences.

18. This does not preclude ordinary individuals from possessing this knowledge, but I doubt there are very many of equal degree to the scientist, and further, they do not possess the other features necessary for the obligation.

Another objection to my view criticizes the importance that I place on a scientist’s foresight. This objection falls under the doctrine of double effect. Generally, the doctrine seeks to differentiate an action that is impermissible when intended but permissible when merely foreseen.
One example where the two are distinctly separable is the instance of self-defense that ends in the death of the attacker.xxii Under normal circumstances, assume an agent intending the death of a person acts wrongly; but under the circumstances of self-defense, acting in a manner where the outcome is the same (the death of the attacker), the agent is absolved of wrongdoing. Of course, this example relies on further moral principles, such as the permissibility of defending one’s life at the cost of another’s, but we can identify that there is a potential for judging the events differently. That potential resides in the intention of the agent. Since the agent’s intention in self-defense is to deter an attacker and not to kill him, the foreseeable consequence of being forced to kill that attacker lessens the responsibility of the agent. The objection, in terms of the scientist, proposes that since the scientist did not intend the harms, and further, another agent supplied the wrongful action, the scientist cannot be judged responsible; especially considering that without the intervening agent, the act would not have occurred. This objection, however, does not take into account just how unique the scientist’s position of foresight is. Consider the following analogous self-defense case, one involving foreknowledge:

The Aggressive Neighbor: You receive word from a friend that a neighbor with whom you had an altercation is looking for you. You know from previous experience that this man is prone to violence. Before you walk home you could make a police report that would delay him, you could take another route home to avoid him altogether, or you can take your usual route home with the foreseeable knowledge that the neighbor is looking for you and the encounter could turn violent.

In the original self-defense case (one without foresight or other options), the difference between intending the man’s death and defending oneself with the foreseeable knowledge that he will die makes a greater moral difference. This is because of the lack of foresight about the situation and the lack of available options. If the man from the neighbor case chooses not to avoid his attacker, the judgment of the situation might differ. I am only claiming here that these cases are not identical in degree of foreknowledge. That difference in degree has the potential to change the judgment: the higher the degree of foreknowledge, the more potential there is for accountability. If the scientist is one hundred percent certain that a particular group will use a study, then the scientist could be held responsible to a higher degree than if he is one percent certain. Therefore, the objection from foresight is reliant on degrees. This corresponds to the degree of foreseeable misuse that projects possess: the higher the degree of foresight, the higher the degree of certainty, and the higher the degree of responsibility for the scientist. The special abilities and knowledge of the scientist begin to shape the motivation for a scientist’s obligations, but the unique complicity of scientists in cases of dual-use research provides the most compelling reason to accept my claim.
A Scientist’s Complicity

When an agent acts, his role can fall into one of two categories: principal or accomplice. The principal directly brings about the outcome through direct action; in other words, there is no intervening action by another agent. For my purposes, I leave aside the discussion of principal wrongdoers, because the scientist is never a principal in the misuse of dual-use research; he is an accomplice. A principal directly brings about the outcome, whereas an accomplice assists in the principal’s action. There are a number of ways that an accomplice can assist a principal. For example, the scientist who mutated an H5N1 virus would assist a bioterrorist by enabling the construction of a weapon. This assistance by an accomplice is referred to technically as complicity. To be complicit in wrongdoing is a way of describing the degree of an agent’s participation in a principal’s wrongdoing. However, complicity defined in this way may not always carry enough weight to determine blame. To be clear, an agent can be complicit but not blameworthy. By blameworthy I do not mean to evoke images of retribution; blameworthy describes an agent whose complicity warrants moral consideration. Complicity that is worthy of moral consideration describes the disposition of other agents to judge the complicit agent as blameworthy, and thus as having committed a wrong.19 If the agent’s complicity does not meet the criteria for blame, then that agent is excused of wrongdoing. The difference between complicity and blameworthiness can be difficult to determine in specific cases. What generally appears to separate them in these cases of dual-use research is the trivialness of an agent’s complicity below the threshold of blameworthiness.20 The triviality principle is used to describe factors that can be seen as negligible when determining the appropriate course of action. Consider the difference in complicity and blameworthiness of the following individuals in a bank robbery. There are the men inside robbing the bank (the principals), a getaway driver parked out front, a teller who hands over the money, and a man who unknowingly (and unnecessarily) opens the bank’s door from the outside as the robbers escape. Each of the three individuals who assist the bank robbers is an accomplice to the robbery; they are complicit through their causal contributions, but the getaway driver seems to be the only one who warrants blame. The high degree of his assistance, and his intention of continued assistance, lead to the conclusion that he is the most complicit, and the most blameworthy.
Conversely, the man who unnecessarily opens the door for the robbers is complicit to a trivial degree. He shares no intentions with the robbers, possesses no knowledge of the events he is walking into and provides a trivial degree of causal contribution. Therefore, he is not blameworthy. In cases where contributions are trivial, it won’t be controversial to absolve the scientist of blame.21

19. It is up to particular groups how to handle the consequences of moral judgments.
20. I use ‘threshold’ in a general sense. I do not mean to imply that there is a fine line or a range; rather, there is a place or time of judgment, the rules of which I am not interested in determining, where above the threshold the agent is considered blameworthy and below it the agent is only considered complicit.
21. This is analogous to the discussion of a new type of cancer drug. The novelty is trivial, and therefore the scientist is complicit but not blameworthy.
Complicity is a function of an agent’s mind-state and expected causal contribution, both of which come in varying degrees.xxiii Simply stated, an agent’s expected causal contribution is the degree to which that agent reasonably expects his actions to contribute to the outcome.22 Mind-state captures both the notions of foresight and intention. Per the constraints above, the scientist pursuing dual-use research never intends the harms but does foresee them (or is in a position where he is expected to). The moral importance of the special foresight of the position of the scientist has already been examined above; now I will consider the unique causal contribution scientists can provide. When considering how agents can be judged in the bank robbery example, it becomes easier to identify how causal contribution can differ within the notion of complicity. However, none of the agents in the bank robbery parallels the scientist’s causal contribution. None of them exhibits the same degree of complicity when compared with the scientist and the bioterrorist. This is because the causal contribution a scientist can provide is markedly unique among the agents involved in the misuse of dual-use research. Recall two of the conditions of dual-use research: the novel nature of the results and the requisite expertise needed to obtain them. Under these conditions the results are not easily obtainable by those lacking the requisite abilities and the precise level of foresight needed to bring them about. The types of results themselves also produce new consequences that may not be foreseeable to those unfamiliar with that particular project or lacking sufficient knowledge. Therefore, the causal contribution provided by the scientist is not easily replaceable and is markedly unique, in the same way that his foresight is.
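The claim that complicity varies with mind-state and expected causal contribution can be sketched as a toy model. The formalization below is my own illustration of the claim's structure; the text commits to no particular functional form, and the symbols m, c, t and C are introduced here only for exposition:

```latex
% m in [0,1] -- mind-state, from ignorance (0) through foresight
%               to shared intention (1)
% c in [0,1] -- expected causal contribution to the principal's act
% t          -- threshold of blameworthiness (below it, complicity is
%               "trivial" in the sense of the triviality principle)
\text{Complicity: } C = f(m, c), \quad
\text{e.g. } C = m \cdot c, \qquad
\text{blameworthy} \iff C > t
```

On any such model, the getaway driver scores high on both m and c, while the man who opens the door scores near zero on both, matching the judgments in the bank robbery example.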
To further examine the importance of the scientist’s causal contribution, consider the following example, in which the agent has special abilities, special knowledge (that required forethought to obtain) and a unique causal role.

The Chef and the Assassin: A chef at a gourmet restaurant in Washington, DC has perfected working with a special ingredient that is highly poisonous. He alone has the knowledge to nullify the poisonous effects of the ingredient, but through those same methods the poisonous smell (which would have allowed someone to detect it) can be masked. The chef also spends his downtime on the deep web tracking the exploits of political assassins. During his extensive research he learns that an assassin renowned for using poison is moving to Washington. If he were to cook the special ingredient, the assassin might be able to reverse engineer it and use the poison on a particularly difficult contract.

22. I use ‘reasonably expects’ to capture the following intuition. If a scientist were to take the necessary steps to fulfill his role obligation, and he found that publishing does not violate the obligation to safeguard, which was independently confirmed by other scientists, but his research still enabled a catastrophe, then he is complicit to a very small degree. In contrast, a scientist who ‘reasonably expects’ a study to contribute to a high degree to a terrorist plot, and chooses to publish, can still be held complicit to a high degree even if no harms happen as a result.

First, we can observe that the chef has no intention of enabling the assassin; it would be better if the assassin left the city so that the chef could avoid this consideration altogether. Second, the chef can reasonably foresee that the assassin could learn of his technique and choose to attempt to use it, though the assassin will likely find another method eventually. However, there does seem to be something special about the case above; the intuitive claim is that the chef should resist serving the assassin. Perhaps, to fully make a judgment about the case, more details are needed. If the assassin were to have a better chance at evading capture using the chef’s ingredient, the chef may have more cause to prevent the assassin from learning the method. If the ingredient were as common as rat poison from a convenience store, then the case seems less compelling than the case where the assassin uses the chef’s special method. While the chef’s method would still be a unique causal contribution, it would not possess some of the special properties of the original ingredient.23 The lack of novelty makes the causal contribution less special. The unique causal role the chef has is based on the chef’s special ability, namely the unique methodology of preparing the poison. In addition, the chef’s method leads to unique harms. The poison, once prepared using the chef’s methodology, is markedly unique among the poisons the assassin could use. Thus, the harms brought about are also markedly unique.
The chef also has special knowledge about the assassin, conferring the direction of the duty, namely not to enable the assassin. In addition, the unique causal role of the chef is difficult to replace: he is the only one available with that particular methodology and the only one who could effect change. Combined with the foreknowledge of its misuse, this makes a more compelling argument for holding the chef responsible. The scientist, like the chef, has no intention of enabling any terrorist and has foreknowledge of his project’s misuse. There are contextual differences between the methods of misuse, but all things considered, the chef and the scientist possess unique knowledge of the potential harms. The scientist enables the terrorist to complete an act that would have been impossible without his contribution. The means by which the terrorist achieves his end is unique to the causal contribution of the scientist. The terrorist would probably find another means of carrying out his end; however, if he were to utilize the method of the scientist, the situation changes. A harmful action is enabled, one previously impossible without the causal contribution of the scientist. Since the scientist has particular foreknowledge of that misuse, and the ability to safeguard against it, the scientist has a responsibility not to causally contribute. Additionally, the position of the scientist is not one in which society expects harms to be enabled; it is primarily a beneficial endeavor. Therefore, when scientists place society in harm’s way by failing to recognize the unique role they play (causal contribution, foresight) and the importance of their position (with regard to how quickly indeterminate harms can come about), they can be judged complicit in the actions of terrorists and held blameworthy for violating their role obligation.

23. This is an example of how differences in degree and kind affect judgments.

Objections

Until now I have left out an important underlying assumption of my claim: the role obligation I have proposed could make the state of affairs worse. For example, in the case of the influenza studies, it seems that for now no harms have come about as a result of their publication. An opponent of my view might suggest that by following the principle I have proposed, society would have missed out on beneficial knowledge. This is in part due to the remaining ambiguity in my use of ‘safeguard’. The obligation to safeguard dual-use research need not mean never publishing; in many instances the appropriate action might be to restrict only the methods or delay the study until other precautionary measures can be taken.24 However, in some instances the obligation will demand that scientists make the state of affairs worse by withholding their results.
This type of instance would be a case like the following: Publishing would benefit society more than the negative harm of a scientist failing to adhere to his role obligations, in addition to the potential harms of dual-use research. On the other hand, refusing to publish would harm society more than the positive benefit of the scientist following his role obligation plus mitigating the harms of dual-use research. In the former case, by abandoning the role obligation, the scientist could make the state of affairs better. In the latter case, by following role obligations, a scientist would make the state of affairs worse. This does not mean that the harms from dual-use cannot occur in both cases, it only means that at the time of publication the state of affairs will be worse if the scientist follows his role obligation. If those 24 The appropriate action will vary from study to study. What is appropriate to safeguard one study may have no effect on another type of dual-use study.
potential harms are actualized, this does not alter the judgment of the state of affairs with regard to the scientist.25 Therefore, scientists are under an obligation to safeguard even though the state of affairs could have been, by comparison, better.26 The types of cases where this might occur generally involve countervailing reasons that affect the context of the scientist’s role obligations. The countervailing reasons that can influence the scientist can take the form of social pressure (such as pressure from employers), professional pressure (from other scientists) and even societal pressure (for a new technology, or against one). The reasons an agent himself can have to act against role obligations are varied; they can range from needing money to support a drug habit to needing money to pay for medical treatment to prevent the death of a child.27 These reasons can affect the judgment of the scientist in both positive and negative ways. The scientist who fails his obligation in order to support a drug habit may face harsher judgment than the scientist supporting his dying child. Regardless of the reasoning, scientists are under a greater pressure to refuse countervailing reasons as a justification for violating their role obligation. I make this claim because the role obligation to safeguard society from dual-use research mimics the ideals of the Solzhenitsyn Principle. As Solzhenitsyn said in his Nobel Lecture, “And the simple step of a courageous man is not to partake in falsehood, not to support false actions! Let THAT enter the world, let it even reign in the world - but not with my help.”xxiv Solzhenitsyn’s remarks break morality down into two parts: the difference one makes through direct causal influence (acting as a principal) and the difference one makes through the causal influence of someone else (acting as an accomplice, i.e. 
being complicit).xxv The Solzhenitsyn Principle calls for a prohibition of wrongdoing, not only as a principal, but also as an accomplice. Typically this is thought of as an absolutist prohibition, but I do not wish to propose it as such. Solzhenitsyn, a strong-willed man who spent a long stretch of his life in a Gulag for being a dissident, might disagree. However, it cannot be the case that 25 Recall that our judgment of causal contribution relies on expected causal contribution, which is determined at the time of complicity, not after the fact. Judging after the fact leads to the implausible scenario of calling long-dead individuals complicit in the distant misuse of their research. 26 ‘By comparison’ here is used to denote that when a scientist follows his role obligation and safeguards society, that is not a ‘bad’ state of affairs. It is only when the state of affairs of safeguarding is compared with the state of affairs of publishing that it becomes the ‘worse’ of the two options. 27 Similar to footnote 3: There can also be various personal reasons, such as deeply held moral or religious beliefs, etc.
a scientist should refuse to publish under any circumstance.28 Rather, I use the Solzhenitsyn Principle to evoke the idea that a scientist has a powerful responsibility not to be complicit. The principle can best be seen as a responsibility to resist countervailing reasons, at least with regard to complicity. More specifically, the principle demands that an agent resist in instances where the agent’s complicity would benefit him. This parallels the case of the scientist who would bring about a better state of affairs if he were to violate his role obligation.29 By following the obligation to safeguard, the scientist is obligated to bring about a worse state of affairs. The connection between the scientist and the Solzhenitsyn Principle can be seen in the phrasing ‘courageous man’. If an agent is in a position to be courageous, that agent is in a position to act rightly or wrongly. An agent in that type of instance can be said to have a unique causal position (to bring about a better or worse state of affairs). In the case of the scientist, that position is a choice between following role obligations or not. Further, another agent cannot easily fill that causal position. The scientist also has the unique knowledge to foresee misuse, and the unique ability and position to deter it. Therefore, no other individual is better placed to carry out the scientist’s responsibility or to act courageously. Further, no other individual may be able to have any impact. Once the results of a dual-use study are released, it would be very difficult, if not impossible, to recall that information. The more unique the causal position (the more complicit an agent is), the greater the obligation to resist countervailing reasons. The Solzhenitsyn Principle not only requires that scientists resist countervailing reasons to a higher degree, it also requires that they abstain from participation when their causal contribution would make no difference to the overall outcome. 
These types of cases are often referred to under the argument for no difference; the objection is often stated as, “If I don’t, someone else will”. In these cases there are multiple agents contributing to an outcome, and that outcome is not dependent on any single agent providing a unique causal contribution. The objection therefore asks: if the causal role of an agent is important for determining blame, can individuals still be blameworthy in cases where that causal role is diminished? These types of cases need more details. First, we can identify that the uniqueness of a scientist’s causal contribution is a matter of degree. If the 28 The immediate threat of death could function as an instance where we could not hold the scientist responsible for failing in his role obligation, especially when the harms of publishing are minimal. 29 The better state of affairs could be a higher degree of benefit to society, or the positive repercussions for the scientist after publishing.
scientist is certain another researcher will publish, his causal contribution is a function of how similar their causal contributions will be. If each of the scientists has reached the exact same conclusion using the exact same method, then their causal contributions are, all things considered, equal in degree. If the two scientists are three weeks apart, there is the potential for more causal contribution by publishing first. The difference of three weeks could be significant enough to warrant a greater degree of complicity. This type of distinction will only apply to certain cases, but in those instances the difference in degree has an effect on judgment. The more interesting and problematic case is when the causal contributions are identical in degree. In a case where multiple agents causally contribute to the same degree, it is curious to explain why, if an agent were certain of the outcome (someone would take the same action regardless), it is worse for that agent to be the one that brings it about. I concede that the causal role of the scientist is no longer special to that specific scientist in these cases, but there are other reasons to doubt the claim that no difference outweighs a scientist’s role obligations. Would the scientist’s publishing actually make no difference to the outcome? Are two worlds, one in which one scientist publishes and another in which two scientists publish (using the excuse of no difference), actually equivalent? With regard to the outcome of publishing the dual-use study, the two worlds are identical. Both worlds are exposed to the same risk of misuse. However, the world in which two scientists fail their role obligations is worse than a world where only one scientist fails.30 This is because the unique position of the scientist is more than causal influence and special knowledge; it is a position in which an individual can make an impactful difference. 
By this I mean to appeal to more than the notion of bringing about a better state of affairs by safeguarding the world from harm or benefiting it through research. I mean to capture the social consequences implicit in the perception of the role of the scientist. These societal influences give weight to the idea that the argument for no difference may not apply in the case of the scientist failing his role obligation. The work of scientists influences policy, public opinion and the work of other scientists. The special knowledge of the position also gives the perception of authority. That is because when a scientist speaks about science he knows what he is talking about, or at least the public perceives that he does. 30 Even under a moral theory that places all of the weight on outcomes, so that neither individual could be held blameworthy because of their identical contributions, a world with the outcome of two scientists failing in their role obligation has a worse state of affairs than a world where only one scientist fails, even though we cannot assign blame to a single agent.
For the most part, the authority in the position of the scientist can be considered to come from the position’s special knowledge and the choice to become a scientist.31 The public lacks this special knowledge but has a perception of the complexity of the information. The public also has a perception of the path to becoming a well-respected scientist, and those opinions give the position authority. Under-explained, callous remarks or bad research can lead to terrible consequences. The South African policy of refusing AIDS medication resulted in the preventable (untreated) deaths of hundreds of thousands of people, and was only undertaken after being influenced by a molecular biologist, Peter Duesberg.xxvi The difference between the scientist’s knowledge and the public’s creates an area of trust between the two positions. The public trusts that what scientists say is scientifically accurate to the best of their knowledge. The public is so easily influenced because of a lack of intimate, detailed knowledge of methods and purpose, as well as a generally negative attitude towards new technologies (such as animal cloning).xxvii If the public has a disposition to be negative and lacks knowledge, then what scientists say has an even greater impact, as they are seen as authorities. Trust is vital to the profession, for its success and our technological success. Without trust, medical scientists cannot recruit participants for studies of new drugs, etc. This would slow medical advances to what we could learn by vivisection and computer modeling. Since the FDA requires clinical trials on humans, a lack of trust would seriously inhibit the growth of scientific knowledge.32 In addition to the influence the position of a scientist has with regard to the general public, scientists also have a great deal of influence amongst other scientists. 
As with other positions that possess authority and role obligations, scientists determine some professional codes of conduct amongst themselves.33 These individuals come together to determine standards of research, methods and other issues in order to determine the profession’s fit in society. Thus, the opinions of members within the community will always be more influential than the opinions of non-members. Members are of the same mind, or at least are well versed in similar issues, as are other members. In addition, membership becomes a mark of authority, without which one’s influence would not have the 31 Other factors may influence this, such as celebrity status. 32 This assumes that a lack of trust would have a negative correlation with research participation, which I do not believe would happen instantly if one researcher were discredited, but under a system that operated on a lack of trust, I believe this is a possible outcome. 33 There are other codes of conduct, such as those that govern the treatment and use of human subjects.
same effect. Scientists who choose to disregard role obligations might influence others to do the same, leading to more scientists disregarding their role obligations. These types of side effects are known as spirals: side effects where the quantity of the same action has a particular influence on people.xxviii An example of these types of effects can be seen in the stock market. When a company reports a bad quarter, fires a CEO, or undergoes some type of scandal, the perception of the company becomes that it is risky, and that risk is reflected in the stock price. These pieces of information may have little to no actual effect on the worth of the company. However, when these stories break there is often a selling off of stock by risk-averse investors. When those sell-offs occur the stock price continues to decline, prompting more risk-averse investors (of a lower degree than the first) to sell their shares, creating the perception of more risk, a trend that can spiral with disastrous results. In the case of the scientist, the influence of one individual publishing a study concerning influenza may influence several other labs to pursue and publish studies of the same kind. If each of these studies makes a small discovery concerning influenza, this will generate more interest and more research in the field. Each of these new studies carries with it the potential for misuse, but as a result of the increased interest, every scientist plays an even more diminished causal role. The more scientists take up the pursuit of a particular piece of research, the more risk society comes under. Despite the lack of causal contribution in these cases, the position of the scientist still warrants a high degree of refusal when the effects of negative spirals are present. 
Of course, spirals can also precipitate in a positive direction, so the most difficult case appears to occur with multiple identical causal contributions and no apparent side effects or spirals. In cases where the causal contribution of agents is identical and the situation is such that their contributions make no degree of difference with regard to side effects, the special position of the scientist has been diminished to a trivial degree. Therefore, in cases where that particular degree of causal contribution was necessary for blameworthiness, what reason does the scientist have to remain under the obligation?34 34 Being trivially complicit may not be enough to motivate the agent to follow a role obligation or hold him accountable. For example, every individual (that actively participates in society) is in some way complicit to a trivial degree in some type of wrongful behavior. The world is too interconnected for this to be otherwise. Thus, on reflecting on the trivial manner in which my driving contributes to global warming, I might be unmotivated to opt for public transportation every day (assuming that is the
In following the Solzhenitsyn Principle the scientist would still be required to abstain from publishing; this can be observed in the phrase ‘let it reign, but not with my help’. However, the scientist does not seem to have a strong obligation to follow it if he is not considered blameworthy. For example, imagine a terrorist group employed scientists to research influenza at the same time the labs in 2012 were. Next, imagine that the scientists are certain the terrorist group has results identical to their own (identical causal contributions). Further, we can assume that the terrorist group would receive one hundred percent of the attention for publishing (no negative side effects for the scientists) and the scientists would benefit from being the first to publish. In this situation it is still desirable to explain why the scientists ought to adhere to their role obligation rather than accept the argument for no difference and act in place of the terrorists. In this instance the judgment could be based on what influence the scientists ought to have, namely opposite the terrorists. However, with no unique causal contribution and a lack of negative side effects, I doubt there is any sufficient moral reasoning to condemn their publishing on the basis of acting opposite the principal’s ideology. It would seem that no scientist could be held blameworthy under an argument for no difference in this situation. If no one can be complicit in the outcome in a manner sufficient for blameworthiness, then no individual can be held responsible. Those individuals have failed in their role obligation, but that complicity can be considered trivial. However, while the causal contribution may no longer be unique to any particular individual, it is unique to a group of individuals in the same special position. One scientist may not hold a unique causal contribution, but a group of scientists does. 
If there are multiple agents ready and willing to be complicit in an identical manner, then those agents can be said to share the same unique causal position. It does not matter which agent acts towards the outcome, because the outcome will be the same. Each agent who has the ability to causally contribute the amount necessary to achieve an outcome can be said to share the same unique causal position. No individual can be said to have a unique causal role, but as a group they can be said to share one. Each of the seven research teams that were poised to publish results about influenza (assume they are causally identical) shares one-seventh of the unique causal position. I propose that while any individual might not be held fully responsible when invoking the argument for no difference, that individual can be held responsible equal to the degree of complicity that individual contributed to the group. If the group is held more ethical decision) and choose to drive occasionally. In acting this way, why would my trivial complicity convince me to act against my desires?
blameworthy, then each agent receives a fraction of the blame. The practicalities of how this claim can be implemented are far beyond the scope of this argument, but the motivation behind holding these individuals responsible as a group can be justified. This is only the case, however, if our judgment is based solely on outcomes and there are no countervailing reasons to affect the judgment. Conclusion Due to the special position of scientists, including special abilities, special knowledge and unique causal contribution, scientists have an obligation to safeguard society from the consequences of their dual-use research. This obligation persists despite strong countervailing reasons and despite the availability, in most cases, of other agents to bring about the identical outcome.
Bibliography i Nick Bostrom, “Existential Risks,” Journal of Evolution and Technology 9, no. 1 (2002): 1, http://www.jetpress.org/volume9/risks.html. ii Stephen Kern, The Culture of Time and Space, 1880-1918: With a New Preface (Harvard University Press, 2003). iii Ibid., 275. iv Bostrom, “Existential Risks,” 1. v Bostrom, “Existential Risks.” vi Ibid., 3. vii Carl F. Cranor, “Toward Understanding Aspects of the Precautionary Principle,” Journal of Medicine and Philosophy 29, no. 3 (January 1, 2004): 259–79, doi:10.1080/03605310490500491. viii William K. Hallman and Sarah C. Condry, Public Opinion and Media Coverage of Animal Cloning, FPI Research Report (Rutgers University: Food Policy Institute, n.d.); Matthew C. Nisbet, “Public Opinion About Stem Cell Research and Human Cloning,” Public Opinion Quarterly 68, no. 1 (March 1, 2004): 131–54, doi:10.1093/poq/nfh009. ix Bostrom, “Existential Risks,” 2. x Jess Benhabib, Alberto Bisin, and Andrew Schotter, “Hyperbolic Discounting: An Experimental Analysis,” New York University Department of Economics Working Paper, 2004, http://teaching.ust.hk/~bee/papers/Chew/04Benhabib-Bisin-Schotter-Discounting.pdf. xi Toni Clarke, “FDA Could Approve Drugs for New Uses on Less Data: Draft Law,” Reuters India, accessed April 30, 2015, http://in.reuters.com/article/2015/04/29/us-u-s-health-lawmakers-bills-idINKBN0NK2FA20150429. xii World Health Organization, Influenza at the Human-Animal Interface; Summary and Assessment as of 26 January 2015 (WHO, January 26, 2015). xiii “Influenza A Virus Subtype H5N1,” Wikipedia, the Free Encyclopedia, January 10, 2015, http://en.wikipedia.org/w/index.php?title=Influenza_A_virus_subtype_H5N1&oldid=641936712. xiv Donald G. McNeil, “H5N1 Bird Flu Research That Stoked Fears Is Published,” The New York Times, June 21, 2012, sec. Health, http://www.nytimes.com/2012/06/22/health/h5n1-bird-flu-research-that-stoked-fears-is-published.html. xv Ibid. xvi Ibid. xvii Ibid. 
xviii New Atlantis (Clarendon Press, 1915). xix Paul Sieghart et al., “The Social Obligations of the Scientist,” The Hastings Center Studies 1, no. 2 (1973): 7, doi:10.2307/3527509. xx Ibid., 10. xxi Ibid., 11. xxii Alison McIntyre, “Doctrine of Double Effect,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Fall 2011 edition, http://plato.stanford.edu/archives/fall2011/entries/double-effect/. xxiii Chiara Lepora and Joseph Millum, “The Tortured Patient: A Medical Dilemma,” Hastings Center Report 41, no. 3 (2011): 40, doi:10.1353/hcr.2011.0064.
xxiv Alexander Solzhenitsyn, “Nobel Lecture,” 1970, http://www.nobelprize.org/nobel_prizes/literature/laureates/1970/solzhenitsyn-lecture.html. xxv John Gardner, “Complicity and Causality,” Criminal Law and Philosophy 1, no. 2 (May 1, 2007): 2, doi:10.1007/s11572-006-9018-6. xxvi Sarah Boseley, “Mbeki Aids Denial ‘Caused 300,000 Deaths,’” The Guardian, accessed April 13, 2015, http://www.theguardian.com/world/2008/nov/26/aids-south-africa. xxvii Hallman and Condry, Public Opinion and Media Coverage of Animal Cloning. xxviii Jonathan Glover and M. J. Scott-Taggart, “It Makes No Difference Whether or Not I Do It,” Proceedings of the Aristotelian Society, Supplementary Volumes, 1975, 171–209.