In the U.S., the federal agencies that fund scientific research usually discuss scientific misconduct in terms of the big three of fabrication, falsification, and plagiarism (FFP). These three are the "high crimes" against science, so far over the line as to be shocking to one's scientific sensibilities.
But there are lots of less extreme ways to cross the line that are still -- by scientists' own lights -- harmful to science. Those "normal misbehaviors" emerge in a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson:
We found that while researchers were aware of the problems of FFP, in their eyes misconduct is generally associated with more mundane, everyday problems in the work environment. These more common problems fall into four categories: the meaning of data, the rules of science, life with colleagues, and the pressures of production in science. (43)
These four categories encompass a lot of terrain on the scientific landscape, from the challenges of building new knowledge about a piece of the world, to the stresses of maintaining properly functioning cooperative relations in a context that rewards individual achievement. As such, I'm breaking up my discussion of this study into two posts. (This one will focus on the first two categories, the meaning of data and the rules of science. Part 2 will focus on life with colleagues and the pressures of production in science.)
It's worth noting that De Vries et al. have not identified these categories of problematic behaviors from first principles. Rather, they sought scientists' perspectives:
Interestingly, those who know science best -- scientists themselves -- are almost never asked to describe the behaviors they regard as most threatening to the integrity of their work. ...
Using data from a series of focus groups, we describe the kinds of behaviors working scientists believe to be most threatening to the integrity of the research enterprise. (43)
To the extent that scientists themselves identify these behaviors as threatening to the integrity of their work, it's probably worth taking them seriously. At the very least, scientists' perceptions about who engages in problematic practices may influence their choices about whom to collaborate with and whose work to trust.
One of the take-home lessons of this research, as we'll see, is that top-down policies meant to prevent (or at least discourage) scientific misconduct miss the mark:
While it is true that the definition of misconduct as FFP describes "actions that are unambiguous, easily documented, and deserving of stern sanctions" and offers the added benefit of being similar to definitions used in other countries, we believe that policies intended to reduce misconduct must be informed by what researchers see as behaviors that hamper the production of trustworthy science. (44)
As we'll discuss at length below, the authors of this paper suggest it would be better to direct policies toward addressing behaviors that worry scientists, not just the agencies funding their research. I would further venture that it will probably take more than sanctions to really address the worrisome behaviors. Scientific communities and their members will also need to be involved in encouraging good behavior and discouraging bad behavior.
De Vries et al. will also draw some lessons for those who make research ethics the focus of their own research:
[S]cientists' reports on the types and effects of misbehavior in research serve to highlight a blind spot in the field of research ethics. It is a curious fact of the organization of intellectual life that research ethicists focus largely on the protection of human and animal subjects of research; the behavior of researchers (apart from their direct treatment of subjects) has yet to capture their imagination. This lack of interest may be the result of the ordinariness of misbehavior; we were told by one researcher that study of the poor behavior of researchers is far less intellectually stimulating than the conundrums of consent and conflicts of interest. (44)
There's an interesting question lurking here: are people who make the study of research ethics the focus of their professional activities primarily directed towards following their own interests, or primarily directed at being a resource to those conducting scientific research?
If the former, they need only worry about which problems they find intellectually captivating -- but they should not be surprised or offended if working scientists proceed to ignore all of their scholarly output.
I'm willing to bet that most of the people who study research ethics for a living hope that at least some of their scholarly output will be of use to scientists trying to conduct responsible research. To the extent that members of the field are serious about their work being useful and relevant to scientific researchers, attending to the "normal misbehavior" -- and what consequences it might have for the scientific community and the body of knowledge it works to produce -- seems like a good idea.
Are scientists interested in the subject of research ethics? The response De Vries et al. got when recruiting subjects for their study suggests that they are:
More scientists than could be accommodated volunteered to participate. We restricted the groups to no more than 10 participants, and, in order to minimize participants' reluctance to discuss context-related issues, we constructed the groups in such a way that participants were from different academic departments. The groups represented considerable diversity in race/ethnicity, gender, and disciplinary affiliation. (44)
As I'm not a social scientist, I can only speculate about the methodological advantages of focus groups over questionnaires. My hunch is that focus groups work better for open-ended questions, and that interaction with other people in a focus group might make subjects more reflective than they would be whipping through a questionnaire.
Any social scientists want to weigh in on the pros and cons of focus groups?
It turns out that the focus group participants (and De Vries et al., in considering the information the focus groups have given them) latch onto one of the issues that I find most intellectually engaging, namely the interaction between the epistemic project of science and the ethical considerations guiding scientific activity:
[T]he everyday problems of scientists are often associated not just with ordinary human frailties, but with the difficulty of working on the frontier of knowledge. The use of new research techniques and the generation of new knowledge create difficult questions about the interpretation of data, the application of rules, and proper relationships with colleagues. Like other frontiersmen and -women, scientists are forced to improvise and negotiate standards of conduct. (44-45)
In other words, even if scientists were all on the fast-track to sainthood, they would still run into difficulties figuring out the right thing to do simply in virtue of the ambiguities inherent in building new scientific knowledge.
My guess is that, in most vibrant professional communities, community members are thinking (and talking) pretty regularly about what practices are good or bad from the point of view of their shared activity. That science is constantly pushing past the edge of what we know, trying to build good knowledge on the frontier, just means that the reflection and negotiation within the scientific community has to happen more frequently to keep the edges of the enterprise connected to the middle.
What specific issues did the scientists in the focus groups raise?
[Respondents] were troubled by problems with data that lie in what they see as a "gray area," problems that arise from being too busy or from the difficulty of finding the line between "cleaning" data and "cooking" data. (45)
The distinction between "cleaning" data and "cooking" data is important: cleaning means discarding measurements you have independent reasons to distrust, while cooking means discarding measurements because they don't give the answer you wanted.
Even when it's unclear what you've found and what it means, in these gray areas scientists seem to have a bit more clarity about the mindset a scientist should adopt, and the kinds of pitfalls she should try to avoid. In sorting out the experimental results, a scientist should try to be objective. She should make as sure as she can that her result is robust -- so she'd be surprised if another scientist couldn't reproduce it, not surprised if one could.
Objectivity is hard even in the best conditions. Some of the less-than-objective calls scientists make in gray areas may be motivated by the practical goal of getting something published. As one of the focus group members put it:
"Okay, you got the expected result three times on week one on the same preparation, and then you say, oh, great. And you go to publish it and the reviewer comes back and says, 'I want a clearer picture,' and you go and you redo it -- guess what, you can't replicate your own results. ... Do you go ahead and try to take that to a different journal ... or do you stop the publication altogether because you can't duplicate your own results? ... Was it false? Well, no, it wasn't false one week, but maybe I can't replicate it myself ... there are a lot of choices that are gray choices ... They're not really falsification." (45)
Presenting iffy data is not the same as making up data from whole cloth. Presenting iffy data as being solid, however, is at least misleading.
Undoubtedly, there is also something worrisome to the scientist making this questionable call about standing by the iffy data. Once your results are published, you know, or ought to know, that there may well be other scientists trying to reproduce those results. Those scientists have no personal stake in giving you the benefit of the doubt if your results don't hold up.
If you know you're presenting an overly rosy view of how robust the results are, you also know that you're taking a risk that others will expose the flakiness of those results.
Some of the focus group participants pointed to gray areas that are, arguably, grayer, including the problem of separating data from noise in systems no one else has studied:
How do scientists actually clean their data? They often rely on their experience, cleaning out unanticipated findings and preserving what they "know" they would find. (45)
We can't help being influenced by (indeed, guided by) what we think we already know. This makes it harder for a scientist to believe that unexpected results are reflections of how things really are, rather than the result of a faulty instrument, a contaminated culture, or the wrong reagent.
If the purpose of science is to generate new knowledge, the meaning of the new data generated in that quest will necessarily be difficult to discern, requiring interpretation, inference, the sorting of "good" and "bad" data, decisions about the use of controls and statistical data. Scientists are aware of the opportunity to slant the data that these decisions afford and they remain unclear about the best ways to make and report these decisions. (45)
I think it's worth noting that the scientists here aren't just concerned about peers who take advantage of the uncertainties. Rather, these focus group participants seem to want a firmer sense about how to make better decisions about ambiguous data. There might even be the will to have regular conversations within the scientific community about tough calls.
What's the alternative? The status quo, in which no one quite knows how other scientists made these calls, or how carefully they tried to avoid bias. This state of affairs makes scientist-to-scientist communication harder, both on the transmitting end and on the receiving end.
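To make the worry concrete, here's a toy sketch (with entirely made-up numbers, not drawn from the study) of how a "cleaning" rule chosen after seeing the data can slant a result toward the expected value:

```python
import statistics

# Hypothetical measurements; the experimenter "expects" a value near 10.0.
measurements = [9.8, 10.1, 9.9, 12.6, 10.2, 7.1, 10.0, 13.0]
expected = 10.0

raw_mean = statistics.mean(measurements)

# A "cleaning" rule adopted after looking at the data: discard anything
# more than 1.0 from the expected value. Because the cutoff is defined
# relative to the expectation, the cleaned mean is guaranteed to land
# near the expectation.
cleaned = [m for m in measurements if abs(m - expected) <= 1.0]
cleaned_mean = statistics.mean(cleaned)

print(f"raw mean:     {raw_mean:.2f}")      # 10.34
print(f"cleaned mean: {cleaned_mean:.2f}")  # 10.00
```

The rule looks like routine outlier removal, but since the threshold is anchored to the expected answer, it cannot help but confirm the expectation -- which is one way "cleaning" shades into "cooking."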
Another issue of concern to focus group participants was an excess of rules and regulations concerning various aspects of their research activities:
The work of scientists is increasingly governed by layers of rules intended to, among other things, protect animal and human subjects, to prevent misuse of grant funds, and to control the use of harmful materials. Our respondents noted that this plethora of rules -- many of which they find to be unnecessary and intrusive -- can actually generate misconduct. (45)
Humans love rules, don't they?
There's much we could discuss here, most of which I will defer so we actually make it all the way through this paper. Briefly, let me suggest that which rules count as unnecessary and intrusive is a matter of opinion.
Indeed, I wonder how many of the practices that scientists accept and take for granted at this stage of the game are accepted and taken for granted because of the prevailing rules and regulations. In the absence of these rules and regulations, would the behaviors of scientists all default to some agreed-upon standard position? If so, why? Maybe this would come about by way of pressure from members of the scientific community. But do we have good evidence that the scientific community would be active in exerting such pressure, or that this kind of pressure would be sufficient to prevent some of the outcomes the rules and regulations are meant to prevent?
And if, in the absence of rules, the behaviors of scientists did not all default to some agreed-upon standard position, would scientists be prepared to deal with a disgruntled public? Its members might take issue with scientists' use of animals and human subjects, their choices about disposing of toxic waste, and so forth, and holler for their tax dollars to fund something other than scientific research.
Being accountable to the public that funds scientific research has something to do with why the rules and regulations were imposed in the first place.
However, it doesn't sound like the focus group participants were arguing that scientists be released from all rules. The real issue for scientists in the study doesn't seem to be that the rules and regulations exist, but that they command so much time and attention that they crowd out discussions of more nuts-and-bolts issues involved in doing good science. As a focus group participant says:
"I think it's really unfortunate that there's so many rules about how we use radioactivity, how we use these animals, but there really aren't many guidelines that train scientists on how to design experiments, how to keep notebooks. And it's almost as if young scientists think that, 'Oh, this is a pain, you know, let's just do it and not think about it, and you're just pestering me and you're expecting too much.' And it's extremely frustrating as someone that's running a lab. "(46)
Federal regulations (and scary penalties for breaking them) may grab the attention of trainees. But trainees may end up seeing the guidance from the PIs training them as just more burdensome rules. Indeed, PIs may bear some of the responsibility for this if they are communicating the message that the rules from the feds and the institution are necessarily burdensome.
If you're already in a bad mood about the very existence of rules, you may be inclined to do the absolute minimum to comply with them -- or at least, to appear to be in compliance.
But given that the guidelines that are part of a good scientific training are important to making successful scientists, it seems worthwhile to develop a better strategy for conveying them. PIs can justify their own expectations, and the prevailing regulations, in terms of what they help scientists accomplish in their research, not in terms of the penalties that befall those who flout them.
In the next post, we'll look at what scientists had to say about life with colleagues and the pressures of production in science. We'll also look for the take-home message from this study.
Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson (2006). "Normal Misbehavior: Scientists Talk About the Ethics of Research." Journal of Empirical Research on Human Research Ethics 1(1), 43-50.