Archive for: July, 2011

Limits of ethical recycling.

In the "Ethics in Science" course I regularly teach, we spend some time discussing case studies to explore some of the situations students may encounter in their scientific training or careers where they will want to be able to make good ethical decisions.

A couple of these cases touch on the question of "recycling" pieces of old grant proposals or journal articles -- say, the background and literature review.

There seem to be cases where the right thing to do is pretty straightforward. For example, helping yourself to the background section someone else had written for her own grant proposal would be wrong. This would amount to misappropriating someone else's words and ideas without her permission and without giving her credit. (Plagiarism, anyone?) Plus, it would be weaseling out of one's own duty to actually read the relevant literature, develop a view about what it's saying, and communicate clearly why it matters in motivating the research being proposed.

Similarly, reusing one's own background section seems pretty clearly within the bounds of ethical behavior. You did the intellectual labor yourself, and especially in the case where you are revising and resubmitting your own proposal, there's no compelling reason for you to reinvent that particular wheel (unless, of course, reviewer comments indicate that the background section requires serious revision, the literature cited ought to take account of important recent developments that were missing in the first round, etc.).

Between these two extremes, my students happened upon a situation that seemed less clear-cut. How acceptable is it to recycle the background section (or experimental protocol, for that matter) from an old grant proposal you wrote in collaboration with someone else? Does it make a difference whether that old grant proposal was actually funded? Does it matter whether you are "more powerful" or "less powerful" (however you want to cash that out) within the collaboration? Does it require explicit permission from the person with whom you collaborated on the original proposal? Does it require clear citation of the intellectual contribution of the person with whom you collaborated on the original proposal, even if she is not officially a collaborator on the new proposal?

And, in your experience, does this kind of recycling make more sense than just sitting down and writing something new?

10 responses so far

A question for the trainees: How involved do you want the boss to get with your results?

This question follows on the heels of my recent discussion of the Bengü Sezen misconduct investigations, plus a conversation via Twitter that I recapped in the last post.

The background issue is that people -- even scientists, who are supposed always to be following the evidence wherever it might lead -- can run into trouble really scrutinizing the results of someone they trust (however that trust came about). Indeed, in the Sezen case, her graduate advisor at Columbia University, Dalibor Sames, seemed to trust Sezen and her scientific prowess so much that he discounted the results of other graduate students in his lab who could not replicate Sezen's results (which turned out to have been faked).

Really, there are two faces of the PI's trust here: trusting one trainee so much as to believe her results couldn't be wrong, and using that trust to ignore the empirical evidence presented by other trainees (who apparently didn't get the same level of presumptive trust). As it played out, at least three of those other trainees whose evidence Sames chose not to trust left the graduate program before earning their degrees.

The situation suggests to me that PIs would be prudent to establish environments in their research groups where researchers don't take scrutiny of their results, data, methods, etc., personally -- and where the scrutiny is applied to each member's results, data, methods, etc. (since anyone can make mistakes). But how do things play out when the rubber hits the road?

So, here's the question I'd like to ask the scientific trainees. (PIs: I've posed the complementary question to you in the post that went up right before this one!)

In his or her capacity as PI, your advisor's scientific credibility (and likely his or her name) is tied to all the results that come out of the research group -- whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. Moreover, in his or her capacity as a trainer of new scientists, the boss has something like a responsibility to make sure you know how to generate reliable results -- and that you know how to tell them from results that aren't reliable. What does your PI do to ensure that the results you generate are reliable? Do you feel like it's enough (both in terms of quality control and in terms of training you well)? Do you feel like it's too much?

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that's completely fine with me. However, please pick a unique 'nym and keep it for the duration of this discussion, so we're not in the position of trying to sort out which "Anonymous" is which. Also, if you're a regular commenter who wants to go pseudonymous for this discussion, you'll probably want to enter something other than your regular email address in the commenting form -- otherwise, your Gravatar may give your other identity away!

5 responses so far

A question for the PIs: How involved do you get in your trainees' results?

In the wake of this post that touched on recently released documents detailing investigations into Bengü Sezen's scientific misconduct, and that noted that a C & E News article described Sezen as a "master of deception", I had an interesting chat on the Twitters:

@UnstableIsotope (website) tweeted:

@geernst @docfreeride I scoff at the idea that Sezen was a master at deception. She lied a lot but plenty of opportunities to get caught.

@geernst (website) tweeted back:

@UnstableIsotope Maybe evasion is a more accurate word.

@UnstableIsotope:

@geernst I'd agree she was a master of evasion. But she was caught by other group members but sounds like advisor didn't want to believe it.

@docfreeride (that's me!):

@UnstableIsotope @geernst Possible that she was master of deception only in environment where people didn't guard against being deceived?

@UnstableIsotope:

@docfreeride @geernst I agree ppl didn't expect deception, my read suggests she was caught by group members but protected by advisor.

@UnstableIsotope:

@docfreeride @geernst The advisor certainly didn't expect deception and didn't encourage but didn't want to believe evidence

@docfreeride:

@UnstableIsotope @geernst Not wanting to believe the evidence strikes me as a bad fit with "being a scientist".

@UnstableIsotope:

@docfreeride @geernst Yes, but it is human. Not wanting to believe your amazing results are not amazing seems like a normal response to me.

@geernst:

@docfreeride @UnstableIsotope I agree. Difficult to separate scientific objectivity from personal feelings in those circumstances.

@docfreeride:

@geernst @UnstableIsotope But isn't this exactly the argument for not taking scrutiny of your results, data, methods personally?

@UnstableIsotope:

@docfreeride @geernst Definitely YES. I look forward to people repeating my experiments. I'm nervous if I have the only result.

@geernst:

@docfreeride @UnstableIsotope Couldn't agree more.

This conversation prompted a question I'd like to ask the PIs. (Trainees: I'm going to pose the complementary question to you in the very next post!)

In your capacity as PI, your scientific credibility (and likely your name) is tied to all the results that come out of your research group -- whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. What do you do to ensure that the results generated by your trainees are reliable?

Now, it may be the case that what you see as the appropriate level of involvement/quality control/"let me get up in your grill while you repeat that measurement for me" would still not have been enough to deter -- or to detect -- a brazen liar. If you want to talk about that in the comments, feel free.

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that's completely fine with me. However, please pick a unique 'nym and keep it for the duration of this discussion, so we're not in the position of trying to sort out which "Anonymous" is which. Also, if you're a regular commenter who wants to go pseudonymous for this discussion, you'll probably want to enter something other than your regular email address in the commenting form -- otherwise, your Gravatar may give your other identity away!

3 responses so far

What are honest scientists to do about a master of deception?

A new story posted at Chemical & Engineering News updates us on the fraud case of Bengü Sezen (whom we discussed here, here, and here at much earlier stages of the saga).

William G. Schultz notes that documents released (PDF) by the Department of Health and Human Services (which houses the Office of Research Integrity) detail some really brazen misconduct on Sezen's part in her doctoral dissertation at Columbia University and in at least three published papers.

From the article:

The documents—an investigative report from Columbia and HHS’s subsequent oversight findings—show a massive and sustained effort by Sezen over the course of more than a decade to dope experiments, manipulate and falsify NMR and elemental analysis research data, and create fictitious people and organizations to vouch for the reproducibility of her results. ...

A notice in the Nov. 29, 2010, Federal Register states that Sezen falsified, fabricated, and plagiarized research data in three papers and in her doctoral thesis. Some six papers that Sezen had coauthored with Columbia chemistry professor Dalibor Sames have been withdrawn by Sames because Sezen’s results could not be replicated. ...

By the time Sezen received a Ph.D. degree in chemistry in 2005, under the supervision of Sames, her fraudulent activity had reached a crescendo, according to the reports. Specifically, the reports detail how Sezen logged into NMR spectrometry equipment under the name of at least one former Sames group member, then merged NMR data and used correction fluid to create fake spectra showing her desired reaction products.

Apparently, her results were not reproducible because those trying to reproduce them lacked her "hand skills" with Liquid Paper.

Needless to say, this kind of behavior is tremendously detrimental to scientific communities trying to build a body of reliable knowledge about the world. Scientists are at risk of relying on published papers that are based in wishes (and lies) rather than actual empirical evidence, which can lead them down scientific blind alleys and waste their time and money. Journal editors devoted resources to moving her (made-up) papers through peer review, and then had to devote more resources to dealing with their retractions. Columbia University and the U.S. government got to spend a bunch of money investigating Sezen's wrongdoing -- the latter expenditures unlikely to endear scientific communities to an already skeptical public. Even within the research lab where Sezen, as a grad student, was concocting her fraudulent results, her labmates apparently wasted a lot of time trying to reproduce her results, questioning their own abilities when they couldn't.

And to my eye, one of the big problems in this case is that Sezen seems to have been the kind of person who projected confidence while lying her pants off:

The documents paint a picture of Sezen as a master of deception, a woman very much at ease with manipulating colleagues and supervisors alike to hide her fraudulent activity; a practiced liar who would defend the integrity of her research results in the face of all evidence to the contrary. Columbia has moved to revoke her Ph.D.

Worse, the reports document the toll on other young scientists who worked with Sezen: “Members of the [redacted] expended considerable time attempting to reproduce Respondent’s results. The Committee found that the wasted time and effort, and the onus of not being able to reproduce the work, had a severe negative impact on the graduate careers of three (3) of those students, two of whom [redacted] were asked to leave the [redacted] and one of whom decided to leave after her second year.”

In this matter, the reports echo sources from inside the Sames lab who spoke with C&EN under conditions of anonymity when the case first became public in 2006. These sources described Sezen as Sames’ “golden child,” a brilliant student favored by a mentor who believed that her intellect and laboratory acumen provoked the envy of others in his research group. They said it was hard to avoid the conclusion that Sames retaliated when other members of his group questioned the validity of Sezen’s work.

What I find striking here is that Sezen's vigorous defense of her own personal integrity was sufficient, at least for a while, to convince her mentor that those questioning the results were in the wrong -- not just incompetent to reproduce the work, but jealous and looking to cause trouble. And, it's deeply disappointing that this judgment may have been connected to the departure from the graduate program of the fellow students who raised those questions.

How could this have been avoided?

Maybe a useful strategy would have been to treat questions about the scientific work (including its reproducibility) first and foremost as questions about the scientific work.

Getting results that others cannot reproduce is not prima facie evidence that you're a cheater-pants. It may just mean that there was something weird going on with the equipment, or the reagents, or some other component of the experimental system when you did the experiment that yielded the exciting but hard-to-replicate results. Or, it may mean that the folks trying to replicate the results haven't quite mastered the technique (which, in the case that they are your colleagues in the lab, could be addressed by working with them on their technique). Or, it may mean that there's some other important variable in the system that you haven't identified as important and so have not worked out (or fully described) how to control.

In this case, of course, it's looking like the main reason that Sezen's results were not reproducible was that she made them up. But casting the failure to replicate presumptively as one scientist's mad skillz and unimpeachable integrity against another's didn't help get to the bottom of the scientific facts. It made the argument personal rather than putting the scientists involved on the same team in figuring out what was really going on with the scientific systems being studied.

Of all of the Mertonian norms imputed to the Tribe of Science, organized skepticism is probably the one nearest and dearest to most scientists' basic understanding of how they get the knowledge-building job done. Figuring out what's going on with particular phenomena in the world can be hard, not least because lining up solid evidence to support your conclusions requires identifying evidence that others trying to repeat your work can reliably obtain themselves. This is more than just a matter of making sure your results are robust. Rather, you want others to be able to reproduce your work so that you know you haven't fooled yourself.

Organized skepticism, in other words, should start at home.

There is a risk of being too skeptical of your own results, and a chance of dismissing something important as noise because it doesn't fit with what you expect to observe. However, the scientist who refuses to entertain the possibility that her work could be wrong -- indeed, who regards questions about the details of her work as a personal affront -- should raise a red flag for the rest of her scientific community, no matter what her career stage or her track record of brilliance to date.

In a world where every scientist's findings are recognized as being susceptible to error, the first response to questions about findings might be to go back to the phenomena together, helping each other to locate potential sources of error and to avoid them. In such a world, the master of deception trying to ride personal reputation (or good initial impressions) to avoid scrutiny of his or her work will have a much harder time getting traction.

16 responses so far

Evaluating scientific reports (and the reliability of the scientists reporting them).

One of the things scientific methodology has going for it (at least in theory) is a high degree of transparency. When scientists report findings to other scientists in the community (say, in a journal article), it is not enough for them to just report what they observed. They must give detailed specifications of the conditions in the field or in the lab -- just how they set up and ran that experiment, chose their sample, made their measurements. They must explain how they processed the raw data they collected, giving a justification for processing it this way. And, in drawing conclusions from their data, they must anticipate concerns that the data might have been due to something other than the phenomenon of interest, or that the measurements might better support an alternate conclusion, and answer those objections.

A key part of transparency in scientific communications is showing your work. In their reports, scientists are supposed to include enough detailed information so that other scientists could set up the same experiments, or could follow the inferential chain from raw data to processed data to conclusions and see if it holds up to scrutiny.

Of course, scientists try their best to apply hard-headed scrutiny to their own results before they send the manuscript to the journal editors, but the whole idea of peer review, and indeed the communication around a reported result that continues after publication, is that the scientific community exercises "organized skepticism" in order to discern which results are robust and reflective of the system under study rather than wishful thinking or laboratory flukes. If your goal is accurate information about the phenomenon you're studying, you recognize the value of hard questions from your scientific peers about your measurements and your inferences. Getting it right means catching your mistakes and making sure your conclusions are well grounded.

What sort of conclusions should we draw, then, when a scientist seems resistant to transparency, evasive in responding to concerns raised by peer reviewers, and indignant when mistakes are brought to light?

It's time to revisit the case of Stephen Pennycook and his research group at Oak Ridge National Laboratory. In an earlier post I mused on the saga of this lab's 1993 Nature paper [1] and its 2006 correction [2] (or "corrigendum" for the Latin fans), in light of allegations that the Pennycook group had manipulated data in another recent paper submitted to Nature Physics. (In addition to the coverage in the Boston Globe (PDF), the situation was discussed in a news article in Nature [3] and a Nature editorial [4].)

Now, it's time to consider the recently uploaded communication by J. Silcox and D. A. Muller (PDF) [5] that analyzes the corrigendum and argues that a retraction, not a correction, was called for.

It's worth noting that this communication was (according to a news story at Nature about how the U.S. Department of Energy handles scientific misconduct allegations [6]) submitted to Nature as a technical comment back in 2006 and accepted for publication "pending a reply by Pennycook." Five years later, uploading the technical comment makes some sense, since a communication that never sees the light of day doesn't do much to further scientific discussion.

Given the tangle of issues at stake here, we're going to pace ourselves. In this post, I lay out the broad details of Silcox and Muller's argument (drawing also on the online appendix to their communication) as to what the presented data show and what they do not show. In a follow-up post, my focus will be on what we can infer from the conduct of the authors of the disputed 1993 paper and 2006 corrigendum in their exchanges with peer reviewers, journal editors, and the scientific community. Then, I'll have at least one more post discussing the issues raised by the Nature news story and the related Nature editorial on the DOE's procedures for dealing with alleged misconduct [7].

Continue Reading »

3 responses so far

The economy might be getting better for someone ...

... but I daresay that "someone" is not the typical student at a public school or university in the state of California.

The recent news about the impact of the California State budget on the California State University system:

The 2011-12 budget will reduce state funding to the California State University by at least $650 million and proposes an additional mid-year cut of $100 million if state revenue forecasts are not met. A $650 million cut reduces General Fund support for the university to $2.1 billion and will represent a 23 percent year over year cut to the system. An additional cut of $100 million would reduce CSU funding to $2.0 billion and represent a 27 percent year-to-year reduction in state support.

“What was once unprecedented has unfortunately become normal, as for the second time in three years the CSU will be cut by well over $500 million,” said CSU Chancellor Charles B. Reed. “The magnitude of this cut, compounded with the uncertainty of the final amount of the reduction, will have negative impacts on the CSU long after this upcoming fiscal year has come and gone.”

The $2.1 billion in state funding allocated to the CSU in the 2011-12 budget will be the lowest level of state support the system has received since the 1998-99 fiscal year ($2.16 billion), and the university currently serves an additional 90,000 students. If the system is cut by an additional $100 million, state support would be at its lowest level since 1997-98.

Two immediate responses to these cuts will be to decrease enrollments (by about 10,000 students across the 23 campuses of the CSU system) and increase "fees" (what we call tuition, since originally the California Master Plan for Higher Education didn't include charging tuition, on the theory that educated Californians were some sort of public good worth supporting), yet again, by another $300 per semester or so.

"Why cut enrollments?" I hear some of you ask. Well, because the state still puts up a portion of the money required to actually educate each enrolled student (although that portion is now less than half of what the students must put up themselves). So 10,000 less students means 10,000 less "state's share" expenditures. And, short term, that's a saving for the tax payers. Long term, however, it may cost us.

Those students circling the tarmac, hoping to be admitted to the CSU (or University of California) system, are only going to cool their heels in community college for so long. (Plus, the community colleges are impacted by the decrease in transfer slots due to slashed enrollments, and have had their budgets cut because of the state's fiscal apocalypse.) At a certain point, many of them will give up on earning college degrees, or will give up on earning them in California. And if the place where they earn those college degrees is less enthusiastic about slashing education budgets to the bone, these erstwhile Californians may well judge it prudent to put down roots there, since it will make it easier to secure a good education for their offspring or partners, or a good continuing education for themselves.

I do not imagine a brain drain would do much to help California's economy recover.

In possibly related "what is the deal with our public schools?!" news, the elder Free-Ride offspring will be starting junior high (which, in our district, includes seventh and eighth grades) in the fall. The junior high school day consists of just enough periods for English, math, science, social studies, lunch, and one elective.* The elective choices include things like wood shop, or home economics, or band, or a foreign language. But unless your child has mastered bilocation, there is no option to take French and band, or mechanical drawing and Mandarin. Plus, school is out at like 2:15 PM -- well before the standard 9-to-5 workday is over. Of course, this doesn't take into account how many parents work more than eight hours a day (and may be hesitant to complain about it because at least they still have jobs) or how much time they have to spend commuting to and from those jobs. The bottom line seems to be that the public is unwilling to fund more than five academic periods per day of junior high. The public doesn't even appreciate the utility of keeping the young people off the streets until 3 PM.

Verily, I suspect the only thing holding us back from abolishing child labor laws is that the additional infusion of labor would make our unemployment numbers worse, which would rather undermine the narrative that the economy is turning a corner to happy days.

This lack of progress addressing the budgetary impacts on education -- indeed, this apparent willingness to believe that education shouldn't actually cost money to provide -- makes me a big old crankypants.

* There is probably also some provision for physical education, because there is still something like a state requirement that there be physical education.

6 responses so far
