Archive for the 'Methodology' category

SPSP 2013 Contributed Papers: Computation and Simulation

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 29, 2013, during Concurrent Sessions VII

  1. First up, Catherine Stinson, "Computational models as experimental systems" #SPSP2013 #SPSP2013Toronto

SPSP 2013 Plenary session #4: Sergio Sismondo

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 29, 2013.

  1. Last plenary of conference: Sergio Sismondo, "Toward a political economy of epistemic things," starts in ~10 min #SPSP2013 #SPSP2013Toronto
  2. Knowledge as a quasi-substance (takes work, resources to make; requires infrastructure; moves w/ difficulty) #SPSP2013 #SPSP2013Toronto

SPSP 2013 Contributed Papers: Communities & Institutions: Objectivity, Equality, & Trust

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 28, 2013, during Concurrent Sessions VI

  1. This was a session, by the way, in which it was necessary to confront my limitations as a conference live-tweeter. The session was in a room where the only available electrical outlets were at the front (where the speakers were), and my battery was rapidly running out of juice.  And my right shoulder was seizing up.  And I ended up in Twitter Jail (for "too many tweets today!" per Twitter's proprietary algorithm), which meant that the last chunk of tweets I composed for the second talk got pasted into a text file and tweeted hours later, while my notes for the third talk in the session went into my quad-ruled notebook.

    With multiple live-tweeters in a given session, this trifecta of fail (in my tweeting -- the session papers were a trifecta of good stuff!) would have been less traumatic for me.  But philosophers are not quite as keen to live-tweet as, say, ScienceOnline attendees ... yet.
    There was, however, a bit of backup! Christine James was driving SPSP's shiny new Twitter account, SocPhilSciPract, and she happened to choose the same session of contributed papers to attend and to tweet. She also tweeted some pictures.
SPSP 2013 Symposium S10: Talking Junk about Transposons: Levels of selection and conceptions of functionality in...

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 28, 2013, during Concurrent Sessions V

  1. As an audience member in this session, I had much less relevant background knowledge than I did in some others.  But I was pretty aware, from goings on in the science blogosphere, that there has been some amount of disagreement about what to say about "junk DNA," the ENCODE project's findings, and the coverage of it all by science journalists.
  2. Waiting for Symposium on "Talking Junk abt Transposons: Levels of Selection & conceptions of functionality..." #SPSP2013 #SPSP2013Toronto
  3. First up: T. Ryan Gregory, "Junk and the genome" #SPSP2013 #SPSP2013Toronto

SPSP 2013 Contributed Papers: Explanation in the Biological Sciences

Jun 28 2013 Published by under Biology, Conferences, Methodology, Philosophy

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 27, 2013, during Concurrent Sessions III

  1. Again, I had to make a choice about which of four sessions to attend, and this one drew me in.

    You might ask, "What happened to Concurrent Sessions II?"
  2. I know my multi-tasking limits, yo!
  3. On deck: session of contributed papers on explanation in the biological sciences. #SPSP2013 #SPSP2013Toronto
  4. First up: Ingo Brigandt, "Systems biology & the limits of philosophical accounts of mechanistic explanation" #SPSP2013 #SPSP2013Toronto

SPSP 2013 Symposium S1: De-idealization in the Sciences

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 27, 2013, during Concurrent Sessions I

  1. The concurrent sessions required a choice (from five very attractive options).
  2. Just about to start: Symposium on "De-idealization in the Sciences" #SPSP2013 #SPSP2013Toronto
  3. Lots of discussions in literature of idealization, not enough of de-idealization (making models more realistic) #SPSP2013 #SPSP2013Toronto
  4. What are the strategies, processes of de-idealization? The session will look at practices to see ... #SPSP2013 #SPSP2013Toronto
  5. First up: Mieke Boon, "Idealization & de-idealization as an epistemic strategy in experimental practices" #SPSP2013 #SPSP2013Toronto

SPSP 2013 Plenary session #1: Ian Hacking

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 27, 2013

  1. Getting ready for 1st plenary session of #SPSP2013 "Some roles of mathematics in some scientific practices" by Ian Hacking
  2. Getting ready for 1st plenary session of #SPSP2013Toronto "Some roles of mathematics in some scientific practices" by Ian Hacking #BetterTag
  3. Actual title of Ian Hacking's talk: "Some roles of some mathematics in some scientific practices" (Maybe some summing?) #SPSP2013Toronto

The case study protagonist as unreliable narrator.

Even though it seems like my semester just started, I'm already grading the first batch of case study responses from my "Ethics in Science" students. (Students, if you're reading this: I'm quite happy with how the class is doing! You'll get detailed feedback on your response by the end of the week.)

In case you're not familiar with case studies in the context of an ethics class, they usually consist of a brief description of a situation in which a protagonist is trying to make a decision about what to do. I ask my students to look at this description and identify who has a stake in what the protagonist does (or doesn't do); what consequences, good or bad, might flow from the various courses of action available to the protagonist; to whom the protagonist has obligations that will be satisfied or ignored by his or her action; and how the relevant obligations and interests pull the protagonist in different directions as he or she tries to make the best decision. On the basis of these details, I ask my students to choose a course of action for the protagonist and explain why it's an ethical course of action.

But here's something that makes the analysis difficult for the students: often it's hard to pin down the facts of the case with certainty. The scenario is described from the protagonist's point of view. It seems to the protagonist that there's favoritism in the lab group, or that it's obvious why some of the measurements turned out the way they did, or that a colleague is going to react a particular way if a concern is brought to that colleague's attention. However, as my students have been quick to notice in their discussions of the cases, what seems true to the protagonist might be false. For any number of reasons, the protagonist may have a skewed perspective on what's going on in other people's minds, on what the issues are with the experiment, even on his or her own competence.

The protagonist, in other words, could be an unreliable narrator.

Making a good ethical decision is easier when you can pin down all the relevant facts (including things like what future events would flow from the protagonist's various courses of action). But, as in real life, the case studies with which we ask our students to grapple have a lot of uncertainty built in. Postponing a decision about what to do until all the facts are in just isn't a practical option. Sometimes you do the best you can with knowledge you recognize is gappy.

Indeed, one of the big reasons I try to get my students to understand discussion as a valuable part of ethical decision-making is that, left to our own devices, each of us can be just as unreliable a narrator as the protagonist of the case study we're thinking through. The protagonist suspects favoritism. We suspect jealousy. Maybe the protagonist is wrong, but maybe the protagonist is right and we're wrong instead. Given the state of our knowledge of the world, we don't want to lean on ethical decision-making strategies that require us to guess correctly about all of the unknowns.

The moral of the story is assuredly not the "there are no wrong answers" crap that humanities professors get from their naïve undergraduates. Instead, it's that taking account of other people's perspectives may be useful in helping us gain some critical distance on our own (and on the ways it might turn out to be wrong). Also, it's that an ethical course of action might require some active fact-finding to test whether one's perceptions in a situation are reliable before acting rashly on the assumption that they are.

* * * * *
Related posts:

The value of (unrealistic) case studies in ethics education.

Some ethical decisions are not that hard: thoughts on Joe Paterno.

Question for the hivemind: workplace policies and MYOB.

Passion quilt: a meme for teachers.

GRE scores and other tools to evaluate people for lab positions.

In the last 24 hours there has been an interesting conversation on the Twitters (with contributions from @drugmonkeyblog, @CackleofRad, @mbeisen, @Namnezia, @dr_leigh, @doc_becca, @GertyZ, @superkash, @chemjobber, @DoctorZen, and a bunch of other folks) on the value of standardized tests (like the GRE) in evaluating candidates for a lab position.

The central question at issue seems to be whether GRE scores are meaningful or meaningless in identifying some quality in the candidate that is essential for (or maybe reliably predictive of) success in the environment of an academic lab. And, it's worth noting that the conversation has not been framed in terms of using GRE scores as the only piece of evidence one has about applicants. Rather, it's been about the reliability of GRE scores as a predictor compared to college transcripts, letters of recommendation, personal essays, and the like.

I have thoughts about this issue, thoughts which are informed by:

  • my teaching experiences
  • my own experiences with the SAT and the GRE (I aced them)
  • my own experiences doing research in four different lab settings (three of them while I was an undergraduate)
  • my experiences teaching test preparation courses (for SAT I, SAT II, and MCAT)
  • my experiences as the graduate student representative on a graduate admissions committee (albeit not for a science department)
  • my experiences on hiring committees (where GRE scores weren't an issue but things like letters of recommendation, grades, and personal statements were)
  • broader ongoing conversations with colleagues about the challenges of finding reliable proxies with which to assess the success of our educational efforts.

What I have observed from these:

  1. There are extremely smart, capable people with severe test-anxiety. I'm talking puking-at-the-very-thought-of-sitting-for-the-test anxiety. The people I've known with this manifest it most strongly when faced with standardized tests; generally they've found ways to deal with the other kinds of exams that are part of their schooling. I doubt that GRE scores would be reliable indicators of the fitness of such people for a position in an academic lab, unless that position involved taking standardized tests on a regular basis.
  2. My own success on standardized tests is mostly a measure of how well I understood the structure of those standardized tests. This is a lesson that was reinforced by my experience teaching others how to do better on standardized tests. I did not make my test prep students smarter about much of anything except strategies for taking the standardized tests. (In a few instances, my work with them may have helped them identify conceptual issues or problem solving skills that they needed to sharpen before test day, but again, I take it the "help" they got was primarily a matter of knowing what material and skills the test was going to assess.) Is understanding the structure of the GRE, or developing a good strategy for taking it, a crucial component of success in an academic lab? Probably not. Is it a reliable proxy for something that is? Maybe, but it would be nice to see an explanation of what that is rather than just putting our faith in the test to tell us about something that matters.
  3. Plenty of people with awesome test scores are hopeless in the lab. Plenty of people with non-awesome test scores are really successful in the lab. What's the level of correlation? I don't know, and you probably don't either. Maybe someone should do an empirical study so we know.
  4. One place that standardized tests seem to be of use (or so I've heard repeatedly over the years from lots of admissions committee folks) is in "calibrating" grades, especially those from schools with which one might have less familiarity. What does an A at Podunk U. mean compared to an A at Well-Known Tech? Presumably the GRE scores of the candidates give us some information (so, if they're really low from the Podunk U. student, maybe Podunk U.'s As aren't requiring the same level of mastery as Well-Known Tech's As). But, there's always the possibility that Well-Known Tech has a better developed organization from the point of view of getting its students into grad school, and that part of this might include in-house test prep. Also, what if the lone Podunk U. student who is applying to your program has test-anxiety?
  5. GRE scores are often thought of as an objective counterbalance to letters of recommendation because, as the common wisdom has it, letter writers lie. Or maybe they just put the best possible spin on the candidate's talents. Or maybe they're actually just overestimating the candidate's potential. Or maybe they don't write good enough letters for the students who are not like them in certain relevant respects (including scientific style, socioeconomic background, gender, race, sexuality, etc.). Surely, in many cases there is something like a positive bias in letters of recommendation (and some faculty will advise students to ask someone else for a letter if they themselves are unable to write a glowing recommendation). And, there are instances in which a letter writer will undervalue the talents and potential of students (although one hopes that the other letter writers in such cases will compensate). Still, the letters at least present a space in which actual concrete examples of the student's awesomeness (or shortcomings) can be discussed. Some of these examples may touch on situations or challenges directly relevant to what the applicants may have to face in the academic lab in which they are seeking a position. Plus, at least in fields that are not totally enormous, there is (or could be) a professional cost to lying to a colleague in the profession, even in a letter of recommendation for a student.
  6. If I had to rely on just one proxy, it would be the applicant's personal statement. Again, it strikes me that this is an instrument that creates a space where an applicant can describe past experiences and current interests, challenges overcome and lessons learned from them that might be applied to future challenges. A personal statement can give you a glimpse into what the applicant cares about and why. It can also give you a sense of whether the applicant can think and communicate clearly. However, this is probably another area where someone should do some empirical work to see what kind of correlation there actually is between the quality of the personal statement and the success of the applicant in the position for which the personal statement was part of the application package.
  7. Every single proxy we might look at to select among applicants can fail. It's not clear to me that it could be otherwise, especially given that we're using the proxies to try to predict future success, which you can't do with perfect accuracy unless you have a machine for seeing into the future (and even then ...).
  8. It strikes me that active thinking-on-your-feet interview questions might provide more relevant information. It used to be that you couldn't really use these for things like grad school admission because you couldn't afford to fly all your applicants out to campus. (By the time you saw prospective grad students, they were admits trying to choose between the programs that had accepted them.) But maybe now with tools like Skype those looking to make sensible choices among applicants should do some video interviewing?
  9. Then again, if video interview questions for lab positions become a thing, someone will probably set up a video interview preparation company.

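Point 3 above raises an empirical question: how strongly do GRE scores actually correlate with success in the lab? If someone did collect paired data, the correlation itself is cheap to compute. Here is a minimal sketch in Python, using only the standard library; the numbers are invented placeholders, not real data:

```python
# Hypothetical sketch: Pearson correlation between GRE scores and some
# measure of lab success. All data below are made up for illustration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented data: GRE quantitative scores and a (hypothetical) 1-10
# rating of each person's eventual success in the lab.
gre_scores = [160, 155, 170, 148, 165, 152, 158]
lab_success = [7, 8, 6, 5, 9, 7, 6]

print(pearson_r(gre_scores, lab_success))
```

Of course, a real study would also need a defensible outcome measure, a decent sample size, and a significance test; the point is only that the calculation itself is not the hard part once the data exist.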
Yeah, I'd say to take GRE scores with a grain of salt. But, I think that's the right attitude to take toward all the bits of evidence an applicant presents. Honestly, my attitude toward test scores probably has a lot to do with my knowledge of how easy it can be to do well on them (at least compared to the other pieces of one's application package). It probably also has to do with having encountered at least a few gatekeepers who treated GRE scores as more reliable simply because they were quantitative rather than qualitative.

If you have an applicant-screening item that has never led you astray, please share it in the comments.

Scientific authorship: guests, courtesy, contributions, and harms.

DrugMonkey asks, where's the harm in adding a "courtesy author" (also known as a "guest author") to the author line of a scientific paper?

I think this question has interesting ethical dimensions, but before we get into those, we need to say a little bit about what's going on with authorship of scientific papers.

I suppose there are possible worlds in which who is responsible for what in a scientific paper might not matter. In the world we live in now, however, it's useful to know who designed the experimental apparatus and got the reaction to work (so you can email that person your questions when you want to set up a similar system), who did the data analysis (so you can share your concerns about the methodology), who made the figures (so you can raise concerns about digital fudging of the images), etc. Part of the reason people put their names on scientific papers is so we know who stands behind the research -- who is willing to stake their reputation on it.

The other reason people put their names on scientific papers is to claim credit for their hard work and their insights, their contribution to the larger project of scientific knowledge-building. If you made a contribution, the scientific community ought to know about it so they can give you props (and funding, and tenure, and the occasional Nobel Prize).

But, we aren't in a position to make accurate assignments of credit or responsibility if we have no good information about what an author's actual involvement in the project may have been. We don't know who's really in a position to vouch for the data, or who really did the heavy intellectual lifting in bringing the project to fruition. We may understand, literally, the claim, "Joe Schmoe is second author of this paper," but we don't know what that means, exactly.

I should note that there is not one universally recognized authorship standard for all of the Tribe of Science. Rather, different scientific disciplines (and subdisciplines) have different practices as far as what kind of contribution is recognized as worthy of inclusion as an author on a paper, and as far as what the order in which the authors are listed is supposed to communicate about the magnitude of each contribution. In some fields, authors are always listed alphabetically, no matter what they contributed. In others, being first in the list means you made the biggest contribution, followed by the second author (who made the second-biggest contribution), and so forth. It is usually the case that the principal investigator (PI) is identified as the "corresponding author" (i.e., the person to whom questions about the work should be directed), and often (but not always) the PI takes the last slot in the author line. Sometimes this is an acknowledgement that while the PI is the brains of the lab's scientific empire, particular underlings made more immediately important intellectual contributions to the particular piece of research the paper is communicating. But authorship practices can be surprisingly local. Not only do different fields do it differently, but different research groups in the same field -- at the same university -- do it differently. What this means is it's not obvious at all, from the fact that your name appears as one of the authors of a paper, what your contribution to the project was.

There have been attempts to nail down explicit standards for what kinds of contributions should count for authorship, with the ICMJE definition of authorship being one widely cited effort in this direction. Not everyone in the Tribe of Science, or even in the subset of the tribe that publishes in biomedical journals, thinks this definition draws the lines in the right places, but the fact that journal editors grapple with formulating such standards suggests at least the perception that scientists need a clear way to figure out who is responsible for the scientific work in the literature. We can have a discussion about how to make that clearer, but we have to acknowledge that at the present moment, just noting that someone is an author without some definition of what that entails doesn't do the job.

Here's where the issue of "guest authorship" comes up. A "guest author" is someone whose name appears in a scientific paper's author line even though she has not made a contribution that is enough (under whatever set of standards one recognizes for proper authorship) to qualify her as an author of the paper.

A guest is someone who is visiting. She doesn't really live here, but stays because of the courtesy and forbearance of the host. She eats your food, sleeps under your roof, uses your hot water, watches your TV -- in short, she avails herself of the amenities the host provides. She doesn't pay the rent or the water bill, though; that would transform her from a guest to a tenant.

To my way of thinking, a guest author is someone who is "just visiting" the project being written up. Rather than doing the heavy lifting in that project, she is availing herself of the amenities offered by association (in print) with that project, and doing so because of the courtesy and forbearance of the "host" author.

The people who are actually a part of the project will generally be able to recognize the guest author as a "guest" (as opposed to an actual participant). The people receiving the manuscript will not. In other words, the main amenity the guest author partakes in is credit for the labors of the actual participants. Even if all the participants agreed to this (and didn't feel the least bit put out at the free-rider whose "authorship" might be diluting his or her own share of credit), this makes it impossible for those outside the group to determine what the guest author's actual contribution was (or, in this case, was not). Indeed, if people outside the arrangement could tell that the guest author was a free-rider, there wouldn't be any point in guest authorship.

Science strives to be a fact-based enterprise. Truthful communication is essential, and the ability to connect bits of knowledge to the people who contributed is part of how the community does quality control on that knowledge base. Ambiguity about who made the knowledge may lead to ambiguity about what we know. Also, developing too casual a relationship with the truth seems like a dangerous habit for a scientist to get into.

Coming back to DrugMonkey's question about whether courtesy authorship is a problem, it looks to me like maybe we can draw a line between two kinds of "guests," one that contributes nothing at all to the actual design, execution, evaluation, or communication of the research, and one who contributes something here, just less than what the conventions require for proper authorship. If these characters were listed as authors on a paper, I'd be inclined to call the first one a "guest author" and the second a "courtesy author" in an attempt to keep them straight; the cases with which DrugMonkey seems most concerned are the "courtesy authors" in my taxonomy. In actual usage, however, the two labels seem to be more or less interchangeable. Naturally, this makes it harder to distinguish who actually did what -- but it strikes me that this is just the kind of ambiguity people are counting on when they include a "guest author" or "courtesy author" in the first place.

What's the harm?

Consider a case where the PI of a research group insists on giving authorship of a paper to a postdoc who hasn't gotten his experimental system to work at all and is almost out of funding. The PI gives the justification that "He needs some first-author papers or his time here will have been a total waste." As it happens, giving this postdoc authorship bumps the graduate student who did all the experimental work (and the conceptual work, and data analysis, and drafting of the manuscript) out of first author slot -- maybe even off the paper entirely.

There is real harm here, to multiple parties. In this case, someone got robbed of appropriate credit, and the person identified as most responsible for the published work will be a not-very-useful person to contact with deeper questions about the work (since he didn't do any of it or at best participated on the periphery of the project).

Consider another kind of case, where authorship is given to a well-known scientist with a lot of credibility in his field, but who didn't make a significant intellectual contribution to the work (at least, not one that rises to the level of meriting authorship under the recognized standards). This is the kind of courtesy authorship that was extended to Gerald Schatten in a 2005 paper in Science whose authors included Hwang Woo Suk. This paper listed 25 authors, with Schatten identified as the senior author. Ultimately, the paper was revealed to be fraudulent, at which point Schatten claimed mostly to have helped write the paper in good English -- a contribution recognized as less than what one would expect from an author (especially the senior author).

Here, including Schatten as an author seemed calculated to give the appearance (to the journal editors considering the manuscript, and to the larger scientific community consuming the published work) that the work was more important and/or credible, because of the big name associated with it. But this would only work because listing that big name in the author line amounts to claiming the big name was actually involved in the work. When the paper fell apart, Schatten swiftly disavowed responsibility -- but such a disavowal was only necessary because of what was communicated by the author line, and I think it's naïve to imagine that this "ambiguity" or "miscommunication" was accidental.

In cases like this, I think it's fair to say courtesy authorship does harm, undermining the baseline of trust in the scientific community. It's hard to engage in efficient knowledge-building with people you think are trying to put one over on you.

The cases where DrugMonkey suggests courtesy authorship might be innocuous strike me as interestingly different. They are cases where someone has actually made a real contribution of some sort to the work, but where that contribution may be judged (under whatever you take to be the accepted standards of your scientific discipline) as not quite rising to the level of authorship. Here, courtesy authorship could be viewed as inflating the value of the actual contribution (by listing the person who made it in the author line, rather than the acknowledgements), or alternatively as challenging where the accepted standards of your discipline draw the line between a contribution that qualifies you as an author and one that does not. For example, DrugMonkey writes:

First, the exclusion of those who "merely" collect data is stupid to me. I'm not going to go into the chapter and verse but in my lab, anyway, there is a LOT of ongoing trouble shooting and refining of the methods in any study. It is very rare that I would have a paper's worth of data generated by my techs or trainees and that they would have zero intellectual contribution. Given this, the asymmetry in the BMJ position is unfair. In essence it permits a lab head to be an author using data which s/he did not collect and maybe could not collect but excludes the technician who didn't happen to contribute to the drafting of the manuscript. That doesn't make sense to me. The paper wouldn't have happened without both of the contributions.

I agree with DrugMonkey that there's often a serious intellectual contribution involved in conducting the experiments, not just in designing them (and that without the data, all we have are interesting hunches, not actual scientific knowledge, to report). Existing authorship standards like those from ICMJE or BMJ can unfairly exclude those who do the experimental labor from authorship by failing to recognize this as an intellectual contribution. Pushing to have these real contributions recognized with appropriate career credit is important. As well, being explicit about who made these contributions to the research being reported in the paper makes it much easier for other scientists following up on the published work (e.g., comparing it to their own results in related experiments, or trying to use some of the techniques described in the paper to set up new experiments) to actually get in touch with the people most likely to be able to answer their questions.

Changing how much weight experimental prowess is given in the career scorekeeping may be an uphill battle, especially when the folks distributing the rewards for the top scores are administrators (focused on the money the people they're scoring can bring to an institution) and PIs (who frequently devote more of their working hours to the conception and design of projects for their underlings than to the intellectual labor of making those projects work, and to writing the proposals that bring in the grant money and the manuscripts that report the happy conclusion of the projects funded by such grants). That doesn't mean it's not a fight worth having.

But, I worry that using courtesy authorship as a way around this unfair setting of the authorship bar actually amounts to avoiding the fight rather than addressing these issues and changing accepted practices.

DrugMonkey also writes:

Assuming that we are not talking about pushing someone else meaningfully* out of deserved credit, where lies the harm even if it is a total gift?

Who is hurt? How are they damaged?
*by pushing them off the paper entirely or out of first-author or last-author position. Adding a 7th in the middle of the authorship list doesn't affect jack squat folks.

Here, I wonder: if dropping in a courtesy author as the seventh author of a paper can't hurt, how either can we expect it to help the person to whom this "courtesy" is extended?

Is it the case that no one actually expects that the seventh author made anything like a significant contribution, so no one is being misled in judging the guest in the number seven slot as having made a comparable contribution to the scientist who earned her seventh-author position in another paper? If listing your seventh-author paper on your CV is automatically viewed as not contributing any points in your career scorekeeping, why even list it? And why doesn't it count for anything? Is it because the seventh author never makes a contribution worth career points ... or is it because, for all we know, the seventh author may be a courtesy author, there for other reasons entirely?

If a seventh-author paper is actually meaningless for career credit, wouldn't it be more help to the person to whom you might extend such a "courtesy" if you actually engaged her in the project in such a way that she could make an intellectual contribution recognized as worthy of career credit?

In other words, maybe the real problem with such courtesy authorship is that it gives the appearance of help without actually being helpful.

(Cross-posted at Doing Good Science)
