Archive for the 'Ethics 101' category

Some modest proposals in the wake of Colin McGinn's exit from University of Miami.

More than you could possibly want to read about this case has been posted by the folks you should already be reading to stay up on happenings in the world of academic philosophy: Leiter (here and here), New APPS (here, here, here, and here), Feminist Philosophers (here), The Philosophy Smoker (here).

At issue is whether it is (always) wrong for a professor to send email to his graduate student research associate mentioning that he was thinking about her while masturbating.

I take it as a mark of how deeply messed up the moral compass of professional philosophy is that there are commenters at some of the blogs linked above who seem willing to go to the mat to argue that there may be conditions in which it is acceptable to email your RA that you were thinking about her during your hand-job. Because personal interactions are hard, y'all! And power-gradients in graduate programs that are at once educational environments and workplaces are really very insignificant compared to what the flesh wants! Or something.

Since, apparently, treating graduate students as colleagues in training rather than wank-fodder is very complicated and confusing for people who are purportedly very smart indeed, I'd like to propose ways to make life easier:

1. Let's make it an official rule that professors should NEVER email students, staff, colleagues, supervisors, program officers, et al. ANYTHING mentioning their masturbatory activities or the thoughts that pass through their heads during such activities. I would have thought this is just common sense, but apparently it isn't, so make it a bright line. If you're not able to follow the explicit rule, you probably don't have the chops to handle the more subtly challenging duties of the professoriate.

Anyone who wants to hear about what you're thinking while you're masturbating is either treating you within a therapeutic relationship or someone with whom you're in a position to share a pillow. Just take as given that no one else wants to know.

2. Don't try to date your (department's) students. I don't care if your institution doesn't explicitly forbid it (and honestly, I expect philosophy professors to recognize the difference between "it's not against the rules" and "it's ethical and prudent"). JUST DON'T. It's a risky call, especially for the student. (I have read letters of recommendation for applicants to academic jobs written by the thesis-supervisor-who-dated-the-applicant-until-they-broke-up. In a crowded job market, it's not a good look.)

What about love? If it's real, it will keep until the student is no longer a student. What, you say it's the student pestering you for a relationship? Say no! You can say no to other unreasonable requests from students, can't you? If not, again, maybe the professoriate is not for you.

Really, this should be enough.

And, for the record, having been on the receiving end of unwelcome behavior in philosophy (among other professional communities), I do not for a minute believe that such incidents are a matter of social ineptness or inability to read cues. Rather, a more plausible hypothesis (and one that usually has a great deal of contextual evidence supporting it in particular cases) is that the people dishing out such behavior simply don't care how it makes the targets of the behavior feel -- or worse, that they're intentionally trying to make their targets feel uncomfortable and powerless.

Spending too much time trying to find the possible world in which jerk behavior is OK simply gives the jerks in this world cover to keep operating. We should cut that out.


Reasonable reactions to kids messing up in dangerous ways.

Kiera Wilmot, a 16-year-old Florida high school student, was expelled from her high school last week for mixing toilet bowl cleaner and aluminum foil in a plastic bottle on school grounds, creating some smoke and enough pressure to pop the cap off the bottle.

No property was damaged. No one was hurt.

Kiera described what she was doing as "conducting a science experiment" while the police described it as "possession/discharge of a weapon on school property and discharging a destructive device" -- both felonies.

While there has been a general increase in "zero tolerance" enforcement of policies by school systems, it is maybe not unimportant to the reaction in this case that Kiera Wilmot is African American. (For more on that, check out DNLee's post and the discussion at Black Skeptics.)

Schools, obviously, have reasonable concerns about students doing "freelance science" on school grounds, without supervision and without sufficient attention to issues like safety. And, there are sensible arguments that messing around with science (even the explode-y kind) outside the constraints of a lesson plan (and the inevitable standardized test question that follows upon that lesson plan) is precisely the kind of formative experience that gets kids interested enough in science to pursue that interest in their formal schooling. There's a challenge in finding the middle ground -- the circumstances where kids can get excited and take chances and discover things without doing permanent damage to themselves, others, or school property. In olden times, when I was in high school, some of our teachers managed to create conditions like these in the classroom. I don't even know if that would be possible anymore.

Meanwhile, we desperately need to figure out how not to read a 16-year-old's momentary lapse of judgment as a sign that she is a criminal, or a dangerous person to have in the classroom alongside other 16-year-olds whose lapses have not (yet) been so publicly observable. Smart kids -- good kids -- sometimes make decisions with less thought than they should about the potential consequences. Imposing draconian consequences on them isn't necessarily the only way to get them to be more mindful of consequences in the future.

My thoughts on this kind of case are made complex by very slight personal involvement with a similar case almost 20 years ago. In 1994, I lived near Gunn High School in Palo Alto, where a "senior prank" in the center quad led to an explosion, a 15-foot plume of fire, and eighteen injured people, including two students seriously injured with second and third degree burns. The three seniors who confessed said they were trying to make a smoke bomb, but they had gotten it wrong. They all pled guilty to one felony count, were placed on probation, then had their felonies reduced to misdemeanors after they met particular conditions. They also faced a civil lawsuit brought on behalf of the injured students.

And, if memory serves, at least one of the students had his admission offer at an elite private college revoked.

I know this because I was teaching chemistry courses at a nearby community college that summer and the following fall, and one of the "mad bombers" (as they were being called in town) was my student. He was a good student, smart, engaged in the lessons, and hard working. In the laboratory, he took greater care than most of the other students to understand how to do the assigned experiments safely.

He wasn't, when I knew him, someone who seemed reckless with the welfare of the people around him. He definitely didn't seem like a kid looking to get into more trouble. He seemed affected by how wrong the prank had gone, and he gave the impression of having internalized some serious lessons from it.

None of this is to argue that he or the others shouldn't have been punished. They harmed their fellow students, some of them quite seriously, and the civil suits struck me as completely appropriate.

But approaching kids who mess up -- even quite badly -- as irredeemably bad kids (or, worse, as bad kids treated as adults for the purposes of prosecution) just doesn't fit with the actual kid I knew. And, possibly, going too far with the penalties imposed on kids who mess up is the kind of thing that might turn them into irredeemable cases, rather than giving them a chance to make things right, learn from their screw ups, and then go on to become grown-ups who make better decisions and positive contributions to our world.


I don't know and I don't care: ignorance, apathy, and reactions to exposure of bad behavior.

I've already shared some thoughts (here and here) on the Adria Richards/PyCon jokers case, and have gotten the sense that a lot of people want to have a detailed conversation about naming-and-shaming (or calling attention to a problematic behavior in the hopes that it will be addressed -- the lack of a rhyme obviously makes this more careful description of what I have in mind less catchy) as a tactic.

In this post, I want to consider how ignorance or apathy might influence how we (as individuals or communities) evaluate an instance of someone calling public attention to a microaggression like a particular instance of sexual joking in a professional environment.

It has become quite clear in discussions of Adria Richards and the PyCon jokers that, for any particular joke X, there are people who will disagree about whether it is a sexual joke. (Note that in the actual circumstances, there was agreement between Adria Richards, the PyCon jokers, and the PyCon staff that the jokes in question were inappropriate -- and also significant, if not total, agreement from "mr-hank," who claims to be the PyCon joker who was fired, that some of the jokes in question were sexual.) Let's posit, for the purposes of this discussion, a case where there is no disagreement that the joking in question is sexual.

So, you're with others in a work environment (like audience seating for a presentation at a professional conference). You are in earshot of a sexual joke -- maybe as part of the intended audience of the joke teller, maybe not, but certainly close enough that the joke teller has a reasonable expectation that you may hear the joke correctly (which you do). Do you call the attention of the community to the sexual joking and the people engaging in it?

One reason to point out the microaggression is to address ignorance.

The people engaged in the sexual joking may not realize that they are doing something inappropriate in a professional environment. This lack of knowledge may require a serious commitment -- for example, not to read conference codes of conduct, not to absorb any workplace anti-harassment training -- but I suppose it's not impossible. So, pointing out to individual jokers, "Dude, that's inappropriate!" might reduce the ignorance of those individuals. It might also reduce the ignorance of the silent bystanders also in earshot of the sexual joking.

Drawing attention of the larger community to the particular instance of sexual joking may help dispel the ignorance of that larger community (and of its individual members, including those not in earshot of the joking), establishing the existence of such microaggressions within the community. If members of the community make a habit of pointing out each such microaggression they observe, it can also help the community and its members get good information about the frequency of behavior like sexual joking within the professional environment of the community.

Pointing out the microaggression, in other words, can help the community to know that microaggressions are happening, how frequently they're happening, and who is committing them. The hope is that having good knowledge here is more likely to lead to an effective response to the problem than ignorance would be.

There are other dimensions of ignorance you might want to address -- for example, whether people within the community experience discomfort or harm because of such microaggressions, or what empirical studies show about whether sexual joking in the workplace is harmful regardless of whether members of the community report that they enjoy such joking. Still, the thought here is that identifying facts is the key to fixing the problem.

However, you might not think that ignorance is the problem.

It might be the case that the people telling the sexual jokes are fully aware that sexual joking is inappropriate in a professional environment -- that what they're doing is wrong.

It might be the case that the larger community is fully aware of the existence of microaggressions like sexual joking in their professional environments -- and even fully aware of the frequency of these microaggressions.

In these circumstances, where ignorance is not the problem, is there any good reason to point out the microaggression?

Here, the relevant problem would seem to be apathy.

If the community and its members have good information about the existence of microaggressions like sexual joking in their professional environments, good information about the frequency of such microaggressions, even good information about which of its members are committing these microaggressions, and still cannot manage to address the problem of eliminating or at least reducing the microaggressions, you might be pessimistic about the value of pointing out another instance when it happens. Reluctance to use good information as the basis for action suggests that the community doesn't actually care about the well-being of the members of the community who are most hurt by the microaggressions, or doesn't care enough about the harm caused by the microaggressions to put in the effort to do something about them.

(Those silent bystanders also in earshot of the microaggressions? If they aren't ignorant about what's happening, its inappropriateness, and the harms it can do, they are letting it happen without making any effort to intervene. That's apathy in action.)

But perhaps it is possible, at least some of the time, to shake a community out of its apathy.

Sometimes bringing a microaggression to the community's attention is a way to remind the community that it is not living up to its professed values, or that it is allowing some of its members to be harmed because it won't ask other members to take a bit more effort not to harm them.

Sometimes reporting the microaggressions forces members of a community to reconcile what they say they are committed to with how they actually behave.

Sometimes exposing microaggressions to the view of those outside the community brings external pressure upon the community to reconcile its walk with its talk.

It's looking to me like calling attention to a microaggression -- sometimes attention of individuals committing it, sometimes attention of the community as a whole, sometimes the attention of those outside the community who might put pressure on the community and its members -- has promise as a tactic to dispel ignorance, or apathy, or both.

In the case that microaggressions are recognized as actually harmful, what's the positive argument against exposing them?


Naming, shaming, victim-blaming: thoughts on Adria Richards and PyCon.

By now many of you will have heard the news about Adria Richards attending PyCon, notifying the conference staff about attendees behind her telling jokes during a conference presentation (about, among other things, making the coding community more welcoming for women and girls). Richards felt the jokes were sexualized enough to harm the environment of the conference. PyCon had a Code of Conduct for the conference that encompassed this kind of issue. In a room with hundreds of attendees, in a context where she hoped this harm to the conference community would be dealt with rather than let go (which gives it tacit approval) but where she also didn't want to disrupt the presentations underway, Richards took a picture of the men telling the sexualized jokes and tweeted it with the conference hashtag to get the conference staff to deal with the situation.

The conference staff addressed the issue with the men telling the jokes. Subsequently, one of them was fired by his employer, although it's in no way clear that he was fired on account of this incident (or even if this incident had anything to do with the firing); Adria Richards started receiving an avalanche of threats (death threats, rape threats, we-know-where-you-live threats, you-should-kill-yourself threats); Adria Richards' employer fired her; and PyCon started tweaking its Code of Conduct (although as far as I can tell, the tweaking may still be ongoing) to explicitly identify "public shaming" as harmful to the PyCon community and thus not allowed.

So, as you might imagine, I have some thoughts on this situation.

My big-picture thoughts on naming and shaming are posted at my other blog. This post focuses on issues more specific to this particular incident. In no particular order:

1. There is NOTHING a person could do that deserves to be met with death threats, rape threats, or encouragement to kill oneself -- not even issuing death threats, rape threats, or encouragement to kill oneself. Let's not even pretend that there are circumstances that could mitigate such threats. The worst person you know doesn't deserve such threats. Making such threats is a horrible thing to do.

2. People disagree about whether the joking Adria Richards identified as running afoul of the PyCon Code of Conduct was actually sexual/sexist/inappropriate/creating a climate that could be hostile or unwelcoming to women. (A person claiming to be the joker who was subsequently fired seems to be ambivalent himself about the appropriateness of the joking he was doing.) But it's worth remembering that you are a good authority on what kind of conduct makes you feel uncomfortable or unwelcome; you are not automatically a good authority on what makes others feel uncomfortable or unwelcome. If you're a social scientist who has mounted a careful empirical study of the matter, or if you're up on the literature describing the research that has been done on what makes people comfortable or uncomfortable in different environments, maybe you have something useful to add to the conversation. In the absence of a careful empirical study, however, it's probably a good idea to listen to people when they explain what makes them feel uncomfortable and unwelcome, rather than trying to argue that they don't actually feel that way, or that they're wrong to feel that way.

In other words, that certain jokes would not have been a big deal to you doesn't mean that they could not have had a significant negative impact on others -- including others you take to be members of your community who, at least officially, matter as much as you do.

3. So, if Adria Richards was bothered by the joking, if she thought it was doing harm and needed to be nipped in the bud, why couldn't she have turned around and politely asked the men doing the joking to knock it off? This question assumes that asking nicely is a reliably effective strategy. If this is your default assumption, please [I just noticed myself typing it as a polite request, which says something about my socialization as a female human, so I'm going to let it stand] cast your eyes upon the #Iaskedpolitely hashtag and this post (including the comments) to get some insight about how experience has informed us that asking politely is a pretty unreliable strategy. Sometimes it works; sometimes, buying a lottery ticket wins you some money. On a good day, politely asking to be treated fairly (or to be recognized as a full human being) may just get you ignored. On a not as good day, it gets you called a bitch, followed for blocks by people who want to make you feel physically threatened, or much, much worse.

Recognize that the response that you expect will automatically follow from politely asking someone to stop engaging in a particular behavior may not be the response other people have gotten when they have tried the approach you take as obviously one that would work.

Recognize that, especially if you're a man, you may not know the lived history women are using to update their Bayesian priors. Maybe also recognize, following up on #2 above, that you may not know that lived history on account of having told women who might otherwise have shared it with you that they were wrong to feel the way they told you they felt about particular situations, or that they couldn't possibly feel that way because you never felt that way in analogous situations. In other words, you may have gappy information because of how your past behavior has influenced how the women you know update their priors about you.

I try to recognize that, as a white woman, I probably don't really grasp the history that Adria Richards (as a woman of color) has used to update her priors, either. I imagine the societal pressure not to be an "uppity woman" falls with much, much more force on an African American woman. Your data points matter as you plot effective strategies with which to try to get things done.

3.5. An aside: About a month ago, my elder offspring was parked in front of her laptop, headset on, engaged in an online multiplayer game of some sort. As the game was underway, one of the other players, someone with whom she had no acquaintance before this particular gaming session, put something pornographic on the screen. Promptly, she said into her headset mic, "Hey, that's not cool. Take the porn down. We're not doing that." And lo, the other player took the pornographic image off the screen.

I was pretty impressed that my 13-year-old daughter was so matter-of-fact in establishing boundaries with online gamers she had just met.

I thought about this in the context of #Iaskedpolitely. Then I realized that I maybe didn't have all the relevant information, so today I asked.

Me: That time you were online gaming and you told the other player to take down the porn? Is it possible the other player didn't know you were a girl?

Her: Not just possible.

My daughter has a gender-neutral username. Her voice is in a low enough register that on the basis of her voice alone you might take her for a 13-year-old boy. This may have something to do with the success of her request to the other player to take the porn off the screen in the game.

Also, she didn't bother with the word "please".

In the three-dimensional world, where it's less likely she'll be assumed to be male, her experiences to date have not departed nearly as much from what you can find in #Iaskedpolitely as a mother would like them to.

4. Some of the responses to the Adria Richards story have been along the lines of "A convention or professional conference or trade show is totally not the same thing as a workplace, and it's a Bad Thing that organizers are trying to impose professional-environment expectations on attendees, who want to hang out with their friends and have fun." I'll allow that even a professional conference is different from work (unless, I guess, your entire job is to coordinate or do stuff at professional conferences), but in many cases such a conference or convention or trade show is also still connected to work. One of the big connections is usually the community of people with which you interact at a conference or convention or trade show.

Here's a good operational test: Can you totally opt out of the conferences or conventions or trade shows with no resulting impact on your professional life (including your opportunities for advancement, networking, etc.)? If not, the conferences or conventions or trade shows are connected to your work, and thus it's appropriate to expect some level of professionalism.

None of which is to say that conventions one goes to off the clock, for fun, should necessarily be anarchic events, red in tooth and claw. Unless that's how the community at that particular con decides it wants to have fun, I suppose.

Also, this is not to say that companies should necessarily fire their employees for any and every infraction of a conference Code of Conduct. Depending on what kind of violation (and what kind of ongoing pattern of problematic behavior and failed attempts at remediation an employee might have displayed) firing might be the right call. I have seen none of the personnel files of the persons directly involved in this case -- and you probably haven't, either -- so the best I could do is speculate about whether particular firings were warranted, and if so, by what. I'm in no mood for such speculation.

5. On the matter of tweeting a photo of the PyCon attendees who were telling the jokes Adria Richards felt were inappropriate in the circumstances: Lots of people have decried this as a Very Bad Way for Richards to have communicated to the conference staff about bad-behavior-in-progress with which she felt they should intervene. Instead, they say, she should have had a sense of humor (but see #2 above). Or, she should have turned around and politely asked them to cut it out (but see #3 above). Or, that she should have done something else. (Email conference staff and hope someone was monitoring the inbox closely enough to get promptly to the location ten rows back from the stage so that Richards could point the jokers out in a room with hundreds of people? Use a Jedi mind trick to get them to stop quietly?)

She alerted the conference staff to the problem via Twitter. She made the call, given the available options, the fact that she didn't want to generate noise that would disrupt what was happening on the stage, and probably her judgments of what was likely to be effective based on her prior experiences (see #2 above).

Maybe that's not the call you'd make. Maybe the strategy you would have tried would totally have worked. I trust you're prepared to deploy it next time you're at a conference or convention or trade show and in earshot of someone behaving in a way likely to make members of the community feel uncomfortable or unwelcome. I hope it's just as effective as you imagine it will be.

Even if Adria Richards was wrong to tweet the picture of the jokers, that doesn't mean that their joking was appropriate in the circumstances in which they were doing it at PyCon. It wouldn't mean that the conference staff would be wrong to investigate the joking and shut it down (and deal with the jokers accordingly) if they judged it in violation of the Code of Conduct.

Also, one of the big complaints I've seen about the tweeted photo of the PyCon jokers is that using Twitter as a tool to report the problem removes the confidentiality that ought to accompany allegations of violations of the Code of Conduct, investigations of those allegations, penalties visited on violators, etc.

There's a couple things I want to say to that. First, dealing with bad behavior "privately" (rather than transparently) doesn't always inspire confidence in the community that the bad behavior is being taken seriously, or that it's being addressed consistently (as opposed to, say, being addressed except when someone we really like does it too), or that it's being addressed at all. Especially when the bad behavior in question is happening in a publicly observable way, taking the response completely private may be nearly as harmful to the community as the bad behavior itself.

Second, shouldn't the people who want us to trust that the PyCon staff would have dealt with the PyCon jokers fairly and appropriately in private themselves trust that the PyCon staff had addressed any violation of the conference Code of Conduct Adria Richards might have committed by tweeting the picture of the PyCon jokers (rather than emailing it or whatever) -- and that they'd dealt with such a violation on Richards' part, if they judged it a violation, in private?

There's just a whiff of a double standard in this.

6. On the post-conference update to the PyCon Code of Conduct to explicitly identify "public shaming" as harmful to the PyCon community and thus not allowed: I'm hopeful that PyCon organizers take account of the effects on the community they have (and on the community they are trying to build) of opacity in dealing with bad behavior versus transparency in dealing with bad behavior.

It's not like there isn't already reason to believe that sometimes conference organizers minimize the impact of instances of harassment reported to them, or deny that any harassment has been reported at all, or back off from applying their own explicit rules to people they judge as valuable to the community.

These kinds of actions may harm their community just as much as public shaming. They communicate that some harassers are more valuable to the community than the people they harass (so maybe a bit of harassment is OK), or that people are lying about their actual experiences of bad behavior.

7. There has been the predictable dissection of Adria Richards' every blog post, tweet, and professional utterance prior to this event, with the apparent intention of demonstrating that she has engaged in jokes about sex organs herself, or that she has a history of looking for things to get mad about, or she's just mean, and who is she to be calling other people out for bad behavior?

This has to be the least persuasive tu quoque I've seen all year.

If identifying problematic behavior in a community is something that can only be done by perfect people -- people who have never sinned themselves, who have never pissed anyone off, who emerged from the womb incapable of engaging in bad behavior themselves -- then we are screwed.

People mess up. The hope is that by calling attention to the bad behavior, and to the harm it does, we can help each other do better. Focusing on problematic behavior (especially if that behavior is ongoing and needs to be addressed to stop the harm) needn't brand the bad actor as irredeemable, and it shouldn't require that there's a saint on duty to file the complaint.

8. Some people have opined that it was bad for Adria Richards to call out the PyCon jokers (or to call them out in the particular way she did) on account of the bad consequences that might befall them if they were known to have violated the PyCon Code of Conduct. But the maxim, "Don't call out bad behavior because doing so could have negative consequences for the person behaving badly" just serves to protect the bad behavior and the bad actors. Being caught plagiarizing can be harmful to a scientist's career, so for heaven's sake don't report it! Being convicted of rape can end your future as a football player, so your victim ought to refrain from reporting it, and the authorities ought to make sure you're not prosecuted!

Bad behavior has bad consequences, too.

The potential bad consequences of being caught behaving badly should, perhaps, help motivate people not to behave badly, especially in cases where the harms of that bad behavior to individuals or the community are not themselves sufficiently motivating to prevent the behavior.

9. Finally, some people have been expressing that it makes them feel uncomfortable and unwelcome when they are not allowed to act the way they want to, tell the jokes they feel like telling, and so forth.

I don't doubt this for a minute.

However, this is not necessarily a bad thing. In the end, it comes down to a question of who you want in your community and who you want out of it. Personally, I don't want my professional communities to be comfortable places for racists or sexists, for rapists, plagiarists, or jerks. Other people, I imagine, would prefer a professional community that's a comfortable place for racists or sexists, for rapists, plagiarists, or jerks to a professional community that's a comfortable place for me.

But here's the thing: if you say you want your community to be welcoming to and inclusive of people who aren't yet represented in great numbers, it might require really listening to what they say about what's holding them back. It might require making changes on account of what they tell you.

It's still possible that you'll decide in the end to prioritize the comfort of the people already in your community over the comfort of the people you thought you wanted to welcome into your community. But in that case, at least have the decency to be honest that this is what you're doing.

* * * * *

Also, pretty much everything Stephanie says here.

* * * * *

UPDATE: So, there are people who seem very eager to share their take on this situation (especially, for some odd reason, their autopsies of every wrong thing Adria Richards did) in the comments, but without engaging with anything I've written in the 3000 words here -- including the things I've written here that directly address the points they're trying to make.

There are many, many places on the internet where these not-really-engaging-with-the-conversation-we're-having-here contributions would be welcome. But it's probably worth updating some prior probabilities about whether those comments will make it out of moderation here.

74 responses so far

The case study protagonist as unreliable narrator.

Even though it seems like my semester just started, I'm already grading the first batch of case study responses from my "Ethics in Science" students. (Students, if you're reading this: I'm quite happy with how the class is doing! You'll get detailed feedback on your response by the end of the week.)

In case you're not familiar with case studies in the context of an ethics class, they usually consist of a brief description of a situation in which a protagonist is trying to make a decision about what to do. I ask my students to look at this description and identify who has a stake in what the protagonist does (or doesn't do); what consequences, good or bad, might flow from the various courses of action available to the protagonist; to whom the protagonist has obligations that will be satisfied or ignored by his or her action; and how the relevant obligations and interests pull the protagonist in different directions as he or she tries to make the best decision. On the basis of these details, I ask my students to choose a course of action for the protagonist and explain why it's an ethical course of action.

But here's something that makes the analysis difficult for the students: Often it's hard to pin down the facts of the case with certainty. The scenario is described from the protagonist's point of view. It seems to the protagonist that there's favoritism in the lab group, or that it's obvious why some of the measurements turned out the way they did, or that a colleague is going to react a particular way if a concern is brought to that colleague's attention. However, as my students have been quick to notice in their discussions of the cases, what seems to be true to the protagonist might be false. For any number of reasons, the protagonist may have a skewed perspective on what's going on in other people's minds, on what the issues are with the experiment, even on his or her own competence.

The protagonist, in other words, could be an unreliable narrator.

Making a good ethical decision is easier when you can pin down all the relevant facts (including things like what future events would flow from the protagonist's various courses of action). But, as in real life, the case studies with which we ask our students to grapple have a lot of uncertainty built in. Postponing a decision about what to do until all the facts are in just isn't a practical option. Sometimes you do the best you can with knowledge you recognize is gappy.

Indeed, one of the big reasons I try to get my students to understand discussion as a valuable part of ethical decision-making is that, left to our own devices, each of us can be just as unreliable a narrator as the protagonist of the case study we're thinking through. The protagonist suspects favoritism. We suspect jealousy. Maybe the protagonist is wrong, but maybe the protagonist is right and we're wrong instead. Given the state of our knowledge in the world, we don't want to lean on ethical decision-making strategies that require us to guess correctly about all of the unknowns.

The moral of the story is assuredly not the "there are no wrong answers" crap that humanities professors get from their naïve undergraduates. Instead, it's that taking account of other people's perspectives may be useful in helping us gain some critical distance on our own (and on the ways it might turn out to be wrong). Also, it's that an ethical course of action might require some active fact-finding to test whether one's perceptions in a situation are reliable before acting rashly on the assumption that they are.

* * * * *
Related posts:

The value of (unrealistic) case studies in ethics education.

Some ethical decisions are not that hard: thoughts on Joe Paterno.

Question for the hivemind: workplace policies and MYOB.

Passion quilt: a meme for teachers.

2 responses so far

What did Jonah Lehrer teach us about science?

Los Angeles Times book critic David L. Ulin wishes people would lay off of Jonah Lehrer. It's bad enough that people made a fuss last July about falsified quotes and plagiarism that caused Lehrer's publisher to recall his book Imagine and cost him a plum job at The New Yorker. Now people are crying foul that the Knight Foundation paid Lehrer $20,000 to deliver a mea culpa that Lehrer's critics have not judged especially persuasive on the "lesson learned" front. Ulin thinks people ought to cut Lehrer some slack:

What did we expect from Lehrer? And why did we expect anything at all? Like every one of us, he is a conflicted human, his own worst enemy, but you’d hardly know that from the pile-on provoked by his talk.

Did Jonah Lehrer betray us? I don’t think so.

Ulin apparently feels qualified to speak on behalf of all of us. In light of some of the eloquent posts from people who feel personally betrayed by Lehrer, I'd recommend that Ulin stick to "I-statements" when assessing the emotional fallout from Lehrer's journalistic misdeeds and more recent public relations blunder.

And, to be fair, earlier in Ulin's piece, he does speak for himself about Lehrer's books:

That’s sad, tragic even, for Lehrer was a talented journalist, a science writer with real insights into creativity and how the brain works. I learned things from his books “How We Decide” and “Imagine” (the latter of which has been withdrawn from publication), and Lehrer’s indiscretions haven’t taken that away.

(Bold emphasis added.)

Probably Ulin wouldn't go to the mat to assert that what he learned from Imagine was what Bob Dylan actually said (since a fabricated Dylan quote was one of the smoking guns that revealed Lehrer's casual attitudes toward journalistic standards). Probably he'd say he learned something about the science Lehrer was describing in such engaging language.

Except, people who have been reading Lehrer's books carefully have noted that the scientific story he conveyed so engagingly was not always conveyed so accurately:

Jonah Lehrer was never a very good science writer. He seemed not to fully understand the science he was trying to explain; his explanations were inaccurate, overblown, and often just plain wrong, usually in the direction of giving his readers counterintuitive thrills and challenging their settled beliefs. You can read my review and the various parts of my exchange with him that are linked above for detailed explanations of why I make this claim. Others have made similar points too, for example Isaac Chotiner at the New Republic and Tim Requarth and Meehan Crist at The Millions. But the tenor of many critics last year was "he committed unforgivable journalistic sins and should be punished for them, but he still got the science right." There was a clear sense that one had nothing to do with the other.

In my opinion, the fabrications and the scientific misunderstanding are actually closely related. The fabrications tended to follow a pattern of perfecting the stories and anecdotes that Lehrer -- like almost all successful science writers nowadays -- used to illustrate his arguments. Had he used only words Bob Dylan actually said, and only the true facts about Dylan's 1960s songwriting travails, the story wouldn't have been as smooth. It's human nature to be more convinced by concrete stories than by abstract statistics and ideas, so the convincingness of Lehrer's science writing came from the brilliance of his stories, characters, and quotes. Those are the elements that people process fluently and remember long after the details of experiments and analyses fade.

(Bold emphasis added.)

If this is the case -- that Lehrer was an entertaining communicator but not a reliably accurate communicator of the current state of our best scientific knowledge -- did Ulin actually learn what he thought he learned from Lehrer's books? Or, was he misled by glib storytelling into thinking he understood what science might tell us about creativity, imagination, the workings of our own brains?

Maybe Ulin doesn't expect a book marketed as non-fiction popular science to live up to this standard, but a lot of us do. And, while lowering one's standards is one way to avoid feeling betrayed, it's not something I would have expected a professional book critic to advise readers to do.

2 responses so far

An ad in the sidebar that kind of bugs me.

Just now, on this blog, I noticed an ad for an "online reputation management service". There are ads for services like these all over the place (including on the airwaves of the big San Francisco public radio station, although they don't call them "ads" because public radio isn't supposed to have ads).

Anyway, I hadn't really given much thought to these businesses. I figured it was mostly for restaurants or similar kinds of clients trying to "accentuate" their good online reviews (while eliminating the negative ones, or at least pushing them down in the search results). Kind of slimy, but in a way I've come to expect from companies trying to attract me as a customer.

But I have come to learn lately that cheating scientists sanctioned by the ORI have been hiring online reputation managers to try to push the cyber-trails of their cheating out of sight. It's even possible (although not conclusively established by any means) that especially vigorous online reputation managers for hire might be engaging in shenanigans to use false DMCA claims to literally eliminate negative information that the scientific community (and indeed, the larger public) has an interest in being able to access.

So, yeah. Everyone has bills to pay -- people who work in online reputation management, people who had to leave science because they got caught cheating, blog networks like Scientopia. Commerce marches on. But that doesn't mean I have to like all of what happens in the service of paying those bills.

7 responses so far

Passing thoughts on online courses and the temptations they present.

It is interesting to me that there are certain denizens of the university community who are anxious for faculty to increase the number of online courses that we offer, and that this desire is not generally driven primarily by a desire for us to better serve students with inflexible work schedules or scary-long commutes, or even to free up scarce classroom space. Rather, some of the most vocal proponents (at least at my university) of expanding online course offerings seem to believe online classes can accommodate much larger enrollments than can traditional classroom-bound classes.

Technically speaking, that's true -- you can set things up so that your online class will allow hundreds of students to enroll in it, and the fire marshal won't bat an eyelash. However, making enrollments really, really big also makes the workload to assess student work (including discussion board-based discussions, which now read like papers without the benefit of spellcheck) really, really big. Plus, you also get to deal with all the technical glitches the students find with accessing materials and submitting materials and joining groups for discussions and not blowing deadlines. (It doesn't take a really, really big online enrollment for your students to discover every technical glitch there is to find.)

Of course, increasing support for graders might help, but this doesn't come up so much, since the point is to save buckets of money. (I should note that my better half, who has been taking some online courses through organizations that hope at some point to turn a profit, was invited to be a "community TA" for one of the courses so taken -- for free! Obviously, the best way to become profitable is to recruit skilled labor that is also free.)

Well, say the hopeful advocates, there are rumors of automated programs for grading student papers. Maybe you can run all the work through those?

Even if I trusted those programs to prioritize the things I'm looking for in student papers (and, you know, to provide useful feedback to my students on their work), the boom in online classes has given rise to a boom in "services" for students "taking" online classes. Inside Higher Education describes the scene:

These sites make an appeal to the busy online student, struggling through a class they’re not good at or not interested in. The description of one site,*, reads: "I’m sure you are here because you are wondering 'how will I have time to take my online class?' It may be that one class such as statistics or accounting. We know some people have trouble with numbers. We get that. We are here to help.”

Prices for a “tutor” vary. advertises a $695 rate for graduate classes, $495 for an algebra class, or $95 for an essay. When Inside Higher Ed, posing as a potential customer, asked for a quote for an introductory microeconomics class offered by Penn State World Campus, offered to complete the entire course for $900, with payment upon completion, and asked for $775, paid up front. Most sites promise at least a B in the course. ...

“If we just had a course that was just a multiple-choice final at the end there’d be a high chance of cheating,” [Eric] Zematis [director of Enterprise Systems at Charter Oak State College, a fully online institution] said. “When we design courses we try to look at having more interaction to try to discourage cheating.”

In the case of a site like We Take Your Class, Zematis surmised, the amount a student would have to pay would probably increase based on the number of assignments. If there were enough assignments, tests, or required discussions, then, using an online class-taking service could become prohibitively expensive.

A couple things worth noting here: First, the pedagogical steps that make it harder for students to cheat in an online course also tend to make student work in those online courses harder to grade. Second, the kids who have enough money to pay someone else to do that work for them seem like they're going to have a better shot at gaming the system and getting credit for taking online courses for which they've done essentially bupkis.

Does this leave me oddly comforted that students in my online classes probably don't have the means to hire someone to do their school work for them? Maybe ...

But wait! Can the Invisible Hand (and the excess of Ph.D.-holders) make sophisticated and hard-to-detect cheating affordable even for lower income students? Perhaps:

The website has teachers writing papers for students.**

“So you can play while we make your papers go away” is its tag line.

Organizers say education has already become a commodity and with tenure harder to get, teachers need work. ...

“I’d say this service represents a new solution to an age-old problem,” said one [person working for], adding the justification is supply and demand and a void in the marketplace.

Noting that has effectively barred students from buying recycled essays on the cheap, the professor said potential clients include international students whose English is poor, students too lazy to complete the work, students too busy with jobs paying for their education to do the work and science students who resent being asked to write papers in the humanities.

The service says the work should not be used to fulfill an academic requirement — but offers to supply dissertation chapters and personal statements used for admissions — and should be used as a guideline.

“This removes the ethical dimension on our side as we have no control over what a client does upon paying for and receiving the project,” said the professor.

“In fact, it places the ethical burden squarely on the shoulders of the student.”

The service started last fall and has recruited about 30 professors. While it doesn’t guarantee an A, it does guarantee high-quality work and turns away about 15 applicants for every one it hires.

I guess when people who have trained for academic careers cannot sell their expertise to a college or university, eventually selling their integrity might become a live option.

Here, if the professorial cheating-enablers are delivering what they seem to be promising, it's likely to be fairly labor intensive for them. At least part of the labor would involve writing a paper that actually sounds like a student paper. (Believe it or not, we can usually tell.) So either they're charging clients through the nose, or they're being financially exploited at least as much as they would be as adjuncts.

There seems to be a possibility, though, that a willing employee of a service like this might not have qualms about cutting some ethical corners herself -- perhaps providing the same basic paper for more than one client (upping the chances that those clients are caught using a paper that has been run through a service like TurnItIn already), or even plagiarizing in the production of a paper.

Also, given the understanding of how ethics works reflected in the claims from the website, I suspect buying an ethics paper from them might be a really bad call.

The selectiveness of hiring of these professorial cheating-enablers -- 15 applicants turned away for every one hired! -- may drive their prices higher, but it's surely only a matter of time before those applicants who were turned away find each other and set up their own cheating-enabling service, maybe cutting out some layers of management so they can enable cheating among lower income students!

Yeah, there's a reason I skip higher education news coverage for weeks at a time.

* Since the original Inside Higher Education article, "We Take Your Class" has gone offline.

** As far as I can tell, this service is not marketed specifically for students taking online courses, but it seems like it could be used for that purpose.

4 responses so far

On the request for numerical scoring of honesty and integrity.

Oct 02 2012 Published by under Academia, Ethics 101, Teaching and learning

On the Twitters, becca pointed me to this post which raises an interesting evaluative question:

I was recently completing a recommendation form for a former student and was asked to “rate the candidate on a scale of 1 to 10 for his Honesty & Integrity”. What meaningful answer can I hope to give? What level of honesty earns an 8? How much do you have to steal to earn a 3?

I am sympathetic as far as the challenge of evaluating this goes.

I'm guessing some people would reject the notion that a ten-point scale is appropriate here, since (they might argue) honesty is one of those binary properties that is either "on" or "off". Being a little bit honest, on this view, would be as nonsensical as being a little bit pregnant.

And, maybe that's an appropriate way to conceive of individual acts of honesty or dishonesty, where you are making a representation that is truthful or you are making a representation that is not truthful. Maybe it's not, though, since you might view offering a totally made-up lie as a more serious departure from the Platonic form of honesty than leaving out a particular true piece of information. If there's one thing I've learned from playing "Two Truths and a Lie" with other philosophers, it's that there are lots of interesting ways to make a claim that departs from the truth, the whole truth, and nothing but the truth.

However you want to keep score on magnitudes of individual claims, though, I think we also have to recognize that evaluating the honesty of an individual is a more complicated project. Individuals, after all, tend to make lots of different representations, in lots of different kinds of contexts. There may be some contexts in which they play faster and looser with the truth, and others where they are extremely careful and rigorous. Presumably, these contexts matter. A graduate program might have no worries at all about the applicant who cheats on sit-ups, or counting strokes in miniature golf, but have huge worries about the applicant who made up every single bit of data in her lab notebook. One of these contexts seems more immediately relevant to the milieu that the graduate program cares about than the other.

Not that I don't have worries about a habit of lying in one milieu creeping into one's behavior in others -- I do. However, holding out for people who are 100% honest about everything is a good way to whittle your applicant pool to zero.

Besides which, what kind of actual data would an evaluator have to go on here? Honesty and integrity seem to be qualities that we assume someone has until we are faced with evidence to the contrary -- for example, we tend to assume a student is honest until we catch her cheating. So, ranking someone highly on the scale might just amount to saying, "I've never caught him in a lie." You're flagging a lack of evidence of dishonesty, but that's not quite the same as positive evidence of honesty.

Finally, I think what would be more meaningful to know about an applicant is whether he or she has been honest in circumstances where being honest is difficult -- where a lie or an omission might make life significantly easier. If the applicant has stepped up to be honest in a situation where being honest created extra work, that's someone whose commitment to honesty is serious. Especially if it's robust, and he or she is honest in the next situation where being honest brings additional responsibilities. However, I'm not sure that this is information one is likely to get about a student in the typical college course, or even about an advisee in a typical supervised research environment. Maybe you could build such tests-of-character into those situations (as Willy Wonka did in his chocolate factory), but it would be hard to do so ethically.

Ultimately, then, can we expect that your typical college professor can provide such a seemingly-objective numerical ranking of a student's honesty and integrity without being a little bit ... dishonest?

One response so far

Question for the hivemind: Where do you draw the line in associating with a party that has done something objectionable?

I am thinking my way through a longer discussion of this general question, and I decided it would be useful to get a sense of the intuitions of people who are not me on this matter.

Say there's a person or an organization (or a corporation, which, I've heard, is a person) that has done something you find pretty objectionable.

Say that this person or organization is in a position to contribute something to a goal that you support -- perhaps providing material and/or labor to help build something you think needs to be built, or money to help support a conference you think will serve the good, or speakers to help explain science-y stuff to a general audience.

Would you associate with the party that has done something you find objectionable to the extent that you would accept that contribution of help?

What kind of conditions would you require in accepting the help? For example, would you require that the party not be able to micromanage how their donation is used, who gets to speak at the conference their money is supporting, etc.? Would you insist that they only be allowed to provide help if they also agree to face questions about what you view as their objectionable conduct?

Or, would you rather forgo the help in order not to associate at all with the party that has done something you find objectionable?

Does it matter here whether the party is an organization, some of whose members or organizational units have done something you find objectionable -- but where the help on offer is coming from other members or organizational units -- rather than a person who's done something you find objectionable offering her help?

Feel free to share your thoughts on the ways the precise nature of the "something objectionable" matter to your line-drawing here.

7 responses so far
