Ten Phrases Uttered by the Unethical Advisor
A good scientist must see to believe… but if you just landed in the lab and things aren’t working, maybe it’s not you. We all want to save our hypothesis, but in this publish-or-perish climate, looking the other way while the truth gets bent happens, and it happens a lot. Here are some classic warning phrases that might support your hunch that you have an unethical advisor who is putting an artistic spin on the results.
1. “I’m the best scientist at this University.”
If your advisor is a narcissist, the probability that something unethical will happen increases. Anyone is capable of being unethical, but a flagrant narcissist is driven by an almost religious belief in their own need to be great, and is known to protect that belief by lying (not all narcissists lie; to those who don’t, your struggle is esteemed). You might recognize a narcissist by their actions in the lab:
- Constantly talking about how great they are.
- Framing things that others wouldn’t (like old grant proposals, including those that didn’t get funded).
- Signing their name in unusual places as if it were art (like with a Sharpie on your benchtop).
- Constantly interviewing people (ten or more a week) just because they like it.
- Taking credit for everything, even if it was someone else’s idea.
- Never apologizing or admitting error, unless in jest or play.
2. “I never said that.”
If they lie, and then lie about lying, you know you have a problem, and it WILL make its way into the science. At first you might think you remembered statements incorrectly; then you might think you misheard. But when you finally realize you are dealing with a liar, it becomes the elephant in the room: obvious and, yet, fascinating. Either they think no one is intelligent enough to figure out their lies (the narcissist), or they think they can get away with anything (the sociopath).
3. “We don’t need to repeat this because it looks good.”
This statement might not be a big deal in some situations (like in a figure summarizing previous results as a control), but in others it could be a flat-out lie. Performing only one replicate is not thorough enough to give you confidence in your results. And as tedious or expensive as it may be, confidence in your results goes beyond just doing replicates. Failing to ensure your reagents are accurate can be more than sloppy science if it happens only at opportune times! Here are a few concrete examples of dishonesty working its way into experiments through intentional lack of thoroughness:
- Not sequencing key plasmids of questionable quality (e.g., tagged constructs that are not expressing at the correct size, plasmids not digesting properly, etc.) as long as they give you the answer you wanted.
- Not ensuring protein controls are functional or properly folded if possible (especially egregious if intentionally overlooked for negative controls).
- Not properly quantifying imaging data (some will select what they want/need to see).
- Not being sure that the band you see with the new antibody is actually your protein of interest (solved with appropriate controls).
4. “We can toss out these data points because clearly there was an error.”
It’s called “cherry-picking,” and if it’s done once on a replicate with an obvious error, it might not be so bad. It gets bad when you want to use that particular experiment as your figure because it’s the only one that was statistically significant. And it gets even worse if all your replicates were “cherry-picked.” Tossing out data points should not be done, and if it is, it should be disclosed.
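To make the arithmetic concrete, here is a minimal sketch with entirely made-up numbers (nothing from a real experiment) of how quietly dropping the replicates that contradict a hypothesis can manufacture an apparent effect:

```python
# Hypothetical numbers: how "cherry-picking" the replicates that
# disagree with the hypothesis inflates an apparent effect.
from statistics import mean

control = [1.0, 1.1, 0.9, 1.0, 1.05]
treated = [1.2, 0.8, 1.9, 0.95, 1.15]   # noisy; no convincing effect

# Honest comparison: a small difference buried in scatter.
print(round(mean(treated) - mean(control), 2))  # 0.19

# Cherry-picked: drop the two points that "clearly had an error".
picked = [x for x in treated if x > 1.0]        # keeps 1.2, 1.9, 1.15
print(round(mean(picked) - mean(control), 2))   # 0.41 -- over twice the "effect"
```

Run a significance test on both versions and the picked subset will look far more convincing, which is exactly why any exclusion, however justified, must be disclosed.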
5. “The data is meaningless because the machine is broken.”
If the advisor is making up excuses for every piece of data that contradicts the hypothesis, then you have a problem. There is a little bit of grey here, because oftentimes if an approach doesn’t work, we try something else. But when it becomes a statement you hear frequently, or one that makes no sense (like claiming the computer is broken when it is simply counting dots), then something bigger is going on. In the end, if you have five experiments that show the hypothesis is wrong, but you publish the one weak experiment that suggests it is correct… that’s unethical.
6. “You swapped your lanes and you need to relabel them correctly.”
You know you didn’t switch your lanes, and you plan to repeat the experiment anyway. But when you repeat it and get the same undesirable result, you are bombarded with a series of logical reasons for why the results are still wrong. Questioning data is normal in the lab to some degree, but if these statements are made weekly, to different people, and it is apparent that the advisor believes they know the answers to experiments before they are performed, to the extent that they actually tell you to relabel no matter what… wow.
7. The postdoc says, “I guess I was doing it wrong.”
The extremely experienced postdoc says their experiment didn’t work and shows you jumbled data. But after they go into the advisor’s office, they come out with pristine data, looking a little sick, and say, “I guess I was doing/graphing it wrong.” Another variation: the postdoc is getting undesirable results and the advisor says, “I’ll do it.” The advisor gets perfect results on the first try and it’s “one and done”; no repeats, because the advisor is just that good. If these things happen frequently, and to more than one person: wake up and smell the crazy!
8. “Put stripping buffer on it to make it look better.”
You’ve repeated the experiment four times and it hasn’t even come close to a positive result. Finally, you are told to put stripping buffer on it so it can be “publication quality.” Jaw drops. It’s now an art project?
9. “If you don’t do this, then….”
It’s called extortion, and it’s very easy to do to both graduate students (threatening loss of a degree, visa, reputation, or letters of recommendation) and postdocs (threatening loss of a salary, a visa, a career, or a good university record). The gravity of this extortion escalates when the professor has tenure (“can’t touch me”) and is well known at the university. Maybe they are also a medical provider at the university and could access your medical and personal information (is this regulated; should you risk it?). Or maybe, with their high-paying job, they could pay someone to hack into your personal email and delete all incriminating emails, or hire a great attorney to shut you down. If you’re dealing with a truly unethical person who thinks they can get away with anything, then where do they stop if they know where you live?
10. “No one will ever know.”
You have expressed your concerns at more than one meeting and are shut down at every attempt to redirect a flagrant and unethical course of action. It finally ends in the advisor telling you “No one will ever know,” and you realize exactly who you are dealing with. They actually DO realize that they are being unethical. And they don’t care!
What to do next
Lapses in judgment can happen to anyone. Privately talking to your advisor about your concerns will hopefully clear the air. But if you’ve heard these phrases too often, don’t downplay your concerns; gain perspective by talking to people you trust. If unethical science is happening, you should consider leaving the lab, ASAP. As a grad student, talking to a senior person in your department about your concerns and your desire to switch is likely the safest option for your career and your life.
Comments
Good article. It’s always good to list red flags of a potential PI, especially since there is such an imbalance between student and PI. Just to give my two cents on R Dorman’s post: yes, it does happen. I am currently in the US with two PIs from the Netherlands who act similarly. Their level of bias in catching cheating switches depending on who their favorite student is, and they never catch themselves. So I would argue that this post is spot on.
In all seriousness, is this behavior really happening on more than one occasion? I can’t speak for every lab in the world, of course, but my advisor and the advisors I’m in a close relationship with (neuroscience labs, in the Netherlands) all place good science very high on their agenda. Less-than-optimal data is usually met with more controls for verification, perhaps adjustment of the project/model, or even an admission of defeat.
I know fraud happens, but this post kinda scares me.