Q: What is homologous recombinaltion tiniker?
A: Nobody knows!
Q: So where did it come from?
A: It's verbal diarrhea straight from the mouth of CBS News reporter Sharyl Attkisson, in a March 31 article titled "Vaccines and autism: a new scientific review".
The idea? It doesn't matter, because it's complete horseshit, but it helps to know the premise to see where the mental train not only went off the track, but straight off the logic cliff into Aphasia Ravine. The idea here is that vaccines cause autism... no, not because of thimerosal... no, not immune load... wait, just listen... wait, it's because vaccines have human DNA in them and human DNA causes brain damage.
Whenever you're done laughing, here's the really damning part (as opposed to the mildly damning part):
http://www.cbsnews.com/8301-31727_162-20049118-10391695.html
Why could human DNA potentially cause brain damage? The way Ratajczak explained it to me: "Because it's human DNA and recipients are humans, there's homologous recombinaltion tiniker. That DNA is incorporated into the host DNA. Now it's changed, altered self and body kills it. Where is this most expressed? The neurons of the brain. Now you have body killing the brain cells and it's an ongoing inflammation. It doesn't stop, it continues through the life of that individual."
Q: So what does it mean?
A: Nothing!
Q: Really?
A: Yes.
Q: So if those words were used properly...
A: ...'tiniker' isn't a word.
Homologous recombination (not 'recombinaltion') is a meaningful phrase, and happens when a mammal creates sperm and eggs. 'Homologous recombination' is a thing that could happen in the brain, in the sense that spontaneous combustion is a thing that could happen in the brain. Getting DNA sequences into the bloodstream, past the blood-brain barrier, and into nuclei is no simple task; there are literally billions of dollars being spent every year trying to get stuff into brain cells. DNA is fragile when not protected in a nucleus; most of the human body is highly inhospitable to DNA. It's a little like trying to carry a Faberge egg on a spoon into a firefight in Fallujah... or, alternately:
Is this an April Fool's joke? No, it was posted a day early. And Attkisson has posted anti-vaccine woo before.
Sadly, the journal article is not available for free perusal, as far as I can tell.
Respectful Insolence posted a nice little summary. Needless to say, it doesn't make a very convincing argument that human DNA in vaccines causes autism. (Actually, it doesn't make an argument at all, for that matter.)
We all know that science reporting is shitty in the US right now. But how can a major media source be so negligent of basic fact-checking that they would post something so completely nonsensical?
O'Reilly and ICP courtesy of ERV. Alligator courtesy of yours truly.
To depress people further:
http://blogs.discovermagazine.com/badastronomy/2011/04/08/antiscience-bill-passes-tennessee-house-vote/
Basically, the bill says "If a teacher decides to teach something, you can't stop him based on the fact that it's untrue".
Aaaaaawwwweeeessoooommeeeee.
It is cited in the CBS article: Helen V. Ratajczak, "Theoretical aspects of autism: Causes—A review," Journal of Immunotoxicology 2011; 8(1): 68–79.
I will see if I can get to it and report back, but in the meantime...
I would like to take credit for introducing Feral to this story, although he made a brilliant OP, and a much funnier image meme for it than I did.
As to what can be done about science journalism in the US?
Well, firstly I would ask for more obvious citations of the story in question.
Ben Goldacre mentioned this in a recent post, although the criticism is not entirely accurate in this case, as CBS did link to a primary source... albeit one that returns a "server too busy" error when you try to access it directly from CBS's hyperlink.
A citation such as the one I posted above would be helpful for those of us interested in finding out what this woman really said, although given her quotes I am not sure if I want to know.
It is unclear whether the amusing quote (homologous recombinaltion tiniker) is from the "researcher" herself or just a typo, but I just found the paper (in the time it took me to write and post this) and will report back.
Edit: and yeah Arch did introduce me to this, and it took my brain a good 45m to recover.
the "no true scotch man" fallacy.
In fact, here is a snippet of the conclusion!
Extremely tame and not that controversial, from what I understand.
Now, what I am worried about is two things:
Either the researcher (Ratajczak) really misspoke during this interview, or the reporter (Sharyl Attkisson) really really doesn't understand basic scientific concepts.
The review documents a lot of putative influences during development that could lead to autism; whether or not the cases and events she documents are legitimate is another story, and a bit out of my depth to be honest.
In this case I am going to give the benefit of the doubt to the researcher and put the onus and blame on the journalist, which gives me more to say.
Is it possible this Sharyl Attkisson is simply so ignorant of biology that she is unable to realize that "homologous recombinaltion tiniker" was supposed to be "homologous recombination"?
I guess in order to help science journalism communicate science effectively, I would like to see science journalists with actual scientific training commenting on the news articles, although even big journals like Nature have had problems with sensationalism of late.
Someone recently talked about this (should journalists have a science education), but for the life of me I can't remember who it was.
Either way, I would like to see more responsibility on the reporter's end so that the scientific community doesn't have to continually put out fires, so to speak.
Basically I want to vaccinate the public against bad pop-sci by educating reporters.
Kudos on the OP, it gave me a good long laugh followed by a deep sigh.
At this point I think that's being overly specific; reporting is miserable, because our media is miserable. Our media is miserable because its top two drivers are inevitably profit and agenda, in some order.
well, they aren't exactly clear on which variety we're talking about
currently it is accepted that there are two versions:
and the elf
Wait, if you're replacing one DNA sequence with another, then it's clearly...
the "no true scotch man" fallacy.
the "no true scotch man" fallacy.
Fuck it straight to hell.
You are getting far too good at this Winky
It pleases me
It's notably worse with science because the failure combines both statistical failure and knowledge failure.
One of my backups if I can't get into grad school is science reporting.
EDIT:
This isn't even a joke :P.
Alternately, if we think homicide or eugenics aren't in play as solutions, what I'd love to see is a rating system for news organizations and reporters for their ability to reason/read/whatnot.
edit: reading that Tennessee bill, it looks like it still allows the school system to require all teachers to cover the scientific flaws in every theory they cover. Did I misread? Yes I'm aware that will never happen anywhere in the country, but I can dream.
pfft you wish
everyone knows science ruins your ability to write in active voice
This author loled in response to the above quote.
the "no true scotch man" fallacy.
This is more Wernicke's; the reporter sounded more Broca's.
Second: My working hypothesis is that the problem stems from the fact that no casual consumer of news will pay attention to anything that requires more than a single sentence to explain. They will read more than a single sentence about a topic, but mostly for the sake of elaboration and exploring response to the article's topic. If the actual meat of the article cannot be explained in one single sentence that converts the reader into a layman's expert on the topic material, his eyes will glaze over and he will move on.
(As a corollary to my hypothesis, I submit that nothing that can be completely explained in a single sentence is actually worth knowing.)
And journalists know this. It's fine if your article is "Yankees Win World Series" or "Man Murders Five With Axe," and this kind of reporting is generally fine as long as nothing complicated happened. But if you're talking science, or politics, or economics, or just about anything more involved than basic human interest, you're boned. Because none of those things can be adequately explained in a single sentence. And yet the journalists will try to do it anyway, because if they don't, nobody will read it. Moreover, since science reporting now consists of whatever you can get across in a sentence, there's no reason to bother learning how science works. Because all you'll be regurgitating is single-sentence bastardizations of whatever the scientist has to say. (And it doesn't help that your average scientist sucks at communicating their ideas to real people, since they're more accustomed to communicating with fellow scientists.)
So we wind up with retarded monosyllabic approximations of scientific endeavors. And people, being people, will fill in the (many, many) holes in the journalists' explanations with their own understanding of how the world works. The dreaded common sense and conventional wisdom. So you get something like global warming reduced to "Humans are putting stuff in the air and it's making the climate change, and this could be really bad." This really explains nothing, and the holes will be filled in with the obvious facts that the Earth is huge and we are small and how can something small affect something huge, lol liberal alarmists QED.
Basically, it's impossible, via mainstream media, to communicate anything to anyone that will actually inform them in a way that does not already conform with how they choose to view the world. And at this point, science reporting probably does more harm than good. I legitimately think that, the way media is currently constructed, we would see less harm if major outlets just never said anything about science ever. People would be ignorant, but at least it would be the more benign sort of ignorance whereby they just don't know things, rather than the malicious form we have now where people enthusiastically know wrong things.
I agree with the rest of your post but, even at the risk of going slightly off topic for a moment, I'm not sure the above is correct. It can take a scientist years to produce a result worth reporting, and after that you often need a lot of additional research (in addition to all your previous education and experience) to figure out what the result means. It isn't easy to communicate such results and their scientific value to a layperson in an honest, non-authoritarian way. Usually it goes a bit like this:
Uneducated Friend: What exactly is this paper about?
Me: Well it looks like some patients with disease A have a genetic variant of enzyme B that renders commonly used medication C ineffective or even dangerous to them.
Friend: So if I get disease A I should get tested for variant B before the doctors kill me with medication C?
Me: It's just a preliminary study...
Friend: OMG WHY ISN'T THIS IN THE PAPERS THEY'RE KILLING PEOPLE WITH THE WRONG MEDICATION!
Me: Dude it's just a statistical correlation in one sample set. Maybe there's a confounding variable at play, or it could be just random chance. It's a pretty exciting result, but without additional research it can't be taken as fact.
Friend: So what's the point of your paper?
Me: Well it's an addition to this greater pool of knowledge that will hopefully lead to better treatments...
Friend: So basically you haven't discovered anything.
Me: ...
It's worth noting that the paper mentioned in the OP is a review article, i.e. not original research. Review articles can have considerable clout in scientific circles; ideally they describe the current state of the art in the field. But those articles are published in the top journals of their fields, not some obscure rag nobody ever heard of. I'd be suspicious of Ratajczak's review even without the homologous recombinaltion tiniker debacle.
Uneducated Friend: What exactly is this paper about?
Me: Are you going to overreact and waste my time again if I tell you?
Friend: Yup
Me: Not telling
Or
Friend: Not if you take your time to explain it to me
Me: Some patients with disease A have a genetic variant of enzyme B that might render commonly used medication C ineffective or even dangerous to them. Of course, I could have gotten some bad data, an unlucky set of people, or there might be some other thing going on. I won't be able to tell you which of those four is the case until more research is done.
Friend: So, as of right now, you know there's something worth looking into, but you don't exactly know what it means?
Me: Yup, that's about it.
edit: the problem is you wouldn't put the unqualified statement "well it looks like some patients with disease A have a genetic variant of enzyme B that renders commonly used medication C ineffective or even dangerous to them" into any sort of formal document, because it's wrong. Either don't bother telling someone who will be an idiot, or treat them like they can think and give them the qualifiers. Also, don't assume they know all the qualifiers that go with statistics.
Well I guess ElJeffe was right after all. :oops:
In my defense, I picked a conclusion-jumping idiot as an example because I thought it was a decent analogy for how communication fails between scientists and the media/general public. It's also pretty much what happens between me and my extended family, and yeah, I'm going with the "not telling" option these days. If the person in your second example were interested in something we've published, I could just send him or her the abstract and Wikipedia would fill in the blanks.
On one hand there's a need to emphasize the importance of your result, partly because public attention often does affect your funding (and your family demands justification for you not having a real job), but on the other you want to keep your credibility and avoid overreaction.
I think that's rather unfair, honestly.
Or, rather, it depends on what you mean by "understand statistics." You need to have a basic understanding of statistics - type I and type II errors, p-values, confidence intervals, statistical significance, being able to tell the difference between ANOVA and a chi-square test, etc. - in order to design even the most rudimentary of studies. At the other end, a lot of research is done using pretty advanced statistical models with some complex multivariate analysis that goes way over my head.
So I'd say that statistical literacy among "scientists" (that is to say, researchers published in mainstream journals and people whose day jobs involve interpreting and applying that research for business or public policy) ranges from "basic fundamentals" up to "as much as any human being can know right now."
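For what it's worth, the "basic fundamentals" end of that range is easy to show in a few lines. Here's a minimal Python sketch of the chi-square vs. ANOVA distinction mentioned above; the scipy calls are real, but every number is invented purely for illustration:

from scipy import stats

# Chi-square test of independence: counts in categories.
observed = [[30, 20],   # group 1: outcome yes / outcome no
            [18, 32]]   # group 2: outcome yes / outcome no
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi:.3f}")

# One-way ANOVA: continuous measurements across three or more groups.
group_a = [5.1, 4.9, 5.4, 5.0]
group_b = [5.8, 6.1, 5.9, 6.3]
group_c = [5.2, 5.0, 5.3, 4.8]
f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_f:.3f}")

Counts get the chi-square, measurements across groups get the ANOVA; knowing which tool fits which data is pretty much the floor of the range described above.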
the "no true scotch man" fallacy.
And many of them don't even consider this a problem. I once had to listen to a speech by a fresh PhD who proudly proclaimed that she hadn't looked up what linear regression means until the day before her dissertation. Everything in her thesis rested on interpreting linear regression statistics. I can only hope someone double-checked her work.
edit: bias note: It's possible I left my old job because baffling them with bullshit was way more effective than blinding them with brilliance (well, competence; it passed for brilliance there). It's further possible I'm still bitter.
I read a paper as part of my philosophy of science class that actually argued that p-values are an awful way to test for significance and that there needs to be a fundamental reworking of how we statistically interpret scientific data.
If anyone's interested I can dig it up.
I'll admit that I work in a lab and I know very little about statistics outside of a chi-square. Then again, I still have a research stats class to take before they'll hand me a BS.
There isn't exactly a short version, but I'll try to quote/paraphrase a bit:
Suppose you have a test for schizophrenia that is better than .95 accurate: the probability of a positive test when a patient is actually normal is less than .05, so p < .05. If you got a positive test, you would reject the hypothesis that the patient is normal and conclude they have schizophrenia, with an alpha error of .05.
However, that probability (.05) is not the probability of a person being normal when the test is positive. The actual probability that a positive result is a false positive is 60%.
Say we had 1,000 cases:
Result          Normal   Schizo    Total
Negative test      949        1      950
Positive test       30       20       50
Total              979       21    1,000
You see, we're saying there's a 95% chance of declaring someone who's schizo as schizo, so of the 21 schizos, all but 1 will be properly declared. Then we have a 97% chance of declaring normal people normal, so of the 979 normals we'll declare 949 correctly and 30 incorrectly. We end up with a really high false-positive rate of 60% (30 of the 50 positive results), which would be clearly unacceptable. The problem is that even though our confidence in the test is very high, the actual incidence of schizophrenia is very low.
EDIT: Fixed the wording.
The key here is the difference between the probability that the test will show that they're positive when the person is normal, and the probability that the person will be normal if the test shows that they're positive.
EDITEDIT:
Actually, I suppose there is a short way to say it. Essentially, the p-value gives you the probability of getting your data given that your hypothesis is false (i.e., given the null), when what you need is the probability that your hypothesis is false given your data.
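If anyone wants to check the arithmetic, here's a minimal Python sketch of the same table via Bayes' rule (the variable names are mine; the numbers are the ones from the example: 21-in-1,000 base rate, 95% sensitivity, 97% specificity):

base_rate = 0.021      # P(schizo): actual incidence in the population
sensitivity = 0.95     # P(positive test | schizo)
specificity = 0.97     # P(negative test | normal)

# Total probability of a positive test, from either true or false positives.
p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# The quantity you actually want: P(normal | positive test).
p_normal_given_positive = (1 - specificity) * (1 - base_rate) / p_positive

print(f"P(positive | normal) = {1 - specificity:.2f}")            # the 'p < .05' number
print(f"P(normal | positive) = {p_normal_given_positive:.0%}")    # ~60%

Same data, same test; flipping which side of the conditional you're on takes you from 3% to 60%.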