Saturday, July 22, 2006

Another Autistic Child Murdered (Take Two)

I have deleted the previous post under this title after reading the comments - including my own - and reflecting on them. In what may be a once-in-a-lifetime occurrence, I find myself in agreement with JB Handley on one thing - that I was wrong to suggest that Generation Rescue or any of the other autism advocacy groups bear part of the blame for this latest string of killings. People are responsible for their actions, regardless of their mental state at the time, and so the only people who can be fairly blamed in these killings are the killers themselves.

As I reflected on this, I found myself wondering why these four murders - Alison Davies, Christopher DeGroot, Katie McCarron and William H. Lash, IV - had affected me so deeply. The obvious answer is that any death should affect me, but the hard, cold truth of the matter is that, with people being killed in wholesale lots both in this country and elsewhere, it is hard to maintain that sort of intensity for long.

Another plausible answer is that these are murders of children, which seems somehow worse than the murder of an adult. This is certainly true enough, but still seems a bit too pat. In 1993, the Chicago Tribune put all murdered children (defined as age 19 and younger) on their front pages - ending with a massive front-page photo spread on December 31st. As I recall, the number was well over 100. It was saddening and shocking, but somehow it didn't hit me the same way - it didn't get under my skin and into my head the way these four murders have.

Quite possibly, the reason is that I also have a disabled child, and so these murders resonated with me more strongly than others. But then it hit me - what was different about these murders, compared to the hundreds that happen every year, was that they were so terribly premeditated.

For some reason, I find it easier to understand the "heat of the moment" murder - probably from years of television shows, starting with Perry Mason, in which otherwise reasonable, normal people kill in a moment of anger. Even in the Chicago Tribune's year-long series on murdered children, almost all were clearly done in the heat of anger (over half were teenagers killed in gang-related violence; the next largest group was infants and toddlers murdered by their mothers' "boyfriends").

I have tried to understand how someone could kill their own child, tried to put myself in their shoes, as it were, with little success. I have been angry at my children, have even wanted to hit them, but I have never wanted to kill them. For me, that emotion is like the blank areas on ancient maps - "Here there be monsters" - terra incognita.

As part of my reflection, I tried to visualize what it would be like to deliberately kill my child. It just doesn't work for me - I get to the point of raising the weapon and see their sad or fearful face and I just melt. I actually wept, just from the thought of it.

In all four cases, the parents who killed their autistic children had time to reconsider what they were doing - they had to push their child off the bridge, smother them, set fire to their apartment and lock their child inside, or take down the shotgun, load it and fire (I doubt that the Lash family kept a loaded shotgun above the fireplace, not in Washington, DC). I can't even begin to comprehend the frame of mind that this would require.

And this is not to say that I haven't experienced hopelessness and despair. There have been times when I saw my entire life stretching out - past retirement and into death - as the tireless and thankless caretaker of a disabled child. But it didn't drive me to murder - it drove me to set up a trust fund. Because no matter how thankless the job may be, it's my job and I'm going to see that it gets done, even after I'm dead.

Another part of the stories that has bothered me is the way that the last two - Katie McCarron and William Lash - seemed to give the lie to the usual "explanations". In neither of these families was there serious financial want or a lack of support. More government programs or more volunteer respite care would not have materially changed the situations these families were in. Neither did it appear that there had been a pattern of irrational behavior that - in retrospect, at least - could have been seen as "warning signs".

I would like to point to the heated and polarizing rhetoric surrounding autism as a factor in these crimes, but the fact is that I just don't know. And that's because I can't even get a glimmer of what these parents must have been thinking when they set out, deliberately and with malice aforethought, to murder their children.

Years ago, I went to a wonderful lecture about the origins of the Universe - the "Big Bang". At one point, the lecturer mentioned that the condition of the newly-born Universe - the extremes of temperature and pressure - so far exceeded anything that exists in our time that he wasn't sure that our current physical laws would apply.

That's how I feel about these murders - they are so foreign to what I know and what I've experienced that they are as incomprehensible as the first microsecond after the "Big Bang".


Prometheus

Monday, July 10, 2006

The Seven Most Common Thinking Errors of Highly Amusing Quacks and Pseudoscientists (Part 3):

Thinking Error 4 – Conspiracy Theories:

To paraphrase Samuel Johnson, “Conspiracy is the last refuge of a quack.”

Assertions of conspiracy receive a great deal of play in the claims of quacks and pseudoscientists. These range from the ridiculous to the…well, even more ridiculous. No matter what their details may be (e.g. whether the enemy is “the government”, Big Oil, Big Pharma, the AMA or all four), all conspiracy claims serve the same function:

They divert attention from the failures of the person making the claim.

And, no matter what the details of the particular failure may be, at the root is the same issue:

The failure to provide data to support their claim(s).

So, whether it’s the automobile engine that runs on water (suppressed and sabotaged by Big Oil) or the “fact” that chelating out mercury cures autism (suppressed by “the government”, Big Pharma and the AMA), the reason that these conspiracy “theories” are proffered is always the same:

They can’t prove their claim(s).

After all, if someone had a working model of an automobile engine that could run on water, or the clinical data showing that chelation could cure autism, there wouldn’t be any reason to complain about interference from “the government” or Big Oil or etc.

So, what is it that the conspiracy claims do? They allow the quack or pseudoscientist to make their unsupported claim(s) and blame someone else for their lack of support. This is really no different than claiming that “the dog ate my homework”, except that there is no “dog” (and, of course, there was never any “homework”).

This brings us to the other problem with the conspiracy excuse: plausibility.

Ask yourself which is more likely: that a single person (or small group of people) might lie (or be self-deceived), or that an entire bureaucracy or corporation, filled with people who might have something to gain by revealing a guilty secret, might conspire to suppress information?

Before you answer that, consider this – the difficulty of keeping a secret rises with the number of people who know the secret. This can be mathematically represented thus:

D = n^(n-1)

Where D is the difficulty of keeping something secret and n is the number of people who know the secret.
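Just for fun, here is a minimal sketch of how explosively this (admittedly tongue-in-cheek) difficulty index grows. The formula is the one above; every number is purely illustrative and nothing here pretends to be a real model of organizational secrecy:

```python
# A toy calculation of the tongue-in-cheek secrecy "difficulty index" D = n^(n-1).
# This is only meant to show how fast the index blows up as more people are
# let in on the secret - it is not a real model of anything.

def secrecy_difficulty(n: int) -> int:
    """Return D = n^(n-1) for n people who know the secret."""
    return n ** (n - 1)

for n in (2, 3, 5, 10, 50):
    print(f"{n:>3} people in on the secret -> D = {secrecy_difficulty(n):.2e}")
```

By the time fifty people are in on the secret (a small office, never mind a federal agency), the index is astronomically large - which is rather the point.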

Even without the math, anyone who can read the newspaper knows how poorly “the government” and other large organizations manage to keep embarrassing information secret. What makes you think that the “secrets” about water-fueled automobile engines and chelation curing autism would be any different?

In short, claiming conspiracy is a near-certain indicator of quackery and/or pseudoscience. It is the adult (or, more properly, “pseudoadult”) version of “the dog ate my homework”. In addition, it is not even plausible, given the inability of “the government” and other large organizations to keep secrets.


Thinking Error 5 – Personal Infallibility:

One thinking error that comes up extremely often is the error of personal infallibility. This one seems to be shared not only by the quacks and pseudoscientists but also by their victims.

Personal infallibility does not necessarily mean that the person thinks that they are never wrong (although some do), but refers to a more subtle belief that their observations are an infallible source of “fact”. I find this a particularly amusing belief, especially in the context of modern technology’s ability to deceive the senses, displayed every day in movie theatres across the world. Yet, despite this near-daily demonstration that “seeing is NOT believing”, quacks, pseudoscientists and their faithful followers and apologists persist in deferring to their own experiences as if they were infallible.

The root of this problem is the human ability to find patterns. We are genetically adapted to find patterns in the world around us; we are so good at this that we are continually finding patterns when none exist. We “connect the dots” in optical illusions and we see causation in coincidence.
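To see how readily “patterns” emerge from pure chance, here is a minimal sketch of my own (the coin-flip example and its parameters are mine, not from any study): a fair coin reliably produces runs long enough to look meaningful to a pattern-hungry brain.

```python
# A small illustration (purely illustrative, made-up parameters) of finding
# "patterns" in randomness: 200 fair coin flips almost always contain a run of
# six or more identical results, which looks like a streak but is just chance.

import random

random.seed(3)

def longest_run(flips: str) -> int:
    """Length of the longest run of identical outcomes in a sequence of flips."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

flips = "".join(random.choice("HT") for _ in range(200))
print(flips[:60] + "...")
print(f"Longest run of identical flips in 200 tosses: {longest_run(flips)}")
```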

One of the most important steps on the path to what we now call “science” was the philosophy of the Empiricists. They held that the only way to learn about the universe was to observe it. This contrasted starkly with other “natural philosophers” of the time, who felt that they could sit in their drawing rooms and libraries, drinking brandy and philosophizing about how the universe worked. Like many of the quacks and pseudoscientists of our day, they felt that actually getting in the lab or in the field and seeing if their hypotheses worked was irrelevant. After all, some of the finest minds in the world (theirs and their associates) had agreed that the sun and planets circled the earth, so what was the point in getting cold and tired peering through some blasted telescope?

Although the Empiricists eventually won the day (though their opponents continue to populate the chiropractic and naturopathic colleges), there was a small flaw in the philosophy of Empiricism that needed correcting. You see, while large-scale physics and chemistry are fairly deterministic, there is a great deal of variation and even randomness in biology (and in physics and chemistry on the smaller scales). As a result, it is often difficult to tell if a change seen in a biological system is due to an experimental intervention or simply due to random variation.

Because of the stochastic nature of biology (which, by the way, includes medicine), it is very easy for a researcher to see a change in an organism and erroneously attribute it to some intervention they have made when, in fact, the change was not related at all to that intervention. In addition, since many of the changes seen in biological organisms are hard to measure quantitatively (e.g. pain, depression, language ability), the observer is often called upon to not only observe but to be the “measuring instrument”.

This is the reason that the multiple-subject, double-blind, placebo-controlled study is considered to be the “gold standard” in research (except by those whose “claims” are disproven by such studies). This is not to say that good data cannot be gotten any other way, but this is the standard to aim for.

So, what’s so great about the multiple-subject, double-blind, placebo-controlled study? Let me explain in parts:

[1] Multiple-subject:

Since biology has a great deal of inherent random variation, a single organism (a single person) is not a good indicator of what the population is like. After all, I am not a good example of what the human population is like since approximately half of the world’s population is of a different sex. Likewise, there are people shorter and taller than me, lighter and darker in skin color, etc.

The way to arrive at what the population looks like is to take a larger sample. You can predict mathematically the likelihood that your sample is an accurate representation of the population based on the sample size (and the population size) – the bigger the sample, the greater the probability that it reflects the reality of the population.

In addition, biological organisms change over time – if they don’t, they’re probably dead. As a result, certain changes will happen regardless of whether an intervention occurs or not. Studying a larger group will “average out” these spontaneous changes, since – by random chance – a roughly equal number will occur before and after the intervention.
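For those who would rather see this than take it on faith, here is a minimal sketch (the “population” mean and spread are invented numbers, not data from any real study) of how the sample mean closes in on the true population mean as the sample gets bigger:

```python
# A small simulation (with made-up population parameters) of why sample size
# matters: the mean of a larger random sample is a more reliable estimate of
# the true population mean than the mean of a small one.

import random

random.seed(42)

TRUE_MEAN = 100.0   # hypothetical population mean for some biological measure
TRUE_SD = 15.0      # hypothetical population standard deviation

def sample_mean(n: int) -> float:
    """Mean of a random sample of n individuals drawn from the population."""
    return sum(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)) / n

for n in (1, 10, 100, 1000, 10000):
    m = sample_mean(n)
    print(f"n = {n:>5}: sample mean = {m:7.2f}   error = {abs(m - TRUE_MEAN):5.2f}")
```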

[2] Placebo-controlled:

Taken out of order (for reasons that will become apparent), placebo control is a critical part of biological experimentation, especially when the subjects are humans. A placebo, in general terms, is a treatment that is similar enough to the studied intervention that the subject receiving it (and, ideally, the person giving it) cannot tell it apart from the “real” treatment, but that has no effect of its own. Most classically, it is a sugar pill or saline injection that is the same color and consistency as the treatment under study.

Failure to use placebo control has tripped up any number of medical researchers, including some who are legitimate scientists. Among "real" researchers, this most commonly occurs in behavioral interventions (where interaction with the “therapist” can be as big a factor as the “therapy” is supposed to be) and surgical interventions (where questions of ethics may prevent a “placebo” surgery in human subjects). Unsuspecting quacks often fail to realize that their interaction with the patient may be causing the change they measure, rather than their “therapy”. Charlatans count on it.

In human studies, failure to use a placebo can lead to falsely believing that a therapy has a beneficial effect when, in fact, it is the expectation of benefit that causes the subject to feel better. This has been borne out time and again in pain control research, where supposedly effective therapies have been overturned because subjects receiving a placebo had the same degree of relief. In pain studies, approximately 30% of subjects will report "good" or better pain relief with a placebo.

This is sometimes erroneously referred to as the “placebo effect”, which is an oxymoron. A placebo has – by definition - no effect. It is the psychological effect of the subject believing that they will feel better that causes the effect.
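To make the point concrete, here is a minimal sketch of a pretend trial (all figures invented, apart from the roughly 30% placebo response rate mentioned above): a completely useless “therapy” looks like a winner until it is compared to the placebo arm rather than to nothing at all.

```python
# A toy placebo-controlled "trial" (all figures illustrative). Both arms share
# the same ~30% background rate of reported relief driven by expectation, so an
# inert therapy only looks effective if you forget to compare it to placebo.

import random

random.seed(1)

PLACEBO_RESPONSE_RATE = 0.30   # approximate background response seen in pain studies
N_PER_ARM = 200

def reports_relief(extra_benefit: float) -> bool:
    """One subject's outcome: expectation alone accounts for ~30% of 'relief'."""
    return random.random() < PLACEBO_RESPONSE_RATE + extra_benefit

def run_arm(extra_benefit: float) -> int:
    """Number of subjects in an arm of N_PER_ARM who report relief."""
    return sum(reports_relief(extra_benefit) for _ in range(N_PER_ARM))

placebo_arm = run_arm(0.0)   # inert sugar pill
therapy_arm = run_arm(0.0)   # a useless therapy: adds nothing beyond expectation

print(f"Placebo arm:   {placebo_arm}/{N_PER_ARM} report relief")
print(f"'Therapy' arm: {therapy_arm}/{N_PER_ARM} report relief")
```

Roughly sixty "successes" out of two hundred looks impressive on its own; it only stops looking impressive when the placebo arm reports about the same.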


[3] Double-blind:

Double-blind means that neither the subject (the organism being studied) nor the observer knows whether any particular subject is receiving the placebo or the studied therapy. A single-blind study is one in which the observer knows but the subjects do not.

The advantage of a double-blind study is that not only do the subjects not know who is “supposed” to feel better, but neither do the observers. This is especially important if the measurement of “success” or “failure” of the treatment is not completely objective. When measuring blood pressure or heart rate, it is not so important that the observers not know who received the placebo, since these measures leave little or no room for observer interpretation. However, if the measures are more subjective - such as behaviors, pain, depression, social interaction, etc. - then observer interpretation can be affected by the knowledge of who is “supposed” to get better and who is not.
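Here is a minimal sketch (entirely invented numbers, and a deliberately crude model of rater bias) of how an unblinded observer can manufacture an apparent treatment effect on a subjective scale, while a blinded observer sees two indistinguishable groups:

```python
# A toy illustration of observer bias (all parameters made up). Both arms
# improve by the same amount on a 0-10 subjective scale, but an unblinded rater
# unconsciously adds a small bonus to subjects they know received "treatment".

import random

random.seed(7)

N_PER_ARM = 100

def true_improvement() -> float:
    """Underlying change on the subjective scale; identical in both arms."""
    return random.gauss(2.0, 1.0)

def rated_improvement(rater_knows_treated: bool) -> float:
    """The observer's rating; knowledge of treatment status adds a small bias."""
    bias = 0.8 if rater_knows_treated else 0.0
    return true_improvement() + bias

def arm_mean(rater_knows_treated: bool) -> float:
    return sum(rated_improvement(rater_knows_treated) for _ in range(N_PER_ARM)) / N_PER_ARM

# Unblinded rater: knows who got the treatment, so the treatment arm drifts upward.
print(f"Unblinded rater - treatment arm: {arm_mean(True):.2f}, placebo arm: {arm_mean(False):.2f}")
# Blinded rater: no knowledge, no bias - the two arms look the same, as they should.
print(f"Blinded rater   - treatment arm: {arm_mean(False):.2f}, placebo arm: {arm_mean(False):.2f}")
```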


None of this is hidden knowledge and none of this is particularly new. Yet, every day I read about patients, parents and practitioners who declare, “I see an improvement – are you calling me a liar?” These people are unaware – or are in denial of their awareness – that we humans often see exactly what we want to see and hear what we want to hear. To think that we can be truly objective – especially when it involves ourselves, our loved ones or a hypothesis we are in love with – is to claim an infallibility we are not capable of.

So, when you hear someone say, “I saw it with my own eyes!”, be sure to ask (at least to yourself), “Yes, but what would someone else’s eyes have seen?”


Coming up: Cherry picking – it’s not just happening in the orchard.


Prometheus.

Saturday, July 08, 2006

We interrupt this blog for breaking news...

Extreme Humiliation - A New Sport for the Mercury-Autism Crowd?

Kathleen Seidel, on her Neurodiversity website, has broken what may be the "story of the year" about the "dysfunctional duo" of Geier and Geier (the subjects of many posts on this humble blog). The story is told, with great skill, by Kevin Leitch on his Left Brain/Right Brain blog.

In a nutshell (which is where Geier, Sr. belongs, it would seem), in an opinion handed down on 6 July 2006, pertaining to a lawsuit alleging that the thimerosal in RhoGam caused a young child's autism, District Court Judge James Beaty addressed the qualifications (or patent lack thereof) of Mark Geier, MD in excruciating detail. Kevin covers the published legal opinion thoroughly, but there were a few aspects that I personally found too delicious to pass by.

On page 9 of the opinion, Judge Beaty discusses how Mark Geier has testified in "about one hundred cases before the National Vaccine Injury Compensation Program" and how his testimony, since 1995, has "...either been excluded or accorded little or no weight based upon a determination that he was testifying beyond his expertise." He then proceeds to enumerate those humiliations in a footnote. Ouch! To have your personal failings summed up in a footnote - how humiliating!

Judge Beaty then describes how Mark Geier "...relied on a number of disparate and unconnected studies, including the findings of Dr. Haley [more about him later] and Dr. Lucier, to reach a piecemeal conclusion..." That one's going to leave a bruise. And it just happens to be what I've been saying about the mercury-autism cohort for some time - but Judge Beaty said it so much more eloquently.

Not content with just saying the obvious, Judge Beaty then methodically dismantles Mark Geier's case for thimerosal causing autism, study by study. He even takes the time to dismantle the Hornig "Rain Mouse" study and - my favorite - the Holmes/Blaxill/Haley baby haircut study (see here and here). It's like we're the same person! He then restates what he has made so painfully obvious:

"It is also significant in the review of his methodology that Dr. Geier could not point to a single study that conclusively determined that any amount of mercury could cause the specific neurological disorder of autism."

Bang! Another hickory stake through the heart of that undead spirit, the shade of mercury-autism.

And in yet another painful restatement of reality, Judge Beaty concludes:

"Moreover, Dr. Geier's conclusion that the peer-reviewed literature he has relied upon supports his theory [more properly termed a "hypothesis" or even "ridiculous obsession"] that autism can be caused by thimerosal is flatly contradicted by all of the epidemiological studies available at this time."


Bang, bang! Yet another stake through the heart!

In a footnote, Judge Beaty points out that irrational public statements can come back to haunt a person:

"Dr. Geier has also exhibited some bias against health agencies that have criticized his methodologies on other issues to the extent that he has publicly accused the Centers for Disease Control ("CDC"), the World Health Organization, the American Academy of Pediatricians, and the National academy of Sciences of deceiving the American public as to the dangers of mercury and has specifically called the CDC a 'rogue organization' "

And it keeps getting better! Judge Beaty sums up his assessment of Mark Geier's ability to be an expert witness on the ability of thimerosal to cause autism thusly:

"Thus, while Dr. Geier's presentation of the literature as part of his methodology might at first glance appear convincing, the disconnected literature he presents does not add up to the opinion and conclusion that Dr. Geier is offering."

He then rips into the Geier and Geier dumpster-diving (VAERS database) studies in a most thorough fashion. Having dismantled them, he concludes that he need go no further in discrediting Mark Geier's testimony and then, like an encore after a virtuoso performance, he proceeds to shred Mark Geier's qualifications in general. He criticizes the elder Geier's diagnostic abilities, points out that he is neither a pediatrician nor a pediatric neurologist and brings up the painful (to Mark Geier - I found it delightfully ironic) fact that Mark Geier failed his pediatric genetics board examination.

After all that, there wasn't much else that could be said about the thoroughly discredited Mark Geier, unless Judge Beaty wanted to comment on his taste in ties.

And Mark Geier wasn't the only person getting a solid dose of ego-spanking in this opinion. Boyd Haley, who should be eternally grateful that he didn't come fully into the spotlight in this case, also got his measure of ego-trauma. In a footnote (which makes it even more humiliating, somehow), Judge Beaty describes the contributions of Boyd Haley, PhD, MCDU (Mad Child Disease Unapologist) thusly:

"...Dr, Haley's report does not state an expert opinion that thimerosal causes autism, rather just that he has a theory about how such a thing could happen." [emphasis in the original]

Were I Boyd Haley, I would be glad to have been accorded so little notice and thus have escaped a more detailed humiliation. However, the ego of this "great scientist" must smart a bit after being dismissed so casually.

So, all in all it was a happy outcome for the forces of reason and honesty. It also may be a foreshadowing of things to come when the massive (and long-delayed - by the plaintiffs) class-action vaccine lawsuit reaches the put-up-or-shut-up stage later this year.

I'll have the popcorn ready for that one!


Prometheus

Wednesday, July 05, 2006

The Seven Most Common Thinking Errors of Highly Amusing Quacks and Pseudoscientists (Part 2):

Let me begin by apologizing for my prolonged and unannounced absence from the blogosphere – the requirements of job and family prevented me from pursuing my blog for a period of time. With luck (and assuming that grant applications are accepted and funded), I should continue to have “fair sailing” for the next few months.

Thinking Error 3 – Post Hoc Correction of Hypotheses:

Before we get into this “thinking error”, I need to make abundantly clear what it is we’re talking about. One of the most fundamental characteristics of real scientists is that they are always revising, modifying and – when necessary – discarding their theories and hypotheses in light of new data. To many people outside of the scientific disciplines, this looks like indecision or just plain waffling – a “bad” thing if you’re a politician trying to stake out a position for legislation or electioneering.

However, a failure to “change with the wind” – as one politician put it – is a certain sign that a researcher has abandoned science and turned their hypothesis into a religion. Real science often means tossing out a cherished hypothesis, one that you have nurtured and raised from a mere pup of an idea, like yesterday’s newspaper, if the data warrant it. To fail in this most solemn duty is to turn down the path to the “dark side” – to pseudoscience and quackery.

At issue here is the basic purpose of a hypothesis or theory. Although many people outside of the sciences (and, regrettably, some inside as well) equate “theory” (and “hypothesis”, if they are acquainted with that term) with “idea”, the fact is that it is much more than that. A hypothesis or theory (more about the difference later) is a model of how the universe works. Now, it may be a model of a very small part of the universe (such as the replication of a virus) or it may be a model of the entire universe (e.g. the Big Bang).

No matter what the scale, the purpose of a hypothesis or theory is to give us a deeper understanding of our world by showing us the workings of the parts we can’t see. Or, sometimes, by explaining why the parts we can see do the things they do. Either way, the model – the proposed explanation – has to conform to the behavior of the real world if it is to survive. And that – in a nutshell – is the difference between a hypothesis and a theory. A hypothesis is a model that has not yet been extensively tested to see if it predicts what the real world does – a theory has already survived a number of tests successfully.

Having survived testing does not necessarily mean that the hypothesis (or theory, if it has gotten to that point) has survived unchanged. In the process of testing even the most inspired hypothesis, discrepancies are found between what the hypothesis predicts will happen and what actually does happen. Sometimes these discrepancies can be explained by flaws in the measurements or data collection, but any consistent difference between what the hypothesis predicts and what the data show must be seen as evidence that the hypothesis – the proposed model of how the world works – needs to be modified or abandoned.

The problem is knowing when a hypothesis or theory should be revised and when it should be abandoned – something that is often difficult to see until enough contradictory data has amassed. But, like that favorite old pair of jeans that you keep patching and patching, eventually a hypothesis becomes more patches than whole cloth and needs to be revamped or rejected. On the other hand, many of today’s solid, tried and tested theories went through a period where they needed some “tweaking” (or even major overhauls) in order to function.

The thinking error of post hoc correction occurs when someone tries too hard to keep a failing hypothesis “in the game”, crossing from legitimate modification of the hypothesis to frantic attempts to keep it alive at all costs. This can be – and probably usually is – done without any intent to deceive. And it can be done by people who have an impeccable record of excellence in science – as the mutual fund people always say, “past performance is no guarantee of future yields”.

The hallmark of post hoc correction is the modification of a hypothesis in response to contradictory data in a way that is:

[a] Not supported by any existing data
[b] Not tested or not testable

Let me make this clearer by two examples – one of a legitimate modification of a hypothesis and one of an illegitimate post hoc correction:

[1] Lost a star but gained a planet.

In the early 1800s, the French astronomer Alexis Bouvard undertook to publish corrected tables of the orbit of Uranus due to observed discrepancies from the orbital tables published by Jean Baptiste Delambre in 1792. He was unable to get all of the observations to fit into the predicted orbit (predicted by the theory of gravity) and so published his new tables in 1821 with the comment that he was unable to determine if the discrepancy was due to errors in the earlier observations or a “foreign and unperceived cause”.

By 1841, however, it was clear that even Bouvard’s calculations were failing to account for the actual orbit of Uranus. At this point, there were two theories in play, one of which was in need of modification – the theory of gravity or the theory that the Solar System had only seven planets. Although the majority of astronomers at the time were “betting” on the existence of an eighth planet (which we know as Neptune), there were others (i.e. George Airy, the Astronomer Royal) who felt that the theory of gravity was in need of an overhaul.

The British astronomer John Adams and the French astronomer Urbain Le Verrier each set out to locate the new planet, using the mathematical machinery of the theory of gravity to predict where it would have to be to produce the irregularities in the orbit of Uranus. In 1846, the planet was found very close to where their (more or less independent) calculations said it should be – despite resistance, reluctance and a good deal of old-fashioned mule-headedness on the part of their more senior colleagues.

To diagram the process:

a. Hypothesis (Theory, actually): Gravitational attraction is proportional to the product of the masses involved and varies with the inverse square of the distance between them.

b. Problem: The orbit of Uranus is not following the course predicted by the Theory of Gravitation.

c. Possible Explanations: The Theory of Gravitation does not apply at large distances from the Sun OR there is another planet beyond Uranus.

d. Resolution: After calculating where a planet would have to be to cause the observed perturbations of Uranus’ orbit, astronomers found a planet – Neptune – in the expected location. The Theory of Gravitation had survived another test!

The “take-home points” of this example are that the astronomers looked for supporting data (the planet Neptune) before deciding which theory to revise, and that they did not automatically assume that one theory was “privileged” and therefore not subject to scrutiny.


[2] When low means high

A few years ago, an unlikely group of researchers – a PhD academic chemist, an MD oncologist and an MBA – embarked on a project to prove that mercury caused autism. Since tests on hair, blood and urine had previously failed to show any significant difference in mercury content between autistic children and “normal” controls, they tested hair specimens that had been collected at the child’s first haircut – the so-called “first baby haircut” – and retained as a keepsake. This, they felt, would be the definitive proof that autistic children had been exposed to significantly higher mercury levels as infants (as stated by one of the researchers, Dr. Holmes, during the 2000 DAN! Conference).

Unfortunately, the mercury levels in the “first baby haircut” samples from autistic children were significantly lower than those from the “normal” controls. This might have proved to be a difficulty, had not the researchers applied a post hoc correction to their hypothesis. They concluded that, based on their data, autistic children are unable to excrete mercury as effectively as their “normal” peers. They made this conclusion despite numerous studies, many dating back a few decades, that showed mercury was passively taken up by hair rather than excreted.

In addition, a later national study showed that the hair mercury levels measured in the autistic children were very close to the national average for children of the age when these hair samples were taken (remember, the hair samples were taken when the children were one to two years old – the analysis was performed many years later). What is more, this same national study – which was not studying autism – showed that the hair mercury levels of the “normal” controls were more than fifteen times the national average!

To diagram the process:

a. Hypothesis: Mercury causes autism (subhypothesis: previous studies have failed to demonstrate high mercury levels in autistic children because the mercury “washes out” by the time of diagnosis some years later).

b. Problem: Hair mercury levels in hair taken at the “first baby haircut” of autistic children are lower than those of “normal” controls.

c. Possible Explanations: Mercury is not related to autism (apparently not considered by the authors) OR mercury protects children from autism (supported by the data, but nonsensical) OR children with autism cannot excrete mercury as well as “normal” controls (consistent with their data but not supported by it – also, not consistent with over forty years of data on how mercury and hair interact) OR the laboratory assays were in error.

d. Resolution: Rather than opt for an explanation that is consistent with known physiology, the authors chose an “explanation” that supported their hypothesis that mercury causes autism at the expense of being almost certainly wrong. In short, either dozens of researchers’ work over the past forty years (and more) is wrong or the authors of this “study” are wrong in their conclusion.

The “take home points” of this example are that a hypothesis (e.g. “autistic children cannot excrete mercury as well as non-autistic children, leading to low hair mercury levels”) which contradicts previous well-established hypotheses or theories (e.g. “mercury is not excreted in the hair – the hair mercury concentration merely reflects the blood concentration at the time the hair was formed”) needs to have data supporting it, not merely the assertions of its authors. Additionally, most of the time, many conclusions can be drawn from the data of a single study – the authors of this study were blinded to those alternative explanations by their single-minded desire to “prove” their hypothesis.


In short, post hoc corrections of a hypothesis are those which “save” the hypothesis at the expense of making it unsupported by data. You can properly “save” a hypothesis that fails to correctly predict reality by changing the hypothesis so that it predicts reality better (as was done when Neptune was added as the eighth planet). Or you can try to change reality itself by asserting that your hypothesis only predicts reality in your laboratory or in the absence of “negative thought energy”. Or you can add another unsupported hypothesis to the mix in order to make the whole thing “work”, as the authors of the “study” in example 2 did. The latter two processes are post hoc corrections and only add more unsupported assertions to a hypothesis that is – by definition – already in trouble.


Coming Up: Conspiracy! (or, Et tu, Brute!)


Prometheus