Monday, February 26, 2007

The Power of Critical Thinking: Bias and Self Deception


Some time ago, during one of those post-conference informal discussions, I heard a fellow give the best description of bias I've ever encountered. I will attempt to paraphrase:

Bias is the natural result of every person's self-centrism. We all - except for the few who are incapacitated by self-doubt - are utterly convinced that our beliefs, actions and opinions are absolutely correct at the time.

The caveat "at the time" is an essential part, because all of us (except for those who are pathologically incapable of acknowledging error) have had occasion to realize that we have made a mistake, that our beliefs, actions and/or opinions were wrong.

However, at the moment we realized that, we were convinced we were correct - correct about having previously been in error.

You can see how this could rapidly get out of hand.

But it holds together very well as a general statement of how people view their own thinking process. We (with very few exceptions) are sure that we are doing or thinking the "right" thing at the time that we are doing it or thinking it.

So, what has this got to do with bias? Plenty.

Bias is a sort of "blind spot" in a person's thinking - a place where their assurance of being right makes them vulnerable to imagining the world to be different from how it truly is. It is, in short, a minor delusional state.

As Mark Twain is reported to have said: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." So it is that the blind spot of bias gets people into trouble. Ignorance is merely a need for information or education; error - especially when it is believed wholeheartedly - is a calamity waiting to happen.

A good analogy would be a map. If your map of reality has blank spots, you are likely to be more cautious in those areas. You'll ask questions, listen to what people who have been there have to say and keep your eyes open. On the other hand, if your map has roads where there are none and smooth plains where there are cliffs and pits, then you are standing into trouble.

Bias can lead to self-deception (more about that later) when it convinces a person to ignore good information because it conflicts with their pre-formed (and self-corroborated) view of reality. Some spectacular examples of this have occurred in the media in recent years, when otherwise sage and seasoned journalists were completely taken in by fakes and phonies. That they often did so in the face of evidence that they were being hoodwinked only serves to underscore the pernicious nature of bias.

As an aside, it is laughable in the extreme to see people state that they are "not biased". Ridiculous! Everybody is biased in some way - everybody has an opinion on everything, even if that opinion is "That's too unimportant to think about." As a result, everybody is prone to - biased toward - a particular view of reality.

To reiterate: no matter what you think, you think that's the right thing - at that time. If it were a conscious decision, it wouldn't be bias. Bias influences, shades and slants conscious decisions, in ways that we are not aware of at the time. Looking back, we may be aware of how our assurance of correctness led us to disaster, but we don't - we can't - see it at the time.

To be sure, people are often aware that they are doing or thinking things that "aren't right", but that is the imposition of an external measure of correctness. It has nothing to do with whether the person feels that their thoughts or actions are right for them, in their situation, at that time. The person who is shoplifting a loaf of bread knows that it is a crime (possibly even a sin), but is assured that they are doing the right thing for their own reasons.

Self Deception:

Bias is usually a necessary precondition for self-deception. It would take an incredible strength of will to truly believe something you know to be false. I'm not sure that it can be done this side of insanity.

However, humans have shown themselves to be masters of convincing themselves that the real is false and the false is real. And the first step in this process is to be assured of the correctness of one's own thinking - our old friend bias!

In some ways, it's rather amazing that people can ever avoid self-deception. What keeps most people sufficiently grounded in reality to allow them to carry out the day-to-day activities of living is a regular contact with aspects of reality that cannot be easily ignored.

Someone who has convinced themselves of their ability to fly (sans aeroplane) will rapidly (at 9.8 m/sec/sec) be disabused of this notion. They may even survive to "internalize" the lesson. Likewise, reality intrudes its unwanted self into most self-deceptions, sometimes sooner, sometimes later; sometimes subtly, sometimes dramatically.
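As an aside, the arithmetic of that disabusal is easy enough to check. A toy calculation (the 20-metre rooftop is an invented example, not part of the original anecdote):

```python
import math

g = 9.8          # acceleration due to gravity, m/s^2, near the Earth's surface
height = 20.0    # metres -- a hypothetical rooftop for our would-be aviator

fall_time = math.sqrt(2 * height / g)   # t = sqrt(2h/g), from h = (1/2)*g*t^2
impact_speed = g * fall_time            # v = g*t, at the moment reality intrudes

print(round(fall_time, 2), "seconds,", round(impact_speed, 1), "m/s")
```

About two seconds of flight, ending at roughly 20 metres per second - ample time to reconsider the hypothesis, and ample speed to "internalize" the lesson.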

But there are some self-deceptions that are resistant to reality therapy. Some of them lack a sufficient grounding in reality to ever run across a contradiction. Most religious thought is of this nature. There is simply not enough contact between religion and reality - with some notable exceptions - to provide a convincing "whack" to the deceived... er, the devout.

Other times, the collision with reality is a long time coming, allowing people a "grace period" to believe (and perhaps entrench) their self-deception. For example, people who try quack remedies for real illnesses often have a period of time before the reality of their unchecked disease breaks through their belief. Even worse off are the people who try quack remedies for imaginary illnesses - they will now have two mutually-reinforced self-deceptions!

Additionally, there are some counter-reality thoughts that can diminish the impact of reality. Some of these are (not an exhaustive list by any means):

- It would be so embarrassing if I were wrong!
- I've invested too much time/money/effort/reputation on this to admit that I was wrong!
- People would be so mad at me if I were wrong!
- I really trust the person who told me this...they can't be wrong!
- I really need for this to be right!

You'll notice that I haven't included anything like "It's all a conspiracy!" or "They're lying to me!" That's because these sort of thoughts are the result of self-deception (and might even be considered diagnostic of it) rather than contributing factors. Once a person has convinced themselves that the cost of being wrong is (for them, at this time) greater than the cost of persisting in error, then the mind will generate a suitable set of excuses (sometimes called "rationalizations" or even "delusions") in order to maintain their denial of reality.

Rationalizations are a way of covering up the gap between reality and a person's conception of reality. The more their mind-generated world-view comes in conflict with reality, the more rationalizations are needed. At some point, they may even cross the imaginary line between denial and delusion.

People who are in denial are often very hostile toward people who try to bring them back to reality. After all, it's hard work to maintain all those rationalizations and they do not appreciate visitors who want to track reality all over their snug, safe sanctuary. They will often lash out at people who have the temerity to disagree with them.

As a result, people in denial often seek out others with the same - or similar - world-view. The Internet has aided this process immensely, providing innumerable places for people to gather to share, refine and reinforce their denial of reality.

Avoiding Self Deception:

While it would be nice if we could all look reality in the face at all times, this simply is not the human condition. Failing a complete refit of the human psyche, what can we do to avoid self-deception?

[1] Check your ideas with someone else.

This is the basic concept behind "peer review" in scientific journals. Your ideas, methods, data and conclusions are put before a number of independent (they aren't all sitting in one room, influencing each other) reviewers who read it and give their critique (and usually criticisms). This provides a number of minds that - in all probability - do not have the same biases (the same blind spots) as the author(s).

For the average citizen, "peer review" can be a bit harder to find. A common mistake is to ask someone who already thinks as you do to critique your idea. Thus we have UFO conspiracists asking other UFO conspiracists if the small sharp thing in their bum is a splinter or an alien mind-control device, with the predictable bizarre answer.

This is also where a lot of the news media have fallen foul of self-deception. In a group with the same political and social mindset, it may be hard to find someone to say, "Gee, Dan, that memo doesn't sound very believable to me. Maybe you ought to check it out better." It's also unreasonable to expect people to tell the boss that they think he's gone crackers. Better to find an independent appraisal.

[2] Be skeptical of everything you hear, especially if you agree with it.

Again, the news media would have saved themselves a number of black eyes and bruised egos if they had followed this simple rule. For that matter, a number of people in science would have been better off if they, too, had heeded this advice.

You are very much more likely to be taken in by a falsehood that conforms to your world-view (your biases) than to believe a truth that doesn't. Keep that always in your mind. Question the basis for any statement that claims to be "fact"; question it twice if you find yourself wanting to believe it.

[3] Occasionally step back and ask yourself, "What would it take to make me believe/not believe this?"

When we are self-deceived, the answer to this simple question - if we are honest with ourselves - is most often "There is nothing that would change my mind." This is a very bad answer because it means that your belief (or non-belief) in something is a matter of religious devotion, not reason. If that's OK for you, so be it. But it might just prevent a nasty misstep if you can recognize that you are not seeing the world as it really is.

This has just scratched the surface of a topic that could fill volumes (and has!). Perhaps I'll add another volume to the world's collection someday. It's on my list (right after cleaning out the attic).

Until next time!


      Sunday, February 18, 2007

      The Power of Critical Thinking: Scientific Method

      “Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.” Richard Feynman

      (from the lecture “What is and What Should be the Role of Scientific Culture in Modern Society”, given at the Galileo Symposium in Italy, 1964.)

      There’s really no better definition of science than that.

      So, how do we do science? It’s really quite easy. Just follow these four simple steps.

      [1] Hypothesis.

      Contrary to what some people think, a hypothesis is not “just a guess” – it is a proposed model of how the universe works. Usually, it is a model of how a small part of the universe works, but it is still a model of the universe.

      The most important requirement is that the hypothesis be testable - it must be falsifiable. There must be some way to determine if the model is correct (or incorrect); otherwise, it is useless. Hypotheses that invoke unseen agents, undetectable forces or supernatural beings are all untestable.

      Needless to say, a hypothesis must explain the available data on the topic in question, although it can explain that some (or all) of the data are wrong. However, hypotheses that start out with the claim that all of the existing data are wrong usually don’t fare too well.

      [2] Observation / Experimentation.

      The reason that hypotheses need to be testable is that the second step is to test them. No matter how “brilliant” or “progressive” a hypothesis is, it is utterly worthless if it either cannot be tested or makes no testable predictions.

      This is the core problem of “Intelligent Design” – it makes no predictions that can be tested. “Intelligent Design” essentially states that everything is as it is because some supernatural being – the “designer” – made it that way. No matter how things are – no matter what a researcher might find – it is all exactly as it was made by the “designer”. It’s a tidy bit of religious philosophy, but it isn’t a hypothesis.

      Once you have a hypothesis in hand, the very next step is to come up with a way to test it. The best way is to see if a prediction made by the hypothesis comes true. This can either be an observation, such as the bending of starlight as it passes close to the sun, or it can be an experiment, such as the Yellow Fever experiments. The distinction between observation and experiment is a subtle one, but either one can be used to test a hypothesis.

      One thing that many want-to-be scientists fail to realize is that they aren’t the only ones who get to test their hypothesis. Anyone can do it – and will, if the hypothesis is interesting enough. This leads us to the next step:

      [3] Evaluation.

      Once a hypothesis has been tested, the time comes to see how well it did. An unsuccessful test – one where the results were not what the hypothesis predicted – indicates that the hypothesis is not a valid model of reality and needs to be revised. Sometimes, the needed revision is drastic – such as completely abandoning the hypothesis. Either way, a hypothesis that fails to predict what will happen is not valid – it needs to be fixed or discarded.

      This step is the one that separates the real scientists from the pseudoscientists. It is especially revealing when a researcher refuses to respond to criticism of their hypothesis, especially when that criticism includes data that contradicts their findings. This is a recurring problem in science and – in my experience – indicates a serious flaw in the research.

      Peer review is an integral part of the evaluation process. Peer review starts with the review prior to publication but continues long after. It is a critical part of the process because it exposes flaws or weaknesses that the original researcher failed to think of – it illuminates any potential blind spots. Peer review also has the uncomfortable effect of forcing the researcher to explain their assumptions, their methods, their results and conclusions.

      [4] Repeat.

      That's right. Repeat. And repeat again. No hypothesis gets confirmed by just one test. Not even theories - which are veteran hypotheses that have been tested and confirmed so thoroughly that they are given a certain degree of respect - get to rest on their laurels.

      Sooner or later, somebody will find a new way to test, say, the theory of gravity and find a problem. Then they get to propose how to correct the theory. And then everybody else gets to critique that proposal and offer their own changes, tweaks and suggestions. And the process goes on.
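      The four steps can be caricatured as a loop. A minimal sketch (the toy "universe" and the revision rule below are invented for illustration; real science is considerably messier):

```python
def universe(x):
    # The "real" law of our toy universe -- hidden from the scientist, in principle.
    return 3.0 * x

def run_science(k, trials=10, tolerance=1e-9):
    # [1] Hypothesis: "the universe behaves as y = k * x"
    for x in range(1, trials + 1):      # [4] Repeat. And repeat again.
        predicted = k * x               # [2] Experiment: make a testable prediction...
        observed = universe(x)          #     ...and see what actually happens
        error = observed - predicted
        if abs(error) > tolerance:      # [3] Evaluation: the prediction failed...
            k = k + error / x           #     ...so revise the hypothesis to fit the data
    return k

print(run_science(1.0))   # a wrong initial hypothesis gets corrected by testing
```

      Note that the loop never proves the hypothesis "true" - it merely keeps the model that has survived every test so far, which is all any theory can claim.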

      The process has its flaws, to be sure. It slows down the acceptance of radical new ideas and can prolong old ideas beyond their time. However, it is the best system that humans have yet devised for sorting the few grains of “truth” from the vast amount of chaff. And if it slowed the widespread acceptance of some ideas – such as stomach ulcers caused by Helicobacter pylori, a favorite of “scientific method bashers” – it also prevented the premature acceptance of ideas like cold fusion.

      Imagine, if it hadn’t been for stodgy old scientific method, our homes would be powered by cold fusion – and we’d all be sitting in the dark.

      Up next – bias and self-deception.


      Sunday, February 11, 2007

      Word Play: Believing

      The English language is often tricky, even for those of us who grew up speaking it. One problem is that some words have several - often somewhat contradictory - meanings. One such word is "believe":

      be·lieve /bɪˈliv

      –verb (used without object)

      1. to have confidence in the truth, the existence, or the reliability of something, although without absolute proof that one is right in doing so: Only if one believes in something can one act purposefully.

      –verb (used with object)

      2. to have confidence or faith in the truth of (a positive assertion, story, etc.); give credence to.

      3. to have confidence in the assertions of (a person).

      4. to have a conviction that (a person or thing) is, has been, or will be engaged in a given action or involved in a given situation: The fugitive is believed to be headed for the Mexican border.

      5. to suppose or assume; understand (usually fol. by a noun clause): I believe that he has left town.

      "Believe" crops up in the context of this 'blog because of the disparate ways in which people have validated the assertions that they make.

      In my business, "believe" is used in the sense of definition 2, with the additional caveat that our confidence in the truth of some statement is based on examination and understanding of the data used to support that statement.

      Thus, when I say, "I believe in the Theory of Evolution," I am not expressing a religious faith (as the Creationists/Intelligent Designers would have you think), but am stating:

      "I have examined the theory of evolution and have found it consistent with the available data from current life and the fossil record."

      It's just shorter to say that I "believe" it.

      Another way in which some people use "believe" in the realm of science (as opposed to religion, where definition 1 would be entirely appropriate) is to indicate that they trust the source of their information such that they are willing to accept it without actually understanding it. In essence, they are simply "parroting" what they have been told.

      Now, most of the people who are acting as "parrots" don't see it that way. They are usually convinced that they understand the topic quite well enough to see that their assertions are correct. They assume that a superficial - and usually over-simplified - understanding of an issue can make them equal to people who have studied the topic in depth.

      To some extent, this belief (definition 1) that "everyday people" can understand a topic as well as (or, in some instances, better than) the "experts" is a lingering zeitgeist of the 20th century, epitomized by many in the "Baby Boomer" generation (my generation, regrettably). It is, unfortunately, no more a reflection of reality than most of the philosophical nonsense to come out of that generation.

      Moral philosophy, no matter how good it makes you feel, never trumps reality. The Soviet Union had to learn this lesson the hard way (see: Trofim Lysenko) and it appears that many in the Western World are determined to repeat their mistakes.

      No, as much as we might like it to be different, there is no substitute for actually learning the subject. Reading "Molecular Biology for Dummies" will not put you on par with someone who has put in the hours and effort required to really learn the subject. This is not to say that people with lots of education and advanced degrees cannot be wrong - that is most definitely not true (see: cold fusion)! However, when discussing their field, the smart money is betting on the "expert" over the "self-educated".

      So, why is it that so many people think that they know more about, say, autism and mercury than the people who have studied it for decades? For the answer to this question, we have to go to the source - literally.

      These days, it is considered indelicate for a doctor or scientist to say (or imply) that a topic is too complex for the "public" (i.e. the "average" person) to understand. This despite the all too obvious fact that many - if not most - topics in science have grown too complex even for scientists outside the field to fully grasp. The "average" person, with at most a semester or two of college science, hasn't a chance.

      As a result, people read about some aspect of biology, chemistry or physics in Scientific American and think that they can intelligently discuss the matter with someone who does research in the field (true story). And in situations where the source either doesn't have a full grasp of the topic or has a personal reason to over-simplify the matter (actually, those two aren't mutually exclusive), the potential for deception - intentional or not - is extreme.

      For example, there is currently a belief that the GnRH agonist leuprolide (Lupron) will help autistic children. On the part of the people originating this idea, the belief (definition 2) in Lupron's effectiveness is based on two other beliefs (definition 2?):

      [1] Autism is caused by mercury toxicity

      [2] Testosterone and mercury form a complex, with sheets of testosterone surrounding mercury atoms

      The validity of belief [1] is very doubtful at present, but belief [2] is valid - with one caveat:

      The complex of testosterone and mercury has only been seen when equimolar amounts (equal numbers of molecules of each) were mixed with the minimal amount of hot (50 degrees C, about 122 degrees F) benzene.

      Now, it is entirely possible that the people who started this idea are unaware of the little caveat above. But I'd be willing to bet a largish sum that the people who are parroting the "Lupron helps cure autism" claim are completely unaware of it. Yet it is information that is freely available...if you can understand it.

      So why do I worry about people repeating nonsense they don't understand?

      A large part of the problem with the parroting of bad (or unsubstantiated) information is that people - so the psychologists tell me - tend to give credence to things they hear from numerous other people. "Everybody's saying it, so it must be true!" Such tempting logic, but false.

      This same sort of "logic" crops up in the various autism polls, surveys and questionnaires that litter the Internet. The folks running them must believe (definition 1, I suspect) that a collection of "average people" will generate an above average understanding of the topic. The myth of the "wisdom of the common man" is repackaged as the myth of the "wisdom of common parents".

      Don't get me wrong; I think that parents know a lot about their kids - I know a lot about mine. I also know a lot about my car, but I don't think for a moment that I understand its inner workings better than my mechanic. I still get peeved when she can't find the rattle or shimmy that I describe, but I don't think that I know more about how my car works than she does.

      In the end, scientific "fact" has traditionally shown no respect for opinion polls. No matter how many school children want Pi to equal 3, the mathematical relationship between a circle's circumference and diameter remains obstinately the same (3.14159...). And even majority support on the school board won't make "Intelligent Design" a valid hypothesis - let alone a theory. The same will happen with polls and surveys about the cause of autism - no matter how many people "vote" for mercury, the facts will remain unchanged.
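      To belabor the point: the value of Pi is not up for a vote. Archimedes' inscribed-polygon method (one sketch among many ways to grind out the digits) arrives at 3.14159... no matter how the school board feels about it:

```python
import math

def archimedes_pi(doublings):
    # Perimeter of a regular polygon inscribed in a circle of radius 1,
    # starting from a hexagon and repeatedly doubling the number of sides.
    n, s = 6, 1.0                                  # a hexagon's side equals the radius
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))    # side length after doubling the sides
        n *= 2
    return n * s / 2                               # perimeter / diameter -> approaches pi

print(archimedes_pi(0))    # the crude hexagon estimate: 3.0
print(archimedes_pi(13))   # converges toward 3.14159...
```

      The hexagon gives the school children their beloved 3 - and every doubling of the sides marches the answer inexorably toward 3.14159, opinion polls notwithstanding.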

      You can choose to believe that or not.


      Wednesday, February 07, 2007

      What You Want is What You Get

      A few weeks ago, a reporter interviewed me about my research. During the interview, she asked me, “What do you want the result to be?” I was flabbergasted – it truly had never occurred to me to “want” an outcome to my research. I want to find out what really happens, what reality is – at least in the small world of my research project.

      “Wanting” an outcome is what starts a lot of pseudoscience. The Holmes, et al paper is a great example of what happens when you “want” a certain result. Certain that mercury was the cause of autism, the authors took their very screwy results and spun a “Just So” story that, as it turned out, “Just Isn’t So”.

      This is not always a conscious distortion of reality. Too often, it is the absolute certainty that their hypothesis is right that leads otherwise rational scientists over the edge and into the abyss. Just ask Stanley Pons and Martin Fleischmann, “discoverers” of cold fusion. Neither of them was a “crackpot”, but they let their belief in their own hypothesis blind them to the flaws (major flaws) in the data.

      Which leads us to the question of mercury and autism.

      Let’s go ‘way back to the beginning of the hypothesis. Some folks thought that the apparent rise in autism prevalence (the “autism epidemic”) starting around 1985 could be due to the increase in thimerosal-containing vaccines that happened at about the same time. The timing wasn’t particularly close, but it wasn’t the worst hypothesis ever written. The known neurotoxicity of mercury made it biologically plausible (but didn’t prove that thimerosal could cause autism).

      In real science, a hypothesis needs to explain the data – all the data – or it needs to be revised (or replaced). Things started to go wrong with the mercury-autism hypothesis when the Madsen, et al study failed to show a drop in Danish autism prevalence after Denmark removed thimerosal from its vaccines. There were some methodological problems with the study (which were spelled out by the authors), but it certainly raised a lot of doubt.

      Before too long, more studies came out showing the lack of association between thimerosal dose and autism prevalence (Verstraeten, et al and Andrews, et al; Fombonne, et al). Rather than modifying (or abandoning) the mercury-causes-autism hypothesis, its proponents concentrated on attacking the motivations and ethics of the researchers.

      A few studies using the thoroughly discredited VAERS database (see Goodman and Nordin) attempted to refute the better-designed studies, but were generally disregarded, except by those who were desperately trying to keep the mercury-causes-autism hypothesis alive. A report of declining autism prevalence (Geier and Geier) was not only poorly done but, as later data revealed, wrong.

      Here is the crux of the matter: the autism prevalence data from the California Department of Developmental Services (CDDS) and the United States Department of Education (USDE) have not shown a decline in autism prevalence. That is not an issue in question – it is simple fact. Whether or not this data is valid (and there is some doubt about that, see Shattuck, Newschaffer and Laidler), it was what the original mercury-causes-autism hypothesis was based on.

      The reason that the prevalence of autism is so critical to the mercury-causes-autism hypothesis is, of course, because thimerosal was removed from children’s vaccines in the US sometime between 2000 and 2001 (depending on which source you use). No matter when it was finally completely removed, the thimerosal dose received by children in their vaccines has [a] not risen since 1999 and [b] has significantly fallen since 2000.

      Thus, even if thimerosal had remained in all children’s vaccines at its 1999 concentration – which it hasn’t – the autism prevalence should have reached a plateau by now. And it hasn’t.

      Let me repeat that, for emphasis.

      Even if the total amount of thimerosal that a child received in vaccines had remained at the 1999 level, the prevalence of autism would have reached a plateau by now if thimerosal was a major cause of autism.

      Given that the thimerosal dose received by children born after 2001 (to allow an overly generous time for all the “on the shelf” doses to be used) has been significantly reduced, we should have seen a decline in autism prevalence by now, if thimerosal caused a significant fraction of autism cases. And we haven’t – decidedly not.
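      The logic of that prediction can be made concrete with a toy birth-cohort model (every number below is invented for illustration - this is the shape of the argument, not real epidemiology):

```python
def predicted_prevalence(birth_year, dose_by_year, baseline=5.0, k=20.0):
    # Under the mercury-causes-autism hypothesis, autism prevalence in a
    # birth cohort should track the thimerosal dose that cohort received.
    # (baseline and k are arbitrary illustrative constants.)
    return baseline + k * dose_by_year[birth_year]

# Hypothetical relative doses: rising through the 1990s, flat to 2001,
# then sharply cut after thimerosal was removed from children's vaccines.
dose_by_year = {1995: 1.0, 1997: 1.2, 1999: 1.4, 2001: 1.4, 2003: 0.2}

for year in sorted(dose_by_year):
    print(year, predicted_prevalence(year, dose_by_year))
```

      The hypothesis predicts a plateau for the 1999-2001 birth cohorts and a sharp decline thereafter. The CDDS and USDE numbers show neither.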

      Now, the mercury-causes-autism proponents have been forced to resort to a variety of “bait and switch” tactics to keep their supporters’ eyes off of the poverty of their hypothesis. Mercury from power plants, China and crematoria has been invoked – despite the fact that mercury deposition rates have been declining since 1961 (Roos-Barraclough, et al).

      They have also proposed mercury in dental amalgams, Rho-Gam shots and flu shots as potential sources of mercury.

      The basic problem with all this “bait and switch” is that they are acting as though the core assertion of their hypothesis – that mercury can cause autism – has been proven, which it decidedly has not!

      The failure of US, UK, Danish, Swedish and Canadian autism prevalence to fall following the removal of thimerosal from children’s vaccines has not supported their claim – in fact, it has weakened it. In addition, showing that mercury can cause autoimmune disorders, “oxidative stress” and neuronal injury is not the same as showing that it can cause autism.

      Pretty much everybody knows that mercury is not a good thing for you. This is not the question. The question at hand is whether it can cause autism. And the answer to that question – at least so far – has been “No.”

      So, asking if I’m in favor of exposing kids to mercury or if I think that mercury is a good thing for kids is simply a ploy to shift attention away from the glaring absence of data supporting the mercury-causes-autism hypothesis.

      And pointing out that flu shots have mercury, or that there is mercury in our food, water and air is just more of the same. The onus is on those who propose that mercury causes autism to bring sufficient data (data, not testimonials, stories about “recovered” children or wild fantasies about governmental conspiracies) to the table to show why their hypothesis is a better explanation of how the universe (and neurobiology) works than the “null hypothesis” (that mercury doesn't cause autism).

      A few ground rules:

      As a personal pre-condition (a promise made to myself) of returning to ‘blogging, I have elected to moderate all comments. I do this to prevent the sort of free-for-all insult-fests that are all too common from certain individuals (who need not be named).

      I expect people to behave themselves in a civilized fashion. Do not bring fights from other ‘blogs into this one simply because you have been banned somewhere else. Fore Sam has already gotten himself banned for a day for doing just that. And because he has already used up his lifetime allotment of second chances. Fore Sam, consider yourself on permanent probation – you’ve earned it.

      If it upsets you that your commentary is not appreciated here, then you are free to set up your own ‘blog and rant about my arbitrary justice to your heart’s content. You are free to say what you want here, as long as you remain within the bounds of civilized discourse.

      Finally, do not be surprised or upset if your assertions ("It's right because I say it's right! Are you callin' me a liar!?!"), anecdotes ("It worked for me!"), unsupported hypotheses ("I don't have any data! I don't have time to get data - I'm busy saving lives!") and conspiracies ("It's a guv'mint plot!") fail to convince. That's just the way science works.



      Sunday, February 04, 2007

      Prometheus Rises

      Sorry to have bugged out so suddenly and for so long. I hope that I still have a few readers left after all this time.

      I unexpectedly received a grant (these days, almost nobody expects to get an NSF grant) and have been busy getting things underway. With only a year to show results, I decided it was better to scurry now than scramble later.

      Anyway, life has returned to the usual degree of insanity, so I'm going to take a foray back into blogging. In the six months that the blog has been cooling, I have continued to monitor the blogosphere. And what I've seen has not been encouraging.

      Mercury/Autism Madness:

      Despite several goalpost moves, time is running out on the thimerosal-causes-autism hypothesis. Autism numbers from the USDE and Cal DDS continue to rise in blatant disregard of several predictions from prominent mercury/autism spokespeople that autism numbers would be falling "in 2005", "in 2007" and "sometime real soon".

      Anticipating the inevitable demise of the thimerosal-in-vaccines-causes-autism hypothesis, the promoters of mercury/autism are branching out. Their strategies to date (and I may have missed a few) are:

      [1] Trace amounts of thimerosal remain in the vaccines -

      This tactic has an almost homeopathic ring to it. According to its advocates, the trace amounts (nanograms) of thimerosal in vaccines are as effective at causing autism as the larger amount (micrograms) previously in vaccines. This ignores the fact that the amount of thimerosal in current vaccines is orders of magnitude lower than the amount children received in the 1960's and 1970's - before the "autism epidemic" started.

      [2] It's the aluminium, formaldehyde, or other additives in the vaccines.

      This is the old "bait and switch" tactic. They have no data to support their proposed link between thimerosal and autism, but they have created a public uproar about it. This strategy is an attempt to transfer the "buzz" they created about mercury to some other vaccine component.

      It took years to put a stake through the heart of mercury/autism and it would take just as long to accumulate the data to refute the aluminium/autism hypothesis. This would be a tremendous waste of time and money, since there is even less reason to suspect that aluminium in vaccines can cause autism.

      [3] Mercury from other sources is causing autism.

      Considering the numerous studies that have shown that terrestrial mercury deposition is lower now than at any time since the early 1960's, this is an obvious desperation shot. It is clearly aimed at the general public, rather than anyone in the scientific community, where it was dead on arrival. Mercury from power plants, crematoria or China is not going to push exposure above where it was in the 1960's (or 1880's), so this has no real hope of success.

      [4] Data from USDE and Cal DDS are not valid.

      This one is almost embarrassing, since it's what their opponents have been saying all along. However, there are a few arguments in the blogosphere from mercury/autism proponents that are using this line of "reasoning". Having used the USDE and Cal DDS data to support their "autism epidemic", they now argue that the same data source cannot show the expected decline. In other words, the data can only show a rise in autism prevalence, not a decline.

      Of course, more than a few people have argued for years that the USDE and Cal DDS don't show anything useful about real autism prevalence - up or down. Now, having (ab)used the data to come up with an "autism epidemic", they argue that the data is no good. Amazing!

      Autism "Therapies"

      The insanity continues unabated. Risperidone is an evil pharmaceutical and Lupron is a wonder drug. Amazing. There are numerous studies showing that the former is a useful treatment for some children with autism and nothing except some wild (and nonsensical) supposition to support the latter. Yet parents are told to fear and shun "drugs" and treat their children with Lupron (not a drug?) instead.

      I eagerly await the news that the "alternative" autism treatment community has found a single therapy without merit and abandoned it. But it hasn't happened yet. And I suspect that it will never happen.

      Real medicine periodically finds therapies lacking and discards them. "Alternative" medicine appears to make no mistakes and adopts nothing but effective therapies. This would be in keeping with the larger-than-life superhuman nature of its practitioners, no doubt.

      Creationism ("Intelligent Design")

      Having undergone several painful and public drubbings, the Creationists have returned to their holes to lick their wounds. But, they will be back - count on it. In fact, they are already sending out some tentative feelers to see how they can get their religion back into the science classes. The first of these may be an attempt to cast "Darwinism" as a religious faith. Another is the portrayal of the recent court cases as attempts to "suppress scientific dissent".

      In either case, we will have to remain vigilant against the intrusions of the creationists into the science classrooms. I have my own ideas, which I will in due course trot out for evaluation right here.

      In the meanwhile, it's good to be back in the blogosphere.