Sunday, February 18, 2007

The Power of Critical Thinking: Scientific Method

“Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.” – Richard Feynman

(from the lecture “What is and What Should be the Role of Scientific Culture in Modern Society”, given at the Galileo Symposium in Italy, 1964.)

There’s really no better definition of science than that.

So, how do we do science? It’s really quite easy. Just follow these four simple steps.

[1] Hypothesis.

Contrary to what some people think, a hypothesis is not “just a guess” – it is a proposed model of how the universe works. Usually, it is a model of how a small part of the universe works, but it is still a model of the universe.

The most important requirement is that the hypothesis be testable – it must be falsifiable. There must be some way to determine whether the model is correct or incorrect; otherwise, it is useless. Hypotheses that invoke unseen beings, undetectable forces, or supernatural intervention are all untestable.

Needless to say, a hypothesis must explain the available data on the topic in question, although it may propose that some (or all) of the data are wrong. However, hypotheses that start out by claiming that all of the existing data are wrong usually don’t fare too well.

[2] Observation / Experimentation.

The reason that hypotheses need to be testable is that the second step is to test them. No matter how “brilliant” or “progressive” a hypothesis is, it is utterly worthless if it either cannot be tested or makes no testable predictions.

This is the core problem of “Intelligent Design” – it makes no predictions that can be tested. “Intelligent Design” essentially states that everything is as it is because some supernatural being – the “designer” – made it that way. No matter how things are – no matter what a researcher might find – it is all exactly as the “designer” made it. It’s a tidy bit of religious philosophy, but it isn’t a hypothesis.

Once you have a hypothesis in hand, the very next step is to come up with a way to test it. The best way is to see if a prediction made by the hypothesis comes true. This can either be an observation, such as the bending of starlight as it passes close to the sun, or it can be an experiment, such as the Yellow Fever experiments. The distinction between observation and experiment is a subtle one, but either one can be used to test a hypothesis.

One thing that many would-be scientists fail to realize is that they aren’t the only ones who get to test their hypothesis. Anyone can test it – and someone will, if the hypothesis is interesting enough. This leads us to the next step:

[3] Evaluation.

Once a hypothesis has been tested, the time comes to see how well it did. An unsuccessful test – one where the results were not what the hypothesis predicted – indicates that the hypothesis is not a valid model of reality and needs to be revised. Sometimes, the needed revision is drastic – such as completely abandoning the hypothesis. Either way, a hypothesis that fails to predict what will happen is not valid – it needs to be fixed or discarded.

This step is the one that separates the real scientists from the pseudoscientists. It is especially revealing when a researcher refuses to respond to criticism of their hypothesis, especially when that criticism includes data that contradicts their findings. This is a recurring problem in science and – in my experience – indicates a serious flaw in the research.

Peer review is an integral part of the evaluation process. It starts with the review prior to publication but continues long after. It is critical because it exposes flaws or weaknesses that the original researcher failed to think of – it illuminates any blind spots. Peer review also has the uncomfortable effect of forcing researchers to explain their assumptions, their methods, their results, and their conclusions.

[4] Repeat.

That's right. Repeat. And repeat again. No hypothesis gets confirmed by just one test. Not even theories - which are veteran hypotheses that have been tested and confirmed so thoroughly that they are given a certain degree of respect - get to rest on their laurels.

Sooner or later, somebody will find a new way to test, say, the theory of gravity and find a problem. Then they get to propose how to correct the theory. And then everybody else gets to critique that proposal and offer their own changes, tweaks and suggestions. And the process goes on.
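The four steps above can be caricatured as a loop: propose candidate models, test their predictions repeatedly against observation, and discard any model that fails even once. A minimal Python sketch – the candidate models and the toy “reality” are invented purely for illustration:

```python
# Candidate hypotheses: each is a proposed model of (a small part of)
# the "universe" -- here, toy linear models y = a*x + b.
candidates = [lambda x, a=a, b=b: a * x + b
              for a in range(4) for b in range(4)]

def experiment(x):
    """Stand-in for observation/experimentation; the toy 'reality'
    happens to follow y = 2x + 1 (an invented example)."""
    return 2 * x + 1

def survives_testing(model, trials=5):
    """Evaluation: a model is discarded as soon as one prediction misses.
    Repeat: a single successful prediction is not confirmation."""
    return all(model(x) == experiment(x) for x in range(trials))

# Only models whose predictions keep coming true survive the loop.
surviving = [m for m in candidates if survives_testing(m)]
```

Note what the sketch gets right about the real process: many proposed models are consistent with a little data, but repeated testing winnows them down – and even a surviving model is only “not yet falsified,” never proven.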

The process has its flaws, to be sure. It slows the acceptance of radical new ideas and can prolong old ideas beyond their time. However, it is the best system that humans have yet devised for sorting the few grains of “truth” from the vast amount of chaff. And if it slowed the widespread acceptance of some ideas – such as stomach ulcers being caused by Helicobacter pylori, a favorite of “scientific method bashers” – it also prevented the premature acceptance of ideas like cold fusion.

Imagine, if it hadn’t been for the stodgy old scientific method, our homes would be powered by cold fusion – and we’d all be sitting in the dark.

Up next – bias and self-deception.



Blogger Bartholomew Cubbins said...

Controls, controls, controls. Not the most difficult concept, but it takes planning, patience, and an intellectual curiosity satisfied by the conclusions derived from a well-executed experiment rather than some predisposed notion of what is fated to be.

Those out to make a quick buck will avoid experimental controls like the plague.

19 February, 2007 10:27  
Blogger Prometheus said...

Too true, Master Cubbins.

Controls are critical in any experiment, in order to determine if the observed phenomena are due to the variables being manipulated in the experiment or are just random variations (or, perhaps, due to fluctuations in a variable that is not being held stable).
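A toy simulation makes the point (all numbers invented): an uncontrolled background variable can make a completely inert “treatment” look like a large effect, and only the comparison against a control group reveals the truth.

```python
import random

random.seed(42)  # fixed seed so the toy example is reproducible

def measure(treated):
    """Toy measurement (invented numbers): an uncontrolled background
    variable shifts every result, and the 'treatment' has no real
    effect at all -- the `treated` flag is deliberately ignored."""
    background = 5.0
    return background + random.gauss(0, 1)

treated = [measure(True) for _ in range(100)]
control = [measure(False) for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

# Without a control group, the treated mean (~5) looks like a large
# "effect"; compared against the control, the difference is just noise.
effect_vs_zero = mean(treated)
effect_vs_control = mean(treated) - mean(control)
```

The dishonest comparison (treated mean versus zero) “finds” a big effect; the controlled comparison (treated versus control) correctly finds nothing.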


19 February, 2007 11:19  
Blogger notmercury said...

"And if it slowed the widespread acceptance of some ideas – such as stomach ulcers caused by Helicobacter pylori"

An idea, by the way, that still isn't universally accepted.

20 February, 2007 17:35  
Anonymous Anonymous said...

Interestingly, there's another condition, Central Serous Corioretinopathy (CSR), which has a pretty solid association with stress and cortisol, and has recently also been associated with H Pylori. I'm guessing they'll need to work out the association between stress and H Pylori.

24 February, 2007 06:08  
