i've been meaning to post about scientific stuff i've read recently, but i've been a tad busy. the long weekend gives me some time to clear my "fun" to-do list.


i usually don't cite science news, since it's a popular magazine. while the authors (usually) try hard to get the science right, they're not perfect, and in any case, their desire to popularize science sometimes leads them to simplify complex information to the point of being wrong. however, science news recently ran an article on psychology's problems that includes direct quotes from researchers trying to solve them. that makes it a primary source worth talking about.



klaus fiedler understands the problem alright, but he then gets lost looking for a solution:
[P]sychologists tend to overlook or dismiss hypotheses that might topple their own, says Klaus Fiedler of the University of Heidelberg in Germany. They explain experimental findings with ambiguous terms that make no testable predictions at all; they build careers on theories that have never bested a competitor in a fair scientific fight. In many cases, no one knows or bothers to check how much common ground one theory shares with others that address the same topic. Problems like these, Fiedler and his colleagues contended last November in Perspectives in Psychological Science, afflict sets of related theories about such psychological phenomena as memory and decision making. In the end, that affects how well these phenomena are understood.

Fiedler’s critique comes at a time when psychologists are making a well-publicized effort to clean up their research procedures, as described in several reports published alongside his paper. In fact, researchers generally concede that many published psychology studies have been conducted in ways that conceal their statistical frailty — and thus the validity of their conclusions. But Fiedler suspects the new push to sanitize psychology’s statistical house won’t make much difference in the long run. Findings published in big-time journals draw enough media coverage to bring the scrutiny of other researchers, who eventually expose bogus and overblown effects. “Advances in psychology will depend more on open-minded theoretical thinking than on better monitoring of statistical practices,” he says.
it's that last sentence that's the problem. sure, everybody likes their pet theories; that problem is hardly unique to psychology. nor does it appear that the field wants for theoreticians; i'd say it's rather over-endowed with them. they all seem to want to dream up ideas for how the mind works, but then work hard at avoiding checking those ideas, lest their dreams fall apart.

in the real sciences, verification of ideas -- or rather falsification of them; thanks, karl popper -- is what we spend most of our time doing. a lot of that is simply gathering and analysing data. statistics is a big help for that, because it allows us to figure out what's going on when there's variation between the things being studied.

psychology studies people -- or tries to -- so it should be no surprise to those in the field that people don't all act the same way. that's true in biology as well, even at the molecular level: two adjacent skin cells in your body have different arrangements of their components, and they act subtly differently. in some experiments, that difference matters. it's even true in particle physics: collisions between sub-atomic particles aren't all alike; many are tiny fender-benders but it's usually the rare head-on collisions that do the neat stuff. since variation is so typical in the real world, we frequently need statistics to study it. statistical results -- when properly used -- make up the observations in many fields.
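as a toy illustration of why statistics earns its keep here, consider this little python sketch (the numbers are entirely made up): each individual "measurement" is scattered by variation, but averaging enough of them pins down the underlying value, and the standard error tells you how well.

    import random

    random.seed(211)

    TRUE_VALUE = 10.0   # the underlying quantity we're trying to measure (made up)
    SPREAD = 2.0        # individual-to-individual variation (also made up)

    def measure():
        """one noisy observation: the true value plus random scatter."""
        return random.gauss(TRUE_VALUE, SPREAD)

    for n in (5, 50, 500, 5000):
        samples = [measure() for _ in range(n)]
        mean = sum(samples) / n
        variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
        std_err = (variance / n) ** 0.5   # shrinks roughly as 1/sqrt(n)
        print(f"n={n:5d}  mean={mean:6.3f}  std. error={std_err:.3f}")

no single measurement is trustworthy on its own, but the statistical summary of many of them is -- which is why statistical results, properly handled, count as observations.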

at their core, the real sciences are built on detailed observations. isaac newton's mechanics was built on careful measurements that galileo galilei and others made. albert einstein didn't just think up relativity because his day job was dull, but because other scientists had gathered a bunch of information that showed that light -- when measured carefully enough -- simply refused to act the way we thought it did. (/tips hat to albert michelson and ed morley.) antoine lavoisier built chemistry out of spare parts left over from alchemy¹ by deliberately attempting to copy galileo's methods. the real sciences have been very successful in understanding the world; psychology wants to be just like them when it grows up.

1: he also had an amazingly complete intellectual machine-shop in his head to rework all the bits that were simply too bizarrely shaped to fit his new world-view.

so, i think herr fiedler is simply leading his field into a morass. rather than adopting the methods that work so well in the real sciences, he wants people to be more willing to live in different castles in the air. this isn't gonna get him or psychology anywhere. shooting down some of those castles just might.


geoffrey loftus, the other guy science news talked to, is just as confused. worse, he thinks he knows what he's talking about:
Geoffrey Loftus, a psychologist at the University of Washington in Seattle, is an ally in Fiedler’s battle to broaden psychology’s perspectives. As editor of Memory & Cognition from 1993 to 1997, Loftus implored researchers to avoid a standard statistical practice in psychology known as null hypothesis significance testing that, in his view, perpetuates theoretical chaos. He continued to attack the practice in a talk last November at the Psychonomic Society’s annual meeting in Minneapolis.

Null hypothesis refers to a default position: that there is no relationship except chance between two measured phenomena in an experiment (for example, it’s only by chance that college students walk at different speeds after they’ve read words that refer to old age). To conclude that there are grounds to say that a relationship exists between two phenomena, the null hypotheses must be rejected. This technique requires researchers to calculate whether an assumption that no experimental effect exists can be rejected as statistically unlikely based on measured differences between groups.

This is a statistical charade, Loftus contends, since measures taken before and after any test are virtually never the same. Rejecting a null hypothesis doesn’t tell a researcher anything new, even if the threat of finding an effect that doesn’t really exist has been eliminated. “Significance testing is all about how the world isn’t,” Loftus contends, “and says nothing about how the world is.”
um, no. or more bluntly: well duh. of course there's variation in before-and-after tests; that's obvious and inevitable.

what he fails to grasp is that this is why real scientists run control experiments -- those in which nothing new is done to the things or people being studied -- to get a handle on how much variation there is between them. one then has to show that the before-and-after variation in the groups that did experience something new (the "experimental" groups) is different from the before-and-after variation in the groups that experienced the same thing throughout (or experienced some change that's not expected to matter).

this is, for example, a fundamental part of figuring out whether new medical treatments work: new treatments are compared to old ones by giving the new one to an experimental group while continuing to use the old one to treat a control group. people's responses to both old and new treatments vary quite a bit, but we hope that the experimental group responds differently... for the better. if the difference between the groups is large enough compared to the variation within them, that lets us reject the null hypothesis and conclude that the new treatment works better.
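here's a minimal sketch of that comparison in python, with made-up response scores and an assumed effect size, using scipy's two-sample t-test (one common way to do this kind of test; real trials use more careful designs):

    import random
    from scipy import stats   # assumes scipy is installed

    random.seed(211)

    # hypothetical response scores: both groups vary a lot, but the new
    # treatment nudges the experimental group's average upward.
    control      = [random.gauss(50, 10) for _ in range(100)]   # old treatment
    experimental = [random.gauss(55, 10) for _ in range(100)]   # new treatment

    # null hypothesis: the two groups' mean responses are the same, and any
    # difference we see is just chance variation between individuals.
    t_stat, p_value = stats.ttest_ind(experimental, control)

    print(f"control mean:      {sum(control) / len(control):.1f}")
    print(f"experimental mean: {sum(experimental) / len(experimental):.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")

    if p_value < 0.05:
        print("difference is bigger than chance alone would plausibly produce;")
        print("reject the null hypothesis -- the new treatment looks better.")
    else:
        print("can't reject the null hypothesis with this sample.")

the point isn't the particular test; it's that the control group tells you how much people vary on their own, so you can tell whether the new treatment's effect stands out from that variation.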

the sad thing about mr. loftus is that he and his field appear completely isolated from the rest of the world. control and experimental groups aren't just used for null-hypothesis testing in medicine -- or even just in the biological sciences -- but are standard ways of figuring out whether some change matters in everything from marketing campaigns to industrial quality control. they're a common part of experimental design in many fields. but loftus wants to lead psychology away from them out of misunderstanding.

between loftus and fiedler, i think psychology will fall ever further behind the real sciences. while there's part of me that will happily mock them for doing this, another part is more concerned with the effect of this stupidity on the rest of us. correct -- or at least less wrong -- ideas about how "normal" people think (and what "normal thinking" means) should have the same effects on treating mental illness as correct ideas of physics had on making things fly: we go from crude, trial-and-error solutions that kinda-sorta work to being able to tailor-make solutions that exactly fit the problem at hand. as it is, i'm less than hopeful we'll get off the ground.
