Research Conclusions Can Be Confusing
Published 12:00 am Tuesday, June 21, 2005
A. G. Ogden, Ph.D. / Guest Columnist
In an age inundated with research data that purport to have both validity and meaning, I am reminded by many sources of the necessity of accuracy in research methodology and in extrapolating conclusions from the research.
This observation seems particularly acute now, with our politicians relying heavily upon the various “polls” produced by the media, from which they make their decisions in the hope of achieving good government, as well as promoting whatever political agenda the esteemed representatives are momentarily espousing.
Polls are the most readily comprehensible example of “research” for most people, and they are particularly useful here because they can point up the flaws of shoddy or hasty “research.”
There is no denying the necessity of research data.
From such data we can make sound decisions for future planning, for budgeting, for daily activity, and for personal achievement.
At the same time, we depend largely upon research data to make sensible predictions about our work, our family plans, and our social well-being.
Hence, it is crucial that we conduct research in ways that produce conclusions which can be trusted and which have integrity.
Applications of good research to education are obvious; we construct and reconstruct entire curricula based on findings and data from what have come to be known as “longitudinal studies,” that is, research conducted over a representative length of time, usually at least three to five years.
It was this kind of data which moved most public school systems from the 6-3-3 (elementary school, junior high school, high school) sequence for K-12 to the 5-3-4 (elementary school, middle school, high school) sequence.
And whether or not we agree with it, some research team convinced us that the 5-3-4 sequence serves student needs better than the 6-3-3 sequence.
Although I still have my reservations, ultimately it is not the “sequence” of school years so much as it is the delivery in the classroom which makes the difference in student achievement.
Turning to polls again, we know that the pollsters have made every effort to remain objective and to present a set of research inquiries designed to elicit honest and factual answers from those polled.
We are told that “scientific methods” have been used to construct a given poll and to deliver it to a representative cross-section of the population, thus yielding a more accurate and reliable index of public attitudes regarding the question being polled.
In the grand scheme of things, nevertheless, it is absolutely necessary that the research data we apply to our educational ventures be valid and reliable.
The future of our democratic republic relies on an educational establishment which is fair, representative of the people’s needs, and mindful of the public’s preferences.
Any frivolous flirtation with a new educational concept based upon questionable research is a liability.
My concern over research comes from a number of reflections in a lifetime of education.
Our most recent national experience with faulty research came with the 2004 Presidential Election, during which “exit polls” woefully mispredicted the eventual outcome of the race.
Apparently the pollsters had an agenda which clouded their judgment in questioning those polled, and this cloud further led them to invalid and incorrect conclusions.
Invalid conclusions are always the bane of researchers.
I am reminded of an account in which some graduate researchers were seeking to “train” a cricket to jump on command from a human voice.
The team had constructed an acceptable methodology and began the experiment.
To introduce a “variable,” the team adopted the procedure of removing one of the cricket’s legs after the cricket had mastered each “command.”
So the team would command, “Jump!” at which the cricket did, in fact, jump.
Then one of its legs would be removed and the command given again. “Jump!” and the cricket jumped with one less leg.
This process took place until all legs of the cricket had been removed. Each time the command was given, the cricket jumped so long as it had at least one leg.
Finally, there on the laboratory pad atop the reagent bench lay the legless cricket.
The research team thought that they had reached a point of critical mass now that the cricket had no legs, and they gave the command again. “Jump!”
But the cricket did not move.
Again, “Jump!” but still no movement from the cricket.
After several more attempts at the command, the research team stopped the experiment and turned to reaching conclusions about their project.
It did not take them long to come up with their conclusion from what they had believed was faultless research.
“When all legs are removed from a cricket, the cricket becomes deaf!”
Such research conclusions are not uncommon today.
There are research groups, think tanks, pods, or whatever you may wish to call them, in abundance.
Likewise, a plethora of research organizations will be only too glad to provide a “poll” to support your point of view.
The question becomes one of validity and veracity for the poll and the research organization.
Hence, the next time you see the results of some research or some poll, it might be wise to take a closer look at the methodology, the queries, and the conclusions – for heaven’s sake, the conclusions!