
The Behavioral Measurement Letter

Behavioral Measurement Database Services

Measurement Instruments at Your Fingertips

Vol. 9, No. 1: Spring 2006

 

Introduction to the Spring 2006 Issue of The Behavioral Measurement Letter

There is a growing demand in the marketplace for expertise in psychometrics, at a time when fewer and fewer people are receiving training in measurement. According to a recent article on the front page of the New York Times (Herszenhorn, D. M., “As Test-Taking Grows, Test-Makers Grow Rarer,” May 9, 2006, A1, A19), testing company executives are urging federal action to increase opportunities for training in measurement, including government-paid fellowships to increase the number of students entering graduate programs to become experts in psychometrics. As demand for measurement specialists continues to outstrip supply, it will become increasingly critical in the years ahead to broaden and strengthen educational training in measurement in the health and psychosocial sciences.

In this spirit, the Spring 2006 issue of The Behavioral Measurement Letter is devoted to a topic that is currently missing from the background training of most researchers in the health and psychosocial sciences: how to select appropriate measurement instruments for research. Students receive extensive undergraduate and graduate training in how to formulate hypotheses, design research studies, and collect, analyze, and interpret research data in their various empirical disciplines. They learn how to maximize the validity of research conclusions using fully factorial experimental designs in randomized laboratory experiments, cross-sectional and longitudinal designs in survey research, and quasi-experimental designs in applied settings. They learn how to use computer software to conduct a wide range of sophisticated statistical analyses, and how to write and publish research reports communicating their findings. But most students are never taught how to go about finding an appropriate measurement tool for use in their research; they are never formally trained in the process of instrument selection, or “instrumentation.”

It is particularly ironic that students typically receive no training in how to select suitable measurement tools for research, given that the quality and precision of measurement determines the quality and precision of research findings. Imagine that space scientists were trained to operate the way undergraduate and graduate students in the health and psychosocial sciences are trained to operate with respect to instrument selection. On the one hand, NASA would spend billions of dollars designing and constructing a sophisticated space probe capable of flying millions of miles through space to another planet, landing safely on the planet’s surface, and relaying information back to Earth electronically; and scientists would devote years of effort to carefully scrutinizing and interpreting the data the probe relayed back to Earth. On the other hand, scientists would spend little time or money in deciding what measurement tools to include on board the space probe in the first place. Instead, they would choose instruments haphazardly from among whatever options they could find, never being sure whether they were missing an important tool in the process. Space scientists might also grab the first available measure of a particular variable of interest to put in the space probe, without considering whether it actually reflects the construct of interest. Surely there’s a better way.

Accordingly, the Spring 2006 issue of The Behavioral Measurement Letter addresses this gap in health and psychosocial research training by presenting three articles, each aimed at helping researchers optimize the process of instrument selection. This issue begins with a reprint of a 1999 article from Eye on Psi Chi by Daniel Moore, Fred Bryant, and Evelyn Perloff, “Measurement Instruments at Your Fingertips.” Moore, Bryant, and Perloff remind readers that measurement surrounds us in everything we do, yet we know very little about how to select the most appropriate measures for a given purpose. This article also introduces the HaPI database as a resource for optimizing researchers’ use of the measurement toolbox.

Also reprinted in this issue is a 1998 article from Teaching of Psychology by Jennifer Brockway and Fred Bryant, “You Can’t Judge a Measure by Its Label: Teaching the Process of Instrumentation,” describing an exercise in instrument selection that increases awareness of the issues involved in measuring conceptual variables. This article is intended as a resource for classroom instructors to use in courses on research methods in the health and psychosocial sciences, including counseling, education, medicine, nursing, psychology, social work, and sociology. Brockway and Bryant provide empirical evidence that the exercise is effective in raising students’ consciousness about how to select suitable instruments for use in their research.

Finally, in “Selecting Instruments for Behavioral Research: Advice for the Intermediate User,” Thomas Hogan of the University of Scranton’s Department of Psychology provides advice for intermediate-level researchers (which includes most instrument users) about the informational resources available and how to use them to identify suitable measurement instruments, retrieve copies and scoring protocols for instruments, and find the right measure for the job at hand. Dr. Hogan highlights available electronic databases and printed compendia that provide researchers with the tools necessary to locate, evaluate, and select appropriate measurement tools in the health and psychosocial sciences.

All three articles in this issue emphasize the need for a better understanding of the process through which researchers select instruments for their research, so as to enhance the quality of research conclusions. This issue of The Behavioral Measurement Letter is intended to serve as a resource for instructors in research methods courses within the various disciplines that constitute the health and psychosocial sciences. It is hoped that instructors will use this issue of the newsletter as a tool for raising students’ awareness of the issues involved in selecting appropriate measurement instruments for research.

Behavioral Measurement Database Services (BMDS) is grateful to Lawrence Erlbaum Associates, Inc. (Mahwah, NJ), for permission to reprint Brockway, J. H., & Bryant, F. B. (1998). You can’t judge a measure by its label: Teaching students how to locate, evaluate, and select appropriate instruments. Teaching of Psychology, 25, 121-123.

BMDS also thanks the Psi Chi National Office (Chattanooga, TN) for permission to reprint Moore, D., Bryant, F. B., & Perloff, E. (1999). Measurement instruments at your fingertips. Eye on Psi Chi, 3, 17-19.

As always, we invite written responses from our readership. Please address comments, suggestions, letters, or ideas for topics to be covered in future issues of the journal to: The Editor, The Behavioral Measurement Letter, Behavioral Measurement Database Services, P.O. Box 110287, Pittsburgh, PA, 15232-0787. Email: bmdshapi@aol.com.

We also accept short manuscripts to be considered for publication in The Behavioral Measurement Letter. Submit, at any time, a brief article, opinion piece, or book review on a topic related to behavioral measurement, to The Editor at the above address. Each submission will be given careful consideration for possible publication in a forthcoming issue of The Behavioral Measurement Letter.

HaPI reading…

The Editor

 

Measurement Instruments at Your Fingertips

 

IMAGINE YOU ARE A COLLEGE senior. For the past three years, you’ve seen and met a variety of academic challenges. You’ve endured grueling daily lectures, tutored struggling students, participated in numerous class projects, and written endless essays and papers. In every case, you completed your assignments promptly and well. However, this time is different. Sitting in class early in the term, you listen, dumbstruck, as Professor Stickler, a strict disciplinarian with a demanding teaching style, describes an assignment that leaves you wondering why you majored in psychology.

 


IT ALL STARTS ROUTINELY ENOUGH. Professor Stickler announces that the semester will be dedicated, as is this paper, to exploring a concept that most people take for granted: measurement. Professor Stickler says: “We usually think of measurement as a person’s or an object’s dimensions, quantity, or capacity, which, of course, it is. But it is also much more.” The professor pauses while reaching for the Oxford English Dictionary, then continues, “This dictionary, the unassailable authority on our language, allots over four pages of text to measurement and its many manifestations (Perloff, 1994). Why?” Professor Stickler pauses again. “The answer should be obvious. Just look around and you will see some sort of measurement in nearly every area of life.”


During the rest of the hour, Stickler cites dozens of ways that measures lend order and understanding to the world. But, in keeping with Stickler’s style, the first examples are from space. You learn that a light-year is equivalent to 5.88 trillion miles, a nearly unimaginable distance, but it pales in significance as Stickler recounts how astronomers have used light-years to measure the seemingly infinite vastness of the entire universe, which is still expanding. Stickler then returns to Earth and reminds you of the many everyday instances where measures are evident. Units of weight, area, and time allow us to conceptualize, for example, a ton of bricks, 40 acres, and a millennium. One buys a dozen roses, wears a size 8, adds a pinch of salt, checks the Dow Jones Averages, compares prices, scans the headlines, keeps score, measures the opposition, and drives 55 miles per hour. Stickler drones on and on, while you, somewhat bored, think, “True, measures are common and necessary, and we tend to ignore them, but what does that have to do with psychology?” You are about to find out.

Over the next several weeks, the class takes a guided tour of the evolution of modern psychology. You learn, of course, about Wundt and Watson and Freud and Jung, all pioneers in their respective areas. Not one to leave a stone unturned, Stickler then covers the contributions of noteworthy others: Pavlov, James, Harlow, Erikson, Rogers, and Skinner, to mention a few. Throughout these lectures (nearly a month of them), Stickler repeatedly stresses that the theories of these early pathfinders would have been meaningless, that they would have amounted to little more than unsubstantiated speculation, unless their underlying concepts could be measured. To underscore this point, Stickler declares, emphatically, “Measurement is the cornerstone of science. In fact, just as in the physical sciences, advances in psychology are proportional to advances in measurement. As Robert Pool (1988) stated in the case of the physical sciences, ‘These advances are vital, because science’s understanding of the physical world is limited by the accuracy with which science can measure that world.’”

At the beginning of the next class, Professor Stickler asks how many students hope to live the good life. Every hand in the room shoots up. Stickler then asks for a definition of a “good life”; not a hand stirs.

Stickler continues, “Is it happiness, and if so, how is it measured? Can you collect a sample of it, cup it in your hands, and then examine it under a microscope?” You smile at the absurdity of Stickler’s questions and think, “If happiness does not lend itself to precise quantification, then how does one measure it, and the kaleidoscope of other forces that shape attitudes and behavior?” Stickler echoes your thoughts and explains that generations of researchers have responded to this problem by developing a vast array of measures or instruments that indirectly assess people’s psychological states. Most numerous are the traditional “paper-and-pencil” tests, questionnaires, checklists, inventories, rating scales, and interview schedules, many of which now have computerized versions or scoring systems. Stickler then reviews instruments that employ alternative methods. For example, Csikszentmihalyi and Larson (1987) devised the Experience Sampling Method, a measurement technique that uses an electronic pager to signal participants to record their feelings as they go about their daily activities. Vallacher, Nowak, and Kaufman (1994) created their appropriately titled “mouse paradigm,” a method designed to assess people’s social judgments based solely on where they position a cursor on a computer screen.

Stickler continues this emphasis up to and through the midterm examination. Immersed in measures and measurement issues, you struggle to grasp concepts such as construct validity, internal consistency, forced-choice, ratio scale, multitrait-multimethod matrix, measurement modeling, and Cronbach’s alpha. You take copious notes, do the assigned readings, and try to appear knowledgeable in class discussions. This strategy must have worked, for your midterm grade is a B+. At this point, you learn that in place of a final exam, Stickler has devised an assignment in which you must explore a particular construct that he will speak to shortly. Although you do find this information of interest, you uncomfortably await Dr. Stickler’s assignment.

The assignment follows all too quickly but is, you agree, appropriate to your immediate feelings: it concerns the construct of anxiety. Each student is required to select two instruments designed to assess anxiety and to compare and contrast the measures for a class presentation. Topics to be addressed could include strengths, weaknesses, theoretical orientation, type of measure (e.g., questionnaire, interview schedule, checklist, vignettes), intended population (e.g., children, adolescents, the elderly, college presidents), and number of questions. Professor Stickler then posts a sheet for students to list their selected titles as soon as they have identified them, because no instrument may appear twice.

Now, you do become anxious. With 20 students in your class, you realize that 40 different measures need to be located, and you wonder whether there are as many as 40 instruments to measure anxiety. Surely, you think, Professor Stickler would not make such a demand if this were not the case. The thought urges you to get started as soon as the period ends, but where to start? As you leave class, a friend recalls seeing an instrument store around the corner. Sounds good, you think, and you decide to make this your first stop, although you wonder whether it can really be so easy. Of course, you should have known better, and you head for a more likely source: the library.

At the Reference Desk, you request information on how to find materials about measurement instruments for the behavioral and social sciences. The librarian quickly directs you to a section containing several shelves of relevant books. Feeling better, you begin to review their contents. After several hours of browsing and finding interesting but not specifically useful information, you return to the Reference Desk and ask about other sources you could consult. The librarian then refers you to The Mental Measurements Yearbooks (Buros, 1938-1978; Conoley & Impara, 1989, 1995; Impara & Plake, 1998; Kramer & Conoley, 1992; Mitchell, 1989) and Test Critiques (Keyser & Sweetland, 1984-1994), both containing reviews of commercially produced tests. But the prospect of wading through 13- and 11-volume collections does little to lessen your growing anxiety.

At this point, the librarian asks if you have consulted the Ovid Technologies online databases, especially PsycINFO (American Psychological Association), Medline (National Library of Medicine), and HaPI (Behavioral Measurement Database Services). You are familiar with the first two, which are abstract databases of information on the behavioral and medical literature, respectively. The last database, however, is less familiar. You inquire further and learn that HaPI is the acronym for Health and Psychosocial Instruments, a computerized database that consists of more than 60,000 records describing tests, questionnaires, checklists, interview schedules, and other types of measures used in the health and psychosocial sciences.

At first, you refuse to believe that a database dedicated solely to instruments exists, but after typing the word “anxiety” and seeing the results, you have a change of heart. HaPI returns more than enough anxiety measures! That’s not all, for you notice that many of the records contain additional information about the instrument, such as the number of questions, a statement of purpose, response formats, and relevant references. You quickly realize that the HaPI database provides the necessary information to complete your assignment. Moreover, the anxiety that has accompanied you since you started this project now gives way to a welcome sense of well-being. But what’s happened to your fingers!

 

References

Buros, O. K. (Ed.). (1938-1978). The first-eighth mental measurements yearbooks. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Conoley, J. C., & Impara, J. C. (Eds.). (1989). The tenth mental measurements yearbook. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Conoley, J. C., & Impara, J. C. (Eds.). (1995). The twelfth mental measurements yearbook. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Csikszentmihalyi, M., & Larson, R. (1987). The Experience Sampling Method. Journal of Nervous and Mental Disease, 175, 526-536.

Impara, J. C., & Plake, B. S. (Eds.). (1998). The thirteenth mental measurements yearbook. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Keyser, D. J., & Sweetland, R. C. (Eds.). (1984-1994). Test critiques (Vols. I-X). Austin, TX: PRO-ED.

Kramer, J. J., & Conoley, J. C. (Eds.). (1992). The eleventh mental measurements yearbook. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Mitchell, J. V., Jr. (Ed.). (1989). The ninth mental measurements yearbook. Lincoln: University of Nebraska, Buros Institute of Mental Measurements.

Perloff, R. (1994). For good measure. The Behavioral Measurement Letter, 1(1), 4-5.

Pool, R. (1988). Advances in measurement science. Science, 240, 604-605.

Vallacher, R. R., Nowak, A., & Kaufman, J. (1994). Intrinsic dynamics of social judgment. Journal of Personality and Social Psychology, 67, 20-34.


“In physics the truth is rarely perfectly clear, and that is certainly universally the case in human affairs. Hence, what is not surrounded by uncertainty cannot be the truth.”

Richard P. Feynman

 

Read additional articles from this newsletter:

You Can’t Judge a Measure by Its Label: Teaching the Process of Instrumentation

Selecting Instruments for Behavioral Research: Advice for the Intermediate User

 

