Vol. 4, No. 1: Fall 1996
Introduction to This Issue
In this issue, Donald W. Fiske celebrates the distinguished career of his long-time colleague, Donald T. Campbell. Don Campbell is remembered fondly by those who knew him personally. He is, of course, also known among students and scholars in numerous fields for his valuable contributions to measurement, experimental design, evaluation research, the sociology of science, and epistemology. Professionally, notes co-author Professor Fiske, Don Campbell is probably best remembered for the paper “Convergent and discriminant validation by the multitrait-multimethod matrix,” published in 1959 in the Psychological Bulletin. Don Campbell is deeply missed, but his many significant contributions to behavioral and social science are a living memorial and a lasting enrichment to the behavioral sciences.
Also in this issue of The Behavioral Measurements Letter, Robert M. Brackbill of the Centers for Disease Control and Prevention writes on surveys of sexual behaviors. His column presents an instructive summary of the literature on surveys of sexual behavior and a classification of such surveys as “general population surveys” and “high-risk population surveys.” Based on his review and analysis of recent sexual-risk surveys, he alerts us to two impediments to surveying sexual behaviors and practices: instrumentation and survey methodology. His observations will stimulate researchers in behavioral measurement, sexually transmitted disease, and survey methods, and will suggest fruitful research directions.
“HaPI Thoughts” features but one of many significant quotations from the writings of the late Don Campbell. This small sample of his work speaks both to Don Campbell’s professionalism and his humanism. Thus we celebrate his life while mourning his passing.
Address comments and suggestions to The Editor, The Behavioral Measurements Letter, Behavioral Measurement Database Services, PO Box 110287, Pittsburgh, PA 15232-0787. If warranted and as space permits, your communication may appear as a letter to the editor. Whether published or not, your feedback will be considered seriously and appreciated.
We also accept short manuscripts for The BML. Submit, at any time, a brief article, opinion piece or book review on a BML-relevant topic to The Editor at the above address. Each submission will be carefully examined for possible publication.
HaPI reading.
HaPI’nings
As many of you are aware, or may guess, production of the Health and Psychosocial Instruments (HaPI) database is a labor-intensive process. At the same time, users may experience disappointing results following a search of HaPI records. Both situations are, of course, undesirable, albeit realistic and normal. As we begin to correct this state of affairs, I believe it appropriate to share with you a brief explanation of why it occurs. Specific remedial recommendations for authors and editors will follow in a later issue.
Let me begin by describing a typical research study that calls for HaPI usage. Imagine that you are preparing a proposal to study the impact of violence on local residents. Your initial task is to identify residents who are fearful of being attacked and to compare them with residents who express few fears in this regard. You know the funding source will not fund the development and validation of new instruments unless it can be clearly documented that none exist. Aware that instruments are available to assess the major variables of fear and invulnerability, you search HaPI, but your retrieval is disappointingly minimal. You immediately ask, “Why?”
Our analysis of this situation, and of many like it, suggests two primary reasons. First, HaPI journal coverage does not yet include all the relevant journals it should, although new ones are added on a continuing basis. The second reason relates to the kinds of measurement information that we do and do not find in the health and psychosocial literature. These we refer to as “errors of omission and commission.”
I list only the following three issues at this time because, as indicated above, we are currently preparing detailed documentation for reporting information on measurement instruments: (a) authors of instruments do not provide exact titles for their instruments; (b) authors of articles fail to include either the given titles of the instruments they administered or the correct authorship of those instruments; and (c) authors of articles describe instruments that have been “modified/adapted,” but do not specify exact titles, authorships, and/or who developed the original instruments.
You may wonder why these issues concern us. Our answer is tripartite: (a) we desire to ensure the accuracy of what we report; (b) we feel strongly about giving credit to all those, past and present, who have contributed to measurement in the health and psychosocial sciences; and (c) looking back to the beginning of this HaPI’nings, we are always on the lookout for ways to reduce the incidence of user frustration and disappointment.
In Memoriam:
Donald T. Campbell, 1916-1996
Donald W. Fiske
Don Campbell was a wonderful guy. He was warm, sociable, and much beloved. It was a genuine pleasure to be with him. He was also an easy person with whom to collaborate. He helped his friends and associates, and they helped him. He needed their support, and thus a large proportion of his papers were jointly authored. Even when he had an important idea, he wanted to develop it with someone else, probably for reassurance. Although he was a collaborator on many major books (such as Unobtrusive Measures), he never published a book authored only by himself.
A festschrift for Don, edited by Marilynn Brewer and Barry Collins (Scientific inquiry and the social sciences: A volume in honor of Donald T. Campbell. San Francisco: Jossey-Bass), appeared in 1981. Overman (Methodology and epistemology for social science: Selected papers of Donald T. Campbell. Chicago: University of Chicago Press) edited a collection of Don’s papers that was published in 1988. Each of these volumes has a list of his publications, complete to the time the volume appeared. His obituary in The New York Times of May 12, 1996 is a highly professional précis of his life and works.
For those who never had the privilege of knowing Don, the best way to make up for this deprivation is to read his “Comment: Another perspective on a scholarly career,” a paper that he wrote as the final chapter in the festschrift and that was republished in the Overman volume. This autobiographical statement is candid and moving. It is primarily a historical account of his professional work, with more emphasis on his tribulations than successes. He notes that he was 33 when his first paper was published, and that it was followed by scores more in later years. In typical Don Campbell fashion, this fact emerges as part of his stated hope that his experience will give heart to younger colleagues who find themselves in that position.
Don was a Renaissance man, involved in several areas of knowledge and contributing to each during various, often overlapping periods of his life. Overman (1988) identified the areas in which Don contributed as ranging from “measurement” and “experimental design” through “applied social experimentation” and “interpretative social science” to the “sociology of science” and “epistemology.”
He delighted in working out intriguing, if lengthy, titles for his papers, titles that would puzzle prospective readers and perhaps induce them to read the first paragraph, for by then, they would be hooked. A good example is his paper, “A tribal model of the social system vehicle carrying scientific knowledge” (Overman, 1988, 489-504). What could that be about? The opening sentence reads, “Theorists of science have become increasingly convinced that science is a social process; that its social nature is relevant to the validity of scientific theories and therefore is relevant to the problem of knowledge even as philosophers have traditionally defined it.” (Note the small dig at philosophers.) This sentence requires some contemplation. The social nature of science is relevant to the validity of its theories. That is an intriguing idea.
At least as stimulating, and somewhat disturbing, was Don’s idea that the particular method used in measuring contributes considerable variance to any set of scores generated by its application. It was my privilege to work with Don on the “Convergent and discriminant validation by the multitrait-multimethod matrix” paper (Psychological Bulletin, 1959, 56, 81-105). The basic idea was his. He asked me to work with him because I had some data he wanted to use in the paper. He may also have hoped that I could solve the problem of how to analyze that kind of matrix. After working on the problem for a while, I decided that the matrix should be treated as a step toward better measurement, so it should be carefully examined by eye, looking for concepts (traits) that needed to be revised and methods that needed to be refined, or possibly discarded. Today there is still no consensus on how to make a formal analysis of such a matrix. Perhaps the method problem needs a complete reformulation.
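The comparisons involved in that inspection by eye can be made concrete. The following minimal sketch (in Python, using invented correlations for a hypothetical two-trait, two-method design; none of these numbers come from the 1959 paper) pulls out the three kinds of entries one examines:

    import numpy as np

    # Hypothetical MTMM correlation matrix for two traits (T1, T2),
    # each measured by two methods (M1, M2).
    # Row/column order: T1M1, T2M1, T1M2, T2M2.
    R = np.array([
        [1.00, 0.45, 0.60, 0.20],   # T1M1
        [0.45, 1.00, 0.25, 0.55],   # T2M1
        [0.60, 0.25, 1.00, 0.40],   # T1M2
        [0.20, 0.55, 0.40, 1.00],   # T2M2
    ])

    # Validity diagonal: same trait, different methods (convergent validity).
    convergent = [R[0, 2], R[1, 3]]
    # Different traits, same method: high values here signal method variance.
    heterotrait_monomethod = [R[0, 1], R[2, 3]]
    # Different traits, different methods: should be the lowest of the three.
    heterotrait_heteromethod = [R[0, 3], R[1, 2]]

    print("convergent (want high):         ", convergent)
    print("heterotrait-monomethod:         ", heterotrait_monomethod)
    print("heterotrait-heteromethod (low): ", heterotrait_heteromethod)

Convergent validation asks that the first set of values be substantial; discriminant validation asks that they exceed the other two; and heterotrait-monomethod correlations rivaling the validity diagonal are the signature of the method variance discussed below.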
Both of us were highly pleased and quite surprised to learn in 1992 that this was by far the most frequently cited paper published in the Psychological Bulletin in the last several decades. Our surprise was due, in part, to the fact that many published studies using our concepts or matrix had not cited the paper, presumably because it was so well known.
The high citation rate, which had added up to more than 2,000 by 1992 and was accumulating at a rate of over 100 a year, is a little difficult to understand. I think the paper demonstrated a fact and raised a problem. The fact was that many, or most, psychological measurements do not just reflect the substantive variables at which they are aimed, but are also biased or confounded by the method used to obtain those measurements. Our colleagues in psychology and other disciplines seem to have accepted this most disagreeable finding. The problem, in the form of a question, is: How can we measure without this confounding? Here, too, there is no consensus on an answer, although many methodologists have tried to analyze these matrices so as to estimate the contribution of method to a measurement’s variance. (For some references to these efforts, see Fiske & Campbell, Psychological Bulletin, 1992, 112, 393-395.)
Don Campbell made many major contributions to psychology and related sciences. If the 1959 paper had been his only contribution, he would still deserve a place among psychology’s “greats.”
Officially retired in 1986, Fiske is now almost completely retired from the frenetic world of actual research and scholarship. At the University of Chicago from 1948, the year of his doctorate at the University of Michigan, he worked on many problems, not solving any of them but contributing to our understanding of each problem and its consequent reformulation: intraindividual variability over time; our overreliance on words as stimuli, as responses, and in instructions; rater-judge effects; and, most central of all, the method problem.
Surveying High-Risk Sexual Behaviors
Robert M. Brackbill
Soon after the AIDS epidemic was identified, specific sexual practices were reported as risk factors for AIDS. As a result, a new era began in which human sexual behavior became a major focus of epidemiological and prevention research. Until that time, work by Kinsey and associates had provided the dominant theoretical and methodological approach to human sexual behavior, with a focus primarily on the multiple dimensions of individual sexuality. However, with the increasing incidence of AIDS and other sexually transmitted diseases, a new, public health approach was needed to understand sexual behavior as social interaction.
A seminal study of HIV infection using a public health approach was done at the University of California (Winkelstein et al., Journal of the American Medical Association, 1987, 257, 321-325). The researchers designed a population-based cohort study to measure the association between specific sexual behaviors and HIV infection in San Francisco’s gay male community. In 1982, single men 25 to 55 years old were probabilistically selected from neighborhoods that had elevated rates of sexually transmitted diseases (STDs). They found that men who had engaged in receptive anal intercourse had a two-fold greater risk of acquiring HIV infection than men who had not. Concerted public health efforts to monitor sexual and HIV risk-reduction behaviors in many different settings and populations were well underway at this time. Thus, as the San Francisco study continued, declines in the prevalence of these high-risk behaviors and parallel declines in the annual rates of HIV infection were found.
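For readers outside epidemiology, that “two-fold greater risk” is a relative risk: the incidence of infection among the exposed divided by the incidence among the unexposed. A minimal sketch of the arithmetic, in Python and with invented counts (not the Winkelstein et al. data):

    # Hypothetical 2x2 cohort table (invented counts for illustration only).
    exposed_infected, exposed_total = 60, 300      # engaged in the risk behavior
    unexposed_infected, unexposed_total = 30, 300  # did not

    risk_exposed = exposed_infected / exposed_total        # 0.20
    risk_unexposed = unexposed_infected / unexposed_total  # 0.10

    relative_risk = risk_exposed / risk_unexposed
    print(f"relative risk = {relative_risk:.1f}")  # 2.0, i.e., a two-fold greater risk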
Where are we now?
In a recent search of the published literature (Brackbill, Anderson, & Wald, unpublished, 1995), two colleagues and I found reports on nearly 100 sexual risk practice surveys conducted over the past 10 years. We grouped these surveys into general population and high-risk population surveys.
General population surveys, through sampling adults using the household as the sampling unit, collect information about broad population groups. These surveys are useful for estimating the size of risk groups within the adult population, for instance, the percentage of the population that engages in same-gender sex. On the other hand, surveys of high-risk populations are designed to obtain information about the specific practices and behaviors of groups at high risk for being infected by or transmitting an STD, such as male commercial sex workers.
General population surveys may be grouped into national periodic surveys and national in-depth, non-periodic surveys. The first group includes such surveys as the General Social Survey (Smith, Family Planning Perspectives, 1991, 25, 61-66), National Survey of Family Growth (Mosher & Bachrach, Family Planning Perspectives, 1996, 28, 4-12; Anderson, Brackbill, & Mosher, Family Planning Perspectives, 1996, 28, 25-28), and the Youth Risk Behavior Survey (Centers for Disease Control, unpublished, 1995). For the most part, these surveys incorporate relatively few questions on sexual behaviors, for instance, number of partners, sex of partners, and number of partners other than main partner. Although general in nature, such items can be used to measure trends in sexual risk practices.
In contrast, the national in-depth, non-periodic surveys have questions about specific activities with specific partners, characteristics of sex partners, feelings and attitudes about sexual activities, and alternative sexual outlets. In the last few years, several major in-depth surveys have been reported: the National AIDS Behavioral Survey (NABS) (Catania et al., Science, 1992, 258, 1101-1106), the National Health and Social Life Survey (NHSLS) (Laumann, Gagnon, Michael, & Michaels, The Social Organization of Sexuality, 1994), and the National Survey of Men (Tanfer, Family Planning Perspectives, 1993, 25, 81-86). These in-depth surveys used instruments independently developed for them. They also had major differences in survey modality (e.g., telephone vs. face-to-face interview) and sampling frames (e.g., a national sample or a combination of a national sample and samples of urban populations). In addition, the NABS was designed to be a longitudinal study to monitor changes in the behavior of respondents over time through follow-up interviews, whereas the other two were one-time-only surveys.
The 80 high-risk population surveys we reviewed were categorized as those that targeted adolescents/youth, high-risk adults (such as people who live in geographical areas with an excess incidence of STDs), drug users, homosexual men, and HIV-infected persons. Although these surveys in general covered sexual behaviors comprehensively, the majority did not use a method of selecting respondents that would be considered representative. Respondents often were selected through convenience sampling; that is, they were included in the study because they were available at certain locations or times. Thus, it is questionable whether these studies generalize to groups beyond the individuals surveyed. Furthermore, questions common to two or more survey instruments were constructed differently from survey to survey, making them less comparable or non-comparable and thus restricting generalizability across surveys. For instance, no standard set of questions was employed to measure a variable as important as the frequency of condom use for disease protection.
Challenges for the Present and Future
There are many challenges in surveying sexual risk practices that have consistently impeded work in this area. First, the populations at highest risk for STDs are also the most difficult from which to obtain a representative sample. This is especially true for out-of-school adolescents, homeless persons, young singles, and commercial sex workers. For example, in a survey of households with telephones, it was found that single people younger than 25 are more difficult to reach because they are more often not at home (Groves & Lyberg, An overview of nonresponse issues in telephone surveys, in Groves et al., Telephone Survey Methodology, 1988, 191-212). Since these people are typically sexually active, with nearly 40% having had more than one sex partner in the past year (author’s calculation from McKean, Muller, & Lang, Data Set 12-13, 1992 National Health and Social Life Survey), monitoring their sexual risk practices is obviously important. Perhaps new approaches are needed to portray the sexual behaviors of this and other high-risk, hard-to-sample groups.
A second challenge in doing sexual risk surveys is question development. The difficulty of designing questions is apparent when one considers the multiplicity of sexual practices occurring in even one encounter, let alone the diversity of behaviors across sexual partners and encounters. The challenge is even greater in designing a minimum set of questions for general population health surveys, because these surveys have limited space available for questions on any particular topic. In addition, sexual risk practices may be even more difficult to measure than intelligence, for in measuring intelligence there is at least predictive validity in relation to, for instance, educational success. Although the occurrence of STDs may be used as a measure of the predictive validity of self-reported sexual risk practices, the correlation between sexual behavior and STDs can be low or high depending on their prevalence in the population being surveyed. For example, if the prevalence of STDs is low, as in a general population survey, then the degree of sexual activity or condom use will not have much effect on disease incidence rates; for a group with an intermediate prevalence of STDs, on the other hand, the correlation between frequency of condom use and incidence of STDs may be high (Aral & Peterman, International Journal of STD and AIDS, 1996, in press).
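The statistical point in that example can be made concrete with a small simulation. The sketch below (Python; the prevalences and the size of the protective effect are invented for illustration, not a reconstruction of the Aral & Peterman analysis) applies the same four-fold protective effect of condom use in a low-prevalence and an intermediate-prevalence population and compares the resulting correlations:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def condom_std_correlation(p_no_condom, p_condom):
        """Correlation between consistent condom use (0/1) and infection (0/1)
        when infection risk depends on condom use; risk values are invented."""
        condom = rng.integers(0, 2, n)                    # half use condoms consistently
        p = np.where(condom == 1, p_condom, p_no_condom)  # per-person infection risk
        infected = (rng.random(n) < p).astype(int)
        return np.corrcoef(condom, infected)[0, 1]

    # The same four-fold protective effect in both populations:
    low = condom_std_correlation(p_no_condom=0.004, p_condom=0.001)  # STDs rare
    mid = condom_std_correlation(p_no_condom=0.40, p_condom=0.10)    # STDs common

    print(f"low prevalence:          r = {low:+.3f}")  # near zero
    print(f"intermediate prevalence: r = {mid:+.3f}")  # clearly negative

Because infection is so rare in the first population, the observed correlation is compressed toward zero even though the underlying protective effect is identical, which is why low-prevalence general population surveys offer little traction for assessing predictive validity.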
There are numerous other issues associated with developing sexual risk practice instruments that have been difficult to address systematically because of the urgency of knowing about population-level sexual risk practices. These issues include under-reporting of stigmatized behaviors, joint interviewer-respondent characteristics (e.g., correspondence/non-correspondence in age and gender), and cultural variation in sexual practices and terminology (Catania, Gibson, Chitwood, & Coates, Psychological Bulletin, 1990, 108, 339-362). Research is being done to meet these challenges, but it is difficult to translate findings into the actual practice of conducting sexual behavior surveys because, for instance, the costs of obtaining a representative sample of hard-to-reach persons and of developing culturally appropriate questions can be very high.
Where are we going?
Several different strategies are being pursued to establish a consistent program for monitoring high-risk sexual practices and those behaviors that prevent the spread of STDs. First, a workshop sponsored by the Centers for Disease Control and Prevention and the Kaiser Family Foundation was held in September 1995, involving researchers in the areas of drug use, sexual behaviors, and HIV prevention. The primary purposes of this workshop were to identify gaps in the existing sexual behavior data and to recommend strategies for filling those gaps. Needs identified in the workshop included standardized measures, core data elements, and adequate descriptions of sampling methodologies in research reports to permit others to assess the representativeness of survey studies. The overall workshop recommendations were to design sexual behavior surveillance studies from the local perspective, to employ rigorous methods of sampling and interviewing, and to use common core data elements in survey instruments to permit generalizability across surveys. In addition, it was recognized that it is important to build local/regional infrastructure for the analysis, interpretation, and use of these data, and that ownership of data by local decision-makers is vital for their ultimate use in identifying upward trends that indicate an increase in the prevalence of disease, or downward trends that indicate a positive impact of prevention efforts (Centers for Disease Control, MMWR (Morbidity & Mortality Weekly Report), 1995, 44, 124-125).
A minimum set of core sex behavior questions was developed recently for the Behavioral Risk Factor Surveillance System (BRFSS). The BRFSS is a monthly telephone survey conducted by state health departments in collaboration with the Centers for Disease Control and Prevention. Its primary purpose is to provide state-specific estimates of the prevalence of behaviors related to the leading causes of death of US adults. Starting in 1996, states may elect to add a series of questions on sex behavior and other risk factors for HIV. These core questions were developed collaboratively by state HIV/AIDS programs and state BRFSS coordinators. Although the questions are limited in scope, being add-ons to an existing survey, their inclusion in the BRFSS is a crucial step toward having a set of questions covering core data elements with a “local” (state rather than national) scope in a survey using a commonly agreed-upon and scientifically valid methodology.
As long as public health practitioners need to know about emerging changes in sexual risk behaviors, whether to anticipate increases in the incidence of sexually transmitted disease or to measure the effectiveness of education and prevention programs in increasing safe sex practices, sexual behavior surveying will be part of public health practice. Thus, given the current state of the art in monitoring sexual risk behaviors, we need to continue to improve existing questions and instruments and perhaps develop new ones, to pursue and use methodological research that improves the representativeness of survey samples and the validity of self-reports of sexual risk practices, and to increase generalizability across surveys.
Robert M. Brackbill holds a doctorate in psychology from the University of Minnesota and a Master of Public Health degree from the University of California at Berkeley. He is a Scientific Information Specialist in the Division of Sexually Transmitted Disease Prevention at the Centers for Disease Control and Prevention in Atlanta. Dr. Brackbill’s research interests include behavioral correlates of sexually transmitted diseases, methodological development of behavioral surveillance systems, and behavioral interventions in the prevention of sexually transmitted diseases.
“One thought fills immensity.”
William Blake