5) MORE RIGOROUS DATA ANALYSIS METHODOLOGIES BETTER SUITED TO UNCOVERING THESE EFFECTS

This analysis focuses on the power of current models of media exposure to detect statistically significant effects. "The results of the study indicate that the vast majority of election studies lack the statistical power to detect exposure effects for the three, five, or perhaps 10 point shifts in vote choice that are the biggest that may be reasonably expected to occur. Even a study with as many as 100,000 interviews could, depending on how it deploys resources around particular campaign events, have difficulty pinning down important communication effects of the size and nature that are likely to occur." Increases in sample size do not, however, translate directly into increases in power, and detection of exposure effects is likely to be unreliable unless the effects are both large and captured in a large survey. The point is that not ALL samples need to be large, only those meant to detect certain types of effects. --ZALLER 2002

Arguments: Reliance on self-reported levels of news media use should be abandoned in favor of more reliable measures of news recall, though even this measure is strongly related to a respondent's background political knowledge. All else equal, more use of more types of media should indicate greater exposure to the news, and hence greater recall. The results indicate, however, that media exposure items predicted recall of national and international news events less well than any of the other independent variables, indicating that respondents are well stratified by preexisting levels of general political knowledge. Well-informed people succeed in learning most types of news regardless of topic, and domain-specific effects, even large ones, do not override the association between general political knowledge and news reception but instead supplement the effect of prior information.
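The power problem Zaller describes can be sketched with a standard two-proportion sample-size calculation. This is an illustration using textbook formulas, not Zaller's own model; the 50% baseline, alpha = .05, and 80% power are assumptions.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.8416):
    """Sample size per group to detect a shift from p1 to p2
    with a two-sided two-proportion z test.
    Defaults: alpha = .05 (z = 1.96), 80% power (z = 0.8416)."""
    pbar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Per-group sample sizes needed for 3-, 5-, and 10-point vote-choice shifts
# from an assumed 50% baseline:
for shift in (0.03, 0.05, 0.10):
    print(f"{shift:.0%} shift: n = {n_per_group(0.50, 0.50 + shift)} per group")
```

A 3-point shift requires several thousand respondents per group, which helps explain why typical election studies, with far smaller effective samples around any given campaign event, lack the power Zaller demands.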
The authors argue that reception of the news (i.e., exposure plus attention, comprehension, and retention) is a prerequisite for media-induced opinion change, but concede that simple exposure to news media, even if it does not leave a lasting store of retrievable information, may be equally sufficient to produce various attitudinal effects. --PRICE & ZALLER 1993

Implications: Zaller notes: "The slightly positive intercept captures the idea that a few people who have zero media exposure may nonetheless respond to campaign events, presumably because they discuss politics with people who have higher levels of media exposure. If exposure to a debate or news story makes citizens more likely to vote for a candidate, it might positively affect trait judgments, emotional reactions, and thermometer scores, and these variables might then absorb the media effect. This danger would be especially great if media exposure were measured with more error than the other variables, as it may well be. The problem can occur in any sort of model, but is especially likely to occur when summary variables, like trait evaluations or emotional reactions, are included in a vote model." --ZALLER 2002

This multicollinearity issue is of particular interest. If much of the media effect is being 'absorbed' and transmitted to opinion or vote change through other variables, then instead of building ever more complex power models, shouldn't we be spending time trying to ascertain the nature and extent of these absorbed effects? Granted, this is a complex situation: who is to say which variables are affected (if any, for some respondents), or whether all individuals absorb, transmit, or otherwise receive these effects in the same way? Still, it seems hubristic to accept this as a "necessary but realistic fact" rather than assume we could find a way to separate and/or capture the absorbed effects.
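Zaller's "absorption" worry can be demonstrated with a small simulation. The data-generating process and all coefficients below are assumed for illustration: when a summary variable such as a trait evaluation lies on the causal path from exposure to vote, adding it to the regression soaks up the exposure coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
exposure = rng.normal(size=n)                  # media exposure (error-free here)
trait = 0.8 * exposure + rng.normal(size=n)    # trait evaluations shaped by exposure
vote = 0.5 * trait + rng.normal(size=n)        # vote driven by traits, so exposure
                                               # works entirely *through* them

def ols_coefs(cols, y):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_total = ols_coefs([exposure], vote)[1]          # exposure alone: effect visible
b_partial = ols_coefs([exposure, trait], vote)[1] # trait added: exposure coefficient
                                                  # collapses toward zero
print(f"exposure alone: {b_total:.3f}; controlling for trait: {b_partial:.3f}")
```

The total effect (about 0.8 * 0.5 under these assumed coefficients) is real, but a vote model that "controls for" trait evaluations reports essentially no media effect, which is exactly the absorption Zaller warns about.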
Unfortunately, I don't have any "off the cuff" ideas, remote or significant, that might begin to address this issue, so it is left for future pontification. But the point remains: do we really want to assume that there is only one path to "verifiable significance" when it comes to determining media effects? --ZALLER 2002

Current survey correction methods do not account for DIF (differential item functioning, i.e., subjective response variation) between respondents. Instead, the authors recommend interpersonally comparable measurements based on vignette assessments, in which the actual level of the variable described is the same for every respondent, thus adjusting for the self-assessments that create the DIF. These statistical corrections systematically link related vignette-measured variables pairwise by respondent, essentially providing a unique "anchor point" for each respondent that can be transposed to multiple response sets. The authors explain: the respondent's perception of the level of the person described in vignette j is elicited via a survey question with the same K1 ordinal categories as the first self-assessment question; response consistency is thus formalized. --KING, MURRAY, SALOMON, & TANDON 2003

Exposure to misleading statements about Social Security causes citizens to hold mistaken assumptions about the program's future. Words, symbols, and images can mislead even the most informed, educated, and interested citizens. Although the impact of misleading rhetoric is lessened for more sophisticated citizens, highly misleading environmental influences affect even financial experts and the well educated. The quality of political information, therefore, and not just the amount of information available, significantly affects the quality of citizen knowledge. --JERIT & BARABAS 2003
Recent research also suffers because unanticipated effects are frequently ignored (17), statistical significance is falsely equated with political significance (17), and effects on elites, rather than 'ordinary citizens', are not accounted for (17). --GRABER 2002

Information needed to answer questions of opinion or long-term memory is the most prone to variation; responses come not from a single stored preference but as the culmination of a range of preferences, which are processed to find the 'correct' answer on a survey questionnaire. Not all questions are answered in this way, however, so analysis of the results should take this into consideration. --FELDMAN 1995

What is typically treated as 'measurement error', static, or inconsistency in methodological analyses of survey responses may actually indicate this variation, or range of preferences, inherent in the decision-making process. If this is true, then the relevance of these results can be either greatly enhanced or largely diminished depending on whether we model this variance or treat it as a random error term.
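Feldman's point can be made concrete with a simulation (all distributions and variances below are assumed for illustration): if each answer is a draw from a respondent's genuine range of considerations rather than a noisy reading of a fixed point, test-retest correlations fall well below 1 even though nothing here is "error" in the usual sense.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
mu = rng.normal(size=n)        # each respondent's central attitude (between-person spread = 1)
sd_within = 1.0                # spread of each respondent's range of considerations

# Each interview elicits one draw from the respondent's own range:
wave1 = mu + sd_within * rng.normal(size=n)
wave2 = mu + sd_within * rng.normal(size=n)

r = np.corrcoef(wave1, wave2)[0, 1]
# Expected test-retest correlation = var(mu) / (var(mu) + sd_within**2) = 0.5
print(f"wave 1 / wave 2 correlation: {r:.3f}")
```

Under these assumed variances, half the observed response variance is genuine within-person preference variation, so a model that books it all as random error would badly understate attitude stability.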
--FELDMAN 1995

Recent research is also hampered by measurement problems. Media effects are highly complex and difficult to measure: researchers lack tools to measure objective human thinking (NLP potentials?); the cognitions, feelings, and actions that respondents bring to the table are difficult to separate from the actual effects that may have changed them; and impacts vary by subject matter. "Inability to prove the scope of mass media impact beyond a doubt has made social scientists shy away from assessing media influence on many important political events" (16). --GRABER 2002

The ability of researchers to draw general conclusions from this literature has been frustrated by inconsistent methods for analyzing news content, conflicting ideas of what "independent" news coverage might look like, and the tendency to study press-state relationships using stand-alone case studies with unique policy contexts and dynamics that obscure common patterns. --ZALLER 2002

Methods and Data: a sample survey of two provinces in China (n=371 respondents) and three in Mexico (n=551). For the self-assessment and each of the vignette questions, respondents are given the same set of ordinal response categories, for example: "(1) No say at all, (2) Little say, (3) Some say, (4) A lot of say, (5) Unlimited say." Monte Carlo simulations compare the results of the vignette-anchoring process with those of typical ordinal probit methods applied to uncorrected responses. --KING, MURRAY, SALOMON & TANDON 2003
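The nonparametric side of King et al.'s correction can be sketched as a simple recode: a respondent's self-assessment is located relative to that same respondent's ratings of the fixed-level vignettes, producing a scale that is comparable across people. This is a minimal sketch of the anchoring idea; the function name is mine, and the paper's parametric model handles ties and inconsistently ordered vignette ratings that this shortcut glosses over.

```python
def vignette_recode(self_rating, vignette_ratings):
    """Anchor a 1..K ordinal self-assessment against the same respondent's
    ratings of J vignettes whose actual levels are fixed for everyone,
    yielding a 1..(2J+1) scale comparable across respondents
    (sketch after King, Murray, Salomon & Tandon 2003)."""
    c = 1
    for z in sorted(vignette_ratings):
        if self_rating > z:
            c += 2          # strictly above this vignette's rating
        elif self_rating == z:
            return c + 1    # tied with this vignette's rating
    return c

# Two respondents both answer "3 = Some say" about their own political efficacy,
# but rate the same three fixed vignettes differently:
print(vignette_recode(3, [2, 3, 4]))  # ties the middle vignette: mid-scale
print(vignette_recode(3, [1, 1, 2]))  # above all vignettes: top of scale
```

The identical raw "3" lands at different corrected positions because each respondent's vignette ratings reveal how they personally use the response categories, which is exactly the DIF the correction is meant to remove.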