Browsing by Author "Gaugler, Barbara B."
Now showing 1 - 6 of 6
Item: A test of a model of employment interview information gathering (1994)
Spychalski, Annette C.; Gaugler, Barbara B.

Interviewers' questioning behavior and the predictive validity of applicant ratings vary considerably in unstructured interviews. A model hypothesizing a relationship between these variables is tested in this study. The model proposes that the relationship between interviewer questioning behavior and evaluation validity is mediated by the diagnosticity of the applicant information collected during the interview. The process and content of three interviewers' questioning of 149 candidates for an entry-level correctional officer position were examined. Although the complete information gathering model was not supported, a robust relationship between questioning behavior and information diagnosticity emerged. Furthermore, the validity of individual interviewers' applicant evaluations varied considerably. These results reinforce the existence of differences in interviewers' questioning behavior and in the quality of applicant information they gather. Because differences in questioning behavior correspond to differences in the predictive validity of applicant ratings, both variables should be monitored at the individual interviewer level.

Item: Context effects in a group interaction exercise (1991)
Butler, Stephanie Kay; Gaugler, Barbara B.

Context effects are a robust finding in psychology and are manifested in the form of assimilation effects and contrast effects. Assimilation effects occur when judgments of a target stimulus are biased toward the level of non-target, context stimuli. Contrast effects occur when judgments of a target stimulus are biased in the direction opposite that of non-target context stimuli; they are much more prevalent than assimilation effects. Limited research has been conducted on contrast effects in industrial/organizational psychology, and no study has yet examined contrast effects when target and non-target stimuli are observed simultaneously. The purpose of this study was to examine contrast effects in a group interaction setting (a leaderless group discussion (LGD) exercise of an assessment center) where all stimuli were observed simultaneously. Two factors were manipulated: the performance level of the non-target stimuli (above-standard and/or below-standard candidates) and the observation condition of the target stimulus (a standard candidate). In addition, the order in which the standard candidate was rated was counterbalanced. It was hypothesized that contrast effects would occur in the LGD. One hundred eighty-seven undergraduates were trained as raters and then viewed a videotape of a leaderless group discussion exercise in which a standard candidate interacted either with two above-standard candidates, two below-standard candidates, or one above-standard and one below-standard candidate. Each videotape contained the same footage of the standard candidate; consequently, her performance was identical across conditions. Participants were assigned to observe one of the three candidates (the target candidate or one of the non-target candidates). During the rating session, when the assessors discussed the performance of the candidates, the standard candidate's performance was discussed in either the first, second, or third position. Individual ratings and consensus ratings were collected and analyzed. At the individual rating level, contrast effects were present in leaderless group discussion exercise ratings. Specifically, the standard candidate was rated significantly higher when performing with below-standard candidates than with above-standard candidates. The observation assignment had no significant influence on the magnitude of contrast effects; however, a leniency effect occurred for assessors who were assigned to observe the standard candidate. Contrast effects were not present in the raters' consensus ratings. Conclusions, suggestions for future research, and implications of the study are discussed.

Item: Contrast and assimilation effects: A meta-analytic review (1994)
Rudolph, Amy Spence; Gaugler, Barbara B.

The effects of contrast and assimilation in person and sensory perception tasks were reviewed and examined within and across the psychosocial and psychophysical research domains. A wide range of effect sizes that varied in both magnitude and direction was found. Meta-analysis of 57 studies containing 172 effect sizes across the total sample revealed a mean corrected d of -.21 and a variance of 1.06, indicating contrast. The mean corrected d for studies in the psychosocial domain was -.22, while the mean corrected d for psychophysical studies showed little effect, -.04. Effect sizes were corrected for sampling error and unreliability in the dependent measure; these artifacts accounted for little of the variance in study outcomes. Sufficient variance remained both within and across domains after correcting for statistical artifacts to justify the search for moderator variables. Across the total sample, effect size was moderated by type of rater and by stimulus presentation order: serial presentation of context and target stimuli resulted in contrast effects, and simultaneous presentation resulted in assimilation effects. Graduate students produced ratings with the greatest magnitude of contrast effects, followed by psychology undergraduates and unspecified undergraduates; the ratings of nonprofessional adult subjects showed assimilation. Within the psychophysical domain, contrast effects resulted when studies were published in perceptual psychology journals, when stimuli were presented simultaneously, and when the degree of discrepancy was high; type of rater did not moderate effect size in this domain. Within the psychosocial area, contrast effects were seen when the study was unpublished or appeared in the education literature, when the context and target stimuli were presented in similar forms, when the research was conducted in an applied lab setting, when stimuli were presented serially, when subjects were instructed to form an impression or evaluate performance, and when subjects actively rated the contextual stimuli. Assimilation effects were found in this domain when nonprofessional, unspecified adults served as subjects, when subjects were familiar with the stimuli, and when subjects were not trained in the rating process. The degree of discrepancy between the context and target stimuli, the time span between observation and rating, the presence of a distracter task, and subjects' interaction with others did not moderate effect size within the psychosocial domain. The findings suggest that, although contrast and assimilation may be pervasive, many variables moderate the magnitude and direction of the effects. In addition, integration across the psychophysical and psychosocial domains may not be appropriate. Limitations of meta-analysis, implications, and suggestions for future research are discussed.
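
The artifact corrections this abstract describes follow the general logic of psychometric meta-analysis: each observed d is disattenuated for unreliability in the dependent measure, effect sizes are averaged with sample-size weights, and the variance expected from sampling error alone is subtracted from the observed variance to decide whether a moderator search is warranted. The sketch below illustrates that logic in Python; it is a minimal, hypothetical reconstruction under those assumptions — the function names, example effect sizes, and reliability values are illustrative, not the study's actual code or data.

```python
import math

def disattenuate(d_obs, r_yy):
    """Correct an observed standardized mean difference (d) for
    unreliability in the dependent measure: d_c = d_obs / sqrt(r_yy).
    r_yy is the reliability of the dependent measure (assumed known)."""
    return d_obs / math.sqrt(r_yy)

def bare_bones_meta(effects):
    """effects: list of (d, n) pairs, one per study.
    Returns the sample-size-weighted mean d, the observed variance of d,
    and the residual variance after removing expected sampling error."""
    total_n = sum(n for _, n in effects)
    mean_d = sum(d * n for d, n in effects) / total_n
    var_obs = sum(n * (d - mean_d) ** 2 for d, n in effects) / total_n
    # Expected sampling-error variance of d for a two-group design,
    # averaged across studies with sample-size weights.
    var_err = sum(n * (4.0 / n) * (1.0 + mean_d ** 2 / 8.0)
                  for _, n in effects) / total_n
    return mean_d, var_obs, var_obs - var_err

# Hypothetical example: three observed effects with reliabilities and n's.
corrected = [(disattenuate(d, r), n)
             for d, r, n in [(-0.30, 0.80, 60), (-0.15, 0.90, 120), (0.05, 0.85, 45)]]
mean_d, var_obs, var_residual = bare_bones_meta(corrected)
print(f"mean corrected d = {mean_d:.2f}, residual variance = {var_residual:.2f}")
```

If substantial residual variance remains after these corrections, as the abstract reports, that is the usual justification for testing moderators such as rater type and presentation order.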

Item: Is there judgment bias in the assessment center method? (1990)
Hayes, Theodore Laurance; Gaugler, Barbara B.

Recent analyses of assessment center ratings have demonstrated that assessors who are trained to make dimension-based assessments may instead base their judgments on information other than dimension performance. This study evaluated the effects of enhanced accountability for making justifiable behavioral recordings and evaluations on assessor accuracy. Specifically, it was predicted that enhanced accountability to justify ratings and behavioral observations would lead assessors to make more accurate ratings and observations than assessors whose personal accountability was not enhanced. Results showed that, as predicted, when accountability was not enhanced, assessors relied on extraneous performance information (exercises, personality evaluations) when making their overall ratings. Assessors whose accountability was enhanced used only dimension information when making overall ratings, made more efficient behavioral observations and classifications, and had higher overall rating accuracy than assessors whose accountability was not enhanced. However, enhanced accountability did not result in significantly different overall confidence in assessors' decisions compared to those whose accountability had not been enhanced.

Item: The influence of dimension concreteness on assessors' judgments (1991)
Parker, Debra K.; Gaugler, Barbara B.

Assessment center dimensions have often been found to be low in convergent and discriminant validity (Hinrichs & Haanpera, 1976; Sackett & Dreher, 1982; Sackett & Hakel, 1979; Turnage & Muchinsky, 1982). Assessors' use of prototypes may interfere with assessment center ratings, and reliance on prototypes may be especially pronounced when dimensions are abstract. In this study, the influence of concrete dimensions on assessors' observations, classifications, rating accuracy, and convergent and discriminant validity was investigated in an assessment center simulation. Sixty-six university students were trained as assessors. Using either concrete or abstract dimensions, they then evaluated the performance of confederates in three situational exercises. Subjects who rated concrete dimensions classified behaviors more accurately, rated dimensions more accurately according to two accuracy measures, and produced somewhat better convergent and discriminant validity than subjects who rated abstract dimensions. Subjects who rated abstract dimensions, however, had more accurate ratings according to one accuracy measure than did subjects who rated concrete dimensions.

Item: The influence of ratee performance variations on raters' judgments (1990)
Rudolph, Amy Spence; Gaugler, Barbara B.

The purpose of this study was to investigate whether prior performance variations within and among job candidates affect evaluations of present performance and whether these variations result in ratings that are exaggerated or erroneous. There were three conditions: a consistent performance condition (CP), a within-candidate performance variation condition (WCV), and a between-candidate performance variation condition (BCV). Contrast effects were found in both the BCV and WCV conditions. In addition, ratings obtained when there were performance variations within and among candidates were significantly more accurate than those obtained when there were no performance variations. Practical implications and suggestions for future research are discussed.