Bogorevich, Valeriia.
Native and Non-Native Raters of L2 Speaking Performance: Accent Familiarity and Cognitive Processes.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Native and Non-Native Raters of L2 Speaking Performance: Accent Familiarity and Cognitive Processes.
Author:
Bogorevich, Valeriia.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2018.
Description:
271 p.
Notes:
Source: Dissertation Abstracts International, Volume: 79-10(E), Section: A.
Contained By:
Dissertation Abstracts International, 79-10A(E).
Subject:
Linguistics.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10821820
ISBN:
9780438007437
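As a quick sanity check, the ISBN above can be verified with the standard ISBN-13 checksum (alternating 1/3 weights over the first twelve digits); a minimal stdlib-only sketch, not part of the original record:

```python
# Validate the ISBN-13 shown in this record (9780438007437) with the
# standard checksum: alternating 1/3 weights over the first twelve
# digits, check digit = (10 - sum mod 10) mod 10.

def isbn13_check_digit(first12: str) -> int:
    """Return the check digit for the first twelve digits of an ISBN-13."""
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn: str) -> bool:
    """True if the string contains thirteen digits with a correct checksum."""
    digits = "".join(c for c in isbn if c.isdigit())
    return len(digits) == 13 and isbn13_check_digit(digits[:12]) == int(digits[12])

print(is_valid_isbn13("9780438007437"))  # True
```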
Dissertation Note:
Thesis (Ph.D.)--Northern Arizona University, 2018.
Summary:
Rater variation in performance assessment can impact test-takers' scores and compromise a test's fairness and validity (Crooks, Kane, & Cohen, 1996); it is therefore important to investigate raters' scoring patterns in order to inform rater training. Substantial work has been done analyzing rater cognition in writing assessment (e.g., Cumming, 1990; Eckes, 2008); however, few studies have tried to classify factors that could contribute to rater variation in speaking assessment (e.g., May, 2006).
LDR  05200nam a2200349 4500
001  931668
005  20190716101635.5
008  190815s2018 ||||||||||||||||| ||eng d
020     $a 9780438007437
035     $a (MiAaPQ)AAI10821820
035     $a (MiAaPQ)nau:11514
035     $a AAI10821820
040     $a MiAaPQ $c MiAaPQ
100  1  $a Bogorevich, Valeriia. $3 1213869
245  10 $a Native and Non-Native Raters of L2 Speaking Performance: Accent Familiarity and Cognitive Processes.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300     $a 271 p.
500     $a Source: Dissertation Abstracts International, Volume: 79-10(E), Section: A.
500     $a Advisers: Soo Jung Youn; Okim Kang.
502     $a Thesis (Ph.D.)--Northern Arizona University, 2018.
520     $a Rater variation in performance assessment can impact test-takers' scores and compromise a test's fairness and validity (Crooks, Kane, & Cohen, 1996); it is therefore important to investigate raters' scoring patterns in order to inform rater training. Substantial work has been done analyzing rater cognition in writing assessment (e.g., Cumming, 1990; Eckes, 2008); however, few studies have tried to classify factors that could contribute to rater variation in speaking assessment (e.g., May, 2006).
520     $a The present study used a mixed methods approach (Tashakkori & Teddlie, 1998; Greene, Caracelli, & Graham, 1989) to investigate potential differences between native English-speaking and non-native English-speaking raters in how they assess L2 students' speaking performance. Kane's (2006) argument-based approach to validity served as the theoretical framework. The study challenged the plausibility of the assumptions underlying the evaluation inference, which links the observed performance to the observed score and depends on the assumption that raters apply the scoring rubric accurately and consistently.
520     $a The study analyzed raters' scoring patterns when using a TOEFL iBT speaking rubric analytically. The raters provided scores for each rubric criterion (i.e., Overall, Delivery, Language Use, and Topic Development). Each rater received individual training, practice, and calibration experience. All the raters filled out a background questionnaire about their teaching experience, language learning history, the backgrounds of students in their classrooms, and their exposure to and familiarity with the non-native accents used in the study.
520     $a For the quantitative analysis, two groups of raters, 23 native (North American) and 23 non-native (Russian), graded and left comments on speech samples from Arabic (n = 25), Chinese (n = 25), and Russian (n = 25) L1 backgrounds. The students' samples were responses to two independent speaking tasks and varied from low to high proficiency levels. For the qualitative part, 16 raters (7 native and 9 non-native) shared their scoring behavior through think-aloud protocols and interviews. The speech samples graded during the think-aloud sessions included Arabic (n = 4), Chinese (n = 4), and Russian (n = 4) speakers.
520     $a Raters' scores were examined with Multi-Faceted Rasch Measurement using the FACETS (Linacre, 2014) software to test group differences between native and non-native raters, as well as between raters familiar and unfamiliar with the accents of students in the study. In addition, raters' comments were coded and used to explore rater group differences. The qualitative analyses involved thematic coding of transcribed think-aloud and interview sessions using content analysis (Strauss & Corbin, 1998) to investigate raters' cognitive processes and their perceptions of their rating processes. The coding included such themes as decision-making and re-listening patterns, perceived severity, criteria importance, and non-rubric criteria (e.g., accent familiarity, L1 match). Afterward, the quantitative and qualitative results were analyzed together to describe potential sources of rater variability, employing a side-by-side comparison of qualitative and quantitative data (Onwuegbuzie & Teddlie, 2003).
520     $a The results revealed no radical differences between native and non-native raters; however, some differing patterns were observed. Non-native raters showed more lenient grading patterns towards students whose L1 matched their own. In addition, all raters, regardless of group, demonstrated several rating patterns depending on their focus while listening to examinees' performance and on their interpretations of the rating criteria during the decision-making process. The findings can motivate professionals who oversee and train raters at testing companies and intensive English programs to study their raters' scoring behaviors and individualize training, helping to make exam ratings fair and raters interchangeable.
590     $a School code: 0391.
650  4  $a Linguistics. $3 557829
690     $a 0290
710  2  $a Northern Arizona University. $b English. $3 1188248
773  0  $t Dissertation Abstracts International $g 79-10A(E).
790     $a 0391
791     $a Ph.D.
792     $a 2018
793     $a English
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10821820
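The "Record Type: Language materials, printed : Monograph/item" label earlier in this record is derived from fixed character positions in the MARC leader (the LDR line). A minimal stdlib-only sketch of that decoding, assuming the web display preserved the leader's first eight positions intact (trailing padding appears collapsed, so it is ignored here):

```python
# Decode the fixed-position characters at the start of the MARC leader
# shown in this record, per the MARC 21 leader layout:
#   00-04 record length, 05 record status, 06 record type, 07 bib level.

RECORD_TYPES = {"a": "Language material"}   # position 06 (subset of codes)
BIB_LEVELS = {"m": "Monograph/item"}        # position 07 (subset of codes)

def decode_leader(leader: str) -> dict:
    """Extract record length, status, type, and bibliographic level
    from the first eight characters of a MARC leader string."""
    return {
        "record_length": int(leader[0:5]),          # "05200" -> 5200 bytes
        "record_status": leader[5],                 # "n" = new record
        "record_type": RECORD_TYPES.get(leader[6], leader[6]),
        "bib_level": BIB_LEVELS.get(leader[7], leader[7]),
    }

print(decode_leader("05200nam a2200349 4500"))
```

For this record the leader decodes to a 5200-byte, newly-created record of type "Language material" at bibliographic level "Monograph/item", which is exactly what the labeled view reports.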