Data and codebook for SSLA article: "A closer look at a marginalized test method: Self-assessment as a measure of speaking proficiency"
Principal Investigator(s): Paula Winke, Michigan State University; Xiaowan Zhang, MetaMetrics
Version: V1
Name | File Type | Size | Last Modified
---|---|---|---
 | text/csv | 11.6 KB | 03/14/2022 07:51 AM
 | text/csv | 1.9 KB | 03/14/2022 07:53 AM
 | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | 14.7 KB | 03/14/2022 09:43 AM
 | text/csv | 123.3 KB | 03/14/2022 09:43 AM
 | application/x-spss-sav | 102.2 KB | 03/14/2022 09:43 AM
 | text/csv | 40.9 KB | 03/14/2022 09:43 AM
 | application/x-spss-sav | 69.3 KB | 03/14/2022 07:52 AM
Project Citation:
Winke, Paula, and Zhang, Xiaowan. Data and codebook for SSLA article: “A closer look at a marginalized test method: Self-assessment as a measure of speaking proficiency.” Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2022-03-14. https://doi.org/10.3886/E164981V1
Project Description
Summary:
Second language (L2) teachers may shy away from self-assessments because of warnings that students are not accurate self-assessors. This information stems from meta-analyses (Ross, 1998; Li & Zhang, 2021) in which self-assessment scores on average did not correlate highly with proficiency test results. However, researchers mostly used Pearson correlations when polyserial correlations would have been more appropriate. Furthermore, self-assessments today can be computer-adaptive, and nonlinear statistics are needed to investigate their relationship with other measurements. We wondered: if we explored the relationship between self-assessment and proficiency test scores using more robust measurements (polyserial correlation, continuation ratio modeling), would we find different results? We had 807 L2-Spanish learners take a computer-adaptive, L2-speaking self-assessment and the ACTFL Oral Proficiency Interview – computer (OPIc). The scores correlated at .61 (polyserial). Using continuation ratio modeling, we found each unit of increase on the OPIc scale was associated with a 130% increase in the odds of passing the self-assessment thresholds. In other words, a student was more likely to move on to higher self-assessment subsections if they had a higher OPIc rating. We found computer-adaptive self-assessments appropriate for low-stakes L2-proficiency measurements, especially because they are cost effective, make intuitive sense to learners, and promote learner agency.
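The continuation-ratio logic described above can be sketched in a short Python example: an ordinal self-assessment outcome is expanded into sequential pass/fail records (one per threshold attempted), and a logistic-regression slope is converted into the percent change in odds. Note the function name and the coefficient value are hypothetical illustrations, not taken from the study's analysis code; the reported 130% increase corresponds to an odds ratio of roughly 2.30.

```python
import math

def expand_continuation_ratio(score, n_thresholds):
    """Expand an ordinal score into sequential (threshold, passed) records.

    A learner contributes one 'pass' record for every threshold cleared,
    plus one 'fail' record for the first threshold not cleared (if any).
    These expanded records are what a continuation-ratio model fits with
    ordinary logistic regression.
    """
    records = [(k, 1) for k in range(1, score + 1)]
    if score < n_thresholds:
        records.append((score + 1, 0))
    return records

# A learner who cleared 2 of 3 self-assessment thresholds:
print(expand_continuation_ratio(2, 3))  # [(1, 1), (2, 1), (3, 0)]

# Interpreting a logistic slope as a percent change in odds.
# beta is a hypothetical coefficient; exp(beta) is the odds ratio
# per one-unit increase on the predictor (here, the OPIc scale).
beta = 0.833
odds_ratio = math.exp(beta)          # ≈ 2.30
pct_change = (odds_ratio - 1) * 100  # ≈ 130%
print(f"odds ratio {odds_ratio:.2f} = {pct_change:.0f}% increase in odds")
```

The expansion step is the key design choice: once each ordinal outcome becomes a set of binary "continued past threshold k" records, any standard logistic-regression routine can estimate the model.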
Funding Sources:
Language Flagship, National Security Education Program (NSEP) and the Defense Language and National Security Education Office (DLNSEO) (8/1/2014 - 7/31/2016: 2340-MSU-7-PI-093-PO1; 8/1/2016 - 12/31/2019: 0054-MSU-22-PI-280-PO2)
Scope of Project
Subject Terms:
self-assessment;
proficiency;
speaking skills;
Spanish;
college learners;
foreign languages;
continuation ratio modeling;
foreign language learning;
OPIc;
oral proficiency exam
Geographic Coverage:
Michigan
Time Period(s):
1/10/2017 – 5/15/2017 (Spring 2017)
Collection Date(s):
3/10/2017 – 5/15/2017 (Late Spring 2017)
Universe:
College-level Spanish language learners
Data Type(s):
other;
survey data
Collection Notes:
The data in this study are a subset of the data collected for the Language Proficiency Flagship project at Michigan State University. For that project, a sample of intact Chinese, French, Russian, and Spanish classes at Michigan State University was pseudo-randomly selected to have their proficiency measured on five occasions over the course of three academic years (fall 2014 through spring 2017). At each testing occasion, the sampled classes were brought by their language instructors to a computer lab to take a background survey, a self-assessment of oral skills, and a computerized oral proficiency interview test from Language Testing International (a test officially known as ACTFL's OPIc). For the current study, we focus on the Spanish students who were tested in spring 2017, and we use their oral proficiency interview test scores and their self-assessment outcomes.
Methodology
Response Rate:
The students who received interpretable OPIc scores were enrolled in first-year (100-level; N = 131), second-year (200-level; N = 251), third-year (300-level; N = 346), and fourth-year (400-level; N = 79) Spanish courses within the four-year program, for a total of 807 students in this study.
Sampling:
The sample size was not determined via a priori power analysis. It was determined by the number of students who enrolled in the Spanish courses during the study period.
Data Source:
We used two sets of test data (oral self-assessment scores and ACTFL OPIc scores) for this project, and we additionally recorded each student's year in the four-year Spanish program as a gross indicator of Spanish ability.
Collection Mode(s):
computer-assisted self interview (CASI);
web-based survey
Scales:
Please see the codebook, as there is a different scale for each variable.
Weights:
NA
Unit(s) of Observation:
Individual students
Geographic Unit:
NA
This material is distributed exactly as it arrived from the data depositor. ICPSR has not checked or processed this material. Users should consult the investigator(s) if further information is desired.