Details


History

CAPE was developed in the late 1990s as a collaborative effort between BYU professors and language departments. CAPE’s ongoing development is overseen by Dr. Jerry Larson, Director of the Humanities Technology and Research Support Center. Originally, CAPE was developed for Windows machines and delivered on CD; since 2002, however, CAPE has evolved into webCAPE and is now administered completely online. Spanish was the first language developed, followed by French, German, Russian, ESL, and Chinese. Developing CAPE requires significant involvement and research: content must be written and reviewed, then rigorously tested to determine the level and significance of each question. This process is repeated until each question is calibrated and weighted according to its difficulty.

Each language has a database of questions ranging from about 400 (Russian) to as many as 1,000 (Spanish). Studies have shown that a student would have to take the exam approximately six times before beginning to see repeated questions.

How CAPE works

After starting the CAPE exam, the student enters a password and responds to questions regarding his or her previous language experience to initiate a test record file.

Once the record identification information is entered, the computer prepares the student for the test. The first screen briefly explains that the student is to respond to multiple-choice questions by typing and confirming the letter of the correct answer. To ensure the student has understood the instructions, a sample test item is given, after which the actual test begins. The computerized adaptive placement exams are designed to provide individualized testing, identifying the student’s ability level with combinations of grammar, reading, and vocabulary questions.

As a student proceeds through the test, the computer selects and displays items based upon the student’s responses to previous items. The adaptive testing algorithm has been written so that the first six questions serve as “level checkers.” After the first six items, the test begins to “probe” in order to fine-tune the measurement, increasing or decreasing the difficulty by one level after each response. The test terminates when 1) the student incorrectly answers four questions at the same difficulty level, or 2) the student answers five questions at the highest difficulty level.
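This selection logic can be sketched in code. The Python sketch below is illustrative only: the number of difficulty levels, the starting level, the simulated-response model, and all names are assumptions made for the example, not details of CAPE’s actual implementation.

import random

# Minimal simulation of the adaptive loop described above. All names,
# level counts, and probabilities here are illustrative assumptions,
# not details of CAPE's actual implementation.

NUM_LEVELS = 10  # assumed number of difficulty levels

def simulate_response(level, ability):
    # Stand-in for a student's answer: more likely to be correct when
    # the item's level is at or below the student's true ability.
    return random.random() < (0.9 if level <= ability else 0.3)

def run_adaptive_test(ability, start_level=3):
    level = start_level
    misses = {}       # wrong answers tallied per difficulty level
    top_correct = 0   # answers given at the highest level
    history = []      # sequential record of the student's performance

    for item in range(1, 200):  # safety bound on test length
        correct = simulate_response(level, ability)
        history.append((item, level, correct))

        if item <= 6:
            # The first six items serve as coarse "level checkers";
            # here we simply step toward the student's apparent level.
            level = min(NUM_LEVELS, level + 1) if correct else max(1, level - 1)
            continue

        if correct:
            if level == NUM_LEVELS:
                top_correct += 1
                if top_correct == 5:   # five answers at the top level: stop
                    break
            else:
                level += 1             # probe one level harder
        else:
            misses[level] = misses.get(level, 0) + 1
            if misses[level] == 4:     # four misses at one level: stop
                break
            level = max(1, level - 1)  # probe one level easier

    return level, history

final_level, log = run_adaptive_test(ability=6)
print("terminated at level", final_level, "after", len(log), "items")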

By requiring at least four misses at a given level, the test makes allowances for lucky guesses or inadvertent errors due to lapses in concentration, nervousness, or other distractions. To avoid duplicate questions, the index of each question is flagged as it is used. As the test proceeds, a sequential file records the student’s performance. At the conclusion of the test, the computer displays the student’s performance level. The student then consults the placement chart (determined by the respective language department), which lists the ranges of performance levels corresponding to the various language courses in the curriculum. Thus, the student is immediately advised of the class that appears best suited to his or her ability level.
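As a minimal sketch of that final lookup, the placement chart can be thought of as score ranges mapped to courses. The ranges and course names below are invented for illustration, since each language department sets its own chart.

# Hypothetical placement chart: score range -> course. The actual ranges
# are set by each language department; these values are invented.
PLACEMENT_CHART = [
    ((0, 150), "Spanish 101"),
    ((151, 270), "Spanish 102"),
    ((271, 345), "Spanish 201"),
    ((346, 428), "Spanish 202"),
]

def place(score):
    for (low, high), course in PLACEMENT_CHART:
        if low <= score <= high:
            return course
    return "consult the department"  # score outside the charted ranges

print(place(300))  # -> Spanish 201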

A survey of BYU students who had taken the CAPE showed that even students with little or no computer experience strongly agreed that their limited computer experience had little or no effect on their performance on the test.

Validation

The validity correlation coefficients for the Spanish, French, and German tests were calculated against the Multiple Assessment Programs and Services (MAPS) tests from Educational Testing Service in Princeton, New Jersey. They are as follows:

Spanish = .91
French = .80
German = .89

The reliability (test-retest) coefficients were calculated as alternate-forms reliability coefficients on student scores from two administrations of the CAPE. The coefficients were obtained using the Pearson product-moment correlation formula (shown below). The reliability coefficients are as follows:

Spanish = .86
French = .76
German = .80
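For reference, the Pearson product-moment correlation is the standard formula; with $x_i$ and $y_i$ denoting a student’s scores on the two administrations and $\bar{x}$, $\bar{y}$ the respective means:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$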

In order to determine how accurately placements can be made based on S-CAPE scores, a study was conducted with 179 students enrolled in Spanish 101, 102, 201, and 302. These students took the S-CAPE at the beginning of the semester. Midway through the semester, each student’s teacher was interviewed and asked to rate, on a six-point scale (0–5), how appropriately the student had been placed. A rating of 0–2 represented a “bad to poor” placement, and a rating of 3–5 indicated a “good to excellent” placement. The teachers indicated that 143 of the 179 students (79.9%) had been placed appropriately. Only three students had been placed too high; most of the teachers who indicated improper placement said the placement should have been one course higher, meaning that, for the most part, the errors in placement were conservative.

The webCAPE English Language Assessment has been calibrated in accordance with the American Council on the Teaching of Foreign Languages (ACTFL) proficiency guidelines: novice, intermediate, advanced, and superior. These proficiency levels are defined separately for the ability to listen, read, and write.

The three sections of the English Language Assessment (listening, reading, and writing) are taken independently. After completing one section, the student may continue to another section or stop and resume at another time.
