The Proper Use of Placement Test Scores
The University of Wisconsin Placement Tests are designed for the sole purpose of placing students into college-level courses. The questions on the placement tests are specifically selected with this single purpose in mind. The tests are not intended to measure everything that is learned in high school. Neither are the tests designed to compare students from one high school with students from another, nor to measure success in college-level courses. These tests are intended as part of the information used by advisors for placing students into the most appropriate course in an introductory college-level sequence in English, French, German, mathematics, or Spanish.
To this end, the tests are designed to measure only enough of a student’s achievement to provide an initial screen for course placement. The tests are not intended to be used as the sole piece of data for determining where students should be placed. The experienced teacher will also quickly realize, upon examining the objectives measured by the placement tests, that many skills that are taught in high school and that are necessary for success in college are not measured by the UW Placement Tests. Placement tests do not need to measure everything that is taught in high school in order to help place students into the most appropriate course.
History of the Development of the UW System Placement Tests. It is important to note that the placement tests were all developed at the request of committees composed of faculty from throughout the UW System. Committees of UW faculty in each of the five disciplines for which tests are developed met over periods of up to three years to discuss common entry-level curriculum problems. One problem repeatedly identified by each of these committees was the need to place incoming undergraduate students into the correct course. There was uniform agreement across the disciplines that it would be highly valuable to have a quick, reliable way to sort students into the appropriate classes and to establish fair and uniform standards for measuring students’ abilities in a statewide university system in which curricula vary from campus to campus. One important outcome of these deliberations was the decision to develop a standardized placement test for each discipline. The first forms of the English and Mathematics Placement Tests were published in 1975 and 1982, respectively. The French, German, and Spanish tests followed in 1988.
Each of these committees began by carefully analyzing the individual curricula at each UW System institution. For each committee, the deliberations resulted in a written, detailed set of prerequisite objectives for all courses in the introductory sequence in that discipline. After this list was approved by all campuses, the committee began developing test questions to measure the knowledge and skills identified in the set of objectives. Although the tests include many different types of items designed to measure different knowledge and skills, each committee decided independently that all items would be administered in a multiple-choice format. This decision was driven by the need for quick, cost-effective scoring and reporting of test results. The multiple-choice format is particularly flexible and useful in that it allows items to be written that measure different objectives at various levels of difficulty. The format also allows scores to be reported precisely, accurately, quickly, and relatively inexpensively.
Validity of Placement Tests. The effectiveness of the UW System Placement Tests is gauged by how well the information they provide helps to place students accurately. This effectiveness is determined on an institution-by-institution basis, not for the UW System as a whole. This campus-by-campus analysis of validity is necessary given the different missions of the institutions and the different curricula needed to meet those missions. At each institution, the placement test is used to identify the highest-level course in a discipline’s introductory sequence for which a student is ready. As such, the tests are designed to measure potential, or readiness, to begin study in a sequence of courses in a discipline.
It is tempting to gauge how well a placement test works by comparing the test score with the final grade in the course. Unfortunately, course grade is neither an appropriate nor a useful criterion variable: there is very little commonality among courses in the meaning of grades, and course grades are very difficult to measure reliably. In addition, too many other factors, unrelated to readiness to learn, combine to contribute to the final course grade. Such factors include lack of interest, poor time management throughout the semester, inappropriate study habits, late or incomplete assignments, personal problems unrelated to schoolwork, or incompatibility with the instructor. Each of these is known to affect the quality of a student’s work in the course. None of them has anything to do with improper course placement. Yet all are reflected in the final grade.
There are a number of very good ways to assess how well a placement test performs its intended task. More appropriate criteria exist for assessing the validity of each of the placement tests than comparing test scores with final course grades. Mid-semester grades, grades on the first midterm, or even scores on a more authentic, performance-based assessment given in the first week or two of the course provide a better criterion against which to compare the placement test. Surveys of students’ satisfaction with their placement, or of instructors’ satisfaction with the homogeneity and readiness of their students, are also very helpful for monitoring the success of the tests for placement. In some cases, particularly at smaller institutions, instructors are asked to evaluate each student in the second or third week and identify the course in the introductory sequence for which that student is best suited. Another useful criterion variable is add-drop information. Students who are misplaced often drop the course in which they enrolled and attempt to add a more appropriate class. Add-drop data aren’t always available because these data may not be reported for students who drop very early in a term. When they are available, however, high numbers of adds and drops suggest that students aren’t happy with their placement, haven’t been successful early in the class, or realize they are not ready for the course. At various points in time, the Center has used all of these variables as criteria against which to measure the success of the placement test and the appropriateness of a campus’ cutscores.
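To make this kind of criterion analysis concrete, the following is a minimal sketch, in Python, of how a campus might compare placement scores against an early criterion such as a first-midterm score and against add/drop behavior. The student records, the cut score of 450, and the variable names are hypothetical illustrations only, not the Center’s actual data or procedures.

    # Hypothetical sketch of a criterion analysis for placement validity.
    # The records, cut score, and thresholds are illustrative assumptions.
    from statistics import correlation, mean  # requires Python 3.10+

    # (placement score, first-midterm percent, dropped the course?)
    students = [
        (410, 62, False), (555, 88, False), (480, 75, False),
        (390, 48, True),  (605, 93, False), (445, 66, False),
        (370, 41, True),  (520, 81, False), (465, 70, False),
    ]

    scores = [s for s, _, _ in students]
    midterms = [m for _, m, _ in students]

    # 1. How strongly does the placement score track early course performance?
    print(f"score vs. first midterm: r = {correlation(scores, midterms):.2f}")

    # 2. How do students above and below a hypothetical cut score fare early on?
    CUT = 450  # assumed campus cut score for the higher course
    above = [m for s, m, _ in students if s >= CUT]
    below = [m for s, m, _ in students if s < CUT]
    print(f"mean midterm above cut: {mean(above):.1f}, below cut: {mean(below):.1f}")

    # 3. Add/drop behavior as a rough misplacement signal.
    drop_rate = sum(d for s, _, d in students if s < CUT) / len(below)
    print(f"drop rate below cut: {drop_rate:.0%}")

The same comparison could be run with mid-semester grades, instructors’ week-two placement judgments, or satisfaction survey results substituted as the criterion variable.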
Impact of Misplacement. Student misplacement has serious ramifications for the students, the department, and the institution. Research by the Center on the placement tests has repeatedly demonstrated that students who enroll in a course that is too high are far less likely to be successful than students who enroll in the appropriate-level course. Depending on the specific course and the discipline, over-enrolled students earn an average of about one-half to three-quarters of a grade point less than students who are correctly enrolled. Students in the foreign languages who enroll in a higher-than-appropriate course have also been found to be at much greater risk of failing to earn a grade high enough to receive retroactive credits. Such students, furthermore, tend not to continue in the discipline beyond that course, whereas students who enroll in the appropriate level of course often continue studying the foreign language. Students who enroll too low don’t usually experience the decline in performance seen among students who enroll in a higher-than-appropriate course, but the improvement in grade for these students is usually no more than a quarter to a third of a grade point. At a price of several hundred dollars per credit, plus the possibility that it will take them longer to graduate, most students are unwilling to take courses below their optimal level. Furthermore, the Center’s student course placement satisfaction surveys have demonstrated that correctly enrolled students are much more satisfied at the end of the semester with their experiences in the course than are students who enroll in a course that is either too low or too high for their level of readiness.
Misplacement is also costly for the academic department, and not just in terms of added paperwork or time spent adding and dropping students. Student misplacement and the resulting dropping and adding of courses lead to classes in which students do not have an equal level of readiness. Such classes are very difficult to teach. This, in turn, can cause further problems, particularly in the foreign languages, where students enrolling below their readiness level often intimidate and stifle correctly enrolled students. Furthermore, having inadequate placement tests makes it very difficult for departments to allocate their instructional resources accurately. Students adding one course and dropping another may leave some sections over-full and others half empty. Similarly, it is very hard to gauge the correct number of sections to offer for various courses if, for example, students are allowed to self-select their courses.
The cost of misplacement for the entire university is the most difficult to quantify, but it is certainly nontrivial. Students who do poorly in a course (e.g., College Algebra, Freshman Composition, or 3rd-semester Spanish) usually cannot subsequently enroll in a course earlier in the sequence (e.g., Remedial Math, Remedial English, or 2nd-semester Spanish) and receive credit toward graduation. Therefore, students who over-enroll may never develop the skills that they need to prosper in other courses across the university.
Why Use a Placement Test? The placement tests we have are not perfect instruments, but no such thing as a perfect instrument exists. No matter how one decides to place students, there will always be students who are incorrectly measured. Nevertheless, some types of tests may be more successful than others at placing students appropriately, and the question that needs to be asked is "What is gained by using a test versus not using a test?" Self-selection was explored early in the development of the Foreign Language Placement Testing Program and was not found to be a viable option. In fact, it was the failure of self-selection at most of the UW institutions that originally led to the decision to create the Foreign Language Placement Tests. The effects of student misplacement were felt both in high levels of student dissatisfaction and in institutional inefficiencies.
Choosing among placement tests (or even whether or not to use a placement test) boils down to a cost-benefit analysis. Under the current funding model for the Center, each of the 14 UW campuses pays a share of the Center’s budget proportional to the size of that campus’ incoming freshman class. Individual campuses often pass this fee along to their students. In exchange for this fee, the vast majority of students are correctly placed, thereby saving the students, departments, and universities numerous hidden costs.
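As a small illustration of the proportional funding model described above, the sketch below allocates a hypothetical budget across three invented campuses by the size of their incoming freshman classes. The budget figure, campus names, and counts are assumptions for illustration only, not actual UW figures.

    # Illustrative sketch of proportional cost sharing. The budget, campus
    # names, and freshman counts are hypothetical.
    budget = 500_000.00  # assumed annual cost to be shared

    freshman_counts = {"Campus A": 6000, "Campus B": 3500, "Campus C": 1500}
    total = sum(freshman_counts.values())

    for campus, count in freshman_counts.items():
        share = budget * count / total
        per_student = share / count  # the per-student fee if passed along
        print(f"{campus}: share ${share:,.2f} (${per_student:.2f} per student)")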
Criticism of Multiple-Choice Tests. Multiple-choice tests are often criticized for measuring only basic concepts and for not fully measuring the abilities of interest. It is true that multiple-choice tests often measure only a part of the abilities of interest, but it is also true that any test measures only a small fraction of those abilities. No test can fully measure all the knowledge or abilities of interest. More important, however, is the fact that it is not necessary to measure the full range of skills and knowledge in order to place students correctly.
Multiple-choice tests are sometimes criticized for measuring artificial abilities that are not authentic, that is, abilities unlike the abilities of ultimate interest. The use of multiple-choice tests in these contexts must be evaluated with respect to how well the tests provide valid information for the intended purpose. In the case of the placement tests, this information must be judged by how well students are sorted into homogeneous groups that have the requisite skills needed to succeed in the identified course. Multiple-choice tests validated against this criterion have been found to provide useful information that can guide the placement of students into the introductory sequence of courses in a discipline. This determination, however, must be made on a campus-by-campus basis, as the types of students, the curricula, and the prerequisite skills needed for success differ across campuses.
The UW Placement Tests are sometimes criticized for being overly narrow in scope. The English and foreign language tests, for example, measure only language mechanics and reading comprehension, not writing, speaking, or listening. These tests, like the Mathematics Test, measure discrete objectives. They ask students only to select from a set of choices, not to produce essays or show how they arrived at a particular answer. The narrowness of these tests does make them less than ideal if the objective is to get a complete picture of a student’s achievement. In fact, the placement test committees have experimented with broadening the scope of the tests but have not yet found a model that improves placement enough to justify the added expense. As an example, all three foreign language tests used to include a separate section on listening comprehension. One by one, however, the campuses stopped using the listening test because they found it extremely difficult to administer and because it provided too little additional information, for the cost, to justify continuing its use. The last remaining campus stopped using it when a research study found that, contrary to what one might expect, the listening comprehension test did not improve placement; in many cases, it actually hurt placement.
The English and foreign language committees have each explored the use of essay tests, either as stand-alone tests or as supplements to the current multiple-choice tests. One problem that always arises is that scoring the essays is both time-consuming and expensive. In addition, contrary to what was anticipated, the improvement in placement was in each instance found to be minimal and not worth the added expense.
Current Experience with the UW Placement Tests. The UW Placement Tests appear to be working adequately. Research by the Center and anecdotal reports from UW faculty have routinely indicated that these tests are effective tools that can be used quickly and inexpensively to place most of our students accurately. A key aspect of the placement process, however, is that UW instructors are always encouraged to assess their students during the first week of classes to verify the appropriateness of the placement. These tests are only part of the information that is needed for placement.
The English and foreign language Placement Test Committees are considering facilitating this assessment by developing a series of standardized writing prompts and scoring rubrics for use by instructors during the first few weeks of classes. As these tools are developed, we will assess their adequacy and determine whether they provide information that actually improves course placement. The goal is to work with faculty to provide useful and relatively inexpensive tests for placing students into the appropriate introductory college-level courses.