We are awash in data on students, and yet the focus on data has not dramatically transformed our schools. This is partly because we are not looking at the right data.
The next time you hear someone talk about “student performance data,” try asking, “What kind of data do you mean?” Watch the stammering that this simple question provokes. If people think that the goal of education is to raise students’ scores on standardized multiple-choice tests, why do they get so uncomfortable saying this out loud? Why do people feel the need to cloud the issue with the language of “student performance data”?
To me, there is a legitimate debate to be had about the value of multiple-choice tests. For example, as someone who has been rabidly opposed to standardized multiple-choice tests, I have to admit that when I look at the English portion of the California High School Exit Exam, I think that our graduates should be able to read a passage at that level and answer some simple multiple-choice questions about what they just read. And I acknowledge that for the small percentage of our students who have not passed the test after the second attempt, we do in fact concentrate more resources on helping those students pass. Students who cannot pass this test are not merely “bad at taking tests.” The test seems to be picking up that these are struggling students whom we have had a hard time teaching well. The high school exit exam has caused us to improve our practice for these students. On the other hand, many other tests, such as the California Standards Tests, completely divert energy from productive teaching and learning with their relentless emphasis on memorizing lists of soon-to-be-forgotten facts.
So I welcome the debate on the merits of statewide accountability efforts and the costs associated with various attempts to improve our schools. What is increasingly disturbing to me as I meet with educators and policy makers around the country is our growing unwillingness to say out loud what we are subjecting our students to. If, on balance, multiple-choice tests are a cost-effective way to gather some kinds of information about what students are learning (a debatable proposition), then let’s embrace the tests for what they are, acknowledging their flaws and limitations as we do so. What I see instead is an abdication of this debate by pretending that the only way one could look at how schools are doing is by measuring “student performance data.”
The limitations of standardized tests have been well documented, and I will leave that critique for others. However, I would like to make one point to anyone who supports giving a standardized test to a student. Larry Rosenstock, the CEO and founding principal of High Tech High, has quipped that if any legislative body wants to give standardized tests to students, it should first give the test to all the legislators and make the results public, then give the test to all the teachers and principals and make the results public, then give the test to all the parents and make the results public, and then, if anyone is still paying attention, give the test to all the students and make the results public.
This line always elicits a chuckle, yet the principle can be applied on a small scale with dramatic results. My corollary to the Rosenstock principle is the following: before subjecting students to a multiple-choice test, first take the test yourself. At High Tech High, there was a suggestion that we have every student take a particular multiple-choice test as a pre/post test, “so that we can measure growth of students over time and make data-driven decisions that lead to higher student performance.” But before we started giving out the test, we did something radical. A number of us sat down and took the test ourselves. The results were telling. Support for the idea evaporated. The pain of taking these odious tests, and of realizing once again how poorly what was being tested matched our goals for our students, completely changed our conversation. In the end, we may choose to give students a pre/post multiple-choice test, but if so, it will be given with stakeholders fully understanding what such instruments do and do not measure.
So, am I arguing that standardized tests are the devil incarnate, are ruining our public schools, are draining all the creativity out of teaching, and are causing our best teachers to leave the profession? I am not. What I am saying, however, is that we could be “holding schools accountable” to other data that would have a more dramatic and immediate impact on students’ lives and learning.
The pot of gold at the end of the rainbow is for students to earn a college degree. Publishing data on the percentage of students from a given high school who eventually earn that degree (say, within six years) would transform that school as well as the national debate. Since it takes years for that outcome to arrive, it seems reasonable to track some other data along the way that reasonably predict college entrance and success. Bear in mind that if we are going to compare schools on these data, any reasonable system would honor schools that do better than you would expect given the demographics of the students they serve. Here are the measures I would start with:
Admissions. What kinds of students are admitted into the school? It defies common sense to compare schools that have meritocratic admissions processes with schools that do not.
Demographics. Who ends up going to the school? Again, let’s not compare apples and oranges.
Attrition. Who stays in the school, and who leaves? Some well-known schools that have raised test scores have also lost a lot of their students (i.e., the ones who don’t do well on standardized tests) along the way.
Curriculum. What percentage of students take college preparatory coursework, disaggregated by ethnicity and family income level? These data should be readily available. California already keeps track of and publishes the percentage of students who complete the University of California entrance requirements, as well as the percentage of students who take physics, chemistry, and advanced math courses.
College entrance exams. How do students perform, and more importantly, what percentage of the students at a school even take these tests?
High school graduation rates. Current reports dramatically under-count student drop-outs, so published graduation rates overstate how many students actually finish.
College acceptance rates. What percentage of the ninth grade class from four years ago has been accepted into a four-year school?
College attendance rates. Do the students show up in the fall? The National Student Clearinghouse, a voluntary database that follows students from high school into and through college, can help us find the answer.
College graduation rates. College entrance tests such as the SAT have been found to be mildly predictive of first-year college grades, but not college graduation. It is important to remember that college graduation is the goal.
The above numbers are easy to compile, understand, and compare. If I were to seek one “silver bullet” for reliably comparing schools, it would be this: what percentage of ninth graders eligible for free or reduced-price lunch eventually complete a four-year college degree program? The answer to that question would give us “student performance data” worth looking at!
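To make the arithmetic of that metric concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, and the assumption that a district could flag each ninth grader for lunch-program eligibility and later match the roster against National Student Clearinghouse records to learn who finished a degree. The calculation itself is nothing more than a cohort percentage.

```python
from dataclasses import dataclass

@dataclass
class Student:
    """One row of a hypothetical ninth-grade cohort roster.

    In practice such a roster might be built from district records
    matched against National Student Clearinghouse data.
    """
    frl_eligible: bool         # eligible for free or reduced-price lunch in ninth grade
    ba_within_six_years: bool  # earned a four-year degree within six years of expected HS graduation

def silver_bullet_rate(cohort: list[Student]) -> float:
    """Percent of lunch-eligible ninth graders who eventually earn a four-year degree.

    The denominator is the full ninth-grade roster, not the graduating class,
    so a school cannot improve the number by shedding struggling students.
    """
    eligible = [s for s in cohort if s.frl_eligible]
    if not eligible:
        return 0.0
    return 100.0 * sum(s.ba_within_six_years for s in eligible) / len(eligible)

# Illustrative data only: 3 of the 5 eligible students finish a degree.
cohort = [
    Student(frl_eligible=True,  ba_within_six_years=True),
    Student(frl_eligible=True,  ba_within_six_years=True),
    Student(frl_eligible=True,  ba_within_six_years=False),
    Student(frl_eligible=True,  ba_within_six_years=True),
    Student(frl_eligible=True,  ba_within_six_years=False),
    Student(frl_eligible=False, ba_within_six_years=True),
]
print(f"Silver bullet rate: {silver_bullet_rate(cohort):.0f}%")  # -> 60%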