Reporting FAQs

What is a value-added model?

Value-added modeling is a statistical analysis used to measure the academic growth rates of groups of students from year to year. In other words, value-added modeling measures the change in achievement over time for students within a district, school, or teacher's classroom in a tested grade or subject (such as Science, Reading, or Math).

What is EVAAS?

SAS® EVAAS provides value-added measures for the state of Ohio. There is not just one EVAAS model; there are many models, each of which is customized for the available testing, policies, and preferences of each state. The reporting is available through a secure web-based application that includes additional reports for diagnostics, interactive queries, and proactive planning through student projections.

Why is value-added important? Why is it important to measure growth?

Education policy has historically focused on proficiency, which has had the unintended consequence of focusing attention on students on the cusp of a proficient score on state assessments rather than on all students. However, we believe that all students are important, that all students can learn, and that educators should focus on all students. When we focus on growth, we focus on all students.

What is the difference between growth and achievement?

Achievement is what a test score measures at a single point in time. Growth is determined by comparing students against themselves: the change in their achievement as measured by their performance on state assessments across multiple grades and subjects. For example, a student who scores below the proficiency cutoff but well above their own prior performance shows low achievement yet high growth.

How does a value-added model work?

A value-added model measures how much growth a group of students made relative to the expected growth. A group could be all the students in a particular district, school, or classroom, in a specific subject, grade, and year. Expected growth is based on how the individual students within the group performed, on average, compared to other students just like them across the state.

A value-added model then compares a group's actual growth to the expected growth to see if there is evidence that the group made more than, less than, or about the same growth as expected.
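As a simplified illustration only (not the actual EVAAS model, which is far more sophisticated), the comparison at the heart of a value-added measure can be sketched in a few lines of Python. All scores below are hypothetical.

```python
# Minimal sketch of the core value-added comparison (illustrative only;
# the actual EVAAS models use far more data and statistical machinery).
from statistics import mean

def group_value_added(actual_scores, expected_scores):
    """Average gap between actual and expected scores for a group.
    Positive -> more growth than expected; negative -> less."""
    return mean(a - e for a, e in zip(actual_scores, expected_scores))

# Hypothetical classroom of five students
actual = [212, 198, 225, 240, 205]    # scores this year
expected = [208, 201, 220, 235, 207]  # expected, given similar students statewide

print(f"Value-added estimate: {group_value_added(actual, expected):+.1f}")
# -> +1.8; a real model would also report a standard error before
#    concluding that the group grew more than expected
```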

How is a value-added reporting system useful?

The information available from value-added modeling can help educators in a multitude of ways, such as:

  • Providing a robust tool for educators to reflect on their practices and have a positive impact on all students' achievement
  • Identifying strengths and weaknesses of curriculum or instructional practices in specific grades and subjects (such as science, reading, or math)
  • Evaluating the effectiveness of programs for special student populations
  • Pinpointing incoming students' academic needs before those students set foot in the classroom

How is value-added calculated for Ohio's Reading and Math tests in grades 4–8?

The value-added model for Ohio's Reading and Math tests in grades 4–8 is a gain-based model, which means that it measures growth between two points in time for a group of students. To estimate the average achievement of a group of students, EVAAS does not simply average the students' assessment scores in the selected subject and grade. Instead, it uses all of the students' end-of-grade (EOG) scores across years, grades, and subjects. Using this much data helps to minimize the impact of measurement error associated with any individual test score. It also allows all students to be included, even those with incomplete testing histories. The resulting estimates of each group's average achievement are more precise and reliable, and they better reflect the group of students served, despite missing test scores or high mobility.

The growth expectation is met when a cohort of students from grade to grade maintains the same relative position with respect to statewide student achievement in that year for a specific subject and grade. For more details about this model, please click the Value Added Resources link on the login page to view the technical reports.
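As a rough sketch of what "maintaining the same relative position" means, the following Python example standardizes a hypothetical cohort's average score against assumed statewide parameters in two consecutive years. The statewide means and standard deviations here are invented, and the actual EVAAS gain model is considerably more robust (see the technical reports).

```python
# Sketch of "maintaining the same relative position" (illustrative only).
from statistics import mean

def relative_position(cohort_scores, state_mean, state_sd):
    """Cohort's average position in the statewide distribution, in SD units."""
    return (mean(cohort_scores) - state_mean) / state_sd

# Hypothetical cohort scores and statewide parameters for two years
grade4 = relative_position([710, 695, 720, 730], state_mean=700, state_sd=25)
grade5 = relative_position([745, 738, 760, 765], state_mean=740, state_sd=28)

print(f"Grade 4 position: {grade4:+.2f} SD; Grade 5 position: {grade5:+.2f} SD")
print(f"Change in relative position: {grade5 - grade4:+.2f} SD")
# A change near zero means the cohort held its position and met expectation.
```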

How is value-added calculated for Ohio's End-of-Course Exams?

The value-added model for Ohio's End-of-Course exams is a regression-based model that measures the difference between students' expected scores and their actual scores for a particular subject/year. The growth expectation is met when students within a district/school/teacher made the same amount of progress as students in the average district/school/teacher in the state for that same year/subject/grade.

This model does not require that students have previous test scores in the same content area, so it is a good fit for end-of-course exams. Instead, a student must have at least three prior assessment scores for an expected score to be generated. This does not mean three years of scores or three scores in the same subject, but simply three prior scores across grades and subjects.

The process of generating each student's expected score begins with building a robust statistical model of all students who took the selected assessment in the most recent year. The model includes the scores of all students in the state or reference group along with their testing histories across years, grades, and subjects, and it determines the relationships of all the prior assessment scores to students' performance on the selected assessment. For more details about this model, please click the Value Added Resources link on the login page to view the technical reports.
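As a rough illustration of the regression idea (with invented numbers and plain least squares rather than the actual EVAAS methodology), the following Python sketch builds an expected end-of-course score from three prior scores drawn from different subjects:

```python
# Sketch of generating an expected score from prior scores (illustrative only;
# the actual EVAAS model is more sophisticated and handles missing data).
import numpy as np

# Hypothetical prior scores (math, reading, science) for five students in the
# reference group, plus their actual scores on the selected end-of-course exam
priors = np.array([[700, 690, 705],
                   [650, 660, 640],
                   [720, 735, 710],
                   [680, 675, 690],
                   [740, 720, 730]], dtype=float)
eoc = np.array([712, 655, 738, 684, 741], dtype=float)

# Fit least squares: expected EOC score as a linear function of the priors
X = np.column_stack([np.ones(len(priors)), priors])  # intercept + 3 priors
coef, *_ = np.linalg.lstsq(X, eoc, rcond=None)

# Expected score for a new student with three prior scores (any subjects)
new_student = np.array([1.0, 705, 700, 695])
print(f"Expected EOC score: {new_student @ coef:.1f}")
```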

How can value-added models predict student performance on tests using test scores from other subjects?

For assessments such as OST Science or EOC tests, the value-added measure is based on the difference between a student's expected and actual performance. The expected performance for each student is based on that student's prior testing history as well as how students with similar test scores performed, on average, for the test in question.

Just as weight and shoe size can be used to predict height, EVAAS models can use prior testing data from a variety of tests, subjects, and scales to determine a student's expected score on a test. The important thing is that there is a relationship between test scores from previous subjects and grades and test scores for the test in question. Put differently, student performance in Math typically has a relationship to student performance in Reading, since a student's ability to read affects their ability to interpret and understand Math questions. In fact, the prediction to a particular test is possible even if the student does not have prior test scores in the same subject.

Note that EVAAS models do not assume there is a relationship between prior and current data; they use the actual relationship determined by the students' data. If there is not a strong enough relationship to provide a value-added measure, then that test will not be reported.
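A minimal sketch of such a check, using a plain correlation and a made-up threshold (the real business rules are documented in the technical reports):

```python
# Sketch of the "strong enough relationship" check (illustrative only).
import numpy as np

prior_scores = np.array([650, 680, 700, 720, 740], dtype=float)
current_scores = np.array([640, 690, 705, 715, 750], dtype=float)

# Measure the actual relationship in the data rather than assuming one
r = np.corrcoef(prior_scores, current_scores)[0, 1]

MIN_RELATIONSHIP = 0.5  # hypothetical threshold for this sketch
if abs(r) >= MIN_RELATIONSHIP:
    print(f"Correlation {r:.2f}: priors are predictive; report the measure")
else:
    print(f"Correlation {r:.2f}: relationship too weak; do not report")
```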

What if my student has a bad test day? Does value-added modeling hold me accountable based on one test given on one day?

Value-added modeling is not about one student on one day. It's about looking at the growth of an entire group of students over time.

Teacher, school, and district value-added measures incorporate data from groups of students; the model only provides a growth measure for a group, never for an individual student. Growth models also look well beyond what students do on any one day: they look for a pattern across multiple years of growth estimates and multiple years of student test scores to see whether students, on average, made expected growth.

How does value-added modeling accommodate team teaching?

Teachers in tested subjects will have the opportunity to enter the percentage of instructional responsibility that they had for each student through the MCOECN portal. This is incorporated into the model to account for scenarios where more than one instructor had responsibility for a student's learning in a given subject/grade.
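A minimal sketch of how such percentages could enter the calculation; the weighting scheme, function name, and numbers here are assumptions for illustration, not the actual EVAAS linkage method:

```python
# Sketch of weighting student growth by instructional responsibility
# (illustrative only; the actual linkage method differs).

def weighted_value_added(records):
    """records: list of (growth_gap, responsibility_pct) per student, where
    growth_gap = actual - expected score and responsibility_pct is the share
    of instruction this teacher provided (0-100)."""
    weights = [pct / 100 for _, pct in records]
    weighted_gaps = [gap * w for (gap, _), w in zip(records, weights)]
    return sum(weighted_gaps) / sum(weights)

# Hypothetical roster: a co-taught student counts at 50% for this teacher
roster = [(+4.0, 100), (-2.0, 100), (+3.0, 50), (+1.0, 75)]
print(f"Responsibility-weighted estimate: {weighted_value_added(roster):+.2f}")
# -> +1.31
```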

With so much emphasis on standardized testing, does value-added modeling encourage teaching to the test?

No, this is not about teaching to the test. Effective teaching is about helping each student make academic progress regardless of their entering achievement level, and that is what value-added modeling captures. When a teacher narrows the curriculum to the test, very low- and high-achieving students have limited opportunities to make appropriate academic growth. Thus, a teacher or school that teaches to the test might actually be less likely to appear highly effective from a value-added perspective.

If I'm worried about the quality of the state assessments, which are the basis of value-added modeling, should I also be worried about the quality of the value-added results?

All assessments used in value-added modeling are reviewed and analyzed to ensure that they are appropriate for value-added analysis. These protections are put in place to ensure that educators receive valid and reliable results on which to base informed decisions.

Value-added modeling is just a number, so how can it help me improve my instruction or tell me what to do with my students?

The value-added reporting is part of a comprehensive web application that provides teachers with interactive diagnostic data for additional insight, school and district reporting for context, and student-level projections for proactive planning with current or future students. This information, combined with teacher knowledge of their own students and schools, can greatly assist in school improvement.

Is value-added modeling fair even though my students are…

a. Low achieving or high achieving?

Yes, value-added modeling is fair even if your students are low achieving or high achieving. Value-added modeling is based on growth, not proficiency or achievement. The growth expectation for each student is based on each student's own previous achievement. Regardless of whether students are proficient, the Ohio value-added model will estimate whether, on average, students met the growth expectation for a class or school.

b. Part of a certain socioeconomic/demographic group?

Yes, value-added modeling is fair even if your students are part of a certain socioeconomic/demographic (SES/DEM) group. EVAAS uses all student testing history and includes students even if they have missing data so that each student serves as their own control. In other words, a student's SES/DEM status is usually quite consistent from year to year, and that student has always been testing with those same personal variables in place in addition to many other variables that can impact academic performance. Therefore, the impact of those variables has already been captured in the student's previous test results.

c. Highly mobile?

Yes, value-added modeling is fair even if your students are highly mobile. Teachers in tested subjects will have the opportunity to enter the percentage of instructional responsibility that they had for each student. This is incorporated into the model to account for students who were not in that teacher's classroom for the entire year.

d. Identified as Special Education?

Yes, value-added modeling is fair even if your students are identified as Special Education. EVAAS only uses the regular versions of the OST assessments, not the alternate assessments. That said, value-added modeling is based on growth, not proficiency or achievement. The growth expectation for each student is based on each student's own previous achievement. Regardless of whether students are proficient, the value-added model will estimate whether, on average, students met the growth expectation for a class or school.

Value-added modeling seems like a black box; how do I know it is accurate?

You can rest assured that the EVAAS modeling uses standard, vetted statistical methods to produce valid and reliable results. Non-partisan, independent experts in value-added analysis have replicated models similar to the EVAAS approach over the years, and their most recent findings suggest that these models are among the most reliable, least biased, and most suitable for teacher value-added modeling (Source: McCaffrey, D.F. and J.R. Lockwood [2008]. Value-Added Models: Analytic Issues. Presented at the Workshop on Value-Added Modeling sponsored by the National Research Council and the National Academy of Education, Board on Testing and Accountability, Washington, DC, Nov. 13–14, 2008).

For more details about the exact models, business rules, and customization for Ohio's value-added implementation, please click the Value Added Resources link on the login page.

Is value-added modeling reliable?

Yes. Independent researchers have noted that the reliability of value-added modeling is similar to what is used elsewhere for high-stakes decision-making (Source: Evaluating Teachers: The Important Role of Value-Added). Furthermore, the growth rating is not based on a ranking of teacher estimates from low to high, which can be rather unstable and uninformative; rather, the growth rating uses standard errors to determine effectiveness levels.

Value-added modeling is just an estimate, so how can I be held accountable for an estimate?

The growth rating takes this into account. The growth rating is based on the value-added estimate as well as its statistical precision (known as the standard error). This means there must be significant evidence that students made more than or less than the expected growth before a teacher receives the highest or lowest rating. Requiring significant evidence protects teachers from misclassification (identifying an effective teacher as ineffective or vice versa).
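A simplified sketch of this logic in Python; the rating names and the two-standard-error cut points below are illustrative assumptions, not Ohio's actual effectiveness levels:

```python
# Sketch of combining an estimate with its standard error (illustrative only).

def growth_rating(estimate, std_error):
    """Rate growth by how many standard errors the estimate sits
    above or below the growth expectation (zero)."""
    index = estimate / std_error
    if index >= 2:
        return "Well above expected"  # significant evidence of more growth
    if index <= -2:
        return "Well below expected"  # significant evidence of less growth
    return "About expected"           # evidence not significant either way

# The same estimate can earn different ratings depending on its precision
print(growth_rating(estimate=3.0, std_error=1.2))  # Well above expected
print(growth_rating(estimate=3.0, std_error=2.5))  # About expected
```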

How can I have a low value-added rating when all my students passed the test?

Value-added is not about students passing or failing; it is about how much growth students made. Value-added modeling is based on growth, not proficiency or achievement. The growth expectation for each student is based on each student's own previous achievement. Regardless of whether students are proficient, the value-added models estimate whether, on average, students met the growth expectation for a class or school.

Why is the value-added methodology complex?

Although growth is conceptually easy to understand, statistical rigor is necessary to provide precise and reliable growth measures. A value-added model must overcome student mobility, missing test scores, and the measurement error present in any test score. Simple models do not address these issues. As a result, a simple gain calculation does not provide a reliable estimate of effectiveness and could unfairly advantage or disadvantage teachers, schools, and districts according to the characteristics and prior achievement of their students. In contrast, the statistical rigor of Ohio's value-added approach ensures fairness for teachers, schools, and districts regardless of differences in the students that they serve, as explained elsewhere in this document. Despite the complexity of the methodology, it is not necessary to be a statistician to understand the educational implications of EVAAS reporting. With the EVAAS web application, educators have a wealth of reports that go beyond a single estimate of effectiveness and assist in identifying accelerants and impediments to student learning.

Why can school and district growth measures be different when there is only one school in the district?

School and district growth measures are calculated in two separate models. In the school model, a student's expected score is calculated based on prior test scores and the assumption that the student attended the average school in Ohio. In the district model, a student's expected score is calculated based on prior test scores and the assumption that the student attended school in the average district in Ohio. Because the number of schools is not the same as the number of districts, the growth of the average school might be different from the growth of the average district. In addition, community schools are not considered districts and are not included in the district model whereas community schools are included in the school model.
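A toy numeric illustration of why the two reference points can differ; every number and the simple additive structure here are assumptions for illustration only:

```python
# Sketch: one student, two expected scores (illustrative only).

base_expectation = 705.0     # expectation from the student's priors alone
avg_school_growth = 1.5      # growth of the average school (hypothetical)
avg_district_growth = 0.8    # growth of the average district; a different
                             # set of entities (e.g., community schools are
                             # in the school model but not the district model)

actual = 708.0
print(f"Gap vs. school model:   {actual - (base_expectation + avg_school_growth):+.1f}")
print(f"Gap vs. district model: {actual - (base_expectation + avg_district_growth):+.1f}")
```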

Where can educators learn more about interpreting and using value-added measures?

For more information about school and district value-added measures and about how value-added is used for educator evaluations, please click the Value Added Resources link on the login page.