Blank, R. K. (2013). Science instructional time is declining in elementary schools: What are the implications for student achievement and closing the gap? Science Education, 97(6), 830–847. doi:10.1002/sce.21078
For over a decade, science educators have lamented the ways in which testing in reading and mathematics has reduced time for science instruction. Blank used 20 years of teacher and student data to understand how time allocated to science instruction combines with student demographics to shape test scores. The study found a small but significant positive relationship between time on science instruction and performance.
This study combined teacher reports on classroom instructional time from the Schools and Staffing Survey (SASS) with student performance data from the National Assessment of Educational Progress (NAEP). SASS, which began in 1987, poses a consistent set of questions to a nationally representative sample of 60,000 public school teachers every four years. The most recent data available for Blank’s study were from the 2007–2008 school year.
The NAEP exams are administered by the National Center for Education Statistics, a branch of the U.S. Department of Education. NAEP state-level exams have been administered since 1990. They assess the performance of a nationally representative sample of fourth and eighth graders in math, science, reading, and writing. NAEP does not test every student or report student-level data. No Child Left Behind mandates state participation in NAEP math and reading components only. Science participation is voluntary; in 2009, four states and the District of Columbia did not participate.
Blank shows that, over the last two decades, time spent on science in U.S. elementary classrooms briefly increased and then declined. In 2007–2008, the most recent school year for which SASS data were available, the average time spent on science instruction in grades 1 to 4 was 2.3 hours a week. Twenty years earlier, the average was 2.6 hours a week, and in 1993–1994 elementary science time reached a high of 3.0 hours.
State data for fourth-grade science instructional time in 2007–2008 showed wide variations, from a high of 3.8 hours in Kentucky to a low of 1.9 hours in Oregon. The average was 2.8 hours per week.
Blank then compared each state’s average time on science instruction with its NAEP scores and found a slight positive relationship. As time spent on science increased, scores also increased slightly.
However, student poverty had far more influence on science assessment performance than did instructional time. Blank identified a 12-point gap in NAEP scores between students in states highly focused on science, with four or more hours of instruction per week, and students in states that spent less than an hour per week on science. This 12-point gap is small compared with the gap of more than 25 points between low-income and higher-income students receiving the same amount of science instruction. For example, low-income students with less than one hour a week of science instruction scored an average of 126 on the NAEP. Students not classified as low income who had the same minimal instructional time scored an average of 154. Even with more than four hours of instruction, low-income students had an average score of 138—still below the average for higher-income students with only an hour of instructional time.
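The contrast between the two gaps can be seen directly from the reported averages. The short sketch below simply recomputes them; the data structure and variable names are illustrative, not from the study itself:

```python
# NAEP fourth-grade science averages reported by Blank (2013),
# keyed by (income status, weekly science instructional time).
# This dictionary layout is an illustrative reconstruction.
scores = {
    ("low_income", "<1 hr"): 126,
    ("higher_income", "<1 hr"): 154,
    ("low_income", ">4 hrs"): 138,
}

# Gap associated with instructional time (low-income students only):
time_gap = scores[("low_income", ">4 hrs")] - scores[("low_income", "<1 hr")]

# Gap associated with income status at the same minimal instructional time:
income_gap = scores[("higher_income", "<1 hr")] - scores[("low_income", "<1 hr")]

print(f"Instructional-time gap: {time_gap} points")   # 12 points
print(f"Income gap: {income_gap} points")             # 28 points
```

The income-related gap (28 points) is more than twice the instructional-time gap (12 points), which is the core of Blank's argument that poverty outweighs time on task.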
Implications for Practice
Blank suggests investigating other differences, beyond instructional time, that may be responsible for higher science achievement, including differences among states in accountability, curriculum, professional development, and other policies. We further suggest that identifying the differences among the in-school and out-of-school science experiences of low-income and higher-income students could also be important. Because poverty had a much stronger influence on student achievement than instructional time, the findings remind us of the importance of attending to the needs of children from low-income families, as well as those from non-dominant populations. Informal science education (ISE) practitioners should ensure their programs incorporate current research-based strategies to engage and support these populations.
Additionally, ISE practitioners and other science education stakeholders may want to examine their states' specific data to make connections between state education policies and student performance. See the related Relating Research to Practice brief, Ballard (2015), for one investigation into this relationship. While standardized test scores are not the markers of science learning used by the ISE field, it is important for ISE practitioners to be aware of the pressing issues affecting formal education.