The research problem, the “effectiveness of a science intervention program on science achievement and attitudes of middle-grade students attending a 5-week academic enrichment program” (p. 236), has an independent variable of the implementation of a science intervention program and a dependent variable of pre-test to post-test score changes on two measures. The research questions are “What is the effect of a science intervention program on fifth and sixth grade students’ science achievement?” and “What is the effect of a science intervention program on fifth- and sixth-grade students’ attitudes towards science?” (p. 238). The hypothesis was that “a science intervention program, which included inquiry-based curriculum with real-world applications, would promote middle-school science achievement and positive attitudes toward science” (pp. 237-238). The hypothesis states the expected relationship between the variables, namely that an intervention program will increase middle school students’ science achievement and their attitudes towards science. If the hypothesis is correct, the summer intervention program and its components could be applied to school-year science classes and implemented in other summer programs with conditions similar to those at this Georgia middle school.
The problem Parker and Gerber studied is significant, as “increased science achievement is needed to prepare youth to assume future science related careers in business and industry” (p. 236). Further, a projected shortage of workers in science fields would adversely affect the nation’s economy. In addition, the United States ranks 20th out of 40 among industrialized nations in science achievement. Parker and Gerber argue that “students need to be better prepared in science if the US is to keep pace in global markets” (p. 237).
The researchers’ backgrounds, interests, and potential biases are not clear from the outset. It can be inferred that they have science backgrounds and that one of the authors (Parker) works at a middle school. A bias could be present if Parker ran the science intervention program: wanting the program to succeed, she might view it as more successful than it was. The researcher’s involvement might also affect the findings. For example, the teacher-researcher recorded perceptions in the teacher log; a teacher-researcher might be biased towards students they already know, judging behaviors based on past behaviors, or might look for and overemphasize the positive. One way to avoid this would be to have an impartial observer take the notes.
The studies reviewed in the introduction are connected to the problem. Doll (1996) supports the type of instructional methodology being advocated, and NRC (1996) supports the use of manipulative materials. In addition, the authors discuss the learning cycle studies of Gerber (1996), which align with the methods used in this research. The results of these studies were succinctly summarized, but they were not critically reviewed, nor were their flaws noted. The researchers also did not justify their research based on shortcomings of earlier studies. The review established a theoretical framework for the significance of the study by citing statistics about student achievement and research finding inquiry-based and hands-on learning to be successful. The review is mostly well organized, grouped, for example, by ideas essential to the study such as placing science in a real-world context. However, the sections on increasing students’ science achievement could have been better grouped.
The setting includes the locale (rural South Georgia) and the type of program (a summer academic enrichment program). The research design is identified as a mixed-methods design. The quantitative strand used a criterion-referenced test, a survey, and correlated (dependent-samples) t-tests of the pre- and post-test scores on the criterion-referenced test. The qualitative strand consists of narrative descriptions of students’ behaviors. The design is appropriate, since it is important to gather statistics on students’ capabilities before and after treatment as well as the students’ own opinions and the teacher’s observations of their growth.
The population is not clearly defined, but it can be assumed to be students from “economically disadvantaged families” who are “eligible for free or reduced lunches” (p. 238). Additionally, the method of sampling is not clearly described. Based on the sample itself, it appears to be a convenience sample, in which participants are “willing and available to be studied” (Creswell, 2012, p. 145). Students with below-average performance in mathematics and reading during the school year were “enrolled in a summer program… providing educational assistance in academic school subjects” (p. 238). The final sample consisted of 11 students: four fifth graders and seven sixth graders, five boys and six girls. Creswell (2012, p. 146) recommends 30 participants for a correlational study and 350 individuals for a survey design; with only 11 participants, the sample size was not adequate. The authors acknowledge that, “due to the limited size of the study,” the study does not provide insight into its impact on a larger population.
The instruments include a criterion-referenced test, the Attitudes Toward Science Survey, and the teacher log. The instruments were appropriate, as they allow the researchers to see content-knowledge growth and attitude changes. The criterion-referenced test was “judged to have high content validity,” with a reliability (coefficient alpha) of .52 on the pretest and .47 on the posttest. The low reliability was explained by the limited number of questions (the fewer questions on a test, the lower the reliability) and by coefficient alpha being a “lower bound estimate of reliability” (p. 238). The Attitudes Toward Science Survey had a reliability of .84; no information was given on its validity. Qualitative data were collected through the teacher log. Procedures for collecting the qualitative data are described, but they could be more specific. For example, observations were logged “during” the program, but it is not specified when (as they occurred? after they occurred?).
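To make the reliability discussion concrete, the coefficient alpha reported for the criterion-referenced test can be computed from item-level scores with the standard Cronbach’s alpha formula. The sketch below is illustrative only; the score matrix is hypothetical, as the authors’ item-level data are not available.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha: (k / (k - 1)) * (1 - sum of item
    variances / variance of total scores), for a matrix with one row
    per student and one column per test item."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 4 students answering a 3-item test (1 = correct, 0 = incorrect).
example = [[1, 1, 0],
           [1, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(cronbach_alpha(example))
```

Because alpha shrinks as the number of items falls, a short test like the one used here tends to produce low values such as the reported .52 and .47, consistent with the authors’ explanation.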
Data analysis proceeded by analyzing the quantitative and qualitative results “to address the two research questions of the study” (p. 239). A correlated t-test was conducted on the criterion-referenced test results, and the observations of student behavior were aligned with the t-test results to draw conclusions about the effectiveness of the program. In the observation examples, the authors focused on the students with the largest growth (Sharonda, Mauves, and Rita); students with little growth, such as Ashley or Brenda, should also have been discussed. While the analysis makes use of qualitative data by including excerpts, the researchers’ interpretations are derived from selected qualitative data. The use of qualitative data would be stronger if excerpts from a range of students were included for a fairer analysis.
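The correlated (paired-samples) t-test used in the analysis compares each student’s pre-test score to their own post-test score. A minimal sketch follows, using invented pre/post scores rather than the study’s actual data:

```python
import math

def paired_t(pre, post):
    """Dependent-samples t statistic and degrees of freedom for
    paired pre/post scores: t = mean(diff) / (sd(diff) / sqrt(n))."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    se = math.sqrt(var_d / n)                                # standard error of the mean difference
    return mean_d / se, n - 1

# Hypothetical scores for four students before and after the program.
t, df = paired_t([10, 12, 9, 11], [14, 15, 13, 16])
print(t, df)
```

With only 11 participants, as in this study, the degrees of freedom (n − 1 = 10) are small, which is one reason the sample size limits the strength of any significance claim.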
Scores for the criterion-referenced tests are accurately presented in a table, but the table could be made easier to read with gridlines, more closely spaced data, and an added column showing each student’s growth. Also, no data are provided for the attitude survey. Including bar and line graphs of the criterion-referenced data, as well as the attitude survey data, would make the results section stronger. The statistical summary is appropriately descriptive, reporting the mean scores and t values for the correlated t-test on the criterion-referenced test, along with the pre- and post-test mean scores and correlated t-test results for the attitude measures (ATSS, Science Motivation Scale, and Science Importance Scale). For all tests, it indicates whether the results are statistically significant.
The conclusion drawn with regard to the first research question is that the intervention program was successful in increasing students’ achievement, as there were significant increases in the correlated t-test results. With regard to the second question, it can be concluded that the science intervention program led to some, but not significant, increases in students’ attitudes towards science. The science enrichment program thus produced a larger increase in students’ knowledge than in their attitudes towards science. The results support the claim that a middle school science intervention program raises students’ achievement more than it improves their attitudes towards science, and the implications of the findings include incorporating more hands-on science and real-life connections into students’ science classes. The findings are compared to Simpson and Oliver’s (1990) study, stating that this study “adds further support” to the use of quality science curriculum to increase students’ content knowledge and attitudes towards science (p. 241). Limitations of the findings include the small sample and the short research period; both could be rectified by using a larger, heterogeneous sample and conducting the study over a longer period.
Overall, the research topic is worthwhile, but this paper does not add dramatically to the literature on the topic: it reinforces existing findings without challenging its own thesis through critical analysis of its results. Those who might benefit from reading this paper include professors of science education, science teachers, and students in undergraduate or master’s education courses, since this audience teaches best practices of science instruction, teaches science to students, or is learning best practices in teaching science.
Parker, V., & Gerber, B. (2000). Effects of a science intervention program on middle-grade student achievement and attitudes. School Science and Mathematics, 100(5), 236–242.
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Upper Saddle River, NJ: Pearson Education.