Chapter 3 – Methodology
While students receiving Tier III math interventions are not provided with self-instruction materials as in Ghana, they are provided with instruction from an interventionist who presents model problems, guided practice problems, and independent practice problems. The student completes these problems using the techniques modeled and practiced during the lesson. This is done with instructor guidance and support in the beginning, but, as students progress, they become more independent through a gradual release of responsibility. In Ghana, the switch from independent self-learning to independent video-learning enabled learners to better apply their newly gained skills. Therefore, as in Ghana, the purpose of the instructional videos in this research was to determine whether student learning gains and retention over time were greater with the use of these videos.
This study used instructional videos to replace the instruction, but not the support, provided by the interventionist. The students in the experimental group received the same supports as the control group: vocabulary was discussed before instruction; during instruction, worksheets guided students through the video; and there were opportunities for discussion and practice after the student watched the video but before beginning Five in a Row on Khan Academy Coach (KAC). While students completed Five in a Row on KAC, the interventionist monitored the students' work and intervened when a student was struggling (e.g., answered multiple questions wrong in a row, or expressed a question or concern).
Several assessment instruments were used in this study: an attitudinal survey (pre and post), a teacher's log, interviews, content tests (pre and post for each topic), and a long-term content assessment (pre and post study). Questions from Taylor and Galligan's (2002) study formed the basis for the attitudinal survey questions (see Appendix A), with an emphasis on the students' previous experiences, their experiences during the program, and their experiences and perceived learning after the program. The attitudinal survey was used to gauge how the control group and the experimental group felt about their respective programs before and after the study. The pre-attitudinal survey was administered before students began participation in the study; the post-attitudinal survey was administered after the study was completed to see how students felt about the control and experimental lessons. Participants chose answers on a 5-point Likert-type scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, and 5 = Strongly Agree) with a not sure/not applicable option, along with yes/no and written-response items.
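As a brief illustration of how responses on such a scale can be summarized, the sketch below tallies one survey item, treating the not sure/not applicable option as missing data rather than as a score. The item responses here are hypothetical, not data from the study.

```python
from statistics import mean, median

# Hypothetical responses to one attitudinal survey item, coded on the
# study's 5-point scale (1 = Strongly Disagree ... 5 = Strongly Agree).
# None stands for "not sure / not applicable" and is excluded from the
# numeric summary rather than treated as a score.
responses = [4, 5, 3, None, 4, 2, 5, None]

scored = [r for r in responses if r is not None]
print(f"n = {len(scored)} (of {len(responses)} respondents)")
print(f"mean = {mean(scored):.2f}, median = {median(scored)}")
print(f"agree or strongly agree = {100 * sum(r >= 4 for r in scored) / len(scored):.0f}%")
```

Excluding the not-applicable responses from the denominator keeps the mean from being distorted by a sentinel value; only the students who actually rated the item are counted.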
Interviews (see Appendix B) were conducted at the conclusion of the study to gain more fluid feedback on each student's experience than would have been possible through the post-attitudinal survey alone. The interview provided an opportunity to ask more in-depth questions and elicit more reflective answers. The interviews were conducted before the post-attitudinal surveys in an attempt to elicit more detailed answers on the attitudinal survey. By including both instruments, the researcher received more complete answers, as some students were better able to express themselves verbally during the interview, while others were better able to express themselves in writing. Student responses from the interview also offered the opportunity to compare answers across the attitudinal survey and the interview, as similar questions were asked in both. The interviews were recorded and then transcribed (see Appendix C). The interview responses, as well as the written responses on the attitudinal survey (see Appendix D), were analyzed by studying the trends among student responses as well as any outlier responses that differed from the general pattern. The teacher's log was used to record the students' activities at each meeting, including the current lesson, the progress made, and any issues or insights.
Quantitative instruments included topic tests and an assessment. For each lesson completed in this study, students took a pre-topic test and a post-topic test for that specific lesson. The pre-topic test was taken before the lesson began and measured the student's current level of knowledge on the topic before instruction. If students were already proficient in the topic, they could move on to a new lesson; if not, the pre-test showed which skill sets relevant to the topic they did and did not have. The post-topic test was administered after the lesson was completed. It measured what the students had learned and whether they were proficient in the topic. If they were not, the topic was revisited through extended reteaching, using different methods to teach the concept and offering additional time to practice; if they were proficient, they moved on to their next lesson. The post-topic test, when compared to the pre-topic test, measured growth from the beginning to the end of the lesson. Each topic test contained between three and nine questions, and the topic tests served as formal formative measures.
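The growth comparison described above reduces to subtracting each student's pre-topic score from the post-topic score. The sketch below assumes percent-correct scoring; the student IDs mimic the study's coding style (e.g., 6G2), but the scores themselves are invented for illustration.

```python
# Hypothetical pre-/post-topic test scores (percent correct) for one topic.
# Growth is the post-test score minus the pre-test score.
topic_scores = {
    "6G2": {"pre": 33, "post": 89},
    "6B5": {"pre": 0, "post": 78},
    "7G1": {"pre": 56, "post": 100},
}

for student, s in topic_scores.items():
    growth = s["post"] - s["pre"]
    print(f"{student}: pre={s['pre']}%, post={s['post']}%, growth={growth:+d} points")
```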
In contrast to the topic tests, the assessments (see Appendix E) were formal summative measures. The pre-assessment and post-assessment measured the students' knowledge across a total of 16 topic areas. The pre-assessment was administered at the very beginning of the study; its main purpose was to measure students' knowledge level and to identify the areas in which they needed instruction. The post-assessment was administered at the completion of the study. Its purpose was to measure students' growth in each topic relative to the pre-assessment, and also to measure retention of the specific lessons as compared with the post-topic tests.
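One way the retention comparison might be operationalized is shown below: each topic's score immediately after the lesson (the post-topic test) is set against the same topic's score at the end of the study (the post-assessment). The topics come from the study, but the scores and the 10-point "retained" threshold are assumptions made purely for illustration.

```python
# Hypothetical retention check: compare each topic's post-topic test score
# (taken right after the lesson) with that topic's score on the end-of-study
# post-assessment. The scores and the 10-point threshold are illustrative
# assumptions, not data from the study.
post_topic = {
    "prime factorization": 100,
    "least common multiple": 89,
    "greatest common factor": 83,
}
post_assessment = {
    "prime factorization": 100,
    "least common multiple": 78,
    "greatest common factor": 83,
}

for topic, immediate in post_topic.items():
    change = post_assessment[topic] - immediate
    label = "retained" if change >= -10 else "declined"
    print(f"{topic}: {change:+d} points ({label})")
```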
The research design was a mixed methods design. Data were mainly qualitative because the problem and central phenomenon (best instructional practices in Tier III mathematics settings, and specifically the use of instructional videos in this setting) “require[d] both an exploration (because we need to better know how to teach these children) and an understanding (because of its complexity) of the process of teaching and learning.” In this study, consistent with the qualitative approach of “learning from the participants,” the researcher relied on qualitative data such as logs, surveys, and interviews to measure the effectiveness of the videos (Creswell, 2012, pp. 16-17). The quantitative aspect of the study was the statistical analysis of the tests and assessments in numeric form to answer the research question.
The study took place in an interventionist setting, where students were provided with supports and intervention lessons on targeted topics. Scientific Research-Based Interventions (SRBI) is aimed at students of varying ability levels and is divided into three tiers. Tier I includes 80 to 90 percent of students in the school. Students in Tier I generally score at the 40th percentile or higher on benchmark tests, and these children “receive high quality curriculum and instruction in the general education classroom or program.” Students in Tier II make up five to ten percent of the school and generally score between the 16th and the 40th percentile on benchmark tests. The school provides help for these “children who need more support than they are receiving from the general curriculum” through SRBI instruction during workshop (study hall at the end of the day) twice a week, typically with their math teacher. One to five percent of the school (generally students who score between the 1st and the 15th percentile) receive “more individualized instruction [as these] children… need the most support,” in the form of interventions twice a week from an interventionist (Connecticut State Department of Education [A family guide], 2009, p. 6).
The Tier III intervention setting was chosen because the researcher had direct access to these students, and the sample was drawn from them. Not all students who received Tier III services were included in the study: some students tested out of SRBI, and the content of the study did not meet the needs of all students. There were two groups of students: one group of four students who participated in the study for 12 weeks, and a second group of four students who participated for 9 weeks. Both groups received the control and experimental treatments.
This study aimed to improve the intervention instruction received by students through data analysis and the use of formative and summative assessments to measure the effectiveness of the instructional video resource Khan Academy Coach. The study addressed CCSS standards for sixth- and seventh-grade students. Additionally, some content (e.g., prime versus composite numbers; mixed numbers and improper fractions) matched fourth- and fifth-grade CCSS standards, to ensure students had the proper background knowledge to succeed in their current math class. The study took place in the fall, beginning in September and ending in late November. Students met with the interventionist twice a week for 42-minute blocks of time and received the control and experimental treatments over a span of nine to twelve weeks. In the first week, the pre-assessment and the pre-attitudinal survey were administered. In the last two weeks of the study, the post-assessment and post-attitudinal survey were administered, and group interviews with students were conducted.
A total of five lessons (topics) had enough student data for analysis: converting mixed numbers into improper fractions, converting improper fractions into mixed numbers, prime factorization, least common multiple, and greatest common factor.
Before each lesson, the student took a pre-topic test to check for prior knowledge and to determine whether the student needed the lesson. Most students demonstrated a need for instruction on all five topics; however, for greatest common factor, two students scored 100 on the pre-test and moved on to the next lesson (see 6G2 and 6B5 in Table 2). After students took the pre-test, they proceeded to either the control or the experimental lesson. In the control lesson, vocabulary was introduced at the beginning of the lesson. The interventionist then modeled solving a problem using a specific strategy, and the student practiced solving a problem using this strategy with interventionist guidance (see Appendix F1). At times only one strategy was taught; at other times, multiple strategies were taught. For each strategy taught, the student completed a guided practice problem. Students then completed four to five problems independently using the strategy or strategies taught.
In the experimental lessons, before students watched the video, key vocabulary and expectations for completing the worksheet were discussed. Students then watched the video on KAC while completing a worksheet. After watching the video, the student discussed the worksheet and the discussion questions with the interventionist, and any issues or confusion were addressed at this time (see Appendix F2). Students then completed practice problems on KAC (an average of 10 problems per lesson). After the lesson was complete, both the control and experimental groups took a post-test. Based on the post-test, students either moved on to the next topic or potentially revisited the topic; in practice, revisiting never occurred, as all students showed the requisite proficiency on the post-tests. These lessons were interspersed with sessions in which students used the time to complete math homework and study for quizzes and tests.
Two limitations impacted this study: the first was sample size, and the second was the difficulty of predicting when students would complete each topic.
Although it was anticipated that 10 to 15 students would be part of the project, at the start of the semester only seven students were receiving Tier III interventions. As time progressed, three students were removed from the study due to behavior and level of math ability. Two of these students were reluctant learners whose time in SRBI was focused on catching up on class work; they were also withdrawn because they scored well outside the Tier III percentile cutoff on the STAR benchmark test. The third student's score on the benchmark test was so low that time needed to be spent solely on homework and classwork; in addition, when he was placed in the experimental group, he did not respond well to the videos.
After the removal of these three students from the study, four new students were added. These students scored within the Tier III cutoff on the fall STAR benchmark test (the original four students had begun receiving Tier III services based on their CMT scores, May benchmark scores, and recommendations from the previous year). The four new students participated in the study for only 9 weeks rather than the full 12 weeks, so there were two sample groups, one for 12 weeks and one for 9 weeks. Thus the number of students included in the study was smaller than expected.
The second limitation was the difficulty of anticipating the time students would need to complete each lesson. Originally, students were to switch from the control to the experimental condition (or vice versa) at the six-week point, so that the same number of students learned each topic in the experimental group as in the control group. In practice, however, students took longer (or shorter) periods to complete topics than expected. Because of this, they were switched earlier or later, leading to an uneven split for each topic.