Formative Assessments
Formative assessments, including exit tickets, chapter quizzes, and annotation checks, drive my planning and instruction on a daily and weekly basis. These three methods of assessment are vital to monitoring my students' progress as they work toward mastery of specific standards and objectives throughout a unit. Without daily and weekly formative assessments, I would be planning blindly, not knowing whether my lessons were effective or accessible and missing common misconceptions affecting the entire class.
Exit Tickets
Exit tickets are a mainstay in my classroom. The students know that at least three times a week, our class will end with an exit ticket, and they know that the following day, they will receive their graded exit tickets and see their class average compared to the other 7th-grade classes on the content board in their room. Exit tickets guide my decision-making as a teacher because the data tells me whether we are at or approaching mastery of the target skill or objective, and they engage students in their own progress monitoring through timely, consistent feedback.
The exit tickets always address the guiding or essential question for the day's lesson. In the examples to the left, you can see the essential question — a few of the students annotated "EQ" on the box — at the top of the page. This essential question is introduced at the beginning of the lesson, so it is not new to the students on the exit ticket. This consistency is vital because it allows me to see whether the lesson effectively taught that specific objective, and it guides the students' decision-making because they have practiced with that question prior to the graded formative assessment at the end of class. This builds students' confidence and has proven to be an effective and accurate measure of their progress.
In the planning stage, I always intellectually prepare my exit tickets, so that when I write multiple-choice and written-response questions, I have specific, intentional distractors and exemplar responses in mind before students take the assessment. This allows me to anticipate exactly where misconceptions may occur.
In the student samples above, the lower-level student, page 1, responded to both questions one and two with the distractor — similarities because they were both afraid. This student also chose a piece of evidence that supported her answer to Part A, even though her Part A answer was not the exemplar response. This showed me that she is approaching proficiency in supporting her claims with relevant evidence; however, she failed to identify the most important difference between the historical text and the fictional text. Both the mid- and high-level students, pages 2-3, identified the primary difference between the two texts, with the mid-level student struggling to identify the strongest pieces of evidence. The data from this exit ticket informed my decision-making in the following lesson, where I had students pre-annotate the exit ticket text for similarities and differences.
Data such as this is vital to my day-to-day planning and is easily trackable on SchoolRunner, my school's web-based academic and cultural data tracking and analysis platform, which tracks everything from attendance and behavioral issues to academic and gradebook data to bus routes and family contact information.
To the right, you can see a snapshot of assessment mastery data per exit ticket, organized by class, as well as the four 7th-grade classes' averages for each exit ticket. In the larger picture, this data informs groups for literacy Focus classes and remediation. On a more immediate level, this data guides my decision-making by showing me when it is necessary to spiral skills into later lessons within the unit or to replan a lesson in order to reteach an important skill.
For example, in unit one, we focused heavily on comparing fictional texts to historical texts, and I wanted my students to have a strong understanding of this skill from the very beginning of the novel. However, the first exit ticket assessing this skill, R1.01, was not mastered, with a class average of 69%. Because this skill was extremely important to the unit as a whole, I chose to reteach the same objective the following day, replanning my lesson and pacing calendar. The next exit ticket, R1.02, was mastered, with an average of 81%.
Chapter Quizzes
Each Wednesday, my students take a brief chapter quiz on the chapters we have read throughout the week, to measure whether students are actually reading and comprehending their novel on a basic level. This formative assessment, to the right, identifies students who are having trouble comprehending what they are reading on a surface level. Their summative assessments and exit tickets are skills-based, whereas these quizzes are designed to be simple, quick content-recall questions — they are not meant to test skills.
This method of assessment was born from an issue I struggled with in my early years of teaching: Do I test the students on content or skills? Skills, of course, are more important, as my goal is for students to be able to read any text analytically, not just recount the plot of a specific book. However, students must be reading their books in order to be successful on their skills-based assessments. Thus came our quick, weekly chapter quizzes.
This data is much more student-facing and helps guide the decisions students make about prioritizing their time and committing to homework. Students know how they performed within five minutes of taking their quiz, and they can then improve their scores as needed by rereading the chapter(s) for homework.
To the left, you can see a snapshot of assessment mastery data per chapter quiz, organized by student. Generally, students in red are not reading their books consistently. With this information, I request specific Focus groups for remediation dedicated to catching those students up with their novel, and I also use it to identify families I need to contact to create more accountability for the students. As I tell my students every day: "In reading class, we read" — and if they are not reading, they will not be successful on skills-based tasks.
Annotation Checks
In addition to the weekly chapter quizzes, I also check students' book annotations each Wednesday. Students are expected to annotate in their books during class as we read — both teacher-directed and independent annotations — as well as during their homework reading assignments.
Like every other aspect of my teaching practice, I have adjusted the way I assign and grade student annotations. Initially, the expectation was that students have at least two meaningful annotations per page. However, what qualified as a "meaningful annotation" was not necessarily equitable across student levels, and the requirement did not build the skill of annotating or of using annotations to answer skills-based questions. Students were simply annotating for mood, characterization, or whatever element they felt most comfortable with, but they were not learning how to use annotation as an analysis skill.
Now, students are told specifically what to annotate for in class or for homework. For example, at the beginning of a Reading Workout in class, students note the two or three annotations we are looking for, such as setting, theme, and tone, and they know that those are the only annotations they need to make during that passage. After reading and annotating a chapter or passage, students answer skills-based questions that align with the annotations they were directed to make. This has increased the number of meaningful annotations in students' books and has also improved their ability to cite relevant textual evidence.
Above are three samples of student annotations.
The lower-level student's annotations, page 1, show that the student is grasping the big ideas we discuss in class; however, the student is not underlining specific evidence to accompany the annotations and takes very few notes explaining each annotation's significance.
Page 2 shows a mid-level student's annotations. This student has strong annotations with notes and identifies specific evidence within the paragraphs to connect to those annotations. This is an average sample that shows the basic expectations for annotating in class.
The high-level student, page 3, went above and beyond the requirements of the class. High-level students often make their own annotations beyond those assigned. For example, this student identified unfamiliar words and added notes and inferences the student made while reading.
To the right is the student-facing annotation check rubric, which students receive after their books have been graded. This rubric shows students exactly what I was looking for and clears up any questions about why they received a specific grade. With this rubric, students are also given the opportunity to take their work from good, to better, to best. Students can go back and make their annotations "two-part" annotations, which means adding notes alongside the basic symbols (see page 2 above), or, if they were missing an annotation altogether, they can return to that page, find the evidence, and add the annotation. The next day, students who chose to improve their annotations can turn in their books again for a revised grade. This system, while streamlining the grading process for me, also gives students the responsibility and ownership to identify their own errors and improve their grades.
The rubric is graded using the following system:
√+ — Student has the exact annotation as the exemplar: the evidence underlined in the passage and coded correctly, with a short sentence explaining what is important about that annotation (25 points).
√ — Student has partially annotated for the exemplar, which may include the underlined evidence with the correct coding but no brief explanation (15 points).
√– — Student has barely annotated for the exemplar, which may include only the coding in the margin, without clearly connecting the annotation to evidence in the passage and without any explanation (5 points).
X — Student has not annotated at all (0 points).
This is a system the students are used to seeing on their homework, diagnostic aggressive monitoring, and written-response questions. The students engage with this feedback and have the opportunity to re-annotate their passages using the rubric. This assessment puts ownership in the hands of the students, who then decide whether to improve their grades.
Similar to the chapter quizzes, these annotation checks provide me valuable data on students who may not be reading or comprehending the books on a basic level, which then informs remediation.