
Assessment & Evaluation

What does the Research Say About A&E? 

What does meaningful student assessment look like and how does it impact learning?

A Literature Review by Christina Nyentap

Published: November 12, 2020 


I was motivated to take a deep dive into the concept of meaningful assessment because the word “meaningful” seemed to be a buzzword for best practices in curriculum, both on Twitter and in my B.Ed program. In my other courses I also learned that feedback should be meaningful and continuous, but I did not know what that meant per se. What does meaningful assessment look like? What makes assessment relevant and meaningful to a learner? How does assessment impact student learning? What motivated my research was the desire to know more about why we are encouraged to use things like checklists, progress reports, exit cards, quick writes, and so forth, because I did not see many of these practices being used when I was in high school. I wanted to know if assessment for learning and as learning was actually as useful to learners as people claim it to be, or if it was just a formality.

 

Additionally, I have always been confused about best practices in assessment and evaluation and about which methods are most impactful to student learning. I did not know why formative assessment should not be evaluated or why there are three levels of assessment. What seemed “off” to me when I thought back to my practicum experience and my own experience in high school is that students are often motivated solely by the grade. Students commonly ask questions like “why are we learning this and when will we ever use this in life?” I remember asking those questions myself when I sat in my grade 12 Calculus class. It’s not often that students want to do an assignment for the sake of doing it. This type of motivation worries me, especially when it comes to the concept of assessment for and as learning. My bias in approaching this assignment was that I assumed that if there was no grade, it would be hard for students to find the task meaningful and relevant, and they would not be motivated to do it. I learned in my literature review that connecting tasks to real life, using authentic assessment tasks, and assessing without a grade is actually more impactful to student learning and increases students’ intrinsic motivation to learn. This annotated bibliography is organized in alphabetical order as outlined by the MLA formatting guidelines on the Purdue OWL. Subheadings have been included to support the identification of the key learning of each source in relation to my research question.

Strategies for Self-Assessment

Andrade, Heidi, and Anna Valtcheva. “Promoting Learning and Achievement Through Self-Assessment.” Theory into Practice, vol. 48, no. 1, Jan. 2009, pp. 12–19, doi:10.1080/00405840802577544.

Andrade and Valtcheva define self-assessment as “a process of formative assessment during which students reflect on the quality of their work, judge the degree to which it reflects explicitly stated goals or criteria, and revise accordingly” (13). It is usually done in drafts, where students can go back and revise their work after they have self-assessed. Self-evaluation, on the other hand, involves students grading their own work as part of an assessment of learning task. The authors caution against using self-evaluation, as students often inflate their self-assigned marks to ensure they receive a better grade. Self-assessment is useful because it can increase student learning and achievement, strengthen students’ ability to self-regulate, and help students learn to monitor their own learning.

Unlike other sources I came across in my literature review, the authors define conditions for effective self-assessment to occur, including student awareness of the value of self-assessment, clear criteria on which to base the assessment (i.e., a rubric), a task to be assessed, models of self-assessment, direct instruction and assistance in the self-assessment process, practice in self-assessing, cues for when it is the best time to self-assess, and opportunities to revise the task after assessment. A good rubric describes the mistakes students tend to make and the ways in which student work can shine. Students should not have to guess at the expectations or learning targets of high-quality work.


To engage students in effective self-assessment, teachers can first articulate expectations. For example, teachers and students can co-create all or part of the rubric in class by analyzing and critiquing weak and strong examples of student work. Second, teachers can engage students in self-assessment. Students create initial drafts of their volleyball serve, essay, lab report, speech, etc., then compare the initial draft of their work with the expected results. Students can colour code key phrases in the rubric with one colour and then underline or circle evidence in their drafts showing that they have met these criteria in their work. If students find they have not shown evidence of a standard, they can make a note for improvement on their next draft. Students should be given one to two class periods for this. Third is revision, where students use the feedback from their self-assessments to guide their revisions. This three-step process can be enhanced with peer feedback and teacher feedback. Having students self-assess leads to noticeable improvements in students’ work. Self-assessment may take written forms such as journals, checklists, and questionnaires, or oral forms such as interviews and student-teacher conferences.

Andrade and Valtcheva found that students embrace rubric-referenced self-assessment. Results indicated that students’ attitudes toward self-assessment improve as they gain more experience with it, that students were more likely to self-assess and do so effectively when they were aware of teacher expectations, and that students recognized self-assessment as useful when they realized how it impacted their grades. Students found they benefited from self-assessment in that it helped them focus on key elements of the assignment, learn the material, identify strengths and weaknesses in their work, increase their motivation, and decrease their anxiety. Students also reported that they would sometimes transfer their self-assessment skills to other classes, and that there were sometimes differences and tension between students’ ideas of quality work and teacher expectations. This tension may be addressed through conversations and codifying criteria.


This paper is very relevant and useful as it includes many practical examples from different classroom subjects. It shapes a framework for teachers to keep in mind as they create or co-create rubrics and as they design their course work with opportunities for self-assessment in mind. The authors give practical evidence on how self-assessment can enhance students’ learning and achievement by improving their understanding of curriculum objectives. Although the authors caution against using self-evaluation, I have seen many people still use this strategy (especially in university). In terms of my research question, meaningful assessment is something that allows students to improve on their work so that they have greater success when it comes to being evaluated. Students can self-assess using a rubric in order to inform revisions to their work, and then the final copy can be evaluated by the teacher. This gives students a chance to understand exactly how they will be evaluated later. Self-assessment impacts student learning as it helps them better understand the gap between their work and the expected quality of work.

Assessments Are Not as Objective as They Appear to Be

Brookhart, Susan M., and Daniel T. Durkin. “Classroom Assessment, Student Motivation, and Achievement in High School Social Studies Classes.” Applied Measurement in Education, vol. 16, no. 1, Jan. 2003, pp. 27–54. EBSCOhost, doi:10.1207/S15324818AME1601_2.

This article by Brookhart and Durkin ties assessment to learning and explains how assessment is for learning. Acknowledging my own bias, I previously understood assessment and learning as mutually exclusive. I believed that students must perform engaging and meaningful tasks where they acquire knowledge and skills, and assessment puts those knowledge and skills to the test. Unexpectedly, this article offered a viewpoint into the fundamental shift in assessment philosophy from “students will be able to…” to “students will choose to…”. Brookhart and Durkin ask readers to consider the validity and reliability of seemingly objective assessments as all students approach these assessments differently. Like the previous article, this approach to assessment requires teachers to consider the human being, the learner. 


With the same assessment in the same classroom, this study revealed that students’ perceptions of the assigned task, their perceived self-efficacy, their reported mental effort invested in the task, their goal orientations, and the learning strategies they used differed by assessment and by student. Understanding the dynamics of classroom assessment is essential for improving education and for understanding the current state of learning in classrooms. The classroom assessment environment is influenced by the purposes of assessments, the assessment methods used, the criteria for selecting them, their quality, the teacher’s use of feedback, the teacher’s experience in assessment, the teacher’s perceptions of students, and assessment policy.

One assessment that is disseminated to a whole class or group of students will be encountered by students differently (how they perceive and respond to the assignment will differ). Students’ perceptions of the importance or value of the assigned task and their perceptions of self-efficacy ultimately affect their ability to achieve. Assessment tasks should account for student interest. Teachers have a greater ability to control situational factors influencing interest, such as hands-on use, discrepancy, novelty, social interaction, modeling, games and puzzles, content, fantasy, and narrative. Students are also affected by their feelings of ownership; student perceptions were often that they had to learn the textbook’s or the teacher’s material. This bleeds into the idea of co-constructing knowledge through assessment tasks.


Assessment goes beyond checking off a certain objective. This article supposes that assessment means applying learned knowledge in a way that suits the learner. The study suggested that performance assessments (also known as alternative or authentic assessments, requiring students to perform a task rather than select a ready-made answer) may be connected to productive student goal orientations and learning strategies. Understanding how assessment links to motivation and effort is important for understanding student learning. Assessment tasks must be relevant to the real world but also challenging. Performance assessments were associated with higher student self-efficacy and with both mastery and performance goals, tapping into more than one motivational source.

Practical Methods of Assessment to Encourage Higher-Order Thinking in ELA

Catholic District School Board of Eastern Ontario. “Targeting Achievement for All, Assessment For, As, and Of Learning”. Magazine two of six, n.d. 

This resource was loaned to me by my associate teacher. It is one of six magazines that the Catholic District School Board of Eastern Ontario (CDSBEO) developed as part of its FROG (Facilitating Reading for Optimum Growth), HAWK (Higher Achievement in Writers Know-how) and STOMP (Success Through Optimizing My Potential) initiatives for student success. As students progress through the education system (grades K to 12), they are expected to progress through the FROG (primary reading), HAWK (junior writing) and STOMP (intermediate thinking) levels. This resource is in print form and is not available online, although it was based on the ministry-developed resource Learning for All (2013) and the Ontario English Language Arts (ELA) curriculum. The resource was made for teachers within the board by selected teachers and board members. My associate teacher helped develop the resource and presented it to other schools in the board at the time. It describes assessment of learning, for learning, and as learning; how assessment works for teachers and learners in terms of reading, writing, and thinking; the literacy assessment cycle; assessment strategies and tools; the A to Z of assessment accommodations; an assessment toolkit to support readers; an assessment toolkit to support writers; and an assessment toolkit to support thinkers. There are many practical examples and sample handouts within the booklet to support the practical implementation of assessment.


Members of the CDSBEO made this resource with the belief that instruction and assessment allow students to achieve and succeed. Rich experiences in reading, writing, and thinking while making critical connections ensure that all students can become self-directed, self-assessing individuals. The Assessing Thinkers section is the most relevant to me, as it targets intermediate students in ELA (one of my teachable subjects). This document suggests that struggling students will thrive when we clearly define learning goals, scaffold learning experiences, and provide varied opportunities for practice (assessment as learning) and feedback. In order to foster higher-order thinking, we can assess students as they move through the following stages: retell (remember, understand), relate (apply, analyze), and reflect (evaluate, create). We might assess students’ knowledge and understanding (retell) with prompts like “identify the main idea…”, “place the events in order”, explain concepts, “create a lesson outline to show…”, “summarize the chapter on…”, or “describe how two characters are similar and different”. Identifying, recognizing, recalling, finding, explaining, summarizing, and comparing-and-contrasting questions can help assess knowledge and understanding. We might assess students’ thinking and application skills (relate) with questions about applying information to a task, carrying out a procedure to solve problems, inferring or justifying logical conclusions using evidence, differentiating between events, deconstructing something to create a point of view, or examining multiple pieces of information to create a larger picture. We might assess students’ reflecting or communication skills (evaluating and creating) by asking students to judge or evaluate the appropriateness of procedures or ideas, detect and assess inconsistencies, prioritize strategies, debate issues, predict outcomes, develop new ideas from previous ones, invent or create an advertisement, pamphlet, or product, or combine and reorganize elements to form a coherent whole (e.g., developing a food menu for a restaurant).

 

This resource is very useful for formulating exit cards or assessment tasks as there are a variety of sample questions offered to support different levels of thinking. In terms of my research question, this resource explains that meaningful student assessment is something that helps students develop higher order thinking skills. It impacts learning by fostering deeper connections and relationships with the material. This resource is very practical and offers many ideas that would be useful in creating exit cards for learning activities. 

Co-creating Assessment Criteria

Fraile, J., R. Pardo, and E. Panadero. “Co-Creating Rubrics: The Effects on Self-Regulated Learning, Self-Efficacy and Performance of Establishing Assessment Criteria with Students.” Studies in Educational Evaluation, vol. 53, June 2017, pp. 69–76, doi:10.1016/j.stueduc.2017.03.003.

Traditionally, rubrics are given to students alongside assignments and are used as a summative assessment tool. More recently, using rubrics as a form of formative assessment has gained popularity. Fraile, Pardo, and Panadero hypothesized that when students are involved in rubric design and creation, they will better understand and internalize the assessment criteria, rather than hold a single-minded focus on the final score. This shift in thinking may serve to motivate students in their learning. The aim of this study was to explore the effects of co-creating rubrics on students’ performance, self-regulated learning, self-efficacy, and perceptions of rubric use. Fraile, Pardo, and Panadero suggest that rubrics can increase learning and performance through assessment for learning (AfL) and assessment as learning, but not through assessment of learning. Co-creating rubrics allows for a deeper understanding of what is expected and for more detailed feedback. If the rubric is used only for summative purposes, the aim is no longer the student’s learning, although the rubric may still enhance the reliability of the assessment. Using a rubric for formative assessment also allows students to self-assess (which involves learners in making their own judgments about their achievements). Through self-assessment, students can better understand their strengths and weaknesses so that they can improve.

 

Assessment criteria should be introduced before the execution of the task begins so that students can learn to self-regulate, monitor, and evaluate their success accordingly. Simply providing a rubric is not enough; Fraile, Pardo, and Panadero also suggest that learning tasks include opportunities for reflection throughout the learning process. Having access to a rubric can help students formulate and plan how they will progress through their assessment task. Providing a general rubric can help increase students’ self-efficacy; however, it is not clear whether co-created rubrics have a greater effect on self-efficacy than general rubrics. Rubrics have been criticized for constraining students’ learning to achieving teacher-defined objectives and desires, and for promoting shallow approaches to learning. Involving students in the co-creation of rubrics can improve student autonomy and increase feelings of empowerment, as students get to create and negotiate part of what they will be assessed and evaluated on.


Fraile, Pardo, and Panadero’s study took place over one semester, during which one group took part in the co-creation of rubrics and one group did not. Most participants were male, with an average age of 23 years. Fraile, Pardo, and Panadero point out that only one other study of this kind had been conducted before. In all, results showed that co-creating rubrics led to better-internalized assessment criteria, helping students become more self-regulated; co-creating rubrics did not enhance the self-efficacy of the sample population; co-creating rubrics partially affected students’ performance, though this may have been due to the nature of and differences between the tasks assigned; and students who co-created rubrics did not have more positive perceptions of them. Fraile, Pardo, and Panadero acknowledged that the study was limited by the sample size (n=63), inconsistent protocols across tasks, the lack of a control group without rubrics, and a sample that was mainly male.


The results of Fraile, Pardo, and Panadero’s study varied significantly from the findings in their literature review. In relation to my research question, I think more information is needed in this area to conclude whether the co-creation of rubrics enhances student learning and makes it more meaningful. There is no doubt that student voice and choice will enhance students’ autonomy and make them more interested and motivated to learn. Co-creating rubrics to ensure understanding of assessment criteria, so that students are clear about the goals and objectives of the assignment, would in theory support student achievement.

Feedback Is a Necessary Component of Assessment

Grainger, Peter. “How Do Pre-Service Teacher Education Students Respond to Assessment Feedback?” Assessment & Evaluation in Higher Education, vol. 45, no. 7, Oct. 2020, pp. 913–925, doi:10.1080/02602938.2015.1096322.

This research article by Grainger explains feedback as an aspect of assessment literacy. Although feedback is widely acknowledged as an essential part of effective learning, no single process or best practice has been identified for positively impacting student achievement. One of the core responsibilities of any teacher is assessing and giving feedback on student work; however, how to give that feedback has not been clearly defined. In this exploratory study, pre-service teachers gave their responses to new feedback processes within their teacher education course. The purpose of the study was to gain insight, from a student’s perspective, into the feedback process (specifically how students perceive assessment feedback), whether criteria sheets are really valued, and whether specific feedback comments are really valued by students. The reason for the study was that teacher graduates had expressed dissatisfaction with assessment and feedback in their previous courses. The results indicated that feedback preferences vary considerably, that there is no one-size-fits-all method, and that feedback needs to be customized to each student.

 

In order to improve feedback practices, the author suggests we rethink the place of assessment and feedback within the curriculum. Feedback is information given to students about the quality of the work being assessed. In the present study it is given a) directly through electronic or handwritten annotations, b) through criteria sheets, or c) both. In considering best feedback practices, a feedback transmission model which includes telling students what went wrong is considered less effective than a constructivist model which allows students to construct their own meanings. Feedback may be given for a number of reasons (e.g., correction, reinforcement, forensic diagnosis, benchmarking, and longitudinal development).  


In order to learn, students need feedback. Effective comments and suggestions are generally considered of value by students, however, there is a common perception that students are only interested in the mark, and that little attention is given to reading the feedback. The problem may be that students value feedback, but choose not to read it because they do not understand the terminology or how it will help them in the future. For feedback to be effective, the statements made by the person grading the assignment must be in student-friendly language. Some students preferred generic feedback (feedback given to the whole class) over specific feedback. On the flipside, some students were dissatisfied with vague feedback that was not specific to their work and could not see how vague feedback would help them improve their work specifically.


A well-established source of feedback is the rubric. The rubric is a form of criteria sheet that identifies specific criteria, curriculum objectives, and standard descriptors, all of which describe the quality of the work. While these have the potential to be useful, many students struggle with the academic language often contained within them. The language used is often hard to understand, ill-defined, ambiguous, or subjective, or it allows for assumptions to be made. Assessment criteria should be explicitly stated in language students will understand so that they are able to respond effectively to assessment tasks. The author contends that providing feedback in the form of criteria sheets (rubrics) and annotations does not assist students in developing autonomy, increased understanding, or motivation to learn, but rather ensures continual dependence on the teacher.

The study found that students like to receive written feedback, which they consider very helpful and consistent, but not many students took advantage of student-teacher meetings to gain more insight into the written feedback when these were offered. Those who did meet with the assessor in person reported that the assessor helped clarify the written feedback and the improvements to be made next time. The researcher inferred that it may be difficult to organize student-teacher meetings and that the more time passes, the less motivated students are to hear the feedback. Not knowing why students do not take advantage of in-person meetings is one limitation of this study. Most students stated that feedback was most effective when written comments and criteria sheets were used in combination. Some students like being told explicitly where they went wrong. In order to learn from feedback, students must have the opportunity to construct their own meaning from the received message. This means analysing it, asking questions about it, discussing it with others, and/or connecting it with prior knowledge. Students need to act on the feedback they receive.


In relation to the research question, “what does meaningful student assessment look like and how does it impact learning?”, this article suggests that meaningful assessment is not only about the task, but the feedback provided after the fact. It is not only about receiving the feedback, but the learner acting on the feedback; the ‘so what now’? Effective feedback that students understand and that students also gain by asking their own questions and making their own conclusions about where they went wrong will impact student learning. Making feedback immediately and readily available (e.g., dedicating time to student-teacher meetings) will also allow students the opportunity to act on their feedback.

Meaningful Assessment Ideas

Larson, Bruce E., and Timothy A. Keiper. Chapter 2: Preparing Learning Objectives and Assessing Student Learning (pp. 35-46). Instructional Strategies for Middle and Secondary Social Studies Methods, Assessment, and Classroom Management. Routledge, 2011. Retrieved from https://books-scholarsportal-info.proxy.bib.uottawa.ca/en/read?id=/ebooks/ebooks2/taylorandfrancis/2013-04-15/2/9780203829899#page=50

Larson and Keiper discuss validity, reliability, and usability when it comes to assessment. When assessments are not tied to instructional strategies, there may be an issue with the validity of the assessment. Despite the negative association with the phrase “teaching to the test”, the authors suppose that teaching to the test can be a useful tool for ensuring that teacher-made assessments are valid. Assessments must also be reliable, meaning that the assessment will produce the same results over and over again. Having a rubric or another form of grading scale helps with reliability. Another factor that comes into play with assessment is usability. Usability is how easy an assessment tool is to administer (e.g., true/false tests are more usable as they take less time and are easier to implement than having students perform five-minute presentations). It is important to keep the whole picture of the student in mind and not depend on one form of assessment and evaluation.


Authentic assessments resemble real-life tasks related to the subject area. For example, you could ask the question “what skills does a cartographer need to design a map?” and assess students on those skills. Authentic assessment values the process as much as the product. Performance assessments emphasize problem-solving and reasoning; they require students to construct responses rather than select from ready-made answers. Recent trends in assessment have taken classrooms from written exams to coursework, teacher-led assessment to student assessment, implicit criteria to explicit criteria, competition with classmates to cooperation, product assessment to process assessment, content to competencies, and assessment for grading to assessment for learning. Examples of authentic assessments include writing for real audiences such as a newspaper or magazine, short investigations about a local community issue, developing a museum display, engaging in a service-learning activity, portfolios of work showing progress over time, examples of self-selected “best” work, and self-assessment by students of their performance.

Rubrics describe the quality of student work and can be made on a scale from 0 to 3 with a score out of three for each category. Receiving a rubric at the beginning of a task will help students accept more responsibility for their learning.


Diagnostic assessments help teachers understand where students are starting from, formative assessments help students understand what they are learning, and summative assessments demonstrate what students have learned and whether or not they have met expectations. Larson and Keiper give examples of formative assessment: have students define a concept before and after instruction, ask students to summarize the main ideas of a lecture, discussion, or reading, have students complete “check for understanding” questions at the end of an instruction period, interview students, and assign brief in-class assignments or exit cards.

This portion of the textbook chapter about assessment types and strategies was useful for understanding what meaningful assessment looks like and how it impacts learning. There were practical ideas given that I could use in my classroom tomorrow. One comment I have concerns the usability of assessments. Although the authors claim that true/false or multiple-choice questions have higher usability, I disagree that they are the best option. They assess students’ ability to recall information, but in a world where competencies and skills are valued over knowledge alone, since knowledge is so readily available, I think taking the time to do assessment tasks that result in deeper learning is more meaningful to students.

Practical Strategies and Best Practices for Assessment 

The Power of Formative Assessment to Advance Learning. 1, Student Motivation and Achievement; 2, A Six Step Process for Student Growth; 3, Strategies for Checking for Understanding.  Association for Supervision and Curriculum Development, 2008. 

In this three-part documentary series called The Power of Formative Assessment to Advance Learning, teachers and professionals were interviewed about best practices related to formative assessment in the classroom. Part one describes how formative assessment enhances student motivation and achievement, part two describes a step-by-step process for using formative assessment in the classroom, and part three outlines specific strategies for checking understanding. 

 

In part one, the film argues that when assessments are used throughout the learning process, they can inform instruction, identify needed modifications, and serve to motivate students toward higher achievement. Formative assessments are purposeful and planned, and happen when and as learning is taking place. Formative assessment results in quick feedback that can be used within minutes, hours, or days (e.g., using personal response systems, brain dumps, quick writes, sequencing events on the board as a class, or thumbs up/down for understanding terminology). Formative assessment focuses on curriculum-driven learning goals, linking the curriculum to the instruction as it informs teachers of students’ needs and gaps in their learning. Formative assessment teaches students how to measure their progress toward the desired learning goal, especially if they have a role in their assessment. Student roles may include asking questions in a literature circle, peer-assessing using a rubric, and completing exit tickets. Formative assessment also helps teachers plan lessons to address the needs of students. It identifies students who need intervention and serves as a pacing guide for covering curriculum expectations.

 

In part two of the film, school professionals outline formative assessment as a series of recursive events that involve the teacher and students. The six steps of formative assessment are: ensure students understand the learning target (having them see it and hear it), have students produce work related to the learning target, compare the work with the learning target, evaluate strengths and weaknesses, give feedback for improvement, and then adjust lessons and instructional approaches to lessen the gap between what students know and the learning target. The assignments or assessment activities should match the nature of the learning target.

 

In part three, the professionals in the video outline specific strategies for checking students’ understanding. They argue that formative assessment is critical for knowing whether students are learning and that there is no one kind of assessment or one-size-fits-all way of doing assessment. Assessments give both teachers and students insight into how students are thinking about their answers – not just what they answer but what led them to the answer. Specific strategies include oral language (e.g., having students respond in class, discussing ideas, and moving from restating information to describing and explaining, which leads to higher-order thinking), asking questions related to content (e.g., student-generated questions), writing (e.g., quick writes at the beginning of class to help students remember what was covered in previous classes, followed by a rubric out of four and a debrief in student-teacher meetings), and quizzes and tests (that are not evaluative). Have students code test results (K – knowing, T – testing strategies, G – guessed, D – don’t care) so that they can see the thinking behind their answers. Help students develop a study guide from there (e.g., vocabulary students were not sure of, examples, mnemonic devices, etc.). Additionally, projects and performance tasks can serve as formative assessment. When students construct something, they have to mobilize and apply all the information they know about the world. Checking for understanding allows students and teachers to understand the content. Formative assessment connects what students are doing with what they are learning in their own minds.

 

This film series was intended for schools or districts already using or wanting to use formative assessment, and it includes real-life experts and practitioners from elementary and secondary schools. Published in 2008, the images within the film feel outdated; however, many of the concepts are still relevant to today’s learners. The documentary was excellent in lending many examples for a wide variety of subjects and age groups, and it fully explained formative assessment. One criticism, however, is that the content of the video is heavily focused on tasks as opposed to the learner. Although the people in this video do advocate for student voice (in the sense of student-generated questions), the learning tasks and assessments are always designed by teachers. More currently, we hear of co-construction of rubrics and multimodal assessments to appeal to different learning styles. This video also referred to assessment as a process of collecting data. While formative assessment is a process of collecting data about student learning, it is not that objective. Students should have a bigger role in designing their learning. This video uses real-world teachers and talks about how formative assessment may help teachers plan and fulfill more curriculum learning objectives. I appreciate that the film is realistic and represents the realities that teachers face, as opposed to being set in an ideal world. The film discussed closing the gap between what the students know and the curriculum-mandated learning goals. While achieving the learning objectives is of utmost importance, there should also be room for students to take strand information and apply it to their own lives in order to make meaningful connections beyond classroom walls and the curriculum.

 

In terms of my research question, meaningful student assessment looks like the six-step process outlined, is multimodal, and is relevant to students so that it progresses them toward the learning goal. Although dated, this film offered many practical examples that can be used as a base for creating formative assessments in 2020. The film showed that what is found within formative assessments (students’ knowledge and thinking processes) guides the direction of classroom instruction and students’ learning, ultimately having a major impact.

Meaningful Assessment Promotes Deep Learning and is Not Evaluative

Umer, Muhammad, Choudhary Z. Javid, and Muhammad U. Farooq. "Formative Assessment: Learners' Preferred Assessment Tasks, Learning Strategies and Learning Materials." Kashmir Journal of Language Research, vol. 16, no. 2, 2013, pp. 109-133. ProQuest, https://search-proquest-com.proxy.bib.uottawa.ca/docview/1628966214?accountid=14701.

Although the authors clearly define formative assessment (assessment as learning) as “assessment [that is] included in the assessment regime of a curriculum to help learners diagnose and improve their learning weaknesses”, in the rest of their paper they fail to use this definition properly. The study fails miserably as a study about formative assessment because it draws multiple conclusions about assessments that are evaluated, fitting the definition of summative assessment more than formative. Although the study fails to accurately represent formative assessment, it does offer useful insight into why formative assessment is needed and why grading everything can be harmful to student learning, and it therefore still contributes to my research question. The purpose of the study was to explore whether Taif University English-major students’ perceptions of formative assessments influenced the learning strategies and learning materials they used. Data was collected through a student questionnaire.

 

Umer, Javid, and Farooq claim that there are factors that might prevent formative assessment from resulting in better learning outcomes. They attempt to explain the washback of formative assessment in general and on students’ learning strategies in particular. In their literature review, Umer, Javid, and Farooq discuss multiple studies advocating for formative assessment as a means to enhance student learning; however, they attempt to refute each one. They contend that formative assessment may be more useful to low achievers than to other students. They also caution against using formative assessment, warning that formative effectiveness cannot be achieved by only giving students grades and that this may sometimes have reverse effects. Quality feedback needs to be ensured in order to improve student learning.

 

In their literature review, Umer, Javid, and Farooq describe surface-level learning and deep learning. Short-answer questions result in lower-level thinking compared to questions that involve essay-type answers, suggesting that assessment tasks that enable students to analyze and synthesize will result in deeper learning. Shorter questions that only test knowledge and do not involve considerable reflection and originality on the part of learners result in shallow learning outcomes, where learners often use memorization strategies and a narrow scope of specific text materials. Questions that ask students to analyze and synthesize embody learner interest, ownership, and personal involvement. This critical reflection on the content allows learners to gain a deeper understanding.

 

The learning environment was also mentioned as influential to students’ achievement in assessments. Understanding the learning objectives, a reasonable workload, and useful learning materials play a vital role in the learner’s approach to understanding content.

 

Assessment preferences were also found to influence the learning strategies used by the learners. Students preferred lower-level thinking questions because they perceived them as easier, thereby increasing their chances of getting a good grade. Assessment tasks affect learners’ learning strategies as well as the scope of the learning materials they use. This study demonstrated that memorization was the main learning strategy used when studying for a formative assessment task (which the authors referred to as quizzes and midterm exams). It also revealed that students often memorize content even if they do not understand it; students prioritize passing. These results are limited in their utility, however, because they assume that formative assessment is graded, and formative assessment should not be evaluated. The results offer some insight into how multiple-choice and true/false questions on a summative exam may not benefit learners in terms of increasing their knowledge and may only serve as a demonstration of what they are able to memorize. If we are to promote deeper learning in classrooms, grades should not be assigned for formative assessments, and different types of evaluative tasks, such as project-based learning or more emphasis on application on exams, should be considered.

 

Results of this study also reveal that formative assessment tasks (referred to as quizzes and midterm examinations) can constrict the scope of materials that students learn. With such emphasis on grades, students tend to study what they were told to study and do not broaden out to further their knowledge with readings that were not assigned. In terms of the depth of students’ learning, the study found that Taif University’s English-major learners do not benefit fully from formative assessment tools because they only care about the passing grade. A majority of respondents disagreed with the idea of including essay-type questions in the formative assessment tasks because they feared they would fail due to spelling and grammar errors. This evidence suggests not only that the students in this sample are motivated by grades rather than by knowledge, but also that they are uninterested in improving in areas where they have known weaknesses. The results showed that the learners expect the assessment to be based on course books and materials and expect to know exactly what they will be tested on (to the extent that page numbers are provided). This shows a clear lack of motivation and a lack of meaningfulness in what students are learning.

 

In terms of defining what meaningful student assessment looks like and how it impacts learning, we can conclude that meaningful assessment (assessment that is useful to learners in furthering their understanding) is not a summative assessment consisting of shallow knowledge questions that is then evaluated. We can also conclude that only using, or placing too much emphasis on, evaluated assessments can result in learners who care not about what they learn but about the passing grade, thereby having a negative impact on learning.

Assessment and Becoming More Human 

Vu, Thuy T., and Gloria Dall’Alba. “Authentic Assessment for Student Learning: An Ontological Conceptualisation.” Educational Philosophy and Theory, vol. 46, no. 7, June 2014, pp. 778–791, doi:10.1080/00131857.2013.795110.

While Vu and Dall’Alba aim to provide insight on meaningful and authentic modes of assessment in education, they lean more into the philosophy of what is considered human achievement (becoming more human through learning opportunities) and fall short on practical examples of what authentic assessment looks like in the classroom. Based on the assumption that authentic assessment enhances student learning, the authors take a deep dive into the meanings behind authenticity and what that means in terms of the task and learner. This article gives some useful insight into philosophies one should embed in their practice, may help some teachers understand their learners’ motivations for learning, and reminds readers of the bigger picture for why humans learn. Although this article does not explain what meaningful student assessment looks like, it offers useful points on the conditions that bring about meaningful assessment. 

 

Vu and Dall’Alba begin with the belief that assessment drives student learning. Authentic assessment opportunities can enhance students’ learning, thereby preparing them to adapt to a changing world. There is a lack of conceptualization of what authenticity or authentic assessment is. Authentic assessment is traditionally believed to consist of tasks that are true to life or have real-life value; however, this view is too narrow because it focuses only on the task, not the learner. An authentic assessment is only valuable to students if it engages the whole learner (their prior knowledge, attitudes, behaviours, and identity). The real-world, real-person connection roots students’ learning in the world. Authentic assessment is not an end in itself; rather, it should be thought of as a process that learners go through, continually opening up new opportunities for learners to understand themselves in the world. Instead of looking at assessment in terms of the qualities of the task, this article looks at assessment from the unique perspective of learners as human beings and what it means to be in the world.

 

The authors translate the philosophy behind our ways of “being” in the world to how we teach. For instance, because as humans we tend to accept the world as it is and conform to public ways, we might also have a tendency not to step outside of the box (outside of PowerPoints and handouts and into interactive technologies). The authors propose that in order not to get lost in the world, we need to stand on our being and take responsibility for who we are. The article goes into great depth about inauthenticity (going with the flow of society) and how it leads to conformity. Conformity (especially in education) is comfortable. When students and teachers alike step outside of the box, it often results in feelings of discomfort. Becoming authentic is associated with feelings of anxiety, guilt, ambiguity, and entanglement, but becoming authentic is where we are inspired to innovate in our own teaching.

 

Assessment is authentic when students are encouraged to become authentic themselves (to become more fully human). In this way, knowledge and skills are not seen as an end in themselves but empower students to form and establish themselves in the world. Students need to be challenged to think critically about the way they lead their lives, their interactions, and who they would like to become using the curriculum strand knowledge. Authentic assessment is not mere completion of tasks or learning activities; it challenges students to take up and respond to the question of who they are becoming. More practically, students might extend what they are learning into their life projects and then demonstrate how what they have learned has impacted who they have become. Authentic assessment challenges students to question taken-for-granted assumptions and results in feelings of discomfort, but it extends deeper into our human condition.

Motivation, Assessment, and the Learning Environment

Walters, Simon R., Pedro Silva, and Jennifer Nikolai. "Teaching, Learning, and Assessment: Insights into Students' Motivation to Learn." The Qualitative Report, vol. 22, no. 4, 2017, pp. 1151-1168. ProQuest, https://search-proquest-com.proxy.bib.uottawa.ca/docview/1894908416?accountid=14701.

In this study by Walters, Silva, and Nikolai, the authors draw upon the perspectives of sport and recreation undergraduate students in New Zealand. These students were involved in the design of their own assessments and in discussions on the implications of teaching and learning environments in their university courses. The reason for the study was a shared concern among the authors about what they perceived to be a lack of student motivation and engagement in the learning process at the school. Their aim was to capture the perspectives of both students and lecturers in order to create and implement an academic support strategy. A previous study had uncovered student criticism of teaching strategies and assessment methods. In this study, the authors first review theories related to motivation, environmental influences, and assessment in relation to learning. They then explain their study, which included students from a second-year sociology of sport class who were invited to design their own exam, and students from a third-year sports coaching paper who were invited to fully design their own assessments. Student experiences were captured through focus interviews, and Self-Determination Theory (SDT) was used to analyze the findings.

 

In their literature review, Walters, Silva, and Nikolai explain that SDT is based on the concept that humans are driven by the need for growth and fulfilment. In order to achieve self-determination, students must have three basic needs met: autonomy, competence, and relatedness. Intrinsic motivation is completing an activity for oneself, whereas extrinsic motivation has a means-end structure: a person engages in the activity to achieve an external reward or outcome. It is argued that Western models of education promote controlling, teacher-centred learning environments and traditional instructional methods, which hinder student motivation and deep learning. Environments that are externally controlled are more authoritarian, use more coercive teaching strategies and rewards, and impose deadlines. An autonomy-supportive context includes a teacher who is able to understand and empathise with the student’s perspective; students experience more self-initiation and choice. The authors warn about the difference between offering so much choice that students feel socially isolated and offering enough that they feel empowered.

 

Additionally, environmental influences such as the goals of the larger institution (to assess and measure progress) determine whether assessment has a positive or negative impact on learners. Excessive pressure on academics to assess and measure progress reflects the needs of policy makers more than the needs of teachers and learners, and this comes through. We must avoid promoting an “audit culture”. If there is too much emphasis on assessment as opposed to learning, both teachers and students may feel a lack of autonomy. Freedom to explore and take risks helps teachers and students become more creative and form deeper learning. Students are more engaged in their learning when they feel a sense of belonging.

 

Unlike other articles, Walters, Silva, and Nikolai reveal how assessments can either hinder or enhance learning. The intent of assessment for learning should be to nurture a collaborative, autonomy-supportive process where students can learn to monitor their own learning. Formative assessment is useful in this area.

 

In their study, Walters, Silva, and Nikolai found that teacher passion and evidence of caring inspire learners. The way the class is structured, the assignments, and the delivery of the material are important. They also found that students often perceive assessments as tools of measurement, as opposed to being structured to promote optimal learning. There was a heavy emphasis on grades and on the percentage that an assignment was worth. The question remains how to get learners away from this type of thinking so that they see assessments as a useful tool for monitoring their own learning. The study reveals that students need to be passionate about what they are learning. Backwards design (starting with the end project or goal in mind, one that accounts for students’ passions) and then determining what would be important to develop it (lessons and assessments) could be the key. Students also expressed that they do not like the lack of direction given when they have to design their own assessment task, but that helping to design the learning process resulted in feelings of pride. Students were critical of an environment that delivered content and then measured their ability to reproduce that content.

 

This study was very well done and contributed a more complete picture of what meaningful assessment looks like. It addressed how assessment can be both helpful and a hindrance to student learning (an area where many authors fall short). From this paper, I can conclude that formative assessment does not impact student learning simply by being done as much as possible, but by helping learners co-construct the learning process. The authors explain that in order to have meaningful assessment opportunities, the tasks must be supportive of learners’ needs: autonomy, relatedness, and competence. This paper talks not only about the task but also about the learner. There are no practical examples outlined in this article, but it does outline what a learning and assessment environment should look like. From this article, I have considered an alternative approach to structuring lessons and assessments. I am now considering, with a more complete understanding, how backwards design could impact learning and how assessments would be more meaningful with that end goal in mind (one that is relevant to students’ needs and interests). This differs from the approach I have been using, where I take a curriculum goal first and then try to make it relevant to students.


© 2020 by Christina Nyentap, University of Ottawa
