The Pedagogical Refinery


The Layers of Learning

How can requiring educators to apply multiple levels of questioning and thinking evolve pedagogy?

Schoology allows the Common Core State Standards (CCSS, 2010) to be linked to assessment questions. However, it gives us no way to tag the types of questions or the levels of thinking we are asking of our learners (a sketch of what such tagging could look like follows the two lists below).

Types of questions are:

  • Literal: A literal question asks the reader to recall facts explicitly stated in the text. The answer can be located “on the lines” (who, what, when, where…).

  • Inferential: An inferential question asks the reader to read “between the lines.”

  • Evaluative: An evaluative question asks the reader to decide whether he or she agrees with the author's ideas or point of view in light of his or her own knowledge, values, and experience. These questions are answered “in your head.”

Levels of thinking come from two frameworks: Bloom's Revised Taxonomy and Webb's Depth of Knowledge (DOK).

Bloom's Revised Taxonomy:

  • Remembering

  • Understanding

  • Applying

  • Analyzing

  • Evaluating

  • Creating

Webb's Depth of Knowledge:

  • Recall & Reproduction

  • Skills & Concepts

  • Strategic Thinking

  • Extended Thinking
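To make the missing feature concrete, here is a minimal sketch (in Python) of what tagging each assessment item with a question type, a Bloom's level, and a DOK level could look like. The `TaggedQuestion` class, its field names, and the sample standard code are all hypothetical illustrations, not part of Schoology's actual data model.

```python
# Hypothetical item-level tagging: each question carries a CCSS standard
# plus a question type, a Bloom's level, and a Webb DOK level.
from dataclasses import dataclass

QUESTION_TYPES = {"literal", "inferential", "evaluative"}
BLOOM_LEVELS = ["remembering", "understanding", "applying",
                "analyzing", "evaluating", "creating"]
DOK_LEVELS = ["recall & reproduction", "skills & concepts",
              "strategic thinking", "extended thinking"]

@dataclass
class TaggedQuestion:
    prompt: str
    ccss_standard: str   # e.g., "CCSS.ELA-LITERACY.RL.5.1" (illustrative)
    question_type: str   # literal / inferential / evaluative
    bloom_level: str
    dok_level: str
    points: int = 1

    def __post_init__(self):
        # Guard against untagged or mistyped questions.
        assert self.question_type in QUESTION_TYPES
        assert self.bloom_level in BLOOM_LEVELS
        assert self.dok_level in DOK_LEVELS
```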

Though we do ask a diverse range of questions and push our learners toward multiple levels of thinking, do our assessments weight certain question types, or lower versus higher levels of thinking, more heavily than others? If so, how is that equitable? If a learner is not yet at a given level of thinking, we need to differentiate for that range of learners (one way to audit the weighting is sketched below).
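Continuing the hypothetical `TaggedQuestion` sketch above, one way to audit that weighting is to total the points sitting at each level and report the shares. This function is an illustrative audit, not a Schoology feature.

```python
from collections import defaultdict

def point_share_by_tag(questions, tag="dok_level"):
    """Fraction of total points at each level of thinking.

    `questions` is any iterable of objects with `.points` and the
    chosen tag attribute (e.g., the TaggedQuestion sketch above).
    """
    totals = defaultdict(int)
    for q in questions:
        totals[getattr(q, tag)] += q.points
    total_points = sum(totals.values()) or 1  # avoid divide-by-zero
    return {level: pts / total_points for level, pts in totals.items()}
```

If, say, 80% of an assessment's points land on Recall & Reproduction, this kind of audit makes the imbalance visible before the assessment is ever given.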

Schoology should also offer a flow-chart option for assessments that automatically assigns learners to the right level of follow-up work. For instance, if I score 95% on an assessment, Schoology would assign me a task differentiated for learners who reached my level of understanding and that continues to challenge me. If I score 75%, Schoology should not assign me the same task as the 95% learners: it may move too fast or be too complicated for me, and I need more time to work on the skills at my current threshold.
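A minimal sketch of that branching logic, assuming three illustrative score bands and assignment names (none of which are Schoology features):

```python
def next_assignment(score_percent):
    """Route a learner to differentiated follow-up work by score band.

    The cut points (90/75) and assignment names are hypothetical
    placeholders for whatever a teacher builds into the flow chart.
    """
    if score_percent >= 90:
        return "extension_task"    # mastered the skill: keep challenging
    elif score_percent >= 75:
        return "guided_practice"   # partial mastery: scaffolded practice
    else:
        return "reteach_module"    # needs more time on the core skills
```

For example, `next_assignment(95)` routes to the extension task while `next_assignment(75)` routes to guided practice, so the two learners never receive identical follow-up work.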

Should Multiple-Choice Assessments Be Used?

"The decision to use multiple-choice tests or include multiple-choice items in a test should be based on what the purpose of the test is and the uses that will be made of its results. If the purpose is only to check on factual and procedural knowledge, if the test will not have a major effect on overall curriculum and instruction, and if conclusions about what students know in a subject will not be reduced to what the test measures, then a multiple-choice test might be somewhat helpful -- provided it is unbiased, well written, and related to the curriculum. If they substantially control curriculum or instruction, or are the basis of major conclusions that are reported to the public (e.g., how well students read or know math), or are used to make important decisions about students, then multiple-choice tests are quite dangerous.

Students should learn to think and apply knowledge. Facts and procedures are necessary for thinking, but schools should not be driven by multiple-choice testing into minimizing or eliminating thinking and problem-solving. Therefore, classroom assessments and standardized tests should not rely more than a small amount on multiple-choice or short-answer items. Instead, other well-designed forms of assessment should be implemented and used properly.

In sum, multiple-choice items are an inexpensive and efficient way to check on factual ("declarative") knowledge and routine procedures. However, they are not useful for assessing critical or higher order thinking in a subject, the ability to write, or the ability to apply knowledge or solve problems" (Fairtest, 2007)

Benefits

"MCQ tests can aid teaching and learning by, for example:

  • providing students with rapid feedback on their learning

  • being continually available without increasing the marking load (if delivered online, with automated feedback)

  • lending themselves to design using quiz tool software, either within or independently of Learning Management Systems (e.g. Moodle). With such software, you can automate presentation and publication and facilitate quiz administration, scoring and feedback provision.

  • allowing objective scoring. There can be only one right answer to a well-designed question, so marker bias is eliminated.

  • allowing scoring by anyone, or even automatically, thereby increasing efficiency, particularly in teaching large cohorts

  • being immune to students' diverse capabilities as writers

  • containing recyclable questions. Across the discipline, you can progressively develop and accumulate questions in pools or banks for re-use in different combinations and settings" (UNSW Sydney, 2017).
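The "objective scoring" and automation points in the list above are easy to see in miniature. A hedged sketch, assuming responses and the answer key are simple dictionaries keyed by item ID (not any real quiz tool's API):

```python
def score_mcq(responses, answer_key):
    """Objective MCQ scoring: one right answer per item, so any
    marker, human or machine, returns the same proportion correct."""
    correct = sum(1 for item, choice in responses.items()
                  if answer_key.get(item) == choice)
    return correct / len(answer_key)

# e.g., score_mcq({"q1": "B", "q2": "D"}, {"q1": "B", "q2": "C"}) == 0.5
```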

Challenges

"It can be challenging to use MCQ tests because, among other things, they:

  • are time-consuming to develop and require skill and expertise to design well

  • are generally acknowledged to be poor at testing higher order cognition such as synthesis, creative thinking and problem solving

  • have been shown, when they are used for summative assessment, to encourage students to adopt superficial approaches to learning (see Scouller, 1996)

  • can be answered correctly by guesswork. If poorly designed, they can provide clues to encourage guessing.

  • typically provide students with little direction as to how to improve their understanding—although you can overcome this with carefully designed feedback

  • can disadvantage students with lesser reading skills, regardless of how well they understand the content being assessed

  • are very subject to cultural bias" (UNSW Sydney, 2017).

"Students learn skills and acquire knowledge more readily when they can transfer their learning to new or more complex situations, a process more likely to occur once they have developed a deep understanding of content (National Research Council, 2001). Therefore, ensuring that a curriculum aligns to standards alone will not prepare students for the challenges of the twenty-first century. Teachers must therefore provide all students with challenging tasks and demanding goals, structure learning so that students can reach high goals, and enhance both surface and deep learning of content (Hattie, 2002).                                               

Both Bloom's Taxonomy and Webb's depth of knowledge therefore serve important functions in education reform at the state level in terms of standards development and assessment alignment" (Carlock, Hess, Jones, & Walkup, 2009, p. 6).


Carlock, D., Hess, K. K., Jones, B. S., & Walkup, J. R. (2009). Cognitive rigor: Blending the strengths of Bloom's taxonomy and Webb's depth of knowledge to enhance classroom-level processes. Education Resources Information Center, 6.

FairTest. (2007, August 17). Multiple-choice tests. Retrieved from http://www.fairtest.org/multiple-choice-tests

UNSW Sydney. (2017, July 31). Assessing by multiple choice questions. Retrieved from https://teaching.unsw.edu.au/assessing-multiple-choice-questions#