Assessment is a really crucial part of learning – Part 2

Assessment is a really crucial part of learning and should be as performance-based as possible. That was number nine of the 12 truisms I cited as having influenced how I have been doing my work as a learning and performance consultant over a number of years. I invited you to support them with a book reference, a personal experience or a case study, or knock them down using similar sources of evidence. Last time I began to express some beliefs I hold about assessment. To complete the picture I want to say that I believe the two criteria of “validity” and “reliability” are guiding principles; they are much more than academic labels and are the best measure of the effectiveness of assessment.

The concept of validity is probed by a simple question: “If you say ‘watch me’, will the observer see you doing precisely what you’re required to do in the real world?” “Watch me fill out this form while questioning a customer” is not quite the same as “Watch me fill out an image of the form on screen while reading a scenario and looking at a photograph of a customer.” At some point it is necessary to decide how far you can compromise where assessment is concerned. Is “Watch me land this simulation of a plane” quite as valid as “Watch me land this plane”?

“Reliability” is more to do with the structure and composition of the test and the conditions under which it is administered. For example, if you test someone at the start of the day when they have no other commitments in their diary and have been given time to prepare, and then you use the same test on someone you’ve taken by surprise ten minutes after their working shift has ended and they are anxious about missing their bus, should you expect the same results?

So, taking part one of this posting on assessment alongside this second part on validity and reliability, here are some statements that may be true or may rest on false assumptions, and I’d welcome your views and experience.

  1. Well designed, performance-oriented tests inform learners about job requirements and guide their learning.
  2. Tests must be both valid and reliable.
  3. Learners who are frequently tested do better than those who are tested less often.
  4. Learners generally take two kinds of tests: knowledge tests and performance tests.
  5. Knowledge tests tell you whether people have learned information important for safety, and acquired knowledge that regulates their performance.
  6. Skill checks and performance assessments measure the competence of learners and reveal gaps and weaknesses in the method, media and content of instruction as well as gaps in the learner’s understanding and skills.
  7. Errors can be used not only to identify learning gaps to close, but also to motivate learners to deepen their enquiries and seek information that they appear to lack. They can also be used heuristically to give the learner, in a safe no-penalty environment, a view of what happens when you do things wrong.

Please let us all know what you think.

Assessment is a really crucial part of learning – Part 1

Assessment is a really crucial part of learning and should be as performance-based as possible. That is the ninth of 12 truisms that have influenced how I have been doing my work as a learning and performance consultant over a number of years. I hope you will feel free to support it with a book reference, a personal experience or a case study, or knock it down using similar sources of evidence.

I believe that assessment is a really crucial part of learning. I try to make it performance-based as far as is possible, and I always map testing closely to the goals of a training program.

I use assessment and testing before, during and after instruction in order to judge learners’ readiness to begin, chart their progress, reveal what they are finding difficult, and provide them with individualised learning routes to overcome those difficulties.

So testing is credible and focused on performance outcomes, and those outcomes emerge from the front-end analysis of the jobs that trained individuals are required to do.

Testing can assume many different formats including simulated exercises, oral and written quizzes and tests, work-based and field assignments, classroom questions, and comprehensive checks of skills and performance. Whatever the chosen method, it is desirable, within the boundaries of what is feasible and affordable, to keep assessment as job-like as possible.

When trainers begin to add packaged learning to the mix, they often find it hard to change the way they approach the design of assessment. For face-to-face delivery they have often been encouraged to follow a 1, 2, 3 stepwise approach in which Step 1 is to state a clear objective, Step 2 is to design some content, and Step 3 is to design an assessment to test for the transfer of that content.

In an effective performance-based approach to learning the sequence of steps is different. As before, Step 1 is to state a clear objective, but Step 2 is to design an assessment that will test for the accomplishment of that objective, and Step 3 is to design just enough content to enable the subject to pass the assessment.

So much assessment looks like a multiple-choice quiz because quizzes are easier to mark by machine, and because it seems easier to test knowledge than to test skill. Skills need to be practised and observed and then measured against a standard. Knowledge is declared or demonstrated through recall. It is often separated from any context and so does not prove or depend upon any understanding. Testing this kind of discrete knowledge is easy – you just throw in the odd multiple-choice quiz and the job is done.

That’s where “reliable” and “valid” enter the equation. But we’ll come back to that later.

Heuristic teaching uses assessment as a tool to help learners discover things for themselves. Then it matters not so much whether the answer is right or wrong; it is from the finding out that the value of the learning is derived.

Performance-based assessment should make the elements of the test as much like the elements of the criterion-based objectives as efficiency will permit.

A skill check has to be hands-on; if it is not, then it cannot be valid. You would not expect to test someone’s ability to drive through the theory test alone. You might be able to test reaction time and observational acuity through a simulation, but sooner or later a novice driver has to get behind the wheel and demonstrate their skills in a natural environment.

Quizzes should be restricted to knowledge that is critical for safe and compliant job performance. If workers use manuals, reference material or job aids to find the information they need on the job, then assessment should be “open-book” and allow access to the same supports, but often it does not, and people are expected to memorise and recite facts and information that they never need to recall from memory on the job.

There is more to say. The two criteria of “validity” and “reliability” are guiding principles; they are much more than academic labels and are the best measure of the effectiveness of assessment. I’ll put those under the microscope next time, and then I’ll ask you to accept or reject some assumptions about assessment.