Instructional objectives have some practical use and come from a process known as “needs analysis”.

Instructional objectives have some practical use and come from a process known as “needs analysis”. That is the last of 12 truisms that have influenced how I have been doing my work as a learning and performance consultant over a number of years. I hope you will feel free to support it with a book reference, a personal experience or a case study, or knock it down using similar sources of evidence.

I have devoted a good deal of energy to persuading people that instructional objectives have some practical use. I strongly believe that they are not a tick-in-the-box requirement, an unnecessary chore or a lucky charm. I regard them as the most fundamental part of specifying learning, from which all other considerations must issue. The process of analysis is not one thing but many, each of which can influence instruction and testing, and so frame and shape a learning experience.

I see measurable or observable objectives as the bridge between what a learner takes from the content of a course and what has to be tested. Without them there is bound to be a large amount of irrelevant course material and learners will lack the sense of priority that will enable them to focus their efforts.

I don’t include detailed lists of objectives in what the learner sees, but I always make them explicit because I feel sure that when learners know the general purpose of a task and the standard they need to attain in completing it, their confidence improves and anxiety decreases.

It may be useful to include formal objectives to help explain very complex learning materials, but not for simple assignments that are easy to understand and follow.

It is certainly easier to write precise objectives for concrete, observable tasks than for abstract or academic content or for activity that aims to manipulate attitudes and emotions, but nowhere can I find the suggestion that objectives are more useful in one domain than another.

A widely recognised behavioural model for framing objectives is Robert Mager’s performance – conditions – standard, although many do not associate it with that source. I find it works well to simplify it a little in plain English using the template “You’re going to do this… so you’ll need this… and here is how we’ll know you’ve done it.”
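The three-part template lends itself to a simple structure. As a minimal sketch (the class and field names, and the example objective, are my own illustration, not part of any standard tool):

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """One instructional objective in the three-part performance-conditions-standard form."""
    performance: str  # what the learner will do
    conditions: str   # what they will be given, or need, to do it
    standard: str     # how we will know they have done it

    def plain_english(self) -> str:
        # Render the objective using the "You're going to do this..." template.
        return (f"You're going to {self.performance}, "
                f"so you'll need {self.conditions}, "
                f"and here is how we'll know you've done it: {self.standard}.")

obj = Objective(
    performance="land the aircraft in a crosswind",
    conditions="a full-motion simulator and the standard checklists",
    standard="touchdown within the marked zone on three consecutive attempts",
)
print(obj.plain_english())
```

Writing an objective this way forces each of the three parts to be stated explicitly, which is exactly where vague objectives tend to fail.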

So let me conclude this journey around the truisms I’ve encountered over the past several years by offering a final nine straw man statements for you to craft into a permanent figure or blow into dust.

  1. Course objectives come from analysis
  2. Analysis takes many forms – goal analysis, performance-gap analysis, stakeholder analysis, audience analysis, feasibility analysis, organisational analysis (readiness to adopt and support), content analysis, job analysis, task analysis
  3. In combination these forms of analysis provide objectives that form the basis for designing, developing and implementing instruction and assessment, and for quantifying and valuing the results they deliver. The effectiveness, fitness and efficiency of a learning strategy, materials or course rely upon this.
  4. The quality of a learning programme is in direct proportion to the adequacy of its objectives. It is barely possible to develop a relevant course and adequately test its users without a proper analysis of needs.
  5. It is quite common to find courses and learning materials where the content does not match the stated objectives.
  6. Some trainers and designers misuse objectives by displaying them as a kind of decoration, makeweight or rubber-stamping of course design.
  7. Courses with unclear objectives usually contain irrelevant information or omit vital information.
  8. Without clear objectives you cannot create valid and reliable assessments.
  9. Every objective in a programme of learning should be assessed in some way.
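Statements 8 and 9 taken together imply a simple audit: every objective should map to at least one assessment, and every assessment should trace back to a stated objective. A hypothetical sketch of that cross-check (the data and variable names are mine, for illustration only):

```python
# Map each assessment item to the objective(s) it claims to measure.
assessment_map = {
    "quiz_q1": ["obj_safety_rules"],
    "sim_check": ["obj_land_crosswind", "obj_use_checklist"],
}

# The full set of objectives stated for the programme.
objectives = {"obj_safety_rules", "obj_land_crosswind",
              "obj_use_checklist", "obj_brief_crew"}

covered = {obj for objs in assessment_map.values() for obj in objs}
unassessed = objectives - covered  # objectives with no test (statement 9 violated)
orphaned = covered - objectives    # tests measuring nothing that was stated (statement 5 territory)

print(sorted(unassessed))
print(sorted(orphaned))
```

Here the audit would flag `obj_brief_crew` as an objective that no assessment touches, which is the mismatch statements 5 and 9 warn about.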

Thank you for staying with me during this journey through my 12 truisms. When you are ready for the next dozen, please let me know.

Assessment is a really crucial part of learning – Part 2

Assessment is a really crucial part of learning and should be as performance-based as possible. That was number 9 of the 12 truisms I cited as having influenced how I have been doing my work as a learning and performance consultant over a number of years. I invited you to support them with a book reference, a personal experience or a case study, or knock them down using similar sources of evidence. Last time I began to express some beliefs I hold about assessment. To complete the picture I want to say that I believe the two criteria of “validity” and “reliability” are guiding principles; they are much more than academic labels and are the best measure of the effectiveness of assessment.

The concept of validity is probed by a simple question: “If you say ‘watch me’, will the observer see you doing precisely what you’re required to do in the real world?” “Watch me fill out this form while questioning a customer” is not quite the same as “Watch me fill out an image of the form on screen while reading a scenario and looking at a photograph of a customer.” At some point it is necessary to decide how far you can compromise where assessment is concerned. Is “Watch me land this simulation of a plane” quite as valid as “Watch me land this plane”?

“Reliability” is more to do with the structure and composition of the test and the conditions under which it is administered. For example, if you test someone at the start of the day when they have no other commitments in their diary and have been given time to prepare, and then you use the same test on someone you’ve taken by surprise ten minutes after their working shift has ended and they are anxious about missing their bus, should you expect the same results?

So, taking part one of this posting on assessment alongside this second part on validity and reliability, here are some statements that may be truths or may be false assumptions, and I’d welcome your views and experience.

  1. Well designed, performance-oriented tests inform learners about job requirements and guide their learning.
  2. Tests must be both valid and reliable.
  3. Learners who are frequently tested do better than those who are tested less often.
  4. Learners generally take two kinds of tests: knowledge tests and performance tests.
  5. Knowledge tests tell you whether people have learned information important for safety, and acquired knowledge that regulates their performance.
  6. Skill checks and performance assessments measure the competence of learners and reveal gaps and weaknesses in the method, media and content of instruction as well as gaps in the learner’s understanding and skills.
  7. Errors can be used not only to identify learning gaps to close, but also to motivate learners to deepen their enquiries and seek information that they appear to lack. They can also be used heuristically to give the learner, in a safe no-penalty environment, a view of what happens when you do things wrong.

Please let us all know what you think.