My live-blogged notes from a session with Ethan Edwards of Allen Interactions (www.alleninteractions.com).
Same issues we’re dealing with in elearning today that we were dealing with 30 years ago.
Better elearning, better processes!
ADDIE…we’ve heard of it, is it iterative? etc.
Regardless of what you do – you need analysis/backgrounding.
Challenges of design for elearning:
- instruction must stand on its own
- can’t be adjusted or fixed on the fly (in the classroom we can say “let’s skip these pages”; if we make mistakes in analysis, the learner is stuck…)
- must account for all things instructor might do
Does your org think elearning is formatting text, decorating screens, and testing? (So much elearning is like scrapbooking.)
We want to create experiences that will create lasting and beneficial change in the performance environment
Where does analysis go awry?
- Often it’s skipped altogether. Many authoring systems don’t support design – they provide formatting options.
- Or it’s perceived that analysis has already been done by SMEs. SMEs view analysis as a content dump, not a dialog. Documentation of content does not equal design.
- Course already exists – conversion doesn’t require analysis, right?
These are failures. We shouldn’t accept them. Often trainers are too nice! Need to tell people when it’s a bad idea.
The Five Questions
1. What do we expect learners to be able to do after completing the course that they can’t do now? The purpose of elearning can’t be to create expertise. Expertise takes years to develop. Instead we should be aiming to create minimal competence.
Ask: why is this important? Create a table: on the left, list content “requirements”; along the top, list the performance outcomes; then put checks across the grid to identify what you really need to include. There will be rows with no checks at all.
The pushback on this is “well, it doesn’t hurt to put it in there.” But every page that makes the learner say “ah, I’ll just skip this” makes it more likely they’ll skip everything. Don’t overstuff the elearning with unnecessary content! If the SME still wants it, put it in resources.
You’ll find outcomes that are desired that have no content to go with them.
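The grid technique above can be sketched in a few lines of code. This is just an illustration of the logic – the content items and outcomes below are made-up placeholders, not from the session:

```python
# Sketch of the content-by-outcome analysis grid.
# Rows = content "requirements"; columns = performance outcomes.
# All names here are illustrative placeholders.

content_requirements = ["history of the product", "router setup steps",
                        "troubleshooting flowchart", "corporate mission"]
performance_outcomes = ["install a router", "diagnose a dead connection"]

# Checks: which content actually supports which outcome.
grid = {
    ("router setup steps", "install a router"): True,
    ("troubleshooting flowchart", "diagnose a dead connection"): True,
}

# Rows with no checks at all: content with no supporting outcome.
# Cut it, or move it to resources if the SME insists.
unsupported = [c for c in content_requirements
               if not any(grid.get((c, o)) for o in performance_outcomes)]
print(unsupported)  # content that doesn't earn its place

# Columns with no checks: desired outcomes with no content to teach them.
uncovered = [o for o in performance_outcomes
             if not any(grid.get((c, o)) for c in content_requirements)]
print(uncovered)  # outcomes still needing content
```

Running this flags “history of the product” and “corporate mission” as content with no outcome behind it – exactly the rows Edwards says to push back on.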
Most compliance training actually does have value to your org (don’t just do it because you have to).
Because elearning costs money to create, there’s no reason to create it unless there’s an outcome.
Even if your main goal is knowledge (Edwards doesn’t think it’s possible to have pure knowledge objectives).
The chances of a learner mastering a performance objective when there is no opportunity to practice are slim.
So how do we translate moving a cursor into something with meaning? (All the learner can do is point, click, type.)
If what you have people do is click a, b, or c – that doesn’t translate into installing a router. There’s little opportunity for transfer.
I have never been asked a multiple-choice question in my daily life. Yet training prepares people to answer MCQs.
2. What are consequences to the learner if they fail to master the intended outcomes?
Often the learner’s highest level goal is to get through this course quickly!
Need to provide meaningful consequences.
Artificial consequences in elearning: a failing score, a Jeopardy-style game where you lose money (that’s not a meaningful consequence).
Risk and consequences are the strongest tools to create motivation and buy-in. (Remember: consequences to YOU or the ORG are often different from the consequences to the LEARNER.)
Understand what the learner is motivated by. Relate those consequences to the real-world environment.
If I learn through the exercise that you’re going to tell me the answer, then I don’t do the work to figure out the answer. Make the learner figure it out.
3. Can you show me an active demo or a detailed simulation, or provide an opportunity to directly observe the desired performance?
The expert assumes they know – they don’t always know what’s changed, or haven’t done it in years.
Include recent learners in your analysis. They remember what they didn’t know. They are now minimally competent (our target!)
4. What specific performance mistakes do new learners usually make?
The content is often not the challenge.
5. What tools, resources, job aids, or help do successful performers (or even experts) use to do their tasks?
Often you want to help the learner use those tools.
Interactivity shouldn’t be designed to elicit correctness – instead, encourage lots of mistakes! We add to our knowledge when we make mistakes.
So you might need a final test to “assess” – but don’t do that throughout; the learner feels too observed. Let the learner mess around.
You want the right things to be hard. In the real world, there’s no penalty for using the employee manual. Let the learners use their manuals in the elearning!
Time now for some demos on the Allen Interactions site: http://www.alleninteractions.com/content/case-studies-and-demos
Showing an example where the exercise is to fill out field notes (for a cop interviewing gang members) – there’s no right or wrong, but the learner can compare their text input with an expert’s input.
Mastery is a better model – keep people working until they can prove they can do it without error.