
Tuesday, December 13, 2011

Ruth Clark: eLearning and the Science of Instruction: A 10 Year Retrospection

These are my live blogged notes from December 13, 2011: eLearning Guild Thought Leaders Webinar

Ruth Clark and Richard Mayer’s book e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning – now in its 3rd edition.

An expert in evidence-based elearning – author of seven books!

Ruth@clarktraining.com

Let’s reflect back on the three editions of the book – what’s stayed the same and what’s changed?

Technology has changed!  Smart phones…search functions…Facebook/Web 2.0…gaming…the cloud.

And what’s happened in your own life – new job? new family? losses? gains?

Some surprises:

  • Virtual classroom – SLOW adoption
  • Online collaborative learning? We’ve seen the tech expand; what about the research?
  • Multimedia principles – basically the same
  • Growth in scenario-based learning

In 2001, about 11% of delivery media was elearning; by 2009 it had grown to 36% – a gradual decrease in instructor-led training.

Goal of the books – help practitioners apply evidence-based elearning guidelines to the design, development, and evaluation of multimedia learning ---> to help us move toward professionalization!

So are we an emerging profession? Are we still order takers? Or are we growing to become business partners?  (64% of audience said – we’ve become more professional in the last 20 years as a profession, but not much…)

First half of book based on Mayer’s multimedia research…

Section 3 (of book):

Summary of Mayer’s research (use of graphics and words) – multimedia, contiguity, modality, redundancy, coherence, personalization, segmentation, pretraining.

Working memory – we can hold five chunks (we used to say seven, plus or minus two).  The cognitive model of working memory hasn’t changed that much.

But now we’re talking more about cognitive load.  (We didn’t talk about that at all in the first edition.)  Credit to Sweller.

3 Forms of Cognitive Load:

  • intrinsic (the complexity of your content) – listening to the audio, translating it, constructing a response, and pronouncing it quickly – the number of cognitive activities you have to perform.  When you have greater intrinsic load, you have to attend more to managing cognitive load.
  • extraneous – load put onto the learner by poor design.
  • germane – the good stuff.  When people are learning, we want them engaged with their working memory.

Your job – to manage intrinsic load (especially when it’s high), keep extraneous load low, and maximize germane load…(you’ll see more about this in the 3rd edition of the book).

Eye Tracking Research

Meta-Analysis

Right now just have a few experiments in this area…

Section 3 (of book): Use of Key Methods

Evidence around practice, collaboration, and learner control in elearning.

Research on examples and worked examples – where was learning better? (A) example, practice, example, practice… or (B) example, practice, practice, practice? Does more practice lead to better learning? –> Worked examples (A) meant less training time, better outcomes, and fewer mistakes on the test. Combining worked examples with practice gives you better and faster learning.

Better learning transfer when you distribute practice.

Worked examples lead to better learning outcomes for novices…but having worked examples for experts actually depressed their learning outcomes (the expertise reversal effect).  Some instructional methods for beginning learners may degrade learning for experts (they might disrupt the experts’ own working models).  So as we work with experts, we should FADE back worked examples…

Some problems with worked examples:  learners can gloss over them.  Need to make worked examples more engaging.  Add a self-explanation question – a question that forces the learner to process and think deeply.  (e.g., in a scenario program, ask “why is it important to verbally recap the doctor’s questions about contraindications?”…)

Tests can be work-related projects that demonstrate quality.

The research on online collaboration

Evidence mostly around collaboration IN THE CLASSROOM vs. in an elearning setting.  Need more research in this area.

Kirschner (2011 study) – collaboration in problem solving – notes that collaboration takes cognitive resources.  Do you benefit enough? If the problems are relatively easy, then learning is better in a solo setting.  If the problems are more complex, then collaboration will lead to better learning.

Architectures

Three approaches to design.

  • Receptive – little overt engagement (as in this webinar) – documentaries, college lectures, books – these are mostly briefings.
  • Directive – traditionally used for procedures – have to do each step exactly in order.  Used for software training.  Instruct, Demo, Practice, Feedback.
  • Guided discovery – emerging in last five years – some people call this immersive learning.

Poll question – which architecture is predominant in your org’s elearning? (Directive = Captivate; low percentage of guided discovery…) 42% receptive, 45% directive, 4% guided, 8% use all three equally.

A Look at Guided Discovery

(She’s showing a demo of guided discovery of a car repair – virtual shop that you have to go into and diagnose).

Good for critical thinking and problem solving. 

“Experience packaged in a box.” – simulations.

Does it work?  Research on part-task training (more traditional directive learning) vs. whole-task training (guided discovery) – better transfer with whole-task training…(e.g., how to use Excel to create a budget – tested to determine how well learners could apply what they’ve learned in a different setting.)

Discovery vs. guided discovery: Mayer said “discovery learning does not work” – a meta-analysis of the discovery approach showed much better learning from direct instruction or guided discovery.  Pure discovery = let them explore and go here and there.  It doesn’t work as well – learners need guidance.

Scaffolding – we need to do better scaffolding: provide guidance and structure.

Start with simple cases and move to more demanding ones.

Case 1: demo; Case 2: let the learner complete part; Case 3: have the learner do more; Case 4: have the learner do them all.

Ruth Clark’s new book – the essentials of scenario-based elearning – she’s finishing it up now; it will include scaffolding in Scenario-Based eLearning (SBEL) – coming out next year.

Thursday, May 27, 2010

Book Review: Ruth Clark’s Evidence-Based Training Methods

I’ve been trying to read a lot lately – books, not just blogs. 

And I do find that the age-old book report is a great way to synthesize and encode all those juicy learning nuggets.

My latest review: Ruth Clark’s Evidence-Based Training Methods: A Guide for Training Professionals

Recommendation:  Thumbs Up. 

I presented a webinar today and found myself quoting liberally from this book.  So if that’s not a good indicator of its usefulness, I don’t know what is!

Clark effectively summarizes current learning research, covering important topics like use of audio and graphics.  And bashes the learning styles myth.  A lot of the material was familiar to me from having read E-Learning and the Science of Instruction and from participating in her session of the same name last fall at DevLearn (my notes are here).

Click here to read my review of Evidence-Based Training Methods on the Kineo site.

Wednesday, November 11, 2009

Ruth Clark: Evidence Based E-Learning #dl09 #dl09-104

These are live blogged notes from DevLearn '09 -- session with Ruth Clark on Evidence-Based E-Learning. I arrived a few minutes late to the session and just had to dive in...

**************************

Learning Styles/the Learning Styles Myth --

Did an experiment: self-report, Barsch learning style inventory, memory test. What were the correlations? If someone said they were kinesthetic, did the other measures show the same? Instead, they found NO relationships!

(Her new book takes on the biggest myths of the field including Learning Styles. This is controversial!)

Replace it with Mayer’s Multimedia Principles

Richard Mayer (25 years of research)

Experimental evidence…

Do graphics improve learning?

(Having just read Mayer’s Multimedia Learning on the plane over here, I’m not sure if this will be a useful session for me – but I really wanted to see Ruth Clark speak!)

Multimedia Principle – having visuals results in better learning.

For whom do graphics improve learning?

experiment: visuals in the courtroom --

Judge gives instructions on self-defense to 2 juries: 90 legally untrained adults, 90 law students.

  • One got all audio instruction
  • The other was audio + visuals (flow chart)

The worst was novices with audio only. Visuals helped learners with no prior knowledge the most. The law students didn’t show much improvement with visuals.

Invest more in visuals in beginning-level courses. If people have prior knowledge in the domain, they can create their own visuals by activating that prior knowledge.

Spatial Aptitude

Are all visuals equal?

Jazz it up and make it more engaging. Are these types of visuals helpful?

Which gets better learning:

  • Base text and graphic
  • Interesting anecdotes added to base lesson

The basic version wins.

The Coherence Principle

The interest factor did not serve learning. Distracting.

Learning is better when extraneous materials are eliminated.

What is the relationship between student ratings and learning?

Liking vs. learning.

A recent study looked at thousands of surveys and examined the correlation. This was a meta-analysis of all the ratings on the courses – it looked at classroom-based learning, not elearning. The correlation between liking (ratings) and learning (tests) is:

  • Declarative learning (concepts and facts) – really small (.12)
  • Procedural learning – really small (.15)
  • Delayed procedural learning – really small

The relationship between ratings (Level 1 student rating sheets) and learning is too small to assess lesson effectiveness.
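One way to see just how weak those correlations are is to square them: r² is the proportion of variance in learning scores that ratings account for. A quick sketch (my own illustration using the correlations reported above, not something from the session):

```python
# Correlations between course ratings ("liking") and learning outcomes,
# as reported in the meta-analysis discussed above.
correlations = {
    "declarative learning": 0.12,
    "procedural learning": 0.15,
}

for outcome, r in correlations.items():
    # r squared = share of variance in learning explained by ratings
    print(f"{outcome}: r = {r}, variance explained = {r ** 2:.1%}")
```

Squaring .12 gives roughly 1.4% of variance explained, and .15 gives about 2.3% – which is why Level 1 smile sheets tell us so little about whether anyone actually learned.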

Use explanatory graphics

3 types of graphics: Decorative (generally overdone!), explanatory and representational (here’s what the screen looks like – these are important in our work)

Explanatory Visuals -- Show relationships among your content topics.

  • organizational (shows qualitative relationships among topics – tree, concept map)
  • relational (summary of quantitative, pie charts and bar charts)
  • transformational (shows change in time and space)
  • interpretive (take invisible, abstract ideas and make visible – used in science a lot to show molecules, etc.)

[Cammy sidebar: My burning question is about using visuals in storytelling/scenarios. I have a course on sexual harassment and I use a picture to help tell the story. If it’s a picture of the woman looking upset after an incident, what is that? I don’t think that’s a distracting visual, as it puts a human face to the story.]

Animation:

Which is best?

  • Visuals (animation) with narration
  • Visuals with Text
  • Visuals with Text and narration

Visuals (animation) with narration.

Modality Principle:

When modality applies -- exceptions

  • The content and/or visual are complex
  • learners are relatively novice
  • instructional pacing
  • words NOT needed for reference (important new words may need to be on screen)
  • native language

Redundancy Principle: learning is better when visuals are explained by audio narration alone rather than by narration plus identical on-screen text

We are ALL visual learners. We ALL benefit from audio!

Contiguity Principle: Put text in with the graphics (not off to the side or under the screen) – integrate text as close to the relevant visual as you can.

As you read a book, you have to turn the page to see the visual that goes with the text. Annoying.

Learning is better from integrated text.

Avoid scrolling screens where the text is at the bottom and the visual at the top…

When Less is More (new research)

1. complex vs. simple graphics

Comparing a line drawing to a realistic 3D drawing – lean vs. rich multimedia.

Kirsten Butcher, University of Colorado study – where was learning best?

  • text
  • text & simple graphic (this was more effective!)
  • text & complex graphic

2. Stills vs. animations?

Are animations better? How a toilet works…

stills with text vs. animated with audio?

Four different experiments were done.

STILLS fared better in all of them. Animation can present too much visual information, and it’s often out of the learner’s control.

Two theories about why:

1) Animations can impose extraneous mental load – you have to hold animation frames in memory to link one to the next.

2) Animations can promote a passive mental state (vs. mentally animating or self-explaining the key steps) – we go into a couch-potato state with animations…

(The discussion is now digressing to whether or not people know how toilets work…)

Are animations better? (part II)

Stills vs. animation to learn a procedure – animations were MUCH better. Mirror neurons – the part of our brain adapted to learn movement – don’t tax working memory.

Learning of motor skills is better when illustrated with animation vs. stills.

3. Learning from examples in text, video and animation

Which led to better learning?

Which was rated higher?

Animation examples were the highest on both, followed closely by video. Visual examples were more effective, but animation and video were not statistically different from each other.

(Gotta run!)

Friday, April 03, 2009

Visuals and Audio in eLearning (Ruth Clark)

Donald Clark posted a link to a lovely little article by Ruth Clark, "Give Your Training a Visual Boost," in the April '09 edition of ASTD's T&D.

The article contains such gems as:

"Decorative visuals defeat learning."

and

"The least successful learning resulted from text and audio repetition of that text."

This would be a great article to forward to a client.  You know the one.  They want audio narrating that long paragraph of text that they think should also appear on the screen, in order to appeal to different learning styles...