These are my live-blogged notes from Tim Martin's session at ASTD Tech Knowledge, wrapping up today in Las Vegas.
Tin Can 101: It's a shared way for two systems to talk about the things that a person does, unlike SCORM, which was about content in a browser. (There's much more here: http://tincanapi.com/overview/)
Tin Can allows simulators, servers, mobile devices, etc. to communicate.
Tin Can is driven by a web-service-based solution. It uses a RESTful web service (and yes, I had to ask what this is: a widely adopted style of web service architecture. Developers, apparently, will like this: http://en.wikipedia.org/wiki/Representational_state_transfer)
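To make that concrete (this sketch is mine, not from the session): sending a statement to an LRS is just an HTTP POST to its statements resource. The endpoint URL and credentials below are placeholders, not a real LRS.

```python
# Minimal sketch: POST one xAPI statement to an LRS over REST.
import base64
import json
import urllib.request

LRS_ENDPOINT = "https://lrs.example.com/xapi"       # placeholder
AUTH = base64.b64encode(b"user:password").decode()  # placeholder credentials

statement = {
    "actor": {"name": "Cammy", "mbox": "mailto:cammy@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/code-of-conduct"},
}

req = urllib.request.Request(
    LRS_ENDPOINT + "/statements",
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.0",  # whichever version your LRS expects
        "Authorization": "Basic " + AUTH,
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the statement
```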
Two parties in xAPI/Tin Can: the Activity Provider and the Learning Record Store (LRS).
How might you use Tin Can today? Start small. Success comes from starting small, understanding something narrow, and then expanding. Instead of trying to measure everything in your organization, think about designing experiments.
This echoes something @reubentozman said in his session yesterday on Designing for Data (sorry – I didn't blog it, but I did Tweet a lot during Reuben's session): "Think about design as an experiment. Design your experiments to capture data you want/need."
Tin Can has a noun-verb-object structure (the noun is the person: an email address, employee identifier, etc.). "Cammy read a book." "Cammy ran a marathon." "Cammy presented at a conference."
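Here's my own rough sketch of what "Cammy read a book" might look like as a statement, written as a Python dict; the verb and activity URIs are illustrative, not official registry entries:

```python
# "Cammy read a book" as an xAPI statement: noun (actor), verb, object.
statement = {
    "actor": {
        "name": "Cammy",
        "mbox": "mailto:cammy@example.com",  # email address as the identifier
    },
    "verb": {
        "id": "http://activitystrea.ms/schema/1.0/read",  # illustrative verb URI
        "display": {"en-US": "read"},
    },
    "object": {
        "id": "http://example.com/activities/some-book",  # illustrative activity ID
        "definition": {"name": {"en-US": "A Book"}},
    },
}
```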
Learning Record Store (LRS) = collects statements from Tin Can. (A much better definition can be found here: http://tincanapi.com/learning-record-store/)
Organizations might use multiple LRSs – each one looking at the same set of data and presenting it in different ways for different parts of the organization. The data in the LRS then becomes interesting from an analytics view.
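A sketch of what pulling statements back out for analysis might look like (my addition; the endpoint is a placeholder, while agent and verb are standard filters on the xAPI statements resource):

```python
# Sketch: query an LRS for statements to feed an analytics view.
import json
import urllib.parse
import urllib.request

LRS_ENDPOINT = "https://lrs.example.com/xapi"  # placeholder

# Filter to one person's "completed" statements.
params = urllib.parse.urlencode({
    "agent": json.dumps({"mbox": "mailto:cammy@example.com"}),
    "verb": "http://adlnet.gov/expapi/verbs/completed",
})
req = urllib.request.Request(
    LRS_ENDPOINT + "/statements?" + params,
    headers={"X-Experience-API-Version": "1.0.0"},
)
# result = json.load(urllib.request.urlopen(req))["statements"]
```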
This page gives a really nice introduction to how you can approach the design process as an experiment: http://watershedlrs.com/site/watershedmethod.html
So the analytics could help you compare completion rates for a course designed with higher fidelity vs. lower fidelity (helping you test your hypothesis that the production value of your course does matter).
Or "People who do this and that are more successful than people who do that and not this…" Follow the learning path and watch who succeeds based on the paths.
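Here's a toy sketch (mine, not Tim's) of that kind of path analysis, assuming statements have already been pulled from the LRS as simple (actor, verb, activity) tuples:

```python
# Toy path analysis: among people who completed a given step,
# how many went on to the "success" outcome?
statements = [
    ("tim", "completed", "simulation"),
    ("tim", "reported", "violation"),
    ("ann", "completed", "text-doc"),
]

def success_rate(statements, path_step, outcome):
    """Share of actors who did path_step and also reached outcome."""
    did_step = {a for a, v, o in statements if (v, o) == path_step}
    succeeded = {a for a, v, o in statements if (v, o) == outcome}
    return len(did_step & succeeded) / len(did_step) if did_step else 0.0

print(success_rate(statements,
                   ("completed", "simulation"),
                   ("reported", "violation")))  # 1.0 on this toy data
```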
Experiment Design:
- The company wanted to look at cultural adoption of the code of conduct. They wanted people to do the right thing because they believed it, not because they were forced to. So they wanted to see the number of calls reporting ethics violations to the hotline and help desk go up.
- They had a piece of content from a rapid learning tool. The experiment compares high-fidelity content (with video and simulations) to the actual text of the code of conduct. So there are completion statements for each modality. “Tim completed this.” – you can see the depth to which Tim explored the simulation. The sim sends statements.
- For the text version you can just know if they got through all the pages of the document. So the text doc sends statements.
- Quizzes to test comprehension.
- Pre- and post-surveys to self-assess afterwards. These come in from a survey tool, which makes statements.
- Calls to the help desk can send statements. (“Tim called hotline.” Or “Tim reported violation.”)
So it's a fixed set of statements about a fixed population.
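Sketched in shorthand (my labels, not official xAPI URIs), that fixed vocabulary might look like:

```python
# The experiment's fixed statement vocabulary, one template per data source.
EXPERIMENT_STATEMENTS = [
    ("completed", "high-fidelity-module"),  # from the video/simulation content
    ("completed", "code-of-conduct-text"),  # from the text document
    ("answered",  "comprehension-quiz"),    # from the quizzes
    ("responded", "pre-survey"),            # from the survey tool
    ("responded", "post-survey"),
    ("called",    "ethics-hotline"),        # from the help desk system
    ("reported",  "violation"),
]
```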
They gave one part of the company the high-fidelity content and the other part the text. The hypothesis: high-fidelity content is going to help people get it better.
The high-fidelity group follows through on the path and completes different tasks, surveys, etc. And if the data shows that high-fidelity content creates better outcomes (e.g., more calls to the help desk), then the business can make different decisions.
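That comparison boils down to something this simple (made-up numbers, my own illustration):

```python
# Minimal hypothesis check: did the high-fidelity group produce more
# hotline calls per learner than the text-only group? Numbers are made up.
groups = {
    "high_fidelity": {"learners": 500, "hotline_calls": 45},
    "text_only":     {"learners": 500, "hotline_calls": 20},
}

for name, g in groups.items():
    rate = g["hotline_calls"] / g["learners"]
    print(f"{name}: {rate:.1%} of learners called the hotline")
# If the high-fidelity rate is reliably higher, the hypothesis holds and
# the business can justify the higher production cost.
```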
The first meetings are about asking questions like “So where do you think you can make a difference in the organization?”
Some designers just think of themselves as
designing content in a tool. Instead, let’s think about designing experiments.