Summary
Frick and Boling’s work holds a strong underlying parallel to Mager’s Tips on Instructional Objectives: Establish a sense of direction and a set of standards for success before foisting a learning product on learners (in this case, on users). In both cases, the authors set out a systematic framework for critical thinking about and evaluation of instruction. The obvious implication is that such critical thinking is often lacking, or lacks the rigor needed to produce an effective learning experience. As they state on p. 4: “You can do this process if you have some patience, are willing to learn, have some common sense, and become a good observer. A lot of this boils down to having an inquiring mind. You make empirical observations to answer questions and make decisions.”
They call for “inquiry-based, iterative design and development” that avoids common problems such as a lack of user input, inadequate design and site testing, failure to identify problems before a site becomes public, and undetected problems or needed repairs after launch.
Frick and Boling strongly imply that design of web-based instruction is, ideally, a team process, with team members dedicated to the areas of analysis, instructional design, information and graphic design, web technology, evaluation and usability testing, and web server and system administration.
Like Mager, they begin by focusing on instructional goals. Unlike Mager, they advocate identifying and working with stakeholders (including students and one’s supervisors) to determine instructional goals. They also take a significant step beyond Mager by advocating authentic assessment: eliciting “student performance and/or artifacts which will indicate whether or to what extent learners have achieved the goals you had in mind.” (p. 10) This parallels Mager’s focus on behavior in instructional objectives – especially his delineation of eliciting overt behavior to provide evidence of covert (unobservable) behavior – but goes beyond Mager, who settles for behavioral indicators as the goal for instruction and the basis for judging students’ success. “Observed behavior and products created by students are indicators,” Frick and Boling say by contrast. “They are not the goals.” (p. 10)
Further points for the analysis phase:
- Learner analysis: “What are relevant characteristics of students that will help determine what instruction may be needed in order to reach the goals of instruction?” (p. 14) Mager does not address this or any other form of analysis.
- Context analysis: Why use the web for this instruction? What other resources will it require?
- Most importantly, self-analysis: are you ready to use new resources or try new or nontraditional ways of teaching?
Critique
I find this systematic approach highly effective and more robust than Mager’s concepts, focusing as it does on authentic assessment and “indicators” where Mager focuses only on behaviors. The richness of the inquiry-based approach makes me want to learn more; these chapters also provide a sense of how very much there is to master in designing online learning. I wonder, too: What about a one-person design shop? In my industry, many trainers work alone and occasionally lean on the expertise of others – while web-based learning is provided almost exclusively by institutions. I’ll be watching to see how much of Frick and Boling’s systematic approach I can accomplish in my own workplace, where I will largely “fly solo” in training design.
Kevin - I like how you compared key points in this reading to the Mager reading. Personally, I found the Frick and Boling reading lacking in the learner analysis section. They basically recommend that you try to teach the lesson at least once, so you can better understand your learners. It's been my experience that e-learning developers are rarely given this opportunity but still need to understand their learners. What do you think?
I've never delivered e-learning, but built a prototype last semester for an online writing course. My overall impression is that, for professional development institutions, such learning is framed by overall knowledge of the industry in question, its demands and challenges for growth and success. I've seen institutions employ "what do you want to learn" questionnaires to augment their knowledge of industry needs and trends, but it seems to me that there's a huge amount of guesswork/intuition/appealing to the masses going on, rather than designing for specific individuals.