Monday, May 30, 2011

Week 3: Frick & Bohling chapters 1 and 2


Summary
Frick and Bohling’s work holds a strong underlying parallel to Mager’s Tips on Instructional Objectives: establish a sense of direction and a set of standards for success before foisting a learning product on learners (in this case, on users). In both cases, the authors set out a systematic framework for critical thinking about, and evaluation of, instruction. The obvious implication is that such critical thinking is often lacking, or lacks sufficient rigor to produce an effective learning experience. As stated on p. 4: “You can do this process if you have some patience, are willing to learn, have some common sense, and become a good observer. A lot of this boils down to having an inquiring mind. You make empirical observations to answer questions and make decisions.”

They call for “inquiry-based, iterative design and development” that avoids common problems: a lack of user input, inadequate design and site testing, failure to identify problems before a site becomes public, and costly site repairs or undetected problems after launch.

Frick and Bohling strongly imply that design of web-based instruction is, ideally, a team process, with team members dedicated to the areas of analysis, instructional design, information and graphic design, web technology, evaluation and usability testing, and web server and system administration.

Like Mager, they begin by focusing on instructional goals. Unlike Mager, they advocate identifying and working with stakeholders (including students and one’s supervisors) to determine instructional goals. They also take a significant step beyond Mager by advocating authentic assessment: eliciting “student performance and/or artifacts which will indicate whether or to what extent learners have achieved the goals you had in mind.” (p. 10) This parallels Mager’s focus on behavior in instructional objectives – especially his prescription to elicit overt behavior as evidence of covert (unobservable) behavior – but goes beyond Mager, who settles for behavioral indicators as the goal for instruction and the basis for judging students’ success. “Observed behavior and products created by students are indicators,” Frick and Bohling say by contrast. “They are not the goals.” (p. 10)

Further points for the analysis phase:
  • Learner analysis: “What are relevant characteristics of students that will help determine what instruction may be needed in order to reach the goals of instruction?” (p. 14) Mager does not address this or any other form of analysis.
  • Context analysis: Why use the web for this instruction? What other resources will it require?
  • Most importantly, self-analysis: Are you ready to use new resources or try new or nontraditional ways of teaching?


Critique
I find this systematic approach highly effective and more robust than Mager’s concepts, focusing as it does on authentic assessment and “indicators” where Mager focuses only on behaviors. The richness of the inquiry-based approach makes me want to learn more; these chapters also provide a sense of how very much there is to master in designing online learning. I wonder, too: What about a one-person design shop? In my industry, many trainers work alone and occasionally lean on the expertise of others, while web-based learning is provided almost exclusively by institutions. I’ll be watching to see how much of Frick and Bohling’s systematic approach I can accomplish in my own workplace, where I will largely “fly solo” in training design.

Week 3: Mager's tips on instructional objectives


I. Mager's Tips on Instructional Objectives

As a writing coach, I’m drawn to this sentence in Mager’s article: “If you don't know where you are going, it is difficult to select a suitable means for getting there.” That’s precisely the same challenge that writers face in trying to give form, theme and meaning to the information they’ve gathered, and Mager succinctly captures the sort of critical thinking task that instructional designers, like writers, must engage in if they’re to fashion a meaningful learning experience. They need to create a roadmap to guide their work. A roadmap implies that you know where you want to go. For writers, the “where” is the story’s theme, and its presumed ending. For instructional designers, it’s the intended performance they want to produce among learners.

Summary

Mager defines an instructional objective as “a description of a performance you want learners to be able to exhibit in order to consider them competent.” (p. 1) Then he adds an important caveat: learning objectives describe “an intended result of instruction, rather than the process of instruction itself.” (p. 1; italics and boldface in original) He delineates the reasons for stating learning objectives; the four qualities of useful objectives, with an in-depth examination of each quality in turn; and common pitfalls of objective writing.

1. Reasons for stating objectives:
  • They provide the basis for selecting and designing “instructional materials, content, or methods.” (p. 1)
  • They provide the basis for creating or selecting tests to allow both the instructor and the learner to determine whether the desired performance has been achieved (italics mine).
  • They allow students to determine the means they’ll use for achieving the performance. While he does not state it explicitly, this puts some aspect of shaping the instruction under the learners’ control. 

2. Qualities of useful objectives:
  • Audience: the person who will undertake the performance, i.e., the learner
  • Behavior: what the learner should be able to do as a result of the instruction; this can be the desired action(s) or its result(s). Behavior is a verb, and it must be something observable. There are two types:

1. Overt behavior: what can be seen directly
2. Covert behavior: internal behavior (e.g., thinking) that can only be inferred from actions. To create an objective for a covert action, add a verb that describes what students must do to demonstrate mastery of the covert action. Make this “indicator” behavior “the simplest and most direct one possible.” (p. 4)
  • Condition: the “conditions (if any) under which the performance is to occur.” (p. 2) 
  • Degree: the criterion for success: how well the learner must perform to be judged competent
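
To see how the four qualities fit together, here is a hypothetical objective of my own devising (not one of Mager’s examples), with each part labeled: “Given a standard invoice form and a sample customer order (condition), the trainee (audience) will complete the invoice (behavior) with no errors in any required field (degree).”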
3. Common pitfalls:
  • False performance: objectives that contain no observable behavior (performance) by which to judge whether the objective has been met
  • False givens: statements that describe the instruction itself rather than the conditions under which learners will perform
  • Teaching points: statements that describe some part of a class activity, not a desired behavior
  • Gibberish: education-speak. “It is noise,” Mager says (p. 7). Avoid it.
  • Instructor performance: objectives that describe how the instructor is expected to perform; the point of instructional objectives is to describe the learner’s expected performance
  • False criteria: criteria that fail to describe an observable degree of performance


Critique

Mager is to be commended for his straightforward presentation; he focuses on clarity and plain English so that he can be understood across a range of disciplines and contexts. He also does well to emphasize verb choice in describing desired actions, especially action verbs; “to be” verbs are of no use in such objectives because they imply a state of being rather than a behavior.

He seems to leave a bit of wiggle room for doubters on the degree of performance. “Sometimes such a criterion is critical. Sometimes it is of little or no importance at all.” (p. 5) This strikes me as unhelpful to his cause, even as it acknowledges reality. I think the better way to express his point would be to say that while in some circumstances one may have a hard time determining a desired degree of behavior, the effort of doing so can reap great rewards, even if the effort falls short.

I think the greatest value of this system is that it creates a framework that guards against laziness and a disinclination to pay attention to detail. I, for one, seem to possess a distressing measure of both traits. I think, too, that such objective writing can add immense utility and rigor to corporate training, where such attention to detail is often lacking and where the focus can be on delivery of content at the expense of creating desired, measurable performance.

Thursday, May 26, 2011

Week 2



I. Summary and critique: Merrill’s 5 Star Instructional Design Rating

Merrill’s rating system offers up to five stars for instructional design, depending on the design’s adherence to the First Principles of Instruction (Merrill, Barclay & van Schaack, 2008) from the Week 1 reading. It offers detailed criteria for judging adherence to each principle, and bronze, silver and gold levels within each star category, presumably (it is not stated explicitly) reflecting the number of criteria met for each star. The categories are:
1.     Problem (task-centered approach in First Principles): Is the courseware presented in the context of real-world problems? Does it engage learners at the problem or task level, not just the operations level (as in: step 1, step 2, step 3)?
2.     Activation: Does the courseware attempt to activate learners’ relevant prior knowledge or experience? If they have relevant experience or knowledge, are they given the opportunity to demonstrate it?
3.     Demonstration: Does the courseware show examples of what is to be learned rather than merely tell what’s to be learned? Are learners given examples and nonexamples, and shown multiple representations and demonstrations?
4.     Application: Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
5.     Integration: Does the courseware encourage learners to transfer the new knowledge or skill into their everyday life? Do they get to publicly demonstrate it? Reflect on and discuss it? Create, invent and explore new and personal ways of using it?

Merrill makes clear that the rating system is not appropriate for all instruction, including reference material, psychomotor skills courseware, and tell-and-ask “information-only” materials such as quizzes. It is, however, “most appropriate for tutorial or experiential (simulation) courseware.” (p. 1)

I find the five-star rating system thorough, complete, and even intuitive from an action/social/situative learning perspective. Even if courseware is designed for learners to interact with the material rather than with other learners or, perhaps, even a live instructor, the rating system’s attention to public demonstration of new knowledge and skills implies a level of social feedback that allows for deeper learning and situates such cognition within the learner’s community of practice.

I would point out one seeming inconsistency, however, though it may simply be due to the brevity of the description the reading presents. Merrill makes clear that “T&A (tell-and-ask) instruction gets no stars.” Yet under criterion No. 4, he clearly allows for information-about, parts-of and kinds-of practice, all of which imply a prior “telling” followed by an “asking” for a skills demonstration. Clearly, Merrill endorses instruction that goes beyond mere fact presentation followed by true/false, multiple-choice or checklist assessment. He would do well to state explicitly that the point is not merely to listen to facts and respond to questions, but to engage knowledge in a practical way and demonstrate its use in a social context.


II. Rating for the Tulane business module 
http://payson.tulane.edu/courses/ltl/projects/entrepreneur/main.swf


Five stars. (Based largely on “Veasna’s Pig Farm”)

1.     Is the courseware presented in the context of real world problems? Yes. Veasna saw a real social need as a business opportunity and acted to address it.
2.     Does the courseware attempt to activate relevant prior knowledge or experience? Yes, but weakly. The tutorial makes clear that business ideas can come from anywhere – even TV, friends or one’s current job. In that sense, it presents life experiences as relevant to identifying business opportunities.
3.     Does the courseware demonstrate (show examples) of what is to be learned? Yes. It offers models for a product, service, restaurant and retail business, and within each offers examples of a business product that must be mastered to create those businesses. For Veasna, it was the business plan.
4.     Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?  Yes. The Veasna module invites learners to manipulate various aspects of a business plan based on the pig farm, including marketing plans, income statements and operations management plans.
5.     Does the courseware provide techniques that encourage learners to integrate the new knowledge or skill into their everyday life? Yes. The final “Your Own Business” module explicitly encourages the learner to “dare to begin” and walks the learner through the steps for identifying a business opportunity, pitching a business idea, identifying and acquiring resources, and starting and managing a business.


III. Rating for Dreamweaver CS4 Essential Training at Lynda.com. 10 hours, 15 minutes
Address (IU I.D. required):

Four stars.

1.     Is the courseware presented in the context of real world problems? Yes. The course progressively builds a website for a surfboard company.
2.     Does the courseware attempt to activate relevant prior knowledge or experience? Yes, though not in a systematic way. The instructor often makes reference to “if you’ve ever tried to …” experiences, most often as a means of expressing how well Dreamweaver aids web page design or solves challenges such as creating a single CSS style sheet for multiple pages of one website (see the brief sketch after this list).
3.     Does the courseware demonstrate (show examples) of what is to be learned? Yes. Lynda.com’s practice files, downloadable with each course, provide numerous examples that are used within each lesson, or “movie,” to show how the program works. The progression within each lesson is usually presented as challenge-application-solution. Further, different lessons employ different iterations of the website under development, allowing the learner to see how the entire site is built piece by piece.
4.     Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?  Yes. The practice files allow learners to follow along and mimic the instructor’s actions during the lesson, and/or to repeat those actions as often as one wishes.
5.     Does the courseware provide techniques that encourage learners to integrate the new knowledge or skill into their everyday life? Not explicitly, and for this reason I award no star for integration. Such application is implied within each lesson and the tutorial as a whole, however, and one could easily award a star for it.
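
An aside on point 2, for readers who haven’t met the single-style-sheet challenge the instructor mentions: here is a minimal hand-coded sketch of the idea (the file names are my own inventions, not taken from the course files, and Dreamweaver generates the equivalent markup for you). Every page links the same styles.css, so one edit to that file restyles the entire site:

    /* styles.css -- the one shared stylesheet */
    body { font-family: Arial, sans-serif; }
    h1 { color: #005a9c; }

    <!-- index.html; about.html, contact.html, etc. repeat the same link tag -->
    <!DOCTYPE html>
    <html>
    <head>
      <title>Home</title>
      <!-- one shared stylesheet for every page of the site -->
      <link rel="stylesheet" href="styles.css">
    </head>
    <body>
      <h1>Welcome</h1>
    </body>
    </html>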



IV. Summary and critique: Kim and Frick's Changes in Student Motivation during Online Learning

The authors examine the literature for influences on motivation among learners who choose self-directed, web-based instruction, developing a theoretical framework organized into three major categories of factors: internal (features of the coursework that can influence motivation), external (features of the learning environment that can influence motivation), and personal (influences that arise from the learner). The authors also compare their framework to Keller’s ARCS model of motivation – Attention, Relevance, Confidence, and Satisfaction – and find numerous areas of commonality. They study the factors that predict learner motivation at the beginning, during, and at the end of e-learning courses; whether motivation changes during instruction; and the factors related to such motivational change. The study population was about 800 adult learners in the United States.

Among the authors’ major findings:
  • 94.2 percent of respondents chose online learning because face-to-face learning did not fit their schedule or was not available, or because the online learning was “convenient and flexible.” (p. 10)
  • Respondents reported relatively high motivation before and during the course; more than a third reported increased motivation during the course, while more than a quarter reported decreased motivation.
  • Learners’ motivation during the course was the best predictor of positive change in motivation.
  • Motivation at the start of the course was the best predictor of motivation during the course.
  • The course’s perceived relevance was the best predictor of motivation at the start of the course.
  • Along with relevance, learners’ competence in and comfort with technology make them more likely to be motivated when they begin a course.
  • Older learners have advantages over younger learners: they are more likely to be motivated when starting a course, and more likely to be concerned with the relevance of course content. The latter is attributed to older learners’ apparently greater knowledge of the learning their jobs require.
  • Consequently, e-learning courses should be designed to help learners stay motivated.

“The findings in this study make practical sense,” the authors write (p. 14). I would agree. People will not start or stick with an online course that they perceive as having little relevance to their work or life goals, or whose technology they struggle with. It’s unsurprising, too, that relevance is of greater concern to older workers and that they come to online learning with greater motivation. This reflects my own life experience: I look for direct benefits to my personal or professional life in my online courses, and find myself less motivated by learning for learning’s sake. I want my learning to “take me somewhere” – that is, to help me accomplish something concrete, not simply take up space in my mind.



References

Kim, K. J., & Frick, T. W. (2011). Changes in student motivation during online learning. Journal of Educational Computing Research, 44(1), 1-23. Downloaded from Oncourse April 7, 2011.

Merrill, D. (2001). Five Star Rating Scale. Downloaded from Oncourse April 7, 2011.


Sunday, May 15, 2011

Week 1: Prescriptive Principles for Instructional Design

Summary


I've read over this paper three or four times now and I'm not sure I wholly understand all that it's saying. It's often quite dense and introduces some terms without much explication, such as "principled skill decomposition," leading me to believe that the authors presume some advanced knowledge of arcane instructional terms, i.e., the kind that comes with a graduate education degree. I'm in business, not education, so I lack such a background. Also, some of the charts are quite hard to follow and decode; were I these authors' editor, I would have sent much of this back for revision and clarification. Even assuming some familiarity with the field, the writing is often opaque. Nonetheless, I'll do the best with what knowledge I have.


The authors identify five "first principles" of instruction -- principles that promote learning -- that are common to multiple theories of instructional design: a task-centered approach (seeing examples of skills, applying them, and undertaking "whole tasks"); activation (recalling or demonstrating prior knowledge); demonstration (seeing the skills in question put into action); application (using new knowledge and receiving feedback on one's performance); and integration (putting new knowledge to work in real life).


The authors then present a number of instructional theories, comparing each against the first principles, and describe a number of what I'll call "systems" for employing those principles, such as e-learning and multimedia learning. The effect is to demonstrate that the principles are embodied in a wide range of instructional design theories and can be applied in a wide range of systems.


Critique


I'll begin by noting my discomfort with the writers' presentation. I still don't know what a "whole task" is (are not all tasks, and their component parts, by definition whole?), but I will presume they speak of a sequence of actions resulting in a completed product, such as finding two fractions' common denominators or assembling a widget. In any case, my lack of knowledge may lead to errant conclusions here.


Second, I can assent to the first principles with little difficulty; in retrospect I know that I've seen them applied in my own learning experience and, without knowing it, have applied them myself in corporate training classes. If I were to paraphrase them for my own training context, they would look something like this:

  • Here's what we're going to learn how to do, and here's what it looks like when done properly
  • This new thing is a lot like what we've done before but adds some important new steps; or: This new thing changes much of what we've done before in an effort to make it better
  • Watch a step-by-step demonstration (or: Watch a demonstration of the first step; now try it yourself; repeat for all steps.)
  • Now try it yourself; I'll offer tips if you do something incorrectly or can't figure something out
  • Now let's commit to putting these new skills to work on the job.

I was pleased to see that the authors included Foshay et al.'s Cognitive Training Model (pp. 178-180). I and a team of classmates employed this model in designing a video-shooting class for newspaper writers as our project in R521, and it worked to very good effect. Relating the new skills to existing knowledge, and reassuring students that they could learn the new skills without great difficulty, proved invaluable and made the lesson more enjoyable and efficient. All went on to use the new skills on the job, sometimes repeatedly; one writer who was especially concerned about learning the new skills has become a productive photo and video shooter.


I'll be paying special attention, too, to Allen's e-learning principles (table 14.2, p. 179) as the semester progresses; these strike me as the sort of principles worth keeping in mind when designing my project for use in my workplace (the nature of the project must still be negotiated). 




References:
Merrill, D., Barclay, M., & van Schaack, A. (2008). Prescriptive Principles for Instructional Design. Downloaded May 6, 2011, from class resources in Oncourse.