Saturday, June 25, 2011

Week 7: Building and testing a web prototype


Summary
Having addressed the rigors of effective paper prototyping, Frick and Bohling now turn their attention to the need for a parallel exercise on the web, one that will reveal the limitations of the designer’s plan early in the process and prevent a cascade of problems later on. “If we learn that what we have in mind just isn't going to work very well on the Web, then we can look at other alternatives before making too big an investment.” (p. 76)

They begin with a lengthy discussion of how the web and web pages work, and of the options available for introducing interactivity, such as Java, Perl or PHP. Then they describe means of creating feedback, such as discussion boards and chats, that are outside the scope of our immediate class project. But their treatment of student interaction with the computer is right up our alley: the use of multiple-choice responses and web forms. Finally, they discuss the use of templates (either custom-built or those provided by programs like Dreamweaver). Templates allow rapid prototyping and streamline production after usability testing.
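Frick and Bohling keep this discussion at a conceptual level, so here is a minimal sketch of my own (not from the book) of the kind of multiple-choice interaction a web form might drive. It is written in TypeScript; the question text, choices and feedback strings are invented for illustration.

```typescript
// Hypothetical example: scoring a single multiple-choice response from a web
// form and building feedback to display. Not from Frick and Bohling's book.
interface ChoiceQuestion {
  prompt: string;
  choices: string[];
  correctIndex: number;
  feedback: string[]; // one feedback message per choice
}

const question: ChoiceQuestion = {
  prompt: "Which practice is a dimension of waste prevention?",
  choices: ["Reduce", "Reuse", "Recycle", "All of the above"],
  correctIndex: 3,
  feedback: [
    "Reducing is one dimension, but not the only one.",
    "Reusing is one dimension, but not the only one.",
    "Recycling is one dimension, but not the only one.",
    "Correct: all three dimensions work together.",
  ],
};

// Compare the learner's selection (e.g., the index of the checked radio
// button) with the answer key and assemble the feedback message.
function respond(q: ChoiceQuestion, selectedIndex: number): string {
  const correct = selectedIndex === q.correctIndex;
  return `${correct ? "Right." : "Not quite."} ${q.feedback[selectedIndex]}`;
}

console.log(respond(question, 1)); // "Not quite. Reusing is one dimension..."
```

In a real page, the selected index would come from the submitted form; the point is simply that this kind of feedback can be generated the moment the learner responds.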

The goal is to create a web prototype as rapidly as possible – with enough content to test the design but without every single part of the final product. The web prototype serves to test the revisions that arose from the paper prototype and to reveal any problems with the web-based design. Like the paper prototype, it’s a process of discovery, and it follows the same guidelines as the paper-based testing. Once the testing is done and the web prototype is revised, Frick and Bohling emphasize that the designer (or design team) faces a choice: whether to conduct another round of usability testing on the revised product. They endorse another round: “If you still have problems with the Web prototype, and you go headlong into production, you'll just repeat the mistakes in the product and could end up spending more time and money fixing the mistakes later.” (p. 101)

Then, when the product is revised and finalized, they advise one more round of testing if at all possible. In Chapter 6, the authors provide an extensive guide to “bug testing,” the purpose of which is to “ensure that the product behaves as specified, without deviation from the design and without errors.” (p. 106) The steps are to decide the scope of the test, create a testing matrix, and conduct repeated cycles of testing (with documentation) and fixing – followed by “regression”: taking the fully tested and fixed product and working through the entire matrix again to ensure that every part behaves as it’s designed to.
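To make the testing-matrix idea concrete, here is a small, hypothetical sketch (again mine, not the authors’) of how a matrix of pages and expected behaviors might be recorded and re-run. The pages, features and checks are invented, and in practice many of the checks would be performed and logged by hand rather than in code.

```typescript
// Illustrative testing matrix: each cell pairs a page with an expected
// behavior and a pass/fail check. After every round of fixes, the whole
// matrix is run again; a clean pass over the full matrix corresponds to the
// "regression" step Frick and Bohling describe.
type Check = () => boolean;

interface MatrixCell {
  page: string;     // e.g., "quiz-1.html" (hypothetical file name)
  feature: string;  // expected behavior
  check: Check;     // result of testing that behavior
}

const matrix: MatrixCell[] = [
  { page: "index.html", feature: "links to every module", check: () => true },
  { page: "quiz-1.html", feature: "submit shows feedback", check: () => true },
  { page: "quiz-1.html", feature: "back link returns to index", check: () => true },
];

// Run every cell, document the result, and collect failures for the next fix cycle.
function runMatrix(cells: MatrixCell[]): MatrixCell[] {
  const failures = cells.filter((cell) => !cell.check());
  console.log(`${cells.length - failures.length}/${cells.length} checks passed`);
  return failures;
}

const remaining = runMatrix(matrix);
console.log(remaining.length === 0 ? "Regression pass is clean." : "Fix, then re-run the full matrix.");
```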

Critique
I must confess to two basic reactions to these chapters. The first is that, whenever I encounter lengthy descriptions of web programs and their capabilities, my eyes glaze over quickly. I have to read repeatedly to comprehend, and often find my comprehension limited nonetheless. This stuff ties my brain in knots very quickly. The second is that, the longer I read (especially Chapter 6), the more I thought, “You’ve got to be kidding.” Perhaps in a perfect world, and certainly in a university setting, but in my workplace I can’t take the time to test a product repeatedly, nor can I count on a team with which to test it. As I suspect is the case in most settings, the designer/tester is a one-man band. What is realistic in my setting is to design the product, test it, revise it and then launch. Each user then becomes an iterative tester, and the product is debugged as it’s used. In my workplace, people will be quick to share the user experience, both good and bad, so one can count on feedback. Further, the product can be launched with the public caveat that it will be improved as needed – which implies the need for a feedback mechanism embedded within the product.

Lest I seem to be dismissing Frick and Bohling’s system, I am not. In fact, as with so many things, I see it paralleling the sort of advice I give to writers and try to follow myself. Once you’ve created a draft (a prototype), read it critically and/or find a trusted reader, then revise. Then read it critically again. And, if possible, again and again, in different settings and on both paper and computer screen to vary the user experience. Some people read their stuff aloud in an effort to find rough spots and errors. Whatever works. Even so, I recognize that for many people one draft and one revision is all they have time for before they must hand it off to an editor for a final read. That’s one reason why more than one editor sees it before publication. So I find this system to be sound, as much as time permits. I simply think that any repeat testing before launch would have to be done by me, on my own time.

Sunday, June 19, 2011

Team project reflection


Looking back, I’d have to say that the team project was far more challenging than I would have expected, and on many levels. This was my first experience building an interactive online course, so I suppose that discomfort should be expected. Nonetheless, I struggled in a few areas.
  • Technology: I don’t know why, but when I meet a new technology I face an extensive learning curve, and it’s only made worse by my underlying suspicion that, somehow, programs are set up to trap new users in dead ends. As a result, I was fiddling with the Hotpotatoes-based quiz until the last day. I never could get it to generate a proper index page, which was a small matter in the end. I was also disappointed that it wouldn’t allow me to create multiple-choice quizzes with multiple wrong choices. I think it’s better for the learner to reason their way through a larger set of choices than only three or four, especially when they’re being asked to enumerate three dimensions of a single problem, such as sustainability or the three R’s of waste prevention. I wanted to get learners to engage each dimension separately, not as a set, and Hotpotatoes seemed inadequate to the task. As for Dreamweaver, after a couple of years of fiddling with it I’m still learning how little I truly know. I’ve never worked with templates, created multiple site folders to a single server address, or uploaded via Dreamweaver; I’ve always used Cyberduck. I’m still not sure I understand the whole folder thing, but I think I can see its value, and uploading with Dreamweaver is much faster. I wonder if I can use it to go back and re-upload previous class projects that I’ve overwritten with later Cyberduck uploads? I think I’ll need that skill to develop a portfolio.
  • Learning: As a trainer, I’ve long had the luxury of interacting with students in a face-to-face setting. Even last spring, when I created a prototype for an online course, it assumed extensive instructor interaction, both synchronous and asynchronous (such interaction was required, in fact). This is the first time I’ve tried to create learning where no instructor was present, and it has felt unnatural. I don’t think I’m terribly good at it, and it has made me doubt that I can build a useful final project. I have multiple possible subjects at hand as a longtime writing coach, but I’m hard-pressed to imagine how I can make any of them happen without direct interaction. I can’t help but think: doesn’t this sort of learning need to be reserved for fairly simplistic subject matter? This is likely to be a longtime challenge for me.
  • Personal: I’ve struggled greatly to attend to this class while balancing work and life demands (somehow they seem greater during the summer), and I came within a hair’s breadth of withdrawing from the course. Now I’m a couple of weeks behind as a result. Between that fact and my lack of confidence in building a project that meets the course objectives, I feel like Sisyphus pushing the rock up the hill with little hope of reaching the top. Still, I’ve found that I commonly go through some sort of mini-crisis in each course I take before I finally stumble upon a workable project.
  • Teamwork: Once again I’ve seen how incredibly valuable it is. Mikah did the bulk of the design work; I simply helped with the words and provided a couple of images. Had I been on my own, I might have needed the full semester to produce this project. Understanding how to work with templates alone would have taken me a week. It’s great to have a teammate whose skills make up for your weaknesses.

Week 6: Effective, efficient, engaging instruction



What Makes e3 (effective, efficient, engaging) Instruction? By M. David Merrill



Summary
Merrill frames his thesis as an examination of BYU Hawaii’s efforts to increase both the quality of instruction and the number of students who are reached by distance learning. In doing so, the university employs three types of instruction: problem-centered, peer-interactive and technology-enhanced.

First, he argues, such instruction has the advantage of making use of mental models for memory processes, which are generally agreed to be more stable (less prone to forgetting) than associative memory, which is tied more closely to memorization. Problem-centered learning allows students to mentally assemble the various dimensions (components, in his words) of a problem into a whole, thus creating a new mental model or revising and building upon an existing one. Merrill also argues that such learning is highly motivating, in that learners see themselves doing something they could not do previously. Second, peer interactivity – sharing, collaboration and critique – allows learners to test and refine their mental models, use their knowledge in new situations and employ the sort of teamwork that is required in the real world. Third, technology allows instruction to be managed in ways that promote problem solving and meaningful interaction.

Merrill describes the steps in problem-centered learning as showing, then doing: Demonstrate the solution to the problem, teach the knowledge and skills (called component skills) needed to solve it, then use peer interaction to involve learners in problem-solving. “Peer interaction is most effective when in the context of solving real problems or doing real tasks,” Merrill argues (p. 4). This takes place at three levels: acquiring knowledge or skills and applying them to a single solution; discussing and defending different solutions to achieve a consensus solution; and “critiquing another solution based on their understanding of the problem and possible solutions.” (p. 4) Merrill argues that a sequence of three or more problems is most effective for learning the needed skills; new skills can be demonstrated and applied in each sequence as needed to complete the whole task, or set of tasks.

As for the instructor, Merrill says his/her role is to develop and implement the course, and shift during course delivery from presenting information to guiding and coaching learners in their interaction.

Critique

It’s easy to assent to Merrill’s thesis, especially his appeal to mental models and his assertion that “problem-centered, peer-interactive instruction is more motivating, produces better learning, and increases learner motivation.” (p. 2) Indeed, his description of problem-centered instruction, especially as compared to problem-based instruction, makes entirely good sense and is inherently appealing; people want to be taught how to solve real problems, especially in business settings. What goes unaddressed in such a short treatment is whether such an approach is best suited for teaching tasks rather than concepts, or is suited at all for addressing ill-defined problems. It does seem, on the surface, to be skewed in favor of task instruction, and offers little clue to its applicability to ill-defined problems and brainstorming. I would tend to think it could be applied across those other domains, but would appreciate a fuller treatment, with examples, that illustrates such uses.

Finally – though to me the differences matter little in the real world – Merrill seems to blur the line between cognitive learning and constructivist learning, appealing both to the construction and use of mental models and to collaborative, social knowledge construction. As a trainer in a corporate setting, I’m inclined to embrace such blurring if it’s proved to work, as it apparently has in Merrill’s setting.

Week 5: Paper prototyping

Snyder: Making a paper prototype


Summary
Snyder offers a surprisingly robust system for paper prototyping. She suggests using white poster board as a background upon which other pieces can be placed – almost as a template that allows pieces to be swapped out as needed – plus blank paper for larger pieces and note card stock or index cards for smaller pieces. She notes that a background aids both the prototype designer and the test subject, by providing a “reality check” about what can fit on the screen and by framing the user’s visual experience as a representation of a computer screen. (This is one of the most useful takeaways for me.) This “screen real estate” check can be especially valuable in evaluating designs for mobile devices. Snyder also suggests removable tape to represent radio buttons, checkboxes and text fields, and index cards for dialogue boxes. New pieces can be introduced manually to simulate links when the user navigates by pressing a “button” or choosing a menu option. Snyder emphasizes that it’s important to represent the state of each button or menu item so that the user doesn’t have to remember his or her choices. Also, and more usefully for the designer, “sometimes it's also possible to miss subtle problems unless you have responded to all the user's actions in the exact order they happened.” (p. 83)

Critique
Snyder’s treatment is far more robust than I would have imagined, though her easygoing and often fun approach (scissors: “Don’t run with them!”) makes it quite accessible. When I first heard the term “paper prototype,” I imagined something that was hand-drawn on a single piece of paper, without any parts to switch out as the user navigates the “site.” I’ve been far too naïve about this. I’m going to have to devote serious thought to my prototype content and presentation, as opposed to simply drawing some images and foisting them on a test subject. It’s also comforting that she emphasizes that artistic ability is not needed to make a prototype. That’s reassuring for someone like me who isn’t terribly good at drawing. Of all the parts listed, I think I’ll be using the expandable dialog boxes/expandable lists the most for my grammar exercises, so it’s great to have these sections to refer to later. Time to find some removable tape!

Further, I see strong parallels here to a writer’s draft of a story: “If you're a writer or trainer, testing a paper prototype will show you what information users truly need, as opposed to what's nice but not necessary. You'll also get a sense of where users look for information.” (p. 95) This is what writers do: create a fast draft (prototype) to tell them what’s needed and what isn’t, and to weigh how a reader (the user) might perceive and process it. This is some of the most useful and important advice Snyder presents.

Overall, this chapter gives me far more confidence that I can create an effective and efficient paper prototype.

Sunday, June 5, 2011

Week 4: Designing and testing a prototype



Summary

In chapters 3 and 4, Frick and Bohling call for rapidly prototyping and testing a paper-based version of planned web-based instruction. The overarching purpose is to answer three essential questions: Is it effective? Are students satisfied with it? Do they find it usable?

In reviewing these chapters I’d like to extend the comparison I employed last week, when I related the creation of instructional objectives to the critical-thinking phase that opens the writing process: Where do I intend to go, and how will I know it when I see it? In the current chapters, we have an analogy to the writer’s draft that follows the collection of information needed to build the writing. Literally, we’re creating a first draft of the instruction and testing it. Like a writing draft, it allows us to ask: Have I got what I need? What’s missing? What works? What needs work? What works, in this case, is whatever students find easy to use, makes them willing to learn, and enables them to demonstrate mastery of the lesson content. What needs work is that which falls short of the mastery goal or results in student dissatisfaction or a lack of usability. Effective drafts always show the scope of a written piece and develop its major thematic points; they also, almost without fail, produce surprises of inspiration or gaping holes. They show the writer what must be done next to produce a usable piece. Likewise, Frick and Bohling find that prototyping almost always leads to at least one redesign in pursuit of usability.

To that end, they counsel a rapid, efficient design and development of a prototype on paper, drawing on one’s own teaching experience or consultation with others who have taught the content. The prototype should include the bulk of the lesson content (if not all of it), and should reflect what one might call the skeleton of the site, with representations of all major sections and subsections. They also call for developing several series of pages that mimic the links students (presumably) would follow to the major areas of the site, especially its deepest areas. The presumption matters because, during testing, students will often follow a trail other than the one we, as designers, intend for them. That done, the designer should apply Merrill’s five-star criteria (real-world problem, activation, multiple examples, extensive practice, transfer) to the prototype; develop a means of assessing student mastery that employs authentic assessment; develop a means of assessing satisfaction; and seek appraisal of the design by an expert.

When it comes to administering the test, Frick and Bohling call for three main ingredients (pp. 40-41): authentic subjects (those who would actually use the instruction); authentic tasks (rooted in the instructional goals); and authentic conditions (the same level of support students will have). From here, the authors assume a team-based test-and-revision system, in which the team creates, pilots and follows a blueprint for the prototype test, including a script for what team members say to test subjects. The authors counsel watching for such things as test subjects giving up or doing what’s not expected, doing the right thing but with anxiety, or doing the wrong things with great confidence. (pp. 53-55) Test subjects are given a pre-assessment, a post-assessment and a reactionnaire, and are debriefed afterward.

When the time arrives to analyze results, the authors call for identifying patterns in the test observation data and, from those, drawing conclusions about probable causes of design problems, with special attention to effectiveness, satisfaction and usability. From these, the team should develop a prioritized list of revisions that address ways to improve those core areas.

Critique

While much of my critique is embedded above, I would add that the authors’ assumption of a design and testing team is unrealistic in many settings, especially in a corporate context. While corporate training certainly needs greater attention to instructional prototyping aimed at improving effectiveness, satisfaction and usability, it is most likely to do so with individual practitioners consulting off-site subject matter experts for help with formulating content. Fortunately, the authors provide an otherwise sound, effective framework for prototyping and testing. The challenge in many settings is finding the time; indeed, the first “live” instruction is likely to serve as the first prototype, and improvements are likely to be made incrementally as the instruction matures. The authors would do well to address this reality directly.






Monday, May 30, 2011

Week 3: Frick & Bohling chapters 1 and 2


Summary
Frick and Bohling’s work holds a strong underlying parallel to Mager’s Tips on Instructional Objectives: Establish a sense of direction and a set of standards for success before foisting a learning product on learners (in this case, on users). In both cases, the authors set out a systemic framework for critical thinking about and evaluation of instruction. The obvious implication is that such critical thinking is often lacking, or is lacking in sufficient rigor to produce an effective learning experience. As stated on p. 4: “You can do this process if you have some patience, are willing to learn, have some common sense, and become a good observer. A lot of this boils down to having an inquiring mind. You make empirical observations to answer questions and make decisions.”

They call this “inquiry-based, iterative design and development”; it is meant to avoid common problems such as a lack of user input, inadequate design and site testing, failure to identify problems before a site becomes public, and costly repairs or undetected problems after launch.

Frick and Bohling strongly imply that design of web-based instruction is, ideally, a team process, with team members dedicated to the areas of analysis, instructional design, information and graphic design, web technology, evaluation and usability testing, and web server and system administration.

Like Mager, they begin by focusing on instructional goals. Unlike Mager, they advocate identifying and working with stakeholders (including students and one’s supervisors) to determine instructional goals. They also take a significant step beyond Mager by advocating authentic assessment: eliciting “student performance and/or artifacts which will indicate whether or to what extent learners have achieved the goals you had in mind.” (p. 10) This parallels Mager’s focus on behavior in instructional objectives – especially his delineation of eliciting overt behavior to provide evidence of covert (unobservable) behavior – but goes beyond Mager, who settles for behavioral indicators as the goal for instruction and the basis for judging students’ success.  “Observed behavior and products created by students are indicators,” Frick and Bohling say by contrast. “They are not the goals.” (p. 10)

Further points for the analysis phase:
  • Learner analysis: “What are relevant characteristics of students that will help determine what instruction may be needed in order to reach the goals of instruction?” (p. 14) Mager does not address this, or any other analysis, at all.
  • Context analysis: Why use the web for this instruction? What other resources will it require?
  • Most importantly, self-analysis: Are you ready to use new resources or try new or nontraditional ways of teaching?


Critique
I find this systematic approach highly effective and more robust than Mager’s concepts, focusing as it does on authentic assessment and “indicators” where Mager focuses only on behaviors. The richness of the inquiry-based approach makes me want to learn more; these chapters also provide a sense of how very much there is to master in designing online learning. I wonder, too: What about a one-person design shop? In my industry, many trainers work alone and occasionally lean on the expertise of others – while web-based learning is provided almost exclusively by institutions. I’ll be watching to see how much of Frick and Bohling’s systematic approach I can accomplish in my own workplace, where I will largely “fly solo” in training design.

Week 3: Mager's tips on instructional objectives


I. Mager's Tips on Instructional Objectives

As a writing coach, I’m drawn to this sentence in Mager’s article: “If you don't know where you are going, it is difficult to select a suitable means for getting there.” That’s precisely the same challenge that writers face in trying to give form, theme and meaning to the information they’ve gathered, and Mager succinctly captures the sort of critical thinking task that instructional designers, like writers, must engage in if they’re to fashion a meaningful learning experience. They need to create a roadmap to guide their work. A roadmap implies that you know where you want to go. For writers, the “where” is the story’s theme, and its presumed ending. For instructional designers, it’s the intended performance they want to produce among learners.

Summary

Mager defines an instructional objective as “a description of a performance you want learners to be able to exhibit in order to consider them competent.” (p. 1) Then he adds an important caveat: Learning objectives describe “an intended result of instruction, rather than the process of instruction itself.” (p. 1, italics and boldface in original) He delineates the reasons for stating learning objectives; the four qualities of useful objectives, with an in-depth examination of each quality in turn; and common pitfalls of objective writing.

1. Reasons for stating objectives:
  • They provide the basis for selecting and designing “instructional materials, content, or methods.” (p. 1)
  • They provide the basis for creating or selecting tests to allow both the instructor and the learner to determine whether the desired performance has been achieved (italics mine).
  • They allow students to determine the means they’ll use for achieving the performance. While he does not state it explicitly, this puts some aspect of shaping the instruction under the learners’ control. 

2. Qualities of useful objectives:
  • Audience: The person undertaking the performance: the learner
  •  Behavior: what the learner should be able to do as a result of the instruction; this can be the desired action(s) or its result(s). Behavior is a verb, and it must be something observable. There are two types:

1. Overt behavior: what can be seen directly
2. Covert behavior: internal (thinking) behavior that can only be inferred from actions. To create an objective for a covert action, add a verb that describes what students must do to demonstrate mastery of the covert action. Make this “indicator” behavior “the simplest and most direct one possible.” (p. 4)
  • Condition: the “conditions (if any) under which the performance is to occur.” (p. 2) 
  • Degree: the criterion for judging success, i.e., how well the learner must perform
3. Common pitfalls
  • False performance: objectives that contain no observable behavior (performance) by which to judge whether the objective is being met
  •  False givens: in general, these describe the instruction itself, rather than the conditions under which learners will perform
  •  Teaching points: describe some part of a class activity, not a desired behavior
  •  Gibberish: education-speak. “It is noise,” Mager says. (p. 7) Avoid it.
  •  Instructor performance: The point of instructional objectives is to describe how the learner is expected to perform, not the instructor.
  •  False criteria: criteria that fail to describe an observable degree of performance


Critique

Mager is to be commended for his straightforward presentation; he focuses on clarity and communicating in plain English so that he can be understood across a range of disciplines and contexts. He also does well to focus on verb choices in building descriptions of desired actions, and especially in his focus on action verbs; to-be verbs are of no use for such objectives because they imply a state of being rather than a behavior.

He seems to leave a bit of wiggle room for doubters on the degree of performance. “Sometimes such a criterion is critical. Sometimes it is of little or no importance at all.” (p. 5) This strikes me as unhelpful to his cause, even as it acknowledges reality. I think the better way to express his point would be to say that while in some circumstances one may have a hard time determining a desired degree of behavior, the effort of doing so can reap great rewards – even if the effort falls short.

I think the greatest value of this system is to create a framework that guards against laziness or a disinclination to pay attention to detail. I, for one, seem to possess a distressing level of both these traits. I think, too, that such objective-writing can add immense utility and rigor in corporate training, where such attention to detail can often be lacking and where the focus can be on delivery of content at the expense of creating desired, measurable performance.

Thursday, May 26, 2011

Week 2



I. Summary and critique: Merrill’s 5 Star Instructional Design Rating

Merrill’s rating system offers up to five stars for instructional design, depending on the design’s adherence to the First Principles of Instruction (Merrill, Barclay & Schaak, 2008) from the Week 1 reading. It offers detailed criteria for judging adherence to each principle, and offers bronze, silver and gold levels for each star category, presumably (it is not stated explicitly) reflecting the number of criteria met for each star. The categories are:
1.     Problem (Task-centered approach in First Principles): Is the courseware presented in the context of real world problems? Does it engage at the problem or task level, not just the operations level (as in: step 1, step 2, step 3)?
2.     Activation: Does the courseware attempt to activate relevant prior knowledge or experience? If learners have relevant experience or knowledge, are they given the opportunity to demonstrate it?
3.     Demonstration: Does the courseware show examples of what is to be learned rather than merely tell what’s to be learned? Are they given examples and nonexamples, and given multiple representations and demonstrations?
4.     Application: Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
5.     Integration: Does the courseware encourage learners to transfer the new knowledge or skill into their everyday life? Do they get to publicly demonstrate it? Reflect on and discuss it? Create, invent and explore new and personal ways of using it?

Merrill makes clear that the rating system is not appropriate for all instruction, including reference material, psychomotor skills courseware, and tell-and-ask “information-only” materials such as quizzes. It is, however, “most appropriate for tutorial or experiential (simulation) courseware.” (p. 1)

I find the five-star rating system to be thorough, complete, and even intuitive from an action/social/situative learning perspective. Even if courseware is designed for learners to interact with the material and not other learners or, perhaps, even a live instructor, the rating system’s attention to public demonstration of new knowledge and skills implies a level of social feedback based on such demonstrations that allows for deeper learning and situates such cognition within the learner’s community of practice.

I would point out one seeming inconsistency, however, and it may simply be due to the brevity of the description the reading presents. Merrill makes clear that “T&A (tell-and-ask) instruction gets no stars.” Yet under criterion No. 4, he clearly allows for information-about, parts-of and kinds-of practice, all of which imply a prior “telling” followed by an “asking” for a skills demonstration. Clearly, Merrill endorses instruction that goes beyond mere fact presentation followed by true/false, multiple-choice or checklist assessment. He would do well to state explicitly that the point is not merely to listen to facts and respond to questions, but to engage knowledge in a practical way and demonstrate its use in a social context.


II. Rating for the Tulane business module 
http://payson.tulane.edu/courses/ltl/projects/entrepreneur/main.swf


Five stars. (Based largely on “Veasna’s Pig Farm”)

1.     Is the courseware presented in the context of real world problems? Yes. Veasna saw a real social need as a business opportunity and acted to address it.
2.     Does the courseware attempt to activate relevant prior knowledge or experience? Yes, but weakly. The tutorial makes clear that business ideas can come from anywhere – even TV, friends or one’s current job. In that sense, it presents life experiences as relevant to identifying business opportunities.
3.     Does the courseware demonstrate (show examples) of what is to be learned? Yes. It offers models for a product, service, restaurant and retail business, and within each offers examples of a business product that must be mastered to create those businesses. For Veasna, it was the business plan.
4.     Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?  Yes. The Veasna module invites learners to manipulate various aspects of a business plan based on the pig farm, including marketing plans, income statements and operations management plans.
5.     Does the courseware provide techniques that encourage learners to integrate the new knowledge or skill into their everyday life? Yes. The final “Your Own Business” module explicitly encourages the learner to “dare to begin” and walks the learner through the steps for identifying a business opportunity, pitching a business idea, identifying and acquiring resources, and starting and managing a business.


III. Rating for Dreamweaver CS4 Essential Training at Lynda.com. 10 hours, 15 minutes
Address (IU I.D. required):

Four stars.

1.     Is the courseware presented in the context of real world problems? Yes. The course progressively builds a website for a surfboard company.
2.     Does the courseware attempt to activate relevant prior knowledge or experience? Yes, though not in a systematic way. The instructor often makes reference to “if you’ve ever tried to …” experiences, most often as a means of expressing how well Dreamweaver aids web page design or solves challenges such as creating a single CSS style sheet for multiple pages of one website.
3.     Does the courseware demonstrate (show examples) of what is to be learned? Yes. Lynda.com’s practice files, downloadable with each course, provide numerous examples that are used within each lesson, or “movie,” to show how the program works. The progression within each lesson is usually presented as challenge-application-solution. Further, different lessons employ different iterations of the website under development, allowing the learner to see how the entire site is built piece by piece.
4.     Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?  Yes. The practice files allow learners to follow along and mimic the instructor’s actions during the lesson, and/or to repeat those actions as often as one wishes.
5.     Does the courseware provide techniques that encourage learners to integrate the new knowledge or skill into their everyday life? Not explicitly, and for this reason I award no star for integration. Such application is implied within each lesson and the tutorial as a whole, however, and one could easily award a star for it.



IV. Summary and critique: Kim and Frick's Changes in student motivation during Online learning

The authors examine the literature for influences on motivation among learners who choose self-directed, web-based instruction, developing a theoretical framework organized into three major categories: internal (features of coursework that can influence motivation), external (features of the learning environment that can influence motivation), and personal (influences caused by the learner) factors. Also, the authors compare their framework to Keller’s ARCS model of motivation – Attention, Relevance, Confidence, and Satisfaction – and find numerous areas of commonality. The authors study the factors that predict learner motivation at the beginning, during and at the end of e-learning courses; whether motivation changes during instruction; and the factors related to such motivational change. The study population was about 800 adult learners in the United States.

Among the authors’ major findings:
·      94.2 percent of respondents chose online learning because face-to-face learning did not fit their schedule or was not available, or because the online learning was “convenient and flexible.” (p. 10)
·      Respondents reported relatively high motivation before and during the course; more than a third reported increased motivation during the course, while more than a quarter reported decreased motivation.
·      Learners’ motivation during the course was the best predictor of positive change in motivation.
·      Motivation at the start of the course was the best predictor of motivation during the course.
·      The course’s perceived relevance was the best predictor of motivation at the start of the course.
·      Along with relevance, learners’ competence in/comfort with technology makes them more likely to be motivated when they begin a course.
·      Older learners have advantages over younger learners in that they are more likely to be motivated when starting a course, and more likely to be concerned with the relevance of course content. The latter is attributed to older learners’ apparent increased knowledge of the learning required for their jobs.
·      E-learning courses should be designed to help learners stay motivated.

“The findings in this study make practical sense,” the authors write (p. 14). I would agree. People will not start or stick with an online course that they perceive has little relevance to work or life goals, or if they struggle with its technology. It’s unsurprising, too, that relevance is of greater concern to older workers and that they come to online learning with greater motivation. This reflects my own life experience: I look for direct benefits to my personal or professional life in my online courses, and find myself less motivated by learning for learning’s sake. I want my learning to “take me somewhere” – that is, help me to accomplish something concrete, not simply take up space in my mind.



References

Kim, K. J., & Frick, T. W. (2011). Changes in student motivation during online learning. Educational Computing Research, 44(1), 1-23. Downloaded from Oncourse April 7, 2011.

Merrill, D. (2001). Five Star Rating Scale. Downloaded from Oncourse April 7, 2011


Sunday, May 15, 2011

Week 1: Prescriptive Principles for Instructional Design

Summary


I've read over this paper three or four times now and I'm not sure I wholly understand all that it's saying. It's often quite dense and introduces some terms without much explication, such as "principled skill decomposition," leading me to believe that the authors presume some advanced knowledge of arcane instructional terms, i.e., a graduate education degree. I'm in business, not education, so I lack such a background. Also, some of the charts are quite hard to follow and decode; were I these authors' editor, I would have sent much of this back for revisions and clarification. Even assuming some familiarity with the field, it is often opaque. Nonetheless, I'll do the best I can with what knowledge I have.


The authors identify five "first principles" of instruction -- principles that promote learning -- that are common to multiple theories of instructional design: a task-centered approach (seeing examples of and applying skills and undertaking "whole tasks"); activation (recalling or demonstrating prior knowledge); demonstration (seeing the skills in question put to action); application (using new knowledge and receiving feedback on their performance); and integration (putting new knowledge to work in real life).


The authors then present a number of instructional theories, which they compare against the first principles, as well as a number of what I'll call "systems" for employing those principles, such as e-learning and multimedia learning. The effect is to demonstrate that the principles are embodied in a wide range of instructional design theories, and can be applied in a wide range of systems.


Critique


I'll begin by noting my discomfort with the writers' presentation. I still don't know what a "whole task" is (are not all tasks, and their component parts, by definition whole?), but I will presume they speak of a sequence of actions resulting in a completed product, such as finding two fractions' common denominators or assembling a widget. In any case, my lack of knowledge may lead to errant conclusions here.


Second, I can assent to the first principles with little difficulty; in retrospect I know that I've seen them applied in my own learning experience and, without knowing it, have applied them myself in corporate training classes. If I were to paraphrase them for my own training context, they would look something like this:

  • Here's what we're going to learn how to do, and here's what it looks like when done properly
  • This new thing is a lot like what we've done before but adds some important new steps; or: This new thing changes much of what we've done before in an effort to make it better
  • Watch a step-by-step demonstration (or: Watch a demonstration of the first step; now try it yourself; repeat for all steps.)
  • Now try it yourself; I'll offer tips if you do something incorrectly or can't figure something out
  • Now let's commit to putting these new skills to work on the job.

I was pleased to see that the authors included Foshay et al.'s Cognitive Training Model (pp. 178-180). A team of classmates and I employed this model in designing a video-shooting class for newspaper writers for our project in R521, and it worked to very good effect. Relating the new skills to existing knowledge and reassuring students that they could learn the new skills without great difficulty proved invaluable and made the lesson more enjoyable and efficient. All went on to use the new skills on the job, sometimes repeatedly; one who was especially concerned about learning the new skills has become a productive photo and video shooter.


I'll be paying special attention, too, to Allen's e-learning principles (table 14.2, p. 179) as the semester progresses; these strike me as the sort of principles worth keeping in mind when designing my project for use in my workplace (the nature of the project must still be negotiated). 




References:
Merrill, D., Barclay, M. & van Schaak (2008). Prescriptive Principles for Instructional Design. Downloaded May 6, 2011, from class resources in Oncourse.