Saturday, June 25, 2011

Week 7: Building and testing a web prototype


Summary
Having addressed the rigors of effective paper prototyping, Frick and Bohling turn their attention to a parallel exercise on the web – one meant to expose the limitations of the design plan early in the process, before problems can cascade later on. “If we learn that what we have in mind just isn't going to work very well on the Web, then we can look at other alternatives before making too big an investment.” (p. 76)

They begin with a lengthy discussion of how the web, and web pages, work, and the available options for introducing interactivity, such as Java, Perl or PHP. Then they describe means of creating feedback, such as discussion boards and chats, that are outside the scope of our immediate class project.  But their treatment of student interaction with the computer is right up our alley: the use of multiple-choice responses and web forms. Finally, they discuss the use of templates (either custom-built or those provided by programs like Dreamweaver). Templates allow rapid prototyping and streamline production after usability testing.
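
To make that form-based interaction concrete for myself, here is a minimal sketch – my own, not an example from Frick and Bohling – of a multiple-choice item checked in the learner's browser with immediate feedback. The names (QuizItem, checkResponse) and the sample question are invented for illustration; in a real lesson the function would be wired to a web form's submit handler.

```typescript
// Hypothetical multiple-choice item with immediate feedback; QuizItem and
// checkResponse are invented names, not anything from Frick and Bohling.

interface QuizItem {
  prompt: string;
  choices: string[];
  correctIndex: number;
  feedback: string; // shown when the learner chooses correctly
}

function checkResponse(item: QuizItem, chosenIndex: number): string {
  if (chosenIndex === item.correctIndex) {
    return `Correct. ${item.feedback}`;
  }
  return `Not quite – you chose "${item.choices[chosenIndex]}". Try again.`;
}

// In a real page this would run in the form's submit handler; here it simply
// logs the feedback so the logic can be tried on its own.
const item: QuizItem = {
  prompt: "Which of the three R's comes first in waste prevention?",
  choices: ["Recycle", "Reduce", "Reuse"],
  correctIndex: 1,
  feedback: "Reducing consumption keeps waste from being created at all.",
};

console.log(checkResponse(item, 0)); // a wrong choice
console.log(checkResponse(item, 1)); // the right choice
```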

The goal is to create a web prototype as rapidly as possible – with enough content to test the design but without every single part of the final product. The web prototype serves to test the revisions that arose from the paper prototype and to reveal any problems with the web-based design. Like the paper prototype, it’s a process of discovery, and it follows the same guidelines as the paper-based testing. Once the testing is done and the web prototype is revised, Frick and Bohling emphasize that the designer (or design team) faces a choice: whether to conduct another round of usability testing on the revised product. They endorse another round: “If you still have problems with the Web prototype, and you go headlong into production, you'll just repeat the mistakes in the product and could end up spending more time and money fixing the mistakes later.” (p. 101)

Then, when the product is revised and finalized, they advise one more round of testing if at all possible. In Chapter 6, the authors provide an extensive guide to “bug testing,” the purpose of which is to “ensure that the product behaves as specified, without deviation from the design and without errors.” (p. 106) The steps are to decide the scope of the test, create a testing matrix, and conduct repeated cycles of testing (with documentation) and fixing – followed by “regression”: taking the fully tested and fixed product and working through the entire matrix again to ensure that every part behaves as designed.
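
Because a testing matrix is easier to picture with an example, here is a minimal sketch – again my own invention, not the authors' – of what one might look like if written down in code: each row pairs a part of the product with the behavior the design specifies, and “regression” simply means re-running every row after each round of fixes.

```typescript
// Hypothetical bug-testing matrix: each row names a page, the behavior the
// design specifies, and a check that reports whether the product complies.

interface MatrixRow {
  page: string;
  behavior: string;
  check: () => boolean; // true if the product behaves as specified
}

// Run every row, log the failures for documentation, and report the totals.
// After the failures are fixed, the whole matrix is run again (regression)
// until every row passes.
function runMatrix(matrix: MatrixRow[]): MatrixRow[] {
  const failures = matrix.filter((row) => !row.check());
  for (const row of failures) {
    console.log(`FAIL: ${row.page} – ${row.behavior}`);
  }
  console.log(`${matrix.length - failures.length} of ${matrix.length} rows passed`);
  return failures;
}

// Invented rows for a small instructional site; real checks would exercise
// the pages themselves rather than return constants.
const matrix: MatrixRow[] = [
  { page: "quiz.html", behavior: "a wrong answer shows corrective feedback", check: () => true },
  { page: "index.html", behavior: "every menu link resolves to a page", check: () => false },
];

runMatrix(matrix);
```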

Critique
I must confess to two basic reactions to these chapters. The first is that, whenever I encounter lengthy descriptions of web programs and their capabilities, my eyes glaze over quickly. I have to read repeatedly to comprehend, and often find my comprehension limited nonetheless. This stuff ties my brain in knots very quickly. The second is that, the longer I read (especially Chapter 6), the more I thought, “You’ve got to be kidding.” Perhaps in a perfect world, and certainly in a university setting, I could take the time to test a product repeatedly, or count on a team to test it with – but not in my workplace. As I suspect is the case in most settings, the designer/tester is a one-man band. What is realistic in my setting is to design the product, test it, revise it and then launch. Each user then becomes an iterative tester, and the product is debugged as it’s used. In my workplace, people are quick to share the user experience, both good and bad, so one can count on feedback. Further, the product can be launched with the public caveat that it will be improved as needed – which implies the need for a feedback mechanism embedded within the product.

Lest I seem to be dismissing Frick and Bohling’s system, I am not. In fact, as with all things, I see it paralleling the sort of advice I give to writers and try to follow myself. Once you’ve created a draft (a prototype), read it critically and/or find a trusted reader, then revise. Then read it critically again. And, if possible, again and again, in different settings and on both paper and computer screen to vary the user experience. Some people read their work aloud in an effort to find rough spots and errors. Whatever works. Even so, I recognize that for many people one draft and one revision is all they have time for before they must hand it off to an editor for a final read. That’s one reason why more than one editor sees a piece before publication. So I find this system to be sound, as much as time permits. I simply think that any repeat testing before launch would have to be done by me, on my own time.

Sunday, June 19, 2011

Team project reflection


Looking back, I’d have to say that the team project was far more challenging than I would have expected, and on many levels. This was my first experience building an interactive online course, so I suppose that discomfort should be expected. Nonetheless, I struggled in a few areas.
  • Technology: I don’t know why, but when I meet a new technology I face a steep learning curve, and it’s only made worse by my underlying suspicion that, somehow, programs are set up to trap new users in dead ends. As a result, I was fiddling with the Hotpotatoes-based quiz until the last day. I never could get it to generate a proper index page, which was a small matter in the end. I was also disappointed that it wouldn’t let me create multiple-choice quizzes with a larger number of wrong choices. I think it’s better for the learner to reason through a larger set of choices than only three or four, especially when they’re being asked to enumerate three dimensions of a single problem, such as sustainability or the three R’s of waste prevention. I wanted learners to engage each dimension separately, not as a set, and Hotpotatoes seemed inadequate to the task. As for Dreamweaver, I’m still learning, after a couple of years of fiddling with it, how little I truly know. I’ve never worked with templates, created multiple site folders pointing to a single server address, or uploaded via Dreamweaver; I’ve always used Cyberduck. I’m still not sure I understand the whole folder arrangement, but I think I can see its value, and uploading with Dreamweaver is much faster. I wonder whether I can use it to go back and re-upload previous class projects that I’ve overwritten with later Cyberduck uploads. I think I’ll need that skill to develop a portfolio.
  • Learning: As a trainer, I’ve long had the luxury of interacting with students in a face-to-face setting. Even last spring, when I created a prototype for an online course, it assumed extensive instructor interaction, both synchronous and asynchronous (such interaction was required, in fact). This is the first time I’ve tried to create learning where no instructor was present, and it has felt unnatural. I don’t think I’m terribly good at it, and it has made me doubt that I can build a useful final project. I have multiple possible subjects at hand as a longtime writing coach, but I’m hard-pressed to imagine how I can make any of them happen without direct interaction. I can’t help but think: doesn’t this sort of learning need to be reserved for fairly simplistic subject matter? This is likely to be a longtime challenge for me.
  • Personal: I’ve struggled greatly to attend to this class while balancing work and life demands (somehow they seem greater during the summer), and I came within a hair’s breadth of withdrawing from the course. Now I’m a couple of weeks behind as a result. Between that fact and my lack of confidence in building a project that meets the course objectives, I feel like Sisyphus pushing the rock up the hill with little hope of reaching the top. Still, I’ve found that I commonly go through some sort of mini-crisis in each course I take before I finally stumble upon a workable project.
  • Teamwork: Once again I’ve seen how incredibly valuable it is. Mikah did the bulk of the design work; I simply helped with the words and provided a couple of images. Had I been on my own, I might have needed the full semester to produce this project. The templates alone would have taken me a week to understand. It’s great to have a teammate whose skills make up for your weaknesses.

Week 6: Effective, efficient, engaging instruction



What Makes e3 (effective, efficient, engaging) Instruction? by M. David Merrill



Summary
Merrill frames his thesis as an examination of BYU Hawaii’s efforts to increase both the quality of instruction and the number of students reached by distance learning. In doing so, the university is employing three types of instruction: problem-centered, peer-interactive and technology-enhanced.

First, he argues, such instruction has the advantage of making use of mental models for memory processes, which are generally agreed to be more stable (less prone to forgetting) than associative memory, which is tied more closely to memorization. Problem-centered learning allows students to mentally assemble the various dimensions (components, in his words) of a problem into a whole, thus creating a new mental model or revising and building on a previous one. Merrill also argues that such learning is highly motivating, in that learners see themselves doing something they could not do previously. Second, peer interactivity – sharing, collaboration and critique – allows learners to test and refine their mental models, use their knowledge in new situations and employ the sort of teamwork that is required in the real world. Third, technology allows management of instruction to promote problem solving and meaningful interaction.

Merrill describes the steps in problem-centered learning as showing, then doing: Demonstrate the solution to the problem, teach the knowledge and skills (called component skills) needed to solve it, then use peer interaction to involve learners in problem-solving. “Peer interaction is most effective when in the context of solving real problems or doing real tasks,” Merrill argues (p. 4). This takes place at three levels: acquiring knowledge or skills and applying them to a single solution; discussing and defending different solutions to achieve a consensus solution; and “critiquing another solution based on their understanding of the problem and possible solutions.” (p. 4) Merrill argues that a sequence of three or more problems is most effective for learning the needed skills; new skills can be demonstrated and applied with each problem in the sequence, as needed to complete the whole task or set of tasks.

As for the instructor, Merrill says his/her role is to develop and implement the course, and shift during course delivery from presenting information to guiding and coaching learners in their interaction.

Critique

It’s easy to assent to Merrill’s thesis, especially his appeal to mental models and his assertion that “problem-centered, peer-interactive instruction is more motivating, produces better learning, and increases learner motivation.” (p. 2) Indeed, his description of problem-centered instruction, especially as compared to problem-based instruction, makes entirely good sense and is inherently appealing; people want to be taught how to solve real problems, especially in business settings. What goes unaddressed in such a short treatment is whether the approach is better suited to teaching tasks than concepts, or is suited at all to ill-defined problems. On the surface it does seem skewed in favor of task instruction, and it offers little clue to its applicability to ill-defined problems and brainstorming. I would tend to think it could be applied across those other domains, but would appreciate a fuller treatment, with examples, that illustrates such uses.

Finally – though to me the differences matter little in the real world – Merrill seems to blur the line between cognitive learning and constructivist learning, appealing both to the construction and use of mental models and to collaborative, social knowledge construction. As a trainer in a corporate setting, I’m inclined to embrace such blurring if it’s proved to work, as apparently it has in Merrill’s setting.

Week 5: Paper prototyping

Snyder: Making a paper prototype


Summary
Snyder offers a surprisingly robust system for paper prototyping. She suggests using white poster board as a background upon which other pieces can be placed – almost as a template that allows pieces to be swapped out as needed – plus blank paper for larger pieces and note card stock or index cards for smaller pieces. She notes that a background aids both the prototype designer and the test subject, by providing a “reality check” about what can fit on the screen and by framing the user’s visual experience as a representation of a computer screen. (This is one of the most useful takeaways for me.) This “screen real estate” check can be especially valuable in evaluating designs for mobile devices. Snyder also suggests removable tape to represent radio buttons, checkboxes and text fields, and index cards for dialog boxes. New pieces can be introduced manually to simulate links when the user navigates by pressing a “button” or choosing a menu option. Snyder emphasizes that it’s important to represent the state of each button or menu item so that the user doesn’t have to remember his or her choices. Also, and more usefully for the designer, “sometimes it's also possible to miss subtle problems unless you have responded to all the user's actions in the exact order they happened.” (p. 83)

Critique
Snyder’s treatment is far more robust than I would have imagined, though her easygoing and often fun approach (scissors: “Don’t run with them!”) makes it quite accessible. When I first heard the term “paper prototype,” I imagined something hand-drawn on a single piece of paper, without any parts to swap out as the user navigates the “site.” I’ve been far too naïve about this. I’m going to have to devote serious thought to my prototype’s content and presentation, as opposed to simply drawing some images and foisting them on a test subject. It’s also reassuring, for someone like me who isn’t terribly good at drawing, that she emphasizes that artistic ability is not needed to make a prototype. Of all the parts listed, I think I’ll use the expandable dialog boxes and expandable lists the most for my grammar exercises, so it’s great to have these sections to refer to later. Time to find some removable tape!

Further, I see strong parallels here to a writer’s draft of a story: “If you're a writer or trainer, testing a paper prototype will show you what information users truly need, as opposed to what's nice but not necessary. You'll also get a sense of where users look for information.” (p. 95) This is what writers do: create a fast draft (prototype) to tell them what’s needed and what isn’t, and to weigh how a reader (the user) might perceive and process it. This is some of the most useful and important advice Snyder presents.

Overall, this chapter gives me far more confidence that I can create an effective and efficient paper prototype.

Sunday, June 5, 2011

Week 4: Designing and testing a prototype



Summary

In chapters 3 and 4, Frick and Bohling call for rapidly prototyping and testing a paper-based version of planned web-based instruction. The overarching purpose is to answer three essential questions: Is it effective? Are students satisfied with it? Do they find it usable?

In reviewing these chapters I’d like to extend the comparison I employed last week, when I related the creation of instructional objectives to the critical-thinking phase that opens the writing process: Where do I intend to go, and how will I know it when I see it? In the current chapters, we have an analogy to the writer’s draft that follows the collection of information needed to build the writing. Literally, we’re creating a first draft of the instruction and testing it. Like a writing draft, it allows us to ask: Have I got what I need? What’s missing? What works? What needs work? What works, in this case, is what students find easy to use, what makes them willing to learn, and what enables them to demonstrate mastery of the lesson content. What needs work is whatever falls short of the mastery goal or results in student dissatisfaction or a lack of usability. Effective drafts always show the scope of a written piece and develop its major thematic points; they also, almost without fail, produce surprises of inspiration or gaping holes. They show the writer what must be done next to produce a usable piece. Likewise, Frick and Bohling find that prototyping almost always leads to at least one redesign in pursuit of usability.

To that end, they counsel rapid, efficient design and development of a prototype on paper, drawing on one’s own teaching experience or consultation with others who have taught the content. The prototype should include the bulk of the lesson content (if not all of it), and should reflect what one might call the skeleton of the site, with representations of all major sections and subsections. They also call for developing several series of pages that mimic the links students (presumably) would follow to the major areas of the site, especially its deepest areas. The “presumably” matters: during testing, students may well follow a trail other than the one that we, as designers, intend for them. That done, the designer should apply Merrill’s five-star criteria (real-world problem, activation, multiple examples, extensive practice, transfer) to the prototype; develop a means of assessing student mastery that employs authentic assessment; develop a means of assessing satisfaction; and seek an expert appraisal of the design.

When it comes to administering the test, Frick and Bohling call for three main ingredients (pp. 40-41): authentic subjects (those who would actually use the instruction); authentic tasks (rooted in the instructional goals); and authentic conditions (the same level of support students will have). From here, the authors assume a team-based test-and-revision system, in which the team creates, pilots and follows a blueprint for the prototype test, including a script for what team members say to test subjects. The authors counsel watching for such things as test subjects giving up or doing what’s not expected, doing the right thing but with anxiety, or doing the wrong things with great confidence. (pp. 53-55) Test subjects are given a pre-assessment, a post-assessment and a reactionnaire, and are debriefed afterward.

When the time arrives to analyze results, the authors call for identifying patterns in the test observation data and, from those, drawing conclusions about probable causes of design problems, with special attention to effectiveness, satisfaction and usability. From these, the team should develop a prioritized list of revisions that address ways to improve those core areas.

Critique

While much of my critique is embedded above, I would add that the authors’ assumption of a design and testing team is unrealistic in many settings, especially in a corporate context. While corporate training certainly needs greater attention to instructional prototyping aimed at improving effectiveness, satisfaction and usability, it is most likely to do so with individual practitioners consulting off-site subject matter experts for help with formulating content. Fortunately, the authors provide an otherwise sound, effective framework for prototyping and testing. The challenge in many settings is finding the time; indeed, the first “live” instruction is likely to serve as the first prototype, and improvements are likely to be made incrementally as the instruction matures. The authors would do well to address this reality directly.






Monday, May 30, 2011

Week 3: Frick & Bohling chapters 1 and 2


Summary
Frick and Bohling’s work holds a strong underlying parallel to Mager’s Tips on Instructional Objectives: Establish a sense of direction and a set of standards for success before foisting a learning product on learners (in this case, on users). In both cases, the authors set out a systematic framework for critical thinking about and evaluation of instruction. The obvious implication is that such critical thinking is often lacking, or lacks sufficient rigor to produce an effective learning experience. As stated on p. 4: “You can do this process if you have some patience, are willing to learn, have some common sense, and become a good observer. A lot of this boils down to having an inquiring mind. You make empirical observations to answer questions and make decisions.”

They call for “inquiry-based, iterative design and development” that avoids common problems such as a lack of user input, inadequate design and site testing, failure to identify problems before a site becomes public, and site repairs or undetected problems after launch.

Frick and Bohling strongly imply that design of web-based instruction is, ideally, a team process, with team members dedicated to the areas of analysis, instructional design, information and graphic design, web technology, evaluation and usability testing, and web server and system administration.

Like Mager, they begin by focusing on instructional goals. Unlike Mager, they advocate identifying and working with stakeholders (including students and one’s supervisors) to determine instructional goals. They also take a significant step beyond Mager by advocating authentic assessment: eliciting “student performance and/or artifacts which will indicate whether or to what extent learners have achieved the goals you had in mind.” (p. 10) This parallels Mager’s focus on behavior in instructional objectives – especially his delineation of eliciting overt behavior to provide evidence of covert (unobservable) behavior – but goes beyond Mager, who settles for behavioral indicators as the goal for instruction and the basis for judging students’ success.  “Observed behavior and products created by students are indicators,” Frick and Bohling say by contrast. “They are not the goals.” (p. 10)

Further points for the analysis phase:
  • Learner analysis: “What are relevant characteristics of students that will help determine what instruction may be needed in order to reach the goals of instruction?” (p. 14) Mager does not address this or the other analyses at all.
  • Context analysis: Why use the web for this instruction? What other resources will it require?
  • Most importantly, self-analysis: are you ready to use new resources or try new or nontraditional ways of teaching?


Critique
I find this systematic approach highly effective and more robust than Mager’s concepts, focusing as it does on authentic assessment and “indicators” where Mager focuses only on behaviors. The richness of the inquiry-based approach makes me want to learn more; these chapters also provide a sense of how very much there is to master in designing online learning. I wonder, too: What about a one-person design shop? In my industry, many trainers work alone and occasionally lean on the expertise of others – while web-based learning is provided almost exclusively by institutions. I’ll be watching to see how much of Frick and Bohling’s systematic approach I can accomplish in my own workplace, where I will largely “fly solo” in training design.

Week 3: Mager's tips on instructional objectives


I. Mager's Tips on Instructional Objectives

As a writing coach, I’m drawn to this sentence in Mager’s article: “If you don't know where you are going, it is difficult to select a suitable means for getting there.” That’s precisely the same challenge that writers face in trying to give form, theme and meaning to the information they’ve gathered, and Mager succinctly captures the sort of critical thinking task that instructional designers, like writers, must engage in if they’re to fashion a meaningful learning experience. They need to create a roadmap to guide their work. A roadmap implies that you know where you want to go. For writers, the “where” is the story’s theme, and its presumed ending. For instructional designers, it’s the intended performance they want to produce among learners.

Summary

Mager defines an instructional objective as “a description of a performance you want learners to be able to exhibit in order to consider them competent.” (p. 1) Then he adds an important caveat: Learning objectives describe “an intended result of instruction, rather than the process of instruction itself.” (p. 1, italics and boldface in original) He delineates the reasons for stating learning objectives; the four qualities of useful objectives, with an in-depth examination of each quality in turn; and common pitfalls of objective writing.

1. Reasons for stating objectives:
  • They provide the basis for selecting and designing “instructional materials, content, or methods.” (p. 1)
  • They provide the basis for creating or selecting tests to allow both the instructor and the learner to determine whether the desired performance has been achieved (italics mine).
  • They allow students to determine the means they’ll use for achieving the performance. While he does not state it explicitly, this puts some aspect of shaping the instruction under the learners’ control. 

2. Qualities of useful objectives:
  • Audience: the person undertaking the performance – the learner
  • Behavior: what the learner should be able to do as a result of the instruction; this can be the desired action(s) or its result(s). The behavior is expressed as a verb, and it must be observable. There are two types:

1. Overt behavior: what can be seen directly
2. Covert behavior: internal (thinking), which can only be inferred from actions. To create an objective for a covert action, add a verb that describes what students must do to demonstrate mastery of the covert action. Make this “indicator” behavior “the simplest and most direct one possible.” (p. 4)
  • Condition: the “conditions (if any) under which the performance is to occur.” (p. 2) 
  • Degree: the criterion for judging success – how well the learner must perform
3. Common pitfalls
  • False performance: objectives that contain no observable behavior (performance) by which to judge whether the objective is being met
  • False givens: in general, these describe the instruction itself, rather than the conditions under which learners will perform
  • Teaching points: these describe some part of a class activity, not a desired behavior
  • Gibberish: education-speak. “It is noise,” Mager says. (p. 7) Avoid it.
  • Instructor performance: the point of instructional objectives is to describe how the learner is expected to perform, not the instructor
  • False criteria: criteria that fail to describe an observable degree of performance


Critique

Mager is to be commended for his straightforward presentation; he focuses on clarity and communicating in plain English so that he can be understood across a range of disciplines and contexts. He also does well to emphasize verb choices in building descriptions of desired actions, especially action verbs; “to be” verbs are of no use in such objectives because they imply a state of being rather than a behavior.

He seems to leave a bit of wiggle room for doubters on the degree of performance. “Sometimes such a criterion is critical. Sometimes it is of little or no importance at all.” (p. 5) This strikes me as unhelpful to his cause, even as it acknowledges reality. I think the better way to express his point would be to say that while in some circumstances one may have a hard time determining a desired degree of behavior, the effort of doing so can reap great rewards – even if the effort falls short.

I think the greatest value of this system is that it creates a framework that militates against laziness and inattention to detail. I, for one, seem to possess a distressing level of both traits. I think, too, that such objective-writing can add immense utility and rigor in corporate training, where such attention to detail can often be lacking and where the focus can be on delivering content at the expense of creating desired, measurable performance.