Swimming an Ocean: Motivation and Persistence in Massive Open Online Courses

Introduction

As I’ve discussed in a number of recent posts, MOOCs, or Massive Open Online Courses, are a big phenomenon lately. One of the major topics of discussion around MOOCs is how to support motivation and persistence in these courses.

This is a form of learning that works very well for certain groups of learners, but, traditionally, not so well for others. This has been an issue for distance learning in general.

In particular, MOOCs work well for autodidacts, or self-directed learners. These are people who are able to learn on their own, enjoy doing so, and don’t really need a teacher. (I count myself as one of these eccentric and annoying creatures!) Such learners are able to learn regardless of the format, and the online aspect makes the learning process very convenient. The course materials can be accessed from a home computer, any time of the day or week.

It is a more difficult environment, however, for marginal students. These are the students who succeed at a decent rate in traditional universities largely thanks to support systems on the university campus, both formal and informal: official tutoring services, study technique mini-courses, mental and physical health support, campus social support organizations for different minority groups, unofficial study groups of students in the same specialization, or even just campus extra-curricular and social organizations. All of these features of the brick-and-mortar landscape, formal and informal, help support students and keep them mentally, socially, and psychologically engaged in the university community and in their courses. An active area of inquiry centers on how to keep these sorts of students attracted enough to persist online with courses all the way through to completion.

 

Measuring Student Engagement in Massive Open Online Courses

One metric that critics of MOOCs talk about a lot is the low completion rate. Typically only about 5-10% of “enrolled” students “complete” the class. This looks pretty bad on the surface.

However, we need to understand that these students are not all the same, and that enrollment in a free online course doesn’t mean the same thing as paid enrollment at a university, so these traditional statistics are somewhat misleading. Students enrolled in MOOCs are looking for different things. Because a MOOC is free, online, and somewhat anonymous, students don’t have to commit money or time to move somewhere or commute to study. As a result, registering for a MOOC does not carry the same implicit commitment as registering for a paid course at a university. Different students bring a wide range of levels of commitment and different intentions or goals, and even for the same student these may differ from course to course.

Take myself as an example. I am “enrolled” in a few different MOOCs right now. (I’ve been between work contracts for a few weeks, so I’ve been taking advantage of the time to take some courses and expand my skill set. This may level off a bit as I start a new job next week.)

In one Coursera course, Interactive Programming in Python from Rice University, I have been actively engaged in the course activities. I’ve spent a good 10 hours a week or more, all told, watching all the lectures, doing all the assignments and quizzes on the official schedule, and posting and engaging with other students on the course discussion forums. I am also a paying participant in the course, for a couple of reasons: one, just to give myself a little kick of external motivation, and two, because I’m interested in certification in MOOCs as a personal informal research issue and would like the verified completion with honors certificate at the end for my LinkedIn.

With other courses, though, whether because of time and/or money limitations, or because the course is interesting but overly challenging, or because the course is already finished, I’m following along with the free and open online record of lectures and assignment exercises, but not fully engaged. For example, I’m also looking at a course in Functional Programming in Scala from the École Polytechnique Fédérale de Lausanne (EPFL). I’m interested for a few reasons: first, because Scala is a trendy language used at sites like Twitter and Quora; second, because the course focuses on the more exotic paradigm of functional programming; and third, because the course is taught by Martin Odersky, the star computer scientist who created the language. It’s fascinating stuff, and I’m roughly following along with the lectures (though a little behind now) and taking a look at the weekly exercises. But I’m more peripherally involved, again for a few reasons. The teacher and the presentation are very abstract, and the course is intended for an advanced audience. There’s not much support for low-level things like guidance on the syntax of the language, and I don’t have enough time to properly dig around on the internet, at least for now. Also, I’m still refreshing my knowledge of computer programming, so it’s not easy to process more exotic and abstract ideas right now. So I watch lectures to absorb some of the theory, and maybe next time it’s offered I can engage more. Or I can play with the exercises later when I have time.

Another course I’m following on Coursera is Gamification from the University of Pennsylvania, put on by a very engaging Wharton Business School lecturer, Kevin Werbach. It’s not a technically demanding course; it’s mostly theory and concepts, peppered with lots of practical examples. The course is already finished, so I’m following the lecture record with interest.

Some other interesting-looking courses I registered for almost as a bookmarking exercise, so that I can find them later in my account on Coursera. That’s probably not how Coursera intends for you to use the enroll button, but it’s a functional hack that works for me. I similarly have a few courses bookmarked on Udacity and edX.

So, just to illustrate: people have different interests and goals. The percentage of people completing out of people enrolled is based on the idea that everyone has the same goals, which is not at all the case. Some want to master a new skill and get a certification they can try to leverage professionally, maybe to expand career options; these students tend to spend more time and tend to complete at a higher rate. Others just want to learn some new things seriously, but casually. Others are just poking around, trying out a few courses to see which few they’ll spend more time with. So these completion rates are misleading. The more important metric is how much people are getting out of the course in relation to what they hoped to get out of it going in. And on this metric, many or most are getting as much or more out of it than they intended.

That said, there is doubtless room for improvement in making courses so that more people will engage with them in a deeper, more persistent manner. And this is where I’d like to turn now.

Again, I’d like to illustrate from my own experiences, particularly with the Interactive Programming in Python course. The course had lectures, weekly graded multiple choice and short answer exercises, and weekly mini-project programming challenges. The programming challenges were typically games. An early project was a text-based game of Rock-Paper-Scissors-Lizard-Spock (a nerd variant of the popular game Rock Paper Scissors). Each week, this stepped up in difficulty, through programming a version of Pong, a version of Memory, a visual game of Blackjack, and finally, a two-week project building a space shooter game based on Atari’s Asteroids. Here is my final submission, by the way: http://www.codeskulptor.org/#user32_yFpWoboCQZ_80.py
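
To give a flavor of that first mini-project, here is a minimal sketch of the core game logic in plain Python. This is purely illustrative and is not my actual submission or the course’s template; the course used its own CodeSkulptor environment and a somewhat different structure.

    import random

    # Rock-Paper-Scissors-Lizard-Spock: in this cyclic ordering, each choice
    # beats the two choices that immediately precede it.
    CHOICES = ["rock", "Spock", "paper", "lizard", "scissors"]

    def winner(player, computer):
        """Return 'player', 'computer', or 'tie'."""
        diff = (CHOICES.index(player) - CHOICES.index(computer)) % 5
        if diff == 0:
            return "tie"
        return "player" if diff in (1, 2) else "computer"

    computer_choice = random.choice(CHOICES)
    print("Computer picks", computer_choice)
    print("Result:", winner("paper", computer_choice))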

People started having trouble when it got to the Blackjack game. This was the week when the programming paradigm of Object-Oriented Programming was introduced. Coming from an abstract mathematical sort of background, I was able to grok the basics of this pretty easily, and managed to finish the project early. As someone who also comes from an education background, I decided to go onto the course discussion forums to try to help out anyone who was having difficulties. There was a lot of chatter during that particular week from people thinking about quitting. I spent a lot of time on the weekend the project was due trying to talk people down from the ledge and trying to help different strangers who were stuck. It was a rewarding experience. But it illustrates the fragility of the online environment for those who are borderline.
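
For those curious what tripped people up, here is a small, hypothetical sketch of the kind of object-oriented structure the Blackjack project called for: classes with attributes and methods for cards and hands. It is not the course’s actual template code, just an illustration of the style.

    # A minimal illustration of OOP for Blackjack: a Card knows its suit and
    # rank, and a Hand knows how to total its cards (counting an ace as 11
    # only when that doesn't bust the hand).
    VALUES = {"A": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7,
              "8": 8, "9": 9, "10": 10, "J": 10, "Q": 10, "K": 10}

    class Card:
        def __init__(self, suit, rank):
            self.suit = suit
            self.rank = rank

        def __str__(self):
            return self.rank + " of " + self.suit

    class Hand:
        def __init__(self):
            self.cards = []

        def add_card(self, card):
            self.cards.append(card)

        def get_value(self):
            value = sum(VALUES[card.rank] for card in self.cards)
            if any(card.rank == "A" for card in self.cards) and value + 10 <= 21:
                value += 10  # promote one ace from 1 to 11 if it doesn't bust
            return value

    hand = Hand()
    hand.add_card(Card("Spades", "A"))
    hand.add_card(Card("Hearts", "K"))
    print(hand.get_value())  # prints 21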

Supporting Engagement in Massive Open Online Courses

So this is a big topic: how to support learners who are less autodidactic, less internally motivated than someone like me. I’ve always been someone who benefited from a good teacher, but didn’t really need one. Online learning works naturally for people like me. But what about for others? There are a few different ideas.

One option is further development of discussion forums. This is a great feature currently used by different courses. It helps to build a course community and gives students a place to discuss, exchange ideas, ask questions, and get answers. People like online discussion forums, and they play a useful role in the educational experience. The key is to motivate possibly borderline students to engage on the forums and to make sure that the forums are a welcoming, friendly, and safe place to try out ideas and learn. This takes promotional effort from the instructors, who need to spend time encouraging the use of the forums and periodically reminding students of their existence and usefulness. It also requires volunteer or paid TAs to help monitor and moderate the discussions.

Another option currently enabled by some MOOC providers is local meetups and similar projects. Coursera, for example, makes it possible to organize face-to-face group meetings of people in the same city who are taking the same course. In this way, people can theoretically get together to form real-world study groups.

Another idea would be assigned buddies or study groups made up of other students, whether for everyone or only for those who ask for it as an optional support. One possible way to organize this as a voluntary effort would be to have a survey at the beginning of the course where people identify their level of familiarity with the subject going in and their level of confidence in their ability to succeed. People with little experience and low confidence could be directed to a page where they can opt in to being paired with a volunteer student with more experience and confidence. Conversely, those with more experience and confidence could be directed to a page where they can opt into a pool of people who will be randomly paired with a few students who anticipate needing help. It could be entirely voluntary, but incentivized, for example, with marks: if you opt in and actually help people out, you get a few bonus marks that can make up for a bad performance on an assignment or quiz.
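
As a rough sketch of how the pairing step itself might work once the survey results are in, the matching could be as simple as spreading the students who asked for help across the volunteers round-robin. All names and parameters here are hypothetical; a real system would also weigh things like time zones and language.

    import random

    def pair_students(volunteers, requesters):
        """Randomly assign students who asked for help to volunteer 'buddies',
        spreading them round-robin so no volunteer is overloaded."""
        assignments = {v: [] for v in volunteers}
        random.shuffle(requesters)
        for i, student in enumerate(requesters):
            assignments[volunteers[i % len(volunteers)]].append(student)
        return assignments

    print(pair_students(["mentor_a", "mentor_b"], ["s1", "s2", "s3", "s4", "s5"]))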

Another option to make things more lively and interactive than discussion forums would be something like Google Hangouts. People could talk face to face about ideas in the course. This would work best probably with arts and social sciences content, but could find application more generally too.

Finally, another option is to have some minor collaborative exercises in the course where you need to work together with other enrolled students. There would be logistical challenges in dealing with this, for example if randomly formed teams included a number of more peripheral students not taking an active part in the assignments. But if you can find solutions to these challenges, the social, collaborative aspect could help to engage many in the course.

 

On MOOCs and Accreditation

Recently Udacity, one of the major MOOC providers, announced that, in a shift in focus, it is eliminating certificates of completion for non-paying students. Free students can still look at materials and take part in activities, but if they want anything to show for it, the only option is to pay $150 a month as a verified student. The basic PDF certificates of completion that used to be available to anyone, paid or free, who worked through the material and passed are being phased out.

This is a mistake, a move 180 degrees in the wrong direction in the ongoing process of developing a model for MOOCs and accreditation. In reality, not only should free students be able to keep earning some sort of certification in recognition of their efforts, but further moves need to be made to start accrediting these online courses with the universities giving them. MOOC consortiums actually need to be leaning harder on schools to make meaningful accreditation available to students, rather than taking away elements they were already offering.

Now, don’t get me wrong; the new features Udacity is rolling out to students paying the extra money are great steps in the right direction: added human support, and grading of submitted work by professors and others with subject expertise rather than peer grading with rubrics. This is great stuff, and if it helps the resulting certificates get taken more seriously in the marketplace, so much the better. All of this helps to further the legitimacy of the online medium as a way to get university-level education and continuing professional development.

But that’s no reason to throw the free students under the bus. Fine, don’t let the free students get their assignments graded by profs and TAs. That is expensive, and if someone doesn’t pay for it, it’s probably an unsustainable model, given all the other costs of putting on a good MOOC.

Let the free students stay graded by peers, but offer them the ability to verify their identity with their webcam and a typing sample, as with Signature Track on Coursera. (Note: the enhanced Signature Track certificates on Coursera are only available to paying customers, but the prices, around $50 a course for courses that can take up to a few months, are much more accessible than Udacity’s $150 per month.)

The technology for this sort of verification is already developed, so it doesn’t really cost the provider anything extra. And then give the free students a certificate for their efforts. Maybe that certificate will have a second-rate market value compared to the paid certificates based on professor or TA grading, but at least they will have something of value to take away.

Yes, I realize that the paying students are the ones that pay the bills and keep the lights on. But MOOCs are a big deal in the first place because of the mass of free students. That is what is driving the traffic, the buzz. These millions of students are freely choosing to spend their free time in higher studies rather than vegging in front of the TV. Educational institutions should love this, and should want to encourage it.

The fact of the matter is that MOOCs blew up as an idea because of a promise, explicit and implicit, made to people. MOOC consortiums actively sold the idea that people could not only expand their knowledge, but also expand their opportunities, through hard work in these online courses. The barriers to access to higher education would be lowered. If these masses of students lose faith in that vision, the whole thing will collapse. The numbers will shrink, and with smaller overall enrollment, there will be correspondingly fewer who stick around to pay for it. Part of that mass of free students eventually takes the plunge and pays for some sort of extras, like an enhanced certificate or a proctored examination. The free education generates paid education by helping to support a vibrant platform. The numbers also generate buzz and perceived legitimacy for the educational platform as a place to get meaningful education and training. That perceived legitimacy is key to attracting people to spend actual money on a course.

If you let the less well-off students get certification and/or credit if they are willing to (1) do the work and (2) verify their identity for exams and assignments, they will be able to use that credit in professionally meaningful ways to help get into fruitful careers. And given a fruitful career path, these students can have the means to become paying students later on as they continue their professional development.

This is the sort of social welfare outcome that people in higher education say they want, right? Then make it happen. This is the sort of thing that both liberal-minded (give to help the less fortunate) and conservative-minded (remove barriers to let people raise themselves through their own determined effort) academics and administrators should be able to get behind.

Yes, it is the paid students who keep the bills paid, but understand that people will still pay. If there is meaningful, professionally usable certification and credit available, working professionals will pay for it. It’s like digital file sharing: piracy still exists, and is even booming, but when reasonable and convenient means of online access are available, people pay for the content.

And if you demonstrate social responsibility by making perhaps lesser but still meaningful and usable credentials available to those who can’t afford to pay, then those who can pay will be even more driven to support you, because you will have proved your worth as an organization to get behind.

Instructional Design and Technical Writing

Introduction

What is the relationship between Instructional Design and Technical Writing? In what ways do these roles and skill sets overlap, and in which ways are they distinct?

Though Instructional Designers and Technical Writers often work under the same roof or on the same team, and may collaborate on projects, they have typically been viewed as distinct jobs with distinct roles. However, recent developments in communication tools and changes in the way we think about learning have made the boundaries between the disciplines fuzzier and more porous than they might initially appear. This article will look at some of the overlap and difference between the two fields, recent trends of convergence between the disciplines, and possibilities for the future.

Technical Writing and Instructional Design: a Comparison

Technical writing, as the name implies, traditionally involved writing manuals or documentation to support hardware or software. This included writing user, operation, or maintenance manuals for equipment, formal documentation of standard operating procedures (SOPs), or documentation of APIs or libraries for software. The Technical Writer engages with hardware/software developers and engineers to document key information about the systems and products.

Instructional Design on the other hand focuses on:

  • Identifying goal performances
  • Identifying performance, skill, and knowledge gaps between what is expected and what is actually being done
  • Identifying training objectives
  • Designing training interventions to close gaps, including instructional strategies, media approach, and training delivery method, and
  • Designing assessments to measure learning.

To get needed information, the Instructional Designer engages with subject matter experts (SMEs) in the relevant discipline.

Technical writing has typically focused on text-based written materials with still images, photos, graphs, and charts. The intended media has often, traditionally speaking, been print, though in the more recent period, this has shifted to include digital texts as well. The end product has been mainly text-based resources meant to be used by people involved with a company’s hardware and software products as on-the-job or on-task references.

The products resulting from Instructional Design processes have run the gamut from written materials to classroom lessons, workshops, eLearning, and blended learning, and have involved various media, including still images, photos, graphs, charts, videos, and animations. The end result has been learning resources meant to be used as part of a formal training course, either to help learning, allow practice of what is learned, or test how well the student learned.

Technical Writing and Instructional Design: A Convergence?

So this seems relatively clear cut, and probably, historically speaking, it was. However, ongoing developments in the past decade or so have served to erode some of these distinctions.

Some describe the difference between the two fields as being that Instructional Design focuses on promoting learning and performance improvement and measuring achievement, while Technical Writing seeks simply to inform. However, on close examination, this does not seem to hold up. After all, if the Technical Writer is being paid to inform, it is obvious that it is toward some intended end of learning or performance support. There is at least implicitly some understanding that the user is supposed to learn and better his use of the documented product through the reference.

Some might not think of technical manuals as learning materials, since they aren’t used as materials in a course, and users don’t take a test after using them. Most users utilize manuals as references. When they have a question, they look up the relevant section, and find out what they need to know. This is a form of learning, however. It is simply a different form of learning called informal learning. Informal learning is the learning that takes place outside of formal courses, often in an ad hoc, unplanned, task and needs-based manner. A learner, often an adult learner, though not necessarily, has a question. The person looks up a reference in one of the resources available at hand, whether from computers or people. The person gets the question answered, and gets back to work.

This is “just-in-time” learning within the workplace and within the normal workflow rather than “just in case” learning  carried out away from the workplace in a course. If you think about it, this is actually our natural tendency in learning. When we are stuck with something, be it a game, a piece of software, a piece of equipment, we don’t tend to look for some formal course. We seek out quick focused resources that will answer our question. This could take the form of some sort of quick reference document or a person whose knowledge and experience you value.

Informal learning is less well understood than formal learning, but is ironically how most of us do most of our learning, by far. Instructional Design has always been embedded within the larger field of Human Performance Technology, which is a generalized approach for diagnosing performance problems in organizations and crafting interventions which do not necessarily involve training.

In recent years, the Instructional Design field has begun to look more seriously at informal learning tools, in particular performance support systems, electronic performance support systems,  and job aids as more effective alternatives to formal training for many aspects of workplace learning.

Instead of always designing lengthy courses to be taken by learners at a separate time and place from their work, increasingly, learning materials are designed in smaller instructional pieces indexed and searchable on company intranets. When workers need to learn how to do some task, or are stuck, or need a reminder, they look up their question in the company system, and locate some reference material. They check out the material, and when they understand, they return to their task. The whole experience might take only a few minutes, at the work desk.

Parallel to this, the scope of Technical Writing has expanded to include wider modes of reference and support documentation beyond traditional written manuals. Technical Writers do not only write manuals, but are also called on to create online help files, FAQ banks, and knowledge base items. There has also been a push, as streaming online video has become easier to deal with, for Technical Writers to use video tutorials as a more engaging method of product documentation. Explanatory or demonstrative videos, taken either with cameras or screen capture software like Camtasia, are hosted either on the company knowledge base or sites like YouTube. Technical Writers can also find themselves involved with internal wikis, or blogs. In some cases, this has even included direct interaction with the user community and even curation/management of user generated content, whether on discussion forums or social media.

There is a powerful overlap appearing here between Technical Writing and Instructional Design as applied to informal learning materials.

The question arises as to whether there is a sort of convergence going on, and to what extent. Are these ultimately different jobs, or just different points on a spectrum within a common domain of technical communication? A common shared skill set can be seen, including such skills as:

  • Communicating with technical subject matter experts to get needed information
  • Understanding the characteristics and needs of the intended user and their work environment
  • Being able to rapidly assimilate new information
  • Coming up with ways to explain this information in a simple, well-structured, easy to understand way attuned to the needs of the audience
  • Implementing these explanations, delivering them using whatever tools are available.

On one hand, Technical Writers have been known to sometimes cross over to Instructional Design or Course Development positions.

The tools of Technical Writers have expanded in recent years to allow modes of explanation that might previously have been the domain of Instructional Designers. Recent editions of Adobe FrameMaker, a popular Technical Writing tool, allow the incorporation of video, 3D models or animations, Flash presentations, and audio. As books move from something printed on paper to something used on an e-reader or tablet, it is no longer necessary to stick to static images for support documentation. With the explosion of ebooks on media-capable devices like smartphones and tablets, the very definition of what a book or manual is has come into question.

As books are increasingly consumed through electronic screens rather than on paper, old restrictions need not apply. A book can become a multimedia presentation, with different media used as appropriate to bring across different points to the reader. An interactive 3D model from AutoCAD files embedded in a manual can communicate the 3D structure of a system or piece of equipment better than a static image from one fixed perspective. An embedded animation can better bring to life a process or flow. An embedded video can bring to life a maintenance procedure in a much more compact way than through text and static graphics. The decisions involved in deciding when to use what form of media are precisely the sorts of skills traditionally used by Instructional Designers in choosing a media strategy for instructional multimedia.

Meanwhile, job descriptions for some Instructional Design openings read more like Technical Writer jobs, with emphasis on manuals, help files, FAQs, and wikis. Designers in such positions use more “Technical Writer”-oriented development tools such as Camtasia, RoboHelp, and Dreamweaver. The need to document systems that go through ever shorter cycles of development and update has made it harder to keep pace with traditional Instructional Design via formal courses. This has led to more use of shorter, informal learning resources hosted on company networks.

Key Differences Between Technical Writing and Instructional Design

It’s important to note however that while the sorts of projects and documents that Technical Writers and Instructional Designers are called to work on are overlapping more and more, there are important differences in skill sets.

Instructional Designers, particularly those with formal training, tend to have more developed formal guidance for needs analysis. They also have more formal grounding in the psychology of learning and in systems thinking and analysis. They also tend to have experience with designing for a broader range of media and interaction types. Instructional Designers also have more experience with assessment of learning.

Technical Writers, on the other hand, tend to have much more refined skills in verbal communication. These include:

  • Clarity, simplicity, and conciseness of expression
  • Descriptive and explanatory skills
  • Document formatting skills to ensure improved aesthetics and easier visual processing, and
  • Editing skills such as spelling and grammar.

Though these skills are rooted in the written or printed word, much of this skill set transfers well to other media of verbal expression, particularly audio narration scripting. Often, Technical Writers have formal training in writing, whether in Technical or Creative Writing.

Conclusion

So clearly the two skill sets are not completely interchangeable. There are large overlaps, but with specific areas of focus and expertise. For the time being, it is more appropriate to consider the two roles as complementary. On a larger project, the two would probably work together, with each involved in different aspects. It is an interesting question though how this will evolve and play out in the future. As Technical Writers expand their horizons toward a broader range of media, and Instructional Designers expand their focus to include informal learning and performance support, will the two fields undergo a certain amount of convergence as facets of a general field of Technical Communication? And what impact would that have on professional development within the two disciplines?

Comments? Critiques? Please feel free, whether on the blog directly, or via whatever social media channel through which you came across the article.

On Natural User Interfaces (NUI)

Preface

Welcome, and an early happy weekend. This article is intended to give a bit of deeper background on trends in what are called “Natural User Interfaces,” or NUIs. The term refers to a trend in how we interact with computers. I’ve used it in some other recent articles, but didn’t get into it too deeply, because it takes a bit of explaining to do it justice.

Fair warning: this article is intended to be forward looking. It is NOT about tools that are currently available off the shelf, and it is not immediately applicable information. This is a look at where the technology of human-computer interfaces has come from, where it is, where it is probably going in the next few years, and what kinds of possibilities that could introduce for computer-based training.

So in that respect, it’s about getting yourself mentally prepared for what will be coming a few years down the road. It’s for those who like to think ahead and dream about future Instructional Design possibilities using tools that haven’t been invented yet.

My recommendation: if the preface and introduction pique your interest, bookmark this article, email yourself the link, and maybe set it aside for a quiet Sunday afternoon when you have some time to read and reflect. Then you can process it and reflect on the future possibilities of what you can do with this technology. Anyway, I hope you enjoy the article.

Introduction: What is a Natural User Interface (NUI)?

In a recent article, I talked about the future potential of the Kinect sensor to enable on-the-fly adjustments to presentation in e-Learning. In that article, I brought up the concept of a Natural User Interface, or NUI (pronounced “noo-ey”), almost in passing, but I recognize that a lot of people might not be familiar with the concept. The intention of the present article is to go into a little more background: to give some sense of the significance of this new type of human-computer interface, what came before it, how it has already changed how we use computers, and how future developments promise to further shape our interactions with computers. Finally, I will try to look ahead a bit at how these types of interfaces could shape the way we train people using computers.

Let’s get started.

Paradigms of human-computer interaction

So the first question for those unfamiliar with the notion of an NUI would be “what is a NUI?”

Well, to answer this question, it helps to go back a bit into the history of computing.

Computers as we generally know them (electronic calculation devices) have a history going back about 70 years, to the time of the Second World War. If you want to be technical, you can trace computing back to Ada Lovelace, Charles Babbage, and the Difference Engine and Analytical Engine in the early to mid 1800s, but for simplicity, let’s say 70 years, starting around 1945.

What started as a technology used to automate complex computations for a handful of high-end research and military institutions via massive electrical machines has evolved over these seven decades to become an integrated, essential part of the fabric of life (at least for people in relatively developed parts of the world). Along the way, the power, speed, and storage capacities of computers have increased exponentially, while the costs and sizes of components have shrunk at similarly exponential rates. Computers have gone from numbering a handful in the whole world to numbering somewhere in the billions. A billion or more powerful computers are carried around in people’s pockets in the form of smartphones, and embedded computing devices appear in almost any electrical device produced today.

Along with these developments, the means through which people interface and interact with computers have also changed dramatically. This change has come both as a result of technological developments and, at the same time, as a driver of the uptake of computers amongst the general population. Human-computer interaction has gone through a number of important paradigm shifts.

A paradigm, for those unfamiliar with the term, is a dominant contemporary pattern or way of conceptualizing and doing things. There have been a few major paradigms of human-computer interaction, with corresponding shifts as the technology moves from one dominant mode of interface to another.

I first want to speak about three major early paradigms of human-computer interaction:

  1. Batch interfaces (1940s to 1960s)
  2. Command Line Interfaces (1960s to 1980s)
  3. Graphical User Interfaces (1980s to 2000s)

I will then speak about the recently emerging paradigm of Natural User Interfaces (NUI). I will discuss some of the different examples of NUIs, and finally look at new possibilities for training opened up by these sorts of interfaces.

First paradigm: Batch interface (1940s to 1960s)

The first computer interface paradigm was the batch interface. In this setup, users entered commands through stacks of punch cards punched by hand and fed into a card reader peripheral, which read the punched holes via optical scanning and turned the entries into electrical inputs. Programmers would carefully enter their code on the punch cards and submit their stack of cards as a batch to be scheduled and run by the administrators of the machine.

Remember, this was a time when computers were huge machines taking up most of a room, and a whole university or department might share one of these machines. It was a scarce, in-demand resource, so programmers had to wait their turn for their code to be run. Computers could run one program for one user at a time, which produced a serious bottleneck. Users could not typically just sit at the computer by themselves and use it, because the resource was limited and the time could be used more efficiently if the programs were run together, one after another, as a batch.

This cycle from submission of the program to scheduling to entering it into the computer to running could take days, depending on how busy the administrators of the computer center were. And if there was a bug, something miscoded in the punch cards, the program would fail, and the programmer would have to start again, identifying where the error was without any sort of guidance (“syntax error on line 57,” etc.); such aids didn’t exist. The programmer would track down the error in logic by hand, and then resubmit the revised program to the queue. It was a system that encouraged carefully refined first-draft work.

In a batch interface, the computer reads commands coded in rigidly structured messages, carries them out, and gives output through a printer. The computer would take in the programs of many people at one time and process them, one after another, as a batch. It was in this time period that the first computer languages were developed.

The frustrations of dealing with these batch processing systems were a major drive for computer science researchers of the day to look into alternate modes of human-computer interaction.

(Image: a punch card)

 

Second paradigm: Terminals and Command line interface (CLI) (1960s to early 1980s)

Then followed the command line interface (CLI). This came about along with development of early computer displays and monitors with keyboards used as inputs. Users could input characters through a keyboard and see them displayed on the screen. This would take place at a terminal with a keyboard and  display connected or networked to the main computer machine.

The main computer would be set up to time-share between multiple users. The computer rapidly switches between carrying out tasks for each user, allowing the central machine to “simultaneously” handle many users at once. To get a sense of how this works, imagine getting your household chores done by doing laundry for a minute, then keeping an eye on dinner for a minute, then attending to your kids for a minute, then tidying the living room for a minute, then sweeping the floor for a minute. Then imagine this task switching happening a million times faster. You’re doing one thing at a time in little slices, but to a casual observer, everything is smoothly proceeding all at once. Generally, your computer at home or at work “multi-tasks” in a similar sort of way. Coordinating the time sharing created a certain amount of overhead that used up computer resources, but this became less of a concern as computers became faster over time.
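
For the curious, here is a toy illustration in Python of the round-robin idea behind time sharing. It is only a conceptual sketch, not how a real operating system scheduler is written: each user's job gets a small slice of work in turn until all the jobs are finished.

    from collections import deque

    def run_round_robin(jobs, slice_size=1):
        """jobs: dict mapping each user to the number of work units their
        program needs. Each pass gives one user at most slice_size units."""
        queue = deque(jobs.items())
        while queue:
            user, remaining = queue.popleft()
            done = min(slice_size, remaining)
            print("Running", done, "unit(s) of", user + "'s job")
            remaining -= done
            if remaining > 0:
                queue.append((user, remaining))  # back of the line for another slice

    run_round_robin({"alice": 3, "bob": 2, "carol": 1})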

So the user no longer had to punch cards, and no longer had to give them to someone else to feed into the machine, and wait. The different programmers and application users could get access to a terminal, and use that to interact directly with the computer in something resembling real time. The user could input text information, and get text output back more or less immediately.

This paradigm also overlapped with the appearance of the first so-called “micro-computers” used as office business machines (e.g. the IBM era). It was also the paradigm under which the first “personal computers” were born. These were standalone computing machines small enough to fit on a desk.

The user of one of these machines could use the keyboard, aided by visual feedback from the screen, to type documents or to enter commands. The user controls the computer and performs actions such as creating, saving, deleting, copying, and moving files and directories using text-based commands typed into a command line. This can still be seen today in the command line in Linux and in the cmd.exe Command Prompt utility in Windows. MS-DOS, the first Microsoft operating system, worked like this.

This is known as a Command Line Interface or CLI. More advanced computer programming languages were also developed at this time.

 

Third paradigm: Graphical User Interface (GUI) (1980s to 2000s)

The next paradigm was the Graphical User Interface or GUI (“goo-ey”). This consists of a “desktop metaphor,” with program windows, menus, virtual “buttons,” and other controls on the screen with which the user interacts using a mouse and pointer. Associated with this is the acronym WIMP: Windows, Icons, Mouse, Pointer.

The earliest GUI came out of research at Xerox PARC in the 1970s. These ideas were later taken up by Apple in the early Macintosh and by Microsoft in its Windows OS. Interactions simulated the way a person might interact with a real-world machine, by “pushing” (with mouse clicks) virtual buttons, turning virtual dials, and so on. It was at this stage, corresponding with sufficient miniaturization of computer components and a fall in prices, that the idea of a home “personal computer” took hold. With the desktop metaphor, windows, and mouse pointers, it became much more natural for everyday people to use computers. There were still many rough edges, and certain arcane bits of knowledge to learn, but overall, it became much simpler for everyday people to do basic things with computers. Computers were starting down the road to becoming a household appliance that average people would use as part of their everyday lives.

 

The emerging paradigm: The natural user interface (NUI) (2000s to present)

The next paradigm of human-computer interaction is so-called Natural User Interfaces, or NUI. This can encompass a variety of types of interaction, but the overarching idea is that rather than having artificial or mechanical intermediary means of input, the user interacts with the computer in ways more like those used to interact with people and objects in the real world, and more directly. This typically means touch, body / hand gestures, facial expressions, speech, and giving queries or commands to the computer in something much closer to the ambiguities of everyday language rather than in rigid computer syntax.

What does this mean? Well, to illustrate, let’s look at the predominant method of computer interaction that we’re just coming from and are still wrapped up with: the mouse. Or, more precisely, the mouse and pointer as a way of navigating graphical menus and control interfaces on a screen display, with the keyboard for the user to enter data, like on some electronic typewriter. This form of interaction was almost completely predominant from around 1984 right up through to around 2008, a period of 24 years. The 1984 date marks the appearance of the Apple Macintosh (128K), which featured a GUI and mouse. 2008, on the other hand, saw the appearance of the iPhone 3G, which helped to explode the popularity of capacitive multi-touch smartphones. (As much as I dislike Apple’s closed model and think they’re past their prime, I have to grudgingly give them credit for having been squarely at the center of both of these technological inflection points.)

The mouse has been so much a part of our daily activities, at home and at work, for so long, that it’s easy to lose sight of how awkward and unnatural a way this is of interacting with a computer, or with anything. You sit in front of a computer screen, staring at it. You have a button on the screen. You have to grab the mouse on the desktop and drag it along the horizontal plane of the desk surface in order to move the visual of a pointer arrow on the vertical plane of the screen surface. And then you click a button on the mouse to “click” the on-screen button. Once upon a time, this was simply the only way to mediate the pressing of an on-screen button. But what is the most natural way to do this today, given the technology widely available now, namely touchscreens? Well, since 2008 with the iPhone, and since 2010 with the iPad, it’s simple: you reach out your hand to the screen and touch the button to press it. The whole step becomes much more natural and effortless.

Admittedly, it’s still kind of weird, because you’re still blocked by this two-dimensional surface as you bump up against it and touch it or move your hands over it. It’s still a little limiting and artificial, but it’s getting there. You’re at least completing the metaphor of the classical graphical user interface, the desktop workspace on which you place things and move things around. Instead of moving them with a mouse, you move them directly with your fingers. You’re still operating something like an old-fashioned instrument panel, but it has become more naturally engaging: you move like you’re actually operating an instrument panel in real life.

As mobile computing and the mobile internet have taken off, this has impacted web and application design, so that even on the desktop, user interface principles inspired by touchscreen usability (lots of white space, simplified menus and controls, and large button targets) have become predominant. Designers try to build applications that work well on both.

Interacting with the computer in these more natural, everyday ways means that in a sense, the interface fades from attention and becomes invisible to the user. But the idea is that generally the experience is smoother, more realistic, more like a real world interaction. The distance between the user and the computer becomes smaller. In this way the computer becomes a little more like an extension of the user’s body. The user simply smoothly interacts with the computer to do what he needs to do.

We call such an interface a Natural User Interface, abbreviated NUI, and pronounced “noo-ey.” It’s the idea of an interface that drapes itself over us, fits us like a glove by letting us interact with the computer more like we interact with real world objects and people.

In popular entertainment, we see some examples of futuristic concepts of NUIs in use. Take the computer on Star Trek: The Next Generation, for example, which the crew commanded through voice or touchscreen control panels as they walked around the ship and did their thing.

Or the gesture interfaces Tom Cruise’s character used in the Pre-Crime unit in Minority Report.

http://www.youtube.com/watch?v=8deYjcgVgm8

Or more recently in the battle “simulation” in the film Ender’s Game.

Multi-touch capacitive screens, as seen in modern smartphones and tablets, are one good example of an NUI. You interact with screen items by touching them with one or more fingers to stretch them, rotate them, shrink them, and so on.

Virtual assistants or agents such as Apple’s Siri or Microsoft’s Cortana are another example, or another aspect of natural user interface technology. Here users interact in a somewhat conversational manner with the computer using speech. Some of the predictive elements of Google Now would also be examples.

Haptics (touch based interfaces) are yet another element to make interfacing more natural by simulating the texture and force feedbacks and resistances you would get interacting with real objects.

Virtual reality is another example of a natural user interface. The person interacts with a virtual world through head and body movements, receiving visual feedback through some sort of helmet screen. This is a technology going back some decades, but it is becoming more affordable and feasible now. An example of a mass-market product is the Oculus Rift from the company Oculus VR (in the news of late for having been acquired by Facebook).

Another example is augmented reality, as in Google Glass. Here, important contextual information is projected within the user’s field of view so that it is continuously present.

NUIs can also combine these different types of technology. For example, the combination of speech and body/hand gestures is used in the Microsoft Xbox Kinect sensor. Microsoft has opened the sensor up with free APIs and an SDK for developing NUI-enabled software for Windows using the Kinect for Windows sensor. The Kinect was previously sold as an optional peripheral for the Xbox 360 and is now a bundled part of the new Xbox One gaming and home entertainment console.

http://www.youtube.com/watch?v=Hi5kMNfgDS4

This particular device features two cameras for stereo machine vision with depth perception. Software in the device can make out limb and finger movements, hand gestures, face movements, facial expressions, even the pulse of the user, and use these as inputs for control. Multiple microphones are present for noise cancellation and for recognizing the directionality of sound. There is software on board for voice recognition and for facial recognition. The user controls the game by voice inputs and by moving his body and hands.

This represents a more natural way to interact, and it brings to life some of the models of human-computer interaction foreseen by science fiction. It is not hard to foresee possible training applications, especially with the device’s APIs open to commercial and research development. The following links and videos give some sense of what is being done with this sensor.

http://openkinect.org/wiki/Main_Page

http://www.microsoft.com/en-us/kinectforwindows/

http://createdigitalmotion.com/2013/10/microsoft-embraces-open-creative-coding-new-kinect-openframeworks-cinder-integration/

http://blogs.msdn.com/b/kinectforwindows/archive/2013/08/13/turn-any-surface-into-a-touch-screen-with-ubi-interactive-and-kinect-for-windows.aspx

http://www.youtube.com/watch?v=Iu2XH5p_hMM

The Xbox One with Kinect is probably the hardest push right now for mass adoption of Natural User Interface technology in the home. There is also a Kinect for Windows sensor coming out that will allow games and software to be written using the device to control a computer.

http://www.microsoft.com/en-us/kinectforwindows/develop/

Another potential route forward might come in the form of the iPad a few generations down the road, if and when Apple can put something similar to today’s Kinect sensors into the iPad. The iPad would make a sophisticated control device for the TV, with the iPad mirroring to the TV screen. This hypothetical future iPad could watch you through twin cameras to read your eye movements and facial expressions or detect hand gesture inputs. The microphone inputs, combined with cloud services, could interpret speech queries or commands. The touch screen would detect button presses and finger or stylus drawing inputs. The accelerometer and gyro would recognize whether you’re sitting or standing and in what orientation you’re holding the iPad, so you could hold the iPad in different orientations in space as a control surface or workspace. The limitation of the Xbox Kinect sensor is that it watches from farther back, so it can’t yet pick up as much nuance and detail as a closer camera could. A camera in the iPad could do that.

I wouldn’t be surprised to see Apple do this, getting everyone used to this method of interaction, and then hitting the market with the long-predicted Apple TV, integrating something like the Kinect sensor and multiple slick layers of Natural User Interface built in. It would have a big impact.

Learning and Training Applications

All of this promises to really shake up how we interact with computers. And since interaction is such a key element of computer based training, this has implications for us as designers of instruction.

There are a number of foreseeable learning and training applications for this sort of technology. To name just a few examples:

Speech recognition and text to speech could be useful for language learning.

Gesture based controls could enable more lifelike interaction with 3D models, especially if using stereoscopic 3D image displays. This could potentially be used for a variety of applications in technical training:

  • to manipulate and examine equipment in maintenance training
  • to learn the structure of machinery by virtual manipulation of 3D models, including assembly and disassembly. Haptic feedback outputs could even simulate the sensation of touching and working with the actual equipment.
  • in biochemistry, to manipulate 3D models of large molecules like proteins to understand their structure and active sites
  • to visualize biological reaction steps

Virtual reality could be used to simulate the operation of certain complex equipment, including running through rare or emergency scenarios.

For soft skills, imagine the immersiveness of a training program where you interact with a 3D character in a scenario using simply your body language and your speech; the realism is greatly heightened. Or imagine a training program that can give feedback on your body language, verbal tics like filler words, and your facial expressions while you give a simulated presentation or sales pitch.

 

 

 

Concept: Promoting persistence with exercise equipment through video gaming

Introduction

Fitness is a big business.

In the US, for example, as of 2009:

  • Health clubs: $20 billion a year, 45 million memberships.
  • Consumer fitness equipment: $3.2 billion

At the same time, there is a lot of concern about public health from diseases related to obesity and lack of exercise.

People spend a lot of money in particular on home exercise equipment: treadmills, rowing machines, elliptical trainers, and exercise bikes. But people don’t tend to stick with it. The initial motivation is there, but it often doesn’t persist. A common story is that a family will buy one of these for Christmas as part of some intended New Year’s resolution. More often than not, the box is opened, the machine is set up in the basement, it’s actively used for a few weeks or months, and then it’s forgotten about.

What can be done to help this? Is there a solution to this performance gap?

Gaming and Motivation

One area that excels in creating and then sustaining motivation (persistence) and intensity of engagement is video games. People will spend hours and hours on games, sometimes to the degree of forgoing food, sleep, other activities, and human contact. Games achieve this with a range of different mechanisms: fun and variety, a mix of long term, middle term, and short term goals (game completion, boss or world completion, and minor task or level completion),  continuous informational feedback and rewards in the form of scores and achievements, competition with other gamers, and social communication tools to allow discussion of game strategies and mutual social based motivation.

Could this power of video games be harnessed to encourage people to make more frequent and more effective use of their home exercise machines? Namely through fitness based games that make use of and incorporate the use of the exercise machines?

A solution: fitness based games using the equipment

Fitness based games are something that already exist. There are a number of titles for Wii, Xbox 360 with Kinect / Xbox One, and PS3/PS4. Often, these will make use of the motion based controllers. For the Wii and PS, this involves a handheld motion controller, while for the Xbox with Kinect, this involves simply moving in front of a sensor that captures body movement. The problem with these is that they just involve you moving or jumping around in your living room. There is not a lot of space. This works for people that like yoga and aerobics, but not so much for people that like to bike or run.

As far as I am aware, there aren’t any titles that make use of home exercise equipment. This is, in my mind, a gap just waiting to be filled. Microsoft, the manufacturer of the Xbox, would be in a nice position for this because their Kinect controller doesn’t require you to hold something in your hand to use it.

Microsoft could partner with the exercise equipment manufacturers to build free to download Xbox One game apps that make use of the Kinect sensor and the use of the equipment as part of fun, engaging games.

For example, imagine a bike racing game where you control the game, through the Kinect, by pedaling the exercise bike, with nice HD scenery as you go along whatever race course you like: mountains, beside the ocean, along a river, or famous courses like the Tour de France, ideally displayed in stereo 3D. You could have a training mode and a racing mode, which could offer either a short track race or a longer road race.

Or a running game / road race trainer game tied to major treadmill models. Go for a run by yourself or in some chosen scenery, either in nature or in some city. When you want to test yourself, you play a race mode that puts you in a famous road race course like a big, renowned 10k or the New York or Boston Marathon. Again, 3D rendered and ideally displaying in stereo 3D.

Or for the elliptical, it could be cross-country skiing.

Or a rowing game using a home rowing machine with well known scenes or race scenarios. For example, a game scenario where you train with the Harvard or Cambridge crew, row on an Olympic course, or relive some big race on the Seine in Paris from the early 20th century.

Make sure there’s an interactive display layer the user can access for exercise and training analytics, along with some sort of virtual coaching, maybe using interactive avatars. In addition, there would ideally be a social network layer to share “achievements” or get encouragement from friends who are also on an exercise program. A space for monitoring vitals like heart rate over time, and tools to manage diet and nutrition and suggest meals, would also be useful.

Ideally, Microsoft would want to work with the equipment designers and manufacturers to incorporate Wi-Fi or Bluetooth connections in the equipment so that the Xbox game, via the Xbox software, can wirelessly control the exercise equipment within manufacturer and user set safety tolerances, and also so that the equipment can wirelessly send its current settings back to the game. So, for example, if you’re playing your running game, you’re on the last kilometer of a 5K race, you want to sprint for a PB or to catch someone, and the software calculates that you’re not over-exerting for your age and fitness level, the system raises the speed of the treadmill automatically to match your attempt to go faster. Or the bike or treadmill could automatically adjust its inclination when the game gets to a hill on a training or race course.
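
To make the safety-tolerance idea concrete, here is a small hypothetical sketch of the clamping logic in Python. Every name here (the treadmill handle, set_speed_kph, the limits) is made up for illustration; nothing comes from a real Xbox or equipment SDK.

    class FakeTreadmill:
        """Stand-in for a wirelessly controllable treadmill (purely hypothetical)."""
        def set_speed_kph(self, kph):
            print("Treadmill speed set to", kph, "km/h")

    class GameSpeedController:
        def __init__(self, treadmill, user_max_kph=16.0, manufacturer_max_kph=20.0):
            self.treadmill = treadmill
            self.user_max_kph = user_max_kph                  # set by the user
            self.manufacturer_max_kph = manufacturer_max_kph  # set by the maker

        def on_sprint_detected(self, requested_kph):
            """Called when the game decides the runner is trying to speed up;
            never exceeds the user or manufacturer safety limits."""
            safe_kph = min(requested_kph, self.user_max_kph, self.manufacturer_max_kph)
            self.treadmill.set_speed_kph(safe_kph)
            return safe_kph

    controller = GameSpeedController(FakeTreadmill())
    controller.on_sprint_detected(22.0)  # clamped to the 16 km/h user limit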

It would also be good for the games to be multiplayer, ideally multiplayer over the internet. Then people could go on at the same time and race each other on their equipment over the Internet. This could help additionally with social based motivation.

With the right gamified elements and incentives and feedback, you could help people make more effective use of fitness equipment in their lives, help them persist at it, and get fitter. The machines would be better used, and health outcomes could be improved over the longer term.

I could even see a nice marketing strategy for Microsoft for the Christmas holiday season: make a joint marketing arrangement with the major home fitness manufacturers, the TV manufacturers, and the electronics and home appliance stores.

Arrange to set up displays in stores. Have the exercise equipment set up facing the biggest TV screen in the store, with the TV hooked up to an Xbox One at an appropriate distance from the exercise equipment, and with a well-positioned Kinect hooked up to the Xbox. Have some game running in multiplayer mode. People could try it out and have a little low-intensity friendly competition right there in the store. And by juxtaposing the Xbox One, the TV, and the exercise equipment in a way that shows them working together, you might well increase the sales of all three, benefitting the manufacturers of the devices and the store that sells them. Everyone wins.

Further Reading

http://mobihealthnews.com/22628/xbox-one-kinect-2-0-and-the-future-of-health-technology/

http://www.ciaomom.com/getting-fit-with-nike-kinect-training/