The television as a learning and training space

Introduction

Recent years have seen the world of training embrace learning on mobile devices, or mLearning for short. There are many reasons for this:

  1. Client demand, as people increasingly browse the internet primarily through mobile devices
  2. Clients always having their phones with them, allowing many small moments during the day when learning could potentially take place
  3. Phones having many sensors and input methods, allowing for innovative interactions
  4. Phones allowing multiple communication methods

Designers and developers have been working on designs for mobile learning. At its most basic, this has taken the form of using file formats so that videos or presentations will play on a tablet, or even just an iPad, or of packaging the training as an iPad app or making it playable within some container app.

Others, approaching the matter with some semblance of actual seriousness, have gotten more creative and tailored training to the unique affordances of smartphones and tablets. They make learning games and activities that use sensors as inputs for motion- or touch-based interactions. Or they use location information. Others use ideas of informal learning and performance support to break training into small, focused pieces that can be accessed in a spare moment.

eLearning authoring tool providers advertise their tools as enabling responsive eLearning. They hype the promise of being able to publish content to multiple media and device types, for desktop, tablet, and mobile.

This is good for learning and training. However, in this focus on mobile, we may be losing sight of possibly the next key development of web-based learning and training – the television as a learning and training space.

Television as a new window to Internet content and learning

Sitting on a couch with a tablet is a nice way to watch a video or presentation. The device is light and comfortable. But, still, it’s a 10-inch screen. The small size is a compromise, trading visibility and screen real estate for portability.

But across from the couch is what? The TV. Big screen – 30, 40, 50, 60 inches. 1080p HD, easy to see, nice to watch, decent speakers. And you don’t have to hold anything.

Television used to be a box on which we watched traditional television programs, whether delivered over the air or through cable or satellite broadcast. Then came VHS players, DVD and Blu-ray players, and video game consoles. The living room TV instead became the screen at the center of a home entertainment system.

Now, increasingly, televisions are also becoming just another one of the screens, albeit much bigger ones, through which to access internet content, whether for entertainment, work, or learning. This takes the form of video, audio, text, and apps. The long-promised fusing of internet and television has arrived, with several different options available to make this possible.

Many TVs are now “smart TVs,” combining a TV with a computer. These TVs are WiFi enabled, with built-in interfaces and platforms with app capability. Apps allow straightforward connectivity to content sources like YouTube, Netflix, digital music streaming services, and other streaming media.

Modern TV screens also allow for stereoscopic 3D. While no longer the faddish selling point it once was, most newer TVs can display stereoscopic 3D content, whether accessed over the web or on 3D Blu-rays. Among the screens through which we consume content, the TV stands out as the one dependable 3D screen that people commonly own.

TVs can also be connected to gaming systems like the PS4 and Xbox One, the latter of which includes the Kinect motion and voice sensor. These systems, while meant primarily for gaming, are also intended more generally for home entertainment, with app platforms and apps like Netflix and YouTube for viewing internet video content.

Set-top boxes like the Apple TV, as well as many WiFi-enabled Blu-ray players, offer a similar bridge between the television and the internet.

Tablets, phones, and laptops can share screens wirelessly to TVs, either through devices like Apple TV, game systems, or via Miracast / WiDi from enabled devices.

It is easy to get content onto the TV. The TV will either be set up with sensors, whether built into the TV itself or via something like an Xbox, or the person will be screen sharing from a device that has sensors and a touch-based control interface. Either way, it becomes easier to browse, select, and interact with online content shown on the TV.

Designers, both web designers and instructional designers, need to think about training and learning possibilities in this space, just as they should be thinking about that OTHER class of displays that will be more and more present in people’s lives – wearables and augmented/virtual reality tech such as Google Glass and the Oculus Rift. (More on this in a future post.)

Challenges

There are a few challenges in this area:

Platforms

One main challenge is that there are so many different sorts of configurations and ways to connect the internet to the TV:

  • Game consoles such as the Xbox One or Sony PS4
  • Smart TVs
  • Set-top boxes like the Apple TV, WiFi-enabled Blu-ray players, or Chromecast
  • A computer connected to the TV, sharing the screen via HDMI cable
  • Wireless screencasting from a laptop, tablet, or smartphone to the TV, whether through Apple AirPlay or the up-and-coming wireless screencasting standards WiDi (Intel Wireless Display) and Miracast

This makes things difficult for developers, as there is no one clear dominant target for development.

The gaming consoles, which have positioned themselves not only as gaming platforms but as home entertainment hubs, may be one promising avenue, as the multi-billion-dollar gaming industry already attracts many skilled developers to these platforms. Microsoft’s Xbox One in particular runs an operating system related to Windows and uses the same development toolkit. These consoles also offer innovative ways to interact with content on the TV through different types of controller devices, including body movement and voice-based controls. The gaming console option thus offers interesting possibilities for innovative learning interactions.

A more straightforward, elegant solution may be smart TVs, where everything is in one box. This would especially be the case if, in the future, the television included sensors that could be turned on for Kinect-like interaction via cameras and microphones. One challenge, however, is attracting developers to different platforms from different manufacturers. Possibly only a company like Samsung, which manufactures phones, tablets, computers, and TVs, would be in a strong position to carry over advances in interfaces and interactivity from other devices to TVs. Or someone like Apple.

The other challenge would be emotional reactions from consumers. When early press about the Xbox One suggested that the system would require the Kinect sensor – which includes stereo cameras and microphones – to always be on, even when the system was not in use, privacy fears ran high, and there was a backlash.

It is possible that TVs will evolve in coming years into a sort of all-in-one computer, with a web connection, innovative web browsing methods (the concept of adaptive web design will also have to adapt and evolve to cater to the TV as a screen), app platforms, and built-in SSD storage, possibly supplemented by cloud storage.

Quite possibly the next stage of the Apple OS – Android – Windows – Linux battles will be fought on the battlefield of internet-connected TVs. Ubuntu (a variant of the Linux operating system), for example, has been positioning itself for some time as a flexible multi-platform OS, including for the TV.

Wireless screen sharing may be the simplest approach, making the smartphone, tablet, or PC the central point of control of what appears on the TV screen. Desktop and laptop computers would have limits though in terms of enabling learning interactions.

Tablets and smartphones could potentially allow for interesting learning interactions through the accelerometer, gyroscope, and touch sensors.

The scene is probably going to be messy for a few years with a lot of options making it hard for developers to pick. This will make it hard to form development communities that will drive things explosively forward.

Interface and Interactivity

The possibilities for learning and training will depend somewhat on the options available for interactivity. One of the challenges in making the TV a hub for learning content is how the user can control and navigate content while sitting or standing across the room. Good eLearning and online training especially require rich interactions.

But how do you interface with the TV? A computer, you sit right in front of and control via mouse and keyboard, and to a lesser extent, microphone and camera. A tablet or smartphone, you tap, swipe, turn, and talk to, because again, you’re up close to it and it fits in your hands.

TV is different. You sit back from it, or stand back from it. You’re not going to stand at your TV tapping the screen like those big maps on CNN election night.

There are probably four major options:

  1. Some modification of a traditional TV remote, possibly one with a touchscreen and accelerometer/gyroscope sensors
  2. Some camera and microphone based sensor like the MS Kinect that lets you control via voice and body gesture
  3. Controlling through a laptop computer, tablet, or smartphone, which shares the screen wirelessly via WiDi, Miracast, or Apple Airplay and lets you control things via touchscreen and motion sensors. The TV simply becomes a screen to mirror content on the other device.
  4. A smartphone or tablet is paired with the TV via an app, and serves as a WiFi-connected touch- and motion-based controller.

All of these could probably be made to work, though options 2 and 4 are probably the most plausible going forward, both in terms of usability and in terms of building on existing platforms.

 

Learning and Training Possibilities

The matter then becomes how to harness this emerging new portal to the internet for learning and training.

A few possibilities come to mind.

  • Any passive consumption of video content, particularly content in HD or stereoscopic 3D format – YouTube content, for example. A TV would be the most natural and comfortable way to watch. Everything becomes bigger and more lifelike.
  • Educational gaming activities using a gaming controller
  • Web content browsing with voice and gesture inputs enabled by something like the Kinect. Say, for example, a view of different documents or different levels of detail making use of different focal planes in a 3D field of view. This allows information and screen elements to be arranged not just along dimensions of horizontal and vertical, but by depth as well.
  • Interaction with stereoscopic 3D models using the Kinect sensor – such as chemical structures, architectural structures, geographic feature models of an area, or components of equipment.
  • Live, synchronous, lifelike teleconferencing via TVs and Kinect sensors, using apps like Skype or something similar embedded in a virtual classroom application. A virtual classroom would work very well on an HD television with a connected camera and microphone: for live, face-to-face communication, for conversational practice in language learning, or for a live virtual tutoring session.
  • Using the Kinect, the learner practices some psychomotor skill. At the same time, the Kinect camera lets a remote instructor watch the performance and comment. The Kinect could also capture data to assist in analyzing biomechanics.

These are a few sample ideas. Maybe readers can think of others.

Conclusion

The past six years have seen dramatic changes as mobile devices came into the mainstream as a new space for online learning, with unique affordances for interactivity. The mobile web and mLearning have expanded our horizons for entertainment and learning. The television, connected to the internet, offers a new field on which we can ply our craft as designers and developers. It’s a developing field with a lot of options that will take some time to sort out and settle down. But for those of us tasked with helping our clients and students learn and develop, it’s a field we would do well not to ignore.


 

Once again, feel free to share your comments, either below, or via social media.

An idea whose time has come? Reusable Learning Objects.

Introduction: A brief history of learning objects

When I was in school in the early 2000s, one of the trendy ideas in the field of educational technology was reusable learning objects (RLOs). Learning objects were a heavily promoted idea in the 1990s and early 2000s. The idea came out of US military-funded training research, focused on two goals:

  • To standardize multiple, mutually incompatible eLearning formats used by vendors to the armed forces so as to improve inter-operability of training content, and
  • To design materials using small, self-contained, meta-tagged modules to enable reuse and thus reduce development time and cost.

The name “learning object” comes from the computing paradigm of object-oriented programming, in which small, self-contained code structures model objects and entities in the real world: their properties, their inner structures, and their interactions with other objects and entities. This paradigm allows faster development through modular design, reusable libraries of code, and encapsulation of object data within the objects.

Learning objects attempt to carry some of this success from software design and development over to the design and development of eLearning.

What is a learning object?

A learning object is a short learning piece, usually digital, from a few minutes up to as much as an hour in length, though usually on the shorter side. The learning piece is focused on one learning objective. It will generally include an introduction, explanation and/or demonstration, activities for the learner for practice and / or consolidation, and an assessment. It is an irreducible element of knowledge, an atomic nugget of learning.

It was expected that eLearning objects would use a standard format such as SCORM for the metadata attached to the objects. This would enable a learning object to be interoperable with different delivery platforms (LMSs). The idea was for each object to represent instruction for a small nugget of content tied to a specific objective.

The purpose of this was to enable reuse of training materials for faster, more efficient development of future content. Usually, when we want to reuse a body of training content as part of a new course, we need to break apart the old course, extract the useful bits, and then assemble what we want back together in a cohesive fashion.

The idea with learning objects is that they each represent one of these smallest units of learning, and the content is already broken down. All that is left when building a new course is to identify what you need to teach, find out what has already been built, evaluate it, and then either reuse or repurpose the content.

To maximize this re-usability, the learning object is supposed to be as free of specific context (audience, place, type of organization, etc) as possible. For example, if multiple audiences would want to study toward this objective, media or examples used should not be limited to only one audience.

New courses could, in theory, be built by collecting and sequencing various learning objects, with an overall introduction and conclusion and some linkages to join it all together.
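To make that assembly idea concrete, here is a minimal sketch in Python of matching stored objects against a list of required objectives. This is purely illustrative: the `LearningObject` fields, the tag values, and the `assemble_course` helper are invented for the example, not part of any SCORM tooling; a real repository would use SCORM/LOM metadata and a proper search interface.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """A small, self-contained learning piece with searchable metadata."""
    title: str
    objective: str          # the single learning objective it targets
    duration_minutes: int
    keywords: set = field(default_factory=set)

def assemble_course(objectives, repository):
    """Pick one stored object per required objective, in sequence.

    Returns the matched objects plus the objectives with no match,
    i.e. the gaps that still need to be authored from scratch.
    """
    course, gaps = [], []
    for objective in objectives:
        match = next((lo for lo in repository if lo.objective == objective), None)
        if match:
            course.append(match)
        else:
            gaps.append(objective)
    return course, gaps

# A toy repository of already-built objects (invented examples)
repository = [
    LearningObject("Intro to SCORM metadata", "describe-scorm-metadata", 10, {"scorm"}),
    LearningObject("Tagging content", "apply-metadata-tags", 15, {"metadata", "tagging"}),
]

course, gaps = assemble_course(
    ["describe-scorm-metadata", "explain-xapi-statements"], repository
)
```

Note how the gaps list surfaces exactly where new development effort is still needed, which was part of the promised efficiency of the approach.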

Critiques of the Learning Object concept

While learning objects were a trendy topic in the ’90s and ’00s, the idea was not without its critics.

There are several critiques of the learning object concept:

  • The idea of learning objects was pushed primarily by the military, out of its own concerns for operational efficiency and cost savings, rather than any sense that it would produce better learning. The concerns were quantity of output and efficiency rather than quality of education.
  • The idea mainly focuses on eLearning, and specifically eLearning for one solitary, self-paced learner. Where social forms of learning involving cooperation and collaboration fit within this model was never clear.
  • If context is removed, it is harder for learners to relate to the content on a concrete level. Media, graphics, and examples are generic, or drawn from a wide range of situations; they don’t speak closely to the learner’s particular reality. As such, you risk losing the attention and motivation of learners, because they may not see the relevance clearly.
  • If context is removed, it is harder for learners to make meaningful connections between the content and other content unless the developer puts in extra effort to restore this connective material. Statements like “as you remember from module 1,” or “you will learn more about this in the coming module,” or “this is related to these other topics” would mostly be removed from learning objects to maximize reusability. Learning these sorts of connections is an important part of learning new material, and is part of what makes new learning stick together cohesively in the learner’s mind.
  • Assembling courses from smaller learning objects is not just a matter of sticking together Lego blocks or assembling IKEA furniture. Remember that all the context that serves as a kind of connective tissue for the objects has been stripped away to allow reuse. To make the course most effective, you need to add contextual glue/mortar between the pieces to improve flow and relevance. This cancels out much of the advertised time savings.

Disillusionment…

So up through the early and mid-2000s there was a lot of hype about learning objects. When I was in my Educational Technology program at that time, the concept was talked about, and readings were assigned, including critiques of the concept. Some large companies, schools, and educational networks did a lot of work in this field, and some of these projects are still continuing. But the idea never took off broadly as advertised.

eLearning continued to gain broader acceptance in the academy and in industry. SCORM standards for eLearning content metadata and inter-operability went forward and became commonly used standards supported by authoring tools and Learning Management Systems. eLearning authoring tools became increasingly sophisticated, allowing simple eLearning to be developed more and more efficiently.

But the strict learning object idea did not stay top of mind for practitioners, who grew disillusioned with the concept as they experienced its limitations and difficulties, witnessed lots of bad eLearning content, and found the time savings and reusability to be much less in practice than advertised.

The term learning object faded from common conversation.

In the meantime…

Life went on, technology advanced. Broadband internet became more widespread with faster speeds. This allowed easier upload and download of multimedia content, even video content.

The Web 2.0 era of user-generated content came about. PHP discussion boards. Wikipedia. YouTube. Social media like Facebook. Question-and-answer sites like eHow and Quora. A Web where content could easily be generated by users, tagged for search, and uploaded.

This was furthered by the mainstreaming of mobile internet devices. The iPhone 3G appeared in 2008, and the explosion of the smartphone market followed. This led to a proliferation of mobile apps on sophisticated pocket computers with cameras, microphones, and other sensors. Tablet computing went mainstream with the iPad in 2010. With these mobile devices came touch-based computing and context-aware computing. The widespread rollout of high-speed mobile networks enabled voice, audio, and video transmission. Smart, small, lightweight connected mobile devices mean that the user almost always has one on hand.

In the field of educational technology and training, there is an increasing emphasis on informal learning such as job aids, performance support systems, and just-in-time learning.

Finally, eLearning authoring tools have become much more user friendly, making it easier for experts to build their own content and distribute it. This broadens the development pool and makes it easier to generate content.

All of these developments and changes have come over the past ten years. We are starting to see a very different landscape from the one in which the learning objects concept originally peaked and then faded in the early 2000s.

When you look at all these developments together, and reflect, you start to wonder if maybe that old idea of learning objects might have renewed relevance in today’s environment.

 

So what’s changed?

So putting it together, what is different today?

Cell phones and inexpensive but powerful recording equipment let us easily record content. Easy to use authoring software lets us easily assemble media into small but meaningful packets of learning material. Ubiquitous network connections and sharing features in apps let us easily upload content from almost anywhere.

Platforms like YouTube, SoundCloud, Facebook, and others give us a place to upload and organize content, share it with others, see what others have shared, and pass content further along.

To keep up with the rapid pace of the age, these pieces of content are short and focused. In line with trends in informal learning and continuous learning, a lot of learning material is posted on these sorts of platforms and on company intranets, so learners can access brief, relevant material as needed on the job rather than taking a formal course. There is also the trend of microlearning, which focuses on short learning pieces of a few minutes in length. Short learning pieces also work better with the usage patterns of smartphones.

Responsive web design and responsive eLearning design allow content to be developed once, hosted in one location, and accessible from different devices, at any time, wherever the learner may be.

New standards such as the TinCan API/xAPI make it easier and more flexible to track learning on materials accessed and hosted in different locations and in a wider variety of formats.
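As a concrete illustration, an xAPI “statement” is a small JSON record of the form actor-verb-object, which a learning tool sends to a Learning Record Store. The sketch below builds a minimal statement in Python; the learner name and module URL are invented for illustration, though the verb URI is one of the standard ADL verbs. This is a sketch of the statement shape only, not a complete client.

```python
import json

# Minimal actor-verb-object xAPI statement. The learner and module
# identifiers here are hypothetical placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Jane Learner",
        "mbox": "mailto:jane.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/modules/safety-basics",
        "definition": {"name": {"en-US": "Safety Basics micro-module"}},
    },
    "result": {"success": True, "score": {"scaled": 0.9}},
}

# Serialized form, ready to be POSTed to a Learning Record Store
payload = json.dumps(statement)
```

Because the statement carries its own identifiers and result data, the same record can describe learning that happened in an app, on a website, or anywhere else, which is the flexibility the text describes.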

Conclusion

And so we see a lot of elements of this original vision of learning objects being realized thanks to these many separate factors coming together.

And though the concept has its valid criticisms, learning objects may offer an interesting and useful model to help manage and guide this new world of content production and sharing.

The earlier discussions of 10-15 years ago may give useful insight as to how to design, structure, and build short content. As well, it may guide us as to how to meta-tag, store, and search for these materials. And, finally, these earlier discussions may give us insights into how to repurpose and combine these learning pieces into larger, cohesive learning experiences, both online and blended learning, for both individuals and groups.

 

Thoughts?

 

Additional Links

http://www.reusability.org/read/

http://edutechwiki.unige.ch/en/Learning_object

Encouraging worker engagement and ongoing professional development with mLearning and Gamification

Introduction

I have a confession. I fiddle with my phone at work. We all do this from time to time throughout the day: when we’re bored, when our brain is mush, when we’re stuck or blocked, or just when we need a change of pace or a break. It’s compulsive (damn you, variable-ratio schedule of reinforcement!)

The companies we work for don’t tend to like this so much (something about productivity), which is why most of us try to keep it to a dull roar. Many companies have de jure HR policies technically prohibiting such a thing. It’s mostly unenforced, though, because most people are responsible, and because, really, who cares so long as objectives are being met. (Also, front line managers are usually just as glued to their phone screens!)

But what if instead of fighting this tendency, companies were smart about it?

I know. Crazy talk, this. But bear with me.

What if companies and their management embraced that workers mostly have smartphones, and simply accepted as a given that they are going to take them out during the day and use them? What if, taking this as a given, they looked at ways to make lemonade out of lemons, and found a way to harness this natural behavior of employees and channel it toward ends useful to the company and to the worker as an employee?

Mobile phones present an excellent opportunity for companies to help encourage engagement at work and ongoing professional development. The concept presented here is a novel idea for encouraging worker engagement and promoting ongoing professional development in an organization through a combination of mLearning and gamification.

Technological component of solution

Informal learning through short mLearning modules

Workers have phones and use them during the day. The company has new policies and procedures it wants the workers to learn, and also wants to have a workforce committed to ongoing personal and professional development. The company wants workers that are always learning and developing their skills. Combine these two elements and make the worker’s smartphone a platform for employee training and development.

Build continuing professional development materials in small mLearning modules, targeting a length of 2-5 minutes. Design these as informal learning pieces. Include typical didactic learning elements, but also fun, hands-on activities and games.

Design the modules as stand-alone learning moments that don’t depend too much on other learning pieces, somewhat like the late-1990s/early-2000s idea of “Learning Objects.” Design the modules with a “mobile-first” approach, so that they are intended to be seen on mobile phones and look good and are easy to use there. Ensure that the modules are meta-tagged according to some logical schema/ontology of tags appropriate to the workplace or industry, so that they are easily searchable and findable. Make the learning modules available on a TinCan API/xAPI enabled LMS.

Ensure that there is an interface for searching for and browsing modules that is easy and time-efficient to use on a smartphone.

Link it to HR

Track the modules taken by learners and their scores on any assessments or pass/fail. Send this data to HR data systems for tracking.

Link HR systems back the other way, so that they can recommend specific modules based on the learner’s available time and defined professional development objectives.

Allow the system to send suggestions based on most-viewed content, most-uprated content, and the types of content the learner has enjoyed in the past. Enable a rating system, where learners can provide as much evaluation data as they like: basic “smiley face” impressions data, a more in-depth questionnaire or short survey, or both.
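A suggestion feature like this could be as simple as a weighted score over popularity, ratings, and tag overlap with the learner’s interests. The sketch below is purely illustrative: the field names, module titles, and weights are invented for the example, not drawn from any particular LMS.

```python
def recommend(modules, learner_tags, top_n=3):
    """Rank modules by a blend of popularity, rating, and tag overlap.

    `modules` is a list of dicts with 'title', 'views', 'avg_rating'
    (on a 1-5 scale), and 'tags'; the weights are arbitrary
    illustrative choices a real system would tune.
    """
    def score(m):
        popularity = m["views"] / 1000  # crude normalization
        rating = m["avg_rating"] / 5
        affinity = len(m["tags"] & learner_tags) / max(len(m["tags"]), 1)
        return 0.2 * popularity + 0.4 * rating + 0.4 * affinity

    return sorted(modules, key=score, reverse=True)[:top_n]

# Toy catalog (invented data)
modules = [
    {"title": "Email etiquette", "views": 900, "avg_rating": 3.5, "tags": {"communication"}},
    {"title": "Pivot tables", "views": 400, "avg_rating": 4.8, "tags": {"excel", "data"}},
]
picks = recommend(modules, learner_tags={"excel"}, top_n=1)
```

Even a naive blend like this surfaces highly rated content matching a learner’s declared interests ahead of content that is merely popular.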

Within HR systems, take the data on module completion and track it against documented development objectives. For mandated training coming from HR, have subscribed modules or module clusters. This content would be suggested or pushed out from HR. Use notifications, either in-app or via text/IM/Lync.

Gamify

Add a gamification layer to encourage and reinforce engagement with the system, though be careful to keep it within reason so that learners don’t feel “gamed.” Keep a tally of hours spent, courses completed, and skills learned. Use gamified elements like badges and leaderboards. Give learners incentives to keep engaging with the system. Track some of these stats through HR to have data on how much the employee is engaging in learning activities.

Make it social

Allow users to rate content, whether with a simple upvote/downvote or with a five star system. Allow them the opportunity to make comments. When a user is browsing modules, make information on average ratings or upvotes/downvotes visible to learners to help them with their choice. This gives feedback to designers/developers and also helps to identify quality content for other learners. This data gives a good sense of what sort of content the learners want and like, and this can be helpful for training development teams as a guide for how to allocate resources for future development.

Also, allow workers to recommend or share content they like with others. This will let workers help you promote good content and will further encourage engagement with the system.

Help learners get access

Make WiFi readily available to employees without restrictions. Employees are not going to really engage with this if you make them use their own data plans. You provide wired internet access to employees as a tool of work; do the same with in-building WiFi.

Human system component of solution

As with any human systems intervention, however, technology alone will not do the trick.

Workers need to be openly encouraged to use the system at work. As the system is rolled out, the teams responsible need to make a concerted effort to promote it, both initially and as ongoing reinforcement. This has to be more than a mass email to “Employees: All.” A nice promotional video will be helpful, but workers also need to be introduced to the system in a face-to-face meeting involving their front-line management and perhaps their director. Give people a chance to ask questions and get answers.

“Bored? Brain-fried? Need a break? Tired? Stuck? Writer’s block? Switch gears for a few minutes and play with your phone, so long as you’re using it to learn something.” Everyone should be actively encouraged to do this, and made to feel comfortable taking advantage of the policy.

Management at all levels, from the top down, needs to set an example of welcoming this, both in words and in concrete behavior. Managers also have to be encouraged to use the system (within reason) and to be seen using it.

The data that HR collects (how many hours workers are engaged in learning, what they are learning, and their completion stats) can add to or supplement performance data for annual or semi-annual reviews. Workers should have visibility, via some dashboard, into the same sort of data that HR has summarizing their learning and training activities. That way, a worker can go into performance review meetings armed with data demonstrating commitment to new learning and skill development. The learner can use this to start conversations about raises, or about getting more resources or support for deeper training or a broadening of tasks. Conversely, managers can also look at the data to start their own conversations.

Conclusion

Workers have smartphones. Workers are going to look at them during the work day. If companies are smart and tech savvy, they can encourage ongoing training and development by putting out learning content tailored to viewing on workers’ smartphones. The effectiveness of this is reinforced if the company includes successful elements of gamification and social media and backs the project with support from HR and management.

Swimming an Ocean: Motivation and Persistence in Massive Open Online Courses

Introduction

As I’ve discussed in a number of recent posts, MOOCs, or Massive Open Online Courses, are a big phenomenon lately. A big issue of discussion with regard to MOOCs is how to support motivation and persistence in these courses.

This is a form of learning that works very well for certain groups of learners, but, traditionally, not so well for others. This has been an issue for distance learning in general.

In particular, MOOCs work well for autodidacts, or self-directed learners: people who are able to learn on their own, enjoy doing so, and don't really need a teacher. (I count myself as one of these eccentric and annoying creatures!) Such learners are able to learn regardless of the format, and the online aspect makes the learning process very convenient: course materials can be accessed from a home computer at any time of the day or week.

It is a more difficult environment, however, for marginal students. These are students who succeed at a decent rate in traditional universities largely thanks to support systems on campus, both formal and informal: official tutoring services, study-technique mini-courses, mental and physical health support, social support organizations for minority groups, unofficial study groups of students in the same specialization, or even just extra-curricular and social organizations. All of these features of the brick-and-mortar landscape help keep students mentally, socially, and psychologically engaged in the university community and in their courses. An active area of inquiry is how to keep these sorts of students engaged enough to persist with online courses all the way through to completion.


Measuring Student Engagement in Massive Open Online Courses

One metric critics bring up a lot with respect to MOOCs is their low completion rates. Typically only about 5-10% of "enrolled" students "complete" the class. This looks pretty bad on the surface.

However, we need to understand that these students are not all the same, and that enrollment in a free online course doesn't mean the same thing as paid enrollment at a university, so these traditional statistics are somewhat misleading. Because a MOOC is free, online, and somewhat anonymous, students don't have to commit money or the time to move somewhere or commute to study. As a result, registering for a MOOC carries none of the implicit commitment of registering for a paid university course. Students bring a wide range of commitment levels, intentions, and goals; even for the same student, these may differ from course to course.

Take myself as an example. I am "enrolled" in a few different MOOCs right now. (I've been between work contracts for a few weeks, so I've been taking advantage of the time to take some courses and expand my skill set. This may level off a bit as I start a new job next week.)

In one Coursera course, Interactive Programming in Python from Rice University, I have been actively engaged in the course activities. I've spent a good 10 hours a week or more, all told, watching all the lectures, doing all the assignments and quizzes on the official schedule, and posting and engaging with other students on the course discussion forums. I am also a paying participant in the course, for a couple of reasons: one, to give myself a little kick of external motivation; two, because I'm interested in MOOC certification as a personal informal research issue, and I'd like the verified completion-with-honors certificate at the end for my LinkedIn.

Other courses, though, whether because of limits on time and/or money, or because the course is interesting but overly challenging, or because the course is already finished, I'm following along with the free and open online record of lectures and assignment exercises without being fully engaged. For example, I'm also looking at a course in Functional Programming in Scala from the École Polytechnique Fédérale de Lausanne, for a few reasons. First, Scala is a trendy language used on sites like Twitter and Quora. Second, the course focuses on the more exotic paradigm of functional programming. Third, it is taught by Martin Odersky, the star computer scientist who created the language. It's fascinating stuff, and I'm roughly following along with the lectures (though a little behind now) and taking a look at the weekly exercises, but I'm more peripherally involved. Again, for a few reasons: the teacher and the presentation are very abstract, and the course is intended for an advanced audience; there's not much support for low-level things like guidance on the language's syntax, and I don't have enough time to properly dig around on the internet, at least for now; and I'm still refreshing my knowledge of computer programming, so it's not easy to process more exotic and abstract ideas right now. So I watch the lectures to absorb some of the theory. Maybe next time it's offered I can engage more, or I can play with the exercises later when I have time.

Another course on Coursera I'm following is Gamification from the University of Pennsylvania, put on by a very engaging Wharton Business School lecturer, Kevin Werbach. It's not a technically demanding course; it's mostly theory and concepts, peppered with lots of practical examples. The course is already finished, so I'm following the lecture record with interest.

Some other interesting-looking courses I registered for almost as a bookmarking exercise, so that I can find them later in my Coursera account. That's probably not how Coursera intends the enroll button to be used, but it's a functional hack that works for me. I similarly have a few courses bookmarked on Udacity and EdX.

So, just to illustrate: people have different interests and goals. The percentage of completers to enrollees assumes everyone has the same goals, which is not at all the case. Some want to master a new skill and earn a certification they can leverage professionally, perhaps to expand career options; these students tend to spend more time and complete at a higher rate. Others want to learn some new things seriously, but casually. Still others are just poking around, trying out a few courses to see which ones they'll spend more time with. So these completion rates are misleading. The more important metric is how much people get out of a course relative to what they hoped to get out of it going in. And on this metric, many or most are getting as much or more out of it than they intended.
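To put some toy numbers on this argument (the cohort sizes below are entirely invented for illustration), compare the raw completion rate with an intent-aware satisfaction metric:

```python
# Hypothetical enrolment breakdown by learner intent (numbers invented).
cohorts = {
    # intent: (enrolled, completed the course, satisfied with what they got)
    "certificate seekers": (5_000, 3_000, 3_200),
    "serious casual learners": (20_000, 2_000, 14_000),
    "browsers / bookmarkers": (75_000, 1_000, 60_000),
}

enrolled = sum(c[0] for c in cohorts.values())
completed = sum(c[1] for c in cohorts.values())
satisfied = sum(c[2] for c in cohorts.values())

print(f"raw completion rate: {completed / enrolled:.1%}")  # looks dismal
print(f"satisfaction rate:   {satisfied / enrolled:.1%}")  # looks healthy
```

The same enrollment base yields a single-digit completion rate but a strong satisfaction rate, because most enrollees never intended to complete in the first place.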

That said, there is doubtless room for improvement in making courses so that more people will engage in them to a deeper, more persistent manner. And this is where I’d like to turn to now.

Again, I'd like to illustrate from my own experience, particularly with the Interactive Programming in Python course. The course had lectures, weekly graded multiple choice and short answer exercises, and weekly mini-project programming challenges. The programming challenges were typically games. An early project was a text-based game of Rock-Paper-Scissors-Lizard-Spock (a nerd variant of the popular game Rock-Paper-Scissors). Each week, the difficulty stepped up, through a version of Pong, a version of Memory, a visual game of Blackjack, and finally a two-week project building a space shooter based on Atari's Asteroids. Here is my final submission, by the way: http://www.codeskulptor.org/#user32_yFpWoboCQZ_80.py
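As an aside for curious readers, the classic trick for that first project (a generic sketch, not my actual course submission) is to order the five gestures so that each one beats the two immediately before it, cyclically; the winner then falls out of modular arithmetic:

```python
# Order matters: each gesture defeats the two that precede it, cyclically.
GESTURES = ["rock", "Spock", "paper", "lizard", "scissors"]

def rpsls_winner(a, b):
    """Return the winning gesture name, or None for a tie."""
    ia, ib = GESTURES.index(a), GESTURES.index(b)
    diff = (ia - ib) % 5
    if diff == 0:
        return None  # tie
    return a if diff in (1, 2) else b

print(rpsls_winner("paper", "rock"))      # → paper (paper covers rock)
print(rpsls_winner("Spock", "scissors"))  # → Spock (Spock smashes scissors)
```

The elegance of replacing ten pairwise rules with one modular comparison is a nice first taste of why the course's project sequence worked so well.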

People started having trouble when the course got to the Blackjack game. This was the week when the paradigm of object-oriented programming was introduced. Coming from an abstract mathematical background, I was able to grok the basics pretty easily and finished the project early. As someone who also comes from an education background, I decided to go onto the course discussion forums to help out anyone who was having difficulties. There was a lot of chatter that week from people thinking about quitting. I spent much of the weekend the project was due on the forum, trying to talk people down from the ledge and help strangers who were stuck. It was a rewarding experience, but it illustrates the fragility of the online environment for those on the borderline.

Supporting Engagement in Massive Open Online Courses

So this is a big topic: how to support learners who are less autodidactic and less internally motivated than someone like me. I've always been someone who benefitted from a good teacher, but didn't really need one. Online learning works naturally for people like me. But what about for others? There are a few different ideas.

More development of discussion forums. This is a great option already used by many courses. Forums help build a course community and give students a place to discuss, exchange ideas, ask questions, and get answers. People like online discussion forums, and they play a useful role in the educational experience. The key is to motivate possibly borderline students to engage on the forums and to make sure the forums are a welcoming, friendly, and safe place to try out ideas and learn. This takes promotional effort from instructors, who need to spend time encouraging use of the forums and periodically remind students of their existence and usefulness. It also requires volunteer or paid TAs to help monitor and moderate the discussions.

Another option currently enabled by some MOOC providers is local meetups and similar projects. Coursera, for example, lets students organize face-to-face meetings with others in the same city taking the same course. In this way, people can form real-world study groups.

Another idea would be assigned buddies or study groups of fellow students, whether for everyone or as optional support for those who ask for it. One way to run this as a voluntary effort would be a survey at the beginning of the course where people identify their familiarity with the subject going in and their confidence in their ability to succeed. Those with little experience and low confidence could be directed to a page where they can opt in to being paired with a volunteer student with more experience and confidence. Conversely, those with more experience and confidence could opt into a pool of people who will be randomly paired with a few students who anticipate needing help. It could be entirely voluntary but incentivized, for example, with marks: if you opt in and actually help people out, you earn a few bonus marks that can make up for a bad performance on an assignment or quiz.
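A minimal sketch of how such opt-in pairing might work (the survey fields, the confidence scale, and the round-robin assignment are all my own assumptions, not any platform's actual mechanism):

```python
import random

def pair_students(survey, seed=0):
    """Pair low-confidence opt-ins with volunteer helpers, round-robin.

    `survey` maps student id -> dict with assumed fields
    'confidence' (1-5) and 'role' ('wants_help', 'volunteer', or None).
    Returns a dict: helper id -> list of helped student ids.
    """
    helpers = [s for s, v in survey.items() if v["role"] == "volunteer"]
    seekers = [s for s, v in survey.items()
               if v["role"] == "wants_help" and v["confidence"] <= 2]
    if not helpers:
        return {}
    random.Random(seed).shuffle(seekers)  # avoid systematic ordering bias
    groups = {h: [] for h in helpers}
    for i, s in enumerate(seekers):       # deal seekers out round-robin
        groups[helpers[i % len(helpers)]].append(s)
    return groups

survey = {
    "ana":  {"confidence": 5, "role": "volunteer"},
    "ben":  {"confidence": 1, "role": "wants_help"},
    "carl": {"confidence": 2, "role": "wants_help"},
    "dee":  {"confidence": 4, "role": None},
}
print(pair_students(survey))
```

Round-robin assignment keeps the load per volunteer roughly even, which matters if the bonus-marks incentive attracts only a few helpers.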

Another option to make things more lively and interactive than discussion forums would be something like Google Hangouts. People could talk face to face about ideas in the course. This would work best probably with arts and social sciences content, but could find application more generally too.

Finally, another option is to include some minor collaborative exercises in the course that require working together with other enrolled students. There would be logistical challenges, for example if randomly formed teams included a number of peripheral students not taking active part in the assignments. But if those challenges can be solved, the social, collaborative aspect could help engage many in the course.


On MOOCs and Accreditation

Recently Udacity, one of the major MOOC providers, announced that, in a shift of focus, it is eliminating certificates of completion for non-paying students. Free students can still look at materials and take part in activities, but if they want anything to show for it, the only option is to pay $150 a month as a verified student. The basic PDF certificates of completion that used to be available to anyone, paid or free, who worked through the material and passed are being phased out.

This is a mistake, a move 180 degrees in the wrong direction in the ongoing process of developing a model for MOOCs and accreditation. In reality, not only should free students keep the ability to earn some sort of certification in recognition of their efforts, but further moves need to be made toward accrediting these online courses with the universities giving them. MOOC consortiums should be leaning harder on schools to make meaningful accreditation available to students, not taking away elements they were already offering.

Now, don't get me wrong; the new features Udacity is rolling out for students paying the extra money are great steps in the right direction: added human support, and grading of submitted work by professors and others with subject expertise rather than peer grading with rubrics. This is great stuff, and if it helps the resulting certificates get taken more seriously in the marketplace, so much the better. All of this furthers the legitimacy of the online medium as a way to get a university-level education and continuing professional development.

But that's no reason to throw the free students under the bus. Fine, don't let the free students have their assignments graded by profs and TAs. That is expensive, and if no one pays for it, it's probably an unsustainable model, given all the other costs of putting on a good MOOC.

Let the free students stay graded by peers, but offer them the ability to verify their identity with a webcam and a typing sample, as with the Signature Series on Coursera. (Note: the enhanced Signature Series certificates on Coursera are only available to paying customers, but the prices, $50 a course for courses that can run up to a few months, are much more accessible than Udacity's $150 per month.)

The technology for this sort of verification is already developed, so it doesn’t really cost the provider anything extra. And then give the free students a certificate for their efforts. Maybe that certificate is going to have a second rate market value compared to the paid certificates based on professor or TA grading. But at least they will have something of value to take away.

Yes, I realize that the paying students are the ones that pay the bills and keep the lights on. But MOOCs are a big deal in the first place because of the mass of free students. That is what is driving the traffic, the buzz. These millions of students are freely choosing to spend their free time in higher studies rather than vegging in front of the TV. Educational institutions should love this, and should want to encourage it.

The fact of the matter is that MOOCs blew up as an idea because of a promise, explicit and implicit, made to people. MOOC consortiums actively sold the idea that people could not only expand their knowledge, but also expand their opportunities through hard work in these online courses. The barriers to access to higher education would be lowered. If these masses of students lose faith in that vision, the whole thing will collapse. The numbers will shrink. With smaller overall enrolment, there will be correspondingly fewer that will stick around to pay for it. Part of that mass of free students eventually takes the plunge to pay for some sort of extras like an enhanced certification or a proctored examination. The free education generates paid education by helping to support a vibrant platform. The numbers also generate buzz and perceived legitimacy of the educational platform as a place to get meaningful education and training. This perceived legitimacy is key to attracting people to spend actual money on a course.

If you let the less well-off students earn certification and/or credit when they are willing to (1) do the work and (2) verify their identity for exams and assignments, they will be able to use that credit in professionally meaningful ways to get into fruitful careers. Given a fruitful career path, these students will have the means to become paying students later as they continue their professional development.

This is the sort of social-welfare outcome that people in higher education say they want, right? Then make it happen. It's something both liberal-minded (give to help the less fortunate) and conservative-minded (remove barriers so people can raise themselves through their own determined effort) academics and administrators should be able to get behind.

Yes, it is the paying students who keep the bills paid, but understand that people are going to pay for it. If meaningful, professionally usable certification and credit are available, working professionals will pay for them. It's like digital file sharing: piracy still exists, but with reasonable and convenient ways to access content online, people pay for it.

And if you demonstrate social responsibility by making perhaps lesser, but still meaningful and usable, credentials available to those who can't afford to pay, then those who can pay will be even more motivated to support you, because you will have proved yourself an organization worth getting behind.

Instructional Design and Technical Writing

Introduction

What is the relationship between Instructional Design and Technical Writing? In what ways do these roles and skill sets overlap, and in which ways are they distinct?

Though Instructional Designers and Technical Writers often work under the same roof or on the same team, and may collaborate on projects, they have typically been viewed as distinct jobs with distinct roles. However, recent developments in communication tools and changes in the way we think about learning have made the boundaries between the disciplines fuzzier and more porous than they might initially appear. This article looks at some of the overlap and difference between the two fields, recent trends of convergence between them, and possibilities for the future.

Technical Writing and Instructional Design: a Comparison

Technical writing, as the name implies, traditionally involved writing manuals or documentation to support hardware or software. This included writing user, operation, or maintenance manuals for equipment, formal documentation of standard operating procedures (SOPs), or documentation of APIs or libraries for software. The Technical Writer engages with hardware/software developers and engineers to document key information about the systems and products.

Instructional Design on the other hand focuses on:

  • Identifying goal performances
  • Identifying performance, skill, and knowledge gaps between what is expected and what is actually happening
  • Identifying training objectives
  • Designing training interventions to close those gaps, including instructional strategies, media approach, and training delivery method, and
  • Designing assessments to measure learning.

To get needed information, the Instructional Designer engages with subject matter experts (SMEs) in the relevant discipline.

Technical writing has typically focused on text-based written materials with still images, photos, graphs, and charts. The intended medium has traditionally been print, though more recently this has shifted to include digital texts as well. The end products have been mainly text-based resources meant to be used by people involved with a company's hardware and software products as on-the-job or on-task references.

The products resulting from Instructional Design processes have run the gamut from written materials to classroom lessons, workshops, eLearning, and blended learning, and have involved various media, including still images, photos, graphs, charts, videos, and animations. The end results have been learning resources meant to be used as part of a formal training course, to help learning, allow practice of what is learned, or test how well the student learned.

Technical Writing and Instructional Design: A Convergence?

So this seems relatively clear cut, and probably, historically speaking, it was. However, ongoing developments in the past decade or so have served to erode some of these distinctions.

Some describe the difference between the two fields as being that Instructional Design focuses on promoting learning and performance improvement and on measuring achievement, while Technical Writing seeks simply to inform. On close examination, however, this does not hold up. After all, if the Technical Writer is being paid to inform, it is obviously toward some intended end of learning or performance support. There is at least implicitly an understanding that users are supposed to learn and improve their use of the documented product through the reference.

Some might not think of technical manuals as learning materials, since they aren’t used as materials in a course, and users don’t take a test after using them. Most users utilize manuals as references. When they have a question, they look up the relevant section, and find out what they need to know. This is a form of learning, however. It is simply a different form of learning called informal learning. Informal learning is the learning that takes place outside of formal courses, often in an ad hoc, unplanned, task and needs-based manner. A learner, often an adult learner, though not necessarily, has a question. The person looks up a reference in one of the resources available at hand, whether from computers or people. The person gets the question answered, and gets back to work.

This is “just-in-time” learning within the workplace and within the normal workflow rather than “just in case” learning  carried out away from the workplace in a course. If you think about it, this is actually our natural tendency in learning. When we are stuck with something, be it a game, a piece of software, a piece of equipment, we don’t tend to look for some formal course. We seek out quick focused resources that will answer our question. This could take the form of some sort of quick reference document or a person whose knowledge and experience you value.

Informal learning is less well understood than formal learning, but is ironically how most of us do most of our learning, by far. Instructional Design has always been embedded within the larger field of Human Performance Technology, which is a generalized approach for diagnosing performance problems in organizations and crafting interventions which do not necessarily involve training.

In recent years, the Instructional Design field has begun to look more seriously at informal learning tools, in particular performance support systems, electronic performance support systems,  and job aids as more effective alternatives to formal training for many aspects of workplace learning.

Instead of always designing lengthy courses to be taken by learners at a separate time and place from their work, increasingly, learning materials are designed in smaller instructional pieces indexed and searchable on company intranets. When workers need to learn how to do some task, or are stuck, or need a reminder, they look up their question in the company system, and locate some reference material. They check out the material, and when they understand, they return to their task. The whole experience might take only a few minutes, at the work desk.
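The lookup step in such a system can be as simple as an inverted keyword index over short learning snippets. A toy sketch (all content and structure invented for illustration):

```python
from collections import defaultdict

snippets = {  # id -> short learning resource (invented examples)
    "doc1": "how to submit an expense report in the finance portal",
    "doc2": "resetting your VPN password step by step",
    "doc3": "expense approval workflow for managers",
}

# Build an inverted index: word -> set of snippet ids containing it.
index = defaultdict(set)
for sid, text in snippets.items():
    for word in text.lower().split():
        index[word].add(sid)

def lookup(query):
    """Return ids of snippets containing every query word."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return sorted(set.intersection(*results)) if results else []

print(lookup("expense report"))  # → ['doc1']
```

Real intranet systems add ranking, synonyms, and metadata, but the core idea is the same: index small pieces so a worker's question finds a focused answer in seconds.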

Parallel to this, the scope of Technical Writing has expanded to include wider modes of reference and support documentation beyond traditional written manuals. Technical Writers do not only write manuals, but are also called on to create online help files, FAQ banks, and knowledge base items. There has also been a push, as streaming online video has become easier to deal with, for Technical Writers to use video tutorials as a more engaging method of product documentation. Explanatory or demonstrative videos, taken either with cameras or screen capture software like Camtasia, are hosted either on the company knowledge base or sites like YouTube. Technical Writers can also find themselves involved with internal wikis, or blogs. In some cases, this has even included direct interaction with the user community and even curation/management of user generated content, whether on discussion forums or social media.

There is a powerful overlap appearing here between Technical Writing and Instructional Design as applied to informal learning materials.

The question is whether a sort of convergence is going on, and to what extent. Are these ultimately different jobs, or just different points on a spectrum within a common domain of technical communication? A common shared skill set can be seen, including such skills as:

  • Communicating with subject technical experts to get needed information
  • Understanding the characteristics and needs of the intended user and their work environment
  • Being able to rapidly assimilate new information
  • Coming up with ways to explain this information in a simple, well-structured, easy to understand way attuned to the needs of the audience
  • Implementing these explanations and delivering them using whatever tools are available.

On one hand, Technical Writers have been known to sometimes cross over to Instructional Design or Course Development positions.

The tools of Technical Writers have expanded in recent years to allow modes of explanation that might previously have been the domain of Instructional Designers. Recent editions of Adobe FrameMaker, a popular Technical Writing tool, allow the incorporation of video, 3D models and animations, Flash presentations, and audio. As books move from something printed on paper to something used on an e-reader or tablet, it is no longer necessary to stick to static images for support documentation. With the explosion of ebooks on media-capable devices like smartphones and tablets, the very definition of what a book or manual is has come into question.

As books are increasingly consumed through electronic screens rather than on paper, old restrictions need not apply. A book can become a multimedia presentation, with different media used as appropriate to bring across different points to the reader. An interactive 3D model from AutoCAD files embedded in a manual can communicate the 3D structure of a system or piece of equipment better than a static image from one fixed perspective. An embedded animation can better bring to life a process or flow. An embedded video can bring to life a maintenance procedure in a much more compact way than through text and static graphics. The decisions involved in deciding when to use what form of media are precisely the sorts of skills traditionally used by Instructional Designers in choosing a media strategy for instructional multimedia.

Meanwhile, job descriptions for some Instructional Design openings read more like Technical Writer jobs, emphasizing manuals, help files, FAQs, and wikis. Designers in such positions use more Technical-Writer-oriented development tools such as Camtasia, RoboHelp, and Dreamweaver. Systems now go through increasingly short cycles of development and update, and traditional Instructional Design via formal courses has struggled to keep up with that pace. This has led to more use of short, informal learning resources hosted on company networks.

Key Differences Between Technical Writing and Instructional Design

It’s important to note however that while the sorts of projects and documents that Technical Writers and Instructional Designers are called to work on are overlapping more and more, there are important differences in skill sets.

Instructional Designers, particularly those with formal training, tend to have more developed formal guidance for needs analysis. They also have more formal grounding in the psychology of learning and in systems thinking and analysis. They also tend to have experience with designing for a broader range of media and interaction types. Instructional Designers also have more experience with assessment of learning.

Technical Writers, on the other hand, tend to have much more refined skills in verbal communication. This includes:

  • Clarity, simplicity, and conciseness of expression
  • Descriptive and explanatory skills
  • Document formatting skills to ensure improved aesthetics and easier visual processing, and
  • Editing skills such as spelling and grammar.

Though these skills are rooted in the written or printed word, much of this skill set transfers well to other media of verbal expression, particularly audio narration scripting. Often, Technical Writers have formal training in writing, whether Technical or Creative.

Conclusion

So clearly the two skill sets are not completely interchangeable. There are large overlaps, but with specific areas of focus and expertise. For the time being, it is more appropriate to consider the two roles as complementary. On a larger project, the two would probably work together, with each involved in different aspects. It is an interesting question though how this will evolve and play out in the future. As Technical Writers expand their horizons toward a broader range of media, and Instructional Designers expand their focus to include informal learning and performance support, will the two fields undergo a certain amount of convergence as facets of a general field of Technical Communication? And what impact would that have on professional development within the two disciplines?

Comments? Critiques? Please feel free, whether on the blog directly, or via whatever social media channel through which you came across the article.

On Natural User Interfaces (NUI)

Preface

Welcome, and an early happy weekend. This article is intended to give some deeper background on trends in what are called "Natural User Interfaces," or NUIs. The term refers to a computer technology trend in how we interact with computers. I've used it in some other recent articles without getting too deep into it, because it takes a bit of explaining to do it justice.

Fair warning that this article is intended to be forward looking. It is NOT about looking at tools that are currently available off the shelf. This is not about immediately applicable information. This is a look at where the technology of human-computer interfaces has come from, where it is, where it is probably going in the next few years to come, and what kinds of possibilities that could introduce for computer based training.

So in that respect, it’s about getting yourself mentally prepared for what will be coming a few years down the road. For those who like to think ahead, to dream about future Instructional Design possibilities using the tools that haven’t been invented yet.

My recommendation: if the preface and introduction pique your interest, bookmark this article, email yourself the link, and maybe set it aside for a quiet Sunday afternoon when you have some time to read and reflect. Then you can process it and reflect on the future possibilities of what you can do with this technology. Anyway, I hope you enjoy the article.

Introduction: What is a Natural User Interface (NUI)?

In a recent article, I talked about the future potential of the Kinect sensor to enable on-the-fly adjustments to presentation in e-Learning, and brought up the concept of a Natural User Interface, or NUI (pronounced "noo-ey"). I introduced the term almost in passing, but I recognize that many people might not be familiar with the concept. The present article gives more background: the significance of this new type of human-computer interface, what came before it, how it has already changed how we use computers, and how future developments promise to further shape our interactions with computers. Finally, I will try to look ahead a bit at how these types of interfaces could shape the way we train people using computers.

Let’s get started.

Paradigms of human-computer interaction

So the first question for those unfamiliar with the notion would be “what is a NUI?”

Well, to answer this question, it helps to go back a bit into the history of computing.

Computers as we generally know them (electronic calculation devices) have a history going back about 70 years, since the time of the second world war. If you want to be technical, you can trace computing back to Ada Lovelace and Charles Babbage and the Difference Engine and Analytical Engine in the early to mid 1800s, but for simplicity, let’s say 70 years, starting around 1945.

What started as a technology used to automate complex computations for a handful of high-end research and military institutions via massive electrical machines has evolved and grown over these seven decades to become a technology that is an integrated, essential part of the fabric of life (at least for people in relatively developed parts of the world). Along the way, the power, speed, and storage capacities of computers have increased exponentially, while the costs and sizes of components have at the same time shrunk at exponential rates. Computers have gone from numbering a handful in the whole world to numbering somewhere in the billions. Billions of powerful computers are carried around in people’s pockets in the form of smart phones, and embedded computing devices appear in almost every electrical device produced today.

Along with these developments, the means through which people interface and interact with computers have also dramatically changed. This change has come both as a result of technological developments and as a driver of the uptake of computers amongst the general population. Human-computer interaction has gone through a number of important paradigm shifts.

A paradigm, for those unfamiliar with the term, is a dominant contemporary pattern or way of conceptualizing and doing things. There have been a few major paradigms of human-computer interaction, with corresponding shifts as the technology moves from one dominant mode of interface to another.

I first want to speak about three major early paradigms of human-computer interaction:

  1. Batch interfaces (1940s to 1960s)
  2. Command Line Interfaces (1960s to 1980s)
  3. Graphical User Interfaces (1980s to 2000s)

I will then speak about the recently emerging paradigm of Natural User Interfaces (NUI). I will discuss some of the different examples of NUIs, and finally look at new possibilities for training opened up by these sorts of interfaces.

First paradigm: Batch interface (1940s to 1960s)

The first computer interface paradigm was the batch interface. In this setup, users entered commands through stacks of punch cards punched by hand and fed into a card reader peripheral, which read the punched holes via optical scanning and turned the entries into electrical inputs. Programmers would carefully enter their code on the punch cards and submit their stack of cards as a batch to be scheduled and run by the administrators of the machine.

Remember, this was a time when computers were huge machines taking up most of a room, and a whole university or department might share one of these machines. It was a scarce, in-demand resource, so programmers had to wait their turn for their code to be run. Computers could run one program for one user at a time, which produced a serious bottleneck. Users could not typically just sit at the computer and use it by themselves, because the resource was limited and the time could be used more efficiently if the programs were run together, one after another, as a batch.

This cycle from submission of the program to scheduling to entering it into the computer to running could take days, depending on how busy the administrators of the computer center were. And if there was a bug, something miscoded in the punch cards, the program would fail, and the programmer would have to start again, identifying where the error was without any sort of guidance (“syntax error on line 57,” etc.). Such aids didn’t exist. The programmer would try to track down the error in logic by hand, and then resubmit the revised program to the queue. It was a system that encouraged careful, polished first-draft work.

In a batch interface, the computer reads commands coded in rigidly structured messages, carries them out, and gives output through a printer. The computer would take in the programs of many people at one time and process them, one after another, as a batch. It was in this time period that the first computer languages were developed.

The frustrations of dealing with these batch processing systems were a major drive for computer science researchers of the day to look into alternate modes of human-computer interaction.

Punch card

 

Second paradigm: Terminals and Command line interface (CLI) (1960s to early 1980s)

Then followed the command line interface (CLI). This came about along with the development of early computer displays and monitors, with keyboards used as inputs. Users could input characters through a keyboard and see them displayed on the screen. This would take place at a terminal with a keyboard and display connected or networked to the main computer.

The main computer would be set up to time share between multiple users. The computer rapidly switches between carrying out tasks for each user, allowing the central computer to “simultaneously” handle many users at once. To get a sense of how this works, imagine getting your household chores done by doing laundry for a minute, then keeping an eye on dinner for a minute, then attending to your kids for a minute, then tidying the living room for a minute, then sweeping the floor for a minute. Then imagine this task switching happening a million times faster. You’re doing one thing at a time in little slices, but to a casual observer, everything is smoothly proceeding all at once. Your computer at home or at work “multi-tasks” in a similar sort of way. Coordinating the time sharing created a certain amount of overhead, using up computer resources, but this became less of a concern as computers became faster over time.
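To make the round-robin idea concrete, here is a toy sketch in Python (my own illustration, not how a real operating system scheduler is implemented): each “task” is just a list of work units, and each task gets a short slice of attention before going to the back of the line.

```python
from collections import deque

def round_robin(tasks, slice_steps=1):
    """Toy time-sharing scheduler: each task is (name, list of work units).
    Run each task for a short slice, then move on to the next one."""
    queue = deque(tasks)
    order = []  # record which task ran in each time slice
    while queue:
        name, work = queue.popleft()
        for _ in range(min(slice_steps, len(work))):
            order.append(name)
            work.pop(0)           # do one unit of work
        if work:                  # unfinished: back to the end of the line
            queue.append((name, work))
    return order

# Three "users" sharing one machine; their slices interleave.
print(round_robin([("laundry", [1, 1]), ("dinner", [1]), ("kids", [1, 1, 1])]))
# ['laundry', 'dinner', 'kids', 'laundry', 'kids', 'kids']
```

No single task monopolizes the machine, which is exactly why a terminal user in the 1970s could feel like the computer was responding to them alone.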

So the user no longer had to punch cards, and no longer had to give them to someone else to feed into the machine, and wait. The different programmers and application users could get access to a terminal, and use that to interact directly with the computer in something resembling real time. The user could input text information, and get text output back more or less immediately.

This paradigm also overlapped with the appearance of the first so-called “micro-computers” used as office business machines (e.g. the IBM era), and it was the paradigm under which the first “personal computers” were born: standalone computing machines small enough to fit on a desk.

The user of one of these machines could use the keyboard, aided by visual feedback from the screen, to type documents or to enter commands. The user controls the computer and performs actions such as creating, saving, deleting, copying, and moving files and directories using text based commands typed into a command line. This can still be seen today in the command line shells of Linux and in the Command Prompt (cmd.exe) utility in Windows. MS-DOS, the first Microsoft operating system, worked like this.

This is known as a Command Line Interface or CLI. More advanced computer programming languages were also developed at this time.
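To give a feel for what a CLI is doing under the hood, here is a toy read-parse-dispatch loop in Python. The command names (create, copy, delete, list) and the in-memory “disk” are simplified stand-ins for illustration, not the actual DOS or Unix commands:

```python
def make_cli():
    """Build a toy command interpreter over an in-memory 'disk'."""
    files = {}  # filename -> contents, standing in for real storage

    def run(line):
        # Parse: first word is the command, the rest are arguments.
        cmd, *args = line.split()
        if cmd == "create":
            files[args[0]] = ""
            return f"created {args[0]}"
        elif cmd == "copy":
            files[args[1]] = files[args[0]]
            return f"copied {args[0]} to {args[1]}"
        elif cmd == "delete":
            del files[args[0]]
            return f"deleted {args[0]}"
        elif cmd == "list":
            return " ".join(sorted(files))
        return f"unknown command: {cmd}"

    return run

shell = make_cli()
shell("create notes.txt")
shell("copy notes.txt backup.txt")
print(shell("list"))  # backup.txt notes.txt
```

Real shells add wildcards, pipes, and scripting on top, but the basic loop of read a line, parse it, dispatch to an action, print the result is the same.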

 

Third paradigm: Graphical User Interface (GUI) (1980s to 2000s)

The next paradigm was the Graphical User Interface or GUI (pronounced “goo-ey”). This consists of a “desktop metaphor,” with program windows, menus, virtual “buttons,” and other controls on the screen with which the user interacts using a mouse and pointer. Associated with this is the acronym WIMP: Windows, Icons, Menus, Pointer.

The earliest GUI came out of research at Xerox PARC in the 1970s. These ideas were later taken up by Apple Computer in the early Macintosh and by Microsoft in its Windows OS. Interactions simulated the way a person might interact with a real world machine, by “pushing” (with mouse clicks) virtual buttons, turning virtual dials, etc. It was at this stage, corresponding with sufficient miniaturization of computer components and a fall in prices, that the idea of a home “personal computer” took hold. With the desktop metaphor, windows, and mouse pointers, it became much more natural for everyday people to use computers. There were still many rough edges and certain arcane bits of knowledge to learn, but overall, it became much simpler for everyday people to do basic things with computers. Computers were starting down the road to becoming a household appliance that average people would use as part of their everyday lives.
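A small sketch may help show what the WIMP paradigm means in code: at its heart is hit-testing, routing a mouse click at some (x, y) point to whichever on-screen control contains it. This toy Python version invents its own Button class purely for illustration:

```python
class Button:
    """A clickable rectangle on a virtual 'desktop'."""
    def __init__(self, label, x, y, w, h, on_click):
        self.label = label
        self.rect = (x, y, w, h)
        self.on_click = on_click

    def contains(self, px, py):
        x, y, w, h = self.rect
        return x <= px < x + w and y <= py < y + h

def dispatch_click(buttons, px, py):
    """Route a click to the first button containing the point."""
    for b in buttons:
        if b.contains(px, py):
            return b.on_click()
    return None  # click landed on empty desktop

clicks = []
ui = [Button("Save", 10, 10, 80, 24, lambda: clicks.append("saved")),
      Button("Quit", 100, 10, 80, 24, lambda: clicks.append("quit"))]
dispatch_click(ui, 15, 20)  # point falls inside the "Save" rectangle
print(clicks)               # ['saved']
```

Real GUI toolkits wrap this loop in windows, repainting, and event queues, but the core idea of mapping pointer coordinates to controls is the same.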

 

The emerging paradigm: The natural user interface (NUI) (2000s to present)

The next paradigm of human-computer interaction is so-called Natural User Interfaces, or NUI. This can encompass a variety of types of interaction, but the overarching idea is that rather than having artificial or mechanical intermediary means of input, the user interacts with the computer in ways more like those used to interact with people and objects in the real world, and more directly. This typically means touch, body / hand gestures, facial expressions, speech, and giving queries or commands to the computer in something much closer to the ambiguities of everyday language rather than in rigid computer syntax.

What does this mean? Well, to illustrate, let’s look at the predominant method of computer interaction that we’re just coming from and are still wrapped up with: namely, the mouse. Or, more precisely, the mouse and pointer as a way of navigating graphical menus and control interfaces on a screen display, with the keyboard for the user to enter data like on some electronic typewriter. This form of interaction was almost completely predominant from around 1984 right up through to around 2008, a period of 24 years. The 1984 date marks the appearance of the Apple Macintosh (128k), which featured a GUI and mouse. 2008, on the other hand, saw the appearance of the iPhone 3G, which helped to explode the popularity of capacitive multi-touch smartphones. (As much as I dislike Apple’s closed model and think they’re past their prime, I have to grudgingly give them credit for having been squarely at the center of both of these technological inflection points.)

The mouse has become so much a part of our daily activities, at home and at work, for so long, that it’s easy to lose sight of what an awkward, unnatural way this is of interacting with a computer. Or with anything. You sit in front of a computer screen, staring at it. There is a button on the screen. You have to grab the mouse on the desktop and drag it along the horizontal plane of the desk surface in order to move a pointer arrow on the vertical plane of the screen surface. Then you click a button on the mouse to “click” the on-screen button. Once upon a time, this was simply the only way to do it. But what is the most natural instinct for doing this today, given the technology widely available now, namely touchscreens? Well, since 2008, with the iPhone, and since 2010, with the iPad, it’s simple. You reach out your hand to the screen and touch the button to press it. The whole step becomes much more natural and effortless.

Admittedly, it’s still kind of weird, because you’re still blocked by this two-dimensional surface as you bump up against it and touch it or move your hands over it. It’s still a little limiting and artificial. But it’s getting there. You’re at least completing the metaphor of the classical graphical user interface: the desktop workspace on which you place things and move things around. Instead of moving them with a mouse, you move them directly with your fingers. You’re still operating something like an old-fashioned instrument panel, but the interaction has become more naturally engaging. You move like you’re actually operating an instrument panel in real life.

As mobile computing and mobile internet have taken off, this has impacted web and application design so that even on the desktop, the user interface principles inspired by touchscreen usability – lots of white space, simplified menus and controls, and large button targets – have become predominant. Designers try to build applications that work well on both.

Interacting with the computer in these more natural, everyday ways means that in a sense, the interface fades from attention and becomes invisible to the user. But the idea is that generally the experience is smoother, more realistic, more like a real world interaction. The distance between the user and the computer becomes smaller. In this way the computer becomes a little more like an extension of the user’s body. The user simply smoothly interacts with the computer to do what he needs to do.

We call such an interface a Natural User Interface, abbreviated NUI, and pronounced “noo-ey.” It’s the idea of an interface that drapes itself over us, fits us like a glove by letting us interact with the computer more like we interact with real world objects and people.

In popular entertainment, we see some examples of futuristic concepts of NUIs. Take the computer on Star Trek: TNG, for example, which the crew commanded through voice or touchscreen control panels as they walked around the ship and did their thing.

Or the gesture interfaces Tom Cruise’s character used in the Pre-Crime unit in Minority Report.

http://www.youtube.com/watch?v=8deYjcgVgm8

Or more recently in the battle “simulation” in the film Ender’s Game.

Multi-touch capacitive screens, as seen in modern smartphones and tablets, are one good example of an NUI. You interact with on-screen items by touching them with one or more fingers to stretch, rotate, or shrink them.
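As a small concrete example of the kind of arithmetic behind such gestures, here is a purely illustrative Python sketch of how a pinch-to-zoom scale factor can be computed from two touch points:

```python
from math import hypot

def pinch_scale(start_touches, current_touches):
    """Scale factor for a pinch gesture: ratio of the current distance
    between the two fingers to the distance when the gesture began."""
    (x1, y1), (x2, y2) = start_touches
    (u1, v1), (u2, v2) = current_touches
    start_dist = hypot(x2 - x1, y2 - y1)
    current_dist = hypot(u2 - u1, v2 - v1)
    return current_dist / start_dist

# Fingers start 100 px apart and spread to 200 px apart: zoom in 2x.
print(pinch_scale([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # 2.0
```

A real touch framework reports these finger positions many times per second, so the zoom tracks the fingers continuously; pinching the fingers together gives a factor below 1.0 and shrinks the item instead.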

Virtual assistants or agents such as Apple’s Siri or Microsoft’s Cortana are another example, or another facet, of natural user interface technology. Here users interact with the computer in a somewhat conversational manner using speech. Some of the predictive elements of Google Now would also qualify.

Haptics (touch-based feedback interfaces) are yet another element, making interfacing more natural by simulating the textures, force feedback, and resistances you would get when interacting with real objects.

Virtual reality is another example of a natural user interface. The person interacts with the virtual world through head and body movements, receiving visual feedback through some sort of helmet screen. This is a technology going back some decades, but it is becoming more affordable and feasible now. An example of a mass-market product is the Oculus Rift by Oculus VR (in the news of late for having been acquired by Facebook).

Another example is augmented reality, as in Google Glass. Here, important contextual information is projected within the user’s field of view to give continuously present information.

NUIs can also combine these different types of technology. For example, the combination of speech and body / hand gestures is used in the Microsoft Kinect sensor. Microsoft has opened up the sensor with free APIs and an SDK for developing NUI-enabled software for Windows using the Kinect for Windows sensor. The Kinect was previously sold as an optional peripheral for the Xbox and is now a bundled part of the new Xbox One gaming and home entertainment console.

http://www.youtube.com/watch?v=Hi5kMNfgDS4

This particular device features a color camera and an infrared depth sensor for machine vision with depth perception. Software in the device can make out limbs, limb and finger movements, hand gestures, facial expressions, even the pulse of the user, and use these as inputs for control. Multiple microphones are present for noise cancellation and for recognizing the directionality of sound. There is software on board for voice recognition and for facial recognition. The user controls the game by voice inputs and by moving his body and hands.
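To suggest how such skeleton data might become a control input, here is a toy Python sketch. The joint names, coordinates, and the “hand raised” rule are all invented for illustration; the real Kinect SDK exposes its own skeleton types and APIs:

```python
def hand_raised(skeleton, hand="right_hand", margin=20):
    """Toy gesture check: treat the hand as 'raised' when it sits well
    above the head. Coordinates are (x, y) with y increasing downward,
    as on a screen, so 'above' means a smaller y value."""
    return skeleton[hand][1] < skeleton["head"][1] - margin

# Hypothetical tracked joints for one frame of sensor data.
skeleton = {"head": (320, 100), "right_hand": (380, 60), "left_hand": (260, 300)}
print(hand_raised(skeleton))                    # True: right hand above head
print(hand_raised(skeleton, hand="left_hand"))  # False: left hand hangs low
```

A training application could poll a check like this every frame and use the result the way a mouse click is used today, e.g. “raise your hand to answer.”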

This represents a more natural way to interact and brings to life some of the models of human-computer interaction foreseen by science fiction. It is not hard to foresee possible applications to training, especially with the device’s APIs open to commercial and research development. The following links and the video below give some sense of what is being done with this sensor.

http://openkinect.org/wiki/Main_Page

http://www.microsoft.com/en-us/kinectforwindows/

http://createdigitalmotion.com/2013/10/microsoft-embraces-open-creative-coding-new-kinect-openframeworks-cinder-integration/

http://blogs.msdn.com/b/kinectforwindows/archive/2013/08/13/turn-any-surface-into-a-touch-screen-with-ubi-interactive-and-kinect-for-windows.aspx

http://www.youtube.com/watch?v=Iu2XH5p_hMM

The Xbox One with Kinect is probably the hardest push right now for mass adoption of Natural User Interface technology in the home. There is also a Kinect for Windows sensor coming out that allows games and software to be written using the device to control a computer.

http://www.microsoft.com/en-us/kinectforwindows/develop/

Another potential route forward might come in the form of the iPad a few generations down the road, if and when Apple can put something similar to today’s Kinect sensors into the iPad. The iPad would make a sophisticated control device for the TV, with the iPad mirroring to the TV screen. So this hypothetical future iPad could watch you through twin cameras to read your eye movements and facial expressions or detect hand gesture inputs. The microphone inputs, combined with cloud services, could interpret speech queries or commands. The touch screen would detect button presses and finger or stylus drawing inputs. The accelerometer and gyro would recognize whether you’re sitting or standing and in what orientation you’re holding the iPad. You could then hold the iPad in different orientations in space as a control surface or workspace. The limitation of the Xbox Kinect sensor is that it watches from farther back, so it can’t yet pick up as much nuance and detail as a closer camera could. A camera in the iPad could do that.

I wouldn’t be surprised to see Apple do this, getting everyone used to this method of interaction, and then hitting the market with the long-predicted Apple TV, integrating something like the Kinect sensor and multiple slick layers of Natural User Interface built in. It would have a big impact.

Learning and Training Applications

All of this promises to really shake up how we interact with computers. And since interaction is such a key element of computer based training, this has implications for us as designers of instruction.

There are a number of foreseeable learning and training applications for this sort of technology. To name just a few examples:

Speech recognition and text to speech could be useful for language learning.

Gesture based controls could enable more lifelike interaction with 3D models, especially if using stereoscopic 3D image displays. This could potentially be used for a variety of applications in technical training:

  • to manipulate and examine equipment in maintenance training
  • to learn the structure of machinery by virtual manipulation of 3D models, including assembly and disassembly; haptic feedback outputs could even simulate the sensation of touching and working with the actual equipment
  • in biochemistry, to manipulate 3D models of large molecules like proteins to understand their structure and active sites
  • to visualize biological reaction steps

Virtual reality could be used to simulate the operation of certain complex equipment, including running through rare or emergency scenarios.

For soft skills, imagine the immersiveness of a training program where you interact with a 3D character in a scenario using simply your body language and your speech. The realism is greatly heightened. Or imagine a training program that can give feedback on your body language, verbal tics like filler words, and your facial expressions while you give a simulated presentation or sales pitch.
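As a tiny taste of how one piece of that feedback might work, here is a toy Python sketch that counts filler words in a speech-to-text transcript of a practice talk. The filler list and sample transcript are invented for illustration:

```python
# Invented filler-word list; a real coaching tool would use a richer model.
FILLERS = {"um", "uh", "like", "basically"}

def filler_report(transcript):
    """Count occurrences of common filler words in a transcript."""
    words = transcript.lower().replace(",", "").replace(".", "").split()
    counts = {}
    for w in words:
        if w in FILLERS:
            counts[w] = counts.get(w, 0) + 1
    return counts

print(filler_report("So, um, the key point is, like, um, quality."))
# {'um': 2, 'like': 1}
```

Combine a count like this with sensor data on posture and gaze, and the program could hand the learner a concrete report card after every practice run.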

 

 

 
