Category Archives: Technologies

Concept: Maintenance Training 2030


April 25, 2032 …

Ed, a Canadian aircraft maintenance training instructor located in Montreal, gets ready for his upcoming engines maintenance class.

Supposed to be a big class this week, 8 students.

  • 1 from Canada
  • 1 from the US
  • 2 from South America
  • 2 from India, and
  • 2 from China

Ed checks on the equipment in the classroom.

He puts on his Holo-Glasses, which come to life, softly glowing holographic data displays and icons popping up in front of him. The device recognizes him, launching the virtual assistant to greet him. “Hello Ed! How are you? All set for your class?” “Just fine, thanks. Everything set?” “Yes, Ed. All the students are going to be attending; no cancellations. Everything looks good with the students. One was having some technical issues earlier, but I helped him through it.” “Excellent,” thought Ed. “Everything looks all right with 15 minutes to go.”

Ed begins cueing up the opening presentation notes, and the multimedia training manual. These pop up in their own windows in Ed’s field of view.

As Ed continues his preparations, the digital assistant relays notifications confirming the status of the students. The assistant is communicating with the students before class so Ed can focus on his preparation. Everything is looking good. Ed checks the 3D cameras and tests out his holopresence projection, seeing what his students will see.

“Loo-king good! Let’s do this!”

A few minutes later, the class begins. Ed welcomes the students as they holopresence in from their remote locations. Ed and the students, their Holo-Glasses on, take their places in the shared virtual classroom. The software places softly glowing holographic representations of the other participants in the shared visual space. Ed looks out at the students’ faces, and the students see a holographic overlay of the same classroom and the same students from their own virtual perspective. At first, the experience is a little eerie, but as the class gets going, and all the students introduce themselves, the illusion takes hold and it feels like everyone is in the same classroom.

Ed presents the content, asks questions, and listens to the responses. Master teacher that he is, he observes carefully, gets a sense of the learners’ body language and expressions, and, much like in a real class, adjusts as he goes. Ed brings up holographic 3D animations and models of the engine and components for the class to see. He zooms, rotates, and takes apart the holographic engine parts. The hologram also appears in the students’ fields of view, and Ed invites students here and there to come up, try for themselves, and demonstrate actions to the class. Static images appear on screens in mid-air, displaying schematics.

In the afternoon, it is time for the virtual hands-on lab exercises. Ed and the students convene again, once again with beautiful, interactive 3D holographic models of the engine floating in the shared digital overlay. This time, however, everyone puts on their SureTouch(TM) haptic feedback gloves.

The gloves use sensors to read finger and hand position, the headset measures their hand positions in relation to the digital model’s virtual position, and actuators in the gloves give pressure feedback to simulate handling real objects with substance instead of just weightless holograms. It’s kind of weird at first, and it’s not quite the same as the real thing, but close enough for horseshoes and hand grenades, as they say. And definitely a hell of a lot cheaper than taking an actual engine offline to train.

As always, it took a few years for the technology to perfect itself and a lot of research and proofs of concept before the regulators really believed it could be as effective as the real thing. The Dutch Aerospace Lab did some great research as always, and once EASA signed off, the other regulators followed pretty swiftly after. Regulators came to appreciate virtual maintenance training, just as they came to appreciate the power of full flight simulators decades before.

The company definitely appreciates it too – they save a small fortune in flights, hotels, taxis, and per diems doing virtual classes like this over the course of the year. As do the students’ companies.

Ed, for one, appreciates it too. No packing, no airport security, no cramped 12-hour flight, no hotel room, no taxis, no jetlag, no traffic. Well … scratch that last one. This is Montreal, after all, where the seasons are winter … and construction. Even in 2032, there’s plenty of traffic. (You can’t win ’em all, I guess.) “Oh well,” thought Ed. “Decent weather today, so at least I could read a book on the way in while the autodrive on the car took care of all the unpleasantness.” And all from the comfort of the Montreal office.

Ed loves it, and his family loves it too – less time away. And besides, even though he felt a little silly to admit it, irrational as it was, Ed had felt a little weirded out by flying ever since they started the rollout of unpiloted commercial flights in the late 2020s. Hundreds of times safer than human pilots or not, it’s still kind of creepy to have algorithms flying you around instead of people.

“Or maybe I’m just getting old,” Ed thought. It gets a little jarring after a while to see the world transform itself before your eyes so quickly. The young seem to take it in stride, unfazed, as they always do. And, Ed had to admit, the toys are pretty cool. All this change has its benefits.

Such is the stuff of life in a world of sci-fi dreams made true.

What learning professionals can learn from video games

Introduction

Video games are a huge global market, bigger than movies and music, somewhere north of $60 billion a year and growing across console, PC, and mobile. In comparison, estimated global box office revenue for 2014 was about $38 billion.

Adults, particularly older ones who didn’t grow up playing games, tend to be somewhat dismissive of the merit and value of gaming. It’s a toy for their kids, a distraction rather than something to take too seriously.

But today, electronic games are a serious pastime for working adults as well, both men and women. Modern gaming systems are also essentially sophisticated computers for the living room that handle a range of different forms of entertainment – TV, music, movies, gaming, apps, web browsing.

Some interesting stats:

  • The average gamer is 31 years old
  • The gaming population is roughly split between men and women, 52% vs 48%.
  • The average US household has at least one gaming console

Source: http://venturebeat.com/2014/04/29/gaming-advocacy-group-the-average-gamer-is-31-and-most-play-on-a-console/ 

So the question becomes, what do video games have to offer us, as learning professionals, beyond fun and entertainment?

Video games have lessons to teach us as learning professionals in a few key ways.

The Lessons of video games

1. First, they show how to make complex, realistic tasks fun. People play games because they’re fun. There’s struggle and effort and challenge and obstacles to overcome, and often a lot of seriousness, but in the end it’s fun.

2. Second, video games are distinctly successful at attracting and holding people’s attention. People engage and stay engaged for sustained periods. Modern games can take upwards of 100 hours to complete. This is comparable, in order of magnitude, to the time investment for a professional pilot to earn a type rating to fly a new type of aircraft.

These games often involve intricate hierarchies of goals, objectives, sub-goals, and side goals. But people are drawn to persist and complete them. Games play effectively on different motivational and reward pathways to produce a compelling need to persist at the task despite challenges and difficulties, and sometimes because of those challenges and difficulties.

These are some of the same sorts of problems we face as designers and developers of learning materials: how to motivate people to engage with our content, and how to keep them engaged through to the end of the course. The success of games in this regard, and the ways games achieve this success, should be a motivator for us. We should look at the tricks games use to keep us interested and try to apply the lessons to our learning materials.

3. Third, video games are at the forefront of interactivity. If we want to see the state of the art in feasible, near-term interactions, we need to keep up with games. This is where the innovation is happening. Have you seen the modern game controllers used with gaming consoles?

It looks like you could use it to put something in space. So many different buttons and directional controllers, and all in one neat, ergonomic package that fits perfectly in your hand.

Off-brand Xbox 360 controllers are used to fly military drones. PlayStation controllers are similar in design.

Microsoft’s Kinect sensor, meanwhile, used for the Xbox, is arguably the most innovative consumer market man-machine interface out there, using multiple cameras and microphones to detect body position and orientation and take speech inputs. It basically lets you, given effective software, control a computer using your voice and the movements of your body.

And then of course there are some of the novel forms of interaction used for tablet and phone games. One is touch, with controls based on tapping, swiping, and pinching. Another is controls based on the accelerometer and gyroscope, where games are controlled by moving, shaking, turning, or tilting the device. Some good examples are the Sky Map app on Android, and Super Monkey Ball 2 and Labyrinth 2 on iOS.
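
To make the idea concrete, here is a minimal sketch (not production code) of how a web-based activity might use tilt as a control input, via the browser’s standard DeviceOrientation API; the element ID and the scaling factor are arbitrary assumptions for illustration.

// Minimal sketch: steering an on-screen object by tilting the device,
// using the browser's DeviceOrientation API. Assumes an element with id "ball".
let x = 0;

window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  // gamma is the left/right tilt in degrees (roughly -90 to 90)
  const tilt = event.gamma ?? 0;
  // Nudge the object's horizontal position in proportion to the tilt
  x += tilt * 0.1;
  document.getElementById("ball")!.style.left = `${x}px`;
});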

Finally, up-and-coming gaming tech such as the Oculus Rift VR headset looks to take gaming to a new level of immersiveness.

The common denominator here is that effective and meaningful use (not just as a gimmick) of sophisticated interactivity helps to pull the user in and increase engagement.

In comparison, the built-in types of interactions we see enabled in eLearning authoring tools like Storyline and Captivate tend to be very simplistic – multiple choice, drag and drop, matching. Video games can motivate and inspire us by showing what is really possible today in interactivity.

 Conclusion

Video games, far from being something only for kids, have become a serious entertainment industry enjoyed by people of a wide variety of ages. They are an established part of our modern lives. Learning professionals looking for ideas on new and better ways to engage and motivate learners should take a serious look at the best practices of modern video games.

And hey – you might even have some fun doing it.

The television as a learning and training space

Introduction

Recent years have seen the world of training embracing learning on mobile devices, or mLearning for short. There are many reasons for this:

  1. Client demand, as people increasingly browse the internet principally through mobile devices
  2. Clients always having their phones with them, allowing lots of little moments during the day when learning could potentially take place
  3. Phones having lots of sensors and input methods, allowing for innovative interactions
  4. Phones allowing multiple communication methods

Designers and developers have been working on designs using mobile learning. At its most basic, this has taken the form of using file formats so that videos or presentations will play on a tablet, or even just an iPad, or building the training as an iPad app or something playable within a container app.

Others, approaching the matter with some semblance of actual seriousness, have gotten more creative and tailored training more to the unique affordances of smartphones and tablets. They build learning games and activities that use the device’s sensors as inputs for motion- or touch-based interactions. Or they use location information. Others use ideas of informal learning and performance support to break training into small, focused pieces that can be accessed in a spare moment.

eLearning authoring tool providers advertise their tools as enabling responsive eLearning. They hype the promise of being able to publish content to multiple media and device types, for desktop, tablet, and mobile.

This is good for learning and training. However, in this focus on mobile, we may be losing sight of possibly the next key development of web-based learning and training – the television as a learning and training space.

Television as a new window to Internet content and learning

Sitting on a couch with a tablet is a nice way to watch a video or presentation. The device is light and comfortable. But, still, it’s a 10-inch screen. The small size is a compromise, trading visibility and screen real estate for portability.

But across from the couch is what? The TV. Big screen – 30, 40, 50, 60 inches. 1080p HD, easy to see, nice to watch, decent speakers. And you don’t have to hold anything.

Television used to be a box on which we watched traditional television programs, whether delivered over the air or through cable or satellite broadcast. Then came VHS players, DVD and Blu-ray players, and video game consoles. The living room TV became instead the screen in the middle of a home entertainment center.

Now, increasingly, televisions are also becoming just another one of the screens, albeit much bigger ones, through which to access internet content, whether for entertainment, work, or learning. This takes the form of video, audio, text, and apps. The long promised fusing of internet and television has arrived, with several different options available to make this possible.

Many TVs are now “smart TVs,” combining a TV with a computer. These TVs are WiFi enabled, with built-in interfaces and platforms with app capability. Apps allow straightforward connectivity to content sources like YouTube, Netflix, digital music streaming services, and other streaming media.

Modern TV screens also allow for stereoscopic 3D. While no longer a faddish selling point, most newer TVs are, as a market standard, capable of displaying stereoscopic 3D content, whether accessed over the web or on 3D Blu-rays. This sets TVs apart from the other screens through which we consume content: they are the one dependable 3D screen that people commonly own.

TVs can also be connected to gaming systems like the PS4 and Xbox One, the latter of which includes the Kinect motion and voice sensor. These systems, while meant primarily for gaming, are also intended more generally for home entertainment, with app platforms and apps like Netflix and YouTube for viewing internet video content.

As well, set-top boxes like Apple TV, along with many WiFi-enabled Blu-ray players, offer a similar bridge between the television and the internet.

Tablets, phones, and laptops can share screens wirelessly to TVs, either through devices like Apple TV, game systems, or via Miracast / WiDi from enabled devices.

It is easy to get content on the TV. As well, the TV will either be set up with sensors, whether in the TV itself or via something like an Xbox, or the person will be screen sharing from a device that has sensors and a touch-based control interface. It becomes easier to browse, select, and interact with online content shown on the TV.

Designers, both web designers and instructional designers, need to think about training and learning possibilities in this space, just as they should be thinking about that OTHER class of displays that will also be more and more in people’s lives – wearables and augmented/virtual reality tech such as Google Glass and Oculus Rift. (More on this in a future post.)

Challenges

There are a few challenges in this area:

Platforms

One main challenge is that there are so many different sorts of configurations and ways to connect the internet to the TV:

  • Via game consoles such as the Xbox One or Sony PS4
  • Smart TVs
  • Set-top boxes like Apple TV, WiFi Blu-ray players, or Chromecast
  • Computer connected to the TV to share the screen via HDMI cable
  • Wireless screencast from laptop, tablet, or smartphone to the TV, whether through Apple AirPlay or the up-and-coming wireless screencasting standards WiDi (Intel Wireless Display) and Miracast.

This makes things difficult for developers, as there is no one clear dominant target for development.

The gaming consoles, which have positioned themselves as not only gaming platforms but home entertainment hubs, may be one promising avenue, as the multi-billion dollar gaming industry already attracts lots of skilled developers to these platforms. Microsoft’s Xbox One in particular runs an operating system related to Windows and uses the same development toolkit. Also, these gaming consoles offer innovative ways to interact with the content on the TV through different types of controller devices, including body movement and voice based controls. The gaming console option offers interesting possibilities in terms of innovative learning interactions.

A more straightforward, elegant solution may be through smart TVs, where everything is in one box. This would especially be the case if, in the future, the television included sensors that could be turned on for Kinect-like interaction with cameras and microphones. One challenge, however, is attracting developers to different platforms from different manufacturers. Possibly only a company like Samsung, which manufactures phones, tablets, computers, and TVs, would be in a strong position to carry over advances in interfaces and interactivity from other devices to TVs. Or someone like Apple.

The other challenge would be emotional reactions from consumers. When early press about the Xbox One suggested that the system would require the Kinect sensor – which includes stereo cameras and microphones – to always be on, even when the system was not in use, people became very paranoid, and there was a backlash.

It is possible that TVs will evolve in coming years to become a sort of all-in-one computer, with web connection, innovative web browsing methods (the concept of adaptive web design will also have to adapt and evolve to cater to the TV as a screen), app platforms, and built-in SSD storage, possibly supplemented by cloud storage.
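
As a small illustration of what adapting web design for the TV screen might involve, here is a hedged sketch using the standard matchMedia API to switch to a “10-foot” layout on very large screens; the 1920px breakpoint and the class names are arbitrary assumptions, not an established standard.

// Sketch: switching to a "10-foot UI" layout when content is viewed on a
// very large screen such as a TV. Breakpoint and class names are assumptions.
const tvQuery = window.matchMedia("(min-width: 1920px)");

function applyLayout(isTv: boolean): void {
  // Larger text, bigger targets, simplified navigation for viewing from the couch
  document.body.classList.toggle("tv-layout", isTv);
  document.body.classList.toggle("desktop-layout", !isTv);
}

applyLayout(tvQuery.matches);
tvQuery.addEventListener("change", (e) => applyLayout(e.matches));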

Quite possibly the next stage of the Apple OS – Android – Windows – Linux battles will be fought on the battlefield of internet connected TVs. Ubuntu, for example (a variant of the Linux operating system), has actually been positioning itself for some time as a flexible multi-platform OS, including for TV.

Wireless screen sharing may be the simplest approach, making the smartphone, tablet, or PC the central point of control of what appears on the TV screen. Desktop and laptop computers would have limits, though, in terms of enabling learning interactions.

Tablets and smartphones could potentially allow for interesting learning interactions through the accelerometer, gyroscope, and touch sensors.

The scene is probably going to be messy for a few years, with a lot of options making it hard for developers to pick a target. This will make it hard to form the development communities that would drive things explosively forward.

Interface and Interactivity

The possibilities for learning and training will depend somewhat on the options available for interactivity. One of the challenges in making the TV a hub for learning content is how the user can control and navigate content while sitting or standing across the room. Good eLearning and online training especially require rich interactions.

But how do you interface with the TV? With a computer, you sit right there and control it via mouse and keyboard, and to a lesser extent, microphone and camera. With a tablet or smartphone, you tap it, swipe it, turn it, talk to it, because again, you’re up close to it and it fits in your hands.

TV is different. You sit back from it, or stand back from it. You’re not going to stand at your TV tapping the screen like those big maps on CNN election night.

There are probably four major options:

  1. Some modification of a traditional TV remote, possibly one with a touchscreen and accelerometer/gyroscope sensors
  2. Some camera and microphone based sensor like the MS Kinect that lets you control via voice and body gesture
  3. Controlling through a laptop computer, tablet, or smartphone, which shares the screen wirelessly via WiDi, Miracast, or Apple Airplay and lets you control things via touchscreen and motion sensors. The TV simply becomes a screen to mirror content on the other device.
  4. A smartphone or tablet is paired with the TV via an app, and serves as a WiFi-connected touch- and motion-based controller.

All of these could probably be made to work, though options 2 and 4 seem the most plausible going forward in terms of usability and of building on existing platforms.
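
To illustrate option 4, here is a rough sketch, under assumed names and addresses, of how a phone-side web page might act as a paired controller: it captures swipes and relays them to a TV app over a WebSocket on the local network.

// Sketch of option 4: the phone as a WiFi-connected controller. The phone-side
// page captures swipes and sends them to the TV app over a WebSocket.
// The address and message format are assumptions for illustration.
const socket = new WebSocket("ws://192.168.1.50:8080/controller");

let startX = 0;
document.addEventListener("touchstart", (e: TouchEvent) => {
  startX = e.touches[0].clientX;
});

document.addEventListener("touchend", (e: TouchEvent) => {
  const deltaX = e.changedTouches[0].clientX - startX;
  if (Math.abs(deltaX) > 50 && socket.readyState === WebSocket.OPEN) {
    // Tell the TV app to move to the next or previous screen of content
    socket.send(JSON.stringify({ type: "swipe", direction: deltaX > 0 ? "right" : "left" }));
  }
});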

 

Learning and Training Possibilities

The matter then becomes how to harness this emerging new portal to the internet for learning and training.

A few possibilities come to mind.

  • Any passive consumption of video content, particularly content in HD or stereoscopic 3D format (YouTube content, for example). A TV would be the most natural and comfortable way to watch. Everything becomes bigger and more lifelike.
  • Educational gaming activities using a gaming controller
  • Web content browsing with voice and gesture inputs enabled by something like the Kinect. Say, for example, a view of different documents or different levels of detail making use of different focal planes in a 3D field of view. This allows information and screen elements to be arranged not just along the horizontal and vertical dimensions, but by depth as well.
  • Interaction with stereoscopic 3D models using the Kinect sensor, such as chemical structures, architectural structures, geographic feature models of an area, or components of equipment.
  • Live, synchronous, life-like teleconferencing via TVs and Kinect sensors, using apps like Skype or something like it embedded in a virtual classroom application. A virtual classroom would work very well on an HD television with a connected camera and microphone: for live, face to face communication, for conversational practice in language learning, or for a live virtual tutoring session.
  • Using the Kinect, the learner practices some psychomotor skill. At the same time, the Kinect camera lets a remote instructor watch the performance and comment. The Kinect could also capture data to assist in analyzing biomechanics.

These are a few sample ideas. Maybe readers can think of others.

 Conclusion

The past six years have seen dramatic changes with the coming into the mainstream of mobile devices as a new space for online learning, with unique affordances for interactivity. The mobile web and mLearning have expanded our horizons for entertainment and learning. The television, connected to the internet, offers a new field on which we can ply our craft as designers and developers. It’s a developing field with a lot of options that will take some time to sort out and settle down. But for those of us tasked with helping our clients and students learn and develop, it’s a field we would do well not to ignore.


 

Once again, feel free to share your comments, either below, or via social media.

An idea whose time has come? Reusable Learning Objects.

Introduction: A brief history of learning objects

When I was in school in the early 2000s, one of the trendy ideas in the field of educational technology was reusable learning objects (RLOs). Learning objects were a heavily promoted idea in the 1990s and early 2000s. The idea came out of US military-funded training research, focused on two goals:

  • To standardize multiple, mutually incompatible eLearning formats used by vendors to the armed forces so as to improve inter-operability of training content, and
  • To design materials using small, self-contained, meta-tagged modules to enable reuse and thus reduce development time and cost.

The name “learning object” comes from the computing paradigm of object-oriented programming, where small, self-contained code structures model objects and entities in the real world, their properties and inner structures, and the interactions between objects and entities. This paradigm allowed faster development through modular design, re-usable libraries of code, and encapsulation of object data within the objects.
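
For readers who haven’t worked with object-oriented code, here is a small, generic illustration (not tied to any particular eLearning system) of the ideas the learning object metaphor borrows: a self-contained class that encapsulates its own data, exposes behavior, and can be reused anywhere without the caller needing to know its internals.

// A self-contained "object" that encapsulates its own data and exposes behavior.
// Outside code cannot reach in and change the balance directly; it can only use
// the object's public methods, which is part of what makes the object safely reusable.
class BankAccount {
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("Deposit must be positive");
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

const account = new BankAccount();
account.deposit(100);
console.log(account.getBalance()); // 100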

Learning objects try to carry some of this success from software design and development to the design and development of eLearning.

What is a learning object?

A learning object is a short learning piece, usually digital, from a few minutes up to as much as an hour in length, though usually on the shorter side. The learning piece is focused on one learning objective. It will generally include an introduction, explanation and/or demonstration, activities for the learner for practice and / or consolidation, and an assessment. It is an irreducible element of knowledge, an atomic nugget of learning.

It was expected that eLearning objects would use a standard format such as SCORM for metadata attached to the objects. This would enable the learning object to be interoperable with different delivery platforms (LMSs). The idea was for the object to represent instruction for a small nugget of content related to a specific objective.
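
As an illustration only, the metadata travelling with a learning object might look something like the sketch below. The field names are simplified assumptions loosely modeled on IEEE LOM-style records, not an exact SCORM schema.

// Illustrative sketch of learning object metadata. Field names are simplified
// assumptions loosely modeled on IEEE LOM-style records, not an exact SCORM schema.
interface LearningObjectMetadata {
  identifier: string;
  title: string;
  description: string;
  keywords: string[];           // tags used for search and reuse
  objective: string;            // the single learning objective addressed
  typicalLearningTime: string;  // e.g. an ISO 8601 duration
  format: string;               // delivery format
  language: string;
}

const exampleObject: LearningObjectMetadata = {
  identifier: "lo-0042",
  title: "Reading a Wiring Diagram",
  description: "Short demonstration and practice activity on reading basic wiring diagrams.",
  keywords: ["electrical", "wiring diagram", "schematics"],
  objective: "Trace a circuit path on a standard wiring diagram.",
  typicalLearningTime: "PT15M",
  format: "video + interactive quiz",
  language: "en",
};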

The purpose of this was to enable re-use of training materials for faster, more efficient development of future content. Usually, when we want to reuse a body of training content as part of a new course, we need to break apart the old course, extract the useful bits, and then assemble what we want back together in a cohesive fashion.

The idea with learning objects is that each one already represents one of these smallest units of instruction, broken down in advance. All that is left when building a new course is to identify what you need to teach, find out what has already been built, evaluate it, and then either re-use or re-purpose the content.

To maximize this re-usability, the learning object is supposed to be as free of specific context (audience, place, type of organization, etc) as possible. For example, if multiple audiences would want to study toward this objective, media or examples used should not be limited to only one audience.

New courses could, in theory, be built by collecting and sequencing various learning objects, with an overall introduction and conclusion and some linkages to join it all together.

Critiques of the Learning Object concept

While learning objects were a trendy topic in the ’90s and ’00s, the idea was not without its critics.

There are several critiques of the learning object concept:

  • The idea of learning objects was pushed primarily by the military, and for its own concerns of operational efficiency and cost savings rather than any sense that it would produce better learning. The concerns were quantity of output and efficiency rather than quality of education.
  • The idea mainly focuses on eLearning, and specifically eLearning for a solitary, self-paced learner. Where social forms of learning involving cooperation and collaboration fit within this was not clear.
  • If context is removed, it is harder for learners to relate to the content on a concrete level. Media, graphics, and examples are either generic or drawn from some wide range, and they don’t speak closely to the learners’ particular reality. As such, you risk losing the attention and motivation of the learner, because they may not see the relevance clearly.
  • If context is removed, it is harder for learners to make meaningful connections between the content and other content unless the developer puts in extra effort to put this connective material back in. Statements like “as you remember from module 1,” or “you will learn more about this in the coming module,” or “this is related to these other topics” would mostly be removed from learning objects to maximize reusability. Learning these sorts of connections is an important part of learning new material, and is part of what makes new learning stick together cohesively in the learner’s mind.
  • Assembling courses from smaller learning objects is not just a matter of sticking together Lego blocks or assembling IKEA furniture. Remember that all that context, which serves as a connective tissue of sorts for the objects, has been stripped away to allow the reuse. To make the course most effective, you need to add contextual glue/mortar in between the pieces to improve flow and relevance. This cancels out much of the advertised time savings.

Disillusionment…

So up through the early and mid 2000s there was a lot of hype about learning objects. When I was in my Educational Technology program at that time, the concept was talked about, and readings were given, including critiques of the concept. Some large companies, schools, and educational networks did a lot of work in this field, with some of these projects still continuing. But the idea never took off broadly as advertised.

eLearning continued to gain broader acceptance in the academy and in industry. SCORM standards for eLearning content metadata and inter-operability went forward and became commonly used standards supported by authoring tools and Learning Management Systems. eLearning authoring tools became increasingly sophisticated, allowing simple eLearning to be developed more and more efficiently.

But the strict learning object idea did not remain top of mind for practitioners, who grew disillusioned with the concept as they experienced its limitations and difficulties, witnessed lots of bad eLearning content, and found the time savings and re-usability to be much less in practice than advertised.

The term learning object faded from common conversation.

In the meantime…

Life went on, technology advanced. Broadband internet became more widespread with faster speeds. This allowed easier upload and download of multimedia content, even video content.

The Web 2.0 era of user generated content came about. PHP discussion boards. Wikipedia. YouTube. Social media like Facebook. Question and answer sites like eHow and Quora. A Web where content could easily be generated by users, tagged for search, and uploaded.

This was furthered by the mainstreaming of mobile internet devices. The iPhone 3G appeared in 2008. The explosion of the smartphone market followed. This led to a proliferation of mobile apps on sophisticated pocket computers with cameras, microphones, and other sensors. Tablet computing went mainstream with the iPad in 2010. With these mobile devices came touch-based computing and context-aware computing. The widespread rollout and development of high speed mobile networks enabled voice, audio, and video transmission. Smart, small, lightweight, connected mobile devices mean that the user almost always has one on hand.

In the field of educational technology and training, there is an increasing emphasis on informal learning such as job aids, performance support systems, and just-in-time learning.

Finally, eLearning authoring tools have become much more user friendly, making it easier for experts to build their own content and distribute it. This broadens the development pool and makes it easier to generate content.

All of these developments and changes have come over the past ten years. We start to see a very different landscape from the one in which the learning objects concept originally peaked and then faded in the early 2000s.

When you look at all these developments together, and reflect, you start to wonder if maybe that old idea of learning objects might have renewed relevance in today’s environment.

 

So what’s changed?

So putting it together, what is different today?

Cell phones and inexpensive but powerful recording equipment let us easily record content. Easy to use authoring software lets us easily assemble media into small but meaningful packets of learning material. Ubiquitous network connections and sharing features in apps let us easily upload content from almost anywhere.

Platforms like YouTube, SoundCloud, Facebook, and others give us a place to upload and organize content, share it with others, see what others have shared, and further pass content along to others.

To keep up with the rapid pace of the age, these pieces of content are short and focused. In line with trends in informal learning and continuous learning, a lot of learning materials are posted on these sorts of platforms and on company intranets, so learners can access brief, relevant material as needed on the job rather than taking a formal course. There is also the trend toward microlearning, focusing on short learning pieces of a few minutes in length. Short learning pieces also work better with the usage patterns of smartphones.

Responsive web design and responsive eLearning design allow content to be developed once, hosted in one location, and accessible from different devices, at any time, wherever the learner may be.

New standards such as the Tin Can API (xAPI) make it easier and more flexible to track learning on materials accessed and hosted in different locations and in a wider variety of formats.
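
To give a flavor of what this looks like in practice, here is a minimal sketch of an xAPI (“Tin Can”) statement recording that a learner completed a short module. The actor/verb/object structure follows the xAPI specification; the LRS endpoint, credentials, and activity IDs are placeholders rather than real services.

// Minimal sketch: sending one xAPI statement to a Learning Record Store (LRS).
// The endpoint, credentials, and activity IDs below are placeholders.
const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Pat Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: {
    id: "https://example.com/modules/reading-wiring-diagrams",
    definition: { name: { "en-US": "Reading a Wiring Diagram" } },
  },
};

fetch("https://lrs.example.com/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    Authorization: "Basic <credentials>", // placeholder
  },
  body: JSON.stringify(statement),
}).then((response) => console.log("LRS responded with status", response.status));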

Conclusion:

And so we see a lot of elements of this original vision of learning objects being realized thanks to these many separate factors coming together.

And though it is a concept that has its valid criticisms, learning objects may offer an interesting and useful model to help manage and guide this new world of content production and sharing.

The earlier discussions of 10-15 years ago may give useful insight as to how to design, structure, and build short content. As well, it may guide us as to how to meta-tag, store, and search for these materials. And, finally, these earlier discussions may give us insights into how to repurpose and combine these learning pieces into larger, cohesive learning experiences, both online and blended learning, for both individuals and groups.

 

Thoughts?

 

Additional Links

http://www.reusability.org/read/

http://edutechwiki.unige.ch/en/Learning_object

Encouraging worker engagement and ongoing professional development with mLearning and Gamification

Introduction

I have a confession. I fiddle with my phone at work. We all do this from time to time throughout the day, when we’re bored, when our brain is mush, when we’re stuck or blocked, or just when we need a change of pace or a break. It’s compulsive (damn you, variable-ratio schedule of reinforcement!).

The companies we work for don’t tend to like this so much (something about productivity), which is why most of us try to keep it to a dull roar. Many companies have de jure HR policies technically prohibiting such a thing. It’s mostly unenforced, though, because most people are responsible, and because, really, who cares so long as objectives are being met. (Also, front line managers are usually just as glued to their phone screens!)

But what if instead of fighting this tendency, companies were smart about it?

I know. Crazy talk, this. But bear with me.

What if companies and their management embraced that workers mostly have smartphones, and simply accepted as a given that they are going to take them out during the day and use them? What if, taking this as a given, they looked at ways to make lemonade out of lemons, and found a way to harness this natural behavior of employees and channel it toward ends useful to the company and to the worker as an employee?

Mobile phones present an excellent opportunity for companies to help encourage engagement at work and ongoing professional development. The concept presented here is a novel idea for encouraging worker engagement and promoting ongoing professional development in an organization through a combination of mLearning and gamification.

Technological component of solution

Informal learning through short mLearning modules

Workers have phones and use them during the day. The company has new policies and procedures it wants the workers to learn, and also wants to have a workforce committed to ongoing personal and professional development. The company wants workers that are always learning and developing their skills. Combine these two elements and make the worker’s smartphone a platform for employee training and development.

Build continuing professional development materials in small mLearning modules, targeting a length of 2-5 minutes. Design these as informal learning pieces. Include typical didactic learning elements, but also fun, hands-on activities and games.

Design the modules as stand-alone learning moments that don’t depend too much on other learning pieces, somewhat like the late 1990s/early 2000s idea of “learning objects.” Design the modules with a “mobile-first” approach, such that they are intended to be viewed on mobile phones and look good and are easy to use there. Ensure that the modules are meta-tagged according to some logical schema/ontology of tags appropriate to the workplace or industry so that the modules are easily searchable and findable. Make the learning modules available on a TinCan API/xAPI-enabled LMS.

Ensure that there is an interface for searching and browsing modules that is easy and time-efficient for users to use on a smartphone.
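
As a sketch of what that search interface needs behind it, the snippet below filters a module catalogue by tags and available time; the module shape and the tag names are assumptions for illustration only.

// Simple tag-based search over the module catalogue, so a worker can find a
// relevant 2-5 minute module quickly on a phone. Data shapes are assumptions.
interface Module {
  id: string;
  title: string;
  durationMinutes: number;
  tags: string[];
}

function findModules(catalogue: Module[], wantedTags: string[], maxMinutes: number): Module[] {
  return catalogue
    .filter((m) => m.durationMinutes <= maxMinutes)
    .filter((m) => wantedTags.every((t) => m.tags.includes(t)))
    .sort((a, b) => a.durationMinutes - b.durationMinutes);
}

// e.g. "show me something on hydraulics safety I can finish in 5 minutes":
// findModules(catalogue, ["hydraulics", "safety"], 5);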

Link it to HR

Track the modules taken by learners and their scores on any assessments or pass/fail. Send this data to HR data systems for tracking.

Link the HR systems back the other way so that they can recommend specific modules based on the learner’s available time and on defined professional development objectives.

Allow the system to send suggestions based on the most viewed content, the most highly rated content, and the types of content the learner has enjoyed in the past. Enable a rating system, where learners can provide as much evaluation data as they like: either basic “smiley face” impressions data, a more in-depth questionnaire or short survey, or both.
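
One very simple way the suggestion logic could work is sketched below: rank modules the learner hasn’t taken by average rating, with a small boost for tags the learner has enjoyed before. The data shapes and the boost value are assumptions, not the recommendation algorithm of any particular product.

// Sketch of the suggestion rule: unseen modules ranked by average rating,
// boosted when they carry tags the learner has liked before. Shapes are assumptions.
interface RatedModule {
  id: string;
  tags: string[];
  averageRating: number; // e.g. 1-5 stars
}

function suggest(
  modules: RatedModule[],
  takenIds: Set<string>,
  likedTags: Set<string>,
  count: number
): RatedModule[] {
  return modules
    .filter((m) => !takenIds.has(m.id))
    .map((m) => {
      const tagBoost = m.tags.filter((t) => likedTags.has(t)).length * 0.5;
      return { module: m, score: m.averageRating + tagBoost };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, count)
    .map((s) => s.module);
}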

Within HR systems, take the data on module completion and track it against documented development objectives. For mandated training coming from HR, have subscribed modules or module clusters. This content would be suggested or pushed out from HR. Use notifications, either in-app notifications or via text/IM/Lync.

Gamify

Have a gamification layer to encourage and reinforce engagement with the system, though be careful to keep it within reason so that learners don’t feel “gamed.” Keep a tally of hours spent, courses completed, and skills learned. Use gamified elements like badges and leaderboards. Give the learners incentives to keep engaging with the system. Track some of these stats through HR to have data on how much the employee is engaging in learning activities.
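
A hedged sketch of what the tally behind such a layer might look like: hours and completions accumulate, and badges are awarded at simple thresholds. The badge names and thresholds are arbitrary assumptions for illustration.

// Gamification tally sketch: minutes, completions, and badges at simple thresholds.
// Badge names and thresholds below are arbitrary assumptions.
interface LearnerStats {
  minutesSpent: number;
  modulesCompleted: number;
  badges: string[];
}

function recordCompletion(stats: LearnerStats, durationMinutes: number): LearnerStats {
  const updated: LearnerStats = {
    minutesSpent: stats.minutesSpent + durationMinutes,
    modulesCompleted: stats.modulesCompleted + 1,
    badges: [...stats.badges],
  };
  if (updated.modulesCompleted >= 10 && !updated.badges.includes("Ten Modules")) {
    updated.badges.push("Ten Modules");
  }
  if (updated.minutesSpent >= 300 && !updated.badges.includes("Five Hours of Learning")) {
    updated.badges.push("Five Hours of Learning");
  }
  return updated;
}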

Make it social

Allow users to rate content, whether with a simple upvote/downvote or with a five star system. Allow them the opportunity to make comments. When a user is browsing modules, make information on average ratings or upvotes/downvotes visible to learners to help them with their choice. This gives feedback to designers/developers and also helps to identify quality content for other learners. This data gives a good sense of what sort of content the learners want and like, and this can be helpful for training development teams as a guide for how to allocate resources for future development.

Also, allow workers to recommend or share content they like to others. This will allow workers to help you promote good content and will further encourage engagement with the system.

Help learners get access

Make WiFi readily available to employees without restrictions. Employees are not really going to engage with this if you make them use their own data plans. You provide wired internet access to employees as a tool of work; do the same with in-building WiFi.

Human system component of solution

As with any human systems intervention, however, technology alone will not do the trick.

Workers need to be openly encouraged to use the system at work. As this is rolled out, the teams responsible need to make a concerted effort to promote the training system, both initially and as ongoing reinforcement. This has to be more than a mass email to “Employees: All.” A nice promotional video will be helpful, but workers also need to be introduced to it in a face to face meeting involving their front line management and perhaps their director. Give people a chance to ask questions and get answers.

“Bored? Brain-fried? Need a break? Tired? Stuck/writers block? Switch gears for a few minutes, play with your phone, so long as you’re using it to learn something.” Everyone should be actively encouraged to do this, and made to feel comfortable taking advantage of the policy.

Management at all levels, from the top down, needs to set an example of welcoming this, both in words and in concrete behavior. Managers also have to be encouraged to (within reason) use the system and be seen using the system.

The tracked data that HR collects (how many hours workers are engaged in learning, what they are learning, and their completion stats) can add to or supplement performance data for annual or semi-annual reviews. Workers should have visibility, via some dashboard, of the same sort of data that HR has summarizing their learning and training activities. That way, the worker can go into performance review meetings armed with data to demonstrate commitment to new learning and skill development. The learner can use this to start conversations about raises or about getting more resources or support for further, deeper training or a broadening of tasks. Conversely, managers can also look at the data to start their own conversations.

Conclusion

Workers have smartphones. Workers are going to look at them during the work day. If companies are smart and tech savvy, they can encourage ongoing training and development by putting out learning content in a way that is tailored to viewing on the workers’ smartphones. The effectiveness of this is reinforced if the company includes successful elements of gamification and social media and backs up the project with support from HR and management.

On MOOCs and Accreditation

Recently Udacity, one of the major MOOC providers, announced that, in a shift in focus, it is eliminating certificates of completion for non-paying students. Free students can still look at the materials and take part in activities, but if they want anything to show for it, the only option is to pay $150 a month as a verified student. The basic PDF certificates of completion that used to be available to anyone, paid or free, who worked through the material and passed are being phased out.

This is a mistake, a move 180 degrees in the wrong direction in the ongoing development process of a model for MOOCs and accreditation. In reality, not only should the free students be able to keep gaining some sort of certification in recognition of their efforts, but further moves need to be made to start accrediting these online courses with the universities giving the courses. MOOC consortiums actually need to be leaning harder on schools to make meaningful accreditation available to students rather than taking away elements they were already offering.

Now, don’t get me wrong; the new features Udacity is rolling out to those students paying the extra money are great steps in the right direction: added human support, and grading of submitted work by professors and others with subject expertise rather than peer grading with rubrics. This is great stuff, and if it helps the resulting certificates get taken more seriously in the marketplace, that is great. All of this helps to further the legitimacy of the online medium as a way to get university level education and continuing professional development.

But that’s no reason to throw the free students under the bus. Fine, don’t let the free students get their assignments graded by profs and TAs. That is expensive, and if someone doesn’t pay for it, it’s probably an unsustainable model, given all the other costs of putting on a nice MOOC.

Let the free students stay graded by peers. But offer them the ability to verify their identity with their webcam and a typing sample, as with the Signature Track on Coursera. (Note: the enhanced Signature Track certificates on Coursera are only available to paying customers, but the prices – $50 a course for courses that can take up to a few months – are much more accessible than the Udacity price of $150 per month.)

The technology for this sort of verification is already developed, so it doesn’t really cost the provider anything extra. And then give the free students a certificate for their efforts. Maybe that certificate is going to have a second rate market value compared to the paid certificates based on professor or TA grading. But at least they will have something of value to take away.

Yes, I realize that the paying students are the ones that pay the bills and keep the lights on. But MOOCs are a big deal in the first place because of the mass of free students. That is what is driving the traffic, the buzz. These millions of students are freely choosing to spend their free time in higher studies rather than vegging in front of the TV. Educational institutions should love this, and should want to encourage it.

The fact of the matter is that MOOCs blew up as an idea because of a promise, explicit and implicit, made to people. MOOC consortiums actively sold the idea that people could not only expand their knowledge, but also expand their opportunities through hard work in these online courses. The barriers to access to higher education would be lowered. If these masses of students lose faith in that vision, the whole thing will collapse. The numbers will shrink. With smaller overall enrolment, there will be correspondingly fewer that will stick around to pay for it. Part of that mass of free students eventually takes the plunge to pay for some sort of extras like an enhanced certification or a proctored examination. The free education generates paid education by helping to support a vibrant platform. The numbers also generate buzz and perceived legitimacy of the educational platform as a place to get meaningful education and training. This perceived legitimacy is key to attracting people to spend actual money on a course.

If you let the less well off students get certification and/or credit if they are willing to (1) do the work and (2) verify their identity for exams and assignments, they will be able to use that credit in professionally meaningful ways to help get into fruitful careers. Given a fruitful career path, these students can have the means to become paying students later on as they continue their professional development.

This is the sort of social welfare outcome that people in higher education say they want, right? Then make it happen. This is the sort of thing that both liberal-minded (give to help the less fortunate) and conservative-minded (remove barriers to let people raise themselves through their own determined effort) academics and administrators should be able to get behind.

Yes, it is the paid students who keep the bills paid, but understand what makes people willing to pay. If there is meaningful, professionally usable certification and credit available, working professionals will pay for it. It’s like with digital file sharing: it still exists, and is booming, but when reasonable and convenient means of access are available online, people pay for the content.

And if you demonstrate social responsibility by making perhaps lesser, but still meaningful and usable, credentials available to those who can’t afford to pay, then those who can pay will be even more driven to support you. Because you will have proved your worth as an organization to get behind.

On Natural User Interfaces (NUI)

Preface

Welcome, and an early happy weekend. This article is intended to give a bit of deeper background on trends in what are called “Natural User Interfaces,” or NUIs. The term refers to a computer technology trend related to how we interact with computers. It’s a term that I’ve used in some other articles recently but didn’t get into too deeply, because it takes a bit of explaining to do it justice.

Fair warning: this article is intended to be forward looking. It is NOT about looking at tools that are currently available off the shelf, and it is not about immediately applicable information. This is a look at where the technology of human-computer interfaces has come from, where it is, where it is probably going in the next few years, and what kinds of possibilities that could introduce for computer based training.

So in that respect, it’s about getting yourself mentally prepared for what will be coming a few years down the road. For those who like to think ahead, to dream about future Instructional Design possibilities using the tools that haven’t been invented yet.

My recommendation: if the preface and introduction pique your interest, bookmark this article, email yourself the link, and maybe set it aside for a quiet Sunday afternoon when you have some time to read and reflect. Then you can process it and reflect on the future possibilities of what you can do with this technology. Anyway, I hope you enjoy the article.

Introduction: What is a Natural User Interface (NUI)?

In a recent article, I talked about the future potential for the Kinect sensor to enable on-the-fly adjustments to presentation in e-Learning. There, I brought up the concept of a Natural User Interface, or NUI (pronounced “noo-ey”), almost in passing, but I recognize that a lot of people might not be familiar with the concept. The intention of the present article is to go into a little more background, to give some sense of the significance of this new type of human-computer interface, what came before it, how it has already changed how we use computers, and how future developments promise to further shape our interactions with computers. Finally, I will try to look ahead a bit at how these types of interfaces could shape the way we train people using computers.

Let’s get started.

Paradigms of human-computer interaction

So the first question, for those unfamiliar with the notion, would be: “what is a NUI?”

Well, to answer this question, it helps to go back a bit into the history of computing.

Computers as we generally know them (electronic calculation devices) have a history going back about 70 years, since the time of the second world war. If you want to be technical, you can trace computing back to Ada Lovelace and Charles Babbage and the Difference Engine and Analytical Engine in the early to mid 1800s, but for simplicity, let’s say 70 years, starting around 1945.

What started as a technology used to automate complex computations for a handful of high-end research and military institutions via massive electrical machines has evolved and grown over these seven decades to become a technology that is an integrated, essential part of the fabric of life (at least for people in relatively developed parts of the world). Along the way, the power, speed, and storage capacities of computers have increased exponentially, while the costs and sizes of components have at the same time shrunk at exponential rates. Computers have gone from machines numbering a handful in the whole world to numbering somewhere in the billions. Something like a billion powerful computers are carried around in people’s pockets in the form of smartphones, and embedded computing devices appear in almost any electrical device produced today.

Along with these developments, the means through which people interface and interact with computers have also dramatically changed. This change has come both as a result of technological developments and, at the same time, as a driver of the uptake of computers amongst the general population. Human-computer interaction has gone through a number of important paradigm shifts.

A paradigm, for those unfamiliar with the term, is a dominant contemporary pattern or way of conceptualizing and doing things. There have been a few major paradigms of human-computer interaction, with corresponding shifts as the technology moves from one dominant mode of interface to another.

I first want to speak about three major early paradigms of human-computer interaction:

  1. Batch interfaces (1940s to 1960s)
  2. Command Line Interfaces (1960s to 1980s)
  3. Graphical User Interfaces (1980s to 2000s)

I will then speak about the recently emerging paradigm of Natural User Interfaces (NUI). I will discuss some of the different examples of NUIs, and finally look at new possibilities for training opened up by these sorts of interfaces.

First paradigm: Batch interface (1940s to 1960s)

The first computer interface paradigm was the batch interface. In this setup, users entered commands through stacks of punch cards punched by hand and fed into a card reader peripheral, which read the punched holes via optical scanning and turned the entries into electrical inputs. Programmers would carefully enter their code on the punch cards and submit their stack of cards as a batch to be scheduled and run by the administrators of the machine.

Remember, this was a time when computers were huge machines taking up most of a room, and a whole university or department might share one of these machines. It was a scarce, in-demand resource, so programmers had to wait their turn for their code to be run. Computers could run one program for one user at a time. This produced a serious bottleneck. Users could not typically just sit at the computer by themselves and use it, because the resource was limited and the time could be used more efficiently if the programs were run together, one after another, as a batch.

This cycle from submission of the program to scheduling to entering it into the computer to running could take days, depending on how busy the administrators of the computer center were. And if there was a bug, something miscoded in the punch cards, the program would fail, and the programmer would have to start again, identifying where the error was without any sort of guidance (“syntax error on line 57,” etc.); such aids didn’t exist. The programmer would track down the error in logic by hand, and then resubmit the revised program to the queue. It was a system that encouraged carefully refined first-draft work.

In a batch interface, the computer reads commands, coded in rigidly structured messages, carries out commands, and gives output through a printer. The computer would take in the programs of many people at one time, and process them, one after another, as a batch. It was in this time period that the first computer languages were developed.

The frustrations of dealing with these batch processing systems were a major drive for computer science researchers of the day to look into alternate modes of human-computer interaction.

Punch card

 

Second paradigm: Terminals and Command line interface (CLI) (1960s to early 1980s)

Then followed the command line interface (CLI). This came about along with the development of early computer displays and monitors, with keyboards used as inputs. Users could input characters through a keyboard and see them displayed on the screen. This would take place at a terminal with a keyboard and display connected or networked to the main computer.

The main computer would be set up to time-share between multiple users. The computer rapidly switches between carrying out tasks for each user, allowing the central computer to “simultaneously” handle many users at once. To get a sense of how this works, imagine getting your household chores done by doing laundry for a minute, then switching to keeping an eye on dinner for a minute, then switching to attending to your kids for a minute, then switching to tidying the living room for a minute, then switching to sweeping the floor for a minute. Then imagine this task switching happening a million times faster. You’re doing one thing at a time in little slices, but to a casual observer, everything is smoothly proceeding all at once. Generally, your computer at home or at work “multi-tasks” in a similar sort of way. The coordination of the time sharing created a certain amount of overhead, using up computer resources, but this became less of a concern as computers became faster over time.

So the user no longer had to punch cards, hand them to someone else to feed into the machine, and wait. Programmers and application users could each get access to a terminal and use it to interact directly with the computer in something resembling real time. The user could input text and get text output back more or less immediately.

This paradigm also overlapped with the appearance of the first so-called “microcomputers” used as office business machines (e.g. the IBM PC era). It was also the paradigm under which the first “personal computers” were born: standalone computing machines small enough to fit on a desk.

The user of one of these machines could use the keyboard, aided by visual feedback from the screen, to type documents or to enter commands. The user controls the computer and performs actions such as creating, saving, deleting, copying, and moving files and directories using text-based commands typed at a command line. This can still be seen today in the command line shells of Linux and in the Command Prompt (cmd.exe) utility in Windows. MS-DOS, Microsoft’s early operating system, worked like this.

This is known as a Command Line Interface or CLI. More advanced computer programming languages were also developed at this time.
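To give a taste of what this style of interaction feels like, here is a toy command interpreter sketched in Python, purely for illustration; real shells like COMMAND.COM in MS-DOS or the Unix shells are of course far more capable. The command names are borrowed from DOS:

import os
import shutil

# A toy command-line interpreter: read a line of text, split it into a
# command and its arguments, carry out the command, print text output, repeat.
def main():
    while True:
        try:
            line = input("A> ")  # a prompt, in the spirit of DOS
        except EOFError:
            break
        parts = line.split()
        if not parts:
            continue
        command, args = parts[0].lower(), parts[1:]
        if command == "dir":                      # list files in the current directory
            for name in os.listdir("."):
                print(name)
        elif command == "copy" and len(args) == 2:
            shutil.copy(args[0], args[1])
        elif command == "del" and len(args) == 1:
            os.remove(args[0])
        elif command == "exit":
            break
        else:
            print("Bad command or file name")

if __name__ == "__main__":
    main()

Everything goes in and comes out as lines of text, which is the essence of the CLI.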

 

Third paradigm: Graphical User Interface (GUI) (1980s to 2000s)

The next paradigm was the Graphical User Interface or GUI (“goo-ey”). This consists of a “desktop metaphor,” with program windows, menus, virtual “buttons,” and other on-screen controls that the user manipulates with a mouse and pointer. Associated with this is the acronym WIMP (Windows, Icons, Mouse, Pointer).

The earliest GUIs came out of research at Xerox PARC in the 1970s. These ideas were later taken up by Apple Computer in the early Macintosh and by Microsoft in its Windows OS. Interactions simulated the way a person might interact with a real-world machine: “pushing” virtual buttons (with mouse clicks), turning virtual dials, and so on. It was at this stage, corresponding with sufficient miniaturization of computer components and a fall in prices, that the idea of a home “personal computer” took hold. With the desktop metaphor, windows, and mouse pointers, it became much more natural for everyday people to use computers. There were still many rough edges and certain arcane bits of knowledge to learn, but overall it became much simpler for ordinary people to do basic things with computers. Computers were starting down the road to becoming a household appliance that average people would use as part of their everyday lives.
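For a miniature of the “virtual button” idea, here is about the smallest possible GUI program, written with Python’s built-in tkinter toolkit (just a sketch; any GUI toolkit would do): one window containing one on-screen button that the user “pushes” with a mouse click.

import tkinter as tk

# A minimal GUI: one window, one label, one virtual button.
# Clicking the button with the mouse pointer runs the callback below.
def on_button_push():
    label.config(text="You pushed the button.")

root = tk.Tk()
root.title("Desktop metaphor, miniature edition")

label = tk.Label(root, text="Nothing pushed yet.")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Push me", command=on_button_push)
button.pack(padx=20, pady=10)

root.mainloop()   # hand control to the GUI event loop

Notice the inversion compared to the command-line sketch above: instead of the program reading commands in a loop, it hands control to an event loop and waits for the user’s clicks.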

 

The emerging paradigm: The natural user interface (NUI) (2000s to present)

The next paradigm of human-computer interaction is so-called Natural User Interfaces, or NUI. This can encompass a variety of types of interaction, but the overarching idea is that rather than having artificial or mechanical intermediary means of input, the user interacts with the computer in ways more like those used to interact with people and objects in the real world, and more directly. This typically means touch, body / hand gestures, facial expressions, speech, and giving queries or commands to the computer in something much closer to the ambiguities of everyday language rather than in rigid computer syntax.

What does this mean? Well, to illustrate, let’s look at the predominant method of computer interaction that we’re just coming from and are still wrapped up with: namely, the mouse. Or, more precisely, the mouse and pointer as a way of navigating graphical menus and control interfaces on a screen, with the keyboard for entering data like on some electronic typewriter. This form of interaction was almost completely predominant from around 1984 right up through to around 2008, a period of 24 years. The 1984 date marks the appearance of the Apple Macintosh (128K), which featured a GUI and mouse. 2008, on the other hand, saw the appearance of the iPhone 3G, which helped to explode the popularity of capacitive multi-touch smartphones. (As much as I dislike Apple’s closed model and think they’re past their prime, I have to grudgingly give them credit for having been squarely at the center of both of these technological inflection points.)

The mouse has been so much a part of our daily activities, at home and at work, for so long, that it’s easy to lose sight of how awkward and unnatural a way this is of interacting with a computer, or with anything. You sit in front of a computer screen, staring at it. You have a button on the screen. You have to grab this mouse on the desktop and drag it along the horizontal plane of the desk surface in order to move the visual of a pointer arrow on the vertical plane of the screen surface. And then you click a button on the mouse to “click” the on-screen button. Once upon a time, this was simply the only way to mediate the pressing of an on-screen button. But what is the most natural instinct today, given the technology now widely available, namely touchscreens? Well, since 2008, with the iPhone, and since 2010, with the iPad, it’s simple. You reach out your hand to the screen and touch the button to press it. The whole step becomes much more natural and effortless.

Admittedly, it’s still kind of weird, because you’re still blocked by this two-dimensional surface as you bump up against it and touch it or move your hands over it. It’s still a little limiting and artificial. But it’s getting there. At the least, you’re completing the metaphor of the classical graphical user interface: the desktop workspace on which you place things and move things around. Instead of moving them with a mouse, you move them directly with your fingers. You’re still operating something like an old-fashioned instrument panel, but the interaction has become more naturally engaging. You move as though you were actually operating an instrument panel in real life.

As mobile computing and the mobile internet have taken off, this has shaped web and application design, so that even on the desktop, user interface principles inspired by touchscreen usability – lots of white space, simplified menus and controls, and large button targets – have become predominant. Designers try to build applications that work well with both touch and mouse-and-keyboard input.

Interacting with the computer in these more natural, everyday ways means that, in a sense, the interface fades from attention and becomes invisible to the user. The experience is generally smoother, more realistic, more like a real-world interaction. The distance between the user and the computer becomes smaller, and the computer becomes a little more like an extension of the user’s body. The user simply interacts with the computer to do what he needs to do.

We call such an interface a Natural User Interface, abbreviated NUI, and pronounced “noo-ey.” It’s the idea of an interface that drapes itself over us, fits us like a glove by letting us interact with the computer more like we interact with real world objects and people.

In popular entertainment, we see examples of futuristic NUI concepts. Take the computer on Star Trek: TNG, for example, which the crew commanded through voice or touch screen control panels as they walked around the ship and did their thing.

Or the gesture interfaces Tom Cruise’s character used in the Pre-Crime unit in Minority Report.

http://www.youtube.com/watch?v=8deYjcgVgm8

Or more recently in the battle “simulation” in the film Ender’s Game.

Multi-touch capacitive touchscreens, as seen in modern smartphones and tablets, are one good example of an NUI. You interact with on-screen items by touching them with one or more fingers to stretch, rotate, or shrink them, and so on.
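Under the hood, pinch-to-zoom and two-finger rotation come down to simple geometry on the two touch points. Here is a rough sketch of that math in Python (generic arithmetic, not any particular platform’s touch API):

import math

# Given two fingers' positions at the start of a gesture and right now,
# derive the scale factor (pinch/stretch) and the rotation angle to apply.
def pinch_and_rotate(p1_start, p2_start, p1_now, p2_now):
    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = distance(p1_now, p2_now) / distance(p1_start, p2_start)
    rotation = angle(p1_now, p2_now) - angle(p1_start, p2_start)
    return scale, math.degrees(rotation)

# Example: the fingers move apart and twist slightly.
scale, rotation = pinch_and_rotate((100, 100), (200, 100), (80, 100), (240, 120))
print(f"stretch by x{scale:.2f}, rotate by {rotation:.1f} degrees")

The operating system reports the touch points; the application only has to apply the resulting scale and rotation to the object under the fingers.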

Virtual assistants or agents such as Apple’s Siri or Microsoft’s Cortana are another aspect of natural user interface technology. Here users interact with the computer in a somewhat conversational manner using speech. Some of the predictive elements of Google Now would also be examples.

Haptics (touch-based interfaces) are yet another element for making interfacing more natural, by simulating the textures, force feedback, and resistance you would feel when interacting with real objects.
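The core trick behind most force feedback is surprisingly simple: measure how far the user’s fingertip has pushed “into” a virtual surface and push back proportionally, like a spring. A bare-bones sketch of that idea in Python (generic physics, not any particular glove’s or device’s SDK):

# Bare-bones force feedback: treat the virtual surface as a stiff spring.
# If the fingertip has moved past the surface, push back in proportion to
# the penetration depth (Hooke's law: force = stiffness * penetration).

SURFACE_Z = 0.0      # the virtual surface sits at z = 0 (metres)
STIFFNESS = 800.0    # newtons per metre; a higher value feels "harder"

def feedback_force(fingertip_z):
    penetration = SURFACE_Z - fingertip_z
    if penetration <= 0:
        return 0.0                   # not touching the surface: no force
    return STIFFNESS * penetration   # inside the object: push back

for z in (0.02, 0.0, -0.005, -0.02):
    print(f"fingertip z = {z:+.3f} m -> feedback force {feedback_force(z):.1f} N")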

Virtual reality is another example of a natural user interface. The person interacts with the virtual world through head and body movements, receiving visual feedback through some sort of helmet-mounted screen. This is a technology going back some decades, but it is becoming more affordable and feasible now. An example of a mass-market product is the Oculus Rift from Oculus VR (in the news of late for having been acquired by Facebook).

Another example is augmented reality, as in Google Glass. Here, contextual information is projected within the user’s field of view, so that it is continuously present.

NUIs can also combine these different types of technology. For example, the combination of speech and body and hand gestures is used in the Microsoft Kinect sensor. Microsoft has opened up the sensor with free APIs and an SDK for developing NUI-enabled software for Windows using the Kinect for Windows sensor. The Kinect was previously sold as an optional peripheral for the Xbox 360 and is now a bundled part of the new Xbox One gaming and home entertainment console.

http://www.youtube.com/watch?v=Hi5kMNfgDS4

This particular device features a color camera and an infrared depth sensor, giving it machine vision with depth perception. Software in the device can make out body position, limb and finger movements, hand gestures, facial expressions, even the pulse of the user, and use these as inputs for control. An array of microphones handles noise cancellation and recognizes the directionality of sound. There is software on board for voice recognition and facial recognition. The user controls the game by voice input and by moving his body and hands.
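To give a flavour of how skeleton-tracking output might become a control input, here is a generic sketch in Python. The joint names and the frame layout are hypothetical, invented for illustration; the actual Kinect SDKs expose their own, much richer data structures:

# Generic sketch: turning skeleton-tracking data into a simple gesture command.
# The frame format is hypothetical: a dict mapping joint names to (x, y, z)
# positions in metres, as a sensor driver might deliver them.

def is_hand_raised(frame, hand="hand_right", head="head", margin=0.10):
    """True if the tracked hand is at least `margin` metres above the head."""
    if hand not in frame or head not in frame:
        return False                 # joint not tracked in this frame
    return frame[hand][1] > frame[head][1] + margin

# Example frames, standing in for a live feed from the sensor.
frames = [
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.20, 2.0)},  # hand at waist
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.75, 2.0)},  # hand raised overhead
]

for i, frame in enumerate(frames):
    if is_hand_raised(frame):
        print(f"frame {i}: hand raised -> issue 'next slide' command")

A training application could map gestures like this onto navigation or manipulation commands, while the microphones handle spoken input.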

A device like this represents a more natural way to interact and brings to life some of the models of human-computer interaction foreseen by science fiction. It is not hard to foresee possible applications to training, especially with the device’s APIs open to commercial and research development. The following links and the video below give some sense of what is being done with this sensor.

http://openkinect.org/wiki/Main_Page

http://www.microsoft.com/en-us/kinectforwindows/

http://createdigitalmotion.com/2013/10/microsoft-embraces-open-creative-coding-new-kinect-openframeworks-cinder-integration/

http://blogs.msdn.com/b/kinectforwindows/archive/2013/08/13/turn-any-surface-into-a-touch-screen-with-ubi-interactive-and-kinect-for-windows.aspx

http://www.youtube.com/watch?v=Iu2XH5p_hMM

The Xbox One with Kinect is probably the hardest push right now for mass adoption of natural user interface technology in the home. There is also a Kinect for Windows sensor coming out that will allow games and software to be written that use the device to control a computer.

http://www.microsoft.com/en-us/kinectforwindows/develop/

Another potential route forward might come in the form of the iPad a few generations down the road, if and when Apple can build something like today’s Kinect sensors into it. The iPad would make a sophisticated control device for the TV, with the iPad mirroring to the TV screen. This hypothetical future iPad could watch you through twin cameras to read your eye movements and facial expressions or to detect hand-gesture inputs. The microphone, combined with cloud services, could take speech queries or commands. The touch screen would detect button presses and finger or stylus drawing inputs. The accelerometer and gyro would recognize whether you’re sitting or standing and in what orientation you’re holding the iPad, so you could hold it in different orientations in space as a control surface or workspace. The limitation of the Xbox Kinect sensor is that it watches from farther back, so it can’t yet pick up as much nuance and detail as a closer camera could. A camera in the iPad could do that.

I wouldn’t be surprised to see Apple do this, getting everyone used to this method of interaction, and then hitting with the long-predicted Apple TV, integrating something like the Kinect sensor and several slick layers of natural user interface. Bang and bang. It would have a big impact.

Learning and Training Applications

All of this promises to really shake up how we interact with computers. And since interaction is such a key element of computer based training, this has implications for us as designers of instruction.

There are a number of foreseeable learning and training applications for this sort of technology. To name just a few examples:

Speech recognition and text-to-speech could be useful for language learning.

Gesture-based controls could enable more lifelike interaction with 3D models, especially when combined with stereoscopic 3D displays. This could potentially be used for a variety of applications in technical training:

  • to manipulate and examine equipment in maintenance training
  • to learn the structure of machinery by virtual manipulation of 3D models, including assembly and disassembly; haptic feedback could even simulate the sensation of touching and working with the actual equipment
  • in biochemistry, to manipulate 3D models of large molecules like proteins to understand their structure and active sites
  • or to visualize the steps of biological reactions

Virtual reality could be used to simulate the operation of certain complex equipment, including running through rare or emergency scenarios.

For soft skills, imagine the immersiveness of a training program where you interact with a 3D character in a scenario using just your body language and your speech. The realism is greatly heightened. Or imagine a training program that can give feedback on your body language, verbal tics like filler words, and your facial expressions while you give a simulated presentation or sales pitch.