
Augmented Reality and Wearable Computing: Possibilities for Google Glass in Training

Introduction

Hello, happy Friday and welcome to my blog. One of my main objectives with this blog is to encourage innovation in training by taking a “skate to where the puck is going” perspective: looking at technology arriving in the not-too-distant future and at ways it can enhance training. Today’s topic is augmented reality and wearable computing, with a focus on Google Glass. To be enjoyed with a nice Friday morning coffee or tea at the desk. 🙂

NOTE: I am not affiliated with Google or Google Glass, and this article is not intended as a promotion of Google Glass in particular. I reference this technology as a relatively well-known example of what currently exists and will reach the market in the near future.

Augmented Reality

One interesting contemporary trend in computing is so-called augmented reality. What is augmented reality? Unlike virtual reality, which immerses the user in an artificial world, augmented reality provides an “overlay” that augments or extends one’s appreciation, understanding, or navigation of the real world. The user operates in the real world as usual, but with useful location- or context-dependent information displayed in his field of view to assist him in whatever he is doing.

One example of augmented reality is the HUD, or heads-up display, found on aircraft. A projector behind the pilot projects flight data onto a transparent screen between the pilot’s head and the windshield. The artificial horizon line of the attitude display overlays the actual horizon, and various pieces of key data are projected onto the same overlay layer.

A HUD also features prominently as futuristic concept art in the Iron Man series of films.

Some mobile phone apps play with augmented reality as well, showing information about nearby points of interest overlaid on the camera preview image on the phone screen. The app uses data from GPS, the compass, and the gyroscope to sense where the user is and in which direction the camera is pointing in three-dimensional space, and updates the augmentation accordingly.
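
For a rough sense of the math such an app does under the hood, here is a minimal Python sketch of the overlay decision, assuming a GPS fix, a compass heading, and a 60-degree horizontal field of view. All of the coordinates and values below are illustrative, not taken from any particular app:

```python
import math

# Minimal sketch of the overlay decision an AR points-of-interest app makes.
# The GPS fix, compass heading, and POI coordinates are illustrative values.

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) from the user to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def in_camera_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=60):
    """True if the POI falls within the camera's horizontal field of view."""
    offset = (bearing_to(user_lat, user_lon, poi_lat, poi_lon) - heading_deg + 180) % 360 - 180
    return abs(offset) <= fov_deg / 2

# Example: illustrative user position, facing roughly north-east (heading 45 degrees),
# checking a nearby POI that lies roughly to the north-east.
print(in_camera_view(45.4215, -75.6972, 45.0, 45.4291, -75.6886))
```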

Wearable Computing

Another trend is that of wearable computing. This involves devices with computing power and integrated sensors that collect data about the wearer and his environment (GPS position, orientation of the body in space, velocity, acceleration/rotation, direction facing or direction of movement), as well as data feeds like local temperature and local points of interest.

Some examples are the Pebble watch and the Samsung Galaxy Gear watch.

Google Glass: Upcoming augmented reality wearable computer


Google Glass is an upcoming device (not yet commercially available) that is receiving favorable reviews from beta testers. It is a relatively lightweight pair of glasses with an integrated, wirelessly internet-connected computer. The device features embedded sensors and a small projector that projects imagery into the field of view of the wearer so that it overlays reality; it functions as both a wearable computer and an augmented reality device. A touch-sensitive area on the side of the glasses, near the user’s right temple, lets the wearer initiate actions with a tap or swipe. There are also speakers for audio, a microphone to record audio and take voice commands, and a camera to take pictures or record video. You can see a video of Google Glass in action here.

Glass features

The user can watch videos, view pictures, take pictures, record video or audio, issue voice commands or queries, search the internet, share pictures, audio, or video to social networks or email, send dictated IMs, and hold video conferences in which the user sees the other person and the other person sees the user’s POV (point of view). There is also the ability to access Google services such as Translate.

Training and performance support applications of Glass

In this post, I’d like to describe some of the conceivable training and performance support applications of this great new tool. Some of these possible applications are based on known out-of-the-box capabilities of Google Glass, while others are reasonably foreseeable possibilities given the capabilities of the device and some creative effort on the part of app developers.

Technical Training, equipment maintenance

Google Glass could be very useful in technical / equipment maintenance training.

  • As a means to collect easy, hands-free POV video of an expert / SME demonstrating how to fix an issue with the equipment or perform some procedure. This video could be streamed live, or it could simply be a way to record clips for use in online help or formal eLearning.
  • As a means to capture POV video of the trainee performing the task while streaming it to an expert. The expert observes and gives verbal, and possibly video, feedback over an audio/video conferencing connection, possibly through another Glass and potentially from a remote location.
  • As a visual support for component identification and access to more detailed information. Camera image recognition could recognize the equipment and overlay a 3D AutoCAD or NGRAIN model with labels on components. It could also potentially enable easy linking through to online technical documentation formatted specially for viewing on the Glass screen (see the sketch after this list).
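
As a toy illustration of that last idea, here is a minimal Python sketch of how recognized component labels might be mapped to overlay annotations and documentation links. The component names, URLs, and the recognize_components() stub are entirely hypothetical placeholders for a real image-recognition step:

```python
# Hypothetical lookup table: component label -> display title and documentation link.
COMPONENT_DOCS = {
    "hydraulic_pump": ("Hydraulic pump", "https://docs.example.com/hydraulic_pump"),
    "relief_valve": ("Pressure relief valve", "https://docs.example.com/relief_valve"),
}

def recognize_components(camera_frame):
    """Stand-in for a real image-recognition step; returns the labels it 'saw'."""
    return ["hydraulic_pump"]  # hypothetical result, for illustration only

def build_overlay(camera_frame):
    """Build the label + link annotations the headset would draw over the equipment."""
    annotations = []
    for label in recognize_components(camera_frame):
        if label in COMPONENT_DOCS:
            title, url = COMPONENT_DOCS[label]
            annotations.append({"label": title, "link": url})
    return annotations

print(build_overlay(camera_frame=None))
```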

Soft Skills training

On the other side of things, Glass could also be very useful in soft skills training.

  • POV video capture of user performance in a role-playing simulation could be used for review in post-simulation debriefing sessions.
  • Or, to turn things around, someone other than the trainee could wear Google Glass and record during the same role-playing simulation. This would be good for client-facing skills training: the learner can see himself and his performance through the client’s direct POV. The video could be recorded and reviewed after the session, or both people could wear Glass and establish a video-conferencing link, so the trainee performs the simulated interaction in the scenario while receiving live feedback on how the client experiences it.

Performance support for someone working in a people-centric / client-facing position

Certain professions place a much higher emphasis on meeting people, making and nurturing contacts, and all-around growing and maintaining a massive “mental rolodex.” This could include politicians, public relations professionals and publicists, salespeople, talent agents, etc. The people who go into these fields often have unusual talents for this, but everyone has his cognitive limits. We remember the face but not the name, for example, creating socially awkward moments.

A wearable eyeglass computer like Glass could help as a performance support. When a person comes into view, facial recognition could be carried out and the face checked against the contact database. This would bring up useful reminder data – photo, name, company, age, and any other relevant information – allowing a smooth start to the conversation. Glass could also make it easy to photograph business cards and automatically extract the information (via OCR) for import into contacts.
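
To make the matching step concrete, here is a minimal Python sketch, assuming some upstream component has already turned a camera frame into a face embedding vector. The contacts, the embedding values, and the 0.8 similarity threshold are all hypothetical:

```python
import numpy as np

# Hypothetical contact database: name -> (stored face embedding, reminder data).
CONTACTS = {
    "Jane Doe": (np.array([0.12, 0.85, 0.33]), {"company": "Acme Corp", "met_at": "Trade show"}),
    "John Roe": (np.array([0.91, 0.10, 0.40]), {"company": "Example Inc", "met_at": "Conference"}),
}

def best_match(face_embedding, threshold=0.8):
    """Return the contact whose stored embedding is most similar (cosine) to the observed face."""
    best_name, best_score = None, -1.0
    for name, (stored, _info) in CONTACTS.items():
        score = np.dot(face_embedding, stored) / (np.linalg.norm(face_embedding) * np.linalg.norm(stored))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, CONTACTS[best_name][1]) if best_score >= threshold else (None, None)

# Example: an observed embedding close to Jane's stored vector.
print(best_match(np.array([0.15, 0.80, 0.35])))
```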

Language Learning

Augmented reality layers could make useful scaffolding for language learning support apps, tying into Google’s impressive tools for image recognition, speech-to-text, and text-to-speech. Text and audio overlays could provide helpful support to the learner, either in classroom practice or out in the real world. The support could act as an optional scaffold that the learner turns on or off as needed. This could conceivably involve:

  • Live OCR (optical character recognition) and live translation overlay of signage or written material (store displays, street signs, restaurant menus, etc.)
  • Live speech-to-text of the foreign language, with a translation overlay on screen
  • Live suggestions of phrases to use in conversation, with spelling and pronunciation cues (a rough sketch of the translation-overlay idea follows below)
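
As a very rough sketch of the translation-overlay idea, here the OCR step is assumed to have already produced text blocks with screen positions, and a tiny phrasebook dictionary stands in for a real translation service. All of the names and data are hypothetical:

```python
# Tiny stand-in for a real translation service (a real app would call an online API).
PHRASEBOOK = {"sortie": "exit", "poussez": "push", "tirez": "pull"}

def translate(text):
    """Word-by-word lookup; purely illustrative."""
    return " ".join(PHRASEBOOK.get(word.lower(), word) for word in text.split())

def build_overlay(ocr_blocks):
    """Turn OCR'd text blocks into translated captions drawn at the same screen position."""
    return [{"position": block["position"], "caption": translate(block["text"])}
            for block in ocr_blocks]

# Example: two pieces of signage the camera/OCR step picked up.
signage = [{"text": "Sortie", "position": (120, 40)},
           {"text": "Poussez", "position": (300, 220)}]
print(build_overlay(signage))
```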

As well, Glass could enable course activities or assignments where the learner goes out into the community and records himself practicing the new language in a real situation (buying something in a store and talking to the clerks, asking for directions, etc.). The conversations and interactions could then be reviewed or graded afterwards. The world can become a language lab.

Historical site and museum interpretation

The technology could also find great use in historical site or museum interpretation.

The user could borrow or rent a pair at the entrance or visitor centre and use them to experience a transparent overlay of video or 3D animation based on location, providing the experience of being there and then. This could be used at the sites of famous battles (Waterloo, Civil War, WWI and WWII battlefields, the Plains of Abraham, Revolutionary War sites, etc.) or at historical ruins like the Colosseum or the Acropolis. A natural history site could show what the location looked like in the Jurassic period, or give a visual sense of what the glaciers would have looked like in the last ice age.

Similarly, it could be used in museum exhibit interpretation. The user borrows or rents the glasses and comes to a display or exhibit; the Glass detects the location and makes an audio-visual presentation available over Wi-Fi.
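
A minimal sketch of that location trigger might look like the following Python. The exhibit coordinates, the 15-metre trigger radius, and the presentation file names are made-up examples:

```python
import math

# Hypothetical exhibits: (name, latitude, longitude, presentation to play).
EXHIBITS = [
    ("Bronze Age gallery", 48.8606, 2.3376, "bronze_age_intro.mp4"),
    ("Medieval armour hall", 48.8609, 2.3370, "armour_tour.mp4"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def presentation_for(visitor_lat, visitor_lon, radius_m=15.0):
    """Return the presentation for the nearest exhibit within the trigger radius, if any."""
    candidates = [(distance_m(visitor_lat, visitor_lon, lat, lon), clip)
                  for _name, lat, lon, clip in EXHIBITS]
    dist, clip = min(candidates)
    return clip if dist <= radius_m else None

print(presentation_for(48.8607, 2.3375))  # visitor standing near the first exhibit
```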

Operational performance support

Glass could also potentially offer performance support for the operation of equipment like airplanes and cars.

In aviation, for example, this could enable hands-free, eyes-straight-ahead checklists as an alternative to glancing down at the checklist display on a cockpit multi-function display (MFD). Glass, receiving a wireless feed from the airplane, could display the current checklist item, upcoming checklist items, and any special cautions or warnings for particular steps. Visual annunciations could also potentially appear on the display.
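
As a loose illustration, here is a minimal Python sketch of the kind of checklist state a Glass app might render, assuming something upstream (the aircraft feed or the pilot’s acknowledgements) advances the current item. The checklist contents are invented for illustration:

```python
# Hypothetical "Before landing" checklist: (item, optional caution) pairs.
BEFORE_LANDING = [
    ("Landing gear ... DOWN", "Verify three green lights"),
    ("Flaps ... LANDING", None),
    ("Autopilot ... DISENGAGE", "Below 200 ft AGL"),
]

def render_checklist(items, current_index):
    """Return the lines the head-mounted display would show: current item, its caution, and what's next."""
    lines = []
    item, caution = items[current_index]
    lines.append(f"> {item}")
    if caution:
        lines.append(f"  CAUTION: {caution}")
    for upcoming, _ in items[current_index + 1:current_index + 3]:
        lines.append(f"  next: {upcoming}")
    return lines

# Example: the pilot has completed the first item, so the second item is current.
print("\n".join(render_checklist(BEFORE_LANDING, current_index=1)))
```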

It could also be used in driver training or driver performance support, for example as reinforcement for defensive driver training. The user could opt in to supportive prompts, or the Glass could monitor the driver’s point of view and the surrounding traffic via its camera and give scaffolding prompts until the learner reaches proficiency. For example, it could:

  • prompt the driver to check the mirrors periodically
  • prompt the driver to check the speedometer periodically
  • provide a visible cue if the speed calculated from GPS and the accelerometer exceeds the posted limit (known from GPS/Google Maps) by some threshold (say, 10-20 km/h over)
  • provide a visible or aural prompt to encourage attention or slowing if brake lights are seen far ahead, if a car ahead is otherwise detected to be slowing, or if the following distance to the car in front falls below some set threshold
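
To make the speed and following-distance rules concrete, here is a minimal Python sketch. The 15 km/h margin, the two-second following rule, and the sensor readings are illustrative assumptions:

```python
def speed_warning(gps_speed_kmh, posted_limit_kmh, margin_kmh=15.0):
    """Warn if the measured speed exceeds the posted limit by more than the margin."""
    return gps_speed_kmh > posted_limit_kmh + margin_kmh

def following_distance_warning(gap_m, speed_kmh, min_gap_seconds=2.0):
    """Warn if the gap to the car ahead is less than a two-second following distance."""
    speed_ms = speed_kmh / 3.6
    return gap_m < speed_ms * min_gap_seconds

# Example readings: 112 km/h in a 90 km/h zone, 40 m behind the car ahead.
print(speed_warning(112, 90))               # True: more than 15 km/h over the limit
print(following_distance_warning(40, 112))  # True: 40 m is under ~62 m (2 s at 112 km/h)
```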

Conclusion

These are just a few possible training or performance support applications that can be imagined for Google Glass. Doubtless others will be conceived and realized as the technology rolls out commercially.

Feel free to leave a reply to share your comments and your own ideas.