Category Archives: Augmented reality

Posts on augmented reality technologies

Concept: Maintenance Training 2030


April 25, 2032 …

Ed, a Canadian aircraft maintenance training instructor located in Montreal, gets ready for his upcoming engine maintenance class.

Supposed to be a big class this week, 8 students.

  • 1 from Canada
  • 1 from the US
  • 2 from South America
  • 2 from India, and
  • 2 from China

Ed checks on the equipment in the classroom.

He puts on his Holo-Glasses, which come to life, softly glowing holographic data displays and icons popping up in front of him. The device recognizes him and launches the virtual assistant to greet him. “Hello Ed! How are you? All set for your class?” “Just fine, thanks. Everything set?” “Yes, Ed. All the students are going to be attending; no cancellations. Everything looks good with the students. One was having some technical issues earlier, but I helped him through it.” “Excellent,” thought Ed. “Everything looks alright with 15 minutes to go.”

Ed begins cueing up the opening presentation notes and the multimedia training manual. These pop up in their own windows in Ed’s field of view.

As Ed continues his preparations, the digital assistant relays notifications confirming the status of the students. The assistant is communicating with the students before class so Ed can focus on his preparation. Everything is looking good. Ed checks the 3D cameras and tests out his holopresence projection, seeing what his students will see.

“Loo-king good! Let’s do this!”

A few minutes later, the class begins. Ed welcomes the students as they holopresence in from their remote locations. Ed and the students, their Holo-Glasses on, take their places in the shared virtual classroom. The software places softly glowing holographic representations of the other participants in the shared visual space. Ed looks out at the students’ faces, and the students see a holographic overlay of the same classroom and the same students from their own virtual perspective. At first, the experience is a little eerie, but as the class gets going, and all the students introduce themselves, the illusion takes hold and it feels like everyone is in the same classroom.

Ed presents the content, asks questions, and listens to the responses. Master teacher that he is, he observes carefully, gets a sense of the learners’ body language and expressions, and, much like in a real class, adjusts as he goes. Ed brings up holographic 3D animations and models of the engine and components for the class to see. He zooms, rotates, and takes apart the holographic engine parts. The hologram also appears in the students’ fields of view, and Ed invites students here and there to come up, try for themselves, and demonstrate actions to the class. Static images appear on screens in mid-air, displaying schematics.

In the afternoon, it is time for the virtual hands-on lab exercises. Ed and the students convene again, once again with beautiful, interactive 3D holographic models of the engine floating in the shared digital overlay. This time, however, everyone puts on their SureTouch(TM) haptic feedback gloves.

The gloves use sensors to read finger and hand position, the headset tracks the hands relative to the digital model’s virtual position, and actuators in the gloves give pressure feedback to simulate handling real objects with substance instead of weightless holograms. It’s kind of weird at first, and it’s not quite the same as the real thing, but close enough for horseshoes and hand grenades, as they say. And definitely a hell of a lot cheaper than taking an actual engine offline for training.

As always, it took a few years for the technology to mature, and a lot of research and proofs of concept before the regulators really believed it could be as effective as the real thing. The Dutch Aerospace Lab did some great research, as always, and once EASA signed off, the other regulators followed pretty swiftly. Regulators came to appreciate virtual maintenance training, just as they had come to appreciate the power of full flight simulators decades before.

The company definitely appreciates it too – they save a small fortune in flights, hotels, taxis, and per diems by doing virtual classes like this over the course of the year. As do the students’ companies.

Ed, for one, appreciates it too. No packing, no airport security, no cramped 12-hour flight, no hotel room, no taxis, no jetlag, no traffic. Well … scratch that last one. This is Montreal, after all, where the seasons are winter … and construction. Even in 2032, there’s plenty of traffic. (You can’t win ’em all, I guess.) “Oh well,” thought Ed. “Decent weather today, so at least I could read a book on the way in while the car’s autodrive took care of all the unpleasantness.” And all from the comfort of the Montreal office.

Ed loves it, and his family loves it too – less time away. And besides, even though he felt a little silly admitting it, irrational as it was, Ed had felt a little weirded out by flying ever since the rollout of unpiloted commercial flights began in the late 2020s. Hundreds of times safer than human pilots or not, it’s still kind of creepy to have algorithms flying you around instead of people.

“Or maybe I’m just getting old,” Ed thought. It gets a little jarring after a while to see the world transform itself before your eyes so quickly. The young seem to take it in stride, unfazed, as they always do. And, Ed had to admit, the toys are pretty cool. All this change has its benefits.

Such is the stuff of life in a world of sci-fi dreams made true.


Augmented Reality and Wearable Computing: Possibilities for Google Glass in Training

Introduction

Hello, happy Friday and welcome to my blog. One of my main objectives with this blog is to encourage innovation in training by taking a “skate to where the puck is going” perspective, looking at new technology coming in the not-too-distant future and at ways it can enhance training. Today’s topic is augmented reality and wearable computing with a focus on Google Glass. To be enjoyed with a nice Friday morning coffee or tea at the desk. 🙂

NOTE: I am not affiliated with Google or Google Glass, and this article is not intended as a promotion of Google Glass in particular. I have chosen to reference this technology as a relatively well-known example of what exists today and will be on the market in the near future.

Augmented Reality

One interesting contemporary trend in computing is augmented reality. What is augmented reality? Unlike virtual reality, which immerses the user in an artificial world, augmented reality provides an “overlay” that augments or extends one’s appreciation, understanding, or navigation of the real world. The user operates in the real world as usual, but with useful location- or context-dependent information displayed in his field of view to assist him in whatever he is doing.

One example of augmented reality is the HUD, the heads-up display on aircraft. A projector behind the pilot projects flight data onto a transparent screen between the pilot’s head and the windshield. The artificial horizon line of the attitude display overlays the actual horizon, and various pieces of key data are also projected onto this overlay layer.

A HUD also features prominently as futuristic concept art in the Iron Man series of films.

Some mobile phone apps play with augmented reality as well, overlaying information about nearby points of interest on the camera preview shown on the phone screen. The app uses the location sensor, the compass, and the gyroscope to sense where the user is and in which direction the camera is pointing, and updates the augmentation accordingly.
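To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of how such an app might decide where to draw a point-of-interest marker: it computes the bearing from the user’s GPS position to the POI, compares it with the compass heading, and maps the angular offset to a horizontal screen coordinate. The POI data, field of view, and screen width are invented assumptions, not values from any particular device.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user's position to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def poi_screen_position(user_lat, user_lon, heading_deg, poi, fov_deg=60, screen_width=1080):
    """Map a POI to a horizontal screen coordinate if it lies inside the camera's field of view."""
    b = bearing_deg(user_lat, user_lon, poi["lat"], poi["lon"])
    # Signed angle between where the camera points and where the POI lies, in (-180, 180].
    offset = (b - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # outside the camera preview; draw nothing
    # Linear mapping from angular offset to pixel column.
    return int((offset / fov_deg + 0.5) * screen_width)

# Hypothetical example: a user in downtown Montreal looking roughly east.
poi = {"name": "Notre-Dame Basilica", "lat": 45.5046, "lon": -73.5566}
x = poi_screen_position(45.5017, -73.5673, heading_deg=90, poi=poi)
print(poi["name"], "overlay at x =", x)
```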

Wearable Computing

Another trend is that of wearable computing: devices with computing power and integrated sensors that collect data about the wearer and his environment (GPS position, orientation of the body in space, velocity, acceleration and rotation, direction faced or direction of movement, as well as data feeds like local temperature and local points of interest).

Some examples are the Pebble watch and the Samsung Galaxy Gear watch.

Google Glass: Upcoming augmented reality wearable computer


Google Glass is an upcoming device (not yet commercially available) that is receiving favorable reviews from participating beta testers. It is a relatively lightweight pair of glasses with an integrated computer and wireless internet connection. The device features embedded sensors and a small projector that projects imagery into the field of view of the wearer so that it overlays reality. It functions as both a wearable computer and an augmented reality device. A touch-sensitive area on the side of the glasses near the user’s right temple allows touch to initiate actions. There are also speakers for audio, a microphone to record audio and take voice commands, and a camera to take pictures or record video. You can see a video of Google Glass in action here.

Glass features

The user can watch videos, view pictures, take pictures, record video or audio, issue voice commands or queries, search the internet, share pictures, audio, or video to social networks or by email, send dictated IMs, and hold video conferences (with the user seeing the other person and the other person seeing the user’s point of view, or POV). There is also the ability to access Google services such as Translate.

Training and performance support applications of Glass

In this post, I’d like to describe some of the conceivable training and performance support applications of this new tool. Some of these possible applications are based on known out-of-the-box capabilities of Google Glass, while others are reasonably foreseeable possibilities given the capabilities of the device and some creative effort on the part of app developers.

Technical Training, equipment maintenance

Google Glass could be very useful in technical / equipment maintenance training.

  • As a means to collect easy, hands-free POV video of an expert / SME demonstrating how to fix an issue with the equipment or perform some procedure. This video could be streamed live, or could simply be recorded as clips for use in online help or formal eLearning.
  • As a means to capture POV video of the trainee performing the task while streaming it to an expert. The expert observes and gives verbal, and possibly video, feedback over an audio/video conferencing connection, possibly through another Glass and potentially from a remote location.
  • As a visual support for component identification and access to more detailed information. Camera image recognition could recognize the equipment and overlay a 3D AutoCAD or NGRAIN model with labels on the components. It could also enable an easy link through to online technical documentation formatted specially for the Glass display (a minimal sketch of this idea follows below).
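Here is a minimal sketch of the component-identification idea in the last bullet above. The recognize_component() function stands in for whatever image-recognition service would actually be used, and the catalogue entries and URLs are invented for illustration.

```python
# Hypothetical mapping from recognized component IDs to overlay labels and documentation links.
CATALOGUE = {
    "fuel_pump_A12": {
        "label": "Fuel pump (A12)",
        "doc_url": "https://example.com/docs/fuel_pump_A12",
    },
    "igniter_B03": {
        "label": "Igniter plug (B03)",
        "doc_url": "https://example.com/docs/igniter_B03",
    },
}

def recognize_component(camera_frame):
    # Stand-in: a real app would run image recognition on the camera frame.
    return "fuel_pump_A12"

def overlay_for_frame(camera_frame):
    """Return the label and documentation link to draw over the recognized part."""
    component_id = recognize_component(camera_frame)
    entry = CATALOGUE.get(component_id)
    if entry is None:
        return None
    return {"label": entry["label"], "link": entry["doc_url"]}

print(overlay_for_frame(camera_frame=None))
```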

Soft Skills training

On the other side of things, Glass could also be very useful in soft skills training.

  • POV video capture of user performance in a role-playing simulation could be used for review in post-simulation debriefing sessions.
  • Or, to turn things around, someone other than the trainee could wear Google Glass and record in the same role-playing simulation. This would be good for client-facing skills training: the learner can see himself and his performance through the client’s direct POV. The video could be recorded and reviewed after the session, or both people could wear Glass and establish a video-conferencing link, so that the trainee performs the simulated interaction in the scenario while receiving live feedback on how the client experiences it.

Performance support for someone working in a people-centric / client facing position

Certain professions place a much higher emphasis on meeting people, making and nurturing contacts, and all-around growing and maintaining a massive “mental rolodex”: politicians, publicists and public relations professionals, salespeople, talent agents, and so on. The people who go into these fields often have unusual talents for this, but everyone has his cognitive limits. We remember the face but not the name, for example, creating socially awkward moments.

A wearable eyeglass computer like Glass could help as a performance support. When a person comes into view, facial recognition could be carried out and the face checked against the contact database. This then brings up useful reminder data – photo, name, company, age, and any other relevant information – allowing a smooth start to the conversation. Glass could also allow an easy way to photograph business cards and automatically (via OCR) extract information to import to contacts.
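A minimal sketch of the lookup step might look like the following, assuming the face-recognition service supplies a numeric embedding for the face in view. The contacts, embeddings, and similarity threshold are invented purely for illustration.

```python
import math

# Hypothetical contact records: each has a face embedding produced by whatever
# face-recognition service the glasses tie into (the numbers here are made up).
contacts = [
    {"name": "Alice Tremblay", "company": "Acme Aero", "embedding": [0.12, 0.88, 0.47]},
    {"name": "Raj Patel", "company": "Skyline PR", "embedding": [0.91, 0.05, 0.30]},
]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookup_contact(face_embedding, threshold=0.85):
    """Return the best-matching contact, or None if nothing clears the threshold."""
    best = max(contacts, key=lambda c: cosine_similarity(face_embedding, c["embedding"]))
    if cosine_similarity(face_embedding, best["embedding"]) < threshold:
        return None
    return best

match = lookup_contact([0.10, 0.90, 0.45])
if match:
    print(f"{match['name']} ({match['company']})")  # shown as a reminder card overlay
```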

Language Learning

Augmented reality layers could make useful scaffolding for language learning support apps, tying into Google’s impressive tools for image recognition, speech-to-text, and text-to-speech. Text and audio overlays could provide helpful support to the learner, either in classroom practice or out in the real world. The support could act as an optional scaffold that the learner can turn on or off as needed (a minimal sketch of this idea follows the list below). This could conceivably involve:

  • Live OCR (optical character recognition) and live translation overlay of signage or written material (store displays, street signs, restaurant menus, etc.)
  • Live speech-to-text of the foreign language, with a translation overlay on screen
  • Live suggestions of phrases to use in conversation, with spelling and pronunciation cues
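Here is a minimal sketch of the optional-scaffold idea from the list above. The ocr() and translate() functions are stand-ins for whatever recognition and translation services a real app would call, and the sample sign text is invented.

```python
def ocr(image):
    # Stand-in: a real app would call an OCR service on the camera frame.
    return "Défense de stationner"

def translate(text, target_lang="en"):
    # Stand-in: a real app would call a translation service.
    return {"Défense de stationner": "No parking"}.get(text, text)

class LanguageScaffold:
    """The learner can toggle the overlay on or off as confidence grows."""

    def __init__(self, enabled=True):
        self.enabled = enabled

    def toggle(self):
        self.enabled = not self.enabled

    def overlay_for(self, camera_frame):
        """Return the text to draw over the sign, or None when the scaffold is off."""
        if not self.enabled:
            return None
        source = ocr(camera_frame)
        return f"{source} -> {translate(source)}"

scaffold = LanguageScaffold()
print(scaffold.overlay_for(camera_frame=None))  # "Défense de stationner -> No parking"
scaffold.toggle()                               # learner feels confident, turns support off
print(scaffold.overlay_for(camera_frame=None))  # None
```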

As well, Glass could enable course activities or assignments where the learner goes out into the community and records himself practicing the new language in a real situation (buying something in a store and talking to the clerk, asking for directions, etc.). The conversations and interactions could be reviewed or graded afterwards. The world can become a language lab.

Historical site and museum interpretation

The technology could also find great usage in historical site or museum interpretation.

The user could borrow or rent a pair at the entrance or visitor’s center and use them to experience a transparent overlay of video or 3D animation based on location, providing the experience of being there, back then. This could be used at the sites of famous battles like Waterloo or the Plains of Abraham, at Revolutionary War, Civil War, WWI, and WWII battlefields, or at historical ruins like the Colosseum or the Acropolis. A natural history site could show what the location looked like in the Jurassic period, or give a visual sense of what the glaciers looked like during the last ice age.

Similarly, it could be used in museum exhibit interpretation. The user borrows or rents the glasses, comes to a display or exhibit, and the Glass detects the location and makes an audio-visual presentation available over Wi-Fi.

Operational performance support

Glass could also potentially offer performance support for the operation of equipment like airplanes and cars.

In aviation, for example, this could enable hands-free, eyes-ahead checklists as an alternative to glancing down at the checklist display on a cockpit multi-function display (MFD). Glass, receiving a wireless feed from the airplane, could display the current checklist item, upcoming checklist items, and any special cautions or warnings for each step. Visual annunciations could also potentially appear on the display.
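As a rough illustration of what such a checklist feed might drive on the display, here is a minimal sketch in Python. The checklist content, field names, and the idea of a wireless feed supplying the current step are assumptions made for this example only.

```python
# Hypothetical "before start" checklist, as it might be delivered over the aircraft feed.
before_start = [
    {"item": "Parking brake", "action": "SET", "caution": None},
    {"item": "Fuel quantity", "action": "CHECK", "caution": "Confirm against flight plan"},
    {"item": "Beacon", "action": "ON", "caution": None},
]

def checklist_view(checklist, current_index, lookahead=2):
    """Build what the Glass display would show: the current item plus upcoming ones."""
    current = checklist[current_index]
    upcoming = checklist[current_index + 1 : current_index + 1 + lookahead]
    view = {
        "current": f"{current['item']} ... {current['action']}",
        "upcoming": [f"{i['item']} ... {i['action']}" for i in upcoming],
    }
    if current["caution"]:
        view["caution"] = current["caution"]
    return view

print(checklist_view(before_start, current_index=1))
```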

It could also be used in driver training or driver performance support, for example as a reinforcement for defensive driver training. The user could opt in to supportive prompts, or Glass could monitor the driver’s point of view and surrounding traffic via the camera and give scaffolding prompts until the learner reaches proficiency. For example, it could:

  • prompt the driver to check the mirrors and the speedometer periodically
  • provide a visible cue if the speed calculated from GPS and accelerometer data exceeds the posted limit (known from GPS/Google Maps) by some threshold (say, 10-20 km/h)
  • provide a visible or aural prompt to encourage attention or slowing if brake lights are seen far ahead, a car far ahead is otherwise detected to be slowing, or the following distance behind the car in front falls below a set threshold
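Purely as an illustration, here is a minimal sketch (in Python) of the kind of threshold logic the speed and following-distance prompts above describe. The margin values and sensor readings are invented assumptions, not tuned figures from any real system.

```python
def driving_prompts(speed_kmh, posted_limit_kmh, following_distance_s,
                    speed_margin_kmh=10, min_following_s=2.0):
    """Return the scaffolding prompts the display would show for one sensor reading."""
    prompts = []
    if speed_kmh > posted_limit_kmh + speed_margin_kmh:
        prompts.append(f"Slow down: {speed_kmh:.0f} km/h in a {posted_limit_kmh} km/h zone")
    if following_distance_s < min_following_s:
        prompts.append(f"Increase following distance ({following_distance_s:.1f} s)")
    return prompts

# Example reading: 68 km/h in a 50 km/h zone, 1.4 s behind the car ahead.
for prompt in driving_prompts(68, 50, 1.4):
    print(prompt)
```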

Conclusion

These are just a few possible training or performance support applications that can be imagined for Google Glass. Doubtless others can and will be imagined and realized as the technology rolls out commercially.

Feel free to leave a reply to share your comments and your own ideas.