Hands-on with HoloLens

I’ve written a lot here previously about the potential of Augmented Reality, and specifically the HoloLens, introduced in 2015 and released as a developer edition in 2016. However, with the HoloLens being expensive ($4000 in Canada) and a little hard to find, I hadn’t previously had the chance to try one hands-on. Last week, thanks to a vendor demo at work, I was finally lucky enough to get my hands and eyes on this amazing piece of technology. Here I share some observations from that brief hands-on session.

Note that I only got to spend about 30 minutes directly with the device, so there was no time for a full and comprehensive review, nor for any deep assessment of how well it would fit into an extended-use workflow. But I explored as much as I could in the time I had.


First of all, the strengths:

  • The refresh rate and power seem good. I didn’t notice any lag or stutter while moving around a relatively complex 3D image.
  • The device features a decent amount of storage: nominally 64GB. You end up with somewhat less once the operating system is installed, but it’s still a decent amount.
  • The main interface is similar to Windows with the tiled layout. Menu interfaces are similar to Windows, Xbox One, etc. It runs a special edition of Windows 10.

  • The voice interface and voice recognition seemed to work very well. The OS uses the Cortana voice assistant as one of its main interface modalities (along with gestures). You can use voice commands to launch apps and perform other basic tasks.
  • The operating system comes with a nice tutorial app to help the user quickly acclimate to the user interface. I didn’t get a chance to go all the way through this tutorial, but what I saw was good.
  • The device has Bluetooth and Wifi connectivity. You can connect a Bluetooth keyboard, which would let you use typical productivity apps with the HoloLens serving as the screen. I did not have the chance to try this out.
  • Battery life is not bad: about 3 hours with active use. I’ve seen some reviews online reporting up to 5 hours, depending on the type of usage. This is not enough for a full workday, but enough to, say, get you to lunch; you plug in to recharge, and then come back after. Recharging apparently takes a fair amount of time. However, the HoloLens can be used while plugged in if you’re sitting in one place, which could help extend the length of a session.
  • Video capture was smooth and easy, and will capture your first person perspective, including the apps and holograms in your view. Microsoft calls this “mixed reality capture.” The device features a 2.3 megapixel video and still camera.
  • I found the device to be reasonably “glasses friendly.” I wear glasses, and with VR headsets it can sometimes be a pain to get the two layers of lenses between the digital goodness and my eyes to cooperate and align, but this worked well.
  • The device was also reasonably comfortable to wear. It sat well on my head, and was not noticeably heavy. Some VR goggles I find a little front heavy, but the HoloLens is lighter and doesn’t project as much outwards in the front.


The device is not by any means perfect, however. This is a first-generation developer edition aimed at developer experimenters and creative professionals. It’s an R0 product. It’s a lot of the way toward where it needs to be for a widely usable consumer device, but some things could benefit from a few more years of tweaking, and from the ever-falling prices of hardware, to improve the specs and performance. A few shortcomings of the current edition:

  • The vertical field of view is a little narrow. The display is far enough from your eyes that it only takes up part of the field of view, and the edge is visible. If you’re too close to a large holographic object, it will just end at the edge, which breaks the illusion. Perhaps later versions will address this with a screen that fills more of the field of view.
  • Resolution is a little low, at a maximum of 720p. It would be difficult to work with very detailed 3D models. Hopefully future models can bump this up to at least full HD.
  • The device seems somewhat limited in terms of the basic built-in gestures: tap to click on something; tap and hold to drag, resize, or rotate an object; and bloom (open your fingers with your palm facing you) to launch a menu. I would be interested to see if the development kit allows you to build new gestures. I believe the Kinect dev kit has a tool for this. The catch is that if you build new gestures, part of your work would be to seamlessly educate users on them. All in all, though, I found it relatively intuitive to pick up the basic gestures, even without the built-in tutorial.


  • The RAM is somewhat limited – 2GB for the CPU, and 1GB for the “Holographic Processing Unit,” a specialized GPU of sorts for handling 3D graphics processing. The CPU specs are at the level of a smartphone or tablet, so despite the fact that you can connect a keyboard, you wouldn’t be able to run very heavy apps. That’s something that would be good to work on for later editions, because one of the places I could see HoloLens ultimately being useful would be as a replacement for a desktop computer and screen. More RAM would be needed to make that feasible.

Apps available

There are a number of apps available. Some of these I was able to try out, others I was not. The apps available include:

  • Netflix
  • Edge browser
  • Camera
  • Movies & TV
  • Skype
  • Music
  • OneNote
  • OneDrive
  • MS Office
  • HoloStudio
  • Minecraft

Basic Functionality

The HoloLens responds to lateral and vertical movement through space, and to rotation of your head around all three axes.

Windows that are launched initially hover a few meters in front of your eyes, and by default move to stay in your view as you move and turn your head. However, you can also point and click to pin windows and objects in place. There is some smartness to this: the system software identifies flat surfaces, whether horizontal or vertical. If you turn so that an object or window is hovering above a table, and then click to pin, it will lock to the table top. If you hover a window over a wall, and point and click to pin, it will lock to the wall. You can similarly point and click to unpin. Picture something like pinning an app icon to your desktop or the taskbar in Windows or MacOS, but with the desktop now being any planar surface in your surroundings. It’s hard to understand how cool this really is until you try it, spatially arranging your computing within your environment.
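To make the pinning behavior concrete, here is a small Python sketch of the general kind of logic that could be involved: classify a detected surface by its normal vector, then snap a hovering window onto the nearest surface. This is purely my own illustration of the idea, not Microsoft’s actual implementation; the `Plane`, `classify`, and `pin` names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of surface-aware pinning, not the actual HoloLens code.
# A detected plane is classified by its normal vector: a mostly-vertical
# normal means a horizontal surface (table/floor); a mostly-horizontal
# normal means a wall.

@dataclass
class Plane:
    normal: tuple   # unit normal vector (x, y, z); y is "up"
    point: tuple    # any point lying on the plane

def classify(plane: Plane) -> str:
    """Label a plane as 'horizontal' (table/floor) or 'vertical' (wall)."""
    return "horizontal" if abs(plane.normal[1]) > 0.7 else "vertical"

def pin(window_pos, planes):
    """Pin a hovering window to the nearest detected plane, if any."""
    def dist(p: Plane):
        # perpendicular distance from the window to the plane
        d = [w - q for w, q in zip(window_pos, p.point)]
        return abs(sum(a * b for a, b in zip(d, p.normal)))
    if not planes:
        return None, window_pos
    target = min(planes, key=dist)
    # project the window position onto the chosen plane
    d = [w - q for w, q in zip(window_pos, target.point)]
    offset = sum(a * b for a, b in zip(d, target.normal))
    pinned = tuple(w - offset * n for w, n in zip(window_pos, target.normal))
    return classify(target), pinned
```

A window hovering just in front of a wall would snap flush to it, while the same window hovering over a table would lock to the table top instead.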

Even more interesting is that the system can remember the placement of windows and holograms between sessions. For example, you could pin a Netflix window to your living room wall. Voila. Virtual TV. Walk away into another room or to another floor and come back, and it’s still there. Or rather, it remembers that it should be there. And it will still be there the next time you fire up the HoloLens. There are so many possibilities for this.

Cameras within the HoloLens map the environment as you walk and gaze around, creating a 3D model of the surroundings. If you walk around a building or house, it will map all the rooms visited and store a map of the area.

The HoloLens has built-in speakers that allow for an immersive spatial sound experience. Sound sources can have directionality, just as in real life, and sounds attached to holographic objects will rise and fall in volume and adjust in perceived position as you move closer, move farther away, or go around the virtual object. This enhances the immersiveness. And while the speakers manage a good volume for the wearer, they don’t seem to project significant noise to those nearby.
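As a toy illustration of how this kind of spatial sound can work in principle (my own sketch, not the HoloLens audio pipeline, which uses proper head-related transfer function processing), gain can fall off with distance and a simple left/right pan can be derived from the source’s position relative to the listener’s facing direction in the horizontal plane:

```python
import math

# Toy spatial-audio sketch (hypothetical, not the real HoloLens pipeline):
# gain falls off with the inverse of distance, and pan comes from which
# side of the listener the source sits on. Positions are (x, z) in the
# horizontal plane; facing is a unit vector in the same plane.

def spatialize(listener_pos, listener_facing, source_pos):
    """Return (gain, pan) for a source; pan runs -1 (left) .. +1 (right)."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dz)
    gain = 1.0 / max(dist, 1.0)        # inverse-distance falloff, clamped
    if dist == 0:
        return gain, 0.0               # source at the listener: centered
    fx, fz = listener_facing
    rx, rz = fz, -fx                   # the listener's "right" vector
    pan = (dx * rx + dz * rz) / dist   # projection onto the right vector
    return gain, pan
```

Walking toward a holographic source would raise the gain, and circling around it would swing the pan from one ear to the other, which is the effect described above.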

Unfortunately the HoloStudio tool for crafting 3D objects did not seem to be installed on the unit I got to play with, which was a minor disappointment. This is an app I was pretty excited about. Nor did I get a chance to check out the experience of browsing and downloading apps from the store.

Nor, sadly, was the amazingly cool looking Minecraft app installed. (It was the first thing my son asked about when I told him about my hands-on session.)

The Path Forward for HoloLens

All in all, I came out of this brief test of the HoloLens extremely impressed. This is an amazing first draft of this technology. Even with some minor shortcomings, it reinforced my earlier perception that Augmented Reality is going to be a key part of the future of personal, creative, and work computing.

This first release of the HoloLens is a developer version and is not meant for mass adoption. It’s a first step in a planned series of releases. Previously, the intention was to release an incremental v2 in 2018, and a v3 in 2020.

However, recent unconfirmed reports indicate that Microsoft will instead skip v2 and refocus efforts on a big jump with v3. V3 is expected in 2019, and, one would presume, this is where the tech would launch in a more powerful and hopefully cheaper consumer-oriented model.

I for one am looking forward with anticipation. Until then, I can only hope I will have the chance to spend more time exploring the possibilities of this new mode of computing.

Previous Augmented/Virtual Reality Articles on this Blog

Virtual and Augmented Reality and first person media

On the Microsoft HoloLens

Augmented Reality and Wearable Computing: Possibilities for Google Glass in Training

Concept: Maintenance Training 2030

April 25, 2032 …

Ed, a Canadian aircraft maintenance training instructor located in Montreal, gets ready for his upcoming engines maintenance class.

Supposed to be a big class this week, 8 students.

  • 1 from Canada
  • 1 from the US
  • 2 from South America
  • 2 from India, and
  • 2 from China

Ed checks on the equipment in the classroom.

He puts on his Holo-Glasses, which come to life, softly glowing holographic data displays and icons popping up in front of him. The device recognizes him, launching the virtual assistant to greet him. “Hello Ed! How are you? All set for your class?” “Just fine, thanks. Everything set?” “Yes, Ed. All the students are going to be attending; no cancellations. Everything looks good with the students. One was having some technical issues earlier, but I helped him through it.” “Excellent,” thought Ed. “Everything looks alright with 15 mins to go.”

Ed begins cueing up the opening presentation notes, and the multimedia training manual. These pop up in their own windows in Ed’s field of view.

As Ed continues his preparations, the digital assistant relays notifications confirming the status of the students. The assistant is communicating with the students before class so Ed can focus on his preparation. Everything is looking good. Ed checks the 3D cameras and tests out his holopresence projection, seeing what his students will see.

“Loo-king good! Let’s do this!”

A few minutes later, the class begins. Ed welcomes the students as they holopresence in from their remote locations. Ed and the students, their Holo-Glasses on, take their places in the shared virtual classroom. The software places softly glowing holographic representations of the other participants in the shared visual space. Ed looks out at the students’ faces, and the students see a holographic overlay of the same classroom and the same students from their own virtual perspective. At first, the experience is a little eerie, but as the class gets going and all the students introduce themselves, the illusion takes hold and it feels like everyone is in the same classroom.

Ed presents the content, asks questions, and listens to the responses. Master teacher that he is, he observes carefully, gets a sense of the learners’ body language and expressions, and, much like in a real class, adjusts as he goes. Ed brings up holographic 3D animations and models of the engine and components for the class to see. He zooms, rotates, and takes apart the holographic engine parts. The hologram also appears in the students’ fields of view, and Ed invites students here and there to come up, try for themselves, and demonstrate actions to the class. Static images appear on screens in mid air, showing schematics.

In the afternoon portion it is time for the virtual hands-on lab exercises. Ed and the students convene again, once again with beautiful, interactive 3D holographic models of the engine floating in the shared digital overlay. This time however everyone puts on their SureTouch(TM) haptic feedback gloves. The gloves use sensors to read finger and hand position, the headset measures their hand positions in relation to the digital model’s virtual position, and actuators in the gloves give pressure feedback to simulate handling real objects with substance instead of just weightless holograms. It’s kind of weird at first, and it’s not quite the same as the real thing, but close enough for horseshoes and hand grenades, as they say. And definitely a hell of a lot cheaper than taking an actual engine offline to train. As always, it took a few years for the technology to perfect itself and a lot of research and proofs of concept before the regulators really believed it could be as effective as the real thing. The Dutch Aerospace Lab did some great research as always, and once EASA signed off, the other regulators followed pretty swiftly after. Regulators came to appreciate virtual maintenance training, just as they came to appreciate the power of full flight simulators decades before.

The company definitely appreciates it too – they save a small fortune in flights, hotels, taxis, and per diems doing virtual classes like this over the course of the year. As do the students’ companies.

Ed, for one, appreciates it too. No packing, no airport security, no cramped 12-hour flight, no hotel room, no taxis, no jetlag, no traffic. Well … scratch that last one. This is Montreal, after all, where the seasons are winter … and construction. Even in 2032, there’s plenty of traffic. (You can’t win ’em all, I guess.) “Oh well,” thought Ed. “Decent weather today, so at least I could read a book on the way in while the car’s autodrive took care of all the unpleasantness.” And all from the comfort of the Montreal office.

Ed loves it, and his family loves it too – less time away. And besides, even though he felt a little silly to admit it, irrational as it was, Ed had felt a little weirded out by flying ever since they started the rollout of unpiloted commercial flights in the late 2020s. Hundreds of times safer than human pilots or not, it’s still kind of creepy to have algorithms flying you around instead of people.

“Or maybe I’m just getting old,” Ed thought. It gets a little jarring after a while to see the world transform itself before your eyes so quickly. The young seem to take it in stride, unfazed, as they always do. And, Ed had to admit, the toys are pretty cool. All this change has its benefits.

Such is the stuff of life in a world of sci-fi dreams made true.





Concept: A VR “Memory Palace”

Introduction – Blame it on the Black Star …

About a month back, early January, I was driving on the way to work, listening to CBC Radio Montreal, as I often do. On the radio, the announcers were talking about David Bowie – it was the one year anniversary of his death.  They talked a bit about his final album, Black Star, which came out just days before his death, and played a short clip from the lead-off title track Black Star. I had played this album a lot last January, and so, a few blocks up, at the next red light, I dug the CD out of the storage compartment between the seats and popped it into the CD player.  And the beautiful-haunting-sprawling-soaring music flowed out after probably a year since I had heard the song.

And as often happens to me when I hear a piece of music tied to an emotionally poignant moment in my life, like a key turning the tumblers of a lock, a door opened, and sweet memory gushed forth in brilliant flashes. I think everyone gets this from time to time, with songs, with smells – but for me it is an explosive experience. This album was tied to a particularly poignant period of my life last year. A week to the day after Bowie died, I got married – remarried after 5 years going it alone following a nasty divorce. This album was the soundtrack in effect of all my driving during that period, including my wedding and honeymoon. The sweetest flashbacks danced across my memory – my wife’s colorful wedding outfit and the look on her face, the sight of the flurries in Mississauga that day, the smell of the January air, the scent of jasmine blossoms in a garland around my neck, the drive along the Niagara Parkway to the grand old Victorian inn where we honeymooned, and the gorgeous room where we stayed, the quiet streets of Niagara-on-the-Lake in the snow. All of this flooded back – I could actually smell the jasmine! – all of this from a song.

The portals of memory open in such unexpected and wonderful ways. And how strange its ways are sometimes. The knowledge in the pages of a textbook you were trying to learn from yesterday – information you struggled hard to understand and remember – can be lost to oblivion despite all your efforts, while the rich details of a weekend a year before can come back to you effortlessly if the right trigger is applied.

On the nature of memory …

Memory is deeply tied to emotion and place. Places to which you have a strong emotional connection, places which have great significance to you, stick in your memory very vividly. Everyone has places like this: some beautiful natural spot from a wonderful vacation, the location of a meaningful life event. Probably right now you can think of such a place. If you close your eyes, you can probably see the place in your mind’s eye in rich detail. You could even close your eyes and walk around in the place. Places with high emotional significance are given preferential treatment in being committed to memory, and are easier to retrieve. Spatial information in general – information about the layout of places – is easier to remember than arbitrary pieces of information, but emotionally laden spaces are particularly memorable. And it is effortless. Certain portions of our brain are evolutionarily tuned to efficiently store and retrieve spatial data, just as certain portions are tuned to store and retrieve emotionally laden information. Compare this to random verbal or conceptual information we want to make ourselves memorize: how hard it is to push into long-term memory so that it sticks, and how hard it can be to retrieve again.

An ancient technique – The “Memory Palace”

One millennia-old trick for remembering things takes advantage of this ease of remembering places. The trick involves building a “memory palace” and putting the things you want to remember there. In this technique, a person goes through an exercise of visualizing some place – it could be a familiar place – and imagines placing the information to be remembered in distinct spots within this space. Then, when it is time to remember, the person simply closes his eyes and, in his mind’s eye, traverses the space, going from place to place, retrieving the information stored in this spatial filing cabinet.

Modernizing the technique – Virtual Reality

Now, let’s modernize this technique a bit, bringing it into the 21st century with the help of technology.

Imagine building a high-fidelity 3D digital representation of one of these meaningful places, viewable and navigable in an interactive VR (Virtual Reality) environment – a 3D scene with some interactivity laid over top. Imagine being able to place notes or representations of information or objects you want to remember within this virtual space, a space that immerses you in a meaningful place already seared into your memory. These pieces of information then end up arranged spatially within the virtual space in an arrangement of your choosing.

This representation of place and spatially arranged information and items could be saved and accessed and modified later.

This VR tool would provide a nice visual scaffold for the classical memory palace technique, which traditionally depended entirely on the imagination. I would hypothesize that actually being immersed in that space and “seeing” the spatial relationships would support the process significantly. In addition, the physical action of virtually placing the items within the space with hand gestures could also serve to reinforce the memory.
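To sketch what the underlying data model for such a tool might look like (entirely hypothetical; the `MemoryPalace` name and its methods are my own invention), the core is just items anchored to 3D positions within a named scene, with recall as a traversal through those positions, and the whole arrangement serializable so it can be reloaded in a later session:

```python
import json
from dataclasses import dataclass, field

# Hypothetical sketch of a VR memory-palace data model: notes anchored to
# 3D positions in a named scene, visited in a nearest-first walking order,
# and saved to disk between sessions.

@dataclass
class MemoryPalace:
    scene: str                         # name of the 3D scene/environment
    items: list = field(default_factory=list)

    def place(self, position, note):
        """Anchor a piece of information at a 3D position in the scene."""
        self.items.append({"pos": list(position), "note": note})

    def walk(self, start=(0.0, 0.0, 0.0)):
        """Yield notes along a greedy nearest-first path from a start point."""
        remaining = list(self.items)
        here = list(start)
        while remaining:
            def d2(item):
                return sum((a - b) ** 2 for a, b in zip(item["pos"], here))
            nxt = min(remaining, key=d2)
            remaining.remove(nxt)
            here = nxt["pos"]
            yield nxt["note"]

    def save(self, path):
        """Persist the palace so it can be revisited in a later session."""
        with open(path, "w") as f:
            json.dump({"scene": self.scene, "items": self.items}, f)
```

Recall then mirrors the classical technique: you move through the space from the entrance onward, picking up each stored item in spatial order.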


Interestingly, it looks like a company in England is already trying to do something similar to this, with a particular focus on using VR memory palaces to scaffold second language learning.



Coursera trend – no pay, no grades


Anyone who’s been taking courses on Coursera for a while has probably noticed a recent development – the move to charging students for submitting assessments for grading. Apparently, this new “feature” started in early 2016, but it has become a disturbingly common trend lately. For courses that do this, you need to pay for the course to benefit from graded assignments. And, mind you, this is even for courses that use peer-graded and machine-graded assessments. A very informal sampling (probably at least 80% of the courses I’ve looked at recently) indicates that most new courses are adopting this.

This is problematic.


The importance of assessment to learning

Seeing how the assessments work in a course is an important part of evaluating whether to take the course, and whether to pay for it. In many courses, the feedback from graded assessments is the best part of the course. For example, the completely free (funded out of an endowment project) Algorithms I and II courses from Princeton University use a machine grader that gives great feedback when you submit your assignments. This is where you do a lot of your learning.

Going through the assessments gives you a good sense of the quality of the course. When a course gives meaningfully rigorous assignments, you learn more, and being able to try out the assignments gives you more information about whether the course is worth paying for.

I understand that MOOC courses are expensive to develop, with estimates ranging between $30,000 and $100,000 per course. And I understand that Coursera is probably under pressure to generate more revenue to please investors. Given that, you can understand the desire of Coursera, and of institutions, to recoup their costs and investments. But they also need to reflect more carefully on what makes sense to a student.

Previously, as of 2015, the model was this: you could audit the course, including graded assignments, for free. If you just wanted to learn, you could watch videos, do readings, participate in the class discussion forums, do assignments, and get a peer-graded or machine-graded mark. But if you wanted a certificate of completion, you had to pay for it. If you didn’t pay, you could still fully experience the learning, but you wouldn’t have a sharable record of completion. It was a pretty fair model. It wasn’t quite the Utopian vision of free, accessible university-level education for everyone, but it was about the best you could reasonably expect from what is ultimately a for-profit platform.

For those who are newer to Coursera, in the early days it used to be even more open than this – you could get a completion certificate or statement of accomplishment even without paying. There was some minor uproar when free certificates were removed, but that at least I can accept as a fair compromise. “Freemium” is, after all, a pretty common model these days. It’s expensive to develop courses, not to mention host and serve them up to students. Someone has to pay the bills, and $70–100 to take a meaningful course and have proof of completion you can share on LinkedIn is not a bad deal.

However, the key to freemium is that you need to offer a reasonably satisfying and enticing free product that stands on its own feet. You want a large number of people consuming the free product, because the larger that pool is, the more revenue you make from that percentage that opt in to extras like certifications. But getting an assignment graded is not an extra. That’s an essential part of the course. If you turn people away by hiding key parts of the educational experience behind a paywall, you threaten to shrink that pool of people.

I guess someone will probably make the analogy of auditing a university course: you can just pop in, sit in the back row, and watch a lecture, but you’re not going to get assignments graded or sit for an exam. There’s maybe some merit to this analogy. On the other hand, in the university course what you’re really paying for is ultimately the certified university credit (and the degree that a series of credit courses in a specialization ultimately adds up to).

The problem with this move

Philosophical merits aside, there are two flaws I can see with the idea of charging to get assignments graded.

1. Any good course has a “long tail” – it’s expensive to get the course out there initially, but then it can be rolled out more easily month after month. A good course will be attractive to students over a long time, and this will give lots of opportunity to recoup costs. I see lots of good courses and programs like Stanford’s Machine Learning and Rice’s Principles of Programming in Python courses that continue to roll on with plenty of students years after they were first created.

2. Someone is going to be less likely to want to pay if he can’t get a proper representative feel for what the course is like first. Myself, I tend to take courses on Coursera in programming. Lectures are useful, but the hands-on programming assignments and the feedback I get from an auto-grader are the most useful part of any course. Applying the skills you learned from the lecture and getting feedback on what you’re doing right and wrong is a key part of the learning experience.

If a course prevents me from seeing what a course is like in this regard, I tend to drop it. Recently, I was looking into a Johns Hopkins specialization in Data Science. The first course looked interesting, but I felt compelled to drop it, because it wouldn’t let me submit even one assignment without paying for the course first. Why would I spend $80 for a course if I can’t get a proper sense of what it’s like?

And especially so when the course is the first in a specialization. A four week course is not really worth much as learning on its own. If I’d had a chance to fully try out the first course, maybe I would have gone for the specialization, and the organization might have gotten a few hundred dollars. Instead, because they were cheap, they got nothing.

I think Coursera probably believes, based on their own research, that enough people who would otherwise audit for free are going to find assessment valuable enough to pay for it if they can’t get it otherwise. And I agree that there are probably people out there for whom this will work. But I have the feeling there are more people like me who would otherwise have been inclined to pay after the fact, but now will not. I anticipate that many schools will see this play out over many courses, and will have to backtrack.

The good

This whole thing is unfortunate, because other than this, Coursera is making a lot of useful steps forward, and I want to give credit where credit is due. It’s come a long way in some respects in building both a great catalogue of training, as well as a nice, functional learning / learning management platform.

  • Many courses are being offered with great regularity, some even on a monthly basis. There is always a session going for many courses.
  • Course lengths seem to be settling into a nice average – 4-6 weeks seems to be the norm, which is a nice balance between having a meaningful chunk of content and being able to fit the course commitments into the life of someone with a full-time job and family.
  • A lot more specializations are popping up – groupings of courses centered around a common theme or skillset grouping. This is immensely practical, and probably more meaningful as a show of learning than an isolated course. A single 5 week course on data science in Python? Ok, cool, but doesn’t add up to much beyond an introduction. But a 5 course specialization on Data Science in Python over 25 weeks that covers data cleaning, plotting and charting, machine learning, text mining, and social network analysis? There’s some value in that.
  • It is easy to switch sessions and transfer grading progress across sessions if you fall behind. I’ve found this very useful in a few courses where unexpected life business bogged me down and I got behind.
  • There is very good flexibility in terms of paying for courses. In courses where graded assignments are free without the certificate, you can complete the whole course and then later upgrade to a certificate by paying, even months after the fact. This is very nice.
  • Prices, while noticeably rising over time, seem to be finding a sensible equilibrium. A few hundred dollars for 20 weeks of decent video instruction and meaningful assignments is a decent value compared to other training you see on the market.

Larger question – the value proposition of Coursera Certifications

So really, I’m griping about one major misstep in what is, overall, a very good platform for learning. There are a lot of positive things happening.

The bigger question behind all of this is the value proposition for Coursera certificates. What value does a student get by paying for a course on Coursera? It’s still pretty fuzzy. I’ve taken a fair number of courses over the past few years on Coursera. Some on other platforms too, but Coursera has tended to be the one I use the most. Some of those courses I have chosen to pay for, for various reasons. The first few, as a motivation to complete the course amidst work and family. Some others because I thought the courses were great and I wanted to vote with my money – encouraging the people who made them, and encouraging a model of continuing education I’m convinced is the future of career skills development. And finally, some others because I thought they might add something to my LinkedIn page or CV.

It’s this last one that has always been the most fuzzy and uncertain value proposition.

Do I get the sense that recruiters or hiring managers are going to see a LinkedIn profile or CV with Coursera courses on it and suddenly perk up? Not so much. I don’t think it’s realistically there yet in terms of credibility.

Does a sequence of related courses add up to something interesting for the same people? Perhaps to the level of showing a certain discipline and commitment to continuing learning. Beyond that, hard to say. HR is often stubbornly conservative.

Given this reality, I think Coursera needs to do more work to justify the value of what students are paying for, rather than focusing on paring back the free experience of their courses. Ultimately, that is the existential question upon which their ability to collect revenue and support the whole enterprise long term depends: how can Coursera ensure that their certifications and specializations have value in the marketplace?

Virtual and Augmented Reality and first person media

Virtual reality (VR) and augmented reality (AR) offer a possibility that hasn’t been so present in media – a truly first person perspective.

First person perspective, where you see things through the eyes of another person or fictional character, has long been a regular form of narrative for novels. It has also been used in electronic media in some cases, within the limits of display technologies. In electronic gaming, the first person shooter genre, going back to Doom, displays a flat representation of the player character’s perspective. In film, there are a few interesting examples where large parts of a film are shot to tell the story through the eyes of the main character; Enter the Void and The Diving Bell and the Butterfly are good examples of this.

But the newer technologies of VR and AR take things another level beyond.

Just quickly for those who may be unfamiliar, VR and AR are two related, but subtly distinct forms of immersive media. In VR, through wearable goggles and other peripherals, a person is immersed in a separate, alternate virtual space. In AR, the user remains within the visual space of normal reality, but augmentations are superimposed in the field of view over the objects of that space. The first is an escape to a different reality, while the second is a blending of this reality with digitally displayed virtual objects. A good example of VR is the Oculus Rift, while an intriguing example of AR is Microsoft’s HoloLens.


In both, the display is typically wrapped around your field of view. It takes up your whole visual space, or almost. While watching a film or playing a typical video game, you sit in your normal space and see images on a screen within part of your field of view. VR and AR can go beyond this. You’re not watching a presentation or watching a story from some distance. You’re there in the midst of it. You live it, almost. And this makes a big difference in terms of realism and immersion.

It’s no longer so much imagining being in the story, but experiencing it. You become a first person spectator, or participant, in interactive media. You are the character. You fully enter the world of the media.

Imagine the possibilities for:

  • advertising – This could have a huge impact. With these sorts of technologies, there is no need to make someone imagine the experience of a product. They can just have the experience first hand.
  • creative storytelling in film – For example, see the recent announcements about Oculus VR and short films.
  • creative storytelling and gameplay in games. You already have the first person shooter (FPS) genre. I recently started the game Destiny on my Xbox. You’re just immersed in these majestic landscapes. How much more engaging and immersive if the screen is wrapped around your face and you’re in the middle of it? 
  • creative storytelling in interactive media 
  • teaching and training – to put students within a scientific simulation or visualization, to virtually repair a 3d model of a piece of equipment, to virtually visit a historical site or a recreation of a historic event in an immersive manner

Overall, there are possibilities here for a whole new type of media. It will be interesting to see what creative people can do with it.

On the Microsoft HoloLens

Microsoft’s big surprise

I want to talk about a big tech story from last week, the surprise unveiling of the Microsoft HoloLens Augmented Reality headset. This new product was revealed in the course of the launch of the Windows 10 Technical Preview. Probably, some of those reading this have seen the promo video:


This is pretty amazing, and completely out of nowhere. There was no advance sign that Microsoft was announcing this. The project, from the same inventor as the Kinect sensor, was a complete secret until the big reveal this past week. This pleasant surprise has had the tech world abuzz ever since.

Now, a word of caution. Yes, this video is a Microsoft promo, with an artistic representation of how it’s supposed to work. This is a product that is still, as far as anyone knows, in pre-production prototype stage. There is no release date; it may not even be out this year.

But still. From what I’ve seen from other sources, this is the most impressive and mindblowing thing to come along in a long time. This is iPhone- and iPad-level big, one of the next steps in the evolution of the computer.

Why this is a very big deal

This sounds like hyperbole, and certainly, Microsoft has had products before that looked great in an early demo but didn’t pan out. But generally I have a pretty good eye for this sort of thing.

In 2008 I was about the only person in my office with a smart phone – my first cell phone. I didn’t buy an iPhone, but it was the reveal of the iPhone 3G that showed me the technology had arrived. I had stayed out of cell phones to that point because I was waiting for the technology to reach the point where it could become an indispensable all-in-one computer tool that fits in your pocket, connects you to the internet, and lets you do everything – take notes, read and write email, read web articles, chat online, play games, do light office tasks, etc.

A few years later, in 2010, Apple released the iPad, the first one. I understood immediately that this was going to be huge. So for the first and only time, I waited in line on launch day at the Apple store. That first iPad had a lot of limitations (no camera, no multi-tasking, no folders, kind of chunky – and you couldn’t even copy-paste!), but I loved it for its obvious potential.

I’ve been on Netflix since the initial beta in Canada. That was a tough sell back then. The selection was pretty lousy in Canada at that point, though you could see it grow month by month. Now something like one in six Canadian residents has a Netflix account.

This new product from Microsoft feels like the same sort of up and coming thing. This is science fiction type stuff that I previously would have considered maybe possible within the coming decade. But this looks like it will soon be available. The world moves so fast these days.

This product fits in amongst the ongoing paradigm shift toward Natural User Interfaces (NUI) that I’ve spoken about previously. The world of computing went through Command Line Interfaces (CLI), then Graphical User Interfaces (GUI), and now, Natural User Interfaces (NUI). In NUIs, a user interacts with the system more like they interact with people and objects in the real world. Touch based computing is part of this. Speech recognition and commands are another part. Virtual assistants are another. Gesture based computing like the Kinect is another. HoloLens fits in this same trend.

Think about the media we experience through computers. We, in a 3 dimensional world, look from a distance at some other 3 dimensional space, through this two dimensional plane, the screen. It’s realistic, it’s lifelike, but it is always separate, a wistful yearning gaze at something you can see but never reach. This new technology in a sense collapses the screen. These two 3d worlds, the barrier collapsed, flow in amongst each other, and coexist in the same physical space. The user lives within and interacts with both. The user experiences a digital object or sprite within his own world. And the user can create his own holographic worlds, and share them with others, who also experience it with the same immersion and intimacy.


Not just a slick concept video

The most amazing thing is that the demo ideas shown in the promo video are apparently not just artists’ conceptions. That is what I initially assumed on watching it: “This is a sense of where we’ll be by launch.” But actually, no. These are examples of real software that has already been developed. The Holo Studio tool from the video, where the father builds a spaceship and then sort of just exports it to a 3D printer – that 3D workbench tool the father was using is not an artist’s representation. That’s currently existing software. Amazing. Can you imagine?


The Holo Studio tool actually looks like a wonderful potential tool for (holo)graphic artists developing future media assets. And that’s just an early design tool. Imagine what a multimedia design powerhouse like Adobe could cook up (if they’re not already busy in the kitchen).

This could be a great tool for developing holographic sprites for games or holographic multimedia for learning and presentation materials. It’s a fantastic kickstart for helping to support a development community. This means that you could develop media for the environment from within the environment. That’s the beauty of this technology that fuses the space of the interface with the space of the user environment.

The other stuff is apparently real too – the Netflix on the wall, the Skype window in mid air, the Minecraft and gaming within the space of your living room. Using holograms to manoeuvre a Windows interface is also already here.

This article also gives some more insider perspective, and corroborates that there is something real there.



Gaming Possibilities

The gaming application examples remind me of the alien and robot game scene from “Her” (please excuse some of the language in the clip):


Imagine playing a game in your living room. The video of the person playing Minecraft in his living room looked great, as did the video of an alien run and jump game.

           A 3D character running around a living room using HoloLens.

Imagine playing a game with others connected to the same LAN, say in an environment like an urban paintball course – for example, some sort of first person shooter game.

I look forward to what kind of gaming applications Microsoft can get going, maybe by engaging people developing for Xbox. Apparently apps developed as universal Windows apps will run naturally on HoloLens.

Training possibilities

Imagine the training possibilities. The same sort of facility as an urban paintball course could function for group military battle training. Some walls and corridors would serve as a scaffold for a virtual environment, which the software could fill with holographic enemies.

Imagine the possibilities for simulation and scenario based training. Talk about immersive and high fidelity.

Imagine the possibilities of interactive 3D media on a range of subjects. Science for visualizations. History for being able to be immersed in historical locations. Math to visualize complex structures and graphs. Or computer science, to visualize the flow of an algorithm or the relationships between object classes in a computer program.

The Skype video communications tool on HoloLens could allow live maintenance coaching, as shown in the video. Imagine a maintenance training course where the learners’ organization could pay for optional post-course support. Give the customer organizations some of the devices, and then when there is an issue with the equipment they can’t solve, they can put on the headset and make a Skype call, and an available instructor can help walk them through it. The instructor would be on Skype, watching the live stream from the maintainer’s HoloLens glasses on an iPad or other tablet. Or consider distance learning and tele-instruction. Imagine a maintenance course given completely remotely using the HoloLens, or a teacher teaching a class live, with students also tuning in via HoloLens.

I talked about these and some other ideas when I spoke before about possibilities for training using augmented reality on the Google Glass. This new product comes amid a wave of other Virtual Reality and Augmented Reality products, including Facebook’s acquisition Oculus VR, Samsung’s Gear VR goggles, and the Google-backed Magic Leap project.

Or imagine if, beyond having a Skype conversation with someone on a flat window in space, other people in remote locations could be brought into this world as realistic, high fidelity holographic representations in real time – kind of like Princess Leia calling out for Obi-Wan in Star Wars Episode IV. Maybe that’s beyond where the technology will be soon, but that would be bona fide tele-conferencing and tele-collaboration.

Further reading







Instructional Design Lessons from the designers of Stephen Hawking’s text input system

A recent article in Wired magazine told the story of the team of Intel engineers who developed ALS-afflicted physicist Stephen Hawking’s current speech input system. I highly recommend the story, at the link below.


As an Instructional Designer, I loved this article, because it talks about the iterative design and development process of a sophisticated support tool. It has so many great lessons about engineering and design, about needs analysis, about iterative design, interface design, and about prototyping and testing. These lessons apply to design in general as well as to the specific case of Instructional Design.

Let’s take a closer look at a few of these lessons.

The power of good tools

It shows the power of well-crafted support tools as a solution to performance challenges. Educational Technology students will remember that one of the core messages from Human Performance Technology is that our real business is not so much teaching in itself, but improving human performance – improving the ability of people to reach their goals in various efforts. Sometimes training is the route to this improvement, but sometimes better tools are the answer. A well crafted support tool fits us like a glove; it becomes an extension of us and extends our abilities.

Change can be jarring

It illustrates the challenge of instituting change in human-machine systems, particularly with older users. On paper a new tool may be much more efficient. And maybe it would be, with a blank slate. But users bring their attachments to long standing learned patterns of doing things. There is an inertia here. Learning a new way of doing something can involve unlearning the old way. This highlights one of the challenging constraints in design: keeping continuity with what came before to avoid overwhelming the user. The tool is supposed to make life easier; be cautious of how big an up-front hill you put between the user and the point where the user starts to see the benefit.

Know your users

It illustrates the importance of taking time to understand your user, where their challenges are, and how to solve those problems. A bit earlier I mentioned the power of tools that are crafted to the user. But to design such tools, you really need to take the time to know the needs of the user.

Iterative design and rapid prototyping

It illustrates the benefit of an iterative design process: designing, building, and testing multiple prototypes. While in the textbooks Instructional Design is presented as iterative, in practice a lot of training organizations treat it as linear. You analyze, then you make the whole design, then you build it and implement it. But so often you don’t really understand the needs of the user until you build something and let the user try it. And often, even though the user signs off on a design, they don’t really understand what it means either until they see it live. That’s why iterative design with rapid prototyping can be so helpful. More back and forth in the design and development phases can save a lot of headaches later. Build something preliminary, test it with the user, then make corrections. Figure out that something is not working in the prototype stage, not after investing hundreds of hours into development.


Finally, it highlights the importance of regular, good communication with the client about design vision and plans. Of listening carefully about what worked and what didn’t in the most recent prototype. Of hearing what they are saying and what is unsaid, between the lines.

