On Driver Education

When we're driving around, I often like to remind my eldest son that by the time he turns 16, in the summer of 2021, driving will probably be, or be in the process of becoming, an obsolete skill (in the city and on the major highways, at least). Driverless cars are at the workable, roadworthy prototype stage, and an array of performance support tools is being rolled out in newer cars. These tools guide drivers, augment their awareness of their surroundings, and improve safety. One of the posts I currently have in draft is on the parallels between aviation and driving in terms of increasing automation and performance support. Experts tend to predict that driverless cars will become a common reality in the early 2020s.

In the meantime, people will drive, and young people will need to learn to drive. But is it being done well, or as well as it could be? Safety equipment in cars and improvements in road engineering have reduced overall accidents and fatalities, but driving remains a relatively dangerous mode of transportation.

Young people entering the world of driving at age 16 (and new drivers in general) need solid training to stay safe. When I turned 16 I went through the Young Drivers of Canada program. The basic program is similar to high school driver's education, but with more emphasis on the practical skills of situational awareness, threat monitoring, and collision avoidance. The training consisted of a classroom component about one week in length and a number of in-car driving lessons. The classroom time was for teaching theory, and consisted largely of lecture, some discussion, and instructional videos. The road lessons allowed practice of skills with grading and feedback from a trained instructor.

I went through this training close to 20 years ago now. The intervening period has seen a plethora of new information and sensor technologies arrive on the market. Oddly, the domain of driver education does not seem to have evolved appreciably in approach. It still seems to be classroom plus supervised road lessons.

Looking at this through the eyes of an instructional designer, I see some gaps in this training approach that, if filled, could lead to more effective learning, improved safety in early experiences on the road, and better transfer and retention of skills after the initial training is completed.

The classroom training and road practice are a good foundation, especially within a framework of defensive driving that includes threat monitoring, preventive measures, and emergency maneuvers / collision avoidance. These are necessary components of an effective training system, though I'd probably advocate exploring some form of blended approach for the theory materials. Delivering some material as online video or eLearning/mLearning content would allow more flexibility for learners, particularly adult learners. But overall, the theory and hands-on supervised practice are good components of a foundation.

That said, I see two places where additional training could fill a gap and potentially improve performance.

These gaps are:

  • The need for some intermediate training to bridge the space between classroom (or online) theory and practice on actual roads. There is a wide chasm between the cognitive activity of learning theory material in a classroom and the complex psychomotor skill of driving. Something in between would soften the transition.
  • After the course is over, there is a need for some level of electronic performance support while the young driver is out on their own. This is needed to scaffold the learner in applying the skills safely while they are still new, until the skills become internalized and automatic.

With regard to the first gap, let's take the example of aviation. When an airline pilot does type training on a new aircraft, the pilot does not go directly from the classroom to flying the actual plane. First, the pilot spends time training scenarios on a simulator. Simulation-based training offers a lot of benefits. It is completely safe, but feels psychologically real when fidelity (the sense of realism of the simulation) is high. Fidelity has several dimensions: (1) sensory perception, the look and feel; (2) the process of operating the simulated equipment versus the real thing; and (3) the dynamics of the simulation, the relation between action and results. It can also be less expensive to operate a simulator than the actual equipment, at least once the initial investment in the simulator is made. And you can control the scenario conditions, allowing focused training and avoiding the problem of practical training experiences being ad hoc, dependent on the random conditions of the day.
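To make the idea of controllable scenario conditions concrete, here is a rough sketch in Python of how simulator lessons might be parameterized for focused practice. Every name, field, and value here is invented for illustration; it is not drawn from any real simulator product.

```python
# Hypothetical scenario definitions for focused simulator practice.
# All field names and values are invented for illustration.
SCENARIOS = [
    {"name": "night_highway_merge", "weather": "clear", "time": "night",
     "traffic_density": "heavy", "focus_skill": "mirror and blind-spot checks"},
    {"name": "wet_city_braking", "weather": "rain", "time": "day",
     "traffic_density": "light", "focus_skill": "stopping distance"},
]

def pick_scenario(focus_skill: str) -> dict:
    """Select the first scenario that drills a given skill."""
    for scenario in SCENARIOS:
        if scenario["focus_skill"] == focus_skill:
            return scenario
    raise KeyError(focus_skill)
```

The point of a structure like this is that an instructor could deliberately queue up rain, darkness, or heavy traffic, rather than waiting for those conditions to happen to occur during a road lesson.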

This first gap, then, could be addressed through some sort of electronic driving simulator, similar to how pilots train on flight simulators. I understand that some driver training programs already use these, but the practice needs to be more universal. At one end of the spectrum could be a piece of fixed equipment similar to a flight simulator or fixed-base trainer, with a mockup of a real driver's seat, dashboard, displays and gauges, and steering wheel, combined with screens to simulate out-the-window views and speakers for sound effects. At the other end might be something like a realistic, game-like driving simulation on an Xbox with a Kinect sensor and the learner's hands as the controller.

Performance support, on the other hand, could take the form of an apparatus with cameras, sensors, and a built-in computer set up inside an actual car. It could collect data about speed, traffic conditions, weather, the local speed limit, braking and swerving, and signalling. It could track eye movements: out the windshield, toward both near and far objects; toward the rear and side mirrors; and toward the blind spot. It could also monitor hand positions on the wheel (or stick, as relevant). The computer could collect data for later analysis (syncing to a mobile app, for example).
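As a rough illustration of what such a device might log for later analysis, here is a minimal sketch in Python. The record structure and field names are my own invention for the sake of the example, not based on any real product.

```python
from dataclasses import dataclass
import json


@dataclass
class DrivingSample:
    """One snapshot of driving telemetry (field names are illustrative)."""
    timestamp: float
    speed_kmh: float
    speed_limit_kmh: float
    braking: bool
    signalling: bool
    mirror_checks: int  # eye-tracking: glances at mirrors this interval


class TripLog:
    """Accumulates samples for later sync to a companion mobile app."""

    def __init__(self):
        self.samples = []

    def record(self, sample: DrivingSample):
        self.samples.append(sample)

    def to_json(self) -> str:
        # Serialized payload the device could upload for later review.
        return json.dumps([vars(s) for s in self.samples])


log = TripLog()
log.record(DrivingSample(0.0, 52.0, 50.0, False, True, 2))
```

A log like this, synced to a phone, would let a new driver (or a parent or instructor) review a trip afterwards: where they sped, how often they checked their mirrors, whether they signalled.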

Ideally, it would also give spoken-word support cues, prompts, and feedback in response to conditions and what the novice driver is doing:

“Remember to check your mirrors.” “Look ahead to spot upcoming problems.” “Remember to check your blind spot.” “Brake!” “Try not to ride the clutch.” “A little more gas.” “Slow down a bit for the weather and traffic conditions.” “Accident ahead. Caution.”
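At its simplest, cue logic like this could be a set of threshold rules mapping the current driving state to prompts. The sketch below is hedged heavily: the state keys and thresholds are invented for illustration, and a real system would need far more nuance.

```python
def coaching_cues(state: dict) -> list[str]:
    """Map the current driving state to spoken prompts.

    State keys and thresholds are illustrative, not from any real product.
    """
    cues = []
    # No mirror glance in a while: prompt a scan.
    if state.get("seconds_since_mirror_check", 0) > 10:
        cues.append("Remember to check your mirrors")
    # Over the posted limit: prompt to ease off.
    if state.get("speed_kmh", 0) > state.get("speed_limit_kmh", 999):
        cues.append("Slow down a bit")
    # Lane change begun without a signal: prompt the blind-spot check.
    if state.get("lane_change_signalled") is False:
        cues.append("Remember to check your blind spot")
    # Obstacle dangerously close: urgent cue.
    if state.get("obstacle_distance_m", 1e9) < 10:
        cues.append("Brake!")
    return cues
```

Each rule fires independently, so the device could speak several prompts in sequence, with the most urgent (braking) presumably taking priority in a real design.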

Ideally, this form of performance support could be built into future cars as a “training / support” mode. This mode could be engaged at the push of a button on the dashboard as part of the support/automation systems that are increasingly built into modern cars. For now, though, it would have to be developed as a third party device.

The television as a learning and training space

Introduction

Recent years have seen the world of training embracing learning on mobile devices, or mLearning for short. There are many reasons for this:

  1. Client demand, as people increasingly browse the internet primarily through mobile devices
  2. Clients always having their phones with them, allowing many small moments during the day when learning could potentially take place
  3. Phones having many sensors and input methods, allowing for innovative interactions
  4. Phones allowing multiple communication methods

Designers and developers have been working on designs using mobile learning. At its most basic, this has taken the form of using file formats so that videos or presentations will play on a tablet such as an iPad, or making the training an iPad app or playable within some container app.

Others, approaching the matter with more seriousness, have gotten more creative and tailored training to the unique affordances of smartphones and tablets. They make learning games and activities that use the sensors as inputs for motion- or touch-based interactions, or that use location information. Others draw on ideas of informal learning and performance support to break training into small, focused pieces that can be accessed in a spare moment.

eLearning authoring tool providers advertise their tools as enabling responsive eLearning. They hype the promise of being able to publish content to multiple media and device types, for desktop, tablet, and mobile.

This is good for learning and training. However, in this focus on mobile, we may be losing sight of possibly the next key development of web-based learning and training – the television as a learning and training space.

Television as a new window to Internet content and learning

Sitting on a couch with a tablet is a nice way to watch a video or presentation. The device is light and comfortable. But it's still a 10-inch screen. The small size is a compromise, trading visibility and screen real estate for portability.

But across from the couch is what? The TV. Big screen – 30, 40, 50, 60 inches. 1080p HD, easy to see, nice to watch, decent speakers. And you don’t have to hold anything.

Television used to be a box on which we watched traditional television programs, whether delivered over the air or through cable or satellite broadcast. Then came VHS players, DVD, Blu-ray, and video game consoles. The living room TV instead became the screen at the center of a home entertainment system.

Now, increasingly, televisions are becoming just another of the screens, albeit much bigger ones, through which to access internet content, whether for entertainment, work, or learning. This takes the form of video, audio, text, and apps. The long-promised fusing of internet and television has arrived, with several different options available to make this possible.

Many TVs are now “smart TVs,” combining a TV with a computer. These TVs are WiFi-enabled, with built-in interfaces and platforms with app capability. Apps allow straightforward connectivity to content sources like YouTube, Netflix, digital music streaming services, and other streaming media.

Modern TV screens also allow for stereoscopic 3D. While no longer a faddish selling point, most newer TVs can display stereoscopic 3D content, whether accessed over the web or on 3D Blu-rays. This sets TVs apart from the other screens through which we consume content: they are the one dependable 3D display that people commonly own.

TVs can also be connected to gaming systems like the PS4 and Xbox One, the latter of which includes the Kinect motion and voice sensor. These systems, while meant primarily for gaming, are also intended more generally for home entertainment, with app platforms and apps like Netflix and YouTube for viewing internet video content.

Set-top boxes like Apple TV, along with many WiFi-enabled Blu-ray players, offer a similar bridge between the television and the internet.

Tablets, phones, and laptops can also share their screens wirelessly to TVs, either through devices like Apple TV and game systems, or via Miracast / WiDi from enabled devices.

It is easy to get content on the TV. In addition, either the TV will be set up with sensors, whether built in or via something like an Xbox, or the person will be screen sharing from a device that has sensors and a touch-based control interface. Either way, it becomes easier to browse, select, and interact with online content shown on the TV.

Designers, both web designers and instructional designers, need to think about training and learning possibilities in this space, just as they should be thinking about that OTHER class of displays that will be more and more in people's lives: wearables and augmented/virtual reality tech such as Google Glass and Oculus Rift. (More on this in a future post.)

Challenges

There are a few challenges in this area:

Platforms

One main challenge is that there are so many different sorts of configurations and ways to connect the internet to the TV:

  • Game consoles such as the Xbox One or Sony PS4
  • Smart TVs
  • Set-top boxes like Apple TV, WiFi Blu-ray players, or Chromecast
  • A computer connected to the TV to share the screen via HDMI cable
  • Wireless screencasting from a laptop, tablet, or smartphone to the TV, whether through Apple AirPlay or the up-and-coming wireless screencasting standards WiDi (Intel Wireless Display) and Miracast

This makes things difficult for developers, as there is no one clear dominant target for development.

The gaming consoles, which have positioned themselves not only as gaming platforms but as home entertainment hubs, may be one promising avenue, as the multi-billion-dollar gaming industry already attracts lots of skilled developers to these platforms. Microsoft's Xbox One in particular runs an operating system related to Windows and uses the same development toolkit. These consoles also offer innovative ways to interact with content on the TV through different types of controller devices, including body movement and voice-based controls. The gaming console option offers interesting possibilities for innovative learning interactions.

A more straightforward, elegant solution may be smart TVs, where everything is in one box. This would especially be the case if future televisions included sensors that could be turned on for Kinect-like interaction, with cameras and microphones. One challenge, however, is attracting developers to different platforms from different manufacturers. Possibly only a company like Samsung, which manufactures phones, tablets, computers, and TVs, would be in a strong position to carry over advances in interfaces and interactivity from other devices to TVs. Or someone like Apple.

The other challenge would be emotional reactions from consumers. When early press about the Xbox One suggested that the system would require the Kinect sensor, which includes stereo cameras and microphones, to always be on, even when the system was not in use, there was a strong privacy backlash.

It is possible that TVs will evolve in coming years into a sort of all-in-one computer, with a web connection, innovative web browsing methods (the concept of adaptive web design will also have to evolve to cater to the TV as a screen), app platforms, and built-in SSD storage, possibly supplemented by cloud storage.

Quite possibly the next stage of the Apple OS / Android / Windows / Linux battles will be fought on the battlefield of internet-connected TVs. Ubuntu, for example (a variant of the Linux operating system), has been positioning itself as a flexible multiplatform OS, including for TV, for some time.

Wireless screen sharing may be the simplest approach, making the smartphone, tablet, or PC the central point of control of what appears on the TV screen. Desktop and laptop computers would have limits, though, in terms of enabling learning interactions.

Tablets and smartphones could potentially allow for interesting learning interactions through their accelerometer, gyroscope, and touch sensors.

The scene will probably be messy for a few years, with so many options that developers will find it hard to pick one. That will make it hard to form the development communities that could drive things forward rapidly.

Interface and Interactivity

The possibilities for learning and training will depend somewhat on the options available for interactivity. One of the challenges in making the TV a hub for learning content is how the user can control and navigate content while sitting or standing across the room. Good eLearning and online training especially require rich interactions.

But how do you interface with the TV? With a computer, you sit right there and control it via mouse and keyboard, and to a lesser extent, microphone and camera. With a tablet or smartphone, you tap it, swipe it, turn it, talk to it, because again, you're up close to it and it fits in your hands.

TV is different. You sit back from it, or stand back from it. You’re not going to stand at your TV tapping the screen like those big maps on CNN election night.

There are probably four major options:

  1. Some modification of a traditional TV remote, possibly one with a touchscreen and accelerometer/gyroscope sensors
  2. Some camera and microphone based sensor like the MS Kinect that lets you control via voice and body gesture
  3. Controlling through a laptop computer, tablet, or smartphone, which shares the screen wirelessly via WiDi, Miracast, or Apple Airplay and lets you control things via touchscreen and motion sensors. The TV simply becomes a screen to mirror content on the other device.
  4. A smartphone or tablet is paired with the TV via an app, and serves as a WiFi-connected touch- and motion-based controller.

All of these could probably be made to work, though options 2 and 4 are probably the most plausible going forward, both in terms of usability and in terms of building on existing platforms.
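As a sketch of option 4, here is how a TV-side app might translate incoming touch and motion events from a paired phone into navigation actions. The event format and action names are entirely hypothetical; no real platform SDK is being described.

```python
# Hypothetical event format a paired phone app might send over WiFi.
# None of these names come from a real platform or SDK.
def handle_control_event(event: dict) -> str:
    """Translate a touch/motion event from the phone into a TV action."""
    kind = event.get("type")
    if kind == "swipe":
        # Directional swipes move focus around the TV interface.
        return {"left": "prev_item", "right": "next_item",
                "up": "scroll_up", "down": "scroll_down"}.get(
                    event.get("direction"), "ignore")
    if kind == "tap":
        return "select"
    if kind == "shake":  # accelerometer gesture from the phone
        return "go_home"
    return "ignore"
```

The appeal of this arrangement is that the phone supplies the rich touch and motion input that a TV lacks, while the TV supplies the big, shared screen.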

 

Learning and Training Possibilities

The matter then becomes how to harness this emerging new portal to the internet for learning and training.

A few possibilities come to mind.

  • Any passive consumption of video content, particularly content in HD or stereoscopic 3D format: YouTube content, for example. A TV would be the most natural and comfortable way to watch. Everything becomes bigger and more lifelike.
  • Educational gaming activities using a gaming controller
  • Web content browsing with voice and gesture inputs enabled by something like the Kinect. Say, for example, a view of different documents or different levels of detail, making use of different focal planes in a 3D field of view. This allows information and screen elements to be arranged not just horizontally and vertically, but by depth as well.
  • Interaction with stereoscopic 3D models using the Kinect sensor, such as chemical structures, architectural structures, geographic models of an area, or components of equipment.
  • Live, synchronous, life-like teleconferencing via TVs and Kinect sensors, using Skype or something like it embedded in a virtual classroom application. A virtual classroom would work very well on an HD television with a connected camera and microphone: for live, face-to-face communication, conversational practice in language learning, or a live virtual tutoring session.
  • Using the Kinect, the learner practices some psychomotor skill while the Kinect camera lets a remote instructor watch the performance and comment. The Kinect could also capture data to assist in analyzing biomechanics.

These are a few sample ideas. Maybe readers can think of others.

Conclusion

The past six years have seen dramatic changes as mobile devices have come into the mainstream as a new space for online learning, with unique affordances for interactivity. The mobile web and mLearning have expanded our horizons for entertainment and learning. The television, connected to the internet, offers a new field on which we can ply our craft as designers and developers. It's a developing field with a lot of options that will take some time to sort out and settle down. But for those of us tasked with helping our clients and students learn and develop, it's a field we would do well not to ignore.


 

Once again, feel free to share your comments, either below, or via social media.