Principles for Effective mLearning

Introduction

The rise of mobile computing and communication devices like smartphones has dramatically changed our way of life: how we navigate our environment, how we search for information, how we read, how we consume digital multimedia, how we browse the web, how we take photographs, and how we communicate with others.

This change has come to include learning and training as well. Mobile Learning and Mobile eLearning (or mLearning) have become popular buzzwords in training and learning circles in the past few years.

What we have to remember however, as designers, is that mobile devices are not just “another screen” on which to view content. Effective mLearning is not just a matter of shrinking your screens down and putting your existing content on a smartphone or tablet.

Mobile devices have certain characteristics that enable particular uses within the context of education. They have strengths and weaknesses, things they do well and things they don’t. The key is to recognize the strengths of mobile devices and how they are most naturally used, and then to deliver through them only the content and the types of activities that suit those strengths.

The purpose of this post is to look at some of the characteristics of mobile devices and at some useful principles for designing learning materials and learning strategies for mobile.

Mobile device capabilities

Mobile devices have a number of capabilities that can be drawn upon for learning and training purposes.

Network Connectivity

Mobile devices can be connected to the Internet over both cellular and Wifi connections. Connection speeds vary with the technology; the figures below are rough theoretical maximums, and real-world speeds are usually lower:

  • 3G (up to 7 Mbps)
  • HSPA+ (up to 21 Mbps)
  • LTE (up to 100 Mbps)
  • Wifi (5-20 Mbps)

Interaction

Interactions with content

Mobile devices offer many ways to interact with content: tap, double tap, tap and drag, swipe, pinch to zoom, two-finger rotate, accelerometer-based tilting/rotating of the device, and shaking of the device.

Interactions with other learners

Interactions with other learners can be both synchronous and asynchronous, for communication and collaboration:

  • Synchronous – phone calls, SMS, VOIP (e.g. Skype), video conferencing (e.g. FaceTime, Skype video calling), IM
  • Asynchronous – email, SMS/text, discussion forums, YouTube uploads and comments, wikis, cloud document storage and editing, photo sharing (Instagram/Pinterest), link sharing and discussion (Reddit), and social media sharing and discussion (Facebook, Google+, Twitter, Tumblr).

Content consumption methods

Mobile devices include a number of ways, either built in or through third party apps, to consume content:

  • Text content – web browser, ebooks/reading apps (Adobe PDF, Kindle)
  • Audio content – podcasting, music / audio players, Soundcloud
  • Video content – YouTube viewer, built in video viewers and 3rd party viewers like VLC
  • Mapping for location related content

Content capture, editing, production

Mobile devices include a number of tools to help learners / users capture, edit, and produce their own content:

  • Photo capture and editing
  • Video capture and editing, mobile movie making
  • Audio capture and editing
  • Text capture – Word processor, Notes apps
  • Drawing / sketching – sketching apps, stylus input, notes apps

Search and Navigation

Mobile devices include a variety of capabilities for search:

  • Web search
  • On device search
  • Navigation, mapping
  • Location-based search – what’s around here?
  • Text search and voice search

Organization, planning, and tracking

Mobile devices also have a number of tools to help users get organized and keep track of dates, times, and tasks:

  • Calendars, including shared calendars
  • Notifications
  • Reminders and checklists
  • Alarms
  • Note-taking

Strengths of mobile devices

Mobile devices have particular strengths as devices to be used for education and training:

  • Portability / mobility – small size, light weight, easy to hold and carry. Learners can have these devices with them at all times, which makes them an excellent potential platform for learning and training.
  • Connectivity – devices can connect to other devices and to networks over micro USB, Bluetooth, Wifi, WiDi, cellular, and NFC, and can receive GPS signals. This gives multiple modes for communicating and collaborating with others. High-speed cellular internet means the learner almost always has access to content; the learner does not need to be chained to a desk or a classroom to reach learning materials, whether formal or informal.
  • Location, position, orientation, and context awareness – sensors such as GPS, the accelerometer, gyroscope, and barometer report the device’s position and orientation in space, its motion through space, and data about the surrounding environment such as pressure, temperature, and light levels. Combined with connectivity, this enables the device to “know” where you are and what else is in the area.
  • User personalization – mobile devices hold a lot of on-device data about the user, connect to cloud sources of data about the learner, and can “learn” the user’s patterns over time.

Limitations of Mobile Devices

Keep in mind, however, some of the practical constraints and limitations of mobile devices:

  • Small screen sizes
  • Battery / power limitations – this will vary from device to device and will depend on features that use a lot of power such as use of the screen, networking, and processor intensive operations
  • Processor limitations – this is getting better in recent years with high powered multi-core processors, but mobile devices still tend to be less powerful than modern laptops and desktops
  • Multi-tasking is somewhat limited compared to desktop operating systems
  • Limited on-board storage. Some devices have microSD ports to take a memory card, but this is far from universal. Cloud storage services like Dropbox, Google Drive, and SkyDrive can somewhat compensate for this.
  • Data plan limitations – when the user is not connected to Wifi, internet connectivity is through the cellular data connection. Cellular data usage is often limited to a certain maximum amount per month, with heavy overage fees.

Principles and Tips for mLearning Design

The capabilities and limitations of mobile devices lead to a number of principles to guide mLearning design.

Use a Primarily Touch-Based Design for Navigation and Interaction

People are used to interacting with and navigating through mobile apps with taps, double taps, swipes, and gestures. As a designer, you should try to build this into any mLearning. Also adjust the language of your instructions accordingly: tell learners to “tap” or “press” instead of “click.”

Design for touch and gestures. Touch shouldn’t just be an afterthought; it should be the primary mode of input and interaction. Remember that, for now, touchscreens don’t offer the “hover state” you see with mouse-based interfaces, so any design where additional cues or material are revealed when the cursor hovers over an area needs to be modified.
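As a rough illustration, here is a minimal TypeScript sketch of handling a tap versus a horizontal swipe using the standard touch events. The element ID, thresholds, and navigation functions are hypothetical placeholders, not part of any particular authoring tool:

```typescript
// Minimal sketch: distinguish a tap from a horizontal swipe on a content element.
// The element id, pixel threshold, and handler functions are made-up examples.
const slide = document.getElementById("lesson-slide") as HTMLElement;

let startX = 0;
let startTime = 0;

slide.addEventListener("touchstart", (e: TouchEvent) => {
  startX = e.changedTouches[0].clientX;
  startTime = Date.now();
});

slide.addEventListener("touchend", (e: TouchEvent) => {
  const deltaX = e.changedTouches[0].clientX - startX;
  const elapsed = Date.now() - startTime;

  if (Math.abs(deltaX) > 50) {
    // A horizontal movement of more than ~50px is treated as a swipe between screens.
    if (deltaX < 0) {
      goToNextScreen();
    } else {
      goToPreviousScreen();
    }
  } else if (elapsed < 300) {
    // A short touch with little movement is treated as a tap,
    // e.g. revealing the cue that a desktop design would show on hover.
    revealDetail();
  }
});

// Placeholder navigation/reveal functions for the sketch.
function goToNextScreen(): void { /* load the next chunk of content */ }
function goToPreviousScreen(): void { /* load the previous chunk */ }
function revealDetail(): void { /* show extra detail in place of a hover state */ }
```

The same kind of tap handler is one way to replace hover-dependent cues: anything a desktop design reveals on hover can instead be revealed on tap.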

Design to the size of the target device. Make scale-sized templates to get a feel for the actual size and for how much can legibly fit on the screen. Make sure any controls and buttons are an appropriate size for easy touch interaction, and keep interactive areas within easy reach of thumbs given the ways users typically hold their devices.

Use Other Modes of Interaction

Thanks to the various sensors in mobile devices, other creative modes of interaction are also possible, including speech input, tilting or rotating the device, shaking the device, or taking a picture.
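For instance, a shake gesture can be detected in a web-delivered lesson from the browser’s device motion events. This is only a rough sketch; the threshold and the action taken on a shake are arbitrary examples:

```typescript
// Minimal sketch: detect a "shake" gesture using the devicemotion event.
// The acceleration threshold, debounce interval, and handler are made-up examples.
const SHAKE_THRESHOLD = 15; // m/s^2 above gravity; tune per device
let lastShake = 0;

window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
  const a = e.accelerationIncludingGravity;
  if (!a || a.x === null || a.y === null || a.z === null) return;

  // Compare total acceleration against gravity (~9.8 m/s^2) plus a margin.
  const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  const now = Date.now();

  if (magnitude - 9.8 > SHAKE_THRESHOLD && now - lastShake > 1000) {
    lastShake = now;
    onShake();
  }
});

function onShake(): void {
  // Hypothetical response: shuffle a flashcard deck, reset an exercise, etc.
}
```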

Consider Using Games / Gamification to Make it Fun

Gaming is one of the most popular applications of mobile devices. The number of hours spent playing games on popular mobile platforms like iOS and Android is similar to the number of hours spent on home consoles like Xbox, PlayStation, and Wii. Consider using serious games or gamification to increase learner engagement and make the learner more likely to use the learning materials during spare time between other tasks.

Give Opportunities for Communication and Collaboration

Mobile devices are network connected and allow many types of synchronous and asynchronous communication and collaboration. We know that interaction with other learners plays an important role in learning, so enabling learners to have contact and interaction with each other should be part of an effective mLearning strategy. As mentioned earlier, there is a range of synchronous and asynchronous communication and collaboration tools that can be used.

Make Use of Mobile Content Capture Capabilities

Still and video cameras, microphones, styluses, and virtual keyboards on mobile devices allow learners to collect and create material (text, audio, video, or sketches) for project-based work out in the world that can be shared with others. For example, they could visit a site and collect digital “artifacts” to document noteworthy points related to their learning onsite. Learners can then comment on each other’s work.

Make Materials Short, Digestible, and Findable

Usage of smart phones for learning tends to come in short periods, 5-10 minutes in between other activities. This may be better suited to just-in-time information for support on the spot than to sustained periods of learning. You could use text and images, audio such as a podcast, or a short video. This use case for smart phones has instructional design ramifications: it suggests that you should design in smaller, self-contained, bite-sized chunks. This is somewhat in line with the older idea of learning objects.

Keep in mind that smaller pieces mean more pieces, and that these pieces need to be easily searchable and findable. Remember as well that any interfaces for searching for content need to be simplified and rescaled to work within the limits of the mobile device screen. You may also want to tag content for smartphone appropriateness, so that smartphone-optimized content comes up first in search (a minimal sketch of such tag-based ranking follows).
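As a rough sketch of that tagging idea in TypeScript, with a made-up content record shape and tag name:

```typescript
// Minimal sketch: rank search results so smartphone-optimized content comes first.
// The ContentItem shape and the tag names are hypothetical examples.
interface ContentItem {
  title: string;
  url: string;
  tags: string[]; // e.g. ["smartphone-optimized", "video", "safety"]
}

function rankForSmartphone(results: ContentItem[]): ContentItem[] {
  return [...results].sort((a, b) => {
    const aMobile = a.tags.includes("smartphone-optimized") ? 1 : 0;
    const bMobile = b.tags.includes("smartphone-optimized") ? 1 : 0;
    return bMobile - aMobile; // mobile-tagged items float to the top
  });
}

// Example usage with made-up data:
const hits: ContentItem[] = [
  { title: "Full policy manual", url: "/manual", tags: ["pdf"] },
  { title: "Lockout checklist", url: "/lockout", tags: ["smartphone-optimized"] },
];
console.log(rankForSmartphone(hits).map(h => h.title));
// -> ["Lockout checklist", "Full policy manual"]
```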

Tablets tend to be used more comfortably for longer sessions, browsing content while sitting in a chair or on the couch, for example. Still, the ideal is shorter, bite-sized chunks of content that can be easily digested on the go.

Be Conscious of Screen Sizes

Mobile devices come in a range of sizes, from around 4-5 inches for cell phones to 7-10 inches for tablets. This has consequences for how content can be presented. You will need to rework interfaces and the layout of items on the screen, and there are resolution and screen aspect ratio issues to deal with. You may want to use scalable vector graphics rather than bitmaps, and relative sizes (percentages) rather than absolute sizes (pixels) for screen elements and sections.

Alternatively, you can use HTML and responsive web design, so that the presentation of the content and the screen layout changes according to the device and screen size.
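As a rough illustration of the idea, a small script can query the viewport width and switch between layouts. The breakpoints and CSS class names here are arbitrary examples, not recommendations; in practice, pure CSS media queries can do the same job:

```typescript
// Minimal sketch: switch layout classes based on viewport width using matchMedia.
// Breakpoints and class names are made-up examples.
function applyLayout(): void {
  const body = document.body;
  body.classList.remove("layout-phone", "layout-small-tablet", "layout-tablet");

  if (window.matchMedia("(max-width: 600px)").matches) {
    body.classList.add("layout-phone");        // single column, large touch targets
  } else if (window.matchMedia("(max-width: 900px)").matches) {
    body.classList.add("layout-small-tablet"); // two columns, simplified navigation
  } else {
    body.classList.add("layout-tablet");       // full layout with sidebar
  }
}

window.addEventListener("resize", applyLayout);
window.addEventListener("orientationchange", applyLayout);
applyLayout();
```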

Mobile devices tend to fall into three broad size groups that overlap somewhat:

  • Smartphones (4-6 inches)
  • Large smart phones and smaller tablets (6-8 inches), and
  • Full size tablets (9-11 inches)

Keep in mind that due to the small screen, it may be hard to make out small details in complex graphics, even with higher resolution “Retina” type screens. Detailed imagery is probably better viewed on a tablet than on a smartphone. Tablets, because of their larger screens, are more comfortable to read from and can be used for more sustained periods.

Smart phones, however, are an attractive target for training: they are light and portable, and they are something people always have with them. The same cannot necessarily be said for large tablets, which take up more space, are heavier, and usually require a bag to carry around.

Depending on the type of clothes people wear, smaller 7-8 inch tablets (e.g. Nexus 7, iPad mini, Galaxy Note 8.0) may fit in a pants pocket or a purse and still be portable. Some phones, the so-called “phablets” (e.g. the Galaxy Note line, Galaxy Mega), also sit in this in-between “sweet spot” around 6 inches. Devices this size make a decent compromise between generous screen real estate and portability.

A thorough learner and context analysis during the Analysis phase of a project can help you to identify what sort of devices the learners will be using.

Keep Connectivity and Data Issues in Mind

In contrast to tablets, which are most commonly purchased in wifi only versions rather than wifi+3G/4G versions, smart phones include cellular data connectivity by default, with the majority of users subscribing to a data plan through a cellular provider. As such, smart phones are generally always available, always on, and always connected. This makes smart phone technology supportive of a “Just-in-Time” learning approach where learning can potentially be fit into any spare moment during the day, at the learner’s convenience.

However, keep in mind the user’s data access limits: the limits and cost of cellular data plans, the availability of cellular signal, and the availability of wifi. You want to find ways to make any content “lighter” in terms of how much data has to be transferred.

This cuts down on potential data plan use and makes it easier to deal with any connection speed issues. In many urban areas today, cellular connection speeds over 4G/LTE can actually be substantially faster than home or office broadband, but you should not count on this in designing the content.

Data limitation issues are another reason why content presented in short, discrete chunks is ideal for smart phone based learning.

Consider a Multi-Screen Approach

An alternative way to read the “m” in mLearning is multi- rather than mobile, as in multiple types of screens and devices. Rather than making all content viewable or usable on every device, use a multiple-screen approach where each available tool is used for the content for which it is most appropriate.

You can teach different parts of a body of content and carry out different activities with different devices, based on which devices are best for which tasks. Truly mobile devices like smartphones and smaller tablets may be better for reinforcement and on the spot performance support or informal reference rather than for sustained study of formal content.

Large tablets might be better for sustained, formal use due to the larger screen.

Large tablets however can also be useful as a performance support tool, especially when you want the user to be able to browse documents with detailed diagrams. An example would be for maintenance workers on site.

Delivering mLearning: Web Apps vs Native Apps

There are two potential approaches to delivering applications for mLearning:

  • A native app-based approach: an app is developed for each target platform and distributed through its store (Apple App Store, Google Play Store, Windows Phone Marketplace, Blackberry Marketplace, etc.).
  • A web application approach using HTML, JavaScript, CSS, server-side scripting, etc., with content delivered through the device’s web browser.

The breakdown below summarizes some of the pros and cons of these two approaches to mobile applications:

Web App (HTML, JS, CSS, server-side scripting)

Pros:

  • As long as you keep to what is relatively standard across browsers, the course materials will work on any platform: develop once, use anywhere.
  • No need to develop a separate app and distribute it through an app store; the user simply logs into a website.
  • Developers control updates to the web app, since it runs off their servers; users get the updated app simply by visiting the site.
  • Faster and cheaper to develop and maintain.
  • With responsive web design, it can easily adapt to the range of screen sizes on phones and tablets.
  • Every user, regardless of platform, can access the materials and has essentially the same experience.

Cons:

  • Harder (though possible) to set up downloading of content for offline viewing; a minimal sketch follows this breakdown.
  • If content can’t be downloaded and stored for offline viewing, using the app will potentially be more data intensive; each viewing incurs data use. This is not a concern over Wifi, however.
  • Notifications about new content would have to operate separately from the web app itself, for example through email.
  • A web app has far less access to the phone’s hardware, such as the camera (usually for security reasons).
  • Formatting won’t follow the distinctive native look and feel of the specific phone’s or tablet’s OS; it will be more generic. On the other hand, the material will look roughly the same on all platforms.

Native App (app store)

Pros:

  • Easier to enable downloading of content while a connection is available for later offline viewing.
  • Because the app runs on the device itself and can tap into platform-specific optimizations, performance can be smoother for some types of apps and content.
  • Generally more secure for the user, because apps are screened at submission by the central app store.
  • More access to device hardware through OS-specific APIs, allowing more interaction between the app and the device.

Cons:

  • Multiple app stores to navigate.
  • You have to build multiple versions of the app for whichever app stores you want to target.
  • Even for the same app store, development time is needed to customize for smartphone vs. tablet use on iOS or Android, since phone-optimized apps look poor, with lots of wasted space, when simply ported to tablets without adjustment.
  • Involves using multiple development platforms and multiple languages.
  • When the app is updated, you depend on users to download the update; they will be notified, but they have to choose to install it.
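To make the offline point concrete, here is a minimal TypeScript sketch of a web app caching a lesson’s content in localStorage so it can be re-read without a connection. The URL, storage key, and Lesson shape are hypothetical, and a production solution would need something more robust (size limits, versioning, media files):

```typescript
// Minimal sketch: cache lesson content locally so it can be viewed offline later.
// The URL, storage key, and Lesson shape are made-up examples.
interface Lesson {
  id: string;
  title: string;
  body: string;
}

const LESSON_URL = "/api/lessons/intro-module";
const CACHE_KEY = "lesson:intro-module";

async function loadLesson(): Promise<Lesson> {
  try {
    // Online: fetch the latest version and refresh the local cache.
    const response = await fetch(LESSON_URL);
    if (!response.ok) {
      throw new Error(`Request failed: ${response.status}`);
    }
    const lesson: Lesson = await response.json();
    localStorage.setItem(CACHE_KEY, JSON.stringify(lesson));
    return lesson;
  } catch (err) {
    // Offline or request failed: fall back to the cached copy if one exists.
    const cached = localStorage.getItem(CACHE_KEY);
    if (cached) {
      return JSON.parse(cached) as Lesson;
    }
    throw new Error("Lesson not available offline yet.");
  }
}
```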

 

Tracking Mobile Learning

One of the important aspects of an organizational learning strategy is the ability to track the learning activities and achievements of workers / learners. For traditional eLearning, this has meant an LMS serving up content through a web browser; the LMS records enrolments, grades, and course completions for the purpose of certification or career development within the organization. One of the challenges, until recently, with respect to mobile learning activities is that the dominant standard, SCORM, only tracked formal courses hosted on an LMS and taken through a web browser. Mobile apps or serious mobile games would not have been trackable.

However, the latest incarnation of SCORM, called the Experience API or “Tin Can” API, is better equipped to handle a more flexible variety of learning activities, formal or informal, mobile or desktop. For more information, check out my earlier post, which includes a basic overview of the Tin Can API and links to primary sources with more details on the standard and its implementation.

Conclusion

This post looked at some of the capabilities of mobile devices, pointing out some of the basic strengths and weaknesses of such devices. With this as a basis, we looked at some basic principles to use for mLearning design.

These were:

  • Use touch-based design
  • Make use of other modes of interaction
  • Consider using games / gamification to make it fun and improve engagement
  • Give opportunities for communication and collaboration
  • Make use of mobile content capture capabilities
  • Make materials short, digestible, and findable
  • Be conscious of screen sizes
  • Keep connectivity and data issues in mind
  • Consider a multi-screen approach

We also looked at the pros and cons of web apps vs native platform apps for delivery of learning content.

Finally, we discussed issues related to tracking mobile learning activities and achievement using the Tin Can API.

Links

http://www.elearningguild.com/research/archives/index.cfm?id=149&action=viewonly

http://www.elearningguild.com/publications/?id=35

http://www.fastcodesign.com/1669896/10-ways-that-mobile-learning-will-revolutionize-education

Tin Can API Overview

Introduction

One of the rising buzzwords in the world of training today is the so-called “Tin Can API.” The purpose of this post is to give a quick and dirty overview of what it’s all about in an easy to follow Q&A format. I’ve also included some links for more detailed follow-up.

What is Tin Can API?

The Tin Can API is a new standard for learning activity and resource metadata that has rolled out recently. It comes from Advanced Distributed Learning (ADL), the makers of the SCORM standard, and is the intended successor to SCORM. Officially it is known as the “Experience API” (xAPI).

What is SCORM?

For those not familiar, SCORM is short for Sharable Content Object Reference Model. It is a set of standards specifying how eLearning content communicates with the system hosting it, usually a Learning Management System (LMS). The original standard came out in 2000 and went through numerous revisions, the most recent major version being SCORM 2004, which itself went through four editions, the last in 2009.

Why was SCORM developed?

SCORM was developed by the ADL initiative, which came out of the Office of the US Secretary of Defense, originally in response to a late Clinton-era executive order to the Department of Defense. The US military has long been one of the largest global consumers of training, and, as anyone who’s spent time in an Educational Technology graduate program reading research papers in the field can attest, funds a large percentage of educational technology research.

Part of the motivation behind the push that led to SCORM is that prior to this, as of the late 1990s, there was eLearning going on, but with many incompatible standards in use across private industry, between different agencies of the military, and between different civilian agencies of government. The eLearning built by one vendor for one LMS would not necessarily work in another LMS. If a client switched vendors, legacy content could become unusable, requiring expensive conversion. It was perceived that standardization would benefit the eLearning market, allowing better interoperability of content, saving money, increasing efficiency, and encouraging innovation.

Today there are many different products that will produce SCORM compliant eLearning content, and this content is usable in any SCORM compliant LMS. In this sense, SCORM has been a success.

Why go beyond SCORM?

However, the latest edition of SCORM 2004 came out back in 2009. A lot has changed technologically since then, particularly the massive spread of mobile technologies, the rising focus on informal learning, the refinement of web services technologies, and the growing recognition that learning takes place continuously and everywhere, not just when logged into a company LMS. A serious update in approach was needed to keep up with these changes.

SCORM compliant eLearning is delivered through an LMS, via a web browser. But learning activities happen outside of this: reading a book, playing a serious game, taking an instructor-led class, using a mobile app, participating in a simulation, using an informal learning tool, gaining real-world experience. These are all valid learning experiences whose data is not readily captured by existing systems under the current standard.

The Experience API is geared toward collecting and recording details from any learning experience, wherever it happens, in one central location. In terms of basic philosophy, though not in implementation, it is similar to the idea behind Open Badges and ePortfolios.

How and Where are Experiences Recorded?

Experiences are recorded as statements in an actor-verb-object format. For example:

  • “Cyril read the ASTD publication ‘Informal Learning Basics'”
  • “Cyril took the online course ‘Teaching With Moodle: an Introduction'”
  • “Cyril watched a YouTube video on configuring WordPress using plugins.”

Note that while this simple actor-verb-object format is the core, other properties can also be captured (see the sketch after this list), such as:

  • Context
  • Location
  • Starting and ending timestamps
  • Source
  • Content rating, and
  • Results
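As a rough illustration of what one such statement might look like when expressed as a TypeScript/JSON object (the person, IDs, URLs, and verb choice here are made-up examples based on my reading of the xAPI documentation, not an authoritative reference):

```typescript
// Hypothetical xAPI-style statement: "Cyril watched a YouTube video on configuring WordPress."
// The actor details, verb, activity ID, result, and timestamp are illustrative only.
const statement = {
  actor: {
    objectType: "Agent",
    name: "Cyril",
    mbox: "mailto:cyril@example.com",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" },
  },
  object: {
    objectType: "Activity",
    id: "http://www.youtube.com/watch?v=EXAMPLE",
    definition: {
      name: { "en-US": "Configuring WordPress using plugins" },
    },
  },
  result: {
    completion: true,
    duration: "PT9M30S", // ISO 8601 duration: 9 minutes 30 seconds
  },
  timestamp: "2013-10-04T14:05:00Z",
};
```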

The sequence of such recorded experiences forms what is called an Activity Stream. Activity Streams are a concept already used in social networking; familiar examples would be the activity feeds we see in Facebook, Twitter, Google+, or Tumblr. They are an ongoing stream of actions done by a person on a timeline.

Activity stream data is stored in a database called a Learning Record Store (LRS). An LRS can either be included in an LMS or be standalone. A learner’s data could be sent to multiple LRSs; for example, a learner could have records of learning sent to his own personal LRS in addition to his employer’s LRS.

Different LRSs can also share data amongst themselves.
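To sketch how a statement might be sent to an LRS over HTTP: statements are posted to the LRS’s statements endpoint with a version header and credentials. The endpoint URL and credentials below are placeholders; consult the xAPI documentation for the authoritative details:

```typescript
// Hypothetical sketch of sending a statement to an LRS's statements endpoint.
// The endpoint URL, credentials, and exact version value are placeholders.
async function sendToLRS(stmt: object): Promise<void> {
  const response = await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.0",
      // Many LRSs issue per-client HTTP Basic auth credentials.
      "Authorization": "Basic " + btoa("client_key:client_secret"),
    },
    body: JSON.stringify(stmt),
  });

  if (!response.ok) {
    throw new Error(`LRS rejected the statement: ${response.status}`);
  }
}

// Usage: sendToLRS(statement); // 'statement' from the sketch above
```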

What is the benefit of this?

There are a few benefits:

  • The learning record data is not stuck in an LMS, and does not require an LMS.
  • You can capture data on all types of learning activities, formal and informal, online and offline, documenting the full spectrum of learning and professional development activities in which a worker / learner takes part
  • You capture a richer, more complete view of a person’s learning path
  • Workers can potentially more easily document and demonstrate their professional development, and through the use of their own LRS and future data management tools, can curate this information for use in “personal branding” and job search activities.
  • Using advanced data processing, this richer data can potentially be more effectively correlated to actual performance

Can Existing SCORM Packages Be Made Compatible With Tin Can API?

Yes. Tools such as SCORM Engine can convert existing SCORM packages into the Tin Can API format for transfer to an LRS. Legacy learning records could be converted, and new data from legacy LMS content can be recorded in the new format. Many commercial LMS and content authoring tool makers are in the process of becoming compliant with the Tin Can API, and plugins have been developed for open source tools like Moodle.

Links for Further Reading

Great site by Rustici software with additional info on Tin Can API: http://tincanapi.com/

Good post by another blog: http://www.open-thoughts.com/2012/10/understanding-tin-can-api/

http://lo-f.at/glahn/2013/07/xapi-open-badges-and-e-portfolios.html

More technical description of Activity Streams: https://github.com/activitystreams/activity-schema/blob/master/activity-schema.md

http://www.adlnet.gov/tla/lrs/

On Learning Theories: A Pluralistic Approach

As aspiring educational technologists discover in their university studies, there are many different theories of human learning, each bringing a somewhat different take on the broad phenomenon. And while each theory has its partisans and arguments, it is important to resist getting caught up too much in that, and to remember that this is not a contest for “one true theory to rule them all.”

There are some who think this way, or at least talk this way; careers in academia are made by elaborating particular learning theories and arguing for the widest possible application of one’s pet theory. But this is not the best approach for the practitioner, who is less interested in proving or justifying an ideology or dogma than in pragmatically figuring out what works best in practice for the particular task and the particular teaching point at hand, given the context at hand.

The reality is that learning is multi-faceted. There are many different aspects to it. There is an aspect of learning that consists in modifying external, observable behavior. There is an aspect of modifying internal cognitive structures. There are ways that learning can be improved by paying attention to sensory modes and structuring of information so that it’s easier to process. There are ways that learning can be improved by understanding learners’ prior mental structures and by working to modify and build on them. There are aspects of motivating learners to give them that initial push to engage with learning materials. And there are aspects of helping learners to have the volition to persist with the efforts through to the end when things get hard. There are aspects of learning that are shaped by the individual’s interaction with content to construct meaning. There are aspects of learning that are shaped by learners’ interactions with other people to construct meaning. There are ways that knowledge is situated in activity in context, with learning mixed up in recognizing and mastering the affordances of the environment and with the learner’s efforts to become part of a community of practice. There are principles for teaching young learners, and principles for teaching adult learners. And there are meta-cognitive strategies, where learners can learn to monitor and improve their own learning and abilities to learn.

Different theories individually shine light on particular facets of learning. They have applications in certain areas. A zealous focus on one theory, one aspect of learning is not going to be useful. Better to take a pluralistic sort of approach that sees the relative truths in all of these learning theories, and the specific domains where each is the best tool for the job. By putting these insights together, you can gain a more cohesive and comprehensive view of learning.

  • Behaviorism gives insight in training in its reminder to focus on observable end performance, on what you expect the learner to be able to do differently at the end.
  • Cognitive information processing theory reminds us to be mindful of the limits of human processing power and short term memory as we try to deliver instructional content without overloading the learners.
  • Schema theory reminds us that people are not blank slates, and that the ease of new learning is influenced, for good and bad, by the mental structures that are already there. Correct beginnings of understanding are scaffolds we can build on to make our jobs easier. But misconceptions form barriers to understanding that must be actively grasped and torn down before we can begin to build.
  • Motivational Design / ARCS reminds us of the importance of using motivational elements, when appropriate, in training, to push learners to engage with the training and stick with it.
  • Individual constructivism reminds us of the importance of not just delivering passive instruction, but rather allowing learners a chance to interact actively with the learning environment so as to test their developing knowledge structures. This prompts us to design rich, interactive materials so that the learners can engage, test, and refine their knowledge.
  • Social constructivism reminds us of the value of letting learners interact with each other and how much learners can learn from fellow students who have just learned the material. This prompts us to enable opportunities for communication, discussion, and collaboration.
  • Situated cognition, meanwhile, reminds us that learning is tied up with human activity in the world, in social/cultural context and physical space. This prompts us not to create training that is abstracted away from reality, but to build rich, situated activities like case studies and simulations with realistic scenarios.

If we allow ourselves to be reminded of all these points, we can hopefully become more rounded, more effective instructional designers by consciously and selectively applying the insights of the various learning theories. In this way, the training we build can more completely support the varied bases of learning.

Shared post on Keller’s Motivational Design (ARCS)

I wanted to share this post from another blog.

http://learningsnippets.wordpress.com/2013/09/27/integrating-motivation-with-instructional-design/?goback=%2Egde_3730624_member_277131784#%21

Dr. John Keller’s ARCS (Attention, Relevance, Confidence, Satisfaction) model of Motivational Design is one of those theories that stuck in my mind and that I have carried with me since my university studies in learning theories. The ARCS acronym is very easy to remember, and it sums up nicely the key considerations for improving the motivation of the learners taking your course.

The post includes a brief overview of the ARCS Motivational Design model. It also includes a nice link to another site further explaining the theory. There is also a great YouTube video featuring an interview with John Keller, which I’m embedding below. In the video, Keller talks about the origins of the ARCS Motivational Design theory and the expansion of the theory to include volition, the learner’s persistence with the learning.

Enjoy!

The Potential for Stereoscopic 3D in Education

Introduction

One relatively old technology idea that has enjoyed a resurgence in the last couple of years is stereoscopic 3D. 3D in the cinema is a technology that has waxed and waned in a faddish sort of way over the past decades. James Cameron’s Avatar (2009), a high-budget, highly successful blockbuster by a top-notch action director making artful use of 3D, led the most recent charge in bringing back 3D as a trend.

While most examples of 3D in film since then have used the technology in much less skillful ways, the tech is still going relatively strong in theatres, its staying power buoyed by higher revenue from 3D films and aided by the appearance of 3D HDTVs and 3D Blu-ray players on the home theatre front.

So 3D has made a comeback over the past four years in entertainment. But what about the prospects for more serious applications, for learning? The subject of this week’s blog post is the potential for the use of 3D content in learning / training.

The post will look at:

  • Factors converging to make more widespread use of 3D content possible
  • Potential learning benefits achievable through judicious and effective use of 3D content
  • Types of subject matter content that would be potentially amenable to instruction through 3D video
  • Challenges faced in making 3D content used in a more widespread fashion
  • Considerations for effective use of stereoscopic 3D in learning and training

First, a note: when I talk about 3D here, I am making a distinction between Pixar-style 3D animation (computer-generated animation with lighting effects that make the images look “3-dimensional” but made for viewing on a 2D screen) and stereoscopic 3D, which uses two slightly different images (one for each eye) and glasses (either active with shutters or passive with polarized lenses) to simulate what the eyes and brain would experience looking at a real object, creating an immersive and realistic experience. This article is focused on stereoscopic 3D.

Factors Leading to Increased Use of 3D

There are a number of developments causing a convergence toward an increased potential to use stereo 3D content for learning. These developments include:

  • Resurgence of interest in recent years in 3D in cinema. Mixed quality has slowed the growth of its popularity in theatres, but approximately a quarter of box office dollars still come from 3D screenings. Some artists have really learned to work the distinct visual language of 3D in an artistic, compelling, integrated way and have made films oriented toward uses of it that bring added value. Others tack it on as an afterthought, diluting the concept.
  • Increased accessibility and affordability of professional 3D camera rigs, as well as relatively affordable consumer-oriented dual-lens 3D camcorders. Basic 3D capable camcorders are currently in the hundreds of dollars to a little over a thousand range. For a while now, YouTube has allowed the uploading of user-generated 3D content.
  • Accessibility and affordability of TVs and monitors that can display 3D. Though 3D is no longer being used so much as a dominant selling point, the reality is that most newer TVs include 3D as an option by default. A quick search on the Future Shop website shows 3D capable HD LED TVs from top tier manufacturers like Samsung between $750 and $1000. And obviously as technology advances these prices will continue to fall.
  • Affordability of 3D blu-ray players. A similar search shows these devices selling for as little as a bit over $100.
  • Accessibility and affordability of office and school type projectors that can project 3D content. ($1500-2000)
  • Availability of software for editing 3D content (After Effects, e.g.).
  • Increased diffusion of expertise in 3D cinematography – understanding of principles behind stereoscopy and how to compose scenes for 3D video. This is an important factor for ensuring not only a quantity of content, but quality too.

Potential roadblocks

There are several potential roadblocks that need to be overcome, however:

  • Getting access to quality camera equipment
  • The need to develop knowledge of how to properly set up 3D shots and shoot in 3D
  • Hassle of glasses: active shutter glasses need a power source and are expensive, while passive polarized glasses by design filter out half the light, making for a dimmer, less vibrant picture
  • Expense of screens (though this is falling)
  • Limited amount and varied quality of content. Some 3D content is high quality, though much of it is faddish bandwagon-riding. Few creators have learned to use 3D effectively, working with its strengths and weaknesses and planning the visual composition of shots from the start with good use of 3D in mind, so as to make something that adds value over what would have been possible in 2D

Potential benefits of 3D

So why use 3D content for learning or training, beyond use as some sort of fad or gimmick? There are several potential benefits, given use of the technique in a meaningful way:

  • Immersiveness
  • Learner engagement
  • Increased level of connection with content
  • Sense of realistic “presence” of what is being seen
  • Enhanced ability to perceive and comprehend spatial relationships and depth in learning content

For learning that involves understanding a place or some sort of 3D structure, 2D content takes more mental effort to interpret: the learner has to work to understand the spatial relationships and to connect with the place. Good filmmakers and photographers know how to compose 2D imagery and video to help us with this, but there is still a need to take the 2D and mentally abstract from it, imagining the 3D reality. With 3D, there is not so much imagination or abstraction required; the mental processing would theoretically be similar to how you would process the perceived image of the real thing if you were there.

Visit a location, or a good museum exhibit, and you can see the power of experiencing a location in all its spatial dimensions. It is one thing to read about something and look at a picture, another to experience it in an immersive fashion.

Potential areas of training application

Stereo 3D could be beneficial in a range of different subject matter areas, from hard sciences to life sciences to social science to fine arts. Some examples of where 3D could be useful:

  • Understanding 3D structures and the relation between structure and form, for example in the hard sciences: understanding the spatial arrangement of stars in a galactic supercluster, visualizing a simulation of the formation of the solar system, visualizing the relation between structure and function for higher-level protein structure or for the enzymes that control DNA transcription, or visualizing simulations of how weather interacts with geography like mountains.
  • In experiencing exotic or impossible to visit locales. For example, outer space, in Space Station 3D or Alfonso Cuaron’s upcoming Gravity. Or to virtually visit paleolithic cave paintings normally closed to the public in Werner Herzog’s Cave of Forgotten Dreams. Another would be out of the way natural parks or reserves.
  • In experiencing areas of natural splendor like the Grand Canyon. Or more remote areas of natural beauty. Ecologists and conservationists sometimes struggle to communicate the value and beauty of certain isolated natural locations and the value of their preservation. 3D can give a more meaningful sense of what the place is like and why it is special.
  • Along the same lines, to experience wildlife on video in a more realistic, lifelike manner.
  • In experiencing human spaces or manmade sites like famous areas in foreign cities
  • For history, to better understand the spatial layout of an area and how it shaped the experience of some historical event: the Pass of Thermopylae, for example, or the Normandy beaches, or Dieppe.
  • For anthropology, in the study of different cultures, their ways of life, and the spaces in which they live. A good part of social science is in understanding the physical spaces in which people live; culture is, in a sense, formed at the intersection of a physical space with technology and climate. 3D transmits that spatial dimension more powerfully, allowing us to better bridge the separation between ourselves and some exotic culture by virtually entering their living space.
  • Religious studies – to virtually experience the geography of the Sinai desert or the Mount of Olives, to walk through an archaeologist’s 3D reconstruction of the Temple in Jerusalem, to virtually experience Mount Arafat or the Kaaba during the Hajj, or to virtually visit one of the more famous and spectacular Hindu or Buddhist temples
  • Fine arts – to virtually experience art exhibitions (sculpture in particular) in distant locations for art education, to visit the studio of a prominent photographer or painter, or to sit in the pit of an orchestral concert
  • Experience of a first hand point of view for relatively recent history and contemporary human events as live footage gets captured in 3D
  • In architecture or structural engineering for collaboration in the design of structures by looking at 3D models in 3D

Important Training Considerations

In order to promote effective learning or training using 3D content, there are several issues to which attention will need to be given:

  • Ensuring effective cost-benefit. If investments of money and time are going to be put into buying equipment and learning how to use it, there needs to be a return on investment in the form of more efficient and / or more effective teaching / training. It has to be more than a fad or gimmick done “because the technology is there.” There has to be a compelling learning case for any particular usage.
  • Avoiding it being mere “edutainment” where it’s cool to watch but learners don’t gain anything toward the actual objectives. We need to recognize when 3D legitimately gives worthwhile added value that justifies it and when it’s just decoration. There is a difference between content simply being entertaining and content being educational. (Though of course, the ideal is for it to be both!)
  • Finding quality content. This is a major concern right now. There is currently a kind of catch 22 for example with 3DTV where people don’t watch a lot of it because there are hardly any channels with content and new channels are slow to appear because not many are actively watching.
  • A significant portion of the population (some estimates go up to 25%) experiences negative effects or discomfort from watching 3D content, ranging from headaches to nausea. This comes from a mismatch between how the eyes must be oriented to experience the 3D effect in focus (both eyes looking straight ahead toward the screen, parallel to each other) and how the eyes naturally tend to orient themselves based on where the brain is telling them the image is (the eyes rotate inward so that they converge on the perceived object in between). This strain, however, is usually experienced from watching a whole feature-length two-hour film in one stretch. The effect would likely be less noticeable when viewing sets of 3D clips with breaks in between, as would be the case with 3D educational materials.
  • The need to establish a sound, rigorous research grounding of principles to guide best practice. There has been a relative shortage of hard research establishing what works educationally in 3D. There are some research studies showing positive and significant differences in the amount learned when content was viewed in 3D as opposed to 2D. However, more rigorous research, as well as creative informal experimentation by training professionals and enthusiasts, needs to be done. People need to generate content, play with different variables, and see which types of content benefit in a meaningful way from a 3D treatment and which don’t, what factors augment or hinder the educational impact of 3D, how much of the measured impact is attributable to simple novelty temporarily enhancing attention, and how much to real improvements in the ability to get ideas across. The increasing availability of affordable, quality 3D-capable camcorders and screens on which to show the content should enable this by opening production of content up to education departments, training departments, and “amateur,” YouTube-type enthusiasts.

In Conclusion

A number of factors are converging to produce an environment where stereoscopic 3D is increasingly a viable approach for some training content. Well constructed 3D content can potentially bring learning benefits as part of a properly-designed training and learning solution for a range of different subject areas. Proper attention has to be paid however to ensure that 3D content is used in a thoughtful, principle-based way so that it brings legitimate ROI rather than simply being a gimmick.

Interesting Links

http://jaredjared.com/wp-content/uploads/2011/10/Bendis-Stereoscopy.pdf

http://www.indeptheducation.com/wp/blog/

http://www.iste.org/connect/special-interest-groups/sig3d

http://planet3d.org/category/education-and-3d/

http://www.videomaker.com/videonews/2013/01/3d-video-boom-or-bust-in-2013

http://www.3dfocus.co.uk/3d-news-2/exclusive-3d-as-standard-as-the-power-button/11680

http://www.xpand.me/education/

Logical future directions of stereo 3D technology:

  • Combining 3D video capture with body movement based Natural User Interfaces (NUIs), interacting with a 3D imagery via physical gestures
  • Capturing and streaming 3D video to enable Live 3D virtual tele-presence. Some companies are already offering such setups for corporate clients.
  • Incorporation of 3D into mobile or wearable devices. Imagine a miniaturized dual-lens 3D capture setup in future versions of cellphones and tablets, or Google Glass-type augmented reality wearables with dual cameras and a 3D display in the glasses a few years down the road. The Meta glasses are one example of a project working in this direction.

Augmented Reality and Wearable Computing: Possibilities for Google Glass in Training

Introduction

Hello, happy Friday and welcome to my blog. One of my main objectives with this blog is to encourage innovation in training by taking a “skate to where the puck is going” perspective: looking at technology coming in the not-too-distant future and at ways that it can enhance training. Today’s topic is augmented reality and wearable computing, with a focus on Google Glass. To be enjoyed with a nice Friday morning coffee or tea at the desk. 🙂

NOTE: I am not affiliated with Google or Google Glass and this article is not intended as a promotion of Google Glass in particular. I have chosen to reference this particular technology as a relatively well known example of the technology that currently exists and will be available in the near future on the market.

Augmented Reality

One interesting contemporary trend in computing is so-called augmented reality. What is augmented reality? It differs from virtual reality: rather than immersing the user in an unreal virtual world, augmented reality provides an “overlay” that augments or extends one’s appreciation, understanding, or navigation of the real world. The user operates in the real world as normal, but with useful location- or context-dependent information displayed in his field of view to assist him in whatever he is doing.

One example of augmented reality is the HUD, the heads-up display on aircraft. A projector behind the pilot projects display data onto a screen between the pilot’s head and the windshield. The artificial horizon line of the attitude display overlays the actual horizon and various pieces of key data are also projected onto this overlay layer.

A HUD also features prominently as futuristic concept art in the Iron Man series of films.

Some mobile phone apps play with augmented reality as well, overlaying information about local points of interest on the camera preview image shown on the phone screen. The app uses data from the location sensors, the compass, and the gyroscope to sense where the user is and in which direction the camera is pointing, and updates the augmentation accordingly.

Wearable Computing

Another trend is that of wearable computing. This involves devices with computing power and integrated sensors that collect data about the wearer and his environment (GPS position, orientation of the body in space, velocity, acceleration/rotation, direction facing or direction of movement, as well as data feeds like local temperature and local points of interest).

Some examples are the Pebble watch and the Samsung Galaxy Gear smartwatch.

Google Glass: Upcoming augmented reality wearable computer

Google Glass is an upcoming device (not yet commercially available) that is receiving favorable reviews from individuals participating in its beta testing. It is a relatively lightweight pair of glasses with an integrated, wirelessly internet-connected computer. The device features embedded sensors and a small projector that places imagery into the field of view of the wearer so that it overlays reality. It functions as both a wearable computer and an augmented reality device. There is a touch-sensitive area on the side of the glasses near the user’s right temple to initiate actions by touch. There are also speakers for audio, a microphone to record audio and take voice commands, and a camera to take pictures or record video. You can see a video of Google Glass in action here.

Glass features

The user can watch videos, view pictures, take pictures or record video or audio, issue verbal commands or queries, search the internet, share pictures/audio/video to social networks or by email, send dictated IMs, and take part in video conferencing (with the user seeing the other person, and the other person seeing the user’s point of view, or POV). There is also the ability to access Google services such as Translate.

Training and performance support applications of Glass

In this post, I’d like to describe some of the conceivable training and performance support applications of this great new tool. Some of these possible applications are based on known out of the box capabilities of Google Glass, while others are reasonably foreseeable possibilities given the capabilities of the device and assuming creative effort on the part of app developers.

Technical Training, equipment maintenance

Google Glass could be very useful in technical / equipment maintenance training.

  • As a means to collect easy, hands-free POV video of an expert / SME demonstrating how to fix an issue with the equipment or perform some procedure. This video could be streamed live, or could simply be a way to record video clips for use in online help or formal eLearning.
  • As a means to collect POV video of the trainee performing the task while streaming the video to an expert. The expert observes and gives verbal, and possibly video, feedback over an audio/video conferencing connection, possibly through another Glass and potentially from a remote location.
  • As a visual support for component identification and access to more detailed information. Camera image recognition could recognize equipment and overlay a 3D AutoCAD or NGRAIN image with labels on components. It could also potentially enable easy linking through to online technical documentation formatted specially for viewing on the Glass screen.

Soft Skills training

On the other side of things, Glass could also be very useful in soft skills training.

  • POV video capture of user performance in a role playing simulation could be used for review in post-simulation debriefing sessions
  •  Or, to turn things around, in the same role playing simulation, you could have someone other than the trainee wear Google Glass and record. This would be good for client-facing skills training. The learner can see himself and his performance through the client’s direct POV. Either the video could be recorded and reviewed after the session, or both people could wear Glass, and establish a video-conferencing link. That way, the trainee could perform the simulated interaction in the scenario, while receiving live feedback of how the client experiences the interaction.

Performance support for someone working in a people-centric / client facing position

Certain professions place a much higher emphasis on meeting people, making and nurturing contacts, and all-around growing and maintaining a massive “mental rolodex.” These include politicians, public relations people and publicists, salespeople, talent agents, and so on. The people who go into these fields often have unusual talents for this, but everyone has cognitive limits; we remember the face but not the name, for example, creating socially awkward moments.

A wearable eyeglass computer like Glass could help as a performance support. When a person comes into view, facial recognition could be carried out and the face checked against the contact database. This then brings up useful reminder data – photo, name, company, age, and any other useful or relevant information, allowing a smooth start to the conversation. Glass could also allow an easy way to photograph business cards and automatically (via OCR) extract information to import to contacts.

Language Learning

Augmented reality layers could make useful scaffolding for language learning support apps, tying into Google’s impressive tools for image recognition, speech-to-text, and text-to-speech. Text and audio overlays could provide helpful support information to the learner, whether in classroom practice or out in the real world. The support could be an optional scaffold that the learner can turn on or off as needed. This could conceivably involve:

  • Live OCR (optical character recognition) and live translation overlay of signage or written material (reading store displays, street signs, restaurant menus, etc)
  • Live speech to text of foreign language to text translation overlay on screen
  • Live suggestions of phrases to use in conversation, with spelling and pronunciation cues

As well, Glass could enable course activities or assignments where the learner goes out into the community and records himself practicing the new language in a real situation (go buy something in a store and talk to clerks, ask for directions, etc). The conversations and interactions could be recorded and reviewed or graded afterwards. The world can become a language lab.

Historical site and museum interpretation

The technology could also find great usage in historical site or museum interpretation.

The user could borrow or rent a pair at the entrance or visitor’s center and use them to experience a transparent overlay of video or 3D animation based on location, providing the experience of being there and then. This could be used at the sites of famous battles (Waterloo, Civil War, WWI and WWII battlefields, the Plains of Abraham, Revolutionary War sites, etc.), at historical ruins like the Colosseum or the Acropolis, or at a natural history site to show what the location looked like in the Jurassic period or to give a visual sense of what the glaciers would have looked like in the last ice age.

Similarly, it could be used in museum exhibit interpretation. The user borrows/rents the glasses, comes to a display/exhibit, and the Glass detects the location and makes an audio-visual presentation available over wifi.

Operational performance support

Glass could also potentially offer performance support for operations of equipment like airplanes and cars.

In aviation, for example, this could enable hands-free, eyes straight ahead checklists as an alternative to glancing downwards at the checklist display on a cockpit Multi-function Display (MFD). Glass, receiving a wireless feed from the airplane, could display the current checklist item, upcoming checklist items, and any special cautions or warnings for steps. Visual annunciations could also potentially appear on the display.

It could also be used in driver training or driver performance support, for example as a reinforcement for defensive driver training. The user could opt in to supportive prompts, or the Glass could monitor the driver’s point of view and the traffic via its camera and give scaffolding prompts until the learner reaches proficiency. For example, it could:

  • prompt the driver to check mirrors and the speedometer periodically
  • provide a visible cue if the speed calculated from GPS and the accelerometer exceeds the posted limit known from GPS/Google Maps by some threshold (10-20 km/h above, say)
  • provide a visible or aural prompt to encourage attention or slowing if red brake lights are seen far ahead, if a car far ahead is detected to be slowing, or if the following distance to the car in front falls below some set threshold

Conclusion

These are just a few possible training or performance support applications that can be imagined for Google Glass. Doubtless others can and will be imagined and realized as the technology rolls out commercially.

Feel free to leave a reply to share your comments and your own ideas.

Getting Started

Welcome!

I’m very excited to be making my first blog post. I’m taking advantage of some downtime to finally get this project off the ground after ages of procrastination.

Focus of the Blog

The focus of this blog will be on interesting new ideas and concepts in training and learning. Topic areas I have in mind include informal learning and tools for supporting it, MOOCs in higher education, augmented reality tech like Google Glass, commercially available VR tech like Oculus Rift, Natural User Interface (NUI) tech such as MS Kinect, gamification in learning and training, and new standards like Tin Can API.

Posts will probably go in a few different directions:

  • Ideas about how new technologies can be used in training and learning
  • Commentary on interesting articles or book reviews
  • Reviews of interesting tools such as eLearning development tools, virtual classroom or Web conferencing tools, LMS tools, or mobile apps.
  • Organizations and companies doing cool and innovative things in training

Other Projects

I also hope to indulge a bit my passion for teaching, particularly in science, through some new YouTube channels. These efforts are still in the planning stages, but as they get off the ground, I may share some of the more interesting efforts here.

Plans for Future Posts

I’m hoping to start off by posting about once a week. That seems like a nice reasonable goal to start off with in terms of manageable time commitment and a focus on quality over quantity. As time goes on, I’ll evaluate from there based on time available and on whether the blog attracts any reader interest.
