As we all know, technology is moving at a breathtaking pace. A mere ten years ago, Facebook launched its mobile app in earnest, a milestone many believe marks the true shift to the age of mobile computing, the largest digital transformation since the rise of the Internet in the 90s. Now we are entering the decade of spatial computing, a shift poised to be just as impactful. Everything from self-driving cars to advanced autonomous systems will rely on spatial computing, the apotheosis of the industrial and digital transformations that preceded it.
As we sit on the precipice of our next big digital step forward, we need to ask ourselves how to prepare for an explosion of complexity in our daily lives.
Spatial computing, particularly Augmented Reality, has the potential to force-multiply the power of human intelligence. In many respects, AR is our best way to contend with the inherent complexity of our future. And while AR has limitless applications, one area where it will help us meet the challenges of that complex future is training and knowledge acquisition.
AR has already demonstrated huge benefits for employee training. Because it is inherently on demand, AR training can significantly reduce costs by eliminating travel to seminars and classes, especially when implemented on existing hardware platforms like mobile phones and tablets. It can also support job training where safety is a concern. Most importantly, AR training provides more effective learning and retention by removing cognitive barriers, creating a stronger emotional connection to the content, and narrowing focus to the core subject material in ways that were not previously possible at scale.
In his book UX for XR (Design Thinking), Cornel Hillmann cites studies showing that AR participants learn an average of “four times faster, with 3.5 times higher emotional connection while being 2.5 times more confident and four times more focused.” What these studies highlight goes beyond mere rote memorization. Emotional connection and focus are primary indicators of true knowledge retention. In situ spatial interaction, gestures, and speech all elevate digital engagement in ways that have a profound impact on cognition.
Bloom’s Taxonomy of Learning is a useful framework to consider as we evaluate the impact of augmented reality on knowledge acquisition. Originally published as the ‘Taxonomy of Educational Objectives’ in 1956 by Benjamin Bloom, the framework has been revised several times to reach its current form. It is meant to help structure objectives and learning goals, organize those objectives coherently, and ensure that instruction and assessment are aligned with them. Bloom’s work has served as a cornerstone of curriculum design at every level of education, from pre-K to postgraduate and professional instruction.
With Bloom’s Taxonomy, we can evaluate the impact of AR-based training. One could argue that AR redistributes cognitive load from the base levels of the taxonomy to the higher levels of cognition focused on evaluation, adaptive thinking, judgment, and decision-making. We can also look at practical applications in training on smart systems and medical equipment to illustrate the value of AR with respect to Bloom’s Taxonomy.
At its base, we find ‘Remember’, the ability to recall facts and concepts. Any deficiency at this base level impedes efficiency at every level above it. Much of our superficial, factual knowledge has already been offloaded to digital sources like Google and Wikipedia. The ability to project factual knowledge into our field of vision, as needed, will supercharge our ability to understand, apply, analyze, and evaluate problems and solutions like never before. Rather than ‘right-click -> Look up’ on a 2D screen, a simple gesture or voice command can pull up relevant information in spatial proximity to your gaze, accelerating knowledge application in a subtle but paradigm-shifting way.
Beyond ‘Remember’, Bloom’s Taxonomy progresses to higher levels of cognition. Because AR projects information into the real world, and because AR experiences can be optimized around gaze and gestures, learning can accelerate at all levels of the taxonomy with heightened focus and engagement. Removing a litany of impediments related to focus and context switching has a profound impact on engagement, comprehension, and retention. At the ‘Understand’ level of the taxonomy, we want the trainee to identify, locate, select, and classify. At the ‘Apply’ level, we want the trainee to solve, demonstrate, and operate, reflecting competency from the lower levels of the hierarchy. Again, in situ information projected onto real objects in the real world fast-tracks this level of comprehension.
Imagine the simple case of AR training for maintenance of a robotic arm. The goal is to diagnose and fix a problem causing restricted movement of the equipment. With a simple voice command, device instrumentation data can appear overlaid on the arm, showing status readings and error codes. Another voice command can bring up the details of a particular error code. AR has helped at the ‘Remember’ level. The AR instruction can then walk the trainee through a procedure, step by step, while she is working with the physical equipment. Having all of this information overlaid in context accelerates the trainee’s ability to identify and classify a mechanical issue with the robotic arm. AR has helped the trainee ‘Understand’ more efficiently than ever before.
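To make that flow concrete, here is a minimal sketch of how such a voice-driven maintenance lesson might be structured. Everything here is an assumption for illustration: the error codes, the procedure steps, and the show_overlay function (which simply prints, standing in for a real AR rendering call) are hypothetical, not part of any actual robotic-arm product.

```python
# Illustrative sketch of the robotic-arm maintenance flow described above.
# The voice/overlay layer is hypothetical; print() stands in for an AR call
# that pins text next to a tracked part of the arm.
from dataclasses import dataclass
from typing import List

@dataclass
class ProcedureStep:
    instruction: str   # what the trainee should do
    anchor: str        # named part of the arm the overlay attaches to

# Illustrative error-code lookup (supports the 'Remember' level).
ERROR_CODES = {
    "E42": "Joint 3 torque limit exceeded - possible obstruction or worn bearing",
}

# Illustrative repair procedure (supports the 'Understand' and 'Apply' levels).
PROCEDURE: List[ProcedureStep] = [
    ProcedureStep("Power down the arm and engage the mechanical lock.", "base"),
    ProcedureStep("Inspect the joint 3 housing for debris.", "joint_3"),
    ProcedureStep("Rotate joint 3 manually and check for resistance.", "joint_3"),
    ProcedureStep("Clear the obstruction or flag the bearing for replacement.", "joint_3"),
]

def show_overlay(anchor: str, text: str) -> None:
    # Stand-in for a real AR rendering call.
    print(f"[overlay @ {anchor}] {text}")

def handle_voice_command(command: str) -> None:
    """Dispatch a trainee voice command to the appropriate overlay."""
    if command.startswith("explain "):
        code = command.split()[-1].upper()
        show_overlay("arm", ERROR_CODES.get(code, "Unknown error code"))
    elif command == "start procedure":
        for i, step in enumerate(PROCEDURE, start=1):
            show_overlay(step.anchor, f"Step {i}: {step.instruction}")

if __name__ == "__main__":
    handle_voice_command("explain e42")
    handle_voice_command("start procedure")
```

The point of the sketch is the structure, not the specifics: facts (error codes) are retrieved on demand at the point of gaze, and procedural guidance is anchored to the physical part the trainee is already looking at.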
Because augmented reality accelerates the ability to recall facts and understand basic concepts in novel and effective ways, cognitive load shifts away from rote memory retention and toward critical thinking. With the limitless simulations possible through AR, the trainee is engaged at a level that promotes emotional connection with the content, which in turn fosters deeper retention of the material. Quick visual feedback loops at the point of execution accelerate richer application, analysis, and evaluation, the concepts at the higher end of Bloom’s hierarchy.
As an example of a real-world application of AR integrated into an overall learning experience, consider the assembly of test kits for medical labs. These kits require careful placement of test tubes and solution vials into precise locations that vary by SKU. AR content can guide the repetitive selection and placement of parts into the correct positions in the kit. Once the trainee reaches a certain level of comfort in remembering and understanding the general kit construction, AR can begin introducing variants of the kit on the fly, including incorrect part numbers and assembly ordering. The trainee is now engaged at the application, analysis, and evaluation levels of the hierarchy. The number of hours a trainer would need to prepare and execute these scenarios in the real world makes this kind of dynamic training nearly impossible without AR.
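A short sketch of that on-the-fly variant generation is below. The part numbers, slot names, and fault types are illustrative assumptions rather than any real lab workflow; the idea is simply that an AR lesson can randomize faults and check whether the trainee catches them.

```python
# Illustrative sketch of on-the-fly kit-variant generation for the test-kit
# exercise above. Slot names, part numbers, and fault types are hypothetical.
import random
from typing import Dict, List, Tuple

BASE_KIT: List[Tuple[str, str]] = [   # (slot, correct part number)
    ("slot_1", "TUBE-A10"),
    ("slot_2", "TUBE-A11"),
    ("slot_3", "VIAL-S05"),
    ("slot_4", "VIAL-S06"),
]

def make_variant(fault_rate: float = 0.3) -> Dict[str, str]:
    """Return a kit layout with occasional injected faults for the trainee
    to catch: a wrong part in a slot, or two slots swapped."""
    layout = dict(BASE_KIT)
    if random.random() < fault_rate:
        slot = random.choice(list(layout))
        layout[slot] = "PART-XXXX"                   # wrong part number
    if random.random() < fault_rate:
        a, b = random.sample(list(layout), 2)
        layout[a], layout[b] = layout[b], layout[a]  # swapped assembly order
    return layout

def score_response(layout: Dict[str, str], flagged: set) -> bool:
    """True if the trainee flagged exactly the slots that deviate from spec."""
    faulty = {slot for slot, part in BASE_KIT if layout[slot] != part}
    return flagged == faulty

if __name__ == "__main__":
    variant = make_variant()
    print(variant)
    # A trainee who flags nothing is correct only when no faults were injected.
    print(score_response(variant, flagged=set()))
```

Because the variants are generated rather than hand-built, the difficulty and fault mix can adapt to each trainee without a trainer staging new physical scenarios.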
As we conclude our application of Bloom’s Taxonomy to AR training, we arrive at the top of the hierarchy: ‘Create’. This level of the taxonomy reflects true mastery of the subject matter. Looking at the next technology wave, we see a human capital gap: the demand for mastery cannot be met with traditional forms of knowledge acquisition. Curriculum development and delivery, especially for the medical devices and smart systems of our future, faces too many impediments to be truly effective without augmented reality. The costs of training preparation, limited access to heavy equipment, and safety concerns all present practical problems that AR can solve. Beyond cost and logistics, AR offers a revolutionary opportunity to accelerate mastery. Given the growing complexity of technology, we need the AR revolution to get us there.
References
Armstrong, P. (2010). Bloom’s Taxonomy. Vanderbilt University Center for Teaching. Retrieved September 2021 from https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/