When Google released the first version of Google Glass in 2013, Lucas developers loaded our warehouse optimization software and created a vision picking demo that combined the best of voice and vision. Our conclusion was that smart glasses were not ready for prime time in the warehouse and distribution center. Google eventually shelved the original developer version of Glass, but smart glasses have roared back to life in the last three years.
Watch this short interview on SC Digest: Augmented Reality in the DC – What’s Real, What’s Not and What Do Users Need to Know?
Today there are a number of very capable smart glass models from several vendors (Vuzix, Recon, Google, among others) that are targeting enterprise and industrial applications in manufacturing, supply chain logistics and field service. In the distribution center, these wearable devices open the door to new picking processes that incorporate visual cues and augmented reality in a seamless, multi-modal and hands-free workflow including voice and scanning. Alongside the technical advances, the hype surrounding the value of so-called vision picking in warehouse logistics is growing.
A number of early adopters in the US and Europe have rolled out vision picking processes in their warehouses, notably DHL Supply Chain (Lucas' CTO recently joined DHL executives in a presentation on the state of AR in the DC at the Ecommerce Operations Summit). These early production systems show that vision holds great promise as a complement to current voice and scanning technologies to further improve worker efficiency and accuracy in picking and other warehouse logistics tasks. Notwithstanding the promise of the technology, we aren't quite there. Today's smart glasses for vision picking do not provide augmented reality or vision recognition, two key capabilities that will add significant value beyond current wearable technologies.
This article provides an overview of the current state of the art in vision for warehouse management and operations, and identifies the short and long-term milestones that will enable vision to cross the chasm from promise to reality in warehouse logistics and other supply chain applications.
Vision Picking Today
Rather than a revolution, vision represents the next stage in the evolution of mobile technologies used in the warehouse, an evolution that includes the steady migration from RF to voice to today's multi-modal wearable picking systems (to learn more, see our White Paper, Warehouse Mobility Beyond Voice and RF).
The concept of “vision picking” is that order pickers in a warehouse or DC can view pick information within their field of vision on smart glasses, rather than looking down at a mobile RF terminal device screen. To confirm their tasks, workers can capture barcodes using the camera embedded in the wearable glass frame rather than handling a scanner. Equally important as the display and scan capabilities, smart glasses include speakers and microphones so that workers can interact using voice direction and speech recognition.
With a heads-up display, the user can glance at information to the side of their main field of view. Product images, visual indicators and textual information can be displayed based on context or user request.
In essence, vision picking merges the best available mobile technologies:
- Task information is presented to workers in the form of text and images available in the user’s field of vision on a wearable “heads-up display.”
- Verbal prompts and instructions supplement and complement the text and visual information.
- Users can scan barcodes and use voice commands to confirm their activities without stopping to handle and aim a handheld scanner or key in information.
- Finally, workers can request help, report exceptions, and navigate the application workflow (skip item or aisle, change work area, etc.) using voice commands.
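As a rough illustration, the multi-modal confirmation loop described in the list above can be sketched in a few lines of code. This is a hypothetical sketch only: the names (`Pick`, `confirm_pick`, the input tuples) are illustrative and do not represent any vendor's actual API.

```python
# Hypothetical sketch of a multi-modal pick confirmation loop.
# A worker can confirm a pick by scanning the item barcode (glasses camera
# or ring scanner) or navigate the workflow by voice (e.g. "skip item").
# All names here are illustrative, not a real product interface.

from dataclasses import dataclass


@dataclass
class Pick:
    location: str   # slot shown on the heads-up display
    sku: str        # expected barcode value
    quantity: int


def confirm_pick(pick: Pick, inputs: list[tuple[str, str]]) -> str:
    """Consume (kind, value) events until the pick is confirmed or skipped.

    kind is "scan" (barcode read) or "voice" (recognized speech command).
    """
    for kind, value in inputs:
        if kind == "scan" and value == pick.sku:
            return "confirmed"      # barcode matched the expected item
        if kind == "voice" and value == "skip item":
            return "skipped"        # worker navigated past this task by voice
        # any other input: in a real system the HUD and voice prompts
        # would re-prompt the worker; here we just keep consuming events
    return "unconfirmed"
```

The point of the sketch is that scan and voice events feed the same confirmation step, so the worker never has to stop and key in data at a terminal.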
By eliminating time that might otherwise be spent stopping to read a wearable device screen, handling a scanner, or keying in data, vision picking will create a more efficient warehouse picking process compared to the RF picking processes that predominate in logistics operations today. It's worth noting that vision in itself will only affect worker activity at the pick face. It will not directly impact travel time or pick density, both of which have a bigger influence on overall process efficiency and productivity.
To achieve dramatic productivity gains with vision, many DCs will have to redesign their picking process. In other words, the real driver for operational improvements with vision will come from improved process and workflow design using vision, voice and scanning. (Lucas recently published a white paper that explores how workflow and process design impact warehouse productivity. That paper is focused on best practices for using voice picking, but the principles apply to vision picking as well.)
Heads-Up Display, Augmented Reality, And Virtual Reality
As noted above, today’s vision picking systems use smart glasses to provide a user with a heads-up display (HUD) – a “virtual” display within a portion of their line of sight. This is very different from true augmented reality (AR) and virtual reality (VR), two technologies that are starting to find uses in consumer applications.
AR involves superimposing virtual images and/or information on the physical environment in the background. Pokémon Go is one of the best examples of AR, where virtual Pokémon are superimposed on the image of the real world on a user's smartphone. In a DC context, AR would enable a worker to see, in their AR glasses, a virtual light on a slot in a pick face to call out the next pick location. Another use would be to show a virtual carton nested on a half-stacked pallet of boxes to help a selector build a better pallet.
Virtual Reality goes one step further to immerse a user in a virtual world, using a VR headset that fully blocks a user’s view of the outside world. In some cases, a camera on the outside of the headset will capture and combine real and virtual images on the screen to create a type of mixed reality. VR is already being used in gaming, medical and scientific/engineering applications.
Evolving Smart Glasses For Vision Picking
A 2009 prototype of Kopin Golden-i smart glasses produced by Motorola.
Current smart glasses for the logistics industry use technology originally developed more than a decade ago. Kopin Technologies was one of the first, with its Golden-i product introduced in 2008. Kopin has licensed their display technology to other vendors that have taken the next steps in commercializing smart glasses for consumer and business applications – including Vuzix and Recon (which was purchased by Intel).
There are many new entrants to the space, but the Google subsidiary X (its so-called moonshot factory), Recon, and Vuzix are leading the charge in the industrial smart glass market. All three offer glasses that include a small HUD screen positioned on a frame approximately one inch in front of one of the user's eyes. Since they provide a limited viewing area, these devices aren't intended to support augmented reality.
Companies that are delivering smart glasses for augmented reality include Microsoft and Vuzix. Microsoft's HoloLens is touted as a device for mixed reality that allows users to interact with virtual, holographic images (in their actual living rooms). On the enterprise side, Vuzix Blade™ AR smart glasses provide a full "see-through viewing experience" that allows you to see overlaid information across the full field of vision. Unlike HoloLens, which has a wrap-around glass style, Blade is more like wearing regular glasses, where the lenses are a see-through display surface. Vuzix is currently shipping a developer version of Blade™ to development partners, like Lucas.
Vuzix Blade AR Smart Glasses are now available to developers.
The rate of improvement in smart glasses over the past five years has been remarkable, yet there are still some limitations:
- Speech Recognition. Using the embedded microphones in the glass frame provides adequate recognition, but recognition accuracy in our testing is substantially lower than what we see using a headset with a noise cancelling microphone on the end of the boom, near the user’s mouth.
- Barcode Reading. The smart glasses are capable of reading barcodes using the embedded camera, but the image-based barcode scanning is slower, less robust, and less accurate than traditional warehouse scanners. As a result, most current applications of smart glasses use an external barcode scanner (typically a ring scanner to minimize handling).
- Battery Life. Finally, current smart glasses cannot run a full shift on a single battery charge. Some models can be powered by an external battery pack (connected via wire), and others have a swappable battery to compensate.
As a result of these limitations, there are trade-offs in using smart glasses today. There is also some work that software developers need to do to fully integrate heads-up display information within an optimized user workflow alongside scanning, voice, and other technologies. These are all manageable challenges. The bigger challenge – and opportunity – lies in making the jump from HUD to AR, and capitalizing on the long-term potential of vision recognition.
The Future Of Vision Picking For DC Operations
The next big step in the development of vision picking solutions will be the gradual addition of AR to heads-up display information. We also anticipate that camera-based scanning will continue to improve, eliminating the need for an external scanner in many applications.
Looking out 3-5 years, future AR picking systems will begin to fully incorporate vision recognition, providing better location awareness and another layer of automated task confirmation. Vision recognition will improve the accuracy and utility of AR cues delivered in a user's line of sight. For example, the system will correctly (and immediately) recognize which slot to highlight for the next pick (or picks).
Vision recognition also holds the promise to provide a new level of automated activity confirmation. Vision will initially supplement speech, scanning and other forms of task confirmation, but we can envision a system that will automatically validate activities as a worker moves a carton from a pick location to a pallet using automatic vision recognition technology.
IT, engineering, logistics, and operations executives should continue to monitor the progress of smart glass hardware and be aware that a heads-up display is a complement to scanning, voice and speech recognition, rather than a replacement.
Don’t make the mistake of believing that HUD or AR is the best technology for every workflow challenge, especially in its early phase of development. In the end, the reality of current technology doesn’t quite measure up to the hype, but there is little question that “vision” is the next big thing in manual picking technology. The question is not if, but when, it will enter the mainstream of the logistics industry.