We all know it’s not hard to scan a single QR code using a smartphone. But just as cooking a meal for a friend is different from professionally catering for 500 people, scanning the odd code isn’t the same as scanning 500 shoe boxes when stock taking in a poorly lit backroom.
Smart devices are more than capable of scanning at scale. Building a reliable, usable app that frontline workers adopt, though, requires some creative and unusual UX approaches.
In this Q&A, Maiya Shur, Scandit Director and Head of UX, shares her insights on how to design scanning UI, which pitfalls to avoid, and how and why we developed SparkScan, our pre-built barcode scanning component.
Let’s start at the beginning. Can you tell us a bit about your role at Scandit?
I lead a team of UX designers and researchers. Our main focus is bringing deep workflow understanding and user focus to product development.
Scandit builds products that enterprises implement for employees and consumers. This means that, like many B2B companies, our customers and our end users are not the same people. My team’s mission is to bring end-user understanding and empathy to the entire organization.
Our most important job is looking after the camera interface – i.e. how users position and use the camera on their smart device to capture barcodes (or IDs). Scandit’s core computer vision software can decode barcodes incredibly fast and accurately from a camera feed, but effective scanning at scale requires more than just that.
What’s different about UX design at Scandit?
In our designs, we’re essentially using a device camera to connect the three-dimensional, physical world of products, packages and pallets to a digital world.
Scanning apps have similarities to other camera-based apps such as Snapchat – but they’re very different from an e-commerce app or something like WhatsApp. If you’re used to designing apps used solely on screen and apply the same UX processes to scanning design, you run the risk of getting it wrong.
I also think it’s fair to say we find ourselves designing for users who aren’t often designed for – warehouse workers, store associates, delivery drivers.
That’s exciting, but it also comes with challenges. It can be hard to access real work settings, and conventional user research methods, such as mockups, don’t always work.
You have to be creative. For example, if we can’t access real work settings we look for “day in the life” YouTube videos posted by frontline workers. (Luckily, in 2023 everyone is a content creator.)
We also go to great lengths to recreate realistic set-ups in our lab and recruit our test participants from platforms for hiring forklift drivers, not from user testing panels.
So how do you design for this hybrid digital/physical user experience?
Like all good designers, we start from user needs, and these vary by role, industry, and what is being scanned. Who is using the app and what are they using it for? How do they need to interact? What actions does the user need to take to get their job done?
What’s a bit more unusual is that we also consider what conditions they’re scanning in – lighting, angles, distances and so on. Understanding the physicality of scanning is very important. For example, what other things are users doing at the same time? Are they lifting boxes or heavy cases?
These situational and environmental factors all impact the way a person holds a smart device, presses the scan button, positions it to capture data accurately, and avoids scanning unintended codes.
What are the basic principles of smart device scanning?
It almost goes without saying that scanning at scale needs to be fast and efficient. But one of our biggest learnings has been that users’ perception of this depends as much on how easy it is to aim and trigger as on how quickly the software can capture a barcode.
All our scanning interfaces follow three basic principles we’ve uncovered:
- Use a large, ergonomic touch area: When starting the camera hundreds of times a day, a large, ergonomic touch area is a must. A small icon or button just won’t do.
- Assist with aiming: It’s important that the user knows scanning is live and how far away to hold the camera. When scanning batches of barcodes, the user needs to be guided to the optimal distance, perhaps through an initial calibration.
- Make feedback unmissable: Our users’ environments are often busy and loud. Scan feedback should be obvious and delivered through visual, sound and haptic feedback.
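The feedback principle above can be sketched as a small dispatcher that fires every channel on each successful scan. This is a generic Python illustration with hypothetical callback names, not Scandit’s actual API – the real channels would be platform-specific (an overlay flash, a beep, a vibration).

```python
from typing import Callable, List

class ScanFeedback:
    """Fire every registered feedback channel on a successful scan.

    The channel callbacks are hypothetical placeholders for
    platform-specific visual, sound and haptic implementations.
    """

    def __init__(self) -> None:
        self.channels: List[Callable[[str], None]] = []

    def register(self, channel: Callable[[str], None]) -> None:
        self.channels.append(channel)

    def on_scan(self, barcode: str) -> None:
        # Deliver feedback on every channel so the scan is unmissable
        # even in loud, busy environments where one channel is missed.
        for channel in self.channels:
            channel(barcode)

# Usage: record fired channels to show all three trigger together.
events = []
feedback = ScanFeedback()
feedback.register(lambda code: events.append(("visual", code)))  # e.g. green overlay
feedback.register(lambda code: events.append(("sound", code)))   # e.g. beep
feedback.register(lambda code: events.append(("haptic", code)))  # e.g. vibration
feedback.on_scan("0123456789012")
```

Registering channels separately keeps each one independently replaceable – for example, muting sound in quiet environments without touching the visual or haptic paths.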
Beyond the basics, you need to decide how and where to display results, how to indicate progress, when to pause between scans versus keep the scanner running, how to manage idle time, and how to avoid unintentional scans.
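One of those decisions – avoiding unintentional scans while the scanner keeps running – can be sketched as a time-window filter. This is a generic Python illustration (not Scandit’s implementation): the same barcode value is accepted only once per window, since a continuously running camera will decode the code in view many times per second.

```python
import time

class DuplicateScanFilter:
    """Reject repeat reads of the same barcode within a short window.

    Refreshing the timestamp on every read (including rejected ones)
    keeps suppressing a code for as long as it stays in frame; it is
    re-accepted only once it has been out of view for the full window.
    """

    def __init__(self, window_ms: int = 500, clock=time.monotonic):
        self.window_s = window_ms / 1000.0
        self.clock = clock                     # injectable for testing
        self.last_seen: dict = {}              # barcode -> last read time

    def accept(self, barcode: str) -> bool:
        now = self.clock()
        last = self.last_seen.get(barcode)
        self.last_seen[barcode] = now
        # Accept if never seen, or the last read is outside the window.
        return last is None or (now - last) >= self.window_s
```

The injectable clock is a small design choice that makes the timing logic testable without real waits.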
Like much interface design, it’s often the attention to subtle details that initially seem “at the margin” which make the difference between poor and great usability.
In a recent project, we added a 150-millisecond delay between providing scan feedback and closing the camera. This tiny amount of time was what made the difference between users saying “it scans too fast,” and feeling secure that the right object was scanned.
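The pattern described above can be sketched as follows – a generic Python illustration with hypothetical callback names, not the project’s actual code. Feedback fires immediately, but camera teardown is deferred so the confirmation registers before the preview disappears.

```python
import time

FEEDBACK_TO_CLOSE_DELAY_S = 0.150  # the 150 ms delay described above

def complete_scan(barcode, show_feedback, close_camera, sleep=time.sleep):
    """Confirm the scan first, then close the camera after a short delay.

    `show_feedback` and `close_camera` are hypothetical callbacks into
    the host app's UI; `sleep` is injectable so the sequencing can be
    tested without a real wait.
    """
    show_feedback(barcode)            # visual/sound/haptic confirmation
    sleep(FEEDBACK_TO_CLOSE_DELAY_S)  # let the confirmation register
    close_camera()                    # only now dismiss the preview
```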
Need help with scanning UX?
Our pre-built component SparkScan solves the most common scanning UX challenges for you.
What are the pitfalls you’ve seen when designing scanning UX?
The single biggest thing is designers thinking they can envision these interaction patterns while sitting at their laptop. That’s a sure recipe for low adoption.
When developing e-commerce, social or chat apps, you don’t need to worry too much about what else the user is doing. But designing for our target users requires extensive observational research to understand how humans interact with physical objects using a camera.
What matters most to our users happens outside the app – picking a physical product in store, delivering a package, and so on. That’s a real change in mindset, because the app needs to be an unobtrusive, helpful assistant rather than the center of the experience.
What are the consequences of poor scanning user experience?
Low adoption is far and away the biggest risk. Frontline workers and consumers are busy people, and I’ve seen many times how user frustration with sub-par scanning causes abandonment, inaccurate task completion or an increase in time spent on-task.
How have you put these insights into practice?
We built SparkScan!
SparkScan is a pre-built barcode scanner component with a built-in user interface that solves many common scanning UX challenges.
For most of our customers, scanning is a small but critical component of their application. It’s unrealistic to expect their UX designers to have time to develop the expertise we’ve acquired over years of building scanning interfaces and observing them in use.
So, we identified the need to create a scanning component that solved common problems upfront – and, importantly, that people could integrate without having to redesign their existing UI to accommodate scanning.
I don’t believe there’s any other scanning interface quite like SparkScan. There are so many UI details I could talk about, but the main features are a large, movable, semi-transparent scan button and a small camera preview in the top-right corner (next to the actual camera). These float as a layer on top of any existing interface.
The magic of SparkScan is that you can scan at close range without even looking at the camera preview. You aim with the device itself, keeping your eyes on the objects you’re picking, moving or counting.
What’s next for UX at Scandit?
SparkScan is just one of a range of pre-built components we’re building out. These not only make scanning on smart devices easy to adopt, but also make data capture smarter.
We’re looking closely at the user experience of specific workflows (such as receiving goods or picking orders). Then we’re building out interfaces that shift tedious tasks to technology and upskill frontline workers with useful real-time insights. There’s more to come! I’m excited about what’s next.