How to Measure Barcode Scanning Performance


Quantifying barcode scanning performance can be challenging: developers and product teams often don’t know where to start, and users can’t always explain what they want in technical detail. All we can really agree on is that frontline workers want to scan fast, and customers using self-scanning apps expect a flawless user experience.

This guide takes you into the depths of performance testing for camera-based barcode scanning applications. Whether you’re integrating scanning solutions for retail, warehousing, last-mile delivery, or any other industry, you’ll learn the key factors for conducting systematic, use-case-based testing to determine how your app meets business and user requirements.

Robot arm and smartphone being used to test Scandit barcode scanning performance on a set of brown cardboard boxes.

Why is performance testing important?

Recent Scandit research shows that the top scanning issues frustrating retail associates relate to the performance of their apps. Reducing these frustrations starts with understanding how to measure different aspects of barcode scanning performance to prioritize performance optimization work.

Bar chart illustrating the top pain points relating to barcode scanning in retail. The top two are “scanning one barcode at a time” and “poor scanning performance”.

What barcode scanning performance measures should you test?

The primary measures for evaluating barcode scanning performance are metrics related to how quickly and accurately your app captures barcodes. These metrics are usually measured from the moment the user sees the barcode on their screen to the moment they know it has been recognized correctly.

These performance outcomes vary depending on your organization and use cases. Some teams want to measure the number of barcodes scanned in a given amount of time; others want to know their scan error rates.

These metrics support, but are not the same as, the business-level key performance indicators (KPIs) specific to your industry. For example, a retailer with an in-store order fulfillment use case may want to measure scan speed metrics in order to support a KPI of a 15-minute average order picking time.

Common performance metrics are listed below (a sketch for instrumenting them in code follows the list):

  • Total time to scan: The time it takes the scanner to adjust to the scene containing the barcode, process the camera images, extract the label data, and deliver results to your app (usually through a callback method). This metric directly relates to user productivity and satisfaction with your app.
  • Scan accuracy: The rate of false positives (reported captures of barcodes that don’t exist) and misreads (incorrectly decoded data) over a series of captures.
  • Maximum scan range: The maximum distance a barcode can be captured successfully. This is influenced by the barcode’s characteristics (size, print quality, type), environmental conditions (lighting, dust), and the capabilities and tuning of the device camera (exposure, focus handling, resolution).
  • Low-light performance: The ability to detect and capture barcodes under poor or low-light conditions.
  • Battery consumption: The power efficiency of your app during barcode scanning operations; influenced by CPU cycles, display backlighting, camera use, and other device characteristics.
  • Ability to scan difficult barcodes: The scanning software’s ability to decode hard-to-scan barcodes, such as damaged labels, labels affected by glare, tiny barcodes, and electronic shelf labels (ESLs).
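
As a starting point for collecting these numbers, the sketch below wraps a scan result callback with simple timing instrumentation. It is a minimal sketch under assumptions: the ScanListener interface and ScanMetric type are hypothetical placeholders, so substitute your SDK's actual result callback and your own logging pipeline.

```kotlin
// Minimal timing sketch. ScanListener stands in for your SDK's result
// callback; ScanMetric is a hypothetical record type for your own logs.
interface ScanListener {
    fun onBarcodeScanned(data: String, symbology: String)
}

data class ScanMetric(val data: String, val symbology: String, val totalTimeMs: Long)

class TimedScanListener(private val onMetric: (ScanMetric) -> Unit) : ScanListener {

    private var scanStartNanos = 0L

    // Call this when the camera preview showing the barcode becomes visible,
    // i.e. the moment the user "sees the barcode on their screen".
    fun markScanStarted() {
        scanStartNanos = System.nanoTime()
    }

    // Total time to scan: from markScanStarted() to the decoded result arriving.
    override fun onBarcodeScanned(data: String, symbology: String) {
        val totalTimeMs = (System.nanoTime() - scanStartNanos) / 1_000_000
        onMetric(ScanMetric(data, symbology, totalTimeMs))
    }
}
```

Comparing the logged values against known ground truth (which barcodes are actually present) also gives you the raw data for the scan accuracy metric.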

However, it’s critical to avoid focusing solely on these numbers. Examining actual use cases is often more effective at exposing issues than knowing the number of scans per second or false positive rates. For example, a customer cares more about getting the product information they need fast and accurately than knowing your app can scan 100 items per minute.

The following testing best practices help you align the metrics above with end-user expectations.

Best practice #1: Implement use-case-based testing

Barcode scanning happens in the real world, not on your desktop. To accurately assess barcode scanning performance, it’s recommended to adopt a use-case-based testing approach. Having actual users test the software in real environments ensures that the testing conditions mirror the scenarios they will encounter. It also fosters accurate, in-the-moment feedback.

This approach allows you to identify and address issues early in the development process, ensuring a smooth and efficient user experience after the application is deployed.

When designing your test cases, consider these goals (a minimal test-plan sketch in code follows the list):

  1. Test in the same environment where the solution will be deployed. This helps you get real insights into the performance and usability aspects of barcode scanning with real physical barcodes, actual lighting conditions, and realistic scan angles.
  2. Include a broad range of users to cover diverse abilities rather than relying on a single user who may not represent your user base. If your use cases include customers, bring them into your test plan or ask non-technical internal test users to act as customers. Each user should perform enough runs through the test cases to produce reliable results (at least ten), starting with two trial runs to familiarize themselves with the app and scanning process.
  3. Cover various scenarios, including different barcode symbologies, sizes, print quality, and backgrounds.
  4. Test the end-to-end system, from barcode capture to any back-end system updates. This ensures the data you capture is accurately reflected in all systems in a timely manner.
  5. Take network latency into account, as it may impact how quickly information is presented on screen or stored in the back-end system.
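
One way to keep these dimensions explicit is to encode them in a small test-plan model that your logging and reporting can share. Everything below is a hypothetical sketch; the type and field names are illustrative, so adapt them to your own environments and symbologies.

```kotlin
// Hypothetical test-plan model covering the dimensions above:
// environment, lighting, network conditions, symbologies, and run counts.
data class TestEnvironment(val name: String, val lighting: String, val networkProfile: String)

data class TestCase(
    val id: String,
    val environment: TestEnvironment,
    val symbologies: Set<String>,       // e.g. "EAN13", "CODE128"
    val runsPerUser: Int = 10,          // at least ten, after two trial runs
    val trialRunsPerUser: Int = 2,
)

// Example: replenishment scanning on the bottom shelf before store opening.
val bottomShelfReplenishment = TestCase(
    id = "retail-bottom-shelf",
    environment = TestEnvironment("store aisle", "dimmed pre-opening lights", "store Wi-Fi"),
    symbologies = setOf("EAN13", "UPCA"),
)
```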

Some examples of test cases for different industries are:

  • Retail: Include tests accounting for scanning from the bottom shelf and situations where employees perform replenishment tasks before store opening (e.g., with lights slightly dimmed).
  • Last-mile deliveries: Include conditions with poor or no internet connectivity.
  • Logistics: Include scenarios inside and outside the warehouse, such as loading vehicles in a garage versus outdoors.

Testing on your desktop

If real-world testing will be performed later and you need to test barcode scanning on your desktop first, consider these guidelines (a sketch for generating clean test barcodes follows the list):

  • Print sample barcodes from high-resolution images of the real barcodes to avoid introducing unwanted artifacts.
  • The empty space around the barcode (the “quiet zone”) should be large enough to scan and free of any marks, text, or designs.
  • As far as possible, replicate the lighting conditions of your real environments.
  • Avoid scanning barcodes on your device screen, as performance may be impacted by light interference patterns (the moiré effect) and other artifacts.
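
If you need printable test barcodes, an open-source encoder such as ZXing can generate them with a controlled quiet zone. The sketch below assumes the ZXing core and javase artifacts are on the classpath and uses an example EAN-13 value; swap in the formats and values from your own test plan.

```kotlin
import com.google.zxing.BarcodeFormat
import com.google.zxing.EncodeHintType
import com.google.zxing.MultiFormatWriter
import com.google.zxing.client.j2se.MatrixToImageWriter
import java.nio.file.Path

fun main() {
    // MARGIN controls the blank quiet zone around the symbol; keep it generous.
    val hints = mapOf(EncodeHintType.MARGIN to 10)

    // Encode an example EAN-13 at high resolution so the printout stays crisp.
    val matrix = MultiFormatWriter().encode(
        "4006381333931", BarcodeFormat.EAN_13, 1200, 600, hints
    )

    // Write a PNG suitable for printing; avoid rescaling it afterwards.
    MatrixToImageWriter.writeToPath(matrix, "PNG", Path.of("ean13-test.png"))
}
```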

Best practice #2: Test with real user devices

The performance of barcode scanning software is tied to device hardware. This includes the specifications and configuration of its CPU, GPU, RAM, chipset, camera, and autofocus technology.

To ensure your app will work in production, deploy and test your barcode scanning software on the same devices your users have. This includes all models and specifications so that users with older, lower-end devices aren’t left with apps they cannot use.

Consider these criteria when specifying your test hardware:

  1. Ensure that all devices meet the minimum recommended specifications outlined by the barcode scanning SDK. If you have devices below these specifications, you may need to adjust your test metrics to account for performance degradation. The system requirements for Scandit software are listed in its documentation.
  2. Avoid testing on developer devices, as they may have different specifications than user devices and skew performance metrics.
  3. Avoid displaying labels on computer monitors to test barcode scans. Instead, use printed barcode labels in various positions, conditions (obscured or torn), and lighting environments.

Other performance optimizations include enabling only the required symbologies, configuring the device camera with recommended settings, and adopting UX best practices to make users’ lives easier. Our blog post on performance optimization explains these in greater detail, and a minimal configuration sketch follows below.
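
As an illustration of restricting active symbologies, here is a hypothetical settings object. The ScannerSettings and Symbology names are placeholders, not a real SDK API, so check your scanning SDK's reference for its actual configuration types.

```kotlin
// Hypothetical configuration sketch; real SDKs expose their own settings types.
enum class Symbology { EAN13, UPCA, CODE128, QR }

data class ScannerSettings(
    val enabledSymbologies: Set<Symbology>,
    val codeDuplicateFilterMs: Int = 500,   // ignore repeat reads of the same code
)

// Enable only what the use case needs: fewer active symbologies means less
// decoding work per camera frame and a lower chance of misreads.
val retailShelfSettings = ScannerSettings(
    enabledSymbologies = setOf(Symbology.EAN13, Symbology.UPCA),
)
```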

Test Scandit’s performance for yourself

Try for free


Best practice #3: Use the latest version of scanning software

Using the latest version of your chosen barcode scanning software ensures you have the latest optimizations, bug fixes, and hardware-specific adaptations to maximize performance.

For example, the latest versions of Scandit software always include up-to-date algorithms and performance tuning, so your app can take advantage of the latest hardware and software platforms.

Best practice #4: Perform systematic testing

“Systematic techniques rigorously create tests while random approaches generate arbitrary data.”

The Importance of Software Testing, IEEE Computer Society

Systematic testing breaks test cases down into atomic steps that validate a single feature or code change. For example, it’s much faster to spot a gap in barcode symbology support if you’re testing symbologies individually rather than all at once. And it’s easier to isolate a UX issue if your test user performs one scan operation versus jumping between scenarios.

A systematic approach helps track that all your features and use cases are tested and isolates issues to specific components, making them easier to fix.
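
For example, symbology coverage can be exercised one format at a time with a parameterized test. The sketch below uses JUnit 5; decodeFixture and expectedValueFor are hypothetical helpers you would wire to your scanning SDK and to recorded reference images.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

// Hypothetical helpers: connect these to your SDK and your fixture images.
data class DecodedBarcode(val data: String)
fun decodeFixture(path: String): DecodedBarcode = TODO("decode $path with your scanning SDK")
fun expectedValueFor(symbology: String): String = TODO("ground-truth value for $symbology")

class SymbologyCoverageTest {

    // One atomic test per symbology makes gaps in support easy to spot.
    @ParameterizedTest
    @ValueSource(strings = ["EAN13", "UPCA", "CODE128", "QR"])
    fun `each symbology decodes its reference fixture`(symbology: String) {
        val result = decodeFixture("fixtures/$symbology.png")
        assertEquals(expectedValueFor(symbology), result.data)
    }
}
```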

Best practice #5: Choose the right test methodology for your use case

There are different approaches to testing barcode scanning solutions, so it’s important to find the one that best matches your use cases and chosen metrics.

The following methodologies are commonly used:

  • Time and motion studies use direct observation of barcode scanning to measure performance in a simulated environment. For example, this can be a “scan parcours” of scanning a fixed number of barcodes. Time studies use timekeeping devices to measure how long tasks take, and motion studies film activities to identify where users struggle or slow down in the scanning process. Due to the simulated nature of these studies, great care needs to be taken when setting them up and interpreting results.
  • Timed process runs (also called process A/B tests) use direct observation of barcode scanning and employee satisfaction measurements within an actual process environment. KPIs are measured while a worker performs a task in their usual environment with their usual technology (the A test) and compared to the results with other technologies (the B test). This test method is very reliable when covering individual, time-boxed processes (a small comparison sketch follows this list).
  • Site A/B tests are the most sophisticated tests. In them, all relevant KPIs of a site, such as a store or warehouse, are measured for an extended period while employees use different technologies. The results are compared to gain reliable insight into which technology drives better outcomes.
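
To report a timed process run, the A and B measurements can be compared with a few lines of code. The timings below are placeholders for illustration, not real results.

```kotlin
// Compare median task times from an A test (current technology) and a B test.
fun compareProcessRuns(aTimesSec: List<Double>, bTimesSec: List<Double>) {
    fun median(xs: List<Double>): Double = xs.sorted().let { s ->
        if (s.size % 2 == 1) s[s.size / 2] else (s[s.size / 2 - 1] + s[s.size / 2]) / 2
    }
    val a = median(aTimesSec)
    val b = median(bTimesSec)
    println("A median: %.1fs, B median: %.1fs, improvement: %.1f%%"
        .format(a, b, (a - b) / a * 100))
}

fun main() {
    // Placeholder timings for ten runs of each process variant.
    compareProcessRuns(
        aTimesSec = listOf(42.0, 40.5, 45.2, 39.8, 44.1, 41.0, 43.3, 40.2, 42.7, 41.9),
        bTimesSec = listOf(31.2, 29.8, 33.0, 30.4, 32.1, 28.9, 31.7, 30.0, 32.5, 29.5),
    )
}
```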

Example test case: Barcode scanning on retail shelves

To illustrate how you can set up a use-case-based test scenario, here is an example test case covering a barcode scanning scenario for inventory counts of products on a retail shelf.

Test retail shelves used to conduct barcode scanning performance testing.

If you’re just starting to develop a barcode scanning application, you may want to test using a Scandit sample from GitHub. These samples have best practices for application design, UX, and performance baked in, and come pre-configured to run on all our supported platforms. Using a Scandit demo app is another option.

Test setup

  1. Arrange a retail shelf mockup with various products, ensuring that a diverse range of barcode types matching your requirements (e.g., EAN, UPC, and QR codes) is present.
  2. Place the products in different positions on the shelf, simulating real-world product placement. Include barcodes that are partially obscured, angled, and positioned at different heights.
  3. Ensure the lighting conditions in the test environment match those typically found in the target setting. You can cycle through different lighting levels systematically to know when performance starts to degrade.
  4. Install the barcode scanning application on each test device, ensuring it meets the minimum device requirements and is properly configured and connected to any backend systems. (The application might allow users to scan barcodes individually using a tool such as SparkScan, or batch scan multiple items simultaneously using a tool such as MatrixScan Count.)

Test execution

  1. Instruct the test users to perform barcode scans as they would in a real-life inventory count. This may involve picking up products from the shelf, scanning barcodes at different angles, and dealing with any challenges (e.g., glare, damaged barcodes).
  2. Have the test users scan a predefined set of products, capturing the time taken for each scan and noting any issues encountered. If connected to a back-end inventory system or ERP, verify its data matches the test products.
  3. Let the test users do two practice runs to familiarize themselves with the setup. Each user should then record at least ten test runs to achieve a statistically reliable result.
  4. Repeat the test with multiple users and environmental conditions to gather a diverse set of performance data.

Test data collection and analysis

  1. Record the following data points for each barcode scan:
    • Time to the first successful scan
    • Number of attempts required for a successful scan
    • Any errors or false positives encountered
    • Any feedback from the test user
  2. Collect and analyze the data points to identify performance bottlenecks and common user issues, such as consistently slow scan times for certain barcode types or conditions (see the aggregation sketch after this list).
  3. Compare these results against the desired performance metrics and business requirements to determine if the barcode scanning integration meets the necessary standards.
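
The sketch below shows one way to aggregate those data points per symbology. The ScanRecord fields mirror the list above and are hypothetical, so rename them to match whatever your app actually logs.

```kotlin
// Hypothetical per-scan record matching the data points listed above.
data class ScanRecord(
    val symbology: String,
    val timeToFirstScanMs: Long,
    val attempts: Int,
    val falsePositive: Boolean,
    val userFeedback: String? = null,
)

// Aggregate by symbology to expose bottlenecks, e.g. consistently slow formats.
fun summarize(records: List<ScanRecord>) {
    records.groupBy { it.symbology }.forEach { (symbology, rows) ->
        val medianMs = rows.map { it.timeToFirstScanMs }.sorted()[rows.size / 2]
        val meanAttempts = rows.map { it.attempts }.average()
        val falsePositiveRate = rows.count { it.falsePositive }.toDouble() / rows.size
        println(
            "$symbology: median time %d ms, mean attempts %.1f, false positives %.1f%%"
                .format(medianMs, meanAttempts, falsePositiveRate * 100)
        )
    }
}
```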

How Scandit tests barcode scanning performance

The Scandit development team uses an extensive suite of tests to measure scan performance on a wide range of devices and reflect real use cases in data capture. With support for over 20,000 smart device models and feedback from over 2,000 customer accounts with more than 3.5 million end users worldwide, our test setups mimic many different use cases and workflows based on real-world situations.

In-house, we perform two types of testing:

  • Robot-based testing for the most-used devices in the market, supporting consistent, repeatable, and measurable actions to extract in-depth performance metrics.
  • Use-case-specific testing for specific customer scenarios and environments.

Our Enterprise-Level Success Team supports customers daily in executing their performance testing, from setting up environments to making development recommendations based on the results.

Measure once, deploy successfully

Measuring barcode scanning performance is a critical part of integrating scanning capabilities into enterprise applications. By adopting a systematic, use-case-based testing approach and knowing your key performance metrics, you can ensure that your application meets the requirements of your business and users.

While hardware specifications play a role in barcode scanning performance, the software used to read the barcode is equally important. Scandit’s barcode scanning software is designed to deliver enterprise-grade scanning performance across a wide range of smart devices.

To test the performance for yourself, sign up for a free trial.

