THE PROBLEM
Your device list is a guess. Your device strategy needs to be a decision.
Most mobile QA teams pick a handful of devices, run every test on every device, and call it coverage. The problem with this approach isn't effort; it's distribution. You're spending as much CI time on a device that represents 2% of your users as on one that represents 30%.
When a device-specific bug ships to production, it almost never comes from your top-tested device. It comes from the one that got equal weight when it deserved more, or no weight at all when it deserved some.
"Fixed device lists create blind spots you don't see until a user does." β Asad Abrar, Founder & CEO, Drizz
The fix isn't more devices. It's smarter distribution.
THE SCALE OF THE PROBLEM
24,000 Android variants. The challenge isn't access, it's prioritization.
There are over 24,000 distinct Android device models in active use globally, spanning Samsung, Xiaomi, Oppo, Vivo, and OnePlus, each running its own custom skin with its own quirks. No team tests on 24,000 devices, nor should they.
The real problem is that even when teams have access to the right devices, they treat every device equally. A Samsung Galaxy A-series device holding 25% of your traffic gets the same test run as a Pixel device holding 3%. That's not a coverage strategy, it's a spreadsheet that happens to run.
The teams that catch device-specific bugs before users do are the ones who've mapped their test distribution to their traffic distribution. Until now, doing that well required manual effort and constant maintenance. Drizz builds that capability directly into how tests are scheduled and executed.
WHAT DRIZZ DOES: RANDOMIZED DEVICE TESTING
Three modes, full control.
Drizz's Randomized Device Testing gives your team three distinct ways to run tests across devices, and lets you combine them based on what each test actually needs.
1. Run tests on all devices: For your core regression suite (login, checkout, critical flows), you want confidence across the board. Drizz lets you run any test across your entire configured device set in parallel. Every device, same test, simultaneously. No queuing.
2. Run specific tests on specific devices: Some bugs are device-class specific. A payment rendering issue you've seen before on low-RAM devices. A camera flow that behaves differently on a specific OEM skin. Drizz lets you pin individual tests to individual devices, so targeted investigations don't require running your entire suite everywhere.
3. Assign weightage to devices based on your traffic: This is the core of the feature. You know which devices your users are on, from your analytics, your support tickets, your market research. Drizz lets you encode that knowledge as weightage. A device representing 30% of your traffic gets 30% of your test runs. A device at 5% gets 5%. Drizz distributes execution proportionally, automatically, every time.
The result: your CI coverage finally reflects your actual user base, without anyone having to manually adjust a device matrix every time the numbers shift.
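The weighted distribution described above boils down to proportional allocation: each device's share of test runs matches its share of traffic. Here is a minimal sketch of that idea in Python. The device names, traffic numbers, and the `allocate_runs` function are all illustrative assumptions, not Drizz's actual API.

```python
# Hypothetical sketch of traffic-weighted test allocation.
# Device names and weights are illustrative, not real Drizz configuration.

def allocate_runs(traffic_weights, total_runs):
    """Split total_runs across devices in proportion to traffic weight."""
    total_weight = sum(traffic_weights.values())
    return {
        device: round(total_runs * weight / total_weight)
        for device, weight in traffic_weights.items()
    }

# Weights can be raw traffic percentages; only their ratios matter.
traffic = {"Galaxy A54": 30, "Redmi Note 12": 25, "Pixel 7": 5}
allocation = allocate_runs(traffic, 120)
print(allocation)  # the 30%-traffic device gets half the weight-sum, so half the runs
```

With these sample numbers, the Galaxy A54 carries 30 of the 60 weight units and therefore receives 60 of the 120 runs, while the Pixel 7 at 5 units receives 10. The point of the sketch is that CI time tracks the ratios you declare, not a flat per-device split.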
HOW THIS IS DIFFERENT FROM WHAT EXISTS
Device farms give you access. Drizz gives you distribution.
BrowserStack, LambdaTest, Sauce Labs: these platforms solve the infrastructure problem brilliantly. You can access thousands of real devices on demand. But the question of how to distribute tests across those devices is entirely up to you. There's no weighting system. There's no proportional execution. You pick a list, you run it uniformly, you move on.
Firebase Test Lab runs a curated device set automatically based on global market share data. That's directionally right, but it's a generalized signal, not calibrated to your specific user base, your geography, your product tier.
Drizz sits in a different layer. It doesn't just give you the devices, it gives you a system for deciding how much testing attention each device deserves, and then executes against that decision automatically.
The time savings come from two places: parallel execution across devices instead of sequential, and elimination of wasted runs on low-traffic devices that were getting equal weight for no reason.
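The parallel-versus-sequential point can be made concrete with a small sketch: fan the same test out to every device at once instead of queuing them one after another. Everything here is a stand-in; `run_test` is a placeholder, and the device list is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative placeholder for a real device-test run.
def run_test(device):
    return f"{device}: passed"

devices = ["Galaxy A54", "Redmi Note 12", "Pixel 7"]

# Sequential: total wall time is the sum of every device's run time.
sequential_results = [run_test(d) for d in devices]

# Parallel: all devices run at once, so wall time is roughly the
# slowest single device's run time, not the sum.
with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    parallel_results = list(pool.map(run_test, devices))
```

Both loops produce the same results; only the wall-clock cost differs, which is where the first source of time savings in the paragraph above comes from.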
WHY THIS MATTERS
The bugs device-specific testing is designed to catch.
Device-specific bugs have a particular character: they're invisible until they're not. A layout that breaks on a specific screen resolution. A checkout flow that fails on a specific Android skin. An animation that stutters on a low-RAM device. None of these show up on your top device. They show up on the device that was getting 2% of your test runs when it should have been getting 20%.
Weighted device execution doesn't just save time. It changes which bugs you catch before they ship.
What changes for your team:
- Run everything everywhere when you need full coverage
- Pin targeted tests to specific devices when you need precision
- Assign traffic-based weightage so CI time goes where your users are
- All in parallel, no queuing, no waiting
GETTING STARTED
Live today. No infrastructure changes needed.
Randomized Device Testing is available to all Drizz users now. Set your device weightages once in the Drizz dashboard, and every test run distributes accordingly from that point forward. Your existing plain-English test steps work as-is; Drizz handles the scheduling and parallel execution.
To see it configured against your actual device mix, book a demo.