You have an app. You need to test it. You're not sure where to start.
Most guides on this topic list 12 abstract steps ("define objectives," "identify requirements," "develop a test strategy") that read like a project management textbook. This is different. It's the practical sequence a QA engineer or developer follows when testing a mobile app for the first time, starting from "I have a build file and no testing process" and ending at "my tests run automatically before every release."
If you already have a mature testing process and you're looking for optimization, see our mobile testing best practices or test automation strategy guides instead.
Step 1: pick your devices
Before you test anything, decide which devices you're testing on. This is the decision that determines whether your testing catches real user bugs or only catches bugs on your personal phone.
Check your analytics. If you use Firebase, go to Analytics > Demographics > Device model. If you use Mixpanel or Amplitude, look at the device breakdown report. You'll see which phones your users actually have. For most apps targeting India and Southeast Asia, you'll see Samsung Galaxy A-series, Xiaomi Redmi, Realme, and a mix of iPhones. For US-focused apps, it's iPhone, Samsung Galaxy S-series, and Pixel.
Pick at least 4 devices:
- One Samsung (One UI behavior, font scaling, battery optimization)
- One stock Android (Pixel, baseline behavior)
- One budget device with 3-4 GB RAM (where performance problems surface)
- One iPhone (iOS rendering differences)
If you don't have physical devices, use emulators and simulators for development and a cloud platform (like Drizz) for real-device testing before release.
Step 2: install the build and do a first pass
Get the build file (APK for Android, IPA for iOS) onto your test device. On Android, sideload the APK or use Firebase App Distribution. On iOS, use TestFlight or install directly from Xcode.
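If you're sideloading, the adb step is a one-liner; here's a minimal Python wrapper around it (a sketch assuming adb is on your PATH and USB debugging is enabled on the device):

```python
# Sideload an APK onto a connected Android device via adb.
# Assumes adb is on your PATH and USB debugging is enabled.
import subprocess
import sys

apk = sys.argv[1] if len(sys.argv) > 1 else "app-release.apk"  # path to your build
result = subprocess.run(["adb", "install", "-r", apk])  # -r replaces an existing install
print("Installed" if result.returncode == 0 else "Install failed")
```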
Once installed, do a quick smoke test: launch the app, sign up or log in, navigate to the main screens, and confirm core flows work. This takes 10-15 minutes. If the app crashes at launch or login is broken, stop. Send it back to development. There's no point in deeper testing on a build that can't complete basic flows.
This first pass answers one question: is this build stable enough to test?
Step 3: write your test cases
Now you know the build is stable. Next, document what you're going to test. For each core flow (login, search, checkout, payment, onboarding), write a test case that includes: preconditions (user is logged out, Wi-Fi connected, test account exists), steps (tap email field, type credentials, tap Login), and expected result (home screen loads within 3 seconds, welcome message shows user's name).
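If you want to keep test cases machine-readable from day one, the same structure fits in a few lines of Python (the ID and wording here are illustrative, not a required format):

```python
# An illustrative test case record: preconditions, steps, expected result.
login_test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "preconditions": ["user is logged out", "Wi-Fi connected", "test account exists"],
    "steps": [
        "Launch the app",
        "Tap the email field and type the test account's email",
        "Type the password and tap Login",
    ],
    "expected": "Home screen loads within 3 seconds; welcome message shows the user's name",
}
```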
If you're starting from zero, focus on 10-15 test cases covering the flows that matter most. Don't try to cover everything. Cover the paths that cost you money or users when they break: login, core feature, checkout, and payment.
For a reusable format, use our test case template. For organizing multiple test cases into a release plan, use our test plan template.
Step 4: test manually on your first device
Run through each test case on one device. This is manual testing. Open the app, follow the steps, check the results. Document what passes and what fails. Take screenshots of failures.
While you're at it, do some exploratory testing. Don't just follow the script. Try things the test cases didn't anticipate: paste a 500-character string into a field, rotate the device mid-form, kill the app during a payment, switch between Wi-Fi and cellular. Set a timer for 20 minutes and explore. You'll find bugs that scripted test cases never would have covered.
Record your findings in a session charter (see our exploratory testing guide for a template).
Step 5: repeat on your other devices
Now run the same core test cases on the other three devices in your matrix. You'll find bugs that only appear on specific hardware:
- The checkout button is hidden behind the keyboard on a Samsung Galaxy A14 because One UI handles keyboard insets differently than stock Android.
- The app takes 4.5 seconds to cold start on a budget device with 3 GB RAM but launches in 1.2 seconds on your Pixel.
- A font scaling setting on Samsung (set to "Large" by default in many markets) truncates a button label from "Place Order" to "Place Or..."
- A Xiaomi security popup blocks the app on first launch, and none of your test cases accounted for it.
These are bugs that emulators can't catch. One team found that 23% of their test failures came from device-specific rendering differences, not code bugs. Testing on one device catches code bugs. Testing on four catches device bugs.
Step 6: automate repetitive tests
After a few manual test cycles, you'll notice a pattern: every sprint, you're running the same 10-15 smoke and regression tests by hand. That's your signal to automate.
Automate tests that are repeatable, deterministic, and run more than 5 times. Start with smoke tests (10-15 tests, core flows, run on every build). Then add regression tests (tests for features that previously broke, run on every PR).
The tool you pick depends on your team. If you have automation engineers comfortable with code, Appium or Espresso work. If your team includes manual QA or PMs who should contribute tests, plain-English tools lower the barrier. With Drizz, a test is plain English: "Tap on Login," "Type 'user@test.com' in email field," "Validate 'Welcome' is visible." Vision AI executes the steps on real devices. No selectors. No Appium Inspector workflow. The popup agent handles OEM dialogs. Self-healing adapts when the UI changes.
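If you go the code route, here's roughly what the login smoke test from Step 3 looks like in Appium's Python client; the APK path and accessibility IDs are placeholders you'd swap for your own:

```python
# A minimal Appium (Python client) sketch of the login smoke test.
# The APK path and accessibility IDs below are illustrative placeholders.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()  # presets platformName=Android, automationName=UiAutomator2
options.app = "/path/to/app-release.apk"

driver = webdriver.Remote("http://localhost:4723", options=options)
driver.implicitly_wait(10)  # give screens time to load before each lookup
try:
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "email_field").send_keys("user@test.com")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "password_field").send_keys("secret")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "welcome_message").is_displayed()
finally:
    driver.quit()
```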
Morgan Ellis, a QA Engineering Lead, described the difference: "Writing tests in plain English made automation something the whole team could contribute to. We shipped 20 tests in a single day."
For a deeper comparison of automation tools, see our test automation framework guide.
Step 7: plug tests into your CI/CD pipeline
The final step is making tests automatic. Instead of a person triggering tests manually, your CI/CD pipeline triggers them on every build.
Set up your pipeline (GitHub Actions, Jenkins, Fastlane) so that:
- Code is committed
- The build compiles
- Smoke tests run on 2-3 real devices (2-3 minutes)
- If smoke passes, regression tests run on the full device matrix (20-30 minutes)
- If regression passes, the build is promoted (to TestFlight, Google Play internal track, or your staging environment)
- If any step fails, the build is rejected and the developer gets a notification with screenshots and failure details
This is shift-left testing in practice: testing happens automatically, early, and on every change. No manual trigger. No "we'll test it on Friday." Testing is part of the build process.
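The exact config depends on your CI system, but the gating order above reduces to something like this (a sketch; the shell scripts are hypothetical stand-ins for your test runner and release tooling):

```python
# Illustrative CI gate: run smoke, then regression, then promote.
# run_tests.sh and promote_build.sh are hypothetical placeholder scripts.
import subprocess
import sys

def step(cmd: list[str]) -> bool:
    """Run one pipeline step; True if it exits cleanly."""
    return subprocess.run(cmd).returncode == 0

def main() -> int:
    if not step(["./run_tests.sh", "smoke"]):       # 2-3 real devices, ~2-3 minutes
        print("Smoke failed: build rejected")
        return 1
    if not step(["./run_tests.sh", "regression"]):  # full device matrix, ~20-30 minutes
        print("Regression failed: build rejected")
        return 1
    step(["./promote_build.sh"])                    # TestFlight / Play internal track
    return 0

if __name__ == "__main__":
    sys.exit(main())
```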
After release, monitor crash-free rates in production. When a production bug is found, write a regression test for it. The bug gets fixed, the test prevents it from returning, and the suite grows with tests grounded in real user failures.
The sequence, summarized
- Pick devices from user analytics (4 minimum)
- Install and smoke test the build (10-15 minutes)
- Write 10-15 test cases for core flows
- Test manually on one device + 20 minutes exploratory
- Repeat on the other devices to catch hardware-specific bugs
- Automate smoke and regression tests
- Plug into CI/CD so tests run on every build
Steps 1-5 can start today with zero tooling. Steps 6-7 require a testing tool and CI pipeline. The whole process scales from "one person testing on their phone" to "automated suite running on real devices before every release."
For a full testing strategy (what types of tests, which tools, what cadence), see our test automation strategy. For deeper guidance on each testing type, functional vs non-functional testing, and what to test, we have dedicated guides.
FAQ
How do I start mobile app testing with no experience?
Start with manual testing. Install the app on a real device, walk through the core flows (login, main feature, checkout), and document what breaks. No tools needed. See step 2 above.
How many test cases do I need to start?
10-15 covering your core flows. Don't try to cover everything. Cover the paths that cost you money or users when they break: login, onboarding, core feature, checkout, and payment.
Should I test on emulators or real devices?
Both. Emulators for daily development (fast, free). Real devices before release (accurate, catches OEM-specific bugs). At minimum, test on one Samsung, one Pixel, one budget device, and one iPhone.
When should I start automating tests?
When you notice you're running the same 10-15 tests manually every sprint. Automate those first (smoke and regression), then expand. Don't automate before you have stable test cases.
How long does mobile app testing take?
A manual smoke test takes 10-15 minutes per device. Full manual testing across 4 devices takes 2-4 hours. Automated smoke runs in 2-3 minutes. Automated regression across 5 devices runs in 20-30 minutes.
What is the difference between manual and automated mobile testing?
Manual testing is a person following test steps on a device. Automated testing is a script or AI executing those steps. Manual finds unexpected bugs through exploration. Automated prevents known bugs from returning through regression. You need both.


