Writing test cases for mobile apps follows the same principles as any software testing, but mobile adds constraints that change how you write them. The device matters. The OS version matters. The network condition matters. The OEM skin matters. A test case that says "verify login works" is useless on mobile, because login might work on a Pixel and fail on a Samsung where the keyboard covers the "Log in" button.
If you need the test case format (8 fields, template structure), see our test case template guide. This guide is about craft: how to think about what to test, how to write steps that survive across devices, and how to avoid the mistakes that make test cases useless six months later.
Rule 1: one action per step
The most common mistake is cramming multiple actions into one step. "Open app, log in, and navigate to checkout" is three steps disguised as one. When the test fails, you can't tell which action broke.
Bad: "Open app and log in with valid credentials, then go to cart."
Good:
- Launch app
- Wait for login screen to load
- Tap email field
- Type "user@test.com"
- Tap password field
- Type "Test@1234"
- Tap "Log in"
- Wait for home screen to load
- Tap cart icon
Each step is one action. When step 7 fails, you know the "Log in" button is the problem, not the email field or the navigation.
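The same structure maps directly onto automation: one call per line, so a failure points at exactly one action. Here's a minimal sketch using the Appium Python client; the server URL, device name, app path, and locators are all assumptions for illustration:

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.platform_name = "Android"
options.device_name = "emulator-5554"        # assumed device
options.app = "/path/to/app-under-test.apk"  # assumed build path

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    driver.implicitly_wait(10)  # covers the "wait for screen to load" steps

    email = driver.find_element(AppiumBy.XPATH, '//*[@text="Email"]')
    email.click()                      # Tap email field
    email.send_keys("user@test.com")   # Type "user@test.com"

    password = driver.find_element(AppiumBy.XPATH, '//*[@text="Password"]')
    password.click()                   # Tap password field
    password.send_keys("Test@1234")    # Type "Test@1234"

    driver.find_element(AppiumBy.XPATH, '//*[@text="Log in"]').click()  # Tap "Log in"
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Cart").click()      # Tap cart icon
finally:
    driver.quit()
```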
Rule 2: include device and OS in preconditions
On web, preconditions are about browser and user state. On mobile, you also need device model, OS version, and network condition. A test that passes on a Pixel 8 with Android 15 might fail on a Samsung Galaxy A14 with Android 13 because of One UI's font scaling defaults.
Bad: Preconditions: "User is logged out."
Good: Preconditions: "Samsung Galaxy A14, Android 13, One UI 5. Font size: default. Network: Wi-Fi. App installed (v4.2). User logged out. Test account: user@test.com / Test@1234."
This lets anyone re-run the test on the exact same setup. If the test passes on one device and fails on another, you know the difference is the device, not the steps.
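In an automated suite, the same preconditions can be pinned in the session setup itself. A minimal sketch with Appium's UiAutomator2 options, mirroring the "Good" preconditions above; the server URL and app path are assumptions, and settings like font size and network stay in the written preconditions since they're device-level:

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

options = UiAutomator2Options()
options.platform_name = "Android"
options.platform_version = "13"             # Android 13
options.device_name = "Samsung Galaxy A14"  # exact device model
options.app = "/builds/app-v4.2.apk"        # app v4.2, assumed path
options.no_reset = False                    # fresh app state: user logged out

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
```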
Rule 3: use exact text from screen, not paraphrases
The step should reference exactly what the user sees. If the button says "Place Order," the test case says "Tap 'Place Order.'" Not "tap order button" or "submit order." Paraphrasing creates ambiguity. If someone else runs the test and the screen says "Place Order" but the test case says "submit order," they have to guess which element you meant.
Bad: "Click submit button on checkout page."
Good: "Tap 'Place Order' on checkout screen."
This rule matters even more for automated tests. In Drizz, tests are written in plain English and Vision AI matches each step to the screen. "Tap 'Place Order'" finds the button by its visible text. "Click submit" would require interpretation. Exact text is always clearer.
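The same principle holds in a hand-scripted suite: locate the element by the exact label on screen. A sketch with the Appium Python client, assuming a `driver` session like the one in the Rule 1 sketch:

```python
from appium.webdriver.common.appiumby import AppiumBy

# Exact visible text: unambiguous, and immune to paraphrase drift
driver.find_element(AppiumBy.XPATH, '//*[@text="Place Order"]').click()

# Android-native equivalent, also matching the exact label
driver.find_element(
    AppiumBy.ANDROID_UIAUTOMATOR,
    'new UiSelector().text("Place Order")',
).click()
```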
Rule 4: specify expected results with measurable criteria
"Verify login works" is a wish, not an expected result. Expected results need to be specific enough that two different testers would agree on pass/fail without discussing it.
Bad: Expected result: "Login works."
Good: Expected result: "Home screen loads within 3 seconds. Header displays 'Welcome, User.' Bottom navigation bar shows 4 tabs: Home, Search, Cart, Profile."
Measurable criteria (load time, visible text, element count) remove ambiguity. If the screen loads in 4 seconds, is that a pass or a fail? With a 3-second threshold, the answer is clear.
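Measurable criteria also translate directly into executable assertions. A sketch assuming the `driver` session from the Rule 1 example; the bottom-nav locator is hypothetical:

```python
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# "Home screen loads within 3 seconds": the threshold becomes the wait
# timeout, so a 4-second load raises TimeoutException and fails cleanly.
WebDriverWait(driver, 3).until(
    EC.presence_of_element_located((AppiumBy.XPATH, '//*[@text="Welcome, User"]'))
)

# "Bottom navigation bar shows 4 tabs": count elements instead of eyeballing.
tabs = driver.find_elements(AppiumBy.XPATH, '//*[@content-desc="bottom-nav"]/*')
assert len(tabs) == 4, f"Expected 4 tabs, found {len(tabs)}"
```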
Rule 5: test one objective per test case
Don't combine "verify login with valid credentials" and "verify login with invalid credentials" in the same test case. Each one has different steps, different data, and different expected results. If a test case has two objectives and one fails, the pass/fail status is ambiguous.
Bad: One test case that covers valid login, invalid password, empty email, and account lockout.
Good: Four separate test cases:
- LOGIN_TC_001: Verify login with valid email and password
- LOGIN_TC_002: Verify error message for invalid password
- LOGIN_TC_003: Verify error message for empty email field
- LOGIN_TC_004: Verify account lockout after 5 failed attempts
Each test case has one objective, one set of steps, one expected result. When LOGIN_TC_003 fails, you know exactly what broke and where.
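In code, one objective per test case means one test function per ID. A pytest-style sketch: the `login` helper and locators are hypothetical, and `driver` would come from a fixture you define:

```python
from appium.webdriver.common.appiumby import AppiumBy

def login(driver, email, password):
    """Hypothetical helper wrapping the one-action-per-step login sequence."""
    driver.find_element(AppiumBy.XPATH, '//*[@text="Email"]').send_keys(email)
    driver.find_element(AppiumBy.XPATH, '//*[@text="Password"]').send_keys(password)
    driver.find_element(AppiumBy.XPATH, '//*[@text="Log in"]').click()

def test_login_tc_001_valid_credentials(driver):
    login(driver, "user@test.com", "Test@1234")
    driver.find_element(AppiumBy.XPATH, '//*[contains(@text, "Welcome")]')  # raises if absent

def test_login_tc_002_invalid_password(driver):
    login(driver, "user@test.com", "wrong-password")
    driver.find_element(AppiumBy.XPATH, '//*[@text="Invalid credentials"]')

def test_login_tc_003_empty_email(driver):
    login(driver, "", "Test@1234")
    driver.find_element(AppiumBy.XPATH, '//*[@text="Email is required"]')

def test_login_tc_004_lockout_after_5_failures(driver):
    for _ in range(5):
        login(driver, "user@test.com", "wrong-password")
    driver.find_element(AppiumBy.XPATH, '//*[contains(@text, "locked")]')
```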
Rule 6: account for mobile-specific scenarios
Web test cases don't need to cover incoming phone calls, device rotation, or OEM-specific permission dialogs. Mobile test cases do. For every critical flow (login, checkout, payment), ask: "What happens if the user gets a phone call during this step? What if they rotate the device? What if the keyboard covers a button?"
These scenarios should be separate test cases, not afterthoughts:
- INTERRUPT_TC_001: Verify checkout state persists after incoming phone call
- ROTATE_TC_001: Verify form data persists after device rotation during address entry
- KEYBOARD_TC_001: Verify "Place Order" button is reachable when keyboard is open on Samsung Galaxy A14
For more on these scenarios, see our guide to types of mobile app testing (interrupt testing, compatibility testing).
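Both interrupt scenarios can also be automated. A sketch using Appium's device APIs, assuming a `driver` session as in the Rule 1 sketch; note that `make_gsm_call` works on Android emulators only, and the locators are illustrative:

```python
from appium.webdriver.common.appiumby import AppiumBy
from appium.webdriver.extensions.android.gsm import GsmCallActions

# ROTATE_TC_001: type into the address form, rotate, check the data survived
field = driver.find_element(AppiumBy.CLASS_NAME, "android.widget.EditText")
field.send_keys("221B Baker Street")
driver.orientation = "LANDSCAPE"
driver.orientation = "PORTRAIT"
field = driver.find_element(AppiumBy.CLASS_NAME, "android.widget.EditText")
assert field.text == "221B Baker Street", "Form data lost after rotation"

# INTERRUPT_TC_001 (emulator only): simulate an incoming call mid-checkout
driver.make_gsm_call("5551234567", GsmCallActions.CALL)    # phone rings
driver.make_gsm_call("5551234567", GsmCallActions.ACCEPT)  # answer
driver.make_gsm_call("5551234567", GsmCallActions.CANCEL)  # hang up
assert driver.find_element(AppiumBy.XPATH, '//*[@text="Place Order"]'), \
    "Checkout state lost after call interruption"
```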
Rule 7: write test steps that work across devices
This is where mobile test authoring diverges most from web. On web, you write one test and run it in Chrome. On mobile, the same test should ideally work on Samsung, Pixel, Xiaomi, and iPhone without device-specific rewrites.
The way to achieve this is to describe steps by what's visible on screen, not by element IDs or device-specific behavior.
Device-dependent step: "Tap element with resource-id 'btn_checkout' at coordinates (340, 720)."
Device-independent step: "Tap 'Place Order' below order summary."
The second version works on any screen size because it references visible text and relative position. It doesn't depend on element IDs (which change between builds), coordinates (which change between screen sizes), or platform-specific identifiers.
This is exactly how Drizz tests work. Steps are written in plain English ("Tap 'Place Order' below order summary"), and Vision AI finds the element on any device by visual context. The same test runs on Android and iOS without modification. Self-healing adapts when the layout shifts. The popup agent handles OEM dialogs (Samsung battery prompts, Xiaomi security popups) that would otherwise require device-specific steps.
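If you're scripting Rule 7 by hand instead, the same contrast shows up in locator choice. A sketch in the Appium Python client; the package name, resource-id, and coordinates are illustrative, and `driver` is assumed from the earlier session sketch:

```python
from appium.webdriver.common.appiumby import AppiumBy

# Device-dependent: the id breaks when it changes between builds,
# and the coordinate tap breaks on any other screen size.
driver.find_element(AppiumBy.ID, "com.example.app:id/btn_checkout").click()
driver.tap([(340, 720)])

# Device-independent: matches the label the user actually sees,
# on any Android screen size.
driver.find_element(AppiumBy.XPATH, '//*[@text="Place Order"]').click()
```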
15 ready-to-use mobile test case examples
Here are 15 test case examples covering the most common mobile flows. Each one follows all 7 rules above. They're written in the plain-English format compatible with Drizz's authoring model. For the full 8-field template format, see our test case template.
Login flow:
1. Launch app, tap email field, type "user@test.com," tap password field, type "Test@1234," tap "Log in," validate "Welcome" is visible on home screen
2. Launch app, tap "Log in" with empty email, validate "Email is required" error message appears
3. Launch app, enter valid email, enter wrong password, tap "Log in," validate "Invalid credentials" error message appears
Signup flow:
4. Tap "Sign up," enter name, email, password, tap "Create Account," validate verification email prompt appears
5. Tap "Sign up," leave password field empty, tap "Create Account," validate "Password is required" error appears
Checkout flow:
6. Add item ($49.99) to cart, tap "Checkout," enter shipping address, tap "Place Order," validate order confirmation shows "$49.99"
7. Add item to cart, apply promo code "SAVE20," validate total updates to discounted price, complete checkout, validate confirmation shows discounted total
8. Add item, go to checkout, remove item from cart via cart icon, validate cart shows "Empty"
OTP verification:
9. Enter phone number on signup, tap "Send OTP," validate OTP received within 30 seconds, enter OTP, tap "Verify," validate profile setup screen loads
10. Enter phone number, tap "Send OTP," enter incorrect OTP, validate "Invalid OTP" error message appears
Push notification:
11. Place order, trigger status change to "Out for delivery" (via admin), validate push notification appears within 60 seconds, tap notification, validate tracking screen opens
Search:
12. Tap search bar, type "pizza," validate search results appear within 2 seconds, validate results contain "pizza" in name or description
Location:
13. Grant location permission, validate app shows restaurants near current location, change delivery address to a different city, validate restaurant list updates to new location
Device-specific:
14. Open app on Samsung Galaxy A14 with font size set to "Large," navigate through all main screens, validate no text truncation or button label overflow
15. Start checkout flow, trigger incoming phone call, answer call for 30 seconds, return to app, validate checkout state (items, address, payment) is preserved
For organizing these into a release test plan, see our test plan template. For automating them, Drizz turns these plain-English steps into executable tests on real devices. The format you see above is the same format you'd paste into the Drizz editor. No translation step, no selector mapping, no code.
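For contrast, here's what example 12 looks like when scripted by hand instead of run as plain English. A sketch in the Appium Python client; the search-bar and result locators are hypothetical, and `driver` comes from a session like the Rule 1 sketch:

```python
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Search").click()  # Tap search bar
driver.switch_to.active_element.send_keys("pizza")                # Type "pizza"

# "results appear within 2 seconds": the threshold is the wait timeout
results = WebDriverWait(driver, 2).until(
    EC.presence_of_all_elements_located(
        (AppiumBy.XPATH, '//*[@content-desc="search-result"]')
    )
)

# "results contain 'pizza' in name or description"
for result in results:
    assert "pizza" in result.text.lower(), f"Unexpected result: {result.text!r}"
```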
FAQ
How do I write test cases for a mobile app with no experience?
Start with the app's core flows: login, main feature, and checkout. For each flow, write one action per step, include the device in preconditions, and specify measurable expected results. Follow the 7 rules above.
How many test cases should I write for a mobile app?
Start with 10-15 test cases covering the core revenue-impacting flows. Expand from there. A mature app might have 100-300 test cases across functional, regression, and compatibility categories.
What's the difference between a test scenario and a test case?
A test scenario describes WHAT to test ("verify checkout flow"). A test case describes HOW to test it (specific steps, data, device, expected result). One scenario can have multiple test cases covering happy path, edge cases, and error states.
Should mobile test cases include device information?
Yes. Always include device model, OS version, and network condition in preconditions. A test that passes on a Pixel might fail on a Samsung. Without device info, failures can't be reproduced.
Can I use the same test cases for Android and iOS?
If your test cases are written against visible screen text (not element IDs), yes. "Tap 'Log in'" works on both platforms. "Tap element with resource-id 'btn_login'" is Android-only. Write device-independent steps for maximum reuse.
How often should test cases be updated?
After every UI change that affects a tested flow, after every new feature addition, and after every bug fix (add a regression test case for the fixed bug). Review the full suite quarterly to retire outdated cases.


