This is a working checklist, not a testing theory article. It covers 8 categories and 48 items that apply to any iOS or Android app. Use it before every release to catch failures your users will find if you don't.
According to the JetBrains 2024 Developer Ecosystem Survey, 43% of mobile developers cite testing as their top productivity bottleneck. Most of that friction comes from not knowing what to test, not from testing itself. This checklist fixes that.
Each item is a pass/fail check. If you can't answer "yes" to an item, it's a gap in your coverage.
1. Functional testing
Does the app do what it's supposed to do?
- [ ] Core user flows work end to end (signup, login, main action, logout)
- [ ] Form validation catches invalid inputs and shows clear error messages
- [ ] All buttons, links, and interactive elements respond to taps
- [ ] Back navigation behaves correctly on every screen (including Android's hardware back button)
- [ ] Deep links open correct screen with correct data
- [ ] Push notifications arrive and tapping them opens the right destination
- [ ] App handles session expiry gracefully (auto-logout, re-authentication prompt)
- [ ] Offline behavior is defined and tested (cached data, error states, sync on reconnect)
What teams miss most: The Android back button. iOS doesn't have one, so teams that test on iPhone first often forget that Android users rely on hardware/gesture back navigation. If your app doesn't handle it, users get stuck.
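On Android, the fix is to handle back explicitly rather than hope the default does the right thing. A minimal Kotlin sketch using AndroidX's OnBackPressedDispatcher (hasUnsavedChanges() and showDiscardDialog() are hypothetical placeholders for your own flow):

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.OnBackPressedCallback

class CheckoutActivity : ComponentActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Intercept hardware/gesture back so an in-progress flow is exited
        // deliberately instead of silently discarded.
        onBackPressedDispatcher.addCallback(this, object : OnBackPressedCallback(true) {
            override fun handleOnBackPressed() {
                if (hasUnsavedChanges()) {
                    showDiscardDialog() // hypothetical: confirm before leaving
                } else {
                    isEnabled = false // stop intercepting...
                    onBackPressedDispatcher.onBackPressed() // ...and let the system handle back
                }
            }
        })
    }

    private fun hasUnsavedChanges(): Boolean = false // hypothetical placeholder
    private fun showDiscardDialog() { /* hypothetical placeholder */ }
}
```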
2. UI and UX testing
Does the app look right and feel right?
- [ ] Text is readable on all supported screen sizes (no truncation, no overlap)
- [ ] Touch targets are at least 44x44 CSS pixels (the WCAG 2.5.5 target size; Apple's HIG and Material Design recommend 44 pt / 48 dp)
- [ ] Keyboard doesn't cover input fields (screen scrolls or layout adjusts)
- [ ] Dark mode renders correctly (no invisible text, no broken contrast)
- [ ] Landscape orientation works or is explicitly disabled
- [ ] Animations run at 60fps without jank or frame drops
- [ ] Loading states exist for every async action (spinners, skeletons, or progress bars)
What teams miss most: Keyboard behavior. On Android, the soft keyboard typically pushes content up. On iOS, it overlays content. If your login form's "Submit" button sits behind the keyboard, half your users can't tap it.
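One way to fix this on Android is to pad the form by the keyboard (IME) inset instead of trusting windowSoftInputMode. A minimal sketch, assuming an edge-to-edge activity (WindowCompat.setDecorFitsSystemWindows(window, false)) so IME insets are dispatched to your views:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import androidx.core.view.updatePadding

// Pad the form's root view by the keyboard height so the Submit button
// stays reachable instead of sitting behind the IME.
fun keepFormAboveKeyboard(formRoot: View) {
    ViewCompat.setOnApplyWindowInsetsListener(formRoot) { view, insets ->
        val imeHeight = insets.getInsets(WindowInsetsCompat.Type.ime()).bottom
        view.updatePadding(bottom = imeHeight)
        insets
    }
}
```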
3. Performance testing
Is the app fast enough on real hardware?
- [ ] App cold-starts in under 3 seconds on a mid-range device (not just your flagship test phone)
- [ ] Scrolling through long lists stays smooth (no frame drops, no blank rows)
- [ ] Memory usage stays stable during extended sessions (no leaks that cause crashes after 10+ minutes)
- [ ] Battery drain is reasonable (an AR or GPS-heavy session draining 15% in 10 minutes is a problem)
- [ ] App size is under 100 MB for initial download (large apps get lower install rates, per Google's data)
- [ ] API response times are under 2 seconds for user-facing actions
What teams miss most: Testing on mid-range devices. Your flagship phone with 12GB RAM hides performance problems that a $200 phone with 3GB RAM exposes instantly. The JetBrains survey confirms that device fragmentation is the #2 mobile testing challenge, after flakiness.
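For the cold-start check, adb shell am start -W reports launch time from the shell, and on-device you can call Activity.reportFullyDrawn() once the first meaningful content is visible. A minimal sketch (onFirstContentRendered() is a hypothetical hook you'd call from your own first-frame logic):

```kotlin
import android.os.Bundle
import android.os.SystemClock
import android.util.Log
import androidx.activity.ComponentActivity

class HomeActivity : ComponentActivity() {
    // Rough in-process timer; true cold start also includes process startup,
    // which `adb shell am start -W` and Macrobenchmark measure.
    private val createdAtMs = SystemClock.elapsedRealtime()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ...inflate UI, kick off the first data load...
    }

    // Hypothetical hook: call once the first meaningful content is on screen.
    private fun onFirstContentRendered() {
        reportFullyDrawn() // logs "Fully drawn" in logcat and feeds startup metrics
        Log.d("Startup", "content visible after ${SystemClock.elapsedRealtime() - createdAtMs} ms")
    }
}
```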
4. Security testing
Is user data protected?
- [ ] All network calls use HTTPS (no mixed content, no HTTP fallbacks)
- [ ] Auth tokens are stored in secure storage (Keychain on iOS, EncryptedSharedPreferences on Android), not plain-text SharedPreferences
- [ ] Session tokens expire and refresh correctly
- [ ] Sensitive data doesn't appear in app logs or crash reports
- [ ] The app validates SSL certificates (no accepting self-signed certs in production builds)
What teams miss most: Logging sensitive data. Developers add console.log(userToken) during debugging, forget to remove it, and the token shows up in production crash reports. Always grep your codebase for logged tokens before release.
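On Android, the secure-storage item above maps to the androidx.security-crypto library. A minimal sketch of writing a token to EncryptedSharedPreferences instead of plain-text SharedPreferences:

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Store the auth token encrypted at rest (androidx.security:security-crypto).
fun saveToken(context: Context, token: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()
    val prefs = EncryptedSharedPreferences.create(
        context,
        "auth_prefs",
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM,
    )
    prefs.edit().putString("auth_token", token).apply()
    // Never log the token here: logs end up in crash reports.
}
```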
5. Device and OS compatibility testing
Does the app work across devices your users actually have?
- [ ] Tested on at least 3 Android manufacturers (Samsung, Pixel, Xiaomi/OnePlus) to catch OEM skin differences
- [ ] Tested on the 2 most recent major iOS versions (currently iOS 17 and 18)
- [ ] Tested on the 3 most recent major Android versions (currently 13, 14, and 15)
- [ ] Tested on at least 2 screen sizes per platform (small phone + large phone or tablet)
- [ ] Tested on a device with a notch/dynamic island and one without
- [ ] Tested on a foldable device if your user base includes Samsung Galaxy Z Fold/Flip users
What teams miss most: OEM-specific rendering. Samsung's One UI adds rounded corners and modified navigation gestures. Xiaomi's MIUI changes notification behavior. A button that's tappable on a Pixel might be partially hidden on a Samsung because of the OEM's different status bar height.
One mobile team we work with found that 23% of their test failures came from device-specific rendering differences, not code changes. They caught zero of these on emulators.
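The status bar problem above is usually a hardcoded height. A minimal Android sketch that reads the real inset from WindowInsets instead, so OEM skins, notches, and cutouts each report their own size:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import androidx.core.view.updatePadding

// Pad top-anchored UI by the reported status bar and cutout insets
// instead of assuming a fixed height that only matches one device.
fun applyTopInset(toolbar: View) {
    ViewCompat.setOnApplyWindowInsetsListener(toolbar) { view, insets ->
        val bars = insets.getInsets(
            WindowInsetsCompat.Type.statusBars() or WindowInsetsCompat.Type.displayCutout()
        )
        view.updatePadding(top = bars.top)
        insets
    }
}
```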
6. Network testing
Does the app handle real-world network conditions?
- [ ] App works on 3G/slow connections (not just Wi-Fi)
- [ ] App handles complete network loss gracefully (error message, not a crash)
- [ ] App recovers when network returns (auto-retry or manual refresh, not a frozen screen)
- [ ] Large file uploads/downloads show progress and can be resumed after interruption
- [ ] API timeouts produce user-facing error messages, not silent failures
What teams miss most: The transition from connected to disconnected. Most teams test "offline mode" by turning Wi-Fi off before opening the app. But the real-world scenario is losing the connection mid-flow: a user starts a checkout, walks into an elevator, and the payment API call times out halfway through.
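A minimal Kotlin sketch of the recovery half of this: wrap the flaky call in exponential backoff and let the final failure reach the UI's error handler (a production version would use coroutines and only retry idempotent requests):

```kotlin
import java.io.IOException

// Retry a flaky network call with exponential backoff instead of failing
// silently; the last attempt's exception propagates to the caller's error state.
fun <T> withRetry(maxAttempts: Int = 3, initialDelayMs: Long = 500, block: () -> T): T {
    var delayMs = initialDelayMs
    repeat(maxAttempts - 1) {
        try {
            return block()
        } catch (e: IOException) {
            Thread.sleep(delayMs) // in production use a coroutine delay(), not a blocking sleep
            delayMs *= 2
        }
    }
    return block() // final attempt: let the exception reach the UI's error handler
}
```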
7. Installation and update testing
Does the app install, update, and uninstall cleanly?
- [ ] Fresh install works on a device that's never had the app installed
- [ ] Upgrade from previous version preserves user data (login state, preferences, cached content)
- [ ] App handles force-close and relaunch without data corruption
- [ ] Uninstall removes all app data (no orphaned files or database entries)
- [ ] App works correctly after an OS update (test on beta OS versions before they ship)
What teams miss most: Upgrade testing. Your v2.3 works perfectly on a fresh install. But a user upgrading from v2.1 hits a database migration bug that corrupts their saved data. Always test the upgrade path from at least 2 previous versions.
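On Android, the usual fix is an explicit schema migration. A minimal Room sketch (the table, column, and version numbers are illustrative, not from this article):

```kotlin
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

// Without an explicit migration, upgrading users either crash on open
// or lose data to a destructive fallback.
val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(db: SupportSQLiteDatabase) {
        db.execSQL("ALTER TABLE user ADD COLUMN preferences TEXT NOT NULL DEFAULT '{}'")
    }
}

// Registered when building the database (AppDatabase is hypothetical):
// Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
//     .addMigrations(MIGRATION_1_2)
//     .build()
```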
8. Accessibility testing
Can everyone use the app?
- [ ] Screen readers (VoiceOver on iOS, TalkBack on Android) can navigate all screens and read all content
- [ ] All images have descriptive alt text or content descriptions
- [ ] Color contrast meets WCAG AA minimum (4.5:1 for normal text, 3:1 for large text)
- [ ] The app is usable without relying solely on color to convey information (e.g., error states use text + color, not just red borders)
- [ ] Focus order is logical when navigating with a keyboard or switch control
- [ ] Font sizes respect the user's system accessibility settings (Dynamic Type on iOS, font scale on Android)
What teams miss most: System font scaling. When a user sets their phone to 200% text size (common for users with low vision), does your app's layout still work? Or do labels overflow their containers and overlap with other elements?
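On Android, the rule of thumb is to size text in SP, which tracks the system font scale, never in PX, which ignores it. A minimal sketch, including a debug-time warning for large scales:

```kotlin
import android.util.Log
import android.util.TypedValue
import android.widget.TextView

// SP units scale with the user's accessibility settings; PX units don't.
fun applyBodyTextSize(label: TextView) {
    label.setTextSize(TypedValue.COMPLEX_UNIT_SP, 16f) // scales with system font size
    // label.setTextSize(TypedValue.COMPLEX_UNIT_PX, 42f) // would NOT scale: avoid

    // Quick sanity check while testing at large font scales.
    val scale = label.resources.configuration.fontScale
    if (scale >= 1.5f) {
        Log.w("A11y", "fontScale=$scale: verify labels don't clip or overlap")
    }
}
```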
How to automate this checklist
Running 48 checks manually before every release takes days. Automating them takes the same checks down to hours, and you can run them on every build instead of just before release.
The items in categories 1 (functional), 2 (UI), and 5 (compatibility) are most automatable. They're user-visible actions on screen: tap this, verify that, check this text is visible, confirm this layout works.
Drizz lets you turn each checklist item into a plain English test:
- "Tap Login, enter email, enter password, tap Submit, validate home screen" (functional test, item 1)
- "Validate 'Submit' button is visible when keyboard is open" (UI test, item 3)
- "Scroll down until 'Add to Cart', tap it, validate cart badge shows 1" (functional test, item 3)
The Vision AI engine runs each test on real Android and iOS devices across multiple manufacturers and OS versions. One test covers categories 1 (functional) and 5 (compatibility) simultaneously because it runs on Samsung, Pixel, Xiaomi, and iPhones in a single batch.
The built-in popup agent handles unpredictable system dialogs (permissions, update prompts, "rate this app") that block manual testers and crash scripted automation. Adaptive wait logic detects screen state changes instead of using hardcoded timers, so tests don't flake on slower devices.
Teams using this approach report going from 15 tests authored per month (with Appium scripts) to 200 per month (with plain English). One team covered their entire regression suite in under a week. Another went from 30% of sprint time on testing to about 10%, with tests self-healing when the UI changed between releases.
The checklist tells you what to test. Automation tells you how often you can afford to test it. The answer should be "every build."
FAQ
What is a mobile app testing checklist?
A mobile app testing checklist is a structured list of items to verify before releasing an iOS or Android app. It typically covers functional testing, UI/UX testing, performance, security, device compatibility, network handling, installation/update behavior, and accessibility. The checklist ensures consistent coverage across releases and prevents teams from missing common failure points.
How many devices should I test on?
At minimum, test on 3 Android manufacturers (Samsung, Pixel, and one other like Xiaomi or OnePlus), 2 iOS versions (current and previous), and 2 screen sizes per platform. This covers major rendering differences and OS-specific behaviors. Teams with larger user bases should expand to 8-12 devices based on their analytics data showing which devices their users actually have.
Can I automate a mobile testing checklist?
Yes. Functional, UI, and compatibility items (roughly 60-70% of a typical checklist) can be automated with end-to-end test automation. Security and accessibility items are partially automatable (HTTPS checks, contrast ratio scans). Performance testing requires specialized tools (Android Profiler, Xcode Instruments). Some items like "does this feel intuitive?" still require human judgment.
What's the most common thing teams miss in mobile testing?
Keyboard behavior (keyboard covering input fields differently on iOS vs Android), OEM-specific rendering differences (Samsung vs Pixel vs Xiaomi), and upgrade path testing (upgrading from v2.1 to v2.3 instead of only testing fresh installs). These three cause more post-release bug reports than any other category.
Should I test on emulators or real devices?
Both serve different purposes. Emulators are fast for development-time checks. Real devices catch hardware-specific issues: GPU rendering differences, thermal throttling, touch responsiveness, battery drain, and OEM skin behaviors that emulators don't simulate. For release-quality testing, always include real devices.
How often should I run the checklist?
Run the full checklist before every release. Run the automated subset (functional + compatibility items) on every CI build. The goal is to catch regressions within hours of introduction, not days before release when fixing them is expensive.