Test data management is the process of creating, maintaining, and controlling the data your tests need to run. It answers a simple question: when your automated test types "user@test.com" into the email field and taps "Log in," does that user account exist? Is the password correct? Does the account have the right subscription tier, the right cart history, the right promo eligibility?
If the answer is "sometimes," your test suite is flaky for data reasons, not code reasons. K2View's research found that QA teams spend over 30% of their testing time dealing with defective test data and one full day per week on test data provisioning. That's time spent not on finding bugs but on making sure the test can run at all.
On mobile, test data management has problems that web and backend testing don't face. You need OTP codes that arrive via SMS. You need GPS coordinates that trigger location-specific content. You need payment sandbox credentials that work with the payment SDK's test mode. You need device-specific test accounts that handle OEM permission states. And you need all of this to be consistent across 5-10 devices running in parallel.
The 4 test data strategies
1. Fixtures (hardcoded test data)
The simplest approach. Test data is hardcoded in the test script or stored in a JSON/CSV file alongside the tests. The login test uses user@test.com / Test@1234. The checkout test uses credit card 4242 4242 4242 4242. The promo test uses code SAVE20.
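In practice, a fixtures file is often just a shared constants module. A minimal sketch (the file name, accounts, and codes are illustrative, not real credentials):

```typescript
// fixtures.ts - hypothetical shared test data for a small suite.
// Every value here is illustrative; a real suite would keep these in sync
// with whatever accounts actually exist in the test environment.
export const fixtures = {
  login: {
    email: "user@test.com",
    password: "Test@1234",
  },
  checkout: {
    // Stripe's documented test card number for successful payments.
    cardNumber: "4242 4242 4242 4242",
    expiry: "12/30",
    cvc: "123",
  },
  promo: {
    code: "SAVE20",
    minimumOrder: 50, // assumed eligibility rule; must match the backend
  },
};
```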
When it works: Early-stage apps with small test suites. The team has 20-30 tests, one test environment, and a stable set of test accounts. The data doesn't change between sprints.
When it breaks: The moment someone else modifies the test account. A developer logs into user@test.com in staging, changes the password, and every test that uses those credentials fails. Or the promo code SAVE20 expires, and the 8 tests that depend on it all go red overnight. Fixtures are fast to set up and fragile to maintain.
Best practice: If you use fixtures, document them in a shared location (not buried in test files) and assign an owner who's responsible for keeping them valid. For how to reference test data in a structured format, see our test case template guide.
2. Mocks and stubs (fake data, no real backend)
Tests don't hit the real API. Instead, they use mocked responses. The login API always returns { success: true, token: "fake-token" }. The payment API always returns { status: "completed" }. The data is controlled entirely by the test, so it never goes stale.
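A minimal sketch of the idea in TypeScript (the ApiClient interface and response shapes are hypothetical; real projects typically use a mocking library such as Jest or MSW):

```typescript
// A hypothetical API client interface the app depends on.
interface ApiClient {
  login(email: string, password: string): Promise<{ success: boolean; token: string }>;
  getPaymentStatus(orderId: string): Promise<{ status: string }>;
}

// A hand-rolled stub: every response is fixed by the test, so it never goes stale.
const mockApi: ApiClient = {
  login: async () => ({ success: true, token: "fake-token" }),
  getPaymentStatus: async () => ({ status: "completed" }),
};

// The component under test receives the stub instead of the real client,
// so the test controls exactly what "the backend" returns.
```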
When it works: Unit tests and component tests that validate frontend logic without needing a real backend. Testing how the UI handles a 500 error response (mock a 500). Testing how the app renders an empty cart (mock an empty response). Testing how the app handles a slow API (mock a delayed response).
When it breaks: On mobile, mocks hide real integration bugs. A mocked payment API that always returns "completed" won't catch that the real payment SDK changed its response format in version 3.2. Mocks test what you told them to return, not what the real system returns. As we covered in our React Native testing guide, mocking native modules like biometrics creates a false sense of coverage. The mock returns biometryType: 'FaceID' while the real module on a Pixel returns biometryType: 'Fingerprint', and your component silently does nothing.
Best practice: Use mocks for unit and component tests. Never use mocks for E2E tests. E2E tests should hit the real backend (or a staging replica) because the whole point is to test the integration.
3. Staging environment (real data in a controlled copy)
Tests run against a staging environment that mirrors production: same APIs, same database schema, same third-party integrations, but with test accounts and sandbox credentials. The data is real in structure but not in content.
When it works: Teams with mature infrastructure that maintain a stable staging environment. The staging database is seeded with known test accounts before each test run. API keys point to sandbox versions of third-party services (Stripe test mode, Firebase test project).
When it breaks: When staging goes stale. The production database schema gets updated, but staging doesn't. A new API endpoint is deployed to production but not to staging. Or worse, staging is shared across teams, and Team A's tests modify data that Team B's tests depend on. "Our staging environment is broken" is one of the most common reasons mobile teams give for pausing test automation.
Best practice: Treat staging like production: automate its deployment, keep it in sync, and give each team (or each CI run) an isolated data namespace. If your staging is unreliable, your tests are unreliable regardless of how well they're written.
4. Synthetic data generation (create fresh data per run)
Each test run generates its own data. The test creates a new user (test_user_<timestamp>@test.com), creates a new order, applies a new promo code, and validates the result. When the test ends, the data is cleaned up. No shared state, no stale data, no contamination.
When it works: At scale. Teams with 200+ tests running across 5+ devices in parallel need data isolation. If Test A and Test B both use user@test.com, they'll interfere with each other when running simultaneously on different devices.
When it breaks: When the app doesn't support programmatic user creation. If creating a test account requires email verification, SMS OTP, and a CAPTCHA, generating fresh data per run is slow and complex. Some teams solve this with API-based account creation (bypass the UI for setup, test the UI for the actual flow).
Best practice: Synthetic data is the most scalable strategy, but it requires backend support. Work with your developers to create API endpoints or admin tools that let the test suite provision accounts and data without going through the full user-facing flow.
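A sketch of what API-based provisioning might look like, assuming the backend exposes a QA-only endpoint for account creation (the /qa/users path, payload, and cleanup call are all hypothetical):

```typescript
// Provision a unique user per test run, bypassing the signup UI.
// BASE_URL and the /qa/users endpoint are assumptions about your backend.
const BASE_URL = process.env.STAGING_API_URL ?? "https://staging.example.com";

export async function createTestUser() {
  const email = `test_user_${Date.now()}@test.com`; // unique per run
  const res = await fetch(`${BASE_URL}/qa/users`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password: "Test@1234", tier: "free" }),
  });
  if (!res.ok) throw new Error(`Provisioning failed: ${res.status}`);
  return (await res.json()) as { id: string; email: string };
}

export async function deleteTestUser(id: string) {
  // Clean up so the account doesn't linger and contaminate later runs.
  await fetch(`${BASE_URL}/qa/users/${id}`, { method: "DELETE" });
}
```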
Mobile-specific test data types
These are the data types that web and backend testing don't deal with but mobile testing depends on daily.
OTP codes. Many mobile apps use SMS OTP for login or verification. Automated tests can't read SMS from a real phone. Solutions: use a test phone number that returns a fixed OTP (like Firebase Auth's test phone numbers), use an API to retrieve OTPs from a test SMS service, or use Drizz's memory commands to store and reuse dynamic values during execution.
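If OTPs come from a test SMS service, the test can poll that service's API instead of reading the phone's inbox. A sketch, assuming a hypothetical inbox endpoint that returns the latest message for a test number:

```typescript
// Poll a test SMS provider for the latest OTP sent to a known test number.
// The endpoint, response shape, and 6-digit format are assumptions.
export async function fetchLatestOtp(phoneNumber: string): Promise<string> {
  for (let attempt = 0; attempt < 10; attempt++) {
    const res = await fetch(
      `https://sms-sandbox.example.com/inbox/${encodeURIComponent(phoneNumber)}/latest`
    );
    if (res.ok) {
      const { body } = (await res.json()) as { body: string };
      const match = body.match(/\b\d{6}\b/); // extract a 6-digit code
      if (match) return match[0];
    }
    await new Promise((r) => setTimeout(r, 2000)); // wait and retry
  }
  throw new Error(`No OTP received for ${phoneNumber}`);
}
```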
GPS coordinates. Location-based apps (food delivery, ride-hailing, maps) need specific GPS coordinates to test location-specific content. On Android, mock locations can be set programmatically (the emulator console and mock location providers both support this). On iOS, Xcode simulates custom locations via a GPX file. In Drizz, the SET_GPS system command sets device coordinates mid-test.
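Outside Drizz, with raw Appium bindings (WebdriverIO shown here), setting a mock location mid-test looks roughly like this; the coordinates are illustrative, and the command generally requires an emulator or a device with mock locations enabled:

```typescript
import { remote } from "webdriverio";

// Start an Appium session (capabilities trimmed for brevity).
const driver = await remote({
  hostname: "localhost",
  port: 4723,
  capabilities: {
    platformName: "Android",
    "appium:automationName": "UiAutomator2",
  },
});

// Set the device's reported location before asserting on location-specific content.
await driver.setGeoLocation({
  latitude: 12.9716,   // illustrative coordinates (Bengaluru)
  longitude: 77.5946,
  altitude: 0,
});
```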
Payment sandbox credentials. Stripe test cards (4242 4242 4242 4242), PayPal sandbox accounts, Razorpay test mode keys. These need to be managed centrally and rotated when sandbox environments refresh. If your test suite hardcodes a Stripe test card that the payments team revokes, every checkout test fails simultaneously.
Promo codes and discount rules. Tests that validate coupon behavior need promo codes that exist in the current database with the right eligibility rules (minimum order, valid date range, user segment). These go stale faster than any other test data type because marketing teams create and expire promo codes on their own schedule, often without telling QA.
Device-specific accounts. Some tests need accounts in specific states: a user with an expired subscription, a user with a blocked payment method, a user with location permissions denied. On mobile, the "permissions denied" state is device-level, not account-level, which means the test data strategy has to include device configuration alongside account data.
The test data contamination problem
Test data contamination is the number one cause of phantom failures in mobile test suites, and most teams don't realize it's happening.
Here's the pattern: Test A creates a user and places an order. Test B assumes that user has no orders (it's testing the "empty order history" screen). Test C deletes the user. When tests run in the order A → B → C, Test B fails because Test A left data behind. When tests run in the order A → C → B, Test B passes because Test C cleaned up before Test B ran.
The result: tests pass or fail depending on execution order. The suite is "flaky," but the flakiness isn't in the tests or the app. It's in the data.
Three rules that prevent contamination:
Each test creates its own data. Don't share accounts, orders, or promo codes between tests. Test B creates its own user with an empty order history instead of depending on Test A's cleanup.
Each test cleans up after itself. If a test creates a user, it deletes the user at the end (or the test framework's teardown function handles it). Data left behind is a landmine for the next test.
Tests can run in any order. If your suite fails when you randomize the test execution order, you have a data dependency. Find it and eliminate it.
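Rules one and two translate directly into per-test setup and teardown. A sketch using Jest-style hooks and the hypothetical provisioning helpers from the synthetic data section:

```typescript
// Each test provisions its own user and removes it afterwards,
// so execution order never matters. createTestUser/deleteTestUser
// are the hypothetical QA-endpoint helpers sketched earlier.
import { createTestUser, deleteTestUser } from "./provisioning";

let user: { id: string; email: string };

beforeEach(async () => {
  user = await createTestUser(); // fresh user, empty order history
});

afterEach(async () => {
  await deleteTestUser(user.id); // no data left behind for the next test
});

test("empty order history screen shows the placeholder", async () => {
  // ...log in as `user` and assert on the empty state...
});
```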
In Drizz, the test plan structure supports independent test execution. Each test case runs in its own session with its own app state. The platform's CLEAR_APP system command resets the app to a clean state between tests, preventing data from one test leaking into the next.
How Drizz handles test data
Drizz doesn't replace your test data management strategy. It works within it. Tests are written in plain English ("Type 'test_user_42@test.com' in email field"), and the test data is embedded directly in the steps or passed via test plan parameters.
For dynamic data (OTPs, generated IDs, confirmation codes), Drizz's memory commands store values during execution and reuse them in later steps. A test can read an OTP from the screen, store it in a variable, and type it into the verification field, all within the same test run.
For device-specific setup, system commands like SET_GPS, CLEAR_APP, and KILL_APP configure the device state before the test flow starts. OEM permission dialogs that would normally corrupt the test state are handled automatically, so the data setup you prepared at the start of the test is still intact when the actual test steps run.
The net effect: test data problems stop masquerading as app bugs. When a test fails, it's because the app broke, not because the test account expired or another test left stale data behind.
FAQ
What is test data management?
It's the process of creating, maintaining, and controlling the data your tests need to run: user accounts, payment credentials, promo codes, GPS coordinates, and any other input the test requires to execute and validate correctly.
Why does test data management matter for mobile apps?
Mobile tests depend on data types that web tests don't: OTP codes, GPS coordinates, payment sandbox credentials, promo codes with expiration dates, and device-specific permission states. Poorly managed test data causes phantom failures and flaky suites.
What is test data contamination?
When one test leaves behind data (a created user, a placed order) that affects another test's outcome. Tests pass or fail depending on execution order instead of app behavior. The fix is data isolation: each test creates and cleans up its own data.
Should E2E tests use mocked data or real data?
Real data (from a staging environment or generated synthetically per run). E2E tests exist to validate the full integration between frontend, backend, and third-party services. Mocked data hides integration bugs.
What tools are used for test data management?
Enterprise: Informatica TDM, Delphix, K2View. For mobile-specific needs: Firebase Auth test phone numbers (OTPs), Stripe test mode (payments), and Drizz's memory commands (storing dynamic values during test execution).
How do I prevent test data from going stale?
Assign an owner to the test data set. Automate data provisioning (create fresh data per run when possible). Monitor promo code expiration dates. Keep staging environments in sync with production schemas. Review test data health weekly.