It has become the norm for almost every mobile app to integrate some form of mobile analytics. Analytics not only helps the mobile team learn at what point (or points) users drop off and analyze the conversion funnel, but also makes it possible to A/B test colors, buttons, or even entire user flows. There are several players on the market; to name just a few: Mixpanel, Heap, Amplitude, and Kissmetrics. But once you’ve integrated and configured your analytics solution, how do you actually test it? To be perfectly honest, in our experience most companies test whether their reports and analytics work in the old-fashioned manual way. But here at Testmunk, we’ve already made some automation attempts in this area, and we’d like to share the results today.
The Traditional Way of Testing Analytics
After you have integrated the SDK and set your ‘events,’ it is key to ensure that user actions are actually being recorded in your analytics database. To test this, you load the app on your phone or simulator, perform the action, and check whether a corresponding entry is recorded in the database.
Is there any ROI in automating analytics testing?
Setting up automated testcases certainly involves some effort, as well as close coordination between the QA engineer and the analytics team. In many cases, particularly for smaller companies, the manual method may be the most straightforward. If, however, your business is heavily dependent on analytics data, and you are driving significant revenue through your app, you might want to consider analytics automation. It acts as a form of ‘insurance’, providing timely and actionable data in a repeatable manner.
The Integration of Mobile Analytics
Typically, mobile analytics integration consists of setting ‘events’ on elements such as buttons, views, or navigation. Some analytics providers (e.g. Heap) automatically track everything after you integrate the SDK.
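To make “setting events” concrete, here is a minimal, hypothetical sketch of the pattern in Ruby: the app calls a tracker at each interaction point, and the tracker records an event name plus a hash of properties. The `AnalyticsTracker` class and its `track` method are our invention for illustration, not any vendor’s actual SDK, though real SDKs expose a similar call.

```ruby
# Hypothetical sketch of an analytics tracker: each UI interaction point
# calls track(event_name, properties), and the tracker records the event.
class AnalyticsTracker
  attr_reader :events

  def initialize
    @events = []
  end

  # Record one event with its name, properties, and a timestamp.
  def track(event_name, properties = {})
    @events << { name: event_name, properties: properties, at: Time.now }
  end
end

tracker = AnalyticsTracker.new
tracker.track("checkout_tapped", "productName" => "Item1")
```

In a real app the `track` call would hand the event off to the vendor’s SDK, which batches and uploads it to the analytics backend.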
Traditional Analytics Testing
Should I test it regularly, or is that a waste of time?
For a small team, it is often the same person who builds a feature and draws conclusions from the analytics solution. If this is the case, you are likely aware that the moment you change your app, your analytics reports also have to be adjusted. If one person can recognize and make the necessary changes, that’s good enough.
For larger companies, this is more complicated. They often have a dedicated analytics team working on the findings. The analytics team may decide that it needs new reports or wants to track more events. Since the analytics team is not directly responsible for implementation, those requirements are handed to the development team, which adjusts the integration accordingly. On the flip side, the development or product team might decide on new features or changes in user flows, which in turn requires the analytics team to adjust its reports.
In short, there has to be a good communication flow in place and responsibilities clearly delineated in order to avoid any mishaps.
Communication is key
What could possibly go wrong?
Obviously, as Murphy’s law would have it, anything that can go wrong, will. What exactly can go wrong depends on how you use your analytics solution in the first place, and how important it is to your app or organization.
- If you depend on analytics from a ‘revenue’ perspective (perhaps you charge your advertising partners based on the number of views an ad gets in your app, or you use the data for financial forecasts, projections, or controlling), broken analytics can have quite an impact.
- If you have a marketing team and a budget behind your app, and you depend on your analytics to decide where to spend your money, broken data can have a significant impact too.
- Many companies also use analytics data to decide which features to spend resources on, and which flaws to fix or prioritize.
If these situations sound familiar to you or perhaps mishaps have already happened, then you might be interested in the following solution:
Automated Testing for your Analytics
At Testmunk, our focus is on automated UI testing, which means we are able to test interactions with the user interface, such as touches, swipes, and presses, through automation. As mentioned earlier, our automation efforts have included working with analytics integrations, and we’d like to share our findings with you. The methodology below first executes an automated testcase, stores the analytics data in a .txt file, and then verifies that the results are stored as expected.
Another method (not detailed in this tutorial) could involve a testcase that queries the analytics API and verifies that the stored test results are correct. However, this method will be challenging if the API does not provide real-time data; users of Flurry and Google Analytics have previously reported such delays.
Automation Logic and Syntax Explained
Scenario: Purchase a product
  Given I’m landing on product page
  Then I add product to shopping cart
  And I’m able to successfully checkout
The above example shows a testcase that is part of a longer regression test plan.
The syntax is called ‘Cucumber’, with underlying Ruby steps that perform the actions on the screen.
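To illustrate how a plain-English Cucumber step maps to Ruby, here is a self-contained toy dispatcher. This is not the real Cucumber gem: the `Step` and `run_step` helpers are our own minimal stand-ins, and a real Testmunk suite would drive the actual UI inside the step body instead of pushing into an array.

```ruby
# Toy sketch of Cucumber's core idea: a registry maps regex patterns to
# Ruby blocks, and each plain-English step is dispatched to its block.
STEP_DEFINITIONS = []

# Register a step definition (stand-in for Cucumber's Given/When/Then).
def Step(pattern, &block)
  STEP_DEFINITIONS << [pattern, block]
end

# Find the matching definition for a step line and run it with its captures.
def run_step(text)
  pattern, block = STEP_DEFINITIONS.find { |pat, _| pat.match?(text) }
  raise "Undefined step: #{text}" unless pattern
  block.call(*pattern.match(text).captures)
end

CART = []

# Hypothetical step definition; a real suite would touch the
# 'Add to cart' button here instead of mutating an array.
Step(/^I add (\w+) to shopping cart$/) do |product|
  CART << product
end

run_step("I add product to shopping cart")
```

The real framework works the same way in spirit: each line of the scenario is matched against registered patterns, and the matched Ruby block performs the on-screen action.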
Storing Analytics Data
Our first testcase performs the user actions and ensures that every action triggers analytics parameters to be stored in an analytics.txt log file.
During execution of the first testcase, all data is written to the analytics.txt file. We then need to verify that the analytics parameters stored in analytics.txt produce valid results.
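A minimal sketch of the logging half, assuming analytics.txt stores one `key=value` pair per line (that format, the `log_analytics` helper, and the temp-dir location are our assumptions for illustration; the actual Testmunk implementation is not shown here):

```ruby
require "tmpdir"

# Assumed log location; a real suite might write next to the test results.
ANALYTICS_LOG = File.join(Dir.tmpdir, "analytics.txt")
File.delete(ANALYTICS_LOG) if File.exist?(ANALYTICS_LOG)

# Append each captured analytics parameter to analytics.txt,
# one key=value pair per line.
def log_analytics(params)
  File.open(ANALYTICS_LOG, "a") do |f|
    params.each { |key, value| f.puts("#{key}=#{value}") }
  end
end

# Called from the UI-test steps as actions fire analytics events:
log_analytics("pageName" => "LoginPage", "userStatus" => "notLoggedIn")
log_analytics("timeOnPage" => "120")
```

Each step of the scenario would call the helper with whatever parameters its action is expected to fire, so the file accumulates the full event trail of the test run.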
Scenario: Verify analytics parameters
  Given analytics data is recorded
  And I see an analytics data with “parameter” and “value”
| parameter   | value       |
| pageName    | LoginPage   |
| productName | Item1       |
| userStatus  | notLoggedIn |
| timeOnPage  | 120         |
In the above scenario, the first step (Given analytics data is recorded) will parse analytics.txt into a Ruby-readable format and prepare all data to be verified.
The second step (And I see an analytics data with “parameter” and “value”) then checks each parameter/value pair from the table against the parsed data, and fails the testcase if an expected entry is missing or has the wrong value.
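The two verification steps can be sketched as follows, again assuming the one-`key=value`-per-line log format (the format, file path, and variable names are our assumptions, not Testmunk’s actual step implementation):

```ruby
require "tmpdir"

# Simulate the log produced by the first testcase.
log_path = File.join(Dir.tmpdir, "analytics_verify.txt")
File.write(log_path,
           "pageName=LoginPage\nproductName=Item1\n" \
           "userStatus=notLoggedIn\ntimeOnPage=120\n")

# Step 1 (Given analytics data is recorded):
# parse analytics.txt into a Ruby hash of parameter => value.
recorded = File.readlines(log_path).to_h { |line| line.chomp.split("=", 2) }

# Step 2 (And I see an analytics data with "parameter" and "value"):
# compare each expected table row against the parsed data.
expected = {
  "pageName"    => "LoginPage",
  "productName" => "Item1",
  "userStatus"  => "notLoggedIn",
  "timeOnPage"  => "120",
}
mismatches = expected.reject { |key, value| recorded[key] == value }
raise "Analytics mismatch: #{mismatches}" unless mismatches.empty?
```

In a Cucumber step definition, the `expected` hash would come from the scenario’s data table rather than being hard-coded, and the `raise` would fail the testcase with a report of the mismatching parameters.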
We’d love to hear your thoughts on this topic! If you are considering automating your analytics testing, or have your own solution you want to share, don’t hesitate to reach out. We’d love to see how we can help you, or learn from our colleagues.
About the author:
Martin Poschenrieder has been working in the mobile industry for most of the past decade. He began his career as an intern for one of the few German handset manufacturers, years before Android and iPhone were launched. After involvement with several app projects, he soon realized that one of the biggest pain-points in development was mobile app testing. In order to ease this pain, he started Testmunk. Testmunk is based in Silicon Valley, and provides automated app testing over the cloud.
Follow Martin on Twitter