
Testing on Emulators vs Real Devices

Posted on August 14th, 2015

As a mobile app testing organization, we know that no problem hits closer to home than device fragmentation. When planning a new app, or porting an app from iOS to Android, addressing device fragmentation through testing is one of the dev and QA teams' biggest concerns. With so many devices on the market, they want to know whether testing on real devices is worth the expense (or whether it can be done without significant expense), or whether emulators can do the job as well, or more cheaply.

Device fragmentation is a very real problem when testing apps, particularly for UI and functional tests. While the problem is improving on the OS fragmentation side (approximately 95% of users are on Android 4.0.3 or above), there are still well over a thousand device brands in the Android marketplace. Indeed, on the device side, fragmentation has increased 28% in the past year, according to OpenSignal's Android Fragmentation Report, released August 5th.


Android device fragmentation, August 2015

From the Fragmentation Report data, we can determine that while testers can safely focus on Android 4.0.3 and above, the number of devices with unique specifications and proprietary build variations has increased. Apple, on the other hand, has far less variation: almost all Apple devices are running iOS 8 or above, and Apple only actively sells four iPhone models (though there are numerous older models still in use). For testing purposes, this understandably means that Android requires a bit more attention, but it does NOT mean there is no benefit to testing on real iOS devices.

For the above reasons, Testmunk has committed considerable resources to providing our customers with simple, easy access to real device testing. Simply put, there are far too many devices for emulators to be your only solution.

Emulators/Simulators for Testing

For a startup on a budget, there are tempting reasons to consider emulators as a primary testing option. One of the biggest is the cost of procuring enough devices (and ensuring the selected devices match the expected user base of the app): with over 24,000 distinct devices out there, it is impossible to cover the full range completely. Emulators are often free, or available for a low monthly fee (Genymotion is an example of the latter). Even paid options are very cheap compared to a single device, let alone a full test lab of devices.


Example iOS simulator

Another reason emulators have seen more use is that some of the common UI problems that argued for real device testing have been alleviated or resolved. Screen size is one of those areas. The range of screen sizes for Android devices continues to be very large, as seen in the previously mentioned OpenSignal Fragmentation Report.


Unique screen sizes, iOS vs Android, as of August 2015

What has changed, however, is that Google has worked hard, through new developer documentation and the guidelines of its Material Design language, to make it easier to design apps that adapt to many screens. As a result, screen and visual display issues are less common. In theory, one could argue that if the app displays correctly in the emulator, it will do so on the device, provided you test against multiple display sizes. However, this was never the only problem with emulated testing, and the argument is certainly not foolproof.

In addition, some things are easier to test on an emulator, such as opening a web application through a URL. With an emulator, this can be done by copying and pasting; on a real device, it typically means typing on the touch screen.
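That said, a URL can also be pushed to either target from the command line, which sidesteps touch-screen typing entirely. A minimal sketch, assuming the Android SDK platform-tools (and, for iOS, Xcode) are installed and a target is booted; the URL is illustrative:

```shell
# Launch a URL in the default browser of an attached Android
# emulator or device (URL is just an example):
adb shell am start -a android.intent.action.VIEW -d "https://example.com"

# The iOS simulator equivalent, using Apple's simctl tool:
xcrun simctl openurl booted "https://example.com"
```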

Capturing screenshots of UI or display issues can also be easier from an emulator, since it already runs on your PC or Mac. The Windows Snipping Tool allows quick capture and sharing on a PC, and the Mac has built-in screenshot shortcuts. Third-party applications abound for this purpose on both platforms as well; popular examples include TechSmith's Snagit (now available on both platforms), and Lightshot or Snapz Pro X for Mac.
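If you are scripting rather than snipping, screenshots can be pulled straight from an Android emulator or device over adb as well. A sketch, with illustrative file paths:

```shell
# Capture the current screen on the emulator/device, then copy
# the image to the local machine (file names are illustrative):
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png ./screen.png
```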

Additionally, an emulator can extract data in real time and refresh reports as it runs, providing the development team with the data they need to debug issues. (With the right automated testing platform, this is also possible on real devices, so this is not as big an advantage as it seems.)
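Live log output is one example of that real-time data. Assuming a hypothetical log tag used by the app under test, it can be streamed from an emulator (or an attached device) while tests run:

```shell
# Stream timestamped log output, silencing everything except a
# hypothetical tag used by the app under test:
adb logcat -v time MyAppTag:D '*:S'
```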

So, long story short, there is a place for emulators in the testing sphere. It’s just not on the front lines. Let’s look at the reasons why.

Testing with Real iOS and Android Devices


Some testmunk test devices

While an emulator is perfectly adequate for initial testing during development, especially for iterative tests of small portions of code, it is not adequate for final testing of a product, because it simply cannot exercise everything that needs to be tested. Testing on real handsets always gives accurate results for the device in question, without false positives or negatives. It is Testmunk's position that real device testing is an absolute requirement for a successful app: it is the only way to be one hundred percent sure that a given feature works on a given device.

Memory, chipset and other hardware related issues

Memory issues are rampant in app testing, and cannot be tested adequately by emulators. One problem is that developers often focus their attention on the highest-quality devices, and build their apps to wow users on those platforms. Unfortunately, this often means that commonly used but lower-performance devices cannot handle the result. An emulator, with dedicated resources just for running the app, will quite often tell you that everything is fine, simply because it does not account for the other processes and functions that might be competing for resources on a low-end system. Testing on multiple real devices will alert you to any such issues.
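You can narrow (though not close) this gap by starting the emulator with constrained RAM. A sketch; the AVD name is a placeholder for one you have created, and 512 MB is an illustrative low-end figure:

```shell
# Boot an Android Virtual Device with only 512 MB of RAM to roughly
# approximate a low-end handset (AVD name is a placeholder):
emulator -avd My_Low_End_AVD -memory 512
```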

Chipsets

We already mentioned that device fragmentation has increased dramatically over the past year: over 4,000 more device configurations were discovered this year compared to last. While not all of these have dramatically different chipsets, display sizes, sensors, or other hardware, it is safe to say that among 24,000 distinct Android devices, some significant permutations are bound to occur. Chipsets in particular can produce vastly different user experiences between high-end and low-end devices, yet can completely fool an emulator, because the processor of the PC it runs on is many times more powerful than that of a given Android device. Even with constraints in place, the emulator can inadvertently borrow host resources to get the job done.
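A quick way to see this for yourself: print the CPU details on each target. On an emulator the output reflects the virtualized host processor, not a real handset's chipset:

```shell
# Inspect the CPU the app actually runs on; on an emulator this
# shows the virtualized host CPU rather than a phone's SoC:
adb shell cat /proc/cpuinfo
```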

Sensors

Many sensor issues simply cannot be tested via emulators. While Android and iOS emulators can simulate some sensor input (with limitations), features like push notifications, geo-location, and device orientation are impossible to test adequately without a real device. NFC and other hardware-dependent functions also require a real device.
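Geo-location is a good example of the "with limitations" caveat: the Android emulator will accept an injected mock fix, but that tells you nothing about real GPS behavior in the field. The coordinates below are illustrative:

```shell
# Inject a mock GPS fix into a running Android emulator
# (longitude first, then latitude; values are illustrative):
adb emu geo fix -122.4194 37.7749
```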

Battery

Another physical phone attribute that cannot be tested by an emulator is the battery. There is simply no way to measure the efficiency and power consumption of your application via an emulator, and an app that kills a phone's battery in an hour will be discarded very quickly.
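On a real device, by contrast, battery behavior can be sampled directly while your app runs. A sketch; the package name is a placeholder:

```shell
# Read the current battery state of an attached device:
adb shell dumpsys battery

# Dump per-app battery statistics since the last full charge
# (package name is a placeholder):
adb shell dumpsys batterystats --charged com.example.myapp
```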

Display

While we noted above that Android has made it much easier to develop for a multitude of screen sizes and resolutions, an emulator may still not give you the information you need with regard to brightness, saturation, and many other display factors.

Usability Issues

Emulator limitations are not confined to hardware or physical device attributes. There are situations common to any real device that simply cannot be simulated.

Phone interrupts

One of these is interruption by a phone call, message, or push notification. An emulator cannot adequately show how such network-related events will interact with your application. Another is the difference between mouse-and-keyboard and touchscreen interaction: pinching the screen to shrink a view, or spreading thumb and forefinger to expand it, behaves considerably differently on a real touch screen.
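To be fair, the Android emulator console can fake a basic incoming call or SMS, but only over its simulated radio; it still cannot reproduce a real carrier interruption mid-session. The phone number and message below are illustrative:

```shell
# Simulate an incoming call and an incoming SMS on a running
# Android emulator (number and text are illustrative):
adb emu gsm call 5551234567
adb emu sms send 5551234567 "Hello from the test rig"
```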

Look and Feel

In general, the look and feel of an application cannot be evaluated adequately on an emulator. You will not be able to judge the responsiveness of the application on a given phone through emulation. The brightness of the display, and whether the color scheme works in outdoor light, are other areas of potential concern. Can you operate the device easily while walking around? Only a real device can tell you.

Platform and Carrier Considerations

Typically, your emulator will be running a vanilla version of the OS. This means that fundamental differences between vendors and carriers may not be adequately covered. Carriers and vendors alike add interface layers, skins, and other middleware on top of the stock OS; in some cases, both the carrier and the device manufacturer add layers. (For example, Samsung ships its own proprietary build, and a carrier may add software on top of that.) This adds another layer of fragmentation that an emulator simply cannot keep up with.

Challenges of Real Device Testing

Device Cost

Many startups simply cannot afford enough devices to create a good representation of the market for automated or one-to-one testing. Any real device testing is preferable to none. Emulators are a useful (and cheap) tool for testing mobile apps, and certainly have their place in the development process. However, for a true-to-life evaluation of user experience and an in-depth analysis of functionality, there is no substitute for a real device.

Logistics

Aside from the cost of real devices, there is the logistical problem of storing them and performing tests on them. Without a way to test multiple devices simultaneously, testing on real devices is a decidedly manual process. Even if you can test multiple devices at once on an automated platform, they take up a considerable amount of room: even five devices can clutter a desk a great deal, to say nothing of ensuring the right cord stays with the right device. There is also always the chance your test devices could be stolen, especially if left out on a desk for long periods to run tests.

If you are setting up automated testing, another issue is that each device may need configuration in order to connect with the platform; one example is enabling ADB debugging on your Android devices. You also have to coordinate how they will all charge. The more devices these tasks must be done for, the more they cost your organization in time and effort.
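Once USB debugging is enabled on each handset (under Settings > Developer options), a quick check confirms that the platform can actually see them all:

```shell
# List every attached device and emulator, with model and
# transport details for each:
adb devices -l
```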

Mitigating these issues

If you cannot afford an adequate test lab, one stopgap is to have employees purchase different devices for personal use and encourage late-stage testing of your app on them. Just be aware that no one will want to be saddled with a low-end device, so the resulting pool may not be representative.

An automated mobile cloud testing service can provide startups (and larger organizations) with the help they need to mitigate both of the above concerns: device cost and storage. With such a service, you can create and run dozens of automated functional tests on over a hundred real devices at once, and see the results in minutes. While this cannot replace hands-on physical testing altogether, you get a thorough (and repeatable) overview of your key requirements that mitigates the risk of limited device availability. Such a service also helps with setup time and security: the devices are kept at your disposal in a ready state, and you are not compromising PC or device security in any way. In combination with emulated tests in the early stages, and physical UI testing on a handful of devices in your own offices, this can ensure a quality app.

Summary

As an automated app testing provider, we know full well the value of real device testing, and for that reason we provide only real devices to our customers. Real device testing is an indispensable part of the app development process, and should never be skipped, no matter how many improvements are made to emulators or to Android's code base and developer guidance. Users can tell when corners are cut, and they are not forgiving of errors. Do not be daunted by the number of devices out there: even with a limited range of devices, you will end up with a better app than you would without testing on physical devices at all. If you cannot afford a large sample of real devices, automated cloud testing may be your best bet for smoke testing your application. With a service such as ours, you can see your app in action on a lab that might cost upwards of $20,000 (100 devices at a $200 average) to purchase yourself. Not to mention, you will not have to trip over a hundred different USB cords.

P.S. Stay tuned for an upcoming article on how we manage our large array of devices!

We’d love to hear your opinions on testing apps using emulators versus real devices! Tweet us @testmunk or comment below.

About the author:
Martin Poschenrieder has been working in the mobile industry for most of the past decade. He began his career as an intern for one of the few German handset manufacturers, years before Android and iPhone were launched. After involvement with several app projects, he soon realized that one of the biggest pain-points in development was mobile app testing. In order to ease this pain, he started Testmunk. Testmunk is based in Silicon Valley, and provides automated app testing over the cloud.
Follow Martin on twitter
