Image Comparison Testing: Are selfies taken, sent and received correctly?

Posted by Martin Poschenrieder on March 22nd, 2016

With more than 250 million photos uploaded to Facebook per day and around 1 trillion photos taken worldwide in 2015, it's clear how much smartphones have changed the way we take pictures. In May 2015, Business Insider highlighted that over 8,796 photos are shared on Snapchat every second; I estimate it's over 20,000 now, in March 2016.

We are seeing the camera feature grow in popularity not only in social consumer apps, but increasingly also in business-oriented apps and in industries such as healthcare, banking, and even manufacturing, to mention just a few.

As part of our constant efforts to improve app testing, testmunk naturally cares a lot about better ways to test 'camera' and 'taken image' features. In this article we'd like to walk you through a setup that we have used for a social app with exactly these testing needs. We very much enjoyed our journey to establish this setup, so we felt it was worth sharing; hopefully you can take advantage of it as well.

Image Comparison Possibilities

Before we dive deep into our setup, let's define "image comparison testing". For us it means comparing two similar images. In testing terms this could be comparing two screenshots: you take a screenshot of a specific UI view now and check it against the same UI view in a newer app build later.

The goal of such testing is to make sure the screenshots look identical, meaning that all elements are on the screen and have the same size and position as in the previous "old" UI. Image comparison can also be used to verify that pictures are taken and received successfully (and in decent quality) on different devices. This article elaborates on the latter form of testing.

Our device setup

For our setup, we are using two Moto G devices on Android 4.4, along with two 5-inch-tall plastic figures ("Joker" and "Hangover") that we set up in front of the devices. These are the subjects of our photos.

[Image: Joker and Hangover with two test devices]

In order to test the app's back-and-forth sending, we used testmunk's app-to-app testing capabilities and extended the testcase with a few image comparison steps, which are described in the following paragraphs.

Test flow

Our goal is to make sure that when User 1's device takes a picture and sends it to User 2's device, it arrives correctly and looks as it should when compared to a master image. Then User 2 takes a picture, sends it to User 1's device, and we verify that the correct image is received, again by comparing it to a master image.

[Image: Test flow for image comparison between the two devices]

You might wonder whether this functionality could be tested with just a single device (i.e., sending to yourself). You can, but only partially: you can take the image, send it to a different username, then log out and log back in as the second user on the same device. However, testing with two devices, as shown in the image above, has several advantages:

  1. We are able to measure how long it takes to send and to receive the image. We can also set a quality threshold, where if it takes longer than X seconds, the test is considered a failure.
  2. Using a separate device in real time reflects how users actually use your app. Manual testing for such use cases consists of holding two devices and sending messages/pictures back and forth; this automated methodology comes as close as possible to that.
  3. Besides checking whether the correct image appeared, apps usually have a 'notification' or counter integrated into their UI. If you had to log in as a new user and start the app manually, you would not exercise this functionality the same way as a user who actually uses the app consistently.
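
To make this flow concrete, a scenario along these lines could look roughly as follows in Gherkin. Apart from the image comparison step (defined later in this article), the steps and the master image name are hypothetical placeholders that you would back with your own app-to-app step definitions:

# Sketch only: every step except the image comparison step is a hypothetical
# placeholder for your own step definitions.
Feature: Selfies are sent and received correctly

  Scenario: User 1 sends a selfie to User 2
    Given user 1 is logged in on device 1
    And user 2 is logged in on device 2
    When user 1 takes a picture of "Joker" and sends it to user 2
    And I switch to device 2
    Then I expect atmost "2%" difference when comparing with "joker_master.png"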

Approaches to Image Comparison Testing

Image comparison is in general a tricky topic, and different people and organizations have chosen different approaches. Some comparisons are as simple as pixel-by-pixel checking; very advanced scenarios may search for a small image within a bigger image, or even compare images that are slightly shifted or compressed.

We've chosen the simple approach for now: a pixel-by-pixel check. This check uses a difference blend, which is the same approach GitHub uses to diff images. If we have pixelation, or an image that is slightly lighter or darker, the steps will still be able to make the comparison. Another benefit is that it returns a realistic readout of the percentage changed, and allows us to set maximum thresholds while testing.
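
For a single color channel, the difference blend boils down to the absolute difference between the two values, which is exactly what the per-channel expression in the library further below computes. A minimal Ruby illustration:

# For two channel values a and b (0..255), the difference blend
# a + b - 2 * min(a, b) equals the absolute difference |a - b|.
a, b = 200, 180
puts a + b - 2 * [a, b].min  # => 20
puts (a - b).abs             # => 20, same result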

If you want to compare an image (local or remote) with the current screenshot, it needs to match the screenshot's resolution in order to be effective. The best use case is testing the app on a device that you already have screenshots for.

The image comparison works by looping over each pixel in the first image and verifying that it is the same as the pixel at the same location in the second image.

Using the oily_png gem

The oily_png gem is a library that reads and writes .PNG files; it is a Ruby C extension to the pure-Ruby ChunkyPNG library. We decided to use oily_png because of its performance advantages, and speed is one of the top factors we consider when testing. By the way, an image comparison / screenshot library that also looks quite neat, but that we haven't had the chance to try out yet, is the ios-snapshot-test-case library developed by Jonathan Dann and Todd Krabach at Facebook.
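
As a quick orientation, this is roughly what the ChunkyPNG API that oily_png accelerates looks like (the file name below is just an example):

require 'oily_png'  # loads ChunkyPNG with the C-accelerated code paths

image = ChunkyPNG::Image.from_file('screens/login_screen.png')  # example file
pixel = image[0, 0]                            # color value of the top-left pixel
puts ChunkyPNG::Color.r(pixel)                 # red channel value (0..255)
puts "#{image.width}x#{image.height} pixels"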

Installing oily_png

oily_png is a Ruby gem that can be installed with the following command:

$ gem install oily_png
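
Alternatively, if your test project manages its gems with Bundler, you can add it to your Gemfile and run bundle install:

# Gemfile
gem 'oily_png'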

In our setup example we focus on the Calabash testing framework, which is based on Cucumber. This means we create a new Ruby (.rb) step definition file and paste in the following code.

The below code loops over both images and uses a difference blend to determine their difference per RGB channel.

require 'oily_png'
require 'open-uri'
require 'fileutils'
include ChunkyPNG::Color

# small helper for prefix checks (String#start_with? is used below instead)
def starts_with(item, prefix)
  prefix = prefix.to_s
  item[0, prefix.length] == prefix
end

# compares two images on disk, returns the % difference
def compare_image(image1, image2)
  images = [
    ChunkyPNG::Image.from_file("screens/#{image1}"),
    ChunkyPNG::Image.from_file("screens/#{image2}")
  ]
  count = 0
  images.first.height.times do |y|
    images.first.row(y).each_with_index do |pixel, x|

      # difference blend: each channel becomes |channel1 - channel2|
      images.last[x,y] = rgb(
        r(pixel) + r(images.last[x,y]) - 2 * [r(pixel), r(images.last[x,y])].min,
        g(pixel) + g(images.last[x,y]) - 2 * [g(pixel), g(images.last[x,y])].min,
        b(pixel) + b(images.last[x,y]) - 2 * [b(pixel), b(images.last[x,y])].min
      )
      # a value of 255 is pure black with full alpha, i.e. an identical pixel
      if images.last[x,y] == 255
        count = count + 1
      end
    end
  end

  # count holds the number of identical pixels, so this is the % that differ
  100 - ((count.to_f / images.last.pixels.length.to_f) * 100)
end

# find the actual screenshot file on disk (the saved name may get a suffix appended)
def get_screenshot_name(folder, fileName)
  foundName = fileName
  Dir.foreach(folder) do |item|
    next if item == '.' or item == '..'
    if item.start_with? fileName.split('.')[0]
      foundName = item
    end
  end

  foundName
end

# takes a screenshot of the current screen and compares it with screens/<fileName>;
# fails the step when the difference exceeds percentageVariance
# (or, with forNotCase, when the images are at or below the allowed variance)
def setup_comparison(fileName, percentageVariance, forNotCase = false)
  screenshotFileName = "compare_#{fileName}"
  screenshot({ :prefix => "screens/", :name => screenshotFileName })

  screenshotFileName = get_screenshot_name("screens/", screenshotFileName)
  changed = compare_image(fileName, screenshotFileName)
  FileUtils.rm("screens/#{screenshotFileName}")

  if forNotCase
    # "should not match" case: fail when the images are too similar
    failed = changed.to_i <= percentageVariance
    message = "Error. The screenshot matched the source file. Difference: #{changed.to_i}%"
  else
    # normal case: fail when the images differ by more than the allowed variance
    failed = changed.to_i > percentageVariance
    message = "Error. The screenshot was different from the source file. Difference: #{changed.to_i}%"
  end

  fail(message) if failed
end

# downloads a remote reference image and compares it with the current screen
def setup_comparison_url(url, percentageVariance)
  fileName = "tester.png"
  open("screens/#{fileName}", 'wb') do |file|
    file << open(url).read
  end

  setup_comparison(fileName, percentageVariance)
  FileUtils.rm("screens/#{fileName}")
end



Then(/^I compare the screen with "(.*?)"$/) do |fileName|
  setup_comparison(fileName, 0)
end

Then(/^I compare the screen with url "(.*?)"$/) do |url|
  setup_comparison_url(url, 0)
end

Then(/^the screen should not match with "(.*?)"$/) do |fileName|
  setup_comparison(fileName, 0, true)
end

Then(/^I expect atmost "(.*?)" difference when comparing with "(.*?)"$/) do |percentageVariance, fileName|
  setup_comparison(fileName, percentageVariance.to_i)
end

Then(/^I expect atmost "(.*?)" difference when comparing with url "(.*?)"$/) do |percentageVariance, url|
  setup_comparison_url(url, percentageVariance.to_i)
end

If you are using local screenshots, add the source images to a "screens" folder at the same level as the features folder. You will use the names of these images in your test steps.
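
For a typical Calabash project, the resulting layout would look roughly like this (the file names are only examples):

features/
  selfie_sending.feature
  step_definitions/
    image_comparison_steps.rb   # the library shown above
screens/
  login_screen.png              # source images referenced in the steps
  joker_master.png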

The following Calabash/Cucumber test steps are available after adding the library:

Then I compare the screen with "login_screen.png"
Then I expect atmost "2%" difference when comparing with "login_screen_fail.png"

Then I compare the screen with url "http://testmunk.com/login_screen.png"
Then I expect atmost "2%" difference when comparing with url "http://testmunk.com/login_screen_fail.png"

Then the screen should not match with "screen2.png"

You have three different types of steps: one asserts an exact match, another asserts an approximate match (e.g. up to 2% difference), and the last one asserts that the image does not match (useful for checking whether a view-changing action has happened). You can use either local files (which must be present in the screens/ folder) or remotely hosted files.

If there is a match failure, you will get the percentage difference in the output so you know how similar the screenshot was to the source.

Food for thought:

  1. As we mentioned in the introduction, image comparison can also be used for comparing UI views. However, we have found its practical application limited, since it can only be applied to cases where the data does not change. That is to say, if your UI shows newer data but is still correct, the image comparison will incorrectly report a failure.
  2. We have found it hard to get 100% identical images, simply because UI elements of the native top bar (such as the time or battery level) are often part of the actual screenshots. Because of this, when testing UI screenshots, we set our success match factor to 98% accuracy for best results (one way to crop the top bar out entirely is sketched after this list).
  3. Because the focus of the testing is on successful delivery and accuracy of the image, we decided on two identical devices (Moto Gs), since our pixel-by-pixel comparison would run into challenges comparing images with different resolutions.
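
If the status bar noise from point 2 becomes a problem, one option is to crop it away before comparing. Below is a minimal sketch using ChunkyPNG's crop; the bar height and file names are assumptions you would adjust for your own devices:

require 'oily_png'

STATUS_BAR_HEIGHT = 50 # assumed status bar height in pixels; adjust per device

# Returns a copy of the image with the top status bar removed.
def crop_status_bar(path)
  image = ChunkyPNG::Image.from_file(path)
  image.crop(0, STATUS_BAR_HEIGHT, image.width, image.height - STATUS_BAR_HEIGHT)
end

# Example (hypothetical file name): save a cropped copy and compare that instead.
crop_status_bar('screens/login_screen.png').save('screens/login_screen_cropped.png')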
About the author:
Martin Poschenrieder has been working in the mobile industry for most of the past decade. He began his career as an intern for one of the few German handset manufacturers, years before Android and iPhone were launched. After involvement with several app projects, he soon realized that one of the biggest pain-points in development was mobile app testing. In order to ease this pain, he started Testmunk. Testmunk is based in Silicon Valley, and provides automated app testing over the cloud.
Follow Martin on Twitter
