Over the past few years, I have been in constant contact with many founders, CEOs, CTOs, and others responsible for the success of their organizations. I have seen companies grow from a few people to a few hundred. Quite often, when a software development startup hits a certain level of growth (in my experience, typically at approximately 15 to 20 people), it begins to look for a QA manager. By QA manager, I mean someone who is formally tasked with everything related to QA and testing, and to whom those assigned such tasks report. Prior to this level of growth, the typical process has the whole team splitting the testing duties to some extent, with limited oversight. There might also be a “customer support/happiness” person whose duties include gathering client testing results and communicating upcoming fixes to stakeholders, among other things.
As the team grows, the product or products become more and more complex, requiring more and more structure to deliver successfully. Usually, this realization comes after serious last-minute mistakes make the organization’s growing pains obvious.
Finding the right QA manager for a startup can be challenging, and several CEOs, founders, and CTOs have lately been asking me how to evaluate good candidates and what the best questions to ask might be. In this post, I will list some of these questions, along with some of the answers I think will help you find the right candidate for your company. These questions can be a good starting point for your search, but bear in mind that there are many questions you will need to come up with on your own to determine whether the individual has the right skill set for your organization, and whether the candidate fits in with your corporate philosophy, team, and culture.
Disclaimer: These questions are not ranked in the order they should be asked, nor are they a definitive top ten. They are simply questions that are important to ask during the course of an interview.
Here are some of the questions I would ask:
1) Let’s say you are the first QA manager joining our startup. What are the first three things you would do?
Smaller companies, and in particular startups of fewer than 7 people, very often do not have a formal testing plan. It is important for the QA manager candidate to ask questions of his own regarding the current process and the challenges facing the organization that led to the search for a QA manager. That conversation will usually lead naturally to this type of question. A strong testing plan, if one is not already in place, should be a QA manager’s first order of business, and one of his go-to responses to this question. The candidate should be able to outline his methodology for implementing this test plan, including how he or she would go about creating it, based on the information gleaned from the candidate’s questions about process and challenges.

Criteria could include a list of testcases/scenarios that make up the specific use case to be tested. A QA manager generally formalizes these use cases in a spreadsheet or Google document, then defines columns with the conditions under which the tested function is a success. Depending on how complicated or comprehensive the app is, the testing plan could contain as few as 20 testcases or as many as a thousand (though typically, if your project is that large or involved, you are no longer a startup). Smaller startups typically start with a Google spreadsheet and expand from there.
Depending on the size and needs of your startup, acceptable answers might also include:
- Validating and setting up a bug tracking tool and QA process – he should be able to outline what this solution might look like.
- Executing manual testing – he might promise to help by “getting his hands dirty” in order to identify areas of improvement from first-hand experience of the process.
- Managing or buying test devices, in order to ensure testers are not reliant on emulators.
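The spreadsheet-style test plan described above can be sketched as structured data. This is a minimal illustration; the column names and testcases are assumptions for the example, not a prescribed format:

```python
import csv
import io

# Illustrative columns for a minimal startup test plan.
# Each row is one testcase with an explicit pass/fail criterion.
TEST_PLAN = [
    {
        "id": "TC-001",
        "feature": "Sign in",
        "steps": "Enter valid email and password, tap Sign In",
        "expected": "Welcome screen appears within 2 seconds",
        "status": "not run",
    },
    {
        "id": "TC-002",
        "feature": "Sign in",
        "steps": "Enter valid email and wrong password, tap Sign In",
        "expected": "An 'invalid credentials' error is shown",
        "status": "not run",
    },
]

def to_csv(plan):
    """Export the test plan so it can be imported into a spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=plan[0].keys())
    writer.writeheader()
    writer.writerows(plan)
    return buf.getvalue()

print(to_csv(TEST_PLAN))
```

Starting from plain rows like these makes it easy to graduate later to a dedicated test management tool without losing the original cases.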
2) What’s your definition of a good testcase? Can you give me an example?
Here we want to determine if the QA manager is a strong communicator, and knows the level of detail required to ensure that a test is truly successful, and properly defined.
- A bad testcase:
“Check that sign in works.”
- A good testcase:
On the sign-in view, enter username: “email@example.com” and password: “1234”. Then hit the “Sign In” button. After no more than 2 seconds, you should see a Welcome screen indicating “Welcome back”.
A good testcase, then, is short and concise, but thorough. The definition of done (the definition of a successful testcase) should leave no room for doubt. It is also best practice to create independent testcases that test only one thing at a time.
3) Let’s say you have a test plan with over 200 testcases. How do you decide what should be automated and what should still be done manually?
In this case, we want the candidate to show a good sense of both the priority of each testcase and the feasibility of automating it.
Some of the questions the candidate should raise in this evaluation:
- Which scenarios are tedious and take a lot of time to run manually?
- Which scenarios have been missed in the past?
- Where did we see device differences due to fragmentation?
- Which parts of our application are prone to regressions?
- Which testcases are complicated and would take a lot of time to establish an automated testcase for?
- Which parts of the app will likely change over the next few weeks? (If certain parts of the app are about to be changed, we recommend not to start with automated testing for these cases.)
- Which testcases are best done manually as part of “explorative” testing and testing the user experience?
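One lightweight way to turn the questions above into a decision is a simple scoring pass over the test plan. The criteria and weights below are illustrative assumptions, not a fixed methodology:

```python
def automation_score(testcase):
    """Score a testcase: higher means a better automation candidate.
    Weights are illustrative and should be tuned per team."""
    score = 0
    score += 3 if testcase["tedious_manually"] else 0
    score += 2 if testcase["regression_prone"] else 0
    score += 2 if testcase["device_dependent"] else 0
    score -= 3 if testcase["ui_changing_soon"] else 0  # skip areas about to change
    score -= 2 if testcase["exploratory"] else 0       # keep UX checks manual
    return score

cases = [
    {"name": "sign-in flow", "tedious_manually": True, "regression_prone": True,
     "device_dependent": True, "ui_changing_soon": False, "exploratory": False},
    {"name": "new onboarding screens", "tedious_manually": False,
     "regression_prone": False, "device_dependent": False,
     "ui_changing_soon": True, "exploratory": True},
]

ranked = sorted(cases, key=automation_score, reverse=True)
print([c["name"] for c in ranked])  # sign-in flow ranks first
```

A candidate does not need a formula like this, but he should be able to articulate an equivalent mental model: automate the stable, tedious, regression-prone paths first, and leave fast-changing or exploratory areas manual.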
4) How do you determine which devices and OS versions we should test on?
This should generally be an easy question for the candidate. Good candidates will point to app analytics as the best measure, looking for the most-used devices for their particular app. Another good answer would be to check the app reviews, where people might have complained about specific issues happening on their devices. Looking at the top devices on the market is also a valid answer, but if it is the only answer, it is a little uncreative. The best answers will combine all of these elements, and possibly add creative variations based on the industry or target audience of the app in question. To get a more targeted response, this question could be modified to pertain to a specific application or application type, e.g. “How would you determine which devices and OS versions to test on for an app designed around teaching a language?”
Answers to this variation might include looking into the devices most used at universities and schools, looking into the testing strategies of competing apps, or looking into the devices most common in areas where learning the particular language is in demand. For example, learning French might be popular in Quebec, Louisiana, regions of Africa, etc. You would like to see a thoughtful, well-considered answer that is cognizant of the app market and current device trends.
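The “combine all of these elements” answer can be sketched as a weighted merge of the signals: analytics usage, devices named in negative reviews, and current market leaders. The weighting scheme and sample data are illustrative assumptions:

```python
from collections import Counter

def pick_test_devices(analytics_sessions, review_complaints, market_top, n=5):
    """Combine signals into one ranked device list: usage analytics carry
    the most weight, while review complaints and market leaders add boosts.
    The boost values are illustrative, not a recommendation."""
    weight = Counter()
    for device, sessions in analytics_sessions.items():
        weight[device] += sessions
    for device in review_complaints:
        weight[device] += 500  # a user complaint is a strong signal
    for device in market_top:
        weight[device] += 200
    return [device for device, _ in weight.most_common(n)]

devices = pick_test_devices(
    analytics_sessions={"Pixel 7": 1200, "Galaxy S21": 900, "Moto G": 300},
    review_complaints=["Moto G"],
    market_top=["Pixel 7", "Galaxy S23"],
    n=3,
)
print(devices)
```

Note that a market-leading device your users never touch (here, the hypothetical “Galaxy S23”) drops out of the list, which is exactly the nuance you want the candidate to articulate.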
5) What automation frameworks do you have experience with, and what are their pros and cons?
When it comes to mobile automation, you hope to hear the candidate mention frameworks such as UI Automation, Robotium, Calabash, Appium, and Espresso. He should be able to highlight their pros and cons.
A QA manager is less likely to be the person who scripts automated tests, but if he or she is already familiar with some of the frameworks and can talk about them to some extent, that is definitely worth bonus points.
6) What are the most important considerations for leveraging mobile test automation effectively?
A good answer should highlight some of these aspects:
Test automation gives you a lot of test results in a very short amount of time. This information will not be much help unless it reaches the developer quickly, and the developer knows what to do with it. The QA manager should be able to filter this information down to the root causes and get the issue to the right developer in as short a time as possible, while still providing all the information he needs, such as screenshots, device logs, and the steps to replicate and test the issue.
In most cases these bugs will be filed in a bug tracking tool. In certain cases a testcase might need to be adjusted to cover the latest app change, so the candidate should be able to outline some sort of process regarding who updates the testcases and when.
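The hand-off described above — root cause, right developer, full context — can be sketched as a function that turns one failed automated run into a bug-tracker-ready report. The field names are illustrative, not any specific tool’s API:

```python
def format_bug_report(failure):
    """Turn one automated-test failure into a report a developer can
    act on without re-running the suite. Fields are illustrative."""
    lines = [
        f"Title: [{failure['device']}] {failure['testcase']} failed",
        f"Assignee: {failure['code_owner']}",  # route to the right developer
        "Steps to reproduce:",
    ]
    lines += [f"  {i}. {step}" for i, step in enumerate(failure["steps"], 1)]
    lines += [
        f"Expected: {failure['expected']}",
        f"Actual: {failure['actual']}",
        f"Attachments: {failure['screenshot']}, {failure['device_log']}",
    ]
    return "\n".join(lines)

report = format_bug_report({
    "device": "Galaxy S21, Android 13",
    "testcase": "Sign-in shows Welcome screen",
    "code_owner": "auth-team",
    "steps": ["Open app", "Enter valid credentials", "Tap Sign In"],
    "expected": "Welcome screen within 2 seconds",
    "actual": "Spinner never resolves",
    "screenshot": "signin_fail.png",
    "device_log": "logcat_s21.txt",
})
print(report)
```

Whether this formatting lives in a script or in the QA manager’s head, the point is the same: the developer should never have to hunt for the screenshot, the log, or the reproduction steps.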
(Disclaimer: For our new paid plans at testmunk, we support clients on this aspect so that they leverage automation to its fullest effect.)
7) To what extent should developers do their own testing, or do you believe testing is the responsibility of the QA team?
The answer to this question is really dependent on the company’s philosophy and is in this respect of particular interest to the interviewer, as it allows him/her to see if the candidate’s beliefs coincide with company philosophy.
At Testmunk, we hold that it is the developer’s responsibility to perform at least some of his own code testing. He is not expected to have the capacity, nor should his focus be, to run through large test plans or test on a large stack of devices. However, without the responsibility to review and test his own code, a sense of ownership will not develop.
In a sense, our view is a hybrid one, in that we believe in the power of a good QA manager, and potentially a team (depending on the size of the company), to support the process. Results improve even more if all parties have access to testcases and are able to run them regularly to verify whether the latest changes introduced any regressions.
8) What’s your experience using Continuous Integration as part of the development process?
9) What are some of the challenges in mobile app testing?
A strong candidate might mention some of these points:
- If from an Apple development background, the risk of App Store rejection is likely a top concern, especially the potentially lengthy turnaround of repeated submissions due to Apple’s 7-day approval process.
- The importance of app ratings, and their impact on number of downloads and potentially revenue for the company.
- If from an Android background, the risks of device fragmentation are likely a top concern, especially as it necessitates that functional testing be performed on a large set of devices.
- The importance of testing localization, not just language, but also settings and information specific to location, as well as integration of app GPS function.
- The importance of usability and acceptance testing (manual testing) in order to ensure that the user flow is intuitive and clear. They might also mention the importance of prototyping prior to writing code, with design and prototyping tools such as InVision.
- The importance of testing loading times, but also battery drain, and other problems that could negatively impact user engagement.
- To a much lesser extent, the growing risk of rejection from the Play Store, as Google strives to become more security- and quality-conscious.
10) How do you best manage manual and automated testing working together?
Not every scenario can be automated. Manual tests will always be necessary, especially as the app nears completion. Usability, intuitiveness, and design are elements you simply cannot truly automate testing for. Because of that, it is important to ensure that the manual portion of testing is performed on a regular schedule, and organized so as to coincide with and benefit from the results of the automated testing. A good QA manager will have ideas on how to ensure both sets of testing not only fit together in the schedule, but also complement each other. With the results from an automated suite on hand, a tester can more quickly go through his manual scenarios to check how the app looks and feels. Smart candidates should always look for new scenarios to test, both manually and in the automated test suite, in order to ensure the highest level of quality as well as design. The automated test suite should be considered not a shortcut, but a way of doing much, much more in the time allotted.
11) How do you think acceptance testing should be implemented in the development process?
Acceptance testing is typically considered the last stage of development, in which the last few details of the user interface and finicky design issues are dealt with, usually based on client feedback. If dealt with only at the end of the development cycle, acceptance testing can be the most time-intensive and frustrating part of the testing pyramid. In truth, in an agile environment, acceptance testing is more of a cloud surrounding the testing pyramid than its top layer. In other words, the ideal QA manager candidate will recognize that user acceptance testing can happen throughout the testing process, nailing down design and flow elements as early as possible in order to ensure that the developers can creatively bring them to life within a set framework. The QA manager must tie acceptance tests into app requirements and ensure the tests for user flow are properly connected to the functional tests more readily covered by automation. Ultimately, the ideal candidate should be able to manage the feedback from clients throughout the latter stages of testing, in order to ensure the “last-minute scramble” becomes more of a sprint to the finish line.
As you can see, there are many possible answers to many of these questions, and undoubtedly, you may have heard some excellent responses that eclipse these. If so, I’d love to hear them, as well as any thoughts on the questions and potential responses above. Another blog post you may be interested in is our interview questions for mobile product managers.
P.S. While not specifically hiring a QA manager, testmunk is hiring! See our job postings here.
About the author:
Martin Poschenrieder has been working in mobile for most of the past decade. He started his career as an intern for one of the few German handset manufacturers, years before Android and the iPhone were launched. After being involved in several app projects and realizing the pain of mobile app testing, he started testmunk. The company provides automated app testing over the cloud. Follow Martin on Twitter.