You can buy the best golf clubs on the market, but they won’t magically lower your handicap unless you know how to use them. Software testing is no different; it won’t give you good results unless you know how, who, what and when to use it.
Software testing is the process of ensuring that software is of the highest possible quality for its users, and of catching issues before they become a bottleneck.
There are many ways you can approach software testing, and we have recently written a best practice guide to QA Testing. However, it’s easy to get confused by the sheer number of testing types and how they overlap, let alone what each of them does.
That’s why we’ve built for you The Ultimate Guide to Software Testing.
This is the how of software testing: how do you implement your testing strategy? We’ve split this section into two categories: Manual Testing and Automated Testing.
Manual Testing
Manual testing is defined as testers manually executing test cases without the use of any automation tools. They play the role of the end-user and try to find as many bugs in the application as quickly as possible. The bugs are collated into a bug report, which is passed on to the developers to review and fix them.
An application cannot be tested using automation exclusively, so manual testing plays a vital role in software testing. It requires a certain mindset: patience, creativity and open-mindedness amongst its qualities.
We have put the below types of testing in the Manual Testing section:
- Exploratory Testing
- Manual Regression Testing
- Test Case Execution
Exploratory Testing
What is it?
Exploratory testing gives the tester the freedom to interact with an application and react as they see fit. Good testers adapt and figure out what is needed rather than follow predefined test procedures. Indeed, some thought leaders in the software testing industry describe exploratory testing as simultaneous test design and test execution.
In order to maximise the results of exploratory testing, specific parameters must be given to the testers, e.g. what parts of an application to test, how long to test for, etc. Good exploratory testing is a planned activity, but not scripted.
| Pros | Cons |
| --- | --- |
| No need for long preparation | Difficult to get right |
| Fluid approach | Unstructured nature can lead to inefficiency |
| Discovery of more unique bugs/functionality problems | Requires a certain mindset |
| Helps in complex testing situations | |
A major benefit is that the preparation doesn't have to be exhaustive, although it is still needed. When executed well, exploratory testing is fluid, unburdened by documentation or test cases, which makes it very effective at finding unique bugs and verifying functionality.
Exploratory testing is useful in complex testing situations when little is known about an application or more information is needed to write scripted tests.
A lack of planning before executing exploratory tests will lead to inefficient and unproductive results. At the same time, exploratory testing should not be scripted, which means striking the balance between the two is difficult.
Exploratory testing also relies heavily on the skill and mindset of the testers. A good exploratory tester requires many skills: lateral thinking, critical thinking, investigation skills, storytelling skills, good communication and technical skills.
Manual Regression Testing
What is it?
Manual regression testing is a method of verification performed manually. It is used to confirm that a recent update, bug fix or code change has not adversely affected existing features. It re-executes all or a subset of previously executed test cases to ensure existing functionality works correctly and no new bugs have been introduced.
| Pros | Cons |
| --- | --- |
| Improves product quality | Time-consuming and inefficient |
| Detects any side effects of updates/bug fixes/code changes | May cause their own side effects |
Regression testing is a necessity in all release cycles. When done correctly it can improve and maintain quality. Ideally, it should be performed after every single code commit, so you only ever need to go back one commit to find the cause of a problem, but this is not always practical.
When there are constant changes being implemented to an application, manual regression tests are very inefficient.
Test Case Execution
What is it?
Test cases help guide the tester through a sequence of steps to validate whether the application is working as intended. A good test case requires good writing skills, attention to detail and a good understanding of the application. Test case execution is the process of executing the code and comparing the expected and actual results. Test cases are assigned to testers to execute the tests, create the bug report and report the status of each one.
| Pros | Cons |
| --- | --- |
| Step-by-step process | Badly written test cases waste time |
| A good test case is reusable | |
Testers like the step-by-step process of test cases, although it can be very repetitive. A good test case is reusable, which should be a consideration when writing them to save time in the long term. Test cases also provide comprehensive documentation for the area they are testing.
If test cases are badly written or unclear, they will cause confusion or mistakes, leading to inaccurate results or the need to re-test.
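To make execution results consistent, a test case can be captured as structured data. Below is a minimal sketch in Python; the `ManualTestCase` class and the login scenario are illustrative inventions, not part of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class ManualTestCase:
    """A test case: ordered steps with an expected outcome."""
    case_id: str
    title: str
    steps: list
    expected: str
    actual: str = ""
    status: str = "Not Run"  # Not Run / Pass / Fail

    def record_result(self, actual: str) -> None:
        """Compare the observed result against the expectation."""
        self.actual = actual
        self.status = "Pass" if actual == self.expected else "Fail"

# Example: a (hypothetical) login test case assigned to a tester.
login_case = ManualTestCase(
    case_id="TC-001",
    title="Valid login redirects to dashboard",
    steps=["Open login page", "Enter valid credentials", "Click 'Log in'"],
    expected="User lands on dashboard",
)
login_case.record_result("User lands on dashboard")
```

Recording the actual result against the expected one makes the status of each case easy to report back to developers.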
Automated Testing
Automated testing is a process in which an automation tool is used to execute pre-scripted test cases. The objective of automated testing is to simplify and increase efficiency in the testing process.
If a particular form of testing consumes a large percentage of QA, it could be a good candidate for automation. Repetitive tasks such as testing login processes or registration forms are good examples of when to use automated testing.
Using automated testing is undoubtedly quicker than manual testing. In terms of test execution, it will increase productivity and reduce testing time for the majority of apps/websites. Even though set-up costs are high, automated testing can save money in the long term.
Repetitive tasks are inefficient when done manually, and there is also an increased chance of human error. Automated testing can eliminate this, depending on the quality and scope of the test cases.
However, there are certain test situations where automated testing won’t work such as user interface, documentation, installation, compatibility, and recovery. Even if you choose to automate, some form of manual testing will be needed.
Initial set-up costs (automation tool purchase, training, maintenance of test scripts) are high. Also, if your app or website changes regularly, the cost and time associated with script maintenance will increase considerably.
We have put the below types of testing in the Automated Testing section:
- Unit Testing
- API Testing
- Automated Regression Testing
Unit Testing
What is it?
Unit testing is testing individual units or components of an application. The aim is to ensure that each unit performs as designed. It is typically carried out by developers, not testers, as it requires a detailed knowledge of the internal program design and code.
| Pros | Cons |
| --- | --- |
| Finds bugs early | Hard to write |
| Facilitates change | Difficult to automate |
| Simplifies integration | Comprehensive version control needed |
When a failure occurs in a unit test, it’s either caused by a bug in the code or a problem with the actual unit test itself. Either way, it’s easy to pinpoint the problem and early enough in the development cycle to fix it. Unit testing ensures that the code functions properly as the code base grows. This streamlines the code to make it more readable and less complex. By verifying each unit, integration into an application is simpler.
Unit tests also double as documentation for an application. This is helpful for other developers who need to find out what functionality a particular unit provides.
Good unit tests are complex to write, which means the test code can end up at least as buggy as the code it is testing. This is true for both manual and automated unit testing. It’s nearly impossible to evaluate every single execution path in all but the most basic applications. A comprehensive version control system is essential to record changes in case anyone needs to refer back to previous versions.
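To make the idea concrete, here is a small unit test written with Python's built-in `unittest` framework. The `apply_discount` function is a made-up example of a "unit" under test:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_percent_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_negative_price_rejected(self):
        # The unit must fail loudly on invalid input, not return garbage.
        with self.assertRaises(ValueError):
            apply_discount(-1.0, 10)
```

Run with `python -m unittest`; when a test fails, it pinpoints the single unit responsible.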
API Testing
What is it?
Application programming interface (API) testing means checking APIs directly. An API allows one application to interact and communicate with other applications. API testing determines whether the APIs meet expectations for functionality, reliability, performance and security; it does not cover UI or UX testing. It involves sending calls to an API, receiving the output and recording the response.
| Pros | Cons |
| --- | --- |
| Verification of API functionality | Set-up can be complex |
| Checks API integration with other applications | Coding knowledge required |
| Tests authentication credentials | |
If an API is not tested properly, it may cause problems not only for the primary application but also for the other applications it integrates with. API testing provides a vital check to ensure this functionality works correctly.
Setting up a testing environment for API testing can be complex. Also, a good level of coding knowledge is necessary to write API test cases.
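A sketch of what API test cases check, in Python. To keep the example self-contained, the HTTP transport is replaced by an in-process stub; in a real project you would send the same checks over the network with an HTTP client, and the endpoint, token and user data below are all hypothetical:

```python
# Stubbed API layer standing in for a real service (hypothetical endpoint).
USERS = {42: {"id": 42, "name": "Ada"}}
VALID_TOKEN = "secret-token"

def get_user(user_id: int, token: str) -> tuple:
    """Return (status_code, body), mimicking GET /users/{id}."""
    if token != VALID_TOKEN:
        return 401, {"error": "unauthorized"}
    if user_id not in USERS:
        return 404, {"error": "not found"}
    return 200, USERS[user_id]

def test_api_contract():
    # Functionality: a known user is returned with the expected fields.
    status, body = get_user(42, VALID_TOKEN)
    assert status == 200 and body["name"] == "Ada"
    # Reliability: an unknown user yields 404, not a crash.
    assert get_user(7, VALID_TOKEN)[0] == 404
    # Security: invalid credentials are rejected.
    assert get_user(42, "wrong-token")[0] == 401

test_api_contract()
```

The same three checks (expected output, graceful failure, authentication) form the backbone of most API test suites.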
Automated Regression Testing
By nature, regression testing requires constant repetition. It can be performed manually or using an automated method. The definition is the same as manual regression testing; it’s a method of verification but it is automated rather than performed manually.
| Pros | Cons |
| --- | --- |
| Improves product quality | Setting up automated regression tests is expensive |
| Detects any side effects of updates/bug fixes/code changes | High maintenance effort |
| | May cause their own side effects |
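The principle can be sketched in a few lines of Python: previously verified outputs are recorded as "golden" cases and re-run automatically after every change. The `slugify` function and its cases are invented for illustration:

```python
def slugify(title: str) -> str:
    """The function under regression test (illustrative only)."""
    return "-".join(title.lower().split())

# Golden cases captured from previously verified behaviour.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Release   Notes ", "release-notes"),
    ("QA", "qa"),
]

def run_regression() -> list:
    """Return the cases whose current output differs from the golden output."""
    return [(inp, expected, slugify(inp))
            for inp, expected in REGRESSION_CASES
            if slugify(inp) != expected]

# Wired into CI, this runs after every commit; an empty list means no regression.
failures = run_regression()
assert not failures, f"regression detected: {failures}"
```

Because the cases are recorded once and replayed mechanically, the suite scales far better than manual re-execution.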
Software testing won’t give you good results unless you know how, who, what and when to use it. This is the 'who' part of our Ultimate Guide to Software Testing: whether it be in-house, crowdsourced and/or outsourced testing, deciding who will execute your testing is a crucial part of your strategy.
Beta Testing
What is it?
Beta testing is an informal type of testing carried out by end-users. It is performed in real-world environments, usually at the final testing stage when the application is considered stable. A beta version is normally released to a limited number of end-users. They are tasked to use it and share their feedback with the developers so they can make the necessary changes.
| Pros | Cons |
| --- | --- |
| Final validation from end-users | Lack of control |
| Improves product quality | Finding the right beta testers |
| Creates goodwill with customers and increases customer satisfaction | |
The main aim of beta testing is to ensure there are no major failures in the application. It provides a final validation prior to release. It also provides unique feedback from end-users, which gives developers the opportunity for further improvements before releasing to all users.
Beta testing is a cost-effective way to test an application that builds goodwill between the developer and the end-user.
The management of beta testing is an issue. Other types of testing have clearer parameters and are more structured, whereas beta testing happens in the real world, so there is a lack of control.
Selecting the right beta testers and getting them to agree to carry out beta testing can be a challenge. Some users will be more receptive than others and some may agree to take part only to pull out closer to the test.
Crowdsourced Testing
What is it?
Crowdsourced testing companies offer a large community of professional testers in different locations with access to multiple devices. The testers aim to find bugs, document reproducible steps and provide bug reports. The concept is simple: more heads are better than one.
Crowdsourced testing companies act as a broker between the client and the crowd by managing the testing project and the testers before evaluating and presenting the results to the client to action.
| Pros | Cons |
| --- | --- |
| Fast turnaround | Lack of domain/company/product knowledge |
| Cost effective compared to in-house QA/outsourcing | |
| Higher quality due to vetted testers | |
At a time when there is relentless pressure to develop and release applications quicker, crowdsourced testing is appealing. The crowd can deliver testing results quicker than internal testers simply because there are more of them. Crowdsourced testing is a cost-effective option compared to in-house QA or outsourced testing, and it scales efficiently as you grow. The collective power and diversity of the crowd offers a different perspective, which leads to better results.
In-house testers typically have better domain, company and product knowledge than crowdsourced testers. Depending on your app or website this may or may not matter.
In-house Testing
In-house testing is when you use internal testers for your testing needs.
| Pros | Cons |
| --- | --- |
| Good domain/company/product knowledge | Difficult to scale |
| Face-to-face communication | More overhead costs |
Using in-house testers brings with it a good knowledge of the domain, the company and the product without additional training needed. Also, internal testers work directly with the QA lead and developers, which can often make communication quicker and easier.
For most organisations, testing requirements vary throughout the year, so resources are often mismatched: either not enough testers or too many. Maintaining and adding to an internal testing team is also more expensive than using outsourced or crowdsourced options.
Outsourced Testing
Outsourced testing is when software testing is carried out by an independent company or a group of testers outside of your organisation.
| Pros | Cons |
| --- | --- |
| More testers | Communication problems |
| Higher quality results | Less control |
Outsourced testing means you have access to a larger pool of testers. More eyes will often mean better quality results. Outsourcing also scales more efficiently than in-house options. It can be a cost-effective option compared to adding to your internal QA team.
Using external resources outside of your organisation places a higher emphasis on communication. Different schedules, language barriers and time zone issues can slow down the testing process. Your QA lead may also feel they have less control than when managing their own QA team internally.
Software testing won’t give you good results unless you know how, who, what and when to use it. Here is the 'what' section of our ultimate software testing guide.
Black box testing
What is it?
Black box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation and without seeing the source code. The testers are only aware of what the software is supposed to do, not the logic of how it actually does this.
| Pros | Cons |
| --- | --- |
| Unbiased tests | Test repetition |
| No programming language knowledge needed | Complex test cases |
| End-user point-of-view | Can be time-consuming |
| Faster test case creation | Cannot be used for complex code |
Black box testing offers unbiased tests because the designer and tester work independently. The tester doesn’t need to know any specific programming languages to test the reliability and functionality of an app/website.
Black box testing is performed from an end-user point-of-view rather than a developer standpoint. Test cases can be designed immediately after the completion of specifications.
Testing every possible input is not feasible; it would be far too time-consuming, so many program paths inevitably remain untested. Black box testing is also not intended for testing complex segments of code.
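As a sketch, consider a hypothetical specification: scores 0–49 are a fail, 50–79 a pass, 80–100 a distinction, and anything else is rejected. A black box tester derives cases from that spec alone, boundary values and equivalence classes, without reading the implementation:

```python
def grade(score: int) -> str:
    """Opaque to the black box tester; only the published spec matters."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "fail" if score < 50 else "pass" if score < 80 else "distinction"

# Black box cases: boundaries and equivalence classes taken from the spec.
spec_cases = {0: "fail", 49: "fail", 50: "pass",
              79: "pass", 80: "distinction", 100: "distinction"}
for score, expected in spec_cases.items():
    assert grade(score) == expected, (score, expected)

# Invalid inputs must be rejected, also per the spec.
for bad in (-1, 101):
    try:
        grade(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Note that nothing in the test depends on how `grade` is written; swapping in a different implementation of the same spec would leave every case valid.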
White Box Testing
What is it?
White box testing is the opposite of black box testing. It examines the internal structure of an application, testing the code itself as opposed to the functionality exposed to the end-user. This type of testing is used by both developers and testers. It helps them understand which lines of code are actually executed and which aren’t.
| Pros | Cons |
| --- | --- |
| Transparency of the internal coding structure | Complex and expensive |
| Maximum test coverage | Regular updates to test script |
Transparency of the internal coding structure is helpful to understand the type of input data that is needed to test effectively. White box testing covers all possible paths of code which can motivate developers to write better code. Test cases can be easily automated with an abundance of tools available to do this.
White box testing is a complex and expensive procedure which requires a mix of extensive programming knowledge and a deep understanding of the internal code structure. The complexity grows with the size of the application. If the implementation changes often, the test scripts require regular updates.
The necessity to create a full range of inputs to test each path and condition makes white box testing extremely time-consuming. As a result, some conditions may go untested, since it is not realistic to cover every one.
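A minimal illustration in Python: a function with two decision points has four paths, and white box tests are designed so that every branch combination is executed. The shipping example and its prices are invented:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Two decision points -> four paths for white box path coverage."""
    cost = 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)  # branch A
    if express:                                                     # branch B
        cost *= 2
    return round(cost, 2)

# One test per path so every branch combination is exercised.
assert shipping_cost(0.5, False) == 5.0   # A: light,  B: standard
assert shipping_cost(0.5, True) == 10.0   # A: light,  B: express
assert shipping_cost(3.0, False) == 9.0   # A: heavy,  B: standard
assert shipping_cost(3.0, True) == 18.0   # A: heavy,  B: express
```

Unlike the black box cases above, these tests are chosen by reading the code: each `if` identified in the source dictates a case, which is also why they must be revised whenever the implementation changes.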
So when should you use the different types of testing? We explore this in our guide.
Accessibility Testing
What is it?
Accessibility testing is performed to ensure that an application is usable for people with disabilities. This includes visual impairments, colour blindness, poor motor skills, learning difficulties, literacy difficulties, deafness and hearing impairments.
Website accessibility can be measured against the W3C’s Web Content Accessibility Guidelines (WCAG).
| Pros | Cons |
| --- | --- |
| Expands user base | Automation tools are restrictive |
| Ethical | Inconsistent area of testing |
| Compliance with accessibility-related legislation | |
Many people have disabilities that affect how they use software. To avoid isolating these users, an application needs to be made accessible. Accessibility legislation exists in many countries, and if an application does not comply there is a risk of financial penalties and/or legal action.
Accessibility testing is a complex process, whether it’s automated, manual or a combination of both. Although there are many automation tools available, they do not necessarily help in every situation; it is a difficult area in which to replace the human intuition and reasoning that manual checks provide.
Accessibility testing is still in its infancy, which causes inconsistency. In time, more standard and consistent ways to test an application’s accessibility should emerge.
Compatibility Testing
What is it?
Compatibility testing validates if an application can be run on different environments including hardware, network, operating system and other software.
| Pros | Cons |
| --- | --- |
| Increases customer satisfaction | Increases costs and testing time |
| Expands test coverage | Delays are common |
| Reduces support time | Time-consuming |
Compatibility testing helps ensure customer satisfaction by checking whether an application performs as expected across multiple platforms. Ideally, an application should be compatible with as broad a range of hardware, software, operating systems and platforms as possible. Fielding user complaints costs time and money, and compatibility problems can damage an application’s reputation.
Compatibility testing leads to a rise in costs and testing time, due to the maintenance of different types of hardware and software systems as well as the potential need to add to your QA team headcount. Test delays are common with compatibility testing, which may result in longer delivery cycles. It is also very time-consuming, because the same tests must be repeated across many environments.
Functional Testing
What is it?
Functional testing is an essential step in evaluating an application before it’s released. It refers to testing a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question "can the user do this?" or "does this particular feature work?"
| Pros | Cons |
| --- | --- |
| Simulates real-world usage | Possibility of omitting logical mistakes |
| Effective at finding serious functional errors | Can miss other performance issues |
| | Relies on specific user specifications |
Functional testing is conducted in conditions close to what the customer experiences; it works best when the test environment uses the same operating systems, browsers, databases, etc. Releasing applications with serious functional shortcomings can have disastrous consequences, and functional testing is one of the most effective ways to avoid this.
Functional testing is confined to testing how well an application does what it is supposed to do, which makes it possible to miss mistakes outside this scope. The more specific the user specifications, the better functional testing works; ultimately, its effectiveness relies on them.
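As an illustration, a functional test phrased as "can the user do this?" might exercise a registration-and-login flow end to end. The in-memory `AuthService` below is a made-up stand-in for a real backend:

```python
class AuthService:
    """Minimal stand-in for a real authentication backend."""
    def __init__(self):
        self._users = {}

    def register(self, email: str, password: str) -> bool:
        # Duplicate emails and weak passwords are rejected per the spec.
        if email in self._users or len(password) < 8:
            return False
        self._users[email] = password
        return True

    def login(self, email: str, password: str) -> bool:
        return self._users.get(email) == password

auth = AuthService()
# Can the user register? Do duplicates and weak passwords fail as specified?
assert auth.register("ada@example.com", "correct-horse")
assert not auth.register("ada@example.com", "another-pass")
assert not auth.register("bob@example.com", "short")
# Does the login feature work end to end for that user?
assert auth.login("ada@example.com", "correct-horse")
assert not auth.login("ada@example.com", "wrong-pass")
```

Each assertion maps directly to a user-facing requirement, which is exactly why vague specifications make this kind of test hard to write.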
GUI Testing
What is it?
The objective of GUI (Graphical User Interface) testing is to ensure the GUI is functioning correctly. GUI testing includes checks such as the size of the button, the input field, alignment of the text, readability, etc.
| Pros | Cons |
| --- | --- |
| Finds regression errors | Time-consuming |
| Easy to test | Can be difficult to automate properly |
GUI testing is good at finding regression errors caused by application updates. Although repetitive and time-consuming, the GUI testing itself is often easy to conduct.
Automating GUI testing will speed up delivery and improve test coverage but it is not always possible or efficient to do so. Human interaction is often needed for issues such as colour clash, readability, etc. Manual GUI testing is very repetitive so there is a higher risk of errors.
Load Testing
What is it?
Once the development cycle is nearly complete, load testing is carried out to check how an application behaves under the actual demands of the end-users. Load testing is usually performed using automated testing tools that simulate real-world usage. It intends to find issues that prevent software from performing under heavy workloads.
| Pros | Cons |
| --- | --- |
| Provides the maximum capacity of an application | Device and OS coverage |
| Detects functionality errors under load | Recreating real-world conditions is difficult |
| Improves the scalability | |
Load testing provides an estimate of the maximum capacity at which an application can function before performance is affected. It also detects functionality errors that occur under different load variations, which can provide valuable insight into performance bottlenecks. Load testing will also improve the scalability of an application.
If load testing is not performed across multiple device and OS combinations, it can produce inconsistent results that are hard to act on. And applying load to an application under controlled test conditions may not mirror what happens in a real-world situation.
Localisation Testing
What is it?
Localisation testing checks the quality of a localised version of an application for a particular culture or locale. When an application is customised for a foreign country or presented in a different language, localisation testing ensures it is accurate. It predominantly tests three areas: linguistic, cosmetic and functional.
Does the translation negatively affect a brand or messaging? Do the changes create any alignment or spacing problems for the user interface? Is functionality affected by regional preferences?
| Pros | Cons |
| --- | --- |
| Improvement in quality | Requires multilingual testers with expert knowledge of both countries/cultures |
| Reduction of support costs | Local linguistic adjustments can be extensive |
| Verification of cultural accuracy | Time differences can make test management challenging |
Localisation testing will improve the quality of an application. Releasing it to different markets can provide a competitive advantage but it brings its own challenges. Localisation testing goes beyond translation. It has to test cultural differences and user experience.
When done correctly, localisation testing will reduce support costs and increase user satisfaction. Treating a new audience as unwitting beta testers risks losing them forever.
Releasing an application to a different country or culture can be a laborious undertaking. There are no shortcuts. Localisation testing is not a simple translation exercise, it requires expert knowledge of the local culture, linguistics, and preferences.
Depending on location and local time differences, coordinating localisation testing can be challenging and time-consuming.
Non-functional Testing
What is it?
Non-functional testing involves testing that may not be related to a specific function or end-user action, such as load testing or security testing. It will determine the breaking point; the point at which non-functional elements lead to unstable execution.
| Pros | Cons |
| --- | --- |
| Covers testing that functional testing may miss | Repeated tests needed |
| Gives higher level of security | Expensive |
| Enhances the performance of an application | |
Non-functional testing covers tests like load times, which may not be covered in functional testing. Due to the tests it covers, an application will by default be more secure and perform better.
Each time an application is updated, non-functional testing needs to be performed. It may require various tools and is usually expensive.
Penetration Testing
What is it?
Penetration testing (or pen testing) is a type of security testing. It is done to test how secure an application and its environments (hardware, operating system, network, etc.) are when subject to attack by an external or internal intruder.
An intruder is defined as a hacker or malicious program. Penetration tests either force an attack or exploit a weakness to gain access to an application. They use the same methods and tools that a hacker would, but the intention is to identify vulnerabilities so they can be fixed before a real hacker or malicious program exploits them.
| Pros | Cons |
| --- | --- |
| Identifies weaknesses | Tester trustworthiness |
| Finds smaller vulnerabilities | Unrealistic test conditions |
| Covers what automated testing misses | |
No two hacks are the same, but many rely on tricking internal staff into granting unwanted access. Penetration testing can discover exactly what these circumstances are and help to fix them. It also identifies high-risk vulnerabilities that arise from an accumulation of smaller weaknesses. These small weaknesses could be software related, code related or, more commonly, caused by unintended employee negligence.
Penetration testing is difficult to automate because it is easier for human testers to detect the types of weaknesses that human attackers are most likely to take advantage of.
Penetration testing is testers trying to break into an application. While the benefits are clear, the testers carrying out the tests are essentially hackers. This creates an obvious trust issue that can be very complex to manage.
Another potential disadvantage is the unrealistic nature of the test conditions and no sense of surprise by internal staff. A real-life application breach will always be unexpected, which is very difficult to replicate. A possible solution is to conduct unannounced tests which are only known by selected internal staff.
There is not one type of testing that fits all testing requirements. The top organisations blend different testing approaches at different stages of their development cycle to achieve the best results. This is the foundation behind QAOps.
QAOps is a framework to help more companies become quality-focused organisations. This is done by implementing the three pillars of QAOps that enable you to change your perspective on how QA is typically viewed in an organisation: from a purely software operations perspective to how QA can help with your growth objectives and improve your customer experience.
The three pillars of QAOps are:
Testing is about managing risk. An important way to do this is by getting various sources of high-quality information to help you make better decisions. In a QA context, this principle is about utilising different testing types (this includes manual and automated testing) in order to help you make better decisions about your approach.
In a similar vein, you wouldn’t rely on a single strategy to win at chess, or on one cooking method while disregarding everything else. With these activities, as with testing, there isn’t only ONE way to reach your goal, and in some cases sticking to one strategy and not incorporating other (or new) sources of information will increase your risk.
Most teams we’ve worked with are focused on building and releasing software as fast as possible. In order for companies to deliver products to market quickly to increase their feedback loop and thus improve their products, they need to rigorously analyse each aspect of their QA process.
Additionally, some of the high-performance development teams we’ve worked with have changed their team structure to take the burden of responsibility away from the tester alone. Instead, they integrate quality at the beginning of their process to encourage collaboration across the organisation and save time in the long run.
In most organisations, QA is typically seen as a cost centre; however, it can instead be a growth engine for your company.
Teams we’ve worked with have incorporated the organisation’s growth objectives into their QA activities to prioritise tasks effectively and shape their internal strategy. For example, some companies have prioritised receiving feedback on a particular flow (agreed with their growth team) in order to test a hypothesis on improving their retention metric.