Software testing won’t give you good results unless you know how, who, what and when to use it. This is the what section of our Ultimate Guide to Software Testing.
This is by no means a list of all the different types; we have tried to cover the most commonly used types of testing from our experience.
Accessibility testing

What is it?
Accessibility testing is performed to ensure that an application is usable for people with disabilities. This includes visual impairments, colour blindness, poor motor skills, learning difficulties, literacy difficulties, deafness and hearing impairments.
Website accessibility can be measured against the W3C’s Web Content Accessibility Guidelines (WCAG).
| Pros | Cons |
| --- | --- |
| Expands user base | Automation tools are restrictive |
| Ethical | Inconsistent area of testing |
| Compliance with accessibility-related legislation | |
Many people have disabilities that affect their use of software. To avoid excluding these users, an application needs to be made accessible. Accessibility legislation exists in many countries; if an application does not comply, there is a risk of financial penalties and/or legal action.
Software testing is a complex process, whether it’s automated, manual or a combination of both. Although many automation tools are available, they do not necessarily help in every situation. Accessibility is a difficult area in which to replace the human intuition and reasoning that manual checks provide.
Accessibility testing is still in its infancy, which leads to inconsistent practice. Over time, more standard and consistent ways to test an application’s accessibility should emerge.
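Some accessibility checks can be automated despite these limitations. As a minimal sketch (real WCAG audits cover far more than images), the hypothetical helper below scans HTML for `img` tags missing the `alt` attribute that screen readers rely on:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute - one of the
    simplest WCAG checks that can be automated."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "<no src>"))

def find_images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_images_missing_alt(page))  # -> ['chart.png']
```

Checks like this catch the mechanical failures; judging whether the alt text is actually meaningful still needs a human reviewer.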
Compatibility testing

What is it?
Compatibility testing validates whether an application can run in different environments, including hardware, network, operating system and other software.
| Pros | Cons |
| --- | --- |
| Increases customer satisfaction | Increases costs and testing time |
| Expands test coverage | Delays are common |
| Reduces support time | Time-consuming |
Compatibility testing helps to ensure customer satisfaction, as it checks whether an application performs as expected across multiple platforms. Ideally, an application should be compatible with as broad a range of hardware, software, operating systems and platforms as is practical. Answering user complaints about incompatibilities costs time and money, and can have an adverse effect on an application’s reputation.
Compatibility testing will lead to a rise in costs and testing time. This is due to the maintenance of different types of hardware and software systems, as well as the potential need to add to your QA team headcount. Test delays are common with compatibility testing, which may result in longer delivery cycles. It is also very time-consuming, because tests have to be run in each supported environment.
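The combinatorial growth behind these costs is easy to see: every supported browser multiplied by every supported operating system adds another full test run. A small sketch, using a hypothetical platform list:

```python
import itertools

# Hypothetical support matrix - the real one comes from your
# supported-platform policy.
browsers = ["chrome", "firefox", "safari"]
operating_systems = ["windows", "macos", "linux"]

def environment_matrix(browsers, operating_systems):
    """Enumerate every browser/OS pair that needs a compatibility run."""
    return [
        {"browser": b, "os": o}
        for b, o in itertools.product(browsers, operating_systems)
    ]

matrix = environment_matrix(browsers, operating_systems)
print(len(matrix))  # 3 browsers x 3 operating systems = 9 runs
```

Adding one more browser to this list adds three more runs, which is why teams usually prioritise the combinations their analytics show real users on.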
Functional testing

What is it?
Functional testing is an essential step in evaluating the behaviour of an application before it’s released. It tests a specific action or function of the code. These functions are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question "can the user do this?" or "does this particular feature work?".
| Pros | Cons |
| --- | --- |
| Simulates real-world usage | Possibility of omitting logical mistakes |
| Effective at finding serious functional errors | Can miss other performance issues |
| | Relies on specific user specifications |
Functional testing is conducted in conditions close to what the customer experiences. It works best when the test environment mirrors the customer’s operating systems, browsers, databases, etc. Releasing applications with serious functional shortcomings can have disastrous consequences, and functional testing is one of the most effective ways to avoid this.
Functional testing is confined to testing how well an application does what it is supposed to do, which makes it possible to miss mistakes outside that scope. The more specific the user specifications, the better functional testing works; ultimately, its effectiveness relies on them.
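As an illustration, here is a minimal functional test written with Python's unittest against a hypothetical "discount code" user story; each test answers "can the user do this?" for one requirement:

```python
import unittest

def apply_discount(price, code):
    """Hypothetical feature under test: a checkout discount code."""
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

class DiscountFunctionalTest(unittest.TestCase):
    # Each test maps directly to one line of the user specification.
    def test_valid_code_reduces_price(self):
        self.assertEqual(apply_discount(100.0, "SAVE10"), 90.0)

    def test_unknown_code_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, "BOGUS"), 100.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountFunctionalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Notice what these tests do not cover: response time, concurrency, rounding rules nobody wrote down. Anything outside the stated specification stays untested, which is exactly the limitation described above.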
GUI testing

What is it?
The objective of GUI (Graphical User Interface) testing is to ensure the GUI is functioning correctly. GUI testing includes checks such as button sizes, input fields, text alignment, readability, etc.
| Pros | Cons |
| --- | --- |
| Finds regression errors | Time-consuming |
| Easy to test | Can be difficult to automate properly |
GUI testing is good at finding regression errors caused by application updates. Although repetitive and time-consuming, the GUI testing itself is often easy to conduct.
Automating GUI testing will speed up delivery and improve test coverage but it is not always possible or efficient to do so. Human interaction is often needed for issues such as colour clash, readability, etc. Manual GUI testing is very repetitive so there is a higher risk of errors.
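The parts of GUI testing that reduce to geometry automate well. The sketch below assumes element positions have already been read from a UI driver (tools like Selenium report similar bounding-box data) and checks two typical layout properties, alignment and overlap:

```python
# Hypothetical element geometry, as a UI driver would report it:
# {"x": left, "y": top, "w": width, "h": height} in pixels.

def is_left_aligned(elements, tolerance=2):
    """Check that elements share the same left edge within a small
    pixel tolerance - a common automated layout regression check."""
    lefts = [e["x"] for e in elements]
    return max(lefts) - min(lefts) <= tolerance

def overlaps(a, b):
    """True if two elements' bounding boxes overlap (a layout defect
    when the elements are supposed to be separate controls)."""
    return not (a["x"] + a["w"] <= b["x"] or b["x"] + b["w"] <= a["x"]
                or a["y"] + a["h"] <= b["y"] or b["y"] + b["h"] <= a["y"])

form_fields = [
    {"x": 20, "y": 10, "w": 200, "h": 30},
    {"x": 21, "y": 50, "w": 200, "h": 30},
    {"x": 20, "y": 90, "w": 200, "h": 30},
]
print(is_left_aligned(form_fields))              # True
print(overlaps(form_fields[0], form_fields[1]))  # False
```

Checks like these catch regressions after updates; judgements about colour clash or readability still need a human eye, as noted above.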
Load testing

What is it?
Once the development cycle is nearly complete, load testing is carried out to check how an application behaves under the actual demands of the end-users. Load testing is usually performed using automated testing tools that simulate real-world usage. It intends to find issues that prevent software from performing under heavy workloads.
| Pros | Cons |
| --- | --- |
| Provides the maximum capacity of an application | Device and OS coverage |
| Detects functionality errors under load | Recreating real-world conditions is difficult |
| Improves scalability | |
Load testing provides an estimate of the maximum load an application can handle before performance degrades. It also detects functionality errors that occur under different load variations, which can provide valuable insight into performance bottlenecks. Load testing will improve the scalability of an application.
If load testing is not performed across multiple device and OS combinations, results may be inconsistent and difficult to act on. Applying load to an application to test its capacity under controlled conditions may also not mirror what could happen in a real-world situation.
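At its core, a load test fires concurrent requests and records per-request latency. The sketch below simulates this with threads against a stand-in handler; a real test would drive the deployed application with a dedicated tool such as JMeter or Locust:

```python
import statistics
import threading
import time

def handle_request():
    """Stand-in for the endpoint under test - here it just sleeps
    briefly to simulate server-side work."""
    time.sleep(0.01)

def load_test(workers=20, requests_per_worker=5):
    """Fire concurrent requests and collect each one's latency -
    the raw data behind throughput and percentile reports."""
    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            handle_request()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

latencies = load_test()
print(len(latencies))                        # 100 simulated requests
print(statistics.median(latencies))          # typical per-request latency
```

Ramping `workers` up run by run, and watching where median latency starts to climb, is how the maximum-capacity estimate described above is usually found.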
Localisation testing

What is it?
Localisation testing checks the quality of a localised version of an application for a particular culture or locale. When an application is customised for a foreign country or presented in a different language, localisation testing ensures it is accurate. It predominantly tests three areas: linguistic, cosmetic and functional.
Does the translation negatively affect a brand or messaging? Do the changes create any alignment or spacing problems for the user interface? Is functionality affected by regional preferences?
| Pros | Cons |
| --- | --- |
| Improvement in quality | Requires multilingual testers with expert knowledge of both countries/cultures |
| Reduction of support costs | Local linguistic adjustments can be extensive |
| Verification of cultural accuracy | Time differences can make test management challenging |
Localisation testing will improve the quality of an application. Releasing it to different markets can provide a competitive advantage but it brings its own challenges. Localisation testing goes beyond translation. It has to test cultural differences and user experience.
When done correctly, localisation testing will reduce support costs and increase user satisfaction. Using a new audience as beta testers risks losing them forever.
Releasing an application to a different country or culture can be a laborious undertaking, and there are no shortcuts. Localisation testing is not a simple translation exercise; it requires expert knowledge of the local culture, linguistics and preferences.
Depending on location and local time differences, coordinating localisation testing can be challenging and time-consuming.
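One functional localisation check that automates well is placeholder consistency: a translation that drops a `{placeholder}` will fail or render garbage at runtime. A sketch over hypothetical resource bundles (real ones would be loaded from .po, .resx or JSON locale files):

```python
import re

# Hypothetical English source strings and their German translations.
en = {"greeting": "Hello, {name}!", "items": "{count} items in your basket"}
de = {"greeting": "Hallo, {name}!", "items": "Artikel im Warenkorb"}

def placeholder_issues(source, translated):
    """Report keys whose translation drops or adds {placeholders} -
    a common functional defect in localised builds."""
    issues = []
    for key, text in source.items():
        want = set(re.findall(r"{(\w+)}", text))
        got = set(re.findall(r"{(\w+)}", translated.get(key, "")))
        if want != got:
            issues.append(key)
    return issues

print(placeholder_issues(en, de))  # -> ['items']: the {count} was lost
```

Checks like this cover the mechanical side; whether the translation reads naturally, fits the layout and respects local conventions still requires the multilingual testers described above.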
Non-functional testing

What is it?
Non-functional testing involves testing that may not be related to a specific function or end-user action, such as load testing or security testing. It determines the breaking point: the point at which non-functional elements lead to unstable execution.
| Pros | Cons |
| --- | --- |
| Covers testing that functional testing may miss | Repeated tests needed |
| Gives higher level of security | Expensive |
| Enhances the performance of an application | |
Non-functional testing covers areas like load times, which may not be covered in functional testing. As a result, a tested application tends to be more secure and perform better.
Each time an application is updated, non-functional testing needs to be performed. It may require various tools and is usually expensive.
Penetration testing

What is it?
Penetration testing (or pen testing) is a type of security testing. It is done to test how secure an application and its environments (hardware, operating system, network, etc.) are when subject to attack by an external or internal intruder.
An intruder is defined as a hacker or malicious program. Penetration tests either force their way into an application or exploit a weakness to gain access. They use the same methods and tools that a hacker would, but the intention is to identify vulnerabilities so they can be fixed before a real hacker or malicious program exploits them.
| Pros | Cons |
| --- | --- |
| Identifies weaknesses | Tester trustworthiness |
| Finds smaller vulnerabilities | Unrealistic test conditions |
| Covers what automated testing misses | |
No two hacks are the same, but many rely on tricking internal staff into granting unwanted access. Penetration testing can discover exactly what these circumstances are and help to fix them. It also identifies high-risk vulnerabilities that arise from an accumulation of smaller weaknesses. These small weaknesses could be software related, code related or, more commonly, caused by unintended employee negligence.
Penetration testing is difficult to automate because it is easier for human testers to detect the types of weaknesses that human attackers are most likely to take advantage of.
Penetration testing is testers trying to break into an application. While the benefits are clear, the testers carrying out the tests are essentially hackers. This creates an obvious trust issue that can be very complex to manage.
Another potential disadvantage is the unrealistic nature of the test conditions and the lack of surprise for internal staff. A real-life application breach will always be unexpected, which is very difficult to replicate. A possible solution is to conduct unannounced tests known only to selected internal staff.
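As a concrete example of the kind of weakness a penetration tester probes for, the sketch below runs a classic SQL-injection payload against a deliberately vulnerable query and a parameterised one, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterised query: the payload is treated as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"          # classic injection probe
print(lookup_unsafe(payload))    # [('s3cret',)] - data leaked
print(lookup_safe(payload))      # [] - the attack is neutralised
```

A real penetration test goes far beyond one payload, but this shows the pattern: the tester supplies hostile input through the same channels a genuine user would, then checks what the application gives away.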
There is not one type of testing that fits all testing requirements. The top organisations blend different testing approaches at different stages of their development cycle to achieve the best results. This is the foundation behind QAOps.
QAOps is a framework that helps companies become quality-focused organisations. It does this through three pillars that change how QA is typically viewed in an organisation: from a purely software-operations function to a driver of growth objectives and improved customer experience.
The three pillars of QAOps are:
Testing is about managing risk. An important way to do this is to gather various sources of high-quality information that help you make better decisions. In a QA context, this principle means utilising different testing types (both manual and automated) to inform your approach.
In a similar vein, you wouldn’t rely on a single strategy to win at chess, or a single cooking method for every dish. With these activities, as with testing, there isn’t only ONE way to reach your goal, and in some cases sticking to one strategy while ignoring other (or new) sources of information will increase your risk.
Most teams we’ve worked with are focused on building and releasing software as fast as possible. In order for companies to deliver products to market quickly to increase their feedback loop and thus improve their products, they need to rigorously analyse each aspect of their QA process.
Additionally, some of the high-performance development teams we’ve worked with have changed their team structure to shift the burden of responsibility away from the tester alone. Instead, they integrate quality at the beginning of their process to encourage collaboration across the organisation and save time in the long run.
In most organisations, QA is typically seen as a cost centre; however, it can instead be treated as a growth engine for your company.
Teams we’ve worked with have incorporated the organisation’s growth objectives into their QA activities to prioritise tasks effectively and shape their internal strategy. For example, some companies have prioritised receiving feedback on a particular flow (agreed with their growth team) in order to test a hypothesis on improving their retention metric.