Test execution is the process of performing test cases to identify bugs, errors, and other potential issues your software could have. This article will show you how to make yours successful.
Please note: this article is part of our guide series about tests. Follow the link if you're interested in our test execution platform.
Lack of time or resources can lead software developers to skimp on deep or complex testing. But effective test execution is the only way to be sure that a product is reliable enough to release to market, or to catch the bugs that need fixing first.
This post will explore exactly what test execution is, why it’s so important, and how you can use the results to improve both your testing process and the software itself. (And how professional services like Global App Testing can handle the test execution process for you!)
Here’s what we’ll be getting into:
What is a test execution strategy?
Why is test execution important?
Factors that influence test execution
Examples of test execution outcomes
Test execution analysis examples
Make sure your test execution is effective
Test execution means carrying out (or executing) a set of specially-designed tests on a software product, to verify that it meets all of its pre-defined specifications and functionality. The tests are performed according to a test plan, which breaks the whole activity down into separate modules and/or requirements, with detailed test cases for each one.
It is performed in a dedicated QA testing environment and is an important part of the Software Testing Life Cycle (STLC), which runs alongside the Software Development Life Cycle (SDLC). The STLC is a sequence of specific activities that ensure software quality goals are met.
After the planning stage is complete and entry conditions are met, the testers can begin to execute tests and document the results, checking whether the actual results match up to the expected outcomes. They log any defects or bugs and report them, then retest the fixes and track the defects to closure.
Test execution can be carried out manually or with test automation tools or a test management tool like Jira. Businesses often outsource the testing process to a specialist firm like Global App Testing, which combines remote crowdtesting and on-demand testing services.
A test strategy in software testing is a set of principles that describe how the software testing process will be carried out. It determines the test design – setting out which modules to test and which techniques to use. It’s a long-term plan of action and can be used for multiple projects.
This systematic approach to the process ensures the quality, traceability, and reliability of testing, and makes planning easier. A test strategy includes documentation formats, test processes, team reporting structure, and client communication strategy.
It should not be confused with the test plan, which is a separate document that defines the scope, objective, approach, and emphasis of a software testing effort. (However, in some smaller projects, a test strategy can be one of the sections in a test plan.)
A test strategy may be either preventative or reactive. The preventative approach sees tests designed before software development begins, whereas in a reactive strategy, the test team waits until the software is received before “reacting” to the actual system under test.
Global App Testing’s worldwide network takes on repetitive testing tasks so you can focus on strategy and analysis.
New software products have to undergo various tests, such as performance, functionality, and smoke testing, to ensure they are bug-free before being released onto the market. Test execution plays an important part in making sure that the software delivers the expected results.
Test execution shows whether the pre-defined requirements were correctly implemented in design and architecture, whether the results are correctly gathered and interpreted, and whether the development team has built the software product in accordance with these requirements.
Test execution is probably the most important phase of the STLC. It ensures that any problems are discovered early and determines whether the software is ready for release. If execution results do not match the expected or desired results, the product may have to go through the SDLC and STLC again.
Efficient test execution is important for generating accurate test reports, including how many bugs were found, their severity, and which features or functions were impacted. The presence of bugs will mean the product goes back to the development team for correction, before retesting is performed.
The test execution phase also evaluates and validates the efforts of everyone involved with the software’s development, so that all contributions and work are properly recognized.
Within the test execution phase, there are a number of ways to run tests. Your choice will be informed by factors like the type of software you need to test and whether you want to test it on a single device or multiple devices, remotely or locally.
During each test execution, the software is placed in different scenarios, which help the team to verify and validate its various aspects. By analyzing the results, you’ll be able to tell not only whether the software is ready, but also if the testing process is working as expected in a specific context and environment.
One of the major choices to make is whether you want to run the tests manually or use automation (or a combination of both methods). Manual test execution means that a human follows all the steps set out in the test cases. In automated testing, a tool is programmed to carry out the plan for you. Automation speeds up the process and shortens the release cycle, but manual testing allows for some deviation from the written plan if necessary—whereas automated tools follow it to the letter.
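To make the manual/automated distinction concrete, here is a minimal sketch of what an automated test case looks like using Python's standard `unittest` framework. The `apply_discount` function is a hypothetical example, not something from this article; the point is that the tool executes each assertion exactly as written, every time.

```python
import unittest

# Hypothetical function under test (illustrative only): a discount calculator.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """An automated tool runs these steps to the letter, with no deviation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

A manual tester executing the equivalent test case could pause, explore an odd result, or try an extra input; the automated version trades that flexibility for speed and repeatability.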
If you want your pre-defined reports to display test results by test plan, iteration, or test environment, you can generate test case execution records before you run tests. They enable you to run the test in a specific test environment and map test planning and environment information to each test case. They can be used to run both manual and automated tests.
A test case is a collection of test steps used for testing a specific concept. It includes preconditions (which must be satisfied before execution), steps to follow, and one or more postconditions. A test run is an executable version of a certain test case, which can be executed in multiple areas, such as different releases or sprints.
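The structure described above (preconditions, steps, postconditions, and runs as executable instances) can be sketched as simple data types. This is an illustrative model, not any particular tool's schema; the login scenario and field names are made up.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    action: str    # what the tester (or tool) does
    expected: str  # the expected outcome of that step

@dataclass
class TestCase:
    name: str
    preconditions: list  # must all be satisfied before execution starts
    steps: list          # ordered TestStep objects
    postconditions: list # checked after the final step

@dataclass
class TestRun:
    """One executable instance of a test case, tied to a release or sprint."""
    case: TestCase
    release: str
    status: str = "Not Run"

login_case = TestCase(
    name="User can log in",
    preconditions=["App installed", "Valid account exists"],
    steps=[
        TestStep("Open the login screen", "Login form is displayed"),
        TestStep("Submit valid credentials", "Dashboard is displayed"),
    ],
    postconditions=["User session is active"],
)

# The same case can be run in multiple areas, e.g. different releases.
run = TestRun(case=login_case, release="2.4.0")
```

Because the run is separate from the case, the same `login_case` can produce distinct results in release 2.4.0 and release 2.5.0 without being rewritten.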
A test suite is a group of related test cases, typically executed together. Test suites can include test cases with both manual and automated test scripts, as well as test cases without associated test scripts.
Within the suite, test cases may run in parallel or sequentially (where you can stop the execution of the suite if a single test case does not pass). You can also select a subset of the test suite to be executed in a particular cycle.
Test artifacts, also known as test deliverables, are all the reports or documents created while the testing is being carried out. The most common test artifacts are the test plan, test cases, test suite, and bug or defect reports. They are shared with everyone involved in the project, including the whole testing team, clients, and stakeholders.
There are many factors that have a bearing on the test execution process, including the scope and complexity of the software being tested, the flexibility of the lifecycle, and whether the project documentation covers everything the testing teams need to know before test execution begins.
The success of test execution also depends on the length of time allocated for testing, the skills of the people involved in carrying it out, and their ability to work as a team. Finally, the quality of automated tools is an important factor.
The coding stage takes place before the test execution phase, while the tests are being designed. It's important to avoid repeated code and to write only the minimum necessary, which keeps testing costs down. The code should also be easy to understand, since maintenance teams spend much of their effort reading it.
Developers can either create the code first, or write it specifically to pass the test cases. The latter is known as test-driven development (TDD).
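A minimal TDD sketch, using a hypothetical `slugify` function (not from this article): the test is written first and would fail, then just enough code is written to make it pass.

```python
# Step 1 (red): write the failing test before the function exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2 (green): write the minimal implementation that satisfies the test.
def slugify(text):
    # lower-case, split on whitespace (which also trims), join with hyphens
    return "-".join(text.lower().split())

test_slugify()  # now passes
```

In a third "refactor" step, the implementation could be cleaned up or extended while the test keeps guarding the behaviour already specified.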
It’s important to create a dedicated QA (Quality Assurance) environment in which testing can take place, so that the results won’t be affected by unrelated factors in other environments like development.
The QA environment should mimic production as closely as possible, with testers using the product as consumers would. Validation of the test environment setup is always recommended (often through smoke testing) before officially starting test execution.
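A pre-execution environment check can be as simple as verifying that the hosts and services the tests depend on are reachable. The sketch below is one hedged example of such a smoke check; the host list is a placeholder for whatever a real QA environment requires.

```python
import socket

def can_resolve(host):
    """Return True if DNS resolution works for the given host name."""
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        return False

def smoke_check(required_hosts):
    """Return the subset of required hosts the QA environment cannot resolve."""
    return [h for h in required_hosts if not can_resolve(h)]

# Placeholder host list; a real check would name the services under test.
failures = smoke_check(["localhost"])
if failures:
    raise SystemExit(f"QA environment not ready, unreachable: {failures}")
```

Only once a check like this comes back clean would the team officially start executing the planned test cases.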
It’s crucial to have a skilled and competent team of testers in order to deliver good quality test results. Apart from their own skill sets, they also need to be able to work well as a team—and adapt to the changing size of that team, as it’s not constant from the beginning to the end of the project. Test execution is the phase when the team is at its maximum size, so scalability of resources is important.
A test cycle is a container for tests and test suites, with test cases grouped together to achieve specific test goals. Examples include regression testing, build-verification tests, and end-to-end tests. Test cycles are broader in scope than test cases, spanning multiple users and projects. They have a defined period in time with a start date and end date, allowing you to track and compare actual results with expectations in real time.
Test scripts are typically line-by-line descriptions of all the actions and data needed to perform a test, describing exactly what a user would have to do in order to carry out particular actions in the program. The scripts also include specific results expected for each step, so you can see whether the software is performing as it’s supposed to.
Test scripts need to be well-written so that even inexperienced testers can follow the directions. However, scripts can limit the creativity needed for testers to discover hidden bugs.
Because software products are continually updated, the test scripts have to be adapted accordingly. Test script maintenance is also required when a change to the product would cause it to fail the existing test.
Maintaining test scripts is unavoidable, but it can be time-consuming and cuts into time spent on actual testing. For that reason, some companies develop a collection of reusable test scripts, written to accommodate a degree of change.
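One common way to make scripts reusable is to separate the script from its data, so a product change means updating a data row rather than rewriting the steps. The checkout flow and its parameters below are hypothetical, purely to illustrate the pattern.

```python
def run_checkout_script(currency, price, tax_rate, expected_total):
    """One reusable script body, executed once per data set.

    Each row supplies the inputs and the expected result for its run.
    """
    total = round(price * (1 + tax_rate), 2)
    status = "Passed" if total == expected_total else "Failed"
    return status, total

# Data-driven rows: change these when the product changes, not the script.
data_sets = [
    ("USD", 100.00, 0.08, 108.00),
    ("EUR", 50.00, 0.20, 60.00),
]

results = [run_checkout_script(*row) for row in data_sets]
```

When a tax rule or price format changes, only `data_sets` needs maintenance, which keeps the script collection stable across releases.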
Before the test execution stage begins, certain criteria must be met, including completion of the plan, test design, and preparation of test management tools. There must be a process in place for tracking test data and metrics, and instructions for logging and reporting defects must be available to all team members.
When those elements are in place, you can move on to the execution of test cases, in which testers will execute the code and compare the expected and actual results. This includes marking the status of test cases (see next section), and reporting, logging, and mapping defects.
It also involves re-testing to check whether the problem is resolved and regression testing to ensure that those fixes haven’t caused another problem.
Following execution (with retesting if necessary), you can check that the deliverables and exit criteria have been met—which means all planned tests have been executed, defects logged and tracked to closure, and an execution and defect summary report has been prepared.
It’s crucial that you evaluate the test execution process itself, as well as the actual results. By analyzing what went well and what didn’t, you can make improvements to practices and tools ready for the next project.
There are several different outcomes in the test execution phase, each of which is assigned a status. If you are carrying out manual testing, a human tester will note down the status on a chart. If you are using automation, the tool will display the status for you.
The results are communicated in the form of daily or weekly reports, which establish transparency for the QA team’s activities of the day during each test cycle. This includes both defect information and test case run information.
Here are the main examples of test execution outcomes:
Passed
If the test result matches up with the pre-defined expectations, the software is considered to have passed the test. There are few or no defects to report, and you can move on to the next stage of the STLC. The test case pass rate gives you a good indication of the quality of the product being tested.
Failed
If the expected result is not met, the test execution status is “failed” and the defect or bug must be reported to the developers. You will carry out the same test later, once the bug has been fixed. If one of the steps in a multi-step test case does not meet its expected result, failure may be declared straight away and the subsequent steps need not be executed.
Error
Sometimes there may be a problem in the running of the test itself, such as a network error or a mistake in the test script. If this makes it impossible to continue the test, the “Error” result is produced and the issue can be investigated before resuming.
Inconclusive
Although most tests return a pass or fail result, occasionally the status may be given as “inconclusive.” As the name suggests, this means it was not possible to produce a clear result and further investigation is required.
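The four outcomes above can be modelled as a small status mapping. This is an illustrative sketch, not any specific tool's logic: a step that raises becomes an Error, a step with no clear result is Inconclusive, and otherwise the actual result is compared against the expected one.

```python
from enum import Enum

class Status(Enum):
    PASSED = "Passed"
    FAILED = "Failed"
    ERROR = "Error"                 # the test itself could not run properly
    INCONCLUSIVE = "Inconclusive"   # no clear result was produced

def execute(step, expected):
    """Run one test step (a callable) and map its outcome to a status."""
    try:
        actual = step()
    except Exception:
        return Status.ERROR  # e.g. a network error or a script mistake
    if actual is None:
        return Status.INCONCLUSIVE
    return Status.PASSED if actual == expected else Status.FAILED

assert execute(lambda: 2 + 2, 4) is Status.PASSED
assert execute(lambda: 2 + 2, 5) is Status.FAILED
assert execute(lambda: 1 / 0, 1) is Status.ERROR
```

A manual tester records the same four statuses on a chart by hand; an automated tool computes them exactly as above.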
The test results require careful analysis so that you can track progress against the planned schedule. By studying test completeness and success, your team can understand the quality of the overall solution. Analysis is also important because it helps you to spot serious issues early on, and to take action.
The results of test runs can be displayed as graphs, which can be shared beyond the testing team—the evidence helps programmers to keep their code defect-free and enables managers to deliver evidence-based progress to stakeholders. Global App Testing makes it easy to analyze your results by filtering, grouping, and sorting them to pinpoint problem areas.
Metrics to analyze include tests planned (number of tests scheduled to be executed in the iteration), tests implemented (number of tests built and ready to be executed, manually and automatically), tests attempted, and the number of passed and failed tests.
You should also review the number of blocked tests, which are tests that could not be executed right through to the last step for some reason—either the manual tester could not execute all the steps, or the automated testing tool reported a pass, which was overruled by a human test analyst.
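Turning raw statuses into the metrics above is straightforward to automate. The sketch below summarizes a made-up list of execution results into counts and a pass rate; the status labels follow the outcomes described in this article.

```python
from collections import Counter

def summarize(statuses):
    """Aggregate a list of status strings into basic execution metrics."""
    counts = Counter(statuses)
    attempted = counts["Passed"] + counts["Failed"]
    pass_rate = counts["Passed"] / attempted if attempted else 0.0
    return {
        "attempted": attempted,
        "passed": counts["Passed"],
        "failed": counts["Failed"],
        "blocked": counts["Blocked"],  # could not be executed to the last step
        "pass_rate": round(pass_rate, 2),
    }

report = summarize(["Passed", "Passed", "Failed", "Blocked", "Passed"])
# report["pass_rate"] == 0.75
```

Tracking these numbers per iteration is what produces the rising, falling, and flat trend lines discussed in the analysis of results.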
You may notice the following patterns from your results:
Rising slope
This is exactly what it sounds like—a graph where the line slopes upwards. Whether or not this is a good thing depends on which metric you’re looking at!
For example, a rising slope is desirable for tests planned, implemented, attempted, and passed. But if it shows an increase in failed or blocked tests, this could indicate a decline in quality, failure to keep to the schedule, or problems in the test environment.
Falling slope
A falling slope shows the exact opposite. If the number of tests planned, implemented, or attempted is decreasing, there’s a problem—perhaps tests are being removed from the scope of the test effort, or there are not enough resources to write and execute tests.
A decline in the number of passed tests is also a cause for concern, especially if previously passing tests are now failing. Conversely, a downward trend for failed or blocked tests is a good thing.
Flat line
While you might assume that a flat line indicates plain sailing, it’s actually not so great. If new tests are not being added to the overall test effort due to lack of resources or clear requirements, the graph for tests planned, implemented, and attempted would stay the same.
If the graph for tests passed is a flat line, defects are probably not being corrected (or there could be a coincidental net zero difference in the number of passing tests). It’s the same for failed and blocked tests—you want to see that number going down. A flat line means the test schedule may be failing.
As we’ve discovered, test execution is a hugely important stage of the STLC. Done correctly, it will tell you whether the software being tested is good quality, and help you spot problems before they cost you too much time and money.
Test execution results are assigned a status that’s easy to understand, with the results shown on a graph for easy analysis. Manual testing is more time-consuming but allows testers to think outside the box, while automated tools save time but stick to a narrow script.
If you decide to hand the process (including exploratory testing) over to the professionals, Global App Testing has a worldwide network of testers who can deliver real-time results in 30–150 minutes. Tests are carried out on real devices, OSs, and network combinations in countries around the world.
We’d love to give you a personal demo of our platform. Find out how we manage, execute, and analyze test results to help you release high-quality software anywhere in the world. Ready? Let's talk