Did you know that smoke testing offers arguably the highest ROI of any testing method you can use to identify bugs? Smoke testing is a quick, cost-effective way of ensuring that a software product's core functions work properly before developers subject it to further testing and release it for public download.
In this ultimate guide to smoke testing, you will learn what smoke testing is, who performs it, its benefits and costs, and how it compares with sanity and regression testing.
Let's start!
"Smoke testing" refers to broad but shallow functional testing for the main functionality of a product. It is a software testing method designed to assess the proper functioning of a software application's fundamental features. Its primary objective is identifying and rectifying major issues or defects within the software, ensuring the application's stability for subsequent testing phases.
During smoke testing, the software build is deployed to a quality assurance environment, where the testing team verifies the product/application's stability. Successful smoke tests validate that the software can advance to more in-depth testing and eventually be released to the public.
Note: If the smoke test fails, developers can use application logs or screenshots testers provide to pinpoint and address the identified issues.
The goal of a smoke test is to prevent the needless expense and wasted time of running deeper tests on a broken product. It is called a smoke test because where there is smoke, there is fire; the test is meant to catch any "fires" before they become major issues.
For example, in the context of the Titanic, a smoke test would ask if the Titanic is floating. If the test shows that the Titanic is floating, further tests would need to be conducted to verify that it is seaworthy. If the Titanic is not floating, there is no point in conducting additional tests yet because it would be clear that there is a significant problem with the ship's basic functionality.
A smoke test can be used in multiple build environments but is strictly functional. It should try to touch every component of your product ("broad") but must be fast to achieve its goals ("shallow"). For example, you might test that you can set up a bank account and transfer money (for a banking app), buy an item (e-commerce), or make a social post (social software).
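To make the "broad but shallow" idea concrete, here is a minimal sketch in Python of what such a suite might look like for a hypothetical banking app. Every function name here is an illustrative stand-in, not a real API.

```python
# Sketch of a "broad but shallow" smoke suite for a hypothetical banking app.
# Each check exercises one critical journey at its simplest; none digs into
# edge cases. All names (open_account, deposit, transfer) are stand-ins for
# real application calls.

def open_account(name):
    # Stand-in for the real account-creation flow.
    return {"owner": name, "balance": 0}

def deposit(account, amount):
    account["balance"] += amount

def transfer(src, dst, amount):
    # Happy path only -- a smoke test skips overdraft edge cases.
    src["balance"] -= amount
    dst["balance"] += amount

def smoke_test():
    """Touch every core function once; fail fast on any breakage."""
    alice = open_account("alice")
    bob = open_account("bob")
    deposit(alice, 100)
    transfer(alice, bob, 40)
    assert alice["balance"] == 60, "transfer debit broken"
    assert bob["balance"] == 40, "transfer credit broken"
    return "PASS"
```

Notice that each journey is exercised exactly once, on its happy path; anything deeper belongs in regression or integration suites.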
When new features are added to existing software, smoke testing is usually conducted to ensure the system works as intended. In the development environment, developers perform smoke testing to verify the accuracy of the application before releasing the build to the Quality Assurance (QA) team.
Once the build is in the QA environment, QA engineers carry out smoke testing. Each time a new build is introduced, the QA team identifies the primary functionalities in the application for the smoke testing process.
To answer the question of who performs smoke testing: it can be done by developers, testers, or a combination of both.
The term "smoke testing" has an intriguing origin, with two prominent theories explaining its nomenclature. According to Wikipedia, the term likely originated from the plumbing industry, where smoke was used to test for leaks and cracks in pipe systems. Over time, this term found application in the testing of electronics.
Another theory suggests that "smoke testing" emerged from hardware testing practices, where devices were initially switched on and tested for the presence of smoke emanating from their components. While these theories provide historical context, the contemporary significance of smoke testing lies in its widespread use in the software development process. Although no smoke is involved, the same underlying principles apply to software testing.
Smoke testing provides numerous advantages for software development teams and organizations looking to improve their quality assurance processes:
By identifying critical issues early in the testing cycle, smoke tests prevent teams from wasting time on comprehensive testing of fundamentally broken builds. This early detection mechanism ensures that only stable builds progress to more resource-intensive testing phases.
The cost-effectiveness of smoke testing is hard to beat. Since smoke tests are quick and cover only essential functionality, they require minimal resources to execute while delivering maximum value by preventing expensive downstream testing on faulty builds.
Regular smoke testing enforces discipline in the development process. Knowing that builds will be smoke tested encourages developers to ensure basic functionality works before committing code, resulting in more stable builds overall.
Smoke tests provide rapid feedback to development teams, often within minutes or hours. This quick turnaround allows developers to address issues immediately while the code is still fresh in their minds, improving fix quality and reducing context-switching costs.
Smoke testing integrates seamlessly with CI/CD pipelines, acting as a quality gate that ensures only working builds proceed through the deployment pipeline. This integration supports modern DevOps practices and enables more frequent, reliable releases.
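As a sketch of how that quality gate might look, the snippet below runs a list of placeholder smoke checks and returns an exit code a CI pipeline can act on. The check names are hypothetical stand-ins for real verifications.

```python
# Minimal sketch of a smoke-test quality gate for a CI/CD pipeline.
# A pipeline step runs this script; a non-zero exit code blocks the
# deployment stage. Both checks are placeholders for real ones.

def check_app_starts():
    return True  # placeholder: e.g. hit the app's health endpoint

def check_login_works():
    return True  # placeholder: e.g. authenticate a test user

SMOKE_CHECKS = [check_app_starts, check_login_works]

def run_gate(checks):
    """Run every check; return 0 (pass the gate) or 1 (block the build)."""
    failures = [c.__name__ for c in checks if not c()]
    for name in failures:
        print(f"SMOKE FAIL: {name}")
    return 1 if failures else 0

# In CI, the script's exit code is the gate, e.g.:
#   sys.exit(run_gate(SMOKE_CHECKS))
```

A pipeline step would call this script and let the exit code decide whether the build proceeds, which is exactly the quality-gate role described above.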
When smoke tests consistently pass, teams gain confidence that the fundamental functionality remains intact. This confidence enables faster decision-making around releases and reduces the anxiety associated with deploying new builds.
There are several methods and approaches teams can use to implement smoke testing effectively. The choice depends on your team's resources, technical stack, and development workflow:
In manual smoke testing, QA engineers or developers manually execute a predefined set of test cases that cover critical functionality. Testers interact with the application interface, following documented test steps and recording results. This approach is suitable for small projects, early-stage products, or situations where automation infrastructure isn't yet in place.
Advantages: Low initial setup cost, flexibility to explore unexpected issues, suitable for UI-heavy applications.
Disadvantages: Time-consuming, prone to human error, not scalable for frequent testing.
Automated smoke tests use scripts and testing frameworks to execute test cases automatically without human intervention. These tests can be triggered on-demand, on a schedule, or as part of CI/CD pipelines. Automation tools like Selenium, Cypress, Playwright, or API testing tools like Postman can be used to build smoke test suites.
Advantages: Fast execution, consistent results, enables frequent testing, integrates with CI/CD pipelines.
Disadvantages: Requires upfront development effort, maintenance overhead, may miss visual or UX issues.
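Whatever tool drives the checks, automated smoke suites tend to share a common skeleton: named checks executed in sequence, with timing and failures captured for reporting. Here is a hedged, framework-agnostic sketch; the two checks are stubs standing in for real Selenium/Cypress/Playwright or API-client calls.

```python
# Framework-agnostic sketch of an automated smoke-test runner. In practice
# the check bodies would drive a browser or an HTTP client; here they are
# stubs so the structure stays visible.
import time

def run_smoke_suite(checks):
    """Execute each named check, timing it and capturing any exception."""
    results = {}
    for name, check in checks.items():
        start = time.perf_counter()
        try:
            check()
            status = "pass"
        except Exception as exc:
            status = f"fail: {exc}"
        results[name] = (status, round(time.perf_counter() - start, 3))
    return results

checks = {
    "homepage_loads": lambda: None,          # stub: e.g. page.goto(BASE_URL)
    "search_returns_results": lambda: None,  # stub: query and count hits
}
```

Because failures are captured rather than raised, the suite always produces a full report, which is what a CI dashboard or Slack notification would consume.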
Many teams adopt a hybrid approach, combining automated tests for repetitive, predictable functionality with manual testing for areas requiring human judgment or exploratory testing. This balanced approach maximizes efficiency while maintaining thoroughness.
Advantages: Balances speed and coverage, leverages strengths of both approaches, adaptable to different scenarios.
Disadvantages: Requires coordination between manual and automated testing efforts, may create workflow complexity.
Build Verification Testing is essentially smoke testing performed immediately after a new build is created. It verifies that the build is testable and stable enough for further testing. BVT is typically automated and integrated into the build process.
Some teams perform smoke tests that lean more toward sanity testing, focusing narrowly on specific modules or recently changed functionality rather than the entire system. While not strictly smoke tests, this targeted approach can be effective when time is extremely limited.
Understanding how smoke testing works in practice helps teams implement it effectively. Here are real-world examples across different application types:
For an online retail application, a smoke test might verify that the homepage loads, users can sign up and log in, product search returns results, an item can be added to the cart, and a test checkout completes.
This smoke test touches all critical user journeys without deeply testing edge cases, payment processing details, or complex scenarios.
For a mobile banking app, smoke testing would check that the app launches, a user can log in, account balances display correctly, and a small transfer between accounts succeeds.
The test ensures core banking operations are functional without testing complex scenarios like concurrent transactions or edge cases in payment processing.
For a social networking application, a smoke test might confirm that users can sign up or log in, load their feed, publish a post, and like or comment on an existing post.
For a backend API service, smoke testing might verify that the service starts, health-check endpoints return success, authentication issues tokens, and the core endpoints respond without errors.
For a project management application, a smoke test might check that users can log in, create a project, add and assign a task, and mark that task complete.
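The backend API case lends itself to a short, self-contained sketch. Here a throwaway `http.server` stands in for the real service purely so the example runs anywhere; against a deployed API you would point `BASE_URL` at its host, and the `/health` endpoint is an assumed convention.

```python
# Self-contained sketch of an API smoke test. A throwaway http.server
# stands in for the real backend; in practice, BASE_URL would point at
# the deployed service instead.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200 if self.path == "/health" else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def smoke_test_api(base_url):
    """Shallow checks only: the service is up and /health answers 200."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        assert resp.status == 200, "health endpoint not OK"
        payload = json.loads(resp.read())
    assert payload.get("status") == "ok", "unexpected health payload"
    return True

# Start the stand-in server on a random free port.
server = HTTPServer(("127.0.0.1", 0), FakeAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_port}"
```

A real suite would add one shallow request per critical endpoint (authentication, the main CRUD resources) while still avoiding payload edge cases.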
Understanding the cost implications of smoke testing helps teams make informed decisions about implementation and resource allocation. The costs vary significantly based on the approach chosen and team structure:
Time investment: Manual smoke tests typically take 15 minutes to 2 hours per execution, depending on application complexity. For teams running smoke tests multiple times daily, this can represent 1-4 hours of QA time per day.
Labor costs: If a QA engineer earning $60,000 annually spends 2 hours daily on manual smoke testing, approximately 25% of their time, the annual cost is roughly $15,000 in labor dedicated solely to smoke testing.
Opportunity cost: Time spent on repetitive manual smoke testing could be invested in exploratory testing, test automation, or other high-value QA activities.
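The labor-cost figure above is simple arithmetic, reproduced here with the article's illustrative numbers ($60,000 salary, 2 of 8 working hours per day); they are not benchmarks.

```python
# Reproducing the manual smoke-testing labor-cost arithmetic with the
# article's illustrative assumptions.
ANNUAL_SALARY = 60_000
HOURS_PER_DAY = 8
SMOKE_HOURS_PER_DAY = 2

share_of_time = SMOKE_HOURS_PER_DAY / HOURS_PER_DAY   # 0.25 -> 25%
annual_smoke_cost = ANNUAL_SALARY * share_of_time     # dollars per year

print(f"{share_of_time:.0%} of time, ${annual_smoke_cost:,.0f}/year")
# -> 25% of time, $15,000/year
```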
Initial setup: Building an automated smoke test suite typically requires 40-120 hours of development time, depending on application complexity. At $80-150 per hour for automation engineers, initial setup costs range from $3,200 to $18,000.
Infrastructure costs: Cloud-based CI/CD services and testing platforms may cost $50-500 monthly depending on usage volume. Self-hosted infrastructure requires server costs and maintenance time.
Maintenance: Automated tests require ongoing maintenance, typically 10-20% of the initial development time annually. This translates to 4-24 hours monthly for test updates and fixes.
Execution costs: Automated tests consume compute resources. For most teams, execution costs are negligible, under $100 monthly, but high-frequency testing at scale can increase costs.
Despite upfront costs, automated smoke testing typically delivers positive ROI within 3-6 months by eliminating repetitive manual execution, catching broken builds before they reach expensive downstream testing, and enabling more frequent releases.
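A rough payback-period calculation, using placeholder numbers within the ranges quoted earlier, shows how a 3-6 month figure can arise; substitute your own inputs.

```python
# Rough payback-period sketch for automating a manual smoke suite. All
# inputs are placeholder assumptions drawn from the ranges quoted above.
setup_cost = 80 * 100          # 80 hours at $100/hour of automation work
monthly_manual_cost = 2_000    # manual execution labor replaced
monthly_maintenance = 400      # ongoing upkeep of the automated suite

monthly_saving = monthly_manual_cost - monthly_maintenance
payback_months = setup_cost / monthly_saving
print(f"Payback in {payback_months:.1f} months")  # -> Payback in 5.0 months
```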
Start small: Begin with 10-15 critical test cases and expand gradually rather than attempting comprehensive coverage immediately.
Leverage open-source tools: Tools like Selenium, Cypress, and Playwright eliminate licensing costs while providing robust automation capabilities.
Use managed testing services: Platforms like Global App Testing offer smoke testing as a service, eliminating setup and maintenance costs while providing rapid turnaround times.
Parallelize execution: Running smoke tests in parallel across multiple environments reduces execution time and opportunity costs.
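A minimal sketch of that parallelization with Python's standard library; the `time.sleep` calls stand in for real, I/O-bound check work.

```python
# Sketch of parallel smoke-test execution. Run in parallel, total wall
# time approaches the slowest check rather than the sum of all checks.
import time
from concurrent.futures import ThreadPoolExecutor

def make_check(name, seconds):
    def check():
        time.sleep(seconds)  # stand-in for real test work (I/O bound)
        return name, "pass"
    return check

# Four checks of 0.2s each: ~0.8s sequentially, ~0.2s in parallel.
checks = [make_check(f"check_{i}", 0.2) for i in range(4)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = [f.result() for f in [pool.submit(c) for c in checks]]
elapsed = time.perf_counter() - start
```

Threads suit I/O-bound checks (browser or HTTP calls); CPU-bound work would need processes instead.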
In industry practice, both sanity and smoke tests may be necessary for software builds, with the smoke test typically performed first, followed by the sanity test. However, due to the combination of test cases, these terms are occasionally used interchangeably, leading to confusion.
Sanity testing is performed to evaluate whether the additional modules in an existing software build function as expected and can proceed to the next level of testing. It is a subset of regression testing, focusing on evaluating the quality of recent changes made to the software.
The primary aim is to verify that changes or proposed functionalities align with the plan. Typically performed after successful smoke testing, sanity testing focuses on validating the changed functionality rather than conducting detailed testing across the whole system. It involves selecting test cases that cover the affected areas thoroughly, resulting in narrow but deep testing.
For example, in an e-commerce project, sanity testing would validate specific modules, such as the login and user profile pages, to ensure changes do not impact related functionalities.
Regression testing might be confused with smoke testing, given that it also occurs after each new build. The fundamental distinction lies in its purpose: unlike smoke testing, regression testing examines the application in far greater depth, going well beyond a quick check of core journeys.
Its primary objective is to ensure that recent changes, such as bug fixes, addition or removal of features/modules, code or configuration alterations, or changes in requirements, do not adversely affect the existing functionality or features of the application.
Like all testing, your smoke testing process is specific to your organization: it traces back to your commercial and operational incentives. Global App Testing generally advocates for higher-quality processes and products in our book Leading Quality. (But then, we would say that, wouldn't we?)
In a narrow sense, the moment to run a smoke test is "whenever you want to check the application is working." In a more prescriptive sense, there are key times this crops up during which smoke tests are a sensible investment. Let's take a look at common moments to run smoke tests:
If you don't run your full test suite in a local environment, you should at least make sure you haven't broken anything so severe that it shows up in a smoke test.
Key stages include before committing code, before merging a feature branch, and before handing a new build to the QA team.
Before undertaking major testing efforts, including regression and acceptance testing, run smoke tests first. Although an automated smoke test would theoretically save time against any manual test, how much time you'll save is proportional to the scale of the test series you're about to undertake. We undertake manual smoke tests on clients' behalf before a major test series, too, but the saving is more marginal.
Immediately after deployment is a sensible time to undertake a smoke test to ensure that everything is still working properly.
As long as your smoke test is rapid (and probably automated), we would generally advocate for a liberal smoke testing policy. Any time you need to test whether your system is working in a big-picture sense, it demands a smoke test.
Because smoke tests generate faster failures and shorter feedback loops, they have become closely associated with modern methodologies such as agile testing and with frameworks that focus intensely on deployment speed. With DevOps in particular, automated smoke tests are not a technical requirement, but they are probably a de facto one.
You can execute any manual test, including smoke tests, by logging into your Global App Testing dashboard, entering or uploading the appropriate test cases, and pressing "launch" to receive your results in as little as 2 hours.
Express tests with Global App Testing are made to offer manual tests with the ease, convenience, and speed of running automated tests. We have also integrated with Jira, TestRail, GitHub, Slack, and Zephyr Squad so you can launch and receive test results where your teams like to work.
For more complex and bespoke testing for our clients, we will often run a quick smoke test as a matter of policy to verify that the test is worth doing.
To create effective test cases, follow these steps: identify the critical functionality a user cannot do without; write a small set of simple, pass/fail test cases that each cover one core journey; document the expected result for every case; keep the full suite fast enough to run on every build; and review the cases whenever major functionality changes.
By executing these steps, you enhance the likelihood of your tests working as intended, preventing the temptation to cut corners in the testing process.
Here's our advice on what to do when you're smoke testing:
Because of the frequency with which you'll undertake smoke testing, automating your smoke tests yields greater savings than automating almost any other kind of test.
We're a manual testing solution, so we often have to make judgments about when our clients would be better off with an automated test (or even give our clients the bandwidth to automate more of their testing suite), but a smoke test is usually better off done by a program.
Once you have automated your smoke test, you can benefit from testing more frequently, especially if other parts of your testing suite require manual or time-consuming work. We suggest running the smoke test at every stage of the path to production, from before you commit until after deployment. If your smoke test is still manual, consider automating it so you can test more often and more efficiently.
One failure mode of building a smoke test is that only part of the system gets tested by the test. If your smoke test is not sufficiently broad, it will fail to find a major fault in some modules. Ensure that your test cases touch every function of the system without looking into the complex instances of each function. If you have a modular system, you should ensure that the relationship between modules is tested in addition to the modules themselves.
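One way to guard against this failure mode is a breadth check over your own suite: map each smoke case to the modules it touches and fail if any module goes untested. The module and case names below are hypothetical.

```python
# Sketch of a breadth check for a smoke suite: confirm every module of a
# hypothetical modular system is touched by at least one smoke case.
SYSTEM_MODULES = {"auth", "catalog", "cart", "checkout", "notifications"}

SMOKE_CASES = {
    "user_can_log_in": {"auth"},
    "item_appears_in_search": {"catalog"},
    "item_added_to_cart": {"catalog", "cart"},
    "order_completes": {"cart", "checkout", "notifications"},
}

def uncovered_modules(modules, cases):
    """Return modules no smoke case touches; an empty set means full breadth."""
    touched = set().union(*cases.values())
    return modules - touched

missing = uncovered_modules(SYSTEM_MODULES, SMOKE_CASES)
```

Note that cross-module cases like "order_completes" also exercise the relationships between modules, which the paragraph above calls out as essential.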
It is common for smoke tests to be executed poorly when the test's primary purpose is forgotten. The objective of the test is to save time, not to discover every bug extensively.
Consequences of excessive depth include longer execution times, delayed feedback to developers, duplicated effort with your deeper regression suites, and higher maintenance costs.
Failure to correctly identify when and whether a smoke test will save your organization time is the second most common category of failure. Most likely, you are smoke testing less often than is optimal and could save more time by automating your smoke tests and running them more frequently. In some cases, though, you may be testing pointlessly, where a smoke test will not save your organization any time.
If smoke testing fails, it indicates significant issues in the basic functionalities. Developers may need to investigate and fix the problems before further testing. Failed smoke tests may prevent the software build from progressing in the development cycle.
Smoke testing can be both automated and manual. Automated smoke tests are scripted and executed automatically, while manual smoke tests involve testers manually checking essential functionalities.
No, smoke testing is not a replacement for comprehensive testing. It is a quick check to ensure the basic functionalities are working. More thorough testing, such as regression testing and integration testing, is still necessary for comprehensive software quality assurance.