
The Ultimate Guide to Smoke Testing

Did you know that smoke testing has the highest ROI of any testing you can use to identify bugs? Smoke testing offers a quick and cost-effective way of ensuring that a software product's core functions work properly before developers subject it to further testing and release it for public download.

In this ultimate guide to smoke testing, you will learn:

  • What smoke testing is
  • When to test
  • The difference between sanity, regression, and smoke testing
  • Mistakes we've seen clients make in their smoke testing process and best practices to follow

Let's start!

What Is Smoke Testing?

"Smoke testing" refers to broad but shallow functional testing for the main functionality of a product. It is a software testing method designed to assess the proper functioning of a software application's fundamental features. Its primary objective is identifying and rectifying major issues or defects within the software, ensuring the application's stability for subsequent testing phases.

During smoke testing, the software build is deployed to a quality assurance environment, where the testing team verifies the product/application's stability. Successful smoke tests validate that the software can advance to more in-depth testing and eventually be released to the public.

Note: If the smoke test fails, developers can use application logs or screenshots testers provide to pinpoint and address the identified issues.

The Goal of Smoke Testing

The goal of a smoke test is to prevent the needless expense and wasted time of deeper testing on a broken product. It is called a smoke test because where there is smoke, there is fire; the test is meant to catch any "fires" before they become major issues.

For example, in the context of the Titanic, a smoke test would ask if the Titanic is floating. If the test shows that the Titanic is floating, further tests would need to be conducted to verify that it is seaworthy. If the Titanic is not floating, there is no point in conducting additional tests yet because it would be clear that there is a significant problem with the ship's basic functionality.

Where Should You Use Smoke Testing?

A smoke test can be used in multiple build environments but is strictly functional. It should try to touch every component of your product ("broad") but must be fast to achieve its goals ("shallow"). For example, you might test that you can set up a bank account and transfer money (for a banking app), buy an item (e-commerce), or make a social post (social software).

When and Who Performs Smoke Testing?

When new features are added to existing software, smoke testing is usually conducted to ensure the system works as intended. In the development environment, developers perform smoke testing to verify the basic correctness of the application before releasing the build to the Quality Assurance (QA) team.

Once the build is in the QA environment, QA engineers carry out smoke testing. Each time a new build is introduced, the QA team identifies the primary functionalities in the application for the smoke testing process.

In short, smoke testing can be performed by developers, testers, or a combination of both.

The Name Origin

The term "smoke testing" has an intriguing origin, with two prominent theories explaining its nomenclature. According to Wikipedia, the term likely originated from the plumbing industry, where smoke was used to test for leaks and cracks in pipe systems. Over time, this term found application in the testing of electronics.

Another theory suggests that "smoke testing" emerged from hardware testing practices, where devices were initially switched on and tested for the presence of smoke emanating from their components. While these theories provide historical context, the contemporary significance of smoke testing lies in its widespread use in the software development process. Although no smoke is involved, the same underlying principles apply to software testing.

Benefits of Smoke Testing

Smoke testing provides numerous advantages for software development teams and organizations looking to improve their quality assurance processes:

Saves Time and Resources

By identifying critical issues early in the testing cycle, smoke tests prevent teams from wasting time on comprehensive testing of fundamentally broken builds. This early detection mechanism ensures that only stable builds progress to more resource-intensive testing phases.

Reduces Testing Costs

The cost-effectiveness of smoke testing is unmatched. Since smoke tests are quick and cover only essential functionality, they require minimal resources to execute while delivering maximum value by preventing expensive downstream testing on faulty builds.

Improves Build Stability

Regular smoke testing enforces discipline in the development process. Knowing that builds will be smoke tested encourages developers to ensure basic functionality works before committing code, resulting in more stable builds overall.

Enables Faster Feedback Loops

Smoke tests provide rapid feedback to development teams, often within minutes or hours. This quick turnaround allows developers to address issues immediately while the code is still fresh in their minds, improving fix quality and reducing context-switching costs.

Facilitates Continuous Integration and Deployment

Smoke testing integrates seamlessly with CI/CD pipelines, acting as a quality gate that ensures only working builds proceed through the deployment pipeline. This integration supports modern DevOps practices and enables more frequent, reliable releases.

Increases Confidence in Releases

When smoke tests consistently pass, teams gain confidence that the fundamental functionality remains intact. This confidence enables faster decision-making around releases and reduces the anxiety associated with deploying new builds.

Methods and Approaches to Smoke Testing

There are several methods and approaches teams can use to implement smoke testing effectively. The choice depends on your team's resources, technical stack, and development workflow:

Manual Smoke Testing

In manual smoke testing, QA engineers or developers manually execute a predefined set of test cases that cover critical functionality. Testers interact with the application interface, following documented test steps and recording results. This approach is suitable for small projects, early-stage products, or situations where automation infrastructure isn't yet in place.

Advantages: Low initial setup cost, flexibility to explore unexpected issues, suitable for UI-heavy applications.

Disadvantages: Time-consuming, prone to human error, not scalable for frequent testing.

Automated Smoke Testing

Automated smoke tests use scripts and testing frameworks to execute test cases automatically without human intervention. These tests can be triggered on-demand, on a schedule, or as part of CI/CD pipelines. Automation tools like Selenium, Cypress, Playwright, or API testing tools like Postman can be used to build smoke test suites.

Advantages: Fast execution, consistent results, enables frequent testing, integrates with CI/CD pipelines.

Disadvantages: Requires upfront development effort, maintenance overhead, may miss visual or UX issues.
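
To make this concrete, here is a minimal sketch of an automated UI smoke test using Playwright's Python bindings (one of the tools mentioned above). The base URL, selectors, and test credentials are placeholder assumptions for illustration, not a real application.

```python
# Minimal automated UI smoke test sketch using Playwright's sync API.
# Assumes a hypothetical staging site, selectors, and test account.
from playwright.sync_api import sync_playwright

BASE_URL = "https://staging.example.com"  # placeholder QA environment


def run_smoke_test() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # 1. The homepage loads at all.
        page.goto(BASE_URL)
        assert page.title() != "", "Homepage did not load"

        # 2. Login works (placeholder selectors and test account).
        page.goto(f"{BASE_URL}/login")
        page.fill("#email", "smoke-test@example.com")
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        page.wait_for_url(f"{BASE_URL}/dashboard")

        # 3. A core page renders visible content.
        assert page.locator("h1").first.is_visible()

        browser.close()


if __name__ == "__main__":
    run_smoke_test()
    print("Smoke test passed")
```

A suite like this can be triggered on demand, on a schedule, or as a quality gate inside a CI/CD pipeline, which is where automated smoke tests deliver most of their value.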

Hybrid Smoke Testing

Many teams adopt a hybrid approach, combining automated tests for repetitive, predictable functionality with manual testing for areas requiring human judgment or exploratory testing. This balanced approach maximizes efficiency while maintaining thoroughness.

Advantages: Balances speed and coverage, leverages strengths of both approaches, adaptable to different scenarios.

Disadvantages: Requires coordination between manual and automated testing efforts, may create workflow complexity.

Build Verification Testing (BVT)

Build Verification Testing is essentially smoke testing performed immediately after a new build is created. It verifies that the build is testable and stable enough for further testing. BVT is typically automated and integrated into the build process.

Sanity-Style Smoke Testing

Some teams perform smoke tests that lean more toward sanity testing, focusing narrowly on specific modules or recently changed functionality rather than the entire system. While not pure smoke tests, this targeted approach can be effective when time is extremely limited.

Real-World Examples of Smoke Testing

Understanding how smoke testing works in practice helps teams implement it effectively. Here are real-world examples across different application types:

E-Commerce Platform Smoke Test

For an online retail application, a smoke test might verify:

  • Homepage loads successfully
  • User can search for products
  • Product detail pages display correctly
  • User can add items to cart
  • Cart displays added items
  • Checkout process initiates without completing payment
  • User can log in and log out

This smoke test touches all critical user journeys without deeply testing edge cases, payment processing details, or complex scenarios.

Banking Application Smoke Test

For a mobile banking app, smoke testing would check:

  • App launches and displays login screen
  • User authentication works
  • Account balances display
  • Transaction history loads
  • Fund transfer interface is accessible
  • Bill payment section opens
  • User can navigate between main sections
  • Logout functionality works

The test ensures core banking operations are functional without testing complex scenarios like concurrent transactions or edge cases in payment processing.

Social Media Platform Smoke Test

For a social networking application:

  • User registration/login works
  • News feed loads with posts
  • User can create a new post
  • Posted content appears in feed
  • User can like/react to posts
  • Commenting functionality works
  • User profile page loads
  • Search functionality returns results
  • Notifications display

API Service Smoke Test

For a backend API service, smoke testing might verify:

  • API server is running and responding
  • Authentication endpoints work
  • Key GET endpoints return expected response codes
  • POST endpoints accept data and return success
  • Database connectivity is functional
  • Error handling returns appropriate status codes
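
A checklist like the one above translates naturally into a small automated suite. Below is a minimal sketch using pytest and the requests library; the base URL, endpoints, and credentials are hypothetical placeholders rather than a real service.

```python
# Minimal API smoke test sketch (run with: pytest -q). Endpoints and
# credentials are placeholders for illustration only.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test
TIMEOUT = 5  # keep smoke tests fast: fail quickly if the service hangs


def test_server_is_responding():
    # The service answers at all (e.g. a health endpoint).
    resp = requests.get(f"{BASE_URL}/health", timeout=TIMEOUT)
    assert resp.status_code == 200


def test_authentication_endpoint():
    resp = requests.post(
        f"{BASE_URL}/auth/login",
        json={"email": "smoke@example.com", "password": "not-a-real-password"},
        timeout=TIMEOUT,
    )
    # A smoke test only checks the endpoint behaves sanely, not every case.
    assert resp.status_code in (200, 401)


def test_key_get_endpoint():
    resp = requests.get(f"{BASE_URL}/products", timeout=TIMEOUT)
    assert resp.status_code == 200


def test_error_handling():
    # Unknown resources should return a clean 404, not a server error.
    resp = requests.get(f"{BASE_URL}/does-not-exist", timeout=TIMEOUT)
    assert resp.status_code == 404
```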

SaaS Project Management Tool

For a project management application:

  • User can log in to the platform
  • Dashboard displays existing projects
  • User can create a new project
  • Tasks can be added to projects
  • Task assignment works
  • Status updates are saved
  • Team collaboration features are accessible
  • File upload functionality works

Cost of Performing Smoke Tests

Understanding the cost implications of smoke testing helps teams make informed decisions about implementation and resource allocation. The costs vary significantly based on the approach chosen and team structure:

Manual Smoke Testing Costs

Time investment: Manual smoke tests typically take 15 minutes to 2 hours per execution, depending on application complexity. For teams running smoke tests multiple times daily, this can represent 1-4 hours of QA time per day.

Labor costs: If a QA engineer earning $60,000 annually spends 2 hours daily on manual smoke testing, approximately 25% of their time, the annual cost is roughly $15,000 in labor dedicated solely to smoke testing.

Opportunity cost: Time spent on repetitive manual smoke testing could be invested in exploratory testing, test automation, or other high-value QA activities.

Automated Smoke Testing Costs

Initial setup: Building an automated smoke test suite typically requires 40-120 hours of development time, depending on application complexity. At $80-150 per hour for automation engineers, initial setup costs range from $3,200 to $18,000.

Infrastructure costs: Cloud-based CI/CD services and testing platforms may cost $50-500 monthly depending on usage volume. Self-hosted infrastructure requires server costs and maintenance time.

Maintenance: Automated tests require ongoing maintenance, typically 10-20% of the initial development time annually. This translates to roughly 4-24 hours per year for test updates and fixes.

Execution costs: Automated tests consume compute resources. For most teams, execution costs are negligible, under $100 monthly, but high-frequency testing at scale can increase costs.

ROI Considerations

Despite upfront costs, automated smoke testing typically delivers positive ROI within 3-6 months by:

  • Preventing wasted testing effort on broken builds, saving 4-10 QA hours per prevented incident
  • Catching critical issues before they reach production, avoiding customer impact and reputation damage
  • Enabling more frequent releases through faster feedback loops
  • Freeing QA resources for higher-value exploratory and usability testing

Cost Optimization Strategies

Start small: Begin with 10-15 critical test cases and expand gradually rather than attempting comprehensive coverage immediately.

Leverage open-source tools: Tools like Selenium, Cypress, and Playwright eliminate licensing costs while providing robust automation capabilities.

Use managed testing services: Platforms like Global App Testing offer smoke testing as a service, eliminating setup and maintenance costs while providing rapid turnaround times.

Parallelize execution: Running smoke tests in parallel across multiple environments reduces execution time and opportunity costs.

Smoke Testing vs. Sanity Testing

In industry practice, both sanity and smoke tests may be necessary for software builds, with the smoke test typically performed first, followed by the sanity test. However, because their test cases often overlap, the two terms are occasionally used interchangeably, leading to confusion.

What Is Sanity Testing?

Sanity testing is performed to evaluate whether the new or modified modules in an existing software build function as expected and can proceed to the next level of testing. It is a subset of regression testing, focused on verifying that recent changes or bug fixes behave as intended.

The primary aim is to verify that changes or proposed functionalities align with the plan. Typically performed after successful smoke testing, sanity testing focuses on validating functionality rather than conducting detailed testing. It involves selecting test cases that cover only the affected areas, resulting in narrow but deep testing.

For example, in an e-commerce project, sanity testing would validate specific modules, such as the login and user profile pages, to ensure changes do not impact related functionalities.

Differences Between Sanity Testing and Smoke Testing

  • Goal: Smoke testing verifies the stability of the build, while sanity testing verifies the rationality of recent changes.
  • Performers: Smoke testing is performed by software developers or testers, whereas sanity testing is performed by testers alone.
  • Purpose: Smoke testing verifies critical functionalities, while sanity testing checks new functionalities such as bug fixes.
  • Subset category: Smoke testing is a subset of acceptance testing, and sanity testing is a subset of regression testing.
  • Documentation: Smoke testing is documented or scripted, while sanity testing is not.
  • Scope: Smoke testing verifies the entire system, while sanity testing verifies a specific component.
  • Build stability: Smoke testing can be done on stable or unstable builds; sanity testing is conducted on relatively stable builds.

Smoke Testing vs. Regression Testing

Regression testing might be confused with smoke testing, given that both occur after each new build. However, the fundamental distinction lies in purpose. Unlike smoke testing, regression testing goes much deeper, extending well beyond a surface-level check that the core user experience works.

Its primary objective is to ensure that recent changes, such as bug fixes, addition or removal of features/modules, code or configuration alterations, or changes in requirements, do not adversely affect the existing functionality or features of the application.

Differences Between Regression Testing and Smoke Testing

  • Goals: Smoke testing ensures the software's primary functions are working correctly. Regression testing ensures that changes or updates have not created unexpected effects in other software parts.
  • Scope: Smoke testing is limited in scope, covering only the most basic functions of the software. Regression testing covers a much broader scope, including all areas of the software, even features that may not have been changed.
  • Time required: Smoke testing can be completed relatively quickly. Regression testing can take longer as it covers more areas of the software.
  • Frequency: Smoke testing is usually done whenever a new build is produced, at the start of each testing cycle and sometimes during integration testing. Regression testing is generally done after software elements have been changed or updated.
  • Test cases: Smoke testing often uses a limited set of test cases. Regression testing typically uses a large set of test cases.

Smoke Testing as Part of Your Test Anatomy

Like all testing, your smoke testing process is specific to your organization and can be traced back to its commercial and operational incentives. Global App Testing generally advocates for higher-quality processes and products in our book Leading Quality. (But then, we would say that, wouldn't we?)

In a narrow sense, the moment to run a smoke test is "whenever you want to check the application is working." In a more prescriptive sense, there are key moments when a smoke test is a sensible investment. Let's take a look at the most common ones:

1. Before You Commit Code to a Repository

If you don't run your full test suite in a local environment, you should at least make sure you haven't broken anything so severe that it shows up in a smoke test.

Key stages include:

  • Running smoke tests in the initial pre-commit stage
  • Developers working on local devices by running a Git test script
  • Coupling tests with a client-side hook for automated verification
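
To illustrate the last point, here is a minimal sketch of a client-side pre-commit hook written in Python (Git hooks can be any executable file saved as .git/hooks/pre-commit). The smoke test command is an assumption; substitute whatever launches your own suite.

```python
#!/usr/bin/env python3
# Sketch of a Git pre-commit hook that runs a smoke suite before each commit.
# The command below is a placeholder for your own smoke test entry point.
import subprocess
import sys

SMOKE_COMMAND = ["pytest", "-q", "tests/smoke"]  # hypothetical smoke suite


def main() -> int:
    print("Running smoke tests before commit...")
    result = subprocess.run(SMOKE_COMMAND)
    if result.returncode != 0:
        print("Smoke tests failed; commit aborted.")
        return result.returncode
    print("Smoke tests passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```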

2. Before a Large Test Series

Before undertaking major testing efforts, including regression and acceptance testing, run smoke tests first. Although an automated smoke test will theoretically save time compared with any manual alternative, how much time you'll save is proportional to the scale of the test series you're about to undertake. We undertake manual smoke tests on clients' behalf before a major test series, too, but the saving is more marginal.

3. Immediately After Deployment

Immediately after deployment is a sensible time to undertake a smoke test to ensure that everything is still working properly.

4. Any Time You Need Verification

As long as your smoke test is rapid (and probably automated), we would generally advocate for a liberal smoke testing policy. Any time you need to check whether your system is working in a big-picture sense, run a smoke test.

Smoke Testing Within CI/CD, DevOps, and Agile

Because smoke tests generate faster failures and shorter feedback loops, they have become closely associated with modern development methodologies such as Agile and with frameworks that focus intensely on deployment speed. With DevOps in particular, while automated smoke tests are not a technical requirement, they are probably a de facto one.

How Do I Execute a Smoke Test Using Global App Testing?

You can execute any manual test, including smoke tests, by logging into your Global App Testing dashboard, entering or uploading the appropriate test cases, and pressing "launch" to receive your results in as little as 2 hours.

Express tests with Global App Testing are made to offer manual tests with the ease, convenience, and speed of running automated tests. We have also integrated with Jira, TestRail, GitHub, Slack, and Zephyr Squad so you can launch and receive test results where your teams like to work.

For more complex and bespoke testing for our clients, we will often run a quick smoke test as a matter of policy to verify that the test is worth doing.

How to Write a Suitable Test Case?

To create effective test cases, follow these steps:

  • Identify test areas: Enumerate all the product areas suitable for a smoke test.
  • Define core functionality: Break down the core functionality of each identified area into step-by-step processes.
  • Write test cases: Document the identified steps as test cases.
  • Execute tests properly: Implement the tests in a structured manner, writing "pass/fail" next to each step.
  • Avoid ad hoc testing: Refrain from unstructured, ad hoc exploration of the product and opt for a systematic approach to ensure the test's accuracy.

By executing these steps, you enhance the likelihood of your tests working as intended, preventing the temptation to cut corners in the testing process.
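
As an illustration, the steps above can be captured in a lightweight, structured runner that records "pass/fail" against each case. This is a sketch only; the check functions are hypothetical placeholders standing in for real verification logic.

```python
# Sketch of a structured smoke test case runner with per-case pass/fail.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SmokeTestCase:
    area: str                  # product area identified for the smoke test
    steps: List[str]           # core functionality broken into steps
    check: Callable[[], bool]  # the actual verification to execute
    result: str = field(default="not run")


def run_suite(cases: List[SmokeTestCase]) -> bool:
    all_passed = True
    for case in cases:
        try:
            passed = case.check()
        except Exception:
            passed = False
        case.result = "pass" if passed else "fail"
        all_passed = all_passed and passed
        print(f"[{case.result.upper()}] {case.area}")
    return all_passed


# Example usage with placeholder checks that would normally drive the app:
suite = [
    SmokeTestCase("Login", ["Open login page", "Submit valid credentials"],
                  check=lambda: True),
    SmokeTestCase("Checkout", ["Add item to cart", "Open checkout"],
                  check=lambda: True),
]

if __name__ == "__main__":
    print("Smoke test", "passed" if run_suite(suite) else "failed")
```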

Smoke Testing Best Practices and Failure Categories

Here's our advice on what to do when you're smoke testing:

Best Practices

1. Smoke Tests Offer the Highest ROI of Any Test Automation – You Should Probably Automate Them

Because of the frequency with which you'll undertake smoke testing, automating your smoke tests delivers the greatest savings we'd associate with automating any test.

We're a manual testing solution, so we often have to make judgments about when our clients would be better off with an automated test (or even give our clients the bandwidth to automate more of their testing suite), but a smoke test is usually better off done by a program.

2. Run Smoke Tests Frequently

Once you have automated your smoke test, you can benefit from running it more frequently, especially if other parts of your testing suite require manual or time-consuming work. We suggest running the smoke test at every stage of the pipeline, from before you commit until after deployment. If your smoke test is still manual, consider automating it so you can test more frequently and efficiently.

3. Ensure the Whole System Is Touched

One failure mode when building a smoke test is that only part of the system gets covered. If your smoke test is not sufficiently broad, it can miss a major fault in the modules it skips. Ensure that your test cases touch every function of the system without looking into the complex instances of each function. If you have a modular system, you should also ensure that the relationships between modules are tested in addition to the modules themselves.

Smoke Testing Failure Categories

1. Lack of Focus

It is common for smoke tests to be executed poorly when the test's primary purpose is forgotten. The objective of the test is to save time, not to exhaustively discover every bug.

Consequences of excessive depth include:

  • The test suite fails to be both thorough and quick
  • Tests become time-consuming and lose their efficiency advantage
  • Teams conflate comprehensive testing with quick verification checks
  • Some software modules may not get tested at all
  • Testing ceases to be cost-effective in terms of time or money

2. Failure to Identify Whether It Will Save Time

Failure to correctly identify when and whether a smoke test will save your organization time is the second most common category of failure. Most likely, you are smoke testing less frequently than is optimal and could save your organization more time by automating smoke tests and running them more often. But in some cases, you may be testing pointlessly, in situations where a smoke test will not save your organization any time.


FAQ

What Happens If Smoke Testing Fails?

If smoke testing fails, it indicates significant issues in the basic functionalities. Developers may need to investigate and fix the problems before further testing. Failed smoke tests may prevent the software build from progressing in the development cycle.

Is Smoke Testing Automated or Manual?

Smoke testing can be both automated and manual. Automated smoke tests are scripted and executed automatically, while manual smoke tests involve testers manually checking essential functionalities.

Can Smoke Testing Replace Comprehensive Testing?

No, smoke testing is not a replacement for comprehensive testing. It is a quick check to ensure the basic functionalities are working. More thorough testing, such as regression testing and integration testing, is still necessary for comprehensive software quality assurance.
