Software Testing - Smoke Testing vs Sanity Testing
1 — Smoke Testing
Definition
Smoke testing (sometimes called "build verification testing") is a quick, surface-level run through the main flows of an application to confirm the build is stable enough for deeper testing. The goal is to catch show-stopper problems early.
Primary goal
Validate that the major/critical features are functioning and the build did not collapse. If a smoke test fails, testers typically stop and send the build back to the developers.
When to run
- Immediately after a new build is deployed to the QA/test environment.
- Before executing full functional/regression suites.
- After major merges or automated deployment pipeline runs.
Scope
- Broad coverage across the application, but shallow — each flow gets a few critical checks rather than exhaustive tests.
- Focus on high-level end-to-end flows: app launches, login, basic CRUD, navigation, payment gateway connectivity (if core), page loads, DB connectivity, APIs up, major integrations.
Typical items to include
- Application starts and shows main UI (web app loads, mobile app launches).
- Login/logout with valid credentials.
- Basic create/read/update/delete for critical entities.
- Navigation between major pages/screens.
- Critical API endpoints return expected status (200).
- Basic data persistence (data saved and retrievable).
- Payment checkout (if essential) — or at least initiation.
- No fatal crashes or 500 errors on critical pages.
Acceptance criteria
- A short pass/fail list: if any critical check fails, the build fails smoke testing. Usually a green/red decision.
Example (web app)
- Home page loads (200, key elements visible).
- Login succeeds with valid credentials.
- Search returns results.
- Add-to-cart works and cart shows >0 items.
- Checkout initiates (payment gateway reachable).
- Logout works.
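The checks above can be sketched as a simple green/red gate. This is a minimal illustration, not a real automation suite: each check function is a placeholder that would, in practice, drive a browser or call an API.

```python
# Minimal smoke-gate sketch. Each check is a named function returning True/False;
# the bodies here are stand-ins for real browser/API interactions.

def home_page_loads():
    return True  # e.g. GET / returns 200 and key elements are visible

def login_succeeds():
    return True  # e.g. submit valid credentials, land on the dashboard

def search_returns_results():
    return True

def add_to_cart_works():
    return True  # cart shows >0 items after adding a product

def checkout_initiates():
    return True  # payment gateway reachable

def logout_works():
    return True

SMOKE_CHECKS = [home_page_loads, login_succeeds, search_returns_results,
                add_to_cart_works, checkout_initiates, logout_works]

def run_smoke(checks):
    """Run every check; return ('GREEN', []) or ('RED', [names of failed checks])."""
    failed = [c.__name__ for c in checks if not c()]
    return ("GREEN" if not failed else "RED", failed)

status, failures = run_smoke(SMOKE_CHECKS)
print(status, failures)  # → GREEN []
```

Running every check (rather than stopping at the first failure) gives developers a full picture of what broke in one pass; either style can back the green/red decision.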
Advantages
- Fast feedback to devs — prevents wasting QA time on broken builds.
- Cheap gate for CI/CD pipelines.
- Early detection of severe integration issues.
Disadvantages / risks
- Not exhaustive — can miss subtle defects.
- Over time, smoke suites can become too large and lose speed.
Best practices
- Keep smoke tests small (5–20 checks depending on app size).
- Automate smoke tests and run them in CI on every build.
- Keep them stable, maintainable, and independent of test data where possible.
- Document clear pass/fail criteria.
2 — Sanity Testing
Definition
Sanity testing is targeted testing to verify that a specific functionality, bug fix, or small set of related functionality works correctly after changes. It's narrower and deeper than smoke testing.
Primary goal
Verify that recent changes/bug fixes did not break the specific functionality and it behaves as expected so the build can proceed to more comprehensive testing.
When to run
- After developers deliver a bug fix or change to a particular module/feature.
- After patch builds, or when a small set of features is updated.
Scope
- Narrow focus — only the changed module(s) or defect areas.
- Deeper validation than smoke testing in that area: verify workflows, boundary conditions, and related dependent behavior.
Typical items to include
- The exact steps to reproduce the bug (now expecting it to be fixed).
- Validations, error messages, edge cases in the fixed module.
- Dependency checks (does the change affect other nearby features?).
- Basic sanity checks in the area to ensure no new regressions were introduced.
Example
Developer fixes an “Add discount code” bug. Sanity tests:
- Apply discount code (valid) -> cart totals update correctly.
- Apply invalid code -> correct error message.
- Ensure discount applied persists after page reload.
- Ensure checkout accepts discounted total.
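The first two checks above can be captured directly as code. A minimal sketch, assuming a hypothetical discount rule where `DISC50` takes 50 off the total (the cart logic is invented for illustration, not the real application's):

```python
# Sanity-test sketch for the "Add discount code" fix.
# Assumed for illustration: product price 500, and DISC50 subtracts 50.

PRODUCT_PRICE = 500
DISCOUNT_CODES = {"DISC50": 50}

def apply_discount(total, code):
    """Return (new_total, error). Unknown codes leave the total unchanged."""
    if code not in DISCOUNT_CODES:
        return total, "Invalid discount code"
    return total - DISCOUNT_CODES[code], None

# Valid code: cart total updates correctly (500 - 50 = 450).
total, err = apply_discount(PRODUCT_PRICE, "DISC50")
assert total == 450 and err is None

# Invalid code: correct error message, total untouched.
total, err = apply_discount(PRODUCT_PRICE, "BOGUS")
assert total == 500 and err == "Invalid discount code"
```

The persistence and checkout checks would sit on top of this, exercising the page reload and payment flow end to end.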
Acceptance criteria
- All sanity tests for the changed area pass; if any fail, send the build back to the developer.
Advantages
- Fast verification of fixes without running the entire suite.
- Reduces rework by focusing on the change.
- Useful for triaging hotfixes/patches.
Disadvantages / risks
- May miss side-effects outside the tested area unless dependencies are considered.
- If developers or testers miss dependencies, defects can leak into regression.
Best practices
- Define a clear list of tests tied to the change/bug.
- Include tests for common edge cases and immediately adjacent functionality.
- Keep it quick but thorough enough to be confident the fix didn't break other things in scope.
3 — Key Differences (Concise table)
| Aspect | Smoke Testing | Sanity Testing |
|---|---|---|
| Purpose | Verify overall build stability | Verify specific functionality/bug fix |
| Scope | Broad, shallow | Narrow, deep |
| Performed by | QA, on each new build | QA, after changes/bug fixes |
| Automation | Commonly automated (CI) | Often manual but can be automated |
| When to stop | Fail — do not run full tests | Fail — send fix back and re-test |
| Time required | Short | Short — comparable to smoke, far less than full regression |
4 — Relationship to Other Testing Types
- Regression testing: exhaustive testing to check that existing features still work; usually follows smoke & sanity if the build passes.
- Sanity is often a subset of regression (targeted).
- Smoke is a gate to allow regression and functional tests to run.
5 — Automation Considerations
Smoke tests
- High value for automation in CI/CD because they must run on every build.
- Use headless browsers (Puppeteer, Playwright, Selenium), API tests (Postman/Newman, REST-assured), or unit-level integration tests as appropriate.
- Keep smoke tests stable and resilient to UI changes (use stable selectors, APIs for critical checks).
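One way to make a smoke check resilient, per the last point, is to verify critical behavior through an API status code rather than UI selectors, and to inject the HTTP call as a callable so the check can be exercised without a live server. A sketch with invented names (`endpoint_is_up`, the stubbed routes) used purely for illustration:

```python
def endpoint_is_up(fetch, url, expected_status=200):
    """Smoke-check one critical endpoint. `fetch` is any callable that takes a
    URL and returns a status code -- e.g. a thin wrapper around urllib/requests
    in production, or a stub in tests. Errors count as a failed check."""
    try:
        return fetch(url) == expected_status
    except Exception:
        return False  # unreachable endpoint -> failed check, not a crash

# Stubbed usage: a fake "server" where only /health responds 200.
fake_server = {"/health": 200, "/orders": 500}
stub_fetch = lambda url: fake_server[url]

print(endpoint_is_up(stub_fetch, "/health"))   # → True
print(endpoint_is_up(stub_fetch, "/orders"))   # → False
print(endpoint_is_up(stub_fetch, "/missing"))  # → False (lookup error caught)
```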
Sanity tests
- Often manual for quick ad-hoc checks after fixes, but can be automated for frequently changed modules.
- Automate sanity flows for high-risk components or recurring hotfixes.
- Use test tags/labels to run only sanity tests related to a change.
6 — Example checklists / templates
Smoke Test Checklist (example)
- App loads on target URL and renders main header/footer.
- Login with known test user (success).
- Navigate to dashboard (load within acceptable time).
- Create a new order / create resource (CR part).
- Read: view created order/resource.
- Update: change an attribute and save.
- Delete: remove resource and verify removal.
- Key third-party service reachable (payment, email API, etc.).
- No JS console errors / no server 500 on critical pages.
If any step fails → Fail build.
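The "any step fails → fail build" rule can also be coded as a fail-fast gate that stops at the first broken step, which is useful when later checklist steps depend on earlier ones (you can't delete an order that was never created). A sketch with placeholder steps:

```python
def smoke_gate(steps):
    """Run ordered (name, check) steps; stop at the first failure.
    Returns (True, None) if the build passes, else (False, failing_step_name)."""
    for name, check in steps:
        if not check():
            return False, name  # fail the build, report the broken step
    return True, None

# Placeholder steps in checklist order; real checks would drive the app.
steps = [
    ("app loads", lambda: True),
    ("login", lambda: True),
    ("create order", lambda: False),  # simulate a broken create step
    ("delete order", lambda: True),   # never reached: the gate fails fast
]

passed, broken_step = smoke_gate(steps)
print(passed, broken_step)  # → False create order
```

Fail-fast trades completeness of the failure report for speed; the run-all style reports every broken check at once.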
Sanity Test Template (for a bug fix)
- Title: Sanity test for [BUG-123] Fix discount calculation
- Steps:
  - Login as user X.
  - Add product A (price 500) to cart.
  - Apply valid discount code DISC50.
  - Verify cart total = 450 (500 - 50).
  - Reload page, verify discount persists.
  - Proceed to checkout, verify payment amount is 450.
- Expected result: Discount applied and persisted, no errors.
- Pass criteria: All steps succeed.
7 — When to choose which
- New build arriving from CI/CD: always run smoke.
- After a small change, bug fix, or patch: run sanity for the changed area.
- Before a major regression suite or release candidate: run smoke first, then sanity where applicable, then full regression.
8 — Common anti-patterns & pitfalls
- Making the smoke suite too large — it then becomes slow and defeats the purpose. Keep it as a minimal verification gate.
- Treating sanity as ad-hoc with no documentation — create at least minimal steps per fix so it's repeatable.
- Not considering dependencies — a sanity test should include checks for adjacent features to catch side effects.
- Over-automation without maintenance — flaky automated smoke/sanity tests reduce confidence.
9 — Metrics & reporting
Track:
- Smoke pass rate per build (automated).
- Time to detect build-breaking issues (from commit to smoke failure).
- Sanity turnaround time (time to verify a fix).
- Flakiness rate of smoke/sanity tests (important for automation hygiene).
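Two of these metrics are straightforward to compute from recorded results. A minimal sketch, assuming you log per-build smoke outcomes and per-test rerun histories (the data shapes here are invented for illustration):

```python
# Metric sketches over hypothetical recorded results.

def smoke_pass_rate(history):
    """Fraction of builds whose smoke suite passed.
    `history` is a list of (build_id, passed) tuples."""
    return sum(1 for _, passed in history if passed) / len(history)

def flakiness_rate(results):
    """Per-test flakiness: fraction of reruns disagreeing with the majority
    outcome. `results` is a list of pass/fail booleans across reruns of one
    test on the same code."""
    fails = results.count(False)
    return min(fails, len(results) - fails) / len(results)

history = [("b1", True), ("b2", True), ("b3", False), ("b4", True)]
print(smoke_pass_rate(history))                         # → 0.75
print(flakiness_rate([True, True, False, True, True]))  # → 0.2
```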
10 — Practical tips
- Keep both smoke and sanity suites in source control and versioned with the application.
- Use tagging in your automation framework (e.g., @smoke, @sanity) to run specific suites.
- Pair smoke tests with environment health checks (DB up, queues running, 3rd party credentials valid).
- Make sanity tests traceable to bug IDs or ticket numbers.
- Run smoke tests automatically on pull request builds to catch integration issues early.
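The tagging idea can be sketched without any test framework: a decorator records labels, and a runner filters by label. (In pytest the same idea is spelled with markers, e.g. `@pytest.mark.smoke` selected via `pytest -m smoke`; the registry and test names below are illustrative.)

```python
# Tag-based suite selection sketch: a decorator registers each check with
# its labels, and run_suite executes only the checks carrying a given label.

REGISTRY = []

def tag(*labels):
    def wrap(fn):
        REGISTRY.append((set(labels), fn))
        return fn
    return wrap

@tag("smoke")
def test_login():
    return True

@tag("smoke", "sanity")
def test_discount_total():
    return True

@tag("sanity")
def test_discount_error_message():
    return True

def run_suite(label):
    """Run only the checks tagged with `label`; return names of passing checks."""
    return [fn.__name__ for labels, fn in REGISTRY if label in labels and fn()]

print(run_suite("smoke"))   # → ['test_login', 'test_discount_total']
print(run_suite("sanity"))  # → ['test_discount_total', 'test_discount_error_message']
```

A check can carry both tags, which mirrors how a sanity test for a critical flow often doubles as a smoke check.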