Complete guide to Playwright reporting

Learn how Playwright's reporting capabilities improve test visibility and debugging. This article covers built-in and third-party reporters, including the HTML reporter and Trace Viewer, plus best practices to get faster feedback and actionable insights from every test run.


Pratik Patel

Jan 1, 2026


Software test reporting is the difference between knowing a test failed and understanding why it failed.

In modern engineering teams, end-to-end (E2E) test suites can execute thousands of assertions per run, across multiple browsers, environments, feature flags, and CI pipelines.

At scale, failures are inevitable, but without strong reporting, they turn into guesswork, slow root-cause analysis, and ultimately unreliable releases.

Industry data consistently shows that developers spend 20-30% of their CI debugging time just figuring out what went wrong, not fixing it.

Poor reporting increases mean time to resolution (MTTR), delays deployments, and erodes trust in test automation.

This guide explains Playwright's reporting capabilities from first principles through production-grade, CI-optimized setups used by high-performing engineering teams.

By the end, you’ll understand:

  • What Playwright reports actually contain
  • Various types of reports generated by Playwright
  • When to enable traces, screenshots, and videos
  • How to design a reporting strategy that scales with CI, test history, and team size

Why Are Test Automation Reports Needed?

Once tests start running, the next challenge is understanding the results.

A failing CI job only tells you that something broke; it does not explain:

  • Which user flow failed
  • Whether the failure is deterministic or flaky
  • If the failure is new or historically recurring
  • Whether it impacts one browser, one environment, or all pipelines

Test automation reports solve this by turning raw test execution data into clear, actionable insights the entire team can consume.

With effective Playwright test reporting, failures include:

  • The exact test step and assertion that failed, along with logs
  • Precise timing data (critical for CI optimization)
  • Supporting artifacts such as screenshots, videos, or Playwright traces (if enabled)

This context dramatically shortens feedback loops and reduces blind retries.

Over time, good reporting also makes test suites easier to manage. Teams that adopt structured reporting typically see:

  • 40–60% faster failure triage
  • Fewer “re-run until green” cycles
  • Higher confidence in CI gatekeeping

Reporting as a CI Optimization Tool

Beyond debugging, reporting plays a major role in CI performance and test governance.

Over time, good Playwright reporting helps teams:

  • Identify slowest specs that inflate CI duration
  • Detect flaky tests, which account for up to 30% of E2E failures in large pipelines
  • Track failure patterns across branches, pull requests, and release trains
  • Decide which tests belong in PR checks versus nightly or release pipelines

In high-velocity CI environments, reports become a shared source of truth. Developers, QA engineers, and engineering leads rely on them to make consistent decisions about:

  • Blocking vs non-blocking failures
  • Test quarantining
  • Release readiness

Without reliable Playwright test reporting, teams often lose trust in automation, leading to ignored failures, disabled tests, and manual verification creep.

I have seen this firsthand while working with many startups and enterprises.

What is a Playwright Test Report?

A Playwright test report is the structured output generated after a test run that summarizes execution results in a way humans can quickly understand.

Instead of scanning raw logs, teams can immediately see which tests passed or failed, how long each test took, and where failures occurred.

From a Playwright test reporting perspective, this output goes beyond pass/fail status.

When a test fails, the report links directly to the exact step that broke and can include supporting artifacts such as screenshots, videos, and traces (if enabled).

This level of detail allows teams to inspect the application state at the moment of failure. Developers can review the DOM snapshot, analyze network activity, and follow the sequence of test steps from a single report.

As a result, debugging becomes significantly faster and more reliable, both for local development and CI pipelines where access to the running environment is limited.

At scale, Playwright test reports act as a shared artifact between developers, QA engineers, and CI systems, ensuring everyone is looking at the same source of truth.

What Makes Playwright Reporting Different

Playwright reporting stands out because reporting and debugging are built directly into the framework.

Test results, execution steps, and debugging artifacts are generated as part of the normal test run, which keeps reporting consistent across local environments and CI pipelines.

Compared to other test frameworks, Playwright reporting offers several practical advantages:

  • Step-level visibility into test execution, not just test-level results
  • Native integration with traces, screenshots, and videos
  • Clear reporting for parallel test runs, even at high concurrency
  • Minimal configuration, with useful defaults that work out of the box

These capabilities are especially important in CI, where tests often run in parallel across browsers and environments. Playwright ensures that reporting remains readable and actionable, even as execution scales.

Why CI Optimization Matters for Playwright Test Reporting

Playwright’s strength is not just its powerful browser automation; it’s the depth of reporting data it can generate when configured correctly. Teams that invest in reporting early:

  • Scale E2E coverage without slowing delivery
  • Reduce CI costs by optimizing execution
  • Build long-term test history instead of short-lived logs

That’s why Playwright test reporting is no longer a “nice to have”—it’s a core part of modern test architecture.

Different types of Playwright Reporters

When you run Playwright tests without explicitly configuring a reporter, Playwright uses the list reporter by default (on CI, it falls back to the dot reporter).

You can customize reporting by specifying one or more reporters in the playwright.config file. In many setups, the HTML reporter is commonly enabled to provide a detailed, interactive view of test results.

You can also configure reporters directly from the command line using the --reporter flag. For example, to run tests with a specific reporter:

npx playwright test --reporter=line

After the test run completes, Playwright generates the report output in the configured directory, such as the default playwright-report folder or a custom path if specified.  

These reports make it easier to review test results and choose the reporting format that best fits your local development or CI needs.
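
For reference, here is a minimal configuration sketch that sends the HTML report to a custom folder; the folder name my-report is just an illustrative choice, and a --reporter flag passed on the command line will override whatever is set here.

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Write the interactive HTML report to a custom folder
  // instead of the default playwright-report directory.
  reporter: [['html', { outputFolder: 'my-report' }]],
});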

Built-In Playwright Reporters

Playwright includes several built-in reporters that work out of the box and cover most reporting needs, from local development to CI pipelines.

These reporters require no additional installation and can be configured directly in the Playwright configuration file.

1. List Reporter

The List reporter is the default reporter in Playwright.

It prints each test name along with its status as execution progresses, making it ideal for local development and debugging where readability matters.

When a test fails, error details are printed immediately, allowing developers to understand the failure without scrolling through excessive logs.

Best use case

  • Local development
  • Small test suites
  • Debugging newly written tests
playwright.config.ts
export default { reporter: 'list', };
[Screenshot: list reporter output in the terminal, showing per-test results across browsers]

Best practice:

Many teams keep the List reporter enabled locally while switching to more compact reporters in CI to reduce log noise.
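
One way to implement this, as a minimal sketch: branch the reporter choice on the CI environment variable so local runs stay readable while CI logs stay compact.

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Verbose, readable list output locally; compact dot output on CI runners.
  reporter: process.env.CI ? 'dot' : 'list',
});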

2. Line reporter

The Line reporter outputs one line per test, making it more compact than the List reporter while still preserving clarity.

This reporter works well for medium to large test suites, where verbose output slows log scanning and increases CI log storage costs.

Best use case

  • Medium-to-large test suites
  • CI pipelines where readability and compactness must be balanced
playwright.config.ts
export default { reporter: 'line', };
[Screenshot: line reporter output in the terminal after running npx playwright test]

Why teams use it:

CI data shows that overly verbose logs can increase log storage and retrieval times by 20–30%, especially in parallel test execution environments.

3. Dot reporter

The Dot reporter prints a single character per test as it runs.

By default it uses:

Character   Description
.           Passed
F           Failed
×           Failed or timed out, and will be retried
±           Passed on retry (flaky)
T           Timed out
°           Skipped

The “green dot” and “red dot” you may see in some UIs are just an interpretation; in the terminal the reporter prints plain characters, and whether they are colored depends on your terminal.

For example:

.....F..

This minimal format is best suited for CI environments where reducing log size is more important than detailed output.

Best use case

  • Large CI pipelines
  • High-parallel execution
  • Resource-constrained runners
playwright.config.ts
export default { reporter: 'dot', };
[Screenshot: dot reporter output showing test progress as dots, with a summary of passed tests]

Why teams use it:

In large organizations, E2E pipelines can produce hundreds of thousands of log lines per day. Compact reporters like Dot help keep CI logs fast, readable, and cost-effective.

4. HTML reporter

The HTML reporter is one of the most powerful features of Playwright test reporting and is widely used for failure triage and post-run analysis.

It generates a rich, interactive report that includes:

  • Test suites and steps
  • Retries and timings
  • Screenshots, videos, and traces (when enabled)

The report is generated in the default playwright-report directory.

playwright.config.ts
export default { reporter: 'html', };
[Screenshot: terminal output prompting to open the HTML report with npx playwright show-report]

Open the report

npx playwright show-report

[Screenshot: interactive HTML report with per-browser status, execution time, and passed tests]

Best practices:

  • Enable HTML reports in CI artifacts
  • Pair with trace: 'on-first-retry' to limit artifact size
  • Share reports during PR reviews for faster feedback

Teams that adopt HTML reporting consistently see 40–60% faster debugging cycles for E2E failures.
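
A configuration sketch following the best practices above might look like this; it assumes you want the report generated but never auto-opened on CI, and traces captured only when a test is retried.

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Generate the HTML report but never auto-open a browser (useful on CI).
  reporter: [['html', { open: 'never' }]],
  use: {
    // Record a trace only on the first retry to keep artifact sizes down.
    trace: 'on-first-retry',
  },
});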

5. JSON reporter

The JSON reporter outputs detailed test data, including timing and metadata, in JSON format, and is best used for custom dashboards, analytics, or integrations that consume structured test results.

You can generate a JSON report in the terminal with the Playwright CLI:

npx playwright test --reporter=json

[Screenshot: JSON reporter output printed in the terminal]

You can save this report to a file by adding the following to your config file:

playwright.config.ts
export default { reporter: [['json', { outputFile: 'results.json' }]], };
[Screenshot: results.json saved to the project]

Best use case:

  • Custom test dashboards
  • Flakiness analysis
  • Historical trend tracking
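
As an illustration of the dashboard and trend-tracking use cases above, the sketch below reads results.json and prints a small summary. The nested suites → specs → tests → results shape matches the JSON reporter's typical output, but field names can differ between Playwright versions, so verify the structure against your own results.json before relying on it.

summarize-results.ts
import fs from 'fs';

// Rough typings for the parts of the JSON report this sketch touches.
type JsonResult = { status: string; duration: number };
type JsonTest = { results: JsonResult[] };
type JsonSpec = { title: string; ok: boolean; tests: JsonTest[] };
type JsonSuite = { title: string; specs?: JsonSpec[]; suites?: JsonSuite[] };

const report = JSON.parse(fs.readFileSync('results.json', 'utf-8'));

let passed = 0;
let failed = 0;
const slow: string[] = [];

function walk(suite: JsonSuite) {
  for (const spec of suite.specs ?? []) {
    if (spec.ok) passed++; else failed++;
    for (const test of spec.tests ?? []) {
      for (const result of test.results ?? []) {
        // Flag anything slower than 10 seconds as a CI-time candidate.
        if (result.duration > 10_000) slow.push(`${spec.title} (${result.duration}ms)`);
      }
    }
  }
  for (const child of suite.suites ?? []) walk(child);
}

for (const suite of (report.suites ?? []) as JsonSuite[]) walk(suite);

console.log(`Passed specs: ${passed}, failed specs: ${failed}`);
console.log('Slow tests:', slow);

Run it after a test run that produced results.json, for example with npx tsx summarize-results.ts.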

6. JUnit reporter

The JUnit reporter outputs test results in XML format that is widely supported by CI systems, making it ideal for CI dashboards that track test summaries and trends over time.

You can get a JUnit report in the terminal using the Playwright CLI command:

npx playwright test --reporter=junit

[Screenshot: JUnit reporter XML output printed in the terminal]

If you want to save the XML to a file, add this to your config file:

playwright.config.ts
export default { reporter: [['junit', { outputFile: 'junit.xml' }]], };
[Screenshot: junit.xml saved to the project]

Using Multiple Playwright Reporters

One of the strengths of Playwright reporting is that you are not limited to a single reporter.

You can configure multiple reporters in the same test run, with each one serving a different purpose for developers, CI systems, or reporting tools.

A common real-world setup looks like this:

playwright.config.ts
export default {
  reporter: [
    ['dot'],
    ['json', { outputFile: 'result.json' }],
    ['list'],
  ],
};

In this setup, the List reporter provides clear and readable output during local runs, the Dot reporter keeps CI logs compact, and the JSON reporter generates structured data that CI systems or dashboards can consume.

All reports are produced from a single test run, so there is no need to rerun tests for different audiences.

This approach scales well from local development to CI pipelines and is suitable for most teams.

Playwright Traces

Reports tell you what failed. Traces show you how it failed.

A Playwright trace is a recorded timeline of a test run. It captures everything that happened during execution, including page actions, DOM state, network requests, console logs, and screenshots.

When a test fails, traces let you replay the failure step by step instead of guessing from logs.

Traces are especially useful when:

  • A test fails only in CI but passes locally
  • A failure depends on timing or async behavior
  • Screenshots alone do not explain what went wrong

Instead of re-running tests with extra logging, you open the trace and see the exact sequence of events that led to the failure.

When and How to Enable Traces

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    trace: 'on',
  },
});

When enabled, Playwright records trace data during test execution and saves it as an artifact that can be opened in the Playwright trace viewer.
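
To inspect a recorded trace locally, you can open it in the trace viewer from the command line; the path below is illustrative, since Playwright writes traces under the test-results directory by default.

npx playwright show-trace test-results/example-chromium/trace.zip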

What a Playwright trace includes:

  • Actions and assertions executed during the test
  • DOM snapshots before and after each step
  • Network requests, responses, and console logs
  • Screenshots tied to execution steps
[Screenshot: Playwright trace viewer]

Traces work alongside reporters, not as a replacement. Reporters identify which test failed, screenshots and videos show what the page looked like, and traces explain why the failure occurred by replaying the exact sequence of events.

In CI, traces are usually stored as artifacts or uploaded to reporting tools as part of a complete debugging workflow.

A Common CI Challenge: Trace Size at Scale

While traces are one of the most powerful features of Playwright test reporting, they can become a significant bottleneck in large test suites, especially in CI environments.

For large E2E suites running across multiple browsers and retries, Playwright trace artifacts can quickly grow to hundreds of MBs or even multiple GBs per run.

In GitHub Actions and similar CI systems, downloading these trace files often takes several minutes, slowing down debugging rather than speeding it up.

This issue becomes more pronounced when:

  • Traces are enabled for all tests instead of selectively
  • Multiple retries generate duplicate trace artifacts
  • Parallel jobs each upload their own traces
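
A practical mitigation, shown as a sketch below, is to record heavy artifacts selectively: capture traces only on the first retry and keep screenshots and videos only for failures, so healthy runs produce almost no artifacts.

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    trace: 'on-first-retry',        // record a trace only when a test is retried
    screenshot: 'only-on-failure',  // keep screenshots only for failing tests
    video: 'retain-on-failure',     // discard videos for passing tests
  },
});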

Custom Playwright Reporter

Built-in reporters are usually enough, but when teams need custom output or integration with internal systems, Playwright allows you to create custom reporters.

A custom reporter hooks into the test run and lets you process results in your own way without changing how tests are written or executed.

Simple Custom Reporter Example
import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';
import fs from 'fs';

class SimpleReporter implements Reporter {
  onTestEnd(test: TestCase, result: TestResult) {
    fs.appendFileSync(
      'Report.txt',
      `${test.title} ${result.status} in ${result.duration}ms\n`
    );
  }
}

export default SimpleReporter;
Register the Custom Reporter
// playwright.config.ts
export default {
  reporter: [['./custom-reporter.ts']],
};
Output

When tests run, the custom reporter writes one line per test to Report.txt, including the test name, status, and execution time.

[Screenshot: Report.txt entries written by the custom reporter during test execution]
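
If you also want a run-level summary, the same reporter can implement onEnd. The sketch below extends the example above; it is optional and only illustrates the additional hook.

custom-reporter.ts
import type { FullResult, Reporter, TestCase, TestResult } from '@playwright/test/reporter';
import fs from 'fs';

class SimpleReporter implements Reporter {
  private passed = 0;
  private failed = 0;

  onTestEnd(test: TestCase, result: TestResult) {
    if (result.status === 'passed') this.passed++;
    else if (result.status === 'failed' || result.status === 'timedOut') this.failed++;
    fs.appendFileSync('Report.txt', `${test.title} ${result.status} in ${result.duration}ms\n`);
  }

  onEnd(result: FullResult) {
    // Append a one-line summary when the whole run finishes.
    fs.appendFileSync('Report.txt', `Run ${result.status}: ${this.passed} passed, ${this.failed} failed\n`);
  }
}

export default SimpleReporter;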

Third-Party Reporters in Playwright

Built-in and custom Playwright reporters work well for single runs and isolated pipelines.

However, as test suites grow, most teams hit a familiar ceiling: local reports and CI artifacts stop scaling with the team.

At that point, the problem is no longer “Did the test fail?” but:

  • Has this test failed before?
  • Is this failure flaky or real?
  • How many flaky tests do we have in the preview environment?
  • Did it start after a specific commit or release?
  • Is this slowing down CI over time?
  • Where do developers actually go to debug it?

This is where third-party Playwright test reporting tools come in. Instead of focusing on a single run, they focus on history, trends, and test health over time.

📌 Community Note

It’s also worth noting that Max Schmitt, a well-known contributor in the Playwright community, maintains a curated list of Playwright reporters in the Awesome Playwright repository.

TestDino is included in that list 🦕

From a developer’s point of view, Playwright’s HTML reports and traces are excellent, but they come with practical limitations in CI-heavy environments:

  • Reports are tied to a single run
  • Artifacts must be downloaded manually
  • Traces for large suites often grow to hundreds of MBs or GBs
  • GitHub Actions artifact downloads are slow and expire
  • There’s no built-in concept of test history or trends
  • No integrations with Slack, Jira, and other project management tools
  • No way to track flaky tests over time
  • Managers and stakeholders can't see testing metrics

As a result, many teams end up with:

  • Local HTML reports no one revisits
  • CI artifacts that time out or get deleted
  • Valuable trace data that's technically available but rarely opened because it's buried in CI logs
  • Home-grown or vibe-coded dashboards that break with every Playwright update and need constant maintenance
  • Engineers spending hours each week manually triaging the same flaky tests
  • Inability to answer basic questions like "which tests fail most often?"

How TestDino Fits Into Playwright Test Reporting

TestDino takes Playwright's existing outputs and turns them into a centralized, always-available test reporting system.

It works with 2 approaches:

    1. Native Playwright Reports - Upload your existing JSON/HTML reports after test runs complete. Zero changes to your Playwright setup.

    2. TestDino Custom Report - Generate richer reports with real-time streaming 🚀, deeper metadata, and CI optimization controls.

Both modes give developers the same core benefits, but the custom format unlocks real-time visibility and advanced CI workflows.

What This Means for Developers

1. Real Time Execution Updates
See test progress live as it runs. No waiting for the full CI job or HTML report to finish.

2. Centralized CI Test Runs
All Playwright test runs across branches, pipelines, and shards in one unified dashboard.

3. Inline Evidence and Traces
View logs, screenshots, videos, and Playwright traces directly in the browser without downloading artifacts.

4. Playwright Visual Testing Support
When a visual assertion fails, TestDino automatically surfaces a Visual Comparison panel within the test case view, so you can review changes inside the product.

5. CI Aware Metadata and Controls
Capture execution context, shard data, retries, and CI signals to support smarter orchestration and optimization at scale.

Setup Overview (Minimal, Playwright Native Reports)

TestDino doesn’t require changing how you write tests. It relies on Playwright’s standard reporters.

Prerequisites

  • A working Playwright project
  • Node.js (locally or in CI)
  • A TestDino account & project
  • A project token (stored securely)

1. Install & Check the TestDino CLI

npx tdpw --help

No global install required.

2. Configure Playwright Reporters

TestDino consumes the JSON report (HTML is optional but recommended).

playwright.config.ts
reporter: [
  ['json', { outputFile: './playwright-report/report.json' }],
  ['html', { outputFolder: './playwright-report' }],
],

This generates:

  • report.json → structured test data
  • playwright-report/ → HTML, screenshots, assets

3. Run Tests Normally

npx playwright test

No change to your existing workflow.

4. Upload Reports (Local)

npx tdpw upload ./playwright-report --token="your-token" --upload-html

  • --upload-html uploads screenshots and assets
  • Token stays outside version control

5. Upload Reports from CI (GitHub Actions Example)

- name: Upload to TestDino
  if: always()
  run: |
    npx tdpw upload ./playwright-report \
      --token="${{ secrets.TESTDINO_TOKEN }}" \
      --upload-html

if: always() ensures reports are uploaded even when tests fail, which is exactly when you need them for debugging.

6. View Results in TestDino

In the TestDino dashboard, you’ll see:

  • Test runs with pass/fail counts and duration
  • Filters by branch, author, or environment
  • Detailed test case results
  • Failure details: errors, stack traces, screenshots, traces
  • AI hints to identify flaky tests vs real issues

What Playwright Teams Gain Over Time

Over multiple runs, TestDino becomes more than a report viewer:

  • Test run history across branches and PRs
  • CI optimization that cuts CI bills by smartly re-running only failed tests
  • Meaningful analytics that save hours of debugging
  • The TestDino MCP server, which lets you find and fix flaky tests with AI
  • Integrations with GitHub checks, Slack, Jira, Linear, and Asana
  • Role-specific dashboards for QA, developers, and managers
  • An inline trace viewer built right into the test case view

Most importantly, Playwright test reporting stops being local and ephemeral. It becomes something the whole team can rely on, without changing how tests are written or run.

And over time, TestDino becomes a single source of truth for Playwright test health, turning reporting into a shared engineering signal rather than a local debugging artifact.


Choosing the Right Playwright Reporting Setup

The right Playwright reporting setup depends on where tests run and how results are consumed.

Reporting should stay lightweight during development, structured in CI, and expandable as test suites grow.

A practical way to choose a setup:

  • Local development: Use List or Line reporters for readable output; enable the HTML report only when debugging.
  • CI pipelines: Use a compact console reporter like Dot and always generate JSON or JUnit for CI dashboards.
  • Larger test suites: Store HTML reports and traces as artifacts, usually only on failure.
  • Long-term visibility: Use tools like TestDino that consume Playwright’s JSON or HTML reports to track history and trends.

This approach keeps reporting simple at the start and allows teams to scale visibility without changing how tests are written.
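
Putting those recommendations together, a starting-point configuration might look like the sketch below; the exact reporter mix and artifact settings should be adjusted to your own pipeline.

playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Readable output locally; compact console output plus machine-readable files on CI.
  reporter: process.env.CI
    ? [['dot'], ['junit', { outputFile: 'junit.xml' }], ['html', { open: 'never' }]]
    : [['list']],
  use: {
    // Keep artifacts small: traces on first retry, screenshots only for failures.
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
  },
});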

Common Mistakes in Playwright Reporting

Most reporting issues are not caused by the framework itself, but by how reports are generated, stored, or reviewed.

Common mistakes to avoid:

  • Relying only on HTML reports: HTML reports are great for debugging but insufficient for CI dashboards or automation without JSON or JUnit outputs.
  • Enabling all artifacts for every test: Recording traces, screenshots, and videos for every run increases storage usage and slows pipelines unnecessarily.
  • Hiding flaky tests behind retries: Retries can keep CI green but often mask unstable tests if failures are not reviewed regularly.
  • Not reviewing reports consistently: Reports provide value only when teams actively check failures and trends.
  • Overcomplicating reporting too early: Adding custom or third-party tools before they are needed creates noise without clear benefits.

Troubleshooting Playwright Reporting Issues

When Playwright reports are missing or incomplete, the issue is usually related to configuration or CI behavior rather than the tests themselves.

Start by confirming that the correct reporters are enabled and that no CLI flags or environment variables are overriding your playwright.config settings.

In CI, reports may be generated but written to unexpected locations or removed before artifacts are collected. Checking the working directory after test execution and verifying artifact upload steps often reveals the problem.

Differences between local and CI environments can also affect reporting, so running the same command locally with CI settings helps isolate issues quickly.
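
For example, if your reporter configuration branches on the CI environment variable (as in the sketches earlier in this guide), you can approximate CI reporting locally with:

CI=1 npx playwright test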

Best Practices for Playwright Reporting at Scale

  • Keep local reporting simple so developers get quick feedback while running tests.
  • Always generate JSON or HTML reports in CI so tools and pipelines can read the results.
  • Don’t enable screenshots, videos, and traces for every run; turn them on only when needed to avoid slow builds.
  • Make sure reports are saved or uploaded in CI even when tests fail, so failure details are not lost.
  • Review failed and flaky tests regularly instead of relying only on retries.
  • Use the same reporting setup in local and CI environments to avoid confusion.
  • Use Playwright reports to debug individual test runs.
  • When test suites grow and you need history, trends, or flaky test tracking, use a tool like TestDino to centralize and analyze Playwright reports over time.

Conclusion

Playwright reporting turns test runs into clear signals your team can trust. With the right mix of built-in reporters, custom logic, and CI-friendly outputs, failures become easier to understand and faster to fix.

Start simple, then scale your reporting as your test suite grows. Use Playwright for accurate run-level insight, and bring in tools like TestDino when you need centralized visibility, trends, and long-term test health.

When reporting is done right, tests stop being noise and start guiding confident releases.

Stop Guessing Why Tests Fail

One dashboard for Playwright runs, failures, and trends

Start Free

FAQs

How do I generate and view a Playwright HTML report?

In your playwright.config, add reporter: [['html', { outputFolder: './playwright-report' }]], or run tests with npx playwright test --reporter=html. This generates a report (by default in the playwright-report directory). After the run, open it with npx playwright show-report.
