
How 7T Scaled Test Automation, Involved Their Entire QA Team in the Process, and Cut Down on Test Maintenance
Customer Overview
7T (SevenTablets) is a Dallas-based digital transformation firm specializing in AI, machine learning, and enterprise software. They build next-generation business solutions, including mobile and web applications, process automation, cloud infrastructure, and DevOps for enterprise clients.
The Problem
7T’s QA team relied on Selenium with Java and TestNG for test automation. While functional, this approach demanded significant engineering effort to build and maintain. Their test cases were extensive, each covering multiple scenarios, and with their earlier setup the team had to create a separate test case for each scenario, all documented in Excel sheets.
The core issue was that test creation and maintenance consumed the team’s capacity. Engineers spent the majority of their time keeping existing scripts working rather than expanding coverage. Every UI change risked breaking fragile selectors, creating a cycle of triage and repair that left little room for new test development.
Moreover, only automation engineers could write the test scripts, since coding was required. Delivery deadlines became a major concern as the team struggled to finish verifications on time. Although the QA team included both manual testers and automation engineers, these challenges made it difficult to focus solely on quality: because automation was slow and riddled with setbacks, the team often fell back on manual testers to verify the application.
- The QA team could automate at most 10 test cases per day, and each covered only a few scenarios. Some tests took hours just to get up and running.
- Maintenance consumed the majority of QA engineering time, limiting new coverage.
- Release cycles were slowed by insufficient automated testing coverage, leading to a heavier reliance on manual testing.
- Automation engineers were diverted from test automation to test upkeep.
As the number of projects grew, testing became a bottleneck.
Key Objectives
- Scale automated test coverage significantly without adding headcount.
- Reduce the time and effort required for test creation and maintenance.
- Enable team members without coding expertise to contribute to automation.
- Accelerate release cycles with broader, more reliable automation test coverage.
Why 7T Chose testRigor
As product complexity increased, 7T recognized that continuing to invest in Selenium-based automation would mean either hiring more QA engineers or accepting stagnant test automation coverage. They evaluated alternatives with specific criteria: the tool needed to dramatically reduce maintenance, be easy to use for every tester on the team, and scale coverage without scaling the team.
testRigor stood out for its AI-powered approach to eliminating brittle selectors and its ability to let anyone on the team – regardless of coding background – create and maintain automated tests.
The Solution
With testRigor, 7T’s team shifted from writing and maintaining fragile Selenium scripts to authoring tests in plain English. This fundamentally changed who could contribute to test automation and how quickly coverage could expand.
testRigor’s generative AI-based, no-code platform enabled the team to build automated tests without programming expertise. Tests adapt to UI changes automatically through self-healing capabilities, virtually eliminating the maintenance burden that had previously consumed most of the team’s capacity.
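To illustrate what plain-English authoring looks like, a login check in testRigor might read as follows. This is an illustrative sketch using typical testRigor-style commands; the field labels, values, and page text are hypothetical, not taken from 7T's actual test suite:

```
enter "jane@example.com" into "Email"
enter "secret123" into "Password"
click "Sign In"
check that page contains "Welcome"
```

Because the steps reference visible labels rather than brittle selectors, the AI engine can still locate the right elements when the underlying markup changes, which is what makes the self-healing behavior possible.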
Solution Highlights for 7T
7T found testRigor to be a simpler and much more effective solution than their previous test automation setup.
- Plain-English test authoring proved easier to learn, allowing the entire QA team to participate in the process.
- Test creation became even faster because scenarios automated in plain English could be reused across test cases. Compared to their previous setup, a single test case can now cover many more scenarios.
- The team found that adding test data and doing mobile testing was much simpler with testRigor.
- Since testRigor’s AI makes test runs resilient to UI changes, the results are more accurate and dependable.
- Developers and testers found it easier to debug and fix issues as testRigor captured test artifacts like screenshots, video recordings, and error messages at every step of the run.
All this led to faster test creation, lower defect rate, and a smoother testing process within the team.
| Before testRigor | With testRigor |
|---|---|
| High maintenance overhead: most QA time went to fixing existing scripts, so fewer automated tests were created. | Near-zero maintenance: self-healing tests adapt to UI changes automatically. |
| Test creation required coding expertise and took hours per test. | Plain-English tests are created in minutes and cover more scenarios, making automation accessible to the full team. |
| Limited automated coverage despite significant investment. | Significantly expanded test coverage without increasing manpower. |
| Heavy reliance on manual testing to meet release deadlines. | Faster, more confident releases backed by broader automation. |
| The QA team could create at most 10 test cases per day. | The QA team now creates up to 20 test cases per day. |
The Result
With testRigor, 7T dramatically improved both the efficiency and reach of their test automation. Plain-English test authoring, powered by generative AI, enabled the team to build and maintain tests faster while spending virtually no time on upkeep.
The result: broader coverage, faster releases, and a more empowered QA team.
- Dramatically faster test creation: With reduced test maintenance needs, test creation speed doubled.
- Near-zero maintenance: Engineering hours shifted from test upkeep back to building coverage.
- Expanded coverage, same team: Significantly increased automated test coverage without additional hires.
- Team empowerment: Non-technical team members began creating automated tests independently.
- Quicker debugging of issues: Developers identified issues faster because testRigor test runs capture screen recordings, screenshots, and error details at every step.
Summary
| Objective | Result |
|---|---|
| Scale test coverage | Significantly expanded automated coverage with the same team. |
| Reduce maintenance | Near-zero maintenance via self-healing, AI-driven tests. |
| Empower the full team | Non-technical team members now create automated tests independently in plain English. |
| Reduce dependency on manual testing | Broader automated coverage reduced reliance on manual testing, especially during time-constrained pre-release verification. |
testRigor can also help you
testRigor’s generative-AI-based no-code automation platform makes it easy for QA teams to quickly build test automation, while spending almost no time maintaining tests, for web, mobile, desktop, database, API, and mainframe apps. Tests are written in plain English, empowering anyone, with or without technical knowledge, to quickly build and maintain tests and understand test coverage.