There are plenty of test automation options on the market, so how do you pick the right one?
It's crucial to know the market well and pick the best tool available. We've seen far too many times (and many of us working in QA have experienced it firsthand) what a wrong choice can lead to: discovering hidden limitations of the tool mid-project, or realizing that the test architecture wasn't well defined from the start. When that happens, you'll soon bear the consequences: the automation process takes much longer than anticipated, and test maintenance becomes a nightmare. Or, even worse, you won't be able to complete the testing according to plan and will have to start from scratch with another tool. The more you know, the better equipped you'll be to make the right decision.
Today we'll compare two very powerful options on the market and see how they fare against each other in every key aspect: UFT (previously known as QTP) and testRigor.
Both are powerhouses that can be used to test Web, API, and mobile apps. UFT is the only one of the two that supports native desktop apps, and it really shines on legacy platforms such as Siebel or mainframe technology. It's not ideal for mobile testing, however: stability there is average, and crashes are not uncommon.
testRigor's strongest areas are Web and Mobile testing, both native and hybrid. Mobile browsers are supported as well, with the total number of possible OS version and browser combinations amounting to 2,000 (yes, you read that right!). It also integrates with BrowserStack and LambdaTest, and you can choose either physical devices or emulators/simulators for your test suites to run on.
All major browsers are supported (IE, Chrome, Safari, Firefox). Note that UFT integrates with ALM (Quality Center), which means tests run efficiently in IE, while other browsers are not always as efficient. With testRigor, on the other hand, it makes no difference which browser is being used.
Building tests is drastically different between the two tools. UFT only supports VBScript, so you'll need QA engineers proficient in it. Tests are essentially created from a developer's perspective, and as with all tools of this type, it's crucial to get the test architecture right from the start.
testRigor uses a sophisticated AI engine behind the scenes that lets you create tests in plain English. It's also very flexible in terms of wording (for example, both the “tap” and “click” commands will be recognized). You can define your own rules and phrases and reuse them throughout your test suite. The process is so intuitive that anyone on the QA team, including analysts and even less technical product managers, will be able to contribute.
Let's take a look at what a test case for the Amazon login page looks like in both tools (the snippets below come from the original interfaces):
UFT:

Browser("Amazon.com. Spend less.").Page("Amazon.com. Spend less.").Link("Hello, Sign inAccount").Click
Browser("Amazon.com. Spend less.").Page("Amazon Sign-In").WebEdit("email").Set "[email protected]"
Browser("Amazon.com. Spend less.").Page("Amazon Sign-In").WebButton("Continue").Click
Browser("Amazon.com. Spend less.").Page("Amazon Sign-In_2"). _
    WebEdit("password").SetSecure "619db97bb873605845dda0e6e9b1bdacdbe758ff63a51c8fe2103476"
Browser("Amazon.com. Spend less.").Page("Amazon Sign-In_2").WebButton("Sign-In").Click
Browser("Amazon.com. Spend less.").Page("Amazon Sign-In_2").Check CheckPoint("Top picks for you")
testRigor:

login
check that page contains "Top picks for you"
You might be surprised by how lean the second test is. That's because you would have already entered the credentials when setting up the test suite in the first place. And if you need a new random account created on each run, don't worry: regular expressions are supported too.
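For instance, a registration flow that needs a fresh address on every run could use a step along these lines (the phrasing here is illustrative, not copied from the product; check testRigor's documentation for the exact supported wording):

generate unique email, then enter into "email" and save as "newEmail"

The saved value can then be referenced in later steps of the same test.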
For the next example, let's compare test cases that add an item to the cart on Amazon:
UFT:

Browser("Amazon.com. Spend less.").Page("Amazon.com. Spend less.").Link("Best Sellers").Click
Browser("Amazon.com. Spend less.").Page("Amazon.com Shopping Cart").Check CheckPoint("Amazon.com Shopping Cart")
Dim oDesc
Set oDesc = Description.Create
oDesc("micclass").value = "Link"
Set obj = Browser("Amazon.com. Spend less.").Page("Amazon.com. Spend less.").ChildObjects(oDesc)
Dim i
For i = 0 To obj.Count - 1
    If obj(i).GetROProperty("innerhtml") = "Camera & Photo Products" Then
        obj(i).Click
        Exit For
    End If
Next
Browser("Amazon.com. Spend less.").Page("Amazon Best Sellers: Best").Link("Wyze Cam Spotlight, Wyze").Click
Browser("Amazon.com. Spend less.").Page("Amazon.com: Wyze Cam Spotlight").WebButton("Add to Cart").Click
Browser("Amazon.com. Spend less.").Page("Amazon.com Shopping Cart").Check CheckPoint("Proceed to checkout")
testRigor:

click "Best Sellers"
click "Camera & Photo Products"
click "Wyze Cam Spotlight, Wyze"
click "Add to cart"
check that page contains "Proceed to checkout"
In this example it's important to emphasize that “Camera & Photo Products” is an element nested in a dynamic tree-view menu, and it takes extra effort to define it in UFT so that the test won't fail intermittently. In testRigor, by contrast, you don't need to think about it at all: you specify clicking on this menu option just as a real user would.
Both tools have built-in test case storage. UFT, however, requires a truly experienced HP toolset user from the get-go; otherwise, test script maintenance will quickly become a pain as the number of tests grows.
Maintaining test cases in testRigor, by contrast, is as simple as it gets and takes virtually no time. Tests are created from an end user's perspective, which means they'll still pass even if your entire website is moved to a new development framework (as long as the UI stays unchanged). And if multiple independent test cases fail because of the same error (for example, a button was removed from the page), those test cases are grouped so that you can fix all of them in one place.
CI/CD integrations

UFT offers plugins for Jenkins, Bamboo, and Azure. testRigor, on the other hand, provides scripts that can integrate with any CI/CD tool on either Windows or Linux (including Ubuntu).
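To illustrate the scripted approach, here is a minimal POSIX shell sketch of the kind of wrapper a CI job could call to kick off a run. The endpoint path, the auth-token header, and the suite ID are assumptions made for illustration; consult testRigor's API documentation for the exact request your account should send.

```shell
#!/bin/sh
# Hypothetical sketch: trigger a testRigor suite run from any CI/CD job.
# SUITE_ID and TESTRIGOR_TOKEN are placeholders, and the endpoint path is
# an assumption; it may differ for your account.
SUITE_ID="${SUITE_ID:-your-suite-id}"
TESTRIGOR_TOKEN="${TESTRIGOR_TOKEN:-your-auth-token}"

trigger_run() {
  # Build the request; with DRY_RUN=1 only print it, which helps when
  # debugging the CI configuration itself.
  cmd="curl -s -X POST -H \"auth-token: $TESTRIGOR_TOKEN\" https://api.testrigor.com/api/v1/apps/$SUITE_ID/retest"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
}

# Print the command without sending anything:
DRY_RUN=1 trigger_run
```

In Jenkins, Bamboo, or any other CI tool, a script like this would simply run as a build step, with the token supplied through the tool's credential store rather than hardcoded.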
Dashboards and reporting
Both tools are great here, offering detailed reporting dashboards with screenshots indicating exactly where a test case failed.
Pricing

Neither of the two is exactly cheap. UFT is one of the most expensive tools on the market, at $3,200 per user per year; add-ins such as Java, Oracle, and SAP cost extra. testRigor is more affordable and far more customizable: the licensing model varies depending on the number of test suites and parallelizations you need. The base tier starts at $1,800/month per company, with an unlimited number of users.
Another difference is that testRigor executes all of your test cases on its own servers, whereas UFT runs them on your infrastructure, which means additional costs for physical or cloud servers.
Free trial

Both tools offer one, and it's easy to try them out. UFT has a 30-day trial period, whereas testRigor has an unlimited free tier (though tests on a free account are public).
To conclude, both are very powerful tools capable of helping you achieve the desired automated test coverage. UFT, one of the oldest tools on the market, is very robust in certain aspects, while testRigor brings the newest technologies to the table and is much easier to learn and maintain. It's also worth noting that, of the two, UFT is the only one that doesn't support Mac OS.
| Both are good at | UFT is better at | testRigor is better at |
| --- | --- | --- |
| Web and Mobile testing | Desktop testing | Ease of use |
| Cross-browser testing | Robust interface and granular settings | Test creation and maintenance |
| Regression, functional testing | Security testing | Stability |
| Parallel testing | | Mac OS support |
| Reports and dashboards | | Pricing |
We sincerely hope that this detailed comparison will help you make the right decision in terms of which tool will work best for you and your team.