
How to use AI effectively in QA

AI Features that are available for QA

Our AI test case automation tool offers many ways to take advantage of AI (a short example follows the list below):

  1. Generative
    1. Generate test cases based on a feature description
    2. Generate test case steps based on the description and AI context
    3. Generate reusable rule steps based on the name of the rule and AI context
    4. Continue to generate steps
    5. Fix test cases using AI
  2. Executional
    1. Click, hover, and drag using AI
    2. Validations using AI
    3. Other commands using AI
  3. Informational
    1. Describe the test case or rule concisely
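
For example, the executional commands let you mix AI-driven actions and validations directly into plain-English test steps, along these lines (an illustrative sketch; the element descriptions are hypothetical and the exact step wording in your suite may differ):

  click "first product on the page" using AI
  hover over "shopping cart icon" using AI
  check that page "shows the selected product in the cart" using AI

Each command passes the quoted description to AI, which interprets it together with your application description (covered below) to decide what to do on the screen.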

To take advantage of AI functionality effectively, you need to make sure that:

  1. your test suite is set up correctly
  2. you are using correct prompting techniques
  3. you use the right processes to maximize ROI of AI

These are described in the sections below.

Setting up your test suite

Application Description

Getting the most out of AI starts with setting up a test suite with a well-formed AI prompt describing the application to be tested. It must clearly describe what your application does in order to give AI the proper context it is operating in.

An unclear or imprecise description of the application being tested will likely derail your AI efforts significantly. Every single command or action uses that description to give AI the context of the larger picture.

Here’s an example of a good first iteration of an application description for Salesforce:


This is a full-featured web and mobile based CRM system. As a user you can create Contacts and Deals, set up associations between those and other objects, and much more. You can also build your custom forms backed by built-in Apex programming language, and search types of available objects.

It describes what the system is and what you can do in it, covering the most important functionality at a very high level.

The following would be a poor example for Salesforce:


Enter [email protected] into username and mypass into password. Click sign in. Go to the contacts and create one associated with a new deal. 

It is a poor description since it does not describe what Salesforce does, but rather provides an example of a script. This might seem harmless, but because this description is used in the prompts for ALL scripts and scenarios, it can lead AI to do things you wouldn’t expect.

A good description explains the application under test the way you would explain it to someone who is not familiar with your company or the application itself, answering the questions “What is this about?” and “What does it do?” It might be a good idea to get the description from your product manager or from product documentation, if available.

Please read more on how to write a clear description in the section below related to prompting techniques.

AI Settings

If you have already created an application and wish to improve your application description, you can find it in Settings -> AI -> AI-based Test Generation -> “Description of the application for AI test generation:”. This is the most important of the AI settings.

It is important to set everything up in a way that maximizes AI’s chances of working successfully with your application. Our tool optimizes your default settings automatically when you create new test suites. However, the settings of pre-existing test suites can be updated to enable AI to work properly.

The second most important setting is screen resolution in Settings -> Advanced.

Ideally, the resolution should be set to 1024×768 or smaller. AI has a hard time working with higher resolutions due to its limited context window: the higher the resolution, the worse the results you’ll get from AI. It might still work, but not as well as you’d expect.

If you have a large existing project where you can’t easily change the resolution, one trick is to create an inherited test suite (or simply a new test suite) with an AI-friendly resolution and use it to generate test cases for you (see the last section of this article on the processes that make this effective).

Using correct prompting techniques

Prompting is a way to describe something to AI. When providing any instructions to AI, it is extremely important to follow good prompting techniques to make them as effective as possible. Bad prompting can completely prevent AI from achieving whatever you want it to achieve.

There are several ways to get as much as possible out of your prompts:

  1. Describe everything under the assumption that AI has no context whatsoever. Think of AI as a 3-year-old who is easily confused by unclear or ambiguous instructions.
    1. Needs improvement:
      click "a random product" using AI

      AI would be confused about what exactly it needs to do. It might sound like you want to click on any product, but the word “random” is ambiguous and can throw AI off.

    2. Better:
      click "first product on the page" using AI

      This provides clear instructions for AI to know what to do.

  2. Remove any ambiguity from your prompts. Avoid prompts that have more than one reading or interpretation, and break them down into smaller, more specific parts.
    1. Needs improvement:
      check that page "contains a balance chart that decreases and eventually reaches zero and an uninterrupted line" using AI

      This prompt has several issues. First of all, it is not clear whether we mean the line on the graph is “eventually uninterrupted” or “always uninterrupted”. Secondly, should AI validate that the line actually reaches zero on the screen? Thirdly, there are too many validations requested in a single command.

    2. Better:
      check that page "contains a balance chart that decreases over time" using AI

      and on the next line:

      check that page "contains a balance chart that reaches zero on the screen" using AI

      and on the next line:

      check that page "contains a balance chart that is represented by an uninterrupted line" using AI
  3. Positive requests are preferable to negative ones. For example, here is a feature description used for test case generation (it is expanded into full steps in the sketch after this list):
    1. Needs improvement:

      As a user I can find a kindle. DO NOT USE MENU
    2. Better:

      As a user, I can find a kindle using the top bar search
  4. Perform a question test. Read the prompt from the perspective of a 3-year-old, or of someone who is not familiar with the application, and ask yourself: Is this description clear enough, or could I make it clearer? Would I need to ask further questions to understand it?
  5. Check the feedback from AI. Read the description that is available after clicking “More details” and then “Show errors info”; it will tell you how AI understood your prompt.
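
Putting these techniques together, the Kindle feature description from point 3 might expand into unambiguous, positively phrased steps along these lines (an illustrative sketch only; the element names are hypothetical and the steps AI generates for your application will differ):

  enter "kindle" into "search"
  click "Search"
  click "first product on the page" using AI
  check that page "shows a Kindle product page" using AI

Each step asks for exactly one action or validation and avoids ambiguous wording, which is what makes the prompt easy for AI to follow.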

Using the right processes to maximize ROI of AI

The way to extract ROI from using AI for test case generation is to do it at scale and allow AI to do as much work as possible.

Keep in mind that AI is prone to hallucinations and one-shot test building almost never works. However, you can still extract immense value from it. Here is how:

  1. Make sure the description of the application under test (Settings -> AI) is a good, clear description of the application, as explained above.
  2. Write as good a test case title as possible using the techniques described above. Provide additional AI context (“AI Context” button) if required. Do not overload AI with unnecessary details, but anticipate what it would need to complete building the test case.
  3. Expect AI to occasionally fail to finish building a test case and get stuck on some humps, no matter how good and clear your descriptions are. In that case, clean up the test, help AI over the hump by providing the specific steps you’d expect in that particular situation, and then click “Use AI to complete creating this test” (see the sketch below). AI will then continue working from the point where you helped it overcome the issue.
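
As an illustration of step 3, suppose AI gets stuck on a login screen. You could provide the specific steps yourself and then hand the rest back to AI (the field names and values below are hypothetical placeholders; use whatever matches your application):

  enter "qa-user@example.com" into "username"
  enter "my test password" into "password"
  click "Sign in"

After adding these steps, clicking “Use AI to complete creating this test” lets AI continue building the remaining steps from the point past the login.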

If you follow the steps above, you should be able to get positive ROI when you start building many tests (100+) at the same time. That way, while you work on your own tasks, AI keeps building out the steps for you.

Please keep in mind that AI is not familiar with your company or your application, so you should expect it to sometimes guess the wrong flows. To overcome that, provide more detail in either the title or the AI context, and help it correct its mistakes once you see it has diverged.

If you follow the process above, you should be able to get positive ROI from using AI despite hallucinations and a lack of full context.

Best of luck building tests with AI!
