
Test Automation Best Practices

Regardless of which tool you are using, there are several test automation concepts and best practices that stay the same when it comes to end-to-end tests. I am using examples in plain English, supported by testRigor, for illustrative purposes, but the same principles apply in literally any language you like.

Here is an overview of test automation best practices. I will get into the details below.

Test Automation Concepts

  1. Meaningful Tests
  2. Repeatability – if a test ran once, it should run again with the same result when nothing changes
    1. Unique test data generation
    2. Use of variables
  3. Data-driven testing – use datasets to run the same test on multiple rows of data

Test Automation Best Practices

  1. Test ends with a validation
  2. Test is stable enough to be used in CI/CD
  3. Test is very easy to understand
  4. Test requires as little maintenance as possible
  5. Test is independent and can run in parallel with all other tests
  6. Credentials are stored separately from the test
  7. Shared steps are grouped into functions

Test Automation Concepts

Meaningful Tests

Specifically for end-to-end tests, the test cases should represent actual, important end-to-end scenarios. Unlike unit tests, which cover an isolated, specific part, end-to-end tests are designed to help assess whether an end-to-end scenario works as expected. The test cases must represent what a user would actually do on the system, and how it should work. A test should usually start from the very beginning (often login) and go through all the steps required to set up the state for the full scenario the user will be performing. Then, at the end of the test, it should validate that the results of the user’s actions are as expected. For example, in banking, you might log in to a user account, transfer funds from a checking account to a savings account, then validate that the savings account balance increased by exactly the amount transferred and the checking account balance decreased by exactly that amount. If the database is not reset at the beginning of such a test suite, you might also want to add a test that transfers the amount back from the savings account to the checking account, so that the next time the suite runs there are enough funds in the checking account to perform the test.
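
Here is a sketch of such a test in plain English (all field labels, button names, and messages are illustrative, not taken from a real application):
enter "user" into "Username"
enter "password" into "Password"
click "Sign In"
click "Transfer Funds"
enter "100" into "Amount"
select "Checking" from "From account"
select "Savings" from "To account"
click "Submit"
check that page contains "Transfer successful"
Validating the exact balance changes would additionally require saving both balances into variables before the transfer and comparing them afterwards; the exact commands for that depend on your tool.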

It is also considered a best practice to build test cases in order of decreasing importance to the business, starting with the most critical first. That is, assess what would impact the business the most if it doesn’t work, and cover those areas first.

Repeatability

The difference between a test case for manual execution and an automated one is that when you are writing an automated one, you must be specific down to the minutest detail of how to execute things. This, in particular, means that once written, the test case must be able to run repeatedly with consistent results. We as humans tend to infer things and fill in the gaps on the fly; however, this is not acceptable for automation. In automation, you must be explicit and do things in a way that allows them to be executed over and over.

For example, if you are testing a sign-up with email, you must make sure that you are using a new email every time, to avoid your test failing because that email was already used before.

For example, the following won’t work:
enter "[email protected]" into "Email"
enter "password" into "Password"
click "Sign Up"
...
However, the following would:
generate unique email, then enter into "Email" and save as "newEmail"
enter "password" into "password"
click "Sign Up"
...
This introduces two important subconcepts that help achieve repeatability:
  1. Unique test data generation
  2. Use of variables

Unique test data generation

In the email sign-up example above, we learned that we need to be able to generate unique data (an email in this case) to be able to run registration successfully. The first command, generate unique email, then enter into "Email" and save as "newEmail", does exactly that. Now, there are several ways to generate data that hasn’t been used before. Let us consider three main types of unique data generation:
  1. Ask your Engineers to provide you with an API
  2. Use sequential numbers/letters
  3. Use random sequences

You could, of course, ask your Engineers to provide such an API for you. However, that would just force them to use one of the following methods themselves, it would introduce the overhead and latency of an API call, and, most importantly, it would be close to impossible to get such a feature prioritized by Engineering in the first place.

Sequential numbers are almost infeasible as well: eventually you’d be forced to run your tests in parallel for performance reasons, and keeping those numbers in sync across parallel runs is very hard, making them unusable in practice.

Therefore, your best bet is to use random sequences long enough to make sure the data is unique and doesn’t overlap with your other tests running in parallel. Modern systems like testRigor have a built-in way of generating unique data out of the box. For other systems, you might consider using UUIDs and/or the random number generator classes available in most languages.

Just generating the unique data might not be enough. You often need to use this data later in the test. This brings us to the next point.

Use of variables

Getting back to the sign-up example above: once you have generated the unique email, you need to be able to check that email to confirm it, as well as use it later in the test to make sure that you can actually log in with that exact email. This means that you need to store your generated email (or any of your generated data, for that matter) somewhere, to refer to it later. The places where you store data for your test are called “variables”. You can think of a variable as a label on a box where you store particular data. The concept is basically the same one you learned in your algebra lessons at school.

For the sign-up flow, we can use the "newEmail" variable to check that email later like so:
generate unique email, then enter into "Email" and save as "newEmail"
enter "password" into "password"
click "Sign Up"
check that email to stored value "newEmail" was received
click "Confirm email"
...

In this example, we used the variable "newEmail" to check the email, render it in the browser, and then click the “Confirm email” link/button in that email to proceed with the registration flow.

Note how we used check that email to stored value "newEmail" was received, referring to the variable “newEmail” for the email we generated before, instead of checking a predefined address like check that email to "john@example.com" was received.


Data-driven testing

This concept might be considered a best practice by some but, since it is a built-in feature of virtually all testing systems, we’ll treat it as a “concept”.

Sometimes, when you are testing a particular form, you might want to run it through multiple data sets. For example, you might want to make sure that your sign-up form doesn’t allow for:
  1. Empty email
  2. Email with no domain specified like “user”
  3. Email with only one level domain specified like “user@domain”
  4. Email with multiple “@” signs in it
  5. etc.
For that, you might want to devise a data set that contains all the values you’d like to cover, and build one test that refers to that data set rather than hardcoding the values. For instance, for a sign-up form test it might look something like this:
enter stored value "myTestEmail" into "Email"
click "Sign Up"
check that page contains "Error"
...
To make this example work, you’d need to connect it to a data set containing all the values you’d like to use for this test, like this:
myTestEmail
user
user@domain
us@er@domain


Test Automation Best Practices

Test ends with a validation

Imagine that your test doesn’t end with a validation. Then it ends with either a click or a data entry, and in both cases you do not know whether your last action was successful. Therefore, you always want to add a validation at the end of every test. This rule applies to all kinds of tests, including end-to-end tests, integration tests, and unit tests.
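
For example, a funds-transfer test should not stop at the final click; it should end with a check (the texts are illustrative):
click "Submit"
check that page contains "Transfer successful"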

Some people believe that tests should only contain validation at the end and not in the middle of the test. I think this only applies to unit tests and not necessarily end-to-end tests, since an end-to-end test might take some time to execute, and it might be wasteful to duplicate a test just to stick to the rule.

Test is stable enough to be used in CI/CD

If you built your tests and can run them yourself locally, that’s great. However, you’ll get far more value if they run automatically on every change, which is relatively easy to set up by adding them to CI/CD. If the tests are not stable enough, though, they will keep failing, constantly wasting your team’s time on figuring out why. So, clearly, you want to build your tests in a way that makes them as stable as possible. And yes, tools can help here: for instance, since testRigor doesn’t rely on details of implementation, it won’t fail when those details change.

Test is very easy to understand

Imagine that you wrote a test 3 months ago, and now the functionality has changed and you need to go back and update it. Chances are you have forgotten how you wrote this test and why you wrote it this way. Now imagine that someone else needs to look into your test. The effort is at least doubled, since this person was never familiar with your code in the first place.

The difference between readable and unreadable code can be pretty substantial when people are trying to change things. Multiplied by the number of people on the team, it can amount to a sizable part of everyone’s day.

Also, if your test cases are readable by non-engineers (like product managers), you get the benefit of being able to share them and get feedback on whether you are testing the right thing.

Test requires as little maintenance as possible

This one is very straightforward. Test maintenance is what eats up the time, so you want to minimize it. Again, modern frameworks like testRigor have you covered here. With tools like Selenium, you are pretty much on your own, facing one of the least stable setups you can imagine. The problem is not Selenium per se, but rather the fact that it encourages you to use details of implementation, like XPath, to refer to elements on the screen.
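
To illustrate, compare a step that targets an element through an implementation detail (the XPath below is made up, and the exact syntax for locator-based steps varies by tool) with a plain-English one:
click on element with XPath "//div[2]/form/div[5]/button[1]"
versus simply:
click "Sign Up"
When the page layout changes, the first step breaks even though the button is still there; the second keeps working for as long as the button is still labeled “Sign Up”.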

Test is independent and can run in parallel with all other tests

Even if you think you need only a handful of tests, at some point your team will most probably end up with a lot of them, running longer than several hours (or whatever threshold you can tolerate), and you’ll want them to run faster. Tweaking will only get you so far; you still bear the cost of network calls, server responses, and browser rendering times. At some point, the only way to speed things up will be parallelization. And if you built your tests to run independently from the get-go, you’ll be able to enjoy the fruits of parallelization and speed things up.

What does it take to keep the tests independent? It is mostly about the data, and test independence is actually a spectrum, not a binary. On one end, all your tests use the same data, passing results from one to another; with such dependencies, parallelization would be challenging. On the other end, each test creates all the data it needs within the test itself; the problem there, as you can imagine, is that it takes a lot of time. In real life, you’ll probably be somewhere in the middle, with some data shared among the tests and some created for each individual test. What you certainly want to avoid at all costs is chaining, where the next test depends on the results of the previous one.
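
For example, instead of logging in as a user created by a previous test, a test can create its own user with the unique-data pattern from above (the texts are illustrative):
generate unique email, then enter into "Email" and save as "newEmail"
enter "password" into "Password"
click "Sign Up"
check that page contains "Welcome"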

Credentials are stored separately from the test

The naive way to build a first test is to put all of your data in the test itself, including credentials. However, even though most of the data usually does belong in the test for readability reasons, credentials are an exception, for two main reasons (see the sketch after the list):
  1. They do tend to change regularly
  2. It is insecure to keep credentials in the open like this
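
As a sketch, the stored-value pattern from above works for credentials as well; the variable names here are illustrative, and where the values actually live (your tool’s settings, environment variables, or a secrets vault) depends on your setup:
enter stored value "testUserEmail" into "Email"
enter stored value "testUserPassword" into "Password"
click "Sign In"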

Shared steps are grouped into functions

This is a generic software development practice: you want to reduce code duplication as much as possible, and yes, it applies to tests as well. You want to do this for two reasons (see the sketch after the list):
  1. In case the system under test changes, you now have just one place to adapt instead of multiple
  2. You can make your code far more readable, easier to understand, and closer to the literal specification described in your test case if you give sequences of steps readable names
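
For example, the sign-in steps above could be grouped under a readable name such as "login" (in testRigor such groups are called reusable rules; other tools have functions or keywords for the same purpose), so that every test simply reads:
login
click "Transfer Funds"
...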

There are more best practices to cover; it would probably take a full book to list them all. However, we believe the ones above are the most important ones to apply in your automation.

We also ran a webinar some time ago on the easiest way to learn test automation, where I talk about these concepts.
