8 Steps to Migrate from Manual QA to Developer-led Automated Testing

More and more software development shops are moving towards a model where developers are responsible for writing their own automated tests. There are many good reasons to move to developer-led automated testing, and there are also many bad ways to go about doing it. This article will walk you through what needs to be considered when making this transition.

1. Find alternatives for what to do with manual testers. Shifting testing to development doesn't necessarily mean you no longer need your manual testers; it may mean shifting their focus. They often have the best product and domain knowledge of anyone in the company. They instinctively know what will break, when, and where. When you say "I want the daily count displayed," they think about how it's not the same day everywhere in the world: whose day are you using?
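
That instinct translates directly into test cases. Below is a minimal sketch of the timezone-boundary check such a tester would push for; the `dailyCount` helper is hypothetical, and Playwright's test runner is used purely for illustration:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical helper: counts events that fall on a given calendar day
// in a given IANA timezone ('en-CA' formats dates as YYYY-MM-DD).
function dailyCount(events: Date[], day: string, timeZone: string): number {
  const fmt = new Intl.DateTimeFormat('en-CA', { timeZone });
  return events.filter((event) => fmt.format(event) === day).length;
}

test('an event near midnight UTC lands on different days in different zones', () => {
  const events = [new Date('2024-06-01T23:30:00Z')];
  expect(dailyCount(events, '2024-06-01', 'UTC')).toBe(1);
  // The same moment in Tokyo (UTC+9) is already June 2nd.
  expect(dailyCount(events, '2024-06-02', 'Asia/Tokyo')).toBe(1);
  expect(dailyCount(events, '2024-06-01', 'Asia/Tokyo')).toBe(0);
});
```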

People with manual QA experience can often help out with product management, customer support, or software development. Don't be too quick to dismiss them; find places where you can for those who are able to adapt, and don't undervalue their skills and experience.

2. Select a testing framework. It's generally best to pick a testing framework in whatever language your developers already use. You don't have to; if you're doing web testing, you could test a JS-based site using Python. But if you want developers to write tests, you won't have as much luck if they need to learn a new language first. The advantages some languages have over others don't offset the need to make it easy for developers to participate in testing.
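
For example, if your developers already work in TypeScript, a TypeScript-native framework such as Playwright (one option among several) lets them write tests in the language they use every day. A minimal sketch, with a hypothetical URL and link text:

```typescript
import { test, expect } from '@playwright/test';

// Developers write the test in the same language as the application code.
// The URL and link text here are hypothetical.
test('logged-out visitors see the sign-in link', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page.getByRole('link', { name: 'Sign in' })).toBeVisible();
});
```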

3. Decide where to put the test code. It's usually best to put the tests in the same repository as the code. There may be push-back on this, but it too is important for adoption. Asking developers to check out an additional repository doesn't seem like a lot, but it is.

Additionally, the #1 reason tests are deemed "unreliable" is that the version of the code doesn't match the version of the tests. When the requirements change, two things must change: your code and your tests, and they need to change at the same time. There are other ways to do this, but the easiest way to solve the problem is keeping them in the same repository. Code changes, tests change, one repository push.
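
As a sketch of what this looks like in practice (the file names and discount logic are illustrative), the code and its test sit side by side and change in the same commit:

```typescript
// src/pricing.ts -- the application code under test.
export function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// src/pricing.spec.ts -- lives in the same repository, so when the
// discount rules change, this test changes in the very same commit.
import { test, expect } from '@playwright/test';
import { applyDiscount } from './pricing';

test('a 10% discount on $19.99 comes to $17.99', () => {
  expect(applyDiscount(19.99, 10)).toBe(17.99);
});
```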

4. Provide training and resources. You wouldn't want an entirely new discipline dropped in your lap without any kind of support, would you? Find someone who can speak to the importance of testing from a developer's perspective, and make sure developers get the support they need to learn how to write tests, with someone to answer their questions along the way.

Sometimes training isn't just about "how" to do something. Sometimes it's about seeing "why" something is worth doing and what it can do to make your life better.

5. Focus on API testing, but don't ignore the UI. At some point you will hear about the "testing pyramid," which says you should have tons of unit tests, fewer integration tests, and even fewer UI tests. There's a lot of "it depends" baked into that. Are you B2C? B2B? What's the impact to your business if a mission-critical flow doesn't work for 5% of your customers?

Don't walk away thinking that UI tests shouldn't be done at all, or that they should be the last tests you write after unit and integration tests. Fewer tests doesn't mean less important tests; in fact, they could be the most important tests you have. For many companies, mission-critical UI flows should still be automated.
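
As a sketch of how the two layers complement each other (the endpoint, URL, and page text are hypothetical, and Playwright is again just one framework choice):

```typescript
import { test, expect } from '@playwright/test';

// API-level test: fast and cheap, so you can have many of them.
test('the orders API accepts a new order', async ({ request }) => {
  const response = await request.post('https://example.com/api/orders', {
    data: { sku: 'WIDGET-1', quantity: 2 },
  });
  expect(response.ok()).toBeTruthy();
});

// UI-level test: fewer of these, but this one covers the mission-critical
// flow exactly as a customer experiences it.
test('a customer can complete checkout', async ({ page }) => {
  await page.goto('https://example.com/cart');
  await page.getByRole('button', { name: 'Checkout' }).click();
  await expect(page.getByText('Order confirmed')).toBeVisible();
});
```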

6. Decide when and where the tests will run. Running them locally only gets you so far; you'll want the tests to be part of your CI/CD pipeline so you know they're being run and fixed along the way. You'll need a deployment pipeline set up, which may include feature, development, QA, and RC environments. Which tests will you run, when, and where will they run? What will your branching strategy be: trunk-based, or Git flow?
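
One common pattern (a sketch using Playwright's tag-and-grep convention; the URLs are hypothetical) is to tag tests so each pipeline stage picks its own subset, such as a fast smoke run on every pull request and the full suite on main:

```typescript
import { test, expect } from '@playwright/test';

// Each pipeline stage runs its own subset, for example:
//   on every PR:       npx playwright test --grep @smoke
//   nightly / on main: npx playwright test
test('home page loads @smoke', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example/);
});

test('full checkout flow @regression', async ({ page }) => {
  await page.goto('https://example.com/cart');
  // ...the longer, slower end-to-end steps live here.
});
```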

7. Require the builds to be green. This one's really, really hard, but every time I've gone soft on it, I've regretted it later. It's better to have fewer tests that are 100% reliable and trusted than thousands of tests that no one trusts. It takes work: you have to set up and tear down test data, have appropriate timeouts, and have logging and monitoring so that when a test fails, you know why. You need to make sure the system being tested can keep up and is reliable. It's a lot of work, but believe me, it's worth it.
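
A sketch of what that work looks like in test code (the test-user endpoints are hypothetical), assuming Playwright's hooks for setup and tear-down:

```typescript
import { test, expect } from '@playwright/test';

test.describe('account settings', () => {
  let userId: string;

  test.beforeEach(async ({ request }) => {
    // Set up: every test gets its own fresh, isolated user.
    const response = await request.post('https://example.com/api/test-users');
    expect(response.ok()).toBeTruthy();
    userId = (await response.json()).id;
  });

  test.afterEach(async ({ request }) => {
    // Tear down: leave nothing behind for the next run to trip over.
    await request.delete(`https://example.com/api/test-users/${userId}`);
  });

  test('a user can reach their settings page', async ({ page }) => {
    test.setTimeout(30_000); // an explicit, appropriate timeout, not a default
    await page.goto(`https://example.com/users/${userId}/settings`);
    await expect(page.getByRole('heading', { name: 'Settings' })).toBeVisible();
  });
});
```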

8. Someone should still be responsible for quality. I've worked in development shops that were 100% developer-led testing: we did test-driven development and BDD, and had no officially designated QA. In many ways it was great, and it's what people are moving towards. But it wasn't all sunshine and roses.

We also found situations where everyone on the team was waiting on tests and builds because no one was responsible for the test and build infrastructure itself. When new tools came out that could benefit everyone, it wasn't anyone's job to look into them.

If there were paid solutions that would have had a huge ROI, no one was responsible for evaluating them. Sometimes "everyone is responsible" means "no one is responsible." So it's important to have some people whose job is not to write the tests, but to make sure life is easy for those who do.

If you don't have a team in place that can write tests, you'll need to hire. Great SDETs are hard to find, and you may want to consider a QA staffing partner to help you put together a team rapidly.