At Rainforest, we spend a considerable amount of time thinking about what the ultimate testing workflow is. Let me start by saying that the current state of affairs is much better than it ever was. Developers, for the most part, now understand that automated testing is important. There's good literature written around building an efficient QA process as well as how to do this while still shipping often and fast. Tools - especially for web developers - are better than ever.
Even though things are better, there's still plenty of room for improvement. Let's go over some of the types of testing that can be done, when to use them, and their benefits and downsides.
I love unit testing. For the most part, I write my tests first, as this gives me a near-instant feedback loop while writing code. One reason unit testing is so valuable is that, in most software, the vast majority of the code is not user-facing. That is, a lot of your code is not directly tied to the interface used by a human; it's an interface meant to be used by other code.
Let's reiterate this: you unit test code that is meant to be consumed by other code. Unit tests work great when dealing with lower-level concerns, such as testing algorithms or database interactions, because the only actors involved in the process are the test code and the code under test.
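As a minimal sketch of this idea, here's a unit test in Python (pytest style). The `median` function and the test names are illustrative, not from any particular codebase; the point is that both actors are code:

```python
# A small algorithm plus its unit tests -- no UI, no human in the loop.

def median(values):
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median of an empty list is undefined")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]            # odd length: the middle element
    return (ordered[mid - 1] + ordered[mid]) / 2  # even: average the two middle elements

def test_odd_length():
    assert median([3, 1, 2]) == 2

def test_even_length():
    assert median([4, 1, 3, 2]) == 2.5
```

Because the test exercises the function directly, it runs in milliseconds and gives you that near-instant feedback loop.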
Even though unit-tested code is not necessarily consumed directly by a human user, it's the foundation of code on top of which you build your interface. Unit testing is the right tool to ensure that this base layer of code works.
Unit testing alone is not enough. It's good to ensure that each part of your application works, but it doesn't ensure that everything works together. This is where integration testing comes into play. It allows us to ensure that the pieces all fit together correctly.
There's a variety of tools for integration testing. I tend to group them into two broad categories: those that simulate human interactions and those that simulate computer interactions.
The computer-interaction kind could look like a script that calls multiple APIs, runs background processes, and then asserts that some conditions are met. APIs are meant to be consumed by computers, so it makes sense to test them with test scripts.
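Here's a sketch of that shape in Python. The `OrderService` class is a hypothetical in-process stand-in for your real API layer and background worker, so the example stays self-contained; in a real suite, the test would hit actual endpoints and job queues:

```python
# An integration-test sketch: drive the API layer and the background
# job together, then assert on the combined outcome.

class OrderService:
    """Toy stand-in for an application: accepts orders, queues them for processing."""
    def __init__(self):
        self.queue = []
        self.shipped = []

    def create_order(self, item):
        # What a POST /orders endpoint would do.
        self.queue.append(item)
        return {"status": "accepted", "item": item}

    def run_background_worker(self):
        # What an async worker process would do.
        while self.queue:
            self.shipped.append(self.queue.pop(0))

def test_order_is_shipped_end_to_end():
    service = OrderService()
    response = service.create_order("book")
    assert response["status"] == "accepted"  # the API accepted the order
    service.run_background_worker()          # drain the job queue
    assert "book" in service.shipped         # the pieces fit together
```

Note that unlike a unit test, this test spans multiple components and asserts on their combined behavior.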
Since your users will ultimately interact with your application using its UI, it's obviously imperative to test it. It's possible to automate these human interaction tests with tools that simulate a human interacting with your application. Tools like Selenium and Capybara are good at simulating human interactions for web applications.
Automating interface testing has many benefits. First, these tests are fairly fast compared to the alternative, which is testing all of this manually every time you make a change. They can be run as part of your CI process. It's also possible to have the tests run across multiple environments, such as multiple browsers or devices.
Unfortunately, there are also significant downsides to this approach. First and foremost, these tests are painful to write and maintain. A web page or a UI is meant to be consumed by a human, not a script, so to simulate a human you need an imperfect tool that mimics human actions. These tools tend to produce brittle tests that require major investment to keep up to date with your codebase.
Worse, these tools will miss a lot of failures that would be obvious to any human user. Selenium won't pick up on a broken layout; it will just click dumbly on links and fill in forms. If the site is visually broken, it won't notice.
The next layer of testing is your internal QA team. Some teams have dedicated QA personnel, some ask their developers to QA their own application. The process is quite simple. You put the code somewhere where it's accessible by a human; traditionally a staging or test server. You then ask the QA team to either follow an established test plan or to do exploratory testing of the application.
This is usually a good process to catch errors in the layout, visual defects or even functional issues with your application.
However, there are also multiple downsides to this approach. First and foremost, it's very expensive - you have to hire someone or manage outsourcing. Obviously, time spent testing an application is not time spent fixing issues, talking to customers, or adding new features. The other problem is that it's very slow: running through an entire test plan for a decent-sized application can take a QA team days or even weeks. Often you also want to test your app in multiple configurations - multiple browsers, phones, operating systems, and so on - and each configuration you add multiplies the testing time.
Having an internal QA team is not a practical solution if you want to deploy your application frequently. It's just too slow.
Our current thinking here at Rainforest is simple: that interfaces designed to be consumed by machines (APIs) should be tested by machines. Interfaces designed to be consumed by humans (UIs) should be tested by humans.
Why? Humans are better at picking up visual problems in your application. They are more resilient to small changes in the layout of your page. They can do exploratory testing (testing without a well-defined script) of your application.
It's also much easier to explain what you want tested to a person, since you can do it in plain English. And it's much faster to write tests in English than in any programming language.
Normally, having humans test your application would be expensive and slow. Fortunately, we've addressed these issues with Rainforest. By leveraging internet services like Amazon Mechanical Turk, we can significantly reduce the cost compared to in-house testing and even outsourced testing. This also allows us to have an elastic crowd of testers that we can send to your web site at any time of day.
As we're a service, we have an API - so you can also easily trigger runs of your Rainforest test suite from a CI server.
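The shape of such a CI hook is simple: an authenticated HTTP POST that kicks off a run. The sketch below builds such a request in Python; the URL, header scheme, and payload fields are placeholders, not Rainforest's actual API - consult the API documentation for the real values:

```python
# Sketch of triggering a test-suite run from a CI server via an HTTP API.
# Endpoint, auth header, and payload fields below are hypothetical.
import json
import urllib.request

def build_run_request(api_token, suite_id):
    """Build (but don't send) the POST request that would start a run."""
    payload = json.dumps({"suite_id": suite_id}).encode("utf-8")
    return urllib.request.Request(
        "https://example.com/api/runs",  # placeholder endpoint
        data=payload,
        headers={
            "Authorization": "Token " + api_token,  # placeholder auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# In a CI step you would send it with urllib.request.urlopen(request)
# and fail the build if the run reports failures.
```

Because the trigger is just an HTTP call, it slots into any CI system that can run a script after a deploy.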
You don't need 1 tester for 40h per week. You need 100 testers for 30min per week.