Cross-browser testing is a pain. It’s also, sadly, an essential part of test coverage. Fret not! There are easy ways to reduce the pain.

Solving the issue requires access to site stats and a few handy pieces of tech. In this article, I’ll show you how I tackle cross-browser testing. Let’s start by looking at Rainforest’s numbers.

Get the numbers

Rainforest’s stats

GA-for-rainforest.png

The above is a snapshot of our Google Analytics page. To get similar stats from your GA, go to Audience > Technology > Browsers & OS.

Before we dive in, I should note that Safari mobile and Safari desktop are likely reported together. You can see specifics about your mobile traffic in the Mobile section.

Any browser with over a 5% share of traffic definitely needs to be tested thoroughly. Those under 5% are open to some debate. I like to discuss sub-5% browsers with our engineering and product owners; they’ll give me an idea of whether those customers carry business value or not.

The above numbers suggest that Rainforest’s primary test targets should be Chrome, Firefox, and Safari (mobile and desktop).
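Applied to a report like the one above, the 5% rule is easy to script. Here’s a minimal sketch; the shares are illustrative numbers, not our actual stats:

```python
# Classify browsers into primary and secondary test targets using the
# 5% traffic-share rule described above. Shares are made-up examples.
TRAFFIC_SHARE = {
    "Chrome": 48.2,
    "Safari": 23.5,          # GA may lump desktop and mobile Safari together
    "Firefox": 14.1,
    "Internet Explorer": 3.9,
    "Android Browser": 2.6,
}

THRESHOLD = 5.0  # percent of total sessions

primary = [b for b, share in TRAFFIC_SHARE.items() if share >= THRESHOLD]
secondary = [b for b, share in TRAFFIC_SHARE.items() if share < THRESHOLD]

print("Primary targets:", ", ".join(primary))
print("Secondary targets (discuss with the team):", ", ".join(secondary))
```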

As an aside, in the case of Rainforest we still care about IE11 and mobile support. We’re a QA company, after all, and as we attract larger and larger customers, ‘legacy’ (ahem) browsers become more and more important.

Global stats

The chart above comes from StatCounter, one of many sites sharing broader stats about the browser market. StatCounter aggregates data across a couple million sites running its software. Not perfect, but close enough to sniff out rough trends.

The share of IE 9 and IE 10 users is particularly interesting. With these versions, Microsoft introduced automatic upgrades, which nudges users onto the newest IE their OS supports. Supporting fewer versions of IE definitely helps lighten the test burden.

IE isn’t a big share of Rainforest’s traffic. I’ll flag IE 8 as another test target, but secondary in importance. Since we identified Android as sub-5.0% of our site traffic, it’s also a secondary test target.

The mobile web adds additional burdens. On Android in particular, I could drill down into the various screen resolutions, OS versions, DPI settings, and so on. In the case of Rainforest, the share of traffic is so slight that it’s probably not worth the extra effort. A standard device on a recent version of the OS will do just fine.

Do the testing

Ok! So, we’ve identified a few good targets for our cross-browser testing. Any time a release is headed to production, I definitely want to test on Chrome, Firefox, Safari desktop, and Safari mobile. Unsurprisingly, we use Rainforest for this (as do people like Zenefits, Sky, and NPR), so that’s my go-to.

I can further reduce my test burden by employing some tools. I particularly like PushBullet and Adobe’s Edge Inspect. Let’s take a look at each.

PushBullet

PushBullet makes sharing text messages, links, and media super simple. Download the app on each device, and send a “push”.

It’s great for running exploratory tests on new features. For instance, when Rainforest adds a new feature to the marketing site, I need to confirm that buttons and other interactive elements work well on mobile.

To ensure I’m hitting all of my targets, I use PushBullet to prime each device. I like the Chrome extension, which lets me type a message once and send it to every test device in my collection. If there are specific test notes, I’ll include those as well.
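If you’d rather script the broadcast than click through the extension, PushBullet also exposes an HTTP API. A rough sketch in Python; the access token and staging URL are placeholders, so check PushBullet’s API docs for the specifics:

```python
import requests

API_TOKEN = "o.your-access-token-here"  # placeholder; generate one in your PushBullet account settings

def push_link(title, url, notes=""):
    """Send a link push. Omitting a device identifier broadcasts
    the push to every device registered to the account."""
    resp = requests.post(
        "https://api.pushbullet.com/v2/pushes",
        headers={"Access-Token": API_TOKEN},
        json={"type": "link", "title": title, "body": notes, "url": url},
    )
    resp.raise_for_status()
    return resp.json()

push_link(
    "Test: new pricing page",
    "https://staging.example.com/pricing",  # hypothetical staging URL
    notes="Check the signup buttons and the FAQ accordion.",
)
```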

This appears on the device as a normal Notification:

push-bullet-notification-2.png

When I tap the notification, the device opens the link in a new browser window. Bonus: I know I’ve completed testing once I’ve cleared the push on each device.

PushBullet also helps when sharing bugs (or doing side-by-side comparisons). See something gnarly? Take a screenshot and push it back to your desktop.

Edge Inspect

Where PushBullet is passive, Adobe’s Edge Inspect is active. With Edge Inspect, I can drive many mobile devices at once, right from my desktop. This is particularly cool if I’m doing a lot of rapid development, or stepping through complicated flows.

One tap in my desktop’s browser also sends a similar click to my mobile devices. The caveat here is that Edge Inspect may introduce bugs of its own: a click in a desktop browser is not equivalent to a tap on a mobile device.

As a result, Edge Inspect is best used for checking visual updates or non-touch-sensitive interactions.

Refinement through philosophy

So far I’ve slimmed my test burden through intelligent use of stats and tools. With a bit of philosophy, I can eke a little more goodness out of my testing time.

QA is a series of filters

Testing every browser on every code change is a little crazy. When developing, I’m just like you: I start by testing locally, in my favorite browser.

I also know that before the code heads to production, I’ll eventually need to test on all of my browser targets. The progression from testing a single browser to testing every browser should be gradual.

When I start expanding my list of test targets, I like poking the frequent offenders first. That usually means IE, plus really big or really small mobile screens. This first pass tends to reveal a bunch of odd bugs. I’ll fix those, then carry on with the rest of the test suite.

Some bugs aren’t worth fixing

Speaking of odd bugs, many bugs aren’t worth fixing. For instance, a lot of forms break when you enter zalgo.

What’s zalgo? It’s malformed, Lovecraftian text that’s frequently copy-pasted as a prank. Unicode allows combining marks to be stacked above and below a character, without limit. Zalgo abuses this, which causes text to flow in all sorts of unexpected directions.
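If you want some zalgo on hand as a test input, it’s easy to generate. A toy sketch; the mark range and intensity are arbitrary choices of mine:

```python
import random

# Combining diacritical marks (U+0300-U+036F) stack onto the preceding
# character; piling enough of them on makes text spill out of its line.
COMBINING_MARKS = [chr(cp) for cp in range(0x0300, 0x0370)]

def zalgo(text, intensity=8):
    out = []
    for ch in text:
        out.append(ch)
        if ch.isalnum():
            out.extend(random.choices(COMBINING_MARKS, k=intensity))
    return "".join(out)

print(zalgo("hello forms"))  # paste the output into a form and watch it squirm
```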

zalgo-breaks-google.png

The above screenshot shows zalgo breaking Google search.

Woah, it broke Google? It sure did, but should Google prioritize a fix post-haste? Probably not. The number of legitimate searches happening in zalgo is slim to none.

Point is, some bugs are worth living with. Google and most other websites are unlikely to see value from zalgo searches.

What kind of app might want to fix a zalgo bug? In a chat app, zalgo allows one user to cover another’s message. That can be kind of frustrating. In that case, it’s probably worth looking at.
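If you did decide to fix it, one possible mitigation (my sketch, not a canonical fix) is to cap how many combining marks may trail any single character before a message is stored or rendered:

```python
import re

# Drop everything past the first two combining marks on a character.
EXCESS_MARKS = re.compile(r"([\u0300-\u036f]{2})[\u0300-\u036f]+")

def defang(text):
    return EXCESS_MARKS.sub(r"\1", text)

heavily_marked = "h\u0300\u0301\u0302\u0303i"  # "hi" with four stacked marks
print(defang(heavily_marked))                  # keeps only the first two marks
```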

Identifying bugs like zalgo is great. You should, however, pause and think for a bit before committing resources to a fix.

Primary and secondary targets

Earlier, we identified primary and secondary test targets. What does that mean? Primary targets are browsers that need to be tested on nearly every release. The exception is very small tweaks, like changes to text, which I can probably push out with minimal cross-browser checking.

Secondary targets should be tested any time we make a significant change to Rainforest. If a brand new feature is queued up, we probably want to do a full pass on all secondary targets.

If the release is a minor tweak, or a code change that’s local in scope, then I can consider just testing my primaries.
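That decision logic is simple enough to sketch out. The groupings below use the targets we identified earlier; the release categories are my own illustrative labels:

```python
PRIMARY = ["Chrome", "Firefox", "Safari desktop", "Safari mobile"]
SECONDARY = ["IE 8", "Android"]

def targets_for(release_type):
    if release_type == "copy-tweak":   # tiny text changes
        return ["Chrome"]              # a quick spot-check will do
    if release_type == "minor":        # small, locally-scoped code changes
        return PRIMARY
    return PRIMARY + SECONDARY         # new features get a full pass

print(targets_for("minor"))
```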

Mitigating risk

At its core, QA is about risk management. I’ll always be able to write more tests, investigate more edge cases, and sniff out more bugs.

In QA, time is my enemy. My tolerance for a feature failing should determine my time investment.

For example, in the case of tweaks to Rainforest’s marketing pages, I’m quite willing to accept failure. The code is relatively simple, deploying a fix is quick, and if it breaks, it’s bad (but not catastrophic).

I test those changes more lightly. How lightly? Just the primary test targets.

The editing and test results screens, on the other hand, are a core part of our product, and I’m far less willing to risk failure there. If something breaks on the Rainforest dashboard, it’s a big deal for our customers. The code is complex and usually difficult to fix quickly; if the database needs a migration, deployment may be complicated; and breakage, in most cases, is quite catastrophic.

I know I won’t have time to test everything to the degree that I’d like, so I allocate my time a bit more wisely. The time I save on the marketing site is invested in testing the core of Rainforest instead. Beauty!

Conclusion

Effective cross-browser testing requires a number of things.

A look at your customers and the ways they access your product. The right kinds of tools to optimize your actual testing time. Finally, a bit of philosophy to ensure you’re committing the right resources to each release.

I hope you enjoyed following along with our cross-browser testing strategy.

Have clever cross-browser tips of your own? Share with us on Twitter @rainforestqa.