Following on from my earlier blog post "Test your JavaScript with Jasmine part 1", I am going to show you a few more things that will make you more efficient at using Jasmine to test your JavaScript.
Let's dive right in!
In previous examples, I showed you a single describe block containing a few it blocks with assertions - but you can also nest describe blocks. You can have a root describe block with nested describe blocks inside it. A good pattern is to have one for the main thing you are testing, then sub-describe blocks for its methods.
Given this object that we want to test:
var myNumberHelper = {
  isEven: function(value) {
    // does some magic
  },
  isOdd: function(value) {
    // also does some magic
  }
};
In this example, I create a describe block per method of the myNumberHelper object, giving the suite a nicely organized feel:
describe("myNumberHelper", function() {
  describe("isEven", function() {
    // some it blocks here to assert isEven is doing its job
  });

  describe("isOdd", function() {
    // some it blocks here to assert isOdd is doing its job
  });
});
Whilst you can have many levels of nesting, I strongly suggest you try to stay within three levels: the object you are testing, its methods, and the context in which each method is exercised.
Jasmine doesn't have a context block like RSpec does, but I love using describe blocks as equivalents. With the right description, you can use them to express the current context of the nested tests in the same way.
If we build on our previous examples, here is how it should look:
describe("myNumberHelper", function() {
  describe("isEven", function() {
    describe("when the argument is even", function() {
      it("returns true", function() {
        expect(myNumberHelper.isEven(2)).toBe(true);
      });
    });

    describe("when the argument is odd", function() {
      it("returns false", function() {
        expect(myNumberHelper.isEven(1)).toBe(false);
      });
    });
  });

  // ...
});
It feels like we are starting to see a pattern for easy readability and organization of tests, don't you think?
One really useful feature is the ability to reuse setup steps so that you don't have to repeat yourself - keeping things DRY. Jasmine supports setup steps which run before each test in a suite. There are also 'after' steps, which you can use to clean up state; this is especially useful when the tests share the same context.
You can define those steps in a beforeEach or an afterEach:
describe("client", function() {
  beforeEach(function() {
    this.client = { name: "John Doe", plan: "trial" };
  });

  it("is on the trial plan", function() {
    expect(this.client.plan).toEqual("trial");
  });
});
In this example, a client is set up before each it block. You can also undefine it after each test:
describe("client", function() {
  beforeEach(function() {
    this.client = { name: "John Doe", plan: "trial" };
  });

  afterEach(function() {
    this.client = null;
  });

  it("is on the trial plan", function() {
    expect(this.client.plan).toEqual("trial");
  });
});
These are just some basic examples; I am sure you are already considering a use case for your code base. If you need a hand, ping us on Twitter.
Jasmine has test double functions called spies. A spy can replace any function and help you track its usage, along with the arguments it was called with.
Let's say you want to make sure you are tracking an action in your analytics software. You can do that easily by using a spy, which will replace your analytics call, whilst ensuring it is being called as expected. How would you do that?
For this example code:
var myNumberHelper = {
  isEven: function(value) {
    var even = value % 2 === 0;
    analytics.track('is_even', value);
    return even;
  },
  isOdd: function(value) {
    // ...
  }
};
The test is as simple as this:
describe("myNumberHelper", function() {
  describe("isEven", function() {
    // ...
    describe("when the argument is even", function() {
      it("sends a 'is_even' event", function() {
        // This replaces the implementation of analytics.track with a spy
        spyOn(analytics, 'track');

        // The action which you expect to trigger a call to the spy
        myNumberHelper.isEven(2);

        // Then add an expectation that it has been called
        expect(analytics.track).toHaveBeenCalled();
      });
    });
    // ...
  });
});
What did we do exactly?
First, you need to create a spy. This replaces the implementation of your code, in this case analytics.track, with code that lets Jasmine know whether that specific function was called (and with which arguments, if you want).
Great, that's the setup. The next step is to trigger the code you are expecting to call analytics.track - in this case, by calling isEven.
Finally, the expectation itself verifies that analytics.track was actually called.
You could make the expectation even more explicit by specifying the arguments you were expecting, changing it to: expect(analytics.track).toHaveBeenCalledWith("is_even", 2).
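To make that concrete, here is a sketch of how the earlier spec might look with the stricter matcher (the spec description is my own wording):

it("sends a 'is_even' event with the value", function() {
  spyOn(analytics, 'track');

  myNumberHelper.isEven(2);

  // Fails if the spy was called with any other arguments
  expect(analytics.track).toHaveBeenCalledWith('is_even', 2);
});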
Spies are pretty useful, and they can be more flexible than this. For example, a spy can call the original function it's replacing, if you want, with spyOn(analytics, 'track').and.callThrough(). More on spies can be found in the official documentation.
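As an illustration, here is a small sketch combining callThrough with the spy's call tracking; calls.count() is standard Jasmine, but this particular suite is my own example, not one from the documentation:

describe("analytics tracking", function() {
  beforeEach(function() {
    // Record calls but still run the real analytics.track implementation
    spyOn(analytics, 'track').and.callThrough();
  });

  it("tracks once per isEven call", function() {
    myNumberHelper.isEven(4);
    myNumberHelper.isEven(7);

    // calls.count() reports how many times the spy was invoked
    expect(analytics.track.calls.count()).toEqual(2);
  });
});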
Finally, if you want to disable a test or mark it as pending, you can:

- prefix it with an x, giving xit, which will make the test pending
- call pending() inside the it block, which will also make the test pending
- prefix describe with an x, giving xdescribe, which will disable all the tests nested inside it

What does this mean and what's the difference? In all cases, those tests will be skipped. The difference is that the disabled ones won't be shown in the output, but the pending ones will show up in the results as pending.
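To illustrate, here's a quick sketch showing all three side by side (the spec bodies are placeholders of my own):

// 1. Prefix it with an x: the spec is reported as pending
xit("will be reported as pending", function() {
  expect(true).toBe(true);
});

// 2. Call pending() inside the spec: also reported as pending
it("is pending too", function() {
  pending();
  expect(true).toBe(true);
});

// 3. Prefix describe with an x: everything inside is disabled
xdescribe("a disabled suite", function() {
  it("will be skipped", function() {
    expect(false).toBe(true);
  });
});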
There is much, much more to Jasmine. You should look at the official documentation as it's really useful; the project isn't too big, so it's easy to read it all - which I suggest doing if you're serious.
Note that there are different ways to integrate Jasmine so that it runs automatically as part of your backend test suite, or upon saving your code or your tests. Feel free to reach out to us on Twitter to tell us how you are using Jasmine, or if you know an amazing third-party tool that is making your life easier.
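As one example of the run-on-save workflow, the Karma test runner can watch your files and re-run your Jasmine specs automatically; a minimal karma.conf.js sketch might look like this (the file globs and browser choice are placeholders for your own setup):

// karma.conf.js - minimal setup with the karma-jasmine adapter
module.exports = function(config) {
  config.set({
    frameworks: ['jasmine'],                 // run specs with Jasmine
    files: ['src/**/*.js', 'spec/**/*.js'],  // placeholder globs: code first, then specs
    browsers: ['Chrome'],                    // or any launcher you have installed
    autoWatch: true                          // re-run the suite whenever a file changes
  });
};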
If you've read this far and are still looking for ways to test more effectively, be sure to check out our ebook on getting to continuous deployment for tips on bringing your QA strategy up to the speed of deployment.