Unit testing tip – additional override


If you have a method you want to test, but it does some convenient stuff internally that you don’t want to mess with just for the sake of testing, create a second version that takes those dependencies as parameters and do your testing there.

+ (void)addAuthorizationHeaderToSessionConfiguration:(NSURLSessionConfiguration *)sessionConfiguration
{
    TokenManager *tokenManager = [[TokenManager alloc] init];
    AccessToken *accessToken = tokenManager.accessToken;

    [self addAuthorizationHeaderToSessionConfiguration:sessionConfiguration token:accessToken];
}

+ (void)addAuthorizationHeaderToSessionConfiguration:(NSURLSessionConfiguration *)sessionConfiguration token:(AccessToken *)accessToken
{
    NSString *accessTokenHeaderValue = [NSString stringWithFormat:@"%@ %@", accessToken.tokenType, accessToken.token];
    sessionConfiguration.HTTPAdditionalHeaders = @{@"Authorization" : accessTokenHeaderValue};
}

The first method is the convenient public one.
The second is the one you can test and inject other things into.
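For example, a test can drive the second method directly with a canned token. This is a minimal sketch; the SessionBuilder class name and the AccessToken initialiser used here are assumptions, not part of the original code.

- (void)testAuthorizationHeaderIsAddedToSessionConfiguration
{
    // Hypothetical fixture: construct an AccessToken however your initialiser allows.
    AccessToken *accessToken = [[AccessToken alloc] initWithTokenType:@"Bearer" token:@"abc123"];

    NSURLSessionConfiguration *sessionConfiguration = [NSURLSessionConfiguration ephemeralSessionConfiguration];

    // SessionBuilder stands in for whatever class owns the two methods above.
    [SessionBuilder addAuthorizationHeaderToSessionConfiguration:sessionConfiguration token:accessToken];

    XCTAssertEqualObjects(@"Bearer abc123",
                          sessionConfiguration.HTTPAdditionalHeaders[@"Authorization"]);
}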

Objective-C Unit Testing Tricks


Getting at an object’s innards through categories

Normally you want to test an object entirely through its public API. But for those times when you need to access an object’s innards (but won’t or can’t expose them), you can make them visible to the test by declaring the internal method in a category in your unit test.

- (void)testAddNumberToStack {
    Calculator *calc = [Calculator new];
    [calc pushOperand:1.0];
    XCTAssertEqual(1, calc.stackCount); // private
}

Calculator.m

-(NSUInteger)stackCount {
    return [self.myStack count]; // private
}

CalculatorTest.m

@interface Calculator (Tests) // category
-(NSUInteger)stackCount;
@end

Alternative to partial mocks

For those times when you want to override a method in a class under test, declare a test subclass and override the method in your test.

@interface TestSubclass : ClassUnderTest
@end

@implementation TestSubclass

- (void)methodToOverride
{
    ...
}

...
// In the test
- (void)testSubclass
{
    ClassUnderTest *testObject = [TestSubclass new];

    XCTAssert(...);
}

@end
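Filled in, it might look something like this. This is just a sketch; NetworkClient and fetchData are made-up names standing in for your class under test and the method you need to override.

@interface StubbedNetworkClient : NetworkClient
@end

@implementation StubbedNetworkClient

// Replace the slow or environment-dependent behaviour with something deterministic.
- (NSData *)fetchData
{
    return [@"canned response" dataUsingEncoding:NSUTF8StringEncoding];
}

@end

// In the test
- (void)testFetchDataAgainstTheStubbedBehaviour
{
    NetworkClient *client = [StubbedNetworkClient new];

    XCTAssertNotNil([client fetchData]);
}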

Objective-C – How to stub out an NSError in a unit test


#import <XCTest/XCTest.h>

#import "Player.h"

// Mocking some things (like NSErrors) can be tricky in OCMockito.
// One alternative, if your mocks get too complex, is simply to stub manually.

// Create a fake class for the behaviour you want to stub,
// override the methods,
// and then inject that into your SUT.

// For example

// Fake subclass of the class whose behaviour we want to stub
@interface FakeHandler : Handler
- (BOOL)playTrack:(NSString *)track error:(NSError **)errorPtr;
- (BOOL)playAlbum:(NSString *)album error:(NSError **)errorPtr;
@end

@implementation FakeHandler

// Method / behaviour we want to override
- (BOOL)playTrack:(NSString *)track error:(NSError **)errorPtr {

    NSString *domain = @"com.MyCompany.MyApplication.ErrorDomain";
    NSString *desc = NSLocalizedString(@"Unable to connect to network.", @"");
    NSDictionary *userInfo = @{ NSLocalizedDescriptionKey : desc };

    // Guard against callers that pass NULL for the error parameter.
    if (errorPtr) {
        *errorPtr = [NSError errorWithDomain:domain
                                        code:-101
                                    userInfo:userInfo];
    }

    return NO;
}

- (BOOL)playAlbum:(NSString *)album error:(NSError **)errorPtr {
    return NO;
}

@end


// Now we can use all of this in our test.

@interface PlayerStubTest : XCTestCase
@property (nonatomic, strong) Player *player;
@property (nonatomic, strong) Handler *fakeHandler;
@end

@implementation PlayerStubTest

- (void)setUp {
    [super setUp];
    self.fakeHandler = [FakeHandler new];
    self.player = [[Player alloc] initWithHandler:self.fakeHandler]; // Dependency-inject our stub
}

- (void)testNetworkFailure {
    // Now when we call our play method, we can expect a failure and test that it bubbles up
    NSError *expectedError;
    [self.player playURL:@"track.xxx" error:&expectedError];
    XCTAssertNotNil(expectedError);
}

@end

How to stub a class for unit testing


Here is an example of how to subclass an existing class and create a stub for your unit tests in Objective-C.

PlayerStubTest.m

#import <XCTest/XCTest.h>

#import "Player.h"
#import "Handler.h"

@interface PlayerStubTest : XCTestCase
@property (nonatomic, strong) Player *player;
@property (nonatomic, strong) Handler *stubHandler;
@end

// Override the default behaviour of the handler for the methods you want to test
@interface StubHandler : Handler
- (BOOL)playTrack:(NSString *)track;
- (BOOL)playAlbum:(NSString *)album;
@end


@implementation StubHandler

- (BOOL)playTrack:(NSString *)track {
    NSLog(@"Stub track");
    return NO;
}

- (BOOL)playAlbum:(NSString *)album {
    NSLog(@"Stub album");
    return NO;
}

@end

@implementation PlayerStubTest

- (void)setUp {
    [super setUp];
    self.stubHandler = [StubHandler new];
    self.player = [[Player alloc] initWithHandler:self.stubHandler];
}

- (void)testExample {
    [self.player playURL:@"track"];
}

@end


When you run this you should see ‘Stub track’ in the console.
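If you would rather assert than eyeball the console, the test could read something like the sketch below. It assumes playURL: hands back the BOOL it receives from the handler, which is not shown in the original code.

- (void)testPlayURLReturnsTheStubbedResult {
    // Assumes Player's playURL: returns the BOOL it receives from the handler.
    XCTAssertFalse([self.player playURL:@"track"]);
}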


Lessons learned in TDD


This was an excellent talk Ian Cooper gave at NDC a couple years ago.
It hits on a lot of the challenges and common problems Ian and others have seen with test suites and TDD in general.

It’s a great talk, which you can watch in its entirety here.

Here are some notes.

The Problem


Where did it all go wrong


Two key points:
1. Not writing tests against behaviors.
2. Coupling our implementation details to our tests.

If we fix these we will have smaller test suites that are much more self-explanatory and much less painful to own.

There was advice in early TDD that said a new method on a class was the trigger point for writing a new test.

And that’s really wrong.

The trigger for writing a new test is to have some piece of behavior that I want to implement. The test needs to capture that behavior.

The reason your tests are so hard to read when you go back to them is that the connection between the low-level test you wrote and the high-level feature you were trying to deliver has been lost. It’s lost in the noise.

What you need to do is express in your test that we are testing a given behavior of our system.

Before you get to the point of putting implementation details down, first write down the behaviours you are trying to capture, and only then the implementation. Don’t jump ahead too early.

Testing outside in

Start with the use case, the story, the scenario or feature we want to solve, on the outside, then work your way in from there.

Recommendation: don’t start this at the UI level. Start one level beneath at the plain old object level. Then if your UI changes, it won’t matter. Focus on the domain models.

Test the public API of classes. Not the internals. Test the surface only, and it should be quite narrow.

As soon as you start testing the internals you are coupling your tests to your implementation details. To change your implementation details you now have to change your tests. And that’s the key problem.

So your surface area should be much narrower than what many people are testing today. Just the API.

That means you will write fewer tests, tests against the use cases, and when you refactor the contract remains the same, the internals change, and you don’t break any tests.
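As a concrete illustration, a behaviour-level test for the RPN calculator from earlier stays on the public API. This is a sketch: performOperation: is an assumed public method alongside the pushOperand: seen above.

- (void)testAddingTwoOperands {
    Calculator *calc = [Calculator new];
    [calc pushOperand:1.0];
    [calc pushOperand:2.0];

    // Asserts on observable behaviour only; how the operand stack is stored is free to change.
    XCTAssertEqualWithAccuracy(3.0, [calc performOperation:@"+"], 0.0001);
}

Compare that with the stackCount category trick earlier: this test survives a rewrite of the stack’s internals, while a test that reaches into the innards does not.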

BDD

This is what BDD is all about. Testing behavior, and not low level details, and methods on classes.

What is a unit test


The simplest thing


The real problem with TDD is that Kent asked us to go green as fast as possible, committing as many sins as we want in that step. So go to another project, cut and paste some code, and stick it in your application. Hack it. Make it work. Don’t worry about the design. Get it green ASAP.

Kent’s point here is that it’s difficult to do two things at once: make the test pass and design at the same time. Better to separate them out.

Green is about solving the problem. Refactoring is about doing the design.


You do not write new tests here when refactoring. You already have the high-level behavior tests written. So long as those pass, you are OK. If you need another test, write it at that level.

But when you are refactoring, you are free to focus on design and engineer it right.

If you write tests here you bind your implementation details to your tests.

Coupling is the biggest problem in software. It is the enemy. Forget DRY. Coupling will kill you. Do not couple tests to implementation details.


Dependency is the key problem in software development at all scales.

We need to eliminate the dependency between our tests and our code. Tests should not depend on details. Tests should depend on contracts or public interfaces. This allows us to change implementations without changing tests.

Test behaviors not implementations.

That means fewer tests.
Moving faster.
Going quickly to green.
Then refactoring and making it cleaner.
Better forward progress, because the tests aren’t slowing you down so much.


iOS test classes not running


If you’ve added unit tests to a new project and for some reason they aren’t running… try this.

Double click your blue project icon


Click the Test Target, Build Phases, and expand Compile Sources



Click the ‘+’ sign, then ‘Add Other’ at the bottom, navigate to your test files, and try adding them manually. That may fix your problem.

XP Days at Spotify


Today I had the pleasure of hosting the second day of a two-day XP-style workshop with some incredible developers at Spotify. Here’s what we did.

Build something fast!

We kick things off by asking the class to build a Reverse Polish Notation (RPN) style calculator. In 10 minutes.

We do this partially by design – to create a little stress. But it’s also an ice breaker. Just to verify everyone’s machine is up and running. As well as gauge the experience level of the class.

We then have a discussion about some of the challenges of building software, as well as what quality means and how we can shoot for it in our software.

The birth of XP

This then segues into the birth of XP. Here I share Kent and Ward’s original vision of what XP was, talk a little about the C3 project, and basically explain the XP attitude: if certain things are hard, we are going to do them continuously, all the time. Testing, integration, pairing, design, etc.

I also draw the cost of change curve and share how Kent wanted a way to develop software, such that the cost didn’t increase exponentially over time.


Elephant Carpaccio

Partly because smaller stories are a good idea, and partly because it helps them with the project they do on day two, we then do an Elephant Carpaccio story slicing exercise.

https://docs.google.com/document/u/1/d/1TCuuu-8Mm14oxsOnlk8DqfZAA1cvtYu9WGv67Yj_sSk/pub

This is great because it reminds people about what makes stories valuable in the first place, while also giving advice on how to slice. Basics, but really good stuff for people who may not have read or had any story training.

TDD together

This is where we formally introduce TDD, and revisit the RPN calculator we built at the beginning of the course.

As a class, I start with one other person, showing them the mechanics behind writing the test first, making it pass, and then … sometimes refactoring. Sometimes I don’t refactor, just to let the duplication build up and save it for conversation later.

But we do this as a group, and it gives students a nice intro on a codebase and a problem they already know.

Refactoring

Something that’s surprised me as a middle-aged programmer (cough) is how many people still haven’t read Martin Fowler’s seminal book on refactoring. I have a lot of fun with this.

Refactoring, even in really tight programming circles, is still a very misunderstood term. Usually it is taken to mean just rewriting code. But people don’t always appreciate that when you are refactoring you aren’t adding any new value. You are only improving the existing design.

So I really hit home that the reason that word appears in a menu at the top of your favorite IDE is because of this book. And then I ask how people refactor without unit tests.

It’s a trap of course. Because you can’t. But a lot of people still do. So we talk about that.

So that’s day one. And during the day we also sprinkle in discussions about YAGNI, production readiness, and other XPisms and attitudes.

Code me a game

Day two is all about putting TDD, unit testing, refactoring, continuous integration, collective code ownership, and pairing all into practice. In teams of four, the class then goes about building a text based adventure game.

Screen Shot 2015-10-30 at 1.12.24 PM

Text based games are great for learning TDD. They are fun, most people understand how they work, and they don’t contain a lot of hairiness. But they contain enough hairiness to give a taste of what applying TDD in the real world is like (like how do you handle all those input output streams?).

We have a few rules for the text based game we are going to build.

  1. No production code without a failing unit test.
  2. No code not written in pairs.
  3. Refactoring happens continuously while the code is being developed.
  4. Check in early and often.

Screen Shot 2015-10-30 at 1.14.34 PM

It’s really fun seeing how teams of four tackle a new code base. We also have the challenge that many people come from different parts of the world, so everyone has their own keyboard, which can make pairing tricky.

But people overcome. We have lots of discussion about how to test, how to design, and how to tackle all the hairiness that seems to come with coding, even on a simple text based game.

We usually do three one-hour iterations, with a demo, code review, and discussion at the end of each.

We also track the number of commits, the number of stories, and code coverage.

After three to four hours people are pretty pumped and exhausted. We then sit back and reflect on what we’ve learned.

It’s really interesting watching people try TDD for the first time. On the surface, when you see other people do it, it looks really easy. But it’s not. At least at first. It’s hard. It goes against not only everything you’ve been taught in school; it’s also not the natural state for people who are just used to hacking.

But once people get it, you can see a light go on. They design their code differently. They find they need less code. And it takes a lot of the guesswork away. They only end up creating what they need. And for many that’s a revelation.

Of course TDD is only one leg of the XP stool. And all the other practices also come into play – particularly pairing. People often ask how you keep the discipline up. You play a big part in that, I say. But your pair can help you too.

Insights, Actions, and Mysteries

We wrap up the day with Insights, Actions, and Mysteries. Here we ask people what they learned, things they want to do, and things we still wonder about.



Big insights today were the awesomeness of pairing (you can learn a lot working with someone else). Actions were YAGNI – try hard not to over-engineer. And mysteries were things like how fast to move when doing TDD vs. going really slow and gearing down to really, really small tests.

Overall it was a great day. I always learn as much as, if not more than, the attendees. And this was a great group to work with. See you out there!


