Week 2: Who tests the tests?

People are already becoming skeptical as to how long I will keep up my daily posting regarding the brave new world of telecommuting. One of those people is me! Well, actually... I never intended to keep up the pace forever. But I figure it's worthwhile to make it daily for as long as I can. It makes sense in a way. Any new experience is very vivid at first. Lots of stuff for the senses to take in. And then, gradually, our brains go on autopilot, seeing less and less noteworthy material. So if I can make the effort to pump out these posts while there is stuff to talk about, we should get a pretty good idea of how long it takes for the telecommuting experience to become the new "normal". And that, in itself, is useful information for anyone else looking to try it.

The only thing I really noticed today that was different between a regular job and telecommuting had to do with my first "all company meeting" with M7 (by the way, I like referring to the company as M7 because it sounds similar to MI6, which, of course, is where James Bond works – where's my Aston Martin, huh?). As I tried to figure out why the meeting felt kind of odd, it came down to this: I don't like letting a meeting go by without making the odd joke to make it feel a little less meeting-like. Now, I'm not talking about doing something like, "A horse, a priest, and Jay Leno walk into a bar..." (by the way, I have no idea how that one would end, so please don't ask me). I'm just talking about the general camaraderie that occurs in a face-to-face meeting (at least at a smaller company – meetings were just meetings at bigger companies). It's harder to pull off over the phone because you don't get to see the body language that usually indicates whether or not a little joking around is called for. The few attempts by me and others were also made awkward by how hard it is to interject. It felt more like you were actually cutting someone off. This happened even outside the realm of attempted camaraderie. If someone's saying something and you have a point to add, it's very hard to slip it in at the right place without actually cutting them off. They don't see you bursting with something to say, as they would in a face-to-face meeting, where they'd naturally pause to let you in and then continue.

One thing I'll say about the meeting that has nothing to do with telecommuting is... it had a time limit! Listen up, certain former co-workers who will remain nameless but did not set time limits: when you have a time limit, things still work quite well. You still get all the important stuff out. You just squeeze it into less time and are more conscious about not going off on tangents. There. Said it. No more callouts on this post.

Finally, the title of the post. Let me introduce you to my latest obsession. When I started, around 500 of our 2000 tests were broken due to a Ruby on Rails upgrade. We're now down to around 130. Important to note: of the 370 that were fixed, I found only one failure that actually had to do with the site code. The rest were broken simply because the test framework was broken, so the fixes went into the test framework, not the application code. I became a bit obsessed with getting the failing tests down to zero, and here's why: it's the broken windows scenario, and I've seen it happen time and time again. That said, it's hard to justify spending a lot of time on those remaining 130 tests, and, in fact, I'm currently hitting the 80/20 rule hard: the first few fixes caused tons of tests to pass, but now each fix clears only 1-10 at a time. Talking to Seth, I momentarily woke up from my coder's obsession and saw things from the business point of view. Tests aren't features. They're something that helps us coders sleep at night, and they provide no benefit visible to users. Don't write tests and you'll crank features out much faster. If you're having trouble hitting a deadline, throw out the tests and you'll hit it way more easily.
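As an aside, since "the test framework was broken" might sound vague, here's the flavor of fix I'm talking about. This is a hedged, hypothetical sketch (the controller, the fixture, and the particular syntax change are all made up for illustration, not taken from our codebase): the feature still works fine in the browser, but the test errors out before it ever exercises the app, because the upgrade changed how the test harness passes request parameters.

    require 'test_helper'

    # Hypothetical "framework broke, not the code" failure. The controller
    # renders fine in the browser, but the upgraded test harness changed
    # how params are passed, so the old call errors out before the test
    # ever touches the app.
    class ProductsControllerTest < ActionController::TestCase
      test "show renders a product" do
        # Pre-upgrade style that the new framework rejects:
        #   get :show, :id => products(:widget).id
        # Post-upgrade style; the assertion about the app is unchanged:
        get :show, params: { id: products(:widget).id }
        assert_response :success
      end
    end

Multiply that by a few hundred tests and you get the picture: one actual app bug, hundreds of plumbing fixes.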

Here's the thing, though. If we let a few months go by without fixing those 130 tests, we'll probably stop doing test-first development altogether. The whole point of having a test suite is having all the tests pass. When you allow a lot of tests to fail for long enough (and usually "long enough" is a pretty short time), it's like not fixing the broken windows on a city block.

Having come from plenty of businesses where the test-first methodology ended up going out the window (pun intended?), I'm not sure this is the end of the world. The problem with having a load of tests, especially ones that test really trivial things, is that the conditions a test cared about when it was written can become totally irrelevant. Then, when it breaks, you're sometimes left wondering why the test was there in the first place.
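Here's a hypothetical example of the kind of test I mean (made-up names, but I've seen tests exactly like it): it pins down marketing copy that was rewritten ages ago. The page works fine; the assertion is just stale.

    require 'test_helper'

    # Hypothetical stale test: it asserts page copy that marketing rewrote
    # months ago. The feature still works; only the assertion is out of
    # date, and every run it fails, it buries failures that actually matter.
    class WelcomeControllerTest < ActionController::TestCase
      test "homepage shows the beta banner" do
        get :index
        assert_select "h1", "Welcome to our private beta!"  # copy long gone
      end
    end

A test like that isn't telling you anything about the health of the product anymore; it's just noise in the failure count.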

But there has to be a happy medium. One idea is simply to watch the tests and be ruthless about culling the ones that become irrelevant. Taking our situation as a case in point, I'd say that if we haven't knocked those 130 failing tests down to zero within a month, we should just delete them. If the features are working and you no longer know why a test is failing, maybe it's better to throw it out so that it doesn't obscure more important failures and doesn't end up being the broken window that kills your automated testing going forward. Let's be clear: I think having these tests is a good thing. I'm just airing some heresy that's floating around in my brain at the moment. I may completely change my mind tomorrow, and I would certainly love to find out what some of my fellow developers think of this problem. But even though tests are a great learning experience, and even though (as I'm suggesting, debatably) leaving them broken for too long jeopardizes the practice of testing first itself, it's obvious that, given the choice between working tests and working features, the people actually using the product are going to want the working features.

So for now, I'm moving on to features, though I might pick away at the remaining broken tests as flashes of brilliance occur. And I'll do my best to keep thinking in terms of writing the test first and then developing the feature to make it pass. But I'm also going to be thinking about strategies for keeping those pesky tests manageable.
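For anyone who hasn't worked this way, the loop I'm trying to stick to looks something like this. It's a minimal sketch with a made-up Coupon model (nothing from M7's actual code): write the failing test first, then write just enough feature code to make it pass.

    require 'test_helper'

    # Step 1: write the test before the feature exists, run it, watch it fail.
    class CouponTest < ActiveSupport::TestCase
      test "an expired coupon gives no discount" do
        coupon = Coupon.new(:discount => 10, :expires_on => Date.yesterday)
        assert_equal 0, coupon.effective_discount
      end
    end

    # Step 2: write just enough code to make the test pass, then refactor.
    class Coupon < ActiveRecord::Base
      def effective_discount
        expires_on < Date.today ? 0 : discount
      end
    end

The key is that the test fails for the right reason before the feature exists; that's what tells you it's actually testing something.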