Posts

Showing posts with the label continuous testing

Tests, and Bug Fixes

“Bug fixes must include a test that exercises the bug, and the fix.” This really shouldn’t be controversial, y’know? I mean, after all: 1. There is a bug. We all know there is a bug. There is clearly something bad happening (“Why did the service restart? I didn’t ask it to do so!”), and bad is not good. 2. If we’re lucky, the bug even comes with a test case that exercises the bug (“Send in an int instead of a string, and watch the fun!”) 3. If we’re very lucky, the bug report includes code (“To dream the impossible dream…”) Regardless of where one is in the spectrum above, once you admit to yourself that there is a bug — and this can be an awfully hard admission to make sometimes — then you’re going to have to fix the damn thing. And that is going to involve some level of process where you’ll be doing something to make sure that there is a bug, right? After all, taking the above in...
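(A minimal sketch of the kind of regression test argued for above, assuming a hypothetical parseUserId() that used to blow up when handed an int instead of a string; the names, the bug, and the Jest-style assertions are all illustrative, not from the original post.)

// Hypothetical function under test: per the imagined bug report, it used to
// throw when handed an int instead of a string.
function parseUserId(raw: unknown): string {
  if (typeof raw === "number" && Number.isInteger(raw)) {
    return raw.toString(); // the fix: tolerate integer ids
  }
  if (typeof raw === "string" && raw.length > 0) {
    return raw;
  }
  throw new Error(`unsupported user id: ${String(raw)}`);
}

// The regression test ships with the fix, and exercises the exact input
// from the bug report (Jest-style; adapt to whatever runner you use).
describe("parseUserId", () => {
  it("accepts an integer id, which used to crash the service", () => {
    expect(parseUserId(42)).toBe("42");
  });

  it("still accepts the normal string form", () => {
    expect(parseUserId("42")).toBe("42");
  });
});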

Miniformats for Tests

So you’re building out a fairly complex component — one of those things that, try as you might, you really can’t simplify any further. Consider what happens when you start building out your tests for this component. You get all your really basic unit tests done quickly — things like • “make sure this function returns a positive integer” • “validate that the user count is never zero” and so on. The fun begins when you start layering on functionality. For example, your widget’s state includes a whole bunch of opaque internal structures with a variety of bit-patterns, hashes, and whatnot — and your widget does…stuff…based on these structures. Your tests very rapidly end up containing depressingly long sections of code, all designed to say stuff like // set up my widget var widget = new Object(); widget.hash_start_section = "deadbeef"; widget.constructor_type = 1337; . . // continues forever... And that’s on a good day. On a bad day, you do horrible stuff like seriali...
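(One way of reading that premise as code: a tiny, made-up "miniformat" in which a single compact string describes the widget state, and a small parser expands it into the verbose setup. The widget fields are lifted from the snippet above; the format itself is purely illustrative.)

// A made-up "miniformat": one compact string describes the widget state,
// and a small parser expands it into the verbose setup the tests need.
interface Widget {
  hash_start_section: string;
  constructor_type: number;
  flags: string[];
}

// Spec looks like: "hash=deadbeef;type=1337;flags=compressed,signed"
function widgetFrom(spec: string): Widget {
  const fields = new Map<string, string>(
    spec.split(";").map((pair) => {
      const [key, value] = pair.split("=");
      return [key.trim(), (value ?? "").trim()] as [string, string];
    })
  );
  return {
    hash_start_section: fields.get("hash") ?? "00000000",
    constructor_type: Number(fields.get("type") ?? "0"),
    flags: (fields.get("flags") ?? "").split(",").filter((f) => f.length > 0),
  };
}

// Instead of twenty lines of widget.foo = ... per test, each test case
// becomes a single readable line:
const widget = widgetFrom("hash=deadbeef;type=1337;flags=compressed,signed");
console.log(widget.constructor_type); // 1337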

Continuous Testing — Some Semantics

If you’re doing any form of Continuous Testing (or heck, even just automating your tests), you’ve probably already run into the Semantics Gap, where what you mean by XXX isn’t what others mean by it. This can be quite the killer, in ways both subtle and gross. I recall a post-mortem in the past that boiled down to the QA and Release teams having different assumptions about what “The Smoke-tests passed” meant. The resulting chaos — both between the teams and for the customer — was epic, and something that still makes me shudder reflexively when I look back at it. And that, my friends, is just about when I put together the following terminology. Mind you, far be it from me to tell you to adopt this terminology. Heck, you may very well vehemently disagree with it — and that’s ok. The thing is, whatever terminology you use needs to be agreed upon by everybody! (And you’ve probably got all the same stuff below, just broken ...

Ignorance Is NOT Bliss — Flaky Tests Edition

We’ve all dealt with flaky tests, right? Tests that don’t consistently pass or fail, but are, instead, nondeterministic. They’re deeply annoying, end up being a huge time-suck, and inevitably end up occupying the bulk of your “productive” time. I’ve written about the common reasons for flakiness before, viz., External Components, Infrastructure, Setup/Teardown, and Complexity (read this post for the details — it’s a very short read!). The big takeaway from that article, though, is that flakiness never goes away, it just moves up the food-chain! For example, as you clean up your Infrastructure issues, you’ll start running into issues with Setup/Teardown. Fix those, and you’re now dealing with Complexity. And there is, of course, a special place of pain involved with anything involving distributed systems, what with consensus, transactions, and whatnot. There are many, many ways of dealing with flakiness...
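(One of those ways, sketched very roughly: measure the flakiness instead of ignoring it, by rerunning a suspect test and flagging anything nondeterministic. The runTest hook and the numbers below are stand-ins, not a real framework API.)

// Rough sketch: detect flakiness by rerunning a test and checking whether
// it is nondeterministic. `runTest` is a stand-in for invoking your real
// test runner; everything here is illustrative.
type TestFn = () => Promise<void>;

async function flakinessReport(
  name: string,
  runTest: TestFn,
  attempts = 10
): Promise<{ name: string; passes: number; failures: number; flaky: boolean }> {
  let passes = 0;
  let failures = 0;
  for (let i = 0; i < attempts; i++) {
    try {
      await runTest();
      passes++;
    } catch {
      failures++;
    }
  }
  // Flaky == neither consistently passing nor consistently failing.
  return { name, passes, failures, flaky: passes > 0 && failures > 0 };
}

// Usage: quarantine (or at least report) anything that comes back flaky,
// instead of silently retrying until it goes green.
flakinessReport("widget-restart-test", async () => {
  if (Math.random() < 0.2) throw new Error("simulated nondeterminism");
}).then((report) => console.log(report));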

Flaky Tests — The Bane of Existence

We’ve all had to deal with flaky tests — tests that don’t consistently pass or fail, but are, instead, nondeterministic. They’re deeply annoying, end up being a huge time-suck, and inevitably end up occupying the bulk of your “productive” time. There are many, many reasons for flakiness, but in my experience, the vast majority of them can be boiled down to some combination of the following: 1. External Components: When the code relies on something that isn’t under its control, and makes assumptions about it. I’ve seen people validate internet access by retrieving http://google.com (“because Google is always up”, conveniently ignoring the path from the test environment to Google), assume that there is a GPU present (“because it’s Bob’s code, and he always runs it on his desktop”), and so forth. The thing is, these assumptions get made even after stubbing — our assumptions about the environment we’re in can frequently lead us to places wh...
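(A small illustrative sketch of the alternative to that first category: inject the external dependency so the test controls it, rather than reaching out to http://google.com from inside a test. The Connectivity interface and the class names are made up for this example.)

// Illustrative only: instead of "validating internet access" by fetching a
// live URL inside a test, inject the network dependency so the test owns it.
interface Connectivity {
  isReachable(url: string): Promise<boolean>;
}

// Production implementation really goes out on the wire.
class RealConnectivity implements Connectivity {
  async isReachable(url: string): Promise<boolean> {
    const res = await fetch(url, { method: "HEAD" });
    return res.ok;
  }
}

// Test stub: deterministic, with no assumptions about the path from the
// test environment to Google, and no GPU-on-Bob's-desktop surprises.
class StubConnectivity implements Connectivity {
  constructor(private readonly reachable: boolean) {}
  async isReachable(_url: string): Promise<boolean> {
    return this.reachable;
  }
}

// Code under test takes the dependency instead of assuming the environment.
async function syncIfOnline(net: Connectivity, doSync: () => Promise<void>) {
  if (await net.isReachable("https://example.com/health")) {
    await doSync();
  }
}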

Testing and Instincts

/via http://cartoontester.blogspot.com/2010/01/how-to-spot-tester-in.html Sometimes I feel like a huge chunk of all the test-automation we do consists of figuring out why they entered 4,294,967,297 in the first place, because it actually ended up taking down the entire super-market.

Continuous Testing and Development Hygiene

Continuous Development (and Continuous Testing) is increasingly common out there. The process of pushing a commit and having it kick off a run on Travis / CircleCI / Jenkins is so frightfully mundane that one does it without actually thinking about it much these days. That said, here is a (highly unscientific!) study of errors that I've found in CD environments: 1. Compilation Errors. Yes. Seriously. And the ridiculous thing is that pretty much every damn IDE these days supports some form of "compile on save" option, no? 2. Missing Code. There's literally a gaping hole where an entire module should be. This could be because the developer forgot to implement it, or because the developer pushed the wrong branch, but still, come on… 3. Wrong Tests. The developer cut-and-pasted tests from a different module as a starting point, and then happily forgot to actually implement them correctly. 4. Missing Data. The test harness depends on data that isn't checked ...
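(A hypothetical pre-push guard, sketched as a small Node/TypeScript script, that catches the "doesn't even compile" and "tests never ran" classes of error before the CI run does. The specific commands are assumptions; substitute whatever your project actually uses.)

// Hypothetical pre-push guard: run the compiler and the test suite locally
// before the commit ever reaches Travis / CircleCI / Jenkins.
import { spawnSync } from "node:child_process";

function run(cmd: string, args: string[]): boolean {
  const result = spawnSync(cmd, args, { stdio: "inherit" });
  return result.status === 0;
}

// Which commands to run is project-specific; these are examples only.
const checks: Array<[string, string[]]> = [
  ["npx", ["tsc", "--noEmit"]], // catches the "Compilation Errors" class
  ["npm", ["test"]],            // catches missing or never-run tests
];

for (const [cmd, args] of checks) {
  if (!run(cmd, args)) {
    console.error(`pre-push check failed: ${cmd} ${args.join(" ")}`);
    process.exit(1);
  }
}
console.log("pre-push checks passed");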

Business Risks, Product Lifecycle, and Testing

" We've automated our tests " - The first thing that pops in my head when I hear this is " I wonder what the tests actually cover ".  At heart, testing is all about coverage - code coverage, requirements coverage, business case coverage, etc. Automating these tests sets things up so that when you kick off your tests, they will go through all of these for you, so that you don't have to remember to do everything . Automation doesn't actually tell you anything about the quality of the tests . Yes, I know, that is me belaboring the obvious, but I do so for a reason -  most tests have precious little to do with business risks . And that matters because, in the end, everything you are working on is aimed at driving business value . (•) Hold this thought, we'll get back to it in a moment  -  let's continue with testing.  The thing to remember here is that you are never done testing . There is always some set of tests that you could be doing. S...