Happy Hack-o-ween: Electronics and a microcontroller spice up the haunt

Ah, Halloween, when a young woman's fancy turns to love. And zombies.

I had two personal requirements for the costume I would build this year:
  1. It shall be spooky.
  2. It shall blink.


I'll tell you about the final result, the electronics and the software that went into it, plus the techniques I used to achieve wearable electronics. I'll introduce you to the Arduino, an open-source microcontroller prototyping platform, which is an exhilarating tool/toy for making your software skills manifest in the physical world.

My husband and I have been teaching ourselves electronics. A few months ago, Dad taught me to solder. I recently read Syuzi Pakhchyan's excellent primer on wearable electronics and smart materials, Fashioning Technology. And I've been making things with my Arduino. All these ideas were swirling and combining in my head to inspire this year's Halloween project. Er, costume. Same difference.

What was I? I was a nightmare... the thing under your bed... the reason for your well-developed sense of paranoia...
The thing under your bed

I sported a crop of writhing eyeballs erupting from my head. Each eyeball has an LED inside it, and they blink randomly and independently, until I trigger a hidden switch, which causes the blinky ones to go dark and two red eyes to pulse menacingly. In the Flickr photoset, you can see the construction process.

The Arduino Sketch


The term "Arduino" is overloaded to mean:
  1. a particular chip and circuit board which you can buy or build;
  2. the IDE in which you write programs for the chip;
  3. the language, which is C-flavored;
  4. fun.


Arduino programs are called sketches. Every sketch must contain two functions: setup (runs once) and loop (runs continuously). Here's my sketch, with extra explanatory comments, that blinks the six regular eyeballs and responds to the switch by pulsing the red eyeballs.
    #define SWITCH 8
    int ledPins[] = {2, 3, 4, 5, 6, 7};
    const int ledPinsCount = 6;
    int redEyePins[] = {10, 11};
    const int redEyePinsCount = 2;
    long durations[ledPinsCount];
    int ledStates[ledPinsCount];
    long previousTimes[ledPinsCount];
    int i;

    void setup()
    {
      pinMode(SWITCH, INPUT); //Specify the switch pin as an input.

      for (i = 0; i < redEyePinsCount; i++)
      {
        pinMode(redEyePins[i], OUTPUT); //Specify each red-eye LED pin as an output.
      }

      for (i = 0; i < ledPinsCount; i++)
      {
        pinMode(ledPins[i], OUTPUT); //Specify each regular LED pin as an output.
        ledStates[i] = random(2); //Randomly set the LEDs to on or off (1 or 0); random's upper bound is exclusive.
        durations[i] = GetRandomDuration(); //Define a random duration for each LED to stay in that state.
        previousTimes[i] = 0; //At time of setup, the "last time we changed" is at 0 milliseconds, the start of time.
      }
    }

    void loop()
    {
      if (digitalRead(SWITCH) == HIGH)
      {
        TurnOffLeds();
        PulseRedEyes();
      }
      else
      {
        for (i = 0; i < redEyePinsCount; i++)
        {
          digitalWrite(redEyePins[i], LOW); //Turn the red eyes all the way off.
        }

        for (i = 0; i < ledPinsCount; i++) //For each LED:
        {
          if (millis() - previousTimes[i] > durations[i])
          {
            ChangeLed(i); //If this one's duration is up, then flip it.
          }
        }
      }
    }

    void TurnOffLeds()
    {
      for (i = 0; i < ledPinsCount; i++)
      {
        digitalWrite(ledPins[i], LOW);
      }
    }

    void PulseRedEyes()
    {
      //Fade on, then off.
      int j;
      for (j = 0; j < 255; j += 5)
      {
        for (i = 0; i < redEyePinsCount; i++)
        {
          analogWrite(redEyePins[i], j);
          delay(10);
        }
      }
      for (j = 255; j > 0; j -= 5)
      {
        for (i = 0; i < redEyePinsCount; i++)
        {
          analogWrite(redEyePins[i], j);
          delay(10);
        }
      }
    }

    void ChangeLed(int ledPin)
    {
      previousTimes[ledPin] = millis(); //Update the "last time we changed" to now.
      durations[ledPin] = GetRandomDuration(); //Give it a new random duration.
      ledStates[ledPin] = 1 - ledStates[ledPin]; //Flip the state between on and off.
      digitalWrite(ledPins[ledPin], ledStates[ledPin]); //Set the LED to that state.
    }

    long GetRandomDuration()
    {
      //Random number between 1 and 9 (random's upper bound is exclusive), multiplied by 400 to give it a detectable duration.
      return random(1, 10) * 400;
    }


I like the way the eyes blink independently. If they all flashed in unison, they would look like Christmas lights, and you would notice that two were "special" because they weren't flashing. Instead, the duration that any given eyeball is lit or dark constantly changes.

The blinking is managed by a collection of arrays. One array holds my LED pin numbers, so that I can address them in a for loop. Three parallel arrays hold the state (on/off) of each LED, the duration each LED should stay in that state, and the reading from the millisecond counter when the LED last flipped its state. Each time the loop function executes, if the switch is not connected, I look at each LED: if the difference between the current time and the time when it previously changed is greater than its duration, I flip its state (from off to on, or from on to off), randomly assign it a new duration, and record "now" as the new "previously changed" time. If the switch is connected, I make the red eyes fade on and fade off.

Fading with PWM


PWM (Pulse-Width Modulation) is a technique for making a digital component (one that turns on or off) simulate analog behavior (be a little bit on, and then a little bit more on). If you turn an LED off and on really quickly, you won't perceive the flickering, but it will look half as bright, because it is actually off for half the time. If you let it spend a little more time off than on, it will appear even dimmer. So by varying the width of the pulses, you can control how bright the LED looks.

The Arduino comes with built-in PWM functions; some pins are already set up to be PWM pins. If you plug an LED into one of the PWM pins, then you can write to it as if it were an analog component. That's why, in my sketch above, I set the brightness of the red eyes using analogWrite(), instead of digitalWrite(). My for loop increments the counter j from 0 to 255, and sets the brightness of both red eyes to the value of j. The Arduino takes care of (imperceptibly) flickering the LEDs with the right ratio of on-time and off-time to achieve a j amount of brightness. So the eyes get gradually brighter, then gradually dimmer. (Then control returns to the main loop function, but if my switch is still connected, the red eyes will throb again.)

Snaps: Wearable Plugs


A metal sewable snap is like a plug for your clothing, an interface between the world of textiles and the world of wires. This is handy when you need the electronics to be separate while you are getting into the clothing, or if you want to wash the clothing. My Arduino hung out at the base of my neck, to be near the LEDs on my head but hidden underneath my wig, but my control switch was near my hip. I could have run a wire down to the switch, but conductive thread was more subtle and more comfortable.

To complete the connection, I soldered a short wire to one side of the snap. That wire plugged into a pin on the Arduino. The conductive thread ran from the switch at my hip up to the back of my dress near the Arduino, and I sewed that conductive thread to the other half of the snap. When the two halves are snapped together, the wire and the thread make a complete connection, as if they were one continuous wire.
Soldered snaps / Sewn snaps

I had two threads (going out to the switch and back), so I encased them each in a bias tape tube, to prevent them from touching each other and shorting out.

I've been saying "switch," but actually, I simplified at the 11th hour. I tied each thread around a safety pin, and stuck the pins to my dress. When the safety pins touched each other, they completed the circuit, which the Arduino sketch interpreted as triggering the switch—cue red-eye glare.

What's Next


Soldering and sewing are both liberating skills to possess—they free up your creativity to make wilder and more integrated stuff. If you are currently proficient with only one, ask around and see if you can find a buddy who's good at the other, and teach each other.

The Arduino comes with a great community of hackers and makers, lots of people to learn from and collaborate with. Definitely check it out. There is lots of fun to be had, and blinking LEDs is the barest beginning of what it can do.

Refactoring Dinner: Interfaces instead of Inheritance

Last time, in Cooking Up a Good Template Method, I had a template method cooking our dinner. An abstract base class defined the template—the high level steps for preparing a one-skillet dinner—and a derived class provided the implementation for those steps. I'm currently reading Ken Pugh's Interface Oriented Design (more on that after I finish the book), and it got me thinking of a way to change the design to use interfaces instead of inheritance.

I think there's value in this refactoring because it allows future flexibility and testability. Let's stroll through it, and I welcome your thoughts about how (and whether) this improves the code.

Previously, we had a base class SkilletDinner, which was extended by variants on that theme, such as chicken with onions and bell peppers or the FancyBaconPankoDinner. (If I've learned one thing from my readership, it is that blog posts should mention bacon. Mm, crispy bacon.) As the first step in the refactoring, I'll create an interface, ISkilletCookable that provides the same methods that were previously abstract methods in SkilletDinner. By naming convention, the interface is prefixed with 'I' and is an adjective describing how it can be used (-able).
    public interface ISkilletCookable
    {
        void HeatFat();
        void SauteSavoryRoot();
        void SauteProtein();
        void SauteVegetables();
        void AddSauceAndGarnish();
    }


Next, I'll create a SkilletDinner constructor that accepts an ISkilletCookable, and change the SkilletDinner's Cook() method to ask that cookable to do the work. SkilletDinner no longer needs to be abstract.
    public class SkilletDinner
    {
        private readonly ISkilletCookable cookable;

        public SkilletDinner(ISkilletCookable cookable)
        {
            this.cookable = cookable;
        }

        public void Cook()
        {
            cookable.HeatFat();
            cookable.SauteSavoryRoot();
            cookable.SauteProtein();
            cookable.SauteVegetables();
            cookable.AddSauceAndGarnish();
        }
    }


Then, FancyBaconPankoDinner implements ISkilletCookable and provides implementations for each of the methods that will be called by the Cook() method.

The first benefit from this refactoring is flexibility. While FancyBaconPankoDinner could not have inherited from multiple base classes (C# has no multiple inheritance), it can implement multiple interfaces. For example, it could also implement the IShoppable interface, thereby providing a ListIngredients() method that would let me include it in my grocery list.

This refactoring also makes it easier for me to test the quality and completeness of my template method. I can verify—does it cover all of the requisite steps for cooking a skillet dinner?—by creating behavior-verifying tests that assess the SkilletDinner's interactions with the ISkilletCookable interface. When I'm writing unit tests for the SkilletDinner class, I want to test its behavior because the behavior is what's important.

To forestall objections, I tried writing a test around the old version, creating my own mock class that extends the old abstract SkilletDinner. It got pretty lengthy.
    public class SkilletDinnerSpecs
    {
        [TestFixture]
        public class When_told_to_cook
        {
            const string heatFatMethod = "HeatFat";
            const string sauteSavoryRootMethod = "SauteSavoryRoot";
            const string sauteProteinMethod = "SauteProtein";
            const string sauteVegetablesMethod = "SauteVegetables";
            const string addFinishingTouchesMethod = "AddFinishingTouches";

            [Test]
            public void Should_follow_dinner_preparation_steps_in_order()
            {
                var systemUnderTest = new MockSkilletDinner();

                var expectedMethodCalls = new List<string>();
                expectedMethodCalls.Add(heatFatMethod);
                expectedMethodCalls.Add(sauteSavoryRootMethod);
                expectedMethodCalls.Add(sauteProteinMethod);
                expectedMethodCalls.Add(sauteVegetablesMethod);
                expectedMethodCalls.Add(addFinishingTouchesMethod);

                systemUnderTest.Cook();

                Assert.AreEqual(expectedMethodCalls.Count, systemUnderTest.CalledMethods.Count, "Expected number of called methods did not equal actual.");

                for (int i = 0; i < expectedMethodCalls.Count; i++)
                {
                    Assert.AreEqual(expectedMethodCalls[i], systemUnderTest.CalledMethods[i]);
                }
            }

            private class MockSkilletDinner : SkilletDinner
            {
                public readonly List<string> CalledMethods = new List<string>();

                protected override void HeatFat()
                {
                    CalledMethods.Add(heatFatMethod);
                }

                protected override void SauteSavoryRoot()
                {
                    CalledMethods.Add(sauteSavoryRootMethod);
                }

                protected override void SauteProtein()
                {
                    CalledMethods.Add(sauteProteinMethod);
                }

                protected override void SauteVegetables()
                {
                    CalledMethods.Add(sauteVegetablesMethod);
                }

                protected override void AddFinishingTouches()
                {
                    CalledMethods.Add(addFinishingTouchesMethod);
                }
            }
        }
    }


In the new design, I can mock the ISkilletCookable interface with a mocking framework like Rhino.Mocks. The interface is easy to mock because interfaces, being the epitome of abstractions, readily lend themselves to being replaced by faked implementations. Rhino.Mocks takes care of recording and verifying which methods were called.
    public class SkilletDinnerSpecs
    {
        [TestFixture]
        public class When_told_to_cook
        {
            [Test]
            public void Should_follow_dinner_preparation_steps_in_order()
            {
                var mocks = new MockRepository();
                var cookable = mocks.StrictMock<ISkilletCookable>();
                var systemUnderTest = new SkilletDinner(cookable);

                using (mocks.Record())
                {
                    using (mocks.Ordered())
                    {
                        cookable.HeatFat();
                        cookable.SauteSavoryRoot();
                        cookable.SauteProtein();
                        cookable.SauteVegetables();
                        cookable.AddSauceAndGarnish();
                    }
                }
                using (mocks.Playback())
                {
                    systemUnderTest.Cook();
                }
            }
        }
    }


The test relies on Rhino.Mocks to create a mock implementation of ISkilletCookable, and then verifies that the system under test, the SkilletDinner, interacts correctly with ISkilletCookable by telling it what steps to do in what order.

That test is quite cognizant of the inner workings of the SkilletDinner.Cook() method, but that's specifically what I'm unit testing: Does the template method do the right steps? I don't mind how the steps are done, but you have to start the onions before you add the meat, or else the onions won't caramelize and flavor the oil.

By the way, if you had previously found the learning curve for Rhino.Mocks' record/playback model too steep a hill to climb (or to convince your teammates to climb), check out Rhino.Mocks 3.5's arrange-act-assert style. It creates more readable tests, putting statements in a more intuitive order. I really like it. I could not, however, use it here because I have not found a way to enforce ordering of the expectations (i.e., to assert that method A was called before B, and to fail if B was called before A) in A-A-A-style. So we have a record/playback test, instead.

Here's a summary of the refactoring. I extracted an interface, ISkilletCookable, and composed SkilletDinner with an instance of that interface, liberating us from class inheritance. Because SkilletDinner is now given the worker it depends on (via dependency injection), I can give it a fake worker in my tests, so that my unit tests don't need to perform the time- and resource-consuming operation of firing up the stove. And I managed to write another blog post that mentions bacon. Mm, bacon.

Cooking Up a Good Template Method

The software concept of "raising the level of abstraction" has improved my skill and creativity in cooking, by teaching me to think about recipe components in terms of their properties and functions. Practicing abstraction-raising in cooking feeds back to help me with coding; for example, keeping me from going astray the other day with the Template Method pattern. This post is more about coding than cooking. The cooking's a metaphor. (The cake is a lie.)

Abstract Cooking
My skill with cooking grew from rote recipe following to intuitive creation when I started to think of it in terms borrowed from software: raising the level of abstraction.

Consider a week-night skillet dinner. If I told you to heat canola oil in a cast-iron skillet, saute slices of onion and chunks of chicken seasoned with salt and pepper, and toss in bell peppers cut into strips, you could probably follow along and make exactly that. But that's pretty limiting. If instead I described the process as using a fat to conduct heat for sauteing a savory root, a seasoned protein, and some vegetables, then you could use that as a template, and make a week of dinners without repeating yourself.

Let's dive into that step of using a fat for conduction, because it is a cool and useful bit of food science. To cook, you need to get heat onto food. The medium can be air, liquid, or fat. Each creates different results, hence the terms baking, boiling, and frying. When you toss cut-up bits of food in a skillet with oil and repeatedly jostle them, you're sauteing ("saute" means "to jump"), and that oil is playing the role of the fat, which is conducting the heat. If you'll pardon the metaphor, CanolaOil implements the IFat interface.

It's useful to think of cooking this way, because if you know the properties of the various cooking fats, you can choose the right IFat implementation for the job. Canola oil is heart-healthy and stands up well to stove-top heat. Olive oil has wonderful health benefits, a bold flavor, and an intriguing green color, but those attributes are pretty much obliterated by heat, so save your expensive EVOO for raw applications like salads and dips. Butter makes everything taste better, browns up beautifully, but is harder on the heart and will burn at a low temperature; temper it with an oil like canola to keep it from burning. Peanut oil stands up to heat like a champ, so it's popular for deep frying. Armed with this kind of knowledge, I don't need to check a recipe when I'm cooking; I just think about what I'm trying to accomplish, and choose the right implementation.

Pam Anderson's How to Cook Without a Book got me thinking about food this way, and Harold McGee's On Food and Cooking provides a feast of food geekery to fill in all the particulars.

Template Coding
Thinking about food this way, raising the level of abstraction, guides my thinking about code. My meal preparation follows the Template Method pattern, as does a class my teammate and I needed to modify recently.

In this example, our application sends instructions to various external systems. The specifics of how those systems like to hold their conversations vary between systems. However, the series of steps, when phrased in our core business terms, remain the same. You do A, then you do B, then you do C, in whatever way a particular instance likes to do A, B, and C.

Here's my class with its template method, translated back to the dinner metaphor:

    public abstract class SkilletDinner
    {
        public void Cook()
        {
            HeatFat();
            SauteSavoryRoot();
            SauteProtein();
            SauteVegetables();
        }

        protected abstract void HeatFat();
        protected abstract void SauteSavoryRoot();
        protected abstract void SauteProtein();
        protected abstract void SauteVegetables();
    }


But lo, I encountered an external system that needed to do one extra little thing. I needed a special step, just for that one instance. Like dinner the other night, where the vegetable was asparagus, the fat was bacon (oh ho!), and the final step was to toss some panko breadcrumbs into the pan to brown and toast and soak up the bacony love.

How do I extend my template method to accommodate an instance-specific step?

One idea that floated by was to make the method virtual, so that we could override it in our special instance. But we still wanted the rest of the steps, so we'd have to copy the whole method into the new instance, just to add a few lines. Also, anybody else could override that template, too, so that when they were told to do A, B, and C, they could totally fib and do nothing of the sort.

    public abstract class SkilletDinner
    {
        public virtual void Cook()
        {
            //Note: The Cook template method is now virtual,
            //and can be overridden in deriving classes.
            //That's not good.
            HeatFat();
            SauteSavoryRoot();
            SauteProtein();
            SauteVegetables();
        }

        protected abstract void HeatFat();
        protected abstract void SauteSavoryRoot();
        protected abstract void SauteProtein();
        protected abstract void SauteVegetables();
    }

    public class LazyDinner : SkilletDinner
    {
        public override void Cook()
        {
            OrderPizza();
            //We're overriding the template and *cheating*!
            //Although, if it's Austin's Pizza,
            //maybe that's okay...
        }

        private void OrderPizza()
        {
            //With extra garlic!
        }

        protected override void HeatFat() { }
        protected override void SauteSavoryRoot() { }
        protected override void SauteProtein() { }
        protected override void SauteVegetables() { }
    }


That LazyDinner class isn't really a SkilletDinner at all; its behavior is completely different. No, that option defeats the whole point of the Template Method pattern.

Our better idea was to make one small change to the template method, adding an extension point. That is, a call to a virtual method which in the base implementation does nothing, and can be overridden and told to do stuff in specific cases.

Back to dinner:

    public abstract class SkilletDinner
    {
        public void Cook()
        {
            HeatFat();
            SauteSavoryRoot();
            SauteProtein();
            SauteVegetables();
            AddFinishingTouches(); //Here's the hook.
        }

        protected virtual void AddFinishingTouches()
        {
            //By default, do nothing.
        }

        protected abstract void HeatFat();
        protected abstract void SauteSavoryRoot();
        protected abstract void SauteProtein();
        protected abstract void SauteVegetables();
    }

    public class FancyBaconPankoDinner : SkilletDinner
    {
        protected override void AddFinishingTouches()
        {
            //In this case, override this extensibility hook:
            ToastBreadcrumbs();
        }

        private void ToastBreadcrumbs()
        {
            //Toss in the bacon fat; keep 'em moving.
        }

        protected override void HeatFat()
        {
            //Cook bacon, set aside, drain off some fat.
        }

        protected override void SauteSavoryRoot()
        {
            //Minced garlic, until soft but before browning.
        }

        protected override void SauteProtein()
        {
            //How about... tofu that tastes like bacon?
        }

        protected override void SauteVegetables()
        {
            //Asparagus, cut into sections.
            //Make it bright green and a little crispy.
        }
    }


This maintains the contract of the template method, while allowing for special cases. With the right extensibility hooks in place, my dinner preparation happily follows the Open-Closed Principle—open for extension, but closed for modification.

I enjoy the way my various hobbies feed into and reflect upon each other. I hope this post has given you some useful insight into the Template Method pattern, or dinner preparation, or both. Look for synergies amongst your own varied interests; it can be the springboard for some truly breakthrough ideas.

Mmm, bacon...

Inconvenient Accessibility Makes Self-Documenting Code

Intentional use of access modifiers (public, private, etc.) is like a clear memo to your team. This came up during Steve Bohlen's Virtual Alt.Net talk on domain-driven design.

Steve explained the distinction between Entity objects, which have a unique identity independent of their properties (Even when I change my name, I'm still me.), and Value objects, which are defined by their properties (If you change the house number in an address, you have a new address.). When dealing with Entities, code should not be able to change the unique id—that would be like someone claiming your social security number and thereby becoming you. Therefore, Entity classes should have private setters for their unique identifiers.

A meeting attendee asked why bother, since this gets inconvenient when you're creating an object from a record fetched from the persistence repository. It's a big pain; why go to the trouble? The analogy I would offer is this. When you're defining a class to represent an Entity in your business domain, you know it's an Entity. You intend for it to behave and be treated like an Entity. You don't want any of your teammates setting its unique id in their code. So you send them an email: "Don't set Person.UniqueId, okay?" Uh huh. How well is that going to work over time?

Instead, if you simply don't provide a public accessor to the UniqueId property, your teammates will get the message loud and clear. Granted, someone could edit the code and change the accessibility, but the fact that he or she needs to is a flashing neon sign saying "Stop. Think. Are you barking up the wrong tree?" You've made your code communicative. Its structure conveys your intent. No need for comments; this is an example of self-documenting code.

Giving Mono to my Husband

Holy crossed-platforms, Batman! How did I not know about Mono, the free, open-source framework that will run .NET applications on Linux and Mac OS X?

Not to get too personal, but I'm part of a mixed marriage: I run Windows and develop primarily in C#; my husband runs OS X and is not (actively) a programmer. Through love and mutual respect, we make it work. But what we have so far not been able to make work is my writing handy utilities and toys that he can use on his laptop.

I learned about Mono in Rod Paddock's intro to the May/June issue of CoDe Magazine. Then I came home, had Jon install Mono on his Mac, and gave him a quick little console app I'd written in C#. It ran like a charm. A WinForms app with one button and a popup message also ran, looking distinctly X11-y.

This is super exciting for us. We've been talking about a card-game playtest simulator, to help with his creation of card games. (Jon posts one free board game a month and has a few upcoming commercial releases.) That process usually involves a significant investment in card stock and time with the paper cutter, just to see how hands of cards come together and move through the game. A simulator would help him to vet the first and maybe second drafts of the cards without printing them out. Now that I know I can build something he'll be easily able to run, it's time to start designing!

Got the 0000FFs

Given up on attaching meaning to those three- or six-character codes that define colors in HTML and CSS? Sure, you can use an online color picker, but let me give you a nuts-and-bolts explanation of what they mean. This info is worth having because:
  1. It's a time-saver. If you want to make a color a little more blue, or a little less saturated, you can do the math in your head and take care of it right there in your editor.
  2. It gives you more options. If you find a cracking color combo in the Color Index, but it's given only in RGB values, you can convert it to HTML-ready values using just math.
  3. It's satisfying. Don't you prefer knowing how something works, instead of just how to work with it?
  4. It will be diverting. There will be stories, you know me.

Two experiences in my childhood laid the foundation for my understanding of hex color codes, so I will share them with you. (See? Stories.)

When I was very young, I learned that the three primary colors are red, yellow, and blue, and when you mixed them together you got, well, mud, but theoretically black. That's true for pigments (paint and ink; think magazines and newspapers), and if you're going to be pedantic, those pigment primary colors are properly called magenta, yellow, and cyan. (Add blacK and you have CMYK, the other color scheme you'll see mentioned in design books.) But it's a whole 'nother ball game when you are mixing light instead of pigment, and computer monitors are big light bulbs.

My seventh-grade science teacher, Mr. Saeger, created an excellent demonstration that I still think of when I'm mixing up hex color codes. He set up the overhead projector. He placed a square of red cellophane on the projector, and it threw a red square of color up on the wall. Sure. Then he added a piece of green cellophane, and the area where they overlapped was... yellow? That's curious. Last he added a piece of blue, and the intersection of all three was white. It blew my mind.

If you can replicate this effect (shine a light through overlapping colored plastics), it's a great science experiment to share with your kids. It will help you remember the mixing of light colors with the same intuition you have for mixing pigments. And it's cool.

The second formative experience from my youth was working the stage lights in my high school theater. Hanging above the stage were three rows of lights; the lights alternated amongst white, yellow, red, and blue, and were controlled by a huge wall of levers backstage. Big, creaky, ancient things, that really let you know you were working the lights. I had to crouch and get my shoulder under them to move the big ones.

Picture them: Four rows of colored levers, corresponding to each color of light out over the stage. Each lever controlled a light. Down was off, and as you pushed the lever up, the light would gradually brighten. A big handle at the end of a row would move all the levers of that color, so you could, for example, bring up all the whites in unison. You could slowly turn down the yellows over the course of a scene while a teammate pushed up the reds, and make a sunset. You could push all the colors up to make the light full and cheery (and make the stage hotter than a tanning booth), or pull them all down to plunge the stage into darkness at the dramatic conclusion of Act I.

Levers... lights... hex codes, here we go.

Light is mixed from red, green, and blue. (Remember the order: RGB, RGB, RGB.) Computers count each of those three channels not from 1 to 100, but from 0 to 255. Think of 0 as off, with the lever all the way down, and 255 as on, with the lever all the way up. To make yellow, you need a lot of red and a lot of green, and no blue, so R = 255, G = 255, and B = 0. To make a paler yellow, you want to bring it closer to white. White is all three on at maximum; therefore you need to turn up the blue. Maybe R = 255, G = 255, and B = 153. To make it more orangey, you'd back off the green. And so forth.

So we have three levers. A hex color code has three pairs of characters. That yellow would be #FFFF00. Put another way: FF, FF, 00. It's the same three RGB values, but in base 16 instead of base 10. 255 in base 10 becomes FF in base 16. Counting in base 16 is like counting in base 10, if you had 6 extra fingers. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10.

The Math Bits site gives a visual explanation of how to convert from base 10 to other bases. Also, you can use the calculator on your computer. In Windows, set the calculator to Scientific mode in the View menu, make sure the "Dec" radio button is marked, type in your base-10 number, then switch to the "Hex" radio button and read the converted value. But usually when I'm writing HTML, I just need to nudge a color a little, not do a whole decimal-to-hexadecimal conversion. So it is sufficient to know that all zeroes is all black, all Fs is all white, CC is more "on" than 99, and EE is just a teeny bit less than full on.
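If you'd rather script the conversion than reach for the calculator, here's a little sketch in Python (my choice for illustration, not something from the article) that goes between decimal RGB triples and hex codes:

```python
# Convert a decimal RGB triple to an HTML hex color code, and back.
def rgb_to_hex(r, g, b):
    # {:02X} prints each channel as two uppercase hex digits, zero-padded.
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    # Parse each pair of hex digits back into a 0-255 decimal value.
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 255, 153))  # the pale yellow from above: #FFFF99
print(hex_to_rgb("#FF9900"))      # (255, 153, 0)
```

Nudging a color is then just arithmetic on the decimal values before converting back.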

CSS also permits three-character color codes. The same color-mixing is happening there, it's just a shortcut that doubles each character for you. So #ca9 is equivalent to #CCAA99.
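The doubling rule is mechanical enough to write down in one line; a quick Python sketch (again, just for illustration):

```python
# Expand a three-character CSS shorthand by doubling each character.
def expand_shorthand(code):
    return "#" + "".join(ch * 2 for ch in code.lstrip("#"))

print(expand_shorthand("#ca9"))  # "#ccaa99"
```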

Putting this into practice...
  • #FFFFFF: Turning all three levers full on makes white.
  • #CCCCCC: Backing them off a little, but keeping them all equal, makes gray.
  • #000000: Turning them all off makes black.
  • #FF0000: Red on and the rest off makes red.
  • #990000: Turning down the red, so that it moves closer to black, makes a darker red.
  • #FF6666: Turning up the others, so that the whole mix moves closer to white but has more red than anything else, makes pink ("Lightish red!").
  • #FFFF00: Red plus green (when mixing light) makes yellow.
  • #FF9900: Keep the red, but reduce the green, to make orange.
  • #00FFFF: Green and blue make teal.
  • #FF00FF: Red and blue make purple.

So there you have it: An explanation of hex color codes by way of my seventh-grade science class and my high school drama—er, drama department. Right.

The Null Object Pattern: When a slacker is just what you need

I had a challenge that was neatly solved by the Null Object pattern. I'd like to share it with you, so that I can explore the idea and provide a practical example.

Simplifying a bit, I have a Person object, and I need to fill it with details retrieved from an external system. When I started looking at the code, the call to look up the Person attributes took a Person as a passed-in parameter and modified it. That struck me as bad behavior ("Hey! I gave you that so you could use it, but I didn't expect you to change it. Sheesh."). I thought it would be more honest for the method to return information, which the controlling class could use to update the Person object if it chose to.

Let me name the players, to make this easier to follow. I have a class coordinating activities that, in our business context, is called a Translator. I created a Client that makes the actual calls to the external system. Before the refactoring, the Translator would call Client.Lookup(Person). The Client would create a message to the external system, get back a response, and use it to set attributes in the Person.

I changed Client.Lookup so that it does not change the Person, and instead returns a Response that contains the needed attributes. But if the external system did not have any info to return, should Lookup return null, throw an exception, ...?

Usually the most appropriate answer to this question is to throw an exception. If no info means you're in an invalid state or an unknown state, then it is not safe to continue, and the code should throw. In this case, though, we could continue. We didn't require the info coming back from the external system; it was just handy if available.

So I return null? But that means every time I call Client.Lookup, I have to check whether the Response is null before I use it. And so does anyone else who might be calling it in the future. It seems disingenuous for a method to say, "I'll give you a Response, but I might actually give you an empty bag. I hope you've guessed I might do that and planned accordingly."

    public void DoFancyBusinessSteps(Person person)
    {
        Response response = client.Lookup(person);
        if (response != null)
        {
            person.Address = response.Address;
            person.ExternalSystemId = response.ExternalSystemId;
        }
        // More stuff based on the Person...
    }


I'd rather return an object that is safe to use, regardless of what answer we got from the external system, and is helpful if we received useful info. This is the Null Object pattern.

I created an IResponse interface that exposes one method, Update(Person). Next I created two implementations of that interface, a Response and a NoDataResponse.

    public void DoFancyBusinessSteps(Person person)
    {
        IResponse response = client.Lookup(person);
        response.Update(person);

        // More stuff based on the Person...
    }


Response.Update uses its fields to set properties on the Person (with a method name that clearly states it is doing so). NoDataResponse.Update quietly does nothing. This allows the Translator to ask the Client to look up info about the Person, and ask the resulting Response to update the Person.

    public class NoDataResponse : IResponse
    {
        public void Update(Person person)
        {
        }
    }


I like it. As with all good tools, it's prudent not to over-use it. If quietly doing nothing would leave the Person object in a bad state, so that it blew up or corrupted data when you tried to use it later, then don't use the Null Object pattern. Throw an exception instead. The Null Object pattern is handy when you want to return an object that can do stuff in some conditions and will be harmless in other conditions.
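If you'd like to see the whole shape in one place, here's a compact sketch of the same pattern in Python (the names mirror the C# above; the Client's data source is faked for illustration, since the real external system call isn't shown):

```python
class Person:
    def __init__(self):
        self.address = None
        self.external_system_id = None

class Response:
    """A real response: carries data and knows how to apply it."""
    def __init__(self, address, external_system_id):
        self.address = address
        self.external_system_id = external_system_id

    def update(self, person):
        person.address = self.address
        person.external_system_id = self.external_system_id

class NoDataResponse:
    """The null object: same interface, quietly does nothing."""
    def update(self, person):
        pass

class Client:
    def __init__(self, external_data=None):
        # Stand-in for the external system; None means "no info available".
        self.external_data = external_data

    def lookup(self, person):
        if self.external_data is None:
            return NoDataResponse()
        return Response(self.external_data["address"],
                        self.external_data["id"])

# The caller never checks for null; both branches are safe to use.
person = Person()
Client({"address": "123 Elm", "id": 42}).lookup(person).update(person)

person2 = Person()
Client().lookup(person2).update(person2)  # no data: a harmless no-op
```

Either way, the Translator's code stays a straight line: look up, update, move on.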

Follow-up to the Retrospectives Workshop

My co-facilitator Suzy wrote up her reflections from the AgileAustin retrospectives workshop, where she shares more of the great insights that came from the participants.

Retrospectives: Collaborative Improvement

Suzy Bates and I facilitated an AgileAustin workshop this morning called "Cheaper Than an Off-Site: How to hold effective retrospectives that improve team collaboration and performance." It was absolutely a blast, from planning through delivery. A benefit I didn't anticipate (but should have) is that I learned a lot, myself. I'll share some of those epiphanies here.

If you were to ask me what one thing a team could do to improve its delivery of great software on a predictable schedule, I would say: hold effective retrospectives. A retrospective is a regularly recurring discussion where the team reflects on how they work together, and what they will change in order to get better. The end of a sprint, right after the demo, is a good time for this.

We presented the following as the key ideas:
  1. Safety
  2. Diversity of viewpoints
  3. Collective ownership
  4. Structured discussion


Being a facilitator for the workshop was a bunch of fun. Collaborating with Suzy was excellent—meshing our two approaches really enhanced the content, as I brought the philosophy and she brought the practical hands-on exercises, and we kept each other moving and making progress on our preparations. If you're tempted to lead a workshop but daunted, find a teammate.

The audience participation was even richer than I'd hoped. People enthusiastically contributed, leapt into the exercises with gusto, and shared some striking insights. I learned neat stuff. Here are some highlights:
  • Rotate who holds the role of retrospective leader (or sponsor). That brings fresh ideas and spreads the sense of shared ownership.

  • When a team member expresses dissatisfaction (for example, through a team satisfaction histogram), sometimes it is appropriate to acknowledge it without delving into why and root cause and solutions. If team safety means I can say how I feel and that's okay, then I should be able to say how I feel without getting interrogated about why I feel that way.

  • A bunch of "went well" items without any "needs fixing" items might indicate a lack of team safety. In today's exercise, our sprint teams barely knew each other, so they wanted to be polite, and so they were very positive in their reflections. When you're comfortable with your team and trust them, it becomes easier to talk about areas for improvement. I'd view a series of "everything was great" sessions as indicative of a lurking problem.

  • Appreciative Inquiry is a philosophy/strategy worth looking into. It replaces, for example, the negative "why is this broken" with the positive "what was successful in the past that we could apply here."

  • A book recommendation: Jim Highsmith's work (probably Agile Project Management?).


If you attended the workshop: Thanks! It rocked. If you weren't able to, keep an eye out for future AgileAustin workshops. They're announced on the AgileAustin email list, so sign up for that. If you don't live in Austin, well... nanny nanny boo boo.

Quantifying Benefits on Refactoring Work

When last we talked about estimating and prioritizing code maintenance, we'd left it as an exercise for the reader to devise a method to compare the relative size of the benefit from two different refactorings. In other words, you can work on A or B; which one is likely to give a greater benefit? I've had an idea for how to do this. It even includes some satisfying math.

Start with the premise that, when estimating the size of effort, it is sufficient—nay, preferable—to compare values in an abstract unit ("story points") that does not map directly to any real-world concept (such as "hours"). When you try to estimate effort in a real-world unit, people get distracted by (hung up on) the wrong details. Better to use an abstract unit that lets you compare the relative sizes of two efforts. Then you can prioritize them ("Let's do the smaller one."), and over time you develop a predictive measure of how many units your team can deliver. Can we create a similarly abstract yet useful unit for comparing benefits?

You undertake a refactoring because you want to make the code better. The benefit from this work comprises two pieces: how much you're going to improve an area of the codebase, and how important that area is. Your impact will be made up of how much "betterness" you can impart, and how much the betterness will matter.

You've probably had a similar experience: You find an area of the code that makes you frankly itch to improve it. You could dramatically increase its beauty. But it's not really used in very many places, and business needs hardly ever drive you to change it, so its beauty or lack thereof is actually rather insignificant. On the other hand, there's a class that makes you queasy every time you have to interact with it, but it's used everywhere, so changing it would be a huge, risky undertaking. It stays ugly, despite being so important.

There are a number of intuitive and emotional influences in those decisions, and they interact with each other in additive and multiplicative ways. This is a good place to apply some rigor, to get the emotions out of the way and compare options more objectively. You apply a similar rigor when you tackle a difficult decision by actually writing down the pros and cons in two columns on a piece of paper, so that you can see how the two sides stack up. So let's apply that to comparing the possible benefits from two refactorings. We only have time to do one, so we're trying to decide which one to do.

Consider a questionnaire, with pairs of questions, asking about what the code might be like after you're done.

  1. Future proofing:
    1. How much easier will it be to change it the next time?
    2. How often do we get asked to change this area?

  2. Regression proofing:
    1. How much better will our test suite be able to prevent defects in this area?
    2. How business-critical is it that we don't introduce defects in this area?

  3. Avoiding risk:
    1. How likely are we to succeed without creating problems?
    2. How tolerant is our business of risk in this area?

  4. Improving satisfaction and increasing velocity:
    1. How likely are we to reduce tech support incidents by this work?
    2. How many of our tech support incidents can be attributed to this area?

And so on, with questions that are specific to your own team. Collaborate with your team to create the questions, so that they represent the team's decisions. Note how a pair of questions covers how much improvement and how much that improvement will matter. Also see that for each question, a more emphatic answer is a good thing. For example, the avoiding-risk question asks how likely we are to succeed, not how risky the task is. Questions are phrased so that, the more you say "Yes, a lot," the more that's an endorsement in favor.

Now, depending on your bent, this next bit will remind you either of a prioritization matrix or a Cosmo quiz. Nevertheless, for each question in your questionnaire, answer a 1, 3, or 5, to represent "barely," "some," "a lot." Then multiply each corresponding a and b and add up the products: (1a * 1b) + (2a * 2b) + (3a * 3b)... In this way, you represent the multiplicative relationship between How Much Better and How Much Does It Matter.
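The arithmetic is trivial to script. Here's a sketch in Python, with made-up answers for the four question pairs above (the numbers are hypothetical, purely to show the mechanics):

```python
# Each pair is (how much better, how much it matters), answered 1, 3, or 5.
def bunits(answers):
    return sum(a * b for a, b in answers)

# Hypothetical scores for the four pairs above: future proofing,
# regression proofing, avoiding risk, satisfaction/velocity.
print(bunits([(5, 3), (3, 5), (3, 3), (1, 5)]))  # 15 + 15 + 9 + 5 = 44
```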

We need a unit for these scores. I'm thinking "bunits" (BYOO-nits) because they are a unit of benefit and of beauty. So if you have two refactoring tasks on the table during your sprint planning, and Refactoring A has an effort of 13 story points and a benefit of 17 bunits, while Refactoring B has an effort of 8 story points and 23 bunits, you can lean towards choosing Refactoring B. As with all matrices of this type, if that decision completely flouts your intuition, then discuss it with your team—maybe your intuition can be calmed, or maybe the questionnaire is failing to cover an important aspect of your work and needs some additional questions.

The real benefit of refactoring is proven only over time. Keep previous bunit estimates in mind and use those in your comparisons and trade-offs when selecting refactorings to undertake. Add this technique as another tool to help you understand the parameters of your decision, but never to override your own good sense.

This is a technique for quantifying the benefit of work that does not have a direct dollar ROI. Why quantify benefit, especially in a unit that has no analog in the real world? Two reasons. First, if you're going to prioritize your to-do list, every item needs some sense of effort and some sense of reward. It just makes sense to do easy things with lots of benefit before hard things that barely matter. Second, asking yourself questions like the above imparts rigor to your decision-making process. We are seekers of beauty, and want desperately to clean up whatever icky thing we looked at most recently. Surveying the choices and comparing relative benefits prods us to make sound business decisions, instead of scratching an itch.