The Trouble with Technical Debt

This article is not about the perils of accumulating technical debt, nor the challenges of paying it down. Instead, it is a call to action for the developer community to change the way we talk about scheduling technical debt.

It's a constant sore spot and the source of many arguments between developers and product owners: How do you make time for refactoring? When do you pay down your technical debt? I say we developers need to change how we make our proposals.

We ask our product owners to prioritize the backlog: to attach a business value to each user story and put them in order. Then we provide a size estimate for each story, indicating its relative cost in time and resources, and voilà: a prioritized list of requirements with a cost and a benefit for each.

Then we propose investing some time in cleaning up a lurking problem. The work will improve our velocity in the future. By how much? A lot. And how long will it take? A while. So please fit this chunk of work into your carefully prioritized backlog.

You see what we have here, folks: an ArgumentNullException. The product owner has sorted a list of objects based on their cost and their benefit, and then we try to place an object into the list with an undefined cost and an undefined benefit. No amount of arguing is going to make that work.
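The same failure, sketched in Python terms (a TypeError rather than .NET's ArgumentNullException); the story names and numbers here are invented for illustration:

```python
# A backlog of stories, each with a business value and a size estimate,
# plus the refactoring proposal whose benefit is "a lot" and cost "a while".
backlog = [
    {"story": "Export to CSV",           "value": 8,    "cost": 3},
    {"story": "Single sign-on",          "value": 13,   "cost": 8},
    {"story": "Refactor the data layer", "value": None, "cost": None},
]

# Prioritize by value-per-cost -- which only works when both numbers are defined.
try:
    ranked = sorted(backlog, key=lambda s: s["value"] / s["cost"], reverse=True)
except TypeError as err:
    ranked = None
    print("Cannot prioritize:", err)
```

The sort has no way to place an item with undefined cost and benefit; it simply blows up, which is exactly the position we put our product owners in.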

The first counter-argument I receive from folks is that the business should trust us or, from the more progressive thinkers, that we need to earn the trust of the business. Um, perhaps. But imagine you were the person prioritizing your own list. Even if you trusted your own motives perfectly, how would you prioritize an item with unknown cost and unknown benefit amongst a list of items with known costs and benefits? And that's assuming perfect trust; I'm skeptical of our discipline to admit that substandard, icky code can be good enough. Developers (the good ones, anyway) have a strong desire for elegance and simplicity.

The second counter-approach I hear is timeboxing: allocate a fixed amount of time to refactoring and only spend that much time. That fills in a value in the "cost" column, but it does not address the undefined benefit. Also, can you think of a scenario where you've undertaken a significant refactoring, the clock ticks over, but you're not yet in a stable state? You either roll back the changes (and realize zero benefit) or muscle through (increasing the cost). For the timeboxing approach to honestly reflect your cost, you must work with the discipline to make many small, beneficial changes, and stop when the clock runs out. Are you up for that?

Speaking of size, be sure to consider the testing cost. Technical debt accumulates in the areas folks are afraid to change. Those are usually business-important, application-spanning areas. Your refactoring will likely touch many features in the application, warranting thorough regression testing. How good is your automated test suite? Be responsible about how much testing you'll need, and be honest about how much time it will take.

Okay, agile developer community. If we're going to get technical debt work and refactoring into a sprint, we need to apply the same rigor that we ask from our product owners. Cut the work into a discrete unit. Write acceptance criteria so you'll know when you've finished. Make some kind of estimate of the benefit; talk with your teammates about how you might quantify such benefit. Using the acceptance criteria, estimate the complexity and size, same as if the work had been requested by your product owner. Observe and reflect on the predictive accuracy of your cost and benefit estimates to improve your estimates in the future. Start conservative to build experience and credibility over time.

Most product owners are happy to invest time in making the product more stable and making the team faster on future features. We need to give them estimated costs and estimated benefits so they can fit that work amongst other business priorities.

Organizing Ideas with Concept Maps

I love concept maps as a way of explaining a topic. (Here's an example, capturing what I learned at a KaizenConf session.) If you're a visual thinker, you'll definitely want to check this out. If you design UIs, this is also of interest.

The way I use concept maps most often is exploring and explaining a concept to myself. The act of drawing the map sorts my ideas visually, lets me hang new information off logical hooks, and gives me a picture to visualize when I want to recall the info later. (If you've talked with me about F# and watched where I gestured, you've seen that OO ended up on the left side of my map, and functional programming was right of center.)

CmapTools is a free software application that makes drawing maps intuitively easy—better than paper, because you can move concepts after you've fleshed out more of the landscape. And I have a strong preference for paper for brainstorming and thought-capturing, so that's saying something.

I like it not only as a tool, but also as an example of usability. It's so low-friction because you trigger actions (creating a concept, making a connection) right where you're already focused, in the work area, not in some menu at the top of the window. Click-and-drag from an existing concept to create a new concept and connect them; then type, click, type to enter the labels on the connection and the concept. I admire CmapTools for its non-noisy GUI—and of course I love it as a user because I can create a map fast enough to not lose the thread of my thoughts.

The example linked above is cool in another way: It's on a wiki where participants at KaizenConf are creating the conference proceedings. Once you go self-organizing, baby, you'll never look back.

Oh, I needed that.

So the thing about mind-expanding, discussion-rich conferences is they can leave you feeling a little overwhelmed with how much more you have yet to learn. At the end of the day, I wanted a cup of tea. The aphorism-inscribed flag on my teabag said:

"Keep up."

Sprint Heartbeat: Visual Task Tracking

Continuing to think about visually tracking the health of a sprint, I advocate tracking at the task level, rather than the story level. Recall that my goals for a successful solution include:
  • Radiate information.
  • Clearly communicate whether the sprint is on track and likely to conclude successfully.
  • Alert us to lurking risks, so that the team can react and adapt proactively.
  • Tell a story about the sprint from which we can learn during our retrospective.

Instead of moving user stories through phases (development, review, testing...), list their tasks. Tasks for a story could include testing and reviewing pieces, merging code into previous versions, meeting with a product owner to review the UI and clarify requirements, or convening a design session. You can add tasks to the list as they are discovered, which can signal that a story is ballooning or that a technical problem is thornier than originally estimated. If there are certain steps the team is trying to get more rigorous about (like writing unit tests before writing code), you can list them as explicit tasks until they become second nature.

Each day, estimate the number of hours remaining in each task. Not the hours spent, and not a comparison of estimates to actuals. This is not about "metricking estimation accuracy" or some other useless bean counting; it is about ascertaining, right now, today, how many hours' worth of work are left to do, and whether that work fits in the time we have left.

The hours remaining on a task might go up. (Ever peel back the wainscoting on a piece of code and find it's all termites underneath?) New tasks might get added during the sprint. The reason to track the hours remaining is so that you can adjust when you spot a potential problem—perhaps a team member is getting pulled away with other work, or is stuck in a mess of code that your expertise could alleviate, or maybe it's time to negotiate a trade-off with your product owner. To deal with any of these effectively, you need to know, if you'll pardon the vernacular: Where we at?

Sample agile task list, tracking hours remaining for each task

In the sample task list above, you can see that we're not only showing the hours remaining, but also tracking the movement of that number day after day. That allows us to make a graph, creating that visual heartbeat we're looking for.

Sample sprint burndown chart, comparing each day's hours remaining against an optimal trend line

At the end of the day (or completion of a task), update tomorrow’s column for the number of hours remaining for that task. If you finish a task on Day 4, then at the start of Day 5, there are zero hours remaining for that task. New tasks that crop up unexpectedly should show a zero until the day on which they were discovered, and then an hours-remaining estimate until finished.

The graph plots each day's sum of estimated hours remaining. It also shows an idealized burndown line, as if the team were completing [original hours estimate / number of days] worth of work each day. This makes the picture clearly communicate whether you're in fair weather or potential trouble.
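The arithmetic behind the graph can be sketched in a few lines (the task names and hours here are invented for illustration):

```python
# Hours remaining per task, re-estimated at the start of each sprint day.
# A task discovered mid-sprint shows zero until the day it appears.
tasks = {
    "Build edit screen":       [8, 8, 5, 3, 0],
    "Write repository tests":  [6, 4, 2, 0, 0],
    "Merge to release branch": [0, 0, 4, 2, 1],  # discovered on day 3
}
days = 5

# Actual burndown: each day's sum of estimated hours remaining.
actual = [sum(hours[d] for hours in tasks.values()) for d in range(days)]

# Idealized burndown: the original estimate, burned in equal daily chunks.
original = actual[0]
ideal = [original - original * d / (days - 1) for d in range(days)]

for day in range(days):
    print(f"Day {day + 1}: {actual[day]:2d}h remaining (ideal {ideal[day]:4.1f}h)")
```

Plot `actual` against `ideal` and you have the heartbeat: a line sagging below the ideal is fair weather, one bulging above it (as the discovered merge task causes here on day 3) is your early warning.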

If estimating hours remaining seems daunting, split the tasks into smaller pieces. A task should fit within a day, and the work you claim in the morning stand-up should fit within a day.

Time tracking is a sensitive topic, and suggesting it may not be well received by the team. Despite good intentions, many organizations use time accounting policies abusively. Superstitious beliefs, uncharacteristic vehemence, and irrational resistance are indicators of prior abuse. There is a tension between the two Agile tenets of transparency and self-organizing teams: I don’t want to hide data from you, but if I let you see the tools we’re using to track our progress, you’re liable to interfere, draw wrong conclusions, or base bonuses and promotions on them. These are not flaws in tracking sprint progress, though; they are deeper organizational issues that are worth correcting and rehabilitating so that the team can take advantage of a burndown chart's benefits.

A graph tracking your team's completion of work gives you a picture of the state of the sprint, making it easy to see if you're on track to succeed, and alerting you to struggles early enough to react.

Visible Sprint Status: Maybe Not Corkboards?

I'm thinking about ways to track the activity—the status... progress... stuff—that happens within a development sprint.

You could conceive a user story as moving through a series of phases (e.g., development, code review, testing, accepted). You might set up your Scrum task board using this paradigm, where you move an actual piece of paper between different columns on a corkboard. Some project-management software packages I've evaluated encourage a workflow of phases.

I wonder if there's a better way. At the user-story level, the idea of "passing through phases" seems Waterfall-ish, which always snags my attention because so few human endeavors are that linear. Phases lead you into some non-Agile patterns of thought, and they obfuscate the current health of the sprint.

Mike Cohn's task boards (with pictures!) divide into rows as well as columns. A row corresponds to a user story, and the tasks that compose that story move through the columns, which correspond to statuses. I'd previously missed (or forgotten) this distinction, and I thought the cards on the board were stories. I finally noticed the difference while drafting up this blog post, because I went looking for corroboration about the disadvantages of moving user stories through phases on your task board. What follows is an analysis of those disadvantages: non-Agile patterns and obfuscated sprint status.

By non-Agile patterns, I mean that columns on a board draw boundaries (literal and metaphorical) between collaborators. Do you intend to tell your testers that they can't think about a story until it lands in their column? Are you keeping your product owner from looking at a feature until all the code is built and blessed (and you've run out of time in the sprint)? Are you throwing code over the wall at people who might as well be in a different department?

"Heck no!" you cry. But doesn't a task board with indivisible user stories, sitting inertly in discrete statuses, imply that's what you're doing? Well, it could. And the metaphor subtly seeps into your thoughts and behavior, constraining your team's responsiveness, creativity, and collaboration.

How does a story-oriented task board obfuscate the status of your sprint? After all, it is tangible, visible, large-as-life where everybody can see it. But what is it telling you? Here are some challenges I've observed.

The pieces of paper representing user stories are the same physical size, but the effort required for different stories can vary dramatically. You glance at the board for a gut check, but you'll draw the wrong conclusion from lots of little stories (unnecessary worry) or few big stories (misplaced complacency). You could ameliorate this by making the pieces of paper represent component tasks instead of whole stories, but does a mere task move into the testing phase? Does your product owner evaluate a task? Can your customers use one? No, they need complete user stories. Once you've made the papers represent similar-sized chunks of work, the moving-through-phases metaphor gets in the way again.

Phases are liable to proliferate, until you want more columns than is practical for an actual corkboard. For any given phase, you might want to distinguish between "Ready to be picked up" and "In progress." Even if you're tracking virtually in a project-management tool, having too many phases is still annoying; you spend your time clicking dropdown lists instead of creating software. If you're in this mode, though, one solution for distinguishing between "ready" and "in-progress" is to write your name (or set the "Assigned To" field) on an item when you claim it.

Some items visit a phase more than once—like when a tester discovers a bug and gives the story back to a developer. (See that hand-off? That's the non-Agile-ness rearing its head again.) So when you see an item in the development column, is that its first trip through, meaning a lot of work remains, or a subsequent trip and it's nearly finished? You can't tell by looking at the board.

A communicative tracking tool would show you where bottlenecks or wasteful idleness is occurring. Using phases obfuscates this as well, because it implies that your testers aren't even looking at a user story until the coding is complete. (Can you call a piece of coding complete if it hasn't been tested or shown to the product owner? I'd say not. So read "complete" as a different word that represents "passed from the Development phase to the Testing phase.") But testers at the beginning of a sprint are creating test plans and collaborating with developers and product owners on the acceptance criteria; their "Testing" column looks empty, but they are in fact very busy with useful work. Someone outside the team might look at the board, conclude that the testers are idle, and be tempted to distract them with other tasks.

What a list. It's strange to me to write it. At my old job, my team had to have a virtual task list so that we could share it across continents. I was always jealous of the kids who got to have real live note cards on real live corkboards. Oh, how wistful I was, to have such an elegantly simple solution.

But now that I've used a task board, I can see its limitations, and I wonder about alternatives. I still want to cling to the belief that a tangible solution is better than a software tool, but I might have to let that go. Software is much better, for example, at turning points of information into a communicative picture. It is also better at searching, archiving, sharing with out-of-town stakeholders, and scaling.

The right solution will radiate information. It will clearly communicate whether the sprint is on track and likely to conclude successfully. It will alert you to lurking risks, so that the team can react and adapt proactively. It will tell a story about the sprint from which you can learn during your retrospective. These are the requirements by which I'm evaluating tracking techniques. The right method will show the heartbeat of the sprint.

You're too kind. Tip yer waitress.

There's a "Pho King Coming Soon" sign near my house, which means there will soon be another Pho King restaurant. I wonder what their Pho King food will be like. I hope they have good Pho King service, and a nice Pho King ambiance.

Oh, why get excited? It's just Pho King noodles.

Yes, Jon and I can amuse ourselves for miles with this gag.

Market Research

Your opinion, please: What are the qualities of a conference that attracts a sparkling diversity of attendees?

I'd like to gather our collective observations and apply them to increasing the variety of ideas that are brought to our conferences and the variety of people we reach with our ideas. My first practical applications will be software-oriented conferences, but I think the best suggestions may come from non-software events. Think broadly; where have you been amongst a bunch of people who differ from you, where you benefited, learned, and taught?

The question usually comes to me regarding women, but I'm interested in other kinds of diversity, too:
  • interaction style
  • culture
  • experience level
  • type of experience
  • project methodology preference
  • age
  • outlook
  • interests
  • worries
  • ...
So please, think about gatherings you've attended where a lot of different people harmoniously shared and challenged ideas. What gave those gatherings the opportunity to bring those people together? Once they were there, what made the space safe for them to contribute, to speak up, to risk being different?

If you don't mind, ask this question around. I'm looking for a bunch of contributions that are, well, diverse.

Workstation Hack: Visual Studio on 2 monitors

Having two monitors is an obvious improvement to any development workstation, but are there tricks for really effectively using all that extra acreage? I learned a great strategy from a co-worker for a two-monitor setup for Microsoft Visual Studio.

Hack Summary
Maximize the editor on one screen, and pull out all the floating windows (Solution Explorer, Error List, etc.) onto the second screen—which is great until you want to look at your database or your wiki or your work-in-progress app while also typing code. (If you're typing in the editor, Visual Studio is in front, which means all those little windows are in front of any other app you want to look at while coding.) The solution is to set up some shortcut keys that switch you from "Sprawl across two screens" mode to "Pull everything into one screen, dock the little windows, and make the second monitor available for other stuff" mode, and back.

Export Two Sets of Window Settings
You can save Visual Studio settings (such as window layout) to a file, and later import that settings file. For this hack, we'll export two settings files (one for two-screen, sprawl mode; one for one-screen, compact mode). Then, we can switch from one mode to the other by importing the appropriate settings file.

Get your windows how you like them in one-screen mode. For example, my Solution Explorer is a docked pane on the right, unpinned so that it auto-hides; and so on, for the handy windows I want at my fingertips. So, set those up.

Go to Tools > Import and Export Settings... > select the radio button for Export > click Next > uncheck All Settings > expand General Settings > check Window Layouts > click Next. Name the file (e.g., OneScreen.vssettings) and give it a location. Click Finish.

Now set up your windows in two-screen mode. Unhide a window by hovering over it, pin it, right-click on its title bar, and pick Floating. Drag it where you want. This is the artistic phase.

As above, export those settings, with a different name (e.g., TwoScreens.vssettings).

Create Two Switcher Macros
Add a macro module in the Macros IDE (Tools > Macros > Macros IDE), and create two macros, one to import each settings file. Here's my module, called WindowsSettingsSwitcher, which gets my settings files from C:\VSSettings\.

Imports System.IO
Imports EnvDTE
Imports EnvDTE80
Imports System.Diagnostics

Public Module WindowsSettingsSwitcher
    ' Import the saved two-monitor layout (floating windows on the second screen).
    Public Sub SwitchToWinLayoutTwoScreens()
        DTE.ExecuteCommand("Tools.ImportandExportSettings", "-import:C:\VSSettings\TwoScreens.vssettings")
    End Sub

    ' Import the saved one-monitor layout (everything docked on one screen).
    Public Sub SwitchToWinLayoutOneScreen()
        DTE.ExecuteCommand("Tools.ImportandExportSettings", "-import:C:\VSSettings\OneScreen.vssettings")
    End Sub
End Module

You can test the macros from the Macro Explorer by double-clicking their names.

Assign Shortcut Keys to the Macros
Create some shortcuts that execute those macros. Tools > Options > expand Environment > click Keyboard. In the "Show commands containing" box, enter some text that will find your macros, e.g., "switchtowin" if you used my names above. Highlight a macro, put your cursor in the "Press shortcut keys" box, and type your shortcut. (Mine are Ctrl+Alt+1 for one-screen, and Ctrl+Alt+2 for two-screen, since those weren't in use by anything else.) I left the "Use new shortcut in" set to "Global." Click OK.

At this point, you can use your shortcut keys to switch modes. I added them to my View menu, too, mostly to give me a visual reminder of the shortcuts.

Add Switcher Commands to the View Menu
Tools > Customize > Commands tab > select the "Macros" category. Click and drag your macro from the Commands list in the dialog up to your toolbar. When you hover over View, the View menu will appear, showing the insertion point for your new command. Drop it in place. Right-click on the command in its new location, click on the Name box, and edit that down to something helpful (e.g., "OneScreen"). Repeat to add the other macro. Close the Customize dialog.

With this hack, you can keep lots of useful Visual Studio bits visible (liberating you from relying on the mouse to hover over auto-hiding windows, and maximizing your editor window), but stow them away quickly when you need the second screen for something else.

So how 'bout you? What are your workstation hacks?

Change your organization, or...

So here's the thing about being in a place that makes you depressed: You're too depressed to make the changes necessary to get out of the place. It can seem insurmountable. In eerily coincidental-sounding but utterly unrelated news (*cough*), I have a new job! You will certainly already know this if you've had occasion to see me bouncing off the walls lately. While some of my friends are content in their jobs, many are not, so I'll share my self-help program, in the hopes that it might console and inspire those who are wishing for a change.

I had been in the old job for 8 years, so even thinking about a change was scary. Here's what I did.

You deserve a job that makes you happy.

Really, you do. You probably have a list of reasons why you need to stay at the place you're at, but look critically at those reasons. Put yourself into a five-year-old's mindset and ask "Why? …Why? …Why?" about each one. The reasons on my list, when I started doing some fact-checking out in the real world, turned out to be myths.

Throw rocks at that old adage, "…that's why they call it 'work.'" Pfft, whatever. If you're reading my blog, I'll assume you're a knowledge worker, and probably a programmer, and someone who likes to think about new ways of doing things. Lucky us, there are many lucrative careers for people who like to think. For me, writing software is like play—heck, it is something I do for play—so I didn't need to find a different line of work, just a different job that let me write software. You don't need to suffer to put bread on your table.

Get out there, walk amongst the people. If you're really sunk in the doldrums, this is probably the most tempting one to blow off, but here are some benefits: meet people with similar interests and challenges; dispel myths about the job market; find inspiration, in people who like their jobs, in people with passion; meet job prospects; become known for your ideas and contributions.

Networking and professional venues I find helpful: door64, AgileAustin, AgileATX, GeekAustin, Austin .NET User Group, classes, and my blog. Yeah, yeah, and LinkedIn.

Define the goal.
I developed a really clear list about what I wanted in an employer. Honed it, you might say. Took it out during meetings and quietly polished it. But this was helpful, because it gave me a concise list of interview questions to ask of my potential employers. The ultimate filter question, the one at the top of the list that immediately let me know whether it was even worth continuing, goes like this: "Do you like your job?"

Look for a yes. There are a ton of answers that are not yeses. I know, because I had years of delivering them myself. "It presents me with a lot of challenges. I get to work with people from all over the world, and I'm part of a great team." Yeah, but that wasn't a yes. I want to work in a place where it's possible to like your job.

Keep it positive.
No matter how bad it is, if you come off as a complainer, you will sour job prospects. Be diplomatic, and steer the conversation to what you're working towards, what you look for in the future.

Seek support.
To keep the gushing to a minimum, I'll simply say: My husband rocks. You've probably got someone, too, whether a sweetie or a best friend, a pen pal or your mom or your dog, someone who knows that you're awesome. This is a good time to trust that person (or dog), to let him or her know that you're embarking on something intimidating and you'd like encouragement through the journey. It helps to have a belayer.

Believe in yourself.
At my lowest, I believed that I had no useful, marketable skills; all I was good at was politicking my way through my employer's byzantine bureaucracy. I had to break out of that if I was going to even begin the process of moving elsewhere. (Or of getting a promotion, for that matter.) I got a big sheet of newsprint and a box of goofy-big crayons, I went into a room by myself and closed the door, and I brainstormed. Crazy, wild, unrelated ideas about what I do, what I'm capable of, and what I am that is awesome. I wrote a lot of things on that paper, and many of them showed up on the next draft of my résumé.

And then I thought about what I wanted my résumé to look like. Or, more specifically, what skills I would need in order to get the kind of job I would enjoy. I made a curriculum: books to read, topics to research, and projects to implement. Tuesday is homework night, for writing code. Here's the trick to making this palatable: Every time I worked on a curriculum item, I complimented myself on taking another step towards that new job. When you're in a hole, the very act of climbing up makes you see the sky. That's where I'm going, and I'm making progress towards it. Yay, me.

So there's an overview of how I improved my lot in life. The hardest part was overcoming inertia and getting started. Well, that, and persevering through to the goal. ;-) If you're happy where you are, don't forget to look around and appreciate that from time to time. Every job has its annoyances and challenges, I imagine, but it is possible to find one where you feel fulfilled and appreciated, where your skills are useful, and where the challenges are fun. You deserve a job you enjoy.

Lurking Under that Resistance

Some of the topics at the AgileAustin Open Space revolved around a theme. Is Agile appropriate for critical systems (medical, life-supporting software)? Can we use Agile when we have a fixed timeline and a finite budget? Will Agile work when our software supports an always-on manufacturing floor? What about when developing strategic software, breaking new business ground? Are there times when Agile is not appropriate?

I believe the conveners are primarily asking these questions as proxies. They're at the conference because they believe in Agile, but they've been asked these questions by stakeholders they need to convince. So they're looking for help in crafting their arguments.

In my experience, resistance to Agile derives from an underlying fear. That fear tends to take some mix of two forms: "I've invested my career in getting good at one way of doing things, and you want to make me obsolete," and "My rear is on the line for a lot of money. I'm not interested in risking my rear while you experiment with your touchy-feely methodology. Just deliver."

When trying to convince someone, you'll be most effective if you first suss out his concerns and then pitch your argument to address them.

For the fear of obsolescence, convey that, while some ways of doing things may no longer be needed, the person is still valued, and he has skills and experiences that will help the team make the transition. Give that person a clear role, an obvious place of value, and his fears and therefore resistance should relax.

To those who are inherently change-averse, you can still work to improve the feeling of safety, to make the change more palatable. There are some folks, however, whom Agile doesn't suit. Let them find jobs as SOX auditors or something.

For those who seek assurances that this crazy experiment will deliver, on time and on budget, I take two tacks. I show them burndown charts, and explain how you can clearly see, "Does this trend line look like it's going to hit the finish line when you want it to?" Burndowns are very communicative graphics; they're a great tool. Second, I ask permission to try it for a month. Just, give me two sprints. Worst case, you've lost a month, which you would have spent writing half of a Business Requirements Document. Best case, you might have a few high-value, ready-to-ship features. That's a pretty good risk-to-return ratio.

Understand what fears are hiding behind their resistance. Address those fears (usually without making direct mention of them; fear tends to turn defensive if you point out a perceived weakness). Talk in their language, using their own levers, to present your case (e.g., talk numbers to a finance person). Finally, ask for something reasonable: "You don't have to do this forever, just let us try it for a bit, and then you can decide whether to continue or adjust."

Agile Open Space: On Certifications

The AgileAustin Open Space has kicked off. I found the opening session positive, engaging, really fun to be there. I'm chuffed because I felt confident enough to suggest (convene) a few sessions. One responsibility of a convener is to capture notes from the session. This blog post is meeting that responsibility.

Y'see, one of the sessions I proposed was entitled, "Certified ScrumMaster??" I wanted to gauge people's opinions of the CSM certification: Is it valuable, is it real, or is it just résumé-padding fluff?

I got my answer. The groans, grumbles, and rolled eyes around the room confirmed my suspicions. In this crowd at least, it is worse than irrelevant. It is counter to and detrimental to the philosophy of the Agile community.

After the proceedings broke into less formal conversations, I caught up with some community members whom I respect and enjoy. They elaborated on their earlier non-verbal remarks. It is not difficult to get a CSM; you attend a two-day class. That's it. That sounds like just enough knowledge to be really dangerous.

Why do we even need certifications? What does a certificate indicate about my real skills, abilities, and past experience? No, I inherently reject any model that sets up a gate-keeper-style hierarchy to knowledge—a system that says, "We know things; you don't. Your ideas and perspectives are not as good as ours until we bless you and permit you to be one of us (and your check clears)." I don't accept religions that do this, nor governments, nor software project methodologies, for Pete's sake.

The true flaw in the CSM is the name: Certified ScrumMaster. Go to a hiring manager and ask which she'd rather have, someone with 3 years' experience as a Scrum team member, or a Certified Scrum Master [trumpet fanfare]. Those in the know, know that ScrumMaster is the role; you hear it without the space. But to those who are not yet well versed in Scrum, it sounds like Mastery of the Scrum process; they inject a space between the words.

The Training page on the Scrum Alliance website says it plain: "The journey to mastery begins with..." and "These courses [CSM and CSPO] provide a solid foundation to help you make the paradigm shift to managing a project using Scrum." [emphasis added] They state straight up that this is the starting point. But the name of the certification doesn't say that. The opportunity for misinterpretation will get people into trouble.

Why does it get my dander up? Personally, because it threatens to be Another Damn Thing I gotta do to stay in the game. Professionally, because agile projects can be magnificent, and certifications smack of the process-for-process'-sake mindset that turns software development into a tedium. Philosophically, because neophytes will incorrectly elevate the merit of opinions from a Certified ScrumMaster, no matter how little experience he may have, and dilute and muddle the tenets of Agile.

Mike Cohn the other night joked about the CSM culminating in a tattoo. I don't know, man, I might put more stock in that, if the tattoo embodied the Agile Manifesto. (Embodied—ha!) It would at least convey the right level of commitment.

Simplest Responsible Thing

Someone on Saturday asked, rhetorically: how do you reconcile the agile philosophy of "do the simplest thing possible" with good OO design principles? That apparent conflict stems from misquoting the first part. The more useful version is:
Do the simplest responsible thing that works.

That is, responsible to highly probable future features; responsible to your known performance demands; responsible to your business's security and reliability requirements.

Above all, responsible to your future teammates (and your Future Self) who will have to support and maintain that code. Happily, this last responsibility is nicely supported by the emphasis on "simplest."

Note also that it is "simplest," not "easiest." Your old familiar habits will be easiest, but if those habits were formed in a different paradigm—say... old ASP that you learned by osmosis (Hey, we all had to start somewhere.)—then it will take some dedicated study to turn the responsible thing, the simplest thing, into something you can do easily. Meanwhile, the state of the art in Simple will keep moving, but that just gives us something to keep striving towards. That's what keeps it interesting, right?

Wrangling Rhinoceri

I've been asked to help some teammates become more comfortable with Rhino.Mocks. I'll practice my explanations here, and y'all can offer guidance and ask clarifying questions if you're so inspired. I'll be glad for the help.

Rhino-what? What-mocks? What-what?
The domain we're discussing is unit testing: the code that developers write to verify the logic of their production code. Rhino.Mocks is not a testing framework but a mocking framework, which means it allows you to mock (simulate) parts of your application. You use it along with a testing framework like NUnit or MbUnit.

If you have written code without keeping an eye on testability, it is likely to be witheringly difficult to apply Rhino.Mocks after the fact. Life is simpler if you write the tests first, so Rhino.Mocks plays well with Test-Driven Development, which drives you to design your code in a testable way.

As far as what it is, Rhino.Mocks is a DLL you include with your test project. You add a reference to it from the project, and it gives you some additional classes and methods you can use within your unit tests.

State-based and interaction-based tests
There are two main styles of unit tests, state-based and interaction-based, and each is useful for different things. With state-based, you set up some variables, run them through the methods you want to test, and then verify that their state has become what you expect. You use Assert statements to assert, "The state should be [this]. If not, fail the test." If you're testing an addition method, you can verify the state of the answer, as in, "I assert that 2 plus 2 should be 4."
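To make the state-based style concrete, here is a minimal sketch using NUnit; the Calculator class is hypothetical, invented just for this example.

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Adding_two_and_two_gives_four()
    {
        // Arrange: set up the object under test (Calculator is hypothetical).
        var calculator = new Calculator();

        // Act: run the method we want to test.
        int sum = calculator.Add(2, 2);

        // Assert: "The state should be 4. If not, fail the test."
        Assert.AreEqual(4, sum);
    }
}
```

The same Arrange/Act/Assert shape applies no matter which testing framework you pick.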

Rhino.Mocks is used in interaction-based tests. Here you are verifying the interaction between your classes. For example, in a Model-View-Controller architecture, you could test your Controller's interactions with the Model and the View, as in, "I want to verify that, when the Controller is asked to display customers, it gets customers out of the data repository, runs the results through my CustomerFormatter class (I don't know, it's just an example), and sends them to be displayed on the user interface."

You have three interactions there, so you'll probably write three tests. You are not testing what comes out of the data repository, just that the Controller talked to it. In fact, you don't even need a real data repository for this test. If you could just simulate that repository in a way that let you verify it was called... This is what Rhino.Mocks gives you.
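As a sketch of what that looks like in code: Rhino.Mocks' Arrange/Act/Assert syntax (available as of version 3.5) lets you generate a simulated repository and then verify that the Controller called it. The ICustomerRepository and CustomerController names are invented for this example.

```csharp
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class CustomerControllerTests
{
    [Test]
    public void Displaying_customers_queries_the_repository()
    {
        // Arrange: simulate the repository; no real database required.
        var repository = MockRepository.GenerateMock<ICustomerRepository>();
        var controller = new CustomerController(repository);

        // Act: exercise the behavior under test.
        controller.DisplayCustomers();

        // Assert: verify the interaction, not the data.
        repository.AssertWasCalled(r => r.GetCustomers());
    }
}
```

Notice the test never inspects any customer data; it only asserts that the conversation between the Controller and the repository took place.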

What next?
Martin Fowler has written a great article, Mocks Aren't Stubs. If this is your first introduction to Rhino.Mocks and mocking frameworks, perhaps not everything in that article will click with you. It is still worth reading now, and I'll link to it again in my next post. It wouldn't hurt to read Martin's article twice.

My next post will include why you would want to simulate classes instead of using the real thing. I also want to explain why you even want interaction-based tests. I have an intuitive sense of the value, but I am not yet good at articulating it. Feel free to chime in.

Globally distributed scrum

I'm on record as saying that globally distributed scrum teams can work, but sub-optimally. I'd like to retract and revise that position, so that something I'd said doesn't inadvertently lead a team down a poor path.

My team's scrum adventure lasted from March 2007 to April 2008. (Why does it have an end date? Buy me a beer, and we'll talk. But none of the people who were a part of or were affected by our efforts wanted it to end. The project was a success.) We started out with four developers in Austin, six developers in Bangalore, a Test Lead in Austin, and five testers in Hyderabad. I'll call this the Two Parallel Teams period. Team members were moved over time, so that we became two developers in Austin, eight developers in Bangalore, a Test Lead in Austin, and three testers in Hyderabad. This is the One Distributed Team period.

We had a good collection of team members, and I think we could have worked together very effectively, had the distance between our desks not been so great. I am not criticizing off-shoring; I am criticizing scattering your product owners, developers, and testers across different continents. Note who I included in that list there; if your dev team is co-located, but your product owner is far away, you have a distributed team.

We did not have a choice about where our teammates should be located, but we had influence over which project methodology we would use. So the choice was not "distributed scrum versus co-located scrum," but instead "distributed scrum versus distributed waterfall."

Do I still think distributed scrum is better than distributed waterfall? Yes, although with less unbridled enthusiasm. It required more meetings from all team members (including product owners) at crazy hours, but it still made it easier for the product owners to get the system they need; it still had us writing more code, more often, which is way more fun than fighting over contracts; and it still enabled productive, visible progress toward our goals.

We were happiest in our Two Parallel Teams mode. The developers in one city worked on one feature, and likewise for the other city. The only people who really suffered here were the product owners and the scrum master, who had to accommodate a ten-and-a-half-hour time difference to have their sprint planning meetings.

We offset our two-week sprints by one week, so that the testers could be shared between the two sets of developers. It worked okay because it did not require the "save state and hand off" process inherent in distributed teams. That process is, at the end of the day, you write an email explaining what you did, where your thoughts are about the work-in-progress, what you want your teammates to pick up, and what you want them to leave alone because it's currently too fragile to explain or share. It's like "hibernating" in Windows--save a snapshot of where you're at--and it takes time.

The more distributed the team, and the more they need to collaborate, then the more time will be spent handing off, and the team will be less efficient. In our One Distributed Team mode, the hand-off process overwhelmed the development process, so that more time was spent saying what you're doing than doing. This is the dangerous mode I want to warn people away from. If multiple people are collaborating on a feature, they have to be able to talk to each other, in person, real time.

There are still dangers lurking in the Two Parallel Teams mode. The most insidious is the way it undermines egalitarian, democratic, self-organizing decision making. For a team to decide things like designs, system architecture, even working practices, they have to be able to discuss and debate as a team. This requires trust, camaraderie, and personal safety. Those things are built in informal, day-to-day interactions--y'know, the way friendships are built. Over email and conference calls, they happen incredibly slowly, if at all.

You have to be comfortable with someone to be able to argue with him. The threads stitching together a global team are tenuous, and we operate with great delicacy to avoid snapping them. The communication channel is so anemic, you have to choose every word carefully, to ensure it does not accidentally offend. This is the opposite of having the confidence to know that, after we argue about this topic, we'll still be a team. Which means a lot of decisions go un-debated; they get made by default instead of by consensus.

Do I recommend globally distributed teams? No. Regardless of the project methodology, and no matter how talented the team members are, you will be dramatically less efficient if the members of your team are scattered over the planet. If you're off-shoring, then off-shore--put developers, testers, product owners, project managers, the whole team in one place. Software development is intrinsically collaborative. Structure your teams so that they may collaborate.

This experience gave me a taste, however, of how fun scrum can be. With a co-located team, you could seriously rock and roll.

Software Development as Manufacturing Metaphor

Alistair Cockburn's talk the other night was inspiring, thought-provoking, and entertaining. If you have the opportunity to hear him speak, I highly recommend it.

I have been wary of the agile community's current enthusiasm for the Lean literature, as applied to software development. I'm hesitant to trust "manufacturing" as a metaphor for "software development," because I work for a large manufacturing firm. My executive management believe they are experts in efficient manufacturing, and therefore apply those lessons to managing software teams. The agile-to-lean trend gives them permission to make this leap.

The problem comes in attending to the wrong part of the metaphor. Incorrectly applied, it overlooks the fact that developers are knowledge workers, and it leads you to believe they are interchangeable laborers who should each specialize in one small task, and who are easily outsourced, among other things.

But Mr. Cockburn made me realize the right part of the metaphor to apply, the useful part. He said, substitute "unverified decisions" for "inventory," and now you can map your team's process, look for bottlenecks, and improve throughput. When unverified decisions pile up, work is not getting done.

Mr. Cockburn said that different types of bottlenecks mean you need different solutions. He also flouts the strictest adherence to Lean, which would have you remove all waste, and instead suggests spending efficiency to increase throughput. You have too many developers and too few db designers? Let the devs iterate through a bunch of designs and code until they have it really baked, before handing it over to the db designer. Too many analysts and not enough devs? Get the requirements solid, signed off, and thoroughly detailed before giving them to the devs. Too few product owners to tell you which route to build? Build both and let them choose.

Spend your excess efficiency in order to increase the speed of the overall team.

Back to my beef with the manufacturing metaphor. Incorrectly applied, it lets you think waterfall software development makes sense: They give us the specs for a computer, we build it, it gets tested. But the scale is wrong. What a waterfall project actually does is give you the entire order for the DoD's workstation upgrade initiative, and you build 1000 computers before any of them get tested. (A huge backlog of unverified decisions.) Rapid iterations bring you back to building one computer at a time. And Scrum makes the deciding and the verifying nigh simultaneous...
Developer: Purple or green?
Product Owner: Purple.
Developer: 2gigs or 4?
Product Owner: Ooh, 4!
Tester: Oops, hold it, you actually grabbed the 2.
Developer: Whew, good catch.

What I learned from Mr. Cockburn is that the Lean philosophy holds useful lessons for the agile practitioner. Drawing an analogy between writing software and installing chips on a motherboard does not help you, but thinking of your software team as a process comprising different roles, each with their own decisions to verify, can help you increase your team's productivity.

Computer as Brain Metaphor

I have been splitting my reading lately between finite automata and metaphors that shape thought and action. My husband Jonathan and I have an on-going debate about the appropriateness of using the computer as a metaphor for the human brain. I find it useful for describing all sorts of events; he feels it undervalues the capabilities and elegance of the brain.

I probably don't need to pitch my side of the argument to y'all. You've likely said things like...
  • I wish I could install more RAM.
  • Dude, that guy totally overclocked his processor.
  • I can only hold x things in memory.
  • Core dump!

But it bothers Jon, and he takes me to task whenever I say something that betrays my underlying metaphor. I've been trying to understand why we have different emotional stances on this, and I've hit on the following. As a programmer, I think of my computer as the outlet of my creativity, the tool by which I express my craft. A user tends to think of his computer as an annoying appliance that would give him access to all this great stuff (entertainment, social contact, information, pr0n), if it would just freakin' work. You've seen how users treat their laptops: bang, pound, whack! Given how Jon and I view our brains (as valuable, precision tools), I can see why he does not want to think of his brain the way a user thinks of his computer.

Beyond that, I see Jon's side: This metaphor leads you to think of intelligence in terms of processor speed or amount of RAM, which leads you to believe you can quantitatively compare the intelligence of one human to another, and that leads to a very narrow definition of human intelligence. There is more to intelligence than number of instructions per second. To satisfy me, a metaphor for intelligence has to accommodate empathy and personality and intuition. From these spring creativity, and society, and humanity.

At least, that's what the wetware wants me to believe.

Runs in the Family

I'm a language geek, but I try not to be a nit about it. I like the subtle nuances in choosing the "right" word for a situation, but not at the expense of connecting and communicating with real human beings. However, sometimes a small change in word choice can have a large impact on clarity.

In JP's Nothin' But .NET class last week, we had an object called "criteria." (He was creating a DSL for generating SQL statements. Neat stuff.) My brain kept tripping over the example, until I realized, "Oh! Your criteria is a single thing, and I'm expecting it to be a collection. Ah ha!" So I suggested a name-change to "criterion," not to be pedantic (I swear), but just to improve clarity.

A little time passed, and then a student raised the objection I was expecting. "Well, actually, Merriam-Webster's says..." I know this objection well; I've explored it, and I've read Merriam-Webster's editorial philosophy (Did you know your dictionary has an Introduction?). M-W has a descriptive focus, not prescriptive; their intent is to capture and document language usage in the wild. It's a good resource, but not when you want to know the "proper" use of a word.

Knowing this did not improve my image in the class.

Ah, well, I gotta be me. Teasing me, one student said, "Hey, you know so much about dictionaries. Do you have strong opinions on hash tables?" No... that's my dad!

NAnt and LINQ - namespace error

"The type or namespace name 'Linq' does not exist in the namespace 'System' (are you missing an assembly reference?)"

I was getting this error when using a NAnt script to build a project I created in Visual Studio 2008. In my case, it turned out to be related to which .NET Framework version NAnt was targeting.

Aside: If you are not using NAnt, and you're getting this error when you try to build, then make sure your project has a reference to the System.Core assembly. But if you use the defaults in Visual Studio 2008, then you already have this reference. My project would build through Visual Studio, but not through my NAnt script.

Here's the cause. I thought I should use the latest stable release, instead of the latest beta, so I downloaded NAnt 0.85. It supports multiple frameworks, and by default it targets the framework in use. You can point it to another using the -t command-line argument.

But NAnt 0.85 is aware of frameworks only up to 2.0. For LINQ, you need 3.5, and therefore you need NAnt 0.86 (in beta at the time of this writing). I kept 0.85 just in case, so my NAnt folder contains a folder for each version; they co-exist. I just changed the path in my nant.bat file to point it to 0.86-beta. And voilà.
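If you would rather keep 0.85 as your default, you can instead select the framework per invocation with the -t switch mentioned above. A sketch (the build-file name and framework identifier here are from my setup; run nant -help to see which identifiers your install supports):

```shell
# Tell NAnt 0.86 to compile against .NET 3.5 instead of the default framework.
nant -t:net-3.5 -buildfile:myproject.build
```

Either way, the fix amounts to making sure the NAnt version you run actually knows about the 3.5 framework.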

Hope this helps. Yay, automated builds!

Pretty code: Skeptical of elses

Pretty code is readable code. One strategy for code beautification is to look critically at every if/else statement. Is there a more streamlined way to write that statement? Cultivate a general mistrust of elses.

Some examples...

Testing a boolean to return a boolean

if (x == true)
  return true;
return false;

becomes simply

return x;

If you need to swap those,

return !x;

That's a pattern. Recall that comparison operators (less than, greater than, equal to) return a boolean. So this also applies here:

if (myPropertyToCheck == someValue)
  return true;
return false;

becomes

return myPropertyToCheck == someValue;

Guard clauses
You can use a guard clause when you are testing if it is safe to do something, and if not, you want to exit.

if (safeToDoThis)
{
  DoThis();
}

becomes

if (!safeToDoThis) return;
DoThis();

When the DoThis() is rather involved, guard clauses greatly improve readability for your future teammates (even when that's you). Step 1: check whether we're in an okay state, and if not, just get out of there. This saves you from the Hunt The Else game (although, if that matching else is that hard to find, you could break up your method into smaller pieces with more specific responsibilities).

Compare() and CompareTo() methods need to return a negative number, a positive number, or zero (conventionally -1, 1, and 0). It's handy that these are integers, because now you can harness the Power of Arithmetic to do your bidding. Also, when you are comparing your own classes, you often want to compare a property that is a value type or a string. Those already have their own CompareTo() methods, which you can borrow.

Say you want to sort your children by their ages. You don't need

if (child1.age < child2.age)
  return -1;
else if...

(Not just an if/else, but an if/elseif/else. Aaah!) Use instead:

return child1.Age.CompareTo(child2.Age);

If you want to sort them from oldest to youngest, this is where the arithmetic comes in. To flip a negative 1 into a 1, multiply it by negative 1. Same for flipping positive 1 into a negative 1. And zero, conveniently, doesn't mind being multiplied by anything. So you could say:

return -1 * child1.Age.CompareTo(child2.Age);

You can make it even simpler. Because -(-1) = 1, and -(1) = -1, your Compare() method becomes:

return -child1.Age.CompareTo(child2.Age);

Not only no elses, but also no ifs! Very pretty.
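Putting the pattern to work, here's a minimal sketch; the Child class and the sample data are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

public class Child
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class Program
{
    public static void Main()
    {
        var children = new List<Child>
        {
            new Child { Name = "Ann", Age = 7 },
            new Child { Name = "Ben", Age = 12 },
            new Child { Name = "Cal", Age = 9 }
        };

        // Oldest to youngest: negate the natural (ascending) comparison.
        children.Sort((child1, child2) => -child1.Age.CompareTo(child2.Age));

        // Prints: Ben, Cal, Ann
        Console.WriteLine(string.Join(", ", children.ConvertAll(c => c.Name).ToArray()));
    }
}
```

One comparison expression, no ifs, no elses, and the sort intent (descending by age) reads in a single line.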

Reducing the number of paths through your code (i.e., reducing cyclomatic complexity) gets it closer to being read like prose. Simpler code has fewer bugs, and your successors who have to read your code will think you are smart and good looking. Keep a healthy mistrust of else statements, and write pretty code.

5 Qualities that Make Social Software Social

I have single friends who try to meet people by going to bars. But the only thing you have in common with people you meet this way is that they are trying to meet people. Contrast this with taking a class, joining a club, or volunteering for a charity, where you meet people with a common interest and compatible outlook. You're much more likely to hit it off.

Social software intrigues me, as it ushers us further down our path to becoming integrated cyborgs. What creates self-sustaining communities? What causes some corners of digital life to reach critical mass and become essential social outlets, while others wither and wander off? What makes these sites successful?

MySpace and their knockoffs strike me as attempts to meet people in a bar (complete with the assault of unwelcome music when you walk in the door). Flickr (photos) and Folia (gardening) are clubs for people with similar interests. I have fun hanging out in these clubs and enjoy the people I meet there. The first community-builder is: Pull together people with a common interest.

Closely related but slightly different is that hobby-oriented sites give people something to talk about besides themselves. For a while I used LiveJournal to keep in touch with my friends, until I got completely fed up with it. I like people better when I am not privy to their every insecure and narcissistic thought (and they shouldn't be subjected to mine, either). But when I keep up with my friends via Flickr, I see their projects, their trips, their outings and their adventures. Those are excellent conversation-starters. The second community-builder is: Plant conversation seeds, something to talk about or argue about that is outside the realm of psychotherapy.

Something that delighted me about Flickr from the first time I used it is the human-oriented, whimsical language displayed by the software. The site greeted me in a different language each time I logged in—how silly! When they brought their new messaging system online and I received my first message, I said, "Hey! A message!" I clicked on it, and Flickr displayed on the screen, "Hey! A message!" That this "photo management system" would talk to me like a goofy, light-hearted friend charmed me utterly. The third community-builder is: Set the tone; make it a fun place to hang out.

If you've played with Flickr or YouTube or LibraryThing, you've had occasions where you go to look up one little thing, which causes you to stumble onto another thing, which leads you to another thing, and then you look at the clock and find that hours have elapsed, making you wonder if you were abducted by aliens who then wiped your memory. By allowing you to stumble upon content, following tangentially related linkages, these sites invite you to explore. The exploration engages your curiosity and keeps you there for hours. Unexpected discoveries create a feeling of delight. The fourth community-builder is: Enable serendipity.

I have been puzzling over my own site, Invisible City, for years. My husband has posted a free print-and-play board game every month since 2000. We get an encouragingly consistent number of hits. The site has areas for visitors to comment, and yet... hardly anyone does. It's like performing a show every night to a sold-out crowd who never claps. The site is definitely missing something.

Looking again for lessons from Flickr, Folia, and YouTube, I have a theory: Community members need to establish their own identities and create their own contributions. In other words, they need a profile page and a place to post their stuff. Invisible City is more like a gallery than a community, because people can come comment on our content, but they can't display their own. Galleries provide a useful service, so there isn't necessarily anything wrong with Invisible City's format, but it will never become a hip hang-out unless it changes. The fifth community-builder is: Give members the means to showcase their distinct identities.

Meat-market social networking sites aimed at helping singles hook up will always be transitory, passing in and out of popularity like fads. A community-oriented site can grow and blossom into a self-sustaining organism with staying power. Looking at my own habits and preferences as a user, I observe that the following five facets facilitate the formation of communities:

  1. Provide services and features that pull together people with a common interest.

  2. Give them something non-narcissistic to talk about.

  3. Set the tone to create a fun place to hang out.

  4. Enable exploration and serendipity.

  5. Give members the means to showcase their distinct identities.

Do you have a fun site that you love to hang out in? What makes it fun and what keeps you coming back?

Past the Summit

Earning the "instigator" label in my blog masthead, I collaborated with some fellow employees to present the "Agile Summit." Yesterday was the day. It was an internal event, a day of training to give a taste of the concepts and whet the organization's appetite.

The day was definitely a success, but I am also relieved to be past it. (You need a good waterfall project, with lots of slack, to have the time to plan such a thing.) My product owner and I were also part of the agenda, as a case study of one of the few teams at our company using an agile method. Given only a few minutes to get my message out to 300 people, I chose to focus my talk on collaboration. If people learned only one thing from me yesterday, let it be this:

Talk to each other.

Don't use documents and processes as walls to hide behind. Don't look to Agile to be new wallpaper for those old barriers. Agile is a different mindset, and tenet number one is collaboration. Break down barriers and get rid of gate keepers. Here are some of the compelling benefits we've experienced.

Between the business and developers:

  • We're able to react to changes (or new information) in the business process, and we're always working on the highest priority features.
  • Prototypes—or, really, quick shells of working code—clarify requirements. It's easier to communicate what you want when you have something to look at.
  • Developers can understand the why of what they are building, which lets them build the right thing.
  • As the business gives feedback and sees that feedback incorporated into the code, they get more engaged and give more feedback. It's worth their bother.

Between the testers and the business:

  • Testers are able to understand the business reasons and user goals.
  • Testers will ask questions from a user perspective ("Wouldn't the user want to do this?"), which makes a better product.
  • Here's a quote from one of our India-based testers, when asked about the advantage of our project's collaborative model: "Everyone in the team has the freedom to ask questions and concerns to the Business." Freedom. Right on.

Between the testers and the developers:

  • This code is more tested, period, than on any of my past projects.
  • Developers get feedback the very next day on code they have just written.
  • Testers are aware of changes in requirements and logic because they are attending the stand-up calls. No need for change requests and updating specification docs.
  • Visibility (in the form of the burndown chart) gives us a real, honest feel for where we are in the project timeline. You can look at it and really know whether we're on time or not.

This collaboration makes us successful, and it makes the work more fun. I mean, which would you rather do: talk to a teammate or fill out a 20-page template?

Dictionary as DropDownList Data Source

I have a Dictionary<TKey, TValue> called myListOptions. To use it as the data source for an ASP.NET DropDownList called MyList...

MyList.DataSource = myListOptions;
MyList.DataTextField = "Value";
MyList.DataValueField = "Key";
MyList.DataBind(); // without this, the list stays empty

It took me some rummaging to realize that you use the actual strings "Value" and "Key," so I hope this post comes in handy for someone else in the future (even if it's me).