Posts

Showing posts with the label programming

Bug Teams v. The Nature Of Defects

How it Happens

You realize that you're not getting as much done as you expected to get done. It's troublesome because you have plans and promises and releases to deal with. You're likely to end up the scapegoat when your behind-ness snowballs into a large, organization-wide issue. You also have quality problems. Your team leads estimate that the teams are spending 70% of their effort on defect-fixing activities. It dawns on you that you can get back 70% of your team's productivity if you can spin up a separate team to handle bugs! Now one team can be 100% dedicated to adding new functionality without being encumbered by bug-fixing work. After months, you find that the defect density has not improved despite your effort. You see a ramp-up in the number of defects fixed per month as the bug team's diagnostic and corrective skills improve, but they are still lucky to hold even against the tide of defect injection. Why doesn't this work? What has g...

Tests and Immersion in Code: The relationship

There is a relationship between how slow tests are and how much we interact with the tests while we are developing software. If the tests run instantaneously, I'll run them constantly. If the tests run in 30 seconds, I will run them 3 times in any 5-minute window of coding. If they run in 1 minute, I will run them 3-4 times per hour. If they run in 10 minutes, I will run them maybe 3 times a day. If they run in an hour, I'll run them 5 times a week at most. If they run for a day, I will almost never run them. If the build/test cycle is not incredibly fast, I will not participate in the code as fully and richly; instead I fall back on my IQ, my memory, and my good intentions. This is why tools like Infinitest for Java, sniffer for Python, autotest for Ruby, and the like matter so much. It is also why there have to be build servers and test servers for any significant project. It is also why manu...
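To make the idea concrete, here is a minimal sketch of what a continuous-test tool of this kind does; the polling loop, one-second interval, and pytest runner are my illustrative assumptions, not any particular tool's implementation:

    # A sketch of a continuous-test watcher: rerun the suite on every change.
    import os
    import subprocess
    import time

    def snapshot(root="."):
        """Map each .py file under root to its last-modified time."""
        stamps = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(".py"):
                    path = os.path.join(dirpath, name)
                    stamps[path] = os.path.getmtime(path)
        return stamps

    last = snapshot()
    while True:
        time.sleep(1)  # poll once per second
        current = snapshot()
        if current != last:
            last = current
            subprocess.call(["pytest", "-q"])  # rerun the tests immediately

The point isn't the implementation; it's that the feedback arrives without you having to decide to ask for it.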

Invisibility of Process, Visibility of Results

There are some special challenges with dealing with productivity of knowledge workers. Most of them have to do with the invisibility of the work and the difficulty in managing invisible work. Programmers and testers don't assemble machinery or bend paperclips or mold parts from molten goo. They don't stack boxes or bricks, or swing hammers. The work they do has no physical manifestation, which makes it both hard to observe and hard to understand. I don't blame managers in the 70s and 80s who counted lines of code. It was one of the few visible manifestations of the work programmers do. It was entirely misguided, of course, and several of us have experienced net-negative lines of code in consecutive weeks of work (I've even had awkward and unpleasant meetings with managers for "messing up the metrics," ending in an admonition to stop it). Other attempts to make the work visible involve counting data fields on screens, in databases, and on reports. This is a bit...

Hoarding Knowledge: sharing enhances productivity

Queuing

If only one person on the team understands the database, then any work done by other programmers that affects the design of the database must wait for that one competent individual. Sometimes that queuing becomes significant and impedes the group. If the expert is not available, the work must wait, or else be done in an inexpert way.

Loss

Experts can go away. Teams euphemistically refer to this phenomenon as the "bus number" of the team -- the number of people who, if they were hit and killed by a bus, would doom the entire project to failure. A "bus number" of one is an unacceptable risk; a bus number of 10 represents a well-mitigated loss-of-knowledge risk. Thankfully, few developers are hit by buses in the street. Instead, experts tend to be hired away by competitors or companies working in entirely unrelated industries. When this happens, teams do their best to muddle along, sometimes making poor decisions along the way and damag...

Simple v. Complicated

I borrowed this from Dictionary.com, so I kept the first word as a link to the original. I hope that's okay with them.

sim·ple [sim-puhl]
adjective, sim·pler, sim·plest; noun

adjective
1. easy to understand, deal with, use, etc.: a simple matter; simple tools.
2. not elaborate or artificial; plain: a simple style.
3. not ornate or luxurious; unadorned: a simple gown.
4. unaffected; unassuming; modest: a simple manner.
5. not complicated: a simple design.

My hot words are "simple" and "complex." Most people who say "simple" don't mean simple. They mean "easy", with an implied "to think of" or "to type in" or "to do without researching." Easy is a virtue...

14 Weird Observations About Agile Team Velocity

(note: I added a 15th, but was worried that changing the title would invalidate links, so you get a bonus observation at no extra cost) I frequently have to address questions about velocity, so in the interest of time I present all the answers here in a short post:

1. Velocity is a gauge, not a control knob. You can't just turn up the velocity -- you can only break the gauge by trying.
2. Velocity is (frustratingly) a lagging indicator. It primarily tells you about the fundamental process and technical work you did weeks, months, or years ago. You seldom get an immediate, true improvement.
3. Though velocity is a gauge, it is subject to Goodhart's Law. It is rather dodgy when used as a basis for governance.
4. Velocity is highly derivative of many factors, chief among them the work structure of the organization. The more governance and procedure (permission steps, queuing and wait states, official limitations, risk of personal blame, reporting and rec...

Pycon: Examples of Atrocity in Python Programming

A good discussion of things that should not be done in Python code, but frequently are, with a nice mention of Clean Code's naming rules (I'm a proud daddy).

Ambient Annunciators

Jim Hood (of Pillar Technology) and I just installed our first-ever "ambient annunciator" to announce our build's breakage. Well, technically our first two. What it does is watch for a build to fail in Jenkins, then start playing a series of randomly selected choices from our assortment of spooky public domain sound files. Some are musical interludes, some are animal sounds, some are just weird.

[Video: Annunciator #2, with woofer and R/L speakers.]

Our intention is that these sounds, playing through a decent set of speakers, will create an atmosphere of something being "not quite right." With both systems playing test noises, it was a tad eerie. You really felt that there was something different about the space. It is a bit more subtle than flashing lights, and we're hoping this will be an advantage. Developers will know the build is broken, and will have an incentive to get the space back to normal (it is a little annoying) but will still be able to have co...
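For the curious, the watching part is easy to sketch. Here's a minimal illustration that polls Jenkins's JSON API and plays a sound on failure; the URL, job name, file names, player command, and polling interval are all hypothetical stand-ins for our actual setup:

    # Hypothetical ambient annunciator: poll Jenkins, play spooky sounds on failure.
    import json
    import random
    import subprocess
    import time
    import urllib.request

    STATUS_URL = "http://jenkins.local/job/our-build/lastBuild/api/json"  # hypothetical
    SOUNDS = ["owl.wav", "creak.wav", "interlude.wav"]  # stand-ins for our sound files

    while True:
        with urllib.request.urlopen(STATUS_URL) as response:
            build = json.load(response)
        if build.get("result") == "FAILURE":
            # keep the atmosphere "not quite right" until the build is green again
            subprocess.call(["aplay", random.choice(SOUNDS)])  # any CLI player works
        time.sleep(30)  # check every half minute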

Learning from bad examples

I saw this in a python tutorial today:

    if os.path.exists("/tmp/uPlayer"):
        f = open("/tmp/uPlayer/STATUS", "w")
        f.write("IDLE")
        f.close
    else:
        os.mkdir("/tmp/uPlayer")
        f = open("/tmp/uPlayer/STATUS", "w")
        f.write("IDLE")
        f.close

Really, now? We duplicate the whole open/write sequence just to create a directory? Maybe one of the problems people have with clean coding is that their examples are poor. If you write, it behooves you to write refrigerator code. Not to mention that close is a method, so it needs parentheses. This code doesn't even do what it thinks it is doing: it closes the file when f goes out of scope, not when we reference the f.close bound method. It makes me sad for the children.
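For contrast, here's a minimal cleanup (my sketch, keeping the tutorial's paths): create the directory only when it's missing, write once, and let a with-statement close the file:

    import os

    if not os.path.exists("/tmp/uPlayer"):
        os.mkdir("/tmp/uPlayer")
    with open("/tmp/uPlayer/STATUS", "w") as f:  # the with-block closes the file; no f.close() needed
        f.write("IDLE")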

You Cannot Possibly Do TDD after Coding

Just for the record: it is flat-out impossible to "do the TDD" after the code is finished. This is just a matter of definition. You can write tests after the code is finished, but that has no relationship to TDD whatsoever. None. Nada. Zip. In TDD you write a new, failing test. Next you write enough code to pass the test. Then you refactor. This repeats until the code is done, writing and running tests continually. It shapes the way the code is written. It is a technique for making sure you write the right code, and that you do so in an incremental and iterative way. In ATDD, you have a failing acceptance test. You take the first failing test (or the easiest) and use TDD to build the code so that that part of the AT passes. You run the AT after each major step that you've built using TDD. When the AT is all green, you have completed the feature. This helps avoid stopping early and also helps avoid gold-plating. If tests were product, then it would make no difference...
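To make the cycle concrete, here is one red-green pass in miniature; the function and test names are mine, purely for illustration:

    import unittest

    # Red: this test is written first, and fails because add() doesn't exist yet.
    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    # Green: write just enough code to make the failing test pass.
    def add(a, b):
        return a + b

    # Refactor: with the test green, improve the code safely, rerunning as you go.
    if __name__ == "__main__":
        unittest.main()

Writing that test after add() already existed would tell you something, but it would not have shaped the code. That's the difference.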

Pairing Styles

Overall, pair programming isn't as controversial as you've heard. It depends on your styles of pairing. Some styles of pairing are very common, even among non-agile teams. They tend to be episodic, and last just long enough to get or give some aid. The good news is that they work. If a team views pairing with trepidation, these are common, non-threatening forms to start with:

- Rescue Pairing
- Training Pairing
- Brainstorming
- Experimenter/Researcher

Others are such bad ideas that they are rightfully avoided by all sane teams I've seen so far. If you are considering pairing, and a team member balks, you should see if their mental model of pairing matches one of these styles. I suspect a team should decide that these styles will never be practiced personally, or tolerated within the team's pod. You have my blessings to object to pairing if "pairing" means:

- Worker/Rester
- Worker/Watcher
- Master/Slave
- Bully/Victim
- Writer/Critic
- Ball-and-chain pair marriag...

Microtesting Python Album

I'm working with Industrial Logic's newest Python album on microtesting (not yet released) and was lucky enough to get to test out the automated critique. The way this eLearning works is that you download a problem to work on, and you're given tasks to perform. In this case, it's all about writing microtests for some simple Python code. When you finish, you upload your results and an automated critique system digs through the code and gives you ratings and pointers. This is rather like having a mentor sitting with you, reviewing the code. I tell people that this eLearning is like nothing they've ever seen, but people think I'm marketing or something. Today I have a story for you: yesterday I made a mistake and the automated critique busted me. The mistake was one where I constructed an object incorrectly and invalidated the premise of the test, yet I did this in such a way that the test passed for the wrong reason. I was feeling so sure I'd...

Code Doesn't Lie

I'm learning my way through Pylons, which really is very simple (so far), but the tutorials are all wrong in different ways. Even different versions of the same one. One will reference sa.something or orm.something but never import or define 'sa' or 'orm'. Others refer to things without prefixes, but show imports "as sa" or "as orm". Yet others leave out whole sections of code to type. That's the problem with docs, even the best of them. They lie. They may not have lied originally, but eventually they lose touch with reality and begin to say things that are not true. I don't know the editing snafus that led to several copies of the QuickWiki tutorial all being wrong, but I know that it takes extreme vigilance to keep a document from going south. And I know that as you re-edit, you begin to be less wary. You start to read what you meant instead of what you wrote. I'm sure this is the same case. The subject matter is quite sim...
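Code, by contrast, can't drift: if the aliases aren't imported, it simply doesn't run. A sketch of the aliasing those tutorials assume (the table definition is my own illustration, using SQLAlchemy's documented API):

    import sqlalchemy as sa      # without this line, every sa.* reference is a NameError
    from sqlalchemy import orm   # likewise for orm.*

    metadata = sa.MetaData()
    pages_table = sa.Table(
        "pages", metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("content", sa.Text),
    )
    Session = orm.sessionmaker()  # the orm prefix only works because of the import above

A tutorial that omits either import still reads fine on the page; it just stops being true.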