The Siren Song of Software Planning

Why Experiments Work Better Than Plans

Kevin Meadows
Sep 6, 2023

Introduction

In Greek mythology, the Sirens were beautiful creatures who sang such compelling songs that hapless sailors passing their island would steer toward them. It was said that their songs were so pure and lyrical that they enraptured one’s soul. Adding to their appeal, they could also see the future, which was an irresistible temptation to many a mortal. Indeed, the prophecies were so alluring that sailors who wanted to hear them refused to sail away to safety. Unfortunately, those who yielded to temptation were led unwittingly to their deaths.

In the Odyssey, the hero Odysseus is warned by the goddess Circe about the danger of the Sirens’ song and wisely takes precautions. He requires his crew to plug their ears with wax so they won’t hear the song. Additionally, he has them tie him to the ship’s mast, instructing them to ignore his pleas for release, no matter how earnest. Indeed, he tells them to bind him even tighter whenever he begs to be set free. He thus saves himself and his crew and continues on his homeward journey. (Astute readers may notice that there is no terminating condition for Odysseus’s command. It so happens that his shipmates eventually release him once they sail safely past the danger. Fortunately, no one involved was a programmer, or poor Odysseus would still be lashed to the mast.)

Today, we use the term “Siren Song” to refer to an irresistible appeal that, if heeded, leads to damaging outcomes.

Hmm, the irresistible allure of knowing what lies ahead. Something that promises a fail-safe plan for a cloudy future. An idea that seems so sweetly appealing but holds danger that we can’t see. Our refusal to reconsider our decisions, even in the face of calamity. Does this sound familiar to those of us in the software industry? With a moment’s reflection, it might. It’s called Software Planning.

Like the sailors of old, we frequently hear the Siren Song of Software Planning, but unlike Odysseus, we too often fail to plug our ears and instead sail headlong into its embrace. Though we in the software industry are not mythical heroes in ancient Greece, when it comes to the Siren Song of Software Planning, we would do well to heed Odysseus’ lesson and steer clear of its appeal.

Defining Software Planning

It’s important to define what Software Planning means in the context of this article. Software Planning is defined as:

  1. Plans that project into the future beyond a few days or a single two-week sprint.
  2. Plans that do not include provisions for continuously evaluating our model of reality and revising it whenever we receive feedback alerting us that the plan must be changed.

The time boundary is intentionally fuzzy. This fuzziness avoids pointless, dogmatic debates that a 4.9-day plan is not wasteful but a 5.1-day plan certainly is. The crucial point is not the exact number of days a plan encompasses but whether we are “responding to change over following a plan” [1]. Even a one-hour plan is damaging if conditions change during the hour and we cling to the outdated plan instead of responding to the change.

In sum, Software Planning as defined here means longer-range, inflexible planning.

It must also be said that a vision and a plan are different things. Defining a vision for an endeavor is usually a good idea, but it’s essential to accept two things when we do.

  1. We must allow the path to our vision to emerge.
  2. We must be willing to change our vision as we learn.

Finally, some may conclude that the ideas presented here mean any form of forethought about our work is the same as Software Planning, and we must instead charge unthinkingly into everything we do. That isn’t the case.

Why We Plan

There’s a fundamentally simple reason that we engage in Software Planning: Management often requires it. Clearly, we must plan if it’s required, whether we agree with the practice or not.

There are numerous reasons for management’s insistence. The first is the belief that capital is protected and properly deployed when we plan how it will be spent. This belief is manifested in the saying that “failing to plan is planning to fail.” Understandably, if one holds these beliefs, it’s inexcusable not to create longer-term plans to protect the organization’s lifeblood of capital.

Another managerial belief is that planning avoids so-called Frankenstein software. Some may think Software Planning is necessary to prevent a software monster that lacks cohesion and looks assembled from whatever parts were lying around. It isn’t clear that this is a legitimate risk. Indeed, it’s ironic that in trying to avoid Frankenstein software, we create elaborate plans that delay the crucial feedback that would let us reshape the software to suit our needs. This delay often produces the very Frankenstein software we were trying to avoid.

Another reason we indulge in Software Planning is that it doesn’t fail catastrophically each time we try it. Unfortunately, that’s the trap. We would quickly stop doing it if its failure were profound enough to nearly bankrupt the business. For example, here’s an idea that we would try only once: “We can make more money if we stop paying everyone.” But the problems with Software Planning aren’t this readily apparent. Instead, disappointing outcomes encourage us to believe we can improve with a more rigorous application of the same method. What we ought to recognize is our need for counterfactual thinking. We should ask, “Would we get better outcomes with a different approach?”

Software Planning offers emotional comfort to those made anxious by uncertainty. For some, there is palpable anxiety relief from planning everything in detail. This approach probably stems from the idea that the only alternative to Software Planning is uncontrolled chaos, akin to a clown car at a traffic light. It’s a false dichotomy that we face only two choices: planning or directionless chaos. There’s a broad spectrum of options between the two.

Software Planning also lets us partake in the questionable practice of holding people accountable for failing to hit a plan over which they typically have little control. Unfortunately, it’s reassuring to find a simple reason for the failure of our model to reflect reality. When we have people to blame, we avoid the unnerving conversation about whether our model is fundamentally and irreparably wrong.

Lastly, and possibly most importantly, we get paid to make plans, not question their validity. Hence, we partake in Software Planning. We likely always will until incentives change.

A Brief History of Planning

It seemed so simple in the beginning. Other industries planned their work, so we should too. Early in our industry’s history, back in the Waterfall days, managers believed we operated in a domain where expert foresight could determine how long a project would take and how much it would cost. Intricate schedules were derived that specified, with excruciating precision, when every work item would be delivered.

Of course, it never worked out that way because managers at the time never realized they were trying to force-fit their vision of the work environment into an incompatible model. In this case, they were trying to force the work into a predictable domain when it was firmly anchored in an unpredictable domain where long-range plans would never succeed.

Eventually, after enough failures over the years, our industry finally adopted a different, more flexible methodology. The Manifesto for Agile Software Development [1] and its Principles [2] make clear that long-term plans are a relic of the past. We should instead focus on short timelines with rapid feedback and frequent course corrections.

However, somewhere along the line, our industry veered from the ideas behind the manifesto and its principles. The persistence of Software Planning suggests that our practices are still rooted in the thinking of the Waterfall era. The ever-present human need for certainty and predictability seems to have won out in some organizations, which ironically still consider themselves Agile.

Do we truly understand the landscape we inhabit? That’s examined next.

A Mathematical View of Our Landscape

As much as we may otherwise wish, Software Planning generally lies in an unpredictable domain. Even when we believe we’ve accounted for the unknown in our plans, we’re still surprised by unexpected events that wreck them.

Shouldn’t we be able to plan this? Typically, no, because software development is, for all practical purposes, a stochastic process. Stochastic processes are governed by random variables that produce unpredictable outcomes. Even when we try to constrain the process, we still get variable and unexpected results.

We often think that we can predict the outcomes of our undertaking with enough careful forethought and planning. But that’s not how stochastic processes work. They are frustratingly inconsistent, probabilistic, and random. Aside from the occasional lucky guess, they are notoriously resistant to deterministic planning. Our only clear vision of stochastic outcomes is through the rear-view mirror. The windshield, alas, is always opaque.

By nature, plans with fixed steps are deterministic because they assume unchanging future events. (For this discussion, we aren’t considering probabilistic plans such as those made via Monte Carlo models.) However, once our plans meet reality, each unexpected event affects future events unpredictably and randomly. Hence, deterministic plans quickly accumulate errors as randomness intrudes at every moment, becoming substantial with longer timelines. That’s why the most effective Agile strategies avoid extended plans. In doing so, they reduce the magnitude of their error, minimizing the effect of randomness inherent to stochastic processes.
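To make that concrete, here is a minimal Monte Carlo sketch in Python (the task counts, estimates, and noise level are invented for illustration). Each task’s actual duration deviates randomly from its estimate, and both the median overrun and the spread of possible completion dates grow as the plan gets longer. That growth is the mathematical case for keeping plans short.

```python
import random
import statistics

def simulate_plan(num_tasks, est_days=2.0, noise=0.4, trials=5000):
    """Monte Carlo sketch: each task's actual duration is its estimate
    scaled by lognormal noise (illustrative parameters, not real data)."""
    totals = [
        sum(est_days * random.lognormvariate(0, noise) for _ in range(num_tasks))
        for _ in range(trials)
    ]
    planned = num_tasks * est_days
    p5, *_, p95 = statistics.quantiles(totals, n=20)
    return planned, statistics.median(totals) - planned, p95 - p5

for n in (5, 20, 80):  # progressively longer plans
    planned, overrun, spread = simulate_plan(n)
    print(f"{n:2d} tasks: planned {planned:5.0f} days, "
          f"median overrun {overrun:4.1f}, 5th-95th percentile spread {spread:5.1f}")
```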

Our deterministic view is rooted in the idea that we can adequately determine the precise starting conditions of our undertaking. However, we frequently fail to recognize the mathematical significance of what we’re estimating, namely, initial conditions, which are known to have profound effects on outcomes when even slightly off (the so-called Butterfly Effect) [3]. The initial conditions that most affect our results are requirements and task duration. Usually, our estimations of both are woeful. Why? For starters, requirements are typically emergent. We often only know what the market will reward once we experiment and see. Additionally, we rarely know how long it will take to complete tasks until we’ve finished them, even if we’re convinced we can estimate them. If our initial conditions are wrong, then our plans are wrong, often substantially so, before we even begin work.
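As a deliberately simple illustration of how sensitive a plan is to its initial conditions, consider the hypothetical arithmetic below (every number is invented). A modest error in average task duration, plus a handful of emergent requirements, puts the plan weeks off before any day-to-day randomness even enters the picture.

```python
# Hypothetical initial conditions: 40 tasks, estimated at 2.0 days apiece.
num_tasks, est_days = 40, 2.0
planned = num_tasks * est_days                # 80 working days

# Initial-condition error 1: tasks actually average 2.6 days, not 2.0.
duration_off = num_tasks * 2.6                # 104 days, about 30% over

# Initial-condition error 2: eight emergent requirements surface along the way.
plus_emergent = (num_tasks + 8) * 2.6         # about 125 days, roughly 56% over

for label, days in [("as planned", planned),
                    ("duration estimates off", duration_off),
                    ("plus emergent requirements", plus_emergent)]:
    print(f"{label:27s} {days:6.1f} working days "
          f"({(days - planned) / planned:+.0%} vs. plan)")
```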

What’s the bottom line here? It’s this: Mathematically speaking, the core shortcoming of Software Planning is that it assumes random processes are deterministic. Disappointing outcomes are the inevitable result.

But in a stochastic world, how do we make decisions? We need a model to guide us. It turns out we already have one. It’s called the Cynefin (kuh-NEV-in) Framework.

Cynefin Framework

The Cynefin Framework is a decision-making guide [4]. The framework is divided into five domains, each having its own decision-making process. The domains break down as follows.

[Diagram of the Cynefin framework’s five domains, from https://en.wikipedia.org/wiki/Cynefin_framework]

Clear

In the Clear domain, cause and effect are easy to grasp, and taking an action leads to a predictable outcome. The work is typically standardized, thus allowing the creation of “best practices” that can be used to generate reliable results. In this domain, we sense the facts of our situation, categorize them according to rules, and then respond with the appropriate best practice.

Complicated

In the Complicated domain, cause and effect can still be understood, but it requires analysis by trained experts exercising good judgment. Best practices are replaced by “good practices” that are subject to change. Here, we must first sense the situation to determine the facts, analyze the results, and respond with the appropriate good practice.

Complex

In the Complex domain, cause and effect are unclear and can be determined only via hindsight. No obvious “best” answers exist, but many possible ideas could work. Here, we must first probe via experiments, sense the results, and respond with actions we believe will have beneficial outcomes. This domain lies squarely in the realm of “unknown unknowns.” That is, we don’t know what we don’t know. Changes in one part of the system result in unpredictable, non-linear changes in other areas. There are no best practices, but beneficial practices will emerge with time. Rather than force a plan of action, we must patiently allow the path forward to be revealed through emergence and iteration.

Chaotic

In the Chaotic domain, cause and effect can’t be determined. Events are uncontrolled and chaotic. Taking quick action of any kind is the only way forward. We must first act, sense the result, and then respond with further action that seems appropriate, attempting to create order out of the chaos around us. The goal is to migrate the situation to the Complex domain eventually.

Confusion

The Confusion domain is located in the center of all others. Here, it isn’t clear in which domain our environment lies. Different factions jostle for power, and people argue with each other. The only way out is to break down the problems into parts and assign each to a different domain, then respond according to the dictates of the respective domains.

The Software Engineering Domain

Some of our software engineering work lies in the Clear or Complicated domains. We’ve done these jobs before and have best practices for them. An example in the Clear domain is changing the text on a button or updating the data behind a website that displays sales tax rates. An example in the Complicated domain would be adding an interface to an external service when we already integrate with many others. This interface work is difficult, but we’ve done it before, and it only requires careful execution by experts. Occasionally, our work falls into the Chaotic domain. A good example is when our subscription-based software unexpectedly goes offline, preventing large numbers of paying users from doing their work.

But we more often operate in the Complex domain, where our work is empirical, non-deterministic, and emergent [5]. To find a way forward, we must probe with safe-to-fail experiments, sense the outcomes, and respond with further experiments to illuminate a path. We must be patient and iterative in our approach. Our solutions should be discovered through experimentation rather than imposed from on high.

Problems arise when managers, frustrated by slow progress and failed experiments, attempt to force Complex work into the Complicated domain, or worse, into the Clear domain, without realizing that’s what they’re doing. The insistence on certainty has thus reared its ugly head, and Software Planning is a short step behind. Cynefin’s authors [4] are clear about the risks here:

“Of primary concern is the temptation to fall back into traditional command-and-control management styles — to demand fail-safe business plans with defined outcomes. Leaders who don’t recognize that a complex domain requires a more experimental mode of management may become impatient when they don’t seem to be achieving the results they were aiming for. They may also find it difficult to tolerate failure, which is an essential aspect of experimental understanding. If they try to overcontrol the organization, they will preempt the opportunity for informative patterns to emerge. Leaders who try to impose order in a complex context will fail, but those who set the stage, step back a bit, allow patterns to emerge, and determine which ones are desirable will succeed.”

David Snowden and Mary Boone

Our greatest challenge is not operating successfully in the Complex domain; it’s the temptation to force-fit our work into simpler domains. Of all the difficulties we face in our software efforts, this may be the hardest to overcome, because the obstacle isn’t business markets, competitors, regulatory actions, or other external forces. Instead, it’s us being our own worst enemies. The comic strip Pogo described it aptly.

“We have met the enemy and he is us.”

Walt Kelly, Pogo, 1971

We claim to want Agility in our work, but that’s often secondary. Instead, we want predictability, which requires the deterministic plans of Software Planning, and deterministic planning isn’t Agile. The Manifesto for Agile Software Development [1] states clearly:

“Responding to change over following a plan.”

Before we knew better, it seemed entirely reasonable that because other industries could plan and schedule, we should too. It was a sensible belief when we didn’t know about the Cynefin Framework and the Complex domain in which we work. Today, we have no such excuse. We should understand that predictive planning, schedules, and top-down command-and-control management belong in our industry’s past.

An anecdote: I once had my manager tell me that the goal of Scrum was to provide predictability and that the mark of a successful Scrum team was always to deliver everything promised in the sprint plan created months prior. I wanted to keep my job, so I declined to say what I thought, “You have misunderstood the fundamental tenets of Agile. Also, if you want predictability, you should consider another line of business because you’ll rarely find it in software engineering. You’ll only find the false belief in predictability.”

The Risk of Delaying Valuable Feedback

When operating in the Complicated or Clear domains, we don’t need feedback on whether we’re on the right course. We know what to do and how to get there, and our deterministic plans guide our work.

But we need frequent and actionable feedback on our approach in the Complex domain. This feedback provides valuable learning, helping us shape our efforts by telling us what to build.

It’s a given that our early attempts at discovering market needs will be wrong. We must humbly seek quick feedback, learning as we go, iterating between our theories and the market’s judgment of them. Features may need to be reworked multiple times as we converge on a solution.

Deterministic plans are risky because they provide a market signal only after the work is complete. By then, it’s far too late to be actionable. Our mistakes pile up as we delay learning, becoming more costly with each passing moment. By contrast, quick and continuous feedback helps us steer toward a solution we could never achieve through Software Planning, because we’re constantly evaluating whether our software is on the right path. However unsettling it may be, we should continuously require our ideas to prove themselves useful in the marketplace.

Steering toward a solution is nearly impossible with Software Planning because each task is one in a long line of tasks that assume each of their predecessors is correct. We face two expensive choices if we learn something from the market before the plan’s end. First, we rework the plan, which is usually fraught with problems because most people’s schedules are fixed in advance and not easily changed in organizations using Software Planning. Reworking schedules in one place cascades into other areas that must also be reworked. This reworking is a costly, time-consuming process. Alternatively, we may decide it’s impractical to change the plan entirely and proceed as-is, thus delaying the financial reckoning to a later and more expensive time.

When we allocate capital to a software endeavor, we must understand that the risk to that capital is a function of time, not fixed at the outset. We increase that risk each instant we delay testing our ideas in the market. Is the risk linear? Exponential? It’s hard to say, but the conclusion is obvious: We should test our ideas quickly.

The Hidden Costs of Queues and Inventory

One of the more insidious effects of Software Planning is the creation of inventory and queues. By nature, long-term plans necessitate that some stories wait while others are completed. Additionally, some completed stories wait to be deployed until a defined date arrives, such as the end of the current or future sprints. In these situations, we create inventory that collects dust in a queue. In the principles that govern Lean Manufacturing, inventory and waiting are waste. We typically create much of this waste in software engineering.

There are several problems with queued inventory. First, a completed story’s marketplace value goes untested while it waits. As already discussed, that increases risk. If the story must be reworked, we do so later when contextual memory has faded, requiring us to spend money refreshing it. Second, if the story proves valuable, we have foregone the revenue we could have received while the story sat in the queue. Finally, the inventory must be managed at some level while queued to ensure it’s ready to go when deployed. Such management isn’t free.
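A back-of-the-envelope sketch of the second cost, the foregone revenue, may help (the story names and figures below are entirely invented). Every week a finished story sits in a queue is value the organization never recovers.

```python
# Invented figures: finished stories sitting in a deployment queue.
queued_stories = [
    # (story, estimated weekly value once live, weeks spent waiting)
    ("checkout-redesign", 4_000, 3),
    ("saved-carts",       1_500, 6),
    ("tax-rate-update",     500, 2),
]

foregone = sum(weekly_value * weeks for _, weekly_value, weeks in queued_stories)
print(f"Revenue foregone while inventory waited in the queue: ${foregone:,}")  # $22,000
```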

The result of queued inventory is costs that we rarely see, much less manage, and these costs are substantial. In The Principles of Product Development Flow [6], Donald Reinertsen indicates how expensive it is.

“Queues are the root cause of the majority of economic waste in product development.”

Donald Reinertsen

Surprisingly few organizations monitor their inventory or queues. This omission can easily be demonstrated by asking, “How much capital do you have tied up in inventory?” The response might be a blank stare, followed by a statement like, “We don’t have inventory. We’re a software firm.” This lack of awareness is a costly oversight.

Analyzing how much time stories spend in work states versus wait states is illuminating. Simply enumerate all the steps required to take a story from inception to Production and assign time values to each one. Break the time into two columns, one for working and one for waiting, then calculate the percent of time spent waiting. We may find that stories spend more than half their time in queues. (Details of this analysis can be found at [7].)
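Here is a minimal sketch of that calculation, with invented step names and times. We list each step a story passes through, split its elapsed time into working and waiting, and compute the share spent in queues.

```python
# Invented example: hours a typical story spends working vs. waiting at each step.
steps = [
    # (step, working_hours, waiting_hours)
    ("refinement",   2, 40),   # waits in the backlog for the next session
    ("development", 16,  8),   # waits for a developer to free up
    ("code review",  1, 24),   # waits for a reviewer
    ("testing",      4, 32),   # waits for a test environment
    ("release",      1, 80),   # waits for the end-of-sprint deploy window
]

working = sum(work for _, work, _ in steps)
waiting = sum(wait for _, _, wait in steps)
print(f"working {working} h, waiting {waiting} h, "
      f"share of time spent in queues: {waiting / (working + waiting):.0%}")
```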

In sum, when we employ Software Planning in our work, we inescapably create inventory and queues that are a costly and avoidable drain on finances.

Keeping It Simple

So, if we’re better off without Software Planning, what kind of approach should we use instead? Follow the pattern described in the Cynefin Framework’s Complex domain: Probe, Sense, Respond. Translated into the language of Agile software models, we follow these steps.

  1. Build
  2. Deploy
  3. Retrospect
  4. Repeat until no longer important

We build a model of reality that we believe the market will reward. We deploy that model as quickly as possible so we can get early and valuable feedback on its usefulness in the marketplace. We use that feedback to update our model and continue this iterative process until we have a “good enough” solution and the problem is no longer important to us. Then we move on to the next problem and begin the cycle anew.
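The loop can also be sketched as a toy simulation (everything below is hypothetical: the “market” is reduced to a random preference we can only discover by probing, and the function names are placeholders, not a real framework). The point is not the code but its shape: short iterations, each corrected by feedback.

```python
import random

# Toy model: the market has a preference we cannot know in advance.
TRUE_PREFERENCE = random.uniform(0, 10)

def build(model):
    return model                                # smallest increment expressing our current theory

def deploy(increment):
    # Market feedback: a noisy signal of how far off we are (illustrative only).
    return (TRUE_PREFERENCE - increment) + random.gauss(0, 0.3)

def retrospect(model, feedback):
    return model + 0.5 * feedback               # revise the theory toward the signal

def good_enough(model):
    return abs(TRUE_PREFERENCE - model) < 0.25  # the problem is no longer important

model, iterations = 5.0, 0
while not good_enough(model):
    increment = build(model)                    # 1. Build
    feedback = deploy(increment)                # 2. Deploy
    model = retrospect(model, feedback)         # 3. Retrospect
    iterations += 1                             # 4. Repeat
print(f"'Good enough' after {iterations} short iterations")
```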

It’s vital to iterate in short intervals based on signals from frequent market feedback, changing direction as needed to arrive at a solution empirically instead of predicting it deterministically. We aren’t Agile if we must instead “follow the plan.” At its core, Agility implies that we focus on the simple and let it lead us to a destination that can’t be predicted in advance. The statistician George Box [8] warned of this when discussing science and statistics.

“Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”

George Box

Translated from the scientific to the software realm, Box’s statement implies that we settle for mediocrity when we create overly detailed plans to quell our anxiety about the future. In other words, we risk trading better outcomes for peace of mind. Agility means accepting that the future is always uncertain, that planning it in detail is wasteful, and that by forgoing complicated plans and accepting the discomfort of not knowing, we reap better results.

Funding Experiments

When we change from deterministic to empirical methods of building software, an understandable question arises: How do we fund projects if we abandon Software Planning?

Rather than the traditional approach of allocating fixed amounts to individual projects, we instead take advantage of software’s malleability. From a given amount of capital, we distribute small, initial amounts to an array of safe-to-fail experiments, getting early and frequent feedback on each. The experiments that receive positive market signals are rewarded with additional funding. Those that don’t are discontinued. This approach allows us to cut losses early and steer toward profitable decisions. It’s similar to the venture capital model, where some bets won’t pay off, but those that do will more than compensate for the losses.
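A toy portfolio simulation of that funding style may make it concrete (the amounts, probabilities, and payoffs below are invented). We seed many small experiments, stop the ones whose market signal disappoints, and add follow-on funding only to the ones that show promise.

```python
import random

random.seed(7)

SEED = 10_000        # small stake per safe-to-fail experiment (hypothetical)
FOLLOW_ON = 50_000   # extra funding after a positive market signal (hypothetical)

spent = returned = 0.0
for _ in range(10):                               # ten parallel experiments
    spent += SEED
    if random.random() >= 0.3:                    # most probes disappoint...
        continue                                  # ...so we stop; the loss is capped at the seed
    spent += FOLLOW_ON                            # promising signal: fund it further
    returned += FOLLOW_ON * random.uniform(2, 6)  # the winners repay the portfolio

print(f"spent ${spent:,.0f}, returned ${returned:,.0f}")
```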

We must allow these low-cost experiments to fail and resist the temptation to punish those behind them. It’s the nature of emergent domains that many experiments will disappoint as we continuously probe to find a way forward. Indeed, we should seek out failures because they tell us where to correct ourselves to avoid being continually wrong. It’s essential to accept this as a cost of doing business. Put simply, failures are how we learn, and learning is how we improve results. If we instead punish failures and revert to Software Planning, we ignore the market’s wisdom and adopt a centrally-planned, command-and-control approach. Disappointing outcomes will be the result.

An anecdote: Once, while I was describing to an executive the emergent and iterative nature of software engineering and how fixed plans and budgets are often incompatible with it, he stopped me mid-sentence and said, “That’s not how business works.” He was right, in one sense at least. Most businesses don’t operate like that, and surprisingly few software businesses do, even though the latter would probably get better results if they experimented with the approach the executive found so objectionable.

The Last Word

Software Planning has been with us since the dawn of our industry, and it’s tempting to believe that it represents the best practice for developing software. Like anything else that has a long history in the face of new techniques, it’s often challenging to surrender old methods. But it may be helpful to experiment with a new approach.

Think of it this way: An organization doesn’t necessarily have to decide to change its approach. It only has to decide what it means if its competitors change theirs. What if those who accept software’s emergent and exploratory nature outcompete those who don’t? It’s a question worth asking.

Another way of looking at it: Software Planning isn’t always and everywhere a failure. Rather, we frequently get disappointing results when we engage in it, and the cause of those results isn’t always apparent. What if we could achieve more favorable outcomes with a more flexible approach?

If we want to plan our software endeavors rigidly, we should reconsider our choice of business, because software typically isn’t in a predictable domain. In predictable domains, competition is fierce and margins are thin because the complexity is trivial enough that planning succeeds. If we want to earn the high returns of software engineering, we need to accept that they come with considerable uncertainty.

It’s distressingly easy to fall under the spell of the Siren Song and opt for the false clarity and reassurance of control that Software Planning provides. Instead, if we allow ourselves to live with the discomfort of uncertainty, we open the path to better outcomes. Plans convince us we’re right at the expense of Agility. We achieve more desirable results when we accept software’s emergent nature and choose to experiment instead of plan.

References

[1] Manifesto for Agile Software Development
https://agilemanifesto.org/

[2] Principles behind the Agile Manifesto
https://agilemanifesto.org/principles.html

[3] Motter, Adilson E.; Campbell, David K. (2013). “Chaos at fifty”. Physics Today. 66 (5): 27–33.
https://arxiv.org/ftp/arxiv/papers/1306/1306.5777.pdf

[4] A Leader’s Framework for Decision Making
https://hbr.org/2007/11/a-leaders-framework-for-decision-making

[5] On Understanding Software Agility — A Social Complexity Point Of View
https://cdn.cognitive-edge.com/wp-content/uploads/sites/2/2020/11/16124017/110510-On-Understanding-Software-Agility.pdf

[6] Reinertsen, Donald G. The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing, 2009.

[7] Waste Not, Want Not: A Simplified Value Stream Map for Uncovering Waste
https://www.infoq.com/articles/waste-not-want-not/

[8] Box, George E. P. (1976), “Science and statistics”, Journal of the American Statistical Association, 71 (356): 791–799


Kevin Meadows

Kevin Meadows is a technologist with decades of experience in software development, management, and numerical analysis.