It’s What We Know that Just Ain’t So: Epistemic Failure in Software Engineering

Understanding and Overcoming Our Industry’s Knowledge Failures

Kevin Meadows
Mar 1, 2021

Introduction

Software engineering is an industry notable for its strongly held beliefs. At times, we are equally notable for our inability to reflect on those beliefs and ask whether any factual basis underlies them.

While our beliefs usually serve us well, there are times when they don’t, especially when what we believe simply isn’t true. In such cases, our misguided notions are likely to lead us astray, making us no different from our fellow humans, a predicament best captured in the following quote:

What gets us into trouble
is not what we don’t know
It’s what we know for sure
that just ain’t so

(various attributions)

Failing to investigate our beliefs leads us down the path of believing something “that just ain’t so.” These are epistemic failures, and understanding and preventing them is what we’ll discuss in this article. But let’s first get some definitions out of the way.

Definitions

What does epistemic mean? It comes from the word epistemology, a branch of philosophy devoted to the study of knowledge. The word epistemology derives from the ancient Greek epistēmē (“knowledge”). The field seeks to understand what constitutes knowledge, how we attain it, whether our beliefs are justified, and, if so, how we justify them [1]. Epistemic, therefore, simply means something related to knowledge itself. An epistemic failure means we have failed in one or more of the ways we acquire and justify our knowledge of something.

So what constitutes epistemic failure? Let’s take a simple example. Suppose one holds the belief that the earth is round. How could we justify this belief? It’s quite easy. We can start with the established fact that humans have known the earth is spherical for thousands of years, going back to the ancient Greeks. Since then, innumerable scientific measurements and studies have conclusively shown the earth is round, and they are readily available for reading and review. We also have evidence from satellites and airplanes. Finally, we have pictures of the earth from outer space showing its spherical shape. Taken together, these lines of evidence make an overwhelming, carefully analyzed case that the earth is round. Our belief is justified, true, and epistemically valid.

But suppose we take the opposite view and choose to believe that the earth is flat. (Don’t laugh; this belief is more common than we might imagine [2].) How could we justify it? We could say something like, “Well, I went to the ocean shore, and when I looked out over the water, it looked flat. Plus, I went to a high mountain and looked out, and it looked flat too. And I don’t trust the scientists saying it’s round.” Notice the lines of thought used to justify the belief. They are hyper-local to personal experience, dismissive of outside expertise, and rest on the implicit and illogical leap that if something is believed, it must be true. The belief is not carefully analyzed and tested against verifiable facts. This clearly constitutes epistemic failure.

While the above example is an extreme and easily identifiable form of epistemic failure, not all such instances are so readily apparent, as we’ll soon see.

Relevance to Software Engineering

Ours is a knowledge industry. The software engineering world revolves around the acquisition and verification of knowledge in pursuit of business or research objectives. As such, the quality of our knowledge is paramount. There is no escaping it: we work in an epistemic world.

To be successful, we must ask ourselves how we arrive at our knowledge and how we can verify its truthfulness. Mission-critical decisions require accurate and verifiable knowledge, often based on complex and incomplete pictures of our business environment. At all points, we must determine the reliability and limit of our knowledge.

Failing to examine the foundations of our knowledge will often lead to poor decisions. Repeated poor decisions will eventually jeopardize the viability of a business. A quick scan of history shows us a long line of business failures caused by poor decision-making, warning us of the imperative of examining our knowledge.

For a better understanding of epistemology’s relevance to software engineering, let’s now turn our attention to past examples of epistemic failure in our industry.

Examples of Epistemic Failures

The Waterfall Method

It made so much sense at the time. Like other engineering projects, software projects would follow a simple, pre-defined set of steps to plan and deliver the product. It was beguilingly laid out in 1970 by Winston Royce [3] as follows:

Requirements > Analysis > Design > Coding > Testing > Operations

The Requirements phase was where we interviewed the customer and determined, far in advance, the exact details of everything the software must do. From there, we analyzed the requirements to determine how to model the system, the necessary business rules, and the data models required. Then it was on to the design, where we generated reams of technical design documentation describing every detail of how we would construct the system: the languages, interfaces, methods, and so on. After all that, we could at last start coding. The fun part we all waited for! Once we finished coding, we would hand it over to testers to make sure the code performed according to the intricate specifications. Once they signed off, we would deliver the code to our customers and enjoy a well-deserved release party.

Of course, it never worked out that way, no matter how many times we tried to make it so. Requirements were never fully realized until after the software’s delivery. They also never stopped changing during the project, necessitating a return to earlier steps with each change. It was either that or “lock the requirements,” which inevitably produced software that disappointed the customer. And all that design documentation? It had to be laboriously reworked with every change, or it quickly became misleading. By then, we had lost so much time that the coding phase turned into a “death march” to meet the schedule. The poor testers, always at the end of the line, did not receive a graceful handoff of completed code. Instead, it was thrown over the wall with the wish “God be with you” as the programmers bolted for the nearest exit.

Yet, for all this time, few of us ever stopped to consider that what we believed was wrong and there might be a better approach. Epistemically blind, we marched on, believing with each new project that “this time would be different.” The irony, or perhaps tragedy, is that Royce actually wrote that the Waterfall method was unlikely to work, and a more iterative approach was required. Yet, the Waterfall model’s simplicity was so irresistible that we ignored Royce’s warnings, and it was widely adopted. It’s difficult to imagine something that would be more indicative of epistemic failure. After all, our predecessors failed to investigate their beliefs and didn’t even read the short paper describing the Waterfall model and its problems. Worse still, this belief persisted in our industry for over thirty years before the emergence of Agile methods.

Add Programmers When Your Schedule Slips

This, too, made so much sense at the time. What if programmers were already working nights and weekends to “make up for lost time,” but the schedule was still slipping? Simply add more programmers, and the time required to complete the code will decline in direct proportion to the number of people added. After all, programming is just like other work, right? Two workers will need only half the time of one. That’s easy to see. And three will finish in one-third the time, and so on. So it’s simple: add some programmers, and we’ll finish sooner. What could be wrong with this approach?

Plenty, as it happens. As we now know, programming is not like most other work. It’s knowledge work that requires high levels of communication and interdependency between workers. And adding more knowledge workers often has the perverse impact of slowing everyone down instead of increasing throughput. Besides, a moment’s thought would have brought us to the conclusion that the algebra involved was much too simplistic. If the work were perfectly linear, we would only have to add enough programmers, and we would finish in near-zero time, clearly an absurd situation. Eventually, a saying began to work its way into the industry that recognized the folly of this approach: “Adding more programmers to a late project makes a late project later.”
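
The folly is easy to see with a back-of-the-envelope model. Below is a minimal, purely illustrative sketch in Python; the effort and overhead constants are invented for the example, not taken from any study. It divides the work evenly among n programmers but also charges a small coordination cost for each of the n(n-1)/2 pairs who must stay in sync.

    # A toy model: divisible work shrinks with headcount, but coordination
    # overhead grows with the number of communication paths, n * (n - 1) / 2.
    # BASE_EFFORT_MONTHS and OVERHEAD_PER_PATH are invented, illustrative values.

    BASE_EFFORT_MONTHS = 24.0  # total work if a single programmer did everything
    OVERHEAD_PER_PATH = 0.1    # coordination cost, in months, per pair of programmers

    def naive_schedule(n: int) -> float:
        """The wishful model: the work divides perfectly among n programmers."""
        return BASE_EFFORT_MONTHS / n

    def schedule_with_overhead(n: int) -> float:
        """Divided work plus a coordination cost for every communication path."""
        paths = n * (n - 1) / 2
        return BASE_EFFORT_MONTHS / n + OVERHEAD_PER_PATH * paths

    if __name__ == "__main__":
        for n in (1, 2, 4, 8, 16, 32):
            print(f"{n:2d} programmers: naive {naive_schedule(n):6.2f} months, "
                  f"with overhead {schedule_with_overhead(n):6.2f} months")

The naive column keeps shrinking toward zero, while the overhead column soon stops improving and then climbs steeply; past a certain team size, adding people to this hypothetical project makes it later, not sooner.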

Of course, Fred Brooks had this figured out many decades ago in his seminal book, The Mythical Man-Month [4], from which the above quote is derived. Published in 1975, the book made the unassailable case that adding programmers would not hasten a late project. Instead, it would only make it even later. Yet, for decades after, our industry failed to heed the lessons that Dr. Brooks had so convincingly relayed. It’s not uncommon, even today, to still hear some version of “let’s just add more programmers to speed things up.”

Clearly, we failed to investigate and verify our beliefs and instead relied on wishful thinking. This represents another example of epistemic failure.

Something Simplifying This Way Comes

In what appears to have been perpetual optimism in the face of reality, we often believed that new software projects would be completed more quickly thanks to new technology. The newest development environment would surely be faster than the last one we used. The new programming language was so much better that we were bound to finish sooner. Or maybe it was the newest software management paradigm that would make us more productive. There were a nearly limitless number of things in which we placed our faith. Surely, we would eventually discover the magic elixir that would deliver our projects sooner.

It seemed like a reasonable enough supposition. After all, our tools were often cumbersome, and our management methods even more so. And newer tools and methods were vastly improved over earlier versions and made things more efficient. So why haven’t we been able to make quantum leaps forward as we have in hardware? Because software is an inherently complex business of inventing new products from scratch, usually without knowing what’s required until we have prototyped something that isn’t quite what’s wanted. Such endeavors are by necessity non-deterministic and halting, proceeding in fits and starts as we experiment, make mistakes, learn, and repeat the process. It’s inevitably a slow, iterative, and time-consuming process. And while new tools and methods have greatly helped, they have done so only at the margins. They cannot change the inherent complexity of software development.

Fred Brooks, once again, realized this all the way back in 1987 in his article No Silver Bullet [5]. In it, he argued that no single tool or process promised even a ten-fold improvement in productivity, reliability, or simplicity within a decade. In short, a savior isn’t coming, no matter how much we may wish to believe it. We’re stuck with complex and difficult work.

But to this day, we often still cling to the hope that some silver bullet will enable us to complete our projects in record time if only we can find the right tool or approach. If we were validating our beliefs, we would have long ago abandoned such a myth and learned to accept software engineering’s inherent difficulty. That we haven’t indicates an epistemically-failed belief system.

Predictions

A stakeholder undoubtedly asked, “How long will it take?” when the first software program was proposed. And nearly every answer we have given since then has been mostly wrong, sometimes disastrously so. Despite decades of trying, our predictions often have less accuracy (and sometimes less rigor) than a simple coin toss. Studies show that between 60% and 70% of software projects are delivered over budget, behind schedule, or canceled outright [6]. Yet we continue to expend capital on predictions, rarely stopping to wonder whether our belief in them is justified. Worse, these predictions often form the basis of mission-critical business decisions. These are costly failures of our knowledge and of the faith we place in it.

It seems so logical that, like other industries, software engineering should provide predictions to stakeholders. No building gets made, no bridge gets built, no car gets repaired without first providing some estimation of the expected time and cost. But it never seems to work with software. Our predictions are usually wildly off. Why is this? Do we regularly fail because of poor execution by an entire industry, or is it mathematically impossible, and our attempts are doomed before they begin? [7]

Whichever is the case, the debate is roiling the software industry at present, much like the furor that erupted when Agile methods first entered the scene. We will benefit from the discussion if it helps us realize that our industry needs to become more epistemically rigorous. We won’t if it’s just social media histrionics with each camp retreating into its fortress from which to hurl invective. Only time will tell.

What is clear is that the record shows we have mostly failed at predictions for decades. Still, we continue with the same approaches that have failed us. It’s abundantly clear that we have not examined the epistemic foundation of our predictive beliefs.

Today’s Failure

It’s easy to look back at the historical beliefs listed above and revel in a bit of smug self-righteousness. How could anyone have believed such nonsense back then? It’s so clear how wrong it was. We would never believe such things today. Indeed, we are certain of the correctness of our current beliefs.

Maybe we think something along the lines of, “Anyone can plainly see that Waterfall planning methods were completely wrong. Those early technologists just weren’t as smart or thoughtful as we are today.” But to believe that means we are already failing epistemically. For as wrong, yet certain, as people were then, so are we now. We just don’t know it yet. We do not have a new and permanent monopoly on truth. If we can say anything with confidence, it’s that while long experience has given us more knowledge than our predecessors had, our descendants will surely look back at us and howl with laughter at the nonsense we so strongly believed, nonsense that seems perfectly reasonable from today’s perspective, just as it did to our predecessors back then. The question then becomes: what are we epistemically failing at today?

Hopefully, out of this comes the realization that we are probably wrong at any given moment and that our best defense against being permanently wrong is avoiding the belief that we are permanently right. We’ll examine how to do that in the next section.

How Do We Fix This?

Our industry did not get into this difficulty overnight, and we won’t escape it in short order with another round of “best practices” that promise to deliver us onto an enlightened path if only we apply them rigorously. The solution involves methods that are simple in concept but fiendishly difficult in practice, because they require us to struggle with that most intractable of problems: overriding our emotions.

Evidence

A sound epistemic basis for a belief is that we justify it through careful analysis of the evidence. Even so, we hold some skepticism in reserve. If upon further evidence, we find our belief to be mistaken, we change our belief.

Epistemic failure inverts this process so that it becomes, “I believe it, therefore it is true.” In effect, belief and evidence have been run through a perverse SWAP function: the belief now determines what counts as evidence. If evidence later surfaces that casts doubt on our belief, the inverted process causes us to avoid reconsidering it. Indeed, we will all too often double down on our belief in the face of facts that refute it. Left unchecked, our justification amounts to, “I believe it. That settles it.”
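
To make the metaphor concrete, here is a small, purely hypothetical sketch; the functions and observations are invented for illustration and describe no real system. What gets swapped is the direction of inference between evidence and belief.

    # Purely illustrative: the direction of inference separates sound epistemics
    # from epistemic failure. All names and observations are hypothetical.

    Evidence = dict[str, bool]  # observation -> does it support the claim?

    def belief_from_evidence(evidence: Evidence) -> bool:
        """Sound epistemics: weigh all the evidence, then adopt the belief it supports."""
        supporting = sum(evidence.values())
        return supporting > len(evidence) / 2

    def evidence_from_belief(belief: bool, evidence: Evidence) -> Evidence:
        """Epistemic failure: start from the belief and keep only the evidence that agrees."""
        return {obs: val for obs, val in evidence.items() if val == belief}

    if __name__ == "__main__":
        observations = {
            "pilot users adopted the feature": True,
            "support tickets doubled": False,
            "conversion rate fell": False,
        }
        print(belief_from_evidence(observations))        # False: most of the evidence is against
        print(evidence_from_belief(True, observations))  # only the agreeable observation survives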

But how do we justify a belief? Through evidence. And therein lies the key: evidence emerges over time, and more evidence leads to better-justified beliefs. From this, we can see that testing a belief against the evidence is a never-ending task. We can also see that at no point along the evidence timeline do we possess all verifiable knowledge. We should therefore be willing to update our beliefs as new evidence emerges. The great economist John Maynard Keynes is reported to have put it simply:

“When the facts change, I change my mind. What do you do, sir?”

It’s an admonition well worth remembering.

Bayesian Thinking

There is an elegant approach in mathematics called Bayesian Probability [8]. The details are complicated enough that entire doctoral theses are published on it, but a simple definition will suffice for our purposes.

The idea is that our initial picture of a belief’s truth is cloudy and unfocused. We begin with an estimate of the probability that the belief is true. We then run experiments and use the results to update that prior probability. We run more experiments, again updating our priors. This iterative process produces steadily more reliable knowledge with each pass. The upshot:

  • At any given moment, our knowledge is incomplete and might be entirely wrong.
  • We need to remain intellectually humble and be willing to change our beliefs when new evidence emerges that proves our prior knowledge wrong.
  • Having complete and correct knowledge a priori is nearly impossible, but intellectual hubris can convince us that we have it. This is the root of epistemic failure.
  • At no point can we say, “I am 100% certain now.” Uncertainty, in all its frustrating persistence, becomes an inescapable fact in our existence.
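
To make the update rule concrete, here is a minimal sketch of a single Bayesian update applied to a belief we care about in software: “the release will ship on schedule.” All the probabilities are invented for illustration; in practice, the likelihoods would come from historical data.

    # A minimal sketch of Bayesian updating. All numbers are invented for
    # illustration; real likelihoods would come from historical project data.

    def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
        """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / denominator

    if __name__ == "__main__":
        # H: "the release will ship on schedule". Start fairly confident.
        belief = 0.80

        # Evidence 1: integration tests failed twice this week. Assume such failures
        # occur 20% of the time on healthy projects and 70% of the time on projects
        # that end up slipping.
        belief = update(belief, p_evidence_if_true=0.20, p_evidence_if_false=0.70)
        print(f"after failed integration tests: {belief:.2f}")  # ~0.53

        # Evidence 2: the remaining stories were re-estimated and came in smaller
        # than feared.
        belief = update(belief, p_evidence_if_true=0.60, p_evidence_if_false=0.25)
        print(f"after re-estimation: {belief:.2f}")  # ~0.73

Each observation moves the probability without ever pinning it to 0 or 1. The willingness to keep running updates like this, rather than declaring the question settled, is precisely the habit the rest of this section argues for.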

Our knowledge is then something that we accrue over time, steadily accumulating with each discovery. Does this sound familiar? It might: it’s the foundation of the scientific method. The word “science” in “computer science” should give us pause here. If science lies at our foundation, perhaps we should think like scientists and use their methods to accrue knowledge.

In some ways, Bayesian thinking is probably the inescapable endpoint for our industry. Young, fledgling industries are often characterized by a lack of mathematical rigor, and software has been no exception. During this period, management fads, so-called gurus promising quick and simple answers for certain success, and other epistemically-failed ideas run rampant through the culture. But with time, the mathematization of an industry slowly begins to occur. It may be that we are approaching that juncture now.

Perhaps our industry needs a new rallying cry, something on which to base our epistemology. Just as Edsger Dijkstra’s famous 1968 letter, “Go To Statement Considered Harmful” [9], led to a sea change in how we structure our code, a new rallying cry might serve us today. It’s remarkably straightforward: “Update your priors.”

Agile Methods

Born out of frustration with the repeated failures of the Waterfall method, Agile methods provide us with a flexible approach to building software. Since their arrival on the scene in the early 2000s, they have proven to be a successful template for our industry and are widely adopted today.

At first glance, Agile methods are about responding to change and making our priority the delivery of working software that meets its customer’s needs. It’s from this that the Agile mantra “Inspect and Adapt” derives. But if we dig a bit deeper, we find that Agile is actually built on a foundation of epistemic humility. Agile methods put us on the path of constant reevaluation of our knowledge by embracing change and focusing on inspecting and adapting. In so doing, they move us away from the static form of thinking that underlies epistemic failure. We are much more likely to succeed when we regularly inspect our knowledge and willingly change it when we find it incorrect.

But we aren’t truly Agile if we simply adopt our chosen methodology’s required steps, checking each box as we do, and pin an Agile Certificate of Completion to our wall. To be truly Agile means to change our fundamental approach to accruing knowledge. It means being ceaseless in our pursuit of ever more truthful beliefs, often at the expense of abandoning our most-cherished and certain values. For it’s quite likely that in our most certain knowledge lie our most certain problems. To be Agile means to be willing to inspect those deeply-held beliefs and change them if required. It’s a difficult task but one we must adopt if we wish to consider ourselves Agile and build a foundation for epistemic success.

Of course, nothing about Agile methods prevents us from turning them into a dogma that won’t tolerate questioning or dissent. Many of us have experience with Agilists who decry straying from the rigid path of their doctrine. For example, the Scrum Master who wields autocratic power, the Product Owner who will not split their stories into smaller pieces to ease delivery, or the Agile Coach who insists that nothing in their framework can be modified. Not only is this ineffective, but it also belies the fundamental tenets of Agile. We should be willing to look in the mirror and see when our beliefs in flexible methods have themselves become inflexible and dogmatic.

Intellectual Humility

The most powerful tool we have in our epistemic arsenal is the willingness to be intellectually humble. It is also the most difficult to use. To be intellectually humble means to be willing to admit uncertainty and change our beliefs when new facts emerge. We accept, however grudgingly, that we are in some way wrong today but might be less wrong tomorrow if we can only put aside our egos. Seen in this light, epistemic failures are a form of cognitive arrogance and laziness. Instead of doing the hard work of evaluating our beliefs, we fall back on the easy practice of adhering to dogma.

Being intellectually humble means that we realize we never know what’s absolutely true, and therefore some degree of doubt is essential. Understanding this places us on the path to seeking ever greater knowledge and verifiable beliefs through experience, reassessment, and refinement. It isn’t an easy practice, however. The siren song of certainty always beckons us, its danger ever-present and tempting, ready to convince us that we are finally free of nagging doubt. Our job is to resist it.

It’s important to state that intellectual humility does not mean throwing up our hands in frustration and exclaiming, “there’s no point in anything if we’re always wrong!” While it’s true that we can never find absolute truth, we still move forward with the knowledge we have at the current moment. We simply accept that we may soon need to revise it, and when that time arrives, we won’t shrink from doing so.

Final Thoughts

Given the long list of failures we have reviewed so far, it might be tempting to conclude that our industry is hopelessly mired in epistemic failure. That’s an understandable but inaccurate conclusion. If we step back and view history’s sweep, we see that we eventually jettison our incorrect beliefs however much we struggle to do so at any given moment. It’s this difficult but ultimately successful acceptance of new beliefs that gives our industry its ability to adapt and succeed.

But why has this been such a struggle over the years? It’s human nature. The essential problem is that beliefs are notoriously difficult to change and resistant to facts. Our challenge is to understand that our knowledge is always incomplete, and to accept how difficult that admission is. It comes down to a realization at once enervating and energizing: we are inevitably attracted to cognitive stasis. Enervating, because the pull seems so hopeless to overcome. Energizing, because we realize our duty to overcome it is eternal. Only then do we replace epistemic failure with success.

References

[1] Stanford Encyclopedia of Philosophy
https://plato.stanford.edu/entries/epistemology/

[2] Modern Flat Earth Beliefs, Wikipedia
https://en.wikipedia.org/wiki/Modern_flat_Earth_beliefs

[3] Royce, Winston (1970), “Managing the Development of Large Software Systems”
http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970.pdf

[4] Brooks, Fred (1975), “The Mythical Man-Month”
https://en.wikipedia.org/wiki/The_Mythical_Man-Month

[5] Brooks, Fred (1987), “No Silver Bullet — Essence and Accident in Software Engineering”
https://www.cgl.ucsf.edu/Outreach/pc204/NoSilverBullet.html

[6] Standish Group 2015 Chaos Report
https://www.standishgroup.com/sample_research_files/CHAOSReport2015-Final.pdf

[7] Meadows, J. (2020), “Accepting Uncertainty: The Problem of Predictions in Software Engineering”
https://jmlascala71.medium.com/accepting-uncertainty-the-problem-of-predictions-in-software-engineering-26dbcd120b90

[8] Bayesian Probability
https://en.wikipedia.org/wiki/Bayesian_probability

[9] Dijkstra, Edsger (1968), “Go To Statement Considered Harmful,” Communications of the ACM
https://dl.acm.org/doi/10.1145/362929.362947


Kevin Meadows

Kevin Meadows is a technologist with decades of experience in software development, management, and numerical analysis.