PREDICTABILITY
TTM | ROI | Sellability | Agility | Reputation
The ability to predict something of consequence, such as effort, cost, behaviour, event, or availability.
“For the man who grasps the causes of future events necessarily grasps what lies in the future.” - Cicero.
Predictability - a simple word for a notoriously tricky subject. Our ability to predict is invaluable, yet we rarely focus on this quality, even though it's known to have important benefits and significant consequences. It covers a lot of ground, including the ability to accurately predict the outcome of an event, how someone (or some entity) will behave or react, the size (effort) of a work item, or when a change will be released for use.
It shouldn't come as a massive surprise when I say that business relationships - and therefore success - are built on Trust. There are few faster ways to break trust than failing to deliver on your promise. Of course this is rarely a case of intentional deception (most businesses aren't in the habit of intentionally deceiving their customers), rather it's one of invalid assumptions (Assumptions), and consequently, poor Predictability. The business (through its representatives) predicted and promulgated an expectation (e.g. effort, delivery date) that proved to be invalid.
RELIABILITY CONSEQUENCES
The main consequence of poor predictability is poor reliability (and low confidence) from the people and businesses who depend upon you. In the software world this is mainly the result of: slowness (in terms of change, communication, or reaction), lateness, incorrect sizing, quality issues, system failures, service disruptions, or missing some other expectation.
If the expectation was wrong, why was it wrong? Sometimes we find ourselves too caught up in solving the next problem, or too accepting of our poor ability to predict, that we neglect the root cause. We find ourselves treating the symptom, not the cause, habitually placating customers with discounts or credits to deflect them from the poor service they've experienced, as if it's a minor inconvenience, rather than solving the underlying cause of that poor experience. This may generate some positive sentiment, but it's short-sighted, and doesn't serve our fickle friend, trust.
So what sort of quality is Predictability? Clearly it's not technological, although it does have technological ramifications. Neither is it divination, although it might sometimes feel that way. The closest relationship I can think of is one of trust and Confidence. High predictability breeds high confidence. But it's also preventative. If we are willing to listen, we can use it to prevent bad things from happening.
DECISION-MAKING
Prediction is central to many of our decisions, be it social (e.g. can I successfully predict which train will allow me to meet my friends for dinner?), or work-related activities (e.g. can I accurately indicate how long it will take me to produce my output?). Yet it doesn't receive the level of attention one might expect.
Project managers in particular love high predictability, as it equates to high confidence and low risk - i.e. the accurate prediction of budget (costs) and timelines. Unfortunately, this doesn't reflect how some projects are run. Maybe you've been involved in a project where the project manager has arbitrarily added a tolerance (e.g. 30%)? This represents a leeway, a bet if you like, allocated in case things don't go to plan (they rarely do). Their past experiences have shown this to be the safest course of action, and so, quite understandably, it's become a normalised practice, regardless of the project's shape, size, or complexity. This, though, is an overreaction that doesn't tackle the underlying problem - the failure to correctly identify (or resolve) the unpredictable aspects - and is therefore unsustainable for the business.
Let's now look at some common causes of unpredictability.
COMMON CAUSES
The more common causes of poor predictability include: Assumptions, Tight-Coupling, Blast Radius, Surprise, the batching of work, poor communication (and thus understanding), a lack of preparedness, poor combinational assumptions, Innovation, skills and knowledge, and people. Fundamentally, it's about failed assumptions. Most of the causes that follow are just specialised versions of that.
Let's look at them now.
DEPENDENCIES & TIGHT COUPLING
Tight-Coupling may be the most glaring example of an assumption in the software world. The more things we're (tightly) coupled to, the more dependencies to manage (a large Blast Radius). The more dependencies to manage, the greater the likelihood that one (or more) of those dependencies will throw us a curveball. Predicting which components will be problematic, or the manner of the problems we'll face, is difficult.
POOR COMMUNICATION
I've heard it said that only half of the information spoken is retained by the listener. Combine this with the effect of miscommunication, and it leaves enormous room for error.
Poor communication generates a difference in understanding and expectation, between the information provider and receiver(s). People are left with incomplete, or invalid information, leading to more incorrect assumptions, and Predictability challenges.
LACK OF PREPAREDNESS
A lack of preparation suggests insufficient upfront work, or thinking, has been undertaken. In the software world, this is regularly seen during the implementation phase, but it's generally a deficiency in: project inception (e.g. starting a project before it has been sufficiently articulated), acceptance criteria (e.g. a lack of clarity), poor requirements (unclear or insufficient), or poor "definitions" (e.g. Definition of Ready or Definition of Done) that either don't exist, or aren't used as a yardstick.
Aside from the Waste aspect (the Seven Wastes), the lack of context makes Predictability difficult. People caught in this predicament often don't know what they're doing, nor why they're doing it, suggesting they can't accurately or comprehensively undertake their work, or efficiently identify problems.
COMBINATIONAL ASSUMPTIONS
Firstly, combinations have the potential to create a Combinational Explosion. Secondly, and more keenly felt in this context, we may not appreciate that selecting a different combination may produce a very different outcome.
Substituting some (or all) aspects of a proven combination won't necessarily generate an equivalent result. It's an approach commonly applied to team construction. The (flawed) thinking goes that people are interchangeable; i.e. we can replace person A with person B of the same role, and still get the same result. This is rarely the case. Even (supposedly) small variances can create significant differences. Humans are complex beings: we make different choices (Rational Ignorance), and have different skills, experiences, cultures, ideas, mindsets, risk tendencies, and empathetic associations. It's not really safe to predict quality or output levels should you decide to replace staff half-way through a project.
SURPRISE & KNOWN QUANTITIES
Surprise is really just an unexpected outcome from a poor assumption, due to an unknown quantity. We don't like surprises - at least not the bad ones - they create Unplanned Work, system failures, missed deadlines, unhappy customers, late nights, the proverbial burning of the midnight oil, and unpredictable behaviour. We're forced to find solutions to problems that we weren't even aware existed, so hadn't planned for.
INNOVATION & DIVERGENCE
The software industry is renowned for its rapid change and Innovation. That implies something new (i.e. divergence), unknown, and consequently unpredictable.
EXTERNAL EXPERTISE
Attempting to accurately estimate the effort required to do something new (and unknown) is perilous. It's notoriously difficult to scope work that has never been attempted before, simply because the problems haven't yet been encountered. Unknowns - that require accurate scoping - may indicate a high dependence on their success (which is a form of betting), and are therefore a good reason to bring in external expertise.
BATCHING OF WORK
The batching of work also generates predictability concerns - something acutely seen in Waterfall projects. When we employ batching, we perform a large chunk of one type of activity, followed by a large chunk of another, and then another. "Large" is the operative word here. We're talking about big change, typically distributed through large and lengthy releases (Lengthy Releases).
Batching presents many challenges. Large change contains many assumptions (many more than one small change). That implies risk. It's much riskier to make such a change - many more factors are at play, so more things can go wrong. Compared to small changes, it's also very stop-start (Waiting), moves glacially, feedback is slow and hard to come by (Fast Feedback), and problems aren't always encountered quickly enough to solve them.
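To make the risk point concrete, here's a small sketch (the 2% failure probability per change is purely illustrative, not a figure from this book): if each change in a batch carries an independent chance of containing a failed assumption, the chance that the batch contains at least one grows rapidly with batch size.

```python
def chance_of_failure(changes: int, p_per_change: float) -> float:
    """Probability that at least one of `changes` independent changes
    contains a failed assumption, each failing with p_per_change."""
    return 1 - (1 - p_per_change) ** changes

# At 2% risk per change, one small change is ~2% risky,
# but a batch of 50 changes is ~64% risky.
small = chance_of_failure(1, 0.02)
large = chance_of_failure(50, 0.02)
```

The independence assumption is generous to batching - in practice, changes in a batch interact, which only makes matters worse.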
Larger releases (batches) are also better at hiding quality issues. I shudder to think of how many times I've seen Circumvention used on a large release in order to deliver it on time. A large batch requires us to process far more (technically and cognitively), such that we can't comprehensively process it. To play on the proverb, we can't see the tree, due to the forest.
Batching also implies fewer releases, and thus, Hand-Offs, Waiting, and less communication. It's quite typical for the teams involved in a batching model to never be fully invested, partly because they've not collaborated on it. Indeed, it's common for some specialists to only gain access to the project output after a large proportion of the work is already complete, inevitably leading to difficult communication, political infighting, and a loss of Shared Context. If you don't engage with stakeholders (at the appropriate times), you can't necessarily predict their behaviour, nor the outcome.
Sizing is another interesting aspect made difficult with large batches. The greater the number of changes, the greater its divergence from any previous work we may have undertaken, making comparison - a convenient and reliable way of sizing - difficult. We break down work items for this very reason. The smaller a work item is, the fewer assumptions and less complexity it contains, making it easier to find commonality, and thus compare it (predictably) to others.
PEOPLE
People represent one of the most unpredictable aspects of a business. A poor Culture and Team Dynamics can create: friction, bickering and infighting, despondency and slowness (despondency doesn't tend to create effectiveness), a lack of empathy (and thus, team spirit), exclusion (insufficient diversity), unseen work (people working behind other people's backs), duplication of work (“I don't trust them to do it right, so I'll build my own version”), and a general lack of Trust. Unsurprisingly, this leads to poor Predictability.
SKILLS & TRAINING
Linked to the “people question” is that of skills and training. It's difficult to predict how long it will take someone to acclimatise to, and understand, something new or untried, or their exact (individualistic) training needs. Coupling a skills deficiency (peculiar to that individual) with a time-critical outcome is a risky practice. It puts that work item on a critical path trajectory, but it also pressurises the individual to compensate for a deficiency (e.g. work overtime) not necessarily of their making.
(LACK OF) SAFETY NET
A Safety Net is a tool or process that reduces Surprise and thus, generates confidence. It makes change more predictable.
For instance, should I make a change to service A, which already has robust and broad (automated) test coverage, then I can quickly predict, with a high degree of confidence, whether my change will work. Another form of Safety Net may arguably be having like-for-like environments. If my development environment mirrors the production environment (including data sets), then I can confidently predict my software will work as intended when it's promoted.
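As a minimal sketch (the function and its behaviour are invented for illustration, not taken from any real service), a safety net is simply a suite of checks that runs on every change, so a regression surfaces immediately rather than in production:

```python
# A hypothetical function inside "service A".
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rejecting nonsense inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The safety net: automated tests that run before every release.
def test_apply_discount() -> None:
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

If the tests pass, I can predict, with high confidence, that my change hasn't broken existing behaviour; if they fail, I've been told before my customers have.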
FORMS OF PREDICTABILITY
We should probably talk about some of the different forms of Predictability commonly seen in the software industry. They are:
- Effort.
- Release (Date).
- Outcome, or event.
EFFORT PREDICTABILITY
Prediction is widely used to size (i.e. effort) a work item. Whilst it's often viewed dispassionately (even derogatorily), it's necessary for: budgeting, marketing, customer relationships, pricing (how much to charge customers), return on investment (ROI) calculations, and long-term planning (we can't start the next work item until we're finished this one). Additionally, we also tend to predict a delivery date from it (see the next section).
Accurately predicting effort, though, is notoriously difficult. Most of the preceding problems affect sizing, including: divergence (such as from Innovation), the influence of dependencies, large batches of changes, Complexity, Surprises, unknowns, skill-sets, and politics. You must consider them all.
POLITICS
The political aspect is a curious one, but I've seen teams intentionally over-egg their estimate. I've also seen estimates chopped because they didn't meet the expectations of certain business executives (which, come to think about it, could be a cause of the first point).
Effort prediction is a funny one. Predict too low and you're the fool who's cost money. Predict too high and you're either never asked again, or the fool whose next prediction won't be believed. That's why accurate prediction is important.
LOOSE ESTIMATES
You'll sometimes be pressed to estimate effort based on very limited information. No-one should assume that this type of estimate is accurate. How could it be, when so much information remains hidden? But this approach is quite acceptable, and normal, assuming it's treated with caution.
A few things can be done here. We might choose to treat the initial estimate with kid gloves (a caveat emptor), and scope and budget it with some additional flex. Another approach is to apply a confidence factor to the estimate; e.g. “it's around 100 days of work, with a confidence factor of 50%”. It could be anywhere between 100 and 150 days of work (no, I'm not going to get into a discussion on “points” here). We could also look for similar work - albeit the problem here is there's not enough information to know how close a match it is.
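One way to read that confidence factor (this interpretation is mine; it isn't a standard formula) is to treat the shortfall in confidence as potential overrun on top of the base estimate:

```python
def estimate_range(base_days: float, confidence: float) -> tuple[float, float]:
    """Treat (1 - confidence) as potential overrun beyond the base
    estimate, giving a (low, high) range rather than a single number."""
    overrun = 1 - confidence
    return (base_days, base_days * (1 + overrun))

low, high = estimate_range(100, 0.5)  # 100 days at 50% confidence -> 100 to 150
```

The value isn't in the arithmetic - it's in forcing the conversation to be about a range, not a falsely precise number.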
It's worth emphasizing that this is not a prediction. We're basically saying it can't be accurately predicted until something changes (e.g. reduce the unknowns).
I'll discuss solutions to these problems in the Predictability Solutions section, but they're mainly treatments for the aforementioned problems, such as removing doubt, breaking work items down into smaller sizes, spiking new technologies and approaches, and employing Uniformity where appropriate.
DELIVERY PREDICTABILITY
Successfully predicting a release date - ensuring that a solution is available when agreed - is another important aspect.
POOR DELIVERY PREDICTABILITY
As I write today, I still await delivery of a product I ordered three weeks ago. The lead time was 3-5 days. After the stock was released by the retailer, it spent over a fortnight with the delivery firm, either in their distribution centre, or with several failed delivery attempts to some random location (I have a lovely picture of someone else's garden). The overall experience left me very frustrated and slightly cynical of the retailer, due to their (conscious) selection of their distribution partner. They demonstrated very poor Predictability and I won't buy from them again.
We know the release date is heavily influenced by the (successful prediction of) effort, so I won't go over old ground, yet there are other considerations, such as: too much Work-in-Progress (WIP) (we can't finish anything), Waiting (the Seven Wastes), the expediting of other work (Expediting), politics, and even customers (e.g. they're not ready to integrate with it).
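The Work-in-Progress point has a well-known formalisation - Little's Law - which this chapter doesn't cover, but which is worth sketching: average lead time equals average WIP divided by average throughput, so starting more work without finishing any directly pushes out delivery dates.

```python
def average_lead_time_weeks(wip_items: int, items_finished_per_week: float) -> float:
    """Little's Law: average lead time = average WIP / average throughput."""
    return wip_items / items_finished_per_week

# Same team, same throughput: doubling the WIP doubles the wait.
modest_wip = average_lead_time_weeks(10, 5)  # 2.0 weeks
heavy_wip = average_lead_time_weeks(20, 5)   # 4.0 weeks
```

The law holds for long-run averages in a stable system, so treat it as a planning heuristic rather than a per-item guarantee.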
There are of course solutions, which I'll describe in the Predictability Solutions chapter, including: employing regular releases, automation, reducing Work-in-Progress (WIP), promoting Uniformity (to avoid divergence, and nasty surprises), repeatability (e.g. practices and mechanisms), small batches of work, organisational changes, and Observability.
OUTCOME (EVENT) PREDICTABILITY
Predicting the occurrence of an event is another very useful quality, assuming it's given the necessary credence. For centuries we've recounted stories of sages who've predicted an important outcome, from the Oracle at Delphi, the Sibyls (and the Sibylline Books, closely guarded by the Romans [1]), Nostradamus, Cassandra (doomed to predict the future, but for no one to listen [2]), to modern-day predictions on the economy and sports. Outcome predictions are held in high regard.
Of course prediction in this sense is only relevant when the outcome is relatable, and important to someone; e.g. "if you do this, then this is the likely outcome". If the outcome (its effects) is poorly described, irrelevant, or insufficiently acute, then it will gain little traction with the decision-makers. I find many technologists tend to have conversations with business stakeholders where they successfully articulate a technical aspect, but fail to articulate the business outcome. For instance, the (potential) outcome isn't poor Security, it's harm to Reputation and Sellability. If that outcome is unacceptable, then you now know where to focus attention.
There's a great quote (one of many) from Winston Churchill on history. He says: "The farther backward you can look, the farther forward you can see." Meaning that we can better predict the future by visiting the past. A system that has failed repeatedly in the past will likely continue to. A team that averages a velocity of fifty story points per sprint isn't suddenly going to jump to eighty. A person who regularly fails to deliver probably isn't going to start doing so now.
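Churchill's maxim applies quite literally to delivery forecasting: projecting from a team's historical velocity is far more predictable than projecting from an optimistic target (the backlog size and velocities below are invented for the example):

```python
import math

def sprints_to_finish(backlog_points: int, past_velocities: list[int]) -> int:
    """Forecast remaining sprints from the historical average velocity."""
    average_velocity = sum(past_velocities) / len(past_velocities)
    return math.ceil(backlog_points / average_velocity)

# A team averaging ~50 points per sprint won't clear a
# 400-point backlog in the 5 sprints an optimist might promise.
forecast = sprints_to_finish(400, [48, 52, 50, 50])  # 8 sprints
```

Looking backward (the recorded velocities) is what lets us look forward (the forecast) with any confidence.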
As a general rule, I feel it holds water, but with some notable exceptions. For instance, the recruitment sphere regularly promotes the principle that "past performance is the best predictor of future performance." That, however, implies that we're measuring performance on a like-for-like basis; i.e. we're using the same (or at least similar) input variables. It's a combinational assumption that's - at the very least - inconclusive, and akin to us taking a recipe, replacing one (or more) of its ingredients, and still expecting a wonderful cake. Not necessarily. This expectation is most obvious in professional sports. There are many examples of sports persons tipped for stardom, who have proven they can deliver with one team, but then failed to deliver with another. You can't necessarily predict individual performance within an entirely different context (business), culture, team members, hierarchy, communication channels, and domain [3]. Many jobs have a probationary period for this very reason.
Another challenge here is the Cassandra effect. You may successfully predict an outcome, but that's of little benefit if the decision-makers aren't listening. When Chamberlain returned from Munich in 1938, after speaking with Hitler, he declared that there would be “peace for our time”. Yet within a year, the Allies were at war. Chamberlain failed to predict the outcome, mainly because he neither had a measure of the man, nor had his government considered the predictions of others.
Churchill had been a staunch opponent of the appeasement of Hitler. Around a fortnight after Chamberlain's triumphant return, he had this to say: “And do not suppose that this is the end. This is only the beginning of the reckoning. This is only the first sip, the first foretaste of a bitter cup which will be proffered to us year by year unless by a supreme recovery of moral health and martial vigour, we arise again and take our stand for freedom as in the olden time.”
Of course this problem also exists in the corporate space, with stories of executives failing to act upon evidence that could have saved future hardships. Whistleblowing policies are meant to counter this.
The challenge with outcome prediction is the said event may never occur. Remember the story of the boy who cried wolf? One could argue that he predicted a wolf would come. Unfortunately, due to his previous shenanigans, he'd lost all respect from the villagers, so no one rushed to help when the wolf finally arrived. The moral is: “this shows how liars are rewarded: even if they tell the truth, no one believes them” [4], but it could also be a cautionary tale about predicting things that don't come true. The predicament then is that the people who do predict consequential outcomes, may be ridiculed, ostracized, or simply ignored, yet without them speaking out, the likelihood of that event occurring increases.
The people who succeed have the ear and trust of those making the decisions. But relationships - and trust - take a long time to form. Many people who predict a (significant) problem never even meet the decision-makers, let alone get their ear - something common in hierarchy-oriented organizations - so can't influence the outcome.
Complex Systems also create their own (event) predictability challenges. You can't necessarily predict how a complex system - particularly one under duress - will behave. Cause-and-effect is indiscernible. A failure in one area leads to another, and another, and another, in a cascading manner (much like it did with the Titanic), with a single (seemingly) insignificant problem ignored, causing another problem, also ignored, until a much bigger problem occurs. The solution, unsurprisingly, is to reduce complexity.
PILLARS AFFECTED
TTM
The most interesting forms here are effort and delivery prediction. Some business decisions are predicated on being "first to market", or ahead of the competition. They might have chosen a different path, had they known the truth. Poor Predictability makes this difficult.
ROI
From a financial perspective, poor prediction typically equates to shattered budgetary expectations. It costs more to complete the work than was anticipated, or agreed. Consequently, the project may: (a) be cancelled, suggesting we get zero return on our investment, (b) have its quality cut, creating friction between the business and its customers that impedes Trust and Reputation, or (c) continue, but with a newly revised, higher budget, thus lowering ROI.
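Option (c) is easy to show with invented figures: the same return against a larger spend is, by definition, a lower ROI.

```python
def roi(gain: float, cost: float) -> float:
    """Return on investment, expressed as a fraction of the cost."""
    return (gain - cost) / cost

planned = roi(gain=150_000, cost=100_000)  # 0.5, i.e. a 50% return
overrun = roi(gain=150_000, cost=130_000)  # ~0.15, i.e. roughly a 15% return
```

A 30% cost overrun here doesn't trim the return by 30% - it cuts it to less than a third of what was planned.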
SELLABILITY
The ability to accurately predict the time, size, or cost of an item, or to foresee an impending event that others can't, is an important quality to your customers. They expect it of you. Businesses that (regularly) fail to meet their predictions may be considered unreliable, and not worth the investment.
TESTING YOUR METTLE
Some clients will test you before signing up. Your ability to predict what they need, and then display the mettle to deliver on your promise creates a high degree of trust, and thus, more sales.
REPUTATION
Poor prediction can leave you in a difficult position. You must either stop the project, secure more time or money, reduce functionality, or reduce quality.
Unfortunately it's all too common for quality to be first for the chopping block. Poor quality can be hidden - at least for a short while (Quality is Subjective) - so is enticing to some. This view, of course, is very short-sighted, leading to Circumvention, Sustainability concerns, Technical Debt, slowness, the Cycle of Discontent, and reputational harm.
SUMMARY
If prediction were easy, there would be no late-running or overspent projects. No frustrated customers, still awaiting the delivery of that fabled feature they were promised a month ago. No significant events causing outages or severe disruption. But let's be realistic. We'll never be able to predict everything.
What we can do, though, is improve our ability to predict. Throughout this book, I provide you with the information to do so. By correlating the issues you're facing with the consequences (e.g. late delivery, poor quality) described in these pages, you can identify which qualities are missing, and then use the pointers to the mechanisms and practices I've provided to improve your situation, and thereby Predictability.
FURTHER CONSIDERATIONS
- [1] - https://en.wikipedia.org/wiki/Sibylline_Books
- [2] - Cassandra. Fated to tell true prophecies but never to be believed. https://en.wikipedia.org/wiki/Cassandra
- [3] - Past performance doesn't necessarily predict future performance - https://www.linkedin.com/pulse/past-performance-best-predictor-future-lesson-mashudu-nethavhani
- [4] - The Boy Who Cried Wolf - https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf
- Assumptions
- Blast Radius
- Circumvention
- Combinational Explosion
- Complex Systems
- Cycle of Discontent
- Definition of Ready
- Expediting
- Fast Feedback
- Innovation
- Lengthy Releases
- Predictability Solutions
- Quality is Subjective
- Rational Ignorance
- Seven Wastes
- Shared Context
- Surprise
- Technical Debt
- Tight Coupling
- Unplanned Work
- Waterfall
- Work-in-Progress (WIP)