Boy, did I kick a hornet’s nest.
A couple of weeks ago, I shared an article from MIT Sloan Management Review, Implement First, Ask Questions Later (or Not at All) by Stephen J. Andriole. A quick summary:
As a result of easy-to-acquire SaaS tools and pressure to harness the relatively unknown possibilities of emerging technologies before competitors do, companies are increasingly bypassing the heavyweight “requirements gathering” processes of classic enterprise IT and, instead, more quickly trying new software apps to see what works. Some of these adoptions iterate and evolve. Some get tossed on the scrap heap. But speed and a test-and-learn approach are favored.
Crikey, were there polarized comments in reaction to that share!
Some people enthusiastically agreed: those who didn’t do this were dinosaurs, plodding to extinction. Others vociferously disagreed: those who did do this were meteors, hurtling toward disintegration in a fiery crash.
Obviously, there’s a wide swath of middle ground between those two extremes. But I haven’t seen that dramatic of a divided response since suggesting that, hey, maybe marketers should manage their own technology stacks 10 years ago. (“Die, evil shadow IT devil scum!”)
Frankly, I was surprised. In essence, the article simply revealed another case of agile overtaking waterfall. I didn’t think that was considered radical anymore. And empirically, don’t we all see tons of examples these days of people trying out SaaS apps with a fraction of the procurement overhead of decades past?
So I took a pass at illustrating the two models side by side — a draft of the illustration at the top of this post. I figured that by seeing the time-to-adoption advantage and organic proof of utility of the “agile” approach, the naysayers in the earlier thread would concede that the agile approach could be good in many circumstances.
But the ensuing debate surfaced a number of good points. First, even if we consider these two models to be a binary choice — it’s either one or the other, which is clearly not the case — there are different circumstances under which each makes more sense.
In situations where you face higher costs to adopt — especially if cost for even the initial adoption is high — or there are greater risks or dependencies around adoption, then the waterfall model is probably a better fit. The stakes are higher and test-and-learn is less appealing. The autopilot software for a commercial jet, for instance.
But there’s a second axis to consider: certainty. Or, rather, uncertainty.
Agile Was Made for Uncertainty
Waterfall approaches are predicated on the assumption that you actually know what will work. The more certainty you have in that — up to a reasonable time horizon, of course — the better able you are to plan out the right solution in advance and optimize your adoption.
You can pick which risk cluster you prefer, but there’s no free lunch.
But the thing about the agile model is that costs and risks for test-and-learn pilots are usually scoped to be relatively small. And the risks associated with scaling only come into play when the pilot has been successful — at which point, you’ve mitigated arguably the greatest risk in any software adoption: will this app actually deliver the outcomes we want?
Freemium Shifts the Cost/Risk Curve Downward
In this age of software-as-a-service (SaaS), the direct cost associated with trying new software apps has also dropped significantly. SaaS go-to-market models such as freemium, free trials, in-app purchases, transparently scalable pricing — even for so-called “enterprise” software — are increasingly common, making it easier for buyers to try apps with smaller scoped trials.
I know there’s still debate around freemium-ish models in enterprise software. But Box, Canva, Dropbox, G-Suite, GitHub, LinkedIn, Slack, Zoom, etc., have proven that it’s certainly not a myth. It may not be right for every app or platform, of course. But buyers are increasingly demanding better alignment between vendor pricing and more agile software adoption.
Big-bang, bet-the-farm software deployments are less and less appealing.
Now, that’s not to say there aren’t also costs associated with software adoption other than the direct price of the app itself, such as integration and training.
But even those costs are steadily shifting downward, thanks to expanding platform ecosystems — you knew I was going to connect this to platform thinking in some way, right? — and better UX and onboarding flows associated with the consumerization of IT.
An agile approach doesn’t work for everything. But it works in more scenarios than you might have imagined even a few years ago.
In Practice, Waterfall and Agile Are Blended
There are very few “pure” waterfall or agile projects in the wild. Almost everything we do has some degree of planning and goes through some adaptation along the way. All good agile projects start with a need, a hypothesis, a “job-to-be-done” that is connected to business value. It’s just a question of the ratio of planning to doing.
Instead of arguing over the extremes, we should be able to agree that most software projects happen along a continuum between these two poles:
However, I would argue that the set point for technology adoption along this continuum has been steadily shifting to the right. And that in this decade, we’ve witnessed a significant jump toward the agile end of the spectrum.
I’ll leave you with this quote from Andriole’s article in MIT Sloan Management Review:
We heard a consistent theme. As one business process manager at a Fortune 100 pharmaceutical company put it, “We’ve abandoned the strict ‘requirements-first, technology-second’ adoption process, whatever that really means. Why? Because we want to stay agile and competitive and want to leverage new technologies. Gathering requirements takes forever and hasn’t made our past projects more successful.”
Are we cool? Or have I just kicked the hornet’s nest yet again?