Wednesday, April 23, 2014

Pilot Projects and Problems

by Kent Aitken | RSS: cpsrenewal | Facebook: cpsrenewal | LinkedIn: Kent Aitken | Twitter: kentdaitken | GovLoop: KentAitken

Earlier this month the Director of the MaRS Solutions Lab, Joeri van den Steenhoven, spoke in Ottawa about Systems Change. The Systems Change idea is that to solve Canada’s complex problems, we not only have to create new solutions but also alter the systems in which we develop and implement solutions in the first place.

There was much to dig into from van den Steenhoven’s talk, but for today I want to point to a statement he made about pilot projects. His idea was that while we dutifully test out ideas (we being social entrepreneurs, businesses, municipalities, provinces, or the federal government), we rarely maximize or scale what we learn through those pilots.

I’m sure we could make a long list of reasons why that is, but I’ve had three factors in mind lately: the We’re Special syndrome, False Negatives, and False Positives.


We’re Special

Here, organizations neglect to look for, or take advantage of, the lessons of previous pilot projects or programs on the basis that their environment, organization, or challenge is fundamentally different, or because they mistakenly believe that they’re the first to work in a particular problem space (hat tip).

Every organization is unique, yes, so other organizations’ lessons learned may only provide a rough guide, or principles and parameters, for a pilot project. But very few organizations are downright special. Even exceptionally talented people are prone to error and bias, and few solutions are entirely novel (“All art is either plagiarism or revolution.” - Paul Gauguin). So those principles and parameters are invaluable starting points.


False Negatives

Any pilot launched without the requisite resources, time horizon, provisions for adjustment, or level of understanding will almost certainly get chalked up as a failure. A pilot project will likely be a pretty rough draft of the hypothetical full-scale program: pilots are new to the team executing them, don’t benefit from economies of scale, and have fewer collaborators who can spread continuous improvements.

So, that game-changing idea that needs buy-in? It’s going to get approved or rejected based on the single worst version of it that anyone ever sees. This is where genuine understanding must support the metrics, especially when it is difficult to set them meaningfully. Which parts of the experiment were meaningful, and which weren’t? Were the shortcomings because of the concept, or because of the design, resourcing, or execution?


False Positives

Occasionally pilot projects fail to lead to meaningful organizational learning because of false positives. As with false negatives, it’s hard to set metrics for a project that has never been done and is not well understood. So, one of the easy mental baselines is zero: that is, before the pilot, nothing happened; during the pilot, something happened. Ergo, it was a success.

The real standard should be what would have been possible had the pilot project been done very well. Establishing that standard is only possible by employing the defence mechanisms against problems #1 and #2, above: setting parameters by understanding comparable projects done elsewhere, and building as complete an understanding of the environment as possible.

The false positive problem leaves ideas in a dangerous middle ground, with just enough success to avoid adjusting the approach but never revealing their true potential.


Processes, not Projects

Last year Tariq Piracha wrote about thinking about change systemically (see: Change is a Process, not a Pilot). He suggested getting away from the traditional pilot project model for approaching change within organizations, which is the core of van den Steenhoven’s talk: if our current system for testing concepts is flawed, how do we create a better system in the first place?
