
Wednesday, April 23, 2014

Pilot Projects and Problems

by Kent Aitken | RSS: cpsrenewal | Facebook: cpsrenewal | LinkedIn: Kent Aitken | Twitter: kentdaitken | GovLoop: KentAitken

Earlier this month the Director of the MaRS Solutions Lab, Joeri van den Steenhoven, spoke in Ottawa about Systems Change. The Systems Change idea is that to solve Canada’s complex problems, we not only have to create new solutions but alter the systems in which we develop and implement solutions in the first place. 

There was much to dig into from van den Steenhoven's talk, but for today I want to point to a statement he made about pilot projects. His idea was that while we dutifully test out ideas - we being social entrepreneurs, businesses, municipalities, provinces, or the federal government - we rarely maximize or scale what we learn through those pilots.

I’m sure we could make a long list of why that is, but I've had three factors in mind lately: the We’re Special syndrome, False Negatives, and False Positives.


We’re Special

Here, organizations neglect to look for, or take advantage of, the lessons of previous pilot projects or programs on the basis that their environment, organization, or challenge is fundamentally different. Or because they mistakenly believe that they’re the first to work in a particular problem space (hat tip).

Every organization is unique, yes, so other organizations' lessons learned may only provide a rough guide, or principles and parameters, for a pilot project. But very few organizations are downright special: even exceptionally talented people are prone to error and bias, and few solutions are entirely novel ("All art is either plagiarism or revolution." - Paul Gauguin). So those principles and parameters are invaluable starting points.


False Negatives

Any pilot that is launched without the requisite resources, time horizon, provisions for adjustment, or level of understanding will almost certainly get chalked up as a failure. A pilot project will likely be a pretty rough draft of the hypothetical full-scale program; pilots are new to the team executing them, exempt from economies of scale, and have fewer collaborators who can spread continuous improvements.

So, that game-changing idea that needs buy-in? It's going to get approved or rejected based on the single worst version of it that anyone ever sees. This is where genuine understanding must back up the metrics, especially when meaningful metrics are hard to set. Which parts of the experiment were meaningful, and which weren't? Were shortcomings because of the concept? Or because of the design, resourcing, or execution?


False Positives

Occasionally pilot projects fail to lead to meaningful organizational learning because of false positives. As with false negatives, it’s hard to set metrics for a project that has never been done and is not well understood. So, one of the easy mental baselines is zero: that is, before the pilot, nothing happened; during the pilot, something happened. Ergo, it was a success.

The real standard should be what would have been possible had the pilot project been done very well. Establishing that standard is only possible by employing the defence mechanisms against problems #1 and #2, above: establishing parameters by understanding comparable projects done elsewhere, and building as complete an understanding of the environment as possible.

The false positive problem leaves ideas in a dangerous middle ground, with just enough success to avoid adjusting the approach, never revealing the true potential.
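To make the baseline problem concrete, here's a minimal sketch in Python. The numbers, and the idea of a benchmark drawn from comparable projects, are invented for illustration; nothing here comes from a real pilot.

```python
# A toy illustration of the false-positive baseline problem.
# All figures are hypothetical.

def evaluate(pilot_outcome: float, baseline: float) -> str:
    """Judge a pilot against whatever baseline we chose for it."""
    return "success" if pilot_outcome > baseline else "needs rethinking"

pilot_outcome = 40.0   # e.g., users served per week during the pilot
zero_baseline = 0.0    # "before the pilot, nothing happened"
benchmark = 75.0       # what comparable, well-executed projects achieved elsewhere

print(evaluate(pilot_outcome, zero_baseline))  # "success" -- the false positive
print(evaluate(pilot_outcome, benchmark))      # "needs rethinking" -- the truer standard
```

The same outcome reads as a win against zero and as a prompt to adjust against the truer standard; the choice of baseline does all the work.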


Processes, not Projects

Last year Tariq Piracha wrote about thinking about change systemically (see: Change is a Process, not a Pilot). He suggested getting away from the traditional pilot project model for approaching change within organizations. That is the core of van den Steenhoven's talk: if our current system for testing concepts is flawed, how do we create a better system in the first place?

Tuesday, January 14, 2014

Standardizing Innovation

by Kent Aitken | RSS: cpsrenewal | Facebook: cpsrenewal | LinkedIn: Kent Aitken | Twitter: kentdaitken | GovLoop: KentAitken

Voices have been asking how government could take advantage of interesting models such as gamification, crowdsourcing, and nudges, looking for opportunities to innovate. I've tended to think that, if there is value in such approaches, the better question would be "Why are we not already using them?"

And there's a reasonable answer: misalignment between the hypothetical incentives of an organization and those of individual decision makers within it (the principal-agent problem), a misalignment that extends to experimentation with creative solutions to problems.

Such experimentation might or might not pay off for any single project. But it would definitely benefit the broader organization, in terms of pathfinding approaches that might be scalable across many projects. Those benefits, though, would be long-term, widely distributed, and hard to measure. In contrast, the risks would be immediate, local, and direct. Creative solutions and organizations are mismatched.
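To put rough numbers on that mismatch, here's a toy expected-value sketch in Python; every figure is invented for illustration, not drawn from any real organization.

```python
# A toy illustration of the principal-agent mismatch described above.
# All numbers are hypothetical.

p_success = 0.3                  # chance the creative approach pans out
org_benefit_if_success = 100.0   # long-term, organization-wide value of a proven approach
project_cost_if_failure = 10.0   # immediate, local cost borne by the project

# The organization, across many projects, sees a positive expected value...
org_ev = p_success * org_benefit_if_success - (1 - p_success) * project_cost_if_failure
print(org_ev)  # 23.0

# ...but the individual decision maker captures little of the diffuse upside
# while bearing the direct downside, so their expected value turns negative.
manager_share_of_upside = 0.05
manager_ev = (p_success * org_benefit_if_success * manager_share_of_upside
              - (1 - p_success) * project_cost_if_failure)
print(manager_ev)  # -5.5
```

Same project, same odds; only the distribution of benefits and risks differs.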

Nick shared this article on Twitter about creativity's uphill battle, which bears directly on the topic.
"Even people who say they are looking for creativity react negatively to creative ideas, as demonstrated in a 2011 study from the University of Pennsylvania. Uncertainty is an inherent part of new ideas, and it’s also something that most people would do almost anything to avoid. People’s partiality toward certainty biases them against creative ideas and can interfere with their ability to even recognize creative ideas."
Games are their rules, and in most cases these rules discourage deviation from the established path.
"In terms of decision style, most people fall short of the creative ideal … unless they are held accountable for their decision-making strategies, they tend to find the easy way out—either by not engaging in very careful thinking or by modeling the choices on the preferences of those who will be evaluating them."
So how could we hold ourselves to account for our decision-making strategies? That is, how could we best change the rules of the game?


The Rules of the Game

I think that there is opportunity to change the rules where performance measurement, strategic planning, and project approvals meet. In the field of Environmental Economics there's a decision-making model called Adaptive Management, which in effect mandates innovation.

[Image: a standard business planning cycle]

Adaptive Management, by contrast, adds three key features (sketched in code after the list):

1. It mandates experimenting with multiple models to solve a problem
2. It adds a "hypothesis" gate to solution design, mandating a statement like "This is what we think will happen" (inevitably accompanied by the why, which is crucial to enable the third feature)
3. It makes "the acquisition of information with which to make future decisions" a part of the outcome on which managers are measured
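To make those three features concrete, here's a minimal sketch in Python of what one cycle might look like. The class, function, and example names are illustrative assumptions, not an established framework or anything from van den Steenhoven's talk.

```python
# A minimal sketch of one Adaptive Management cycle.
# Names and structure are illustrative, not a real framework.

from dataclasses import dataclass, field

@dataclass
class Experiment:
    model: str                # one of several candidate approaches (feature 1)
    hypothesis: str           # "this is what we think will happen" (feature 2)
    rationale: str            # the "why", which makes the lessons interpretable
    observed: str = ""
    lessons: list = field(default_factory=list)  # information acquired (feature 3)

def run_pilot(model: str) -> str:
    """Stand-in for actually implementing and measuring one candidate model."""
    return f"observed results of piloting '{model}'"

def adaptive_cycle(experiments: list) -> list:
    """Test multiple models, then record what each one taught us."""
    for exp in experiments:
        exp.observed = run_pilot(exp.model)
        exp.lessons.append(f"Expected: {exp.hypothesis}; got: {exp.observed}")
    # Managers are measured on the lessons gathered, not only the outcomes.
    return experiments

trials = adaptive_cycle([
    Experiment("crowdsourcing", "participation will exceed in-house capacity",
               "the task can be parallelized across many contributors"),
    Experiment("conventional contracting", "delivery will be slower but predictable",
               "established vendors have known track records"),
])
for trial in trials:
    print(trial.lessons)
```

The point is structural: multiple models enter the cycle, each carries an explicit hypothesis and rationale, and the lessons are first-class outputs rather than afterthoughts.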



So instead of deciding on a single course of action and following through regardless, an Adaptive Management process would apply the scientific method to complex solution design and test multiple solutions, then dissect what worked, what didn't, and why.

This isn't new, even to government. The U.K. government has been working on randomized controlled trials for public policy. And I think it could work closer to home.

There's even a governance model for it. Government real estate projects now go through a P3 Screen: an assessment of their suitability for a public-private partnership. Organizations could institute an analogous Experimentation Screen for program and policy development.
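Purely as an illustration, a screen like that might amount to a short checklist. The criteria below are my own guesses at what such a screen could ask; the P3 analogy is from the post, but this instrument is hypothetical.

```python
# A hypothetical "Experimentation Screen", by analogy with the P3 Screen.
# The criteria are invented for illustration.

SCREEN_CRITERIA = [
    "Is the problem novel or poorly understood, so outcomes are genuinely uncertain?",
    "Could two or more plausible models be piloted in parallel?",
    "Can each pilot state a testable hypothesis, and why we believe it?",
    "Are the resources and time horizon adequate for a fair test?",
    "Will the information gained be documented for future decisions?",
]

def experimentation_screen(answers: list) -> bool:
    """Recommend the adaptive, multi-model route when most criteria are met."""
    return sum(answers) >= len(SCREEN_CRITERIA) - 1

# Example: a hypothetical program meets four of the five criteria.
print(experimentation_screen([True, True, True, False, True]))  # True
```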


So what would this do?

This would dissolve risk aversion: delivering two models that don't work is part of the goal, so managers would have policy cover and incentive for bold experimentation with policy and program design.

This would create a body of well-documented experiments on which to base future solutions.

This would create situations in which novel solutions are proven to work, and there'd be little need to justify their pursuit over more conventional approaches.

This would lead to crowdsourcing, gamification, and crowdfunding. Or not. The important thing is that it'd lead to what works, and we'd know it. And how, and why.