
Friday, August 12, 2016

The transparency antidote to risk

by Kent Aitken | RSS / cpsrenewal | Facebook / cpsrenewal | LinkedIn / Kent Aitken | twitter / kentdaitken | govloop / KentAitken

A couple years ago my team worked on a project that got some negative attention. I'll go light on the details, but we took some criticism and my teammates got worked up, looking for ways to defend the outcomes. I had the opposite response: now we had some ammunition to do things better in the future.

Across the public sector, the meme seems to be that people are scared of transparency (e.g., open data excuse bingo). Which is, at first blush, understandable: it might mean more scrutiny, more criticism, a loss of control over the conversation.

However, there's another meme that bureaucracies are risk-averse and stick to the status quo. Transparency about public programs and services reveals the risk in the way things are now, and gives decision-makers a level playing field for making decisions about change.

Here's an example. Edmonton has had a public dashboard of public policies and services for years: transit ridership, 311 call response times, growth in small and medium businesses, etc. And some are in the yellow and red, not meeting the benchmarks set.


No one signs up for public service to deliver a service that doesn't meet the public's needs. Making a struggling program's performance freely available means that the program managers will have to explain why things are the way they are, but it also means that the program will receive the support it needs to improve - or change. It's short-term stress for the sake of being part of something worthwhile in the long run.

And the public gets a better shot at getting what they need, sooner - both in terms of information and public outcomes. Where there's little transparency about current performance, potential changes from the status quo get disproportionately scrutinized. And given that even long-running activities - e.g., how governments have been doing IT - are still experiments with uncertain long-term outcomes, we need to put those activities on an equal footing with their alternatives.


Wednesday, April 6, 2016

Culture and risk


by Kent Aitken

"Culture, as the saying goes, eats strategy for breakfast, apparently it also eats technology, and probably has a taste for deliverology as well."
- Thom Kearney [You can change culture now]
The culture-eats-strategy theme was thickly present at last week's Canadian Open Dialogue Forum, a direction-setting conference about citizen engagement in Canada.

Citizen engagement is messy. It's uncertain. And it's in the open. So naturally, in lockstep with the culture-eats-strategy theme was the question of whether government is prepared for risks (real or perceived) associated with citizen engagement and open government.

Former Clerk of the Privy Council Wayne Wouters spoke about culture and risk, asking "How can you do something as a public service employee if you feel 'I'm breaking a rule'?" He disparaged the stock answer to mistakes in the public sector, which is to create a rule to ensure that X never happens again.

The problem is that any system, no matter how reliable, will generate errors with enough repetition - a fact that's at odds with a previous Clerk, Paul Tellier, who called for "an error-free administration." As Deputy Premier of Ontario Deb Matthews lamented, “We’re not allowed a failure on version 1.0 in government.” Unfortunately, that's the culture that has stuck.
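The arithmetic behind that inevitability is straightforward. As a back-of-envelope sketch (the error rate and volume here are mine, purely illustrative, not measured figures for any government):

```python
def p_at_least_one_error(error_rate: float, repetitions: int) -> float:
    """Probability of at least one error across independent repetitions."""
    return 1 - (1 - error_rate) ** repetitions

# A process that is right 99.9% of the time, repeated 10,000 times,
# is all but guaranteed to produce at least one error.
print(f"{p_at_least_one_error(0.001, 10_000):.4%}")
```

At any realistic scale, "an error-free administration" isn't a standard a system can meet; it's a standard that guarantees the appearance of failure.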

A couple stories


Last year, a handful of public servants wrote a letter of praise, intending to send it to the managers of a colleague who'd been doing an amazing job and who was really helping out the broader community through sharing information and advice. When asking for signatories, a number of people said this: "This could backfire - collaboration may not be universally seen as positive." That is, people were worried that drawing attention to an employee's collaborative, whole-of-government approach would diminish that person's standing in the organization.

Collaborative, networked, whole-of-government approaches are the strategy. Culture 1, strategy 0.

More recently, an NGO called In With Forward came to Ottawa to conduct a design lab with policymakers, exploring ways to support street-involved adults. From their blog:
We were testing what it would take to add ethnographic data to policy briefs. How could we give people in power direct access to the experiences of street-involved adults, and how could they use this information in the decision-making process? An oft repeated response was, “We can’t use stories. That’s not what we are asked to provide up the line. I wouldn’t even try to get it through the approval process.”
Design thinking, social innovation, and user research are part of the strategy. Culture 2, strategy 0.

What gives strategy something to chew on?


Ryan Androsoff and Xenia Menzies were exploring a possible hierarchy on Twitter throughout the conference: 

structure > incentives > culture > strategy

Strategy, in this model, has somewhat of an uphill battle. The left side, if poorly aligned with strategy, represents "organizational debt" that has to be addressed before you can make investments and start gaining ground (see: Nesta on Innovation in the public sector: Is risk aversion a cause or a symptom?).

Simply telling people that it's okay to take risks only works on the margins. And, like in the letter example, I'd even argue that it can backfire, leaving employees conflicted between what they're hearing and what they're experiencing.

Structural and systemic biases - in this case, a bias against risk - need structural and systemic responses. Governments have done this with Official Languages and Employment Equity, but we're never going to have a Key Performance Indicator for risk tolerance. Governments can't have risk quotas to meet (I'd dread the reporting: "We undertook 100 activities this year and 5% of them were classified as high-risk").

Which means we need to dissect the structure, incentives, and culture to figure out the DNA of why public sector employees and executives make the decisions they do.

That said, in the meantime I'd propose a natural starting point: risk and hierarchies don't play well together. Short of calling for removing layers, I'd suggest that we revisit the assumption that hierarchies and decision-making chains have to be the same thing. In Australia, for instance, policy directors send advice and briefs directly to Ministers; the senior executives focus on coordination and administration. There are alternatives.

Wednesday, March 9, 2016

Deconstructing Risk Aversion


by Kent Aitken

A year ago a friend joined a new government organization, and one of the things she was struck by was how frank, honest, and transparent the external communications were.

Since then, my friend started writing a lot of the deputy head’s speeches. Not as a part of a communications branch, but as a strategic planner. At which point one of the explanations for the frank communications became clear: after a first draft, she sits in the deputy head’s office and talks it out with them to make edits. Just the two of them, while the four layers of management in between do their jobs elsewhere. 

The public service is risk-averse, they say. And when these people - whoever they are - say it, they usually mean it in a bad way: as a barrier to innovation, as a bottleneck to information-sharing or stakeholder engagement, or as a tendency to opt for safe and uncontroversial policy options when a much better policy option carries some chance of embarrassment.

I’d like to add a boring and functional possible explanation for that, alongside the usual slate of media, watchdogs, auditors, and career protection. I’d suggest that risk-aversion is a borderline inevitable function of how we approve things.

Here’s the long story short:
  • The time demands on senior executives allow little flexibility
  • Everyone in an approval chain approves everything about a proposal: not just the content, options, and recommendations, but the grammar, font, and format, and who else approved it
  • It’s worse than mere approval: layers of management have to not only personally approve a proposal, but also A) believe that the people above them will approve it, and B) believe that those people will believe that the people above them will approve it (see: Where Good Ideas Go to Die)
  • There is rarely if ever direct communication between layers of the approval chain more than two steps apart
  • The goal when sending recommendations and questions for approval is to have things go up the approval chain exactly once; there is no mechanism for exploring possibilities or asking clarifying questions (e.g., “If the parameters are X, I recommend action A; if the parameters are Y, I recommend action B.”)
  • This is exacerbated by the time demands on executives and how direction can shift - things tend to get approved at the last minute, which completely rules out any two-way interaction with the final approver
All of which creates powerful incentives towards boring, safe, and precedented recommendations and external communications. There’s no time or mechanism to explore options and explain complex rationale.
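The compounding effect of those layers can be made concrete. A rough sketch (the probabilities are mine, purely illustrative): suppose each of five layers independently green-lights a safe, precedented proposal 95% of the time, but a novel one only 80% of the time.

```python
def survival(per_layer_approval: float, layers: int = 5) -> float:
    """Probability a proposal clears every layer of an approval chain,
    treating each layer's decision as independent."""
    return per_layer_approval ** layers

print(f"novel proposal survives: {survival(0.8):.0%}")   # ~33%
print(f"safe proposal survives:  {survival(0.95):.0%}")  # ~77%
```

A modest per-layer preference for the safe option becomes an overwhelming preference once it compounds through the chain - no individual approver has to be especially timid for the system as a whole to be.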

Getting back to my speechwriter friend, to draft a frank and honest speech in an approval-chain/game-of-telephone system, she’d either need to send recommendations up long before drafting the speech and get clear parameters that all approvers would abide by, or she’d have to send two versions of the speech: one safe and “approvable,” and one frank and honest. And no one has time for either approach.

Her example provides one of the possible solutions: connecting the subject matter experts and the decision-makers (in many places around government, this already happens). Which has its downsides, sure. Analysts would have to learn an entirely new bedside manner, and some would be better at it than others. But that’s just a learning curve. And there’d be a lot more subject matter expertise in the room when decisions are made - which I think is a better deal for both sides of the table.

Wednesday, May 6, 2015

Risk Aversion in Hierarchies


by Kent Aitken


This is essentially a two-year delayed corollary to Where Good Ideas Go to Die, about the nature of hierarchies and how they influence decision making. I'm not suggesting that either model is an ironclad rule - they're simplifications with much room for exceptions, but hopefully worth considering as food for thought.


A central feature of large organizations is delegated authority: establishing a mandate and structure within which officers can exercise authority on behalf of the organization. For instance, one might have authority to spend money on certain things, up to a limit, without additional approvals.

That said, not much "big" stuff is left to delegated authority. It may be for reasons of accountability or importance, or because "big" stuff either impacts different parts of the organization or requires cooperation for implementation. So many proposals get approved at each level, then continue up the chain of command.

However, refusals are almost always left to delegated authority. That is, if a level of management decides that a proposal should go no further, it stops. The level above does not necessarily hear about it. So a given level of management makes very few final "go" decisions on behalf of the organization compared to the number of "stop" decisions. For instance, an executive will know every employee's proposed training plan, but not what was struck off the plan by the level of management below.

False positives (poor ideas that get recommended) get caught by the system, by a higher layer of management. False negatives (good ideas that get stopped) don't.

Accordingly, false positives result in feedback for the person who recommended approval. That is, proposing an idea up the chain of command and getting a “no” provides information on which to base future proposals. Those who are too risk-tolerant will get reined in. However, false negatives get no such feedback. Managers who are too risk-averse, wrongly making “stop” decisions on behalf of their organizations, will remain so. This also means that senior executives will systemically underestimate the level of risk aversion in their organizations.

Alternatively, instead of different managers' styles, it could be individual managers who propose too much in some areas and too little in others. A manager could be risk-averse on communications but overly ambitious on staffing requests. The latter would get corrected; the former would go unchecked.

I'm sure that when a decision-maker is uncertain, they'll often check in with their management. But given the scale of organizations, the desire to minimize demands on senior executives' time, and the sheer volume of proposals moving on a given day, there's room for error. In a large enough organization, over enough time, tiny breakdown rates still mean a lot of breakdowns. Small asymmetries in the forces influencing decisions add up.
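That one-sided feedback loop can be sketched as a toy model (everything here - the thresholds, the organization's "true" tolerance, the size of the correction - is an assumption I've invented for illustration, not data about any real organization):

```python
import random

def final_threshold(start: float, org_tolerance: float = 0.5,
                    n_ideas: int = 2000, seed: int = 42) -> float:
    """Toy model of one manager in an approval chain. The manager forwards
    any idea whose riskiness is at or below their personal threshold.
    Forwarded ideas that exceed the organization's real tolerance get
    bounced back from above, and the manager tightens up (corrective
    feedback). Ideas killed above the threshold generate no signal at
    all, so an over-tight threshold never loosens."""
    rng = random.Random(seed)
    threshold = start
    for _ in range(n_ideas):
        idea_risk = rng.random()
        if org_tolerance < idea_risk <= threshold:
            # bounced back from above: rein it in
            threshold = max(org_tolerance, threshold - 0.01)
        # else: a silent "stop" or an unremarkable "go" - no feedback
    return threshold

print(final_threshold(0.9))  # too permissive: ends below 0.9, reined in
print(final_threshold(0.2))  # too cautious: returns 0.2, never corrected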

Wednesday, May 28, 2014

Risk, Failure, and Honesty

by Kent Aitken

Last year Nick and I went down a long rabbit hole on the idea of the faceless bureaucrat (see: Embracing Authenticity Means Embracing Complexity). There's a maxim that bureaucrats are rightly anonymous, in that it facilitates professional, non-partisan advice, but I've been wondering if the foundations on which that maxim rests are shifting. We're in an era of hyper-connectivity, citizen engagement in governance, and an increasing recognition that end users' needs should be the starting point for policy and program design. In other words:
  • The public service (and the public, for that matter) is made up of people
  • People not in government can be trusted, invaluable partners in governance
  • The starting point for solutions should be a genuine understanding of the problem
All of which begets pretty fundamental questions about the relationship between government and citizens. One that Nick and I did not thoroughly cover is the approach government takes to honesty, problems, and failure.

Recently, David Emerson suggested that public service needed to adapt quickly to the state of the world, and the article was summed up with the headline that Public servants risk becoming policy dinosaurs. Is this a problem we have to face?

Well, the strongest language that we tend to admit to is that we face challenges and that there are risks. We don't have problems, we don't have failures. So it can't be a problem.

That said, over the last year, there has been talk about adopting Engineers Without Borders' Failure Report model, one Crown Corporation has admitted the need to reinvent its business model, and the idea of change labs has spread, a model dependent on a laser focus on problems, as well as experimentation and iteration based on past results, including failures.

It seems as though we recognize that honesty is needed about the problems that we're facing, so that we can bring to bear the appropriate resources to solve them. And yet, the language stays firmly fixated on opportunities and innovation, never on problems or failures. When multiple people approve documents, it becomes very likely that at least one of them will soften the language.

Innovation requires taking chances, and chances can lead to failures. Any system that involves humans, no matter how reliable, will generate mistakes as a matter of statistical inevitability. It's okay. And small failures, if done well, will contribute to consistent successes. And until a would-be innovator can as easily summon anecdotes of failures being accepted as being maligned, we're stuck with the safe road or, at least, pretending to others that we're on it. Either of which is exceedingly hard to learn from.

I think part of it is the Shopping for Votes approach: defenders of soundbite-based communication argue that average Canadians don't have time for complexity, and won't appreciate the nuances of real, gritty problems. To boot, every piece of even internal communication can suddenly become external through Access to Information. However, there is evidence that experts who own up to their shortcomings, or demonstrate a degree of fallibility, can be seen as more credible and reliable (and certainly more likable) than those who maintain a strictly stoic veneer. Nick once suggested that a culture of acceptable failure could be a competitive advantage.

And the decline in trust in government would suggest that in general, the problem-free communication approach isn't working ideally. It may be worth considering the possibility that we systematically overestimate the risk of admitting to problems and failures, and underestimate the longer-term risks of losing trust and credibility - and the risks of inappropriately intervening on ill-defined problems.

It could be a tragedy of the commons effect, in that individual actors know that long-term stewardship requires a certain approach but are incentivized to take a different one. In that view, it's not so much a question of whether we should embrace an honest focus on problems, or that our communications model needs to evolve, but a question of how we normalize those admissions of humility and humanity.

When everyone around you is touting success, who goes first on failure?

If it is a tragedy of the commons, the answer is less in the culture change and more in altering how the market works.

Friday, December 7, 2012

On risk, fearless advice, and loyal implementation

I've been thinking a lot lately about the issue of risk and how it relates to the idea of fearless advice and loyal implementation. Admittedly, these thoughts aren't entirely formed yet, so bear with me.


When it comes to risk, we are our own worst enemy

Generally speaking, I feel as though civil servants over-inflate the risks of almost everything they (we) do. We often chalk it up to the culture, forgetting that if you stripped the bureaucracy of its bureaucrats, the risk-averse culture would likely disappear. Sure, the written rules would remain, but nowhere (to my knowledge, and in my experience) are the written rules actually as stringent as our interpretations thereof. My thoughts on how we approach risk are best summed up by the famous Walt Kelly line: "We have met the enemy and he is us". We are responsible for the culture; to blame it is to blame ourselves. On this, I doubt I can be convinced otherwise.


The relationship with fearless advice and loyal implementation

Before delving any deeper, I should probably hang this caveat out there. Recent discussions with a number of colleagues (at different stages in their careers) have led me to the conclusion that I am likely an outlier, in that I have a fundamentally different understanding of risk than is the norm. Yes - it's surprising, isn't it - I have a high tolerance for risk, but I also have a high tolerance for the consequences arising from my actions; and in the real world, risk and responsibility are inextricably linked.

However, inside the bureaucracy I'm of the view that they are artificially divided. By this I mean that even when someone is willing to take a risk and bear the responsibility for its consequences, they can't actually do so. Rather than being able to simply pull the trigger on something, they are forced to crunch their willingness to move ahead into some sort of recommendation (often in a briefing note) that gets pushed upwards onto someone else. Sometimes this can be a good thing - for example, it protects junior employees from taking the heat if the shit hits the fan - but it can also be paralysing, since it concentrates risk in certain areas, which likely makes those areas more sensitive to risk than they would otherwise be.


That said, here are my (related) observations on fearless advice and loyal implementation

Giving fearless advice is a low-risk activity; there are plenty of opportunities to do so all along the long tail, and most of them are behind closed doors. Loyal implementation, on the other hand, is a high-risk activity; execution is always a delicate matter, and always held to the highest degree of public scrutiny. I think that perhaps we (public servants) have been spending far too much effort on loyal implementation and not enough on fearless advice. Our natural hypersensitivity to risks out there in the public realm has crept into our consciousness in here.

Here's my best effort at boiling that all down to a single image:

Cheers


Originally published by Nick Charney at cpsrenewal.ca
subscribe/connect
RSS / cpsrenewal | Facebook / cpsrenewal | LinkedIn / Nick Charney | twitter / nickcharney | govloop / nickcharney | Google+ / nickcharney

Friday, September 16, 2011

On Seeing Risk Differently

Whenever anyone steps up and tells you that [social media] and the public service don't mix; when they tell you not to, and when they tell you that you are risking too much, tell them that the real risk is failing to soldier on. Tell them that the truth of the matter is this: the real career limiting move is keeping your head down, never taking a risk, and fear-mongering when you realize that the calculated risk-taker beside you is likely to quickly surpass you on the career path. -- Nick Charney, The Truth About Career Limiting Moves

I admit (at least I've been told, and my experience seems to confirm) that I see risk differently than many people who share my line of work. When it comes to risk, my primary concern is the risk that arises from inaction or leaving assumptions unchallenged. It is a quality that Gilles Paquet impressed upon me very early in my career when I had the privilege to sit next to him on my very first panel, it’s what led me to write Scheming Virtuously, to start this blog, and to otherwise engage myself in the conversation around the public sector. Despite my efforts over the past four years, the dominant worldview is still one that argues that risks (and therefore consequences) arise primarily from taking action and questioning the status quo. I offer no direct evidence, but feel that few among you would argue the point.

Surely, this dichotomy is a false one. But I'd rather talk about its cruel irony than argue over semantics. Perceiving risk in either of the ways I've articulated above makes you blind to the other side of the equation. In taking a side we fail to see the absolute potential for risk (and thus consequences) before us. Instead we base our decisions (or more likely our nascent gut reactions) on only one half of the equation. Yet if the idea that consequences more readily follow action than inaction is empirically true, then the equation isn't a balanced one at all, but rather one that is skewed in favour of status quo thinking. The exact degree to which it is skewed may be directly related to (if not at least correlated with) the distribution of the people across the dichotomy. I can't help but wonder if this is what people are trying to communicate when they tell me I am a risk-taker (a label that I have never really identified with); that my actions are more likely to be met with negative consequences than others.

When facing consequences, my hope is that, regardless of what side of the divide you sit on, you are willing to take them on the chin.

I certainly am.


Originally published by Nick Charney at cpsrenewal.ca