Coffee, Costs, and Collaboration

Wednesday, April 29, 2015

by Kent Aitken


Espresso people are intense. Everything is purposeful: how finely the beans are ground, the temperature of the water, the relative amounts of each. So it's not shocking to find that online communities of espresso people have thoroughly investigated their price-per-cup. There's a range, but let's go with one person's assessment of $0.20.

Which, we all recognize, is hardly the full cost of enjoying espresso at home. Espresso people are intense about their equipment, too, which can easily run into the thousands of dollars for machines and grinders. (PhD dissertations could be written about the subtleties of grinders.) Which makes the first cup of espresso cost, say, $1,000.20, not $0.20. After 100 cups, you're at $10 per cup, and you get near Starbucks prices at around 500 cups or so.

There's a big difference between the marginal cost per unit (the cost of one additional cup once you're already producing) and the total cost per unit.
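To make the beans-versus-machine arithmetic concrete, here's a quick sketch using the illustrative numbers above (a hypothetical $1,000 setup and $0.20 of beans per cup) showing how the average cost per cup converges toward the marginal cost:

```python
# Average (total) cost per cup vs. marginal cost. The figures are
# the hypothetical ones from the post: a $1,000 machine-and-grinder
# setup (one-time) and $0.20 of beans and water per cup.

FIXED_COST = 1000.00   # machine and grinder (one-time)
MARGINAL_COST = 0.20   # beans, water, power per additional cup

def average_cost_per_cup(cups: int) -> float:
    """Total cost per cup after brewing `cups` cups at home."""
    return (FIXED_COST + cups * MARGINAL_COST) / cups

for cups in (1, 100, 500):
    print(f"{cups:>4} cups: ${average_cost_per_cup(cups):,.2f} per cup")
# The first cup costs $1,000.20; after 100 cups you're at $10.20;
# around 500 cups the average nears Starbucks prices at $2.20.
```

The marginal cost stays at $0.20 throughout; only the average falls, as the machine's cost is spread across more and more cups.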

One of the reasons for excitement about the digital world is "zero-marginal-cost collaboration": connecting with anyone, anywhere, for free or cheap. I wrote a couple of weeks back that we still have much to learn about online collaboration, and I think part of that is that we get excited about 'free' collaboration - we think about the price of the beans and forget about the machine. "Can we crowdsource this?" "Let's ask people for input." "Can we work with someone on this?" Etc.

Such collaboration is undoubtedly worth it - but have you bought the machine yet? That is, have you done the up-front work? Are you a part of that community, do you have credibility, do you know the lay of the land, have you built relationships?

Otherwise, it's like watching someone make espresso at home and thinking, "Man, how great would it be to have free espresso every morning?"

Performance Management meets the Public Service Employee Survey

Friday, April 24, 2015

by Tariq Piracha

My daughter starts kindergarten this fall. As a parent, I obviously want my children to succeed. I want them to be taught by great teachers and have access to the latest and greatest tools and resources that they may need to succeed in the classroom. However, time spent in the classroom is no guarantee of success. The level of support at home, the stability of a household, the kind friends and family who surround us, and many more factors can contribute to, or undermine, the success of a student. So, I have a responsibility to ensure that my daughter has as much support outside school as she does within the school system. That’s the advantage my daughter has: someone (me) is looking out for her.

Last year, a new performance management system was implemented across government. (There was always a system, but last year it went through some significant changes.) Its objectives include making managers and employees more accountable for what they are supposed to deliver each year, and supporting employees who are exhibiting potential, or who may need some, well, tender loving care.

It’s not difficult to draw a comparison between the system of performance in the public service and the education system. Both are designed to achieve specific public interest objectives and to support individual success.

There are also similarities in their limitations: as I pointed out above, standards within the school system only go so far. As a parent I might research the schools in my neighbourhood, talk to other parents, perhaps even structure home-buying and transit decisions around getting our daughters into good schools.

As employees in the public service, we don't necessarily have anyone looking out for us. We are lucky if we find ourselves working for a supportive manager, but it's on us to set ourselves up for success. Yes, the performance management system may help get public servants on track with setting objectives and identifying potential areas for improvement and training, but a course on project management isn't going to magically solve things for a team that is short-staffed and overworked. Signing up for French training may not help a public servant improve their French if English is the only language they are exposed to on a daily basis. Some employees may not have supportive managers. Or a large training budget. Or an environment that fosters improvement or creativity.

This is where the Public Service Employee Survey (PSES) comes in. The Survey comes out every three years, surveying public servants on a number of topics including work-life balance, harassment in the workplace, overtime, and more. And the latest results are in.

In the past, I never gave the Survey much thought. Some years I’d fill it out, sometimes I couldn’t be bothered. The results seemed little more than an academic exercise that confirmed what I already thought: the public service is good in some areas and poor in others.

Well, now the survey results are providing public servants with an opportunity to be a little more strategic with their careers.

While the Survey is not a rating system, nor does it get into specifics about the positive (or negative) reputation of particular managers, it does provide indicators about which environments *may* be most conducive to development. Or, hell, which sectors within particular departments would simply be more stable and supportive, so that one doesn’t dread getting up in the morning to go to work. It could be something as simple as percentages that show where one might find lower instances of harassment, or higher levels of job satisfaction.

The point is that the survey results may provide a rough measure of which organizations better align with your own values and goals. It is essentially another tool at our disposal. Little pearls of information just waiting for you to take a look and say, “Huh. That’s interesting”, pointing you to something more rewarding.

Moving into those more supportive positions or organizations? Well, that's on you.

Innovation and Rigour

Wednesday, April 22, 2015

by Kent Aitken


There's a general rule in academic research: you know you're coming to the end of your literature review when you've already read all the references in the articles you're reading. The second line of defence is that this gets validated with an advisor who has been studying the field for decades.

We don't have that luxury in public administration, and particularly in public sector innovation. There's no such systematic record of experiential and practitioner knowledge. How do we know when we know enough about a skill, project, or field?


The four stages of competence

You may be familiar with the four stages of competence model for a given skill. The idea is that we start incompetent, but don't truly know it. We know so little about the skill that we can't even meaningfully assess our own ability: (1) unconscious incompetence. As we learn more, we realize how little we know, reaching (2) conscious incompetence. Eventually we become adept and know it, the level of (3) conscious competence. When we master something, we can do it on autopilot, without really thinking: (4) unconscious competence.

I wrote last week that, in general, people are probably unconsciously incompetent at facilitating online collaboration (see: The Promise of Online Collaboration). Which isn't trivial: if someone has a bad experience collaborating with a group, they'll disengage and not return (just as they would attending a poorly designed meeting or conference). So where we fit in this competence rubric for a given project is important. We're making decisions in the public trust. How do we know when we're prepared to do so? When to move forward rather than signal-checking with others or conducting additional research?


The dark side of experimentation

Experimenting is good. Experimenting without truly knowing what we're experimenting on is not: it leads people to skimp on the equivalent of the 'literature review' (what's been done before? Who can I talk to for advice?), skip setting markers to know if they're on the right track, and fail at critically assessing and sharing the knowledge gained (see: Standardizing Innovation).

It's tempting to use a baseline of zero: "Before we did X, nothing was happening, then we did X and something happened: ergo, success." Unconscious incompetence applies both to implementation and measurement. Falsely declaring success creates complacency (see: Pilot Projects and Problems). It slows the move towards competence. And it represents underdelivery.


Get meta about innovation

So what's the equivalent of academic rigour for public sector innovation? Could we work out useful heuristics to ask ourselves? Something like:
  • Who are the leaders in this field? What are they doing? Can I talk to them?
  • Could I give an hour-long presentation on this field tomorrow?
  • Where would this project fit in the Cynefin framework for problem complexity?
  • What will this project impact? Is the level of rigour in designing the project commensurate with its potential impacts? (Future prospects for the organization? Groups of stakeholders? How many people?)
I don't know what would work, but I'd love to hear ideas. Because every time we try something that hasn't been done before and succeed, the culture needle moves a tiny bit from risk-averse to innovative. Every time we fail, it wiggles back. We owe it to ourselves, our colleagues, the next generation of driven public servants, and to Canadians to be thoughtful and purposeful. But also to avoid the dark side of being unnecessarily rigorous - one project's reckless abandon could be another's costly analysis paralysis.

This is what it means to get meta about innovation: how do we move from asking ourselves Is this a good idea? to How do I know this is a good idea?

Impossible Conversations: The Black Swan by Nassim Taleb

Monday, April 20, 2015
by Nick Charney

Honestly, don’t bother reading the book; Kent’s description below is about all you need to know about the theory and George's review was probably way too polite. 


by Kent Aitken

There are a few interesting nuggets throughout this book, but here’s the major point: let’s imagine a series of events over time, as in the chart below. This could represent hurricanes, financial crises, wars, anything (though it’s more important for things for which the impact scales exponentially, not linearly). The large spike on the right represents an extreme event. If you’re trying to understand the nature of these events from time X0 (the red line), your model will be based on the relatively stable period before, and you’ll be unprepared for the impact of the event that follows. It’s only at time X1 that we’d understand the flaws in our models, but we can never know if and when we’ve reached that point, when dealing with improbable, high-impact events.




by Tariq Piracha

While Taleb has some interesting nuggets as mentioned by Kent, the book is exceptionally long and tediously slogs through irrelevant personal anecdotes and the occasional made-up historical figure to get to those nuggets. It’s tough to like a book so very filled with the author’s arrogance and disdain for his readers.

Where innovation meets fearless advice and loyal implementation

Friday, April 17, 2015
by Nick Charney

Last week I appropriated Clay Christensen's Innovator's Dilemma to the world of policy innovation in an attempt to see if the model/thinking holds (See: The Policy Innovator's Dilemma). At a high level I think it generally holds: the dilemma facing the policy innovator is whether or not they ought to pursue sustaining innovations or disruptive innovations. Innovation rhetoric aside, it's a leadership decision about whether or not the organization is going to direct its scarce resources towards generating policy process improvements or new policy thinking. The rest of the application is interesting, but I think at its core the above statement is the kernel most worth pursuing.

Risk-aversion causes incremental (sustaining) approaches to innovation

A friend of mine reflected (by email) on how it didn't seem like a true dilemma in a risk-averse public service culture. The idea he and I unpacked over email was that risk aversion eliminates disruption as an option, thereby negating the either-or element, resolving the dilemma and leaving public servants to focus solely on policy process improvements. In other words – and this is likely nothing new – risk aversion is the root of incremental approaches to policy innovation.

Who or what ultimately determines which innovation strategy to pursue?

Jeff, on the other hand, left a comment (rebutted by Angela) that argued the dilemma was at the feet of Ministers. I found this, too, to be an interesting perspective. However, I don't think it's as cut and dried. For example, if you parse the PM's Guide for Ministers and Ministers of State (2011) you will see (among many other things) that:

  • Government policy is established by Cabinet
  • The Cabinet decision-making process is a key mechanism for achieving overall coherence and coordination in government policy
  • A Minister may delegate to a Parliamentary Secretary specific duties for policy development initiatives. Overall responsibility and accountability remain with the Minister, who also remains responsible for the direction of public servants and departmental resources, and has authority to initiate departmental actions.
  • Public servants, reporting in a clear chain of command to the deputy minister, provide professional, nonpartisan policy advice to Ministers and conduct departmental operations through the exercise of legal authorities flowing from the Minister
  • Ministers who wish to support an item that is equivalent to a new government policy decision must seek Cabinet approval to do so
  • Deputy ministers are accountable for a wide range of responsibilities including policy advice, program delivery, internal departmental management and interdepartmental coordination. As deputy ministers, they do so in a manner that supports both the individual and collective responsibilities of their Minister. They are accountable on a day-to-day basis to their Minister, and a cooperative relationship between the two is critical. The advice that deputy ministers provide should be objective and must respect the law. If conflict occurs between the Minister’s instructions and the law, the law prevails. 
  • The Prime Minister leads the process of setting the general direction of government policy. The Prime Minister is responsible for arranging and managing the processes that determine how decisions in government are made, and for reconciling differences among Ministers. The Prime Minister establishes the government’s position before Parliament by recommending to the Governor General the summoning and dissolution of Parliament, by preparing the Speech from the Throne outlining the broad policy agenda for each new parliamentary session and by determining whether proposed government legislation approved by the Cabinet is subsequently put before Parliament. The Prime Minister approves the Budget presented by the Minister of Finance. 
  • The deputy minister, as the Minister’s principal source of public service support and policy advice, is expected to advise the Minister on all matters under the Minister’s responsibility and authority. While the deputy minister does not have direct authority over non-departmental bodies in the portfolio, he or she plays a key role in promoting appropriate policy coordination, and building coherence in the activities and reporting of the portfolio bodies. Deputies can provide advice to Ministers on the appropriate means to ensure integration in the undertakings of their portfolio, while respecting any accountability requirements and mandates set out by legislation. 

In sum, in plain English and in chronological order:
The PM sets the broader direction. Cabinet brings greater specificity to that policy direction and ensures coherence. Deputies serve as the principal source of policy advice to Ministers. Public servants provide professional and nonpartisan advice to their Deputies.
To me this looks like a fairly collaborative approach to policy making, and while it may be technically true that the final decision-making powers ultimately lie at the Cabinet table, there is a whole series of smaller decisions diffused across the system that significantly impacts what actually makes it to that table in the first place. To me this would seem to indicate that decisions about which innovation strategy to pursue are similarly diffused across the policy making process. If this is true, then a lack of disruptive policy options could be attributed to the fact that no one along the supply chain is raising them, or that somewhere along the supply chain they are actively being suppressed. The former option jibes with the above argument about risk aversion and seems likely. The latter, however, seems far less likely; in my experience, the active suppression of policy ideas is more of a ghost story than a reality (cue the naysayers who've been told to stay in their box).

Which brings me to the rub

I said in the opening that essentially the policy innovator's dilemma comes down to a leadership decision about whether or not the organization is going to direct its scarce resources towards generating policy process improvements or new policy thinking. Where I think we've landed after today's discussion is that those leadership decisions aren't concentrated in the hands of the few but rather diffused across a complex system and risk-averse culture.

In many ways these are enduring themes that I've written about in the past, and this is where I think innovation meets fearless advice and loyal implementation. Back in 2011 I wrote a fairly well-read post, On fearless advice and loyal implementation. At the time I was writing about government culture writ large, but I think it also applies to how we think about and approach innovation. Here's an excerpt:
I think the problem is that we have collectively misinterpreted the significance and underestimated the opportunities we have to affect our work culture and sub-cultures, regardless of where we work or what we work on. We mistakenly think of fearless advice as something that only the people at the very top of the organization do; something that is reserved for private meetings between Deputies and their Ministers. In fact, I think that speaking truth to power (fearless advice and loyal implementation) more often means pushing against the small "p" office politics and the small "c" culture of the bureaucracy. In other words, fearless advice isn't reserved for ministerial briefings, but rather happens in the hallways, over cubicle walls, and in the lunch rooms among peers. 
Think of it in terms of the long tail:
The idea here being that new disruptive policy ideas can emerge from anywhere along the long tail, to which I suppose I still only have one remaining question: why aren't they? 




The Promise of Online Collaboration

Wednesday, April 15, 2015

by Kent Aitken


I think one of the most fascinating questions of this point in history is whether we, as a society, are awesome or terrible at online collaboration. Personally, I’m rooting for terrible. I actually would be happy if we were completely wretched at it right now.

The promise of collaboration


Why? Well, for starters, we've been promised much by online collaboration. We've been told that it "changes everything".
Mass collaboration, facilitated by the internet, has been touted as a powerful, world-changing opportunity. And so far, there have been amazing successes: Wikipedia, Open Street Map, Ushahidi. In my own experience, online collaboration has been astonishing, opening opportunities I could never have imagined even five years ago. I think the promise of the digital era is (mostly) real, and that over time it's going to reform governance. 

Yet, our days as professionals are still spent in face-to-face meetings. Digital democracy has hardly taken root. Most people don’t engage in online communities; the content is largely created and debated by a small subset of power users. When people do engage online, it’s usually for “light” collaboration, leaving the heavier or more complicated tasks for in-person work.

So there are a few possibilities to explain this state of affairs:

  1. We’re good at online collaboration, but only for certain cases and situations

    or

  2. There are fundamental differences between in-person and online collaboration

    or

  3. We have no idea what we’re doing*

*But impressive examples (like Wikipedia) are inevitable by virtue of the sheer number of collaboration experiments between the sheer number of people on the internet

I think that we have no idea what we're doing

Or at least, we have little idea. And that's good news, in a roundabout way. Consider this:
  1. Innovation labs are the order of the day for governments. They’re built around tools, processes, techniques, and understanding what sort of space and conditions people require to innovate.

  2. If you go back thirty years in the Public Participation research, you run into articles like Citizens Panels: A New Approach to Citizen Participation. Ten years later, other researchers were still sorting it out:

    “...most citizen participation techniques have been judged to be less than adequate tools for informing policy makers about the people's will. Recently, having planners or policy analysts work closely with long-standing citizen panels… panels can overcome many of the limitations to effective citizen participation.”

  3. The roles of facilitators and guides are increasingly recognized as crucial for organizations. Some (very worthwhile) examples from the Government of Canada:
    1. National Manager’s Community Tools for Leadership
    2. Or their Tools for Building a Learning Organization 
    3. Or Policy Horizons’ Learn and Grow Together: What is a learning organization?

Which I'm taking as evidence of this idea:

We’re still learning how to collaborate in person, let alone online.

The above examples demonstrate the realization that inviting a bunch of people into a room and hoping for the best is a terrible approach. We still do that online (and sadly, sometimes, in person). 

And we're pretty new to online (the Government of Canada declared “mission accomplished” on Government On-Line only nine years ago). It’d be perfectly reasonable if we were not that good at online collaboration yet. Online is different. There are similarities, but it’s different. We'd be crazy to think that we simply understand how to do this intuitively. Instead, it will be part art, part science. It will merit rigour and some degree of professionalization.

This is good news. It means that the lofty promise of online collaboration remains intact. It's a matter of scaling a learning curve, which we've just begun, towards truly and fully understanding (and becoming effective at) online collaboration. 

With that in mind, I stand by my seemingly hyperbolic opening line. I think one of the most fascinating questions of this point in history is whether we’re awesome or terrible at online collaboration.