Wednesday, February 25, 2015

A Quick Note on a Public Service Highlight


by Kent Aitken


For my first four years in the public service, I worked for what is known as a Common Service Organization: a government department that provides services to other government departments. Only in the last 18 months or so have I held a position with public-facing elements.

Over the weekend I traveled to Vancouver for the CODE Hackathon, a competition based on the creative use of Government of Canada open data. We logged the better part of a week's work from Friday night to Sunday night, supporting participants and learning from them. It was fantastic. 

A while back I wrote that "Ottawa, the concept, needs to spend more time out of Ottawa, the city", an idea reinforced by every opportunity to do so. I went from reading daily media monitoring reports to a room seemingly exempt from politics. I realized that participants were downright excited that the Government of Canada, of all entities, would be interested in the work they accomplished. And I got to see, firsthand, what many public servants' hard work meant for people on the ground.

A post like this would, stylistically, merit a pithy observation at this point. Something like "We need more of that/X". But my purpose in sharing is stylistically boring, a combination of things: a reminder that the cynics about the public service are louder than those content with it, a note that government does interesting things, and an expression of appreciation for being a part of them.


Unrelated: hackathons are basically Mac advertisements.


Friday, February 20, 2015

On Comparing Compensation


by Nick Charney

Earlier this week the Fraser Institute published a report entitled "Comparing Government and Private Sector Compensation in Ontario"; they've provided a handy tl;dr in the form of an infographic which pretty much sums up their findings (and their worldview).

When I read the report I was struck by how easily the data falls into alignment with the dominant (and largely negative) public discourse around the archetype of the 'overcompensated public servant'. According to the report, public servants in Ontario are better paid, have better pensions, retire sooner, are fired less often, and are absent more often than their private sector counterparts. Admittedly, the fact that the data is used to support the archetype is in part due to the ideological leanings of the report's point of origin, but the fact that the report's conclusions go essentially untested in the public sphere speaks to something larger.

Despite all our talk about the changing nature of evidence-based policy in a data-rich environment, when someone actually leans in and slogs through some data (as the Fraser Institute did), we jump right to a conclusion (in this case "how such a premium might be managed and eliminated over time") rather than discuss the findings in their broader context (e.g. given the premium, what actions, if any, should be taken?). Shouldn't this sort of analysis inform a larger conversation in the public discourse?

Yes, of course we could manage the premium over time, but the higher-order question is: ought we?

Is public-private compensation parity a net benefit to society?

Why or why not?

If yes, how is this best achieved?

Should we freeze public sector compensation until the private sector can reach parity?

What happens if market forces fail to deliver that outcome?

Ought we also consider implementing wealth creation strategies that will help bring the private sector up rather than drag the public sector down?

What are the expected impacts of these courses of actions?

Why is this line of reasoning – to say nothing of similar lines of inquiry about pensions, retirement, absenteeism and its corollary, presenteeism – largely absent from the public discourse?

I have my own theories; what are yours?

Wednesday, February 18, 2015

Macro-level Innovation Lab Experiments


by Kent Aitken


Last week Nick posted about innovation and Adobe's Kickbox (see: the post). He suggested that government try the employee-led, many-experiments model alongside the innovation labs model, but noted that the pressure to succeed may limit governments' experimentation options. In the comments, Blaise noted that the government is trying many different models, and the question is instead the extent to which they learn from each other.

There's wisdom - and not necessarily disagreement - in both takes. They're both talking about experimentation on innovation models themselves, not project innovation within innovation labs.


Measuring Innovation Success

This macro-level comparative analysis is possible - it has been done before. Back in 2004, Charles O’Reilly and Michael Tushman researched innovation and found that organizational structure was a striking predictor of success. They compared four models, with their Emerging Business design being analogous to our innovation labs.

Here's what they found:

More than 90% of those using the ambidextrous structure succeeded in their attempts, while none of the cross-functional or unsupported teams, and only 25% of those using functional designs, reached their goals.

This is not a perfect analogy - these businesses are private sector organizations pursuing, largely, product innovations. But it demonstrates two things: one, that organizational design and discrete factors can massively influence outcomes; and two, that comparative analysis of innovation models is possible.


Macro-level Innovation Experimentation in Government

It'd be relatively easy to create a rough framework of government innovation labs such that we could look back in a year and critically assess successful characteristics and trends. It would make it easier for labs' genetic material to mix, and allow them to productively evolve into future iterations.

This could look something like a simple table: labs down one axis, their characteristics across the other.

And include any number of other variables: how projects are approved, how budgets are structured, what projects were undertaken, how many, and so on. Labs could be assessed regularly, perhaps on an abstract colour scale (say, from Cadmium Red to Burnt Umber) to avoid strict rankings.

The point wouldn't be a thorough understanding of individual labs, but a rough, easy-to-generate overview from which to form hypotheses about general conditions for innovation lab success.
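Such a framework needn't be elaborate. As a purely illustrative sketch (every field, lab name, and threshold below is hypothetical, not part of any existing framework), it could be little more than a shared record structure plus a mapping onto the abstract colour scale:

```python
from dataclasses import dataclass

# Hypothetical assessment record for an innovation lab. The field names
# are stand-ins for whatever variables a real framework would track
# (how projects are approved, how budgets are structured, and so on).
@dataclass
class LabAssessment:
    name: str
    reporting_level: str   # e.g. "ADM", "DG", "Director"
    budget_model: str      # e.g. "central", "cost-recovery"
    projects_completed: int
    projects_attempted: int

    def completion_rate(self) -> float:
        """Fraction of attempted projects that were completed."""
        if self.projects_attempted == 0:
            return 0.0
        return self.projects_completed / self.projects_attempted

# A deliberately abstract, non-ranked colour scale, as suggested above.
COLOUR_SCALE = ["Cadmium Red", "Burnt Sienna", "Burnt Umber"]

def colour_for(rate: float) -> str:
    """Map a completion rate onto the abstract colour scale."""
    index = min(int(rate * len(COLOUR_SCALE)), len(COLOUR_SCALE) - 1)
    return COLOUR_SCALE[index]

labs = [
    LabAssessment("Lab A", "ADM", "central", 3, 10),
    LabAssessment("Lab B", "Director", "cost-recovery", 7, 9),
]

for lab in labs:
    print(lab.name, colour_for(lab.completion_rate()))
# prints:
# Lab A Cadmium Red
# Lab B Burnt Umber
```

The deliberately fuzzy colour mapping is the point: it lets labs be compared and clustered without producing a league table that would discourage honest reporting.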

All of which could maybe bring us closer to the goal of standardizing innovation.

Thoughts? If there's any interest I think we could hack this together on GCconnex pretty quickly. 

Friday, February 13, 2015

Why Governments Would Never Deploy Adobe's Kickbox and Why Maybe They Should


by Nick Charney

Earlier this week Adobe's employee innovation program Kickbox got a lot of attention online when the company announced they were open sourcing the entire thing. Given that this just hit the ecosystem this month I'm still poking around on the site. That said, I'm impressed by the overview and dig that they have a separate section for senior folks looking to deploy Kickbox in their organizations and another featuring the core contents of the program (in fact, it reminds me a lot of the approach to public sector mutuals in the UK).


Kickbox tl;dr 

Kickbox is a two-day innovation workshop built around a starter-kit for would-be innovators. The workshop is designed to remove typical barriers to innovation: money, a process, innovation tools, and energy (caffeine and sugar); short (PR) video below:




Kickbox explained

For a more complete overview of the Kickbox program (which has been operating for the last 18 months) I suggest watching Mark Randall's presentation at the Lean Startup Conference embedded below.





Deflecting your early skepticism

Yes, you are right, Kickbox is likely as much about good PR for Adobe as it is about innovation. However, the fact that they open sourced it in its entirety should be taken as evidence that it has paid dividends for the company, that Adobe thinks its content can stand up to scrutiny on the web, and that it will attract talent that shares its values and commitments. I, for one, plan on having a closer look in the weeks ahead.


Why Governments Won't Use Kickbox

Because it would never work here.

Because our accountability culture makes it easier to approve $24,500 on a sole source contract than to approve 25 individual spends of $1,000.

Because not every $1,000 expenditure could be directly tied to a demonstrable 'innovation'.

Because every failed attempt will be met by the ruthless faux outrage that dominates our public discourse.

Because the relative safety of the status quo is easier for people to bear than the uncertainty of experimentation and failure.

Because backing such an experimental approach in spite of the lack of incentives to do so would require courage and constitute a heroic act.

Because once we've committed to a particular course of action, pursuing multiple and possibly competing strategies would likely be considered by many to be poor form rather than healthy experimentation, or, more plainly, A/B testing.

Why not A/B Test Innovation Labs and Kickbox?

Kickbox is built around the idea that innovation can happen anywhere — that if you lower barriers to participation and equip people with the right tools and resources, they can ideate quickly, leverage their networks, and experiment at extremely low costs. As a result, Kickbox is a 'fail fast' approach to innovation and focuses more on building the innovative capacity of people (e.g. how they approach problems and the networks they have to solve them) rather than delivering a particular innovation or series of innovations. In short, it moves the organization as a whole towards thinking about problems and how to solve them differently today (and tomorrow) than it did yesterday.

Labs are fundamentally different. They centralize rather than diffuse the innovation function, create new institutional costs, situate those costs firmly within a subsection of the hierarchy, and reinforce the status quo of situational power structures where access and information are the ultimate sources of influence. As a result, labs are vulnerable to the same bureaucratic pressures that slow innovative forces in the rest of the organization. They are inherently exclusive (not everyone can work in the lab — that would after all undermine its very essence), which means that they are more focused on building and diffusing innovation rather than building widespread capacity for innovation.

Caveat #1: Yes, I'm an innovation lab skeptic and I understand that I'm swimming against the current on this one; and while I've written about them numerous times (See: On Dragon's Dens, Hackathons and Innovation Labs and/or The Future of Policy Work) I also know a lot of smart people who have been assigned to them. These are capable and committed people, many of whom I would consider friends, and all of whom I wish success because we need all the success we can get on this front.

Caveat #2: I had a conversation recently where I came to the conclusion that innovation labs may in fact just be our response to policy shops turning into issues management shops and that innovation labs are really just our way of re-introducing that function back into our organizations. It's not well thought out, but worth thinking about later when we are done celebrating their launch and evaluating their results.

Caveat #3: One of my biggest fears on the lab front is how likely I think it is that their walls become analogous to the organizational boundaries they were established to help circumvent — that their exclusivity and prestige actually increase the barriers to innovation rather than drop them. One of my earliest lessons in collaboration came from Clay Shirky's Institutions vs Collaboration (circa 2005) which convinced me that there is always more cognitive surplus and capacity outside an organization than within it. If labs are to be successful, those who work in them need to have a very specific skill set, a mandate to reach out to anyone with expertise, and the humility to consistently put themselves second.

My point isn't that one is right and one is wrong but rather we don't know what will work, why and under what circumstances; so why not A/B test these two different types of approaches?

Why maybe they should

Demonstrable results. Short lead times. Low cost (watch the video).

It's a free methodology for experimentation (look, its right here).

Desperate need (look around).

Wednesday, February 11, 2015

Short-term Thinking and Why Communication Can't Defeat Silos


by Kent Aitken


I'm wary of efforts to carve the world into types of people, but let's imagine this rough divide: those who focus on the direct impacts of their decisions, and those who can imagine a cascade of effects. In other words, those who see only a single link in the cause-effect chains they start, and those who know that it continues into the distance.

Businesspersons who burn bridges to make deals, networkers who abuse relationships and trust to make contacts, and managers who step on employees fall into the first category. Mentors teaching others, collaborators giving their time to others' projects, and leaders who ensure the long-term health of their workplaces are examples of the second.

We might adopt Adam Grant's nomenclature, and call these people Takers (or Matchers) and Givers. The research in his book, Give and Take, bears out the truth of these causal chains: those who take the What's in it for me? approach lose out to the altruists in the long term. Appreciating the long-term and indirect impacts of decisions (or at least, understanding that long-term and indirect impacts exist in the first place) creates healthy systems and workplaces, engenders trust, and allows positive-sum games that benefit more people.

However, there's an anomalous third type of people: public servants.


Collaboration and Institutions

The population of public servants probably breaks down into the above types in roughly the same proportions as the public at large. But within their environment, they are paradoxically incentivized to take on Taker and Matcher identities.

Takers in the public at large are the product of a failure to appreciate the long-term, indirect impacts of decisions. And that's good neither for them nor for the people they interact with.

In a public service, the same result is created both by A) the failure to appreciate the long-term, indirect impacts and B) the much more common failure to convince others to appreciate those impacts. In their roles, public servants make decisions on behalf of, and in consideration of, many actors.

These could be hierarchical superiors, colleagues, watchdog groups, citizens, journalists, anyone. The result is the same: public servants become largely limited to those actions which directly and immediately benefit their specific mandate, in the manner intended. This is antithetical to the give-and-take nature of collaborative relationships, in which one can help others without any guarantee or timeline of reciprocation.

The mere ability to communicate between silos, to be aware and to coordinate actions, is not the sole prerequisite for tearing those silos down. 


Keeping Score

It's impossible to completely understand every way in which our actions reverberate. This world defies measurement. And the world simply works better when we're not always keeping score - we help, we look for mutual wins, and we build relationships. Yet we live in an era that is as much about high standards for government as it is about transparency and accountability. I would never suggest that we sacrifice either, but the typical approach to this tension will do that on its own.

Unfortunately, instituting the typical approach alone - checks and balances, rigorous measurement - is a neat, tidy, single-link cause-effect decision itself, and is therefore defensible.