Wednesday, September 9, 2015

The Next Big Thing

by Kent Aitken

Almost a year ago, Tariq, Nick, and I caught up over drinks to bounce around ideas for CPSRenewal. We felt that the vibe of the blog had been changing. Nick had originally envisioned it as “Lifehacker for government,” which led to posts that made sense of trends and provided advice on how to make the most of new platforms and tools*. And we agreed that, as time went on, there were fewer of those future-focused posts.

For a while I worried about that change, as if I were missing something. But now my theory is this: I don't think making sense of the future is as unique and valuable as it once was, for a few reasons:


1. People can choose to ignore ignorable things
2. The future is becoming less predictable
3. Being hyper-networked isn’t special


Ignorable things



Organizations will change for at least two reasons: when there is a burning platform (an urgent need or pressure for change), or when the benefits of change are obvious and the opportunity cost of not changing is great. However, public organizations have a high threshold for what constitutes a burning platform. Ten years into Twitter's existence, governments worldwide are still using social media as a broadcast-only channel. There's really no possibility of catastrophe resulting from a cautious approach here.


Calculating the benefits of change is tricky here too. Any change, no matter how obvious a win it would be in a vacuum, requires one of an organization's scarcest resources: management attention. A change is a non-starter if realizing its benefits (or avoiding its costs) requires the attention and approval of an executive who can provide neither. There is a vast and powerful attention economy within public institutions.


The future is becoming less predictable



I highly recommend reading Wait But Why's piece on artificial intelligence, in particular the opening section on why we can’t picture the magnitude of the changes coming in the future. It opens with a graph, and the question "What does it feel like to stand here?"


[Image: a time graph from the Wait But Why piece, with a figure standing at the present moment]



"It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:"


[Image: the same time graph, with everything to the figure's right hidden from view]



We have a growing body of evidence suggesting that change is occurring at an exponential rate, in several ways. And we can understand that idea rationally; we've pretty well internalized it.


"We have all heard this before, but constant change is the norm and the speed of change is staggering...

The complexity of the issues we face is also growing across all domains—fiscal, health, environment, security, diplomacy, development, defence, transportation, to name a few."
- Janice Charette, Clerk of the Privy Council [source]


But, like the figure in that second graph, we tend to revert to thinking that we can manage that level of change. The problem is that our brains swap one question for another without our knowing it. Instead of answering "Will the near future look very different from the present if we’re experiencing an exponential rate of change?" we answer "Can I personally imagine the effects of exponential change on the near future?" And the answer is that no, we can’t. We tend to just mentally extend the trendline from the last few years.
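
For the quantitatively inclined, here's a minimal sketch of that substitution in Python. It fits a straight line to the last few years of a curve that is actually exponential and projects it forward; the doubling period, lookback window, and horizon are assumed values picked purely for illustration, not claims about any real trend:

```python
# A toy illustration of "mentally extending the trendline": fitting a
# straight line to the last few years of an exponential curve and
# projecting it forward. All numbers here are assumptions chosen only
# to make the gap visible.

def progress(t, doubling_period=2.0):
    """Progress at year t, doubling every `doubling_period` years."""
    return 2 ** (t / doubling_period)

t_now = 10.0      # "the present" on the curve
lookback = 3.0    # fit the trendline to the last three years
horizon = 10.0    # then project ten years ahead

# Slope of the straight line through the last few years.
slope = (progress(t_now) - progress(t_now - lookback)) / lookback

linear_guess = progress(t_now) + slope * horizon
actual = progress(t_now + horizon)

print(f"Linear extrapolation: {linear_guess:.0f}")   # ~101
print(f"Exponential reality:  {actual:.0f}")         # 1024
print(f"Underestimated by:    {actual / linear_guess:.1f}x")
```

Under those toy numbers, the straight-line guess lands at roughly a tenth of the true value ten years out; that gap is exactly what the figure in the graph can't see to its right.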


How we deal with a largely unpredictable future merits a much fuller treatment. For another time.


Hyper-networked isn't special



In the earlier days of the digital (and particularly social) world, finding insights in other fields or sectors of the economy, and being able to imagine how they might apply to government, was a really useful skill.


The thing is, it no longer takes research or even much insight to recognize useful tools or credible change drivers. We can replace "Aha!" moments with mental shortcuts, because the way we find information provides cues about people's intelligence and authority. For instance, if colleague X and scientist Y reference person Z's idea, and we think X and Y are smart, we'll probably think Z is smart and that the idea holds water. An extreme example would be Stephen Hawking, Elon Musk, and Bill Gates warning about artificial intelligence. They're smart, and when other smart people agree with them, the idea is credible. It doesn't take any understanding of AI on our part to catch a glaring hint about its importance; all we've done is compare the claim against a mental rolodex of trusted sources. Search algorithms, human curation, and the existence of instructions for pretty much anything have hugely leveled this playing field.

It was easy to see Uber coming, but much harder to prepare for it. Which is why taxi drivers were still protesting in Ottawa yesterday.


The next big thing



So I’m left thinking that the Next Big Thing is that we get better at how we make sense of purported Next Big Things, and better at how we handle the constant Big Things that we won’t really see coming. That would mean digging into and dissecting the concepts of foresight, change management, adaptability, agility, and resilience (agility and resilience being two very different things), and taking them far, far more seriously.


*A quick note on future-focused posts: long before I joined the site, posts like Signal to Noise hugely impressed me. I still find those kinds of posts to be strong, and they seem to garner more interest.

