by Nick Charney
"By 2020, the cognitive technologies–machine learning, natural language processing, speech recognition, and robotics–start to augment the government workforce and improve the quality and efficiency of government systems. These technologies bring forth a range of applications in government including law enforcement, transportation, healthcare, and even fraud detection. More importantly humans remain “in the loop” not only to develop, customize, and train the systems; but also oversee, guide, and improve them." - Gov2020 DeloitteCognitive government (as described above) is one of the mega shifts identified in Deloitte's Gov2020 research project. Its something that Bill Eggers and Paul MacMillan elaborated on for Nesta's blog back in May, arguing that cognitive government requires open functionality, applied learning and adaptive-rule making and asking:
"How can governments shorten their learning curve to more effectively adapt to the technological changes that surround them?"*Eggers and MacMillan seem to think that advancements in Artificial Intelligence (AI), Cognitive Systems and Machine-to-Machine learning holds the key to achieving fundamental changes to the architecture of government; and the more I think about it the more I tend to agree.
Enter Artificial Intelligence (AI)
To date, we've largely thought about our information systems as static repositories of information rather than as dynamic AIs. This type of thinking has informed procurement decisions, shaped operational decisions, and determined the very flow of information in – and thus the knowledge contained within – our organizations. But what if our records and document management systems were built around a dynamic AI rather than a static repository?
Cognitive computing is supposed to accelerate and enhance human expertise; capture the expertise of top performers; improve decision making; and scale easily alongside both the supply of and demand for information. Our current information systems do none of these things; instead they rely on the user to classify information, query it, judge its validity, and bring it to bear.
What if we had cognitive systems that could put the entire corpus of a department's information base into context and provide confidence-weighted responses, supporting evidence, and a map of any related actors, policies, regulations, and services?
What if the system continued to learn as new information was added? What if it could gauge the validity of a given information resource based on who produced it, who was consulted, how quickly or how far it moved through the system, and whether it was ultimately approved or set aside? What if we took a more enterprise-wide approach and linked departmental AIs together?
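No such system exists in government today, but the kind of provenance-based validity scoring described above is easy to sketch. Everything in the snippet below – the record fields, the weights, the threshold of five reviewers – is invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    author_seniority: float   # 0..1 – a hypothetical proxy for who produced it
    reviewers: int            # how many people were consulted
    days_in_circulation: int  # how far and how long it moved through the system
    approved: bool            # ultimately approved, or set aside

def confidence(rec: Record) -> float:
    """Toy confidence score combining provenance signals (weights are invented)."""
    score = 0.4 * rec.author_seniority
    score += 0.2 * min(rec.reviewers / 5, 1.0)           # wider consultation, more trust (capped)
    score += 0.2 * min(rec.days_in_circulation / 30, 1.0)
    score += 0.2 * (1.0 if rec.approved else 0.0)        # approval is a strong signal
    return round(score, 2)

briefing = Record("Policy briefing", 0.9, 6, 45, True)
draft = Record("Unreviewed draft", 0.3, 0, 2, False)
print(confidence(briefing))  # → 0.96 (senior author, widely consulted, approved)
print(confidence(draft))     # → 0.13
```

A real cognitive system would learn these weights from outcomes rather than hard-code them, but even this toy version shows how metadata the organization already captures could feed a confidence-weighted response.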
Imagining the future of collaboration
What if – to take a real-world example – we had approached the web renewal initiative with an AI (rather than a Content Management System) in mind?
Imagine all of the government's information in a single window that learns not only from how the civil service organizes its information, policies, services, and regulations, but also from how citizens search for, consume, and interact with them online. The AI could automatically prioritize related information for users as they search, highlight seasonal information when it becomes a priority (e.g. tax filing, voting, etc.), and remove redundant, outdated, and trivial content (ROT).
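The prioritization logic imagined here can be sketched in a few lines. The pages, view counts, seasonal windows, and ROT thresholds below are all hypothetical, chosen only to show the shape of the idea:

```python
from datetime import date

# Hypothetical content items: (title, monthly_views, last_updated_year, seasonal_months)
pages = [
    ("How to file your taxes", 120_000, 2015, {2, 3, 4}),   # tax season: Feb-Apr
    ("Where to vote", 45_000, 2015, {10}),                   # election month
    ("Fax cover sheet guidelines", 12, 2003, set()),
]

def priority(page, today=date(2015, 3, 15)):
    """Rank content by citizen demand, boosting it during its season."""
    title, views, updated, seasons = page
    score = views
    if today.month in seasons:   # e.g. tax filing pages during tax season
        score *= 2
    return score

def is_rot(page, today=date(2015, 3, 15), min_views=100, max_age=5):
    """Flag redundant, outdated, and trivial (ROT) content: low traffic and stale."""
    _, views, updated, _ = page
    return views < min_views and today.year - updated > max_age

ranked = sorted(pages, key=priority, reverse=True)
print([p[0] for p in ranked])            # tax filing first, boosted in March
print([p[0] for p in pages if is_rot(p)])  # the stale fax page gets flagged
```

In a real deployment the signals would come from live analytics rather than hard-coded tuples, and removal would surely involve a human in the loop, but the mechanics of demand-driven ranking and ROT flagging look roughly like this.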
This would take co-creation and collaboration to a whole new level, as the AI would bring governments and citizens together in a way that is otherwise impossible to achieve. We may not think of this as co-creation or collaboration in the contemporary sense of those words, but it surely represents some small but important part of the future of both.
*Caveat: Melissa picked up on this point in her wildly popular "A government that learns by design"; it is worth reading if you have not yet done so.