This is the first in what will be an ongoing and semi-regular series of columns examining specific foresight issues for association decision-makers. This column focuses on artificial intelligence.
In my January column, I argued that we cannot build future-ready associations based on human intelligence alone, and that associations must endeavor to blend human and machine intelligences for maximum beneficial impact. My view is consistent with the belief that we are headed toward an “AI-first” world over the next decade. In fact, this world is already emerging around us, and one of the most important questions that association decision-makers must ask themselves is how they will ensure that operating in an AI-first way serves everyone by eliminating drudgery from our lives, augmenting human performance and improving the well-being of our society.
The Pew Research Center’s December 2018 report, “Artificial Intelligence and the Future of Humans,” is based on input gathered from nearly 1,000 international experts in technology, business and related fields. One of the report’s key questions was whether the experts believe that AI will leave people better off over the next decade. Here is the topline finding:
“Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.”
While this 63-37 split may seem decisive, it also captures well my own divided perspective on artificial intelligence. On the one hand, AI is already having a significant impact on our society, and my ongoing reading and research confirms that this impact is mostly positive. I concur, therefore, with the 63 percent of Pew experts who choose to be hopeful that AI will benefit humanity over the longer term. On the other hand, I wholeheartedly embrace the real-world concerns of the 37 percent who disagree, including worries about how AI is being developed, implemented and applied to reshape the present and future of work. It remains critical, therefore, that staff and volunteer association decision-makers (and their advisors) exercise care, resisting the hype and avoiding the assumption of only the best or only the worst about AI and its influence on human beings over the next decade and beyond.
What should association decision-makers do?
For association boards, chief staff executives and other decision-makers who are wondering what they should do to ensure that AI is serving the best interests of their associations, stakeholders and the broader industry and professional systems within which they operate, I have three immediate recommendations:
Devote attention to understanding the emerging direction of AI
As part of their duty of foresight, association boards must learn as much as possible about the rapidly unfolding future of AI. It is essential for boards to cast a wide net and gather insight and perspective on what is happening in the AI space beyond the new product announcements, market forecasts and tech billionaire opinions that are the typical focus of mainstream media reporting. For example, associations should look at resources from entities such as the AI Now Institute at New York University that are working to ensure the responsible and ethical implementation of AI.
Clarify how AI decisions will be made
Associations should establish clear decision-making principles to guide the way AI is introduced into every aspect of organizational work. Merely articulating principles is not enough, however. Such principles must be consistently applied in decision-making, especially when doing so might be viewed as inconvenient. In addition, associations should make all AI-related decisions using diverse teams that involve contributors with expertise outside of technology. Associations must strive to compensate for any concerns about AI development and training by incorporating different perspectives into the decision-making process about how human and machine intelligences can be most effectively and responsibly integrated.
Collaborate only with AI-responsible technology partners
Consistent with their organizational decision-making processes for AI, associations must have thoughtful conversations with every ongoing and emerging technology partner to ensure those providers are committed to the responsible and ethical use of AI in current applications. If AI is not an element of existing technologies, associations need to know when and where AI integration is likely to be introduced on their product roadmaps and what steps those companies are already taking to ensure responsible implementation. In each type of conversation, associations should be sure to explore the explainability or interpretability of the partner’s AI applications. Technology partners must help increase trust in AI by creating ways for humans to understand how those smart technologies arrive at their predictions or, at a minimum, make it possible for humans to interpret the connection between cause and effect in AI decision-making.
Back in May
The Duty of Foresight column is taking the month of April off, but it will return in late May.
About The Author
Jeff De Cagna FRSA FASAE is executive advisor for Foresight First LLC in Reston, Virginia and a respected contrarian thinker on the future of associating and associations. Jeff advises and serves on association and non-profit boards, and he has pursued executive development in both the work of governing (BoardSource and Harvard Business School) and the work of foresight (Institute for the Future and Oxford University). Jeff serves on ASAE’s Key Consultants Committee and on the ASAE Foundation Research Committee, for which he chairs the AI/Automation Task Force. He is the author of the eBook, Foresight is The Future of Governing: Building Thrivable Boards, Stakeholders and Systems for the 21st Century, produced in collaboration with Association Adviser, a Naylor publication.