April 04, 2023


Generative AI: Defining the Association Community's Path Forward Together

Photo Credit: whyframestudio (https://www.istockphoto.com/portfolio/whyframestudio?mediatype=photography)

This is an open letter to the association community outlining critical questions we must ask and essential actions we must take in response to the rapid adoption of generative AI.

Please share your thoughts on the letter as comments to the LinkedIn post or by email at foresightfirst@gmail.com.

AUTHOR'S ATTESTATION: The text in this open letter was written entirely by its human author, Jeff De Cagna FRSA FASAE, without the use of generative AI.


Since the initial release of ChatGPT in late November 2022, there has been a global explosion of interest in "generative AI." ChatGPT reached 100 million active users just two months after its launch, and its adoption continues to grow. In a recent blog post titled, "The Age of AI has begun," Bill Gates argues, "[t]he development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."

Gates's observations may be correct, but there is another, more concerning side to the story. The accelerating adoption of AI overall—and generative AI specifically—continues to raise numerous real-world problems for human beings that neither government entities nor technology companies are taking sufficient action to resolve. The short-term concerns include 1) the discriminatory impact of biased AI algorithms, 2) cybersecurity, legal, and privacy issues, 3) the inaccuracy and unreliability of AI-generated content, 4) AI-generated deepfakes/misinformation/disinformation, and 5) the energy usage, environmental impact, and human exploitation involved in training large language models (LLMs). Over the long term, additional concerns include 1) the impact of generative AI on jobs, 2) the loss of human agency, and 3) the existential risks of artificial general intelligence (AGI).

At this crucial moment, when the call to address generative AI's risks and harms is growing louder, the association community's voices must be heard. Our community must move forward together to ensure that it makes constructive, empathic, and positive-sum contributions to the conversations that will help define the direction of humanity's short-term and long-term futures.

In this open letter, I outline six critical questions regarding AI/generative AI that association boards, chief staff executives (CSEs), other senior decision-makers, and individual contributors must ask. In addition, I am proposing six essential actions on which our community and its contributors can cooperate to safeguard our organizations, stakeholders, and successors from technological harm.

Six Critical Questions

•What is the current level of adoption/influence/impact of AI technologies, including generative AI, within the industries/professions/fields that associations serve?—Association decision-makers must develop a clear picture of how AI is already shaping (or reshaping) the short-term and long-term futures of their industries/professions/fields. Any actions that association boards and staff partners might take with respect to AI in the months and years ahead will benefit from understanding this larger context.

•How has the association community's recent use of AI-enabled platforms and tools helped and harmed stakeholders and successors?—Associations were using machine intelligence to varying degrees before the arrival of ChatGPT and other generative AI technologies, perhaps without even realizing it. Association decision-makers must audit the current use of AI applications and evaluate their ongoing beneficial and detrimental impact on both their organizations and the human beings they serve.

•What is the association community's foundational motivation for using generative AI?—The growing list of generative AI platforms and tools offers the association community's stakeholders new ways to increase their capability, efficiency, and productivity. Given both current and emerging risks and harms, however, association decision-makers must exercise care and examine whether achieving those outcomes provides a sufficient and compelling justification for using these tools.

•What actions will the association community take to ensure AI, including generative AI, benefits human beings and does not inflict harm?—Shaping a more responsible and sustainable future for AI, including generative AI, may well require short-term sacrifices by all affected stakeholders to minimize harm over the long term. If associations are truly committed to ensuring AI is good for humanity, we will view such sacrifices as meaningful and purposeful actions taken for mutual benefit.

•What can the association community do to create a trusted and cooperative context for dialogue on the future of AI?—Associations are human systems that have always operated primarily on the basis of cooperation and trust, as demonstrated most recently during the worst periods of the COVID-19 pandemic lockdown. AI is another wicked problem requiring immediate attention on which the association community can make a positive-sum impact by gathering key stakeholders to listen, learn, imagine better outcomes, and mobilize to achieve them.

•How can the association community define a beneficial and collaborative path forward with respect to AI, including generative AI?—Arguably the most significant obstacles we face in addressing AI's real-world concerns are 1) our shared unwillingness to discard orthodox beliefs, i.e., the deep-seated assumptions we make about how the world works, and 2) our collective struggle to display humility toward the future. To define a beneficial and collaborative path forward, we must let go of orthodoxy, enable intentional learning, and reach a necessary conclusion: for all we think we know about AI, we still don't know what we don't know.

Six Essential Actions

•ASAE should organize the association community to advocate for a slowdown in the training and release of more powerful AI models—On its website, ASAE states, "[w]e believe associations have the power to transform society for the better." The current upheaval surrounding generative AI offers ASAE an opportunity to demonstrate the reality of associations' beneficial impact on our world. By organizing our community to challenge government and technology companies to slow down the training and release of new AI models (and any related products), we can create a cooling-off period in which to pursue constructive and inclusive dialogue about the future direction and impact of these powerful technologies on human beings.

As part of the call for a slowdown, ASAE and other associations should publicly support the Federal Trade Commission's recent guidance regarding AI advertising claims.

•ASAE should convene a virtual summit in 2023 to discuss the future direction and impact of AI on the association community and beyond—Once again, ASAE can use its agency to assemble representatives from nearly every field of human endeavor to examine with care the current and potential future impact of AI on associations, the human beings they serve, and other affected stakeholders. This summit should include diverse contributors from across multiple sectors, including business, government, and technology companies, as well as experts on AI ethics. The purpose of the summit would be to identify promising areas for ongoing cooperative action to shape a more responsible AI future.

•Associations must prepare their boards of directors to engage in focused, fully-informed, and meaningful conversations about the short-term/long-term implications of AI for the industries/professions/fields they serve—Association boards have a responsibility to safeguard their stakeholders and successors from technological harm. To fulfill this responsibility, boards, CSEs, and other senior decision-makers must collaborate to build a shared understanding of generative AI's beneficial and detrimental impact on human beings, their work, and their lives. For boards, this intentional learning process is essential to fulfilling their stewardship responsibilities, including the duty of foresight.

•Associations must adopt comprehensive policy, practice, and programmatic responses to generative AI adoption to safeguard their stakeholders from technological harm—Association boards and staff must work together to craft policies around both acceptable and unacceptable use cases for generative AI involving consultants/contractors/industry partners, staff, and voluntary contributors. In addition, associations must establish effective practices to ensure that content for their certification, credentialing, professional development, publishing, and other offerings is in full compliance with those policies. Associations also must design multiple programmatic offerings to help educate stakeholders and successors about generative AI and its potential beneficial/detrimental impact.

•All association contributors must pursue intentional learning about the ethical concerns and issues that generative AI platforms and tools create—Consultants/contractors/industry partners, staff, and voluntary contributors in the association community must strive to develop an empathic and holistic understanding of the real-world ethical dilemmas and questions raised by generative AI. Through intentional learning and reflection, contributors will develop a more robust ethical point of view on the unintended negative consequences of using these platforms and tools for their associations, other stakeholders, and successors.

•All association contributors must commit to using generative AI platforms and tools with transparency and responsibility—When making use of generative AI platforms and tools, especially in the absence of more robust safeguards, consultants/contractors/industry partners, staff, and voluntary contributors in the association community should 1) consider them options of last resort, 2) strive to minimize their use, 3) fully disclose their specific acceptable use cases and purposes, 4) avoid using them to inflict harm, and 5) ensure that all synthetic media products are identified with clear text-based attestations or embedded watermarks.

For more information, please review the Partnership on AI's Responsible Practices for Synthetic Media.

What will our successors say about us?

This is the preoccupying question of my professional life, and it has been front and center in my thinking as I have written this letter. In total candor, I am deeply worried about the future we are creating for our successors, especially young people. Despite the obvious difficulties before us, we owe them the strongest possible effort we can make for the rest of this decade and beyond to leave the world better than we found it.

Through this letter, then, I have a straightforward request of the association community to which I have devoted my career: let us find common ground and enable our constructive participation in moving powerful yet troubling AI/generative AI technologies toward a more ethical, responsible, and trusted pathway for the future.

Our successors are counting on us to stand up for their futures today. They will be watching what we do and how we do it. We must not let them down.

Jeff De Cagna FRSA FASAE, executive advisor for Foresight First LLC in Reston, Virginia, is an association contrarian, foresight practitioner, governing designer, stakeholder/successor advocate, and stewardship catalyst. In his work, Jeff advises association and non-profit boards on how they can set a higher standard of stewardship, governing, and foresight [SGF].

A graduate of the Johns Hopkins and Harvard universities, Jeff has continued his learning with the future at the MIT Sloan School of Management, Oxford University, Harvard Business School, the University of Virginia's Darden School of Business, BoardSource, the Copenhagen Institute for Future Studies, and the Institute for the Future.

Jeff is the 32nd recipient of ASAE’s Academy of Leaders Award, the association’s highest individual honor given to consultants or industry partners in recognition of their support of ASAE and the association community.