
“The future we all want to live in won’t just show up, fully formed. We need to be courageous. We must take responsibility for our actions.” – Amy Webb
I recently finished reading Amy Webb’s fine book “The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity” (2019, New York: PublicAffairs/Hachette Book Group).
The book traces the history and evolution of the power of “the big nine” tech companies (six US-based, three in China), with a primary focus on the power and possibilities of artificial intelligence. It takes a deep look at the opportunities and possibilities (both positive and devastating) that AI brings now and into the future.
The Big Nine defies a “simple” framework (AI is good/AI is bad). Rather, it argues that AI is almost incomprehensibly powerful and requires the responsible attention of individuals, communities, and governments to ensure that the highest ideals and possibilities are achieved and the greatest threats are reduced or eliminated (much as we think about the power and possibility of nuclear energy, though the developmental trajectories differ in important ways).
From a social work perspective, the focus intersects with our own thinking and imagining of the “future” of social justice, human well-being, and equity work. What is a future in which a few powerful actors (largely white, male, economically dominant, and, in the US, western) construct the underlying structures and digital machinery that decide, sort, and control much of the workings of modern life? How might existing inequities be replicated and multiplied – or, conversely, interrupted and resolved? These are essential concerns that social work would be well-advised to factor into the way we think about the future and our work in it. How will these mechanisms (re)arrange modern life (or how have they already begun to), who will continue to win and lose, and how will those trajectories play out in relation to the way social work thinks about itself and the work we aspire to do? Likely these will be a combination of dynamics that we are professionally familiar with (poverty, structural violence, “isms” and the like) as well as new types or variations of oppression that we can only begin to predict and understand. My recent blog post on algorithmic transparency, bias and justice goes into some of these issues in more detail.
Our values, knowledge, and skills regarding the importance and processes of engaging community voices, interrupting oppression, building more just and liberatory structures, and recognizing and addressing structural barriers to well-being could all prove important as pressure builds to recognize the human rights issues associated with tech growth that does not reflect the well-being of all. But we will need to be intentional about our own learning curve to remain relevant in these complex new spaces. This book advanced my thinking about how vast and complex discussions of “big tech” and AI can be, and yet how largely understandable they are through our own frames of political economy, human rights, and social work ethics – just in new spaces and new ways. Social workers belong in this conversation, and this blog remains a call to action and an invitation to continue dialogue about how we might best do that.
The book as a whole is readable and is organized in three primary sections: 1) an overview and evolution of tech in the modern world (a formidable challenge for the non-tech reader, but she does a fine job of keeping it accessible); 2) a fascinating, inspiring, and sobering presentation of three possible “futures” concerning AI, with scenarios crafted to fully explore the possibilities that exist for humanity based on decisions made (as Ms. Webb might say) while our ability to make them is still collectively within our grasp; and 3) a final section that lays out an action plan and analysis of what needs to be done to optimize all that AI has to offer while simultaneously building a new set of global policy guardrails to protect us, in some respects, from ourselves and from the worst of the risks that are increasingly apparent in the rapid evolution of these technologies.
The purpose of this post is to share what I considered the most substantive part of the book: Ms. Webb’s suggestion that to succeed in the years ahead with the complexities (and risks) that AI introduces into our world, we need an international body comprised of tech leaders and “AI researchers, sociologists, economists, game theorists, futurists, political scientists” (p. 237) along with government leaders, and that these members should reflect the “socioeconomic, gender, race, religious, political and sexual diversity” of the world (p. 237).
She calls this governing/regulatory body the Global Alliance on Intelligence Augmentation (GAIA). Its core aspirational purpose would be to collectively “facilitate and cooperate on shared AI initiatives and policies” (p. 237) and to affirm and create structures to consider, operationalize, and protect AI as a public good. In essence, she suggests that these tools are rapidly becoming too powerful to be left merely to the devices of private, corporate, and market forces.
Here is an excerpt that clarifies what I consider to be the most important elements of the effort she proposes – in itself a fascinating “thought experiment” about what might be to come. I hope that we move toward this kind of global dialogue sooner rather than later, and that we as social workers can find ourselves as helpful, informative, relevant change agents, social scientists, and supporters of human well-being in an increasingly complicated world.
“GAIA should be considered a framework of rights that balances individual liberties with the greater, global good. It would be better to establish a framework that’s strong on ideals but can be more flexible in interpretation as AI matures. Member organizations would have to demonstrate they are in compliance or face being removed from GAIA. Any framework should include the following principles:
- Humanity should always be at the center of AI’s development.
- AI systems should be safe and secure. We should be able to independently verify their safety and security.
- The Big Nine – including its investors, employees, and the governments it works within – must prioritize safety above speed. Any team working on an AI system – even those outside the Big Nine – must not cut corners in favor of speed. Safety must be demonstrated and discernable by independent observers.
- If an AI system causes harm, it should be able to report out what went wrong, and there should be a governance process in place to discuss and mitigate damage.
- AI should be explainable. Systems should carry something akin to a nutritional label, detailing the training data used, the processes used for learning, the real-world data being used in applications, and the expected outcomes. For sensitive or proprietary systems, trusted third parties should be able to assess and verify an AI’s transparency.
- Everyone in the AI ecosystem – Big Nine employees, managers, leaders, board members; startups (entrepreneurs and accelerators); investors (venture capitalists, private equity firms, institutional investors, and individual shareholders); teachers and graduate students; and anyone else working in AI – must recognize that they are making ethical decisions all the time. They should be prepared to explain all of the decisions they’ve made during the development, testing and deployment processes.
- The Human Values Atlas* should be adhered to for all AI projects. Even narrow AI applications should demonstrate that the atlas has been incorporated.
- There should be a published, easy-to-find code of conduct governing all people who work on AI and its design, build and deployment. The code of conduct should also govern investors.
- All people should have the right to interrogate AI systems. What an AI’s true purpose is, what data it uses, how it reaches its conclusions, and who sees results should be made fully transparent in a standardized format.
- The terms of service for an AI application – or any service that uses AI – should be written in language plain enough that a third grader can comprehend it. It should be available in every language as soon as the application goes live.
- PDRs (personal data records) should be opt-in and developed using a standard format, they should be interoperable, and individual people should retain full ownership and permission rights. Should PDRs become heritable, individual people should be able to decide the permissions and uses of their data.
- PDRs should be decentralized as much as possible, ensuring that no one party has complete control. The technical group that designs our PDRs should include legal and nonlegal experts alike: whitehat (good) hackers, civil rights leaders, government agents, independent data fiduciaries, ethicists, and other professionals working outside of the Big Nine.
- To the extent possible, PDRs should be protected against enabling authoritarian regimes.
- There must be a system of public accountability and an easy method for people to receive answers to questions about their data and how it is mined, refined, and used throughout AI systems.
- All data should be treated fairly and equally, regardless of nationality, race, religion, sexual identity, gender, political affiliation, or other unique beliefs” (pp. 240-242).
*The idea of a “human values atlas” is presented earlier in the book as the formidable and complex but essential task of creating a living, shared document about what is most centrally valued by humans across cultures and nationalities. This atlas would guide much of the future work in the AI space; without it, we are, as Ms. Webb suggests, ceding authority for these matters to potentially conflicting and hidden/opaque corporate forces. She discusses this in greater detail on pages 239-240 of the book.
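The “nutritional label” principle in the excerpt above is concrete enough to sketch. Purely as a thought experiment in the spirit of Ms. Webb’s proposal – the book offers no schema, and every field name below is my own hypothetical assumption – here is a minimal Python sketch of what such a label might record for an AI system:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an "AI nutritional label," in the spirit of the
# GAIA principle quoted above. The book proposes no schema; all field
# names and example values are illustrative assumptions, not a standard.

@dataclass
class AINutritionalLabel:
    system_name: str
    training_data_sources: List[str]   # where the training data came from
    learning_process: str              # how the system learned
    realworld_data_inputs: List[str]   # live data consumed in deployment
    expected_outcomes: List[str]       # what the system is meant to do
    known_limitations: List[str] = field(default_factory=list)
    third_party_auditor: str = "unassigned"  # trusted verifier for sensitive systems

    def summary(self) -> str:
        """Render the label in a plain, standardized format."""
        return (
            f"System: {self.system_name}\n"
            f"Trained on: {', '.join(self.training_data_sources)}\n"
            f"Learning process: {self.learning_process}\n"
            f"Live inputs: {', '.join(self.realworld_data_inputs)}\n"
            f"Expected outcomes: {', '.join(self.expected_outcomes)}\n"
            f"Known limitations: {', '.join(self.known_limitations) or 'none listed'}\n"
            f"Audited by: {self.third_party_auditor}"
        )

# Example: a label for a hypothetical benefits-eligibility screening tool.
label = AINutritionalLabel(
    system_name="Benefits Eligibility Screener (hypothetical)",
    training_data_sources=["historical case records", "census demographics"],
    learning_process="supervised learning on past eligibility decisions",
    realworld_data_inputs=["applicant intake forms"],
    expected_outcomes=["flag likely-eligible applicants for caseworker review"],
    known_limitations=["historical decisions may encode past bias"],
    third_party_auditor="independent civil-rights audit group",
)
print(label.summary())
```

Even a toy schema like this makes the force of the principle visible: each field is a question (what data? what process? who verified it?) that a layperson, caseworker, or auditor could ask of any AI system in a standardized form.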
Here is a 15-minute interview with Ms. Webb from a recent PBS segment.
For the reader’s convenience, here are a couple of additional reviews of this book:
Finally, here is some information about recent and current U.S. federal activity on this issue:
Will Trump’s new artificial intelligence initiative make the U.S. the world leader in AI? (2019)
President Obama’s artificial intelligence, automation and the economy plan (2016)