Updates to the Never-Ending Futures Annotated Bibliography Project – May 2019 Edition!

I love to scan the literature for new information. It is a hobby, a passion and, fortunately, a useful pastime for a scholar! What never ceases to amaze me is the transdisciplinary nature of the futures literature. There is never a dull moment, in every sense!

Some months ago – I shared a project I’ve been working on. As a tool for my own scholarship, I often organize my resources in an annotated bibliography and I use these regularly as I write/study to keep myself organized. Since my goal is not only to get some papers and books out focused on my passion for futures capacity building in social work, but also to build our collective capacity to be more “foresightful” together, I am pleased to share this resource with all of you.

I’ve added a number of new articles and books that have flown across my radar screen the past couple of months. As an aid to the reader – all new entries are included in light blue text for now!

Here’s a link to take you to the document! Enjoy and join the dialogue to build the future we want.

If you find this helpful – please drop me a line as we continue to build community and networks in social work futures and beyond.

Reflections and Takeaways from Amy Webb’s Book “The Big Nine”

“The future we all want to live in won’t just show up, fully formed. We need to be courageous. We must take responsibility for our actions.” – Amy Webb

I recently finished reading Amy Webb’s fine book “The big nine: How the tech titans and their thinking machines could warp humanity” (2019). New York: Public Affairs/Hachette Book Group.

The book traces the history and evolution of the power of “the big nine” tech companies (six US-based, three in China), with a primary focus on the power and possibilities of artificial intelligence. It takes a deep look at the opportunities and possibilities (both positive and devastating) that AI brings now and into the future.

The Big Nine defies a “simple” framework (AI is good/AI is bad). Rather, it focuses on the idea that AI is almost incomprehensibly powerful and requires the responsible attention of individuals, communities and governments to ensure that the highest ideals and possibilities are achieved and the greatest threats are reduced or eliminated (almost as one might think about the power and possibility of nuclear energy – though the developmental trajectories have distinct differences).

From a social work perspective, the focus intersects with our own thinking/imagining of the “future” of social justice, human well-being and equity work. What is a future in which a few powerful actors (largely white, male, economically dominant and, in the US, western) construct the underlying structures and digital machinery that decide, sort, and control much of the workings of modern life? How might existing inequities be replicated, multiplied – or conversely, interrupted and resolved? These are essential concerns that social work would be well-advised to factor into the way we think about the future and our work in it. How will these mechanisms (re)arrange modern life (or how have they already begun to), who will continue to win and lose, and how will those trajectories play out according to the way social work thinks about ourselves and the work we aspire to do? Likely these will be a combination of ways that we are professionally familiar with (poverty, structural violence, “isms” and the like) as well as new types or variations of oppression that we can only begin to predict and understand. My recent blog post on algorithmic transparency, bias and justice goes into some of these issues in more detail.

Our values, knowledge and skills regarding the importance and processes of engaging community voices, interrupting oppression, building more just and liberatory structures, and recognizing and addressing structural barriers to well-being could all prove important as pressure builds to recognize the human rights issues associated with tech growth that does not reflect the well-being of all. But we will need to be intentional about the learning curve required to remain relevant in these complex new spaces. I found this book advanced my own thinking/understanding of how vast and complex discussions of “big tech” and AI can be – and yet largely understandable using our own frames of political economy, human rights and social work ethics, just in new spaces and new ways. Social workers belong in this conversation, and this blog remains a call to action and an invitation to continue dialogue about how we might best do that.

The book as a whole is readable and includes three primary sections: 1) an overview and evolution of tech in the modern world (a formidable challenge for the non-tech reader, but she does a fine job of keeping it accessible); 2) a fascinating, inspiring and sobering deep presentation of three possible “futures” concerning AI – scenarios crafted with the intention of fully exploring various possibilities that exist for humanity based on decisions that are made (as Ms. Webb might say) while our ability to do so is still collectively within our grasp; and 3) a final section that lays out an action plan and analysis of what needs to be done to optimize all that AI has to offer, while simultaneously building a new set of global policy guardrails to protect us, in some respects, from ourselves and the worst of the risks that are increasingly apparent in the rapid evolution of these technologies. The purpose of this post is to share what I considered to be the most substantive part of the book: Ms. Webb’s suggestion that, to succeed in the years ahead with the complexities (and risks) that AI introduces into our world, we will need an international body comprised of tech leaders and “AI researchers, sociologists, economists, game theorists, futurists, political scientists” (p. 237) along with government leaders – and that these members reflect the “socioeconomic, gender, race, religious, political and sexual diversity” of the world (p. 237).

She calls this governing/regulatory body the Global Alliance on Intelligence Augmentation (GAIA), whose core aspirational purpose would be to collectively “facilitate and cooperate on shared AI initiatives and policies” (p. 237) and to affirm and create structures to consider, operationalize and protect AI as a public good. In essence, she suggests that these tools are rapidly becoming too powerful to be left merely to the devices of private, corporate and market forces.

Here is an excerpt that clarifies what I consider to be the most important elements of this effort she proposes – which in itself is a fascinating “thought experiment” about what might be to come. I hope that we move towards this kind of global dialogue sooner rather than later – and I hope that we as social workers can find ourselves as helpful, informative, relevant change agents, social scientists, and supporters of human well-being in an increasingly complicated world.

“GAIA should be considered a framework of rights that balances individual liberties with the greater, global good. It would be better to establish a framework that’s strong on ideals but can be more flexible in interpretation as AI matures. Member organizations would have to demonstrate they are in compliance or face being removed from GAIA. Any framework should include the following principles:

  1. Humanity should always be at the center of AI’s development.
  2. AI systems should be safe and secure. We should be able to independently verify their safety and security.
  3. The Big Nine – including its investors, employees, and the governments it works within – must prioritize safety above speed. Any team working on an AI system – even those outside the Big Nine – must not cut corners in favor of speed. Safety must be demonstrated and discernible by independent observers.
  4. If an AI system causes harm, it should be able to report out what went wrong, and there should be a governance process in place to discuss and mitigate damage.
  5. AI should be explainable. Systems should carry something akin to a nutritional label, detailing the training data used, the processes used for learning, the real-world data being used in applications and the expected outcomes. For sensitive or proprietary systems, trusted third parties should be able to assess and verify an AI’s transparency.
  6. Everyone in the AI ecosystem – Big Nine employees, managers, leaders, board members; startups (entrepreneurs and accelerators); investors (venture capitalists, private equity firms, institutional investors, and individual shareholders); teachers and graduate students; and anyone else working in AI – must recognize that they are making ethical decisions all the time. They should be prepared to explain all of the decisions they’ve made during the development, testing and deployment processes.
  7. The Human Values Atlas* should be adhered to for all AI projects. Even narrow AI applications should demonstrate that the atlas has been incorporated.
  8. There should be a published, easy-to-find code of conduct governing all people who work on AI and its design, build and deployment. The code of conduct should also govern investors.
  9. All people should have the right to interrogate AI systems. What an AI’s true purpose is, what data it uses, how it reaches its conclusions, and who sees results should be made fully transparent in a standardized format.
  10. The terms of service for an AI application – or any service that uses AI – should be written in language plain enough that a third grader can comprehend it. It should be available in every language as soon as the application goes live.
  11. PDRs (personal data records) should be opt-in and developed using a standard format, they should be interoperable, and individual people should retain full ownership and permission rights. Should PDRs become heritable, individual people should be able to decide the permissions and uses of their data.
  12. PDR’s should be decentralized as much as possible, ensuring that no one party has complete control. The technical group that designs our PDRs should include legal and nonlegal experts alike: whitehat (good) hackers, civil rights leaders, government agents, independent data fiduciaries, ethicists, and other professionals working outside of the Big Nine.
  13. To the extent possible, PDRs should be protected against enabling authoritarian regimes.
  14. There must be a system of public accountability and an easy method for people to receive answers to questions about their data and how it is mined, refined and used throughout AI systems.
  15. All data should be treated fairly and equally, regardless of nationality, race, religion, sexual identity, gender, political affiliations, or other unique beliefs” (pp. 240-242).

*The idea of a “human values atlas” is presented earlier in the book as the formidable and complex but essential task of creating a living, shared document about what is most centrally valued by humans across cultures and nationalities. This atlas would guide much of the future work in the AI space – without it, we are, as Ms. Webb suggests, ceding authority for these matters to potentially conflicting and hidden/opaque corporate forces. She discusses this in greater detail on pages 239-240 of the book.
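To make principle 5’s “nutritional label” idea a bit more concrete, here is a minimal sketch in Python of what such a machine-readable disclosure might look like. This is my own illustration, not Ms. Webb’s specification – every field name and value below is invented for the example.

```python
# Hypothetical "nutritional label" for an AI system, loosely inspired by
# principle 5 above: a structured disclosure of what the model was trained
# on, how it learns, and what it is expected to do.

model_label = {
    "name": "example-intake-triage-model",      # invented system name
    "training_data": "2015-2018 agency intake records (de-identified)",
    "learning_process": "gradient-boosted decision trees",
    "deployment_data": "live intake forms",
    "expected_outcome": "priority ranking of service requests",
    "independent_audit": True,                  # verified by a third party
}

def print_label(label):
    """Render the label the way a nutrition panel lists ingredients."""
    for field, value in label.items():
        print(f"{field.replace('_', ' ').title()}: {value}")

print_label(model_label)
```

The point of a standard format like this is less the technology than the accountability: a trusted third party could check each field against the deployed system, much as an auditor checks a financial statement.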

Here is a 15-minute interview with Ms. Webb on a recent PBS spot.

For the reader’s convenience, here are a couple of additional reviews of this book:

Technology Review

Fast Company

Venture Beat


Finally here is some information about recent and current U.S. federal activity on this issue:

Will Trump’s new artificial intelligence initiative make the U.S. the world leader in AI? (2019)

President Obama’s artificial intelligence, automation and the economy plan (2016)

Algorithmic Transparency, Bias and Justice

Algorithms are a huge part of modern life – so much so that we sometimes forget they have arrived. Indeed, they are largely “invisible” to everyday people, working behind the scenes to sort data and make decisions that reflect the judgments of a few algorithm designers. Sometimes these algorithms can be life-changing or life-saving – for example, when a cancer diagnosis can be made through a combination of machine learning and algorithms that scan hundreds of thousands of x-rays to detect the tiniest irregularity a human might miss. But other applications – like racially biased facial recognition software that might inaccurately identify someone as a criminal suspect – are much more concerning. Increasingly, the ideas of “algorithmic transparency,” “algorithmic racism/bias,” and “algorithmic justice” have come into more prevalent conversation in social justice circles.
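To make the bias concern concrete, here is a minimal, hypothetical sketch in Python of how a seemingly “neutral” scoring algorithm can encode bias through a proxy variable such as ZIP code. Every name and number below is invented for illustration – real systems are far more complex, but the mechanism is the same.

```python
# Hypothetical risk-scoring sketch: the rule never mentions race, but ZIP
# code acts as a proxy for historically redlined neighborhoods, so the
# "neutral" score quietly reproduces existing inequities.

REDLINED_ZIPS = {"63106", "63107"}  # invented example values

def risk_score(applicant):
    """Return a 0-100 'risk' score from seemingly neutral inputs."""
    score = 50
    if applicant["zip"] in REDLINED_ZIPS:  # proxy variable: encodes history
        score += 30
    if applicant["income"] < 30000:
        score += 10
    return score

# Two applicants with identical incomes receive different scores
# solely because of where they live.
a = {"zip": "63106", "income": 45000}
b = {"zip": "63105", "income": 45000}
print(risk_score(a), risk_score(b))
```

This is why transparency matters: without the ability to inspect the rule, the person scored 30 points higher has no way to know that geography, not behavior, drove the decision.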

There is much learning and development going on with regard to this topic. Of all the “future facing” topics one might consider in terms of urgent need for attention in social work, this is, in my estimation, one of the most important. As the rate of adoption of new technologies (most often emerging from the private sector) continues to accelerate, algorithms that embed bias or lack ethical safeguards are a frequent point of discussion among social justice advocates. What is the pathway forward, and how do we continue to increase social work practice and research attention in this area?

I would suggest that this is the most under-discussed ethical challenge of the future for the profession of social work. We need to dramatically increase the depth, range and focus of our ethical evolution to participate in and shape a future in which these technologies work for people and prevent harm and injustice. We should concern ourselves with identifying how and where algorithms are emerging and becoming active in our social work practice spaces (clinical and macro). Collectively, we are starting to develop a shared and critical literacy regarding these important and ubiquitous forces, and to press for clear and explicit ethical guidelines and rules.

For those who are completely new to this topic, here’s a great primer.

While there are pockets of enthusiasm for dialogue about these developments in social work, we have a long way to go to assert where and how we can operate most ethically – and what that looks like given the changing dynamics at play.

Here’s a reading/resource list to get started – with great respect for the groundbreaking work of all who have been leaders in this space.

  • Dr. Desmond Patton is an Associate Professor of Social Work at Columbia University in New York City. I’ve previously listed his work on my blog but want to underscore the significant leadership he’s contributed within social work to this topic. Here’s a recent article he put together for Medium. He’s also the Principal Investigator of the Safe Lab project at Columbia which is a research initiative focused on examining the ways in which youth of color navigate violence on and offline.
  • Data for Black Lives is a national network of over 4,000 activists, organizers, and scientists using data science to create concrete and measurable change in the lives of Black people. For far too long, data has been weaponized against Black communities – from redlining to predictive policing, credit scoring and facial recognition. But we are charting out a new era, where data is a tool for profound social change. (From their website here!)
  • The Institute for the Future has developed an “Ethical OS” toolkit to provide a structure for tech experts to use to deepen their adherence to ethical principles while developing tech tools. Check it out here.

These are the books currently on my shelf on this topic:

Eubanks, V. (2018). Automating inequality: How high tech tools profile, punish and police the poor. New York: St. Martin’s Press. Review here.

Lane, J. (2019). The digital street. New York: Oxford University Press. Review here.

Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press. Review here.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books. Review here – scroll down where her TED talk is included.

Also, I’ve collected numerous recent articles about bias, “isms” and ethics concerns regarding algorithmic transparency/bias as follows:

Behind every robot is a human (2019)

The new digital divide is between people who opt out of algorithms and those who don’t (2019)

Collection of new articles from the Brookings Institution regarding AI and the future (2019)

Artificial intelligence is ripe for abuse, tech researcher warns: A fascist’s dream (2019)

Algorithmic Accountability Act (2019)

Amazon Alexa launches its first HIPAA compliant medical unit (2019)

Facial recognition is big tech’s latest toxic gateway app (2019)

That mental health app might share your data without telling you (2019)

Europe is making AI rules now to avoid a new tech crisis (2019)

AI’s white guy problem isn’t going away (2019)

Europe’s silver bullet in global AI battle: Ethics (2019)

A case for critical public interest technologists (2019)

Ethics alone can’t fix big tech (2019)

Government needs an “ethical framework” to tackle emerging technology (2019)

Tech with a social conscience and why you should care (2019)

Trading privacy for security is another tax on the poor (2019)

Congress wants to protect you from biased algorithms, deep fakes and other bad AI (2019)

AI must confront its missed opportunities to achieve social good (2019)

AI systems should be accountable, explainable and unbiased says EU (2019)

One month, 500 thousand face scans: How China is using AI to profile a minority (2019)

How recommendation algorithms run the world (2019)

Facial recognition is the plutonium of AI (2019)

Facial recognition is accurate if you’re a white guy (2018)

Facial recognition software is biased towards white men, researcher finds (2018)

Qualities of Good Questions – An Essential Futures Frame

Just loved this list of qualities of good questions from Kelly (2016). Good questions are the key to being ready for new futures and, ultimately, when executed well, the most human of our strengths. I’ll post a fuller review of the book (which I liked very much!) later, but until then, here’s one from the web. Consider these and add more! Thinking about the question “what are the most important things for social work to do to be ready for a dynamic, unpredictable and turbulent future?” – I think part of the answer is challenging ourselves to ask better, deeper, more disruptive questions with courage and creativity!

“A good question is like the one Albert Einstein asked himself as a small boy ‘what would you see if you were traveling on a beam of light?’ That question launched the theory of relativity (E=MC2) and the atomic age.

  • A good question is not concerned with a correct answer.
  • A good question cannot be answered immediately.
  • A good question challenges existing answers.
  • A good question is one you badly want answered once you hear it, but had no inkling you could before it was asked.
  • A good question creates new territory of thinking.
  • A good question reframes its own answers.
  • A good question is the seed of innovation in science, technology, art, politics and business.
  • A good question is a probe – a ‘what if’ scenario.
  • A good question skirts on the edge of what is known and not known, neither silly nor obvious.
  • A good question cannot be predicted.
  • A good question is one that generates many other questions.
  • A good question may be the last job a machine will ever learn to do.
  • A good question is what humans are for (pp. 288-289).”

Kelly, Kevin (2016). The inevitable: Understanding the 12 technological forces that will shape our future. New York: Penguin Books.

Afrofuturism – Essential Resources for Social Work and Beyond

(Image by John Jennings.)

Thought I would take the opportunity to pull various resources I’ve previously linked to on this blog on the topic of Afrofuturism together to amplify how important, creative and relevant this movement is to futures work. I would suggest that social work could benefit enormously by exploring how these emerging scholarly, artistic and literary resources might enhance social work education. Let’s imagine new ways of learning about culture, history, voices, power and what is possible for the future.

Afrofuturism – A movement in literature, music, art, etc., featuring futuristic or science fiction themes which incorporate elements of black history and culture. (Oxford Dictionaries)

Afrofuturism (Wikipedia)

Afrofuturism (Oxford Bibliographies – includes more academic citations) 2017

What the heck is Afrofuturism? (2018)

Octavia’s Brood

St. Louis Afrofuturism (2019)

A beginning guide to Afrofuturism: 9 titles to watch and read (2019)

Afrofuturism – A language of rebellion (2018)

Afrofuturism course overviews from Kalamazoo College, University of California Riverside, Duke University, and University of Chicago

This American Life episode exploring Afrofuturism (2017). (Thanks Dr. Felicia Murray!).

Updated! Annotated Futures Bibliography – A Never Ending Project!

Added a number of new items! Learn, learn, learn!!! Lots of amazing things to consider as you explore the literature about the future.

I’ve gathered many of the resources I’ve been learning with together – but this annotated bibliography will never be done and will always be in a state of revision! Have fun!!

Reflections and Takeaways from the book “The Future of the Professions” by Richard and Daniel Susskind

This is a timely and important book. Every professional (everyone else, too) knows and feels that change is accelerating all around us. I’ll share a couple of my favorite passages from the book and offer a few of my favorite takeaways. As a dedicated social work professional for more than 25 years now, I was truly transfixed by this scholarly book. It challenged and stretched my thinking, and I found myself alternately cheering and worrying in different parts of the volume. One cannot read the book without coming away with a truly expanded and clear sense of the reality that as the world changes, so will (or, more to the point, so are) the professions. How we change will, at least in part, be something we participate in – hopefully. And that is the great challenge of the book: are we taking stock of the changes in a way that affords us the chance to insert ourselves into the process of change?

Before they jump into the future, however, the Susskinds do an admirable job of grounding the reader in the fundamentals of what being a professional is all about, its history, and a brief round-up of the sociology of the professions. This review contains a couple of excerpts I considered so rich and valuable that I have included them directly.

“Our main claim is that we are on the brink of a period of fundamental and irreversible change in the way that the expertise of these specialists is made available in society. Technology will be the main driver of this change. And, in the long run, we will neither need, nor want professionals to work in the way that they did in the twentieth century and before” (p. 1).

“In what we term a ‘print-based industrial society’, the professions have played a central role in the sharing of expertise. They have been the main channel through which individuals and organizations have gained access to certain kinds of knowledge and expertise. However, in a ‘technology-based internet society’, we predict that increasingly capable machines, operating on their own or with non-specialist users, will take on many of the tasks that have been the historic preserve of the professions. We anticipate an ‘incremental transformation’ in the way we produce and distribute expertise in society. This will lead eventually to a dismantling of the traditional professions. For the current recipients and beneficiaries of the work of the professions, we bring good tidings – of a world in which expertise is more accessible and affordable than ever before. For professional providers, although our thesis may seem threatening, we anticipate that a range of new opportunities will emerge. These are our hopes. But we also recognize that the new systems for sharing expertise could be misused, and we are troubled by this possibility. In any event, increasingly capable systems will bring transformations to professional work that will resemble the impact of industrialization on traditional craftsmanship” (p. 2).

So what is a profession? These authors crafted a definition based on the intersection of numerous dynamics:

“…members of today’s professions, to varying degrees, share four overlapping similarities: (1) they have specialist knowledge; (2) their admission depends on credentials; (3) their activities are regulated; and (4) they are bound by a common sense of values” (p. 15).

Susskind and Susskind suggest that a “grand bargain” is at the center of understanding the relationship between professionals and society. They quote the well-known educational theorist and writer Donald Schön (1987), who describes this bargain as:

“In return for access to their extraordinary knowledge in matters of great human importance, society has granted them a mandate for social control in their fields of specialization, a high degree of autonomy in their practice, and a license to determine who shall assume the mantle of professional authority” (p. 7).

The Susskinds revised this idea to their own 21st century iteration of this arrangement as follows:

“In acknowledgement of and in return for their expertise, experience and judgement, which they are expected to apply in delivering affordable, accessible and up-to-date, reassuring and reliable services, and on the understanding that they will curate and update their knowledge and methods, train their members, set and enforce standards for the quality of their work, and that they will only admit appropriately qualified individuals into their ranks, and that they will always act honestly, in good faith, putting the interests of clients ahead of their own, we (society) place our trust in the professions in granting them exclusivity over a wide range of socially significant services and activities, by paying them a fair wage, by conferring upon them independence, autonomy, rights of self-determination, and by according them respect and status” (p. 22).

They include a terrific section on the influence of Karl Marx – particularly relevant to the profession of social work. As capitalism increasingly shapes the economic systems in which professions practice – fewer individuals can survive as professionals outside of organizations, and pressures mount to produce revenue and survive in increasingly competitive economic spaces – this “grand bargain” has the potential to be compromised.

Rounding out their analysis of the social context of the professions, the Susskinds suggest that professions themselves are resistant to change AND that there has historically been little in the way of any alternative to our current way of organizing and deploying professional expertise.

They suggest that there are four fundamental questions for 21st century professions:

  1. “Might there be entirely new ways of organizing professional work, ways that are more affordable, more accessible, and perhaps more conducive to an increase in quality than the traditional approach?”
  2. Even if we concede, at least for now, that human beings are indispensable in professional work, does it follow that all the work that our professionals currently do can only be undertaken by licensed experts?
  3. Bluntly, to what degree do we actually trust professionals to admit that their services could be delivered differently, or that some of their work could responsibly be passed along to non-professionals?
  4. Is the grand bargain actually working? Are our professionals fit for purpose? Are they serving our societies well?” (p. 32)

The book then proceeds to deliver an analysis of six major ways in which the authors (and a great deal of literature) suggest that the professions are not working and are falling short of the grand bargain. The next section of the book then dives into reviews of eight different professions – health, education, divinity, law, journalism, management consulting, tax and audit, and architecture – describing and giving examples of the ways in which these professions are being stretched, expanded, transformed and beginning to intersect with artificial intelligence and/or other models of deploying expertise. Patterns are discussed across these professions’ challenges and experiences, reflecting the simultaneous evolution of each group, the increasing amounts of information available through technology (and thus the increasing demystification of professional activities), heightened scrutiny of and expectations toward the professions, and more.

The middle section of the book is dedicated to theoretical analysis of information and technology itself. The history of how information is shared is deeply related (as noted previously) to the emergence and evolution of the professions themselves, and various aspects of this are analyzed in future scenarios. Shifting to the future of production and distribution of knowledge, the Susskinds offer various ideas about how the professions may seek to sustain themselves in an increasingly complex practice ecosystem, and touch on some of the benefits of societies that are more “knowledge democracies” than “knowledge controlled.” That said, they are also attentive to the risks as well as the benefits of this shift – and explore each in great detail.

The last section of the book dives into the implications for the professionals themselves – focused on issues of trust and anxiety.

The end of the book suggests a picture for the professions that is not altogether dire, though certainly uncertain and rapidly evolving. They say:

“We argue that the professions will undergo two parallel sets of changes. The first will be dominated by automation. Traditional ways of working will be streamlined and optimized through the application of technology. The second will be dominated by innovation. Increasingly capable systems will transform the work of professionals, giving birth to new ways of sharing practical expertise. In the long run, this second future will prevail, and our professions will be dismantled incrementally” (p. 271).

“We found that technology and the internet are not just improving old ways of working; they are also enabling us to bring about fundamental change. They are providing new ways to make practical expertise far more widely available. And so, what is coming over the horizon are not just better ways of handling the work within the current remit of the professions, but systems that are greatly extending our capacity to sort out problems that arise from insufficient access to practical expertise” (p. 270).

In this environment, clear questions will remain about the role of humans in an AI-rich (if not AI-dominated) environment. Clearly, they say, AI will reach a point of being able to solve problems more accurately, with greater speed, and with more accessibility than our human bevy of professionals – so what then will be the role of humans? The Susskinds suggest that a great deal of work remains in sorting out the acceptable moral limits of what should stay in the realm of human responsibility and work. This, they go on, is the work that should be addressed now. Additionally, they describe real fears about future “technological unemployment” – simply put, that new technologies will displace current workers (though they suggest this may be a multi-decade rather than an overnight phenomenon). To begin to think this through, they contend there are three basic questions that will dictate the progression of decisions in this area:

  1. “What is the new quantity of tasks that have to be carried out?
  2. What is the nature of these tasks?
  3. Who has the advantage in carrying out these tasks?” (pp. 287-288).

All that said, the Susskinds suggest that even as certain professions may wane, others may indeed emerge – so the longstanding framework whereby groups of people have to reskill from one era to another may apply.

The book ends with the authors’ suggestions that “how we use technology in the professions, is very much in our own hands” (p. 304).

They go on: “It is not simply that we can shape our own future; more than this, we believe that we ought to, from a moral point of view. Two major moral questions arose in this book. The first is whether there are any likely uses of technology – by the professions or by those who replace them – that we regard as morally unacceptable. Should we seek to impose moral constraints on the march of technology across the professions (for example, whether the decision to turn off a life-support system should ever be handed over to a machine, no matter how high-performing it may be)? We call for public debate on the moral issues arising from models for the production and distribution of practical expertise that do not directly involve professionals or para-professionals. And we ask that this debate be held sooner rather than later, before our machines become much more capable.

The second moral question is this: who should own and control practical expertise in a technology-based internet society? Although this question belongs to the field of political philosophy, it also raises intensely practical issues. The future of the professions rests largely on the answer we prefer. In print-based industrial societies, the professions generally own and control practical expertise, a state of affairs that is supported by the grand bargain. But if we imagine a future in which much practical expertise can be made available online, it is less obvious that the professions, or indeed anyone, should be entitled to act as its gatekeepers?” (p. 304).

“Beyond the professions, there will lie a fork in the road, with two possible routes stretching out. One leads to a society in which practical expertise is shared as an online resource, freely available and maintained in a collaborative spirit. The other route leads to a society in which this knowledge and experience may be available online, but is owned and controlled by providers, so that recipients will generally pay for access to this resource and our collective practical expertise is enclosed and traded, most likely by new gatekeepers. The first route leads us to a type of commons where our collective knowledge and expertise, in so far as is feasible, is nurtured and shared without commercial gain, while the second takes us to an online marketplace in which practical expertise is invariably bought and sold. From behind the veil of ignorance, which route would leaders take?” (p. 307).

After reading the book – I’m especially motivated to challenge my own profession to actively engage with doing the self-reflection, the foresight, the requisite imagining and scanning to understand and position ourselves in ways that maximize our impact while protecting our values. The greatest challenge will be to balance this self-reflection with a need to avoid self-protectionism. We will have to be smart and brave as we endeavor to navigate an uncertain future for ourselves and the often most vulnerable to whom we are so dedicated.

Susskind, R. & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford, UK: Oxford University Press.