Highlights/Download from All Tech is Human Gathering in Seattle, WA – May 18, 2019

This event was billed as an “ethical tech summit” in downtown Seattle. I was excited to participate – though was surprised that it was less “hackathon” and more “thinkathon!” Lots of great community and learning in play.

All Tech is Human’s welcome goes like this:

“You are part of something special happening today. We individually understand the promise and peril of technology, yet collectively struggle to better align its development and implementation with our aspirations as a society. The people in this room are actively working to find a better way. YOU are part of the solution! All Tech is Human aims to co-create a more thoughtful future towards technology. We say co-create for good reason, as the future depends on a more inclusive process that taps into a diversity of knowledge that can better inform the politics of innovation, including the ever-changing ecosystem of technology product design and development. The dirty truth is that there is no magic bullet for ‘fixing tech.’ Instead, perpetual debate is as important as it is inevitable. Everyone who is impacted by technology should be heard loud and clear as we together explore how we might move forward and create a better tomorrow. Let’s turn up the volume.” – David Ryan Polgar

The day started with an overview of the “challenge” of finding our way in a complex new world of technology. Rob Girling and David Ryan Polgar opened the day with remarks and observations about how “tech has altered the human condition” in ways that are not likely to roll back any time soon. The challenge, as it was laid out, is how to insert more thoughtfulness and humanity into the present and future trajectory of how technology occupies space in the world – and how people (not corporations) can best drive it.

There was lots of discussion about the tensions between the roles of computer engineers in tech (Can I?), the ethicists (Should I?) and the legal experts/lawyers (Must I? Can’t I?). There was also discussion of the tension between tech “solutionism” (for every tech problem, there is another, better tech solution) and government “solutionism,” in which elected and/or government officials declare some aspect of technology out of control and bring out new regulations to try to rein in the “problem.” This led to additional conversation about the politics of tech – and a rationale for how solutions require a “broad, inclusive and multidisciplinary” approach. Finally, there was discussion of how citizens can and should have a role in interacting with those designing, improving and regulating tech so that it truly works for and with people.

The rest of my download will be an assortment of interesting/noteworthy things I learned that I just want to keep track of!

Di Dang, a Design Advocate at Google (@dqpdang) gave the best brief overview of “machine learning” I’ve ever heard: “Computers that can evolve to see patterns without being programmed to do so.” She works in a research group within Google that seeks to use human-centered design to make AI work better for people. Trust was discussed. I worked hard at keeping an open mind, and was aware of how hard it can be not to be cynical about the idea of social good and big tech…but I was interested in what she had to say. Here’s their research group. I have some more exploring to do. I’m guarded, but willing to be teachable. I remain worried about how this will all impact the most vulnerable, and it will be hard to move me from that position.
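
Her definition stuck with me, so here’s a toy sketch (my own, not from her talk) of what “seeing patterns without being programmed to do so” means in practice. Nowhere below do we write the rule that separates the two groups – the model infers it from a handful of labeled examples. The data and labels are invented for the demo.

```python
# Toy illustration of "machine learning": no explicit rule like
# "if x > 5 then label = 1" is ever written -- the model infers it
# from labeled examples. All data here is invented for the demo.
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: single-feature inputs and their classes.
X = [[1], [2], [3], [6], [7], [8]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "learning" step: find the pattern in the data

print(model.predict([[4], [9]]))  # [0 1] -- the inferred rule generalizes
```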

Reid Blackman, who is a founder of a group called Virtue, is a tech ethicist. He sits on a committee for “methods to guide ethical research and design” for artificial intelligence-related technologies. It is heartening to hear folks diving deep on the issue of defining ethics for a new world – but concerning to see what a political struggle it is for these frameworks to take root. You can read more about this work here.

He referenced the Institute for the Future’s Ethical OS model as a tool useful for wrapping our brains around this whole area of practice.

Delaney Ruston is a physician and filmmaker who made a film called “Screenagers” about raising kids in a screen-filled tech world. I appreciated the degree to which she’s trying to calm/educate/support worried/frantic parents who feel like they are losing their kids to technology and screens. I haven’t seen the film so can’t comment on it at this point – but I’d like to check it out and may report back later. This is a slippery issue, isn’t it? There is a lot of chatter that automatically takes a kind of “anti-tech” tone for kids across the board – and I don’t know if that is always helpful. I prefer something a little more nuanced – recognizing, as some are saying, that there are lots of kinds of screen time. From what I heard, she seems to embrace this nuance…but I need to investigate a little deeper before recommending.

There was an excellent session on issues of tech and government/opportunities and challenges. Shankar Narayan, the Technology and Liberty Project Director at the ACLU of Washington, gave an overview of his efforts to protect civil liberties in a high-tech world. Here is an overview of his work and resources on tech/liberty, and here is a brief look at recent media coverage of the intense activity around tech and civil liberties in the last couple of years.

From him I learned about something called “affective analysis” being proposed/used, which is essentially “machine learning that interprets your faceprint.” The phrase made everyone shudder…and the ACLU is actively working against it, along with facial recognition software. (It was noted, and applauded, that San Francisco is the first major US city to ban use of this technology.) This particular speaker was the most focused on clearly identifying and challenging inequity and racism in the tech sector, the differential impacts (lethal at times) of tech in vulnerable communities, and how all of this bodes poorly for the future of democratic societies.

My favorite of Mr. Narayan’s quotes: “It’s a false dichotomy to say that tech and social good can’t co-exist. We are just doing a particularly bad job of getting there.”

Steve Schwartz, Director of Public Affairs for Tableau Software and Tableau Foundation, spoke about their efforts to help government and business see, understand and use their data more effectively. The software has a free version that seems to be popular among many in the social sector. I haven’t used it yet, but based on the talk – I’ll definitely explore. You can learn more here. I do think we can and must do better when it comes to understanding and using data to tell the stories we are trying to tell in social work. Those of you who are doing more with data analytics, infographics and the like are my heroes. In a visually competitive world – stories with images are powerful. This COULD be a tool to help us.

Another speaker was Amie Thao, who is a Civic Designer for the City of Seattle. Her work involves design-based information and data analytics to tackle civic challenges and advance mayoral priorities including racial equity, affordability, and youth economic opportunity. You can see a little more about her work here. I was super intrigued by what this job is like…she’s the only person in city government who brings her unique combination of tech and civics expertise.

Because I was in this series of sessions, I couldn’t attend another set of sessions that were held simultaneously on “designing tech for inclusion and accessibility.” I’ll list the speakers here so you can explore along with me!

Liz Gerber at Delta Lab, Design for America (@elizgerber) – speaking about human-centered design work.

Alexandra Lee (@leejayeun), who spoke about her work at a place called the Civic Design Lab in Oakland, CA. She shared her efforts to apply design thinking, a racial equity lens, and systems thinking to solve civic resiliency challenges in urban environments.

Anna Zivarts, Rooted in Rights (@annabikes) who spoke about disability/accessibility issues in tech.

Later in the day, I attended sessions about ‘Tech for Good: the Rise of Public Interest Technology.’ Heard from Renee Farris (@farrisra) who works at the Silicon Valley Community Foundation (one of the largest philanthropic institutions in the U.S.). Interesting to hear about their work, and the creative approaches they are using.

Also heard from George Aye (@GeorgeAye), co-founder of the Greater Good Studio in Chicago. They use design principles to tackle community challenges of all kinds. His talk was terrific. His philosophy was clearly stated: when considering community and/or organizational change, “people adopt the change they are part of making.” He shared that there are three principles of good design: good design honors reality, good design creates ownership and good design builds power. He said designers should study anthropology, social work and organizing as much as traditional design. George emphasized the need to engage those most impacted by the problems we are trying to solve – and who generally have the least amount of power in a traditional sense.

Finally, the last particularly powerful presentation I attended that I wanted to include here was from Yana Calou (@YanaCalou) at Coworker.org, an organizing group that specializes in work with the tech sector. Her presentation was gripping, discussing the gradual “awakening” of tech sector workers to their rights and their need to communicate, organize and work together for a more equitable and transparent workplace. I had not been aware of all that has been happening in this sector, but I’ll list a few articles here that outline much of what she spoke about. We all need to be watching this space closely – the workers in tech are revealing some serious concerns that should cause us all to pay attention. Primary issues are sexism in the workplace/sector, loss of worker control over their own work product, and loss of worker autonomy/privacy.

Employee privacy in the US is at stake as corporate surveillance technology monitors workers’ every move

Google employees are launching a social media blitz to pressure tech giants on workplace harassment issues

Workers’ rights? Bosses don’t care – soon they’ll only need robots 

The Year Tech Workers Realized They Were Workers

How tech workers are fueling a new employee activism movement

The long history behind the Google Walkout

This event was held at a design firm that does work at the intersection of equity/sustainability and community – Artefact.

This is truly only a fraction of all that was discussed, but gives you a flavor of the diverse and complex viewpoints presented. I do think there is a lot of room for improvement in how diversity/equity is actualized in these kinds of spaces. While there was some diversity present in the crowd – most folks who were there would agree there is a long way to go.

I met a few folks from venture capital firms, as well as a number of other wonderful people who gathered here because they are curious, unsettled and determined to figure out how tech can do a better job of contributing to a positive future for all. I met a fellow pursuing his Ph.D. in nursing whose dissertation is on how VR can help with the healing process. I’m glad I went. I think we have some things to learn about bringing this “civic design” sensibility into our social work spaces and activities. We are strong in many elements of community engagement/organizing, but this additional layer of design frameworks can offer a lot of new energy/possibilities.

Here are a couple of additional sites/resources I include just for perusing value:

The Bridge (http://thebridgework.com) community connecting leaders in tech, innovation, policy, and politics.

Virtual World Society (http://virtualworldsociety.org)

Also heard this book mentioned a couple of times – The Fuzzy and the Techie. I haven’t read it, but it sounds relevant/interesting! https://www.chronicle.com/article/What-Im-Reading-The-Fuzzy/241820

Recent Ideas from Twitter – Social Work Futures – May 14, 2019

The NHS and Artificial Intelligence

How will artificial intelligence transform the work of the UK’s National Health Service? Lots of implications for the future of all professional helpers – physical and behavioral health. This piece has some ideas.

Tech and Social Good

There is a lot of discourse about how tech contributes to a variety of social challenges – but what are examples of ways it could contribute to making things better in our world? Here’s a short article with contributors from around the world who are doing exactly that. It challenges us as social workers to imagine, track, and evaluate how these ideas might and might not work for us.

US vs. China in the Tech Race

Reading this article made me reflect more on the book “The Big Nine” by Amy Webb which I reviewed in this blog a couple of weeks ago. What is the US plan for how technology will strengthen our national capacity to succeed in new ways, strengthen democracy and promote economic well-being among our communities? This article suggests we don’t have much of one…particularly when contrasting with China’s multi-decade (even multi-generational) tech plan. It’s a good brief discussion of how, why and when (NOW) we need to advance our collective thinking about how to better prepare ourselves and our country for technology’s next chapter.

A 3D Printed Neighborhood?

Want to stretch your thinking about how to solve the affordable housing crisis in the US and beyond? What if we printed enough for all? Here’s a short piece that imagines that possibility.

How to Revive Your Belief in Democracy?

I have to admit…I wasn’t sure about this TED talk but it got to me. Eric Liu is a “civic evangelist” and is definitely on a mission to strengthen communities through connection and civic renewal. If we believe that a healthy democracy is at the heart of a future we want, this is an inspiring short talk.

How to Think Better About the Future

Here’s a great short piece that covers the fundamentals of futures thinking, foresight and the importance of learning to get comfortable with discomfort to prepare for whatever comes next.

What is the Anthropocene and Why Does It Matter?

In short, it refers to a new era in which humanity has impacted the earth’s storyline in an irreversible way. While it doesn’t suggest we are defeated, those who seek to name this “new” period in the earth’s life cycle alert us to the very real risks and dangers this new phase involves.

The EU’s New Guidelines on Ethical AI

New (and internationally noteworthy) guidelines are out. Debates and evolution will likely continue – but these are interesting and instructive.

Why social work belongs in the future – and some ideas about how to get there!

Over the last year, I’ve had a LOT of conversations with social workers and social work educators around the country (and beyond) about “the future,” and futures frameworks to guide/expand our thinking about what our future roles might be. In an effort to stimulate a discourse, I’ve put together a lot of posts on this website as a precursor to a book I’m writing on this topic (bounce around to follow the journey), as well as put together an annotated bibliography for social workers to learn about/consider how futures frameworks might enhance our practice. I built a game for social work educators, and have done a number of presentations to social workers nationally on features of futures thinking/practice, introducing how these models might increase our impact. On my sabbatical next year, I’m also excited about the chance to put a “social work futures” course together. I’m grateful that CSWE saw fit to explore this issue in the last few years as well with a special task force on the topic.

As much fun as it can be to learn about essential futures frameworks as a starting point, it is also important to focus in on WHERE social work is most urgently needed – the spaces where, in many respects, the future is being “decided,” “developed” and “deployed.” What does it mean that these evolutions are in play without us (and the values/skills we bring), and that we are not participating or contributing in a major way?

Here are some starting places where the future is being developed that may/may not (sadly often do not) include social work voices/presence. These are places where SOCIAL WORKERS BELONG!! We are learning that we may not always be invited…so sometimes we just have to invite ourselves and begin contributing. Given how “interdisciplinary” these sectors are, so far, folks I know who have been engaged have found these spaces to be welcoming of our ideas, methods, values and presence. So jump in – here’s some ideas!!

  • Tech for social good hackathons
  • Social enterprise and the role of the private sector in social good
  • Algorithmic transparency, justice and bias work as the evolution of social justice/anti-racism work*
  • Universal guaranteed income and the future of the economy/alternative economic models
  • Smart cities and democracy*
  • The future of work and how to transition vulnerable workers to it
  • Technology access as a human right*
  • Use of big data for social good* (including in policy-making and/or helping communities have access to interpreting/using big data for their own purposes)
  • Development, testing and/or evaluation of apps for mental health and/or other social determinants of health, family well-being, etc.*
  • Technology and health – including access to more equitable distribution of access to health resources, tech-related supports for disabilities, state of the art treatments, etc.*
  • Immigration/relocation issues – relevant to both international immigration/relocation as well as climate change related relocation
  • Disaster/emergency preparedness work
  • Use of technology for community organizing and the future of democracy*
  • Each and every practice area we work in is also on a path to its own “future” – for example, the future of child welfare practice, the future of mental health practice, the future of addictions practice, the future of interpersonal violence work, the future of aging practice, the future of homelessness work, the future of anti-racism practice and on and on and on. At the VERY least, each of us has an ethical responsibility to learn to track and engage in guiding how our issues are conceptualized, reinforced with best practice, aided by tech where possible, and improved.
  • Futures/foresight learning spaces – like the “foresight practitioner” training offered through the Institute for the Future where I’ve just become a research fellow. (There are other organizations offering similar training – but I’m most familiar with and respectful of this one…!)
  • AND THIS IS JUST A STARTING POINT!!!

*These topics are increasingly coalescing around a new area of practice called “public interest technology” which I’ve written about elsewhere on this blog.

That said, I want to give a shout out to a burgeoning group of social workers and social work educators/researchers who are active in these circles (for example, I’m putting together a separate blog post about social workers who develop apps for social change/social good). The folks currently doing social work in these spaces are our guides – but as a whole, I believe we need to do a lot to elevate, celebrate and study their work to grow both their impact and the number of those who will learn from and follow them. IF YOU ARE A SOCIAL WORKER OR SOCIAL WORK ACADEMIC WORKING OR DOING RESEARCH IN THIS SPACE – please get in touch. I’d love to highlight your work in what I’m gathering, add you to my growing database and “boost your signal” to others in our field!!

But I also want to suggest (supportively as well as with a critique) that these topics are seldom covered in a meaningful way in our social work curricula. We need to move more quickly to meet and create the future that we want to see. Our “gaze” needs to lift up to observe, imagine, challenge and move into new spaces, new opportunities with new allies and partners if we hope to have impact in the ways we envision. The world is changing quickly – are we ready?

Updates to the Never-Ending Futures Annotated Bibliography Project – May 2019 Edition!

I love to scan the literature for new information. It is a hobby, a passion and, fortunately, a useful pastime for a scholar! What never ceases to amaze me is the transdisciplinary nature of the futures literature. There is never a dull moment in every sense!

Some months ago – I shared a project I’ve been working on. As a tool for my own scholarship, I often organize my resources in an annotated bibliography and I use these regularly as I write/study to keep myself organized. Since my goal is not only to get some papers and books out focused on my passion for futures capacity building in social work, but also to build our collective capacity to be more “foresightful” together, I am pleased to share this resource with all of you.

I’ve added a number of new articles and books that have flown across my radar screen the past couple of months. As an aid to the reader – all new entries are included in light blue text for now!

Here’s a link to take you to the document! Enjoy and join the dialogue to build the future we want.

If you find this helpful – please drop me a line as we continue to build community and networks in social work futures and beyond.

Reflections and Take-Away from Amy Webb’s Book “The Big Nine”

“The future we all want to live in won’t just show up, fully formed. We need to be courageous. We must take responsibility for our actions.” – Amy Webb

I recently finished reading Amy Webb’s fine book “The big nine: How the tech titans and their thinking machines could warp humanity” (2019). New York: PublicAffairs/Hachette Book Group.

The book traces the history and evolution of the power of “the big nine” tech companies (six US-based, three in China), with a primary focus on the power and possibilities of artificial intelligence. It takes a deep look at the opportunities and possibilities (both positive and devastating) that AI brings now and into the future.

The Big Nine defies a “simple” framework (AI is good/AI is bad). Rather, it focuses on the idea that AI is almost incomprehensibly powerful and requires the responsible attention of individuals, communities and governments to assure that the highest ideals and possibilities are achieved and the greatest threats are reduced/eliminated (almost as one might think about the power/possibility of nuclear power – though the developmental trajectories have distinct differences).

From a social work perspective, the focus intersects with our own thinking/imagining of the “future” of social justice, human well-being and equity work. What is a future in which a few powerful actors (largely white, male, economically dominant and Western, in the US case) construct underlying structures and digital machinery that decides, sorts, and controls much of the workings of modern life? How might existing inequities be replicated and multiplied – or conversely, interrupted and resolved? These are essential concerns that social work would be well-advised to factor into the way we think about the future and our work in it. How will these mechanisms (re)arrange modern life (or how have they already begun to), who will continue to win and lose, and how will those trajectories play out according to the way social work thinks about ourselves and the work we aspire to do? Likely these will be a combination of dynamics we are professionally familiar with (poverty, structural violence, “isms” and the like) as well as new types or variations of oppression that we can only begin to predict and understand. My recent blog post on algorithmic transparency, bias and justice goes into some of these issues in more detail.

Our values, knowledge and skills regarding the importance and processes of engaging community voices, interrupting oppression, building more just and liberatory structures, and recognizing and addressing structural barriers to well-being could all be important as pressure builds to recognize the human rights issues associated with growth in tech that does not reflect the well-being of all. But we will need to be intentional about our learning curve to remain relevant in these complex new spaces. I found this book advanced my own thinking/understanding of how vast and complex discussions of “big tech” and AI can be – and yet largely understandable using our own frames of political economy, human rights and social work ethics, just in new spaces and new ways. Social workers belong in this conversation, and this blog remains a call to action and an invitation to continue dialogue about how we might best do that.

The book as a whole is readable, comprising three primary sections: 1) an overview and evolution of tech in the modern world (a formidable challenge for the non-tech reader, but she does a fine job of keeping it accessible); 2) a fascinating, inspiring and sobering deep presentation of three possible “futures” concerning AI – scenarios crafted with the intention of fully exploring various possibilities that exist for humanity based on decisions that are made (as Ms. Webb might say) while our ability to do so is still collectively within our grasp; and 3) a final section that lays out an action plan and analysis of what needs to be done to optimize all that AI has to offer, while simultaneously building a new set of global policy guardrails to protect us, in some respects, from ourselves and the worst of the risks that are increasingly apparent in the rapid evolution of these technologies. The purpose of this post is to share what I considered the most substantive part of the book: Ms. Webb’s suggestion that to succeed in the years ahead with the complexities (and risks) that AI introduces into our world, we need an international body comprised of tech leaders, “AI researchers, sociologists, economists, game theorists, futurists, political scientists” (p. 237) and government leaders – with members reflecting the “socioeconomic, gender, race, religious, political and sexual diversity” of the world (p. 237).

She calls this governing/regulatory body the Global Alliance on Intelligence Augmentation (GAIA). Its core aspirational purpose would be to collectively “facilitate and cooperate on shared AI initiatives and policies” (p. 237) and to affirm and create structures to consider, operationalize and protect AI as a public good. In essence, she suggests that these tools are rapidly becoming too powerful to be left merely to the devices of private, corporate and market forces.

Here is an excerpt that clarifies what I consider to be the most important elements of the effort she proposes – which is in itself a fascinating “thought experiment” about what might come. I hope we move towards this kind of global dialogue sooner rather than later – and I hope that we as social workers can find ourselves as helpful, informative, relevant change agents, social scientists, and supporters of human well-being in an increasingly complicated world.

“GAIA should be considered a framework of rights that balances individual liberties with the greater, global good. It would be better to establish a framework that’s strong on ideals but can be more flexible in interpretation as AI matures. Member organizations would have to demonstrate they are in compliance or face being removed from GAIA. Any framework should include the following principles:

  1. Humanity should always be at the center of AI’s development.
  2. AI systems should be safe and secure. We should be able to independently verify their safety and security.
  3. The Big Nine – including its investors, employees, and the governments it works within – must prioritize safety above speed. Any team working on an AI system – even those outside the Big Nine – must not cut corners in favor of speed. Safety must be demonstrated and discernable by independent observers.
  4. If an AI system causes harm, it should be able to report out what went wrong, and there should be a governance process in place to discuss and mitigate damage.
  5. AI should be explainable. Systems should carry something akin to a nutritional label, detailing the training data used, the processes used for learning, the real-world data being used in applications and the expected outcomes. For sensitive or proprietary systems, trusted third parties should be able to assess and verify an AI’s transparency.
  6. Everyone in the AI ecosystem – Big Nine employees, managers, leaders, board members; startups (entrepreneurs and accelerators); investors (venture capitalists, private equity firms, institutional investors, and individual shareholders); teachers and graduate students; and anyone else working in AI – must recognize that they are making ethical decisions all the time. They should be prepared to explain all of the decisions they’ve made during the development, testing and deployment processes.
  7. The Human Values Atlas* should be adhered to for all AI projects. Even narrow AI applications should demonstrate that the atlas has been incorporated.
  8. There should be a published, easy-to-find code of conduct governing all people who work on AI and its design, build and deployment. The code of conduct should also govern investors.
  9. All people should have the right to interrogate AI systems. What an AI’s true purpose is, what data it uses, how it reaches its conclusions, and who sees results should be made fully transparent in a standardized format.
  10. The terms of service for an AI application – or any service that uses AI – should be written in language plain enough that a third grader can comprehend it. It should be available in every language as soon as the application goes live.
  11. PDRs (personal data records) should be opt-in and developed using a standard format, they should be interoperable, and individual people should retain full ownership and permission rights. Should PDRs become heritable, individual people should be able to decide the permissions and uses of their data.
  12. PDRs should be decentralized as much as possible, ensuring that no one party has complete control. The technical group that designs our PDRs should include legal and nonlegal experts alike: whitehat (good) hackers, civil rights leaders, government agents, independent data fiduciaries, ethicists, and other professionals working outside of the Big Nine.
  13. To the extent possible, PDRs should be protected against enabling authoritarian regimes.
  14. There must be a system of public accountability and an easy method for people to receive answers to questions about their data and how it is mined, refined and used throughout AI systems.
  15. All data should be treated fairly and equally, regardless of nationality, race, religion, sexual identity, gender, political affiliations, or other unique beliefs,” (pp. 240-242).

*The idea of a “human values atlas” is presented earlier in the book as the formidable, complex but essential task of creating a living and shared communication/document about what is most centrally valued by humans across cultures and nationalities. This atlas would guide much of the future work in the AI space – without it, we are, as Ms. Webb suggests, ceding authority for these matters to potentially conflicting and hidden/opaque corporate forces. She discusses this in greater detail on pages 239-240 of the book.
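
To make principle 5’s “nutritional label” idea concrete, here is a minimal sketch of what such a label might look like as a simple data structure. This is my own illustration, not anything from the book: the class name, field names and example values are all invented, and the fields simply mirror the items Ms. Webb lists (training data, learning process, real-world application data, expected outcomes).

```python
# Hypothetical sketch of a "nutritional label" for an AI system (principle 5).
# The class and all example values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AINutritionLabel:
    system_name: str
    training_data: list[str]      # datasets the model was trained on
    learning_process: str         # how the model learned (processes used)
    deployment_data: str          # real-world data used in the application
    expected_outcomes: list[str]  # what the system is expected to produce

label = AINutritionLabel(
    system_name="example-risk-screener",
    training_data=["2015-2018 intake records (de-identified)"],
    learning_process="gradient-boosted trees, 5-fold cross-validation",
    deployment_data="live intake forms submitted by caseworkers",
    expected_outcomes=["a 0-100 priority score for human review"],
)
print(label)
```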

Here is a 15-minute interview with Ms. Webb on a recent PBS spot.

For the reader’s convenience, here are a couple of additional reviews of this book:

Technology Review

Fast Company

Venture Beat

Wired

Finally here is some information about recent and current U.S. federal activity on this issue:

Will Trump’s new artificial intelligence initiative make the U.S. the world leader in AI? (2019)

President Obama’s artificial intelligence, automation and the economy plan (2016)

Algorithmic Transparency, Bias and Justice

Algorithms are a huge part of modern life – so much so that we sometimes forget they have arrived. Indeed, they are largely “invisible” to everyday people, working behind the scenes to sort data and make decisions that reflect the opinions of a few algorithm designers. Sometimes these algorithms can be life changing/life saving – for example, when a cancer diagnosis can be made through a combination of machine learning and algorithms that scan hundreds of thousands of X-rays to detect the tiniest irregularity a human might miss. But other uses – like racially biased facial recognition software that might inaccurately identify someone as a criminal suspect – are much more concerning. Increasingly, the ideas of “algorithmic transparency,” “algorithmic racism/bias,” and “algorithmic justice” have come into more prevalent conversation among social justice circles.
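
For readers who want to see what “algorithmic bias” looks like at the most basic technical level, here is a minimal, hypothetical sketch of one common audit step: comparing a system’s false positive rate across two demographic groups. Everything here – the numbers, the groups, the function – is invented for illustration; a real audit would use actual predictions and outcomes and far more careful methods.

```python
# Toy bias audit: does the system wrongly flag one group more than another?
# All data below is invented for illustration.
def false_positive_rate(predictions, actuals):
    """Share of truly-negative cases (actual == 0) the system flagged (pred == 1)."""
    negatives = [(p, a) for p, a in zip(predictions, actuals) if a == 0]
    if not negatives:
        return 0.0
    return sum(p for p, _ in negatives) / len(negatives)

# 1 = flagged by the algorithm (or actually a true case), 0 = not.
group_a_preds, group_a_actuals = [1, 0, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1]
group_b_preds, group_b_actuals = [1, 1, 1, 0, 1, 1], [1, 0, 0, 0, 0, 1]

fpr_a = false_positive_rate(group_a_preds, group_a_actuals)  # 0.25
fpr_b = false_positive_rate(group_b_preds, group_b_actuals)  # 0.75
print(f"Group A false positive rate: {fpr_a:.2f}")
print(f"Group B false positive rate: {fpr_b:.2f}")
# The same system makes three times as many false accusations in Group B;
# this kind of gap is exactly what transparency advocates want surfaced.
```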

There is much learning and development going on with regard to this topic. Of all the “future facing” topics one might consider in terms of urgent need for attention in social work – in my estimation – this is one of the most important. As the rate of adoption of new technologies (most often emerging from the private sector) continues to accelerate, algorithms built without attention to ethics and bias are a frequent point of discussion among social justice advocates. What is the pathway forward, and how do we continue to increase social work practice and research attention in this area?

I would suggest that this is the most under-discussed ethical challenge of the future for the profession of social work. We need to dramatically increase the depth, range and focus of our ethical evolution to participate in and shape a future in which these technologies work for people and prevent harm and injustice. We should concern ourselves with identifying how and where algorithms are emerging and becoming active in our social work practice spaces (clinical and macro). Collectively – we are starting to develop a shared and critical literacy regarding these important and ubiquitous forces, and to champion the need for clear and explicit ethical guidelines/rules.

For those who are completely new to this topic, here’s a great primer.

While there are pockets of enthusiasm for dialogue about these developments in social work, we have a long way to go to assert where and how we can operate most ethically – and what that looks like given the changing dynamics at play.

Here’s a reading/resource list to get started – with great respect for the groundbreaking work of all who have been leaders in this space.

  • Dr. Desmond Patton is an Associate Professor of Social Work at Columbia University in New York City. I’ve previously listed his work on my blog but want to underscore the significant leadership he’s contributed within social work on this topic. Here’s a recent article he put together for Medium. He’s also the Principal Investigator of the SAFE Lab project at Columbia, a research initiative focused on examining the ways in which youth of color navigate violence on and offline.
  • Data for Black Lives is a national network of over 4,000 activists, organizers, and scientists using data science to create concrete and measurable change in the lives of Black people. For far too long, data has been weaponized against Black communities – from redlining to predictive policing, credit scoring and facial recognition. But we are charting out a new era, where data is a tool for profound social change. (From their website here!)
  • The Institute for the Future has developed an “Ethical OS” toolkit to provide a structure for tech experts to use to deepen their adherence to ethical principles while developing tech tools. Check it out here.

These are the books currently on my shelf on this topic:

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press. Review here.

Lane, J. (2019). The digital street. New York: Oxford University Press. Review here.

Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press. Review here.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books. Review here – scroll down to where her TED talk is included.

Also, I’ve collected numerous recent articles about bias, “isms” and ethics concerns regarding algorithmic transparency/bias as follows:

Behind every robot is a human (2019)

The new digital divide is between people who opt out of algorithms and those who don’t (2019)

Collection of new articles from the Brookings Institution regarding AI and the future (2019)

Artificial intelligence is ripe for abuse, tech researcher warns: A fascist’s dream (2019)

Algorithmic Accountability Act (2019)

Amazon Alexa launches its first HIPAA compliant medical unit (2019)

Facial recognition is big tech’s latest toxic gateway app (2019)

That mental health app might share your data without telling you (2019)

Europe is making AI rules now to avoid a new tech crisis (2019)

AI’s white guy problem isn’t going away (2019)

Europe’s silver bullet in global AI battle: Ethics (2019)

A case for critical public interest technologists (2019)

Ethics alone can’t fix big tech (2019)

Government needs an “ethical framework” to tackle emerging technology (2019)

Tech with a social conscience and why you should care (2019)

Trading privacy for security is another tax on the poor (2019)

Congress wants to protect you from biased algorithms, deep fakes and other bad AI (2019)

AI must confront its missed opportunities to achieve social good (2019)

AI systems should be accountable, explainable and unbiased says EU (2019)

One month, 500,000 face scans: How China is using AI to profile a minority (2019)

How recommendation algorithms run the world (2019)

Facial recognition is the plutonium of AI (2019)

Facial recognition is accurate if you’re a white guy (2018)

Facial recognition software is biased towards white men, researcher finds (2018)