
Algorithms are a huge part of modern life, so much so that we sometimes forget they have arrived. They are largely "invisible" to everyday people, working behind the scenes to sort data and make decisions that reflect the judgments of a small number of algorithm designers. Sometimes these algorithms can be life-changing, even life-saving, as when a cancer diagnosis is made with the help of machine learning systems that can scan hundreds of thousands of x-rays to detect the tiniest irregularity a human might miss. But other uses, like racially biased facial recognition software that might inaccurately identify someone as a criminal suspect, are much more concerning. Increasingly, the ideas of "algorithmic transparency," "algorithmic racism/bias," and "algorithmic justice" have become prevalent in conversations among social justice circles.
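For readers who want to see what "algorithmic bias" looks like in practice, here's a minimal illustrative sketch, in Python with entirely made-up numbers, of the kind of disaggregated audit researchers in this space perform: a classifier can post a reasonable overall accuracy while failing one demographic group far more often than another.

```python
# Illustrative only: a toy "audit" showing how one overall accuracy number
# can hide large error-rate gaps between demographic groups.
# All data below is invented for demonstration purposes.

from collections import defaultdict

# Each record: (group, true_label, predicted_label)
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

# In aggregate the system looks acceptable...
overall = sum(correct.values()) / sum(total.values())
print(f"Overall accuracy: {overall:.0%}")

# ...but breaking results out by group reveals the disparity.
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%}")
```

On this toy data the system scores 75% overall, but that average conceals a perfect score for one group and a coin-flip for the other, which is exactly the pattern audits of commercial facial recognition systems have documented.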
There is much learning and development going on around this topic. Of all the "future facing" topics one might consider urgent for attention in social work, this is, in my estimation, one of the most important. As the rate of adoption of new technologies (most often emerging from the private sector) continues to accelerate, algorithms designed without attention to ethics and bias have become a frequent point of concern among social justice advocates. What is the pathway forward, and how do we continue to increase social work practice and research attention in this area?
I would suggest that this is the most under-discussed ethical challenge facing the future of the social work profession. We need to dramatically increase the depth, range, and focus of our ethical thinking so we can participate in and shape technologies that work for people and that prevent harm and injustice. We should concern ourselves with identifying how and where algorithms are emerging and becoming active in our social work practice spaces (clinical and macro). Collectively, we are starting to develop a shared and critical literacy regarding these important and ubiquitous forces, along with a recognition of the need for clear and explicit ethical guidelines.
For those who are completely new to this topic, here’s a great primer.
While there are pockets of enthusiasm for dialogue about these developments in social work, we have a long way to go in articulating where and how we can operate most ethically, and what that looks like given the changing dynamics at play.
Here’s a reading/resource list to get started – with great respect for the groundbreaking work of all who have been leaders in this space.
- Dr. Desmond Patton is an Associate Professor of Social Work at Columbia University in New York City. I’ve previously featured his work on my blog, but I want to underscore the significant leadership he has contributed within social work on this topic. Here’s a recent article he put together for Medium. He’s also the Principal Investigator of the Safe Lab project at Columbia, a research initiative focused on examining the ways in which youth of color navigate violence on and offline.
- Algorithmic Justice League (and the work of Dr. Joy Buolamwini). See her amazing TED talk outlining what algorithmic bias is all about.
- Data for Black Lives is a national network of over 4,000 activists, organizers, and scientists using data science to create concrete and measurable change in the lives of Black people. For far too long, data has been weaponized against Black communities – from redlining to predictive policing, credit scoring and facial recognition. But we are charting out a new era, where data is a tool for profound social change. (From their website here!)
- Great “playlist” of resources related to Gender, Race and Power in AI
- The Institute for the Future has developed an “Ethical OS” toolkit to provide a structure tech developers can use to deepen their adherence to ethical principles while building new tools. Check it out here.
These are the books currently on my shelf on this topic:
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press. Review here.
Lane, J. (2019). The digital street. New York: Oxford University Press. Review here.
Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press. Review here.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books. Review here – scroll down to where her TED talk is included.
Also, I’ve collected numerous recent articles about bias, “isms,” and ethics concerns regarding algorithmic transparency and bias:
Behind every robot is a human (2019)
The new digital divide is between people who opt out of algorithms and those who don’t (2019)
Collection of new articles from the Brookings Institution regarding AI and the future (2019)
Artificial intelligence is ripe for abuse, tech researcher warns: A fascist’s dream (2019)
Algorithmic Accountability Act (2019)
Amazon Alexa launches its first HIPAA compliant medical unit (2019)
Facial recognition is big tech’s latest toxic gateway app (2019)
That mental health app might share your data without telling you (2019)
Europe is making AI rules now to avoid a new tech crisis (2019)
AI’s white guy problem isn’t going away (2019)
Europe’s silver bullet in global AI battle: Ethics (2019)
A case for critical public interest technologists (2019)
Ethics alone can’t fix big tech (2019)
Government needs an “ethical framework” to tackle emerging technology (2019)
Tech with a social conscience and why you should care (2019)
Trading privacy for security is another tax on the poor (2019)
Congress wants to protect you from biased algorithms, deep fakes and other bad AI (2019)
AI must confront its missed opportunities to achieve social good (2019)
AI systems should be accountable, explainable and unbiased says EU (2019)
One month, 500,000 face scans: How China is using AI to profile a minority (2019)
How recommendation algorithms run the world (2019)
Facial recognition is the plutonium of AI (2019)
Facial recognition is accurate if you’re a white guy (2018)
Facial recognition software is biased towards white men, researcher finds (2018)