Tech for Good London at MozFest House
Mozilla Festival is an annual gathering of educators, activists, technologists, researchers, artists, and young people dedicated to creating a better, healthier internet. This year the festival is being hosted in London and there are still tickets available here: https://www.mozillafestival.org/en/tickets/
Tech for Good London is a meetup group that brings together hackers, coders, developers and designers with people who really understand a social problem and want to build digital solutions to social challenges. It is organised by Dama Sathianathan of Bethnal Green Ventures (BGV) and Ellie Hale of the Centre for the Acceleration of Social Technology (CAST).
I (@jamescurtis29) am a co-founder of the volunteer community Radical Engineers (among other things) who loves going to events and taking notes of things I find interesting. The following notes are summaries from the talks and workshop hosted on 21st and 22nd October 2019 at RSA, London.
AI for Good — Alice Piterova (@ImpactAlice)
Why AI for Good? — The decisions we make today will shape our future, so we should leverage AI to solve social problems. AI for Good is a tech company that creates intelligent, ethical products for the most vulnerable end users and underserved communities, and it makes this happen through successful partnerships and the exchange of deep domain expertise. The wider AI for Good community is already tackling some of the most pressing problems of today, such as climate change, domestic violence and access to mental health care.
Some examples of their products: rAInbow is a digital companion for women at risk of domestic abuse; SnehAI provides sexual and reproductive health information for adolescents; and FutureMakers is a programme of AI + creativity workshops to prepare young people for the jobs of the future.
They have recently launched a community, a forum for the public and non-profit sector to collaborate on AI for Good. Their community launch brought together more than 150 participants from AI practitioners to lawyers to humanitarians. They hope to use this to educate the next generation of companies.
AI for Good is looking to develop and grow their community. Check out www.aiforgood.co.uk for more information.
Issa a Trap — Alex Fefegha (i make stuff.)
Alex works for Comuzi, a South London design invention studio which works with inventive companies to explore, imagine and prototype future-forward products integrating positive human interaction and emerging technology. Alex is exploring how algorithms make decisions, and how we can understand those decisions.
An interesting example that Alex gave: Netflix uses AI to select which movies are presented to its users and how they are presented. As a result, if you watch films with actors of colour, Netflix’s recommendations will feature other films with actors of colour in the cast and highlight those actors in presenting the film to you — even if they actually play a minor role. This has caused some consternation.
Alex works on projects which explore how technology is having a social impact. For example, IF is a tech studio which explored the question ‘how do you design an explainable algorithm?’ IF and Comuzi worked together on MoodJar, a fictional mental health chatbot for children designed to make its actions explainable. The main takeaway from this project was that young people want certain boundaries respected if they are going to keep engaging positively with technology: they believe and trust that AI is objective, they believe AI should not be making decisions which will have a big impact on their lives, and they don’t want AI to try to develop a human-like relationship with them.
You can read more about Comuzi’s projects here: https://www.comuzi.xyz/
Panel discussion with Sebastien Krier (@sebkrier) and Eleanor O’Keefe (@ellondon)
Ok, summarising a panel discussion is too hard but there was this one tool mentioned that I thought was really cool and I have to share it with everyone — The Geena Davis AI tool to look for gender discrimination in film scripts. You can read more about it here: https://www.theguardian.com/film/2019/oct/09/geena-davis-institute-gender-media-disney-machine-learning-tool
Workshop on the potential fallibility of AI
Dama shared a fantastically useful list of the challenges of machine decision making:
- The lack of accountability — for example, in 2017 ProPublica petitioned a federal court in New York, challenging the municipality to expose and publish the source code of an algorithm it used to identify potential criminals, which was disproportionately targeting people of colour. The algorithm turned out to be heavily flawed and biased.
- Engineering culture — people are not trained to ask fundamental questions about the fairness of AI tools; disruption and speed are valued over critical examination. One proposed solution is to train the next generation of engineers on diversity and inclusion, which some modern tech colleges have begun to do, and more progressive steps have been taken in the VC world.
- Data privacy — machine learning requires large amounts of training data (which also carries environmental costs), and companies are incentivised to build systems which capture data using invasive techniques. A proposed solution is a ‘bill of data rights’ which would give users basic protections and control over how their data is processed.
- Bias and Fairness — Machine decision making often relies on models and frameworks shaped by existing structures in society which results in problematic or discriminatory outcomes. A solution to this could be dataset ‘nutrition labels’ which would drive higher data quality standards. This would highlight potential biases in datasets.
- Outcome-based discrimination — Even when steps have been taken to reduce bias in an intelligent system, that system can still make decisions that have a discriminatory effect on people. An example of this is that LGBT YouTube creators report being shadow-banned for publishing LGBT content.
- Behavioural Manipulation and Misinformation — Online platforms collect user data to power algorithms that rapidly generate, curate and recommend content, deepening our susceptibility to behavioural manipulation and to misleading, polarizing or inflammatory information. One example is deepfakes, which many fear could be used to disrupt the upcoming US election. Another is YouTube autoplay: people who fall asleep watching one kind of video may wake up watching content they never would have chosen.
- Computational propaganda and disinformation — Online platforms that rely on algorithms create opportunities for actors with varying agendas to exploit or “game” those systems for political and/or financial gain. Proposed solutions include investing in media literacy and discernment so people can assess information more critically, and improving laws around competition and ad transparency. An example occurred in South Asia around the last US election, when a news article circulating in Pakistan shared the ‘truth’ that Donald Trump was secretly from Pakistan.
- Security — AI also lowers the barrier to attack; an example of this is AI being used to carry out cyberattacks more effectively than humans can.
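The dataset ‘nutrition label’ idea from the list above can be sketched in a few lines of code. This is a toy illustration, not any real labelling standard: the field name, the example data and the under-representation threshold are all assumptions, chosen just to show how a label could surface group imbalance in a dataset before a model is trained on it.

```python
from collections import Counter

def nutrition_label(rows, sensitive_field):
    """Summarise how a sensitive attribute is represented in a dataset.

    Toy sketch of a dataset 'nutrition label': report each group's size
    and share, and flag heavily under-represented groups so potential
    bias is visible before any model training happens.
    """
    counts = Counter(row[sensitive_field] for row in rows)
    total = sum(counts.values())
    label = {}
    for group, n in counts.items():
        share = n / total
        label[group] = {
            "count": n,
            "share": round(share, 3),
            # Arbitrary threshold: flag groups holding less than half
            # of an even split across all observed groups.
            "under_represented": share < 0.5 / len(counts),
        }
    return label

# Hypothetical example: a small loan-decision training set.
rows = [
    {"gender": "female", "approved": True},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]
print(nutrition_label(rows, "gender"))
```

A real label would cover more than one attribute and document collection methods too, but even a summary this small makes a skewed dataset harder to ship unnoticed.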
Also, AI has become such a buzzword that there is now an AI toothbrush. This is not particularly relevant, but it is hilarious.
Hope that was useful! If you want to come to the next meetup, register for tickets here: https://www.meetup.com/techforgood/