Tech for Good London at MozFest House

Ellie Hale introduces Tech for Good at MozFest House

Talks

AI for Good — Alice Piterova (@ImpactAlice)

Why AI for Good? The decisions we make today will shape our future, so we must leverage AI to solve social problems. The AI for Good community is already tackling some of today's most pressing problems, such as climate change, domestic violence and access to mental health care. AI for Good itself is a tech company that creates intelligent, ethical products for the most vulnerable end users and underserved communities, and it makes this happen through successful partnerships and the exchange of deep domain expertise.

Issa a Trap — Alex Fefegha (i make stuff.)

Alex works for Comuzi, a South London design invention studio that works with inventive companies to explore, imagine and prototype future-forward products integrating positive human interaction and emerging technology. His talk explored the problem of understanding how algorithms make decisions.

Panel discussion with Sebastien Krier (@sebkrier) and Eleanor O’Keefe (@ellondon)

OK, summarising a panel discussion is too hard, but one tool mentioned struck me as really cool and I have to share it with everyone: the Geena Davis Institute's machine-learning tool for spotting gender bias in film scripts. You can read more about it here: https://www.theguardian.com/film/2019/oct/09/geena-davis-institute-gender-media-disney-machine-learning-tool

Workshop on the potential fallibility of AI

Dama shared a fantastically useful list of the challenges of machine decision-making:

  • The lack of accountability — for example, in 2017 ProPublica took the municipality to federal court in NYC, challenging it to expose and publish the source code of the algorithms it used to assess potential criminality. The system turned out to be heavily flawed and biased against people of colour.
  • Engineering culture — people are not trained to ask fundamental questions about the fairness of AI tools; disruption and speed are valued over critical examination. Proposed solutions included training the next generation of engineers on diversity and inclusion, which some modern tech colleges have started to do, and more progressive steps are being taken in the VC world.
  • Data privacy — machine learning requires large amounts of training data, which carries environmental costs of its own, and companies are being incentivised to build systems that capture data using invasive techniques. A proposed solution is a 'bill of data rights' that would give users basic protections and control over how their data is processed.
  • Bias and fairness — machine decision-making often relies on models and frameworks shaped by existing structures in society, which can produce problematic or discriminatory outcomes. One proposed solution is dataset 'nutrition labels', which would drive higher data-quality standards by highlighting potential biases in datasets (see the sketch after this list).
  • Outcome-based discrimination — even when steps have been taken to reduce bias in an intelligent system, that system can still make decisions that have a discriminatory effect on people. An example is LGBT YouTube creators reporting that they are shadow-banned for publishing LGBT content.
  • Behavioural manipulation and misinformation — online platforms collect user data to power algorithms that rapidly generate, curate and recommend content, deepening our susceptibility to behavioural manipulation and to misleading, polarising or inflammatory information. One example is deepfakes: there are widespread fears that they could be used to disrupt the upcoming US election. Another is YouTube autoplay: people who fall asleep watching one kind of video may wake up watching content they would never have chosen.
  • Computational propaganda and disinformation — online platforms that rely on algorithms create opportunities for actors with varying agendas to exploit or 'game' those systems for political and/or financial gain. Proposed solutions include investing in media literacy and discernment so that people can assess information more keenly, and improving laws around competition and ad transparency. An example occurred in South Asia at the end of the last election, when a news article in Pakistan shared the 'truth' that Donald Trump was secretly from Pakistan.
  • Security — an example of this is AI being used to carry out cyberattacks more effectively than humans can.
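The dataset 'nutrition label' idea is easy to picture in code. Below is a minimal sketch of what such a label might compute for a tabular dataset: record count, missing values per field, and the distribution of a sensitive attribute. This is my own illustration of the concept, not a tool shown at the workshop, and the field names and records are hypothetical.

```python
# Minimal, hypothetical "dataset nutrition label": summarise completeness
# and the distribution of a sensitive attribute before training a model.
from collections import Counter

def nutrition_label(rows, sensitive_key):
    """Summarise a list of dict records: size, missing values per field,
    and how the sensitive attribute is distributed across the data."""
    n = len(rows)
    fields = sorted({key for row in rows for key in row})
    # Count empty or absent values for each field.
    missing = {f: sum(1 for row in rows if row.get(f) in (None, "")) for f in fields}
    # Count how often each value of the sensitive attribute appears.
    groups = Counter(row.get(sensitive_key, "unknown") for row in rows)

    print(f"Records: {n}")
    print("Missing values per field:")
    for f in fields:
        print(f"  {f}: {missing[f]} ({missing[f] / n:.0%})")
    print(f"Distribution of '{sensitive_key}':")
    for group, count in groups.most_common():
        print(f"  {group}: {count} ({count / n:.0%})")

# Hypothetical example data: a skew like this is exactly what a
# nutrition label is meant to surface before a model is trained.
records = [
    {"age": 34, "gender": "male", "outcome": "approved"},
    {"age": 29, "gender": "male", "outcome": "approved"},
    {"age": 41, "gender": "female", "outcome": None},
    {"age": None, "gender": "male", "outcome": "rejected"},
]
nutrition_label(records, sensitive_key="gender")
```

Even a label this simple makes the problem visible: a model trained on these records would see three times as many male examples as female ones, and a reader can judge that before any training happens.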

James Curtis

Co-founder of Radical Engineers. On a mission to create a world where passion can inform ambition. Interested in how technology can expand creative industries.