Fairness and biases in predictive algorithms
- Cathy O'Neil: Weapons of Math Destruction
- ProPublica's reporting on algorithmic risk scores in sentencing ("Machine Bias")
- COMPAS: for-profit, proprietary software sold to court systems (mostly state courts) for scoring recidivism risk
  - Used for decisions about:
    - Parole
    - Bail
    - Sentencing
- ProPublica's analysis highlighted that, when evaluating such an algorithm, one has to choose among three competing definitions of fairness (pick two; you can't have all three; see the sketch just after this list)
- Myriad interesting social uses of algorithms, e.g.:
  - Police brutality
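
A minimal numeric sketch of the "pick two of three" point above (my own toy numbers, not from these notes): Chouldechova's confusion-matrix identity shows that if two groups have different base rates, then equalizing predictive value (a calibration-style criterion) and false-negative rates forces their false-positive rates apart.

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
# where p is the group's base rate, PPV the positive predictive value,
# and FNR the false-negative rate.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False-positive rate forced by the identity for a given p, PPV, FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Insist on equal PPV and equal FNR across two groups...
ppv, fnr = 0.7, 0.35

# ...whose underlying base rates differ: their FPRs are then forced apart
# (~12% vs ~28%), so at most two of the three criteria can hold at once.
for group, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    print(f"{group}: base rate {base_rate:.0%} -> implied FPR {implied_fpr(base_rate, ppv, fnr):.1%}")
```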
Cathy O'Neil's concept of WMDs (Weapons of Math Destruction):
- Models that are:
  - Opaque
  - Large-scale
  - Detrimental to society
- Often, the source of algorithmic bias is not the algorithm itself but the data fed into it. Predictive policing is a classic positive feedback loop: more patrols in a neighborhood produce more recorded crime there, which in turn draws more patrols (see the sketch below)
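
A toy simulation of that feedback loop (the numbers and the allocation rule are my own illustrative assumptions, not from these notes): two districts have identical true crime, but patrols follow recorded crime and recording follows patrols, so a one-incident difference in the historical data becomes a large, self-sustaining disparity.

```python
import random

random.seed(1)

TRUE_INCIDENTS = 50       # same true number of incidents per day in each district
DETECT_PER_PATROL = 0.02  # chance that one patrol unit records a given incident
TOTAL_PATROLS = 20

recorded = [1, 2]         # tiny initial imbalance in the historical records

for day in range(30):
    # The "algorithm": allocate patrols in proportion to recorded crime so far.
    share_0 = recorded[0] / sum(recorded)
    patrols = [round(TOTAL_PATROLS * share_0), TOTAL_PATROLS - round(TOTAL_PATROLS * share_0)]
    # Recorded crime grows with patrol presence, even though true crime is equal.
    for d in range(2):
        detect_prob = min(1.0, patrols[d] * DETECT_PER_PATROL)
        recorded[d] += sum(random.random() < detect_prob for _ in range(TRUE_INCIDENTS))

print("patrols on the last day:", patrols)
print("recorded crime totals:  ", recorded)
# Despite identical true crime, the district that started with one extra
# recorded incident ends up with roughly twice the patrols and records.
```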
- The human using the algorithm often adds another layer of bias: a sentencing judge may use a risk score as just one input, but how the judge weighs it is itself subject to human bias
- And as society grows more comfortable with algorithms, humans may put increasing weight on algorithmic outputs over time
- To scrutinize proprietary algorithms, people have obtained their inputs and outputs through data-transparency or public-records requests and then reverse engineered how the algorithm works (see the sketch below)
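
A hypothetical sketch of that reverse-engineering step, assuming a records request yielded a table of questionnaire answers plus the proprietary score each person received (the file name and column names below are invented): fit a simple, interpretable surrogate model to the black box's outputs and inspect what appears to drive them.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# One row per person: questionnaire answers plus the proprietary tool's score.
df = pd.read_csv("records_request.csv")                   # hypothetical file
features = ["age", "priors_count", "charge_degree", "employment_status"]  # hypothetical columns
X = pd.get_dummies(df[features], drop_first=True)         # encode categorical answers
y = df["risk_score"]                                      # the black box's output

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = LinearRegression().fit(X_train, y_train)

print("How closely a simple model mimics the black box (R^2):",
      round(surrogate.score(X_test, y_test), 2))
print("Inputs that appear to drive the score:")
for name, coef in sorted(zip(X.columns, surrogate.coef_), key=lambda t: -abs(t[1])):
    print(f"  {name}: {coef:+.2f}")
```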
- Recidivism algorithms can be based on questionnaires administered to prisoners
- There’s also a question of measuring the “success” of algorithms
- Algorithms are notoriously bad at context; they can only know what they’ve been fed
- Humans haven't agreed on what we ourselves want to optimize for:
  - Fairness
  - Redistribution
  - Safety
  - Making money
- Secret algorithms: even when the code is open, that might not be enough; the training data and inputs are an essential part of the equation
- The drivers for producing and releasing algorithms are often money, not a desire to make the world better
- And even when the motive is benevolent or benign, there are often failures of imagination about how people will misuse them
- China: "life score" (social credit) systems – maybe Heibo?
- For things like credit scores and recidivism prediction, it's important to consider whether the person affected has any means of redress
- Data can be used for good too:
  - HRDAG – Human Rights Data Analysis Group
  - Social media analysis for predictions of violence
- Solutions:
  - Transparency
  - Avenues of recourse
  - Human training
  - You can't simply outlaw input variables (proxies tend to carry the same information), but you could regulate against known biased outcomes (see the proxy-variable sketch at the end of these notes)
- Any real pushback will require lots of statisticians and data scientists pushing for social justice
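
A sketch of the proxy problem mentioned under "Solutions" (synthetic data, invented correlations): even when a protected attribute is banned as an input, an allowed variable that correlates with it, such as zip code, lets a model largely reconstruct it, which is why auditing outcomes for known biases matters more than banning individual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                                # protected attribute (never an input)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)   # proxy: agrees with group 80% of the time
income = rng.normal(50 + 10 * group, 15, n)                  # another mildly correlated feature

X = np.column_stack([zip_code, income])                      # only "allowed" variables
clf = LogisticRegression(max_iter=1000).fit(X, group)        # try to recover the banned attribute

print(f"Protected attribute recovered from allowed variables alone: {clf.score(X, group):.0%} accuracy")
```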