Are algorithms class-blind?


In an earlier post I wrote about a supposedly new solution to the injustice of the American bail system: risk assessment. To briefly summarize: many people in the US are in jail because they can’t afford to post bail, and risk assessment would supposedly avoid class bias because it would be based on factors known to predict re-offending and failure to appear in court. I criticized this idea, because risk assessment introduces a new class bias when those factors include employment, housing, community support, and owning a car and a cell phone. Replacing judges’ biased discretion with a biased risk assessment tool does not solve the problem.

A recent article in the New York Times pointed out this problem, discussing how bail decisions use ‘little science’ and that ‘hidden biases against the poor and minorities can easily creep into the decision-making.’ For this and other reasons ‘many law enforcement groups and defense lawyers have supported the use of scientifically validated’ risk assessment tools. The news: ‘Now comes help in a distinctly modern form: an algorithm.’

Scepticism

There is a new risk assessment tool based on an algorithm. Interestingly, this new tool, already tested and rolled out in 21 jurisdictions, challenges the widespread belief that class and criminal behaviour are tightly related:

The Arnold assessment has been met with some skepticism because it does not take into account characteristics that judges and prosecutors normally consider relevant: the defendant’s employment status, community ties or history of drug and alcohol abuse.

Instead, after crunching data on one and a half million criminal cases, researchers found that fewer than 10 objective factors — basically age, the criminal record and previous failures to appear in court, with more recent offenses given greater weight — were the best predictors of a defendant’s behavior. Factoring in other considerations did not improve accuracy.
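To make this concrete, here is a minimal sketch (in Python) of what a points-based score over a handful of such factors might look like. The factor names, thresholds, and weights are my own illustrative assumptions; the article does not publish the Arnold Foundation’s actual formula.

```python
from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class Defendant:
    age: int
    prior_convictions: List[date]   # dates of prior convictions
    failures_to_appear: List[date]  # dates of prior failures to appear in court


def recency_weight(event: date, today: date) -> float:
    """More recent events count for more; the weight decays with age in years."""
    years_ago = (today - event).days / 365.25
    return 1.0 / (1.0 + years_ago)


def risk_score(d: Defendant, today: date) -> float:
    """Toy points-based score: youth, prior convictions and failures to appear
    raise the score. The weights (2.0, 1.5, 3.0) are purely illustrative."""
    score = 0.0
    if d.age < 23:
        score += 2.0
    score += 1.5 * sum(recency_weight(c, today) for c in d.prior_convictions)
    score += 3.0 * sum(recency_weight(f, today) for f in d.failures_to_appear)
    return score


# Example: a 21-year-old with one older conviction and one recent failure to appear
example = Defendant(21, [date(2012, 5, 1)], [date(2015, 1, 10)])
print(risk_score(example, today=date(2015, 6, 1)))
```

The point of the sketch is only that such a score is a sum of weighted counts: every choice of which counts to include, and how heavily to weight them, is made before the algorithm ever runs.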

The myth of neutral data

Is an algorithm a solid solution to biased decision making? Are algorithms (inherently) class-blind? I’m not so sure (as usual).

First of all, who decides which factors are included in the calculations and which are not? Such decisions are still moral judgements to be made by (prejudiced) humans. What if ethnic background turned out to predict re-offending? We do not and would not include it – not anymore – because we agree that it is racist and discriminatory. The same could be said of factors such as unemployment and homelessness; apparently, however, classism is seen as less of a problem. We cannot rely on algorithms to sort out such moral questions.

Second, the offender population on which assessment tools are built and tested – whether using algorithms or human analysis – is skewed, so class and racial biases creep in. Luckily, this new tool found no correlation of risks with unemployment or drug use, but even if it had, that would be no proof of a ‘real’ correlation. There is plenty of evidence that policing and prosecution are skewed towards certain groups, so we cannot know the ‘true’ correlations between individual variables and criminal behaviour. (A more advanced explanation of how algorithms are not neutral, ‘even if we had a mythical source of unbiased data,’ can be read here.)
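A tiny simulation illustrates the point. In the sketch below (Python, with invented numbers) two groups offend at exactly the same true rate, but one group is policed more intensively, so more of its offences are recorded. Any tool trained on the recorded data will ‘find’ a difference in risk that does not exist.

```python
import random

random.seed(0)

# Assumption for illustration: equal true offence rates, unequal detection.
TRUE_OFFENCE_RATE = 0.10
DETECTION = {"A": 0.2, "B": 0.4}  # probability that an offence is recorded


def observed_rate(group: str, n: int = 100_000) -> float:
    """Simulate n people and return the fraction with a *recorded* offence."""
    recorded = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENCE_RATE
        if offended and random.random() < DETECTION[group]:
            recorded += 1
    return recorded / n


print("recorded offence rate, group A:", observed_rate("A"))  # roughly 0.02
print("recorded offence rate, group B:", observed_rate("B"))  # roughly 0.04
# The data make group B look twice as 'risky', even though the true rates are equal.
```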

Discriminating algorithms

A few weeks later the New York Times ran an article titled ‘When Algorithms Discriminate.’ It is not about algorithms used for risk assessment in criminal justice, but the concerns it raises apply: ‘algorithms can reinforce human prejudices’ and

Even if they are not designed with the intent of discriminating against those groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination.

In other words, the factors that are produced by the algorithm are not at all ‘objective factors’, as the article on algorithms for bail decisions claims.

Algorithms can and do discriminate, and it has therefore been suggested that ‘the question of determining which kinds of biases we don’t want to tolerate is a policy one’. Relying on algorithms still requires moral judgements. ‘Crunching data on one and a half million criminal cases’ cannot correct decades if not centuries of injustice.

Image: Toledo 65 algorithm by jm_escalante on Flickr (this algorithm obviously does not predict crime)

What do you think?
