Autonomous weapon systems are the dark side of modern technology. Already, these systems decide - fully autonomously, without human intervention - when to shoot and kill. The Stop Killer Robots initiative recently published a documentary on this topic, and I would like to share it here:
“Immoral Code is a documentary that contemplates the impact of Killer Robots in an increasingly automated world - one where machines make decisions over who to kill or what to destroy.
Automated decisions are being introduced across all parts of society. From pre-programmed bias to data protection and privacy concerns, there are real limitations to these algorithms - especially when that same autonomy is applied to weapons systems.
Life and death decisions are not black and white, on or off, 1s and 0s. They’re complex, and difficult. Reducing these decisions down to automated processes raises many legal, ethical, technical, and security concerns.
The film examines whether there are situations where it’s morally and socially acceptable to take life, and importantly - would a computer know the difference?
Discover more at www.immoralcode.io”
Here’s the link to the video:
Especially in times of war, this makes me think. Weapon systems are an extreme example, but it already starts in far more subtle contexts: machine learning algorithms that discriminate against whole groups of people based on some arbitrary criterion, reducing them to stereotypes that we - as human beings - provide in the form of training material. Or even worse: people who hand over decisions to those systems and stop thinking for themselves. Spooky.
If you want to playfully think about some of the grey areas, check out these ‘absurd trolley problems’: