Article

22 Mar 2018

Author:
Erica Kochi, Business Insider

Commentary: Failure to address discrimination in machine learning can reinforce systemic bias, says UNICEF Head of Innovation

"Machines are making more and more decisions for us, but we need to teach them not to discriminate", 16 Mar 2018

...[A] subset of AI called machine learning...leverages the ability of machines to learn from vast quantities of data and use those lessons to make predictions. Machine learning (ML) is already enabling pathways to financial inclusion, citizen engagement, more affordable healthcare and many more vital systems and services...
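The "learn from data, then predict" pattern the author describes can be illustrated with a minimal sketch. The synthetic data, the lending framing and the use of scikit-learn below are assumptions added for illustration; they are not drawn from the commentary.

```python
# Minimal sketch of machine learning as described above: a model fits
# patterns in historical records, then applies them to new cases.
# All data here is synthetic and the scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historical records": two numeric features per person and a
# past outcome (1 = favourable, 0 = unfavourable).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning": the model extracts patterns from past data.
model = LogisticRegression().fit(X_train, y_train)

# "Prediction": those patterns are applied to people it has never seen.
print("Predicted outcomes for new cases:", model.predict(X_test[:5]))
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The point of the sketch is that the model's decisions are entirely a product of the historical data it was trained on, which is why biased data can translate directly into biased decisions.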

...[T]here is a more immediate and less visible risk when machines make decisions: the potential reinforcement of systemic bias and discrimination...As we empower machines to make critical decisions about who can access vital opportunities, we need to prevent discriminatory outcomes...[W]e need to design and use ML applications in a way that not only improves business efficiency but also promotes and protects human rights...Not only can discriminatory outcomes in machine learning undermine human rights, they can also lead to the erosion of public trust in the companies using the technology...
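One concrete way to look for the discriminatory outcomes the author warns about is to compare how often a model grants an opportunity to different groups. The sketch below uses a simple demographic-parity style check (sometimes framed as the "80% rule"); the group labels, decisions and threshold are illustrative assumptions, not taken from the commentary.

```python
# Hedged sketch of a basic outcome-disparity check: compare the rate at
# which a model approves members of each group. Data is hypothetical.
import numpy as np

def selection_rate(predictions: np.ndarray) -> float:
    """Fraction of people the model approves (predicts 1)."""
    return float(np.mean(predictions))

def disparate_impact_ratio(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [selection_rate(predictions[group == g]) for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical model decisions (1 = approved) and group membership (0/1).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Disparate impact ratio: {disparate_impact_ratio(preds, group):.2f}")
# A ratio well below roughly 0.8 is one warning sign that outcomes differ
# sharply by group and deserve closer scrutiny before deployment.
```

Checks like this do not prove or disprove discrimination on their own, but they make disparities visible early enough to investigate and correct them.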

...Events like a Google photo mechanism that mistakenly labeled an image of two black friends as gorillas, and predictive policing tools that have been shown to amplify racial bias, have received extensive and important media coverage...[U]sing ML to make decisions without taking adequate precautions to prevent discrimination is likely to have far-reaching, long-lasting and potentially irreversible consequences...