

Company Response

9 Jul 2018

Author:
HireVue

HireVue response re hiring & algorithmic bias

The HireVue team has always been deeply committed to an ethical, rigorous, and ongoing process of testing for and preventing bias in HireVue Assessments models (or algorithms). We are aware that whenever AI algorithms are created, there is a potential for bias to be inherited from humans. This is a vitally important issue, and technology vendors must meticulously work to prevent and test for bias before an AI-driven technology is ever put to use...

When HireVue creates an assessment model or algorithm, a primary focus of the development and testing process is testing for bias in input data that will be used during development of the algorithm or model. The HireVue team carefully tests for potential bias against specific groups before, during, and after the development of a model. No model is deployed until such testing has been done and any factors contributing to bias have been removed. Testing continues to be performed as part of an ongoing process of prevention. HireVue data scientists have created an industry-leading process in this emerging area of AI-driven technology, and have presented that process and other best practices to their colleagues at international conferences on artificial intelligence.
