As computers become more “intelligent,” some data scientists have been puzzled to find their algorithms behaving in sexist or racist ways. This shouldn’t be surprising: these algorithms are trained on social data that reflect society’s biases, and they amplify those biases in the course of optimizing their performance metrics.
The good news is that many computer scientists care deeply about the fairness of ML algorithms and have developed methods to make them less biased than humans. A few years ago, a group of researchers at Microsoft Research and Boston University uncovered gender bias built into certain linguistic tools used by many search engines. When asked to complete the analogy “man is to computer programmer as woman is to ___,” one such tool answered “homemaker.” Our team debiased the tool so that it returned gender-neutral completions.
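To see how such a tool can produce a biased completion, and how debiasing can remove it, here is a minimal sketch using invented two-dimensional toy vectors rather than real word embeddings. The analogy is completed by vector arithmetic (b − a + c) and nearest-neighbor search, and the debiasing step shown is a simplified “neutralize” operation that projects out a gender direction; the actual tool and method from the study are more elaborate.

```python
import numpy as np

# Toy 2-D embeddings invented for illustration: axis 0 loosely encodes
# gender, axis 1 loosely encodes "occupation-ness". Not real word2vec values.
vecs = {
    "man":        np.array([ 1.0, 0.0]),
    "woman":      np.array([-1.0, 0.0]),
    "programmer": np.array([ 0.9, 1.0]),
    "homemaker":  np.array([-0.9, 1.0]),
    "doctor":     np.array([ 0.5, 1.0]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Complete 'a is to b as c is to ___' by vector arithmetic."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(vecs[w], target))

print(analogy("man", "programmer", "woman"))  # prints "homemaker" (the bias)

# Simplified "neutralize" debiasing: remove the gender-direction
# component from words that should be gender-neutral.
g = vecs["man"] - vecs["woman"]
g = g / np.linalg.norm(g)
for w in ("programmer", "homemaker", "doctor"):
    vecs[w] = vecs[w] - (vecs[w] @ g) * g

# After neutralizing, the occupation words carry no gender component.
print(cos(vecs["programmer"], g))  # prints a value at (or near) 0.0
```

The key idea is that bias in the embedding shows up as geometry: occupation words sit closer to one end of a “he − she” direction, and projecting that direction out makes them equidistant from gendered words.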