It was bound to make headlines: a study claiming that a computer could predict if someone is gay or straight, based on a photo. But the story has moved beyond headlines, to gay rights advocates charging that the study is scientifically invalid and an author of the paper saying he’s under unfair personal attack.
“What really saddens me is that LGBTQ rights groups, [Human Rights Campaign] and GLAAD, who strived for so many years to protect rights of the oppressed, are now engaged in a smear campaign against us with a real gusto,” Michal Kosinski, an assistant professor of organizational behavior at Stanford University, wrote on Facebook about the backlash against his work.
Some background: Kosinski, a psychologist and data scientist, and Yilun Wang, a computer scientist who studied at Stanford, used tens of thousands of pictures from dating sites and corresponding information about sexual preferences to create an algorithm to predict someone’s sexual orientation.
The computer’s eventual prediction-success rate based on a single photo was 81 percent for male faces and 71 percent for female faces. Significantly, though, the computer was choosing between two photos, one of a person who self-identified as gay and one of a person who identified as straight; contrary to some reports about the study, the computer wasn’t looking at a single photo and simply declaring whether that person was gay or not.
Loading five photos of a person pushed the success rate to 91 percent for men and 83 percent for women. The program also found that gay men’s faces were more “feminine,” with more “gender-atypical” features than straight men’s, and that lesbians’ faces were more “masculine” than straight women’s.
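The evaluation described above can be sketched in a few lines of code. This is a hypothetical illustration, not the authors’ actual pipeline: `score` stands in for the real classifier, and the toy data are invented. The point is the protocol itself, namely that a pair is counted correct when the self-identified-gay photo gets the higher score, and that averaging scores over several photos of one person is one plausible way five photos could sharpen the prediction.

```python
# Hypothetical sketch of a pairwise forced-choice evaluation,
# in the spirit of the study's protocol (not its actual code).

from statistics import mean

def score(photo):
    # Placeholder for a classifier that returns a higher value
    # the more "gay-typical" it judges a face to be.
    return photo["toy_score"]

def pairwise_accuracy(pairs):
    """Fraction of (gay, straight) photo pairs in which the
    self-identified-gay photo receives the higher score."""
    correct = sum(1 for gay, straight in pairs
                  if score(gay) > score(straight))
    return correct / len(pairs)

def person_score(photos):
    # Averaging scores across several photos of the same person
    # is one way multiple photos could improve the prediction.
    return mean(score(p) for p in photos)

# Toy data: each "photo" is just a dict with a precomputed score.
pairs = [({"toy_score": 0.9}, {"toy_score": 0.4}),
         ({"toy_score": 0.3}, {"toy_score": 0.6}),
         ({"toy_score": 0.8}, {"toy_score": 0.2})]
print(pairwise_accuracy(pairs))  # 2 of 3 pairs ranked correctly
```

Note that this pairwise setup measures ranking ability, which is why an 81 percent figure here is not the same claim as "correctly labels 81 percent of individuals."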
Source: Michal Kosinski via Twitter
That’s a much better success rate than humans exercising their own “gaydar” achieved, according to the paper; technically, then, the artificial intelligence succeeded. Yet the development of such a technology…