Monday, September 18, 2017

The A.I. “Gaydar” Study and the Real Dangers of Big Data

Photograph by Jochen Tack / Alamy
As if LGBT individuals did not already have enough to worry about when it comes to discrimination and bigotry, a new study claiming that facial-recognition software can identify whether or not someone is LGBT would seemingly add to the arsenal of bigoted employers who seek to make the lives of closeted employees even more of a living hell.  Other frightening uses: anti-gay, religiously affiliated colleges might decide to screen applicants; Christofascist parents might seek to determine whether they have a gay child; right-wing churches might screen their members.  The list goes on and on. The other danger of the software is that it still has a significant margin of error, so a goodly number of straight people could find themselves incorrectly identified as gay.  The New Yorker looks at this dubious and possibly dangerous study and its technology.  Here are excerpts:
In the twenty-first century, the face is a database, a dynamic bank of information points—muscle configurations, childhood scars, barely perceptible flares of the nostril—that together speak to what you feel and who you are. Facial-recognition technology is being tested in airports around the world, matching camera footage against visa photos. Churches use it to document worshipper attendance. China has gone all in on the technology, employing it to identify jaywalkers, offer menu suggestions at KFC, and prevent the theft of toilet paper from public restrooms.

Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, discussed the technology in an interview with the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy.

The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.

The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote.
True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”

The objections didn’t end there. Some scientists criticized the study on methodological grounds. To begin with, they argued, Kosinski and Wang had used a flawed data set. Besides all being white, the users of the dating site may have been telegraphing their sexual proclivities in ways that their peers in the general population did not. . . . Was the computer model picking up on facial characteristics that all gay people everywhere shared, or merely ones that a subset of American adults, groomed and dressed a particular way, shared?
Carl Bergstrom and Jevin West, a pair of professors at the University of Washington, in Seattle, who run the blog Calling Bullshit, also took issue with Kosinski and Wang’s most ambitious conclusion—that their study provides “strong support” for the prenatal-hormone theory of sexuality, which predicts that exposure to testosterone in the womb shapes a person’s gender identity and sexual orientation in later life. In response to Kosinski and Wang’s claim that, in their study, “the faces of gay men were more feminine and the faces of lesbians were more masculine,” Bergstrom and West wrote, “we see little reason to suppose this is due to physiognomy rather than various aspects of self-presentation.”

Regardless of the accuracy of the method, past schemes to identify gay people have typically ended in cruel fashion—pogroms, imprisonment, conversion therapy. The fact is, though, that nowadays a computer model can probably already do a decent job of ascertaining your sexual orientation, even better than facial-recognition technology can, simply by scraping and analyzing the reams of data that marketing firms are continuously compiling about you. Do gay men buy more broccoli than straight men, or do they buy less of it? Do they rent bigger cars or smaller ones? Who knows? Somewhere, though, a bot is poring over your data points, grasping for ways to connect any two of them.

Therein lies the real worry. Last week, Equifax, the giant credit-reporting agency, disclosed that a security breach had exposed the personal data of more than a hundred and forty-three million Americans; company executives had been aware of the security flaw since late July but had failed to disclose it. (Three of them, however, had off-loaded some of their Equifax stock.) Earlier this week, ProPublica revealed that Facebook’s ad-buying system had enabled advertisers to target their messages at people with such interests as “How to burn jews” and “History of ‘why jews ruin the world.’ ” The categories were created not by Facebook employees but by an algorithm—yet another way in which automated thinking can turn offensive.

“The growing digitalization of our lives and rapid progress in AI continues to erode the privacy of sexual orientation and other intimate traits,” Kosinski and Wang wrote at the end of their paper. They continue, perhaps Pollyannaishly, “The postprivacy world will be a much safer and hospitable place if inhabited by well-educated, tolerant people who are dedicated to equal rights.” A piece of data itself has no positive or negative moral value, but the way we manipulate it does. It’s hard to imagine a more contentious project than programming ethics into our algorithms; to do otherwise, however, and allow algorithms to monitor themselves, is to invite the quicksand of moral equivalence.
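For readers curious about what the pipeline described in the excerpts actually amounts to (off-the-shelf facial-feature extraction feeding a simple classifier, scored by how often the model picks the correct face out of a pair), here is a minimal, purely illustrative sketch in Python. It is not the authors' code: random vectors stand in for face embeddings, a logistic-regression model stands in for their classifier, and ROC AUC stands in for the paired-comparison accuracy the article quotes. Every name, size, and number in it is an assumption for illustration only.

```python
# Minimal sketch of the kind of pipeline described above (NOT the study's code).
# Real face embeddings are replaced with random vectors; the classifier and the
# paired-comparison metric (AUC) only mirror the general approach the article describes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for facial-feature vectors extracted by an off-the-shelf model
# (e.g., a face-embedding network): 2,000 "users", 128-dimensional features.
n_samples, n_features = 2000, 128
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # self-reported label from the dating profiles

# Inject a weak synthetic signal so the toy classifier has something to learn.
X[:, 0] += 0.8 * y

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC is the probability that a randomly chosen positive example is ranked above
# a randomly chosen negative one, i.e., the "shown two photos, pick the right one"
# accuracy the article quotes (eighty-one per cent for men in the study).
scores = clf.predict_proba(X_test)[:, 1]
print("Paired-comparison accuracy (AUC):", roc_auc_score(y_test, scores))
```

The point of the sketch is how little machinery is involved: given enough labeled profile photos and a generic feature extractor, a very ordinary classifier is all that stands between the data and the kind of inference the article warns about.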

I am not paranoid.  I am "out" socially and at work.  My most important client base knows I am gay and couldn't care less.  Many gays, however, are not so fortunate, and many have good reason to want to stay in the closet at work given the utter lack of employment protections at the federal level and in 29 states, including Virginia.  
