A colleague from the NYU Information Law Institute pointed me to a recent study and accompanying news article examining the degree to which Google’s AdSense displays different ads for black- versus white-sounding names. The paper is here: http://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf , and the news article is here: http://gizmodo.com/5981665/are-google-searches-racist?post=57014274 . The paper’s main finding is that AdSense shows a statistically significant difference between the ads presented for white-sounding names and those presented for black-sounding names; specifically, searches on black-sounding names disproportionately return “arrest” ads.
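To make concrete what a “statistically significant difference” means here, the finding boils down to a 2x2 contingency test: name group versus ad type. Here is a minimal sketch of such a test using only the standard library; the counts below are invented for illustration and are not the paper’s data.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table:

                          arrest ad | other ad
        black-sounding:       a    |    b
        white-sounding:       c    |    d

    Uses the closed-form shortcut for 2x2 tables.
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 60/40 arrest-vs-other for black-sounding names,
# 30/70 for white-sounding names.
stat = chi_square_2x2(60, 40, 30, 70)
print(stat)  # compare against 3.84, the 5% critical value at 1 degree of freedom
```

A statistic above 3.84 means the imbalance is unlikely to be chance at the 5% level; the paper reports exactly this kind of result, with real counts, at much stronger significance.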
Certainly this is a complicated and important issue, and for a news story to suggest that “google ads are racist” is a gross mischaracterization of it. That ads are differentially displayed is a function of each sponsor’s willingness to pay (instantcheckmate.com, for instance). The reinforcing effect (the temporal learning) described on page 34 is a function of humans clicking on the ads. A computer algorithm certainly plays a role, but that seems largely irrelevant: the algorithm is doing exactly what it was programmed to do, respond to auction bids and to human behavior (clicks on ads). Nothing more, nothing less.
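The feedback loop described above can be sketched in a few lines. This is not Google’s actual ranking logic, just a toy model under the common assumption that ads are ranked by bid times estimated click-through rate (CTR), with the CTR estimate updated by observed clicks. Bids are held equal so that clicks alone drive the outcome, and the click behavior is made extreme to make the reinforcement visible.

```python
# Toy model of click-driven ad reinforcement (all parameters invented).
# Both ads bid the same amount; only user clicks differentiate them.
ads = {"arrest": {"bid": 1.0, "clicks": 1, "shows": 2},
       "neutral": {"bid": 1.0, "clicks": 1, "shows": 2}}

def score(ad):
    # Rank by bid * estimated CTR (clicks per show, with a small prior).
    return ad["bid"] * ad["clicks"] / ad["shows"]

def serve(clicked_ad):
    """Show the highest-scoring ad; record a click if the user would
    click that ad, which feeds back into its future score."""
    name = max(ads, key=lambda k: score(ads[k]))
    ads[name]["shows"] += 1
    if name == clicked_ad:
        ads[name]["clicks"] += 1
    return name

# Extreme case: every user clicks the "arrest" ad when shown and
# ignores the other.
shown = [serve("arrest") for _ in range(100)]
print(shown.count("arrest"), shown.count("neutral"))
```

After the first few rounds the “arrest” ad’s estimated CTR pulls ahead and it is shown every time, while the neutral ad is never shown again. The point is that nothing in this loop references race: the skew emerges entirely from aggregate click behavior.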
But is this too easy an out?
It is certainly valid to pose the question, as a colleague did: what is Google’s responsibility when its algorithm may be facilitating bias of any kind in its ad delivery?
What role does the postal service have in scanning individual letters for evidence of harmful or biased statements? None. It acts as a common carrier, as it should. And so I have difficulty believing that, absent any overt and deliberate effort to bias results against legally protected classes, Google has *any* responsibility to artificially adjust the code.