Differential ad results returned by Google's AdSense

A colleague from the NYU Information Law Institute pointed me to a recent study and news article examining the degree to which Google's AdSense displays differential ads according to black- or white-sounding names. The paper is here: http://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf , and the news article is here: http://gizmodo.com/5981665/are-google-searches-racist?post=57014274 . The paper's main finding is that Google's AdSense shows a statistically significant difference between the ads presented for white-sounding names relative to black-sounding names. Specifically, results presented for black-sounding names show disproportionately more "arrest" ads.

Certainly this is a complicated and important issue. And so for a news story to suggest that "Google ads are racist" is a gross mischaracterization of the issue. That ads are differentially displayed is a function of the sponsor's willingness to pay (instantcheckmate.com, for instance). The reinforcing effect (the temporal learning) described on page 34 is a function of humans clicking on them. Indeed, a computer algorithm plays a role in this, but that seems largely irrelevant. The algorithm is performing the function it was programmed to do: respond to auction bids and to human behavior (clicking on ads). Nothing more, nothing less.

But is this too easy an out?

It is certainly valid to pose the question, as a colleague did: what is Google's responsibility with an algorithm that may be facilitating bias of any kind in its ad delivery?

What role does the postal service have in scanning individual letters for evidence of harmful or biased statements? None. It acts as a common carrier, as it should. And so I have difficulty believing that, absent any overt and deliberate effort to bias results for legally protected classes, Google has *any* responsibility to artificially adjust the code.

  1. But Google is not a common carrier, as they argue vehemently in every discussion of their responsibility for search bias. They claim their algorithm is an editorial function protected by free speech. Now, that may be debatable, but what's clear is that all publications accepting advertising can be held responsible for accepting ads that exclude people from hiring or housing based on their race or other characteristics.

    To the extent that decisions in the algorithm's design have disparate racial effects, when an alternative design would not, then a search engine should be held responsible. Saying that the result just reflects bias in other users, and that the resulting racism is therefore not the responsibility of the designer of the algorithm harnessing that bias, seems too easy an out indeed. We have long established that accommodating the bias of customers is not allowed under the law; restaurants used to exclude black people on the grounds that they were just reflecting the bias of their white customers, but that practice was eliminated under the law.

  2. Fine, but we're talking about common carrier status with regard to AdSense, not search.

    And be held responsible for what, exactly? No one is being excluded from, or denied access to, anything.

    But even if so, how would you "solve" the alleged contribution to racial bias? No longer allow bidding on ads for names? That seems like a heavy-handed intervention with a very slippery slope.
