Racial Bias in Online Search Engine Algorithms

I want you to go to Google. In the search bar, type in “three white teenagers”. Hit search. Now click on the Images tab. What do you see? Now do another image search, only this time search for “three black teenagers”. Again, what do you see? It is very likely that your search for “three white teenagers” returned a number of different pictures featuring glowing, fresh-faced, wide-eyed, happy-go-lucky Caucasian youngsters. One image might show a group of three young and vivacious white girls, hands on hips, posing for the camera. Another might show a group of three white teens smiling innocently into the camera, each holding a soccer ball, a football, or a basketball. It is also very likely that your search for “three black teenagers” returned only one variety of image: mug shots of young black people.

But perhaps this is simply a coincidence. Let’s try a different search, shall we? Try an image search for “beautiful dreadlocks”. What is your result? If it was anything like mine, you saw row after row of images featuring predominantly white individuals with different styles of dreadlocks. Hm, that’s curious. How about a search for “beautiful braids”? To quote Lewis Carroll: “Curiouser and curiouser!” This search returned even more images of white people than the last, all with various styles of braided hair. Well, this is interesting, particularly given the widely established fact that these two hairstyles (dreadlocks and braids) not only originated within the black community but are still most commonly worn within the black community.

Well, let’s try this one more time. This time, try a Google image search for “professional hairstyles”. What do you see? Rows and rows of predominantly white women sporting various updos, ringlets, curls, ponytails, buns, and side sweeps. It seems as though any hairstyle donned by a white individual is deemed professional, at least by Google. Now, let’s change this search by typing in “unprofessional hairstyles”. What are the resulting images? Row after row of images featuring predominantly black women with a wide variety of different hairstyles.

Does this seem strange to you? It should.

Apparently this is not an uncommon occurrence. Major online tech companies are drawing criticism as a growing number of individuals take to social media to report racial bias in the way these companies display and use information. Most recently, Google has come under fire for obvious racial discrepancies in its search results. In fact, less than a month ago your Google search for “black teenagers” would have been quite literally filled with nothing but mug shots of young black people. These biases have not been isolated to race, however; they extend to gender as well. These racial and gender biases can be seen most clearly and most frequently in three different forms of online programming: search engine algorithms, big data, and online ad delivery.

First of all, what is a search engine algorithm? In its simplest form, a search engine algorithm is a set of rules or a unique formula that the search engine uses to determine the significance of a webpage. [1] These formulas are unique to each search engine and range in their level of complexity, with the most complex (such as the ones used by Google) being the most coveted and most heavily guarded. [2] Online search engine algorithms all share the same basic construction. They all take into account the relevancy of a particular page (analyzing the frequency, usage, and specific location of keywords within the website), the individual factors of a search engine (a common one being the number of pages a search engine has indexed), and off-page factors (such as the frequency of click-throughs). [3] Although most (if not all) online programs involve some type of search engine algorithm that helps populate results for users, the most familiar examples are the popular search engine websites: Google, Bing, Yahoo, AOL, and Ask Jeeves (for us old-timers).
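
To make those three families of signals concrete, here is a minimal sketch of a ranking function. Everything in it – the field names, the weights, the sample pages – is invented for illustration, not any real engine’s formula; it simply combines an on-page relevancy score with an off-page click-through signal, the way the definition above describes.

```python
# Toy illustration of the ranking signals described above.
# All weights and signal names are hypothetical, not Google's.

def relevancy(page: dict, query_terms: list) -> float:
    """On-page factor: frequency and placement of query terms."""
    words = page["body"].lower().split()
    freq = sum(words.count(t) for t in query_terms) / max(len(words), 1)
    in_title = sum(t in page["title"].lower() for t in query_terms)
    return freq + 0.5 * in_title  # keywords in the title count extra

def rank(pages: list, query: str) -> list:
    """Combine on-page relevancy with an off-page factor (click-throughs)."""
    terms = query.lower().split()
    def score(page):
        return relevancy(page, terms) + 0.1 * page["click_through_rate"]
    return sorted(pages, key=score, reverse=True)

pages = [
    {"title": "Teen soccer champs", "body": "three teenagers win the cup",
     "click_through_rate": 0.9},
    {"title": "Local news", "body": "three teenagers arrested downtown",
     "click_through_rate": 2.4},
]
print([p["title"] for p in rank(pages, "three teenagers")])
# -> ['Local news', 'Teen soccer champs']
```

Even in this toy version, the page users clicked most rises to the top regardless of what the searcher hoped to find – a feedback dynamic that matters later in this post.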

Instances of online racial bias can be seen most clearly through search engine algorithms. Do you recall the experiment we conducted earlier? Similar searches conducted on different search engines produce varying results. For example, searching “black teenagers” versus “white teenagers” on Bing produces different results than Google, with the images of black teenagers being more consistent with the images of white teenagers. Similarly, a Yahoo search for “three black teenagers” compared to “three white teenagers” revealed the same stock photos of whimsical teenagers for the white teens, while the search for black teens revealed mostly screenshots of the racially biased Google results for the same search, peppered with a few images of happy, smiling groups of black teenagers. These results raise the question: is it simply Google that is racist?

Big data is another area in which we can see instances of online racial bias. Big data describes the large amount of data – both structured and unstructured – that inundates a business on a day-to-day basis. [4] Big data systems work similarly to a search engine in that they compile and store large amounts of information and then create suggestions and results tailored to a specific individual. A popular example is the Netflix movie suggestion algorithm, which takes what a user just watched and suggests similar titles based on their viewing history and the star ratings they gave similar movies. Another example is the popular e-commerce site Amazon, which compiles information from consumers based on the pages they have viewed and the items they have bought, and suggests similar items.
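
For a concrete picture of how such a suggestion engine can work, here is a minimal item-similarity recommender in the spirit of the Netflix and Amazon examples above. The users, titles, and star ratings are all made up; real systems are far more elaborate, but the core idea – recommend the unseen item whose rating pattern most resembles what you just consumed – is the same.

```python
# Toy item-similarity recommender; the ratings and titles are invented.
from math import sqrt

ratings = {  # user -> {item: star rating}; unrated items count as 0
    "ann": {"Drama A": 5, "Drama B": 4, "Comedy A": 1},
    "ben": {"Drama A": 4, "Drama C": 5},
    "cam": {"Drama A": 1, "Comedy A": 5, "Comedy B": 4},
}
items = {i for user_ratings in ratings.values() for i in user_ratings}

def similarity(x: str, y: str) -> float:
    """Cosine similarity between two items' rating vectors across all users."""
    vx = [ratings[u].get(x, 0) for u in ratings]
    vy = [ratings[u].get(y, 0) for u in ratings]
    dot = sum(a * b for a, b in zip(vx, vy))
    norm = sqrt(sum(a * a for a in vx)) * sqrt(sum(b * b for b in vy))
    return dot / norm if norm else 0.0

def suggest(user: str, just_watched: str) -> str:
    """Recommend the unseen item most similar to what the user just watched."""
    unseen = items - set(ratings[user])
    return max(unseen, key=lambda i: similarity(just_watched, i))

print(suggest("ann", "Drama A"))  # -> "Drama C", another drama
```

Real systems add many more signals, but the tailoring logic – match you to the behavior of people who behaved like you – is essentially this.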

The most recent issue of racial bias involving big data suggestion algorithms involved Amazon. The popular e-commerce company recently upgraded its Amazon Prime service and now offers Prime Free Same-Day Delivery, which provides Prime members with same-day delivery of more than one million products for no extra fee on orders over $35. [5] When Amazon started this new service, it was offered in 27 major metropolitan areas and provided broad coverage in most cities. [6] For example, in its hometown of Seattle, Amazon offered the new service to every zip code within the city, including surrounding suburbs. [7] However, in six major cities, the same service was not offered in zip codes with a high population of black citizens from low socio-economic backgrounds. [8] This is ironic, considering that such a service would arguably be more beneficial to a struggling black single mother who, between running from job to job and taking care of her children, is unable to find the time or spare the bus fare to go to the store, locate an item, and purchase it, than to a wealthy white yuppie who has no children, has their own vehicle, has the free time to take afternoon Zumba classes, and splurges daily on iced pumpkin spice lattes from Starbucks.

Online ad delivery is another area where one can see bias and discrimination based on race as well as gender. Ad delivery is the process by which search engines and websites, funded by sponsoring advertisers, display advertisements in the form of picture links and keyword search links based on the content of a user’s search. Recently, Harvard professor Latanya Sweeney conducted a study of 120,000 internet search ads across the country and found repeated incidents of racial bias. [9] Specifically, her study looked at Google ad-word buys made by companies that provide criminal background checks. [10] The results showed that when a search was performed on a name “racially associated” with the black community, the results were much more likely to be accompanied by an ad suggesting that the person had a criminal record, regardless of whether that person actually did. [11] Obviously, this produces adverse results and can severely diminish employment opportunities for individuals whose names are accompanied by ads suggesting they might have a criminal record.
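
Sweeney’s work left open exactly why the skew arises, but one mechanism she discussed is that the delivery system itself learns from clicks: an advertiser supplies several ad templates, and the system increasingly serves whichever template users click most for a given search term. The sketch below is a hypothetical illustration of that feedback loop – the templates, names, and statistics are all invented, and note that no field anywhere mentions race.

```python
# Hedged sketch of click-optimized ad delivery; everything here is made up.
import random

# A hypothetical advertiser buys two ad templates for name searches.
templates = ["{name}, arrested?", "{name}: contact info and background"]

# Observed (impressions, clicks) per (name, template) pair.
stats = {}

def record(name, template, clicked):
    """Log one ad impression and whether it was clicked."""
    shows, clicks = stats.get((name, template), (0, 0))
    stats[(name, template)] = (shows + 1, clicks + int(clicked))

def choose_ad(name):
    """Serve the template with the best click-through rate for this name,
    exploring occasionally so new templates still get shown."""
    if random.random() < 0.1:
        return random.choice(templates)
    def ctr(t):
        shows, clicks = stats.get((name, t), (1, 0))  # smoothed default
        return clicks / shows
    return max(templates, key=ctr)

# If users click the "arrested?" template more often for one group of
# names, the optimizer serves that group the "arrested?" ad more and
# more -- the skew compounds with no explicit racial input at all.
record("Latanya", templates[0], clicked=True)
print(choose_ad("Latanya").format(name="Latanya"))  # usually "Latanya, arrested?"
```

Nothing in this sketch encodes race; a disparity like the one Sweeney measured could emerge purely from which ads users choose to click.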

We can also see instances of gender discrimination in targeted online ad delivery. For instance, Google’s online advertising system showed an ad for high-income jobs to men much more often than it showed the same ad to women. [12] Also, research from the University of Washington found that a Google image search for “C.E.O.” produced results in which only 11 percent of the images were of women, even though 27 percent of United States chief executives are women. [13]

The impact that all of these forms of racial and gender discrimination have on current society operates on a subconscious level. It has been argued that these forms of racial and gender bias only serve to reinforce and reify those biases within individuals and society at large. A person who searches for pictures of black teenagers and is instead bombarded with images of black teens in a criminal line-up will soon begin to think (or continue to think, if they have already adopted this faulty mindset) that all black people are dangerous or criminals. Similarly, employers who are consistently inundated with ads suggesting that individuals with names typically associated with the black community have a criminal record will soon begin to assume that all persons with the name Jerome or Laquiesha have a criminal record. And women who are only targeted with ads promoting lower-paying jobs than men may begin to believe that these are the only jobs available or best suited to them. After all, if the internet said it, it has to be true, right?

But are these algorithms, in and of themselves, truly to blame? Although these search engine algorithms are largely self-correcting and self-sustaining, they are all created and programmed by humans. Humans may intend these formulas to be unbiased, but they also (whether consciously or subconsciously) embed their own social and cultural stereotypes and biases within these formulas. One area of potential bias comes from the fact that so many of the programmers creating these programs, especially machine-learning experts, are male. [14] And white. Humans recreate their social preferences and biases, and algorithm- and data-driven products will always reflect the design choices of the humans who built them. [15] Once created, these systems simply “learn” from the information originally embedded in them by their human programmers.
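
That “learning” step is worth seeing in miniature. The sketch below is a deliberately simplified, hypothetical example of a system trained on past human decisions: the model never sees race, but because the historical record it learns from is biased and a proxy feature (here, an invented zip code) correlates with race, the model faithfully automates the old bias.

```python
# Minimal sketch of embedded bias: the model below never sees race, but a
# proxy feature in biased historical decisions carries it in anyway.
# All data here is invented for illustration.
from collections import defaultdict

# Hypothetical past hiring decisions: (years_experience, zip_code) -> hired?
history = [
    ((5, "98101"), True), ((2, "98101"), True), ((1, "98101"), False),
    ((5, "60619"), False), ((6, "60619"), False), ((2, "60619"), False),
]

# "Train": estimate the historical hire rate per zip code.
hired_by_zip = defaultdict(list)
for (_, zipcode), hired in history:
    hired_by_zip[zipcode].append(hired)

def predict(years: int, zipcode: str) -> bool:
    """Recommend an interview if the zip code's historical hire rate is high."""
    past = hired_by_zip.get(zipcode, [])
    rate = sum(past) / len(past) if past else 0.5
    return rate > 0.5 or (rate == 0.5 and years >= 3)

# Equally qualified candidates, different zip codes, different answers:
print(predict(5, "98101"))  # True
print(predict(5, "60619"))  # False -- the old bias, now automated
```

Several of the comments below raise exactly this dynamic in the hiring and sentencing contexts.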

So what does the law say about all of this? Does it say anything? Subsection (a) of section 1981 of Title 42 of the United States Code provides that all persons within the jurisdiction of the United States shall have the same right in every State to the full and equal benefit of all laws and proceedings. [16] Congress built on these protections with the Civil Rights Act of 1964. Title VII of that Act applies to most employers and prohibits employment discrimination based on race, color, religion, sex, or national origin, and, through guidance issued in 1973, now also extends to persons with criminal records. [17] Title VII does not prohibit employers from obtaining criminal background information; however, certain uses of criminal information, such as a blanket policy or practice of excluding applicants or disqualifying employees based solely upon information indicating an arrest record, can result in a charge of discrimination. [18]

In light of all of this, I would like to pose some questions to you. Under current legislation, there seem to be legitimate legal implications for those found to be engaging in various forms of online discrimination. But who should actually be held liable for these types of abuses: the company, the individual programmers, or both? Should liability even attach in these instances? Should there be a legally viable cause of action for individuals (or classes of people) who claim to have suffered from these types of virtual discrimination (think of instances like Amazon’s shipping practices or employers influenced by suggestive ads)? Is this even a legitimate issue that should be addressed? If so, what are some potential solutions to this growing problem?

[1] http://www.brickmarketing.com/define-search-engine-algorithm.htm

[2] http://www.brickmarketing.com/define-search-engine-algorithm.htm

[3] http://www.brickmarketing.com/define-search-engine-algorithm.htm

[4] http://www.sas.com/en_us/insights/big-data/what-is-big-data.html

[5] http://www.bloomberg.com/graphics/2016-amazon-same-day/

[6] http://www.bloomberg.com/graphics/2016-amazon-same-day/

[7] http://www.bloomberg.com/graphics/2016-amazon-same-day/

[8] http://www.bloomberg.com/graphics/2016-amazon-same-day/

[9] https://www.fordfoundation.org/ideas/equals-change-blog/posts/can-computers-be-racist-big-data-inequality-and-discrimination/

[10] https://www.fordfoundation.org/ideas/equals-change-blog/posts/can-computers-be-racist-big-data-inequality-and-discrimination/

[11] https://www.fordfoundation.org/ideas/equals-change-blog/posts/can-computers-be-racist-big-data-inequality-and-discrimination/

[12] http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?_r=1

[13] http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?_r=1

[14] http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?_r=1

[15] http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?_r=1

[16] 42 USC § 1981.

[17] http://cacm.acm.org/magazines/2013/5/163753-discrimination-in-online-ad-delivery/abstract

[18] http://cacm.acm.org/magazines/2013/5/163753-discrimination-in-online-ad-delivery/abstract

~ by tmontaque on September 18, 2016.

10 Responses to “Racial Bias in Online Search Engine Algorithms”

  1. Personally, I think one of the biggest areas where the questions you have posed are being debated and answered is the publishing of mug shots online. Many sites profit by posting mug shots online and then offering to take them down for a fee. Unfortunately, even if you do pay the company, other sites still have the photo up (whether it’s competitor mug shot sites, blogs that have reposted the image, or cache sites). Many state legislatures are restricting this practice, and many courts are tackling interesting issues in cases where one of these companies is a party to the suit.

    These sites have high hit counts, which certainly has an impact on the search results that you mentioned in your post. But that raises the question – why aren’t there as many mug shots of white teens coming up? There are certainly plenty of white teen mug shots on these sites, but they remain comparatively absent from the search engine results. Is it because black teens are possibly less likely to pay to have the pictures removed? Is it because of disproportionate incarceration rates? (If you’re imprisoned, you don’t have internet access to have the photos removed.) Is it because of the search algorithm? Or is it because of some other reason entirely?

    Furthermore, these sites obviously have a lasting impact on the person’s life. It’s no secret that employers often google prospective employees. With those searches, these images are bound to pop up. Now, some may respond, “well, employers already know whether you’ve committed a crime,” and that may be true in many cases where employers perform a background check or explicitly ask about criminal history. However, these sites include a picture – which undoubtedly elicits a different response than a simple check mark in a box – and often include details about the charge that are not included in the application process. While I have no studies on the matter to cite, I’d be willing to hypothesize that an employer would be more likely to hire someone who checked the box than someone whose mug shot they saw online.

    More importantly, these pictures may remain online even if the charges have been dropped or the person has been found not guilty. It seems extremely unfair for someone’s mugshot to be plastered across the internet when they have committed no crime. It seems even more unjust for someone to be denied a job because an employer saw that picture online and didn’t know that the person was innocent.

    Public opinion is changing about the “hire-ability” of people with criminal histories – evidenced by movements like the “ban the box” campaign. But these sites provide employers with a back-alley way of finding out the same information. And, as I mentioned, this back alley is even more dangerous because it may trigger false positives by including mug shots of people who have committed no crime.

    Fortunately, search engines are seeking to minimize the stigma of these photos by lowering their rankings, so they are less likely to appear in search results. Plus – and here is a throwback to last week – some payment processors have put up payment blockades on these sites.

    Another interesting way that racial bias online has a negative impact on the hiring process is name classifiers. Some companies with online application databases use name classifiers to sort individuals based on race (or rather, a guess of what race they probably are based on their name). Oftentimes, these companies are using this software to fulfill their affirmative action hiring policies and bring likely diverse candidates to the top of the pile. But that raises the question – if some companies are using this tool for inclusion, what about those companies that may use it to exclude diverse candidates?

    For a great podcast on this topic, I suggest listening to Reply All’s episode #52, “Raising the Bar”. The show features an interview with Twitter’s only black engineer, who left the company because he was asked to build a name classifier to increase the company’s likelihood of hiring diverse candidates. The episode can be found here: https://gimletmedia.com/episode/52-raising-the-bar/

  2. Prior to addressing the questions posed, I will say this: I do not believe, based on what I have seen and what you have posted, that the legal implications are clear or strong enough – at least not to me. If we are to truly curtail racial bias in areas such as search engine algorithms, big data, and online ad delivery, I believe the legal ramifications of engaging in such actions should be more clearly condemned by a statute designed specifically to address this issue. I beat the drum last week on the uncertainty of the law regarding the choking off of payment services to Mega and others; I will beat that drum again this week, because I simply do not believe the law, as it stands, is full-throated enough in its condemnation of racial biases online.
    a. That being said, I think that an appropriate stand-alone statute addressing this issue would penalize those who engage in the construction and utilization of racially motivated algorithms, big data, and the like. That is to say, if you participate in, or contribute to, the intentional use of these techniques, the law should provide a means through which you may be punished. I also feel that an appropriate stand-alone statute would require proof that the producer or user of a racially motivated online ad delivery system or the like acted knowingly and/or intentionally. This would of course be a high burden of proof in a courtroom, but without it one can imagine an individual who was truly unaware of the racial basis of an algorithm they were using being found liable or, if the statute were implemented as a criminal law, prosecuted and potentially incarcerated – for something that a contractor or subordinate did without the accused’s knowledge, or for an algorithm they were specifically ordered to create on pain of mistreatment by superiors.
    b. While I do believe that liability should attach, I am unsure that there should be a cause of action for each individual person who may have suffered from race-based search engine algorithms, simply because determining things such as damages would be extremely difficult, the calculations inexact, and the potential real for some to abuse the law and file suit without legitimate grounds. It seems that perhaps those who break this statute should be fined a specified amount for the damage done, if proven, and that money should then be placed in the HUD budget or given to charities designed for the benefit of the demographics damaged by the actions of the firms that wronged them.
    c. Lastly, the question of whether this is a legitimate issue that needs to be addressed is clear to me – of course it is. So many in America, from many different walks of life, do not understand that people of other ethnicities, sexual orientations, etc. do not live in the same everyday reality as many of their peers. The internet is now a large part of how our perceptions of others are formed; it simply is a part of our culture as a nation and a world. For many, the internet is part of the process of acculturation. The internet often drives culture and cultural perceptions. Knowing this means that we must understand the consequences. When young Americans google images in their childhood years that could help form their opinions of other people from the onset of their lives, we have to ensure that the information they receive will not further misunderstanding or plant seeds for hate later in life. Individuals can post whatever they choose on the internet, but when an individual or company utilizes a technique that hurts people, it becomes the business of all of us to end that conduct by whatever means possible within the confines of our Constitution.

  3. Is this a legitimate issue? I definitely believe so. I was not shocked by the results of searching “three white teenagers” vs. “three black teenagers”. However, I was shocked by the “professional vs. unprofessional hairstyles” search. It is alarming to see how the results show black individuals almost exclusively under “unprofessional hairstyles,” while the “professional” results contain disproportionately few pictures of black people compared to white people. The internet is arguably the most used tool of our generation. People use the internet for various reasons – obtaining information, business, entertainment, communication, etc. – and if there are discrimination issues with search results and other sources, it is important that we do not take them lightly. As mentioned in the blog post, many employers are utilizing the internet to research potential employees, and these discrimination issues may adversely impact minorities, women, or other groups without those same people even being aware that the discrimination exists! Ordinary people might not even think or know that they can be discriminated against by something as simple as a search result; that is why it is imperative that we as a society inform companies, the government, and everyday people of these ongoing issues.

    Although not necessarily the same issue, I read an article about Airbnb and how they are making policy changes to fight racial bias. [1] The article explained how a black male was told that there was no availability on certain dates for a property rental, but he later found out the property was rented to a white couple on those same exact dates. Now Airbnb is implementing voluntary anti-bias training for hosts and issuing discipline for hosts who are caught violating its anti-discrimination policies. However, it is hard to tell whether this was more of a public relations stunt than an actual solution to the problem.

    At the end of the day, I believe what is being displayed on the internet is not necessarily an algorithm’s fault. For example, internet search engines typically just pull data from databases and show the most relevant results. I think the bigger issue is that we still live in a world where people hold much bias and we have systemic racism, and the search engine merely reflects these issues in the information it pulls.

    It will be interesting to see what legislation develops in regards to “virtual discrimination”. Obviously, discrimination is illegal, but I am curious how people or companies may be held liable for discrimination by an algorithm. I think it will be difficult to enforce if there is no evidence showing intent. How would we be able to show that the discrimination was a company or individual purposely discriminating, as opposed to a programmer who developed an algorithm that discriminated by mere coincidence or by mistake?

    [1] http://www.latimes.com/business/technology/la-fi-tn-airbnb-discrimination-20160908-snap-story.html

  4. To start, I’d like to say that I do think the issue of racial bias in search engine algorithms, big data, and online ad delivery is a legitimate issue. After reading this week’s blog post, I couldn’t help but think about how scary it is that such a small group of people could have such a large influence on basically every aspect of society. As more and more people become reliant on and entranced by the internet, the more power this small group of people possesses. That small group with such great power is the large internet companies, such as Google, Yahoo, and Bing, and the people who work for them. I don’t know about the rest of you, but the only thing I know about search engine algorithms is that when I type something into a Google search, websites related to my search somehow pop up. Since only a small number of people actually know what is going on to produce those specific results, that small number of people can control which websites pop up and which don’t, essentially determining which information gets viewed by the public and which does not. Since so many people now use these search engines, their results can influence social norms or express different forms of social bias.

    Given my lack of knowledge of the inner workings of search engine algorithms, I find it difficult to answer the other questions posed in this week’s blog post. As for the question of who should be held liable for these types of abuses, I definitely believe that the company should be held liable, and maybe the individual programmers as well, depending on the circumstances. The company should be responsible for the product they make available to the public and should eliminate issues such as racial bias in house, before the product reaches the public. Since I don’t know how the algorithms are created or how they work, it is hard for me to say whether I believe the individual programmers should be held liable along with the companies that employ them. Are the programmers intentionally entering racial bias into the algorithms, or is the bias unintentional? If the bias is suspected to be intentional, how can you prove it? If it was intentional, and it could be proven to be intentional, I would definitely hold the individual programmers liable along with the companies they work for.

    As to the question of whether there should be a legally viable cause of action for individuals who claim to have suffered from these types of virtual discrimination, I believe there should be, but with a high burden of proof on the party claiming to have been discriminated against. In my opinion, there are too many other factors involved in some of the situations mentioned in the blog post, such as the loss of job opportunities, to easily prove that virtual discrimination was the cause of the party’s injury. A higher burden of proof would allow some form of remedy to exist while not overburdening the courts with endless virtual discrimination litigation. As to what legal remedies may be available, I am not sure, since some cases of discrimination may involve an entire race or gender.

  5. A few days ago, New York Fashion Week took place. However, it does not seem that many people will remember this show for the fashion it displayed; they will instead remember it for Marc Jacobs’s use of dreads on his white female models. There were several factors that people were angry about, one being that Marc did not use any black models for this part of his fashion show. Another was that neither Marc nor the stylist who put together the colorful dreads on the white models acknowledged that this hairstyle was influenced by Rastafarian culture. [1]

    I agree with what Facebook’s global director of diversity told Business Insider: we should see characteristics such as race, gender, sexual orientation, and so forth as adding value to people and companies, not as taking such value away. [2] Unfortunately, not many people share that view, whether they like to admit it or not, as is evidenced by algorithm software “learning” to be racist through the biased searches performed by people using the internet. [3]

    I find it difficult to justify holding a company accountable for an algorithm it does not have control over (since it is the actual programmer(s) who create the algorithm), and for the same algorithm that learns and grows in accordance with the searches performed by internet users. I would like to say we should hold the programmer(s) liable for the results that come back from the algorithm they create; however, this still leaves us with the issue that the algorithm learns to be biased from the searches performed by internet users.

    Instead of having companies rely on their own programmers to come up with an algorithm they hope will not exhibit any biases, companies should more readily use services such as Evolv Solutions, which examines a company’s infrastructure and uncovers areas of exposure. [4] Xerox used Evolv Solutions’ services when incorporating technology into its hiring process and was informed that one of the variables it planned to incorporate into its algorithm would have been very racially discriminatory. [5] Such services provide invaluable insight, and there is no reason such a company could not return every year to check and/or update algorithms that incorporate machine learning. This would help eliminate the discrimination and biases an algorithm learns. I acknowledge that this is not a complete solution to the issues presented in this blog, but it may be a good start.

    [1] http://qz.com/783614/the-rainbow-dreadlocks-on-marc-jacobs-runway-have-reopened-the-pandoras-box-of-cultural-appropriation/
    [2] http://www.businessinsider.com/how-algorithms-can-be-racist-2016-4
    [3] http://kuow.org/post/can-computer-programs-be-racist-and-sexist
    [4] http://evolvsolutions.com/2013/index.php/about
    [5] http://technical.ly/philly/2016/05/12/solon-barocas-hiring-racism-big-data/

  6. Institutionalized racism and sexism are pervasive, and now it appears they have even invaded our most often used source of information. I can only imagine the emotional harm that these algorithms inflict on those who are already suffering unfair prejudice, in addition to the harm the algorithms cause by further entrenching negative stereotypes. Today, when children Google something, they expect the results to be indicative of reality. When the images generated by innocent searches like “three black teenagers” or “C.E.O.” portray gender and racial bias, they create in a child the normative impression, with the Google seal, that their race or their sex is inferior. This is a harm that cannot be tolerated.

    The issue, however, remains as to what can be done in response. To answer the first question, I believe there are avenues of liability, and I believe there should be more. Algorithms used to make hiring decisions [1] are probably the most likely targets in widespread use. One example used previous hiring decisions (made by humans) as a sample to “train” the algorithm on the desired qualities of a new hire. [2] However, institutionalized racial bias in the original hiring decisions (the sample) taints the objectivity of the algorithm by training the program to select for traits that correlate highly with non-minority candidates. Just as with any hiring process that discriminates on account of race, I believe these algorithms create possible liability under Title VII for unfair hiring practices.

    In this case, and those like it, I believe liability should rest with the company that relies on a system that is patently racist. The tricky part comes when an algorithm is trained by users instead of just software developers. Not long ago, Microsoft tested an artificial intelligence on Twitter that relied on user tweets as a training sample. [3] The result was unsettling. Tay, the AI, became hateful and bigoted within 24 hours, and Microsoft was quick to take it down. [4] What this once again revealed is that people are just as hateful online as they are in person (perhaps worse), and their online activity can influence algorithms.

    There can be no worse application of this “Tay effect” than to criminal justice data. Juries have historically been corrupted by racism, and they are not always representative of the population. [5] In addition, sentencing is often much more stringent for black offenders. [6] Despite this data, however, there is at least one jurisdiction considering using an algorithm to determine prison sentences. [7] I don’t think Tay should serve on a jury, let alone the bench, and I think there should be liability if he/she/it does.

    However, to segue into the second question, unless there is some de jure form of racism or sexism, it is not likely that liability will expand past situations such as the two above. Amazon based its algorithm on data that has been influenced by years of redlining [8], but Amazon did not itself redline. This can be distinguished from the hiring cases, in that those algorithms continue the business’s illegal reliance on racially motivated practices. Amazon, however, did not create the conditions that have concentrated minorities in parts of Chicago and elsewhere, and it must navigate the existing landscape. In cases where there has been humiliation [9], it is also difficult to imagine a recovery. That would probably require some associated form of physical or economic harm.

    Even without a huge civil action, this is a serious problem that should be addressed. For now, I think promoting diversity in the algorithm design staff is the best way to start fixing the issues. Having respect and an appreciation for the history of race and gender relations in this country requires us to look beyond data, and sometimes even beyond logic, to actively scrutinize our systems for racial and gender bias.

    There is no way to accomplish this without a diverse staff of decision makers who bring a variety of life experiences and social expectations. Predictably, without diversity, we may not recognize when something is offensive, because we lack sight of the entire field of society’s feelings. We begin making decisions in the dark, and sometimes these decisions worsen existing harms.
    By failing to diversify, a company is in essence negligently and immorally running the risk of further harming others. Negligently, because it is well documented that such harms occur when decision making is homogeneous; and immorally, because through legislation such as Title VII and the Equal Protection Clause we have determined as a society that racism and sexism are unacceptable. It is simply not enough to chalk up these blunders to de facto bias. These outcomes cannot be allowed, and we can and must do better to actively identify and prevent them.

    Other solutions include promoting technological prowess in public interest lawyers, creating greater transparency in algorithm formulations, and legislation limiting the use of our personal information for programming algorithms. [10] Noticeably, large tech companies such as Facebook [11] have acknowledged a dangerous lack of diversity and begun taking steps to diversify. However, there is still a long way to go.

    [1] http://technical.ly/philly/2016/05/12/solon-barocas-hiring-racism-big-data/
    [2] Id.
    [3] http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
    [4] Id.
    [5] http://www.usnews.com/news/blogs/at-the-edge/2015/05/06/institutional-racism-is-our-way-of-life
    [6] http://www.usnews.com/news/blogs/at-the-edge/2015/05/06/institutional-racism-is-our-way-of-life
    [7] https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing#.XrjI0B09G
    [8] http://www.businessinsider.com/how-algorithms-can-be-racist-2016-4
    [9] See http://dataprivacylab.org/projects/onlineads/ ; see also http://www.cnn.com/2015/07/02/tech/google-image-recognition-gorillas-tag/
    [10] http://boingboing.net/2015/12/02/racist-algorithms-how-big-dat.html
    [11] http://www.businessinsider.com/facebooks-2016-strategy-for-improving-diversity-2016-1

  7. I think this issue is 100% legitimate and should be addressed. At the same time, I don’t think these results reflect any particular racial animus on the part of coders, algorithm creators, or shipping route deciders. To me this reflects the clear deficiencies of so-called “color blind” approaches, where claiming to ignore race ends up leading to results that inevitably have racial leanings. For Google and other algorithms, I would expect that the racism observed is a reflection of the racism of users rather than of the company – not entirely different from, say, the internet comment sections of any website that doesn’t aggressively moderate. In Amazon’s case, the same-day delivery maps were probably based almost entirely on delivery costs and Prime enrollment maps [1] that had no racial input. It shouldn’t surprise anyone that such maps betray a racial divide, because until fairly recently online purchasing has been something of a luxury, and Amazon Prime itself is something of a luxury product. Longstanding socioeconomic factors that have isolated these inner-city minority communities just get compounded in Amazon’s determination of which areas to offer same-day delivery. Companies should take a more aggressive and active role in ensuring that their systems do not act in racially biased ways; they should use their positions of authority to help move society forward rather than backward.

    Where it gets tricky is whether we should use the law to encourage or force companies to do so; I worry about the potential overreach if companies are held liable for such issues. Under current law, I don’t think it is likely that issues like these (Google algorithms and Amazon same-day delivery) would result in liability, because of the lack of apparent racial intent/animus. If lawmakers do make it easier to sue and win against tech companies for such failures to account for racial bias, I fear that new companies, without an understanding of what is required of them or what they need to adjust for, will be harmed by potential liability. While companies having racial blind spots is bad, I don’t believe liability is necessarily the best solution to them. Education and public pressure are probably better and safer options. Ultimately I think this is more likely to be resolved in the court of public opinion than in a court of law.

    [1] http://www.bloomberg.com/graphics/2016-amazon-same-day/

  8. Who should be punished for a racist and sexist algorithm? Not the individual programmer (alone). That would only allow companies to shift blame onto yet another factor that they “cannot control”. However, a company should most definitely be held liable when its algorithm promotes racial biases or relies on them for business practices.

    Algorithms are not as blind as tech companies claim. Algorithms do not pick out their own numbers; they use the ones picked by their programmers. This allows companies to continue relying on racist and sexist data, but now they can blame it on a supposedly neutral, unbiased process. Boing Boing discussed how police entered their already biased data to create an algorithm that now allows them to further discriminate against individuals. The only difference is that now they can claim they are relying not on personal bias but on a computer. [1] Google was called out for tagging African American individuals with racist slurs, showing mainly pornographic material for the search term “Latina”, and associating African Americans with terms such as “unprofessional”. [2] [3] Google is a multi-billion dollar company. Blaming such results on a technical glitch is nothing but a lazy excuse. As the biggest search engine, Google should be aware of its social responsibility to the public. It also has the funds and the manpower to test its search results and prevent so-called technical glitches. Allowing giants like Google to get away with obviously promoting racist and sexist biases sets a terrible example for other companies. Even after being called out, Google did not fix the glitch. I was searching “unprofessional hairstyles” on my phone as I typed this response, and Google had added only maybe two pictures of white hairstyles (one of a shaved head with the word “sausage” shaved onto it, and one of a lady with hot pink hair) that were clearly unprofessional. Google continues to show various black hairstyles, including a picture of a little girl with natural hair, as unprofessional. This shows that companies will not remedy their algorithms if they are not forced to by the government.

    Amazon is more likely to implement same-day delivery in predominantly Caucasian neighborhoods than in African American ones. Again, Amazon claims that it was simple math that caused such results and not personal bias. Again, we are speaking of a HUGE company that should not be able to justify its behavior with some petty excuse.

    The question should not be “What really is the harm when Amazon denies same-day delivery?” but “How can we prevent future harm through racist and sexist algorithms?” Algorithms will soon be used to determine most decisions in a company. Many companies already use them in their hiring process. [4] What is even scarier, Pennsylvania will soon rely on an algorithm to determine prison sentencing. It will use software to estimate the likelihood of re-offending. [5]

    We should be aware that algorithms can be manipulated so that companies can promote their racist and sexist agendas and beliefs, and the laws should reflect such possibilities. I do not think that our current laws are sufficient to prevent such behavior.

    [1] http://boingboing.net/2015/12/02/racist-algorithms-how-big-dat.html
    [2] http://kuow.org/post/can-computer-programs-be-racist-and-sexist
    [3] http://www.businessinsider.com/how-algorithms-can-be-racist-2016-4
    [4] http://www.npr.org/sections/alltechconsidered/2015/03/23/394827451/now-algorithms-are-deciding-whom-to-hire-based-on-voice
    [5] https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing#.mswype8zx

  9. I thought the Amazon Prime Free Same-Day Delivery issue you discussed was interesting, so I looked into it a bit further. In an analysis done by Bloomberg, Amazon’s vice president for global communications said that the ethnic composition of neighborhoods was not part of the data Amazon examined when drawing up its map of where to provide same-day delivery services. Although there’s no concrete evidence that Amazon decided where to provide delivery services based on race, the Bloomberg report found that same-day delivery was not offered to predominantly Black and Hispanic zip codes in six major cities, including:

    -South, Southwest and West End, Atlanta
    -Southside and Roseland, Chicago
    -Roxbury, Boston
    -Lake Highlands, Pleasant Grove, Oak Cliff and Red Bird, Dallas
    -The Bronx and some areas of Queens, New York
    -Trinidad, Capitol Hill, Fort Dupont, Anacostia and Congress Heights, Washington D.C

    Amazon only decided to expand services to these areas a week and a half after Bloomberg’s report came out and caused an uproar. Amazon claims that it decided to provide same-day delivery to zip codes with a high concentration of Prime members first, then expand to other areas over time. It also claims that although it can’t release the specifics of how it decided which areas to offer same-day delivery, its approach was based on a cost and efficiency perspective. I have a hard time believing that Amazon was not influenced by racial bias. Amazon claims that the areas listed above were too far from its warehouses; however, the report provides several examples where Amazon provided same-day delivery to white neighborhoods farther from its distribution centers than Black neighborhoods. If distance, cost, manpower, and time were of such great concern to Amazon, then why would it skip over Black neighborhoods with Prime members that are closer, or positioned right beside white neighborhoods? It seems to me that racial bias was a factor, especially considering that Amazon is a shopping search engine known for using big data to cater to its customers’ preferences.

    I also thought you brought up a good point when you said that many of the programmers creating these algorithms are white males. You pointed out that humans recreate their social preferences and biases, and that this in turn may influence the creation of an algorithm. I believe this is a legitimate issue, and one potential solution to this growing problem is not a legal answer. I believe companies should seek to hire programmers of various races and ethnicities. I don’t think all white male programmers are racist, but if you have a majority of only one race and gender, then their collective preferences and biases will prevail and manifest themselves in the algorithms created. If you want algorithms that are truly reflective of society at large, the people who create them should reflect the global community.

    http://www.bloomberg.com/graphics/2016-amazon-same-day/

  10. I have done quite a bit of research on implicit biases while in law school. I must say that I was ignorant of the prevalence of implicit biases in everyday life. It is undeniable how much biases impact society, especially when the bias is implicit and most people do not think twice about their actions or train of thought.

    While reading for this week’s blog, I just kept thinking about how easily these algorithms are dictated by the individual programmer’s implicit biases. We all know that racism still exists, but I tend to be optimistic and say that racism is much less prevalent in my generation than it was and is in older generations. One thing is abundantly clear, though: as long as implicit biases are thriving and are enhanced by algorithms, racism and other biases are going to become more powerful. Society needs to stop that before it gets further out of hand.

    Humans are the problem, and we cannot just blame technology. I understand that algorithms are basically self-sufficient and take in information and output data on a daily basis. However, we cannot just let technology go unmonitored and then, when the algorithm is being racially biased, throw our hands in the air and claim, “The computer did it, not me!” Technology programmers need to be more critical of themselves while creating software. They also need to analyze their own software better, by at least attempting to anticipate the adjustments their algorithms will make.

    There needs to be some sort of legislation or regulation in place to hold people and companies accountable. Biases develop in areas that may not be anticipated. However, I think that when it can be proven that there was intent, or at least willful ignorance, in creating algorithms that have a racial (or gender) bias, the programmer needs to be held accountable. Additionally, if a company is aware of how the algorithm works and what factors are absent and could lead to biases, the company should be held accountable.

    Technology is too prevalent in society for people to get away with using advanced technology to further their biases. Human beings as a whole need to become more self-aware of what impact their actions have on others. Am I saying that any algorithm that has a racial bias should lead to criminal or civil charges against the programmer or company? Absolutely not. Sometimes things happen by accident. It is not always predictable how these self-operating algorithms will function day to day given the information they receive. But when the programmer or company is aware of the biased outcome of the algorithm (either at the outset, or if they do not fix the problem once they realize it), then some penalties should be involved.
