I want you to go to Google. In the search bar, type in “three white teenagers”. Hit search. Now click on the images tab. What do you see? Now do another images search, only this time search for “three black teenagers”. Again, what do you see? It is very likely that your search for “three white teenagers” populated a number of pictures featuring glowing, fresh-faced, wide-eyed, happy-go-lucky Caucasian youngsters. One image might show a group of three young and vivacious white girls, hands on hips, posing for the camera. Another might show three white teens smiling innocently into the camera, each holding a soccer ball, football, or basketball. It is also very likely that your search for “three black teenagers” populated only one variety of image: mug shots of black individuals.
But perhaps this is simply a coincidence. Let’s try a different search, shall we? Try an image search for “beautiful dreadlocks”. What is your result? If it was anything like mine, you saw row after row of images featuring predominantly white individuals with different styles of dreadlocks. Hm, that’s curious. How about a search for “beautiful braids”? To quote Lewis Carroll: “Curiouser and curiouser!” This search populated even more images of white people than the last, all with various styles of braided hair. Well, this is interesting, particularly given the widely established fact that these two hairstyles (dreadlocks and braids) not only originated within the black community but are still most commonly worn within it.
Well, let’s try this one more time. This time, try a Google image search for “professional hairstyles”. What do you see? Again, rows and rows of predominantly white women sporting various updos, ringlets, curls, ponytails, buns, and side sweeps. It seems as though any hairstyle donned by a white individual is deemed professional, at least by Google. Now, let’s change the search to “unprofessional hairstyles”. What are the resulting images? Again, row after row of images featuring predominantly black women with a wide variety of hairstyles.
Does this seem strange to you? It should.
Apparently this is not an uncommon occurrence. Major online tech companies are facing criticism as a growing number of individuals take to social media to report racial bias in the way these companies display and use information. Most recently, Google has come under fire for obvious racial discrepancies in its search results. In fact, less than a month ago your Google searches for “black teenagers” would have been quite literally filled with nothing but mug shots of young black people. These biases are not isolated to race, however; they also involve gender. Such racial and gender biases appear most clearly and most frequently in three different forms of online programming: search engine algorithms, big data, and online ad delivery.
First of all, what is a search engine algorithm? In its simplest form, a search engine algorithm is a set of rules, or a unique formula, that a search engine uses to determine the significance of a webpage. These formulas are unique to each search engine and range in complexity, with the most complex (such as those used by Google) being the most coveted and most heavily guarded. Search engine algorithms all share the same basic construct. They take into account the relevancy of a particular page (analyzing the frequency, usage, and specific location of keywords within the website), the individual factors of a search engine (a common one being the number of pages a search engine has indexed), and off-page factors (such as click-through rates). Although most (if not all) online programs involve some type of search engine algorithm to populate results for their users, the most familiar examples are popular search engine websites such as Google, Bing, Yahoo, AOL, and Ask Jeeves (for us old-timers).
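To make the idea concrete, the on-page relevancy factors described above (keyword frequency, usage, and location) can be sketched as a toy scoring function. This is a simplified illustration under invented weights, not any real engine’s algorithm:

```python
# Toy relevance scorer: a simplified sketch of the on-page factors
# discussed above. The weights (5.0 for a title match, 2.0 for an
# early mention) are illustrative assumptions, not real values.

def relevance_score(query, page_title, page_body):
    """Score a page for a query using crude on-page signals."""
    terms = query.lower().split()
    title_words = page_title.lower().split()
    body_words = page_body.lower().split()

    score = 0.0
    for term in terms:
        # Frequency: how often the term appears in the body.
        score += body_words.count(term)
        # Location: a term appearing in the title counts for more.
        if term in title_words:
            score += 5.0
        # Usage: a term appearing early in the body counts extra.
        if term in body_words[:20]:
            score += 2.0
    return score

# Rank a couple of hypothetical pages for a query.
pages = [
    ("Teen soccer league photos", "three teenagers posing with a soccer ball"),
    ("Weather report", "rain expected across the county tomorrow"),
]
ranked = sorted(pages, key=lambda p: relevance_score("teenagers", p[0], p[1]),
                reverse=True)
```

A real engine layers hundreds of such signals (plus the off-page factors mentioned above) on top of one another, but the basic mechanism is the same: numeric scores decide what you see first.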
Instances of online racial bias can be seen most clearly through search engine algorithms. Do you recall the experiment we conducted earlier? Similar searches conducted on different search engines produce varying results. For example, a search for “black teenagers” compared with “white teenagers” on Bing produced different results than on Google, with the images of black teenagers more consistent with those of white teenagers. Similarly, a Yahoo search for “three black teenagers” versus “three white teenagers” returned the same stock photos of whimsical teenagers for the white teens, while the search for black teens returned mostly screenshots of Google’s racially biased results for the same query, along with a few scattered images of happy, smiling groups of black teenagers. These results raise the question: is it simply Google that is racist?
Big data is another area in which we can see instances of online racial bias. The term describes the large amount of data – both structured and unstructured – that inundates a business on a day-to-day basis. Big data systems work similarly to a search engine in that they compile and store large amounts of information and create suggestions and results tailored to a specific individual. A popular example is the Netflix movie suggestion algorithm, which takes what a user just watched and suggests similar titles based on their viewing history and the star ratings they gave similar movies. Another example is the popular e-commerce site Amazon, which compiles information from consumers based on pages they have viewed and items they have bought, and suggests similar items.
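The suggestion mechanism described above can be sketched in miniature: recommend items that are frequently viewed by the same people. The data and logic here are invented for illustration; they are not Amazon’s or Netflix’s actual systems:

```python
# Minimal sketch of item-to-item suggestion: recommend items that
# often appear together in users' histories. All data is hypothetical.

from collections import Counter
from itertools import combinations

# Hypothetical view histories: user -> set of items viewed.
histories = {
    "alice": {"toaster", "kettle", "mug"},
    "bob": {"toaster", "kettle"},
    "carol": {"kettle", "mug"},
}

# Count how often each ordered pair of items co-occurs in a history.
co_views = Counter()
for items in histories.values():
    for a, b in combinations(sorted(items), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def suggest(item, n=2):
    """Suggest the n items most often viewed alongside `item`."""
    scores = Counter({b: c for (a, b), c in co_views.items() if a == item})
    return [b for b, _ in scores.most_common(n)]
```

Note that the system knows nothing about the items themselves; it only mirrors the behavior in its data. That property is exactly what lets demographic patterns, like the delivery-coverage gaps discussed next, slip into data-driven decisions.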
The most recent issue of racial bias involving big data suggestion algorithms involved Amazon. The popular e-commerce company recently upgraded its Amazon Prime service to offer Prime Free Same-Day Delivery, which provides Prime members with same-day delivery of more than one million products at no extra fee on orders over $35. When Amazon launched this new service, it was offered in 27 major metropolitan areas and provided broad coverage in most cities. For example, in its hometown of Seattle, Amazon offered the service in every zip code within the city, including surrounding suburbs. However, in six major cities, these same services were not offered in zip codes with a high population of black citizens from low socio-economic backgrounds. This is ironic considering that such services would arguably be more beneficial to a struggling black single mother who (between running from job to job and caring for her children) is unable to find the time or spare the bus fare to go to a store, locate an item, and purchase it, than to a wealthy white yuppie with no children, a car of their own, free time for afternoon Zumba classes, and a daily iced pumpkin spice latte from Starbucks.
Online ad delivery is another area where one can see bias and discrimination based on race as well as gender. Ad delivery is the process by which search engines and websites, through sponsorships and funding from ad-buying companies, display advertisements in the form of picture links and keyword search links based on the content of a user’s search. Recently, Harvard professor Latanya Sweeney conducted a cross-country study of 120,000 internet search ads and found repeated incidents of racial bias. Specifically, her study examined Google AdWords buys made by companies that provide criminal background checks. At the time, the results showed that when a search was performed on a name “racially associated” with the black community, the results were much more likely to be accompanied by an ad suggesting that the person had a criminal record, regardless of whether that person actually did. This typically produces adverse results and can severely diminish employment opportunities for individuals whose names are accompanied by ads suggesting a criminal record.
We can also see instances of gender discrimination in targeted online ad delivery. For instance, Google’s online advertising system showed an ad for high-income jobs to men much more often than it showed the same ad to women. Also, research from the University of Washington found that a Google images search for “C.E.O.” produced results in which only 11 percent of the images were of women, even though 27 percent of United States chief executives are women.
The impact that all of these forms of racial and gender discrimination have on current society operates on a subconscious level. It has been argued that these forms of racial and gender bias only serve to reinforce and reify these biases within individuals and society at large. A person who searches for pictures of black teenagers and is instead bombarded with images of black teens in a criminal line-up will soon begin to think (or continue to think, if they have already adopted this faulty mindset) that all black people are dangerous criminals. Similarly, employers who are consistently inundated with ads suggesting that individuals with names typically associated with the black community have criminal records will soon begin to assume that all persons named Jerome or Laquiesha have a criminal record. And women who are only targeted with ads promoting lower-paying jobs than men may begin to believe that these are the only jobs available or best suited to them. After all, if the internet said it, then it has to be true, right?
But are these algorithms, in and of themselves, truly to blame? Although these search engine algorithms are largely self-correcting and self-sustaining, they are all created and programmed by humans. Humans may intend these formulas to be unbiased, but they also (whether consciously or subconsciously) embed their own social and cultural stereotypes and biases within them. One source of potential bias is the fact that so many of the programmers creating these programs, especially machine-learning experts, are male. And white. Humans recreate their social preferences and biases, and algorithms and data-driven products will always reflect the design choices of the humans who built them. Once created, these systems simply “learn” from the information originally embedded by their human programmers.
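The “learning” described above can be illustrated with a toy model: a system that only counts which ad followed which name in its training data will faithfully reproduce whatever skew that data contains. The names and training examples below are invented purely to show the mechanism:

```python
# Toy illustration: a system that "learns" from skewed examples
# reproduces the skew. The training data here is entirely invented.

from collections import Counter

# Hypothetical labeled training examples: (searched name, ad shown).
training = [
    ("Jerome", "criminal record check"),
    ("Jerome", "criminal record check"),
    ("Jerome", "background search"),
    ("Emily", "contact info lookup"),
    ("Emily", "address lookup"),
]

# "Learning" is just counting which ad most often followed each name.
associations = {}
for name, ad in training:
    associations.setdefault(name, Counter())[ad] += 1

def pick_ad(name):
    """Serve the ad most associated with this name in the training data."""
    return associations[name].most_common(1)[0][0]
```

Nothing in the code is malicious, yet `pick_ad("Jerome")` will always serve the criminal-records ad, because that is what the historical data taught it. The bias lives in the inputs, not in the arithmetic.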
So what does the law say about all of this? Does it say anything? Subsection (a) of 42 U.S.C. § 1981 provides that all persons within the jurisdiction of the United States shall have the same right in every State to the full and equal benefit of all laws and proceedings. Related protections appear in the Civil Rights Act of 1964. Title VII of that Act applies to most employers and prohibits employment discrimination based on race, color, religion, sex, or national origin, and, through guidance issued in 1973, now also extends to persons with criminal records. Title VII does not prohibit employers from obtaining criminal background information; however, certain uses of that information, such as a blanket policy or practice of excluding applicants or disqualifying employees based solely on an arrest record, can result in a charge of discrimination.
In light of all of this, I would like to pose some questions to you. Under current legislation, there seem to be legitimate legal implications for those found to be engaging in various forms of online discrimination. But who should actually be held liable for these types of abuses? The company, the individual programmers, or both? Should liability even attach in these instances? Should there be a legally viable cause of action for individuals (or classes of people) who claim to have suffered from these types of virtual discrimination (think of instances like Amazon’s shipping practices or employers influenced by suggestive ads)? Is this even a legitimate issue that should be addressed? If so, what are some potential solutions to this growing problem?
42 U.S.C. § 1981.