Why Google Isn’t Always Right

Image: a keyboard with the Google logo on the shift key. Google can be wrong. It’s important to understand why.

On September 15, 2017, BuzzFeed reported that Google Ads allowed advertisers to target users who had searched for keywords including “black people ruin everything,” “jewish parasite,” and more. After BuzzFeed contacted Google, the company removed most of the offensive keywords BuzzFeed had identified from its database of allowable keywords, but many more problematic keywords remain available for targeting. The incident speaks to a larger difficulty: it is hard to motivate large tech companies to aggressively fight online hate and misinformation when they have a strong financial stake in selling ads of all kinds.

In our society, when we’re not sure of something, we ask Google. Google spends its resources scouring the Web, trying to predict the most relevant content to show users for their 3.5 billion searches per day. Usually, Google’s guesses are accurate, but sometimes they’re not. If you Google “four presidents in the klan,” “are women evil,” or other phrases that express racist, sexist, or xenophobic ideas, you can see this effect in practice. More alarmingly, after the recent Las Vegas shooting, factually incorrect results from 4chan (a platform best known for internet trolls and with a history of spreading fake news) were featured in Google’s “Top Stories” panel. Google can present us with information that is relevant, in the sense that the result is related in some way to what was searched, but not necessarily accurate.

While the word “google” is part of our daily vocabulary (as a verb as well as a noun), people are often unclear about the scope and limits of Google. For instance, Google does not produce much web content; this is why we never cite Google itself as a source. On the other hand, most of us have used Google to locate relevant sources. Google searches are free to end users; instead, Google charges advertisers to show ads on its result pages. Maybe you have noticed that ads often appear at the top of a Google search results page. However, Google ads are not limited to Google search result pages. Website content creators can sell ad space on their pages to Google, which means that Google also controls many of the advertisements that appear on non-Google webpages. To generate as much advertising revenue as possible, Google has acquired companies in every subset of the online advertising industry. In July, Recode reported that Google would take in roughly one third ($73.8 billion) of the world’s $223.7 billion in digital ad revenue in 2017.

Google couples its large market share with easy-to-use features that make it simple for businesses, small and large alike, to advertise more effectively. It takes only about five minutes to set up Google Ads, and advertisers can target users with the keywords most relevant to them. But how does targeted advertising work? Let’s say you’re a dentist specializing in children’s dentistry. Of course, you want to advertise locally, and Google allows you to do that. You can also be very specific with your search keywords and tailor your ads to people who search for things like “my child’s toothache” or “does my kid have a cavity”; you aren’t limited to queries such as “toothache” or “dentist.” Google makes advanced targeting easy and will even suggest additional keywords for your ad campaign.
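To make the idea concrete, here is a minimal, hypothetical sketch of keyword-based targeting. It is not Google’s actual system; the `Ad` class and `match_ads` function are invented for illustration only. The core mechanism is simple: an advertiser registers keywords, and an ad becomes eligible to appear when a user’s search query contains one of them.

```python
# A deliberately simplified, hypothetical sketch of keyword-based ad targeting.
# This is NOT Google's actual system; `Ad` and `match_ads` are invented here
# purely to illustrate the idea described above: an advertiser registers
# keywords, and an ad is eligible when a user's search contains one of them.

from dataclasses import dataclass, field


@dataclass
class Ad:
    advertiser: str                          # e.g., a local children's dentist
    keywords: set[str] = field(default_factory=set)


def match_ads(query: str, ads: list[Ad]) -> list[Ad]:
    """Return ads that have at least one keyword appearing in the query."""
    query_words = set(query.lower().split())
    return [ad for ad in ads if ad.keywords & query_words]


if __name__ == "__main__":
    inventory = [
        Ad("Smile Kids Dental", {"child", "cavity", "toothache"}),
        Ad("Downtown Hardware", {"drill", "hammer"}),
    ]
    # A parent's search triggers the dentist's ad, not the hardware store's.
    for ad in match_ads("does my kid have a cavity", inventory):
        print(ad.advertiser)
```

Real ad platforms layer auctions, pricing, and location targeting on top of this step, but matching advertiser-chosen keywords against user searches is the basic mechanism at issue here, and it is what made BuzzFeed’s discovery possible.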

I can type almost any word into the Google Ads targeting platform. This is how BuzzFeed discovered the horrible things Google Ads allows advertisers to target: BuzzFeed simply typed in racist keywords, and it worked. In the course of my own investigation, I have discovered many more offensive terms that Google allows advertisers to use as keywords when creating targeted ads. Google has repeatedly committed to fighting misinformation and hate on its platform. But in the realm of advertising, a piece of its business that significantly affects its bottom line, it is dragging its feet.

What is Google’s incentive to fix these problems? Frankly, there isn’t a strong one. We currently allow tech companies like Google to largely self-regulate, so these companies respond only to advertiser pressure and widespread public outrage. For the public to effectively pressure companies like Google, however, it must be informed about what they are doing. That requires better education about the effect of these companies on our society. Assessing the credibility of sources and fact-checking online should be skills that every Internet user possesses. Recognizing when Google has failed users should be a commonplace skill, especially for a population that sends it thousands of requests per second. But we can never forget that when Google improves, it does so not for our sake but for its advertisers’. Only the force of continued external pressure will push Google to improve its services for internet users.

 

A 2018 Albright Fellow, Emma Lurie is a Wellesley College junior majoring in computer science with a minor in Chinese. She participates in research on online misinformation and Web literacy in the Wellesley College CRED lab.

 

Photo Credit: Jane0606, "A keyboard with a button Google," via Shutterstock. 
