Google Search and the Development of Public Opinion

In 2005, the World Wide Web had 11.5 billion indexable documents.1 By 2007 that number had more than doubled to 25 billion, and by 2012 it doubled again, topping out at more than 56 billion.2 The sheer size of the web has generated a mythology of the space as “a wild, ungovernable, and thus ungoverned realm.”3 Adding to our sense of the rough and tumble wilderness of the Web is the very nature of URLs—there is often nothing intuitive about site addresses. Without search engines, most sites would be practically impossible to find: “to exist is to be indexed.”4 Siva Vaidhyanathan describes early versions of the web as “an intimidating collection, interlinked but unindexed…exciting and democratic—to the point of anarchy…unimaginably vast…It all seemed so hopeless and seedy…Then came Google.”5 Google serves as the Web’s major port of entry for two thirds of the world’s internet users and processes more than one hundred billion queries per month.6 Since its inception in the late 1990s, the company’s stated goal has been to customize search in order to anticipate and answer the questions posed by each and every user. They are not just creating an index—they are working to give everyone their own personal index.7 For some, this customization is a comforting way of dealing with the vastness of the Web. For others, however, it raises questions about our willingness to let one company sift through all the information and tell us what is “good, true, valuable, and relevant.”8 The Web is the place where many of us encounter the opinions of others and develop our understanding of the public conversation. The increased customization of the search process distorts our conception of that conversation. Dealing with the distortion requires a combination of technical knowledge and awareness of the human motives underlying each new innovation.

Educational initiatives in digital and data literacy are critical to fostering the responsible use of information technologies, such as the Google search engine, and creating an informed public. Henry Jenkins, in his report on the development of media literacy for participatory culture, has argued that, in addition to technical skills, Web users need to be taught how to think critically about the information they encounter online.9 In this essay, I argue that for automated technologies, such as search engines and other databases, we need to develop a critical perspective that emphasizes human motive in the engineering process. The Google search engine provides an extended example of how that approach might work.

In 2006, Eric Schmidt, at the time Google’s CEO, discussed his five-year plan and his ultimate goals for the search engine. He foresaw the day when the site would be able to answer abstract questions, answer hypothetical questions, and eventually anticipate our questions.10 The goal of the search engine answering questions points to the company’s notion that the site could engage in a sort of dialogue with the user. What is concerning about this dialogue is that while the creators of the site see it as such, the users of the site likely do not. Carolyn Miller, in her work on technology for public speaking and writing, has argued that the problem with our interactions with automated systems is a uniquely rhetorical one.11 We do not view automated systems as agents. Most people see their interactions with technology as one-sided—i.e., “I am using Google to learn information about Michele Bachmann,” and not “Google is telling me about Michele Bachmann.” With social media, there is a possibility of other agents being exposed to our rhetoric, so we see ourselves as part of a conversation. However, when we interact with automated systems directly, we do not understand ourselves as participants in a discursive moment because we lack an awareness of the other. As a result, we lose a sense of our own agency in the interaction. Miller explains that agency is often experienced as a kinetic energy generated by our mindfulness of an interaction with another living being.12 Jenkins’s recommendations for approaching digital literacy from a critical perspective focus on interactions within social networking technologies, as opposed to fully automated systems. We need to expand on Jenkins’s approach by developing a way to critically approach a system that lacks the consciousness of the other apparent in social networking technologies.
A focus on human motives can restore a sense of human interactivity to automated systems and increase our capacity to view ourselves as actors.

Critical literacy should be combined with technical knowledge to help users conceptualize the role of human motive in the engineering of machines. Coding is sometimes seen as a first inroad to developing digital literacy. Organizations such as Codecademy—founded in 2011 and used by more than five million people to date—offer free coding classes in multiple programming languages to people all over the world.13 Initiatives such as this one are a critical step toward giving Web users the technical savvy necessary to be active, as opposed to passive, consumers of information online. With that said, Jenkins points out that these initiatives, taken up separately from critical education, encourage students to see being makers and being users as separate activities.14 Douglas Rushkoff, author of Program or Be Programmed and an employee of Codecademy, says that programmers are the ones shaping our world and that programming is about interacting with the world being shaped for us.15 Integrating the critical component will help students bring their knowledge of programming into their other interactions with digital systems even when they are performing the role of passive user. The remainder of this essay offers two methods for enacting the approach. The first is a historical evaluation of the development of the Google search engine aimed at uncovering the motivations that are built into the system. The second is a short experiment, which demonstrates how those motivations impact and influence public conversation. Before dealing with these critical approaches, it is important to conceptualize the role of the search engine within public conversation.

Search Engines and the Public Sphere

Search engines play a critical role in the distribution of ideas that informs our engagement with the public sphere. The public sphere, broadly conceived, according to Jürgen Habermas, is “a realm of our social life in which something approaching public opinion can be formed. Access is guaranteed to all citizens. A portion of the public sphere comes into being in every public conversation in which private individuals assemble to form a public body.”16 The role of search engines in the creation of publics is deeply tied to what Michael Warner describes as the textual nature of publics: “Publics are essentially intertextual frameworks for understanding texts against an organized background of the circulation of other texts, all interwoven not just by citational references but by the incorporation of a reflexive circulatory field in the mode of address and consumption.”17 One of the major issues as texts circulate through conversations, and contribute to the creation of public opinion, is how well each member of the public understands areas of conflict and dispute. To that end, Peter Dahlgren has argued that we should think of the Internet not as a public sphere proper, but as a site of development for shared understanding.18 Ideally, when we do research online—for example, learning about Barack Obama—we would be exposed to a variety of pages that have opinions both similar to and different from our own. That exposure makes us aware of the broad range of ideas floating around and helps us to see where we fit into the conversation. The “key assumption here is that a viable democracy must have an anchoring at the level of citizens’ lived experiences, personal resources, and subjective dispositions.”19 In that model, search engines are a critical window into public conversation. 
Siva Vaidhyanathan points out, in reference to Google, that there is a danger in letting one company, even one with honorable goals, develop the lens we use to see the world.20 Any engine will have a perspective, and relying on that perspective makes it difficult to see the full picture.

The problem of a distorted public opinion is not unique to the Internet. High levels of media consumption are often linked to a distorted sense of the world. For example, researchers looking at television developed what is known as cultivation theory, which states “that the more time people spend ‘living’ in the television world, the more likely they are to believe [the] social reality portrayed on television.”21 One outcome is that people who watch a lot of television have a disproportionate fear of crime, and falsely assume that a large portion of the population is employed in careers that fight crime.22 The major difference between the distortion that comes from television and the distortion that comes from a search engine is our perception of the technology. In the case of “traditional news outlets, most viewers of conservative or liberal news sources know that they’re going to a station curated to serve a particular political viewpoint. But Google’s agenda is opaque. Google doesn’t tell you who it thinks you are or why it’s showing you the results you are seeing.”23 We tend to be aware that there are human agendas behind the creation of television shows and movies, but with the search engine, we believe we are dealing with a machine that is free of these motives. As a result, search engines can “serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”24 Users tend to think of search engines as utilitarian, efficient, practical, and, above all: objective.25 If a search engine customizes results to only show us the pages of others who agree with our political opinion, then we begin to approach anyone who disagrees with us as if they are part of a very small minority.26 To overcome the distortion created by search engines, we have to overcome our belief in the objectivity of search engines. 
It may be useful to approach the search engine less as a piece of technology and more as a medium that is transmitting ideas. Evaluating the human motives behind the creation of the search engine is an important first step.

Searching for the Public on Google

A search engine is not a single piece of technology; it is made up of multiple complex algorithms used to collect, store, analyze, and retrieve data. Cultural analysis of algorithms can begin from Robert Kowalski’s equation: “Algorithm = Logic + Control.”27 Put another way, the algorithm combines the logic of the data itself with a system that controls how that data is stored and retrieved. In 1998, when Larry Page and Sergey Brin, the creators of Google, published an essay describing the prototype of the search engine, their language revealed a focus on the control aspect of the algorithm.28 Even in the early days of the Internet, the designers knew the Web was too big to navigate without help, and users were already unwilling to look past the first ten search results. To achieve user satisfaction, they had to present the best results, and that required a lot of decisions about what constitutes the best. The focus on controlling data to deliver the best result is the original motive underlying the engine. Google has developed well past the 1998 prototype, and today the exact details of the search process are, of course, a proprietary secret. However, much can be understood about the motives and goals behind controlling the information by combing through Google’s blog posts and press announcements about the development of various parts of the search engine. In Google’s documentation of the engineering process, we can see the motivations for the choices that created the engine that exists today. The developments can be broken down into a set of projects and goals: the index, PageRank, search customization, Panda, and Google Instant. Each of these aspects of the development of the search engine reveals places where programmers are attempting to develop and control the material form of information online.

The Index

The foundation of any search engine is its index. When you run a search on Google, you are not actually searching the web. What you are searching is an index of the web, which “like the list in the back of a book, helps you pinpoint exactly the information you need.”29 The process of creating the index means that there is always a delay between what is available and what you are seeing. Google’s process begins with a program called a spider, which automatically fetches a few pages, then follows the links those pages point to, and the links those pages point to, and so on to create an index. As the pages are crawled, every page is copied, and it is the copies of those pages that you are searching when you search the web.30 The original indexing process for Google involved building the entire index and then starting over to go back and look for changes. As a result, the index was constantly days and even weeks behind on site changes. Most of the developments in the indexing process have focused on increasing the immediacy of the information available. In 2010, Google released a new indexing system called Caffeine. Three years later, at a press conference for their fifteenth anniversary, the company’s engineers referred to this as the most important change since the engine was developed.31 Caffeine was built to accommodate the diversity of content—images, blogs, news sources, social networks—on the web and to update all the types of content faster.32 Prior to Caffeine, Google’s indexing system operated with different layers that were not made available until the entire Web was analyzed, which could take weeks. With the Caffeine system, Google “analyze[s] the Web in small portions and update[s]” the index continuously.33 For this to work, “every second Caffeine processes hundreds of thousands of pages in parallel. If this were a pile of paper it would grow three miles taller every second. Caffeine takes up nearly 100 million gigabytes of storage in one database and adds new information at a rate of hundreds of thousands of gigabytes per day.”34 There is still a delay between what you see and what is available, but the delay time has been substantially shortened.
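The crawl-and-copy process described above can be illustrated with a toy sketch. This is an illustration of the general technique, not Google’s implementation: the small dictionary of pages stands in for the live Web, and the inverted index simply maps each word to the set of pages containing it.

```python
from collections import deque

# A toy "Web": page -> (text, outgoing links). These sites and
# texts are invented stand-ins for real URLs and documents.
PAGES = {
    "a.com": ("search engines index the web", ["b.com", "c.com"]),
    "b.com": ("the web is vast", ["c.com"]),
    "c.com": ("engines rank pages", ["a.com"]),
}

def crawl_and_index(seed):
    """Follow links breadth-first (the 'spider'), copy each page,
    and build an inverted index: word -> set of pages."""
    index, copies = {}, {}
    queue, seen = deque([seed]), {seed}
    while queue:
        page = queue.popleft()
        text, links = PAGES[page]
        copies[page] = text  # searches run against the stored copy
        for word in text.split():
            index.setdefault(word, set()).add(page)
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index, copies

index, copies = crawl_and_index("a.com")
print(sorted(index["web"]))  # pages containing "web"
```

Because searches run against `copies` rather than the live pages, any change to a page is invisible until the spider revisits it—the delay the essay describes.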

The Caffeine index is designed to give the user a sense of continuously updated information, but the delay in the system means that what you are seeing is potentially inaccurate in multiple ways. One potential inaccuracy is the number of search results listed at the top of the page—often in the millions. Note that when it reports the number of pages found, it says “about ___ results.” The “about” is important. When Google searches the index, it takes the most recent slice of data and uses it to estimate how many results will be available in the rest of the system.35 Most people do not navigate past the first page of results, and for those who do, the other pages are being constructed while they sort through them. Because this number is an estimate, some users who work their way through all the pages find that the number of results is substantially smaller than reported. The Caffeine index allows Google to construct a sense of both the immediacy and the vastness of the Web. The primary page gives you the most current information while telling you the information has been culled from millions of results.
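The “about ___ results” estimate can be sketched as a simple extrapolation. This is a toy model of the idea, not Google’s method: we count matches in a small, recent slice of documents and scale up by the size of the full index, which is why the figure can overshoot the true count.

```python
def estimate_results(recent_slice, index_size, query):
    """Extrapolate a total hit count from the share of matching
    documents in a recent sample of the index (toy illustration)."""
    matches = sum(1 for doc in recent_slice if query in doc)
    return round(matches / len(recent_slice) * index_size)

# Four toy documents standing in for the freshest index segment.
slice_docs = ["bike helmets review", "best bike routes",
              "helmet safety data", "city cycling tips"]
print(estimate_results(slice_docs, 1_000_000, "bike"))
```

Here two of four sampled documents match, so the engine would report “about 500,000 results” against a million-document index, regardless of how many pages actually exist deeper in the system.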

PageRank

Once Google has created an index, the next level of control is in the retrieval process. Where developments in the indexing process have focused on speed, developments in the retrieval process have focused on popularity, customization, and quality. The primary model for Google’s search algorithm is the PageRank scoring system—it calculates the number of incoming links a page has and uses that to place the page in its database and determine relative relevance.36 Larry Page, one of the founders of Google, developed the system out of the practices he saw his professors engage in while he was at Stanford.37 In academic communities, “professors count how many times their papers had been cited as a rough index of how important they were. Like academic papers, he realized, the pages that a lot of other pages cite—say, the front page of Yahoo—could be assumed to be more ‘important,’ and the pages that those pages voted for would matter more.”38 However, PageRank wasn’t the end; the goal was to crack the problem of getting the most relevant search results. PageRank provides a way of figuring out what the majority of Web users think is the most important site, but it does not tell us what an individual user will find most relevant. To get at this, Google began mining the Web for other signals. When you run a search, Google looks at all the results and asks 200 questions to determine which ones you want—how many times does the term appear on the page? Do the words appear in the title? Are there synonyms for the words? Are these words adjacent? Is it a quality site? What is the page’s PageRank score?—and this whole process takes about half a second.39 The signals are combined with data gathered about users to develop customized results.
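The citation logic behind PageRank can be illustrated with the simplified, textbook form of the algorithm. This sketch is not Google’s production version: the three-page link graph is invented, and the damping factor of 0.85 is the standard illustrative choice from the original paper.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: page -> list of pages it links to (every page is
    assumed to have at least one outgoing link). Each page splits
    its score among the pages it 'votes' for, so heavily cited
    pages accumulate higher scores."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

# "yahoo.com" is cited by both other pages, so it ranks highest.
graph = {
    "yahoo.com": ["blog.com"],
    "blog.com": ["yahoo.com", "paper.com"],
    "paper.com": ["yahoo.com"],
}
scores = pagerank(graph)
print(max(scores, key=scores.get))
```

As with academic citation counts, a page cited by many pages—and by important pages—rises, which is why the front page of Yahoo serves as the essay’s example of an “important” site.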

Search Customization

In June 2005, the Google blog included an announcement that the engine would be using site histories for users who were signed in to customize their search results.40 Then, in December 2009, search customization went from being a service available when the user was signed in to a default setting for every search. The service enabled the site to “customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser.”41 In the early days of customized search, there was an option on the main search page to view and manage your search history, and the site offered a “view customizations” link that enabled users to see what was being customized in their results.42 Near the end of 2010, both of those options disappeared. While customization was always set as the default, these buttons made users aware of the setting and made it easy to quickly opt out of the service. Since the introduction of customization, users have had the option to delete specific items from their history, delete their whole history, pause the collection of Web history, or stop Web history collection altogether. However, all of these are deliberate actions the user must take. The default is to collect. As Google customization has improved over time, web history has become less vital for telling users what they want to know. When someone runs a search, Google uses fifty-seven signals—including the user’s location, the type of browser used, and how fast the query is typed—to determine what that user is likely seeking.43 The goal is to develop a semantic search function, which provides search results that are contextualized within the situation in which the search is run.44
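The idea of blending many signals into one relevance estimate can be sketched as a weighted sum. Every signal name and weight below is invented for illustration; Google’s actual signals—whether the fifty-seven mentioned above or the 200 questions discussed earlier—and their weighting are proprietary.

```python
# Hypothetical signal weights; the names and values are
# illustrative, not Google's actual system.
WEIGHTS = {"term_frequency": 0.3, "term_in_title": 0.25,
           "pagerank": 0.25, "near_user_location": 0.2}

def relevance(page, query, user):
    """Score one page for one query by combining weighted signals,
    mixing page properties with data about the user."""
    signals = {
        "term_frequency": min(page["text"].lower().split().count(query) / 10, 1.0),
        "term_in_title": 1.0 if query in page["title"].lower() else 0.0,
        "pagerank": page["pagerank"],  # precomputed score in 0..1
        "near_user_location": 1.0 if page["region"] == user["region"] else 0.0,
    }
    return sum(WEIGHTS[s] * value for s, value in signals.items())

page = {"title": "Bike Helmets", "pagerank": 0.6, "region": "US",
        "text": "bike helmets reviewed: the best helmets"}
user = {"region": "US"}
print(round(relevance(page, "helmets", user), 3))
```

The point of the sketch is structural: because user data (here, location) enters the same scoring function as page content, two people typing identical queries can receive differently ordered results.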

Panda

PageRank and search customization were both about using the opinions of users—individually and en masse—to determine relevance. Recently, that approach has been combined with attempts at measuring site quality. In February 2011, the Google blog announced a change to the algorithm that was significant enough to noticeably impact 11.8% of search queries.45 This change to the algorithm distinguishes between “high-quality” and “low-quality” sites and uses that standard to sort search results. In an interview at the 2011 TED Conference, the developers of the project said that they had code-named it Panda.46 The Google blog lists more than twenty questions that are used to assess the quality of a given site.47 Among those questions are “Does the article describe both sides of a story?” and “Is the site a recognized authority on its topic?” As Amit Singhal and Matt Cutts, primary engineers for the project, explain: “Google depends on the high-quality content created by wonderful websites around the world, and we do have a responsibility to encourage a healthy web ecosystem. Therefore, it is important for high-quality sites to be rewarded, and that’s exactly what this change does.”48 The focus on site quality takes the question of relative relevance to a new level. Many of the examples of quality sites given by the algorithm’s designers were sites created by well-recognized institutions: The New York Times, government sites such as the IRS, and Wikipedia.49 The shift in the algorithm from opinion to quality puts an emphasis on established institutions as information providers. It should be noted that the Panda algorithm is weighted more heavily in the creation of the search results page than the customization algorithms. Therefore, institutionalized sites receive greater preference than sites that might be specific to the user’s needs.

Google Instant

The indexing, retrieval, and ranking processes in the search engine are about controlling how Google provides users with results. Google Instant attempts to control the way users search. In September 2010, with the introduction of Google Instant, the site began providing users with both search results and search terms. Google Instant is an update of a project called Google Suggest, which provided real-time search suggestions. The mission of Google Suggest was to aid users as they composed queries, so that the queries would be more complete, contain fewer spelling errors, and require fewer keystrokes.50 This eventually developed into Google Instant. During early demonstrations of the technology, the engineers focused on customization and semantic learning to guess what you want as you are typing it. The final version searches before you type by anticipating what you are looking for: “No one wants search results for [‘bike h’] in the process of searching for [bike helmets].”51 According to Marissa Mayer, former VP of Search Products and User Experience at Google, “Instant takes what you have typed already, predicts the most likely completion and streams results in real-time for those predictions—yielding a smarter and faster search that is interactive, predictive and powerful.”52 The service focuses on three major features: Dynamic Results (results appear as you type), Predictions (guessing the rest of your query before you are done), and Scroll to Search (looking through predictions to pick the best term).53 The user can immediately see the system physically respond to typing. Google prides itself on constantly crawling the Web for up-to-the-minute search results, and the average query response is one-fourth of one second.54 Mayer said at the announcement of Google Instant: “Users tend to spend nine seconds on average entering a search query into Google.
After they hit the search button, the query spends an average of three hundred milliseconds traversing Google’s servers before results hurtle back to the users, who spend an average of fifteen seconds picking a selection from the results.”55 Google Instant saves users two to five seconds on a search. Applied to the number of searches Google processes per day, this adds up to an estimated 350 million hours of user time each year.56 Google Instant makes the search process appear to physically respond to the user, and the increasing speed with which users choose results allows the engine to assert more control at each stage of the process.

The Google Instant suggestions are a product of what other people search on the Web and the company’s customization efforts.57 In the beginning, almost all of the Instant suggestions came from terms that other users had searched. When corporations outside Google figured that out, there were attempts to spam the search engine by entering the name of a company with various combinations of words. To weed this out, Google only includes searches that have reached a threshold of relevancy. Much like the indexing system, the algorithm keeps track of words that are used together and is flexible enough to create new combinations of words from the available searches. This means that while the Instant suggestions are based on other people’s searches, there is a chance the search suggestions you are getting are unique. Finally, the Instant algorithm runs in conjunction with the same customization software originally used to generate search results. The software is what Eric Schmidt likely envisioned when he said that Google would someday anticipate your questions. This program does not employ the Panda algorithm, which means that Google is using a different process for generating results than for making search suggestions.
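The threshold-filtering behavior described here can be sketched with a toy suggestion function. The query log, the spam-style entry, and the threshold value are all invented for illustration; Google’s actual relevancy threshold and suggestion ranking are proprietary.

```python
from collections import Counter

# A toy query log standing in for aggregate user searches.
QUERY_LOG = [
    "bike helmets", "bike helmets", "bike helmets",
    "bike helmet laws", "bike helmet laws",
    "bike hxq spamcorp",  # a one-off, spam-style query
]

SUGGESTION_THRESHOLD = 2  # ignore queries searched too rarely

def suggest(prefix, log=QUERY_LOG, limit=3):
    """Return past queries starting with the prefix, most popular
    first, excluding those below the popularity threshold."""
    counts = Counter(log)
    candidates = [(q, n) for q, n in counts.items()
                  if q.startswith(prefix) and n >= SUGGESTION_THRESHOLD]
    candidates.sort(key=lambda qn: -qn[1])
    return [q for q, _ in candidates[:limit]]

print(suggest("bike h"))  # the one-off spam query is filtered out
```

A spammer who submits a phrase only a handful of times never crosses the threshold, so the phrase never surfaces as a suggestion; popular queries, by contrast, are echoed back to every later user who types the same prefix.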

This brief review of major developments in the Google search engine is just a beginning. In September 2013, at a press conference for Google’s fifteenth anniversary, the company announced more changes for the future of the search engine.58 The changes focused on integrating search more completely with Google Glass, the company’s iOS apps, and mobile platforms. One big announcement was an algorithmic change called Hummingbird, which impacts more than 90% of search results and is designed to help the engine answer more complex, sentence-style queries.59 Amit Singhal voiced his hope that, with all of these developments, “Google will have a conversation with you.”60 The engine continues to change, but the goal expressed by Singhal is quite similar to the one expressed by Eric Schmidt seven years earlier. While we cannot foresee exactly where the engine will go technologically, a survey of the history of its development, and the desires of the engineers, reveals much about the goals pushing Google forward. Knowledge of the technological pieces, and the goals that went into their construction, can help users to see the human on the other side of the search process and engage the process more critically.

Experimenting with Google

A critical perspective for automated technology requires integrating knowledge about the development of the engine with real-world interactions with the search process. Using the search engine, comparing our experiences to others’, and critically discussing those experiences translates the information about how and why the engine works the way it does into something usable day-to-day. To that end, the last section of this article discusses three instances of tests and experiments performed using the search engine. The tests demonstrate different potential issues with the search engine and provide examples of how educators might incorporate testing the search engine into their classrooms.

In the summer of 2011, I did a small test in which I invited eleven friends from different parts of the country to run searches for “Michele Bachmann” and send me their results. Eli Pariser, in his book The Filter Bubble, discusses a similar test of the search engine. Both my experiment and Pariser’s demonstrate that customization is impacting the way Google shows users the world. With that said, the Google search process changed between the two experiments, and the impact of the customization was apparent in different ways. Pariser had two of his friends search for “BP” in the spring of 2010.61 Not only did the two friends see different search results, the number of results differed—180 million results for one and 139 million for the other.62 Just like Pariser, I saw drastic differences in the number of results returned, ranging from 6,290,000 to 22,000,000. However, I did not see the differences Pariser saw on the search results page. It is important to note that Pariser ran his test (1) before Google implemented the Panda algorithm and (2) before the introduction of Google Instant. As mentioned before, with Panda, if two users enter the same search term, the quality control function will ensure that the top results returned are relatively uniform. Algorithmically, Panda comes before customization. So, quality is preferred, and customized results are pushed down in relevancy. The individuals I asked to run the search all received the same first page of results. Pariser saw Google’s customization process in the search results, but in my experiment, I saw it in the suggested search function. Some suggested search phrases were the same for everyone (e.g. “Michele Bachmann quotes” and “Michele Bachmann for President”). However, there were many search phrases that showed up selectively. Some of these suggested phrases pointed to cultural arguments about Bachmann.
A male, in his early 30s, living in upstate New York got “Michele Bachmann bikini” and “Michele Bachmann hot.” These suggestions came right after “Michele Bachmann for President,” which may have encouraged a reader, considering the suggestions in order, to take the presidential bid less seriously. Interestingly, a married mother of two in Minnesota, Bachmann’s home state, received these suggestions, in this order: “Michele Bachmann joke,” “Michele Bachmann for President,” and “Michele Bachmann crazy.” These kinds of results paint a very specific picture of Bachmann’s presidential bid. By comparison, some users received neutral results that might encourage them to take Bachmann’s bid more seriously. The behavior of the suggested search function points to the kind of distortion of public conversation discussed previously.

Comparing results and engaging in a critical discussion about the search process can reveal several ways that Google distorts our sense of the public sphere. The tests that Pariser and I conducted were aimed at understanding the role of customization in the search process. Researchers Paul Baker and Amanda Potts experimented with Google Instant to show how cultural assumptions about race can manifest themselves in the suggested search terms.63 They entered the beginnings of questions in the search engine, such as “Why do black people…” and “Why do white people…” and then noted the suggestions provided by Google’s autocomplete. The results showed that the search engine tended to complete the questions with suggestions that reinforced cultural stereotypes about race. The scholars argue that these suggestions by the engine legitimate preconceived notions by giving the impression that a set of beliefs is widely held.64 Teachers might consider experimenting with the Google search engine as part of information and digital literacy education. Having students reflect on their interactions with the search engine and compare them with the interactions of their peers provides a basis for conversation about how the human intentions engineered into the system become part of our political and cultural dialogue.

Final Thoughts

The potential issues that arise from the Google search engine are not unique to it; they are common to technology in general. Vaidhyanathan explains that the issue is the “black box of technological design. Although consumers and citizens are invited to be dazzled by the interface…they are rarely invited in to view how it works. Because we cannot see inside the box, it’s difficult to appreciate the craft, skill, risk, and brilliance.”65 The simplicity of what we see allows us to avoid the notion of the system as agent. Rhetorical agency has to do with our understanding of our capacity to act and our sense of self within performative moments.66 When our primary exposure to the public is performed through forms and plugins within the system, we lose a sense of ourselves as actors.67 Intimidation and a lack of knowledge often cause users to trust the search engine in a non-critical way; “the trust bias is reinforced by the fact that most people who use Google do so in a very unsophisticated way while nonetheless expressing a high level of confidence about their own skills at navigating a search system.”68 The task facing educators focused on digital literacy is to challenge users to see inside the black box and develop critical tools for making choices about what they see.


Baker, Paul, and Amanda Potts. “‘Why Do White People Have Thin Lips?’ Google and the Perpetuation of Stereotypes via Auto-Complete Search Forms.” Critical Discourse Studies 10, no. 2 (2013): 187-204.

Bohn, Dieter. “Garage Brand: Google Taps Its Founding Myth in Search of a New Beginning.” The Verge, September 26, 2013.

Boulton, Clint. “Google Instant Provides Predictive Search.” eWeek. September 8, 2010.

Brin, Sergey, and Larry Page. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Computer Networks and ISDN Systems 30, no. 1 (1998): 107-117.

Claburn, Thomas. “Google Instant Makes Search Psychic.” Information Week, September 8, 2010.

Cohen, Jonathan, and Gabriel Weimann. “Cultivation Revisited: Some Genres Have Some Effects on Some Viewers.” Communication Reports 13, no. 2 (2000): 99-114.

Dahlgren, Peter. “The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation.” Political Communication 22, no. 2 (2005): 147-162.

de Kunder, Maurice. “The Size of the Web.” Daily Estimated Size of the World Wide Web. Accessed September 21, 2013.

Efrati, Amir. “Google Gives Search a Refresh.” Wall Street Journal, March 15, 2012.

Ellery, Peter, William Vaughn, Jane Ellery, Jennifer Bott, Kristin Ritchey, and Lori Byers. “Understanding Internet Health Search Patterns: An Exploration in the Usefulness of Google Trends.” Journal of Communication in Healthcare 1, no. 4 (2008): 441-456.

Finley, Klint. “Codecademy Hires Program or Be Programmed Author Douglas Rushkoff to Promote Code Literacy.” TechCrunch, July 26, 2012.

Frier, Sarah. “Codecademy Raises $10M, Sees Job Service as Part of Its Future.” June 19, 2012.

Google. “How Google Works.” Accessed March 4, 2011.

———. “Technology Overview.” Accessed March 15, 2011.

———. “Autocomplete.” February 27, 2011. Accessed March 15, 2011.

Google Press Center. Google Press Day. Webcast video. May 10, 2006.

Griffin, Em. A First Look at Communication Theory. New York: McGraw-Hill, 2012.

Grimes, Carrie. “Our New Search Index: Caffeine.” Official Google Blog. June 8, 2010.

Gulli, Antonio, and Alessio Signorini. “The Indexable Web is More than 11.5 Billion Pages.” Paper presented at the 14th International Conference on the World Wide Web (special interest tracks and posters), Chiba, Japan, 2005.

Habermas, Jurgen. “The Public Sphere: An Encyclopedia Article (1964).” In Media and Cultural Studies: KeyWorks, edited by Meenakshi Durham and Douglas Kellner, 73-78. New York, NY: Wiley-Blackwell, 2006.

Hillis, Ken, Michael Petit, and Kylie Jarrett. Google and the Culture of Search. New York: Routledge, 2013.

Hof, Robert. “Meet Hummingbird: Google Just Revamped Search To Handle Your Long Questions Better.” Forbes, September 26, 2013.

Horling, Brian, and Matthew Kulick. “Personalized Search For Everyone.” Official Google Blog. December 4, 2009.

Introna, Lucas, and Helen Nissenbaum. “Shaping the Web: Why the Politics of Search Engines Matters.” The Information Society 16, no. 3 (2000): 169-185.

Jenkins, Henry. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Cambridge, MA: The Massachusetts Institute of Technology Press, 2009.

Kamvar, Sep. “Search Gets Personal.” Official Google Blog. June 28, 2005.

Kowalski, Robert. “Algorithm = Logic + Control.” Communications of the ACM 22, no. 7 (1979): 424-436.

Lagorio-Chafkin, Christine. “2 Guys Who Want to Teach the World to Code.” Inc., July 2, 2012.

Langville, Amy, and Carl Meyer. Google’s PageRank and Beyond: The Science of Search Engine Rankings. Princeton: Princeton University Press, 2006.

Levy, Steven. “TED 2011: The ‘Panda’ That Hates Farms: A Q&A With Google’s Top Search Engineers.” Wired Magazine, March 3, 2011.

Liu, Jennifer. “At a Loss for Words?” Official Google Blog, August 25, 2008.

Marmanis, Haralambos, and Dmitry Babenko. Algorithms of the Intelligent Web. Stamford, CT: Manning, 2009.

Mayer, Marissa. “Universal Search: The Best Answer is Still the Best Answer.” Official Google Blog, May 16, 2007.

Miller, Carolyn. “What Can Automation Tell Us About Agency?” Rhetoric Society Quarterly 37, no. 2 (2007): 137-157.

Pan, Bing, Helene Hembrooke, Thorsten Joachims, Lori Lorigo, Geri Gay, and Laura Granka. “In Google We Trust: Users’ Decisions on Rank, Position, and Relevance.” Journal of Computer-Mediated Communication 12, no. 3 (2007): 801-823.

Pariser, Eli. The Filter Bubble. New York: Penguin Press, 2011.

Rushkoff, Douglas. Program or Be Programmed: Ten Commands for a Digital Age. New York: OR Books, 2010.

Singhal, Amit. “More Guidance on Building High Quality Sites.” Google Webmaster Central Blog, May 6, 2011.

Singhal, Amit, and Matthew Cutts. “Finding More High-Quality Sites in Search.” Official Google Blog, February 24, 2011.

Sullivan, Danny. “Google: 100 Billion Searches Per Month, Search to Integrate Gmail, Launching Enhanced Search App for iOS.” Search Engine Land, August 8, 2012.

Vaidhyanathan, Siva. The Googlization of Everything (And Why We Should Worry). Los Angeles: University of California Press, 2011.

Warner, Michael. Publics and Counterpublics. New York: Zone Books, 2002.


Notes

  1. Antonio Gulli and Alessio Signorini, “The Indexable Web Is More Than 11.5 Billion Pages,” Paper presented at the 14th International Conference on the World Wide Web (special interest tracks and posters), Chiba, Japan, 2005.
  2. Maurice de Kunder, “The Size of the Web,” accessed September 21, 2013; Bing Pan et al., “In Google We Trust: Users’ Decisions on Rank, Position, and Relevance,” Journal of Computer-Mediated Communication 12, no. 3 (April 2007): 802.
  3. Siva Vaidhyanathan, The Googlization of Everything (And Why We Should Worry) (Los Angeles, CA: University of California Press, 2011), xi.
  4. Lucas Introna and Helen Nissenbaum, “Shaping the Web: Why the Politics of Search Engines Matters,” The Information Society 16, no. 3 (2000): 171.
  5. Vaidhyanathan, The Googlization of Everything, 1.
  6. Peter Ellery et al., “Understanding Internet Health Search Patterns: An Exploration in the Usefulness of Google Trends,” Journal of Communication in Healthcare 1, no. 4 (2008): 441; Danny Sullivan, “Google: 100 Billion Searches Per Month, Search to Integrate Gmail, Launching Enhanced Search App for iOS,” Search Engine Land, August 8, 2012.
  7. Eli Pariser, The Filter Bubble (New York, NY: Penguin Press, 2011), 127.
  8. Vaidhyanathan, The Googlization of Everything, 7.
  9. Henry Jenkins, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century (Cambridge, MA: The Massachusetts Institute of Technology Press, 2009), 28.
  10. Google Press Center, “Google Press Day,” May 10, 2006.
  11. Carolyn Miller, “What Can Automation Tell Us About Agency?,” Rhetoric Society Quarterly 37, no. 2 (2007): 142.
  12. Ibid., 147.
  13. Sarah Frier, “Codecademy Raises $10M, Sees Job Service as Part of Its Future,” June 19, 2012; Christine Lagorio-Chafkin, “2 Guys Who Want to Teach the World to Code,” Inc., July 2, 2012.
  14. Jenkins, Confronting the Challenges of Participatory Culture, 30-32.
  15. Douglas Rushkoff, Program or Be Programmed: Ten Commands for a Digital Age (New York: OR Books, 2010), 8; Klint Finley, “Codecademy Hires Program or Be Programmed Author Douglas Rushkoff to Promote Code Literacy,” TechCrunch, July 26, 2012.
  16. Jurgen Habermas, “The Public Sphere: An Encyclopedia Article (1964),” in Media and Cultural Studies: KeyWorks, edited by Meenakshi Durham and Douglas Kellner (New York: Wiley-Blackwell, 2006), 49.
  17. Michael Warner, Publics and Counterpublics (New York: Zone Books, 2002), 16.
  18. Peter Dahlgren, “The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation,” Political Communication 22, no. 2 (2005): 147-162.
  19. Ibid., 158.
  20. Vaidhyanathan, The Googlization of Everything, 197.
  21. Jonathan Cohen and Gabriel Weimann, “Cultivation Revisited: Some Genres Have Some Effects on Some Viewers,” Communication Reports 13, no. 2 (2000): 99.
  22. Em Griffin, A First Look at Communication Theory (New York: McGraw-Hill, 2012), 366-377.
  23. Pariser, The Filter Bubble, 10.
  24. Ibid., 15.
  25. Ken Hillis, Michael Petit, and Kylie Jarrett, Google and the Culture of Search (New York: Routledge, 2013), 5.
  26. Pariser, The Filter Bubble, 161-164.
  27. Robert Kowalski, “Algorithm=Logic+Control,” Communications of the ACM 22, no. 7 (1979): 424.
  28. Sergey Brin and Larry Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Networks and ISDN Systems 30, no. 1 (1998): 107-117.
  29. Carrie Grimes, “Our New Search Index: Caffeine,” Official Google Blog, June 8, 2010.
  30. Amy Langville and Carl Meyer, Google’s PageRank and Beyond: The Science of Search Engine Rankings (Princeton, NJ: Princeton University Press, 2006), 19.
  31. Robert Hof, “Meet Hummingbird: Google Just Revamped Search To Handle Your Long Questions Better,” Forbes, September 26, 2013,
  32. Ibid.
  33. Ibid.
  34. Ibid.
  35. Langville and Meyer, Google’s PageRank and Beyond, 100.
  36. Haralambos Marmanis and Dmitry Babenko, Algorithms of the Intelligent Web (Stamford, CT: Manning, 2009), 36.
  37. Ibid., 37.
  38. Pariser, The Filter Bubble, 31.
  39. Google, “How Google Works,” accessed March 4, 2011.
  40. Sep Kamvar, “Search Gets Personal,” Official Google Blog, June 28, 2005.
  41. Brian Horling and Matthew Kulick, “Personalized Search For Everyone,” Official Google Blog, December 4, 2009.
  42. Ibid.
  43. Pariser, The Filter Bubble, 2.
  44. Amir Efrati, “Google Gives Search a Refresh,” Wall Street Journal, March 15, 2012.
  45. Amit Singhal and Matthew Cutts, “Finding More High-Quality Sites in Search,” Official Google Blog, February 24, 2011.
  46. Steven Levy, “TED 2011: The ‘Panda’ That Hates Farms: A Q&A With Google’s Top Search Engineers,” Wired Magazine, March 3, 2011.
  47. Amit Singhal, “More Guidance on Building High Quality Sites,” Google Webmaster Central Blog, May 6, 2011.
  48. Singhal and Cutts, “Finding More High-Quality Sites in Search.”
  49. Steven Levy, “TED 2011.”
  50. Jennifer Liu, “At a Loss for Words?” Official Google Blog, August 25, 2008.
  51. Marissa Mayer, “Universal Search: The Best Answer is Still the Best Answer,” Official Google Blog, May 16, 2007.
  52. Mayer, “Universal Search.”
  53. Ibid. Interestingly, the default in the search settings for “Google Instant Predictions” is set not to show the results while searchers type. For users to access that function on Chrome, they have to turn it on in the search settings.
  54. Google, “Technology Overview,” accessed March 15, 2011.
  55. Clint Boulton, “Google Instant Provides Predictive Search,” eWeek, September 8, 2010.
  56. Thomas Claburn, “Google Instant Makes Search Psychic,” Information Week, September 8, 2010.
  57. Google, “Autocomplete,” accessed March 15, 2011.
  58. Dieter Bohn, “Garage Brand: Google Taps Its Founding Myth in Search of a New Beginning,” The Verge, September 26, 2013.
  59. Hof, “Meet Hummingbird.”
  60. Bohn, “Garage Brand.”
  61. Pariser, The Filter Bubble, 2.
  62. Ibid.
  63. Paul Baker and Amanda Potts, “‘Why Do White People Have Thin Lips?’ Google and the Perpetuation of Stereotypes via Auto-Complete Search Forms,” Critical Discourse Studies 10, no. 2 (2013): 187.
  64. Ibid., 201.
  65. Vaidhyanathan, The Googlization of Everything, 52.
  66. Miller, “What Can Automation Tell Us About Agency?,” 139.
  67. Warner, Publics and Counterpublics, 113.
  68. Vaidhyanathan, The Googlization of Everything, 59.
Amber Davisson

About Amber Davisson

Amber Davisson is a Lecturer in the School of Communication at DePaul University in Chicago, IL. She teaches the core courses in the department’s Digital Communication Masters Program. Her recent book, Lady Gaga and the Remaking of Celebrity Culture, explores the pop star’s use of convergence culture to develop a rich, multi-dimensional relationship with her fans.
