Google: Finding the Truth

On the day I started this master’s, I was asked by an unconvinced family member to sum up the point of the course. Put on the spot (and not really knowing enough about it myself), part of my limited response suggested the point was to make information accessible to all. At the time, I was not ready for the retort I received: “isn’t that what Google’s for?” Upon trying to fight my corner (which involved reciting anything I could remember from Lyn that day), my opponent left the conversation unconvinced. With every session I undertook and every morsel of information I have since acquired during this course, this question has burned in my brain. 

During session five of the DITA module, Searching for the Data, one of the aspects we discussed was googling, an activity people seem to enjoy every day because it is a seemingly easy way to search for information. Unbeknown to some users, search engines like Google are not providing us with the best objective results; rather, they are spitting out the ones that are best paid for (The Unlikely Techie, 2020). As highlighted in the aforementioned conversation, some people think of multinational advertising platforms, like Google, as public libraries, i.e. trusted and credible sources of information (Waller, 2009). But in actual fact, they are systems designed by humans, with algorithms that replicate the inequalities built into our social structure (Rollmann, 2018). People need to ask themselves whose interests this data science has in mind (The Unlikely Techie, 2020). Otherwise, as David suggested in our session, we will lose our ability to understand the whole picture without even realising it.

So, do we have the right to expect search engines, like Google, to be truthful and moral? Despite Floridi’s love of truth as a philosopher, he argues that allowing multinational companies to have this much control is very unsafe, and therefore, Google should not be in charge of what is true (Webber, 2015). Furthermore, Google’s search algorithm is designed around efficiency, not finding objective truth, and the company has never claimed to deliver the best information (Webber, 2015). Therefore, it seems we cannot expect this company to act as an objective public service when we know its goal is to make money.

As previously discussed, the machine-learning algorithms used by these search engines are trained exclusively on human behaviour (Miller, 2015). It is this human behaviour, and its biases, that drives companies like Google to push unhealthy information onto us. Unfortunately, this shows that even though users claim they want the most truthful information, our algorithm-influencing clicks prove that we are part of the problem (Miller, 2015). It would appear that society can only improve when credible fields, like investigative journalism, prosper, and complex issues drive educated debate (Webber, 2015).

Clearly, companies like Google will not opt to ignore their profitable algorithms and provide us with more truthful information – why should they? However, Floridi argues that these companies should react to their users’ need for truth on some issues (Webber, 2015). In order to do this, Floridi suggests that a universal search engine should present competing companies’ findings next to one another, acting as a democratically empowered form of quality control (Webber, 2015). I wholeheartedly agree with Floridi’s idea, seeing as it reflects the eye-opening web search activity we carried out in the fifth DITA session, in which we explored the differences between the results from search engines like Google, Bing and DuckDuckGo, and the possible reasons for those differences. However, until this form of quality control is employed by these companies, it is vital that people educate themselves in this area. The International Federation of Library Associations (IFLA) states that it is the role of the library to help patrons develop their digital and algorithm literacy skills so that they understand how algorithms affect their accessing and receiving of information (IFLA, 2020). Optimistically speaking, it could be this education that alters the human-based algorithms for the better and allows truthfulness and objectivity to flourish. With this in mind, I intend to begin my responsibilities as a LIS professional by answering the question: “isn’t that what Google’s for?”

References

IFLA (2020) IFLA Statement on Libraries and Artificial Intelligence. Available at: https://www.ifla.org/publications/node/93397 [Accessed 8 November 2020].

Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. [e-book] New York: NYU Press. Available through: http://0-search.ebscohost.com.wam.city.ac.uk/login.aspx?direct=true&db=nlebk&AN=1497317&site=ehost-live [Accessed 8 November 2020].

Miller, C. (2015) When Algorithms Discriminate. The New York Times. [blog] 9 July. Available at: https://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?abt=0002&abg=1 [Accessed 8 November 2020].

Rollmann, H. (2018) Don’t Google It: How Search Engines Reinforce Racism. PopMatters. [blog] 30 January. Available at: https://www.popmatters.com/algorithms-oppression-safiya-umoja-noble-2529677349.html

The Unlikely Techie (2020) This Is How Search Engines Reinforce Racism. Medium. [blog] 6 September. Available at: https://medium.com/swlh/this-is-how-search-engines-reinforce-racism-43471ef85b46 [Accessed 8 November 2020].

Waller, V. (2009) The relationship between public libraries and Google: Too much information. First Monday, 14(9). Available at: https://doi.org/10.5210/fm.v14i9.2477

Webber, D. (2015) Should Search Algorithms Be Moral? A Conversation with Google’s In-House Philosopher. Quartz. [blog] 5 August. Available at: https://qz.com/451051/should-search-algorithms-be-moral-a-conversation-with-googles-in-house-philosopher/ [Accessed 8 November 2020].

(Image courtesy of Fanquiao Wang)

3 thoughts on “Google: Finding the Truth”

  1. An enjoyable and thought-provoking read; I especially loved the anecdotal introduction. And… a subject matter of which we are all guilty! Once again it seems to boil down to our desperate need for comprehensive digital and information literacy. As you explain, these unreliable and oppressive algorithms are somewhat out of the average person’s control, and in the hands of Big Tech companies, but, yes, we can fight back by building our understanding of the true nature of these search engines. If you are interested in this area, I recently read Weapons of Math Destruction by Cathy O’Neil, which has lots of examples of predatory algorithms throughout digital history. I recommend it!


  2. This was a very interesting read discussing how “Googling” is such a terrifying term in and of itself. Whatever information you might have comes from whoever can pay to show it to you – so it does bring us to the question of what exactly it means for information professionals to show the difference between truth that is paid for and truth that is subject to peer review. I very much enjoyed the way you concluded by coming to an understanding of your perspective and using Floridi to back it. An excellent post.


  3. You have raised such a big issue here. Google is popular, making Google more… popular. It definitely has power in shaping the way we see the world by leading us to the information its algorithms find relevant. But how can we access the information WE find relevant among 1200 petabytes of digital information? Thanks for this post.

