Dror Margalit

A Love Letter for Data Gathering and Concerns Around Privacy

I have a love-fear relationship with data gathering and privacy in our current media environment. On one hand, gathering and processing data dramatically extends what people can do and exposes them to new possibilities and leisure activities. On the other, the way some organizations use data raises alarming concerns about privacy and human rights. Of course, human rights should come above all other considerations, but do we want to give up the technological advances and accessible information that data gathering and processing enable? One thing is certain: the ethics of data and privacy deserve more sophisticated consideration.

Source: Unsplash | Markus Spiske

Before 2006, when someone wanted to get from point A to point B by car, they could use a physical map or a GPS device. If they chose a GPS, they had to buy the hardware, attach it to their car, and update its maps every once in a while. Needless to say, when roads changed, the maps did not update automatically, and if there was a traffic jam, the driver had no way of knowing about it. Then came Waze, a satellite navigation app now owned by Google, built around a simple question: why not use real-time data from drivers themselves and let artificial intelligence update the maps automatically and alert drivers to traffic? Today, data-driven navigation apps have replaced the old systems that could not keep pace with our changing world. The ever-evolving system behind Waze is only one example of how gathering people’s data affects our everyday lives and improves them. As I write this article, for example, I enjoy the data that Google collected to develop Docs’ autofill and autocorrect features, which make my writing more fluent.


I mention all of this to argue that we cannot give up on data gathering. From grand purposes such as mapping the world in real time to leisure activities such as listening to music I love on Spotify, I want systems to learn and improve themselves. That is not to say that the possibilities of data gathering justify the abuse of data and privacy. Instead, I want to claim that we must not ask whether data gathering is virtuous but how to collect and use data virtuously. And this is a complex issue for three main reasons.


First, how can we know where the line is between information accessibility and deception? Take, for example, Google’s search engine. For years, the system has crawled the web, collecting data on websites and on people’s behavior. Using that data, Google sorts information on a massive scale, making it instantly accessible around the world. When I do research, for example, I am glad that, thanks to Google, I do not have to sort through the entire web by myself.


But there is a problem. Sorting information for people and presenting them only with what is “relevant” to them means that they are not exposed to the full range of information. In other words, the way Google uses the data it has on its users to produce their search results is deceptive; it does not show the whole truth. A common example: when people search “climate change” on Google, some will get results saying that it is the largest crisis humanity faces, while others will see the misinformation that it is a hoax. Because many people consider Google a legitimate place to seek the truth, their perception of the truth can be shaped by results that show only part of the picture, or a false one.


This brings me to the second reason ethical deliberations regarding privacy and data are complex: can we enable freedom of speech without spreading misinformation? Social media platforms use sophisticated systems to sort information and show their users the most relevant content. Those systems enable people to see what their friends share, develop communities, and organize. A wonderful example is the way the Black Lives Matter movement organized the largest protests in the movement’s history last year. This could not have happened without social media’s ability to sort data and surface the relevant content to each user. How could the BLM movement have organized if social media platforms did not know how to differentiate between racial-justice content and, say, cat videos, and present the relevant content to the right people?


However, this is not the only thing happening in our media climate. Some social media platforms know so much about their users that they can present them with content that changes their behavior and makes them stay longer on the platform. This becomes extremely problematic at a company such as Meta (formerly known as Facebook) when it prioritizes profit over truth, as former employee Frances Haugen testified last month. Meta uses data to present content that keeps users on its platforms longer, regardless of whether that content is accurate. As a result, Meta’s platforms have become vehicles for spreading misinformation, amplifying its speed and reach, the opposite of the media’s moral duty of truth-telling. Furthermore, spreading misinformation has harmful consequences for people’s lives. For example, a study published in The American Journal of Tropical Medicine and Hygiene found that by April 2020, “approximately 800 people have died, whereas 5,876 have been hospitalized and 60 have developed complete blindness after drinking methanol as a cure of coronavirus,” following the spread of the misconception that highly concentrated alcohol could disinfect the body from Covid-19. It was not the virus that physically harmed those people but the spread of misinformation.


The last issue we need to consider concerning data abuse is privacy for its own sake. Many people are unaware that social media and tech companies consistently monitor their behavior on the web and, to some extent, in the physical world. As the examples above show, when data is used virtuously it can advance humanity. But what if I do not want companies to collect my data, sell it to other companies, and potentially expose it to theft? And what about companies’ responsibility to tell their users clearly that this is happening and what the consequences are? The fact that the use of data is probably described somewhere in a web service’s terms and conditions does not mean that people understand what it truly means.


Unfortunately, I do not have a clear answer about where to draw the line for the virtuous use of data. As I mentioned before, the ability to collect and process data advances humanity. It enables machines to learn, which expands human capabilities at an accelerating pace. However, the way some companies use data today helps misinformation spread, harms people, and abuses their privacy. Therefore, I believe that data and privacy deserve a more sophisticated ethical debate, one that asks not whether the use of data is virtuous but how to make it virtuous.






References:

Buchanan, Larry, Quoctrung Bui, and Jugal K. Patel. “Black Lives Matter May Be the Largest Movement in U.S. History.” The New York Times, 2020.

Islam, Md Saiful, et al. “COVID-19–related infodemic and its impact on public health: A global social media analysis.” The American Journal of Tropical Medicine and Hygiene 103.4 (2020).
