“…In the next three to five years, Facebook COO Sheryl Sandberg told one group, the idea of a Web site that isn’t customized to a particular user will seem quaint. Yahoo Vice President Tapan Bhat agrees: ‘The future of the Web is personalization, now the web is about me'” (Pariser 8).
In the article titled “The Filter Bubble: What the Internet Is Hiding from You,” Eli Pariser describes how, starting in December 2009, major websites such as Google and Facebook altered their services to better personalize each user’s ads, search pages, and home screens to fit their interests. For instance, if two people on different computers search the same term in Google, they can get very different results based on Google’s algorithm. It is astounding that most people did not even know Google’s search had changed, for Google now tracks anything and everything users search. The quote listed above was quite surprising to me. In the next couple of years, if websites do not personalize their content to the user’s interests, it will seem rather odd. The web is no longer just about giving users the information they are looking for, but about creating a unique experience for everyone. This unique experience is what Eli Pariser has coined “the filter bubble.” The filter bubble is interesting because no one chooses to enter it, it is invisible, and yet everyone is in it. This idea will change the way we view the internet and, ultimately, the way the internet views us.
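Pariser describes this mechanism only at a high level, and Google’s actual ranking system is proprietary, but a toy sketch makes the idea concrete: the same query can return differently ordered results depending on a profile of the user’s interests. Everything below — the results, tags, and profiles — is invented for illustration; this is a minimal sketch of interest-based re-ranking, not Google’s algorithm.

```python
# Toy illustration of personalized re-ranking: the same query returns
# differently ordered results depending on a user's interest profile.
# All data here is invented; real systems use far richer signals.

RESULTS = [
    {"title": "BP quarterly earnings report", "tags": {"finance", "investing"}},
    {"title": "BP oil spill environmental impact", "tags": {"environment", "news"}},
    {"title": "BP gas station locations", "tags": {"travel", "local"}},
]

INTEREST_PROFILES = {
    "investor": {"finance", "investing", "markets"},
    "activist": {"environment", "news", "politics"},
}

def personalized_rank(results, interests):
    """Order results by overlap between each result's tags and the user's interests."""
    return sorted(results, key=lambda r: len(r["tags"] & interests), reverse=True)

for user, interests in INTEREST_PROFILES.items():
    top = personalized_rank(RESULTS, interests)[0]
    print(f"{user!r} searching 'BP' sees first: {top['title']}")
```

This mirrors the example Pariser gives in his introduction, where two friends searching “BP” saw investment news and oil-spill news respectively; the made-up profiles above reproduce that split.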
“The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts” (LaFrance).
This article details the bubble created through the filtering of news based on an individual’s internet activity. Algorithms narrow search results based on that activity, presenting news articles likely to interest the individual given previous search histories and posts on Facebook and other social media accounts. This method of presenting only a narrow selection of news articles creates a “bubble” in which consumers are shown only information they are likely to agree with or enjoy reading. It produces individuals who are not well rounded in terms of the articles they read and the information they can get hold of. This can deepen the political divide when conservatives read only conservative articles and liberals read only liberal articles; neither side is ever exposed to the other’s thoughts or opinions, leading to narrow-minded thinkers. A seemingly harmless way of presenting news to consumers can turn out to make a big impact on society.
“Our media is a perfect reflection of our interests and desires” (Pariser 12).
In Eli Pariser’s article “The Filter Bubble: What the Internet Is Hiding from You,” he discusses how top Internet sites have developed algorithms to personalize individuals’ searches. This strategy of personalization is used by sites such as Yahoo, Google, Facebook, YouTube, and Microsoft Live. Everything someone searches and clicks on is observed by Google to enhance its ability to tailor future searches to that individual’s interests. Because of this, a group of people searching the same thing may be given different results based on this “invisible autopropaganda.” Pariser devised the phrase “filter bubble” to describe the specialized world of information that everyone inhabits based on these sites’ prediction engines. The quote I chose concisely summarizes the modern movement of major Internet sites narrowing our searches and customizing our ads, creating a unique online universe for everyone.
“We shape our tools, and thereafter our tools shape us” (McLuhan)
At the beginning of Pariser’s writing, he opens with a quote by Marshall McLuhan, a media theorist. What I immediately thought of when reading this quote is the human invention of technology and social media. In creating these things, we have unconsciously molded our society. We didn’t intend for social media to shape every aspect of our lives, but it did. We used Facebook and Twitter to influence how we judged people and how we presented ourselves to others. If these tools had never been created, we couldn’t have used them to the extent we use them now. So in a way, we created our own downfall with technology because it was the product of our own creation. Pariser expands on this idea throughout the rest of his introduction.
“And personalization is shaping how information flows far beyond Facebook, as websites from Yahoo News to the New York Times-funded startup News.me cater their headlines to our particular interests and desires… the algorithms that orchestrate our ads are starting to orchestrate our lives” (Pariser 9).
Eli Pariser is the chief executive of Upworthy, a website that strives to encourage the consumption of viral-worthy news stories; he also takes part in political and internet activism to expose the negative effects that accompany our increasingly personalized internet experience. In this particular passage, Pariser discusses how news outlets increasingly use the same techniques as advertisers in order to cater to the desires of their readers, creating a personalized news consumption experience. This technique of tracking Internet users’ interests is utilized by a wide variety of websites, so the interest-tracking algorithm is beginning to have more control over what options are presented to us. Through this process, our consumption of news becomes increasingly narrow, limiting the information and choices we are given and exerting a stronger control over our lives.
I like how you picked this quote. I believe it presents both sides of the story well. We are more likely to search for the things we are interested in, and as a result, these are the kinds of things the algorithm produces. If this is the case, how is anyone who often uses technology as a pastime or for research able to learn anything beyond their specified horizons? It is a difficult question to try to answer.
“Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine-learning to get to know their users will increasingly play a role in shaping ultra-personalized news products.” (Adrienne LaFrance)
The article “The Power of Personalization” explores both the pros and cons of the ever-expanding, increasingly personalized power of the internet. Personalized search was not prevalent until after 9/11, when people searching “twin towers” found nothing more than what the twin towers had been used for; Google then advanced its search engine so that keywords would surface recent events related to the search. This personalization was soon adopted by other industries. Search engines use an algorithm to filter information according to the online habits of a specific person. Although this is convenient, LaFrance argues it also opens the door to disputes over privacy and politics. If people are exposed only to articles they already appear interested in, they are put into a “bubble” where they are less likely to expand their horizons.
“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.
-Mark Zuckerberg, Facebook founder
We shape our tools, and thereafter our tools shape us.
-Marshall McLuhan, media theorist” (Pariser 1).
These two quotes at the opening of the introduction to Eli Pariser’s piece truly emphasize the dilemma with personalization filters. Eli Pariser, chief executive of Upworthy, has the goal of promoting more meaningful internet content. The first quote Pariser includes is by Mark Zuckerberg. In it, Zuckerberg is merely stating that everyone has and prioritizes their own interests. The quote is ironic, and it is an attempt to immediately appeal to the reader’s pathos. In my opinion, knowing who Zuckerberg is along with the content of the quote, he is implying that not everyone shares the same interests; therefore, he is indirectly claiming that there is no problem with personalization filters. As aforementioned, Pariser purposefully includes this quote in the opening because it grabs the reader’s attention by comparing a trivial matter to something of deep significance, highlighting the deceptive and misleading nature of the internet.
This quote is then followed by one from Marshall McLuhan, who states that there is reciprocity between the shaping of our ideas and our “tools.” As explained in the piece, our digital footprint is constantly tracked on the internet, and, because of this, many filters are applied to the information we receive: “personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown” (Pariser 15). There are many dangers associated with these personalization filters, which leave us only with information we feel comfortable reading and talking about. Pariser’s piece serves, overall, as a warning: if we continue to be ignorantly misled by these filters, we will become a more polarized world with fewer multifaceted standpoints.
“In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone’s phone private screen, so it’s not visible to us. And that’s different. And that’s just one of the basic things that artificial intelligence can do.” – Zeynep Tufekci, “We’re building a dystopia just to make people click on ads”
Zeynep Tufekci, a sociologist who specializes in technology’s interactions with humankind, explores how big data from the ever-growing advertising industry has drastically changed our interactions with the internet. She describes how internet companies now routinely collect and sell information (such as internet searches) to predict people’s future buying habits. This accumulation of data goes unnoticed by individuals, yet large online corporations capitalize on the abundance of information by creating algorithms that systematically target individuals as future buyers of products tailored to them. In fact, Tufekci notes that these companies exploit individuals’ buying “weaknesses” to reach those most likely to purchase specific products. The difference now, she asserts, is that these seemingly “free,” ad-oriented internet platforms generate significant profits for companies such as Facebook and Google. She concludes that while social media can be used as a tool to bolster social movements, this infrastructure can also be used in dystopian ways to spy on citizens and influence people’s behavior.
“News organizations share information that is true and hopefully engaging. Technology companies like Google and Facebook enable the sharing of information that is engaging and hopefully true” (LaFrance).
In the article “The Power of Personalization,” Adrienne LaFrance describes the downsides of the ability to personalize news and articles in our modern-day society. LaFrance asserts that news personalization has created a bubble that filters news to fit the individual’s political stances, beliefs, and interests. Through this, false or inaccurate news sources can rise on popularity and self-interest. With a wide array of possibly inaccurate sources, we ultimately lose the common ground of real news. LaFrance believes that news organizations try their best to be reliable and provide valid news; search-engine companies, on the other hand, create through this bubble of personalization a wide array of personalized news that is only hopefully true.
“The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change” (Tufekci, 18:45).
Zeynep Tufekci, a professor, writer, programmer, and researcher on the social impacts of technology, explores big questions about our societies and our lives as big data, algorithms, and connectivity spread. In her TED Talk “We’re making a dystopia just to make people click on ads,” Tufekci illustrates how internet-enabled technology companies collect data through a wide array of methods to provide personalized advertisements. She expresses concern about the expansion of these algorithms and advertisements: “The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change” (Tufekci, 18:45). Algorithmically created advertisements can persuade audiences in far more ways than the purchase of a product. As the algorithms have grown more advanced and data collection has expanded, the political, personal, and social aspects of our everyday lives are also being targeted.
“The problem is, we no longer really understand how these complex algorithms work. We don’t understand how they’re doing this categorization. It’s giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it’s operating any more than you’d know what I was thinking right now if you were shown a cross section of my brain. It’s like we’re not programming anymore, we’re growing intelligence that we don’t truly understand” (Tufekci).
Zeynep Tufekci, a techno-sociologist concerned with the social implications of the rise of technology, presents a thought-provoking argument in her TED talk, “We’re Building a Dystopia Just to Make People Click on Ads.” The passage alludes to the complexity and mysteriousness of the algorithms responsible for remembering our online data. The worrisome truth behind her statement is that nobody, not even the creators of these algorithms, can truly control or even understand the beast they have created, “any more than you’d know what [Tufekci] was thinking right now if you were shown a cross section of [her] brain.” This chilling idea leads Tufekci to claim that we are growing an intelligence that, at this point, we no longer truly comprehend. This passage, as well as the title of her piece, shows the depth and possible danger of the world of digital memory.
“They are prediction engines, constantly creating and refining a theory of who you are and what you’ll do and want next. Together these engines create a unique universe of information for each of us — what I’ve come to call a filter bubble — which fundamentally alters the ways we encounter ideas and information” (Pariser 9).
This rather powerful quote from Eli Pariser, a creator and advocate of reliable internet content, radiates the notion that cyberspace is taking over our personal identities. It seems as if the bubble that sites like Google and Facebook generate is making our future decisions and choices for us, considering they know every one of our past ones. Pariser uses his knowledge of personalization to inform readers of his book “The Filter Bubble: What the Internet Is Hiding from You” that whatever ads or information file onto our screens actually form a “unique universe of information for each of us” (Pariser 9). As the left-wing advocate describes, our beliefs and desires are easily prompted and reinforced by filtered information that data collectors believe matches and extends what we have previously searched. In response, our perspective is narrowed to what our self from yesterday desired to know, and we are therefore distanced from what others around us might believe or want to know. Additionally, as Pariser describes, these “engines” never stop improving; in other words, they constantly generate new ways to identify who we are and predict what patterns we might follow in the future.
“In the next three to five years, Facebook COO Sheryl Sandberg told one group, the idea of a Web site that isn’t customized to a particular user will seem quaint. Yahoo Vice President Tapan Bhat agrees: ‘The future of the web is about personalization… now the web is about ‘me’. It’s about weaving the web together in a way that is smart and personalized for the user'” (Pariser 8).
Eli Pariser, internet activist and author of “The Filter Bubble: What the Internet Is Hiding from You,” describes the future path of the internet and how it will affect its users. The internet has become, and will continue to become, less about exposing individuals to all forms of information and more about secluding and isolating them with information that appeals to the user based on prior search histories. This lack of exposure to diverse ideas has the benefit of creating stronger bonds between individuals who share common interests, but it creates more polarization between groups of people who latch onto only the specific information they want to hear. It is dangerous how this affects individuals, on a massive scale and across all social aspects, in subtle and unknown ways, reshaping how we retain information and the thoughts and behaviors we form and develop within these personalized, filtered systems.
Zeynep Tufekci’s TED Talk, titled “We’re building a dystopia just to make people click on ads,” explores the technologically authoritarian dystopia that society is currently heading toward. A dystopia is defined as “an imagined society in which there is great suffering or injustice,” and the suffering arising from such an authoritarian state would deeply impact the society we love. Building on this concern, Tufekci says the following in the concluding minutes of her talk:
“…authoritarianism will envelope us like a spider’s web and we may not even know we’re in it” (Zeynep Tufekci, 17:33-18:48).
Here, she sheds light on the darkened secrecy of the technological, societal, and mental control that the world’s richest companies and people have gained over those who use their digital applications. Examples of these corporations are Google and, most importantly, Facebook. The question that we must explore as a society concerns one concept: what shall we do to combat the ever-growing impact of technological authoritarianism in our society?
“Emerging technologies will only exacerbate the existing problems with algorithmically promoted junk information.”
From “The Power of Personalization” by Adrienne LaFrance
The internet, and the information certain websites are able to access, is developing at an alarming rate, leaving users to question what they really should be sharing. I thought this article, as well as the TED talk, brought up some interesting points regarding the invasion of privacy that the internet brings to society. Through simple things, such as the ads we click or the stories we read, the websites we visit are able to learn a great deal about us without us even realizing it. The fact that, according to the TED Talk, they are gaining control because of this is a scary thought and can cause anxiety about the future of technology. The internet, when used for its intended purpose, is extremely beneficial and has led to great innovation and the availability of information right at our fingertips. However, when it becomes too invasive of our privacy, it becomes a problem. The internet has evolved so much, even within the past four years, and it will be interesting to explore the possibilities for the future of innovation.
“As a public and as citizens, we no longer know if we’re seeing the same information, or what anybody else is seeing, and without a common basis for information, little by little, public debate is becoming impossible” (Tufekci, 15:16, “We’re building a dystopia just to make people click on ads”).
Zeynep Tufekci, a techno-sociologist, explains the issues surrounding the current power and future of ad technology. Ad technology has become so advanced and convincing that it can nudge a person toward a certain political, behavioral, emotional, or even physical state. It targets individuals based on their data history, compiles that data to infer personal traits, and recommends ads accordingly. The technology can even influence political votes by the hundreds of thousands. With different information being displayed to different individuals, and with algorithms exerting personal influence, “public debate is becoming impossible” (Tufekci). Algorithms already target weakened individuals who are more susceptible to influential content. While the common person may think they have total control, freedom, and unbiased opinions, their whole identity could be shaped by computers.
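Tufekci’s point is partly about mechanism: a targeting system infers traits from behavioral signals and then serves each person the ad they are statistically most likely to act on. As a purely illustrative sketch (the users, signals, weights, and ad variants below are all made up, not any real platform’s system):

```python
# Toy ad targeting: score behavioral signals against each ad variant's
# targeting weights, then serve whichever variant scores highest.
# Every signal, weight, and ad name here is invented for illustration.

USER_SIGNALS = {
    "alice": {"late_night_browsing": 0.9, "deal_site_visits": 0.2},
    "bob":   {"late_night_browsing": 0.1, "deal_site_visits": 0.8},
}

AD_VARIANTS = {
    "impulse_offer":  {"late_night_browsing": 1.0},  # aimed at impulsive moments
    "discount_offer": {"deal_site_visits": 1.0},     # aimed at bargain hunters
}

def pick_ad(signals):
    """Choose the ad whose targeting weights best match this user's signals."""
    def score(ad_weights):
        return sum(signals.get(k, 0.0) * w for k, w in ad_weights.items())
    return max(AD_VARIANTS, key=lambda name: score(AD_VARIANTS[name]))

for user, signals in USER_SIGNALS.items():
    print(user, "->", pick_ad(signals))
```

Even this trivial version shows the asymmetry Tufekci worries about: the user sees only the one ad selected for them and never the scoring that chose it.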
“Now, if authoritarianism is using overt fear to terrorize us, we’ll all be scared, but we’ll know it, we’ll hate it and we’ll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it.” (Tufekci, 16:33)
In her TED talk “We’re Building a Dystopia Just to Make People Click on Ads,” Zeynep Tufekci, a Turkish techno-sociologist, writer, and academic, expresses concern that the information harvested by the algorithms of companies such as Google and Facebook will give those in power immense control over us. Tufekci notes that the way we typically think of an authoritarian government is one that uses fear to terrorize people and keep them in check. But the type of authoritarianism that threatens to rise is one that quietly watches, stores, and analyzes our every move through algorithms. She highlights that sites like Facebook and YouTube can uncover our “individual weaknesses and vulnerabilities” through our internet use, then use that information to advertise to us when we are at our weakest and to influence our decisions. Since each of us has our own personal screen, we do not know what is happening on each other’s screens, and this authoritarianism therefore has the potential to creep up on us without our knowledge.
“Soon, we likely will be able to engage with voice-activated assistants such as Siri and Alexa beyond just asking for the day’s news. We’ll be able to interrupt and ask questions—not just in order to put things in context and deepen our understanding of current events, but to personalize them.”
From “The Power of Personalization” by Adrienne LaFrance
Adrienne LaFrance, the editor of TheAtlantic.com, writes about the effects of personalization and how it limits the information each user receives. In the article she discusses how voice-activated devices, such as the Amazon Echo and Google Home, will soon be used for engaging with the news. Currently, these devices deliver brief news flashes from default or user-selected channels, while constantly collecting data from what users ask. Devices with voices are growing as a news source because they mimic conversation with a human. LaFrance discusses how, in the future, these devices will personalize the news for consumers much as Facebook does. This leads to a limited perspective, since the devices let users listen only to the news they want to hear.
“Meanwhile, in May, Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query.” (LaFrance).
In this short excerpt, LaFrance touches on the growing loss of privacy stemming from invasive “personalized” information gathering. Large companies, including Google and Facebook, have put to use the personal information they receive about their sites’ users, tailoring online experiences to them, which has caused a growing outcry among worried online communities. LaFrance describes how data from a wide array of services is combined to guess what content a person would like to see on the internet. While these practices may seem helpful in steering us toward information we might be interested in, many argue that they also force us into a sheltered box by eliminating variety. Despite what these protesters say, personalization continues to be a growing practice in a number of different fields, and its effects are only beginning to be felt.
“As data-tracking becomes more sophisticated, voice recognition software advances, and tech companies leverage personalization for profit, personalization will only become more acute” (LaFrance).
Adrienne LaFrance, a career writer and editor, is essentially saying that companies in the near future will be able to perfect personalization. It is a science still widely evolving, and the increase in data-tracking will only push it forward. Her article discusses the power of personalization as well as its more sinister uses. Certainly, as this technology develops, there will be people inclined to misuse it for their own benefit. However, it must be noted that it makes businesses more efficient in their everyday sales, which also helps customers by showing them what they want. While personalization is dangerous and will continue to become more so, it also has positive attributes that must not be ignored in a fair examination.
“More and more, your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click” (Pariser 3).
Eli Pariser, political activist and chief executive of Upworthy, notes that the increasing personalization of Google search results causes a computer to become reflective of “your own interests,” based on previous online activity (3). His piece, “The Filter Bubble,” discusses Google’s and Facebook’s personalization of ads and search results based on algorithmic data. We enter the filter bubble the second we begin interacting with the internet, allowing sites like Google to learn more about us. While users browse the internet to gain knowledge, Google gains knowledge about the user, so that future interactions may be more personalized.
“But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it. ” (Tufekci 17:20-18:20).
Zeynep Tufekci is a writer, academic, and techno-sociologist from Turkey, known mainly for her research on the impact emerging technology could have on politics and corporate responsibility. Tufekci’s TED talk “We’re building a dystopia just to make people click on ads” primarily serves as an explanation of her research. For instance, she mentions the direct effect ads can have on the number of voters in an election. In the 2010 midterms, Facebook showed people ads featuring their friends with the caption “I voted,” and this caused “an additional 340,000 voters in that election.” The experiment was repeated, and an “additional 270,000 voters” turned out for the 2012 elections. The idea of Facebook ads influencing politics is quite daunting, so why are people not in an uproar, resisting? Tufekci explains that this is because the “authoritarianism” at work is subtle; it has trapped us in a “spider’s web” we aren’t aware of. Each of our screens is constantly feeding off our information, and if we do not recognize the point Tufekci makes, corporations will continue to find our personal weaknesses and use them against us without our knowledge.
“The race to know as much as possible about you has become the central battle of the era for internet giants like Google, Facebook, Apple, and Microsoft.” (Pariser 6)
The big internet giants – Google, Facebook, Apple, and Microsoft – are in a sort of arms race for information. What information, you ask? Information about you. I like to make a comparison between these tech companies and nuclear physicists. Nuclear physicists seek to understand the physics behind the nucleus of an atom. Before nuclear physicists, chemists found that everything was made up of atoms, then discovered each atom was made up of particles called protons, neutrons, and electrons, with each atom having a nucleus of protons and neutrons. Nuclear physicists then took it one step further: what are these particles made of? What holds the nucleus together? Using their particle accelerators, they probed deeply enough to find that protons and neutrons are each made of three smaller particles called quarks. How do these quarks work? They zoom in further, to find that the quarks are held together by gluons, and that the quarks have charge and mass and their own properties and so much more. Nuclear physicists make relentless efforts to probe further and gain information, and so do the big internet companies. In my analogy, we are the atom, and with every click, every time we open an app or post a picture, those physicists are learning more and more about us. Like nuclear physicists, the internet giants collect data; but unlike physicists, who are working to better the world, those tech companies are looking to better their bank accounts.
Adrienne LaFrance, the editor of TheAtlantic.com, is an established writer who specializes in media, technology, and politics, making her highly qualified to write about their combination in her article “The Power of Personalization.” Early in the article she argues that when internet users can decide what sort of news is presented to them, they suffer. Instead of being exposed to all the news and forming an opinion once all the facts are present, people limit themselves to a small window of what they choose to see. This internet phenomenon, LaFrance notes, is far from beneficial to society: instead of people engaging in meaningful conversation and debating current topics, we are losing “the common ground of news.” The argument could then be made that this aspect of technology is a driving force in political strife among the people of our country. Is there a strong correlation between people’s opposing ideas and how they filter their news? An interesting research focus.
“As data-tracking becomes more sophisticated, voice recognition software advances, and tech companies leverage personalization for profit, personalization will only become more acute. This is potentially alarming given the growth of websites—news-oriented and otherwise—inhabiting the political extremes, which on Facebook are easy to mistake for valid sources of news. When users can customize their news, and customize to these political and social extremes, civic discourse can suffer.”
Adrienne LaFrance is an editor of TheAtlantic.com who focuses her writing on technology, politics, and media. In this passage, she writes about how data tracking has become more advanced and why that should raise questions. Because the companies with this technology are focused on the bottom line, they are less concerned with its potential harms. Data-tracking technology diminishes public debate because it tailors people’s feeds toward what they want to hear and see; people hear only what they want to hear, never see the other side of the news, and are not exposed to what others are concerned about.
“Most of us assume that when we google a term, we all see the same results—the ones that the company’s famous PageRank algorithm suggests are the most authoritative based on other pages’ links. But since December 2009, this is no longer true. Now you get the result that Google’s algorithm suggests is best for you in particular—and someone else may see something entirely different. In other words, there is no standard Google anymore” (Pariser 2).
In “The Filter Bubble”, activist and internet personality Eli Pariser discusses how the information readily available on the internet is heavily shaped by filter algorithms. Writing that when one searches online, “[they] get the result that Google’s algorithm suggests is best for [them] in particular” (2), Pariser points out the growing discrepancy in the information presented to different users. As Pariser postulates, this disconnect poses a danger in that it filters what each person sees, exposing each person to their own individual microcosm of sources. There is “no standard” for what each person is shown; they exist in their own internet bubble, force-fed sources catered to them and only them.
“This is all wrong. If you secretly harbor the idea that Snapchat is frivolous or somehow a fad, it’s time to re-examine your certainties. In fact, in various large and small ways, Snap has quietly become one of the world’s most innovative and influential consumer technology companies” (Manjoo).
Although social media apps can carry several negative implications, they also have the potential, at times, to bring people closer together. This source supports the claim that social media has become a revolutionary tool in today’s society. The article emphasizes how Snapchat even includes aspects of journalism in the app. Farhad Manjoo attended Cornell University and is the “State of the Art” columnist for The New York Times. He explains that Snapchat is a refreshing change, as it does not use feeds the way most social media does, and that the app encourages people to put on less of a show for others, making it more similar to real life. I found his points quite interesting, although the article was written back in 2016, and I think it would be a good piece to incorporate into my paper.
“First, some quick background on how BroApp works: It not only sends scheduled texts, but comes preloaded with 12 messages to help users get started. The developers also took steps to conceal the automation going on behind the scenes; in places designated “no bro zones,” the app is automatically disabled. (After all, the jig is up if your girlfriend received an automatic text from you while you’re at her place.)” (Selinger)
In this passage from “Today’s Apps Are Turning Us Into Sociopaths,” Selinger describes an app that seemingly imitates life. However, this isn’t simply a normal daily task; it is a highly emotional and specific one. BroApp sends automated, scheduled texts for men to their significant others, hopefully sustaining a successful relationship, and it even goes so far as to disable itself in locations where its purpose could be compromised. To any guy unsure of how to navigate a relationship, this app could appear to be a dream come true. However, there are serious implications that come with it in relation to identity. Our identity is formed in part through our relationships and how we interact with individuals. If an app controls the messages being sent to the most important people in our lives, thereby forming and strengthening relationships, isn’t it essentially creating an identity for us? Apps such as these formulate a narrative for their users without giving them the ability to tell their own story through the lens of their self-image.
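Selinger’s description implies two simple pieces of logic: a message scheduler and a location check that suppresses sending inside a designated “no bro zone.” A minimal sketch of that behavior might look like the following; the coordinates, radius, messages, and function names are my own assumptions for illustration, not BroApp’s actual code.

```python
# Minimal sketch of BroApp-style behavior: send a scheduled message
# unless the phone is currently inside a designated "no bro zone."
import math

NO_BRO_ZONES = [(40.7128, -74.0060)]  # hypothetical: girlfriend's apartment
ZONE_RADIUS_KM = 0.2                  # assumed geofence radius

def distance_km(a, b):
    """Rough great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def should_send(current_location):
    """Suppress automated texts when inside any no bro zone."""
    return all(distance_km(current_location, zone) > ZONE_RADIUS_KM
               for zone in NO_BRO_ZONES)

if should_send((40.7580, -73.9855)):  # hypothetical current position
    print("sending scheduled text: 'Thinking of you!'")
else:
    print("automation disabled: inside a no bro zone")
```

The unsettling part, in identity terms, is how little code the deception takes: a timer, a distance check, and the recipient has no way to tell an automated message from a human one.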
“If we care about intellectual freedom and free speech, we must protect intellectual privacy. Protecting free speech is no good if surveillance stops us from thinking up anything new or original to say. We want to be safe, and we don’t want to regulate businesses needlessly, but sensitive data about our mental activities need special protection” (Richards 4).
In “The Electronic Panopticon”, Neil Richards, a distinguished professor of law and a top authority on privacy law, information law, and freedom of expression, warns that our intellectual wanderings on the internet can be used as a tool by those who control it, and as a weapon to control, censor, and hinder those who use it. With the digital revolution still relatively new, its rapid growth continues, as does a growing threat that users remain oblivious to under the guise of promotional ads and a personalized internet experience for more “efficient” use. He advocates that we be more cognizant of our personal freedoms online, for the sake of individual privacy and so that people can “search more fearlessly, with more confidence in and greater loyalty to their digital intermediaries” (4).
“There’s one identity that counts, and it’s the Facebook identity” (Silverman 156)
Silverman’s article highlights how one’s own identity is clouded by social media. People work so hard to make their identity into something that may not be true for the sake of what others think of them. In my essay, I would like to use his idea to highlight how people use social media to change who they are and portray a false sense of identity. People edit photos and do things just for the sake of posting; in doing this, we may not be living our true identity but instead molding an identity into what we think we should be like. In addition, Silverman is a writer for the National Post as well as The Guardian and has published several pieces dealing with the downfalls of social media, indicating that he has a strong understanding of the material I would like to discuss.
“What is “fake” anyway, if the photos are on some level more real representations of life? Part of the appeal of posting photos to platforms like Instagram is sharing the experience and being able to look back at the memory later on, and it’s discouraging that the meaning — if there’s any to begin with — might be lost or cast aside to an account designed to compartmentalize less-than-perfect expression” (Coulehan).
Many young people feel compelled to post only attractive or aesthetically pleasing pictures on their online accounts, so much of who they are may be lost in the process. Making a “fake Instagram” account, commonly known as a “finsta,” is very popular among young people: these are typically private accounts that few people follow, so users feel they can post funny or private things about themselves that they wouldn’t typically post on a “rinsta.” In this way, though, a finsta ends up representing more of the person’s life, because they feel far less restricted by worrying about how many likes they’ll receive. Consequently, the rinsta loses much of its meaning as a memory box of sorts, because that value is “cast aside” into a fake account that few people see and that is purposed to be silly rather than taken seriously, says Erin Coulehan, a journalist who has worked with Rolling Stone, Elle magazine, and more. It is pivotal to see that on a social media account, many users will represent only the attractive pieces of their life, saving their more negative or unappealing experiences for a fake account shared with just their closest friends.
“They can’t decide which of the 30 burgers on the menu they want to eat, and they can’t decide which slab of meat on Tinder they want to date. And when they do decide, they tend to be less satisfied with their choices, just thinking about all the sandwiches and girlfriends they could have had instead” (Beck).
This passage from Julie Beck’s article “The Rise of Dating App Fatigue” describes how more choice is not always desirable. We as a people are always striving to give ourselves a wider array of choices, whether at a restaurant or on a dating app, believing that with more choices we will find something more perfectly suited to us. More choice is often a good thing, but the negative aspects of choice are rarely considered. As Beck writes, more choices can often mean more indecision: unless there is a clear best, it becomes very difficult to determine which option you want, and after the choice is made we often deal with the angst of wondering whether it was the right one. In some cases, the number of choices we are given is so daunting that we are unable to choose anything at all, and when that happens we are no longer progressing. Our growth is stunted in other ways as well: with more choices, many of us are more likely to pick what we are familiar with, eliminating new experiences from our lives. While there are many positive aspects of choice, we must stay wary of how it might affect us negatively as well.
“While young people of every generation have struggled with how to project their identities onto the greater world, teenagers of 2015 arguably have it worse. Given the pervasiveness of social media, the feedback mechanism never shuts down.” (Safronova)
Valeriya Safronova, “On Fake Instagram, a Chance to Be Real,” New York Times, 18 Nov. 2015
In the article “On Fake Instagram, a Chance to Be Real”, Valeriya Safronova describes how teenagers today present polished images of themselves on their “real Instagrams” that are not reflective of who they truly are, while on their fake Instagrams they post ideas and experiences that truthfully depict their identity. Reading this article, I found many parallels between her ideas and my experiences, especially with social media. Oftentimes I will see friends or acquaintances posting something on social media that does not truly reflect who they are, making them appear to be a completely different person online than in real life. I understand why people post only happy moments online, but it is important to keep in mind that a person is so much more than what they post on social media. “Finstas,” which have become extremely popular in the past few years, offer a more realistic depiction of someone, but it is concerning that the truths of life are posted on “fake” accounts. I think this reflects the barriers that exist in society to young people expressing their identity and owning who they really are.
Hidden in plain sight, the technological world that appears to cocoon and satisfy our online and social desires has a deeply manipulative twist. The emailed coupons and special deals, mixed with personalized advertisements on webpages, allow companies to dissect information about us based on what we choose to click and type. This thought aligns with Jonathan Zittrain’s quote in Alexander Furnas’s article, which portrays consumers as fish being baited by a company interested in their information. Once consumers take the bait of a discount deal or free gift, they are lured further and further into the company’s grasp, to be manipulated informationally and financially. As time elapses, these tech companies learn more and more about their consumers simply by following their train of thought on the internet, enabling the companies to do whatever they please under our noses.
“Dataism thrives on the assumption that gathering data happens outside any preset framework—as if Twitter facilitates microblogging just for the sake of generating “life” data—and data analysis happens without a preset purpose—as if data miners analyze those data just for the sake of accumulating knowledge about people’s behavior. It may not always be simple to identify in what context (meta)data are generated and for what purposes they are processed. And yet it is crucial to render hidden prerogatives explicit if researchers want to keep up users’ trust in the datafication paradigm. Trust is partly grounded in the persuasive logic of a dominant paradigm; for another part, though, faith resides with the institutions that carry the belief in Big Data.” – José van Dijck, Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology
Most “free” apps and websites truly capitalize on collecting personal data about their users, although many users are entirely unaware of this data collection. In her article, “Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology,” José van Dijck of the University of Amsterdam takes a critical eye to the modern for-profit accumulation of data by internet companies. She critiques the notion that big data is simply a new, better way to accumulate large swaths of depersonalized data about the population as a whole, noting that this highly unregulated data processing is used to build data on individuals. She argues that the widespread collection of data flourishes on the belief that big data does not target individuals and only captures aggregate behavior; once that trust is degraded, the entire system disintegrates. After events such as the Snowden revelations, public trust in the big data system eroded as the intense surveillance of citizens was revealed. Thus, the system that surveils us is also one we are asked to turn a blind eye to.
“Amazon is considering offering developers raw transcripts of what people say to the Echo, according to a July report in The Information. Newsrooms will have to grapple with whether it’s ethical to use data from those transcripts as a way to make money, a move that would certainly enrage some privacy-minded consumers. For publishers, that could be an important revenue stream, but it could also creep audiences out and lessen trust, not enhance it.” (LaFrance)
The idea of someone being recorded in their home and having their words transcribed and used so that a company can make even more money is creepy, yet this hypothetical could soon become reality. The Amazon Echo is an AI assistant for everyday life at home: just give the Echo verbal commands and hear or see a response within seconds. Although the device is quite useful and exciting, most people are not aware that anything they say to the AI is automatically stored in a database somewhere. LaFrance’s article goes into detail about what the transcripts might be used for by Amazon. For instance, Amazon is hoping to give the transcripts to developers to further the advancement of its AI and to increase revenue with those advancements. However, this raises the issue of whether selling transcripts of consumers recorded inside their private homes is ethical. With the advancement of technology, more companies seem to be gradually taking away consumers’ privacy for their own benefit.
“A majority of teens with dating experience (76%) say they have only dated people they met via offline methods.” (Teens, Technology, and Romantic Relationships)
It was about 12:30 in the morning when I asked my roommate, “How do I ask this girl on a date?” I didn’t know what to do. I had met this really nice girl in my lab, and after some time getting to know her I knew I wanted to ask her out, but how? Like many teenagers, most of the people I had asked out were people I met online, followed on Instagram, or added on Snapchat. It was so simple when I could just “DM” a girl I had never met and say, “Hey, do you know how much a polar bear weighs? Enough to break the ice. Hi, I am Chris.” How often that line worked is not important; that is my own issue. What is important is that I had no experience asking out a girl I had met in person. Is this an issue? As a society, we easily become unaware of social interactions and the value of personal interaction. Today, 76% of teens with dating experience have only dated people they met offline, which may seem high, but with the growing use of technology, and with children getting phones and access to apps like Snapchat and Instagram at younger ages, that number is bound to decrease (Teens). What is the impact of that?
“As discussed, one of the main concerns underlying the data fetishist critique is that a trust in numbers will trump other forms of subjective, intuitive and embodied knowledge…In a discussion that took place in a break-out session that we facilitated at the QS conference 2014 (QS14), one participant explained that when tracking physical activity,
‘there is a communication between your subjective knowledge and your objective knowledge that you are creating. You always do this kind of cross-check. I feel like I get a lot of activity, but my data shows me I am not, then I have a conversation: “what do I trust?” and “how do I recalibrate?” So that I am starting to say, “now what I feel aligns with my objective data and I trust my objective data more.” Or you say, “I trust my subjective data more, my subjective feeling, intuition more, and I can now process that data in a way that aligns with the subjective feeling’” (Tamar Sharon and Dorien Zandbergen 1699-1700).
The main point of the passage is to show how self-tracking involves both a subjective and an objective view of the data obtained, and how these different viewpoints can affect how people act in response. The authors describe the relationship between numbers and subjective experience as not being “a zero-sum game”: an individual’s sense of their own activity will not perfectly align with the numbers a phone generates. A good example is steps in a day. If someone believes they have taken 10,000 steps, but their phone reports only 4,000, they will likely be confused, and they may respond by doing more in order to achieve what they feel they deserve. The same dynamic appears in other forms of tracking, such as food or heart rate tracking, which can have similar effects on how people act.
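One way to picture the “recalibration” the interviewee describes is as a weighted blend of the felt estimate and the tracked number, where the weight encodes how much the person has come to trust the device. This is my own illustrative reading of the quote, not a model from Sharon and Zandbergen’s paper; the trust weight is a made-up parameter.

```python
# Illustrative recalibration between subjective and tracked step counts.
# The trust_in_device weight is an invented parameter, not from the study.

def recalibrated_estimate(felt_steps, tracked_steps, trust_in_device=0.7):
    """Blend subjective feeling with objective data; trust_in_device in [0, 1]."""
    return trust_in_device * tracked_steps + (1 - trust_in_device) * felt_steps

# The example from the commentary above: feels like 10,000, device says 4,000.
print(recalibrated_estimate(10_000, 4_000))        # leans toward the device: 5800.0
print(recalibrated_estimate(10_000, 4_000, 0.2))   # leans toward feeling:   8800.0
```

Raising the weight moves the “conversation” toward the objective data; lowering it moves it toward intuition, which are exactly the two resolutions the participant describes.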
“Now that our privacy is worth something, every side of it is being monetized. We can either trade it for cheap services or shell out cash to protect it. It is increasingly seen not as a right but as a luxury good. When Congress recently voted to allow internet service providers to sell user data without users’ explicit consent, talk emerged of premium products that people could pay for to protect their browsing habits from sale. And if they couldn’t afford it? As one congressman told a concerned constituent, “Nobody’s got to use the internet.” Practically, though, everybody’s got to. Tech companies have laid claim to the public square: All of a sudden, we use Facebook to support candidates, organize protests and pose questions in debates. We’re essentially paying a data tax for participating in democracy.” (Amanda Hess)
In the passage I pulled from “How Privacy Became a Commodity for the Rich and Powerful” by Amanda Hess (an internet culture writer for the New York Times), she warns us of the increasing value of our privacy and how companies are exploiting it. As we leave a larger digital footprint behind us and continue to grow as a digital society, our privacy on the internet is becoming anything but private. There are options, though. Recently, Congress voted to allow internet providers to sell user data, and there has been talk of paid products that would provide protection. Ultimately, this creates a problem, because only those who can afford such a product will have their privacy protected, and down the line that raises problems of equal protection. By continuing to use internet services, because they are essentially impossible to avoid, we are paying a tax to these companies in the form of our personal data.
“We shape our tools, and thereafter our tools shape us” (McLuhan)
At the beginning of Pariser’s writing, he opens with a quote by Marshall McLuhan, a media theorist. What I immediately thought of when reading this quote is the human invention of technology and social media. In creating these things, we have unconsciously molded our society. We didn’t intend for social media to shape every aspect of our lives, but it did. We used Facebook and Twitter to influence how we judged people and how we presented ourselves to others. If these tools had never been created, we couldn’t have used them to the extent we do now. So in a way, we created our own downfall with technology, because it was the product of our own creation. Pariser expands on this idea throughout the rest of his introduction.
“And personalization is shaping how information flows far beyond Facebook, as websites from Yahoo News to the New York Times-funded startup News.me cater their headlines to our particular interests and desires… the algorithms that orchestrate our ads are starting to orchestrate our lives” (Pariser 9)
Eli Pariser is an executive of Upworthy, a website that strives to encourage the consumption of viral-worthy news stories, and he takes part in political and internet activism to expose the negative effects that accompany our increasingly personalized internet experience. In this particular passage, Pariser discusses how news outlets are increasingly using techniques similar to advertisers’ in order to cater to the desires of their readers, thus creating a personalized news consumption experience. This technique of tracking the interests of Internet users is being utilized by a wide variety of websites, so the interest-tracking algorithm is beginning to have more control over what options are presented to us. Through this process, our consumption of news becomes increasingly narrow, limiting the type of information and choices that we are given and developing a stronger hold over our lives.
I like how you picked this quote. I believe it presents both sides of the story well. We are more likely to search for the things we are interested in; as a result, these are the kinds of things the algorithm produces. If this is the case, how is anyone who often uses technology as a pastime or for research able to learn anything beyond their specified horizons? It is a difficult question to try and answer.
“Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine-learning to get to know their users will increasingly play a role in shaping ultra-personalized news products.” (Adrienne LaFrance)
The article “The Power of Personalization” explores both the pros and the cons of the ever-expanding, increasingly personalized power of the internet. Personalization was not always a priority: it took 9/11, when people searching “twin towers” were shown nothing more recent than what the towers were used for, for Google to advance its search engine so that keywords would surface recent events related to the search. Other industries soon followed. Search engines now use an algorithm to filter information toward the online habits of a specific person. Although this is convenient, LaFrance argues it also creates grounds for privacy and political disputes: if people are only exposed to articles they appear to be interested in, they are put into a “bubble” where they are less likely to expand their horizons.
“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.
-Mark Zuckerberg, Facebook founder
We shape our tools, and thereafter our tools shape us.
-Marshall McLuhan, media theorist” (Pariser 1).
These two quotes in the opening of the introduction to Eli Pariser’s piece truly emphasize the dilemma with personalization filters. Eli Pariser, chief executive of Upworthy, has the goal of promoting more meaningful internet content. The first quote that Pariser includes is by Mark Zuckerberg. In this quote, Zuckerberg is merely stating that everyone has and prioritizes their own interests. The quote is deeply ironic, and it immediately appeals to the reader’s pathos. In my opinion, knowing who Zuckerberg is along with the content of the quote, he is implying that not everyone shares the same interests; therefore, he is indirectly claiming that there is no problem with personalization filters. As aforementioned, Pariser purposefully includes this quote in the opening because it grabs the reader’s attention by comparing a trivial matter to something of deep significance, highlighting the deceptive and misleading nature of the internet.
This quote is then followed by one from Marshall McLuhan, who states that there is reciprocity in the shaping of our ideas and our “tools.” As explained in the piece, our digital footprint is constantly tracked on the internet, and, because of this, many filters are applied to the information we receive: “personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown” (Pariser 15). There are many dangers associated with these personalization filters, which leave us only with information we feel comfortable reading and talking about. Pariser’s piece, overall, serves as a warning: if we continue down this path of being ignorantly misled by these filters, we will become a more polarized world with few multifaceted standpoints.
“In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone’s phone private screen, so it’s not visible to us. And that’s different. And that’s just one of the basic things that artificial intelligence can do.” – Zeynep Tufekci, “We’re building a dystopia just to make people click on ads”
Zeynep Tufekci, a sociologist who specializes in technology’s interactions with mankind, explores how big data from the ever-growing advertising industry has changed our interactions with the internet in drastic ways. She describes how internet companies now routinely collect and sell information (such as internet searches) to understand people’s future buying habits. This accumulation of data happens unbeknownst to individuals, yet large online corporations capitalize on the abundance of information by creating algorithms that systematically target individuals as future buyers of specific products tailored to them. In fact, Tufekci notes that these companies exploit individuals’ buying “weaknesses” to single out those most likely to buy specific products. The biggest difference, she asserts, is that these seemingly “free,” ad-oriented internet platforms now make significant profits for companies such as Facebook and Google. She concludes that while social media can be used as a tool to bolster social movements, this infrastructure can also be used in dystopian ways to spy on citizens and influence people’s behavior.
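To make the mechanism Tufekci describes more concrete, here is a minimal, purely illustrative Python sketch of what per-individual ad targeting might look like; every name, trait, and weight below is invented for illustration, not taken from any real platform.

```python
# Illustrative sketch only: a toy "persuasion architecture" of the kind
# Tufekci describes. All names, traits, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    traits: dict  # inferred trait -> strength in [0, 1], e.g. from click history

def susceptibility(profile: UserProfile, ad_traits: dict) -> float:
    """Score how strongly an ad's target traits match a user's inferred
    weaknesses; a higher score means a more promising target."""
    return sum(profile.traits.get(t, 0.0) * w for t, w in ad_traits.items())

def pick_targets(users: list, ad_traits: dict, top_n: int = 2) -> list:
    # Rank every user individually -- the one-by-one targeting Tufekci
    # contrasts with old mass-audience advertising.
    ranked = sorted(users, key=lambda u: susceptibility(u, ad_traits),
                    reverse=True)
    return [u.user_id for u in ranked[:top_n]]

users = [
    UserProfile("alice", {"impulse_buying": 0.9, "fitness_anxiety": 0.2}),
    UserProfile("bob",   {"impulse_buying": 0.1, "fitness_anxiety": 0.8}),
    UserProfile("carol", {"impulse_buying": 0.6, "fitness_anxiety": 0.7}),
]
diet_ad = {"fitness_anxiety": 1.0, "impulse_buying": 0.5}
print(pick_targets(users, diet_ad))  # ['carol', 'bob']
```

Even this toy version shows the asymmetry Tufekci worries about: the ranking happens out of sight, one person at a time, on each person’s private screen.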
“News organizations share information that is true and hopefully engaging. Technology companies like Google and Facebook enable the sharing of information that is engaging and hopefully true” (LaFrance).
In the article “The Power of Personalization,” Adrienne LaFrance describes the downsides of the ability to personalize news and articles in our modern-day society. LaFrance asserts that news personalization has created a bubble that filters news according to the individual’s political stances, beliefs, and interests. Through this, false or inaccurate news sources can rise to prominence on popularity and self-interest alone. With a wide array of possibly inaccurate sources, we ultimately lose the common ground of real news. LaFrance believes that news organizations try their best to be reliable sources and provide valid news; search-engine companies, on the other hand, create through the bubble of personalization a wide array of personalized news that is only hopefully true.
“The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change” (Tufekci, 18:45).
Zeynep Tufekci, a professor, writer, programmer, and researcher on the social impacts of technology, explores big questions about our societies and lives as big data, algorithms, and connectivity spread. In her TED Talk “We’re building a dystopia just to make people click on ads,” Tufekci illustrates how internet-enabled technology companies collect data through a wide array of methods to provide personalized advertisements. She expresses concern about the expansion of these algorithms and advertisements: “The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change” (Tufekci, 18:45). Algorithmically targeted advertisements can persuade audiences toward far more than the purchase of a product. As the algorithms have become more advanced and data collection has expanded, the political, personal, and social aspects of our everyday lives are also being targeted.
“The problem is, we no longer really understand how these complex algorithms work. We don’t understand how they’re doing this categorization. It’s giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it’s operating any more than you’d know what I was thinking right now if you were shown a cross section of my brain. It’s like we’re not programming anymore, we’re growing intelligence that we don’t truly understand” (Tufekci).
Zeynep Tufekci, a techno-sociologist concerned with the social implications of the rise of technology, presents a thought-provoking argument in her TED talk, “We’re Building a Dystopia Just to Make People Click On Ads.” The passage alludes to the complexity and mysteriousness of the algorithms responsible for remembering our online data. The worrisome truth behind her statement is that nobody, not even the creators of these algorithms, can truly control or even understand the beast they’ve created, “any more than you’d know what [Tufekci] was thinking right now if you were shown a cross section of [her] brain.” This chilling idea leads Tufekci to claim that we are growing an intelligence that, at this point, isn’t even comprehensible anymore. This passage, as well as the title of her piece, shows the depth and possible danger of the world of digital memory.
“They are predicting engines, constantly creating and refining a theory of who you are and what you’ll do and want next. Together these engines create a unique universe of information for each of us — what I’ve come to call a filter bubble — which fundamentally alters the ways we encounter ideas and information” (Pariser, 9).
This rather powerful quote from Eli Pariser, a creator and advocate of reliable internet content, conveys the notion that cyberspace is taking over our personal identity. It seems as if the bubble that sites like Google and Facebook generate is making our future decisions and choices for us, considering they know every one of our past ones. Pariser uses his knowledge of personalization to inform readers of his book “The Filter Bubble: What the Internet is Hiding From You” that whatever ads or information flow onto our screens actually form a “unique universe of information for each of us” (Pariser, 9). As the left-wing advocate describes, our beliefs and desires are easily prompted and reinforced by filtered information that data collectors believe matches and furthers what we have previously searched. In response, our perspective is narrowed to what our self from yesterday desired to know, and we are therefore distanced from what others around us might desire to believe or know. Additionally, as Pariser describes, these “engines” never stop improving; in other words, they are constantly generating new ways to identify who we are and predict what patterns we might follow in the future.
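For readers who want to picture what such a “prediction engine” might look like in its simplest possible form, here is a toy Python sketch; the topics, stories, and scoring rule are all hypothetical stand-ins for the far more sophisticated systems Pariser describes.

```python
# A toy "prediction engine" in the spirit of Pariser's description: it
# refines a theory of the user from clicks, then filters the feed.
# Topics, stories, and the scoring rule are invented for illustration.
from collections import Counter

def update_model(model: Counter, clicked_topics: list) -> None:
    """Refine the user model: every click strengthens the engine's
    theory of what this user will want next."""
    model.update(clicked_topics)

def filter_feed(model: Counter, stories: list, k: int = 3) -> list:
    """Rank stories by affinity with the model -- the 'unique universe
    of information' each user ends up inside."""
    return sorted(stories,
                  key=lambda s: sum(model[t] for t in s["topics"]),
                  reverse=True)[:k]

user_model = Counter()
update_model(user_model, ["sports", "sports", "tech"])  # three clicks

stories = [
    {"title": "Playoff recap",     "topics": ["sports"]},
    {"title": "New phone review",  "topics": ["tech"]},
    {"title": "Election analysis", "topics": ["politics"]},
    {"title": "Stadium deal",      "topics": ["sports", "politics"]},
]
for story in filter_feed(user_model, stories):
    print(story["title"])
# After a few sports clicks, "Election analysis" never surfaces.
```

Even a scoring rule this crude reproduces the bubble: a handful of clicks, and whole categories of information quietly drop out of view.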
“In the next three to five years, Facebook COO Sheryl Sandberg told one group, the idea of a Web site that isn’t customized to a particular user will seem quaint. Yahoo Vice President Tapan Bhat agrees: ‘The future of the web is about personalization… now the web is about ‘me’. It’s about weaving the web together in a way that is smart and personalized for the user'” (Pariser 8).
Eli Pariser, internet activist and author of “The Filter Bubble: What The Internet Is Hiding From You,” describes the future path of the internet and how it will affect its users. The internet has become, and will continue to become, less about exposing individuals to all forms of information and more about secluding and isolating them within information that appeals to them based on prior search histories. This lack of exposure to diverse ideas has the benefit of creating stronger bonds between individuals who share common interests, but it creates more polarization between groups of people who only latch onto the specific information they want to hear. It is rather dangerous how this affects individuals at massive scale and across all social aspects in subtle, unnoticed ways, reshaping how we retain information and the thoughts and behaviors we form and develop from these personalized, filtered systems.
Zeynep Tufekci’s TED Talk, titled “We’re building a dystopia just to make people click on ads,” explores the technologically authoritarian dystopia that society is currently headed towards. A dystopia is defined as “an imagined society in which there is great suffering or injustice,” and the suffering arising from such an authoritarian state would deeply impact the society we love. Building on this concern, Tufekci offers the following in the concluding minutes of her talk:
“…authoritarianism will envelope us like a spider’s web and we may not even know we’re in it” (Zeynep Tufekci, 17:33-18:48).
Here, she sheds light on the darkened secrecy of the technological, societal, and mental control that the world’s richest companies and people have gained over those who use their digital applications; examples of these corporations are Google and, most importantly, Facebook. The question that we must explore as a society concerns one concept: what shall we do to combat the ever-growing impact of technological authoritarianism in our society?
“Emerging technologies will only exacerbate the existing problems with algorithmically promoted junk information.”
From “The Power of Personalization” by Adrienne Lafrance
The internet, and the information that certain websites are able to access, is developing at an alarming rate, leaving its users to question what they really should be sharing. I thought this article, as well as the TED talk, brought up some interesting points regarding the invasion of privacy that the internet brings to society. Through simple things, such as the ads we click or the stories we read, the websites we visit are able to learn a great deal about us without us even realizing it. The fact that they are gaining control because of this, according to the TED talk, is a scary thought and can cause anxiety about the future of technology. The internet, when used for its intended purpose, is extremely beneficial and has led to great innovation and the availability of information right at our fingertips. However, it becomes a problem when it grows too invasive of our privacy. The internet has evolved so much, even within the past four years, and it will be interesting to explore the possibilities for the future of innovation.
“As a public and as citizens, we no longer know if we’re seeing the same information, or what anybody else is seeing, and without a common basis for information, little by little, public debate is becoming impossible” (15:16). – Zeynep Tufekci, “We’re building a dystopia just to make people click on ads”
Zeynep Tufekci, a techno-sociologist, explains the issues regarding the current power and the future of ad technology. Ad technology has become so advanced and convincing that it can push a person toward a certain political, behavioral, emotional, or physical state. It targets individuals based on their data history, compiles that data to recognize personal traits, and recommends ads accordingly. The technology can even influence political votes by the hundreds of thousands. With different information being displayed to different individuals, and with algorithms exerting personal influence, “public debate is becoming impossible” (Tufekci). Algorithms already target vulnerable individuals who are more susceptible to influential content. While the common person may think they have total control, freedom, and unbiased opinions, their whole identity could be shaped by computers.
“Now, if authoritarianism is using overt fear to terrorize us, we’ll all be scared, but we’ll know it, we’ll hate it and we’ll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it.” (Tufekci, 16:33)
In her TED talk “We’re Building A Dystopia Just To Make People Click On Ads,” Zeynep Tufekci, a Turkish techno-sociologist, writer, and academic, is concerned that the information harvested by the algorithms of companies such as Google and Facebook will give governments immense power over us. Tufekci notes that the way we typically think of an authoritarian government is one that uses fear to terrorize people and keep them in check. But the type of authoritarianism that threatens to rise is one that quietly watches, stores, and analyzes our every move through algorithms. She highlights that sites like Facebook and YouTube can uncover our “individual weaknesses and vulnerabilities” through our internet use, then use that information to advertise to us when we are at our weakest and influence our decisions. Since each of us has our own personal screen, we do not know what is happening on each other’s screens, and this authoritarianism therefore has the potential to creep up on us without our knowledge.
“Soon, we likely will be able to engage with voice-activated assistants such as Siri and Alexa beyond just asking for the day’s news. We’ll be able to interrupt and ask questions—not just in order to put things in context and deepen our understanding of current events, but to personalize them.”
From “The Power of Personalization” by Adrienne LaFrance
Adrienne LaFrance, the editor of TheAtlantic.com, writes about the effects of personalization and how it limits the type of information each user receives. In the article she discusses how voice-activated devices, such as the Amazon Echo and Google Home, will soon be used for engaging with the news. Currently, these devices deliver brief news flashes from default or user-selected channels, and they constantly collect data from what users ask. Devices with voices are growing as a source of news because they mimic conversation with a human. LaFrance discusses how, in the future, these devices will personalize the news for consumers much like Facebook does. This leads to a limited perspective, since the devices let the user hear only the news they want to hear.
“Meanwhile, in May, Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query.” (Lafrance).
In this short excerpt, LaFrance touches upon the growing loss of privacy stemming from invasive “personalized” information gathering. Large companies, including Google, Facebook, and many others, have put to use the personal information they collect about their sites’ users. They use that information to tailor online experiences, which has caused a growing outcry among worried online communities. LaFrance describes how data from a wide array of services is combined to guess what content one would like to see on the internet. While these practices may seem helpful in steering us toward information we are interested in, many argue that they also force us into a sheltered box by eliminating variety. Despite what these protesters may say, personalization continues to be a growing practice in a number of different fields, and its effects are only beginning to be felt.
“As data-tracking becomes more sophisticated, voice recognition software advances, and tech companies leverage personalization for profit, personalization will only become more acute,” (LaFrance).
Adrienne LaFrance, a career writer and editor, is essentially saying that companies in the near future will be able to perfect personalization. It is a science still rapidly evolving, and the increase in data-tracking will only move it forward. Her article discusses the power of personalization, as well as its more sinister uses. Certainly, as this technology develops, there will be people inclined to misuse it for their own benefit. However, it must be noted that it makes businesses more efficient in their everyday sales, which also helps customers by showing them what they want. While personalization is dangerous and will continue to become more dangerous, it also has positive attributes that must not be ignored in any fair examination.
“More and more, your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click” (Pariser 3).
Eli Pariser, political activist and chief executive of Upworthy, notes that the increasing personalization of Google search results causes a computer to become reflective of “your own interests,” based on previous online activity (3). His piece, “The Filter Bubble,” discusses Google and Facebook’s personalization of ads and search results based on algorithmic data. We enter the filter bubble the second we begin interacting with the internet, allowing sites like Google to learn more about us. While users browse the internet to gain knowledge, Google gains knowledge about the user, so that future interactions may be more personalized.
“But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it. ” (Tufekci 17:20-18:20).
Zeynep Tufekci is a writer, academic, and techno-sociologist from Turkey, known mainly for her research on the impact that emerging technology could have on politics and corporate responsibility. Tufekci’s TED talk “We’re building a dystopia just to make people click on ads” primarily serves as an explanation of her research. For instance, she mentions the direct effect that ads can have on the number of voters in an election. In the 2010 midterms, Facebook showed users messages featuring their friends with the caption “I voted,” and this produced “an additional 340,000 voters in that election.” The experiment was repeated, and an “additional 270,000 voters” appeared for the 2012 elections. The idea of Facebook ads influencing politics is quite daunting, so why are people not in an uproar, resisting? Tufekci explains that this is because the “authoritarianism” at work is subtle; it has trapped us in a “spider’s web” that we aren’t aware of. Each of our screens is constantly feeding information off of us, and if we do not recognize the point Tufekci makes, corporations will continue to discover our personal weaknesses and use them against us without our knowledge.
“The race to know as much as possible about you has become the central battle of the era for internet giants like Google, Facebook, Apple, and Microsoft.” (Pariser 6)
The big internet giants – Google, Facebook, Apple, and Microsoft – are in a sort of arms race for information. What information, you ask? Information about you. I like to make a comparison between these tech companies and nuclear physicists. Nuclear physicists seek to understand the physics behind the nucleus of an atom. Before nuclear physicists, chemists found that everything was made up of atoms, then discovered each atom was made up of particles called protons, neutrons, and electrons, with each atom having a nucleus of protons and neutrons. Nuclear physicists then took it one step further: what are these particles made of? What holds the nucleus together? Using their particle accelerators, they probed protons and neutrons closely enough to find that each is made of three smaller particles called quarks. How do these quarks work? They zoom in further, to find that the quarks are held together by gluons, and that quarks have charge and mass and their own properties and so much more. Nuclear physicists make relentless efforts to probe further and gain information, and so do the big internet companies. In my analogy, we are the atom, and with every click, every time we open an app or post a picture, those corporate physicists learn more and more about us. Like nuclear physicists, they collect data; but unlike nuclear physicists, who are working to better the world, these tech companies are looking to better their bank accounts.
“When users can customize their news, and customize to these political and social extremes, civic discourse can suffer.” (LaFrance)
Adrienne LaFrance, the editor of TheAtlantic.com, is an established writer specializing in media, technology, and politics, making her highly qualified to write about the intersection of technology and politics in her article “The Power of Personalization.” Early in the article she argues that when users of the internet can decide what sort of news is presented to them, they suffer. Instead of being exposed to all the news and forming an opinion once all the facts are present, people limit themselves to a small window of what they choose to see. This internet phenomenon, LaFrance notes, is far from beneficial to society: instead of people engaging in meaningful conversation and debating current topics, we are losing “the common ground of news.” The argument could then be made that this aspect of technology is a driving force behind political strife among the people of our country. Is there a strong correlation between people’s opposing ideas and how they filter their news? An interesting research focus.
“As data-tracking becomes more sophisticated, voice recognition software advances, and tech companies leverage personalization for profit, personalization will only become more acute. This is potentially alarming given the growth of websites—news-oriented and otherwise—inhabiting the political extremes, which on Facebook are easy to mistake for valid sources of news. When users can customize their news, and customize to these political and social extremes, civic discourse can suffer.”
Adrienne LaFrance is an editor of TheAtlantic.com who focuses her writing on technology, politics, and media. In this passage, she writes about how data tracking has become more advanced and why that should raise questions. Because companies with this technology are focused on the bottom line, they are less concerned with its potential harms. Data-tracking technology diminishes public debate because it tailors people’s feeds toward what they want to hear and see. People hear only what they want to hear, so they never see the other side of the news and aren’t exposed to what others are concerned about.
“Most of us assume that when we google a term, we all see the same results—the ones that the company’s famous Page Rank algorithm suggests are the most authoritative based on other pages’ links. But since December 2009, this is no longer true. Now you get the result that Google’s algorithm suggests is best for you in particular—and someone else may see something entirely different. In other words, there is no standard Google anymore” (Pariser 2).
In “The Filter Bubble,” activist and internet personality Eli Pariser discusses how the information readily available on the internet is wildly affected by filter algorithms. Writing that when one searches online, “[they] get the result that Google’s algorithm suggests is best for [them] in particular” (2), Pariser points out the growing discrepancy in information presented to various users. As Pariser postulates, this disconnect poses a danger in that it filters what each person sees, exposing each person to their own individual microcosm of sources. There is “no standard” for what each person is shown; each exists in their own internet bubble, force-fed sources catered to them and only them.
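As a thought experiment, the “no standard Google” idea can be sketched in a few lines of Python. This is an invented toy, not Google’s algorithm: the pages, topic labels, and interest weights are all hypothetical, but it shows how the same query can yield differently ordered results for different users.

```python
# Toy illustration of "no standard Google": two users issue the same
# query, but each profile reorders the results. Pages, topic labels,
# and interest weights are hypothetical.
PAGES = {
    "climate change": [
        ("IPCC summary report",    {"science": 1.0}),
        ("Policy debate op-ed",    {"politics": 1.0}),
        ("Green tech stock picks", {"finance": 1.0}),
    ],
}

def personalized_search(query: str, interests: dict) -> list:
    # Same candidate pages for everyone; the profile decides the order.
    results = PAGES.get(query, [])
    return [title for title, topics in sorted(
        results,
        key=lambda page: sum(interests.get(t, 0.0) * w
                             for t, w in page[1].items()),
        reverse=True,
    )]

scientist = {"science": 0.9, "politics": 0.1}
investor  = {"finance": 0.9, "science": 0.1}
print(personalized_search("climate change", scientist))
print(personalized_search("climate change", investor))
# The same query yields two different "universes" of results.
```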
“This is all wrong. If you secretly harbor the idea that Snapchat is frivolous or somehow a fad, it’s time to re-examine your certainties. In fact, in various large and small ways, Snap has quietly become one of the world’s most innovative and influential consumer technology companies” (Manjoo).
Although social media apps can carry several negative implications, they also have the potential to bring people closer together at times. This source supports the idea that social media has become a revolutionary tool in today’s society. The article by Manjoo emphasizes how Snapchat even includes aspects of journalism within the app. Farhad Manjoo attended Cornell University and has been recognized as the “State of the Art” columnist for The New York Times. He explains that Snapchat is a refreshing change because it does not use feeds the way most social media does, and he states that the app encourages people to put on less of a show for others, feeling more similar to real life. I found his points quite interesting, although the article was written back in 2016, and I think it would be a good piece to incorporate into my paper.
“First, some quick background on how BroApp works: It not only sends scheduled texts, but comes preloaded with 12 messages to help users get started. The developers also took steps to conceal the automation going on behind the scenes; in places designated “no bro zones,” the app is automatically disabled. (After all, the jig is up if your girlfriend received an automatic text from you while you’re at her place.)” (Selinger)
In this passage from “Today’s Apps Are Turning Us Into Sociopaths,” Selinger describes an app that seemingly imitates life. This isn’t a normal daily task being automated, however; it is a highly emotional and personal one. BroApp formulates “scheduled texts” for men to send to their significant others in the hope of sustaining a successful relationship, and it even goes so far as to disable itself in locations where its purpose could be compromised. To any guy unsure of how to navigate a relationship, this app could appear to be a dream come true. But it carries serious implications for identity. Our own identity is formed in part through our relationships and how we interact with individuals. If an app controls the messages being sent to the most important people in our lives, and thereby forms and strengthens those relationships, isn’t it essentially creating an identity for us? Apps such as these formulate a narrative for their users without giving them the ability to tell their own story through the lens of their self-image.
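To see how simple the mechanics Selinger describes could be, here is a hypothetical Python sketch of scheduled texts with a geofenced “no bro zone”; the coordinates, radius, and message are made up, and a real app would use platform location and messaging APIs rather than this toy logic.

```python
# Hypothetical sketch of the BroApp mechanics Selinger describes:
# scheduled texts suppressed inside a "no bro zone". Coordinates,
# radius, and the message are all invented.
import math

NO_BRO_ZONES = [(40.7128, -74.0060)]  # e.g. a girlfriend's apartment
ZONE_RADIUS_KM = 0.2

def distance_km(a, b):
    # Equirectangular approximation; accurate enough at city scale.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
    return 6371 * math.hypot(dlat, dlon)

def should_send(phone_location) -> bool:
    """Disable the automation near any no-bro zone, so a scheduled
    text never arrives while its sender is standing right there."""
    return all(distance_km(phone_location, zone) > ZONE_RADIUS_KM
               for zone in NO_BRO_ZONES)

if should_send((40.7580, -73.9855)):  # elsewhere in the city
    print("send: 'Hey, thinking of you!'")  # one preloaded message
```

A few lines of distance arithmetic are enough to automate intimacy, which is exactly what makes the identity question above so uncomfortable.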
“If we care about intellectual freedom and free speech, we must protect intellectual privacy. Protecting free speech is no good if surveillance stops us from thinking up anything new or original to say. We want to be safe, and we don’t want to regulate businesses needlessly, but sensitive data about our mental activities need special protection” (Richards 4).
In “The Electronic Panopticon,” Neil Richards, a distinguished professor of law and a top professional in privacy law, information law, and freedom of expression, warns about the ways our mental activities on the internet are being used as a tool by those who control it, and as a weapon to control, censor, and hinder those who use it. With the digital revolution still relatively new, its growth and use are continuously evolving, as is a threat most users remain oblivious to behind the guise of promotional ads and a personalized internet experience for more “efficient” use. He advocates that we be more cognizant of our personal freedoms online, so that with the right to individual privacy, users can “search more fearlessly, with more confidence in and greater loyalty to their digital intermediaries” (4).
“There’s one identity that counts, and it’s the Facebook identity” (Silverman 156).
Silverman’s article highlights how one’s own identity is clouded by social media. People work so hard to make their identity into something that may not be true for the sake of what others think of them. In my essay, I would like to use his idea to highlight how people use social media to change who they are and portray a false sense of identity. People edit photos and do things purely for the sake of posting; in doing this, we may not be living our true identity but instead molding an identity into what we think we should be like. In addition, Silverman writes for the National Post as well as the Guardian and has published several pieces dealing with the downfalls of social media, indicating that he has a strong understanding of the material I would like to discuss.
“What is “fake” anyway, if the photos are on some level more real representations of life? Part of the appeal of posting photos to platforms like Instagram is sharing the experience and being able to look back at the memory later on, and it’s discouraging that the meaning — if there’s any to begin with — might be lost or cast aside to an account designed to compartmentalize less-than-perfect expression” (Coulehan).
Many young people feel compelled to post only attractive or aesthetically pleasing pictures on their online accounts, so much of who they are may be lost in the process. Making a “fake Instagram” account, commonly known as a “finsta,” is very popular amongst young people because these are typically private accounts that few people follow, so users feel they can post funny or private things about themselves that they wouldn’t typically post on a “rinsta.” In this way, though, a finsta ends up representing more of the person’s life, because they feel much less restricted by worrying about how many likes they’ll receive. Consequently, the rinsta loses much of its meaning as a memory box of sorts, because the value is “cast aside” into a fake account that isn’t seen by many and is meant to be silly rather than taken seriously, says Erin Coulehan, a journalist who has worked with Rolling Stone, Elle magazine, and more. It’s pivotal to see that on a social media account, many users will represent only the attractive pieces of their life, saving their more negative or unappealing experiences for a fake account shared with just their closest friends.
“They can’t decide which of the 30 burgers on the menu they want to eat, and they can’t decide which slab of meat on Tinder they want to date. And when they do decide, they tend to be less satisfied with their choices, just thinking about all the sandwiches and girlfriends they could have had instead” (Beck).
This passage from Julie Beck’s article “The Rise of Dating App Fatigue” describes how more choice is not always desirable. We as a people are always striving to provide ourselves with a wider array of choices, whether at a restaurant or on a dating app, believing that with more choices we will find something more perfectly suited to us. More choice is often a good thing, but the negative aspects of choice are rarely considered. As Beck writes, more choices can mean more indecision: unless there is a clear best option, it becomes very difficult to determine which choice we want, and after the choice is made we often have to deal with the angst of wondering whether it was the right one. In some cases, the number of choices is so daunting that we are unable to choose anything at all, and when that happens we are no longer progressing. Our growth is stunted in other ways as well: with more choices, many of us are more likely to pick what we are familiar with, eliminating new experiences from our lives. While there are many positive aspects of choice, we must stay wary of how it might affect us negatively as well.
“While young people of every generation have struggled with how to project their identities onto the greater world, teenagers of 2015 arguably have it worse. Given the pervasiveness of social media, the feedback mechanism never shuts down.” (Safronova)
Valeriya Safronova, “On Fake Instagram, a Chance to Be Real,” New York Times 18 Nov 2015
In the article “On Fake Instagram, a Chance to Be Real,” Valeriya Safronova describes how teenagers today present images of themselves on their “real Instagrams” that are not reflective of who they truly are, while on their fake Instagrams they post ideas and experiences that truthfully depict their identity. Reading this article, I found many parallels between her ideas and my experiences, especially with social media. Oftentimes, I will see friends or acquaintances posting something on social media that does not truly reflect who they are, making them appear to be a completely different person online than in real life. I understand why people only post happy moments online, but it is important to keep in mind that a person is so much more than what they post on social media. “Finstas,” which have become extremely popular in the past few years, have become a more realistic depiction of someone, but it is concerning that the truths of life are posted on “fake” accounts. I think this reflects the barriers that exist in society to young people expressing their identity and owning who they really are.
“Jonathan Zittrain noted last summer, ‘If what you are getting online is for free, you are not the customer, you are the product’” (Furnas).
Hidden in plain sight, the technological world that appears to cocoon and satisfy our online and social desires has a deeply manipulative twist. The coupons and special deals that are emailed to us, mixed with the personalized advertisements on webpages, allow companies to dissect information about us based on what we choose to click and type. This thought aligns with Jonathan Zittrain’s quote from Alexander Furnas’s article, which portrays consumers as fish being baited by a company interested in their information. Once consumers are baited by way of a discount deal or free gift, they are lured further and further into the company’s grasp, to be informationally and financially manipulated. As time elapses, these tech companies learn more and more about their consumers simply by following their train of thought on the Internet, enabling the companies to do whatever they please right under our noses.
“Dataism thrives on the assumption that gathering data happens outside any preset framework—as if Twitter facilitates microblogging just for the sake of generating “life” data—and data analysis happens without a preset purpose—as if data miners analyze those data just for the sake of accumulating knowledge about people’s behavior. It may not always be simple to identify in what context (meta)data are generated and for what purposes they are processed. And yet it is crucial to render hidden prerogatives explicit if researchers want to keep up users’ trust in the datafication paradigm. Trust is partly grounded in the persuasive logic of a dominant paradigm; for another part, though, faith resides with the institutions that carry the belief in Big Data.” – José van Dijck, Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology
Most “free” apps and websites truly capitalize by collecting personal data about their users, although many of those users are entirely unaware of this data collection. In her article, “Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology,” José van Dijck of the University of Amsterdam takes a critical eye to the modern for-profit accumulation of data by internet companies. She critiques the notion that big data is merely the new, best way to accumulate large swaths of depersonalized data about the population as a whole, noting that this highly unregulated data processing is used to accumulate data on individuals. She reveals that the widespread collection of data flourishes on the belief that big data does not target individuals and only collects aggregate behaviors; once that trust is degraded, the entire system disintegrates. After events such as the Snowden revelations, public trust in the big data system eroded as the intense surveillance of citizens came to light. Thus, the system that surveils us is also the one we must turn a blind eye to.
“Amazon is considering offering developers raw transcripts of what people say to the Echo, according to a July report in The Information. Newsrooms will have to grapple with whether it’s ethical to use data from those transcripts as a way to make money, a move that would certainly enrage some privacy-minded consumers. For publishers, that could be an important revenue stream, but it could also creep audiences out and lessen trust, not enhance it.” (LaFrance)
The idea of someone being recorded in their own home and having their words transcribed and used so that a company can make even more money is creepy, yet this hypothetical could soon become reality. The Amazon Echo is an AI assistant for everyday life at home: give it a verbal command and hear or see a response within seconds. Although the device is quite useful and exciting, most people are not aware that anything they say to the AI is automatically stored in a database somewhere. LaFrance’s article goes into detail about what the transcripts might be used for. For instance, Amazon hopes to give the transcripts to developers to advance its AI and to increase revenue through those advancements. However, this raises the issue of whether selling transcripts of consumers recorded within their private homes is ethical. As technology advances, more companies seem to be gradually taking away the privacy of their consumers for their own benefit.
“A majority of teens with dating experience (76%) say they have only dated people they met via offline methods.” (Teens, Technology, and Romantic Relationships)
It was about 12:30 in the morning when I asked my roommate, “How do I ask this girl on a date?” I didn’t know what to do. I had met this really nice girl in my lab, and after some time getting to know her I knew I wanted to ask her out, but how? Like many teenagers, most of the people I had asked out were people I met online, followed on Instagram, or added on Snapchat. It was so simple when I could just “DM” a girl I had never met and say, “Hey, do you know how much a polar bear weighs? Enough to break the ice. Hi, I’m Chris.” How often that line worked is not important (that is my own issue); what is important is that I had no experience asking out a girl I had met in person. Is this an issue? As a society, it is easy to lose sight of social interactions and the value of personal interaction. Today, 76% of teens with dating experience have only dated people they met offline, which may seem high, but with the growing use of technology, and with children getting phones and access to apps like Snapchat and Instagram at younger ages, that number is bound to decrease (Teens). What is the impact of that?
“As discussed, one of the main concerns underlying the data fetishist critique is that a trust in numbers will trump other forms of subjective, intuitive and embodied knowledge…In a discussion that took place in a break-out session that we facilitated at the QS conference 2014 (QS14), one participant explained that when tracking physical activity, ‘there is a communication between your subjective knowledge and your objective knowledge that you are creating. You always do this kind of cross-check. I feel like I get a lot of activity, but my data shows me I am not, then I have a conversation: “what do I trust?” and “how do I recalibrate?” So that I am starting to say, “now what I feel aligns with my objective data and I trust my objective data more.” Or you say, “I trust my subjective data more, my subjective feeling, intuition more, and I can now process that data in a way that aligns with the subjective feeling’” (Tamar Sharon and Dorien Zandbergen 1699-1700).
The main point of the passage is to show how self-tracking involves both a subjective and an objective view of the data obtained, and how these different viewpoints can affect how people act in response. The authors describe the relationship between numbers and subjective experience as not being “a zero-sum game”: a person’s feelings about their own activity will not perfectly align with the numbers a phone generates. A good example is steps in a day. If someone believes they have taken 10,000 steps, but their phone reports only 4,000, they will likely be confused, and they may respond by doing more in order to achieve what they feel they deserve. The same applies to other forms of tracking, such as food or heart-rate tracking, which can have similar impacts on how people act.
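The participant’s “cross-check” between felt and measured activity can be expressed as a tiny decision rule. The Python sketch below is only an illustration of that recalibration conversation; the 25% tolerance and the step numbers are invented, not drawn from the study.

```python
# A minimal sketch of the subjective/objective "cross-check" the QS
# participant describes. The 25% tolerance and the step counts below
# are invented for illustration.
def cross_check(felt_steps: int, device_steps: int,
                tolerance: float = 0.25) -> str:
    """Compare how active a person feels against what the tracker
    recorded, and say which way to recalibrate."""
    gap = abs(felt_steps - device_steps) / max(device_steps, 1)
    if gap <= tolerance:
        return "feeling and data align; no recalibration needed"
    if felt_steps > device_steps:
        return "less active than it feels: trust the data, or move more"
    return "more active than it feels: perhaps trust the feeling instead"

print(cross_check(10_000, 4_000))
# -> less active than it feels: trust the data, or move more
```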
“Now that our privacy is worth something, every side of it is being monetized. We can either trade it for cheap services or shell out cash to protect it. It is increasingly seen not as a right but as a luxury good. When Congress recently voted to allow internet service providers to sell user data without users’ explicit consent, talk emerged of premium products that people could pay for to protect their browsing habits from sale. And if they couldn’t afford it? As one congressman told a concerned constituent, “Nobody’s got to use the internet.” Practically, though, everybody’s got to. Tech companies have laid claim to the public square: All of a sudden, we use Facebook to support candidates, organize protests and pose questions in debates. We’re essentially paying a data tax for participating in democracy.” (Amanda Hess)
In the passage I pulled from “How Privacy Became a Commodity for the Rich and Powerful” by Amanda Hess (internet culture writer for the New York Times), she warns us of the increasing value of our privacy and how companies are exploiting it. As we leave an ever-larger digital footprint behind us and continue to grow as a digital society, our privacy on the internet is becoming anything but private. There are options, though. Recently, Congress voted to allow internet providers to sell user data, and there has been talk of paid products that would protect one’s browsing habits. Ultimately, this creates a problem because only those who can afford such a product will have their privacy protected, and down the line that raises problems of equal protection. By continuing to use internet services, because they are essentially impossible to avoid, we are paying a tax to these companies in the form of our personal data.