If you use any social media, you have probably noticed how different your feed is from your friends'. The same is true for your Amazon recommendations, Netflix suggestions, Google News homepage, and so on. In recent years, almost every online company has adopted increasingly sophisticated algorithms that analyze users' behavior, infer their interests, and select the content to show accordingly. Each user is exposed to a unique universe of results tailored specifically to their interests. Although this may sound like a useful feature, its consequences are potentially detrimental. Inevitably, the personalized feed is the result of a drastic, automatic selection among all available content, and what is filtered out becomes unreachable. Users therefore end up isolated from, and unaware of, the posts the algorithm regards as incompatible with their tastes and interests. This resulting state of intellectual isolation is what internet activist Eli Pariser called a "filter bubble" when he coined the term.

Confirmation bias: the reason why filter bubbles are effective
Recommendation algorithms serve one main purpose: to increase the time users spend on the platform by making it as enjoyable and addictive as possible. Filter bubbles let users stay in a sort of online comfort zone: the posts are mostly interesting, opinions are shared, and critiques or opposing viewpoints are excluded. The way these bubbles are built fits neatly with how our brains work. One of the main biases exploited in their construction is confirmation bias, namely the tendency to look for, favor, and believe information that confirms or supports one's pre-existing beliefs and values. As a result, people unconsciously tend to select information that supports their views and to ignore contrary information, which is exactly the mechanism that forms filter bubbles. Online, just as in real life, we prefer being exposed to content we already agree with and that confirms our prior beliefs, rather than considering information that challenges them.
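No platform publishes its ranking code, but the mechanism is easy to illustrate with a toy model. In the minimal Python sketch below, the post pool, the user's stance, and the click model are all invented for illustration: posts are ranked purely by how likely they are to be clicked, and because clicks follow confirmation bias, the resulting feed drifts toward the user's prior belief.

```python
import random

# Toy model of confirmation-bias-driven ranking (all numbers are invented).
# Each post carries an "opinion score" in [-1, 1]; the user holds a prior belief,
# and agreeable posts are more likely to be clicked.
random.seed(0)

posts = [random.uniform(-1, 1) for _ in range(5000)]  # candidate content pool
user_belief = 0.6                                      # the user's prior stance

def click_probability(post, belief):
    """Confirmation bias: the closer a post's opinion is to the belief, the likelier the click."""
    return max(0.0, 1.0 - abs(post - belief))

def recommend(pool, belief, k=10):
    """Naive engagement-maximizing ranking: surface the k posts most likely to be clicked."""
    return sorted(pool, key=lambda p: click_probability(p, belief), reverse=True)[:k]

feed = recommend(posts, user_belief)
print("average opinion in the full pool:", round(sum(posts) / len(posts), 2))
print("average opinion in the feed:     ", round(sum(feed) / len(feed), 2))
# The pool averages roughly 0, but the feed clusters tightly around the user's
# belief: dissenting content is never shown, even though it exists in the pool.
```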
Are filter bubbles dangerous?
Filter bubbles look fairly harmless when the recommendation algorithms are only picking Amazon products or TV series. However, when these algorithms are employed by Facebook, Twitter, or Google News, they start selecting which news and opinions appear in a user's feed, based on their compatibility with the individual's tastes and beliefs. This kind of content contributes to shaping one's perception of reality, especially in a world where more and more citizens regularly get news from social media. Very different personalized results correspond to very different, and possibly distorted, views of reality: users may end up with different ideas of what is currently a hot topic, what the prevailing opinions among the public are, or which fake news stories have been debunked. Alex Hern, UK Technology Editor for The Guardian, argues that this is exactly why filter bubbles play a key role in the spread of fake news and in polarization. He cites as an example the role social networks are believed to have played in spreading disinformation before the Brexit referendum and the 2016 US elections. Hern also writes that "more than 60% of Facebook users are entirely unaware of any curation on Facebook at all" [1]. This is worrying, since being unaware of the existence and workings of these algorithms amplifies the risk they pose.

“Bad algorithms or biased users?” [2]
Is it just the algorithms’ fault? There is considerable debate as to whether the problem lies with the algorithms or with the users. Some scholars claim that users bear a key responsibility in the creation of the bubble, since they feed the algorithm with clicks and interactions, and that it should therefore be their responsibility to try to burst it. Zimmer, Scheibe, Stock, & Stock (2019), professors at Heinrich Heine University, explain in their paper “Fake News in Social Media: Bad Algorithms or Biased Users?” that algorithms cannot create filter bubbles by themselves; they need the active collaboration of individual users. Algorithms, they explain, amplify and consolidate one’s existing behavioral patterns. Ekström, Niehorster, & Olsson (2022) reach a similar conclusion: users autonomously and automatically select among the content they see and decide what they want to be exposed to. Users, they write, engage in “self-segregating behavior” [3], meaning that they “segregate” themselves into a personalized feed even in the absence of an algorithm by unconsciously choosing which content deserves a click and which should be ignored. This results in what the authors call self-imposed filter bubbles. Users, in conclusion, are co-responsible for creating filter bubbles. One thing these scholars do not properly underline, however, is that amplifying a human bias in a potentially dangerous way is still an issue that needs to be tackled.
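The division of responsibility these authors describe can be pictured as a feedback loop: the user clicks selectively, and the algorithm learns from those clicks and narrows the feed further. The toy loop below is only a sketch of that argument, not anyone's actual system; the belief values, the click rule, and the narrowing rate are all invented for illustration.

```python
import random

# Toy feedback loop (all parameters invented): the user clicks mostly on agreeable
# posts (self-segregation), and the algorithm re-estimates their taste from those
# clicks and narrows what it shows next (amplification).
random.seed(1)

user_belief = 0.5      # the user's true stance, unknown to the algorithm
estimated_pref = 0.0   # the algorithm starts from a neutral estimate
feed_width = 1.0       # how broad a slice of opinions the algorithm shows

for step in range(6):
    # The algorithm samples posts around its current estimate of the user's taste.
    feed = [random.uniform(estimated_pref - feed_width, estimated_pref + feed_width)
            for _ in range(200)]
    # The user clicks mainly on posts close to their own belief (confirmation bias).
    clicks = [p for p in feed if abs(p - user_belief) < 0.3 and random.random() < 0.8]
    if clicks:
        estimated_pref = sum(clicks) / len(clicks)   # learn from the clicks...
        feed_width = max(0.2, feed_width * 0.7)      # ...and narrow the next feed
    print(f"step {step}: estimated preference {estimated_pref:.2f}, feed width {feed_width:.2f}")
# Neither side acts alone: the user's selective clicks and the algorithm's narrowing
# feed reinforce each other, turning a self-imposed bubble into an amplified one.
```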
Is TikTok’s algorithm different?
Most of the available studies on filter bubbles have focused on the algorithms used by social media, and the majority took the algorithms behind Facebook as the benchmark for their analysis. However, it is worth asking whether their conclusions also apply to the newest entrant among the social media giants: TikTok. This social network differs from its predecessors not only in its immersive “For You Page”, its video-only content, and its ease of use, but also in its algorithm. Alex Hern believes that the company’s enormous and immediate success is largely due to its hyper-tailored and addictive recommendation system. The “For You Page” captures and monopolizes users’ attention much more quickly and effectively than other social media. As you scroll, the algorithm keeps learning your preferences and becomes better and better at predicting what you may like.

As Hern explains: “One crucial innovation is that, unlike older recommendation algorithms, TikTok does not just wait for the user to indicate that they like a video with a thumbs up […] Instead, it appears to actively test its own predictions, experimenting by showing videos that it thinks might be enjoyable and gauging the response” [4]. With such a powerful algorithm, filter bubbles on this platform form faster, more subtly, and even less under the user’s control. According to some journalists, TikTok is particularly susceptible to the spread of misinformation and the creation of bubbles because of two of its defining features: the ease with which videos can be remixed and reposted, and the difficulty of moderating a platform that contains only videos. For these reasons, TikTok and its effects are receiving more and more attention from the media and will continue to be analyzed in the coming years.
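TikTok's real recommendation system is proprietary, so the behavior Hern describes can only be approximated. One standard way to model "actively testing predictions" is an explore/exploit (bandit-style) loop over topics that uses watch time as the implicit signal; the sketch below is that generic pattern, with invented topics, interest values, and exploration rate, not TikTok's algorithm.

```python
import random

# Generic explore/exploit ("bandit") sketch of a feed that tests its own predictions.
# Topics, interest values, and epsilon are all invented; this is an analogy for the
# behavior described in the quote above, not TikTok's proprietary system.
random.seed(2)

topics = ["cooking", "politics", "sports", "dance", "news"]
true_interest = {"cooking": 0.9, "politics": 0.3, "sports": 0.5,
                 "dance": 0.7, "news": 0.2}   # hidden; the algorithm must discover it
watch_time = {t: [] for t in topics}          # implicit feedback gathered per topic

def average(samples):
    return sum(samples) / len(samples) if samples else 0.0

def pick_topic(epsilon=0.15):
    """Mostly exploit the best-known topic, but sometimes probe another one."""
    if random.random() < epsilon or not any(watch_time.values()):
        return random.choice(topics)                           # exploration: run an experiment
    return max(topics, key=lambda t: average(watch_time[t]))   # exploitation: show the favorite

for _ in range(500):
    topic = pick_topic()
    # The only signal is how long the video is watched: a noisy proxy for interest,
    # collected without the user ever pressing a like button.
    watch_time[topic].append(max(0.0, random.gauss(true_interest[topic], 0.1)))

for t in topics:
    print(f"{t:9s} shown {len(watch_time[t]):3d} times, avg watch {average(watch_time[t]):.2f}")
# The loop quickly concentrates on the highest-engagement topic, which is how a
# bubble can form from implicit signals alone.
```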
Conclusions
Should we expect tech companies to code their algorithms in a way that helps us escape our bubble? Even if such a measure might help, it clearly runs against their economic interests, and it could require a long time and even new legislation. Fortunately, we are not powerless. As users, we play a significant role in building our bubble, but we can also take steps to reduce its effects. As Katherine Schulten suggests in her article for the New York Times, we can diversify our news diet, add new and different sources to our feed, and seek out challenging perspectives and people to follow.
References
[1] Hern, A. (2017, May 22). How Social Media Filter Bubbles and Algorithms Influence the Election. The Guardian. Available at https://www.theguardian.com/technology/2017/may/22/social-media-election-facebook-filter-bubbles
[2] Zimmer, F., Scheibe, K., Stock, M., & Stock, W. G. (2019). Fake News in Social Media: Bad Algorithms or Biased Users? Journal of Information Science Theory & Practice (JIStaP), 7(2), 40–53. https://doi.org/10.1633/JISTaP.2019.7.2.4
[3] Ekström, A. G., Niehorster, D. C., & Olsson, E. J. (2022). Self-imposed filter bubbles: Selective attention and exposure in online search. Computers in Human Behavior Reports, 7. https://doi.org/10.1016/j.chbr.2022.100226
[4] Hern, A. (2022, October 24). How TikTok’s Algorithm Made it a Success: ‘It Pushes the Boundaries’. The Guardian. Available at https://www.theguardian.com/technology/2022/oct/23/tiktok-rise-algorithm-popularity
Sources
Hosanagar, K. (2016, November 25). Blame the Echo Chamber on Facebook. But Blame Yourself, Too. Wired. Available at https://www.wired.com/2016/11/facebook-echo-chamber/
Paul, K. (2022, October 25). ‘We Risk Another Crisis’: TikTok in Danger of Being Major Vector of Election Misinformation. The Guardian. Available at https://www.theguardian.com/technology/2022/oct/24/tiktok-election-misinformation-voting-politics
Schulten, K. (2016, September 29). Is Your Online World Just a ‘Filter Bubble’ of People With the Same Opinions? New York Times. Available at https://www.nytimes.com/2016/09/29/learning/is-your-online-world-just-a-filter-bubble-of-people-with-the-same-opinions.html