
Popping your bubble



The influence of social media newsfeeds on our decisions has received a lot of attention lately. A prominent example is the suspected role of so-called fake news in the 2016 US election. According to one study, more than 25% of adults visited a fake news website in the run-up to that election. We tend to share and like stories that confirm our existing beliefs, and these stories then appear on the timelines of our often like-minded friends. Facebook’s algorithms, which select the content that appears on these timelines, further reinforce these feedback loops. The end result is that we are only exposed to views that confirm our existing world view, creating an ‘echo chamber’. This polarizes debate and makes it harder for diverse points of view to coexist and for constructive conversation to occur. Efforts have been made to address this problem with social media. For example, EscapeYourBubble is a Chrome extension that adds posts from reputable news sources that contrast with your world view to your news feed.


The problem of reinforcing feedback loops is not unique to social media; it emerges from the machine learning technology itself. A similar issue is often encountered with the product recommenders used by online retailers like Amazon or streaming video sites like Netflix. Recommender systems are often trained to recognise products similar to those a customer has purchased previously and suggest these to the customer. This can lead to a lack of novelty in recommendations, or to customers constantly being recommended items they already have.
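
To make this loop concrete, here is a minimal sketch in Python of a similarity-based recommender. The item names, toy embedding vectors, and the redundancy threshold are illustrative assumptions rather than how Amazon or Netflix actually work, but the sketch shows how ranking purely by similarity keeps serving near-duplicates, and how one simple tweak reintroduces novelty.

    import numpy as np

    # Toy item embeddings: each vector encodes latent taste features.
    items = {
        "thriller_novel_1": np.array([0.9, 0.1, 0.0]),
        "thriller_novel_2": np.array([0.85, 0.15, 0.05]),  # near-duplicate of the first
        "cookbook":         np.array([0.1, 0.9, 0.2]),
        "travel_guide":     np.array([0.2, 0.3, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recommend(purchased, redundancy_threshold=1.0):
        # Rank unpurchased items by their closest match to the purchase history.
        # With redundancy_threshold=1.0 this is a pure similarity recommender;
        # lowering it filters out near-duplicates of items the customer already
        # owns, which is one simple way to reintroduce novelty.
        history = [items[p] for p in purchased]
        scores = {}
        for name, vec in items.items():
            if name in purchased:
                continue
            sim = max(cosine(vec, h) for h in history)
            if sim < redundancy_threshold:
                scores[name] = sim
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend(["thriller_novel_1"]))
    # ['thriller_novel_2', 'travel_guide', 'cookbook'] -- more of the same
    print(recommend(["thriller_novel_1"], redundancy_threshold=0.95))
    # ['travel_guide', 'cookbook'] -- the near-duplicate is popped out of the bubble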


The way machine learning reinforces existing beliefs can be further illustrated with an example from human resources. Suppose we have a historical record of employee performance that we wish to use to guide future appointments. We might point our algorithm at this record to identify which applicants are most similar to individuals who have performed well in the past, and find that it suggests only male applicants are well suited to the role. In this case it is likely that our algorithm is reflecting historical biases that prevented women from succeeding. Clearly, the appropriate response is not to blindly follow the recommendation and further reinforce an egregious practice, but rather to break the cycle and create an environment more conducive to gender equality.
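
Before acting on such a recommendation, we can at least audit it. The sketch below is a hypothetical Python example that compares a model’s recommendation rates across groups using the ‘four-fifths’ rule of thumb (a benchmark drawn from US employment guidelines); the records and the selection_rates helper are invented for illustration and are not a complete fairness test.

    from collections import defaultdict

    def selection_rates(records):
        # Fraction of candidates the model recommends, broken down by group.
        seen, picked = defaultdict(int), defaultdict(int)
        for group, recommended in records:
            seen[group] += 1
            picked[group] += recommended
        return {g: picked[g] / seen[g] for g in seen}

    # Hypothetical output of a model trained on biased historical data:
    # (group, 1 if the model recommended the candidate, else 0).
    records = ([("male", 1)] * 40 + [("male", 0)] * 10
               + [("female", 1)] * 10 + [("female", 0)] * 40)

    rates = selection_rates(records)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)  # {'male': 0.8, 'female': 0.2}
    print(ratio)  # 0.25 -- far below the 0.8 'four-fifths' benchmark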


The key message from these examples is that sometimes we need to pop our bubble and inject some humanity into our algorithms. Whether it be exposing ourselves to content we may not agree with, recommending products that might be considered unconventional, or giving that loan or job to someone who might have been unfairly discriminated against in the past, we have the power to break these feedback loops and design algorithms that show the way to a better world.

