One of the most common applications of machine learning today is in recommendation systems. Netflix and YouTube use it to push you new shows and videos; Google and Facebook use it to rank the content in your search results and newsfeed. While these systems offer a great deal of convenience, they also cause two undesirable side effects you may have heard of before: filter bubbles and echo chambers.
In a new paper, researchers at DeepMind analyzed how different recommendation algorithms can speed up or slow down both phenomena, which they refer to in academic-speak as “degenerate feedback loops.” (The higher the degeneracy, the stronger the filter-bubble or echo-chamber effect.)
They ran a simulation of five different algorithms, each using a different principle to select content to push to the user. They found that the more an algorithm prioritized accurately predicting exactly what the user wanted, the faster it drove the system toward degeneracy. The best way to combat the formation of filter bubbles and echo chambers, then, is to design algorithms that place greater emphasis on randomly exploring new content. Growing the overall pool of information from which recommendations are drawn also counteracts the effect. There is a likely trade-off, however: users will find their recommendations less accurate.
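The dynamic described above can be illustrated with a toy simulation. This is a minimal sketch, not the DeepMind authors' model: it assumes a hypothetical setup in which a user's interest in an item grows slightly each time it is shown (the feedback loop), and a greedy recommender explores a random item with probability `epsilon`. All names and parameters here are invented for illustration.

```python
import random

def simulate(epsilon, n_items=50, steps=2000, seed=0):
    """Toy degenerate feedback loop (illustrative only).

    The recommender shows the item with the highest estimated click
    rate, exploring a random item with probability epsilon. Each click
    slightly raises the user's true interest in that item, so greedy
    recommendations reinforce themselves. Returns the number of
    distinct items the user ever saw -- a crude diversity measure.
    """
    rng = random.Random(seed)
    interest = [0.5] * n_items  # user's true click probabilities
    clicks = [1] * n_items      # smoothed click counts (Laplace prior)
    shows = [2] * n_items       # smoothed impression counts
    seen = set()
    for _ in range(steps):
        if rng.random() < epsilon:
            item = rng.randrange(n_items)  # explore a random item
        else:
            # exploit: pick the item with the best estimated click rate
            item = max(range(n_items), key=lambda i: clicks[i] / shows[i])
        shows[item] += 1
        seen.add(item)
        if rng.random() < interest[item]:
            clicks[item] += 1
            # feedback: being shown and clicked makes the item stickier
            interest[item] = min(0.99, interest[item] + 0.01)
    return len(seen)
```

With `epsilon=0` the pure-exploitation loop collapses onto a handful of self-reinforcing items, while even modest exploration (`epsilon=0.3`) keeps the user exposed to most of the catalog, at the cost of showing items the model predicts they will like less.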
The researchers caution that their analysis is limited because it is based on a purely theoretical simulation with no real user input. More work must be done to understand how real user dynamics might change the behavior of these systems.