Big Tech manipulating research into its harm to society
For almost a decade, researchers have been gathering evidence that the social media platform Facebook disproportionately amplifies low-quality content and misinformation.
So it was something of a surprise when, in 2023, the journal Science published a study that found Facebook’s algorithms were not major drivers of misinformation during the 2020 United States election.
The study was funded by Facebook’s parent company, Meta, and several Meta employees were part of the authorship team. It attracted extensive media coverage and was celebrated by Meta’s president of global affairs, Nick Clegg, who said it showed the company’s algorithms have “no detectable impact on polarisation, political attitudes or beliefs.”
But the findings have recently been thrown into doubt by a team of researchers led by Chhandak Bagchi from the University of Massachusetts Amherst. In an eLetter, also published in Science, they argue the results were likely due to Facebook tinkering with its algorithm while the study was being conducted.
In a response eLetter, the authors of the original study acknowledge their results “might have been different” if Facebook had changed its algorithm in a different way. But they insist their results still hold true.
The whole debacle highlights the problems caused by Big Tech funding and facilitating research into its own products. It also underscores the crucial need for greater independent oversight of social media platforms.
Merchants of doubt
Big Tech has started investing heavily in academic research into its products, and in universities more generally. For example, Meta and its chief, Mark Zuckerberg, have collectively donated hundreds of millions of dollars to more than