How tech firms have tried to stop disinformation and voter intimidation – and come up short
The major social media firms have taken a largely piecemeal and fractured approach to managing the problem.
Nov. 2, 2020 • ~9 min
That "friend of a friend" post you're thinking about sharing on social media could make you an unwitting accomplice in a disinformation campaign.
Fake videos generated with sophisticated AI tools are a looming threat. Researchers are racing to build detection tools that journalists will need to counter disinformation.
It's easy to edit video of public figures to make them appear asleep, confused, drunk or cognitively impaired when they are not. The technique is being used to undermine Joe Biden's campaign.
Banning the Chinese-owned social media platforms raises free speech concerns and could worsen the US-China trade war.
Many people who participate in disinformation campaigns are unwitting accomplices, and much of the information they spread is accurate, which makes the campaigns all the harder to identify.
A social media researcher explains how bots and sock puppet accounts manipulate and polarize public debate.
Facebook, Google and Twitter are stepping up to block misinformation and promote accurate information about the coronavirus, but their track records on self-policing are poor and the results so far are mixed.
Much of the world is moving online in response to the coronavirus pandemic. Society's newly increased dependence on the internet is bringing the need for good cyber policy into sharp relief.
A scholar who has reviewed the efforts of nations around the world to protect their citizens from foreign interference says there is no magic solution, but there's plenty to learn and do.