Can we police the web to prevent the posting of violent content?
On March 27, the Global Policy Institute held a panel discussion on “Censorship versus Responsibility on the Web: Repercussions from the New Zealand Mass Murders”.
The panelists included: Chris Lewis, Vice President, Public Knowledge; Jennifer Golbeck, Director of the Human-Computer Interaction Lab and Associate Professor, University of Maryland; and David McCabe, Tech Reporter, Axios. Richard Leiby, Assignment Editor & Writer, The Washington Post, was the moderator. Paolo von Schirach, President, Global Policy Institute, opened the event.
After the recent horrendous massacre of Muslims in New Zealand mosques – telegraphed on the Web and live-streamed on Facebook – there is intense pressure on social media companies to monitor and shut down accounts such as the ones operated by the alleged killer. All panelists cautioned that sanitizing the web by surgically removing “bad content” in a timely fashion may not be possible. The challenge is to prevent the posting of violent content while keeping the internet a free space for all, thereby preserving our constitutionally protected freedom of speech. As the experts approach this task, the hard question is: who gets to decide what content is admissible and what is not? Moreover, monitoring literally millions of users is a daunting task. That said, the different internet platforms could agree on new, universally applicable standards that would provide better protection for all users.
Watch the full video here
Watch the short video here