Google has announced four new measures to combat the spread of terrorist content on YouTube. Senior vice president and general counsel Kent Walker outlined the steps the company will take to identify and remove content that promotes terrorism.
Here are the four steps:
First, the company will increase the use of technology to help identify extremist and terrorism-related videos. The company will devote more engineering resources to apply their most advanced machine learning research to train new “content classifiers” to help them more quickly identify and remove extremist and terrorism-related content.
Second, the company will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech.
Third, the company will take a tougher stance on videos that do not clearly violate its policies, for example, videos that contain inflammatory religious or supremacist content. In future, these videos will appear behind an interstitial warning, and they will not be monetised, recommended, or eligible for comments or user endorsements.
Finally, YouTube will expand its role in counter-radicalisation efforts, building on its successful Creators for Change programme, which promotes YouTube voices against hate and radicalisation.
The company also said it is committed to working with industry colleagues, including Facebook, Microsoft, and Twitter, to establish an international forum that will share and develop technology, support smaller companies, and accelerate joint efforts to tackle terrorism online.