Here are some excerpts from the source whose headline he flashed.
"Google general counsel Kent Walker in a June Financial Times op-ed, which announced YouTube was taking several steps to inhibit extremist videos. These steps included investing in machine-learning technology to help identify videos associated with terrorism, increasing the number of “Trusted Flaggers” to identify content that can be used to radicalize terrorists, and redirecting potential extremist recruits to watch counterterrorism videos instead."
Trusted flaggers? Flags can be appealed, so if a video genuinely doesn't have anything wrong with it, this shouldn't be an issue. And if it actually prevents terror, this is unequivocally a good thing, no?
"Now, when YouTube decides that a flagged video doesn’t break policy but still contains “controversial religious or supremacist content,” the video will be put in a “limited state.” Here, the video will exist in a sort of limbo where it won’t be recommended or monetized. It also won’t include suggested videos or allow comments or likes."
Oh look, the video is still allowed to remain on the website, too. Really makes "censorship" look like an obvious buzzword rather than genuine analysis.
"The update also touted the success of the machine-learning-driven removal of content, claiming that over the last month, YouTube algorithms have found 75 percent of policy-violating extremist content before a human was able to flag the videos."
Sounds good to me.