White supremacists are riling up thousands on social media | AP News
More intensive scrutiny of extremists' online presence might help law enforcement ferret them out and prevent future attacks. But false alarms far outnumber genuine threats, which makes monitoring problematic, and the use of subtle language and code words makes detection harder still.
"White Boy Summer"?
I wonder whether the pandemic has played a role to some degree, since a lot of kids have had to attend school online instead of in-person classes, where they would have direct contact and socialization with others.
And what about online monitoring by AI? These groups seem to find ways around it by using subtle language and code words. Meta says it employs more than 350 experts "with backgrounds from national security to radicalization research, dedicated to ridding the site of such hateful speech."
There's also the practice of infiltration; one of the people quoted in the article is a former infiltrator. That has helped to some degree, as informants have thwarted some violent acts before they happened (such as the plot to kidnap the governor of Michigan a while back).
What are effective means of combating this?
WASHINGTON (AP) — The social media posts are of a distinct type. They hint darkly that the CIA or the FBI are behind mass shootings. They traffic in racist, sexist and homophobic tropes. They revel in the prospect of a “white boy summer.”
White nationalists and supremacists, on accounts often run by young men, are building thriving, macho communities across social media platforms like Instagram, Telegram and TikTok, evading detection with coded hashtags and innuendo.
Their snarky memes and trendy videos are riling up thousands of followers on divisive issues including abortion, guns, immigration and LGBTQ rights. The Department of Homeland Security warned Tuesday that such skewed framing of the subjects could drive extremists to violently attack public places across the U.S. in the coming months.
These types of threats and racist ideology have become so commonplace on social media that it’s nearly impossible for law enforcement to separate internet ramblings from dangerous, potentially violent people, Michael German, who infiltrated white supremacy groups as an FBI agent, told the Senate Judiciary Committee on Tuesday.
“It seems intuitive that effective social media monitoring might provide clues to help law enforcement prevent attacks,” German said. “After all, the white supremacist attackers in Buffalo, Pittsburgh and El Paso all gained access to materials online and expressed their hateful, violent intentions on social media.”
But, he continued, “so many false alarms drown out threats.”
DHS and the FBI are also working with state and local agencies to raise awareness about the increased threat around the U.S. in the coming months.
The heightened concern comes just weeks after a white 18-year-old entered a supermarket in Buffalo, New York, with the goal of killing as many Black patrons as possible. He gunned down 10.
That shooter claims to have been introduced to neo-Nazi websites and a livestream of the 2019 Christchurch, New Zealand, mosque shootings on the anonymous, online messaging board 4Chan. In 2018, the white man who gunned down 11 at a Pittsburgh synagogue shared his antisemitic rants on Gab, a site that attracts extremists. The following year, a 21-year-old white man who killed 23 people at a Walmart in the largely Hispanic city of El Paso, Texas, shared his anti-immigrant hate on the messaging board 8Chan.
References to hate-filled ideologies are more elusive across mainstream platforms like Twitter, Instagram, TikTok and Telegram. To avoid detection by artificial intelligence-powered moderation, users don’t use obvious terms like “white genocide” or “white power” in conversation.
They signal their beliefs in other ways: a Christian cross emoji in their profile or words like “anglo” or “pilled,” a term embraced by far-right chatrooms, in usernames. Most recently, some of these accounts have borrowed the pop song “White Boy Summer” to cheer on the leaked Supreme Court draft opinion on Roe v. Wade, according to an analysis by Zignal Labs, a social media intelligence firm.
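The evasion the article describes can be illustrated with a toy sketch. This is not any platform's actual moderation system; the blocklist and sample posts are invented here to show why a simple keyword filter catches explicit phrases but misses the coded substitutes:

```python
# Toy keyword filter (invented for illustration, not a real platform's system).
# Explicit phrases from the article are blocked; coded terms slip through.
BLOCKLIST = {"white genocide", "white power"}

def flag(post: str) -> bool:
    """Return True if the post contains an explicit blocklisted phrase."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "they are pushing white genocide",    # explicit -> caught
    "have a white boy summer, anglos",    # coded    -> missed
    "getting pilled on the real history", # coded    -> missed
]

print([flag(p) for p in posts])  # [True, False, False]
```

The gap is the point: once a community shifts to emoji, song titles, or words like "pilled," matching against a fixed phrase list detects nothing, which is why platforms pair automated filters with human experts.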
"White Boy Summer"?
Facebook and Instagram owner Meta banned praise and support for white nationalist and separatists movements in 2019 on company platforms, but the social media shift to subtlety makes it difficult to moderate the posts. Meta says it has more than 350 experts, with backgrounds from national security to radicalization research, dedicated to ridding the site of such hateful speech.
“We know these groups are determined to find new ways to try to evade our policies, and that’s why we invest in people and technology and work with outside experts to constantly update and improve our enforcement efforts,” David Tessler, the head of dangerous organizations and individuals policy for Meta, said in a statement.
A closer look reveals hundreds of posts steeped in sexist, antisemitic, racist and homophobic content.
U.S. extremists are mimicking the social media strategy used by the Islamic State group, which turned to subtle language and images across Telegram, Facebook and YouTube a decade ago to evade the industry-wide crackdown of the terrorist group’s online presence, said Mia Bloom, a communications professor at Georgia State University.
“They’re trying to recruit,” said Bloom, who has researched social media use for both Islamic State terrorists and far-right extremists. “We’re starting to see some of the same patterns with ISIS and the far-right. The coded speech, the ways to evade AI. The groups were appealing to a younger and younger crowd.”
Law enforcement agencies are already monitoring an active threat from a young Arizona man who says on his Telegram accounts that he is “leading the war” against retail giant Target for its Pride Month merchandise and children’s clothing line and has promised to “hunt LGBT supporters” at the stores. In videos posted to his Telegram and YouTube accounts, sometimes filmed at Target stores, he encourages others to go to the stores as well.
Target said in a statement that it is working with local and national law enforcement agencies that are investigating the videos.
As society becomes more accepting of LGBTQ rights, the issue may be especially triggering for young men who have held traditional beliefs around relationships and marriage, Bloom said.
“That might explain the vulnerability to radical belief systems: A lot of the beliefs that they grew up with, that they held rather firmly, are being shaken,” she said. “That’s where it becomes an opportunity for these groups: They’re lashing out and they’re picking on things that are very different.”