Facebook Deploys AI to Fight ISIS Content
Terrorists have many tools in their arsenals, but perhaps none are so deadly as a Facebook post.
Groups like Islamic State (ISIS) create scads of fake social media profiles and use them to flood platforms with extremist content. These posts range from condemnations of globalized, secular lifestyles to romanticized portrayals of life inside the IS caliphate. The content serves to intimidate and spread fear, and it also attracts young idealists who may be convinced to fight, and die, for the goals of terrorist organizations.
Now, Facebook is finally responding to criticisms that it does too little to control this type of content and the radicalizing consequences that can come with it.
Fighting terrorist activity online has proved just as difficult as fighting terrorists abroad, but Facebook is now turning to the same kinds of technology cyber security experts use to weed out radical posts. The terrorist-spotting algorithms Facebook develops could eventually benefit government technology as well, and provide improved cyber security for businesses looking to spot bad actors preemptively.
How Image Analysis, Algorithms and Machine Learning Fight Terrorism
With over 2 billion users on the platform, keeping track of every Facebook post and account manually would be impossible. A lack of dedicated resources within the company, and even fewer visible results, led many critics to charge that Facebook was not taking the problem seriously. While Twitter was purging 377,000 accounts for pro-terrorism content, Facebook was advocating “counter-speech” approaches meant to convince people to reverse course before they were fully radicalized.
Facebook is currently taking a much more active role and has enlisted advanced automatic analysis tools in its campaign against terrorist groups.
For instance, image-matching tools can quickly identify shared posts that contain the same image. When one offending image depicting extremist content is removed, Facebook can run an image analysis to find identical copies and remove them at the same time. Since groups like ISIS often rely on spam tactics, blasting the same images across multiple accounts, catching duplicates can be an effective way to stem the tide of extremist content.
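Facebook has not published the internals of its image-matching system, but the general idea can be illustrated with perceptual hashing, where visually identical or near-identical images produce matching fingerprints. Below is a minimal sketch in Python using the Pillow imaging library; the file names and the match threshold are hypothetical.

```python
from PIL import Image

def average_hash(path, hash_size=8):
    """Compute a simple perceptual (average) hash of an image.

    Near-identical images produce identical or very similar hashes,
    even after re-encoding or mild resizing.
    """
    # Downscale and convert to grayscale to discard fine detail and color.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Count the bits that differ between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical usage: compare a new upload against the hash of an image
# that was already removed for extremist content.
banned_hash = average_hash("removed_propaganda.jpg")
new_hash = average_hash("new_upload.jpg")
if hamming_distance(banned_hash, new_hash) <= 5:  # small distance = near-duplicate
    print("Flag upload for review: near-duplicate of a banned image")
```

In practice a platform would store the hashes of removed images and check every new upload against that index, which is far cheaper than comparing raw image files.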
Facebook engineers are also developing algorithms that can spot “clusters” of accounts that share related content or engage with one another’s posts. Since fake “sock puppet” accounts can be created in seconds, looking for familiar networks of interaction allows Facebook to quickly root out fake accounts tied to previously banned ones. The same system can help Facebook monitor accounts that newly enter such a network, such as a real account that “likes” a post containing extremist content.
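The exact clustering algorithms are not public, but the underlying idea, treating accounts and their interactions as a graph and reviewing any cluster that touches a known bad account, can be sketched in a few lines with the networkx library. The account names and interaction data below are purely illustrative.

```python
import networkx as nx

# Hypothetical interaction data: each pair means one account liked,
# shared, or commented on the other's posts.
interactions = [
    ("acct_A", "acct_B"),
    ("acct_B", "acct_C"),
    ("acct_C", "banned_account_1"),
    ("acct_X", "acct_Y"),  # unrelated cluster with no banned contact
]

banned = {"banned_account_1"}

# Build an undirected interaction graph.
graph = nx.Graph()
graph.add_edges_from(interactions)

# Any connected component that contains a previously banned account
# becomes a candidate cluster for review.
suspicious = set()
for component in nx.connected_components(graph):
    if component & banned:
        suspicious |= component - banned

print("Accounts to review:", sorted(suspicious))
```

A production system would weight edges by interaction type and frequency and use more sophisticated community detection, but the principle is the same: guilt by network proximity is a signal worth reviewing, not an automatic ban.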
Tools like these allow Facebook to proactively seek out posts from groups like ISIS rather than relying on users to flag offending posts.
AI Technology Can Enhance Cyber Security for Businesses
The AI and algorithmic tools that Facebook employs for counter-terrorism already have counterparts in the cyber security industry.
For example, rather than waiting for signature matches against known malware, modern antivirus and anti-intrusion tools can use cloud-based analytics to preemptively identify activity that resembles previous threats. Monitoring for clusters of activity related to a previous intrusion attempt can similarly flag suspicious actions before they become a problem rather than after.
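Security vendors rarely disclose their models, but the general technique, learning a baseline of normal activity and flagging sessions that deviate from it, can be sketched with scikit-learn’s IsolationForest. The session features and numbers below are hypothetical and stand in for whatever telemetry a real deployment would collect.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login_hour, failed_logins, MB_downloaded]
normal_sessions = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [16, 1, 10], [13, 0, 18], [9, 0, 9], [15, 0, 22],
])

# Fit an unsupervised model on historical "known good" activity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new activity: a prediction of -1 marks sessions that look unlike
# the baseline, such as a 3 a.m. login with repeated failures and a
# large download.
new_sessions = np.array([
    [10, 0, 14],   # ordinary working-hours session
    [3, 7, 900],   # suspicious outlier
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "flag for review" if label == -1 else "looks normal"
    print(session, "->", status)
```

The point is the same one behind Facebook’s approach: instead of waiting for a known-bad signature, the system learns what normal looks like and surfaces anything that strays too far from it.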
If you are interested in implementing the latest cloud- and AI-based cyber security solutions for your business, cyber security consulting can bring you up to date with the most cutting-edge technologies. Contact us to get started today.