Under fire from Congress and the intelligence services over its wait-and-see attitude toward Russian influence campaigns on its network ahead of the 2016 US presidential election, and threatened with being broken up, Facebook took action.
The company created by Mark Zuckerberg had to improve its threat intelligence capabilities, and a key part of that effort has been dipping into the pool of former US intelligence officers.
According to a report in Intelligence Online, as a further sign of the way Big Tech firms are ramping up their threat intelligence capacities, one of the authors of Facebook’s State of Influence Operations 2017-2020, published on 26 May, was David Agranovich, director of threat disruption.
Before joining the social network in 2018, this former Pentagon analyst was intelligence director for the White House’s National Security Council (NSC), the report said.
Co-author Nathaniel Gleicher, now head of Facebook’s security policy, previously oversaw cybersecurity at the NSC, building on his experience as senior counsel for the computer crime and intellectual property section of the Department of Justice.
The most powerful US intelligence agencies are keeping track of the trend along with the big tech firms: Mike Torrey, Facebook’s threat investigator since September 2018 and another co-author of the recent report, previously served as a cyber analyst for the NSA and the CIA, the report said.
To curb threats from Moscow, Facebook hired Russian-speaking Olga Belogolova as policy manager for influence operations.
A former journalist, she briefly worked on the former USSR region for the Department of State and the Pentagon before taking up the role.
Facebook’s head of global threat intelligence strategy, Ben Nimmo, is something of an exception, as he has mainly worked on disinformation at the Atlantic Council’s DFR Lab and at Graphika, though he also spent time at NATO, the report said.
According to the latest report, the United States and Ukraine are among the countries most targeted by disinformation campaigns, whether domestic or foreign.
For Facebook, the most active disinformation actors on the network remain Russia’s Internet Research Agency and other companies linked to Yevgeny Prigozhin, as well as Iranian and Burmese state authorities and media, but also US extremists and Ukrainian communication agencies and political parties, the report said.
Nicknamed “Putin’s chef” because his company has catered for the Kremlin, Prigozhin was sanctioned by the European Union on grounds of undermining peace in Libya by supporting the Wagner private military company.
Prigozhin, 59, was earlier sanctioned by the US for his links to Wagner, which has been accused of sending mercenaries to fight in conflicts in Libya, Syria and countries in sub-Saharan Africa.
Despite the sanctions, Prigozhin has branched out into various businesses, uses shadowy offshore firms and enjoys a lavish jet-set lifestyle.
According to the Facebook report, threat trends continue to evolve. Here are some of the key trends and tactics that have been observed:
- A shift from “wholesale” to “retail” IO (Information Operations): Threat actors pivot from widespread, noisy deceptive campaigns to smaller, more targeted operations.
- Blurring of the lines between authentic public debate and manipulation: Both foreign and domestic campaigns attempt to mimic authentic voices and co-opt real people into amplifying their operations.
- Perception Hacking: Threat actors seek to capitalize on the public’s fear of IO to create the false perception of widespread manipulation of electoral systems, even if there is no evidence.
- IO as a service: Commercial actors offer their services to run influence operations both domestically and internationally, providing deniability to their customers and making IO available to a wider range of threat actors.
- Increased operational security: Sophisticated IO actors have significantly improved their ability to hide their identities, using technical obfuscation and both witting and unwitting proxies.
What can be done about these threats? The threat report highlights a number of actions to make IO less effective, easier to detect and more costly for threat actors.
Because expert investigations are hard to scale, it’s important to combine them with automated detection systems.
This in turn allows investigators to focus on the most sophisticated adversaries and emerging risks coming from yet unknown actors.
In addition to stopping specific operations, platforms should keep improving their defenses to make the tactics that threat actors rely on less effective: for example, by improving automated detection of fake accounts.
Lessons from CIB (Coordinated Inauthentic Behavior) disruptions are incorporated back into the products, and red-team exercises are run to better understand the evolution of the threat and to prepare for highly targeted events like elections.
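As a toy illustration, and not the report's actual method, one simple automated-detection signal of the kind described above is to cluster accounts that post near-identical text within a short time window, a crude proxy for coordinated inauthentic behavior (all names and thresholds here are illustrative):

```python
from collections import defaultdict


def flag_coordinated_accounts(posts, min_cluster=3, window_seconds=60):
    """Flag accounts that post identical text within a short time window.

    `posts` is a list of (account_id, timestamp_seconds, text) tuples.
    Returns the set of account ids appearing in any suspicious cluster.
    This is a toy heuristic, not Facebook's actual CIB detection.
    """
    # Group posts by normalized text so near-duplicates land together.
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = set()
    for events in by_text.values():
        events.sort()
        # Look for dense clusters of distinct accounts in the time window.
        for i in range(len(events)):
            cluster = {acct for ts, acct in events
                       if events[i][0] <= ts <= events[i][0] + window_seconds}
            if len(cluster) >= min_cluster:
                flagged |= cluster
    return flagged
```

A real system would use far richer signals (account age, network structure, content similarity rather than exact matches), but the shape is the same: cheap automated triage surfaces candidates so that human investigators can focus on the most sophisticated adversaries.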
Influence operations are rarely confined to one medium.
While each service only has visibility into activity on its own platform, all of society — including independent researchers, law enforcement and journalists — can connect the dots to better counter IO.
One area where a whole-of-society approach is particularly impactful is in imposing costs on threat actors to deter adversarial behavior.
For example, exposing and banning the people behind IO operations leverages public transparency and predictability in enforcement.
According to SOPHOS.com, scams on Facebook include cross-site scripting, clickjacking, survey scams and identity theft.
One of scammers’ favorite attack methods at the moment is cross-site scripting, in a variant known as “Self-XSS.”
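For context, classic cross-site scripting succeeds when a site embeds user-controlled text in a page without escaping it; Self-XSS differs in that the victim is tricked into pasting the malicious script themselves. A minimal sketch of the standard escaping defense, using only Python's standard library (the function name is illustrative, not any real platform's API):

```python
import html


def render_comment(user_text: str) -> str:
    """Embed user-controlled text in HTML safely.

    html.escape turns <, >, & and quotes into entities, so an injected
    <script> tag is rendered as inert literal text instead of executing.
    """
    return '<p class="comment">' + html.escape(user_text) + "</p>"
```

Calling `render_comment('<script>alert(1)</script>')` yields a paragraph containing the harmless text `&lt;script&gt;alert(1)&lt;/script&gt;` rather than runnable script. Escaping helps only against injection through the page itself; it cannot stop a user from pasting attacker-supplied code into their own browser console, which is why Self-XSS is fought mainly with warnings and user education.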
Facebook scams also tap into interest in the news, holiday activities and other topical events to get you to innocently reveal your personal information.
Facebook posts such as “create a Royal Wedding guest name” and “In honor of Mother’s Day” seem innocuous enough, until you realize that information such as your children’s names and birthdates, pet’s name and street name now reside permanently on the Internet.
Since this information is often used for passwords or password challenge questions, it can lead to identity theft.
Other attacks on Facebook users include “clickjacking” or “likejacking,” also known as “UI redressing.”
This malicious technique tricks web users into revealing confidential information, or into ceding control of their computer, when they click on seemingly innocuous webpages.
Clickjacking takes the form of embedded code or script that can execute without the user’s knowledge. One disguise is a button that appears to perform another function.
Clicking the button sends out the attack to your contacts through status updates, which propagates the scam.
Scammers try to pique your curiosity with messages like “Baby Born Amazing effects” and “The World Funniest Condom Commercial – LOL.”
Both clickjacking scams take users to a webpage urging them to watch a video. Viewing the video posts a “like” of the link to your profile and shares it with your friends, spreading the scam virally across Facebook.
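Clickjacking works by loading the target page inside an invisible frame and overlaying bait on top of it, so the standard server-side defense is to tell browsers the page may not be framed at all. A minimal sketch of that defense as WSGI middleware (names are illustrative; the headers themselves, `X-Frame-Options` and the CSP `frame-ancestors` directive, are the real, widely supported mechanisms):

```python
def anti_clickjacking_middleware(app):
    """Wrap a WSGI app so every response forbids framing by other sites."""
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            # Drop any pre-existing framing header, then deny all framing.
            headers = [(k, v) for k, v in headers
                       if k.lower() != "x-frame-options"]
            headers.append(("X-Frame-Options", "DENY"))
            headers.append(("Content-Security-Policy",
                            "frame-ancestors 'none'"))
            return start_response(status, headers, exc_info)
        return app(environ, start_with_headers)
    return wrapped
```

With these headers set, a scammer's page can no longer stack an invisible copy of the site under a fake "play" button, because the browser refuses to render the framed copy in the first place.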
Sources: Asia Times would like to thank Pierre Gastineau at Intelligence Online, Facebook’s State of Influence Operations, SOPHOS, Euractiv.com