Fraudulent uses of Facebook are nearly as numerous as the social network's users. In recent years, critics have pressed the American giant to do ever more to fight trolls, fake news and other online scams head-on.
Now Facebook wants to move up a gear by building a platform run entirely by bots, designed to help prevent people from abusing its systems. The technological challenge is to create a world of bots capable of imitating fraudulent behaviour.
Scams and trolls are more than ever at the heart of tech news, and beyond. True to its anti-censorship policy, the Californian company has tasked its researchers with testing a dedicated platform: essentially a ghost Facebook where non-existent users can like, share and make friends (or harass, abuse and scam) away from human eyes.
This Web-Enabled Simulation (WES) program tests Facebook's systems for reliability, the integrity of shared data, and privacy protection.
The researchers drew on several fields of study, including machine learning and AI-assisted game playing.
They therefore set up a system in which, for example, a “scammer” bot can be trained to interact with “target” bots that behave like the real victims of Facebook scams.
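To make this concrete, here is a minimal, hypothetical sketch in Python of what such a setup could look like: a scammer bot learns by trial and error (a simple epsilon-greedy bandit, standing in for the machine-learning methods mentioned above) which tactic works best on simulated victims. All names, tactics and numbers are invented for illustration and do not come from Facebook's actual system.

```python
import random

ACTIONS = ["send_link", "ask_for_payment", "befriend_first"]

class TargetBot:
    """A fake user whose susceptibility mimics real scam victims (assumed values)."""
    def __init__(self, susceptibility):
        self.susceptibility = susceptibility  # tactic -> probability of falling for it

    def respond(self, action):
        # True if the target "falls for" this scam attempt.
        return random.random() < self.susceptibility[action]

class ScammerBot:
    """Learns which tactic works best via a simple epsilon-greedy bandit."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in ACTIONS}
        self.values = {a: 0.0 for a in ACTIONS}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        return max(ACTIONS, key=lambda a: self.values[a])  # exploit

    def learn(self, action, reward):
        # Incremental mean update of the estimated success rate.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Train the scammer against a population of simulated victims.
targets = [TargetBot({"send_link": 0.05, "ask_for_payment": 0.01,
                      "befriend_first": 0.15}) for _ in range(100)]
scammer = ScammerBot()
for episode in range(5000):
    target = random.choice(targets)
    action = scammer.choose()
    scammer.learn(action, 1.0 if target.respond(action) else 0.0)

print("Learned success estimates:", scammer.values)
```

Watching which tactics such a bot converges on is precisely what lets engineers study scam behaviour without exposing a single real user.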
Other bots can be trained to invade the privacy of fake users, or to hunt down “bad” content that violates Facebook's rules.
Another lesson of the study is that bot-driven testing helps detect bugs. The researchers used the simulation to create users whose sole purpose was to steal information from other bots and share it within the system.
The social network should now be able to anticipate bad behaviour, such as scammers gaining access to other users' data after an update.
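As an illustration of how an information-stealing bot can double as a bug detector, here is a hypothetical sketch: a toy access-control layer, a “thief” bot that tries to read private data, and a test that fails the moment an update loosens the privacy policy. The PrivacyService API below is an assumption made for this example, not Facebook's actual interface.

```python
class PrivacyService:
    """Toy access-control layer guarding user data."""
    def __init__(self):
        self._data = {}     # user_id -> private payload
        self._friends = {}  # user_id -> set of friend ids

    def store(self, user_id, payload, friends):
        self._data[user_id] = payload
        self._friends[user_id] = set(friends)

    def read(self, requester_id, user_id):
        # Assumed policy: only the owner and their friends may read.
        if requester_id == user_id or requester_id in self._friends[user_id]:
            return self._data[user_id]
        raise PermissionError("access denied")

class ThiefBot:
    """Bot whose sole purpose is to exfiltrate other users' data."""
    def __init__(self, bot_id):
        self.bot_id = bot_id

    def attempt_theft(self, service, victim_id):
        try:
            service.read(self.bot_id, victim_id)
            return True   # a leak: exactly the bug the simulation hunts for
        except PermissionError:
            return False

def test_update_does_not_leak_private_data():
    service = PrivacyService()
    service.store("alice", {"phone": "555-0100"}, friends=["bob"])
    thief = ThiefBot("mallory")
    # If a code update loosens the policy, this assertion fails.
    assert not thief.attempt_theft(service, "alice")

test_update_does_not_leak_private_data()
print("No leak detected: privacy rules held.")
```

Run as part of a test suite, such a bot turns a privacy regression into an immediate, visible failure rather than a silent leak.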
In the test, some bots were allowed onto the “real” Facebook, so long as they did not access data in ways that violated privacy rules. They could then react to that data.
Within this large-scale fake network, they could also act and make arbitrary observations. The researchers warn, however, that “bots must be properly isolated from real users to ensure that the simulation does not lead to unexpected interactions between bots and real users”.
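The kind of isolation the researchers describe could, in principle, look like the following hypothetical sketch: a gating layer lets simulated accounts act on real infrastructure but blocks any action whose target is a real user. The class and method names are invented for illustration.

```python
class IsolationError(Exception):
    pass

class IsolationLayer:
    """Walls bots off from real users: bots may only interact with other bots,
    even though their actions run on the real platform code."""
    def __init__(self, bot_ids):
        self.bot_ids = set(bot_ids)

    def execute(self, actor_id, action, target_id):
        if actor_id not in self.bot_ids:
            raise IsolationError(f"{actor_id} is not a simulated user")
        if target_id not in self.bot_ids:
            # Block any action whose target is a real account.
            raise IsolationError(
                f"blocked: {actor_id} tried to {action} real user {target_id}")
        return f"{actor_id} -> {action} -> {target_id}"

layer = IsolationLayer(bot_ids={"bot_1", "bot_2"})
print(layer.execute("bot_1", "friend_request", "bot_2"))  # allowed
try:
    layer.execute("bot_1", "friend_request", "real_user_42")
except IsolationError as err:
    print(err)  # blocked
```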
The researchers are likely to confine these interactions to the fake network precisely because they want to prevent bot actions from having a potentially catastrophic impact on the real social network.
We are therefore still a long way from a heated argument between a human and a machine, or from a credit card scam run by an artificial intelligence gone rogue.