According to a report from the Financial Times, Google is working on a tool that could help moderate extremist and hateful content for small businesses, such as startups, that may not have the resources to do so.
The internal project, developed by Google's Jigsaw division, which is tasked with countering threats to open societies, was built in collaboration with the UN-backed Tech Against Terrorism.
Google says the initiative is designed to help moderators detect and remove potentially illegal content, including racist and other hateful comments, from a website.
Google to combat terrorism
The project was made possible by a database of terrorist content provided by the Global Internet Forum to Counter Terrorism (GIFCT), which was founded by a group of tech giants including Google, Meta, Microsoft and Twitter.
It is specifically designed to support small businesses that cannot afford the resources needed for effective moderation, whether that means large teams of human moderators or expensive AI tools.
The tool is expected to prove valuable at a time when extremists banned from major networks are moving to smaller platforms to express their views. It also serves as a safeguard for companies that must comply with the EU's Digital Services Act and the UK's upcoming Online Safety Bill, both of which penalize companies that fail to remove such content.
For now, the tool appears set to operate on an opt-in basis, meaning that companies whose primary purpose is to harbor such messages can continue to do so, even in the face of potential fines.
It is believed that two as-yet-unnamed companies will test the tool later this year, which suggests that full deployment is still some time away.
Elsewhere, Meta has launched its own tool, called Hasher-Matcher-Actioner (HMA). Like Jigsaw's project, it is designed to prevent the spread of hateful content, building on the platform's existing photo and video moderation tools.
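HMA's name describes the general flow of such systems: content is hashed into a fingerprint, the fingerprint is matched against a shared database of known violating material, and a match triggers an action such as removal. The sketch below illustrates that flow in Python as a rough approximation only; it is not the real HMA API, the function names and example data are hypothetical, and the exact-match SHA-256 hashing stands in for the perceptual hashing these tools typically rely on.

```python
import hashlib

# Hypothetical shared database of fingerprints of known violating images.
KNOWN_VIOLATING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def hasher(image_bytes: bytes) -> str:
    """Compute a fingerprint of the uploaded content (exact hash used here for illustration)."""
    return hashlib.sha256(image_bytes).hexdigest()

def matcher(fingerprint: str) -> bool:
    """Check the fingerprint against the shared database of known violating content."""
    return fingerprint in KNOWN_VIOLATING_HASHES

def actioner(upload_id: str, matched: bool) -> str:
    """Decide what happens to the upload based on the match result."""
    return f"remove and flag {upload_id} for review" if matched else f"allow {upload_id}"

def moderate(upload_id: str, image_bytes: bytes) -> str:
    """Run the hash -> match -> action pipeline for a single upload."""
    return actioner(upload_id, matcher(hasher(image_bytes)))

print(moderate("upload-001", b"example image bytes"))
```

The appeal of this pipeline for smaller platforms is that the expensive part, curating the database of known harmful material, is shared across the industry, while each platform only has to run the comparatively cheap hash-and-lookup step on its own uploads.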