Protect your users from online hate speech and cyberbullying

Sherloq uses the latest advances in deep learning and natural language processing to detect hate speech and cyberbullying, helping you create a safe environment for your users.

Cyberbullying and hate speech are a growing threat

Cases of online harassment and hate speech are making headlines more often than ever, and the business costs, from user churn to lost advertising revenue, are growing:

Users who experience toxicity are 320% more likely to quit.

About 38% of editors at Wikimedia have experienced some form of harassment, and half of those are less likely to contribute further.

Social media platforms are facing millions in fines in Europe for not removing hateful content fast enough.

Current approaches to this problem rely on manual moderation and word blacklists. Both are slow, inaccurate, and costly.
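The weakness of the blacklist approach is easy to see in code. The sketch below is purely illustrative (the word list, matching rule, and example messages are assumptions, not taken from any real moderation system): an exact-match filter catches listed words but misses trivial obfuscations and toxic messages that use no listed word at all.

```python
# Illustrative sketch of a word-blacklist filter (not a real system).
BLACKLIST = {"hate", "stupid"}

def blacklist_flags(text):
    """Flag a message if any token exactly matches a blacklisted word."""
    tokens = text.lower().split()
    return any(token.strip(".,!?") in BLACKLIST for token in tokens)

print(blacklist_flags("You are stupid."))    # caught: exact match
print(blacklist_flags("You are st*pid."))    # missed: trivial obfuscation
print(blacklist_flags("Nobody likes you."))  # missed: no blacklisted word
```

A filter like this can only be patched word by word, which is why sentence-level models that look at context, rather than individual tokens, are needed.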

How can Sherloq help?

Using the latest advances in deep learning and natural language processing, Sherloq is unique in its ability to learn at the sentence and paragraph level. This breakthrough in NLP allows it to detect many types of hate speech, from simple swear words to more complex patterns.

Using this powerful engine, Sherloq analyzes the content users generate and provides detailed analytics and insights about both toxic content and users.

Sherloq offers one-click integration with most online social networks and a simple API for custom platforms.
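For custom platforms, an integration might look like the sketch below. Sherloq's actual endpoint, field names, and response format are not documented here, so every name in this example is a hypothetical illustration of the request/response shape such a moderation API typically uses.

```python
import json

# Hypothetical request payload for a text-moderation API.
# Field names and response shape are assumptions, not Sherloq's documented API.
def build_moderation_request(text, user_id):
    """Package a piece of user-generated content for analysis."""
    return {
        "content": text,
        "user_id": user_id,
        "return_scores": True,  # ask for per-category toxicity scores
    }

def is_toxic(response, threshold=0.8):
    """Flag content whose highest category score crosses a threshold."""
    scores = response.get("scores", {})
    return any(score >= threshold for score in scores.values())

payload = build_moderation_request("example comment", user_id="u123")
print(json.dumps(payload))

# Handling a (made-up) response:
fake_response = {"scores": {"hate_speech": 0.92, "profanity": 0.10}}
print(is_toxic(fake_response))  # True with the 0.8 threshold
```

The threshold would be tuned per platform: a children's community might flag at a lower score than an adult forum.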




Please try a random example



Leave us your email to get early access to Sherloq.