TensorFlow.js toxicity classifier demo

This is a demo of the TensorFlow.js toxicity model, which classifies text according to whether it exhibits offensive attributes (e.g. profanity, sexual explicitness). The samples in the table below were taken from a Kaggle dataset.
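
Under the hood, the demo uses the `@tensorflow-models/toxicity` package. The sketch below shows the basic load-and-classify flow; the `threshold` value and sample sentence are illustrative, not the demo's exact configuration.

```js
import * as toxicity from '@tensorflow-models/toxicity';

// Minimum prediction confidence: if neither class probability
// exceeds this threshold, a label's `match` is reported as null.
const threshold = 0.9;

async function run() {
  // Load the model. An empty label list means all seven
  // toxicity labels are predicted.
  const model = await toxicity.load(threshold, []);

  // classify() accepts an array of sentences and returns, for each
  // toxicity label, per-sentence probabilities and a match verdict.
  const predictions = await model.classify(['you suck']);

  for (const {label, results} of predictions) {
    console.log(label, results[0].match); // true / false / null
  }
}

run();
```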

Enter text below and click 'Classify' to add it to the table.
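
A hedged sketch of how the 'Classify' button might be wired up follows. The element ids (`text-input`, `classify-button`, `results-table`) are hypothetical placeholders for illustration, not the demo's actual markup.

```js
import * as toxicity from '@tensorflow-models/toxicity';

// Start loading the model once, up front; clicks await the same promise.
const modelPromise = toxicity.load(0.9, []);

document.getElementById('classify-button').addEventListener('click', async () => {
  const model = await modelPromise;
  const text = document.getElementById('text-input').value;

  // classify() returns one entry per toxicity label; results[0]
  // holds the verdict for our single input sentence.
  const predictions = await model.classify([text]);

  // Append one table row: the input text, then a cell per label.
  const row = document.createElement('tr');
  row.appendChild(Object.assign(document.createElement('td'), {textContent: text}));
  for (const {results} of predictions) {
    const cell = document.createElement('td');
    cell.textContent = String(results[0].match); // true / false / null
    row.appendChild(cell);
  }
  document.getElementById('results-table').appendChild(row);
});
```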
