We can all guess at what "toxic comments" are. There are enough of them on the web that it's a pretty easy "know it when you see it" judgment. Jigsaw's machine-learning system, Perspective, will moderate toxic comments for you.
"The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how 'toxic' they are and whether similar language had led other people to leave conversations..."
My only concern: if a site has really shoddy moderation, isn't there a chance that some not-so-toxic, merely challenging, comments get removed along with the genuinely toxic ones? Heated debate is a wonderful tool for learning, in my opinion. Yes, there is a line, but it's one I locate personally and instinctively, and automating the drawing of that line seems questionable to me. Perhaps it will be a flexible tool, though; maybe Perspective can moderate according to the needs of each individual community. If the members accept such a thing, I suppose there would be no issue. I don't think I would implement it on anything I moderate or admin, however.