"It's our responsibility to create a safe environment on Instagram," said a statement from Adam Mosseri, head of the visually focused social platform owned by Facebook. "This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem."
One new tool being rolled out is an artificial-intelligence-generated warning that notifies users their comment may be considered offensive before it is posted. "This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification," Mosseri said.
"From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect." Another new tool is aimed at limiting the spread of abusive comments on a user's feed. "We've heard from young people in our community that they're reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life," Mosseri commented.