Instagram announced on Monday that it would roll out new tools to fight online harassment and ensure a “safe environment” for its users. Among the new features is a warning, generated by artificial-intelligence software, that will be sent to people about to post offensive comments on the popular image-sharing application.
The goal here is to limit the influence of a person seeking to harm others.
“This intervention gives people a chance to reflect and undo their remark, and it spares the recipient from receiving this harmful comment,” explained Adam Mosseri, the head of Instagram. Another tool, called “Restrict,” aims to curb a potential flood of negative comments on the account of a user who is being harassed.
The feature is meant for users who are reluctant to block or unfollow a harasser’s account for fear of escalating the situation.
Users will be able to limit the reach of someone seeking to harm them by choosing whether to approve that person’s comments, and thus whether those comments are visible to others. “Restricted people will not be able to see if you are active on Instagram or if you have read their direct messages,” Mr. Mosseri explained.