Facebook is using new AI tools and algorithms to detect posts that may indicate a user is suicidal or thinking about harming themselves.
Posts can already be flagged and reported by other users who notice signs of suicidal thoughts, but the new artificial intelligence tools also identify signals such as comments asking whether a user is OK, then alert Facebook's team when they detect such patterns.
Posts that are flagged, whether by other users or by the AI tools, are then reviewed by Facebook staff to determine whether intervention is needed. Interventions may include dispatching first responders or sending messages from Facebook staff about resources such as crisis help lines or the National Eating Disorder Association, CNN reported.
AI is also being used to prioritize flagged posts so that users can get help more quickly.
“In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times,” Facebook CEO Mark Zuckerberg posted Monday. “With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today.”
Those numbers do not include the posts flagged by other Facebook users.
The new AI tools will be used globally, except in the EU, where data privacy restrictions are more stringent than elsewhere, CNN reported.
Some on Twitter wondered whether their posts would get responses from the new algorithms.
© 2025 Newsmax. All rights reserved.