IrisChat is designed to support mental health and emotional wellbeing. Because users may sometimes share content related to distress, self-harm, or crisis situations, IrisChat maintains the following safety protocols in accordance with applicable law and our commitment to user wellbeing.
IrisChat includes automated safety features designed to detect certain types of crisis-related content in user conversations, including expressions of suicidal ideation, self-harm, or statements indicating a risk to the user's safety or the safety of others.
When such content is detected, IrisChat is designed to display a notification directing the user to relevant crisis service providers, such as the 988 Suicide & Crisis Lifeline.
IrisChat's crisis detection is performed by automated systems. These systems are not guaranteed to identify every situation that may require intervention and may fail to detect crisis-related content in some cases.
You should not rely on IrisChat to recognize, detect, or respond to a mental health crisis or medical emergency.
If you or someone you know is experiencing a mental health crisis, suicidal thoughts, or a medical emergency, please contact emergency services (911) or the 988 Suicide & Crisis Lifeline (call or text 988) immediately. Do not use IrisChat to report an emergency.
IrisChat maintains protocols to prevent the generation of content that promotes, encourages, or provides instructions for suicide or self-harm. These protocols are applied across all user interactions.
FL105 reviews and updates these safety protocols on an ongoing basis. This page reflects the protocols currently in effect.
For questions about IrisChat's safety practices, contact us at privacy@irischat.ai.