Crisis and Safety Protocol
Updated: December 4, 2025
Wayhaven is an AI well-being coach designed to support students and other adult users with everyday challenges and emotional wellness. It is not a crisis service, emergency responder, or clinical provider and does not perform diagnosis, risk assessment, or treatment. When conversations touch on thoughts of suicide or self-harm, Wayhaven follows a dedicated harm to self protocol that prioritizes human support and limits the content that the AI can provide in these situations.
Purpose and Scope
Our protocol is designed to do three things whenever a user mentions suicidal ideation, suicide, or self-harm:
- Stop normal coaching conversation so that the AI does not continue as a “companion” on this topic.
- Avoid generating content that could encourage or normalize suicidal or self-harm acts, or that provides tips or instructions related to them.
- Proactively guide the user toward trained human support, including crisis hotlines, crisis text lines, and local resources.
This protocol applies to all harm to self content, whether the user describes current or past thoughts or behaviors, and whether the language is direct or indirect.
How we detect concerning content
Every message is screened by an automated safety system that is designed to identify language about:
- Wanting to die, disappear, or not exist
- Thoughts of harming oneself
- References to suicide, self-harm methods, or plans
- Statements that are ambiguous but may point to harm to self, such as “I can’t do this anymore”
For ambiguous statements, the AI may use limited clarifying language in plain terms before routing based on the user’s voluntary response, without conducting a clinical risk evaluation. This helps reduce false positives and avoid prematurely interrupting a user’s experience when their wording may not actually refer to harm to self.
The AI is instructed not to seek additional details about suicidal thoughts, intent, plans, or methods beyond that limited clarifying language.
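For illustration only, the sketch below shows one way this screening and routing step could be structured. Every name and keyword list in it is a hypothetical placeholder rather than a description of Wayhaven’s actual safety system, which may detect concerning language very differently.

```python
# Hypothetical sketch of message screening and routing; not Wayhaven's actual implementation.
from enum import Enum, auto

class ScreenResult(Enum):
    NONE = auto()        # no harm to self language detected
    AMBIGUOUS = auto()   # wording that may or may not refer to harm to self
    EXPLICIT = auto()    # clear references to suicide or self-harm

def screen_message(text: str) -> ScreenResult:
    """Toy stand-in for the automated safety screening applied to every message."""
    lowered = text.lower()
    explicit_markers = ("want to die", "hurt myself", "suicide", "self-harm")  # placeholder examples
    ambiguous_markers = ("can't do this anymore",)                             # placeholder example
    if any(marker in lowered for marker in explicit_markers):
        return ScreenResult.EXPLICIT
    if any(marker in lowered for marker in ambiguous_markers):
        return ScreenResult.AMBIGUOUS
    return ScreenResult.NONE

def route(message: str, clarified_as_harm: bool | None = None) -> str:
    """Route the conversation using at most one plain-language clarification.

    No clinical risk evaluation is performed, and no further detail about
    intent, plans, or methods is requested beyond that clarification.
    """
    result = screen_message(message)
    if result is ScreenResult.EXPLICIT:
        return "crisis_protocol"
    if result is ScreenResult.AMBIGUOUS:
        if clarified_as_harm is None:
            return "ask_single_clarifying_question"
        return "crisis_protocol" if clarified_as_harm else "normal_coaching"
    return "normal_coaching"
```

The point of the sketch is the routing shape: explicit references go straight to the crisis protocol, ambiguous wording receives at most one plain-language clarification, and the user’s voluntary response determines the route.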
What happens when harm to self content is detected
Once the harm to self protocol is triggered, the AI does the following (an illustrative sketch of this flow appears after the list):
- Sets clear boundaries
  - Acknowledges the user’s distress in supportive, nonjudgmental language.
  - States that as an AI coach, it cannot safely help with suicidal or self-harm situations and that a trained human is needed.
  - Does not offer coping tips, advice, or wellness tools in conversations where a user has described suicidal thoughts together with details such as intent, a method, a plan, or actions toward harming themselves.
  - When the harm to self protocol is active and the user describes thoughts like not wanting to be here, or other painful ideas about their own well-being, without describing intent, a method, a plan, or actions toward harming themselves, and they decline to contact a human support option, the AI may offer a brief grounding or distress tolerance exercise that remains within its wellness scope. These exercises are offered only after crisis resources have been presented and the user has chosen not to reach out.
- Provides crisis referrals that include hotlines and text lines
  - Presents local or institution-specific resources when available, such as campus counseling, after-hours lines, or other designated supports.
  - Provides national crisis options that connect the user with trained professionals, such as:
    - Suicide & Crisis Lifeline (call or text 988 or chat online)
  - Directs the user to call emergency services or go to the nearest emergency department in a medical or mental health emergency.
  - Presents resources as direct, clickable links or phone numbers whenever the user’s device supports that capability.
- Addresses barriers to reaching out
  - If a user states that they do not want to contact a resource or expresses ambivalence about reaching out, the AI explores barriers in a limited, structured way. Examples include concerns about whether the resource will help, privacy concerns, practical obstacles, or not knowing what to say. For each barrier type, the AI uses brief, evidence-informed messages aimed at encouraging help-seeking, while staying within a non-clinical, wellness scope.
  - The AI does not minimize the user’s experience, does not promise outcomes, and does not attempt to evaluate the user’s level of risk.
- Supports follow-through when the user agrees to seek help
  - When a user chooses to contact a crisis service or other human support, the AI may offer to collaborate on a short, structured “action plan” to make that step more concrete. This can include asking how the user plans to contact the resource, where they will be when they do so, and whether they want someone with them. The goal is to support follow-through, not to evaluate, monitor, or clinically manage risk, which is the role of human providers.
  - The AI then asks the user to let it know once they have connected with human support and returns to non-crisis topics only after this confirmation.
- Limits continued engagement
  - In situations where a user voluntarily shares suicidal thoughts together with details such as intent, a method, a plan, or actions that they have taken toward harming themselves, the AI stops all standard coaching and restricts its role to:
    - Validating the user’s distress in plain, compassionate language.
    - Presenting crisis and emergency resources as described above.
    - Encouraging immediate connection with a human provider.
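For illustration only, here is a minimal sketch of how the steps above could be sequenced, assuming hypothetical step names and a simplified view of the conversation state; it is not Wayhaven’s actual implementation.

```python
# Hypothetical sketch of the response flow once the harm to self protocol is triggered.
from dataclasses import dataclass

@dataclass
class CrisisContext:
    has_intent_plan_method_or_action: bool  # user described intent, a method, a plan, or actions taken
    agreed_to_reach_out: bool               # user chose to contact a crisis service or other human support
    declined_human_support: bool            # user chose not to contact any offered resource

def crisis_response_steps(ctx: CrisisContext) -> list[str]:
    """Return the ordered steps the AI is limited to after the protocol triggers."""
    steps = [
        "acknowledge_distress",        # supportive, nonjudgmental validation
        "state_ai_limits",             # the AI cannot safely help with this; a trained human is needed
        "present_crisis_resources",    # 988 Lifeline, local or campus supports, emergency services
    ]
    if ctx.has_intent_plan_method_or_action:
        # All standard coaching stops: no coping tips, advice, or wellness tools.
        steps.append("encourage_immediate_human_connection")
        return steps
    if ctx.agreed_to_reach_out:
        steps.append("offer_action_plan")               # how and where they will reach out, who can be with them
        steps.append("await_confirmation_of_contact")   # return to non-crisis topics only after confirmation
        return steps
    if ctx.declined_human_support:
        steps.append("explore_barriers_briefly")        # privacy, practicality, not knowing what to say
        steps.append("offer_brief_grounding_exercise")  # only after resources were presented and declined
    return steps
```

The branching mirrors the policy: coaching stops entirely when intent, a method, a plan, or actions toward self-harm are described, and a grounding exercise is offered only after crisis resources have been presented and declined.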
Local resources and institutional alerts
Wayhaven is designed to surface local resources alongside national crisis services whenever such information has been provided by a partner organization. Examples include campus counseling centers, after-hours crisis lines, basic needs programs, or other designated campus supports with trained staff.
For partner organizations that choose to participate in crisis alerting, Wayhaven can generate an alert when a harm to self conversation activates this feature (an illustrative sketch appears at the end of this section). In those cases:
- A notification is sent to the organization’s designated contact(s) using the communication method(s) they specify, such as phone or email.
- The alert provides information that the partner organization may use in deciding whether or how to follow up under its own policies and practices. Wayhaven does not control or guarantee whether an organization reviews or acts on an alert.
- If an institution declines crisis alerts, the system still detects and logs the incident internally, and the user receives the same crisis referrals and resource information described above. This log is part of automated system behavior and does not imply human monitoring.
Wayhaven itself does not function as a first responder and does not replace campus, community, or national crisis services.
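For illustration only, the sketch below shows one way the alerting and internal logging described above could be arranged, assuming hypothetical configuration fields and helper functions; it is not Wayhaven’s actual code.

```python
# Hypothetical sketch of crisis alerting for partner organizations.
from dataclasses import dataclass, field

@dataclass
class PartnerConfig:
    alerts_enabled: bool                                # whether the organization participates in crisis alerting
    contacts: list[str] = field(default_factory=list)   # designated contact(s) named by the organization
    channels: list[str] = field(default_factory=list)   # delivery methods the organization specified, e.g. ["email", "phone"]

def log_incident_internally(event_id: str) -> None:
    """Placeholder for automated internal logging; it does not imply human monitoring."""
    ...

def send_alert(channel: str, contact: str, event_id: str) -> None:
    """Placeholder for delivering a notification via the organization's chosen method."""
    ...

def handle_crisis_event(event_id: str, partner: PartnerConfig | None) -> None:
    """Log the incident and, if the partner opted in, notify its designated contacts."""
    log_incident_internally(event_id)
    if partner is not None and partner.alerts_enabled:
        for channel in partner.channels:
            for contact in partner.contacts:
                send_alert(channel, contact, event_id)  # whether and how to follow up remains the organization's decision
    # In every case, the user is shown the same crisis referrals and resources in the conversation.
```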
Oversight and ongoing improvement
Wayhaven’s harm to self protocol is developed and maintained by clinical leadership with expertise in digital mental wellness, in collaboration with product and engineering teams. The protocol is implemented through system-level guardrails, AI prompts, and post-hoc review of de-identified transcripts that have triggered crisis pathways. The goal is to continuously improve safety and reduce the likelihood that the AI generates unsafe or inappropriate content related to suicidal ideation, suicide, or self-harm.
We review and update these safeguards as laws, best practices, and safety technology evolve.
Contact information
If you have any questions or concerns about this policy, please email us at support@wayhaven.com.