Designing an AI companion for survivors of sexual violence

Project Type

Independent Foundation

For

Beyond Believed

Year

2020-Present

Beyond Believed is an independent Social Purpose Foundation I founded to explore a difficult question: what would it look like for technology to show up for survivors of sexual violence in a way that is validating, trauma‑informed, and actually useful — without pretending to replace human care?

The starting point was the gap I saw between what survivors are often offered and what they actually need in the messy middle after harm. Hotlines and legal resources exist, but much of the experience is still shaped around systems’ needs: reporting, documentation, “next steps.” Survivors, meanwhile, are often cycling through shock, self‑blame, and uncertainty. They’re searching for language, trying to understand what happened, and deciding who (if anyone) to tell — all while navigating tools and platforms that are not designed with their emotional reality in mind. On many organizations’ websites aimed at supporting survivors, large, triggering statistics and prominent donate buttons dominate the page.

My goal with Beyond Believed was to create an AI‑powered companion that centers survivors instead of institutional systems. I wanted to create a space that could help someone name what they’re going through, understand options at their own pace, and feel believed — while being explicit about the limits and risks of technology in this context.

I approached this as both strategic research and a technology‑safety practice, expanding my knowledge of trauma psychology and trust‑and‑safety design for AI. I began with a deep dive into trauma‑informed care principles, crisis response best practices, and existing survivor‑support ecosystems. From there, I spoke with more survivors and dug deeper into the landscape to understand additional failure modes: experiences of being minimized or disbelieved, confusing legal and medical language, and digital products that push people into rigid pathways before they’re ready.

Using those insights, I designed core interaction patterns for the companion: how it greets someone, how it responds to disclosures, what it never says, and how it handles uncertainty or risk. A major focus was developing the companion’s working LLM prompt. I worked through multiple iterations of conversational flows to avoid blame, reduce pressure, and clearly signal autonomy (“here are some options if and when you want them”) rather than urgency or judgment. I also explored how to surface information about reporting, evidence, and care in a way that was specific and accurate without becoming clinical or overwhelming.
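To make that prompt work concrete, here is a minimal sketch of how a trauma‑informed system prompt and a simple “never says” check might be expressed in code. The wording, phrase list, and function name are illustrative assumptions for this write‑up, not the production prompt or guardrails.

```python
# Hypothetical sketch of a trauma-informed system prompt and a "never says" check.
# The wording and phrase list are illustrative, not the production configuration.

SYSTEM_PROMPT = """\
You are a supportive companion for survivors of sexual violence.
Principles:
- Believe the person. Never question whether the harm happened.
- Never assign blame or imply the survivor could have prevented it.
- Offer options, never instructions: "here are some options if and when
  you want them," not "you should" or "you need to."
- Follow the survivor's pace. Do not push toward reporting or any next step.
- Be specific and accurate about reporting, evidence, and care when asked,
  without becoming clinical or overwhelming.
- If you are uncertain, say so plainly and offer to connect the person
  with a vetted human provider.
"""

# Phrases the companion should never produce, checked against every draft reply.
BLOCKED_PHRASES = [
    "why didn't you",
    "you should have",
    "are you sure that",
    "it could have been worse",
]

def violates_guardrails(draft_reply: str) -> bool:
    """Return True if a draft reply contains language the companion never uses."""
    lowered = draft_reply.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)
```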

Because trust and safety are non‑negotiable in this space, I treated constraints as first‑class design inputs. I mapped out scenarios where AI should defer, slow down, or hand off (for example, disclosures of imminent harm), and how to communicate those boundaries honestly. In parallel, I’ve been building Beyond Believed as an active network, not just a product: doing ongoing business development with legal, therapeutic, and advocacy professionals and growing a vetted database of supportive providers across the U.S. That directory is core to the model — it allows the companion to hand someone off to real human care instead of trying to “hold” everything itself.
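As a rough illustration of that deferral‑and‑handoff mapping, the sketch below routes a conversation toward continuing, slowing down, or handing off to a vetted human. The categories, signals, and routing rules are hypothetical stand‑ins for the scenarios described above, not the actual risk logic.

```python
# Hypothetical sketch of deferral / handoff routing.
# Categories, signals, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()   # companion keeps the conversation
    SLOW_DOWN = auto()  # acknowledge, reduce pressure, check consent to continue
    HAND_OFF = auto()   # connect to a vetted human provider or crisis resource

@dataclass
class Assessment:
    imminent_harm: bool              # disclosure of immediate danger
    requests_legal_or_medical: bool  # asks for advice the companion shouldn't give
    high_distress: bool              # signals of acute distress in the conversation

def route(assessment: Assessment) -> Action:
    """Map a hypothetical risk assessment to a deferral or handoff action."""
    if assessment.imminent_harm:
        return Action.HAND_OFF   # always escalate to human / crisis resources
    if assessment.requests_legal_or_medical:
        return Action.HAND_OFF   # refer to vetted providers in the directory
    if assessment.high_distress:
        return Action.SLOW_DOWN  # follow the survivor's pace, no open-ended probing
    return Action.CONTINUE
```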

I was also intentional about not funneling survivors into a reporting‑first pathway. Rather than pushing toward police or institutional systems by default, I structured recommendations so that, when reporting or formal steps are relevant, they’re framed as options and connected to trusted, vetted humans who can walk someone through the process. The emphasis stays on centering the survivor’s pace and consent, while making sure there is always a clear, human next step available when they’re ready.

Beyond Believed has become a throughline in how I approach responsible technology: as a space where interaction design, research, and ethics are inseparable, and where success is measured not just by what a product can do, but by how it treats people at their most vulnerable. The work has been recognized with three Anthem Awards.

AI · Responsible Tech · Product Strategy · Trauma‑Informed Design
