What inspired you to join the alignAI project?

I’ve always been drawn to the intersection of technology and mental health. I have a background in psychology, and during my studies I was especially interested in the relationship between social media and psychological well-being. Through that work, I noticed how technology is increasingly shaping our emotional and cognitive experiences. As AI tools become more common in mental health support, I saw alignAI as the perfect opportunity to shift my focus toward a newer, rapidly evolving field, one that allows me to continue exploring human–technology interaction through Large Language Models.

What is the focus of your research within alignAI?

My work focuses on identifying ethical risks in current AI mental health tools and helping to build standards that guide the responsible, user-centered design of LLMs. This involves evaluating and validating these tools to ensure they are safe, fair, and genuinely supportive in mental health contexts.

What excites you the most about working at the intersection of AI and mental health?

Mental health is deeply human, while AI is often seen as the opposite – something highly technical, rational and distant. Bringing the two together opens up many opportunities. The real challenge is building something effective without losing the care and empathy mental health support requires. I’m excited to be part of a field that’s still evolving, and to help shape what ethical, human-centered AI can look like.

How do you see interdisciplinary collaboration shaping the future of AI, whether in your project or beyond?

AI alignment isn’t just a technical issue; it’s also a human one. That’s why interdisciplinary collaboration is essential for developing LLM tools that are not only functional but also genuinely respectful of human needs and values. I believe the future of AI depends on this kind of cooperation, where no single discipline holds all the answers.

If you had to explain your research to a friend outside academia, how would you describe it?

I usually say, “It’s a bit like quality-checking a digital therapist. These chatbots are designed to support your mental health, but just because they sound caring doesn’t mean they’re doing a good job all the time. My research looks at how we can evaluate them in a more thoughtful way, making sure they’re actually helpful, responsible, and not doing harm behind the scenes.”

Where can people follow your work?

I’ll be sharing updates on my LinkedIn page as the project progresses. You can also find updates on the alignAI website.

Doctoral Candidate Simay Toplu