Advancing AI safety through cutting-edge research and innovative development projects.
Our Commitment
Our commitment extends beyond education. We actively engage in research and development to advance the field of AI safety.
Through rigorous investigation and innovative solutions, we aim to contribute to the global understanding of AI alignment challenges while developing practical tools that benefit the broader research community.
Our Focus Areas
Research
- Trustworthy AI systems
- Neural network reliability assessment
- Foundation model evaluation
- Adversarial robustness
- AI control mechanisms
Development
- AI research automation tools
- Collective epistemics frameworks
- Open source contributions
- Research infrastructure
- Community tools and resources
Research Impact
Our work informs the broader conversation on AI safety challenges while delivering practical tools for the research community.
Collaboration Opportunities
Student Research
Opportunities for students to contribute to ongoing research projects
Faculty Partnerships
Collaborations with RIT faculty across multiple departments
Industry Connections
Partnerships with leading AI safety organizations
Open Source
Contributing tools and research to the broader community
Get Involved in Our Research
Interested in contributing to AI safety research? Whether you're a student looking for research opportunities or a researcher interested in collaboration, we'd love to hear from you.
