Build production-ready ML skills tailored for AI alignment research. This free, intensive 8-week program transforms mathematically minded students into capable AI safety researchers.
📋 Who Should Apply
- Background Required: Undergraduate mathematics + Python programming experience
- Preferred (but not required): Data structures & algorithms experience
- Time Commitment: Full-time availability for 8 weeks (40+ hours/week)
- Mindset: Genuine commitment to an AI safety career path
- Motivation: Strong desire to contribute to AI alignment research
Note: This is an intensive, full-time bootcamp designed to rapidly upskill talented individuals for AI safety careers. Please ensure you can commit fully before applying.
Technical Intensive
Comprehensive 8-Week Curriculum
Inspired by MLAB and ARENA curricula, our program covers:
- Neural network fundamentals with PyTorch from scratch
- Convolutional Neural Networks (CNNs) implementation
- Recurrent Neural Networks (RNNs) and LSTM architectures
- ResNet architecture implementation from the ground up
- Transformer architecture (“Attention is All You Need”) from scratch
- GPT-2 implementation and fine-tuning
- BERT implementation and fine-tuning
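To give a flavor of this module, here is a minimal sketch of scaled dot-product attention, the core operation of the transformer. It is written in NumPy for brevity; in the program itself you would build this (and full multi-head attention) in PyTorch.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The core operation from "Attention Is All You Need":
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row is a probability distribution
    return weights @ V                            # weighted average of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, head dimension d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query position
```

In the curriculum you extend exactly this computation into multi-head attention, add causal masking, and stack it into a full GPT-style transformer.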
- Core concepts of interpretability and why it matters for AI safety
- Model-agnostic interpretation methods (SHAP, LIME, etc.)
- Visualizing and understanding Convolutional Neural Networks
- Transformer interpretability techniques and tools
- Adversarial robustness and its implications for safety
- Mechanistic interpretability using TransformerLens
- In-context learning and induction heads analysis
- Superposition and feature disentanglement
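As a small taste of the superposition topic: the key premise is that a layer with d neurons can represent many more than d features by storing them along nearly-orthogonal directions, at the cost of some interference. This illustrative NumPy sketch (not course material) shows that 50 random unit directions fit into 20 dimensions with pairwise interference well below 1.

```python
import numpy as np

# Toy illustration of superposition: a d-dimensional space admits many more
# than d nearly-orthogonal directions, so a layer can represent more
# "features" than it has neurons, at the cost of interference between them.
rng = np.random.default_rng(42)
d, n_features = 20, 50                      # 50 features squeezed into 20 dims
features = rng.normal(size=(n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)  # unit vectors

cos = features @ features.T                 # pairwise cosine similarities
np.fill_diagonal(cos, 0.0)                  # ignore each feature's self-similarity
interference = np.abs(cos).max()            # worst-case overlap between two features
print(f"max interference between distinct features: {interference:.2f}")
```

In the program you study this phenomenon properly, through toy models of superposition and mechanistic interpretability tooling such as TransformerLens.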
- RL fundamentals: value functions, policy gradients, Q-learning
- Vanilla Policy Gradient implementation
- Proximal Policy Optimization (PPO) from scratch
- Deep Q-Network (DQN) implementation
- Reinforcement Learning from Human Feedback (RLHF)
- Gym & Gymnasium environments and safety considerations
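To preview the RL module, here is a minimal REINFORCE (vanilla policy gradient) sketch on a two-armed bandit, the simplest setting where the policy-gradient update can be seen working. This is an illustrative toy, not the course implementation, which covers full PPO and DQN.

```python
import numpy as np

# Minimal REINFORCE on a 2-armed bandit: arm 1 pays more on average,
# so the softmax policy should learn to prefer it.
rng = np.random.default_rng(0)
means = np.array([0.0, 1.0])   # expected reward of each arm
theta = np.zeros(2)            # policy logits
lr = 0.1

for _ in range(2000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()                       # softmax policy over the two arms
    a = rng.choice(2, p=probs)                 # sample an action
    r = means[a] + rng.normal()                # noisy reward
    grad_log_pi = -probs                       # gradient of log softmax(theta)[a] ...
    grad_log_pi[a] += 1.0                      # ... is one_hot(a) - probs
    theta += lr * r * grad_log_pi              # REINFORCE update: r * grad log pi

probs = np.exp(theta - theta.max())
probs /= probs.sum()
print(f"P(arm 1) after training: {probs[1]:.2f}")
```

The same score-function gradient, with baselines, clipping, and neural policies added, is the backbone of PPO and of RLHF fine-tuning covered later in the module.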
- GPU optimization and efficient training techniques
- Distributed computing for machine learning
- Data parallelism, tensor parallelism, and pipeline parallelism
- Fine-tuning large language models efficiently
- Memory optimization and gradient checkpointing
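The core idea behind data parallelism can be shown in a few lines: each worker computes the gradient on its shard of the batch, and averaging the shard gradients (an all-reduce in a real distributed setup) recovers the exact full-batch gradient. This NumPy sketch simulates four workers on a linear model; the curriculum covers the real distributed machinery.

```python
import numpy as np

# Data parallelism in one picture: averaging per-shard gradients over
# equal-sized shards reproduces the full-batch gradient exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))            # full batch of 64 examples, 3 features
y = rng.normal(size=64)
w = rng.normal(size=3)                  # linear model parameters

def mse_grad(Xb, yb, w):
    """Gradient of mean squared error 0.5 * mean((Xb @ w - yb)^2) w.r.t. w."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = mse_grad(X, y, w)
shard_grads = [mse_grad(Xs, ys, w)      # one gradient per simulated worker
               for Xs, ys in zip(np.split(X, 4), np.split(y, 4))]
avg_grad = np.mean(shard_grads, axis=0)  # the "all-reduce" step

print(np.allclose(full_grad, avg_grad))  # True: same update as one big batch
```

Tensor and pipeline parallelism instead split the model itself across devices; the module covers all three schemes and when each one pays off.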
Choose one of these influential paper replications, or tackle a novel AI alignment problem:
- “Towards Monosemanticity” (Anthropic) – Feature extraction and visualization
- “Automated Circuit Discovery” – Mechanistic interpretability methods
- “How does GPT-2 Compute Greater-Than?” – Algorithmic reasoning in LLMs
- “Attribution Patching” – Causal intervention techniques
- Custom Project: Work on concrete AI alignment problems with mentor guidance
Program Structure
- Full-time commitment (40+ hours per week)
- Expert mentorship
- A cohort of highly motivated participants
- Hands-on research experience
What You’ll Gain
- Production ML Skills: Implement state-of-the-art models from scratch
- AI Safety Expertise: Deep understanding of alignment-relevant techniques
- Research Portfolio: Completed projects demonstrating your capabilities
- Network Access: Connections with AI safety researchers and practitioners
- Career Readiness: Skills needed for immediate contribution to AI alignment work
Career Pathways After Completion
- AI safety research positions at leading organizations
- PhD programs in AI safety or machine learning
- Alignment team positions at AI companies
- Independent research fellowships
- Starting AI safety organizations or initiatives
- Continued learning through our AI Alignment 102 program
📋 Apply for ML Skill-Up Bootcamp
Next cohort starts: October 2025 • Application deadline: September 1, 2025
Limited to 12 participants per cohort – early application recommended
Comprehensive Application
Math background, coding portfolio, motivation essay, and time commitment verification
Technical Assessment
Coding challenge and technical interview to assess readiness
Fit Interview
Discussion about commitment, goals, and program expectations
Cohort Selection
Join your intensive bootcamp cohort and begin your transformation
Questions?
Contact us at contact@rocaisc.org or schedule a call with our admissions team.
This intensive program is offered at no cost as part of our commitment to building the next generation of AI safety researchers.
