ML Bootcamp

Build production-ready ML skills tailored for AI alignment research. This free, intensive 8-week program trains mathematically minded students to become capable AI safety researchers.

📋 Who Should Apply

  • Background Required: Undergraduate mathematics + Python programming experience
  • Preferred (but not required): Data structures & algorithms experience
  • Time Commitment: Full-time availability for 8 weeks (40+ hours/week)
  • Mindset: Genuine commitment to AI safety career path
  • Motivation: Strong desire to contribute to AI alignment research

Note: This is an intensive, full-time bootcamp designed to rapidly upskill talented individuals for AI safety careers. Please ensure you can commit fully before applying.

💻 Technical Intensive

8 Weeks Full-Time • Hands-On Coding • Expert Mentorship
This intensive bootcamp rapidly transforms students with strong mathematical foundations into skilled AI safety researchers. Through hands-on implementation of cutting-edge techniques, you’ll gain the practical skills needed to contribute meaningfully to AI alignment research from day one.

Comprehensive 8-Week Curriculum

Inspired by the MLAB and ARENA curricula, our program covers:

🧠 Weeks 1-2: Deep Learning & Transformers
  • Neural network fundamentals with PyTorch from scratch
  • Convolutional Neural Networks (CNNs) implementation
  • Recurrent Neural Networks (RNNs) and LSTM architectures
  • Implement ResNet architecture from the ground up
  • Transformer architecture (“Attention is All You Need”) from scratch
  • GPT-2 implementation and fine-tuning
  • BERT implementation and fine-tuning
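To give a flavor of the from-scratch work in these weeks, here is a minimal sketch of scaled dot-product attention, the core operation of "Attention is All You Need". It is written in plain Python (no framework) with made-up toy values; the real curriculum implements this with PyTorch tensors and batching.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of vectors (lists of floats); toy sizes only.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# Toy example: two query positions attending over three key/value pairs.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(attention(Q, K, V))
```

Because the attention weights for each query sum to one, every output vector is a convex combination of the value vectors.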
🔍 Weeks 3-4: Interpretability & Mechanistic Analysis
  • Key concepts and importance of interpretability in AI safety
  • Model-agnostic interpretation methods (SHAP, LIME, etc.)
  • Visualizing and understanding Convolutional Neural Networks
  • Transformer interpretability techniques and tools
  • Adversarial robustness and its implications for safety
  • Mechanistic interpretability using TransformerLens
  • In-context learning and induction heads analysis
  • Superposition and feature disentanglement
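A toy sketch of the causal-intervention idea behind techniques like ablation and patching: knock out one component and measure how the output changes. The network and weights below are invented purely for illustration; the course applies the same idea to real transformer components with TransformerLens.

```python
def forward(x, w1, w2, ablate=None):
    """Tiny 2-layer ReLU net; optionally zero out one hidden unit (ablation)."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    if ablate is not None:
        hidden[ablate] = 0.0          # causal intervention: knock out one unit
    return sum(wi * hi for wi, hi in zip(w2, hidden))

# Hypothetical toy weights and input (illustration only).
w1 = [[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]]
w2 = [2.0, 1.0, -0.5]
x = [1.0, 0.5]

baseline = forward(x, w1, w2)
# Effect of each hidden unit = how much the output drops when it is ablated.
effects = {i: baseline - forward(x, w1, w2, ablate=i) for i in range(len(w1))}
print(effects)  # larger |effect| => that unit matters more for this input
```

A unit whose ablation leaves the output unchanged (effect near zero) is causally irrelevant for this input, which is the basic logic behind circuit-discovery methods.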
🎮 Week 5: Deep Reinforcement Learning
  • RL fundamentals: value functions, policy gradients, Q-learning
  • Vanilla Policy Gradient implementation
  • Proximal Policy Optimization (PPO) from scratch
  • Deep Q-Network (DQN) implementation
  • Reinforcement Learning from Human Feedback (RLHF)
  • Gym & Gymnasium environments and safety considerations
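As a taste of the RL fundamentals, here is tabular Q-learning on a toy chain environment (move left/right, reward at the right end), in plain stdlib Python. The environment and hyperparameters are invented for illustration; the course works with Gymnasium environments and deep function approximators.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain: actions 0=left, 1=right, reward 1 at the end."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bellman update toward r + gamma * max_a' Q(s', a').
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print([max(row) for row in Q])  # state values grow toward the rewarded end
```

The learned greedy policy moves right from every state, and values decay geometrically (by the discount factor) with distance from the reward.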
⚡ Week 6: Training at Scale
  • GPU optimization and efficient training techniques
  • Distributed computing for machine learning
  • Data parallelism, tensor parallelism, and pipeline parallelism
  • Fine-tuning large language models efficiently
  • Memory optimization and gradient checkpointing
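The core of data parallelism can be sketched in a few lines: each worker computes a gradient on its own data shard, the gradients are averaged (the all-reduce step), and every worker applies the same update. The one-parameter model and shards below are made up for illustration; in practice this is done with distributed frameworks across GPUs.

```python
def worker_gradient(shard, w):
    """Per-worker gradient of mean squared error on its shard (model: y = w*x)."""
    # d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(shards, w, lr=0.05):
    """One synchronous data-parallel SGD step: local gradients, then all-reduce."""
    grads = [worker_gradient(s, w) for s in shards]      # parallel in reality
    avg = sum(grads) / len(grads)                        # the all-reduce (average)
    return w - lr * avg

# Data following y = 3x, split across two hypothetical workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(shards, w)
print(round(w, 3))  # → 3.0, the true slope
```

Because every worker applies the identical averaged gradient, the model replicas stay in sync, which is what distinguishes data parallelism from tensor or pipeline parallelism (which split the model itself).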
🎯 Weeks 7-8: Capstone Research Project

Choose one of these influential paper replications or tackle novel AI alignment problems:

  • “Towards Monosemanticity” (Anthropic) – Feature extraction and visualization
  • “Automated Circuit Discovery” – Mechanistic interpretability methods
  • “How does GPT-2 Compute Greater-Than?” – Algorithmic reasoning in LLMs
  • “Attribution Patching” – Causal intervention techniques
  • Custom Project: Work on concrete AI alignment problems with mentor guidance

Program Structure

  • Duration: 8 weeks, full-time intensive (40+ hours per week)
  • Format: Hands-on coding with expert mentorship
  • Cohort Size: Small groups of 8-12 highly motivated participants
  • Outcome: Portfolio-ready projects + research experience

What You’ll Gain

  • Production ML Skills: Implement state-of-the-art models from scratch
  • AI Safety Expertise: Deep understanding of alignment-relevant techniques
  • Research Portfolio: Completed projects demonstrating your capabilities
  • Network Access: Connections with AI safety researchers and practitioners
  • Career Readiness: Skills needed for immediate contribution to AI alignment work

Career Pathways After Completion

  • AI safety research positions at leading organizations
  • PhD programs in AI safety or machine learning
  • Alignment team positions at AI companies
  • Independent research fellowships
  • Starting AI safety organizations or initiatives
  • Continued learning through our AI Alignment 102 program

📋 Apply for the ML Bootcamp

Next cohort starts: October 2025 • Application deadline: September 1, 2025

Limited to 12 participants per cohort – early application recommended

1. Comprehensive Application: Math background, coding portfolio, motivation essay, and time commitment verification
2. Technical Assessment: Coding challenge and technical interview to assess readiness
3. Fit Interview: Discussion of commitment, goals, and program expectations
4. Cohort Selection: Join your intensive bootcamp cohort and begin the program

Apply Now →

Program Schedule

  • 2026 Cohort: May – July
  • Application Deadlines: 3 weeks before each start date
  • Response Time: 1-2 weeks after final interview

Questions?

Contact us at contact@rocaisc.org or schedule a call with our admissions team.

This intensive program is offered at no cost as part of our commitment to building the next generation of AI safety researchers.