AI Safety Fundamentals Programme
Overview
The AI Safety Fundamentals programme is designed to make the fields of AI alignment and the governance of risks from advanced AI more accessible.
In this programme, we bring participants together with experts and knowledgeable facilitators to discuss 12 weeks of curated readings that together introduce the field. The programme includes introductory lectures and talks from guest speakers in the alignment space, and concludes with capstone projects: self-directed projects that help participants advance their relevant skills or knowledge, or begin contributing to the field.
There are two tracks available: Alignment and Governance. The Alignment Curriculum explores the problem of aligning the outputs of advanced AI with humanity's intentions, and proposed solutions to it. The Governance Curriculum explores questions about how AI interacts with other emerging technologies, global governance systems, AI politics, and more.
Programme goals
Target Audience
We accept a broad range of people for this programme. For example, past participants have included:
Students, academics or professionals who are motivated by AI safety arguments or other arguments about long-term risks from AI, and want to learn more about current technical and governance research/questions
People who may be interested in pursuing a career in ensuring future AI systems are beneficial for humanity
The Alignment (technical) track is most useful for people with technical backgrounds (e.g. maths, computer science, or another mathematical science). Note that the curriculum is intended to be accessible to those who aren't familiar with machine learning, and participants will be placed in groups with others from similar backgrounds.
The Governance track is designed to be accessible to people from a broader range of academic backgrounds.