October 10, 2024

We are publishing the second chapter of our textbook: Risks Landscape.

The second chapter in the fundamentals series explores the landscape of AI safety risks. Building upon the potential paths to AGI outlined in the first chapter, we map out how each of these paths gives rise to corresponding risks.

We decompose these risks into three main groups: Misuse, Misalignment, and Systemic Risks. We explain misuse risks such as bioterrorism, cyberterrorism, and warfare. We introduce the concept of (mis)alignment, discussing why it's a particularly challenging problem, with brief overviews of issues like specification gaming and goal misgeneralization. We also discuss systemic risks, including those arising from accidents, persuasion, power concentration, and epistemic erosion. Additionally, we explore factors that may exacerbate all these risks, such as indifference, race dynamics, unpredictability, and large-scale deployment.

The goal of this chapter is to provide a concrete, comprehensive overview of AI-related risks and their underlying factors. This foundation is crucial for understanding the governance and technical solutions discussed in later chapters, as each of these solutions aims to address one or more of the risks introduced here.

Read the chapter