Research Group
Safety- and Efficiency-aligned Learning

Investigating the feasibility of technical solutions to safety and security in machine learning.

The SEAL group is interested in questions of safety and efficiency in modern machine learning. These topics raise a number of fundamental machine learning questions that we still do not understand well. On the safety side, examples include the principles of data poisoning, the subtleties of watermarking for generative models, privacy in federated learning, and adversarial attacks against large language models. Can we ever make these models “safe”, and how do we define this? Are there feasible technical solutions that reduce harm? The group is further interested in the efficiency of modern AI systems, especially large language models. How efficient can we make these systems? Can we train strong models with little compute? Can we extend the capabilities of language models with recursive computation? And how do efficiency modifications impact the safety of these models?

People

Niccolò Ajroldi

  • Research Engineer
Johannes Bertram

  • Student Assistant
Albert Catalán Tatjer

  • Ph.D. Student
Shashwat Goel

  • Ph.D. Student
Xueyan Li

  • Ph.D. Student
Alexander Panfilov

  • Ph.D. Student
Guinan Su

  • Ph.D. Student