28 November 2024

From Tübingen to Vancouver: ELLIS Institute at NeurIPS 2024


Groundbreaking Research, Posters, and Our Booth!

The ELLIS Institute Tübingen will proudly showcase 2 oral presentations and 26 poster contributions at NeurIPS 2024 in Vancouver!

From December 11th to 13th, our scientists will present recent advances on topics ranging from measuring and benchmarking LLM alignment using survey data, formalizing causality with exchangeable data, and improving neural network architectures, to designing privacy backdoors in pre-trained models, interpretability at scale, and deep learning theory.

This year, we are also excited to host a booth at NeurIPS! If you’re attending, stop by to meet our team, learn more about our work, and say hello! Exhibit hours: Tuesday, Dec 10th from 12pm - 8pm // Wednesday, Dec 11th from 9am - 5pm // Thursday, Dec 12th from 9am - 4pm.

Below is a detailed schedule of our contributions, including times and session numbers for each presentation or poster. In the list, the bold text highlights the names of our PIs.

Most of our PIs will be at NeurIPS in person!

Oral Contributions

Thursday, Dec 12, 10:40 - Oral Session 3B
Questioning the Survey Responses of Large Language Models Ricardo Dominguez-Olmedo, Moritz Hardt, Celestine Mendler-Dünner
Friday, Dec 13, 10:20 - Oral Session 5B
Do Finetti: On Causal Effects for Exchangeable Data Siyuan Guo, Chi Zhang, Karthika Mohan, Ferenc Huszar, Bernhard Schölkopf

Poster Contributions

Wednesday, Dec 11, 11:00 - Poster Session 1
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs Abhimanyu Hans, Yuxin Wen, Neel Jain, John Kirchenbauer, Hamid Kazemi, Prajwal Singhania, Siddharth Singh, Gowthami Somepalli, Jonas Geiping, Abhinav Bhatele, Tom Goldstein
Drift-Resilient TabPFN: In-Context Learning Distribution Shifts on Tabular Data Kai Helli, David Schnurr, Noah Hollmann, Samuel Müller, Frank Hutter
HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models Rhea Sanjay Sukthanker, Arber Zela, Benedikt Staffler, Aaron Klein, Lennart Purucker, Jörg K.H. Franke, Frank Hutter
Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks Jerome Sieber, Carmen Amo Alonso, Alexandre Didier, Melanie Zeilinger, Antonio Orvieto
Cooperate or Collapse: Emergence of Sustainability in a Society of LLM Agents Giorgio Piatti, Zhijing Jin, Max Kleiman-Weiner, Bernhard Schölkopf, Mrinmaya Sachan, Rada Mihalcea
Wednesday, Dec 11, 16:30 - Poster Session 2
A Novel Approach to Loss Landscape Characterization without Over-Parametrization Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi
Super Consistency of Neural Network Landscapes and Learning Rate Transfer Lorenzo Noci, Alexandru Meterez, Thomas Hofmann, Antonio Orvieto
Thursday, Dec 12, 11:00 - Poster Session 3
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White
Theoretical Foundations of Deep Selective State-Space Models Nicola Muca Cirone, Antonio Orvieto, Benjamin Walker, Cristopher Salvi, Terry Lyons
Recurrent neural networks: vanishing and exploding gradients are not the end of the story Nicolas Zucchet, Antonio Orvieto
On Affine Homotopy between Language Encoders Robin Chan, Reda Boumasmoud, Anej Svete, Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Mennatallah El-Assady, Ryan Cotterell
Metrizing Weak Convergence with Maximum Mean Discrepancies Carl-Johann Simon-Gabriel, Alessandro Barp, Bernhard Schölkopf, Lester Mackey
From Causal to Concept-Based Representation Learning Goutham Rajendran, Simon Buchholz, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
Questioning the Survey Responses of Large Language Models Ricardo Dominguez-Olmedo, Moritz Hardt, Celestine Mendler-Dünner
Thursday, Dec 12, 16:30 - Poster Session 4
Causal vs. Anticausal merging of predictors Sergio Garrido Mejia, Patrick Blöbaum, Bernhard Schölkopf, Dominik Janzing
Limits of Transformer Language Models on Learning to Compose Algorithms Jonathan Thomm, Aleksandar Terzic, Giacomo Camposampiero, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi
Friday, Dec 13, 11:00 - Poster Session 5
Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists Joachim Baumann, Celestine Mendler-Dünner
Evaluating Language Models as Risk Scores Andre Cruz, Moritz Hardt, Celestine Mendler-Dünner
Transformers Can Do Arithmetic with the Right Embeddings Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, Tom Goldstein
Improving Deep Learning Performance through Constrained Parameter Regularization Jörg K.H. Franke, Michael Hefenbrock, Gregor Koehler, Frank Hutter
Measuring Per-Unit Interpretability at Scale Without Humans Roland S. Zimmermann, David Klindt, Wieland Brendel
Do Finetti: On Causal Effects for Exchangeable Data Siyuan Guo, Chi Zhang, Karthika Mohan, Ferenc Huszar, Bernhard Schölkopf
Friday, Dec 13, 16:30 - Poster Session 6
An engine not a camera: Measuring performative power of online search Celestine Mendler-Dünner, Gabriele Carovano, Moritz Hardt
CALVIN: Improved Contextual Video Captioning via Instruction Tuning Gowthami Somepalli, Arkabandhu Chowdhury, Jonas Geiping, Ronen Basri, Tom Goldstein, David W. Jacobs
Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts Anna Mészáros, Patrik Reizinger, Szilvia Ujváry, Wieland Brendel, Ferenc Huszar