NeurIPS 2024 Workshops (2024)

Fine-Tuning in Modern Machine Learning: Principles and Scalability

Workshop

Fanghui Liu · Grigorios Chrysos · Beidi Chen · Rebekka Burkholz · Saleh Soltan · Angeliki Giannou · Masashi Sugiyama · Volkan Cevher

[ East Exhibition Hall A ]

Abstract

This workshop aims to contribute to the recent radical paradigm shift toward fine-tuning in modern machine learning, theoretically, computationally, and at the systems level. It encourages researchers to push forward the frontiers of theoretical understanding of fine-tuning and to devise expeditious and resource-efficient inference and fine-tuning methods for machine learning systems, enabling their deployment within constrained computational resources.

AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design

Workshop

Santiago Miret · N M Anoop Krishnan · Marta Skreta · Stefano Martiniani · Geemi Wellawatte · George Karypis

[ West Meeting Room 211-214 ]

Abstract

The AI for Accelerated Materials Discovery (AI4Mat) Workshop at NeurIPS 2024 provides an inclusive and collaborative platform where AI researchers and materials scientists converge to tackle the cutting-edge challenges in AI-driven materials discovery and development. By taking a comprehensive look at automated materials discovery spanning AI-guided design, synthesis, and automated material characterization, we hope to create an opportunity for deep, thoughtful discussion among researchers working on these interdisciplinary topics, and to highlight ongoing challenges in the field. This year, AI4Mat will focus on two major themes:

Why Isn't it Real Yet? This discussion centers on why AI in materials science has not yet experienced the type of exponential growth seen in adjacent fields at the intersection of science and AI, such as large language models (LLMs), multi-modal AI, drug discovery, and computational biology.

AI4Mat Unique Challenges: Managing Multimodal, Incomplete Materials Data: A unique challenge in materials science is managing multimodal, incomplete data collected from diverse types of real-world equipment, including synthesis and characterization tools. Additionally, datasets and scientific understanding are often incomplete, given that fundamental physics and chemistry phenomena are sometimes unknown.

The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models

Workshop

Mehdi Rezagholizadeh · Peyman Passban · Yu Cheng · Soheila Samiee · Yue Dong · Vahid Partovi Nia · Qun Liu · Boxing Chen

[ West Meeting Room 301 ]

Abstract

The fourth edition of the Efficient Natural Language and Speech Processing (ENLSP-IV) workshop will focus on how to make large language and foundation models more efficient in terms of Architecture, Training, and Inference in their real-world applications. This year, following the trend of industry and academia, we put more emphasis on investigating new architectures to make future language and foundation models more efficient. Moreover, we highlight the importance of comprehensively evaluating and benchmarking new efficient models from different practical aspects. The workshop program offers an interactive platform for gathering experts and talents from academia and industry through invited talks, a panel discussion, paper submissions, reviews, interactive poster sessions, oral presentations, and a couple of mentorship sessions for new researchers. This will be a unique opportunity to discuss and share challenging problems, build connections, exchange ideas and brainstorm, and foster future collaborations. The topics of this workshop can be of interest to people working on general machine learning, deep learning, hardware, optimization, theory, and applications.

AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond

Workshop

Yixuan Yuan · Yao Qin · Xiang Li · Ying Wei · Bulat Ibragimov · Linda Petzold

[ East Ballroom A, B ]

Abstract

Towards next-generation medical analysis: unlocking the potential of medical foundation models for more explainable, robust, and secure diagnostic solutions. Workshop homepage: https://wymancv.github.io/AIM-FM-NeurIPS-Workshop/

Causality and Large Models

Workshop

Felix Leeb · Ching Lam Choi · Luigi Gresele · Josef Valvoda · Andrei Nicolicioiu · Xiusi Li · Patrik Reizinger · Louis-Pascal Xhonneux · Haoxuan Li · Mengyue Yang · Bernhard Schölkopf · Dhanya Sridhar

[ East Exhibition Hall C ]

Abstract

Our workshop aims to explore the synergies between causality and large models, also known as "foundation models," which have demonstrated remarkable capabilities across multiple modalities (text, images, audio, etc.). Despite their high performance, the opaque nature of these large models raises crucial questions regarding their trustworthiness, especially in safety-critical domains. A growing community of researchers is turning towards a more principled framework to address these concerns, better understand the behavior of large models, and improve their reliability: causality. Specifically, this workshop will focus on four directions: causality in large models, to assess their causal reasoning abilities; causality for improving large models; causality with large models, to enhance causal inference and discovery methods; and causality of large models, to understand and control their internal mechanisms. The invited speakers and panelists (almost all of whom have already been confirmed to attend) represent a diverse set of perspectives and expertise, across both academia and industry. The workshop is organized by a team of 12 members from six different institutions across North America, Europe, and Asia, ensuring diversity across research interests, backgrounds, and demographics. Visit our website: https://calm-workshop-2024.github.io/

Statistical Frontiers in LLMs and Foundation Models

Workshop

Anastasios Angelopoulos · Stephen Bates · Alexander D'Amour · Tatsunori Hashimoto · Jessica Hullman · Fanny Yang

[ West Ballroom A ]

Abstract

We propose a workshop on the emerging frontier at the intersection of statistics and foundation models. Rigorous evaluation of large foundation models such as LLMs is necessary for reliable deployment, but it poses a towering challenge due to a lack of datasets and the black-box nature of many such models. The proposed workshop brings together the community working on understanding and improving LLMs with new statistical methodologies, and explores topics including benchmarking, measuring and correcting bias, automatic evaluation, watermarking, model and data auditing, and uncertainty quantification.
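As one concrete instance of the statistical machinery in scope here, the sketch below implements split conformal prediction, a standard distribution-free uncertainty quantification recipe, on synthetic scores. The data, the 90% coverage target, and the score definition are illustrative assumptions, not artifacts of the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: nonconformity scores from held-out prompts,
# e.g. 1 - model confidence in the reference answer (illustrative only).
cal_scores = rng.beta(2, 5, size=1000)

alpha = 0.1  # target miscoverage: aim for 90% marginal coverage
n = len(cal_scores)

# Split-conformal threshold: the ceil((n+1)(1-alpha))/n empirical quantile.
q_level = np.ceil((n + 1) * (1 - alpha)) / n
qhat = np.quantile(cal_scores, q_level, method="higher")

# At test time, keep every candidate whose score falls below the threshold;
# under exchangeability the resulting sets cover the truth >= 90% of the time.
test_scores = rng.beta(2, 5, size=5)
print(qhat, test_scores <= qhat)
```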

3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

Workshop

Sijia Liu · Kathrin Grosse · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Yao Qin · Baharan Mirzasoleiman · Sanmi Koyejo · Yuguang Yao · Yihua Zhang

[ East Ballroom C ]

Abstract

Adversarial machine learning (AdvML), a discipline that delves into the interaction of machine learning (ML) with ‘adversarial’ elements, has embarked on a new era propelled by the ever-expanding capabilities of artificial intelligence (AI). This momentum has been fueled by recent technological breakthroughs in large multimodal models (LMMs), particularly those designed for vision and language applications. The 3rd AdvML-Frontiers workshop at NeurIPS’24 continues the success of its predecessors, AdvML-Frontiers’22-23, by delving into the dynamic intersection of AdvML and LMMs. The rapid evolution of LMMs presents both new challenges and opportunities for AdvML, which can be distilled into two primary categories: AdvML for LMMs and LMMs for AdvML. This year, in addition to continuing to advance AdvML across the full theory-algorithm-application stack, the workshop is dedicated to addressing the intricate issues that emerge from these converging fields, with a focus on adversarial threats, cross-modal vulnerabilities, defensive strategies, multimodal human/AI feedback, and the overarching implications for security, privacy, and ethics. Join us at AdvML-Frontiers'24 for a comprehensive exploration of adversarial learning at the intersection with cutting-edge multimodal technologies, setting the stage for future advancements in adversarial machine learning. The workshop also hosts the 2024 AdvML Rising Star Award.

Audio Imagination: NeurIPS 2024 Workshop AI-Driven Speech, Music, and Sound Generation

Workshop

Anurag Kumar · Zhaoheng Ni · Shinji Watanabe · Wenwu Wang · Yapeng Tian · Berrak Sisman

[ West Meeting Room 114, 115 ]

Abstract

Generative AI has been at the forefront of AI research in recent times. A large number of research works across different modalities (e.g., text, image, and audio) have shown remarkable generation capabilities. Audio generation brings its own unique challenges, and this workshop is aimed at highlighting these challenges and their solutions. It will bring together researchers working on different audio generation problems and enable concentrated discussion on the topic. The workshop will include invited talks, high-quality papers presented through oral and poster sessions, and a panel discussion including experts in the area to further enhance the quality of discussion on audio generation research. A crucial part of audio generation research is its perceptual experience by humans. To enable this, we also propose to have an onsite demo session during the workshop where presenters can showcase their audio generation methods and technologies, leading to a unique experience for all workshop participants.

Workshop on Responsibly Building Next Generation of Multimodal Foundation Models

Workshop

Maitreya Patel · Changhoon Kim · Siwon Kim · Chaowei Xiao · Zhe Gan · 'YZ' Yezhou Yang

[ West Meeting Room 217-219 ]

Abstract

The rapid evolution of multimodal foundation models, capable of processing and generating language, images, video, and audio, has transformed numerous fields, including robotics, healthcare, and AI-driven media. However, these advancements bring forth significant challenges related to reliability, security, and societal impact. Instances of model hallucinations and the inadvertent generation of harmful content by Text-to-Image (T2I) models underscore the need for responsible and sustainable development practices. Our workshop aims to address these critical issues by establishing design principles that prioritize precautionary measures over reactive solutions. We will explore methodologies to enhance the reliability and robustness of multimodal models, focusing on fairness, security, and the mitigation of misinformation. By emphasizing preemptive strategies during dataset curation and model pre-training, we aim to reduce the extensive resource demands traditionally associated with iterative refinement processes. Key topics of discussion will include the identification of reliability concerns stemming from data quality, model architecture, and training strategies. Additionally, we will explore novel design principles that ensure the responsible and sustainable advancement of multimodal generative models. Our goal is to foster a collaborative environment where leading researchers and practitioners can develop actionable frameworks that align with ethical standards and maximize societal benefits. Through keynote talks, panel discussions, and interactive sessions, this …

Algorithmic Fairness through the lens of Metrics and Evaluation

Workshop

Awa Dieng · Miriam Rateike · Jamelle Watson-Daniels · Golnoosh Farnadi · Nando Fioretto

[ West Meeting Room 111, 112 ]

Abstract

We are proposing the Algorithmic Fairness through the lens of Metrics and Evaluation (AFME) workshop, which is the fifth edition of this workshop series on algorithmic fairness. While previous editions have explored foundational work on causal approaches to fairness and the intersection of fairness with other fields of trustworthy machine learning, namely interpretability, robustness, privacy, and temporal aspects, this year's workshop aims to offer a timely reflection on fairness metric definitions and evaluation methods. Indeed, with rapid advances in large generative models and international regulatory efforts, as well as pertinent calls to understand fairness in context, it is crucial to revisit the suitability of existing fairness metrics and explore new bias evaluation frameworks. Our workshop aims to provide a venue for rigorous interdisciplinary discussions around these critical topics and to foster reflection on the necessity and challenges of defining adaptable fairness metrics and designing reliable evaluation techniques.

Topic: The discussion on defining and measuring algorithmic (un)fairness has predominantly been a focus in the early stages of algorithmic fairness research [Dwork et al., 2012, Zemel et al., 2013, Hardt et al., 2016, Zafar et al., 2017, Agarwal et al., 2018], resulting in four main fairness denominations: individual or group [Binns, 2020], statistical or causal [Makhlouf et al., 2023], equalizing or non-equalizing [Diana et al., 2021], and temporal …
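To make two of the metric families named above concrete, here is a minimal sketch computing a demographic-parity difference and an equalized-odds gap on synthetic predictions; all names and data are illustrative, and real evaluations would need the contextual care this workshop discusses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary predictions, ground truth, and a binary group attribute.
y_pred = rng.integers(0, 2, 10_000)
y_true = rng.integers(0, 2, 10_000)
group = rng.integers(0, 2, 10_000)

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|: a statistical group metric."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap in TPR or FPR across groups (Hardt et al., 2016)."""
    gaps = []
    for y in (0, 1):  # y=1 gives the TPR gap, y=0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

print(demographic_parity_diff(y_pred, group))
print(equalized_odds_gap(y_pred, y_true, group))
```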

Mathematics of Modern Machine Learning (M3L)

Workshop

Kaifeng Lyu · Bingbin Liu · Sadhika Malladi · Samory Kpotufe · Stefanie Jegelka · Tengyu Ma · Zhiyuan Li

[ East Meeting Room 1-3 ]

Abstract

This workshop explores theory for understanding and advancing modern ML practices, with a focus on mathematical models for empirical deep learning phenomena.

Generative AI and Creativity: A dialogue between machine learning researchers and creative professionals

Workshop

Y Cooper · Holden Lee · Hugo Berard

[ West Meeting Room 201 ]

Abstract

The transformative potential of generative AI will not be fully attained until AI researchers develop a deeper understanding of the creative process of human artists, and build constructive partnerships based on that understanding. This workshop is intended to foster such connections.

Pluralistic Alignment Workshop

Workshop

Mikhail Terekhov · Moksh Jain · Ruyuan Wan · Maarten Sap · Mitchell Gordon · Dongyeop Kang · Caglar Gulcehre · Amy Zhang · He He

[ West Meeting Room 116, 117 ]

Abstract

Aligning AI with human preferences and societal values is increasingly important. Yet, today’s AI alignment methods have been shown to be insufficient for capturing the vast space of complex – and often conflicting – real-world values. Our workshop will discuss how to integrate diverse perspectives, values, and expertise into pluralistic AI alignment. We aim to explore new methods for multi-objective alignment by drawing inspiration from governance and consensus-building practices to address conflicting values in pluralistic AI alignment. Discussion will include technical approaches for dataset collection, algorithm development, and the design of human-AI interaction workflows that reflect pluralistic values among diverse populations. By gathering experts from various fields, this workshop seeks to foster interdisciplinary collaboration and push the boundaries of the understanding, development, and practice of pluralistic AI alignment.

Symmetry and Geometry in Neural Representations

Workshop

Christian A Shewmake · Simone Azeglio · Bahareh Tolooshams · Sophia Sanborn · Nina Miolane

[ West Ballroom C ]

Abstract

In recent years, there has been a growing appreciation for the importance of respecting the topological, algebraic, or geometric structure of data in machine learning models. In parallel, an emerging set of findings in computational neuroscience suggests that the preservation of this kind of mathematical structure may be a fundamental principle of neural coding in biology. The goal of this workshop is to bring together researchers from applied mathematics and deep learning with neuroscientists whose work reveals the elegant implementation of mathematical structure in biological neural circuitry. Group theory and differential geometry were instrumental in unifying the models of 20th-century physics. Likewise, they have the potential to unify our understanding of how neural systems form useful representations of the world.
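An elementary instance of the structure-preservation discussed above: circular cross-correlation (the primitive behind convolutional layers) commutes with cyclic shifts. The check below is a self-contained illustration, not drawn from the workshop itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x, k = rng.normal(size=32), rng.normal(size=32)

def circ_corr(x, k):
    # Circular cross-correlation via the FFT: c[m] = sum_n x[n] * k[n - m].
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(k))))

shift = 5
lhs = circ_corr(np.roll(x, shift), k)   # shift the input, then correlate
rhs = np.roll(circ_corr(x, k), shift)   # correlate, then shift the output
print(np.allclose(lhs, rhs))  # True: the map is shift-equivariant
```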

Language Gamification

Workshop

Shangmin Guo · Yi Ren · Elle Michelle Yang · Mathieu Rita · Florian Strub

[ West Meeting Room 220-222 ]

Abstract

Ludwig Wittgenstein, in his seminal work "Philosophical Investigations", introduced the concept of "language games." This framework views language as an adaptive system where words acquire meaning through use, emphasizing its social and interactive nature. Research in cognitive science reinforces this notion, highlighting that genuine language acquisition thrives on dynamic and context-driven interactions. Language emergence simulations further demonstrate the critical role of language transmission within a population of agents in shaping modern languages. Game theory experiments showcase the superiority of interactive self-play loops compared to traditional imitation-based models. But... meanwhile... the core training paradigm in language processing remains purely based on supervised and preference losses, and it has barely changed in recent years. Moreover, certain limitations of LLMs, e.g., restricted planning abilities and insufficient personalization, suggest a potential deficiency in their training: the lack of interaction. Inspired by these observations, our workshop explores the concept of Language Gamification to enable interactive LLM finetuning at scale. This training paradigm encompasses interactive training or evaluation loops that enable LLMs to bootstrap and ground their language through multi-agent interactions. Following this definition, the workshop invites an exploration of Language Gamification through a diverse set of methodological perspectives and research backgrounds, offering a series of …

Workshop on Behavioral Machine Learning

Workshop

Keyon Vafa · Serina Chang · Katie Collins · Diag Davenport · Katy Gero · Jon Kleinberg · Ilia Sucholutsky · Kawin Ethayarajh

[ East Meeting Room 19, 20 ]

Abstract

Across many application areas, machine learning (ML) systems rely on human data. Yet these systems often leave unmodelled the psychological processes that generate human data, or abstract these rich mental processes into simple models. Fortunately, there's a field full of insights about human behavior: the behavioral sciences. However, these insights are often qualitative. Integrating them into machine learning systems requires converting them into computable models and designing machine learning systems to incorporate them. The goal of this workshop is to explore incorporating insights from the behavioral sciences into machine learning systems. Our workshop will focus on one specific question in this broad area: how can we incorporate behavioral insights into formal computable models? Translating behavioral insights into computable models would enable them to interact with ML systems: behavioral models can improve ML models, and vice-versa. We hope to bring together computer scientists across many subfields with behavioral scientists to drive progress in this interdisciplinary area.

Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Workshop

Mengye Ren · Paul Vicol · Naila Murray · Renjie Liao · Beidi Chen · Wei-Chiu Ma

[ West Exhibition Hall A ]

Abstract

In the rapidly evolving landscape of AI, the development of adaptive foundation models represents a ground-breaking shift towards AI systems that can continually learn, adapt, and evolve in response to new information, changing environments, and user preferences. This workshop aims to explore cutting-edge advancements in adaptive foundation models, focusing on methodologies that enable continual weight updates, memory-efficient fine-tuning, and personalized adaptation to diverse tasks and domains. We feature invited talks by experts in LLMs, diffusion models, multimodal learning, continual learning, and efficient ML to explore this interdisciplinary topic. We host workshop paper submissions and invite oral papers for contributed talks. In addition, there is a panel discussion with the invited speakers.

Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

Workshop

Alexander Terenin · Natalie Maus · Renato Berlinghieri · Zi Wang

[ East Meeting Room 8, 15 ]

Abstract

Recent advances in ML and AI have led to impressive achievements, yet models often struggle to express uncertainty and, more importantly, to make decisions that account for uncertainty. This hinders the deployment of AI models in critical applications, ranging from scientific discovery, where uncertainty quantification is essential, to real-world scenarios with unpredictable and dynamic environments, where models may encounter data vastly different from their training sets. Through the use of probability, Bayesian methods offer a powerful framework to address these limitations by quantifying uncertainty, incorporating prior knowledge, and enabling adaptive decision-making and information gathering in uncertain environments. These approaches have led to significant progress and success in relevant fields, tackling critical problems such as drug discovery, hyperparameter tuning, and environmental monitoring. However, challenges remain in both theory and practice, such as establishing performance guarantees and scaling these methods up to handle the complexity and dimensionality of larger data and models. On the other hand, the development of frontier models (e.g., based on large language models) presents new opportunities to enhance Bayesian methods with stronger priors and tools not previously available. This workshop aims to bring together researchers from different but closely related areas, including Bayesian optimization, active learning, uncertainty quantification, Gaussian processes, spatiotemporal modeling, …
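As a small, concrete example of uncertainty-aware decision-making in this family of methods, the following sketch evaluates the expected-improvement acquisition function from Bayesian optimization, assuming a Gaussian surrogate posterior at each candidate; the posterior values are synthetic placeholders.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """EI for minimization: E[max(best_f - f(x), 0)] with f(x) ~ N(mu, sigma^2)."""
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical posterior means/stds over 5 candidates from a GP surrogate.
mu = np.array([0.20, 0.00, -0.10, 0.30, 0.05])
sigma = np.array([0.05, 0.20, 0.02, 0.40, 0.10])
best_f = 0.0  # best objective value observed so far

ei = expected_improvement(mu, sigma, best_f)
print(ei, ei.argmax())  # query the candidate with the largest EI next
```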

GenAI for Health: Potential, Trust and Policy Compliance

Workshop

Junyuan Hong · Pranav Rajpurkar · Jason Fries · Marina Sirota · Ying Ding

[ East Meeting Room 16 ]

Abstract

Generative AI (GenAI) has emerged as a powerful tool that can revolutionize healthcare and medicine. Yet public trust in using GenAI for health is not well established, due to its potential vulnerabilities and insufficient compliance with health policies. The workshop aims to gather machine learning researchers and healthcare/medicine experts from both academia and industry to explore the transformative potential of GenAI for health. We will delve into the trustworthiness risks and mitigation of cutting-edge GenAI technologies applicable in health applications, such as large language models and multimodal large models. By fostering multidisciplinary communication with experts in government policies, this workshop seeks to advance the integration of GenAI in healthcare, ensuring safe, effective, ethical, and policy-compliant deployment to enhance patient outcomes and clinical research.

NeuroAI: Fusing Neuroscience and AI for Intelligent Solutions

Workshop

Forough Habibollahi · Moein Khajehnejad · Adeel Razi · Jason Eshraghian · Noor Sajid · Anthony M Zador · Yoshua Bengio

[ West Ballroom B ]

Abstract

Examining the fusion of neuroscience and AI, this workshop aims to unlock brain-inspired algorithms while advancing both biological and artificial intelligence.

Workshop on Video-Language Models

Workshop

Aiden Lee · Minjoon Seo · Sangdoo Yun · Sangho Lee · Jiasen Lu · Md Mohaiminul Islam · Yanbei Chen · Linjie Li

[ East Meeting Room 13 ]

Abstract

The growing relevance of video-language models in both academia and industry highlights the necessity for a dedicated workshop to address the unique challenges and opportunities this field presents. This workshop is designed to accelerate the development and practical application of video foundation models, which are crucial for interpreting and utilizing the extensive amounts of video data that make up a significant portion of global data. These models are increasingly vital for a range of applications, from video search and content creation to surveillance and robotics. Confirmed speakers are leading researchers in this field from UT Austin, University of Tübingen, and University of Bristol (Tentative), as well as prominent industry figures from Meta, Google DeepMind, and Microsoft, ensuring a rich exchange of knowledge. The diverse organizing team from universities, industry, and non-profit research institutes aims to foster broad participation and collaboration. This workshop aims to push the boundaries of video-language models, ensuring their development and deployment are ethical and responsible. It will serve as a platform for sharing knowledge, fostering collaborations, and setting future research directions in this rapidly advancing field.

MATH-AI: The 4th Workshop on Mathematical Reasoning and AI

Workshop

Alex Gu · Gabriel Poesia · Cedegao (Ced) Zhang · Hattie Zhou · Pan Lu · Swaroop Mishra · Kai-Wei Chang · Armando Solar-Lezama

[ West Meeting Room 118-120 ]

Abstract

Mathematical reasoning is a fundamental aspect of human cognition that has been studied by scholars ranging from philosophers to cognitive scientists and neuroscientists. Mathematical reasoning involves analyzing complex information, identifying patterns and relationships, and drawing logical conclusions from evidence. It is central to many applications in science, engineering, finance, and everyday contexts. Recent advancements in large language models (LLMs) have unlocked new opportunities at the intersection of artificial intelligence and mathematical reasoning, ranging from new methods that solve complex problems or prove theorems, to new forms of human-machine collaboration in mathematics and beyond. Our proposed workshop is centered on the intersection of deep learning and mathematical reasoning, with an emphasis on, but not limited to, large language models. Our guiding theme is: "To what extent can machine learning models comprehend mathematics, and what applications could arise from this capability?"

Socially Responsible Language Modelling Research (SoLaR)

Workshop

Usman Anwar · David Krueger · Yejin Choi · Maarten Sap · Alan Chan · Yawen Duan · Robert Kirk · Xin Cynthia Chen · Abulhair Saparov · Kayo Yin · Liwei Jiang · Valentina Pyatkin

[ West Meeting Room 121, 122 ]

Abstract

The NeurIPS 2024 workshop on Socially Responsible Language Modelling Research (SoLaR) has two goals: (a) to highlight novel and important research directions in responsible LM research across various sub-communities, and (b) to promote interdisciplinary collaboration and dialogue on socially responsible LM research across communities, for example between i) the AI safety and FATE (fairness, accountability, transparency, and ethics) communities, and ii) the technical and policy communities. To achieve these goals, we have assembled a diverse line-up of speakers who will talk about LM research in the context of governance, ethics, fairness, safety, and alignment. We will also hold a panel on whether or not it is socially responsible to continue the pursuit of AGI-like, more capable and more general-purpose LMs; an extremely timely topic considering that multiple leading AI labs are explicitly focusing on achieving this goal.

Table Representation Learning Workshop (TRL)

Workshop

Madelon Hulsebos · Haoyu Dong · Laurel Orr · Qian Liu · Vadim Borisov

[ East Meeting Room 11, 12 ]

Abstract

Tables are a promising modality for representation learning and generative models, with too much application potential to ignore. However, tables have long been overlooked despite their dominant presence in the data landscape, e.g., in data management, data analysis, and ML pipelines. The majority of datasets in Google Dataset Search, for example, resemble typical tabular file formats like CSVs. Similarly, the top three most-used database management systems are all intended for relational data. Representation learning for tables, possibly combined with other modalities such as code and text, has shown impressive performance for tasks like semantic parsing, question answering, table understanding, data preparation, and data analysis (e.g., text-to-SQL). The pre-training paradigm has also been shown to be effective for tabular ML (classification/regression). More recently, we also observe promising potential in applying and enhancing generative models (e.g., LLMs) in the domain of structured data to improve how we process and derive insights from structured data.

The Table Representation Learning workshop has been the key venue driving this research vision and establishing a community around TRL. The goal of the third edition of TRL at NeurIPS 2024 is to:
1) showcase the latest impactful TRL research, with a particular focus on industry insights this year,
2) …

UniReps: Unifying Representations in Neural Models

Workshop

Marco Fumero · Zorah Lähner · Luca Moschella · Clémentine Dominé · Donato Crisostomi · Kimberly Stachenfeld

[ East Exhibition Hall B, C ]

Abstract

Neural models tend to learn similar representations when subject to similar stimuli; this behavior has been observed in both biological and artificial settings. The emergence of these similar representations is igniting a growing interest in the fields of neuroscience and artificial intelligence. To gain a theoretical understanding of this phenomenon, promising directions include analyzing the learning dynamics and studying the problem of identifiability in the functional and parameter space. This has strong consequences for unlocking a plethora of applications in ML, from model fusion and model stitching to model reuse, and for improving our understanding of biological and artificial neural models, including large pretrained foundation models. The objective of the workshop is to discuss theoretical findings, empirical evidence, and practical applications of this phenomenon, benefiting from the cross-pollination of different fields (ML, neuroscience, cognitive science) to foster the exchange of ideas and encourage collaborations. Overall, the questions we aim to investigate are when, why, and how internal representations of distinct neural models can be unified into a common representation.
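One common way such cross-model similarity is quantified is linear centered kernel alignment (CKA; Kornblith et al., 2019). The sketch below runs it on random features and a linear re-encoding of them; everything here is illustrative rather than a result.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n x d1) and Y (n x d2)
    recorded on the same n stimuli."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))        # activations of "model A"
Y = X @ rng.normal(size=(64, 32))     # a linear re-encoding of the same code
print(linear_cka(X, X))  # exactly 1: identical representations
print(linear_cka(X, Y))  # high: same information in a different basis
```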

Attributing Model Behavior at Scale (ATTRIB)

Workshop

Elisa Nguyen · Sadhika Malladi · Andrew Ilyas · Logan Engstrom · Sam Park · Tolga Bolukbasi

[ West Meeting Room 205-207 ]

Abstract

Recently-developed algorithmic innovations (e.g., transformers, diffusion models, state-space models) and large-scale datasets (e.g., Common Crawl, LAION) have given rise to machine learning models with impressive capabilities. As the cost of training such large models grows, and as systems based on them are used widely, it is increasingly important to understand how different design choices combine to induce observed behaviors. For example, we still do not fully understand how the composition of training datasets influences model behavior (e.g., how does training on code data affect reasoning capabilities in other domains?), how to attribute capabilities to subcomponents (e.g., can we identify which subnetwork of an LLM implements addition?), and which algorithmic choices really drive performance (e.g., how can we best align models to human preferences?). Behavioral attribution is also important in light of recent concerns about harmful model behavior, and several works suggest that these behaviors can be attributed to training data or model architecture and size. The core challenge in all of these questions is that of model behavior attribution: that is, the question of relating model behavior back to factors in the machine learning pipeline---such as the choice of training dataset or particular training algorithm---that produced this model. This workshop …

Workshop on Scalable Continual Learning for Lifelong Foundation Models

Workshop

Beyza Ermis · Arslan Chaudhry · Çağatay Yıldız · Rahaf Aljundi · Bo Liu

[ West Meeting Room 109, 110 ]

Abstract

In the past, continual learning (CL) was often overlooked as problems solvable by CL were efficiently addressed by offline learning, where resource efficiency wasn't a significant bottleneck. However, with the advent of foundation models, the landscape has shifted. For the pursuit of increasingly general intelligence, current foundation models are fundamentally limited by their training on static data, leading to outdated encoded information, saturation in knowledge accumulation, and wasteful use of compute resources. The increasing size of ML models puts ever more emphasis on scalable CL, as even fine-tuning large models is becoming increasingly resource-intensive and time-consuming. CL now emerges as a crucial framework in this new era, essential for dealing with the evolving scale and complexity of ML models. Yet, even the most recent methods in CL fall short of effectively addressing the challenges posed by the current data and compute scales. At this workshop, we discuss recent advances in scalable CL that could potentially replace static foundation model training, enabling us to model dynamic real-world information. Our workshop aims to bring together experts and researchers from various domains, including language, vision, speech, and multimodal AI to exchange ideas and foster collaboration. We are committed to advancing the development of …

5th Workshop on Self-Supervised Learning: Theory and Practice

Workshop

XuDong Wang · Ishan Misra · Mathilde Caron · Tengda Han · Pengtao Xie

[ West Meeting Room 202-204 ]

Abstract

At NeurIPS from 2020 to 2023, we successfully organized the 1st, 2nd, 3rd, and 4th workshops on Self-Supervised Learning – Theory and Practice. These events attracted a diverse audience from multiple domains, including vision, speech, NLP, robotics, ML theory, and industry practitioners. Building on the success of these previous workshops, we are excited to continue organizing the workshop on self-supervised learning this year. Self-supervised learning (SSL) is an approach to representation learning that does not rely on human-labeled data. Instead, it creates auxiliary tasks from unlabeled input data and learns representations by solving these tasks. SSL has shown significant success across various domains such as images (e.g., MAE, DINO, MoCo, PIRL, SimCLR), speech (e.g., wav2vec, Whisper), and text (e.g., BERT, GPT, Llama). It has also demonstrated promising results in other data modalities including graphs, time-series, and audio. Recent large language models—predominantly trained on web-scale data using self-supervised methods—have exhibited remarkable generalizability and are beginning to transform numerous research fields. SSL, without using human-provided labels, can achieve performance comparable to or even surpassing that of fully supervised methods. Furthermore, generative SSL techniques such as Imagen, Stable Diffusion, and SORA have significantly enhanced the artistic capabilities of AI models. Existing research on …
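As a pointer to what these auxiliary tasks look like in code, here is a deliberately simplified, one-sided variant of the InfoNCE/NT-Xent contrastive objective behind methods like SimCLR, run on random embeddings (so the loss should sit near log N); the batch size and temperature are illustrative choices.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1[i] and z2[i] are embeddings of two augmented views of example i;
    each row's positive is the matching row of the other view."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau              # (N, N) cosine-similarity logits
    targets = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(info_nce(z1, z2))  # random features -> roughly log(256) ~ 5.5
```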

Safe Generative AI

Workshop

Dianbo Liu · Ling Pan · Tailin Wu · Bonaventure F. P. Dossou · Emmanuel Bengio · Yilun Du · Dinghuai Zhang · Yoshua Bengio

[ East Exhibition Hall A ]

Abstract

In the past two years, generative AI has been the major driving force behind the development of advanced AI products such as ChatGPT-4, AlphaFold, and Stable Diffusion. These technologies, while significantly improving productivity for many, have raised significant safety concerns. However, there has been no workshop focusing on this topic in the past two years. This workshop, emphasizing AI safety concerns related to the use of generative AI, is much needed by the community. Generative AI, including large language models, vision-language models, diffusion models, and many more, has significantly aided various aspects of both academia and industry. In scientific discovery, these aspects encompass experimental design, hypothesis formulation, theoretical reasoning, and observation organization. In commercial applications, generative models such as large language models and diffusion algorithms have changed the lifestyles and workflows of billions around the world. This workshop aims to convene experts from various fields to address these challenges and explore potential solutions.

Red Teaming GenAI: What Can We Learn from Adversaries?

Workshop

Valeriia Cherepanova · Bo Li · Niv Cohen · Yifei Wang · Yisen Wang · Avital Shafran · Nil-Jana Akpinar · James Zou

[ West Meeting Room 301 ]

Abstract

The development and proliferation of modern generative AI models have introduced valuable capabilities, but these models and their applications also introduce risks to human safety. How do we identify risks in new systems before they cause harm during deployment? This workshop focuses on red teaming, an emergent adversarial approach to probing model behaviors, and its applications towards making modern generative AI safe for humans.

2nd Workshop on Touch Processing: From Data to Knowledge

Workshop

Roberto Calandra · Haozhi Qi · Perla Maiolino · Mike Lambeta · Yasemin Bekiroglu · Jitendra Malik

[ West Meeting Room 111, 112 ]

Abstract

Touch is a crucial sensor modality for both humans and robots, as it allows us to directly sense object properties and interactions with the environment. Recently, touch sensing has become more prevalent in robotic systems, thanks to the increased accessibility of inexpensive, reliable, and high-resolution tactile sensors and skins. Just as the widespread availability of digital cameras accelerated the development of computer vision, we believe that we are rapidly approaching a new era of computational science dedicated to touch processing. We believe that AI/ML will play a critical role in successfully processing touch as a sensing modality. However, this raises important questions regarding which computational models are best suited to leverage the unique structure of touch, similar to how convolutional neural networks leverage spatial structure in images. The development and advancement of touch processing will greatly benefit a wide range of fields, including tactile and haptic use cases. For instance, advancements in tactile processing (from the environment to the system) will enable robotic applications in unstructured environments, such as agricultural robotics and telemedicine. Understanding touch will also facilitate providing sensory feedback to amputees through sensorized prostheses and enhance future AR/VR systems.

System-2 Reasoning at Scale

Workshop

Shikhar Murty · Federico Bianchi · Róbert Csordás · Nouha Dziri · Alex Gu · Shunyu Yao · Christopher D Manning · Yejin Choi

[ West Ballroom B ]

Abstract

Our workshop focuses on improving reasoning in neural networks, particularly the challenges and strategies for achieving System-2 reasoning in transformer-like models. The workshop addresses issues like distinguishing memorization from rule-based learning, understanding syntactic generalization, and compositionality. The workshop also covers the importance of understanding how systematic models are in their decisions for AI safety, integrating neural networks with symbolic reasoning, and developing new architectures for enhanced reasoning capabilities. We have (tentatively) confirmed a distinguished group of speakers and panelists who are some of the most influential figures in the recent literature on reasoning. Considering how important these topics are today and our distinguished lineup of speakers, we expect more than 500 participants at the workshop.

Towards Safe & Trustworthy Agents

Workshop

Alexander Pan · Kimin Lee · Bo Li · Karthik Narasimhan · Dawn Song · Isabelle Barrass

[ West Ballroom C ]

Abstract

Foundation models are increasingly being augmented with new modalities and access to a variety of tools and software. Systems that can take action in a more autonomous manner have been created by assembling agent architectures or scaffolds that include basic forms of planning and memory or multi-agent architectures. As these systems are made more agentic, this could unlock a wider range of beneficial use-cases, but also introduces new challenges in ensuring that such systems are trustworthy. Interactions between different autonomous systems create a further set of issues around multi-agent safety. The scope and complexity of potential impacts from agentic systems means that there is a need for proactive approaches to identifying and managing their risks. Our workshop will surface and operationalize these questions into concrete research agendas.

Compositional Learning: Perspectives, Methods, and Paths Forward

Workshop

Ying Wei · Jonathan Richard Schwarz · Yilun Du · Laurent Charlin · Mengye Ren · Matthias Bethge

[ West Meeting Room 118-120 ]

Abstract

Compositional learning, inspired by the human ability to derive complex ideas from simpler constituents, seeks to equip machines with analogous capabilities for understanding, reasoning, and adaptive learning. This methodology bolsters machines' ability to generalize to out-of-distribution samples through the recombination of learned components, proving effective across diverse tasks such as machine translation, visual reasoning, image generation, reinforcement learning, and more. Despite notable advancements, persistent challenges remain in achieving robust compositional generalization and reasoning within even the most advanced foundation models. Our workshop aims to discuss these challenges as well as untapped opportunities ahead from the following four aspects: exploring the capacity for compositionality in foundation models and dissecting the underlying mechanisms of their compositional learning; devising reliable and model-agnostic strategies for constructing compositional systems; establishing theoretical and empirical connections between modular architectures and compositional generalization; and extending compositional learning principles to continual learning contexts. By confronting these themes, we aim to foster a collaborative exploration of theoretical and empirical dimensions of compositional learning, thus advancing understanding and practical applications of compositional learning.

The First Workshop on Large Foundation Models for Educational Assessment

Workshop

Sheng Li · Zhongmin Cui · Jiasen Lu · Deborah Harris · Dongliang Guo · Daiqing Qi

[ East Meeting Room 19, 20 ]

Abstract

Advanced generative artificial intelligence (AI) techniques, such as large language models and large multimodal models, are transforming many aspects of educational assessment. The integration of AI into education has the potential to revolutionize not only test development and evaluation but also the way students can learn. Over the past years, successful adoptions of machine learning in this area include using natural language processing for automated scoring and applying collaborative filtering to predict student responses. The rapid advances of large foundation models (e.g., ChatGPT, GPT-4, Llama, Gemini) demonstrate the potential of intelligent assessment with data-driven AI systems. These models could potentially benefit test construct identification, automatic item generation, multimodal item design, automated scoring, and assessment administration. Meanwhile, new research challenges arise at the intersection of AI and educational assessment. For instance, the explainability and accountability of current large foundation models are still inadequate to convince the stakeholders in the educational ecosystem, which limits the adoption of AI techniques in large-scale assessments. Also, it is still unclear whether large foundation models are capable of assisting with complex assessment tasks that involve creative thinking or high-order reasoning. Tackling these research challenges would require collaborative efforts from researchers and practitioners in both …

International Workshop on Federated Foundation Models in Conjunction with NeurIPS 2024 (FL@FM-NeurIPS'24)

Workshop

Sai Praneeth Karimireddy · Xiaoxiao Li · Songtao Lu · Stacy Patterson · Pascal Poupart · Han Yu

[ East Meeting Room 8, 15 ]

Abstract

The rise of foundation models (FMs) amplifies the importance and relevance of federated learning (FL) as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data. With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum to communicate problems that are practically relevant. For academic participants, we hope to make it easier to become productive …

Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Workshop

Jiaqi Ma · Chirag Agarwal · Sarah Tan · Himabindu Lakkaraju · Usha Bhalla · Zana Bucinca · Junwei Deng · Alex Oesterling · Shichang Zhang

[ East Meeting Room 1-3 ]

Abstract

This workshop brings together ML and policy experts to identify and address various technical, policy, and fair use challenges that arise when regulating ML models.

Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI

Workshop

Avijit Ghosh · Usman Gohar · Yacine Jernite · Lucie-Aimée Kaffee · Alberto Lusoli · Jennifer Mickel · Irene Solaiman · Arjun Subramonian · Zeerak Talat

[ East Meeting Room 16 ]

Abstract

Generative AI systems are becoming increasingly prevalent in society across modalities, producing content such as text, images, audio, and video, with far-reaching implications. The NeurIPS Broader Impact statement has notably shifted norms for AI publications to consider negative societal impact. However, no standard exists for how to approach these impact assessments. While new methods for evaluation of social impact are being developed, including notably through the NeurIPS Datasets and Benchmarks track, the lack of a standard for documenting their applicability, utility, and disparate coverage of different social impact categories stands in the way of broad adoption by developers and researchers of generative AI systems. By bringing together experts on the science and context of evaluation and practitioners who develop and analyze technical systems, we aim to help address this issue through the work of the NeurIPS community.

Interpretable AI: Past, Present and Future

Workshop

Suraj Srinivas · Michal Moshkovitz · Chhavi Yadav · Lesia Semenova · Nave Frost · Vinayak Abrol · Bitya Neuhof · Valentyn Boreiko · Dotan Di Castro · Himabindu Lakkaraju · Kamalika Chaudhuri

[ East Ballroom A, B ]

Abstract

This workshop is the second in a series focused on interpretability and explainability. The first workshop, titled "XAI in Action: Past, Present, and Future Applications," was held at NeurIPS 2023. In this edition, we aim to bridge classical interpretability and modern methods for foundation models. We retain the core organizing team from the previous workshop while welcoming three new members. Additionally, we have introduced research roundtables to support community building.

Optimization for ML Workshop

Workshop

Jelena Diakonikolas · Dan Garber · Cristóbal Guzmán · Courtney Paquette · Sebastian Stich

[ West Ballroom A ]

Abstract

Optimization lies at the heart of many machine learning algorithms and enjoys great interest in our community. Indeed, this intimate relation of optimization with ML is the key motivation for the OPT series of workshops. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in optimization relevant to ML.

The focus of OPT 2024 is on "Scaling up optimization". The advent of large language models (LLMs) has changed our perceptions of the landscape of optimization and is resulting in the emergence of new interesting questions related to scaling. For instance, we can view optimization as a sequence of problems parameterized by the size of the model. Questions naturally arise around scaling and optimization. Are there natural model-size-dependent learning rates that allow extrapolation from smaller models to large ones, thereby facilitating fine-tuning? Or given a fixed compute budget, how should one choose the hyper-parameters of the model (e.g., width, depth, architecture, batch size) so as to minimize the loss function? How dependent are these scaling laws on the optimization algorithm? Answers to these questions would have a huge impact in AI – saving time and millions of dollars in training, plus helping reduce AI’s environmental …
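To illustrate the flavor of these questions, the sketch below allocates a fixed training budget using the common approximation C ≈ 6·N·D (N parameters, D tokens) and a Chinchilla-style compute-optimal rule of thumb N* ∝ C^0.5; the proportionality constant is a made-up assumption, not a fitted value.

```python
# Illustrative only: splitting a compute budget C between model size N
# (parameters) and data D (tokens) under C ~= 6 * N * D.

def compute_optimal_split(C, k=0.09):
    """k is a hypothetical constant in the rule of thumb N* = k * C**0.5."""
    N = k * C ** 0.5      # compute-optimal parameter count
    D = C / (6 * N)       # tokens implied by the budget identity
    return N, D

for C in (1e21, 1e23, 1e25):
    N, D = compute_optimal_split(C)
    print(f"C={C:.0e}: N~{N:.2e} params, D~{D:.2e} tokens, D/N~{D / N:.0f}")
```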

AI for New Drug Modalities

Workshop

Masatoshi Uehara · Su-In Lee · Mengdi Wang · Le Song · Nicola Richmond · Xiner Li · Maria Brbic · Mohammad Lotfollahi · Ehsan Hajiramezanali

[ West Meeting Room 109, 110 ]

Abstract

The primary objective of this workshop is to bridge the gap between AI and emerging drug modalities, such as gene and cell therapies, and RNA-based drugs. These modalities are important for disease mechanisms that were previously considered difficult, if not impossible, to target. They offer novel mechanisms of action, target previously undruggable proteins, and can address unmet medical needs with higher precision and efficacy. Traditional modalities such as small-molecule drugs, recombinant proteins, and monoclonal antibodies have formed the basis of numerous existing treatments. Recently, there has been growing recognition that AI methods have the potential to expedite breakthroughs in these established modalities. This same potential extends to the emerging field of new drug modalities, where AI can play a transformative role in accelerating the discovery and development process, overcoming challenges, and ultimately bringing innovative therapies to patients more efficiently and effectively. We aim to bring specialists in these modalities and ML communities together to discuss and investigate how AI can accelerate drug development in these emerging modalities.

Intrinsically Motivated Open-ended Learning (IMOL)

Workshop

Cédric Colas · Nadia Ady · Junyi Chu · Cansu Sancaktar · Laetitia Teodorescu · Guy Davidson · Gaia Molinaro

[ West Meeting Room 217-219 ]

Abstract

The Intrinsically Motivated Open-ended Learning workshop (IMOL) will gather researchers interested in the autonomous development of open-ended repertoires of knowledge and skills in intrinsically motivated humans and machines.

Foundation Model Interventions

Workshop

Pau Rodriguez · Arno Blaas · Desi R Ivanova · Sahra Ghalebikesabi · Yuki M Asano · Katherine Metcalf · Xavier Suau

[ West Meeting Room 121, 122 ]

Abstract

The increasing capabilities of foundation models have raised concerns about their potential to generate undesirable content, perpetuate biases, and promote harmful behaviors. To address these issues, we propose a workshop that focuses on understanding the inner workings of foundation models and identifying actionable mechanisms involved in generation. Recent studies have shown promise in directly intervening on model activations or a low-rank subset of the weights to provide fine-grained control over model generation to mitigate the generation of harmful and toxic content. This workshop aims to bring together researchers to explore methods for improving the controllability of foundation models and developing a better understanding of their behavior and potential misuse.
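A minimal sketch of the activation-level interventions mentioned above, using a PyTorch forward hook on a toy module; in practice the hook would target a real model's residual stream, and the steering vector here is a random stand-in for a learned concept direction.

```python
import torch
import torch.nn as nn

layer = nn.Linear(16, 16)                 # toy stand-in for a transformer block
steering_vector = torch.randn(16) * 0.1   # hypothetical concept direction

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output.
    return output + steering_vector

handle = layer.register_forward_hook(steer)
x = torch.randn(4, 16)
steered = layer(x)   # intervention applied
handle.remove()
plain = layer(x)     # intervention removed
print((steered - plain).norm())  # nonzero: activations were steered
```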

D3S3: Data-driven and Differentiable Simulations, Surrogates, and Solvers

Workshop

Tribhuvanesh Orekondy · Arash Behboodi · Emilia Magnani · Philipp Hennig · Paris Perdikaris · Kamyar Azizzadenesheli

[ West Meeting Room 116, 117 ]

Abstract

Advancing machine learning research to aid simulations (e.g., improve accuracy, speed) in various domains (e.g., physics, graphics).

NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences

Workshop

Siddharth Mishra-Sharma · Nicole Hartman · Vinicius Mikuni · Mariel Pettee · Sebastian Wagner-Carena · Antoine Wehenkel · Kyle Cranmer · Savannah Thais · Benjamin Nachman · Brian Nord

[ East Exhibition Hall B, C ]

Abstract

Physical sciences and machine learning: more than the sum of their parts. Join us to discuss research at the convergence of these fields!

Foundation Models for Science: Progress, Opportunities, and Challenges

Workshop

Wuyang Chen · Pu Ren · Elena Massara · Yongji Wang · N. Benjamin Erichson · Laurence Perreault-Levasseur · Bo Li · Swarat Chaudhuri

[ West Meeting Room 202-204 ]

Abstract

The integration of artificial intelligence (AI) and machine learning (ML) into scientific discovery represents a pivotal shift in traditional methodologies. Historically, scientific exploration has been systematic and logical, but AI and ML promise to transform fundamental discoveries. This shift enhances interdisciplinary dialogue and stimulates innovative problem-solving, enriching the scientific community's ability to tackle complex problems. Foundation models, such as GPT-3 and CLIP, have revolutionized computer vision and natural language processing, providing versatile, pre-trained bases for various applications. Leveraging these models addresses critical challenges like long-term planning and multi-modal reasoning, essential for applications in robotics and dialogue systems. The integration of AI-for-Science and foundation models offers a transformative force in scientific domains, solving complex problems and enabling domain-specific adaptations. This synergy is poised to radically improve the modeling of complex phenomena, making it a crucial investment for future scientific advancements. This workshop aims to bring together experts to discuss and collaborate on transformative questions and challenges in advancing scientific problems through foundation models.

Multimodal Algorithmic Reasoning Workshop

Workshop

Anoop Cherian · Kuan-Chuan Peng · Suhas Lohit · Honglu Zhou · Kevin Smith · Tim Marks · Juan Carlos Niebles · Petar Veličković

[ West Exhibition Hall A ]

Abstract

In this workshop, we plan to gather researchers working in neural algorithmic learning, multimodal reasoning, and cognitive models of intelligence to showcase their cutting-edge research, discuss the latest challenges, as well as bring to the forefront problems in perception and language modeling that are often overlooked but are pivotal in achieving true artificial general intelligence. An emphasis of this workshop is on the emerging topic of multimodal algorithmic reasoning, where a reasoning agent is required to automatically deduce new algorithms/procedures for solving real-world tasks, e.g., algorithms that use multimodal foundational models for analysis, synthesis, and planning, new approaches towards solving challenging vision-and-language mathematical (Olympiad type) reasoning problems, deriving winning strategies in multimodal games, procedures for using tools in robotic manipulation, etc. We hope to deep dive into this exciting topic at the intersection of multimodal learning and cognitive science to understand what we have achieved thus far in machine intelligence and what we are lacking in relation to the human way of thinking -- through talks from outstanding researchers and faculty that could inspire the audience to search for the missing rungs on the ladder to true intelligence.

Time Series in the Age of Large Models

Workshop

Arjun Ashok · Imry Kissos · Kashif Rasul · Abdul Fatir Ansari · Moshe Unger · Pedro Mercado · Laurent Callot · Stephan Johannes

[ West Meeting Room 220-222 ]

Abstract

This workshop will delve into aspects of time series prediction and analysis in the age of large models, focusing on the topics of building foundation models for time series, leveraging pretrained models of other modalities for time series, multimodal time series models and time series evaluation and real-world applications.

ML with New Compute Paradigms

Workshop

Jannes Gladrow · Babak Rahmani · Julie Grollier · Peter McMahon · Ruqi Zhang · Jack Kendall

[ West Meeting Room 114, 115 ]

Abstract

Digital computing is approaching fundamental limits and faces serious challenges in terms of scalability, performance, and sustainability. At the same time, generative AI is fuelling an explosion in compute demand. There is, thus, a growing need to explore non-traditional computing paradigms, such as (opto-)analog, neuromorphic hardware, and physical systems. Expanding on last year's successful NeurIPS workshop, which was the first of its kind in this community, we aim to bring together researchers from machine learning and alternative computation fields to establish new synergies between ML models and non-traditional hardware. Co-designing models with specialized hardware, a feature that has also been key to the synergy of digital chips like GPUs and deep learning, has the potential to offer a step change in the efficiency and sustainability of machine learning at scale. Beyond speeding up standard deep learning, new hardware may open the door to efficient inference and training of model classes that have been limited by compute resources, such as energy-based models and deep equilibrium models. So far, however, these hardware technologies have fallen short due to inherent noise, device mismatch, a limited set of compute operations, and reduced bit-depth. As a community, we need to develop new models and algorithms that …
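
One concrete instance of the model/hardware co-design the abstract calls for is noise-aware training: simulating device non-idealities in the forward pass so that the learned weights tolerate them at deployment. Below is a minimal PyTorch sketch; the multiplicative Gaussian noise model and the 5% noise level are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NoisyLinear(nn.Module):
        # Linear layer that injects multiplicative weight noise during
        # training, as a stand-in for analog device mismatch.
        def __init__(self, d_in, d_out, noise_std=0.05):
            super().__init__()
            self.linear = nn.Linear(d_in, d_out)
            self.noise_std = noise_std

        def forward(self, x):
            w = self.linear.weight
            if self.training:
                w = w * (1 + self.noise_std * torch.randn_like(w))
            return F.linear(x, w, self.linear.bias)

    model = nn.Sequential(NoisyLinear(32, 64), nn.ReLU(), NoisyLinear(64, 10))
    print(model(torch.randn(8, 32)).shape)   # torch.Size([8, 10])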

Scientific Methods for Understanding Neural Networks: Discovering, Validating, and Falsifying Theories of Deep Learning with Experiments

Workshop

Zahra Kadkhodaie · Florentin Guth · Sanae Lotfi · Davis Brown · Micah Goldblum · Valentin De Bortoli · Andrew Saxe

[ West Meeting Room 205-207 ]

Abstract

While deep learning continues to achieve impressive results on an ever-growing range of tasks, our understanding of the principles underlying these successes remains largely limited. This problem is usually tackled from a mathematical point of view, aiming to prove rigorous theorems about the optimization or generalization errors of standard algorithms, but such results have so far been limited to overly simplified settings. The main goal of this workshop is to promote a complementary approach that is centered on the use of the scientific method, which forms hypotheses and designs controlled experiments to test them. More specifically, it focuses on empirical analyses of deep networks that can validate or falsify existing theories and assumptions, or answer questions about the success or failure of these models. This approach has been largely underexplored, but has great potential to further our understanding of deep learning and to lead to significant progress in both theory and practice. The secondary goal of this workshop is to build a community of researchers, currently scattered across several subfields, around the common goal of understanding deep learning through a scientific lens.
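
As a toy instance of the experimental method the workshop advocates, one can pose a falsifiable hypothesis ("a small over-parameterized network cannot fit labels that carry no signal") and test it with a controlled experiment that varies only the labels. A minimal scikit-learn sketch, with all sizes chosen arbitrarily for illustration:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y_real = (X[:, 0] > 0).astype(int)   # learnable rule
    y_rand = rng.permutation(y_real)     # same marginals, no signal

    for name, y in [("real", y_real), ("random", y_rand)]:
        clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=2000)
        clf.fit(X, y)
        print(name, clf.score(X, y))     # training accuracy

High training accuracy on the permuted labels would falsify the hypothesis, echoing well-known memorization experiments.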

Workshop on Machine Learning and Compression

Workshop

Yibo Yang · Karen Ullrich · Justus C. Will · Ezgi Ozyilkan · Elza Erkip · Stephan Mandt

[ West Meeting Room 211-214 ]

Abstract

Machine learning and compression have been described as "two sides of the same coin", and the exponential amounts of data being generated in diverse domains underscore the need for improved compression as well as efficient AI systems. Leveraging deep generative models, recent machine learning-based methods have set new standards for compressing images, videos, and audio. Despite these strides, significant challenges, such as computational efficiency and theoretical limitations, remain. Parallel advances in large-scale foundation models further require research into efficient AI techniques such as model compression and distillation. This workshop aims to unite researchers from machine learning, data/model compression, and information theory. It will focus on enhancing compression techniques, accelerating large model training and inference, exploring theoretical limits, and integrating information-theoretic principles to improve learning and generalization. By bridging disciplines, we seek to catalyze the next generation of scalable, efficient information-processing systems.
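
The "two sides of the same coin" description has a precise reading: any probabilistic model p defines a code that spends about -log2 p(x) bits on a symbol x, so a model with higher likelihood yields a shorter code. A minimal Python sketch of this correspondence, using the empirical unigram distribution as an (assumed) model:

    import math
    from collections import Counter

    message = "abracadabra"
    n = len(message)
    p = {c: k / n for c, k in Counter(message).items()}

    # Ideal Shannon code length under the model: -log2 p(c) bits per symbol.
    bits = sum(-math.log2(p[c]) for c in message)
    print(f"{bits:.1f} bits under the model vs {8 * n} bits as raw bytes")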

Machine Learning for Systems

Workshop

Xinlei XU · Dan Zhang · Mangpo Phothilimthana · Divya Mahajan · Haoran Qiu · Patrick Musau

[ West Meeting Room 201 ]

Abstract

Machine Learning (ML) for Systems describes the application of machine learning techniques to problems related to computer systems. By leveraging supervised learning and reinforcement learning (RL) approaches, machine learning can replace longstanding heuristics that currently drive many of these systems. The area spans a wide range of topics, including multi-objective tasks such as designing new data structures, integrated circuits, or design verification, as well as control algorithms for applications such as compilers, databases, memory management, or ML frameworks. While the systems community increasingly recognizes the importance of ML in solving a variety of different systems problems, ML for Systems remains an emerging area without widely established best practices, methods, and strategies for applying state-of-the-art machine learning techniques. The goal of this workshop is to provide an interdisciplinary venue for ML and Systems experts to push this boundary and start new directions within the ML for Systems area. This year, we will encourage work in key emerging areas such as Large Language Model (LLM) training and serving.
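
As a toy version of "replacing a longstanding heuristic with a learned model", consider a compiler-style inlining decision: instead of a hand-tuned size threshold, fit a classifier on features of past decisions. The features, threshold, and synthetic labels below are all illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hand-written heuristic: inline iff the callee is small.
    def heuristic(size, call_count, in_loop):
        return size < 40

    # Learned replacement, fit on (synthetic) outcomes where small
    # *or* hot-in-a-loop callees were worth inlining.
    rng = np.random.default_rng(1)
    X = np.column_stack([rng.integers(1, 200, 1000),   # callee size
                         rng.integers(1, 50, 1000),    # call count
                         rng.integers(0, 2, 1000)])    # inside a loop?
    y = (X[:, 0] < 40) | ((X[:, 2] == 1) & (X[:, 1] > 20))

    model = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print(model.score(X, y))   # the tree recovers the richer rule

In practice the labels would come from measured speedups rather than a synthetic rule, and RL formulations replace the one-shot classifier when decisions interact over time.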

Tackling Climate Change with Machine Learning

Workshop

Tejasri Nampally · Diego Kiedanski · Amrita Gupta · Yazid Mikail · Arthur Ouaknine · Bistra Dilkina · Yoshua Bengio

[ East Ballroom C ]

Abstract

Machine learning is emerging as a valuable tool in mitigating and adapting to climate change, while climate change has been noted as a valuable area for inspiring cutting-edge algorithms in machine learning. This workshop is intended to form connections and foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields, in addition to providing a forum for those in the machine learning community who wish to tackle climate change. This workshop distinguishes itself from previous editions of the popular ‘Tackling Climate Change with Machine Learning’ workshop series by focusing on a key challenge: questioning common machine learning assumptions in the context of climate impact. Specifically, we will concentrate on two questions that are very timely for the machine learning community: (i) the various climate-related benefits and costs of large vs. small models, and (ii) the design of effective benchmarks for climate-related applications.

NeurIPS'24 Workshop on Causal Representation Learning

Workshop

Guangyi Chen · Haoxuan Li · Sara Magliacane · Zhijing Jin · Biwei Huang · Francesco Locatello · Peter Spirtes · Kun Zhang

[ East Exhibition Hall C ]

Abstract

Advanced Artificial Intelligence (AI) techniques based on deep representations, such as GPT and Stable Diffusion, have demonstrated exceptional capabilities in analyzing vast amounts of data and generating coherent responses from unstructured data. They achieve this through sophisticated architectures that capture subtle relationships and dependencies. However, these models predominantly identify dependencies rather than establishing and making use of causal relationships, which can lead to spurious correlations and algorithmic bias, limiting the models’ interpretability and trustworthiness. In contrast, traditional causal discovery methods aim to identify causal relationships within observed data in an unsupervised manner. While these methods show promising results in scenarios with fully observed data, they struggle with complex real-world situations where causal effects occur in latent spaces, as when working with images, videos, and possibly text. Recently, causal representation learning (CRL) has made significant progress in addressing these challenges, demonstrating great potential for understanding the causal relationships underlying observed data. These techniques are expected to enable researchers to identify latent causal variables and discern the relationships among them, which provides an efficient way to disentangle representations and enhance the reliability and interpretability of models. The goal of this workshop is to explore the challenges and opportunities in this field, discuss recent progress, …
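
The gap the abstract draws between dependencies and causal relationships can be made concrete with a small simulation: observational correlation is symmetric in X and Y, but interventions are not. A toy linear structural causal model in numpy, with all coefficients illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Ground-truth structure: X -> Y with effect 2.0.
    X = rng.normal(size=n)
    Y = 2.0 * X + rng.normal(size=n)
    print(np.corrcoef(X, Y)[0, 1])   # strong dependence, direction hidden

    # do(X = 3): Y responds to the intervention.
    Y_doX = 2.0 * 3.0 + rng.normal(size=n)
    print(Y_doX.mean())              # ~6.0

    # do(Y = 3): X does not respond.
    X_doY = rng.normal(size=n)
    print(X_doY.mean())              # ~0.0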

Machine Learning in Structural Biology

Workshop

Gabriele Corso · Simon Duerr · Gina El Nesr · Zeming Lin · Sergey Ovchinnikov · Vignesh Ram Somnath · Hannah Wayment-Steele

[ East Meeting Room 11, 12 ]

Abstract

Structural biology is the study of biological function with the awareness that molecules exist in four dimensions. AlphaFold2 demonstrated deep learning’s capability to solve one low-hanging problem in this field: predicting a single protein structure from its sequence, trained on the ~180,000 structures standardized and collected in the Protein Data Bank. However, many harder challenges remain unsolved if we wish to understand and design functional biomolecules, and there is a strong need to gather deep learning experts and biologists together to address them. We have assembled and confirmed a set of diverse speakers who are world leaders on current challenges, including how to bring large-scale stability datasets, dynamics, and ligand binding into the fold of modern deep learning for structural biology. One of the biggest bottlenecks for all of these problems is the data available for training and how to create clear and stringent tests to evaluate progress. Our workshop will highlight two new benchmarks in a special track of our call for papers, and create a platform for open-sourced models, in collaboration with HuggingFace. We anticipate this workshop to be of great interest to many NeurIPS attendees, and to create lasting impact in establishing benchmarks and …

Workshop on Open-World Agents: Synergizing Reasoning and Decision-Making in Open-World Environments (OWA-2024)

Workshop

Xiaojian (Shawn) Ma · Siyuan Qi · Hangxin Liu · Zihao Wang · Xue Feng · Shaofei Cai · Zhi Gao · Anji Liu · Yitao Liang · Yaodong Yang · Zilong Zheng · Qing Li · Siyuan Huang · Shuang Li · Ruiqi Gao · Dave Chen · Angel Chang · Song-Chun Zhu

[ East Meeting Room 13 ]

Abstract

In recent years, AI has made significant strides across various domains, demonstrating capabilities that often surpass human performance on specific tasks. However, the real world presents challenges that go beyond single tasks, objectives, or predefined, static environments. We propose to consider open-world environments as the new habitat for AI agents: highly diverse and dynamic, fully interactive, teeming with infinite and creative tasks, and requiring continual learning and growth. Open-world agents are therefore expected to possess remarkable problem-solving capabilities across all cognitive functions, notably reasoning and decision-making, compared to specialized AI agents.

This workshop aims to bring together researchers from various fields to discuss emerging topics in reasoning and decision-making in open-world environments. The topic is broad, but we are particularly interested in synergizing reasoning and decision-making, i.e., open-world agents that can simultaneously perform reasoning (e.g., QA, dialogue) and decision-making (e.g., planning and control), and in how such unification helps tackle the challenges the open world poses to both. To this end, the related fields include, but are not limited to, interleaved reasoning and decision-making, reasoning in embodied learning agents, LLM tool usage, reinforcement learning in open-world environments, open-vocabulary learning, continual learning, multi-agent learning, and …
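
One schematic pattern for the reasoning/decision-making unification described above is an agent loop that interleaves free-form reasoning with environment actions, in the spirit of ReAct-style agents. The llm and env interfaces below are stand-ins for illustration, not any specific framework's API:

    def agent_loop(llm, env, max_steps=10):
        # Interleave reasoning (free text) with decision-making (actions).
        observation = env.reset()
        trajectory = []
        for _ in range(max_steps):
            thought = llm(f"Observation: {observation}\nReason step by step.")
            action = llm(f"{thought}\nChoose the next action.")
            observation, done = env.step(action)
            trajectory.append((thought, action, observation))
            if done:
                break
        return trajectory

    class EchoEnv:
        # Trivial stand-in environment so the sketch runs end to end.
        def reset(self):
            return "start"
        def step(self, action):
            return f"saw {action}", True   # (observation, done)

    print(agent_loop(lambda prompt: "noop", EchoEnv()))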
