**NeurIPS 2022 - Papers that Piqued**

#### December 2022

NeurIPS x New Orleans ended yesterday, closing out an in-person and virtual deep learning whirlwind! Before I dive in, I'm going to share some 2022 NeurIPS factoids. Let's go! Of the 9,634 papers submitted to NeurIPS 2022, 2,672 were accepted. Among the accepted papers, what do you think the 5 most frequent title bigrams were? It probably won't surprise you that they were: (1) neural network, (2) reinforcement learning, (3) language models, (4) graph neural, (5) federated learning. And boy did those trends come to life in the poster sessions! Everyone brought their A-game both to the conference and ~The Big Easy~. So, if you don't recognize the photo above, you spent too much time at the poster sessions!

In this blog post, I'm going to start by listing all of the Outstanding Main Track and Dataset & Benchmark papers, so that you can easily access the papers the reviewers thought were exceptional. Then I'm going to share 25 papers that piqued my interest across 5 areas: Transformers, Self-Supervised Learning, Autoencoders, Time Series, and Graph Neural Networks. This list is certainly NOT comprehensive, but it gives a taste of what we experienced over the last two weeks. How did I pick these 25 papers? In person, I visited ~literally~ all of the posters and, at minimum, read every single title. After returning to Caltech, I went through all 2,672 posters again, whittled my list down to 179 papers that piqued my interest, then semi-randomly selected 5 posters for each of the 5 topics that motivate me.
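For the curious, the "semi-random" selection step can be sketched as plain uniform sampling per topic. The shortlist below is made of hypothetical placeholder titles (not my actual 179-paper shortlist), and the fixed seed is just to make the picks reproducible:

```python
import random

# Hypothetical shortlist: topic -> candidate paper titles (placeholders only).
shortlist = {
    "Transformers": [f"Transformer paper {i}" for i in range(1, 31)],
    "Time Series": [f"Time-series paper {i}" for i in range(1, 41)],
}

rng = random.Random(2022)  # fixed seed so the draw is reproducible

# Draw 5 distinct papers per topic, uniformly at random.
picks = {topic: rng.sample(papers, 5) for topic, papers in shortlist.items()}

for topic, papers in picks.items():
    print(topic, papers)
```

The "semi" part in my process was human judgment layered on top of the draw, which no three-liner can reproduce.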

As you make your way through this list, clicking on a paper will take you to its review page, where you can see what the reviewers thought and check out the full paper. Hopefully you resonate with at least a few of my paper picks! With that, let's get started.


**Outstanding Papers**

*Main Track*

- Is Out-of-Distribution Detection Learnable?

- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

- Elucidating the Design Space of Diffusion-Based Generative Models

- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation

- Using natural language and program abstractions to instill human inductive biases in machines

- A Neural Corpus Indexer for Document Retrieval

- High-dimensional limit theorems for SGD: Effective dynamics and critical scaling

- Gradient Descent: The Ultimate Optimizer

- Riemannian Score-Based Generative Modelling

- Gradient Estimation with Discrete Stein Operators

- An empirical analysis of compute-optimal large language model training

- Beyond neural scaling laws: beating power law scaling via data pruning

- On-Demand Sampling: Learning Optimally from Multiple Distributions

*Dataset & Benchmarks*

- LAION-5B: An open large-scale dataset for training next generation image-text models

- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge

__Pro Tip:__ If you did register for the conference, I highly recommend you check out the NeurIPS 2022 Visualization Tool to investigate papers' relatedness. If you didn't register, you'll have to imagine searching the glorious paper space shown below!

**Transformers**

- Brain Network Transformer

- Staircase Attention for Recurrent Processing of Sequences

- Recipe for a General, Powerful, Scalable Graph Transformer

- Improving Transformer with an Admixture of Attention Heads

- Recurrent Memory Transformer

**Self-Supervised Learning**

- Improving Self-Supervised Learning by Characterizing Idealized Representations

- Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods

- Graph Self-supervised Learning with Accurate Discrepancy Learning

- HierSpeech: Bridging the Gap between Text and Speech by Hierarchical Variational Inference using Self-supervised Representations for Speech Synthesis

- VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training

**Autoencoders**

- Masked Autoencoders As Spatiotemporal Learners

- Masked Autoencoders that Listen

- Embrace the Gap: VAEs Perform Independent Mechanism Analysis

- Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses

- Exploring the Latent Space of Autoencoders with Interventional Assays

**Time Series**

- Multivariate Time-Series Forecasting with Temporal Polynomial Graph Neural Networks

- Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN)

- Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

- BILCO: An Efficient Algorithm for Joint Alignment of Time Series

- WaveBound: Dynamic Error Bounds for Stable Time Series Forecasting

**Graph Neural Networks**

- Co-Modality Graph Contrastive Learning for Imbalanced Node Classification

- OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport

- Provably expressive temporal graph networks

- Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations

- Template based Graph Neural Network with Optimal Transport Distances

Glad you made it through this tiny taste of NeurIPS 2022! Hopefully some of these papers piqued your interest too.