
Accepted Papers

Note that registration on the ICLR website is required to attend the event. All workshop events (except the poster session and open discussion) can be followed via the ICLR page, or over Zoom by clicking "join zoom" on that page.

The table below lists the accepted papers and their IDs. These IDs identify the corresponding posters in the poster session (Gather.town link).

Paper ID  Title
1 GradMax: Gradient Maximizing Neural Network Growth
2 Gradient Matching for Efficient Learning
3 Fully Quantizing Transformer-Based ASR for Edge Deployment
4 ActorQ: Quantization for Actor-Learner Distributed Reinforcement Learning
5 Optimizer Fusion: Efficient Training with Better Locality and Parallelism
6 Grouped Sparse Projection for Deep Learning
7 Gradient Descent with Momentum Using Dynamic Stochastic Computing
8 Memory-Bounded Sparse Training on the Edge
9 A Fast Method to Fine-tune Neural Networks for the Least Energy Consumption on FPGAs
10 Self-reflective Variational Autoencoder
11 Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions
12 An Exact Penalty Method for Binary Training
13 Training CNNs faster with Input and Kernel Downsampling
14 On-FPGA Training with Ultra Memory Reduction: A Low-Precision Tensor Method
15 MoIL: Enabling Efficient Incremental Training on Edge Devices
16 Heterogeneous Zero-Shot Federated Learning with New Classes for Audio Classification
17 Scaling Deep Networks with the Mesh Adaptive Direct Search Algorithm

The table below lists the three competition winners and their IDs. As above, these IDs identify the corresponding posters in the poster session (Gather.town link).

Paper ID  Title
18 Improving ResNet-9 Generalization Trained on Small Datasets
19 Efficient Training Under Limited Resources
20 Training a 5000×32×32×3 RGB Dataset on NVIDIA TESLA V100 in 10 Minutes