It probably doesn’t surprise you that I write.
After my narrowness talk accumulated 10k views in three days, I feel like clarifying a few behind-the-scenes facts.
Over the past year, I received and answered roughly two hundred emails from people from all walks of life, sharing a seemingly universal pain: finding it hard to “make it” in ML research.
- When less is more: Simplifying inputs aids neural network understanding
- Natural Adversarial Objects
  Data-Centric AI Workshop, NeurIPS 2021 · arXiv
- Language Models are Few-shot Multilingual Learners
  MRL Workshop, EMNLP 2021 · arXiv
- Why is Pruning at Initialization Immune to Reinitializing and Shuffling?
  SNN Workshop 2021 · arXiv
- When does loss-based prioritization fail?
  SubSetML Workshop, ICML 2021 · arXiv
- Supermasks in Superposition
  NeurIPS 2020 · arXiv · Blog post · Code
- Estimating Q(s,s') with Deep Deterministic Dynamics Gradients
  ICML 2020 · arXiv · Video · Code