A little over a year ago, I wrote a 6000-word retrospective, A Year of MLC: Selfish Takes Only, reflecting on a full year of building ML Collective, a non-profit community for non-traditional researchers.
It probably doesn’t surprise you that I write.
After my narrowness talk accumulated 10k views in three days, I'd like to clarify a few behind-the-scenes facts.
- Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
- When less is more: Simplifying inputs aids neural network understanding
- Natural Adversarial Objects
NeurIPS 2021 Data-Centric AI Workshop arXiv
- Language Models are Few-shot Multilingual Learners
EMNLP 2021 MRL Workshop arXiv
- Why is Pruning at Initialization Immune to Reinitializing and Shuffling?
SNN Workshop 2021 arXiv
- When does loss-based prioritization fail?
ICML 2021 SubSetML Workshop arXiv
- Supermasks in Superposition
NeurIPS 2020 arXiv Blog post Code