I had a fun time chatting with Sara and Sean at their fantastic Underrated ML podcast.
“Rosanne Liu joins the podcast, and talks about the beauty of a thorough research treatment of small ideas – like why have modern deep neural networks stopped using pooling layers? Does pooling actually deliver on translation invariance?” https://t.co/OULJDQB3ma pic.twitter.com/Y9BFFI0AQY
— underrated_ml (@underrated_ml) April 13, 2020
The endearing dynamic between the two hosts, the perfect ratio of 70% solid content to 30% pure laughter (give or take), and the unique focus on “underrated ML papers” are what give this podcast its special charm.
It is certainly why I’ve been an absolute fan, ever since the very first episode.
A few other episodes, including a more recent one with Kyunghyun Cho, are also well worth tuning in for.
Fun fact: the first author of the paper I picked later reached out to say thanks. How this little voice reached him through all the overwhelming noise in the world is beyond me… magic?
What’s your pick of underrated papers or ideas? Send your thoughts their way (links are on their webpage).
Be the exact opposite of Reviewer 2!!