Most of us start out hoping, in the best case, to be known for one small thing, but the road to maturity often leads us to expand our expertise: first in a scattered, unorganized manner, then with growing structure and coherence. I am at a point in my life where the following “themes” (click to expand for evidence) do a good job of summarizing my professional expertise.
Like most scientists, my technical competencies are best summarized by my publications, which follow the broad theme of exposing and understanding neural networks — the mysteries behind their optimization, training dynamics, and what and how they learn — and of using this understanding to design better models. Examples include understanding loss landscape properties, improving convolutional neural networks, improving generative models [1] [2], understanding sparsity [3] [4], tackling continual learning, and pushing for model robustness by releasing a fun dataset as well as by simplifying inputs.
I have good general knowledge of the deep learning literature and can speak to the landscape of topics and waves of paper trends as I see them, having run a weekly reading group on all things ML since 2018.
I am happy to teach deep learning fundamentals. Example: guest lecturer at NYU AI School 2021.
I try to make research easy to understand by producing short, fun video explanations of my papers. Examples: ASH, CoordConv, PPLM.
My experience founding and running ML Collective has taught me:
What building and running a non-profit company is like: from ideation to setting up the structure, attracting members, obtaining non-profit status, fundraising, and everyday operations.
What entrepreneurship is about, to me. Some of it is captured in these two podcast episodes I was on: The Gradient (06.2022), Gradient Dissent (02.2021).
Why community building as a thankless business is so essential to research and yet so hard to put into practice.
Representation matters. I have a long track record of advocating for underrepresented and underprivileged communities in ML research, from speaking about existing problems to taking concrete actions to make changes.
- Serving as the DEI Chair of ICLR 2022, ICLR 2023, ICLR 2024, and the DIA Chair of NeurIPS 2023.
- Starting the Tiny Papers Track at ICLR 2023 and 2024, as a more accessible way to publish.
- [Talk] Talk on career choice creation for non-standard candidates.
- Co-organizing the “Broadening Research Collaborations in ML” workshop at NeurIPS 2022.
- Serving as the DEI Chair of ICLR 2022 and launching the “Broadening Call for Participation”, or CoSubmitting Summer (CSS), initiative to help underrepresented, independent, and first-time submitters work on research. Upon its conclusion we shared reflections on the program.
- [Talk] Talk on “How to have fun in AI research” (slides).
- [Talk] Talk on “AI research: the unreasonably narrow path and how not to be miserable” (slides).
- Making a gender-equality pitch for getting women on board with AI research.
- Making ML Collective a place entirely devoted to creating and distributing research opportunities and resources to those without traditional access.
- I also tweet about the minority experience: example
I believe the existing academic structure can no longer accommodate the expanding needs and diverse growth trajectories of researchers, especially in CS and ML. ML Collective is our attempt at redesigning a scientific organization from the ground up.
Panelist for “Large-scale collaborations” at the ACL 2022 Workshop on Challenges & Perspectives in Creating Large Language Models.
Panelist for “Grassroots AI: The Unreasonable Effectiveness of Collaborative Research.”
Panelist for “Novelty in Science Organizations: A Virtual Workshop.”
Discussion lead for “Research within Community: How to Cultivate a Nurturing Environment for Your Research” at WiML @ ICML 2021.
[Talk] Talk on “How to Make a Positive ML Research Community”.
Tweet-thoughts on hiring, skill training and “the Hollywood model” in a research organization, and how little you actually need to do to improve diversity.
Plainly speaking, the current academic environment is hostile to researchers of all levels. Almost everyone is struggling, some more than others.
Mentor for the topic “Mental Health & Surviving in Grad School” at WiML @ NeurIPS 2022.
Organizer of the “Computational Approaches to Mental Health” workshop at ICML 2021.
“Grad school well-being” panel at DLCT.
“Well-being listening sessions” at NeurIPS 2021.
I know a thing or two about mentorship (specifically in ML research, but also in scientific training generally): I have personally gone through extensive grad school programs, worked with both great and not-so-great mentors, served as a research mentor in a few Google Brain initiatives, and now run an organization that’s all about facilitating collaboration and mentorship.