2024 BAIR Graduate Directory



Every year, the Berkeley Artificial Intelligence Research (BAIR) Lab graduates some of the most talented and innovative minds in artificial intelligence and machine learning. Our Ph.D. graduates have each pushed the boundaries of AI research and are now ready for new adventures in academia, industry, and beyond.

These amazing individuals bring with them a wealth of knowledge, fresh ideas, and a drive to continue contributing to the advancement of AI. Their work at BAIR, spanning deep learning, robotics, and natural language processing to computer vision, security, and more, has made significant contributions to their fields and has had a transformative impact on society.

This website is dedicated to showcasing our colleagues, making it easier for academic institutions, research organizations, and industry leaders to discover and recruit the next generation of AI pioneers. Here, you'll find detailed profiles, research interests, and contact information for each of our graduates. We invite you to explore potential collaborations and opportunities as these graduates seek to apply their skills and insights in new environments.

Join us in celebrating the achievements of BAIR's latest PhD graduates. Their journey has just begun, and the future they will help build is bright!

We thank our friends at the Stanford AI Lab for this idea!



Abdus Salam Azad


E-mail: salam_azad@berkeley.edu
website:

Advisor(s): Ion Stoica

Research blurb: My research interests are broadly in the field of machine learning and artificial intelligence. During my Ph.D. I have focused on environment generation and curriculum learning methods for training autonomous agents with reinforcement learning. In particular, I work on methods that algorithmically generate diverse training environments (i.e., learning scenarios) so that autonomous agents generalize better and learn more sample-efficiently. Currently, I am working on large language model (LLM)-based autonomous agents.
Interested in jobs: Research Scientist, ML Engineer


Alicia Tsai


E-mail: aliciatsai@berkeley.edu
website:

Advisor(s): Laurent El Ghaoui

Research blurb: My research sheds light on the theoretical aspects of deep implicit models, starting with a unified “state space” representation that simplifies notation. Additionally, my work explores various training challenges associated with deep learning, including tractability problems for convex and non-convex optimization. Beyond theoretical exploration, my research extends to potential applications in various problem domains, including natural language processing and the natural sciences.
Interested in jobs: Research Scientist, Applied Scientist, Machine Learning Engineer


Catherine Weaver


E-mail: catherine22@berkeley.edu
website:

Advisor(s): Masayoshi Tomizuka, Wei Zhan

Research blurb: My research focuses on machine learning and control algorithms for the challenging task of autonomous racing in Gran Turismo Sport. I leverage my background in mechanical engineering to explore how machine learning and model-based optimal control can create safe, high-performance control systems for robotics and autonomous systems. A particular emphasis of mine has been how to leverage offline datasets (e.g., human players' racing trajectories) to inform better, more sample-efficient control algorithms.
Interested in jobs: Research Scientist and Robotics/Control Engineer


Chawin Sitawarin


E-mail: chawin.sitawarin@gmail.com
website:

Advisor(s): David Wagner

Research blurb: I am broadly interested in the safety and security aspects of machine learning systems. Most of my previous work is in the domain of adversarial machine learning, specifically adversarial examples and the robustness of machine learning algorithms. More recently, I have been excited about emerging security and privacy risks in large language models.
Interested in jobs: Research scientist



Eliza Kosoy


E-mail: eko@berkeley.edu
website:

Advisor(s): Alison Gopnik

Research blurb: Eliza Kosoy works with Professor Alison Gopnik at the intersection of child development and AI. Her work includes creating assessment criteria for LLMs rooted in child development and studying how children and adults use GenAI models such as ChatGPT/DALL-E and form mental models about them. She is an intern at Google working with the AI/UX team and previously with the Empathy Lab. She has published in NeurIPS, ICML, ICLR, CogSci, and Cognition. Her thesis work created a unified virtual environment for testing children and AI models in one place, for the purposes of training RL models. She also has experience building startups and STEM hardware coding toys.
Interested in jobs: Research Scientist (Child Development & AI), AI Safety (Expertise in Children), User Experience (UX) Researcher (Expertise in Mixed Methods, Youth, AI, LLM), Education & AI (STEM toys)


Fangyu Wu


E-mail: fangyuwu@berkeley.edu
website:

Advisor(s): Alexandre Bayen

Research blurb: Under the guidance of Professor Alexandre Bayen, Fangyu focuses on the application of optimization methods to multi-agent robotic systems, particularly in the planning and control of automated vehicles.
Interested in jobs: Faculty, or Research Scientist in Control, Optimization, and Robotics


Frances Ding


E-mail: frances@berkeley.edu
website:

Advisor(s): Jacob Steinhardt, Moritz Hardt

Research blurb: My research focus is on machine learning for protein modeling. This includes understanding what different protein models learn, as well as improving protein property classification and protein design. I have previously worked on DNA and RNA sequence models, and on benchmarks for evaluating the interpretability and fairness of ML models across domains.
Interested in jobs: Research scientist



Kathy Jang


E-mail: kathyjang@gmail.com
website:

Advisor(s): Alexandre Bayen

Research blurb: My dissertation work has specialized in reinforcement learning for autonomous vehicles, with a focus on decision-making and enhancing performance in applied settings. In future work, I look forward to applying these principles to broader challenges in domains such as natural language processing. With my background, I aim to see the direct impact of my efforts by contributing to cutting-edge AI research and solutions.
Interested in jobs: ML Research Scientist / Engineer



Nikhil Ghosh


E-mail: nikhil_ghosh@berkeley.edu
website:

Advisor(s): Bin Yu, Song Mei

Research blurb: I am interested in developing a better fundamental understanding of deep learning and optimizing practical systems using theoretical and empirical methodologies. Currently, I am particularly interested in improving the performance of large models by studying how to properly scale hyperparameters with model size.
Interested in jobs: Research scientist


Olivia Watkins


E-mail: oliviawatkins@berkeley.edu
website:

Advisor(s): Pieter Abbeel and Trevor Darrell

Research blurb: My work includes RL, behavioral cloning (BC), learning from humans, and using common-sense reasoning from foundation models for agent learning. I am passionate about language agent learning, monitoring, alignment, and robustness.
Interested in jobs: Research scientist


Ruiming Cao


E-mail: rcao@berkeley.edu
website:

Advisor(s): Laura Waller

Research blurb: My research is on computational imaging, specifically space-time modeling for dynamic scene recovery and motion estimation. I also work on optical microscopy techniques, optimization-based optical design, event camera processing, and novel view rendering.
Interested in jobs: Research Scientist, Post-Doc, Faculty


Ryan Hoque


E-mail: ryanhoque@berkeley.edu
website:

Advisor(s): Ken Goldberg

Research blurb: Imitation learning and reinforcement learning algorithms that scale to large fleets of robots performing manipulation and other complex tasks.
Interested in jobs: Research scientist


Sam Toyer


E-mail: sdt@berkeley.edu
website:

Advisor(s): Stuart Russell

Research blurb: My research focuses on making language models safe, robust, and secure. I also have experience in vision, planning, imitation learning, reinforcement learning, and reward learning.
Interested in jobs: Research scientist


Shishir Patil


E-mail: shishirpatil2007@gmail.com
website:

Advisor(s): Joseph Gonzalez

Research blurb: Gorilla LLM: teaching LLMs to use tools; LLM Execution Engine: guaranteeing reversibility, robustness, and blast-radius minimization for LLM agents incorporated into user and enterprise workflows; POET: memory-bound, energy-efficient fine-tuning of LLMs on edge devices such as smartphones and laptops.
Interested in jobs: Research scientist


Suzie Petryk


E-mail: spethryk@berkeley.edu
website:

Advisor(s): Trevor Darrell, Joseph Gonzalez

Research blurb: I work on improving the reliability and safety of multimodal models. My focus is on localizing and reducing hallucinations for vision + language models, as well as measuring and using uncertainty and reducing bias. My interests lie in applying solutions to these challenges in real production scenarios rather than solely in an academic environment.
Interested in jobs: Applied Research Scientist in generative AI, safety, and/or accessibility


Xingyu Lin


E-mail: xingyu@berkeley.edu
website:

Advisor(s): Pieter Abbeel

Research blurb: My research is on robotics, machine learning, and computer vision, with the main goal of learning general robot skills from two perspectives: (1) learning structured world models with spatial and temporal abstractions; (2) pre-training visual representations and skills to enable knowledge transfer from Internet-scale vision datasets and simulators.
Interested in jobs: Faculty or Research Scientist


Yaodong Yu


E-mail: yyu@eecs.berkeley.edu
website:

Advisor(s): Michael I. Jordan, Yi Ma

Research blurb: My research interests are broadly in the theory and practice of reliable machine learning, including interpretability, privacy, and robustness.
Interested in jobs: Faculty


