The banner image (above) is composed of three side-by-side illustrations created by Adobe Illustrator's generative AI when fed the prompt, "A [male/female] Ph.D. student works independently on research related to [theoretical machine learning/interpretable machine learning/natural language processing]."
With generative AI tools proliferating at an unprecedented rate, it is no surprise that the Duke community is both part of these technological advancements and engaged in conversations about their merits and risks. For example, the 2023 Provost’s Forum opened with a panel discussion on ChatGPT, convening faculty from across Duke; and Duke Learning Innovation’s website includes a page on "AI and Teaching at Duke," aimed at helping instructors navigate this new terrain.
Meanwhile, graduate students offer yet another glimpse into AI at Duke. Outlining their research in theoretical machine learning, interpretable machine learning, and natural language processing, three Ph.D. students in the Department of Computer Science share their personal experiences and perspectives on the field.
Theoretical Machine Learning
A fourth-year Ph.D. student in Computer Science (CS), Muthu Chidambaram works on the theoretical side of machine learning; his research aims to address the "over-confidence problem" in machine learning models, which use data to find patterns or make predictions.
Chidambaram explains that people need to be able to trust a model’s confidence level, especially in high-stakes scenarios—for example, a doctor using a model to predict whether a patient has cancer.
"The model might predict that they don’t have cancer with a probability of .99, meaning that it’s super confident," says Chidambaram. "But a lot of the time when the models make mistakes, they’re still really, really confident."
With a more accurate probability rating, the doctor in this scenario would know to do further evaluations and not "blindly trust" the prediction.
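For readers curious what checking calibration might look like in practice, here is a minimal Python sketch. It is a toy illustration of the general idea, not Chidambaram's own method: the helper function and numbers are invented for the example, and it simply compares a model's stated confidence against how often its predictions were actually correct.

    import numpy as np

    def calibration_report(confidences, correct, n_bins=5):
        """Compare stated confidence with observed accuracy in each confidence bin."""
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences >= lo) & (confidences < hi)
            if mask.any():
                print(f"confidence {lo:.1f}-{hi:.1f}: "
                      f"mean confidence {confidences[mask].mean():.2f}, "
                      f"actual accuracy {correct[mask].mean():.2f}")

    # Toy predictions from an over-confident model: high confidence, not always right.
    confidences = np.array([0.99, 0.98, 0.97, 0.65, 0.60])
    correct = np.array([1.0, 0.0, 1.0, 1.0, 0.0])  # 1 = prediction was right
    calibration_report(confidences, correct)

A well-calibrated model would show the two columns roughly matching; a large gap in the high-confidence bins is the kind of over-confidence Chidambaram describes.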
Despite his current research focus, Chidambaram says he "didn’t come from a theoretical background before the Ph.D." In fact, he worked as a software engineer for three years, first with Google and then with a startup called Bolt.
"I think the industry perspective is very useful in trying to decide what problems are going to be interesting to a lot of people," Chidambaram adds.
Ph.D. Students Amidst the Increasing Demand for Duke CS
As a Ph.D. student in Computer Science, Chidambaram also serves as a teaching assistant for two courses—a "rewarding but also a very taxing experience, especially as the major has grown so significantly," he points out.
In 2023—just as the Department of Computer Science celebrated its 50th anniversary—67% of graduating undergraduates had taken at least one CS course.
"The teaching supply has not yet met the demand from the student side,” says Chidambaram. “My understanding is that there's been much more of a focus on hiring teaching faculty to ideally take on a lot of this."
Duke was ranked #20 for CS graduate programs in the 2023 US News & World Report rankings.
Interpretable Machine Learning
Fellow fourth-year Chudi Zhong helps develop models that are simple or "interpretable" enough to be understood by those using them in the real world. Cognizant of how machine learning might be used in medical diagnoses, loan applications, or the criminal justice system, Zhong emphasizes the importance of "understanding how the model makes decisions" and "making sure it can easily align with human needs."
For high-stakes decision making, Zhong says, "we want a model that is inherently interpretable, so we don't need to use some post hoc processing to try to understand it. The model is just there. You can see it, you can understand it."
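As a generic illustration of what "you can see it, you can understand it" might mean in code (not a depiction of Zhong's own models), the short Python sketch below fits a deliberately shallow decision tree with scikit-learn and prints it as a handful of readable if/else rules.

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Fit a deliberately small tree so the entire model fits on one screen.
    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(data.data, data.target)

    # The whole model prints as a few readable if/else rules; no post hoc
    # explanation step is needed to see how it reaches a decision.
    print(export_text(tree, feature_names=list(data.feature_names)))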
Before starting her Ph.D., Zhong completed her master’s at Duke, taking her first machine learning course while pursuing her degree in statistical science. Coding did not come naturally at first, but she realized that she only lacked familiarity and practice. That experience convinced her of the importance of supporting the younger generation and encouraging students of all backgrounds to simply "give it a try."
Mentoring the Next Generation
Zhong has volunteered with FEMMES+ Capstone, a Duke outreach program that introduces fourth-, fifth-, and sixth-grade students to STEM subjects through hands-on activities. She has also been impressed by the diverse set of students she has worked with as a TA.
"I see students from different backgrounds taking machine learning courses for the first time, and they do a really good job. They all ask very interesting questions," says Zhong.
Zhong hopes to land an academic job that will allow her the flexibility she has come to enjoy in grad school, along with more opportunities to be an encouraging presence through teaching.
"It’s quite important to be involved in making a more supportive and inclusive data science, machine learning, and AI community," says Zhong.
Natural Language Processing
Meanwhile, Raghuveer Thirukovalluru, a third-year Ph.D. student in Computer Science, spent his first two years of the program researching how to speed up the training process for language models.
"Language models typically take a lot of time to train, and sometimes they also need a lot of budget," Thirukovalluru explains. "So what I was trying to see was, can I specifically pick out some more important pieces of text to train these language models, rather than giving it all the text that exists in the world?"
Now, Thirukovalluru is working on sentence embedding, converting human language into numerical vectors so that a computer can understand it. More specifically, he works on semantic text similarity, or a language model’s ability to compare two pieces of text and rate their similarity in terms of meaning.
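A minimal sketch of this idea, using the open-source sentence-transformers library and an example pretrained model rather than anything specific to Thirukovalluru's research, might look like the following: two sentences are encoded into vectors and then compared with cosine similarity.

    from sentence_transformers import SentenceTransformer, util

    # An example pretrained embedding model; any sentence-embedding model would work.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = ["A doctor examines a patient.",
                 "A physician checks someone who is ill."]
    embeddings = model.encode(sentences)  # each sentence becomes a vector of numbers

    # Cosine similarity near 1 means the two sentences mean roughly the same thing.
    print(util.cos_sim(embeddings[0], embeddings[1]))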
How ChatGPT Has Changed the Field
Thirukovalluru shares that generative language models like ChatGPT have significantly transformed the field of natural language processing (NLP).
"Two or three years ago, NLP was totally different. All of those traditional NLP problems that people used to work on—like core reference resolutions, question answering, all of that—has sort of been solved," Thirokovalluru shares. "So, these days the challenge has become finding the right question."
Likewise, the effects are not limited to academia, according to Thirukovalluru: "Generative AI will definitely impact a lot of industries, in both good ways and bad ways; we will likely have a lot of job losses, and also we will push higher frontiers in education and things like that."
For the moment, however, Thirukovalluru is approaching things "one project at a time" and staying abreast of new research.
"Papers are coming out left, right, and center. It is progressing really, really fast," Thirokovalluru says.
Looking for more on AI at Duke?
The Nasher Museum’s AI-generated exhibition, which has earned a write-up in the New York Times, is on display until February 18.
Meanwhile, a Duke OIT AI chatbot is currently in the works, and Duke Health is looking forward to its own AI virtual assistant to make MyChart more efficient for both patients and providers.
Finally, stay tuned for an upcoming DGS and DGSA professional development session on generative AI, hosted by The Graduate School!