Time: Fridays, noon–1:05pm (PT)
Location: The Internet / The LSD Lab (Engineering 2, Room 398)
Organizers: Lindsey Kuper, Tyler Sorensen, Reese Levine, and Achilles Benetopoulos
The Languages, Systems, and Data Seminar meets weekly to discuss interesting topics in the areas of programming languages, systems, databases, formal methods, security, software engineering, verification, architecture, and beyond. Our goal is to encourage interactions and discussions among students, researchers, and faculty with interests in these areas. The seminar is open to everyone interested. Participating UCSC students should register for the 2-credit course CSE 280O (let the organizers know if you’re an undergrad and need a permission code).
For winter 2026, we will continue to host the LSD Seminar in a hybrid fashion. Anyone can attend on Zoom, and local folks can gather in person in the lab. Speakers can join either in person or on Zoom, whichever is convenient.
Talks will be advertised on the ucsc-lsd-seminar-announce (for anyone) and lsd-group (for UCSC-affiliated people) mailing lists.
| Date | Speaker | Title |
|---|---|---|
| Jan. 9 | Shanto Rahman | Reliable Software Testing Using LLMs and Program Analysis |
| Jan. 16 | Lef Ioannidis | TBD |
| Jan. 23 | George Pirlea | TBD |
| Jan. 30 | Stephen Mell | TBD |
| Feb. 6 | TBD | TBD |
| Feb. 13 | TBD | TBD |
| Feb. 20 | TBD | TBD |
| Feb. 27 | TBD | TBD |
| March 6 | TBD | TBD |
| March 13 | TBD | TBD |
Jan. 9
Speaker: Shanto Rahman
Title: Reliable Software Testing Using LLMs and Program Analysis
Abstract: Software testing is essential for software reliability, yet modern test suites frequently suffer from broken tests caused by nondeterminism or code evolution. These failures mislead developers, reduce trust in testing, and allow real bugs to escape into production. In this talk, I present my work on making software testing reliable using program analysis and large language models. I introduce techniques to automatically identify and explain the root causes of flaky tests using context-aware attribution, enabling developers to understand why nondeterministic failures occur. I then present automated repair techniques, including FlakeSync for repairing asynchronous flaky tests and UTFix for repairing unit tests broken by code changes. These techniques achieve high repair success with low overhead and have been evaluated on large-scale, real-world datasets. I conclude by outlining a vision for nondeterminism-aware reliability in emerging domains such as cloud services, ML systems, and quantum software.
Bio: Shanto Rahman is a Ph.D. candidate in Electrical and Computer Engineering at The University of Texas at Austin, advised by Professor August Shi. Her research spans software engineering and AI, focusing on reliable software testing under nondeterminism and code evolution. She develops program-analysis- and LLM-based techniques to detect, explain, reproduce, and repair broken tests, with publications in ICSE, OOPSLA, ASE, and ICST. She has gained industry experience through internships at Google and Amazon Web Services (AWS). Her recognitions include MIT EECS Rising Stars, UC Berkeley NextProf Nexus, and multiple UT Austin fellowships and awards.
Jan. 16
Speaker: Lef Ioannidis
Title: TBD
Abstract: TBD
Jan. 23
Speaker: George Pirlea
Title: TBD
Abstract: TBD
Jan. 30
Speaker: Stephen Mell
Title: TBD
Abstract: TBD
Feb. 6
Speaker: TBD
Title: TBD
Abstract: TBD
Feb. 13
Speaker: TBD
Title: TBD
Abstract: TBD
Feb. 20
Speaker: TBD
Title: TBD
Abstract: TBD
Feb. 27
Speaker: TBD
Title: TBD
Abstract: TBD
March 6
Speaker: TBD
Title: TBD
Abstract: TBD
March 13
Speaker: TBD
Title: TBD
Abstract: TBD