David Lo, Singapore Management University, Singapore
Automated Bug Management: Reflections and the Road Ahead
Abstract: For many projects, bug reports, predominantly written in natural language, are submitted daily to issue tracking systems. The volume of such reports is often too large for busy software engineers to handle manually and resolve in a timely fashion. Moreover, resolving each report typically requires many steps, e.g., detecting invalid reports, assigning reports to engineers with the right expertise, locating the buggy files requiring changes, and fixing those files. An incorrect decision at any of these steps can slow down the resolution of the bug report. To help reduce engineers' workload and improve the reliability of systems, many automated solutions have been proposed over the last decade for various steps in the bug management and resolution process. This talk will first reflect on the hundreds of studies conducted in this popular area of Natural-Language Based Software Engineering (NLBSE), highlighting success cases and the directions explored so far. It will then highlight interesting future work on the road ahead, describing important unsolved problems and untapped opportunities.
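To make the steps mentioned in the abstract concrete, the following is a minimal, hypothetical sketch of an automated triage pipeline covering validity filtering, assignee recommendation, and bug localization. All names, keywords, and heuristics are illustrative placeholders, not any specific technique surveyed in the talk.

# Hypothetical sketch of three bug-management steps named in the abstract:
# validity filtering, assignee recommendation, and bug localization.
from dataclasses import dataclass

@dataclass
class BugReport:
    title: str
    body: str

def is_valid(report: BugReport) -> bool:
    """Step 1: filter out reports lacking reproducible detail (toy heuristic)."""
    text = (report.title + " " + report.body).lower()
    return any(kw in text for kw in ("steps to reproduce", "stack trace", "expected"))

def recommend_assignee(report: BugReport, expertise: dict[str, set[str]]) -> str | None:
    """Step 2: route the report to the engineer whose expertise keywords overlap most."""
    words = set(report.body.lower().split())
    scored = {eng: len(words & kws) for eng, kws in expertise.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] > 0 else None

def localize(report: BugReport, files: dict[str, str]) -> list[str]:
    """Step 3: rank source files by naive token overlap with the report (IR-style baseline)."""
    words = set(report.body.lower().split())
    overlap = {path: len(words & set(src.lower().split())) for path, src in files.items()}
    return sorted(overlap, key=overlap.get, reverse=True)

if __name__ == "__main__":
    report = BugReport("Crash on save", "stack trace shows null pointer in exporter when saving pdf")
    print(is_valid(report))                                                        # True
    print(recommend_assignee(report, {"alice": {"exporter", "pdf"}, "bob": {"network"}}))  # alice
    print(localize(report, {"exporter.py": "def save pdf exporter", "net.py": "socket http"}))

In practice, each of these steps is the subject of its own research thread (invalid-report detection, bug triage, and information-retrieval or learning-based bug localization), and errors made early in the pipeline propagate to later steps.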
Biography: David Lo is a Professor of Computer Science at the School of Computing and Information Systems, Singapore Management University. He leads the SOftware Analytics Research (SOAR) group. His research interests lie at the intersection of software engineering, cybersecurity, and data science, encompassing socio-technical aspects and the analysis of different kinds of software artifacts, with the goal of improving software quality and security, and developer productivity. He has won more than 15 international research and service awards, including 2 Most Influential Paper (or Test-of-Time) Awards and 6 ACM SIGSOFT Distinguished Paper Awards. He has served on more than 30 organizing committees of conferences and currently serves on the SIGSOFT Executive Committee, the Editorial Boards of TSE, TRel, and EMSE, and as a PC Co-Chair of ESEC/FSE 2024 and ICSE 2025. He is an IEEE Fellow, an NRF Investigator, a Fellow of Automated Software Engineering, and an ACM Distinguished Member.
Albert Ziegler, GitHub, USA
Trends and Opportunities in the Application of Large Language Models: The Quest for Maximum Effect
Abstract: As large language models become more and more sophisticated, the machine learning problem "How to train a great new model so it best solves my task" increasingly pivots to "How to run a great existing model so it best solves my task".
This is easier said than done and requires reconciliation of four goals:
These goals are almost always in conflict. For example, an established strategy for the first of these goals is few-shot prompting, but, particularly for source code, the resulting prompt will often differ markedly in style from the model's training data, can easily bias the model, and consumes prompt space that could otherwise hold background information.
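As a rough illustration of this trade-off, the hypothetical sketch below greedily packs few-shot examples and background context into a fixed prompt budget; the whitespace token counting, budget, and packing order are simplifying assumptions, not the actual Copilot prompt-construction pipeline.

# Hypothetical sketch: few-shot examples and background context compete
# for the same fixed prompt budget.
def n_tokens(text: str) -> int:
    # Crude whitespace proxy for a real tokenizer.
    return len(text.split())

def build_prompt(task: str, few_shot_examples: list[str], background: list[str],
                 budget: int = 2048) -> str:
    """Greedily pack the task, then few-shot examples, then background,
    skipping whatever no longer fits within the token budget."""
    parts = [task]
    used = n_tokens(task)
    for chunk in few_shot_examples + background:
        cost = n_tokens(chunk)
        if used + cost > budget:
            continue  # skip chunks that would overflow the context window
        parts.append(chunk)
        used += cost
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Complete the following Python function.",
    few_shot_examples=["# Example: def add(a, b): return a + b"],
    background=["# Open file: utils.py ..."],
    budget=64,
)
print(prompt)

Every token spent on an example is a token not spent on, say, the contents of neighboring files, which is one concrete way the four goals pull against each other.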
I will discuss strategies for addressing each of these goals in the code domain, as well as methods for balancing them against one another, focusing in particular on the example of GitHub Copilot and related AI for software development projects.
Biography: Albert Ziegler is a principal machine learning engineer with a background in mathematics and a home at GitHub Next, GitHub's innovation and future group. His main interest lies in combining deductive and intuitive reasoning to improve the software development experience. He has previously worked on developer productivity and ML-guided CodeQL, and he was part of the trio that conceived and then implemented the GitHub Copilot project. His most recent projects include Copilot Radar and AI for Pull Requests.