LLM & Generative AI for Software Engineering

Overview

Large Language Models (LLMs) and generative AI are transforming many aspects of software engineering, including code generation, requirements analysis, and test generation. Our lab integrates LLMs across multiple research areas — self-adaptive systems, discrete controller synthesis, security, and evolutionary computation — pursuing synergies between AI and software engineering.

Generative AI for Self-Adaptive Systems

  • Flagship survey: Generative AI for Self-Adaptive Systems (Li et al., ACM TAAS 2024) A systematic survey of the state of the art in integrating generative AI into self-adaptive systems, providing a comprehensive research roadmap. The paper examines how generative AI can be applied at each phase of the MAPE-K loop.

  • Exploring the potential of LLMs in self-adaptive systems (Li et al., SEAMS 2024) An empirical investigation of how LLMs can contribute to the Monitor, Analyze, Plan, and Execute phases of the MAPE-K loop in self-adaptive systems.
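To make this line of work concrete, here is a minimal, hypothetical sketch of a MAPE-K loop whose Analyze and Plan phases delegate to an LLM. The `ask` callable, class name, and prompts are illustrative assumptions, not the papers' actual implementation.

```python
from typing import Callable

class MapeKLoop:
    """Minimal sketch: the Analyze and Plan phases delegate to an LLM
    through an injected `ask` callable mapping a prompt to a reply."""

    def __init__(self, goals: str, actions: list[str],
                 ask: Callable[[str], str]):
        # Knowledge (K): adaptation goals, the action repertoire,
        # and the handle to the language model.
        self.goals = goals
        self.actions = actions
        self.ask = ask

    def monitor(self, read_sensors: Callable[[], dict]) -> dict:
        # Monitor: collect raw metrics from the managed system.
        return read_sensors()

    def analyze(self, metrics: dict) -> str:
        # Analyze: let the LLM judge the metrics against the goals.
        return self.ask(
            f"Goals: {self.goals}\nMetrics: {metrics}\n"
            "Is adaptation needed? Answer YES or NO with a reason.")

    def plan(self, analysis: str) -> list[str]:
        # Plan: ask for an action sequence, then keep only actions the
        # loop actually supports (a simple guard against hallucination).
        reply = self.ask(
            f"Analysis: {analysis}\nAllowed actions: {self.actions}\n"
            "Reply with a comma-separated plan of allowed actions only.")
        return [a.strip() for a in reply.split(",") if a.strip() in self.actions]
```

Filtering the planned actions against the known repertoire is one pragmatic safeguard; a stronger alternative is formal verification of the generated plan, as in the LTL work below.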

LLM-Guided Discrete Controller Synthesis

  • Automated adaptation rule optimization (Ishimizu et al., ACSOS 2024; IPSJ 2026) LLMs design and optimize search policies for discrete controller synthesis, improving synthesis efficiency.

  • LTL safety verification with LLM-generated plans (Koyama et al., KBSE 2026) Integrates LLM-generated action plans with formal verification in Linear Temporal Logic (LTL) to ensure safety guarantees.
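A toy rendering of the plan-then-verify idea above: an LLM proposes plans, and a checker rejects any plan that violates a safety invariant of the form G(not unsafe) before execution. A real pipeline would use a full LTL model checker; the function names and the bounded-retry loop here are hypothetical.

```python
from typing import Callable, Optional

def violates_global_safety(plan: list[str], unsafe: set[str]) -> bool:
    # The invariant G(not unsafe) fails iff some step is an unsafe action.
    return any(step in unsafe for step in plan)

def safe_plan(propose: Callable[[], list[str]], unsafe: set[str],
              max_tries: int = 3) -> Optional[list[str]]:
    # Re-query the planner (the LLM) until a plan passes verification,
    # giving up after a bounded number of attempts.
    for _ in range(max_tries):
        plan = propose()
        if not violates_global_safety(plan, unsafe):
            return plan
    return None
```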

Integration with Evolutionary Computation and AutoML

  • LLM-enhanced evolutionary computation (Cai et al., GECCO 2024) Integrates LLMs as evolutionary operators (mutation and crossover) to improve exploration performance on complex optimization problems.

  • Synergy between LLMs and AutoML (Xu et al., TMLR 2024) Leverages LLMs within automated machine learning (AutoML) pipelines to streamline hyperparameter search and model selection.
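To illustrate the "LLM as evolutionary operator" idea, the sketch below is a minimal elitist loop in which an injected `llm_mutate` callable stands in for prompting a model to produce a variant of a parent solution; the names and loop shape are illustrative assumptions, not the GECCO 2024 algorithm itself.

```python
from typing import Callable

def evolve(init_pop: list[str], fitness: Callable[[str], float],
           llm_mutate: Callable[[str], str], generations: int = 10) -> str:
    """Elitist loop: each generation, every parent yields one
    LLM-proposed variant, and the best individuals survive."""
    pop = list(init_pop)
    for _ in range(generations):
        children = [llm_mutate(p) for p in pop]     # LLM as mutation operator
        pop = sorted(pop + children, key=fitness,   # parents + children
                     reverse=True)[:len(init_pop)]  # elitist selection
    return max(pop, key=fitness)
```

In the published setting, `llm_mutate` (and an analogous crossover operator) would prompt the model with the parent solutions plus task context; in this sketch any string-to-string function works.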

LLMs for Security and Multi-Agent Applications

  • Vulnerability detection (Mao et al., QRS 2024) An LLM-based vulnerability detection approach in which multiple role-playing agents discuss a code fragment and reach a consensus verdict.

  • Social media language evolution simulation (Cai et al., CEC 2024) Uses LLM-based multi-agent systems to simulate language change dynamics on social media platforms.
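As a simplified illustration of multi-role consensus for vulnerability detection (the QRS 2024 approach), the sketch below has several role-conditioned "reviewers" each vote on a snippet and takes the majority verdict. The actual method uses multi-turn discussion rather than a one-shot vote, and the roles, prompts, and `ask` callable here are assumptions.

```python
from typing import Callable

ROLES = ["security tester", "developer", "code reviewer"]  # illustrative roles

def detect_vulnerability(code: str, ask: Callable[[str], str]) -> bool:
    """Each role-conditioned LLM call casts one vote; a majority
    vote stands in for the paper's discussion-based consensus."""
    votes = []
    for role in ROLES:
        prompt = (f"You are a {role}. Does the following code contain a "
                  f"vulnerability? Answer VULNERABLE or SAFE.\n{code}")
        votes.append(ask(prompt).strip().upper().startswith("VULNERABLE"))
    return sum(votes) > len(ROLES) / 2
```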

Accessibility Applications

  • LLM+AR for color vision deficiency support (Morita et al., GCCE 2024) Combines LLMs with augmented reality (AR) to help users with color vision deficiency more accurately perceive color information in everyday environments.

Selected Publications

  • Jialong Li et al. “Generative AI for Self-Adaptive Systems: State of the Art and Research Roadmap.” ACM Transactions on Autonomous and Adaptive Systems (TAAS), 2024.
  • Jialong Li et al. “Exploring the Potential of LLMs in Self-Adaptive Systems.” SEAMS 2024.
  • Jinyu Cai et al. “Exploring the Improvement of Evolutionary Computation via Large Language Models.” GECCO 2024.
  • Jinglue Xu et al. “Large Language Models Synergize with Automated Machine Learning.” Transactions on Machine Learning Research (TMLR), 2024.
  • Yusei Ishimizu et al. “Automatic Adaptation Rule Optimization via Large Language Models.” ACSOS 2024.
  • Ryoya Koyama et al. “LTL Safety Verification of LLM-Generated Plans.” KBSE 2026.
  • Zhenyu Mao et al. “Multi-role Consensus through LLMs Discussions for Vulnerability Detection.” QRS 2024.
  • Jinyu Cai et al. “Language Evolution for Evading Social Media Regulation via LLM-based Multi-agent Simulation.” CEC 2024.
  • Shogo Morita et al. “Towards Context-aware Support for Color Vision Deficiency: An Approach Integrating LLM and AR.” GCCE 2024.