My Research Vision

In the rapidly advancing field of software engineering, the integration of artificial intelligence (AI) has brought both significant advances and notable challenges. AI-driven software development applications, such as code generation tools and intelligent coding assistants, have the potential to transform how developers write, debug, and maintain code. However, the reliability and security of these AI-powered tools remain critical concerns. Ensuring that these applications produce accurate, maintainable, and secure code is essential for their widespread adoption and for developers’ trust in them.

The primary objective of this research is to enhance the reliability, security, and overall effectiveness of AI-driven software development applications. By addressing the following research questions, we aim to provide actionable insights and best practices for developers and researchers in the field:

  1. How reliable are AI-driven software development applications?
  2. How can we improve the reliability of AI-driven software development applications?
  3. What best practices should software developers follow to ensure reliable AI-driven software development?

Software Security

Modern software systems are increasingly complex and interconnected, making them vulnerable to various security threats. Ensuring the security and reliability of software is crucial to protect sensitive data, maintain user trust, and prevent costly breaches and attacks. Our research focuses on leveraging program analysis and learning-based approaches to identify and mitigate security risks, enhance software reliability, and protect against emerging threats. We develop advanced techniques for analyzing and securing software systems, including deep learning-based malware detection and program analysis for identifying security vulnerabilities. Our work highlights the importance of rigorous evaluation approaches to reduce false negatives and biases in security assessments. If you are interested in this theme, see more about my research here.
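
To make the flavor of this work concrete, the sketch below is a deliberately minimal example of AST-based program analysis: it flags calls to a small, assumed list of risky sinks (such as eval or os.system) whose arguments are not constant literals. It illustrates the general technique only, not the analyses used in our research.

```python
# Minimal illustration of AST-based program analysis that flags risky call
# sites (e.g., eval or os.system with non-literal arguments). The sink list
# and heuristics are simplified assumptions made for this sketch.
import ast

RISKY_CALLS = {"eval", "exec", "os.system", "subprocess.call"}

def call_name(node: ast.Call) -> str:
    """Return a dotted name for the called function, if recoverable."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def find_risky_calls(source: str):
    """Yield (line, name) for risky calls whose arguments are not literals."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and call_name(node) in RISKY_CALLS:
            if not all(isinstance(arg, ast.Constant) for arg in node.args):
                yield node.lineno, call_name(node)

if __name__ == "__main__":
    sample = "import os\ncmd = input()\nos.system(cmd)\n"
    for line, name in find_risky_calls(sample):
        print(f"line {line}: potentially unsafe call to {name}")
```

Real analyses track data flow across functions and files; this single-file, literal-argument heuristic is only meant to show where such checks hook into the syntax tree.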

Large Language Models for Software Engineering

Large language models (LLMs) have significantly impacted software engineering by assisting in tasks such as code generation, code completion, and bug fixing. However, their reliability and explainability remain concerns, as they may generate incorrect or suboptimal code. Our research aims to enhance the reliability, robustness, and trustworthiness of LLMs in practical applications. We conduct systematic literature reviews and empirical studies to understand the current state of LLMs in software engineering, analyze their performance on benchmark datasets, and explore strategies to improve code quality and explainability. For instance, our studies reveal that data duplications and lack of temporal splits can lead to unrealistic performance evaluations, and that LLM-generated code often suffers from inaccuracies and maintainability issues. If you are interested in this theme, see more about my research here.
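
As a concrete illustration of the evaluation pitfalls mentioned above, the sketch below shows two safeguards in miniature: dropping exact duplicates (after whitespace normalization) and splitting train/test data by timestamp rather than at random. The field names (code, created_at) and the hashing scheme are assumptions made for this example, not the pipeline from our studies.

```python
# Hedged sketch of two evaluation safeguards: deduplicating samples and
# using a temporal split instead of a random split.
import hashlib
from datetime import datetime

def dedupe(samples):
    """Drop samples whose whitespace-normalized code is an exact duplicate."""
    seen, unique = set(), []
    for s in samples:
        digest = hashlib.sha256(" ".join(s["code"].split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(s)
    return unique

def temporal_split(samples, cutoff: datetime):
    """Train on samples created before the cutoff, test on those after,
    so test data cannot leak into (or predate) the training data."""
    train = [s for s in samples if s["created_at"] < cutoff]
    test = [s for s in samples if s["created_at"] >= cutoff]
    return train, test

if __name__ == "__main__":
    data = [
        {"code": "def add(a, b): return a + b", "created_at": datetime(2021, 5, 1)},
        {"code": "def add(a,  b):  return a + b", "created_at": datetime(2022, 1, 1)},
        {"code": "def mul(a, b): return a * b", "created_at": datetime(2023, 3, 1)},
    ]
    train, test = temporal_split(dedupe(data), cutoff=datetime(2022, 6, 1))
    print(len(train), "train /", len(test), "test samples")
```

A temporal split ensures the test set contains only samples created after everything in the training set, which is closer to how a deployed model actually encounters new code.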

Software Development Applications

In today’s rapidly evolving technological landscape, software development applications play a critical role in enhancing developer productivity. With the advent of AI, tools such as Visual Studio Code extensions are becoming more sophisticated, helping developers follow best practices and improving the overall development experience. Our research aims to uncover unknown issues and gain empirical insights into the reliability and effectiveness of these applications. We focus on exploring how AI-generated code can assist developers, identifying potential issues such as solution inaccuracies and maintainability problems, and proposing solutions to improve code quality. Additionally, we investigate the security of development applications and the design of AI coding assistants to ensure they are reliable, efficient, and user-friendly. If you are interested in this theme, see more about my research here.
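
The sketch below illustrates the kind of lightweight maintainability gate a coding assistant or editor extension could apply to AI-generated code before a suggestion is accepted, flagging overly long functions and missing docstrings. The thresholds and rules are assumptions made for this example rather than findings from our studies.

```python
# Illustrative maintainability checks for AI-generated code: overly long
# functions and missing docstrings. Thresholds are assumptions for this
# sketch, not empirical results.
import ast

MAX_FUNCTION_LINES = 30  # illustrative threshold

def maintainability_issues(source: str):
    """Yield human-readable warnings for simple maintainability smells."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                yield f"{node.name}: {length} lines (consider splitting)"
            if ast.get_docstring(node) is None:
                yield f"{node.name}: missing docstring"

if __name__ == "__main__":
    generated = "def helper(x):\n    return x * 2\n"
    for warning in maintainability_issues(generated):
        print(warning)
```

In practice such checks would run alongside the assistant, surfacing warnings in the editor rather than silently accepting generated code.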