2010, Innovations in Systems and Software Engineering
This paper describes a tool called SCRUB (Source Code Review User Browser) that was developed to support a more effective and tool-based code review process. The tool was designed to support a large team-based software development effort of mission critical software at JPL, but can also be used for individual software development on small projects. The tool combines classic peer code review with machine-generated analyses from a customizable range of source code analyzers. All reports, whether generated by humans or by background tools, are accessed through a single uniform interface provided by SCRUB.
Multidiszciplináris Tudományok, 2020
Code review is the most effective quality assurance strategy in software development: reviewers aim to identify defects and improve the quality of the source code of both commercial and open-source software. Ultimately, the main purpose of code review activities is to produce better software products. Review comments are the building blocks of code review. There are many approaches to conducting reviews and analyzing source code, such as pair programming, informal inspections, and formal inspections. Reviewers are responsible for providing comments and suggestions that improve the quality of the proposed source code modifications. This work aims to succinctly describe the code review process, giving a framework of the tools and factors influencing code review to help reviewers and authors through the code review stages and in choosing a suitable code review tool.
2014 IEEE International Conference on Software Maintenance and Evolution, 2014
ReDA (http://reda.naist.jp/) is a web-based visualization tool for analyzing Modern Code Review (MCR) datasets for large Open Source Software (OSS) projects. MCR is a commonly practiced, lightweight inspection of source code using a support tool such as the Gerrit system. Recently, mining the code review history of such systems has received attention as a potentially effective method of ensuring software quality. However, due to the increasing size and complexity of the software being developed, these datasets are becoming unmanageable. ReDA aims to assist researchers mining code review data by enabling a better understanding of dataset context and by identifying abnormalities. Through real-time data interaction, users can quickly gain insight into the data and home in on interesting areas to investigate. A video highlighting the main features can be found at:
Advances in Intelligent Systems and Computing, 2021
The growing complexity of software and its associated code makes it difficult for software developers to produce high-quality code in a timely fashion. This process of assessing code quality can, however, be automated with the help of software code metrics, which are quantitative measures of code properties. Software metrics comprise several attributes that describe the source code, including lines of code, program length, required effort, difficulty, cyclomatic complexity, volume, vocabulary, intelligence count, and so on. With the help of these features, code can be classified as well-written or badly-written. This study evaluates the performance of the main classification algorithms: Naïve Bayes, K-nearest neighbors (KNN), logistic regression, stochastic gradient descent (SGD) classifier, support vector machine (SVM), and decision tree (D-Tree), on thirteen NASA Metrics Data Program (MDP) datasets. The work also focuses on understanding the mathematics and workings of each classifier and the quality of each dataset. The measures for comparing the classifiers include the confusion matrix and its derived measures, namely F-measure, recall, precision, accuracy, and Matthews correlation coefficient (MCC). The best model is chosen along with the appropriate dataset. To allow developers to use the trained model, we created Code Buddy, a SharePoint web portal that lets developers either have the code quality assessed by sending a review request to a colleague, or have it assessed automatically by the trained model, which predicts whether the code is well written or badly written. Moreover, if developers are not satisfied with the results, they can send a review request to a fellow colleague, who can review the code and provide review comments on it.
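The metric-based classification this abstract describes can be sketched with a toy k-nearest-neighbours classifier over code-metric vectors. The metric values, labels, and the choice of metrics below are illustrative assumptions, not data from the NASA MDP datasets:

```python
import math

# Hypothetical training set: each sample is a vector of code metrics
# (lines of code, cyclomatic complexity, Halstead volume), labelled
# 1 = badly written, 0 = well written. Values are made up for illustration.
TRAIN = [
    ((120, 4, 800), 0),
    ((90, 3, 650), 0),
    ((400, 25, 5200), 1),
    ((350, 18, 4100), 1),
]

def euclidean(a, b):
    """Euclidean distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(sample, k=3):
    """Classify a metric vector by majority vote of its k nearest neighbours."""
    neighbours = sorted(TRAIN, key=lambda item: euclidean(item[0], sample))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes > k / 2 else 0

# A large, complex module lands near the badly-written examples:
print(knn_predict((380, 20, 4800)))  # → 1
```

In practice one would standardize the metrics first (Halstead volume dominates the distance here) and evaluate with the confusion-matrix-derived measures the study lists.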
2013
This paper investigates the feasibility of a plugin that facilitates the identification of code quality. The proposed solution includes a distributed plugin that performs static code analysis as well as machine-learning-based classification, and is able to distinguish well-written from badly-written code.
2008 International Conference on Software Testing, Verification, and Validation, 2008
There is empirical evidence that the code quality of software has an important impact on external, i.e., user-perceptible, software quality. A large number of source code metrics currently exist that seem to ease the evaluation of code quality. Nevertheless, studies show that their theoretical foundations are weak, and promising approaches for the automatic assessment of code quality are to be treated with great caution. We therefore came to the conclusion that the metric values and other findings provided by various static code analysis tools can only be used in the context of an expert-centred assessment of internal software quality. To carry out code quality assessments in a timely and efficient manner, additional tool support is indispensable. For that purpose we developed the Eclipse-based tool Software Product Quality Reporter (SPQR), which supports expert-centred evaluation of source code, from the formulation of project-specific quality models up to the generation of preliminary code quality reports. The application of SPQR has already proved its usefulness in various code assessment projects around the world.
2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), 2015
Software code review is an inspection of a code change by an independent third-party developer in order to identify and fix defects before integration. Performing code review effectively can improve overall software quality. In recent years, Modern Code Review (MCR), a lightweight and tool-based code inspection, has been widely adopted in both proprietary and open-source software systems. Finding appropriate code-reviewers in MCR is a necessary step in reviewing a code change. However, little is known about the difficulty of finding code-reviewers in distributed software development and its impact on reviewing time. In this paper, we investigate the impact that reviews with the code-reviewer assignment problem have on reviewing time. We find that reviews with the code-reviewer assignment problem take 12 days longer to approve a code change. To help developers find appropriate code-reviewers, we propose REVFINDER, a file-location-based code-reviewer recommendation approach. We leverage the similarity of previously reviewed file paths to recommend an appropriate code-reviewer. The intuition is that files located in similar file paths would be managed and reviewed by similarly experienced code-reviewers. Through an empirical evaluation on a case study of 42,045 reviews from the Android Open Source Project (AOSP), OpenStack, Qt, and LibreOffice projects, we find that REVFINDER accurately recommended 79% of reviews with a top-10 recommendation. REVFINDER also correctly recommended code-reviewers with a median rank of 4. The overall ranking of REVFINDER is 3 times better than that of a baseline approach. We believe that REVFINDER could be applied to MCR to help developers find appropriate code-reviewers and speed up the overall code review process.
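The intuition behind REVFINDER, that files under similar paths tend to be reviewed by the same people, can be sketched as a path-component similarity score summed over past reviews. The similarity function and the shape of the review history below are simplified assumptions, not REVFINDER's actual string-comparison techniques:

```python
from collections import defaultdict

def path_similarity(p1, p2):
    """Fraction of leading path components two file paths share
    (a simplified stand-in for REVFINDER's string comparisons)."""
    c1, c2 = p1.split("/"), p2.split("/")
    common = 0
    for a, b in zip(c1, c2):
        if a != b:
            break
        common += 1
    return common / max(len(c1), len(c2))

def recommend_reviewers(new_files, history):
    """Rank past reviewers by how similar their previously reviewed
    file paths are to the files in the new change. `history` maps
    reviewer name -> list of reviewed file paths (hypothetical shape)."""
    scores = defaultdict(float)
    for reviewer, past_files in history.items():
        for nf in new_files:
            for pf in past_files:
                scores[reviewer] += path_similarity(nf, pf)
    return sorted(scores, key=scores.get, reverse=True)

history = {
    "alice": ["src/ui/button.c", "src/ui/menu.c"],
    "bob": ["src/net/tcp.c"],
}
print(recommend_reviewers(["src/ui/dialog.c"], history))  # → ['alice', 'bob']
```

A reviewer who has worked under `src/ui/` outranks one who has only touched `src/net/`, matching the paper's file-location intuition.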
Code reviews play an important and successful role in modern software development, but usually they happen only once, before new code is merged into the main branch. We present a concept that helps developers continuously give feedback on their source code directly in the integrated development environment (IDE) by using the metaphor of social networks. This reduces context switches for developers, improves the software development process, and makes it possible to give feedback to the developers of external libraries and frameworks.
Proceedings of the 40th International Conference on Software Engineering Software Engineering in Practice - ICSE-SEIP '18
Employing lightweight, tool-based code review of code changes (aka modern code review) has become the norm for a wide variety of open-source and industrial systems. In this paper, we make an exploratory investigation of modern code review at Google. Google introduced code review early on and evolved it over the years; our study sheds light on why Google introduced this practice and analyzes its current status, after the process has been refined through decades of code changes and millions of code reviews. By means of 12 interviews, a survey with 44 respondents, and the analysis of review logs for 9 million reviewed changes, we investigate motivations behind code review at Google, current practices, and developers' satisfaction and challenges.
Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Code Review Automation can reduce human effort during code review by automatically providing valuable information to reviewers. Nevertheless, it is a challenge to automate the process in large-scale companies such as Samsung Electronics, due to their complexity: varied development environments, frequent review requests, a huge volume of software, and diverse processes among teams. In this work, we show how we automated the code review process for those intricate environments and share some lessons learned during two years of operation. Our unified code review automation system, Code Review Bot, is designed to process review requests holistically regardless of such environments, and checks various quality-assurance items such as potential defects in the code, coding style, test coverage, and open-source license violations. Some key findings include: 1) about 60% of issues found by Code Review Bot were reviewed and fixed in advance of product releases, 2) more than 70% of developers gave positive feedback about the system, 3) developers responded rapidly and actively to reviews, and 4) the automation did not substantially affect the amount or frequency of human code reviews compared to the internal policy of encouraging code review activities. Our findings provide practical evidence that automating code review helps assure software quality.
ArXiv, 2022
Code review is an essential part of the software development lifecycle, since it aims at guaranteeing code quality. Modern code review activities require developers to view, understand, and even run the programs to assess logic, functionality, latency, style, and other factors. As a result, developers have to spend far too much time reviewing the code of their peers. Accordingly, there is significant demand for automating the code review process. In this research, we focus on utilizing pre-training techniques for tasks in the code review scenario. We collect a large-scale dataset of real-world code changes and code reviews from open-source projects in nine of the most popular programming languages. To better understand code diffs and reviews, we propose CodeReviewer, a pre-trained model that utilizes four pre-training tasks tailored specifically for the code review scenario. To evaluate our model, we focus on three key tasks related to code review activities, includ…