Will They Like This? Evaluating Code Contributions with Language Models

Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on GitHub. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this end, we use language models, which have been shown to accurately capture stylistic aspects of code. We find that rejected changesets contain code that is significantly less similar to the project than accepted ones; furthermore, the less similar a changeset is, the more likely it is to be subject to thorough review. Armed with these results, we further investigate whether new contributors learn to conform to the project style, and find that experience is positively correlated with conformance to the project's code style.
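The abstract does not spell out the measurement, but a common way to score a changeset's stylistic similarity to a project with a language model is to train a token-level n-gram model on the project's existing code and compute the changeset's cross-entropy under that model: lower cross-entropy means the change reads more like the project's own code. The sketch below illustrates that idea in Python; the whitespace tokenization, the trigram order, the add-one smoothing, and the vocab_size parameter are simplifying assumptions for illustration, not the paper's exact setup.

```python
from collections import Counter
import math

def train_ngram_model(project_tokens, n=3):
    """Count n-grams and their (n-1)-token contexts over the project's token stream."""
    ngram_counts = Counter()
    context_counts = Counter()
    padded = ["<s>"] * (n - 1) + project_tokens
    for i in range(len(project_tokens)):
        context = tuple(padded[i:i + n - 1])
        token = padded[i + n - 1]
        ngram_counts[(context, token)] += 1
        context_counts[context] += 1
    return ngram_counts, context_counts

def cross_entropy(changeset_tokens, ngram_counts, context_counts, n=3, vocab_size=10000):
    """Average negative log2-probability (bits per token) of the changeset under the
    project model, with add-one smoothing. Lower values mean the change looks more
    like existing project code."""
    padded = ["<s>"] * (n - 1) + changeset_tokens
    total_bits = 0.0
    for i in range(len(changeset_tokens)):
        context = tuple(padded[i:i + n - 1])
        token = padded[i + n - 1]
        prob = (ngram_counts[(context, token)] + 1) / (context_counts[context] + vocab_size)
        total_bits += -math.log2(prob)
    return total_bits / len(changeset_tokens)

# Hypothetical usage: in practice the tokens would come from lexing the project's
# source files and the changeset's added lines.
project_tokens = "def add ( a , b ) : return a + b".split()
changeset_tokens = "def sub ( a , b ) : return a - b".split()
counts, contexts = train_ngram_model(project_tokens)
print(cross_entropy(changeset_tokens, counts, contexts))
```

In this framing, the paper's finding corresponds to accepted changesets having, on average, lower cross-entropy under the project's model than rejected ones.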
Type Conference
Year 2015
Where MSR
Publisher ACM