GitHub Copilot has been the subject of some controversy since Microsoft announced it in the summer of 2021. Most recently, Microsoft has been sued by programmer and lawyer Matthew Butterick, who alleges that GitHub's Copilot violates the terms of open-source licenses and infringes the rights of programmers. Despite the lawsuit, my sense is that Copilot is likely here to stay in some form or another, but it got me thinking: if developers are going to use an AI-assisted code generation tool, it would be more productive to think about how to improve it than to fight over its right to exist.
Behind the Copilot controversy
Copilot is a predictive code generator that relies on OpenAI Codex to suggest code — and entire functions — as developers write their own. It works much like the predictive text in Google Docs or Google Search. As you begin to compose a line of code, Copilot suggests code to complete the line or fragment based on patterns learned from a vast repository of similar code and functions. You can accept the suggestion or override it with your own, potentially saving time and effort.
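To make the workflow concrete, here is a hypothetical illustration (not actual Copilot output) of the kind of completion such a tool might offer. The developer types only the comment and the function signature; the body is the sort of suggestion the tool would propose for acceptance or rejection:

```python
# Developer types this comment and signature...
# Check whether a string reads the same forward and backward,
# ignoring case and punctuation.
def is_palindrome(s: str) -> bool:
    # ...and the tool might suggest a completion like this:
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]
```

The developer remains in the loop: the suggestion appears inline and is accepted with a keystroke or simply typed over, which is what distinguishes this style of assistance from fully automated code generation.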
The controversy comes from Copilot deriving its suggestions from a vast training set of open-source code that it has processed. The idea of monetizing the work of open-source software contributors without attribution has irked many in the GitHub community. It has even resulted in a call for the open-source community to abandon GitHub.
There are valid arguments on both sides of this controversy. The developers who freely shared their code likely did not intend for it to end up packaged and monetized. On the other hand, it could be argued that what Microsoft has monetized is not the code but the AI technology for applying that code in a suitable context. Anyone with a free GitHub account can access the code, cop …