There has been much recent talk about a near future in which code writes itself with the help of trained neural networks, but outside of some limited use cases, that reality is still quite some time away, at least for ordinary development efforts.
Although auto-code generation is not a new concept, it has been getting fresh attention due to better capabilities and ease of use in neural network frameworks. But just as in other areas where AI is touted as the near-term automation savior, the hype does not match the technological complexity needed to make it a reality. Well, at least not yet.
Just in the last few weeks, Google, Microsoft, and IBM have announced new ways of boosting developer productivity with deep learning frameworks that fill themselves in, at least in part. The headlines exclaim that code is writing itself and that programmers will no longer be necessary. In reality, however, what all of these auto-code generation efforts have in common, aside from the developer productivity angle, is that the use cases are still limited. The amount of available code may not be ample enough, the neural network may still require a great deal of expertise to construct new layers, or the data to inform auto-network creation is scattered across too many formats.
Microsoft Research raised red flags about developer automation recently with its announcement of DeepCoder, a neural network that learns to predict properties of a program from its inputs and outputs, trained on a broad range of sources of code. This reportedly led to faster code generation and success on more difficult programming competition problems. Another effort in the same vein, from IBM Research, scanned thousands of peer-reviewed papers for code, framework, and library details to help developers bootstrap neural network model generation.
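The core idea behind DeepCoder, program synthesis from input/output examples guided by learned predictions, can be illustrated with a deliberately simplified sketch. Everything below (the mini-DSL, the primitive names, the brute-force search) is a hypothetical illustration, not Microsoft's actual system; DeepCoder's contribution is a neural network that predicts which primitives are likely to appear, sharply pruning a search like this one.

```python
# Illustrative sketch of inductive program synthesis over a tiny,
# hypothetical DSL of list-to-list primitives. A system like DeepCoder
# would use a neural network to rank these primitives before searching.
from itertools import product

PRIMITIVES = {
    "reverse": lambda xs: list(reversed(xs)),
    "sort": sorted,
    "double": lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_len=3):
    """Return the first primitive sequence consistent with all I/O examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(xs, names=names):
                for n in names:
                    xs = PRIMITIVES[n](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return names
    return None

# Find any program in the DSL mapping [3, -1, 2] to [4, 6]
print(synthesize([([3, -1, 2], [4, 6])]))
```

Even in this toy setting the search space grows exponentially with program length, which is exactly why a learned prior over primitives pays off.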
It is difficult for DeepCoder to generate vast quantities of code at a time, and the IBM Research effort faces limitations in its input data, which is scattered among papers and often fragmented. Google's AutoML, for its part, works only with defined frameworks, since standardization is key to generalizing anything, particularly a custom neural network.
Rania Khalaf, director of AI engineering at IBM Research, says a neural network blazing through code repositories and gathering enough on the architecture side to build its own models is possible, but there is far more to it than it may appear. For the IBM effort, at least, this is because there are no standard frameworks or libraries that the developer community uses in enough mass to have auto-code generating power.
Khalaf says we may be closer to getting AI to scan through a code repository like GitHub and make something meaningful out of it, but “close” is relative. “It is easy to think this can be easily automated but starting from a developer and data scientist standpoint, to get that fully automated, there has to be a significant ‘human in the loop’ factor.” Even getting to the point of broadly generalizing across different platforms, programming models, and libraries is far more difficult than it seems, as the deep learning paper efforts from IBM Research highlight.
Senthil Mani, one of the researchers on the DLPaper2Code effort at IBM Research, tells The Next Platform that interoperability between deep learning approaches represents one of the greatest problems. His team is working on a “grammar” to solve this problem: an abstract representation that can be platform agnostic. “If a standard can be picked up in the community, it would be possible to generate more code automatically,” he adds.
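In spirit, a platform-agnostic grammar of the kind Mani describes might look like the following sketch: the network is described as plain data, and small backend functions render it for each framework. The spec format, layer names, and the `to_keras`/`to_pytorch` renderers are all hypothetical illustrations, not IBM's actual representation.

```python
# Hypothetical platform-agnostic model spec: the architecture is plain data,
# and each backend renders it as framework-specific source code.
MODEL = [
    {"layer": "dense", "units": 128, "activation": "relu"},
    {"layer": "dense", "units": 10, "activation": "softmax"},
]

def to_keras(spec):
    lines = ["model = keras.Sequential(["]
    for l in spec:
        lines.append(f'    keras.layers.Dense({l["units"]}, activation="{l["activation"]}"),')
    lines.append("])")
    return "\n".join(lines)

def to_pytorch(spec, in_features=784):
    lines = ["model = torch.nn.Sequential("]
    for l in spec:
        lines.append(f"    torch.nn.Linear({in_features}, {l['units']}),")
        if l["activation"] == "relu":
            lines.append("    torch.nn.ReLU(),")
        in_features = l["units"]  # next layer consumes this layer's output
    lines.append(")")
    return "\n".join(lines)

print(to_keras(MODEL))
print(to_pytorch(MODEL))
```

The point of such an intermediate form is that adding one new backend makes every existing spec portable to that framework, which is the standardization payoff Mani alludes to.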
“The larger goal is to democratize deep learning for the larger developer population. The set of developers trained well in languages like Java, for instance, have to now jump into building AI applications, and there is a large skill gap in picking up these libraries and frameworks,” Mani explains. “We have started building a tool where people can come in and start designing a deep learning model based on a composite of some networks, layers, and hyperparameters. The implementation and coding patterns will be different, as well as configurations, but the goal is to take an idea and ‘drag and drop’ layers to start constructing a model.”
Aside from that standardization, Mani says another area where the team has made progress toward greater auto-code generation is in predicting layers for neural networks, something that will add to overall productivity but still relies on the human-in-the-loop factor described above.
Still, Mani agrees, we are quite a long way from the future of automated programmers that some of the recent mainstream auto-code news has projected. “What we are looking at now is more along the lines of reducing time and effort for developers to bootstrap their models. Even that problem alone is hard enough—and we have only scratched the surface.”