DeepMind’s AlphaCode reaches human parity in programming competitions

According to a study published in the journal Science, DeepMind’s AlphaCode has achieved human-level performance in programming competitions. AlphaCode was first pre-trained on a large corpus of code from GitHub, from which it learned syntax and coding conventions. It was then fine-tuned on thousands of problems collected from programming competitions, learning to translate a problem description into code. For example, it might be asked to write a program that determines how many binary strings (sequences of 0s and 1s) of length n contain no consecutive 0s; a sketch of such a solution appears below.

Given a new problem, AlphaCode writes candidate solutions in Python or C++ and filters out the bad ones. It can generate up to a million candidates per problem, so aggressive filtering is needed: it keeps only the roughly 1% of programs that pass the test cases given in the problem statement. To narrow the field further, it clusters the remaining programs by the outputs they produce on automatically generated test inputs, then submits one program from each cluster, starting with the largest cluster, until it finds a successful program or reaches the limit of 10 submissions (a sketch of this procedure also appears below). This lets it try a wide range of programming strategies. After training, AlphaCode solved 34% of its assigned problems, and in online programming competitions with at least 5,000 participants it outperformed 45.7% of contestants.
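
The binary-string example above has a classic dynamic-programming solution (the counts follow the Fibonacci sequence: 2, 3, 5, 8, ... for n = 1, 2, 3, 4). A minimal Python sketch of the kind of program a contestant, or AlphaCode, would be expected to produce; this is illustrative, not an actual AlphaCode output:

    def count_no_consecutive_zeros(n: int) -> int:
        # end1 / end0: number of valid strings of the current
        # length that end in '1' / '0'.
        if n == 0:
            return 1  # only the empty string
        end1, end0 = 1, 1  # length 1: "1" and "0"
        for _ in range(n - 1):
            # A '1' may follow anything; a '0' may only follow a '1'.
            end1, end0 = end1 + end0, end1
        return end1 + end0

    assert count_no_consecutive_zeros(4) == 8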
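
The filter-and-cluster stage can be sketched as follows. This is an illustrative outline under stated assumptions, not DeepMind's code: candidates, example_tests, generated_inputs, and run_program (which executes one candidate program on one input and returns its output) are all hypothetical names.

    from collections import defaultdict

    def select_submissions(candidates, example_tests, generated_inputs, run_program):
        # Stage 1: keep only candidates that pass the example test cases
        # from the problem statement (roughly 1% of up to a million samples).
        passing = [c for c in candidates
                   if all(run_program(c, x) == y for x, y in example_tests)]

        # Stage 2: group the survivors by the outputs they produce on
        # automatically generated inputs; programs that behave identically
        # land in the same cluster.
        clusters = defaultdict(list)
        for c in passing:
            signature = tuple(run_program(c, x) for x in generated_inputs)
            clusters[signature].append(c)

        # Submit one program per cluster, largest cluster first,
        # stopping at the limit of 10 submissions.
        ranked = sorted(clusters.values(), key=len, reverse=True)
        return [cluster[0] for cluster in ranked[:10]]

Because clustering groups behaviorally equivalent programs, each of the 10 submissions represents a genuinely different strategy rather than a near-duplicate of a previous attempt.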

Source: https://www.solidot.org/story?sid=73624