AlphaCode received an average ranking in the top 54.3% in simulated evaluations and achieved “approximately human-level performance.”
The results? AlphaCode performed well, but not exceptionally. The model’s overall performance, according to a paper shared with Gizmodo, corresponds to that of a “novice programmer” with a few months to a year of training. In the test, AlphaCode was able to achieve “approximately human-level performance” and solve previously unseen, natural-language problems from a coding competition by predicting segments of code and generating millions of potential solutions.
That might not sound all that impressive, particularly when compared to seemingly stronger model performances against humans in complex board games, but the researchers note that succeeding at coding competitions is uniquely difficult. To succeed, AlphaCode had to first understand complex coding problems written in natural language and then “reason” about unforeseen problems rather than simply memorizing code snippets.
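For a rough sense of what “generating millions of potential solutions” implies in practice, here is a minimal sketch of a generate-and-filter loop: sample many candidate programs for a problem, keep only those that reproduce the example outputs, and stop once a handful survive. This is purely illustrative; the model call is a stub, and the function names, sample counts, and tests are placeholders rather than anything from DeepMind’s system.

```python
import random
import subprocess
import sys

# Hypothetical example tests for a toy problem: read a line of integers,
# print their sum. (Placeholder, not a real competition problem.)
EXAMPLE_TESTS = [("1 2 3\n", "6"), ("10 20\n", "30")]

NUM_SAMPLES = 200      # AlphaCode reportedly sampled on the order of millions
MAX_SUBMISSIONS = 3    # keep only a few surviving candidates


def sample_candidate_program(problem_statement: str) -> str:
    """Stand-in for drawing one program from a code-generation model."""
    # A real system would query a trained model here; this stub returns a
    # correct solution occasionally and an obviously wrong one otherwise.
    if random.random() < 0.05:
        return "print(sum(map(int, input().split())))"
    return "print(42)"  # wrong program, should be filtered out


def passes_example_tests(program: str, tests) -> bool:
    """Run the candidate on each example test and compare its stdout."""
    for stdin, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, "-c", program],
                input=stdin, capture_output=True, text=True, timeout=2,
            )
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0 or result.stdout.strip() != expected:
            return False
    return True


def solve(problem_statement: str):
    """Sample many candidates, keep the few that pass the example tests."""
    survivors = []
    for _ in range(NUM_SAMPLES):
        candidate = sample_candidate_program(problem_statement)
        if passes_example_tests(candidate, EXAMPLE_TESTS):
            survivors.append(candidate)
        if len(survivors) >= MAX_SUBMISSIONS:
            break
    return survivors


if __name__ == "__main__":
    picks = solve("Read a line of integers and print their sum.")
    print(f"kept {len(picks)} candidate(s) out of up to {NUM_SAMPLES} samples")
```

AlphaCode’s actual pipeline is far more elaborate, with a large transformer doing the sampling and additional machinery for choosing among surviving candidates, but the generate-then-winnow structure is the part the description above is pointing at.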