One time, during a security fix, the model's code introduced a non-obvious DoS vector. Well, obvious from the perspective of how the code would be deployed, but not from the code itself. That's exactly why reading each change was so important. Once the issue was pointed out, the model produced code that both addressed the security issue and avoided the DoS.
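The original code isn't shown, but a classic instance of this pattern is a ReDoS: a hypothetical sketch (all names and patterns here are invented for illustration, not from the actual fix) of a validation regex that looks fine in isolation but backtracks exponentially once you know it runs against attacker-controlled input at the deployment boundary, alongside a revision that addresses both concerns:

```python
import re

# Hypothetical "security fix": validate addresses with a regex. The nested
# quantifier (\w+\.?)+ looks harmless in review, but on non-matching
# attacker-controlled input it backtracks exponentially (ReDoS) -- a DoS
# vector visible only if you know this runs on untrusted request data.
UNSAFE = re.compile(r"^(\w+\.?)+@example\.com$")

# Revision addressing both the original issue and the DoS: cap input length
# before matching, and use an unambiguous pattern (no nested quantifiers)
# so the match runs in roughly linear time.
SAFE = re.compile(r"\w+(\.\w+)*@example\.com")

def is_valid_address(addr: str, max_len: int = 254) -> bool:
    if len(addr) > max_len:  # bound the work before the regex ever runs
        return False
    return SAFE.fullmatch(addr) is not None
```

Nothing in `UNSAFE` itself says "this will be fed megabytes of hostile input"; that fact lives in the deployment, which is the point.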
this is a core issue: the algogen has no concept of 'deployment'; it only has the code itself. Even for simple things, not just security issues like this one, it cannot see a project's intention from outside the project. Is that the better anchor for the human in the loop: the connection to reality / intention?