I’d suspect the low “density” of context makes it prone to hallucinations. You need to load in 3,000 lines to express what Python does in 3, so there are a lot of chances to guess the next token wrong.
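For a rough sense of the gap, here’s a toy sketch (my own ballpark example, not a measurement; the filename is made up):

```python
from collections import Counter

# A few lines of Python: read a file, split into words, count them.
words = open("input.txt").read().split()  # hypothetical input file
counts = Counter(words)
print(counts.most_common(3))

# A hand-rolled C version (file I/O, tokenizing, a hash table) could
# easily run to 100+ lines, i.e. far more tokens for a model to get right.
```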
I was gonna say that: probably the higher the abstraction level, the better it is for LLMs to reason about the code, because once learned it’s fewer tokens.