An LLM trained on all books ever written would probably take romance novels, flat-earther tracts, or even “Atlas Shrugged” as truth, much as current AIs treat every Stack Overflow comment as useful and accurate information.
Thinking about it, your question comes back to the very first computer, Babbage's Analytical Engine, and the question interested people asked about it:
“Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?”
Now if we allow ourselves the illusion of assuming that an AGI could exist, and that it could actually learn by itself in a way similar to humans, then that quote alone leads us to these two truths:
- LLMs cannot help being stupid; they simply do not know any better.
- AGIs will probably be idiots, just like the people who asked Babbage that question, but there is at least a chance that they will not be.

Could take a look at textbooks from the 1940s. Might have to brush up on your German, though.