There’s no training for correctness; how do you even define that?
I guess you can chat to these guys who are trying:
By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year
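The "rewards correct final answers" scheme quoted above can be sketched minimally. This is an illustration of outcome-based (verifiable) reward, not DeepSeek's actual code; the `Answer:` marker convention and both function names are assumptions made up for the example.

```python
# Minimal sketch of an outcome-based reward: score only the final answer,
# ignoring the reasoning steps. All conventions here are hypothetical.

def extract_final_answer(completion: str) -> str:
    # Assumed convention: the model ends its output with "Answer: <value>".
    marker = "Answer:"
    idx = completion.rfind(marker)
    return completion[idx + len(marker):].strip() if idx != -1 else ""

def reward(completion: str, reference: str) -> float:
    # Binary reward: 1.0 iff the extracted final answer matches the reference.
    return 1.0 if extract_final_answer(completion) == reference.strip() else 0.0

print(reward("Let x = 3, so 14x = 42. Answer: 42", "42"))  # 1.0
print(reward("Hmm, maybe 41. Answer: 41", "42"))           # 0.0
```

Note the reward is silent on *how* the answer was reached, which is exactly why this works for math (a checkable final value) and not for the fuzzier cases raised below.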
Sure, when it comes to mathematics you can do that, with real limits on success, but what about cases where correctness is less well defined? Two opposing statements can both be correct if the situation changes, for example.
The problems language models are expected to solve go beyond what they’re actually good at. They’ll never be good at solving such problems.
i duno you’re in the wrong forum, you want hackernews or reddit, no one here knows much about ai
although you do seem to be making the same mistake others have made before, where you point to research happening currently and then extrapolate that out into the future
ai has progressed so fast i wouldn’t be making any “they’ll never be good at” type statements
https://huggingface.co/deepseek-ai/DeepSeek-Math-V2