Context, then answer… instead of having everything ride on the first token (e.g. if we make it pick “Y” or “N” first in response to a yes-or-no question, it usually picks “Y” even if it later talks itself out of it).
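
Roughly what that looks like in practice, as a minimal sketch: the `llm()` helper below is a hypothetical stand-in for whatever completion API you use, and the prompt format is just one way to keep the verdict off the first token.

```python
# Minimal sketch of "context, then answer" prompting.
# llm() is a hypothetical placeholder; wire it to your model of choice.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError("substitute your actual client here")

def yes_no(question: str) -> str:
    # Bad: the whole answer rides on the first token the model commits to.
    # prompt = f"{question}\nAnswer with Y or N only:"

    # Better: let it lay out the considerations first, then put the verdict
    # on a final line we can parse.
    prompt = (
        f"{question}\n"
        "First, briefly discuss the relevant considerations.\n"
        "Then, on a final line, write exactly 'ANSWER: Y' or 'ANSWER: N'."
    )
    reply = llm(prompt)
    # Take the last line carrying the verdict; everything before it was the
    # model 'thinking out loud'.
    for line in reversed(reply.splitlines()):
        if line.strip().startswith("ANSWER:"):
            return line.split(":", 1)[1].strip()
    return "N/A"  # model didn't follow the format
```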
Remember when satnavs first came out and you could download different voices for them?
It would definitely be funnier to train it to do that.

Or even better, don’t use the racist pile of linear algebra that regurgitates misinformation and propaganda.
That’s the basis of reasoning models. Make LLMs ‘think’ through the problem for several hundred tokens before giving a final answer.
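
As a concrete sketch of what that looks like from the client side: many open reasoning models (DeepSeek-R1 is the well-known example) wrap the thinking tokens in `<think>…</think>` tags, and the front end simply strips them before showing the final answer. The tag convention and the sample output here are illustrative, not a spec.

```python
# Hedged sketch: separate hidden chain-of-thought from the visible answer,
# assuming the <think>...</think> tag convention used by DeepSeek-R1-style
# models. The sample raw_output below is made up for illustration.

import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a reasoning-model completion."""
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = raw[match.end():].strip()
        return reasoning, answer
    return "", raw.strip()  # model skipped the thinking block

raw_output = "<think>Step 1: ... Step 2: ...</think>\nFinal answer: 42."
reasoning, answer = split_reasoning(raw_output)
print(answer)  # -> "Final answer: 42."
```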
Crank the temperature settings and have it say “Trust me, bro.”
More wise it would sound.


