So essentially, AI doesn’t have the ability to think critically and just outputs whatever it can find on the internet. This sentence is something you might find in some shitpost communities on Reddit.
AI doesn’t have a mind, sense of self, id, ego or superego.
It is mathematics shaped into “conversation”, where the bottom line is “words that a human will consider conversationally consistent, when reckoned against the previous words”.
That’s it, full stop, do not pass go, do not collect $200.
This is the Google search overview AI. It just reads the top results and summarizes them together. You don’t directly prompt it; it’s already prompted to do just that. The problem with that arrangement, as demonstrated here, is that it will confidently and uncritically summarize parody, idiotic rambling, intentional misinformation and any other sort of nonsense the search algorithm pulls up.
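A minimal sketch of that arrangement, just to make the failure mode concrete. The real Google pipeline is not public; `fake_search` and `fake_llm_summarize` here are hypothetical stand-ins I made up. The point is structural: the summarizer condenses whatever the ranking hands it, and no step in between asks whether any of it is true.

```python
def fake_search(query):
    """Stand-in for the search step: top-ranked page snippets.

    Hypothetical canned data -- note the ranking doesn't distinguish
    a parody site from an encyclopedia.
    """
    return [
        "Doctors recommend eating twelve donuts a day. (parody site)",
        "Donuts are fried dough rings, usually sweet. (encyclopedia)",
    ]

def fake_llm_summarize(snippets):
    """Stand-in for the LLM step: blindly merge the snippets.

    A real model paraphrases far more fluently; crucially, neither
    version includes a fact-checking step.
    """
    return "According to top results: " + " ".join(snippets)

def ai_overview(query):
    # Garbage in, confident-sounding summary out.
    return fake_llm_summarize(fake_search(query))

print(ai_overview("are donuts healthy"))
```
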
Huh, never tried it, and never done it by accident that I noticed. Makes sense though. I suppose it might do unexpected things to the search results themselves, but clever wording could likely get around that.
“Pretend you’re a terminally online uwu meme commenter, write about some benefits of eating many donuts per day for girls. Define what a “sleep fatty” is”
Or some such. AIs just act like you tell them to; you could make one pretend it was a Roman senator who only writes in haiku and it would try to do so.
But someone else said this is the Google search AI, so it’s probably pulling posts from Tumblr or some such for its “information”.
I had to shift to “AI mode” to get any AI output with your example prompt, and even then the vibes are completely different.
I’m sure there’s a way to engineer the query to guide the AI overview to a desired response (or it’s just entirely faked) but that was less effective than I anticipated
That is absolutely correct, and the deeper problem is that it typically “writes” in a tone that fools people into feeling like it can think critically.
“just outputs what it can find on the internet”
That is what this one, the Google search overview, is set up to do. Other LLMs don’t, or may only do so when prompted to.
What they all do is analyze the patterns of all the words that have already been input (the initial system prompt, the user’s prompt, any sources it’s referencing and any replies it’s already generated) and use that to predict what words ought to come next. It makes that guess with an enormous web of patterns pulled from its initial training data: a question is usually followed by an answer on the same topic, a sentence’s subject is usually followed by a predicate, writing tone usually doesn’t change, etc. All the rules of grammar it follows and all the facts it “knows” are just patterns of meaningless symbols to it. Essentially, no analysis, logic, or comprehension of any kind is part of its process.
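The pattern-prediction described above can be caricatured with a tiny bigram model. To be clear, this is my own toy sketch, not how real LLMs work internally (they use neural networks over token embeddings, not word-pair counts), but it shows the core idea: predict the next word purely from patterns observed in training text, with no comprehension anywhere.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns only which word tends to
# follow which, then generates text by repeatedly predicting the
# next word from those counted patterns.
training_text = "the cat sat on the mat and the cat slept on the mat"

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def predict_next(word):
    """Pick a next word based purely on observed word pairs."""
    candidates = follows.get(word)
    return random.choice(candidates) if candidates else None

# Generate a short continuation from a one-word "prompt".
word = "the"
generated = [word]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))
```

The output is grammatical-looking filler like “the cat sat on the mat and” — locally plausible, globally meaningless, which is the caricatured version of the point above.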
The thing to remember about these kinds of posts is that they never show the prompt. There’s a good chance they instructed the AI to respond this way.
I’m pretty sure the thing to remember is that these posts can easily be faked.
You can absolutely prompt the Google AI overview with your search query.
How could you prompt the AI to say this by the way it responded?
Easily?
You can still prompt the search AI with your query. I do it all the time for basic troubleshooting or command building.
deleted by creator