Idea meat FTW
AI is a parasite.
So are many idea meat lumps
Are we talking incandescent bulbs or LED bulbs?
Strong LED or (very) weak incandescent. It's about 20 watts (at least, that's what the pop-sci meme reports).
I dunno, I get real hungry on days when I have to think a lot
There are probably 2 reasons for this:

- There's probably a lot more motor control going on than you would expect when you need to think (writing, fidgeting, etc.).
- Your brain wants sugars, so when you run out of immediately-available glycogen to break down, you will want to eat more in order to keep thinking. Breaking down fats won't supply energy fast enough (in the short term) to keep complex thought running continuously.
Lead bulbs ofc.
This guy lead bulbs.
Tbf a lot of energy goes into producing the food we consume. But nowhere near what it costs to run this AI garbage.
Isn’t this how we get the Matrix?
In the Matrix, humans were used as batteries, not processors. Though the original script had them as processors, before an exec thought the average person would be "confused" by that.
Win at what?
Yes
Win at being shit.
God creates man.
Man creates god.
Man kills god.
Man creates AI.
AI kills man.
AI destroys earth.
Crocodile people rule the galaxy until the heat death of the universe.
- Nietzsche
As much as I think current "AI" is another bullshit marketing term, we'll see where it stands once it's been around for at least a few centuries. I don't think it needs thousands of years like brains did.
This is the thing about AI criticism. AI in the LLM sense we know today has been publicly available for a few years, and in development for a couple of decades. Any criticism about how stupid it is will be irrelevant in 6-12 months. Look at the people trashing AI 2 years ago, how it would constantly hallucinate and produce gibberish code. Now it's a lot better in both regards. In 2 more years, what then? It'll be better. Yes, we'll hit the LLM ceiling, but there's a lot of fine-tuning left to be done.
Criticize AI for the environmental effects, the inequality that it's enhancing, how the rich and powerful have access to the AIs that know too much about us. Criticize it for lacking the reality of human-composed text. But criticizing it on technical grounds is not the right angle.
FWIW, if you asked both an AI and a HS student to crank out an essay on a random topic the student hadn't studied, the student would be the one making more shit up. Human brains have limitations too. AI and human brains aren't directly comparable.
Look at the people trashing AI 2 years ago, how it would constantly hallucinate and produce gibberish code.
In my experience, this is absolutely still the case…
Edit: removing my far too serious comment.
Tldr Poe's law, I can't tell if this is a critique of AI or of AI critics.
Ah yeah, don't take shitposts too seriously.
For me, this meme was just a Matrix joke.
Also you can run most models on a wide range of fuels. Sucrose, glucose, maltose, ethanol, molybdenum disulfide, small rocks, some grass. Really anything.
Yes, I think the point is just that it uses a shit ton more energy than the human brain, generally.
deleted by creator
deleted by creator
It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.
The feature set of the human brain is different in a way that you can't compensate for just by increasing scale. So you get something that almost works, by using several orders of magnitude more power.
We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times, and this allows us to progressively work on more and more complex ideas without an increase in our working memory. And a lot of other stuff.
If you spend enough time using LLMs, you can't help but notice how differently they work from your own mind.
I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights like any artificial neural network, it starts with usable stuff, lessons from scenarios it may never encounter but will nevertheless gain wisdom from.
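The evolution-as-pretraining analogy maps loosely onto how initialization works in ML: a network that starts from pretrained weights already sits near something useful, while one starting from random weights has to learn everything from scratch. A toy numpy sketch of that gap (the task and all the numbers here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (made up for illustration): learn y = 2x + 1.
x = rng.uniform(-1, 1, size=(200, 1))
y = 2 * x + 1

def mse(w, b):
    # Mean squared error of the linear model w*x + b on the toy task.
    return float(np.mean((w * x + b - y) ** 2))

# "Evolutionary pretraining": weights already near a useful solution.
pretrained = (1.8, 0.9)
# Blank slate: an arbitrary random-looking initialization.
random_init = (0.3, -0.5)

print(f"pretrained start loss: {mse(*pretrained):.3f}")
print(f"random start loss:     {mse(*random_init):.3f}")
```

The pretrained model begins with a tiny loss and needs only a little "lifetime learning" to finish the job, while the random one starts far away, which is roughly the head start the comment above attributes to evolution.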
deleted by creator
Do you not have an internal experience?
deleted by creator
I don't need to understand consciousness to be confident an LLM is not conscious.
Dogs are glorified barking machines. Does a tape player playing a recording of a dog barking have the consciousness or intelligence of a dog?
deleted by creator
Probably; their interactions with humans/dogs suggest they have a "theory of mind".
Mites? No.
You can’t prove that I do, I can’t prove that you do. Those metaphysical arguments don’t have much punch in a scientific conversation.
Sorry, but I'm not a prediction engine. I am capable of abstract thought and of actually understanding the meaning of words.
I can also process all kinds of different data and make connections between them, which includes emotional connections.
Another cool trick: I also have this thing called a consciousness which is something I can’t explain or put into words but I know it exists. All under 20W.
this thing called a consciousness which is something I can’t explain or put into words but I know it exists. All under 20W.
Maybe you’d be able to if you dial it to 25W
deleted by creator
Nonetheless, the human brain is a better prediction engine.
deleted by creator