there aren’t that many, if you’re talking specifically LLMs, but ML+AI is more than LLMs.
Not a defence or indictment of either side; people just tend to confuse the terms “LLM” and “AI”.
I think there could be worth in AI for identification (what insect is this, find the photo I took of the receipt for my train ticket last month, order these chemicals from lowest to highest pH…) - but LLMs are only part of that stack - the input and output - which isn’t going to make many massive breakthroughs week to week.
The recent boom in neural net research will have real applicable results that are genuine progress: signal processing (e.g. noise removal), optical character recognition, transcription, and more.
However, the biggest hype area with what I see as the smallest real return is the huge-model LLM space, which basically tries to portray AGI as just around the corner.
LLMs will have real applications in summarization, but otherwise they largely just generate asymptotically plausible babble: very good for filling the Internet with slop, not actually useful for replacing all the positions OAI et al. need them to replace (for their funding to be justified).
Because Lemmy is more representative of scientists and the underprivileged, while other media are more representative of celebrities and those who can afford that media, like hedge funds or tech monopolies.
Funny how I never see articles on Lemmy about improvements in LLM capabilities.
Probably because nobody really wants to read absolute nonsense.
i would guess a lot of the pro-AI stuff is from corpos, given that good press is money to them.