My SO was complaining that their boss (who loves that LLM stuff) put all the meeting notes into an LLM and then asked it to make a presentation from them. My SO then had to redo 90% of it because it was trash. So, yay, it saved 10% of the time. Oh, but wait, it took time to read all that and run it through the AI, so no, it didn't.
Why redo it? Clearly the boss wanted a presentation on garbage.
For real, now the boss will be like that AI isn’t half bad after all.
Tbh, note-taking is something LLMs are good at.
Transcriptions, mostly decent.
Notes and summaries? Not if you care about accuracy.
And transcriptions usually aren’t really even AI; speech-to-text has been around a while.
Speech to text is AI and always has been.
It wasn’t always the current LLM slop bots that co-opted the name, sure.
You’re 100% right, and I should know that too. “Not LLM-based” is indeed what I was intending to say.
It gets hard to remember the (correct) broader definition when slop is being shoved into your brain through every possible orifice. Even for those of us who vehemently disagree, it still subconsciously molds the frameworks and language we use. It’s insidious, really.
See this article by a fellow lemming, which I highly recommend.
Yep, that’s a fact. Hidden Markov Models, LSTMs, and LLMs are all ML models, and ML is a branch of AI.
Yup, quite good
It works mostly by shrinking the content down, which is something LLMs are trained for in the first place.
And that’s all: two separate steps that are both good and reliable to an extent. One of the best applications of AI so far.
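To make the two-step split concrete, here's a minimal sketch of the pipeline. Both functions are stand-ins I made up for illustration: a real system would use an ASR model for step 1 and an LLM (or other summarizer) for step 2; here step 2 is a naive keyword-based extractive filter, not how an LLM actually summarizes.

```python
# Sketch of the transcribe-then-summarize pipeline discussed above.
# Both steps are stubs; the function names and logic are illustrative only.

def transcribe(audio_chunks):
    """Step 1: speech-to-text. Stubbed as a join over pre-recognized chunks."""
    return " ".join(audio_chunks)

def summarize(transcript, keywords):
    """Step 2: shrink the transcript. Naive extractive stand-in that
    keeps only the sentences mentioning one of the given keywords."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    kept = [s for s in sentences
            if any(k.lower() in s.lower() for k in keywords)]
    return ". ".join(kept) + ("." if kept else "")

chunks = [
    "We agreed to ship the release on Friday.",
    "Someone mentioned lunch options.",
    "Action item: Dana updates the migration script.",
]
transcript = transcribe(chunks)
notes = summarize(transcript, keywords=["release", "action item"])
print(notes)
```

The point of splitting it this way is that each step can be judged (and fail) independently, which is exactly where the thread below disagrees: step 1 tends to hold up, step 2 is where important details get dropped.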
There was someone at work using read.ai for technical discussions, and the few summaries I read felt like they were written by someone who didn't understand the topic and couldn't tell which details were important. We would summarize the decisions and next steps ourselves, and each AI summary had at least one really important thing changed or left out.
A transcription getting words wrong but still phonetically right is still more helpful than a misleading summary.
Now I understand the last part. Agreed, something deeply specialized could be in danger here. I had corpo-speak in mind when writing the previous messages.