There was someone at work who was using read.ai for technical discussions, and the few summaries I read sounded like they were written by someone who didn’t understand the topic and couldn’t tell which details were important. It would summarize the decisions and next steps, and each summary had at least one really important thing changed or left out.
A transcription that gets words wrong but stays phonetically close is still more helpful than a misleading summary.
Yup, quite good
It works mostly by shrinking the meaning down, which is something LLMs are trained for in the first place.
And that’s all: two separate steps that are each good and reliable to an extent. One of the best applications of AI so far.
Now I understand the last part. Agreed, something deeply specialized could be in danger here. I had corpo speak in mind when writing the previous messages.