Following their link to LiveKit’s blog, it seems LiveKit provides a real-time communication stack with adaptive video encoding, so they’re presumably using it to handle multiple video streams over connections of varying quality. I don’t think this is mainly about AI, even though AI is what LiveKit itself focuses on.
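For what it’s worth, the adaptive part is largely a client-side option in livekit-client. This is my own rough sketch of the usual setup, not anything from their post, and the URL/token are placeholders:

```typescript
// Minimal sketch (assumptions of mine, not from the article): with
// adaptiveStream the SDK subscribes to the lowest simulcast layer that
// fits the on-screen element and the network, and dynacast pauses
// publishing layers nobody is consuming.
import { Room, RoomEvent, Track } from 'livekit-client';

async function joinRoom(wsUrl: string, token: string): Promise<Room> {
  const room = new Room({
    adaptiveStream: true, // downscale/pause remote video per element size & bandwidth
    dynacast: true,       // stop sending simulcast layers with no subscribers
  });

  room.on(RoomEvent.TrackSubscribed, (track) => {
    if (track.kind === Track.Kind.Video) {
      // attach() returns an HTMLMediaElement we can drop into the page
      document.body.appendChild(track.attach());
    }
  });

  await room.connect(wsUrl, token);
  return room;
}
```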
There are lots of ML models related to frame generation (you may have heard of Nvidia DLSS), so the “AI” might not be LLM slop; it might be a genuinely good application of ML, like reducing bandwidth by halving the framerate and interpolating the missing frames on the client.
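To make the interpolation idea concrete, here’s a deliberately dumb sketch of my own (not anything LiveKit or DLSS actually does, just a 50/50 pixel blend) showing how a client could synthesize an in-between frame from two decoded ones:

```typescript
// Toy illustration: given two decoded frames, produce the frame halfway
// between them by averaging pixels. Real ML interpolators estimate motion
// instead of blending, but the bandwidth math is the same: send 15 fps,
// display 30 fps.
function midpointFrame(prev: ImageData, next: ImageData): ImageData {
  if (prev.width !== next.width || prev.height !== next.height) {
    throw new Error('frames must have matching dimensions');
  }
  const out = new ImageData(prev.width, prev.height);
  for (let i = 0; i < out.data.length; i++) {
    // average each RGBA byte of the two source frames
    out.data[i] = (prev.data[i] + next.data[i]) >> 1;
  }
  return out;
}
```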
I wouldn’t call such things agents though. They’re not acting autonomously or out-of-process.