• Honse@lemmy.dbzer0.com
    1 day ago

Interesting. LLMs have no ability to directly do anything but output text, so the tooling around the LLM is what's actually doing the searching. They probably use some search API like Bing's. Have you compared the results with Bing's? I'd be interested to see how similar they are, or how much extra tooling is layered on top for search. I can't imagine they want to spend a lot of cycles generating only around three search queries per request, unless they have a smaller dedicated model for that. I'd love to see the architecture behind it and how it differs from a normal search engine.
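To make the point concrete, here's a minimal sketch of that kind of tool loop: the model only ever emits text, the surrounding harness parses that text for a search request, runs it against an external search backend, and feeds the results back into the model's context. Everything here is hypothetical, `fake_llm` and `web_search` are just stand-ins for a real model and a real API (Bing's or whatever they actually use):

```python
import json

def fake_llm(prompt: str) -> str:
    """Stand-in for the model: first emits a tool call as text, then,
    once results are in the prompt, emits a final answer."""
    if "SEARCH_RESULTS:" not in prompt:
        return json.dumps({"tool": "web_search", "query": "lemmy federation"})
    return "Answer based on: " + prompt.split("SEARCH_RESULTS:", 1)[1].strip()

def web_search(query: str) -> list[str]:
    """Stand-in for a real search backend (e.g. a Bing-style API)."""
    return [f"result about {query} #1", f"result about {query} #2"]

def run_agent(user_msg: str) -> str:
    out = fake_llm(user_msg)
    try:
        call = json.loads(out)          # model asked for a tool
    except json.JSONDecodeError:
        return out                      # plain text means a final answer
    if call.get("tool") == "web_search":
        results = web_search(call["query"])
        # the harness, not the model, ran the search; results go back as text
        user_msg += "\nSEARCH_RESULTS: " + "; ".join(results)
    return fake_llm(user_msg)

print(run_agent("How does Lemmy federate?"))
```

The key bit is that the model never touches the network itself; the harness intercepts the tool-call text, does the actual HTTP work, and the model just sees the results as more input text. A smaller dedicated model for query generation would slot in exactly where `fake_llm` produces the tool call.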