Thales@sh.itjust.works to 196@lemmy.blahaj.zone · English · 2 months ago
Camp Rule (sh.itjust.works) · 97 comments
Superb@lemmy.blahaj.zone · edited 2 months ago
I'd say if there is training beforehand, then it's "generative AI".

brucethemoose@lemmy.world · edited 2 months ago
Not a great metric either: models with much simpler output are also extensively trained. Text embedding models, for instance, produce vectors that reduce to a single 'similarity' score, and machine vision models just recognize objects. Another example is NNEDI3, a very primitive edge enhancer, or LanguageTool's tiny 'word confusion' model: https://forum.languagetool.org/t/neural-network-rules/2225
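For illustration, here is a minimal sketch of the point above: an embedding model is extensively trained, yet its useful output boils down to one similarity number rather than any generated content. This assumes the sentence-transformers package is installed; the model name below is just a common small embedder, not anything mentioned in the thread.

```python
# Sketch of a "trained but not generative" model: a text embedder whose
# practical output is a single similarity score, not generated text.
# Assumes sentence-transformers is installed; the model name is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pretrained embedder

# Encode two sentences into fixed-size vectors (one row per sentence).
a, b = model.encode(["a tent by the lake", "camping near the water"])

# Cosine similarity collapses the two vectors into one number in [-1, 1].
score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"similarity: {score:.3f}")
```

Cosine similarity is the usual choice here because embedding vectors are compared by direction rather than magnitude.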