Tony Bark@pawb.social to Technology@lemmy.world · English · 1 day ago
Google removes Gemma models from AI Studio after GOP senator’s complaint (arstechnica.com)
cross-posted to: [email protected]
filister@lemmy.world · 1 day ago
The future is very small models trained to work in a certain domain and able to run on devices. Huge foundational models are nice and everything, but they are simply too heavy and expensive to run.
brucethemoose@lemmy.world · edited 17 hours ago
Yeah. You are preaching to the choir here. …Still though, I just meant there’s no reason to use Gemma 3 27B (or 12? Whatever they used) unaugmented in AI Studio. The smallest Flash seems to be more optimal for TPUs (hence it runs faster).