In the light of Snowden's latest post: What are your FOSS-AIs?
Peter_Arbeitslos@discuss.tchncs.de to Privacy@lemmy.ml · English · 21 days ago
WalnutLum@lemmy.ml · 21 days ago
I've seen this said multiple times, but I'm not sure where the idea that model training is inherently non-deterministic is coming from. I've trained a few very tiny models deterministically before…
umami_wasabi@lemmy.ml · 21 days ago
Are you sure you can train a model deterministically down to each bit? As in, feeding the weights into sha256sum will yield the same hash?
WalnutLum@lemmy.ml · 20 days ago
Yes, of course. There's nothing gestalt about model training: fixed inputs result in fixed outputs.
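The claim above — that seeded training produces bit-identical weights you can verify with a hash — can be sketched in pure Python. This is a minimal illustration with a toy linear model and hand-rolled SGD, not a real ML framework; all names and the training setup here are invented for the example:

```python
import hashlib
import random
import struct

def train(seed: int, steps: int = 500) -> list[float]:
    # Tiny linear regression y = w*x + b trained with plain SGD.
    # All randomness (data sampling and weight init) flows from one
    # seeded RNG, so two runs with the same seed are identical.
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(50)]
    data = [(x, 2.0 * x + 1.0) for x in xs]
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    lr = 0.1
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return [w, b]

def weights_hash(weights: list[float]) -> str:
    # Serialize each float to its exact IEEE-754 bytes before hashing,
    # so the comparison really is bit-for-bit, like sha256sum on a file.
    raw = b"".join(struct.pack("<d", v) for v in weights)
    return hashlib.sha256(raw).hexdigest()

h1 = weights_hash(train(seed=42))
h2 = weights_hash(train(seed=42))
print(h1 == h2)  # True: same seed, same bits, same hash
```

On real hardware the caveat is GPU kernels: frameworks typically need explicit opt-ins (e.g. PyTorch's `torch.use_deterministic_algorithms(True)` plus a fixed seed) because some CUDA operations use non-deterministic reductions by default.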