I agree completely that it is scary we rely so heavily on something we truly don't understand. But even with the cleanest, most open source and open weights model ever, that statement would be just as true.
With billions of parameters all interacting with each other, these models are extremely hard to analyse. LLMs are inherently hard to understand fully, not really because of the opaqueness of the training data but because of the raw number of parameters involved in inference.
I would certainly be more appreciative of an open weights model: we still destroy the environment (I have no choice in that), but at least we get the resulting matrix of parameters, rather than OpenAI's approach where we get absolutely nothing.
IMO a model like DeepSeek is most certainly built on someone else's model that published its open weights. An open weights model enables someone else to build on top of whatever controversial base was used. So at least it can be used by a wider audience, and the original trainers don't have a say in what you do as long as you respect the licence.
I think the above process has pretty much nothing to do with the struggle of some Linux dev losing hair trying to make sense of an obfuscated Microsoft binary…
I guess many will not share that "lesser evil" view of open weights models, but it's not like anybody is going to stop the AI conglomerates from doing their thing in the complete dark, under no regulation whatsoever.