However, that’s not really any better for privacy. There’s absolutely nothing preventing someone from logging a history of the changes.
However, that’s not really any better for privacy. There’s absolutely nothing preventing someone from logging a history of the changes.
The number of users who care about emulation is utterly insignificant compared to the hold Nvidia has on the compute market. There is a lot of stuff that either requires or is more optimized for CUDA.
Currently, these systems have no way to separate trusted and untrusted input. This leaves them vulnerable to prompt injection attacks in basically any scenario involving unvalidated user input. It’s not clear yet how that can be solved. Until it has been solved, it seriously limits how developers can use LLMs without opening the application up to exploitation.
Yup, and they are published by Microsoft. So all ChatGPT is doing here is spitting out a key commonly found in it’s training set. It’s not calculating anything.
They also dominate compute. There’s still a lot of software that depends on CUDA.