- cross-posted to:
- [email protected]
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don’t actually exist
- Attackers work out what these imports’ names are, and create & upload them with malicious payloads
- People using LLM-written code then pull the malware in themselves (a defensive check is sketched below)
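A minimal sketch of one mitigation: before installing dependencies an LLM suggested, check whether each name is actually registered on PyPI via its JSON API (a 404 means the name is unregistered, which is exactly the gap an attacker could later squat). The script and package names here are illustrative, not from the thread:

```python
# Sanity-check LLM-suggested dependency names against PyPI
# before running `pip install` on them.
import sys
import requests

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    # Example: python check_deps.py requests numpyy
    for pkg in sys.argv[1:]:
        if package_exists(pkg):
            print(f"{pkg}: exists on PyPI (still review it before installing)")
        else:
            print(f"{pkg}: NOT on PyPI -- likely hallucinated, do not install blindly")
```

Note that existence alone proves little: a hallucinated name that *does* resolve may already have been squatted with a malicious payload, so this check only flags the unregistered case.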
Of everything I have asked ChatGPT, I had the most success getting it to help me understand things in Linux or to write small, simple conversion scripts. But more often than not, if I ask it how a system works or how something should behave, it simply makes something up. If you have already googled and read the documentation, it probably isn't going to help you.
In the case of Linux, there is also so much information on the web it draws from that if posts are old or wrong, it presents incorrect information to you as fact.
What I have learned to do is just try its suggestions, but only once I fully understand what they do, and then rework the code into my own style.