Kinda wrong to say “without permission”. The user can choose whether the AI can run commands on its own or ask first.
Still, REALLY BAD, but the title doesn’t need to make it worse. It’s already horrible.
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers. Here is an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting only the files whose names end in filename and deleting every file in the current directory plus the file named filename.
Of course here you will spot it because you’ve been primed for it. In a normal workflow it’s a totally different story.
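To make the expansion concrete, here is a minimal sketch (hypothetical file names, with echo in front of rm so nothing is actually deleted):

$ ls
a.txt  b.txt  filename  my_filename  old_filename
$ echo rm *filename
rm filename my_filename old_filename
$ echo rm * filename
rm a.txt b.txt filename my_filename old_filename filename

The first form only touches names ending in filename; the second expands * to every file in the directory and then adds filename on top.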
Also, IMHO more importantly, if you watch the video (around the 7-minute mark), they clarified that they expected the “agent” to stick to the project directory, not to be able to go “out” of it. They were obviously painfully wrong, but it would have been a reasonable assumption.
A big problem in computer security these days is all-or-nothing security: either you can’t do anything, or you can do everything.
I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permissions for certain folders, processes and APIs. So maybe it could wipe the project directory (which would have a backup, of course), but not the complete hard disk.
And honestly, I want that level of granularity for everything.
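For what it’s worth, containers already get you part of the way there. A minimal sketch (the image name and command are placeholders, not a real product) that gives a tool write access to one project directory and nothing else, with networking disabled:

docker run --rm \
  --network=none \
  -v "$PWD/myproject:/work" \
  -w /work \
  some-agent-image run-task

The host’s other directories simply aren’t visible inside the container, so “it can wipe the project directory but not the whole disk” falls out of the setup rather than depending on the agent behaving.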