I tried asking an AI to make a basic WebRTC client for audio calls - something with hundreds of examples on the web covering how to do it from the first line of code to the very last.
It did generate a complete WebRTC client for audio calls that I could launch and see running, it just had a couple of tiny bugs:
- you needed a user id to call someone, but an id was only generated when you placed a call (effectively meaning you could only call people who were already calling someone else)
- if you fixed the above and managed to make a call between two users, the audio was exchanged but never played.
Technically speaking, all of the small parts worked, they just didn’t work together. I can totally see someone ignoring that fact and treating this as an example of “working code”.
Btw, I tried asking the AI to fix those problems in its own code, but from that point forward it just kept drifting farther and farther from a working solution.
Depends on their definition of “working”.
That’s the broken behavior I see. It’s evidence of a missing understanding that’s going to need another evolutionary bump to get over.