After Anthropic refused flat out to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.
The military, the department of government responsible for mass murder, should not have any fucking AI in their system, absolutely anywhere. Doubly so without any sort of guardrails.
Why? I can’t think of any reason that would not also preclude the usage of all computer-assisted tools.
Because no other computer assisted tools are straight up fucking wrong half the time?
If your AI tools are wrong half the time, you’re using them wrong. My legal AI is linked to databases of statutes and case law, providing results more reliable than most legal professionals.