If I were a Cisco investor, I'd be looking to move my money elsewhere. All these companies laying people off, shuttering whole-ass teams, and replacing them with non-proprietary and, frankly, unproven LLMs are going to blow their own legs off here pretty soon. I have a feeling that a pretty dramatic change in AI pricing models is coming soon, since all of these companies are providing access to their models for a fraction of what it costs to run them, and the VCs are going to want their money back. Is ChatGPT good enough at, what is it, 0.004 cents a token? Maybe, I guess, if the ghost of quality control doesn't haunt you at night. Is ChatGPT still good enough at 0.1 cents a token or more, or with surge pricing? I sincerely doubt it. If OpenAI implements surge pricing, stay on the lookout for articles about some company or user getting a surprise bill for a million dollars, AWS-style. Given the current quality of LLMs, I don't think the cost shakes out for what you get.
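For what it's worth, here's a rough back-of-envelope on what that price jump would mean. The request volume and tokens-per-request below are made-up assumptions just for illustration; the only point is that a 25x increase in the per-token price passes straight through to the monthly bill:

```python
# Back-of-envelope: what moving from 0.004 to 0.1 cents/token could do to a bill.
# The usage figures below are assumptions for illustration, not real numbers.

TOKENS_PER_REQUEST = 1_000   # assumed prompt + response size
REQUESTS_PER_DAY = 10_000    # assumed volume for a modest internal chatbot
DAYS_PER_MONTH = 30

def monthly_cost_dollars(cents_per_token: float) -> float:
    """Monthly spend in dollars at a given per-token price (in cents)."""
    tokens_per_month = TOKENS_PER_REQUEST * REQUESTS_PER_DAY * DAYS_PER_MONTH
    return tokens_per_month * cents_per_token / 100  # cents -> dollars

for price in (0.004, 0.1):
    print(f"{price} cents/token -> ${monthly_cost_dollars(price):,.0f}/month")

# With these assumed volumes that works out to roughly $12,000/month at
# 0.004 cents/token versus $300,000/month at 0.1 cents/token.
```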
Before we get to pricing, I'd like to see any successful use case in a corporate environment. Not one that's sort-of working, or one that hasn't panned out yet - just a successful, implemented, customer-facing example.
Good point. I think a bigger problem than the customer-facing AIs is going to be the internal ones that make shit up. Someone on here claimed to be working somewhere that gutted their HR department and replaced them almost completely with an LLM that was fed their documents. They claimed the AI had already told them several blatantly illegal things. Any company that does that is just begging to get sued to death, and I'm sure the investors will be reeeeeaaaaal happy with the, what, 1% they saved by killing HR? I mean, HR aren't the good guys here, but just imagine a company being brain-dead enough to say "hey, let's get rid of the people who keep us from getting sued to death and replace them with a chatbot lmao".
And here I thought that outsourced “HR” folks were a travesty.
Had a small payroll issue recently having to do with some time off and a misunderstanding by the (outsourced) HR folks. I was able to speak with enough people who each understood one segment or another of the (rather complex) scenario to get it resolved in a couple of days.
AI would be a hard fail in that application, guaranteed.
IBM CEO Arvind Krishna announced a hiring pause in May, but that’s not all. Later that month, the CEO also stated the company plans to replace nearly 8,000 jobs with AI.
Krishna noted that back-office functions, specifically in the human resources (HR) sector, will be the first to face these changes. In recent weeks, the company has opened up dozens of positions for AI-based roles to help develop and maintain these systems.
Now that I think about it - that explains a few things . . .
At least one giant multi-national corp is actively soliciting examples and use cases from their employees.
"Toy" example submissions are fine - the company is just that eager for something to do with AI. They're hoping their actual AI folks will be able to run with that uncompensated IP, while the employee with the idea gets a pat on the head.
Have I had ideas I might otherwise submit and that are well within my capabilities to implement at “toy” level? Hell, yes.
Do I want to contribute concepts into that sort of pipeline? Absolutely not. Not when they more or less automatically own my work product and whatever I do on company time already.