We’re all seeing the breathless hype surrounding the vacuous marketing term that is “AI”. It’ll change everything! It’s coming for our jobs! Some 50% of white-collar workers will be laid off!

Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

I was having a conversation on Reddit along these lines a couple of days ago, and after seeing more news that just parrots Altman’s theme-du-jour, I need a sanity check.

Something I’ve always found hilarious at work is being asked if I have a calculator (I guess that dates me to the flip-phone era) … my canned response was “what’s wrong with the very large one on your desk?”

Like, automation is literally why we have these machines.

And it’s worth noting that you can’t automate the interesting parts of a job, as those are creative. All you can tackle is the rote, the tedious, the structured bullshit that no one wants to do in the first place.

But here’s the thing: I’ve learned over the decades that employers don’t want more efficiency. They shout it out to the shareholders, but when it comes down to the fiefdoms of directors and managers, they like inefficiency, thank you very much, as it provides tangible work for them.

“If things are running smoothly, why are we so top-heavy?” is not something any manager wants to hear.

Whatever the fuck passes for “AI” in common parlance can’t threaten management in the same way as someone deeply familiar with the process and able to code. So it’s anodyne … not a threat to the structure. Instead of doubling efficiency via bespoke code (leading to a surplus of managers), just let a couple people go through attrition or layoffs and point to how this new tech is shifting your department’s paradigm.

Without a clutch.

I’ve never had a coding title, but I did start out in CS (why does this feel like a Holiday Inn Express ad?), so regardless of industry, when I end up being expected to use an inefficient process, my first thought is to fix it. And it has floored me how severe the pushback is.

I reduced a team of 10 auditors to five at an audiobook company with a week of coding in VB. And a team of three placing ads down to 0.75 of a person (two of the three being me and my girlfriend) at a newspaper hub.

Same hub: I clawed back 25% of my team’s production time after absurd reporting requirements were implemented despite our CMS already having all the timestamps – the vendor charged extra to access our own data, so management decided that a better idea than paying the vendor six figures was to overstaff by 33% (250 total at the center) to get that sweet, sweet self-reported, error-laden data!

At a trucking firm, I solved a decade-long problem with how labour-intensive receiving for trade shows was. Basically, instead of asking the client for their internal data, which had been my boss’s approach, I asked how much they really needed from us, and whether I could simplify the forms and reports (samples provided). Instant yes, but my boss hated the new setup because I was using Microsoft Forms to feed Excel, plus a 10-line script to generate receivers and reports, and she didn’t understand any of that, so how was she sure I knew what I was doing?
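
For scale, here’s a minimal sketch of the kind of glue I’m talking about – Python standing in for whatever I actually used, with made-up column names and file paths, just to show how little it takes to turn a Forms-to-Excel feed into receivers and a report:

```python
# Hypothetical sketch: turn a Microsoft Forms -> Excel export into
# one plain-text receiver per shipment plus a roll-up report.
# Column names and file paths are assumptions, not the original setup.
import pandas as pd

# The workbook Microsoft Forms keeps appending responses to.
responses = pd.read_excel("forms_responses.xlsx")

for _, row in responses.iterrows():
    # One receiving document per submitted form.
    receiver = (
        f"RECEIVER - {row['Client']}\n"
        f"Booth: {row['Booth']}\n"
        f"Pieces: {row['Pieces']}  Weight: {row['Weight']} lb\n"
        f"Carrier: {row['Carrier']}\n"
    )
    with open(f"receiver_{row['Client']}_{row['Booth']}.txt", "w") as out:
        out.write(receiver)

# Per-client roll-up for the show: total pieces and weight.
summary = responses.groupby("Client")[["Pieces", "Weight"]].sum()
summary.to_excel("receiving_report.xlsx")
```

The point was never the script; it was simplifying what the client submits in the first place, so that generating receivers and reports becomes a trivial transform.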

You can’t make this shit up.

Anyway, I think I’ve run far afield of my central thesis, but these illustrations point to a certain intransigence at the management level that will be far more pronounced than the coverage suggests.

These folks locked in their 2.9% mortgage and don’t want to rock the boat.

My point is, why would management suddenly be keen on making themselves redundant when decades of data tell us otherwise?

This form of “AI” does not subvert the dominant paradigm. And no boss wants fewer employees.

As such, who’s actually going to get screwed here? The answer may surprise you.

  • Crotaro@beehaw.org

    I think it’s simpler than you assume. From my limited experience (many strangers’ anecdotes, and my team recently being fired literally because “the other (very different) production location is able to do it without a dedicated Quality Management team”), most employers / company chiefs just want to make more money or, at least, increase the perceived value so that being bought out becomes realistic and leaves them with more money. They don’t actually care whether their product works well or efficiently, as long as number go up. Maybe the original company founder does, but how many companies still have the founder in key decision-making for the long term, without shareholders who kinda hold the real power and couldn’t care less whether the company cleaned up oceans or burned children, because to them it’s just one combination of letters that makes them money?

    As @[email protected] suggested, top management might not even understand that AI won’t help, so they think it will deliver both short-term profit (savings from firings) and long-term profit (increased efficiency or an otherwise better product). And those who are well informed about AI understand, at the very least, that they can increase short-term profits by firing employees (thus saving on salaries for pesky humans) under the guise of increasing efficiency.

    So to top management it’s just a choice between “do I want more money now and in the future?” and “do I want more money now and maybe also trick idiots into buying us out before it goes belly-up?”

    Lastly, I think you might be ascribing more capacity for self-reflection to middle management than it actually has. I want to believe that most of them truly think they are a crucial part of making the company work, so they don’t even see that replacing humans with AI would make them obsolete, and thus ripe for firing.