Rephrasing a common quote - talk is cheap, that’s why I talk a lot.

  • 0 Posts
  • 1K Comments
Joined 3 years ago
Cake day: July 9th, 2023

  • There’s a common Russian metaphor: “not seeing the forest behind the trees”.

    What you are calling a device is in fact a system. It’s a local system that you carry in your hand, but it functions thanks to a very complex global system that is not local. Taken by itself, the device is comparable in complexity to a 1960s town; and beyond it there’s the global system.

    And these are the result of quite a lot of people employed by various organizations with hierarchies and dependencies. And most of the power in those organizations doesn’t want you to have privacy and autonomy to the degree, and at the times, that you want them. If you want those, you should produce your own hardware and everything above it. Or build organizations interested in your full privacy and autonomy which will do that. It’s a matter of structure, so just creating a few such organizations (a goal hardly reachable in itself) with manifestos saying “we want to be good” won’t change anything.

    So, if you were wondering why contemporaries of Stalin’s regime were reluctant to divorce it from Marxism and call it something else - that’s similar to this. They really wanted to believe there was a Marxist superpower, just like some people wanted to believe Google was a good corporation, and before that some people wanted to believe Apple was a counterculture corporation, and so on. And, at various moments in time and space, in various dimensions, sometimes these things were true. Just as in some ways the British Empire really was bringing civilization to the world.

    The more life and diversity there is, the likelier we are to have good things. That doesn’t mean we’ll ever have full privacy, full autonomy, a fully civilized, peaceful, and honorable world, and so on. We won’t.




  • It has a soft flavor. I don’t put it into anything spicy, and it probably won’t be noticeable with the way Americans seem to do seasoning. But if I’m making a soup with some meat, potatoes, and various vegetables, I’ll put it in, and it’ll be noticeable.

    If you just boil beef with and without it, you’ll feel the difference the most, I think.






  • There’s one thing funny about this - everyone is treating what’s happening as mere tomfoolery that will end after an election.

    This is institutional. I don’t live in your country, so I might be mistaken, but even if the pretentious “Department of War” name goes, autonomous weapons remain. And also, it’s easy to play a fool or employ a fool as a talking head, but most people are not fools, especially those with power. There will be wars.

    I’m not excited.

    Actually I might be; I just had a thought that I might have a more specific interpretation of what happened to one girl. She had sort of a “dark triad” personality, and a nasty one, but I’m starting to think that what I was hearing about her, without specifics, wasn’t connected to what she did to me (which I’d forgive, despite her probably not caring at all). It was about her once having met herself - that is, having seen a sadistic event and enjoyed it - and having been shaken ever since. And the people who carried those indirect and vague messages to me seemed to think very badly of her because of that reaction alone, but I don’t know - I might be the reason she even saw that, and she wasn’t looking for it. She loved Exupéry’s “Citadelle”, and if you read it, you might notice that the same man who wrote “The Little Prince” definitely had some sadistic leanings.

    And that was because another girl had taken an interest in me, and then was disappointed. That other girl was a real depraved sadist. That’s how this one got involved - my acquaintance from another place. And this one, too, took some interest in me.

    And I’m thinking that perhaps I too have something sadistic in my personality, if they liked me. The girl I started this story with is a pacifist. And one of my family members is a veteran (basically special forces, the kind of service that’s usually not a transformation but a discovery of personal traits) and a pacifist. And I’m a pacifist (I think everyone should be armed to react to violence, but I’m against any initiation of violence). It just seems to make sense that if you notice something sadistic in yourself early on, you become interested in pacifism, as well as in other ways of considering and containing your (and others’) inner Mr Hyde.

    So, getting back to names and what matters.

    The change that has already happened is contained in the human psyche. Your society might have a hidden but slowly surfacing desire that will have to be fulfilled. I think Freud also described in sexual terms (as was his usual trick) what he was feeling about the start of World War One.

    You have a Department of War and have probably even gotten used to that name.


  • Somewhat funny - I actually realized this dynamic when watching Star Trek. Whenever they need to do something illegal, they simply put their badges on the desk, and just like magic they are no longer bound by Federation ethics.

    That’s the main reason I don’t like the “good people in uniform as beacons of virtue” trope. That always happens. Every time I see it on screen, I immediately imagine the morally inverted version of the same plot.

    At least in Babylon 5, such a decision is irreversible and important for the main characters.

    And in SG-1, despite it being sort of a piece of military propaganda, that too doesn’t happen too easily.

    But there the main characters are not some beacons of anything, they are just people with their own way.


  • You clearly don’t understand how finance works, or don’t understand how leveraged these incestuous deals are. It’s perfectly possible for AI to make killbots and for an AI economic crash to happen at the same time.

    You might want to consult a history book. There are a few recurring themes there; silent leges inter arma and vae victis capture most of them. New weapons might change the intensity of wars all around the world, because they let those who own them avoid loss of life altogether, while those who don’t own them pay with lives just to deal damage that doesn’t even upset their adversary. Which will bring enormous profits - just not to everyone, only to those who conquer. Finance is not all you need for that subject.

    On a humanist note: in “drone army against another drone army wars of the future” scenarios, loss of life might be so small that pain and death in war are reduced to cases of deliberate sadism. Meaning that … again, there’ll be more war.

    The industry needs to make trillions of dollars to pay off their creditors and to achieve the profit their investors need to make this worthwhile. That only happens if most white collar workers are replaced with AI.

    No, because profits are not only made from replacing existing mechanisms, but also from building new ones.

    Specifically, most people don’t use computers as truly general meta-machines. They use them as platforms for running specialized applications.

    But LLMs, however expensive in resources, change that. They make computers meta-machines for everyone.

    And also, in some races you want to be further from the rear, not closer to the front. If this technology promises a profound crash in any case (because, suppose, it’ll bring about planet-wide totalitarianism), those investments might be a rush to avoid getting eaten completely in the future. Losing less, not gaining more.





  • People are talking about AI killbots and an upcoming crash at the same time, and complaining about AI slop and vibe coding.

    Sorry, but if something is usable for making killbots, there will be no crash. And AI slop proves that making slop is useful to someone. And vibe coding proves that someone gets things working in production with those tools. Saying that quality suffers is like saying that cob houses are not comparable to brick houses and vice versa. Both exist. There are places where cob-related technologies are still common in construction.

    But the most important reason is the first one: if some technique gives you a more convenient and sharper stick to kill someone from another tribe, then that technique stays as the tribe’s cherished wisdom.

    As for LLMs consuming too many resources … you might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes a priority, there may come a moment when all the arguments about operations costing more in resources than they earn, and about everything being funded by investors, suddenly stop being true.

    I have been converted. Converted back, one might say - there was a time, around 2011-2014.


  • Which will happen regardless.

    Also, where there are AI safeguards, they are usually in place because of chains of command and authorization, and those mattered so much because the most likely applications of any AI during the Cold War had a very steep damage curve.

    Small killbots don’t have such a damage curve. If they kill someone by mistake, the rest of the population learns to be careful and not attract the attention of those operating them. The same reasons that applied to nukes and radars, where you need chains of specific people with clear authorization to answer for why half the world melted, won’t force anyone to put such limits here.





  • a(n effectively) non-deterministic

    Almost started to type an angry response to that.

    This lady should feel lucky that it only ran amok in her inbox.

    I have done that with less than an LLM. Just a typo in my Mutt configuration, and a few hundred e-mails that shouldn’t have been deleted were. After that I decided that removing spam is best done by first sorting it into a separate mailbox and then manually reviewing it. Which is the experience of plenty of people.
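
    The sort-then-review workflow above can be sketched as a muttrc fragment (folder names are my assumption, not from the original config):

    ```
    # Never hard-delete: deleted messages go to a Trash folder instead
    # (the trash variable exists in NeoMutt and recent Mutt versions)
    set trash="=Trash"

    # Bind S to file the current message into a Junk folder for later
    # manual review, rather than deleting it outright
    macro index S "<save-message>=Junk<enter>" "move to Junk for review"
    ```

    The point is that every destructive step becomes a move into a folder you can inspect, so one typo costs you a cleanup instead of a few hundred e-mails.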

    Which just means that if you use an AI agent (and why not - it appears people do want them), you should perhaps use many dedicated agents, each having access only to its own narrow set of available actions.

    It’s more important with things based on fuzzy logic than it is with scripts. But people use Flatpaks and Snaps and AppImages for isolation, among other things, and in the olden days I ran Skype under a separate user on Linux (it was such a silly fashion: everyone wanted Skype, but everyone also considered it proprietary spyware, and nobody thought about the fact that an X11 client can spy on the whole display and all keyboard and mouse events anyway; and that fashion didn’t involve running Skype in Xephyr or Xnest, just under a separate user).

    So the thought is not new. These agents should just be used with clear privilege separation, with some uniform way to declare privileges and interfaces for AI agents, and with those interfaces kept simple enough. One can hope.
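
    A minimal sketch of that privilege-separation idea, in Python (all names here are hypothetical, invented for illustration): each agent is handed an explicit whitelist of actions, and anything outside the whitelist is refused rather than executed.

    ```python
    # Sketch: per-agent action whitelisting. ScopedAgent, ACTIONS, and the
    # action names are made up for illustration; the point is that the
    # mail-sorting agent simply has no delete privilege to misuse.

    class ScopedAgent:
        """Wraps a registry of actions; only whitelisted ones are callable."""

        def __init__(self, name, actions, allowed):
            self.name = name
            self._actions = actions              # all actions: name -> callable
            self._allowed = frozenset(allowed)   # this agent's narrow privilege set

        def perform(self, action, *args):
            if action not in self._allowed:
                raise PermissionError(f"{self.name} may not perform {action!r}")
            return self._actions[action](*args)

    # A registry of everything the system knows how to do (stubs here).
    ACTIONS = {
        "read_mail":   lambda folder: f"read {folder}",
        "move_mail":   lambda src, dst: f"moved {src} -> {dst}",
        "delete_mail": lambda folder: f"deleted {folder}",
    }

    # The sorting agent may read and move messages, but can never delete.
    sorter = ScopedAgent("sorter", ACTIONS, allowed={"read_mail", "move_mail"})

    print(sorter.perform("move_mail", "INBOX", "Junk"))  # permitted
    try:
        sorter.perform("delete_mail", "INBOX")           # refused
    except PermissionError as e:
        print(e)
    ```

    However fuzzy the agent’s decisions are, the blast radius stays bounded by the whitelist, which is the same logic as running Skype under a separate user.
    
    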