• 11 Posts
  • 104 Comments
Joined 1 year ago
Cake day: September 29th, 2024


  • let’s play a fun game where we read a “breaking news” story about a scientific “discovery” and count the reasons to be skeptical about it

    by Patty Wellborn, University of British Columbia

    Dr. Mir Faizal, Adjunct Professor with UBC Okanagan’s Irving K. Barber Faculty of Science

    right off the bat, there's a conflict of interest: the person writing this works at the same university as the lead author.

    this article is stylized to read like “news” but it’s probably more accurate to treat it like you would a press release.

    and in fact, this same text is on UBC’s website where it explicitly says “Content type: Media Release”

    Patty Wellborn’s author page there seems to indicate that writing this kind of press release is a major part of her job

    and his international colleagues, Drs. Lawrence M. Krauss

    huh…that name sounds familiar…let me go check his wikipedia page and oh look there’s a Controversies section with “Relationship with Jeffrey Epstein” and “Allegations of sexual misconduct” subsections.

    Their findings, published in the Journal of Holography Applications in Physics

    that journal is published by Damghan University in Iran

    there’s a ton of xenophobia and Islamophobia that gets turned up to 11 when people in the English-speaking world start discussing Iran, so I don’t want to dismiss this journal out-of-hand…but their school of physics has 2 full professors?

    if it turned out that Damghan were actually well-regarded for physics research, that's not what I'd expect to see

    but anyway, let’s look at the paper itself

    except, hold on, it’s not a paper, it’s a letter:

    Document Type : Letter

    that’s an important difference:

    Letters: This is a very ambiguous category, primarily defined by being short, often <1000 words. They may be used to report a single piece of information, often from part of a larger study, or may be used to respond to another paper. These may or may not go out for peer review - for example, I recently had a paper accepted where the decision was made entirely by the editor.

    reading a bit further:

    Received: June 6, 2025; Accepted: June 17, 2025

    this claims to be “proving” something fundamental about the nature of the universe…and the entire review process took 11 calendar days? (basically one work week; the 6th was a Friday and the 17th was a Tuesday)



  • This would do two things. One, it would (possibly) prove that AI cannot fully replace human writers. Two (and not mutually exclusive to the previous point), it would give you an alternate-reality version of the first story, and that could be interesting.

    this is just “imagine if chatbots were actually useful” fan-fiction

    who the hell would want to actually read both the actual King story and the LLM slop version?

    at best you’d have LLM fanboys ask their chatbot to summarize the differences between the two, and stroke their neckbeards and say “hmm, isn’t that interesting”

    4 emdashes in that paragraph, btw. did you write those yourself?


  • This is an inflammatory way of saying the guy got served papers.

    ehh…yes and no.

    they could have served the subpoena using registered mail.

    or they could have used a civilian process server.

    instead they chose to have a sheriff’s deputy do it.

    from the guy’s twitter thread:

    OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could (and did!) send a subpoena to Encode’s corporate address asking about our funders or communications with Elon (which don’t exist).

    If OpenAI had stopped there, maybe you could argue it was in good faith.

    But they didn’t stop there.

    They also sent a sheriff’s deputy to my home and asked for me to turn over private texts and emails with CA legislators, college students, and former OAI employees.

    This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated.

    in context, the subpoena and the way in which it was served sure smells like an attempt at intimidation.


  • from another AP article:

    This would be the third ceasefire reached since the start of the war. The first, in November 2023, saw more than 100 hostages, mainly women and children, freed in exchange for Palestinian prisoners before it broke down. In the second, in January and February of this year, Palestinian militants released 25 Israeli hostages and the bodies of eight more in exchange for nearly 2,000 Palestinian prisoners. Israel ended that ceasefire in March with a surprise bombardment.

    maybe I’m cynical (OK, I’m definitely cynical) but I very much doubt this ceasefire is going to last.

    there are two things in the world that Trump wants more than anything else. one is to fuck his daughter. the other is a Nobel Peace Prize.

    I suspect the timing of this agreement comes from Netanyahu trying to manufacture a justification for Trump to get the Nobel. after the prize is announced (whether Trump receives it or not) they’ll kick the genocide back into high gear again.


  • If it had the power to do so it would have killed someone

    right…the problem isn’t the chatbot, it’s the people giving the chatbot power and the ability to affect the real world.

    thought experiment: I’m paranoid about home security, so I set up a booby-trap in my front yard, such that if someone walks through a laser tripwire they get shot with a gun.

    if it shoots a UPS delivery driver, I am obviously the person culpable for that.

    now, I add a camera to the setup, and configure an “AI” to detect people dressed in UPS uniforms and avoid pulling the trigger in that case.

    but my “AI” is buggy, so a UPS driver gets shot anyway.

    if a news article about that claimed “AI attempts to kill UPS driver” it would obviously be bullshit.

    the actual problem is that I took a loaded gun and gave a computer program the ability to pull the trigger. it doesn’t really matter whether that computer program was 100 lines of Python running on a Raspberry Pi or an “AI” running on 100 GPUs in some datacenter somewhere.
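    to make that point concrete, here's a minimal, purely hypothetical sketch (nothing here touches real hardware, and every name is made up): the trigger wiring is the same line of code whether the decision function is a trivial heuristic or a stand-in for some giant model. the culpability lives in whoever wrote the wiring.

    ```python
    # hypothetical sketch: swap the decision function freely; the person
    # who connected it to pull_trigger() is responsible either way.

    def pull_trigger():
        # stand-in for the real-world harm in the thought experiment
        print("BANG")

    def decide_simple(frame):
        # the "100 lines of Python" version: a trivial uniform check
        return frame.get("uniform") != "UPS"

    def decide_ai(frame):
        # stand-in for an "AI" classifier; equally fallible, just fancier.
        # fires whenever its (made-up) confidence score is below a threshold.
        return frame.get("confidence_ups", 0.0) < 0.9

    def tripwire_loop(frames, decide):
        # identical wiring regardless of which decide() is plugged in
        fired = 0
        for frame in frames:
            if decide(frame):
                pull_trigger()
                fired += 1
        return fired
    ```

    either decision function can misfire on a UPS driver; the only line that actually causes harm is the `pull_trigger()` call, and a human put it there.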



  • Why TF do Kindles and the like even need to exist? I read on my iPhone while the audiobook is playing.

    if you prefer to read on your phone, by all means read on your phone.

    but making the jump from that to “e-readers should not exist” is fucking stupid.

    Do Not Disturb and self control are a thing and have never been a problem for me.

    congratulations. would you like a gold star.

    This isn’t rocket science.

    I have ADHD. regulating my attention sometimes is rocket science.

    obviously that’s not the only reason: I have neurotypical friends and family who love their e-readers, and I’m sure there are people with ADHD who prefer reading on their phones.

    remember that there are 8 billion people in the world, and not all of them have the exact same preferences as you do. that isn’t rocket science.




  • “Nurses and medical staff are really overworked, under a lot of pressure, and unfortunately, a lot of times they don’t have capacity to provide engagement and connection to patients,” said Karen Khachikyan, CEO of Expper Technologies, which developed the robot.

    tapping the sign: every “AI” related medical invention is built around the assumption that there are too few medical staff, they’re all overworked, and changing that is not feasible. so we have to invest millions of dollars into hospital robots, because investing millions of dollars in actually paying workers would be too hard. (also, robots never unionize)

    Robin is about 30% autonomous, while a team of operators working remotely controls the rest under the watchful eyes of clinical staff.

    30%…according to the company itself. they have a strong incentive to exaggerate, and they’re not publishing any data on how they arrived at that figure, so it can’t be independently verified.

    it sounds like they took one of the telepresence robots that’s been around for 10+ years and slapped ChatGPT into it and now they’re trying to fundraise on the hype of being an “AI” company. it’s a good grift if you can make it work.



  • Asshole cars for mostly assholes

    from the article:

    Some firms have reportedly already laid off staff, with the Unite union claiming that workers in the JLR supply chain “are being laid off with reduced or zero pay.” Some have been told to “sign up” for government benefits, the union claims.

    JLR, which is owned by India’s Tata Motors, is one of the UK’s biggest employers, with around 32,800 people directly employed in the country. Stats on the company’s website also claim it supports another 104,000 jobs through its UK supply chain and another 62,900 jobs “through wage-induced spending.”

    regardless of your opinion about the cars or the people who drive them…thousands of people getting furloughed or laid off suddenly is bad.




  • I guess my writing style ends up looking a bit polished sometimes

    uh-huh…“too polished” is not the thing that’s causing you to fail the Turing test. and your emdash count keeps rising, btw.

    — just wanted to share some thoughts I’ve had for a while.

    and what thoughts are those, exactly?

    your original post followed the pattern of every AI slop “discussion prompt” post I’ve ever seen - 3 paragraph structure that ends with “in conclusion, it’s a land of contrasts — what do you think?”

    and all your other comments in this thread are just variations on “yeah there are positives and negatives — we’ll need to think carefully about it”

    humans who want to talk about a thing…usually have opinions about that thing. often strong opinions, and often based on specifics about the thing. do you have any?


  • “In other words, these conversations with a social robot gave caregivers something that they sorely lack – a space to talk about themselves”

    so they’re doing a job that’s demanding, thankless, often unpaid (in the case of this study, entirely unpaid, because they exclusively recruited “informal” caregivers)

    and…it turns out talking about it improves their mood?

    yeah, that’s groundbreaking. no one could have foreseen it.

    if you did this with actual humans it’d be “lol yeah that’s just therapy and/or having friends” and you wouldn’t get it published as a scientific paper.

    it’s written up as a “robotics” story but I’m not sure how it being a “robot” changes anything compared to a chatbot. it seems like this is yet another “discovery” of “hey you can talk to an LLM chatbot and it kinda sorta looks like therapy, if you squint at it”.

    (tapping the sign about why “AI therapy” is stupid and trying to address the wrong problem)



  • here is the official NASA press release. primary sources are always preferable, especially compared to this fuckass “digital trends” clickbait website.

    “This finding by Perseverance, launched under President Trump in his first term, is the closest we have ever come to discovering life on Mars. The identification of a potential biosignature on the Red Planet is a groundbreaking discovery, and one that will advance our understanding of Mars,” said acting NASA Administrator Sean Duffy. “NASA’s commitment to conducting Gold Standard Science will continue as we pursue our goal of putting American boots on Mars’ rocky soil.”

    quick fact check: it was launched in 2020, but the mission was announced back in 2012. giving Trump credit here is idiotic, but it’s about what you’d expect from Sean Duffy; he’s a Trump crony through-and-through. before being the acting NASA administrator he was Trump’s Secretary of Transportation, and before that he was a Republican congressman and reality TV contestant (on The Real World and the *checks notes* Lumberjack World Championship)

    I think it’s important to remember that everything, even basic scientific research, is liable to be politicized if it suits the administration’s ends. so it’s totally possible this biosignature is legitimate, but it’s also totally possible that they’re hyping up questionable findings because they want to persuade Trump that funding a NASA mission to Mars would boost his TV ratings.