Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 27 Posts
  • 4.04K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • Why is so much coverage of “AI” devoted to this belief that we’ve never had automation before (and that management even really wants it)?

    I’m going to set aside the question of whether any particular company, timeframe, or AI-related technology is effective; I don’t think that’s really what you’re aiming to address.

    If it just comes down to “Why is AI special as a form of automation? Automation isn’t new!”, I think I’d give two reasons:

    It’s a generalized form of automation

    Automating a lot of farm labor via mechanization of agriculture was a big deal, but it mostly contributed to, well, farming. It didn’t directly result in automating a lot of manufacturing or something like that.

    That isn’t to say that we’ve never had technologies that offered efficiency improvements across a wide range of industries. Electric lighting, I think, might be a pretty good example of one. But technologies that do that are not that common.

    kagis

    https://en.wikipedia.org/wiki/Productivity-improving_technologies

    This has some examples. Most of those aren’t all that generalized. They do list electric lighting in there. The integrated circuit is in there. Improved transportation. But other things, like mining machines, are not generally applicable to many industries.

    So it’s “broad”. Can touch a lot of industries.

    It has a lot of potential

    If one can go produce increasingly-sophisticated AIs (and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations), there’s a pathway to, over time, automating darn near everything that humans do today using that technology. Electric lighting could clearly help productivity, but it could only take things so far.

    So it’s “deep”. Can automate a lot within a given industry.


  • I was curious. Apparently this is a known phenomenon, and that drug is also prescribed specifically for that purpose.

    https://en.wikipedia.org/wiki/Bupropion

    Bupropion, formerly called amfebutamone,[15] and sold under the brand name Wellbutrin among others, is an atypical antidepressant that is indicated in the treatment of major depressive disorder and seasonal affective disorder and to support smoking cessation.

    Prescribed as an aid for smoking cessation, bupropion reduces the severity of craving for nicotine and withdrawal symptoms[53][54][55] such as depressed mood, irritability, difficulty concentrating, and increased appetite.[56] Initially, bupropion slows the weight gain that often occurs in the first weeks after quitting smoking. With time, however, this effect becomes negligible.[56]

    The bupropion treatment course lasts for seven to twelve weeks, with the patient halting the use of tobacco about ten days into the course.[56][9] After the course, the effectiveness of bupropion for maintaining abstinence from smoking declines over time, from 37% of tobacco abstinence at three months to 20% at one year.[57] It is unclear whether extending bupropion treatment helps to prevent relapse of smoking.[58]

    Overall, six months after the therapy, bupropion increases the likelihood of quitting smoking approximately 1.6-fold as compared to placebo. In this respect, bupropion is as effective as nicotine replacement therapy but inferior to varenicline. Combining bupropion and nicotine replacement therapy does not improve the quitting rate.[59]

    In children and adolescents, the use of bupropion for smoking cessation does not appear to offer any significant benefits.[60] The evidence for its use to aid smoking cessation in pregnant women is insufficient.[61]



  • I do not game on phones, but my best experiences have, ironically, been with ‘gaming’ phones like the Razer Phone 2 and Asus phones. They have gigantic batteries, lots of RAM, and lean, stock UIs that let you disable/uninstall apps, hence they’re fast as heck and last forever. I only gave up my Razer Phone 2 because the mic got clogged up with dust, and I miss it.

    While I kind of agree (though I don’t really like the “gamer” aesthetics), Asus only offers two major updates and two years of patches, which is quite short.

    https://www.androidauthority.com/phone-update-policies-1658633/

    If someone games with their phone and plans to frequently upgrade for new hardware, they may not care. But if you get the hardware just to have a large battery and RAM, that may be a concern.

    EDIT: Also, no mmWave support, which may or may not matter to someone.



  • Sixteen percent of GDP…The United States has tethered 16% of its entire economic output to the fortunes of a single company

    That’s not really how that works; those two numbers aren’t comparable to each other. Nvidia’s market capitalization (what investors are willing to pay for ownership of the company) is equal to sixteen percent of US GDP (the total annual economic activity of the US).

    They’re both dollar values, but it’s like comparing the value of my car to my annual income.

    You could say that the value of a company is somewhat-linked to the expected value of its future annual profit, which is loosely linked to its future annual revenue, which is at least more connected to GDP, but that’s not going to be anything like a 1:1 ratio, either.
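
    To make the stock-versus-flow point concrete, here’s a toy Python calculation using the car/income analogy above; every number in it is made up purely for illustration:

        # A stock (what something is worth today) divided by a flow (value per year)
        # gives you a percentage, but not a "share of the economy that depends on it".
        car_value = 30_000      # what my car is worth today (a stock)
        annual_income = 60_000  # what I earn per year (a flow)

        ratio = car_value / annual_income
        print(f"my car is 'worth {ratio:.0%} of my annual income'")
        # -> 50%. True as arithmetic, but it doesn't mean that half of my
        # income is tethered to the fortunes of the car.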


  • If your concern is load, disabling anonymous access can help (sadly), which is what a lot of instances have been doing. Probably along with using stuff like Cloudflare and Anubis.

    If your concern is not letting scrapers have access to your posts/comments at all, that isn’t going to happen short of a massive shift away from a publicly-accessible environment. You’re gonna be stuck with private, small forums if you want that; search engines won’t index them, and you’ll have small userbases. On the Threadiverse, if someone wants to harvest your comment and post text, all they have to do is set up an instance, federate, and subscribe to every community on every instance. They don’t need to scrape at all. The only reason bots are scraping is that, at the current scale of the Threadiverse, it isn’t worth the effort to write special-case code to obtain the text via the federated-instance route instead.






  • That’s a life question that involves a lot of factors.

    I will say that:

    • Generally-speaking, having a college degree in the US is financially advantageous.

    • While the younger you get a degree, the larger the return (since you have longer to use the skillset), unless you’re close to retirement I’d expect an engineering degree to be advantageous; these tend to have strong returns.

    • I think that it is unlikely that AI will kill demand for people with bachelor’s degrees in computer science and/or electrical engineering in the near future. Probably one day there will be human-level AI, and if that happens, it’ll have a much, much broader and more dramatic impact on the world than just on those degrees; we’ll cross that bridge when we come to it. Heck, AI has increased demand for some people with skillsets relevant to AI.

    If you asked me, with what limited information you’ve provided, to just say “yes” or “no”, as long as you’re committed to completing the degree, I’d say “yes”.

    EDIT: For 2022:

    For sometime now, I’ve been hearing that college degrees are worthless nowadays

    https://www.usnews.com/education/best-colleges/articles/college-majors-with-the-best-return-on-investment

    According to Payscale data, here are some specific engineering majors in bachelor’s degree programs that result in high incomes:

    Electrical engineering and computer science: early career median pay is $119,200; mid-career median pay is $169,000.
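
    As a back-of-the-envelope Python illustration of the “younger means larger return” point: the two salary figures are the Payscale medians quoted above, while the no-degree baseline, the degree cost, and the crude ten-year early/mid-career split are assumptions I’m making up just for the sketch:

        early_career_pay = 119_200   # Payscale median, early career (quoted above)
        mid_career_pay = 169_000     # Payscale median, mid career (quoted above)
        no_degree_pay = 55_000       # assumed baseline without the degree
        degree_cost = 120_000        # assumed tuition plus living costs

        def rough_lifetime_gain(working_years_left):
            # Crude model: early-career pay for the first ten years, mid-career after.
            early = min(working_years_left, 10)
            late = max(working_years_left - 10, 0)
            with_degree = early * early_career_pay + late * mid_career_pay
            without_degree = working_years_left * no_degree_pay
            return with_degree - without_degree - degree_cost

        for years in (40, 25, 10):
            print(years, "working years left ->", f"${rough_lifetime_gain(years):,.0f}")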



  • https://en.wikipedia.org/wiki/Cosmic_ray_visual_phenomena

    Cosmic ray visual phenomena, or light flashes (LF), also known as Astronaut’s Eye, are spontaneous flashes of light visually perceived by some astronauts outside the magnetosphere of the Earth, such as during the Apollo program. While LF may be the result of actual photons of visible light being sensed by the retina,[1] the LF discussed here could also pertain to phosphenes, which are sensations of light produced by the activation of neurons along the visual pathway.[2]

    Researchers believe that the LF perceived specifically by astronauts in space are due to cosmic rays (high-energy charged particles from beyond the Earth’s atmosphere[3]), though the exact mechanism is unknown. Hypotheses include Cherenkov radiation created as the cosmic ray particles pass through the vitreous humour of the astronauts’ eyes,[4][5] direct interaction with the optic nerve,[4] direct interaction with visual centres in the brain,[6] retinal receptor stimulation,[7] and a more general interaction of the retina with radiation.[8]

    The main shapes seen are “spots” (or “dots”), “stars” (or “supernovas”), “streaks” (or “stripes”), “blobs” (or “clouds”) and “comets”. These shapes were seen at varying frequencies across astronauts. On the Moon flights, astronauts reported seeing the “spots” and “stars” 66% of the time, “streaks” 25% of the time, and “clouds” 8% of the time.[10] Astronauts who went on other missions reported mainly “elongated shapes”.[9] About 40% of those surveyed reported a “stripe” or “stripes” and about 20% reported a “comet” or “comets”. 17% of the reports mentioned a “single dot” and only a handful mentioned “several dots”, “blobs” and a “supernova”.

    A reporting of motion of the LF was common among astronauts who experienced the flashes.[9] For example, Jerry Linenger reported that during a solar storm, they were directional and that they interfered with sleep since closing his eyes would not help. Linenger tried shielding himself behind the station’s lead-filled batteries, but this was only partly effective.[11]

    There are a lot of not-immediately-obvious benefits to being on Earth.


  • But the software needs to catch up.

    Honestly, there is a lot of potential room for substantial improvements.

    • Gaining the ability to identify edges of the model that are not particularly relevant to the current problem and to unload them. That could bring down memory requirements a lot.

    • I don’t think (though I haven’t been following the area) that current models are optimized for being clustered. Hell, the software running them isn’t either. There’s some guy, Jeff Geerling, who was working on clustering Framework Desktops a couple of months back, because they’re a relatively inexpensive way to get a ton of VRAM attached to parallel-processing capability. You can have multiple instances of the software active on the hardware, and you can offload different layers to different APUs, but currently it basically runs sequentially: at any given moment, no more than one APU is doing compute. I’m pretty sure that that’s something that can be eliminated, if it hasn’t already been (there’s a toy sketch of the micro-batching idea after this list). Then the problem, which he also discusses, is that you need to move a fair bit of data from APU to APU, so you want high-speed interconnects. Okay, so that’s true if what you want is to just run models designed for very expensive, beefy hardware on a lot of clustered, inexpensive hardware…but you could also train models to optimize for this, like using a network of neural nets that have extremely sparse interconnections between them and denser connections internal to them, with each APU running only one neural net.

    • I am sure that we are nowhere near being optimal just for the tasks that we’re currently doing, even using the existing models.

    • It’s probably possible to tie non-neural-net code in to produce very large increases in capability. To make up a simple example, LLMs are, as people have pointed out, not very good at giving answers to arithmetic questions. But…it should be perfectly viable to add a “math unit” that some of the nodes on the neural net interface with, and to train the model to make use of that math unit. And suddenly, because you’ve effectively built a CPU into the thing’s brain, it becomes far better than any human at arithmetic…and potentially at things that make use of that capability. There are lots of things that we have very good software for today. A human can use software for some of those things, through their fingers and eyes (not a very high rate of data interchange, but we can do it). There are people like Musk’s Neuralink crowd who are trying to build computer-brain interfaces. But we can just build that software directly into the brain of a neural net and have the thing interface with it at the full bandwidth that the brain can operate at. If you build in software to do image or audio processing, to help extract information that is likely “more useful” but expensive for a neural net to compute, such models might get a whole lot more efficient.
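
    To make the “math unit” idea a bit more concrete, here’s a minimal Python sketch of the general pattern (my own toy illustration, not any specific product’s tooling): the model side emits a structured request, and ordinary deterministic code does the arithmetic exactly instead of the model guessing at it.

        import ast
        import operator

        OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv,
               ast.Pow: operator.pow, ast.USub: operator.neg}

        def evaluate(expr):
            """Exactly evaluate a plain arithmetic expression like '123 * 456 + 7'."""
            def walk(node):
                if isinstance(node, ast.Expression):
                    return walk(node.body)
                if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                    return node.value
                if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                    return OPS[type(node.op)](walk(node.left), walk(node.right))
                if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                    return OPS[type(node.op)](walk(node.operand))
                raise ValueError("not plain arithmetic")
            return walk(ast.parse(expr.strip(), mode="eval"))

        def answer(question):
            # Stand-in for the model deciding to use the tool: anything tagged
            # "calculate:" gets routed to the math unit rather than approximated.
            if question.lower().startswith("calculate:"):
                return str(evaluate(question.split(":", 1)[1]))
            return "(hand the question to the language model as usual)"

        print(answer("calculate: 123456789 * 987654321"))  # exact, unlike an LLM's guess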
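
    And on the clustering point above, here’s a toy PyTorch sketch of why splitting a batch into micro-batches matters once different layers live on different devices (device names are placeholders; on a real multi-GPU/APU box you’d use “cuda:0”, “cuda:1”, and so on). With one full batch, the second stage just waits on the first; micro-batching is the usual way to let both stages work at once. Real overlap also needs asynchronous execution, which this sketch doesn’t show; it only shows the partitioning.

        import torch
        import torch.nn as nn

        dev_a = torch.device("cpu")  # placeholder for APU/GPU 0
        dev_b = torch.device("cpu")  # placeholder for APU/GPU 1

        stage_a = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to(dev_a)  # "first half" of layers
        stage_b = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to(dev_b)  # "second half" of layers

        def forward_sequential(x):
            # What the comment describes today: one device computes, then the other.
            h = stage_a(x.to(dev_a))
            return stage_b(h.to(dev_b))

        def forward_pipelined(x, micro_batches=4):
            # While stage B chews on chunk i, stage A could already start chunk i + 1.
            outs = [stage_b(stage_a(chunk.to(dev_a)).to(dev_b))
                    for chunk in x.chunk(micro_batches)]
            return torch.cat(outs)

        x = torch.randn(32, 512)
        print(forward_sequential(x).shape, forward_pipelined(x).shape)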


  • There’s loads of hi-res ultra HD 4k porn available.

    It’s still gonna have compression artifacts. Like, the point of lossy compression having psychoacoustic and psychovisual models is to degrade the stuff as far as you can without it being noticeable to someone consuming it normally. That doesn’t impact you if you’re just viewing the content as intended, but it does become a factor once you start transforming or processing it. Like, you’re working with something in a reduced colorspace with blocks and color shifts and stuff.

    I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn’t have some kind of preprocessing to deal with it.

    I’m not saying that it’s technically-impossible to build something that can learn to process and compensate for all that. I (unsuccessfully) spent some time, about 20 years back, on a personal project to add neural net postprocessing to reduce visibility of lossy compression artifacts, which is one part of how one might mitigate that. Just that it adds complexity to the problem to be solved.
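
    For what it’s worth, here’s a minimal PyTorch sketch of the general shape of that kind of postprocessor (my own toy layout, not the project I mentioned): a small residual network that takes a decoded, artifact-laden frame and predicts a correction, trained against the original uncompressed frames.

        import torch
        import torch.nn as nn

        class DeblockNet(nn.Module):
            def __init__(self, channels=32):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 3, 3, padding=1),
                )

            def forward(self, decoded):
                # Predict a residual and add it back: output approximates the clean frame.
                return decoded + self.body(decoded)

        net = DeblockNet()
        decoded = torch.rand(1, 3, 64, 64)  # stand-in for a decoded, artifacted frame
        clean = torch.rand(1, 3, 64, 64)    # stand-in for the matching uncompressed frame
        loss = nn.functional.l1_loss(net(decoded), clean)  # training would minimize this
        print(loss.item())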


  • I doubt that OpenAI themselves will do so, but I am absolutely confident that someone will be banging on this, and I suspect that they probably already are. In fact, IIRC from an earlier discussion, someone was already selling sex dolls with said integration, and I doubt that they were including local parallel-compute hardware for it.

    kagis

    I don’t think that this is the one I remember, but it doesn’t really matter; I’m sure that there’s a whole industry working on it.

    https://www.scmp.com/tech/tech-trends/article/3298783/chinese-sex-doll-maker-sees-jump-2025-sales-ai-boosts-adult-toys-user-experience

    Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience

    The LLM-powered dolls are expected to cost from US$100 to US$200 more than existing versions, which are currently sold between US$1,500 and US$2,000.

    WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds the company’s latest MetaBox series with an AI module, which is connected to cloud computing services hosted on data centres across various markets where the LLMs process the information from each toy.

    According to the company, it has adopted several open-source LLMs, including Meta Platforms’ Llama AI models, which can be fine-tuned and deployed anywhere.


  • While I don’t disagree with your overall point, I would point out that a lot of that material has been lossily compressed to a degree that significantly degrades quality. That doesn’t make it unusable for training, but it does introduce a real complication, since one of your first tasks has to be dealing with the compression artifacts in the content. Not to mention any post-processing, editing, and so forth.

    One thing I’ve mentioned here (it was half tongue-in-cheek) is that it might be less costly to hire actors specifically to generate video covering whatever weak points you need than to try to work only from that existing training corpus. That lets you get raw, uncompressed data using high-fidelity instruments in an environment with controlled lighting, and you can do stuff like use LIDAR or multiple cameras to make reducing the scene to a 3D model simpler and more reliable. The existing image and video generation models that people are running around with have a “2D mental model” of the world. Bridging the gap to a genuine 3D model is going to be another jump that has to come in order to solve a lot of problems, and the less hassle there is with compression artifacts and such in getting to those 3D models, probably the better.


  • So, I’m just talking about whether the end game is going to be local or remote compute. I’m not saying that one can’t generate pornography locally, but asking whether people will do that: whether the norm will be to run generative AI software locally (the “personal computer” model that came to the fore from the mid-to-late 1970s on) or remotely (the “mainframe” model, which mostly preceded it).

    Yes, one can generate pornography locally…but what if the choice is between a low-resolution, static SDXL (well, or derived-model) image and a service that leverages compute to get better images, or something like real-time voice synth, recognition, dialogue, and video? I mean, people can already get static pornography in essentially unbounded quantities on the Internet; if someone spent their entire life going through it, they’d never, ever see even a tiny fraction of it. Much of it is of considerably greater fidelity than any material that would have been available in, say, the 1980s; that’s certainly true for video. Yet…even in this environment of great abundance, there are people subscribing to commercial (traditional) pornography services, and getting hardware and services to leverage generative AI, even though there are barriers in time, money, and technical expertise to doing so.

    And I’d go even further, outside of erotica, and say that people do this for all manner of things. I was really impressed with Wolfenstein 3D when it came out. Yet…people today purchase far more powerful hardware to run 3D video games. You can go and get a computer that’s being thrown out that can probably run dozens of instances of Wolfenstein 3D concurrently…but virtually nobody does so, because there’s demand for the new entertainment material that the new software and hardware permit.