Notepad++ works fine on Wine on Mac and Linux. After being away from it for a while, I realized I don’t need it anymore. I would often use the column edit mode and recorded macros, but I just bash script those now. I guess I’m a different person now?!?


Proud


Everything reminds me of her


For a glimpse into why Mike of Redlettermedia is being strangled:
The joke is that Rogue One, a newer Star Wars movie, is a nostalgia shotgun blast with no substance.
The same channel went to great lengths to shred the Star Wars prequels (Eps I, II and III) under the name of Plinkett Reviews.


You should check out the LibRedirect Firefox addon. It does exactly what you’re describing you need. You can set up multiple redirect destinations for all kinds of sites, and it’s easy to turn on/off.


Bummer.


Great spin, Bloomberg. You were very careful to only talk about “potential” and missing revenue targets when the real problem is that a bunch of grifters pretended they were on the absolute verge of AGI when, in fact, they were/are building advanced bullshit machines.
I will eat my words when a model can come up with an original thought.


Howdy. For the benefit of users such as myself, can you please clarify which “Proton” you’re referring to?


This framing still sucks. Google is blocking apps THEY don’t approve on YOUR phone.


That’s a great observation!..


I think organizing labor is a useful skill. I just think doing it for the sole benefit of “shareholder value” is what’s killing us. Is that liberal of me? I can’t imagine a society where work isn’t done by people, and work needs some form of organization.


Here are some of the schools I know set the pace for business education in the US. It feels like social responsibility is more than an afterthought.
Again, I’m not defending “the MBAs” running companies; I’m defending the schools.
https://www.hbs.edu/mba/academic-experience/curriculum


Broken systems elevate psychopath leaders into positions of wealth and power, and people who want those things exploit the fastest path there by getting the degrees that put them on that track.
By this MBA logic, do we close CompSci departments for the poor code coming out of Microsoft, close law schools because social rights are being lost, or close engineering schools because infrastructure doesn’t meet current needs?
My point is to blame the CEOs and their shitty behaviour, not the schools that, to my knowledge, try to teach reasonable policy, law, ethics, HR, etc.
Disclaimer: not an MBA


ROCm on my 7900xt is solid. ROCm on my MI50s (Vega) is a NIGHTMARE


I still think AI is mostly a toy and a corporate inflation device. There are valid use cases but I don’t think that’s the majority of the bubble
I played around with image creation and there isn’t anything there other than a toy for me. I take pictures with a camera.


I think you’re missing the point or not understanding.
Let me see if I can clarify
> What you’re talking about is just running a model on consumer hardware with a GUI
The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible API just like the product described in the article.
> We’ve been running models for a decade like that.
No. LLMs, as we know them, aren’t that old. They were harder to run and required some coding knowledge and environment setup until three-ish years ago, give or take, which is when these more polished tools started coming out.
> Llama is just a simplified framework for end users using LLMs.
Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware including consumer hardware.
> The article is essentially describing a map reduce system over a number of machines for model workloads, meaning it’s batching the token work, distributing it up amongst a cluster, then combining the results into a coherent response.
Map reduce, in very simplified terms, means spreading out compute work to highly parallelized compute workers. This is, conceptually, how all LLMs are run at scale. You can’t map reduce or parallelize LLMs any more than they already are. The article doesn’t imply map reduce beyond talking about using multiple computers.
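A toy Python sketch of that idea, just to illustrate the concept (this is purely illustrative; it is not how the article’s product or any real inference engine is implemented):

```python
# Toy map-reduce: "map" fans independent chunks of work out to parallel
# workers, "reduce" combines their partial results.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # the "map" step: each worker handles its own slice independently
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]                   # split the work 8 ways
    with Pool(processes=8) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)   # map: run in parallel
    print(sum(partials))                                      # reduce: combine results
```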
> They aren’t talking about just running models as you’re describing.
They don’t talk about how the models are run in the article. But I know a tiny bit about how they’re run. LLMs require very simple and consistent math computations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.
The article talks about gpt-oss 120b, so we aren’t talking about novel approaches to how the data is laid out or how the models are used. We’re talking about transformer models and how they’re huge and require a lot of data transfer. So the preference is to try to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU’s memory. If you can’t, you’ll run into bottlenecks with PCIe, RAM and network transfer speed. But consumers don’t have GPUs with 63+ GB of VRAM, which is how big gpt-oss 120b is, so they MUST contend with these speed bottlenecks. This article doesn’t address that. That’s what I’m talking about.
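Some napkin math on why transfer speed is the wall (the bandwidth figures below are ballpark assumptions I’m plugging in for illustration, not measurements):

```python
# Napkin math: how long it takes just to move ~63 GB of weights once,
# at assumed ballpark bandwidths (illustrative numbers, not benchmarks).
MODEL_GB = 63  # rough size of gpt-oss 120b's weights

assumed_bandwidth_gb_s = {
    "high-end GPU VRAM (~1000 GB/s)": 1000.0,
    "dual-channel DDR5 system RAM (~80 GB/s)": 80.0,
    "PCIe 4.0 x16 (~32 GB/s)": 32.0,
    "10 GbE network (~1.25 GB/s)": 1.25,
}

for link, bw in assumed_bandwidth_gb_s.items():
    print(f"{link}: {MODEL_GB / bw:.2f} s per full pass over the weights")
```

Roughly speaking, every generated token has to pull the active weights across whichever link they sit behind, so that link sets your tokens-per-second ceiling.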


This is basically meaningless. You can already run gpt-oss 120b across consumer-grade machines. In fact, I’ve done it with open source software with a proper open source licence, offline, at my house. It’s called llama.cpp and it is one of the most popular projects on GitHub. It’s the basis of Ollama, which co-opted Facebook’s Llama branding, and is the engine for LM Studio, a popular LLM app.
The only thing you need is around 64 GB of free RAM, and you can serve gpt-oss 120b as an OpenAI-compatible API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of multiple different GPU addressing technologies. It has a built-in server which allows it to pool resources from multiple machines…
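For anyone who wants to try it: once llama.cpp’s built-in server is running, you can hit its OpenAI-compatible chat endpoint from anything. A minimal Python sketch, assuming the default localhost:8080 and a placeholder model name (adjust both to however you launched the server):

```python
# Minimal sketch: querying a local llama.cpp server through its
# OpenAI-compatible chat endpoint. Assumes the server is already running
# and listening on localhost:8080 (adjust to your launch flags).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-oss-120b",  # placeholder name; depends on your setup
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 32,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

In my experience, existing OpenAI client libraries work too if you point their base URL at the local server.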
I bet you could even do it over a series of high-ram phones in a network.
So I ask: is this novel, or is it an advertisement packaged as a press release?


Reposting for the 3rd time
TLDR: I took the plunge on an OLED TV in 2021 as a primary monitor and it’s been incredible
I’ve been using an LG C1 48" OLED TV as my sole monitor for my full-time job, my photography, and gaming since the start of 2021. I think it’s at around ~~3000~~ ~~4500~~ 8500 hours of screen time. It averages over 10 hours of on time per weekday. It typically stays around ~~40~~ 90 brightness because ~~that’s all I need~~ I now have a bright office, and it sits fairly close to my face given its size. All of the burn-in protection features are on (auto dimming, burn-in protection, pixel rotation), but I have ~~Windows~~ my Mac set to never sleep for work reasons.
Burn-in has not been a thing. Sometimes, I leave it on with a spreadsheet open or a photo being edited overnight because I’m dumb. High-brightness, high-contrast areas might leave a spot visible in certain greys, but by then the TV will ask me to “refresh pixels” and it’ll be gone when I next turn the TV on. The task bar has not burned in.
Update in 2026 at 8500+ hours: there is minor graininess in midtone, flat grays. It’s not distracting or even a risk for photo-sensitive work, but I can find it if I go looking for it.
Experience for work, reading, dev: 8/10
Pros: screen real estate. One 48" monitor is roughly four 1080p 22" monitors tiled. The ergonomics are great. Text readability is very good, especially in dark mode.
Cons: sharing my full screen is annoying for others because it’s so big. The video camera has to be placed a bit higher than ideal, so the angle is slightly too high for video conferences.
This is categorically a better working monitor than my previous cheap dual 4K setup, but text sharpness is not as good as a high-end LCD with retina-like density because 1) the pixel density is lower and 2) the OLED subpixel layout is not as good for text rendering. This has never been an issue for my working life.
Experience with photo and video editing: 10/10
Outside of dedicated professional monitors which are extremely expensive, there is no better option for color reproduction and contrast. From what I’ve seen in the consumer sector, maybe Apple monitors are at this level but the price is 4 or 5x.
Gaming: 10/10
2160p 120 Hz HDR with 3 ms lag, perfect contrast and extremely good color reproduction.
FPSs feel really good. Anything dark/horror pops. There’s a lot of real estate for RTSs. Maybe flight sims would have benefited from a dual-monitor setup?
I’ve never had anything but a good gaming experience. I did have a 144 Hz monitor before, and going to 120 IS marginally noticeable for me, but I don’t think it’s detrimental at the level I play (suck).
Reviewers had mentioned that it’s good for consoles too though I never bothered
Movies and TV: 10/10
4K HDR is better than theaters’ picture quality in a dark room. Everything I’ve thrown on it has been great.
Final notes/recommendations
This is my third LG OLED and I’ve seen the picture quality dramatically increase over the years. Burn-in used to be a real issue and grays were trashed on my first OLED after about 1000 hours.
Unfortunately, I have to turn the TV on with the remote every time. It does automatically turn off when it loses signal after the computer’s screen sleep timer kicks in, which is a good feature. There are open source programs which get around this.
This TV has never been connected to the Internet… I’ve learned my lesson with previous LG TVs. They spy, they get ads, they have horrendous privacy policies, and they have updates which kill performance or features… Just don’t. Get a streaming box.
You need space for it, width- and depth-wise. The price is high (around 1k USD on sale), but not compared with gaming monitors, and especially not compared with two gaming monitors.
Pixel rotation is noticeable when the entire screen shifts over a pixel or two. It also will mess with you if you have reference pixels at the edge of the screen. This can be turned off.
Burn-in protection is also noticeable on mostly static images. I wiggle my window if it gets in my way. This can also be turned off.