• 26 Posts
  • 255 Comments
Joined 2 years ago
Cake day: September 6th, 2023

  • Reposting for the 3rd time

    TLDR: I took the plunge on an OLED TV in 2021 as a primary monitor and it's been incredible.

    I've been using an LG C1 48" OLED TV as my sole monitor for my full-time job, my photography, and gaming since the start of 2021. I think it's at around ~~3000~~ ~~4500~~ 8500 hours of screen time. It averages over 10 hours of on time per weekday.

    It typically stays around ~~40~~ 90 brightness because ~~that's all I need, being fairly close to my face given the size~~ I now have a bright office. All of the burn-in protection features are on (auto dimming, burn-in protection, pixel rotation) but I have ~~Windows~~ Mac set to never sleep for work reasons.

    Burn-in has not been a thing. Sometimes, I leave it on with a spreadsheet open or a photo being edited overnight because I'm dumb. High-brightness, high-contrast areas might leave a spot visible in certain greys, but by then the TV will ask me to "refresh pixels" and it'll be gone when I next turn the TV on. The taskbar has not burned in.

    Update in 2026 at 8500+ hours: there is minor graininess in midtone, flat grays. Not distracting or even a risk for photo-sensitive work, but I can find it if I know to look for it.

    Experience for work, reading, dev: 8/10

    Pros: screen real estate. One 48" monitor is roughly four 1080p 22" monitors tiled. The ergonomics are great. Text readability is very good, especially in dark mode.

    Cons: sharing my full screen is annoying to others because it's so big. The video camera has to be placed a bit higher than ideal, so I'm at a slightly-too-high angle for video conferences.

    This is categorically a better working monitor than my previous cheap dual 4K setup, but text sharpness is not as good as a high-end LCD with retina-like density because 1) the pixel density is lower and 2) the subpixel layout on OLED is not as good for text rendering. This has never been an issue for my working life.
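    If you want rough numbers on the density point (my own back-of-the-envelope, using the standard 3840x2160 panel and a 27" 4K monitor as the "high density" comparison):

```latex
% Pixel density (PPI) = diagonal pixel count / diagonal size in inches
\[
\mathrm{PPI}_{48\,\mathrm{in},\,4K} = \frac{\sqrt{3840^2 + 2160^2}}{48} \approx \frac{4406}{48} \approx 92,
\qquad
\mathrm{PPI}_{27\,\mathrm{in},\,4K} = \frac{\sqrt{3840^2 + 2160^2}}{27} \approx 163
\]
% Each 1920x1080 quadrant of the 48" screen has a ~24" diagonal at the same ~92 PPI,
% which is why it tiles like four entry-level 1080p monitors.
\[
\mathrm{PPI}_{24\,\mathrm{in},\,1080p} = \frac{\sqrt{1920^2 + 1080^2}}{24} \approx \frac{2203}{24} \approx 92
\]
```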

    Experience with photo and video editing: 10/10

    Outside of dedicated professional monitors, which are extremely expensive, there is no better option for color reproduction and contrast. From what I've seen in the consumer sector, maybe Apple monitors are at this level, but the price is 4 or 5x.

    Gaming: 10/10

    2160p at 120 Hz HDR with 3 ms input lag, perfect contrast, and extremely good color reproduction.

    FPSs feel really good. Anything dark/horror pops. There's a lot of real estate for RTSs. Maybe flight sim would have benefited from a dual-monitor setup?

    I've never had anything but a good gaming experience. I did have a 144 Hz monitor before and going to 120 IS marginally noticeable for me, but I don't think it's detrimental at the level I play (suck).

    Reviewers had mentioned that it's good for consoles too, though I never bothered.

    Movies and TV: 10/10

    In a dark room, 4K HDR is better than theater picture quality. Everything I've thrown on it has been great.

    Final notes/recommendations

    This is my third LG OLED and I've seen the picture quality dramatically increase over the years. Burn-in used to be a real issue and grays were trashed on my first OLED after about 1000 hours.

    Unfortunately, I have to turn the TV on with the remote every time. It does automatically turn off on no signal after the computer's screen sleep timer, which is a good feature. There are open source programs which get around this.

    This TV has never been connected to the Internet… I’ve learned my lesson with previous LG TVs. They spy, they get ads, they have horrendous privacy policies, and they have updates which kill performance or features… Just don’t. Get a streaming box.

    You need space for it, width- and depth-wise. The price is high (around 1k USD on sale) but not compared with gaming monitors, and especially not compared with two gaming monitors.

    Pixel rotation is noticeable when the entire screen shifts over a pixel or two. It also will mess with you if you have reference pixels at the edge of the screen. This can be turned off.

    Burn-in protection is also noticeable on mostly static images. I wiggle my window if it gets in my way. This can also be turned off.





  • Broken systems elevate psychopath leaders into positions of wealth and power, and people who want those things exploit the fastest path there by getting the degrees that put them on that track.

    By this MBA logic, do we close CompSci programs for the poor code coming out of Microsoft, close law schools because social rights are being lost, and close engineering schools because infrastructure doesn't meet current needs?

    My point is to blame the CEOs and their shitty behaviour, not the schools that, to my knowledge, try to teach reasonable policy, law, ethics, HR, etc.

    Disclaimer: not an MBA



  • I still think AI is mostly a toy and a corporate inflation device. There are valid use cases, but I don't think those are the majority of the bubble.

    • For my personal use, I used it to learn how models work from a compute perspective. I’ve been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
    • A small, consumer-grade model like gpt-oss-20b is around 13GB and can run on a single mid-grade consumer GPU and maybe some RAM. It's capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use. I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents (a rough sketch of that kind of local query is below, after this list). I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
    • A mid-size model like Nemotron3 30B is around 20GB, can run on a larger consumer card (like my 7900xtx with 24 GB of VRAM, or two 5060 Tis with 16 GB of VRAM each), and will have vaguely the same usability as the small commercial models, like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes. I dump everything in my brain to text and have the model give it structure.
    • A large model like GLM4.7 is around 150GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires big RAM and some patience, or a lot of VRAM. There is software designed to run these larger models in RAM faster, namely ik_llama, but at this scale you're throwing money at AI.
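    Here's a minimal sketch of the "talk to a document" use case, assuming a llama.cpp llama-server (or LM Studio/Ollama) is already running locally and exposing its OpenAI-compatible API on port 8080; the file name, model label, and question are placeholders:

```python
# Minimal sketch: ask a locally served model a question about a document.
# Assumes an OpenAI-compatible server (e.g. llama.cpp's llama-server) is
# already running at http://localhost:8080. "manual.txt" and the question
# are placeholders for whatever documentation you care about.
import requests

with open("manual.txt", "r", encoding="utf-8") as f:
    doc = f.read()[:8000]  # crude truncation; real indexing tools chunk and retrieve instead

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-oss-20b",  # label only; the server answers with whatever model it loaded
        "messages": [
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{doc}\n\nHow do I assign a knob to a MIDI CC?"},
        ],
        "temperature": 0.2,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```

    Real "chat with your documents" tools handle the chunking, embedding, and retrieval for you; this just shows the plumbing is an ordinary HTTP call.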

    I played around with image creation and there isn’t anything there other than a toy for me. I take pictures with a camera.


  • I think you’re missing the point or not understanding.

    Let me see if I can clarify

    > What you're talking about is just running a model on consumer hardware with a GUI

    The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible API just like the product described in the article.

    > We've been running models for a decade like that.

    No. LLMs, as we know them, aren't that old, and they were harder to run, requiring some coding knowledge and environment setup, until 3-ish years ago, give or take, when these more polished tools started coming out.

    > Llama is just a simplified framework for end users using LLMs.

    Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware including consumer hardware.

    > The article is essentially describing a map reduce system over a number of machines for model workloads, meaning it's batching the token work, distributing it up amongst a cluster, then combining the results into a coherent response.

    Map reduce, in very simplified terms, means spreading out compute work to highly parallelized compute workers. This is, conceptually, how all LLMs are run at scale. You can't map-reduce or parallelize LLMs any more than they already are. The article doesn't imply map reduce other than talking about using multiple computers.
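    For anyone who hasn't run into the term, here's a deliberately toy map-reduce (word counting across chunks with a process pool). It only illustrates the split/compute/combine pattern; it has nothing to do with how transformer inference is actually sharded:

```python
# Toy map-reduce: count words across text chunks in parallel, then merge.
# This only illustrates the "map, then reduce" pattern, not LLM inference.
from collections import Counter
from multiprocessing import Pool

def map_count(chunk: str) -> Counter:
    """Map step: each worker counts words in its own chunk."""
    return Counter(chunk.split())

def reduce_counts(partials) -> Counter:
    """Reduce step: merge the per-worker partial results."""
    total = Counter()
    for p in partials:
        total += p
    return total

if __name__ == "__main__":
    chunks = [
        "the quick brown fox",
        "jumps over the lazy dog",
        "the dog barks",
    ]
    with Pool(processes=3) as pool:
        partial_counts = pool.map(map_count, chunks)  # distribute (map)
    print(reduce_counts(partial_counts))              # combine (reduce)
```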

    > They aren't talking about just running models as you're describing.

    They don't talk about how the models are run in the article. But I know a tiny bit about how they're run. LLMs require very simple and consistent math computations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.

    The article talks about gpt-oss120, so we aren't talking about novel approaches to how the data is laid out or how the models are used. We're talking about transformer models and how they're huge and require a lot of data transfer. So the preference is to try to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU's memory. If you can't, you'll run into bottlenecks with PCIe, RAM, and network transfer speed. But consumers don't have GPUs with 63+ GB of VRAM, which is how big GPT-OSS 120b is, so they MUST contend with these speed bottlenecks. This article doesn't address that. That's what I'm talking about.
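    As a rough back-of-the-envelope (round numbers I'm assuming for illustration; gpt-oss is MoE, so it only touches a fraction of its weights per token, which raises these ceilings but keeps the same ordering):

```python
# Back-of-the-envelope: why where the weights live dominates token speed.
# Bandwidth figures are rough, assumed round numbers for illustration.
MODEL_BYTES = 63e9  # ~63 GB of weights, roughly gpt-oss-120b on disk

LINKS_GB_PER_S = {
    "GPU VRAM (high-end GDDR6/6X)": 1000,  # ~1 TB/s
    "Dual-channel DDR5 system RAM": 60,    # ~60 GB/s
    "PCIe 4.0 x16 to the GPU": 32,         # ~32 GB/s
    "10 Gb Ethernet between boxes": 1.25,  # ~1.25 GB/s
}

# Crude upper bound for a dense model: if every weight byte has to cross the
# link once per generated token, then tokens/s <= link bandwidth / model size.
for link, gbps in LINKS_GB_PER_S.items():
    ceiling = (gbps * 1e9) / MODEL_BYTES
    print(f"{link:30s} ~{ceiling:6.2f} tokens/s ceiling")
```

    The exact numbers don't matter; the point is the order-of-magnitude cliff the moment the weights stop fitting in VRAM.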


  • This is basically meaningless. You can already run gpt-OSS 120 across consumer-grade machines. In fact, I've done it with open source software with a proper open source licence, offline, at my house. It's called llama.cpp and it is one of the most popular projects on GitHub. It's the basis of Ollama (which Facebook co-opted) and is the engine for LM Studio, a popular LLM app.

    The only thing you need is around 64 gigs of free RAM and you can serve gpt-oss120 as an OpenAI-like API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of multiple different GPU addressing technologies. It has a built-in server which allows it to pool resources from multiple machines…
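    To make "OpenAI-like API endpoint" concrete, here's roughly what talking to it looks like once the server is up; the port and model label are placeholders for whatever your llama-server instance reports, and the stock openai Python client is just one convenient way to hit it:

```python
# Point the stock OpenAI Python client at a locally running llama.cpp server.
# base_url/port and the model label are placeholders; a local server ignores
# the API key and answers with whatever model it was launched with.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="gpt-oss-120b",  # label only; the server uses its loaded model
    messages=[{"role": "user", "content": "In one line, what are you running on?"}],
)
print(reply.choices[0].message.content)
```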

    I bet you could even do it over a series of high-RAM phones on a network.

    So I ask: is this novel, or is it an advertisement packaged as a press release?