  • Localhorst86@feddit.org · ↑6 · 5 hours ago

    The main concern with old hardware is probably power draw/efficiency; depending on how old your PC is, it might not be the best choice. But remember: companies get rid of old hardware fairly quickly, so it can be a good choice and might be available for dirt cheap or even free.

    I recently replaced my old Synology NAS from 2011 with an old Dell Optiplex 3050 workstation that a company threw away. The system draws almost twice the power (25W) of my old Synology NAS (which only drew 13W, both with 2 spinning drives), but the increase in processing power and flexibility with TrueNAS is very noticeable; it allowed me to also replace an old Raspberry Pi (6W) that only ran Pi-hole.

    So overall, my new home server is close in power draw to the two devices it replaced, but with an immense increase in performance.

  • Horsey@lemmy.world · ↑5 · 8 hours ago

    The number one concern with a NAS is the power draw. I can’t think of many systems that run under 30W.

  • Rolivers@discuss.tchncs.de · ↑1 ↓1 · 5 hours ago

    I’ve made a decent NAS out of a Raspberry Pi 4. It uses USB-to-SATA converters and old hard drives.

    My setup has one 3 TB drive and two 1.5 TB drives. The two 1.5 TB drives are combined into a single 3 TB volume using RAID, which is then mirrored with the 3 TB drive for redundant storage.

    Yes, it’s inefficient AF, but it’s good enough for full HD streaming, so it’s good enough for me.

    I’m too stingy to buy better drives.
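
    For anyone curious how that nesting goes together, here’s a rough sketch using Linux’s mdadm, driven from Python purely for illustration. The device names are assumptions, and this is one way such a layout could be assembled, not a record of the exact setup:

        import subprocess

        def run(cmd):
            # Print and execute one mdadm command; raise if it fails.
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Stripe the two 1.5 TB drives into one ~3 TB device (RAID 0).
        run(["mdadm", "--create", "/dev/md0", "--level=0",
             "--raid-devices=2", "/dev/sdb", "/dev/sdc"])

        # Mirror the striped device with the real 3 TB drive (RAID 1),
        # giving ~3 TB of redundant storage.
        run(["mdadm", "--create", "/dev/md1", "--level=1",
             "--raid-devices=2", "/dev/md0", "/dev/sdd"])

    Losing either 1.5 TB drive takes out the whole striped half, so the mirror is what provides the redundancy.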

  • PieMePlenty@lemmy.world · ↑2 · edited · 9 hours ago

    Why would I throw it away, when I can give it to someone who needs it more, or sell it? Using it as a NAS would use more power than just buying a mini PC and using that. I calculated the costs, and the energy savings would pay for one in two years. My NUC uses 6-7W idle.
    I’d only use an old PC as a NAS if I turned it on on demand, when it was needed, which does hurt its convenience factor a little.
    Note: I’m talking about desktops.
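
    As a rough sketch of that payback arithmetic (the idle wattages are examples in the spirit of this thread; the electricity price and mini PC cost are assumptions you’d swap for your own):

        # Rough payback estimate: old desktop as NAS vs. buying a mini PC.
        OLD_PC_IDLE_W = 50    # assumed idle draw of an old desktop
        MINI_PC_IDLE_W = 7    # NUC-class idle draw, as quoted above
        PRICE_PER_KWH = 0.30  # assumed electricity price (EUR/kWh)
        MINI_PC_COST = 200.0  # assumed mini PC price (EUR)

        extra_w = OLD_PC_IDLE_W - MINI_PC_IDLE_W
        kwh_per_year = extra_w * 24 * 365 / 1000
        cost_per_year = kwh_per_year * PRICE_PER_KWH
        print(f"Extra energy: {kwh_per_year:.0f} kWh/year, "
              f"{cost_per_year:.0f} EUR/year")
        print(f"Payback in {MINI_PC_COST / cost_per_year:.1f} years")

    With these example numbers, the mini PC pays for itself in just under two years, which matches my estimate.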

    • DefederateLemmyMl@feddit.nl · ↑1 · 4 hours ago

      Why would I throw it away, when I can give it to someone who needs it more, or sell it?

      Because selling is always a hassle, dealing with choosing beggars and scammers, and it may not be worth much anymore for general use.

      For example, my old PC has an i7 4770K… it can’t run Windows 11 or play remotely recent games. I don’t know anyone who could use this thing, so to save a few watts I took out the GPU, put it in eco mode, and have been using it as my Linux server.

      My NUC uses 6-7W idle.

      I have played around with some mini PCs (Minisforum and Beelink brands). They’re neat, but they turned out to be not very reliable: two have already died prematurely, and unfortunately they are not end-user serviceable. Lack of storage expansion options is an issue as well, if you don’t just want to stack a bunch of external USB drives on top of each other.

  • addie@feddit.uk · ↑25 · 21 hours ago

    Big shout out to Windows 11 and their TPM bullshit.

    I was thinking that my wee “Raspberry Pi home server” was starting to feel the load a bit too much, and wanted a bit of an upgrade. A local business was throwing out some cute little mini PCs since they couldn’t run Win11. Slap in a spare 16 GB memory module and a much better SSD that I had lying about, and it runs Arch (btw) like an absolute beast. It runs Forgejo, Postgres, DHCP, a torrent and file server, active mobile phone backup, etc., while sipping 4W of power. Perfect; a much better fit than an old desktop keeping the house warm.

    You have to think that if you’ve been given a work desktop with a ten-year-old laptop CPU and 4GB of RAM to run Win10 on, you’re probably not the most valued person at the company. It ran Ubuntu/GNOME just fine when I checked it at its original specs, tho. Shocking, the amount of e-waste that Microsoft is creating.

    • hereiamagain@sh.itjust.works · ↑3 · 19 hours ago

      Question: what’s the benefit of running a separate DHCP server?

      I run OpenWrt, and the built-in server seems fine? Why add complexity?

      I’m sure there’s a good reason; I’m just curious.

      • addie@feddit.uk · ↑6 · 18 hours ago

        The router provided with our internet contract doesn’t allow you to run your own firmware, so we don’t have anything as flexible as what OpenWrt would provide.

        Short answer: to Pi-hole all of the advertising servers that we’d be connecting to otherwise. Our mobile phones don’t normally let us choose a DNS server, but they will use the network-provided one, so it sorts things out for the whole house in one go.

        Long, UK answer: because our internet is being messed with by the government at the moment, and I’d prefer to be confident that the DNS look-ups we receive haven’t been altered. That doesn’t fix everything - it’s a VPN job - but little steps.

        The DHCP server provided with the router is also very slow in comparison to running our own locally. Websites we use often are cached, but connecting to something new takes several seconds. Nothing is as infuriating as slow internet.

        • hereiamagain@sh.itjust.works · ↑1 · 14 hours ago

          Oh, you mean a DNS server - yes, OK, that makes sense. Yeah, I totally understand running your own.

          If I understand correctly, DHCP servers just assign local IPs on initial connection and configure other things, like pointing devices to the right DNS server, gateway, etc.
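
          As a toy illustration of what a DHCP offer carries (built with scapy; the addresses are made up and nothing is actually sent), it’s the leased IP plus options pointing the client at the gateway and DNS server:

              from scapy.layers.l2 import Ether
              from scapy.layers.inet import IP, UDP
              from scapy.layers.dhcp import BOOTP, DHCP

              # A DHCP OFFER: the leased IP (yiaddr) plus options telling
              # the client which gateway and DNS server to use.
              offer = (
                  Ether()
                  / IP(src="192.168.1.2", dst="255.255.255.255")
                  / UDP(sport=67, dport=68)
                  / BOOTP(op=2, yiaddr="192.168.1.50", siaddr="192.168.1.2")
                  / DHCP(options=[
                      ("message-type", "offer"),
                      ("server_id", "192.168.1.2"),
                      ("lease_time", 86400),
                      ("subnet_mask", "255.255.255.0"),
                      ("router", "192.168.1.1"),      # default gateway (option 3)
                      ("name_server", "192.168.1.2"), # DNS server (option 6), e.g. a Pi-hole
                      "end",
                  ])
              )
              offer.show()  # just prints the packet; nothing is sent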

          • addie@feddit.uk · ↑2 · 5 hours ago

            Sorry, I was putting the two things together - my mistake. My router doesn’t let you specify the DNS server directly, but it does allow you to specify a different DHCP server, which can then hand out new IPs with a different DNS server specified, as you say. Bit of a house of cards: running a DHCP server in order to be the DNS server too.

            • hereiamagain@sh.itjust.works · ↑1 · 2 hours ago

              Gotcha! No worries. Networking gets more and more like sorcery the deeper you go.

              Networking and printers are my two least favorite computer things.

      • jj4211@lemmy.world · ↑2 · 18 hours ago

        On mine, I haven’t bothered to change from the ISP-provided router, which is mostly adequate for my needs, except that I need to do some DNS shenanigans, so I take over DHCP to specify my DNS server, which is beyond the customization the ISP router provides.

        Frankly, I’ve been thinking of an upgrade, because it doesn’t do NAT loopback. I currently work around that with different DNS results for local queries, but it’s a bit wonky, and I’m starting to get WiFi 7 devices, so I could use an excuse to upgrade to something more in my control.

        • hereiamagain@sh.itjust.works · ↑2 · 13 hours ago

          That makes sense. I haven’t used an ISP-configured router in over a decade. At my parents’ house, their modem/router combo didn’t support bridge mode, so I put it in a DMZ and pointed that at the WAN port on my router. Worked well.

  • Aceticon@lemmy.dbzer0.com · ↑12 · edited · 21 hours ago

    True for notebooks. (For years my home NAS was an old Asus Eee PC.)

    Desktops, on the other hand, tend to consume a lot more power (how bad it is depends on the generation). They’re simply not designed to be a quiet device sitting in a corner continuously running a task with low CPU demands: hardware designed for far more demanding tasks will have things like much bigger power supplies, which are less efficient at low load (when something is designed to put out 400W, wasting 5 or 10W is no big deal; when it’s designed to put out 15W, wasting 5 or 10W would make it horribly inefficient).
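
    To put rough numbers on that (the efficiency figures below are illustrative assumptions, not measurements):

        # Illustrative sketch: wall draw for a NAS-like 20W load on two
        # differently sized power supplies. Efficiencies are assumed.
        def wall_draw(load_w, efficiency):
            # Power drawn from the wall for a given DC load.
            return load_w / efficiency

        load = 20  # watts, a NAS-like idle load

        # A 400W desktop PSU is far below its sweet spot at 20W;
        # assume ~60% efficiency down there.
        desktop = wall_draw(load, 0.60)

        # A small brick sized for ~60W is near its sweet spot at 20W;
        # assume ~85% efficiency.
        brick = wall_draw(load, 0.85)

        print(f"Desktop PSU: {desktop:.1f}W at the wall "
              f"({desktop - load:.1f}W wasted)")
        print(f"Small brick: {brick:.1f}W at the wall "
              f"({brick - load:.1f}W wasted)")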

    Meanwhile, the typical NAS out there is running an ARM processor (known for low power consumption) or, at worst, a low-power Intel processor such as the N100.

    Mind you, the idea of running your own NAS software is great (you can do way more with it than with a proprietary NAS, since it’s far more flexible), as long as you put it on the right hardware for the job.

    • nutsack@lemmy.dbzer0.com · ↑2 · edited · 11 hours ago

      I have used laptops like this, and I find that eventually the cooling system fails, probably because they aren’t meant to run all the time like a server would be. Various brands, including Dell, Lenovo, MSI, and Apple. Maybe it’s the dust in my house; I don’t know.

      • Aceticon@lemmy.dbzer0.com · ↑2 · 6 hours ago

        Yeah, different hardware is designed for different use cases and generally won’t work as well for others, which is also why desktops seldom make great NAS servers (their fans will also fail from constant use, plus their design spec is for much higher power usage, so they waste a lot more power even when throttled down).

        That said, my Asus Eee PC lasted a few years on top of a cabinet in my kitchen (which is where the Internet came into my house, so the router was also there) with a couple of external HDDs plugged in, and that’s a bit of a hostile environment (some of the particulates from cooking, including fat, don’t get pulled out and end up accumulating there).

        At the moment I just have a Mini-PC in my living room with a couple of external HDDs plugged in, which works as a NAS, TV media box, and home server (including a WireGuard VPN on top of a 1Gbps connection, which at peak is somewhat processor intensive). It’s an N100, and the whole thing has a TDP of 15W, so the fan seldom activates. So far that seems to be the best long-term solution, plus it’s multi-purpose, unlike a proprietary NAS. It’s some of the best €140 (not including the HDDs) I’ve ever spent.

      • Aceticon@lemmy.dbzer0.com · ↑1 · 5 hours ago

        When I had my setup with an Asus Eee PC, I had mobile external HDDs plugged into it via USB.

        Since my use case was long-term storage and feeding video files to a TV media box, the bandwidth limits of USB 2.0 and of HDDs rather than SSDs were fine. Back then I also had 100Mbps Ethernet, so that too limited bandwidth.

        Even in my current setup, where I use a Mini-PC to do the same, the storage is still external mobile HDDs, and the bandwidth limits are now 1Gbps Ethernet and USB 3.0, which is still fine for my use case.

        Because my use case now is long-term storage, home file sharing, and torrenting, my home network follows the same principles as distributed systems and modern microprocessor architectures: smaller, faster data stores with often-used data close to where it’s used (for example, fast, smaller SSDs with the OS and game executables inside my gaming machine, plus a torrent server inside that same Mini-PC using its internal SSD), layered outwards with decreasing speed and increasing size (that same desktop machine has an internal “storage” HDD filled with low-use files, and one network hop away there’s the Mini-PC NAS sharing its external HDDs containing longer-term storage files).

        The whole thing tries to balance storage costs with usage needs.

        I suppose I could improve performance a bit more by setting up some of the space on the internal SSD in the Mini-PC as a read/write cache for the external HDDs, but so far I haven’t had the patience to do it.

        I used to design high-performance distributed computing systems, and funnily enough my home setup follows the same design principles (which I hadn’t noticed until thinking about it now as I wrote this).

  • etuomaala@sopuli.xyz · ↑5 · 21 hours ago

    Laptops are better because they have an integrated uninterruptible power supply, but worse because most can’t fit two hard drives internally. That’s less of a problem now that most have USB 3: just run external RAID if you have to.

    Arguably, a serious home server will need a UPS anyway to keep the modem and router online, but a UPS for just the NAS is still better than no UPS at all. Also, only a small UPS is needed for the modem and router; a UPS for a full desktop has to be much larger.
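
    As a rough sketch of the sizing (every number here is an assumption; swap in your own gear’s figures):

        # Rough UPS runtime estimate for a modem + router.
        BATTERY_WH = 12 * 7   # a small UPS with a 12V 7Ah battery, ~84 Wh
        INVERTER_EFF = 0.80   # assumed conversion efficiency
        LOAD_W = 15           # assumed modem + router draw

        runtime_h = BATTERY_WH * INVERTER_EFF / LOAD_W
        print(f"~{runtime_h:.1f} hours of runtime")  # ~4.5 hours

    The same small unit feeding a 150W desktop would last well under half an hour, which is why the full-desktop UPS has to be so much bigger.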

    • notthebees@reddthat.com · ↑3 · edited · 18 hours ago

      They make M.2-to-SATA adapters that have something like 10 SATA ports. A laptop motherboard in a case with one of those would be very interesting. I have plans for one, but I need to buy some parts (a keyboard and a laptop fan).

      Edit: the adapters run hot and are kind of fragile. I’d recommend putting a thermal pad under the adapter, thermally coupling it to the motherboard and giving it some support.

  • regeya@lemmy.world · ↑3 · 19 hours ago

    I’ve got a Lenovo sitting by the TV, quietly running Jellyfin along with ES-DE. It might not run Win11, but it works fine for what I use it for.

  • imetators@lemmy.dbzer0.com · ↑42 · 1 day ago

    Nah, I disagree. My dedicated NAS consumes around 40W idling and is a very small machine. My old PC would draw 100W idling and sits in an ATX-sized case. Of course I can use my old PC as a NAS, but these two are different categories of device.

    • Blue_Morpho@lemmy.world · ↑2 · 18 hours ago

      I bought a used Coffee Lake-era Xeon E-2224G workstation with 32GB of ECC RAM to use as a NAS. It uses 15 watts at the wall, measured with a Kill A Watt, while streaming 4K with Plex.

    • douglasg14b@lemmy.world · ↑6 · edited · 1 day ago

      I want to reduce wasteful power consumption.

      But I also want ECC for stability and avoiding data corruption, and hardware redundancy for failures (which have actually happened!).

      Begrudgingly, I’m using Dell rack-mount servers. For the most part they work really well: stupid easy to service, unified remote management, lots of room for memory, plenty of PCIe lanes, stupid-cheap second-hand RAM, and stable.

      But they waste ~100 watts of power per device… That stuff adds up, even if we have incredibly cheap power.

      • oftenawake@lemmy.dbzer0.com · ↑4 · edited · 1 day ago

        I use my old PC server as a 50W continuous heater in my lab shed, which is a small stone outbuilding. Keeps it dry in there!

    • Victor@lemmy.world · ↑3 · 19 hours ago

      Why do I need transcoding, if I may ask? My TV always plays the served file directly. 🤷‍♂️ Is there anything to gain by transcoding, especially on the local home network?

    • brucethemoose@lemmy.world · ↑91 · 2 days ago

      Depends.

      Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.

      • Aceticon@lemmy.dbzer0.com · ↑2 · 21 hours ago

        Stuff designed for much higher peak usage tends to have a lot more waste.

        For example, a 400W power supply (which is what’s probably in the original PC of your example) will waste more power than a lower-wattage one (unless it’s a very expensive one), so in that example of yours it should be replaced by something much smaller.

        Even beyond that, everything in there - the motherboard, for example - will have a lot more power leakage than something designed for a low-power system (say, an ARM SBC).

        Unless it’s a notebook, that old PC will always consume more power than, say, an N100 Mini-PC, much less an ARM-based one.

        • WhyJiffie@sh.itjust.works · ↑1 · 18 hours ago

          For example, a 400W power supply (which is what’s probably in the original PC of your example) will waste more power than a lower-wattage one

          In my experience, power supplies are most efficient near 50% utilization. be quiet! PSUs have charts about it.

          • Aceticon@lemmy.dbzer0.com · ↑1 · edited · 5 hours ago

            The way one designs hardware is to optimize for the most common usage scenario while keeping enough capacity for the peak scenario (plus some safety margin).

            (Silent power supplies would also target lower power leakage in the common usage scenario, to reduce the need for fans; the physical circuit design would also consider things like airflow and space for a large, slower fan, since those are quieter.)

            Specifically for power supplies, though, if you want to handle more power you have to, for example, use larger capacitors and switching MOSFETs so the unit can handle more current, and those have more leakage, hence higher baseline losses. Mind you, with more expensive components one can build higher-power equipment with less leakage, but that’s not going to happen outside specialist power supplies designed for high peak use AND low baseline consumption, and I’m not even sure there’s a genuine use case that justifies paying extra for high-power, low-leakage components.

            In summary, while one can theoretically design a high-power, low-leakage power supply, it’s going to cost a lot more because it needs better components, and it’s not going to be a generic desktop PC power supply.

            That said, since silent PC power supplies are designed to produce less heat, which means less leakage (power leakage is literally power turning into heat), they probably use better components and hence have lower baseline leakage, even though the design was targeted at that supply’s most common usage scenario (which is not going to be 15W). So they should waste less power if the desktop is repurposed as a NAS. It still won’t come anywhere close to a dedicated ARM SBC, but it might be cheap enough to be worth it if you already have a PC with a silent power supply.

        • brucethemoose@lemmy.world · ↑1 · 20 hours ago

          All true, yep.

          Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.

      • cmnybo@discuss.tchncs.de · ↑33 · 2 days ago

        You can slow the RAM down too. You don’t need XMP enabled if you’re just using the PC as a NAS, and it can be quite power hungry.

        • brucethemoose@lemmy.world · ↑15 · 2 days ago

          Eh, older RAM doesn’t use much. If it runs close to stock voltage, maybe just set it at stock voltage and bump the speed down a notch; then you get a nice energy-per-task gain from the performance boost.

          • fuckwit_mcbumcrumble@lemmy.dbzer0.com · ↑14 · 2 days ago

            There was a post a while back from someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive savings, but at the cost of HORRIBLE CPU performance. And you do actually need at least a little bit of grunt for a NAS.

            At work we have some of those Atom-based NASes, and the combination of a weak CPU and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as their whole RAID 10 array.

            • brucethemoose@lemmy.world · ↑5 · 2 days ago

              Yeah.

              In general, ‘big’ CPUs have an advantage because they can run at much, much lower clock speeds than Atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.

    • Damage@feddit.it · ↑22 · 2 days ago

      So I did this with a Ryzen 3600; with some light tweaking, the base system burns about 40-50W idle. The drives add a lot, 5-10W each, but they would go into any NAS, so that’s irrelevant. I had to add a GPU because the motherboard I had wouldn’t POST without one; that increases the power draw a little, but it’s also necessary for proper Jellyfin transcoding. I recently swapped the GPU for an Intel Arc A310.

      By comparison, the previous system I used for this had a low-power, fanless Intel Celeron; with a single drive and two SSDs it drew about 30W.

      • BeardedGingerWonder@feddit.uk · ↑1 · 23 hours ago

        I literally did this migration this weekend. I still need to install the A310 drivers, and I don’t run Jellyfin (streaming is handled client-side with minidlna or SMB), but how do you find it?

        • Damage@feddit.it · ↑3 · 20 hours ago

          Drivers? Are you running it on Windows? On Linux I just plugged it in and it worked; Jellyfin transparently started transcoding the additional codecs.

          It fixed my issue with tone mapping; before this, HDR files showed the wrong colors on my not-so-old TV.

          • BeardedGingerWonder@feddit.uk · ↑1 · 18 hours ago

            I’ve no desktop environment on the NAS; it was plug and play in the terminal. I did get an error about HSW/BDW HD-Audio HDMI/DP requiring binding with a gfx driver - but I’ve not yet even bothered to google it.

            I read somewhere that the Sparkle ELF I have just ramps the fan to 100% at all times with the Linux driver and has no option to edit the fan curve under Linux (the suggested fix was to install a Windows VM, set the curve there, and the card will remember it, but after rebuilding the NAS and fixing a couple of minor issues to get it all working I couldn’t face installing Windows, so I just left it as is until I have the time lol).

            • Damage@feddit.it · ↑1 · 18 hours ago

              The host is running Proxmox, so I guess their kernel just works with it.

              It does run the fan way more than I’d like, but its noise is drowned out by the original AMD cooler on the CPU anyway. Thanks for the info, though, I may look into it… but I guess I’d have to set up GPU passthrough on a VM just for that.

              • BeardedGingerWonder@feddit.uk · ↑1 · 4 hours ago

                Yeah, I can’t say I’ve really noticed the fan noise enough to bother me yet, but I wasn’t sure if that’s because I’m running some generic driver or others were just more sensitive to it than me. Jellyfin is at least fourth on the list of maintenance/upgrade tasks at the minute; as long as I can display the terminal, I’m happy enough.

      • lectricleopard@lemmy.world · ↑14 ↓1 · 2 days ago

        OK, I’m glad I’m not the only one who wants a responsive machine for video streaming.

        I ran a Pi 400 with Plex for a while. I don’t care to save 20W while I wait for the machine to respond after every little scrub of the timeline. I want a better experience than Netflix. That’s the point.

        • Damage@feddit.it · ↑6 · 2 days ago

          Eh, TBH I’d like to consume less power, but a 30-40W difference isn’t going to ruin me or the planet; I’ve got a rather efficient home, all in all.

      • YerbaYerba@lemmy.zip · ↑3 · 2 days ago

        I have a 3600 in a NAS and it idles at 25W. My mobo luckily runs fine without a GPU; I pulled it out after the initial install.

    • leftascenter@jlai.lu · ↑16 ↓1 · 2 days ago

      A desktop running at low usage wouldn’t consume much more than a NAS, as long as you drop the video card (which wouldn’t be doing anything anyway).

      Count only that extra draw and you probably get a few years of usage before the additional electricity costs overrun the cost of a NAS. Where I live, that’s around 5 years for an estimated extra 10W.

      • Damage@feddit.it · ↑2 ↓2 · 2 days ago

        as long as you drop the video card

        As I wrote below, some motherboards won’t POST without a GPU.

        Count only that extra draw and you probably get a few years of usage before the additional electricity costs overrun the cost of a NAS. Where I live, that’s around 5 years for an estimated extra 10W.

        Yeah, and what’s more, if one of those appliance-like NASes breaks down, how do you fix it? With a normal PC you just swap out the defective part.

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com · ↑10 ↓1 · 2 days ago

          Most modern boards will. Also, there’s integrated graphics on basically every single current CPU; only AMD on AM4 held out without iGPUs for so damn long.

    • ImgurRefugee114@reddthat.com · ↑6 ↓1 · edited · 2 days ago

      I’m still running a 480 that doubles as a space heater (I’m not even joking; I increase the load based on ambient temps during winter)

      • thatKamGuy@sh.itjust.works · ↑3 · 2 days ago

        I am assuming that’s a GTX 480 and not an RX 480; if so - kudos for not having that thing melt the solder off the heatsink by now! 😅

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com · ↑1 · 2 days ago

          The GTX 480 is efficient by modern standards. If Nvidia could have made a cooler that handled 600 watts in 2010, you can bet your sweet ass that GPU would have used a lot more power.

          Well, that and if 1000-watt power supplies had been common back then.

      • Encrypt-Keeper@lemmy.world · ↑1 · 2 days ago

        I have an old Intel 1440 desktop that runs 24/7 hooked up to a UPS, along with a Beelink mini PC, my router, and a PoE switch; the UPS reports a combined 100W.