• 3 Posts
  • 48 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • Comes down to personal preference really. Personally I have been running TrueNAS since the FreeBSD days, and it's always been on bare metal. There's no reason you couldn't virtualize it, though, and I have seen it done.

    I do run pfSense virtualized on my Proxmox machine. It runs great once I figured out all the hardware passthrough settings. I do the same with GPU passthrough for a retro gaming VM on the same Proxmox host.

    The only thing I don't like is that when you reboot the Proxmox machine, the PCI devices don't retain their mapping IDs. So a PCI NIC card I have in the machine causes the pfSense VM not to start.

    The one thing to take into account with Unraid vs TrueNAS is the difference in how they do RAID. Unraid allows drives of different sizes in its array, but it does not provide the same redundancy as TrueNAS. TrueNAS requires disks to be the same size inside a vdev, but you can have multiple vdevs in one large pool. One vdev can be 5 drives of 10TB and another vdev can be 5 drives of 2TB. You can always swap any drive in TrueNAS for a larger one, but it will only contribute as much capacity as the smallest disk in the vdev.
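    The sizing rules above can be sketched as a quick back-of-the-envelope calculation. This is a simplified model I'm using for illustration (real ZFS also reserves space for metadata, and raidz parity changes the math — the `parity` parameter here is a hypothetical stand-in for that):

    ```python
    def vdev_usable_tb(disk_sizes_tb, parity=0):
        """Usable capacity of one vdev: every disk only counts for as
        much as the smallest disk, and parity disks are subtracted."""
        smallest = min(disk_sizes_tb)
        return smallest * (len(disk_sizes_tb) - parity)

    # Two vdevs in one pool, as in the example above:
    vdev_a = vdev_usable_tb([10] * 5)   # 5 x 10TB -> 50
    vdev_b = vdev_usable_tb([2] * 5)    # 5 x 2TB  -> 10
    pool = vdev_a + vdev_b              # pool is the sum of its vdevs -> 60

    # Swapping one 2TB disk for a 14TB disk doesn't help yet, because
    # every disk is still capped at the smallest disk in the vdev:
    vdev_c = vdev_usable_tb([14, 2, 2, 2, 2])  # still 10
    ```

    Only once every disk in that vdev has been replaced with a larger one does the extra space become usable.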



  • I personally run TrueNAS on a standalone system to act as my network-wide NAS. It stays up near 24/7, except when I need to pull a dead drive.

    Unraid is my go-to right now for self-hosting, as its learning curve for Docker containers is fairly gentle. I find I reboot that system from time to time, so it's not something I use as a daily NAS solution.

    Proxmox I run as well on a standalone system. This is my go-to for VM instances; it's really easy to spin up any OS I need for any purpose. I run things like Home Assistant on this machine, and its uptime is 24/7.

    Each operating system has its advantages, and all three could potentially do the same things. Though I do find a containerized approach prevents long periods of downtime if one system goes offline.


  • No worries. VMware or some of the other virtualization software should work in this case, as most other comments pointed out. Probably the simplest and most straightforward option.

    If you have the urge to tinker, another route you can look at is a Proxmox machine. You can run multiple VMs in tandem on a single standalone box.

    You would then be able to remote desktop into any virtualized OS on your home network. You can use software like Parsec, which I like, to access each machine from a clean interface.









  • Soon we will all be plastic. It's already in our food and water.

    What I really think about is that these are only the effects so far from the plastics that have started to break down, i.e. from when plastic production first began (smaller quantities). What happens when the plastics of today start to break down (much larger quantities)?

    Kind of like the effects of oil (air pollution) only being felt 30-50 years down the line.







  • Seems like the N100 is your option if you are only choosing between these two. Personally I am in the same boat as others here, where desktop hardware is my preference at the moment, especially if I can find combo deals on a mobo/CPU.

    Though my recommendation is to pick a board with a PCIe slot for a potential LSI HBA card; stay away from other SATA expansion cards unless you don't value your data.

    If you do ever pick up an LSI HBA card with support for 8/12/24 drives, I would also suggest plugging the whole pool into that card rather than mixing and matching between onboard SATA connections and the card.

    A boot drive can still connect to a SATA port on the board, as it's not part of the pool.


  • I’m running my NAS on a 12-year-old motherboard with 16GB of RAM, the max the board supports. Though I wish I could bump this up now after running the system for 9 years.

    I would recommend having a board with at least one PCIe slot so if you ever need more drives you can plug them all into an HBA card. My board has 3 and I use 2 of them at the moment: one for the HBA card that supports 24 drives and another for a 10Gb NIC.

    The third I would probably use to add another HBA card if I expand drive quantities.


  • I’ve got the same setup with eight 18TB Exos drives running in a RAIDZ2 with an extra spare. On top of that I’ve got another vdev of eight 12TB WD Reds with another spare.

    With this I can have 2 drives fail in a vdev at any point and still rebuild the pool. Though if more than 2 drives in a vdev fail at the same time, the whole pool is gone.
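    That failure logic can be expressed the same way (a hypothetical helper for illustration; `parity=2` models RAIDZ2): the pool survives only as long as no single vdev loses more disks than its parity level.

    ```python
    def pool_survives(failures_per_vdev, parity=2):
        """A RAIDZ2 vdev tolerates up to `parity` failed disks at once;
        if any one vdev exceeds that, the whole pool is lost."""
        return all(failed <= parity for failed in failures_per_vdev)

    # Two 8-disk RAIDZ2 vdevs, as in the setup above:
    pool_survives([2, 2])  # two failures in each vdev -> True, pool rebuilds
    pool_survives([3, 0])  # three failures in one vdev -> False, pool gone
    ```

    Note the failures have to overlap in the same vdev to kill the pool; the spares just shrink the window during which a failed disk counts against that limit.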

    But if that happens, I have a second NAS offsite at my bro’s place that I back up specific datasets to. It’s connected over Tailscale with a ZFS replication task.