  • If your goal is to improve security, you need to look into end-to-end encryption. That means network traffic is encrypted both between the client and the proxy and between the proxy and the service, and your volumes should be encrypted as well. You didn’t elaborate on your Proxmox/network setup, so I’ll assume you have multiple Proxmox hosts and an external router, perhaps with a switch between them, which means traffic flows across multiple devices.

    With a security mindset you assume the network can’t be trusted, so you apply a layered approach: separation of physical devices, VLANs, ACLs, separate network interfaces for management and for services on their respective networks, and firewall rules on the router, on Proxmox and in the VMs.

    Some solutions:

    • Separate network for VMs/CTs: instead of using routable IPs that go through your router, create a new bridge on a separate CIDR without specifying a gateway. Add that bridge to every VM that needs connectivity and use the bridge IPs for VM-to-VM traffic. You can go further and have the Proxmox nodes communicate over a peer-to-peer ring network instead of through the switch/router, which requires at least two dedicated NICs per Proxmox host. This separates the network but doesn’t encrypt anything.

    Encryption:

    • You could run another proxy on the same VM as the service, purely to encrypt traffic when the service doesn’t support TLS itself. Have your main proxy connect to that local proxy instead of the service directly, so unencrypted traffic never leaves the VM. A step up is to enable certificate validation; a step up from there is to run an internal certificate authority, issue the backend certificates from it, and validate them against the CA cert (see the sketch after this list).
    • Another alternative is an overlay network between the proxy and the VMs. There are a bunch of options: HashiCorp Consul could be an interesting project, and there are more advanced projects built around zero-trust concepts, such as Nebula.
    • If you start building advanced overlay networks you may as well look at Kubernetes, since it streamlines deployment of both the services and the underlying infrastructure; you could deploy Calico with a WireGuard-backed network. The setup does get complicated for a simple home lab, though.
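
    A minimal sketch of that last step (validating a backend against an internal CA), assuming a hypothetical CA bundle at /etc/ssl/internal-ca.pem and a backend certificate issued for service.internal:

    ```python
    # Sketch: a client (for example the front proxy health check) that only
    # trusts certificates issued by the internal CA. Paths/hostnames are examples.
    import http.client
    import ssl

    ctx = ssl.create_default_context(cafile="/etc/ssl/internal-ca.pem")
    ctx.check_hostname = True            # reject certs whose SAN does not match
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unauthenticated backends

    conn = http.client.HTTPSConnection("service.internal", 8443, context=ctx)
    conn.request("GET", "/healthz")
    print(conn.getresponse().status)
    conn.close()
    ```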

    It all boils down to why you self-host. If it’s to learn new tech, then go for it all the way: experiment and fail often so you learn what works and what doesn’t. If you want to focus on reliability and simplicity, don’t overcomplicate things, or you’ll spend too much time troubleshooting and have your services unavailable. Many people run everything on a single node with plain Docker, using Docker networks to separate internal services from proxy traffic (a rough sketch of that pattern is below). Simplicity trumps everything if you can’t configure complex networks securely.
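
    A minimal sketch of that single-node pattern using the Docker Python SDK (docker-py); the image, container and network names are placeholders. The backend network is internal-only, so only the proxy, which joins both networks, is reachable from outside:

    ```python
    # Sketch only: one internal-only network for backends, one edge network for
    # the reverse proxy. Requires the `docker` package and a local Docker daemon.
    import docker

    client = docker.from_env()

    backend_net = client.networks.create("backend", driver="bridge", internal=True)
    client.networks.create("edge", driver="bridge")

    # Hypothetical application container, reachable only on the backend network.
    client.containers.run("ghcr.io/example/app:latest", name="app",
                          detach=True, network="backend")

    # The proxy is the only container with a published port; it joins both networks.
    proxy = client.containers.run("nginx:alpine", name="proxy", detach=True,
                                  network="edge", ports={"443/tcp": 443})
    backend_net.connect(proxy)
    ```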


  • The nice thing about a VM with Nginx Proxy Manager (or plain nginx) running on the same host as the rest (or the majority) of the VMs is that internal traffic doesn’t traverse other devices. This only applies if your backend services aren’t configured with TLS, so you’re effectively terminating at the proxy and running unencrypted traffic to the backend. That being said, the chance of a packet sniffer running on your internal network between the proxy and the destination VM is low.

    I’m in a similar situation to you: I run an overpowered router that barely sees any CPU usage.

    I tried the nginx plugin for OPNsense, but the GUI doesn’t seem to support proxying by header (locations are path based), and I don’t want to SSH in and mess with raw config files. So I’m running the HAProxy plugin on the OPNsense router, which is what most people in the community forums seem to use. After going through a tutorial for one service, the configuration concept is pretty easy to grasp and replicate for other services. The only confusing part is that both backend pools and rules can have backends configured, and only one of them is actually in use when you assign rules to a public service. The test-syntax button ensures you don’t make mistakes, and HAProxy has more powerful backend options than you’ll probably ever need. I moved the router management port to a higher number, set the proxy to listen on 443, and pointed a wildcard DNS entry at the router, which lets me keep adding services as needed.
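
    A quick sanity check (sketch, with a hypothetical *.lab.example wildcard zone) that every subdomain really lands on the router address, where HAProxy then picks the backend from the Host header/SNI rather than from the IP:

    ```python
    # Every name under the wildcard should resolve to the same router address.
    import socket

    for name in ("jellyfin.lab.example", "grafana.lab.example", "brand-new.lab.example"):
        print(f"{name} -> {socket.gethostbyname(name)}")
    ```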



  • I think there are many levels at which to approach this problem. First off, the obvious one: investigate why your org’s DNS is having issues. This is an IT request and they should fix it; they should have an SLA on such a critical service, and not fixing it should escalate to management. There may be many reasons why a resolver stops working, especially in complex multi-site setups. This is the best option, as it solves this and probably other DNS-related issues.

    The rogue approach: if you only host the service for a handful of users that you personally know, and you have the ability to edit your hosts file, you can bypass DNS completely. This isn’t ideal, since it has to be done on every system and has to be redone whenever your IP changes. It also depends on your level of access to the system, i.e. whether you can even change the hosts file; a sketch of the edit is below.
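
    The edit itself is one static line per machine; here is a sketch that appends it idempotently (the address and hostname are made up, and it needs admin rights):

    ```python
    # Sketch: add a static hosts-file override on Linux/macOS or Windows.
    from pathlib import Path
    import platform

    ENTRY = "203.0.113.10  portal.corp.example"

    hosts = Path(r"C:\Windows\System32\drivers\etc\hosts") \
        if platform.system() == "Windows" else Path("/etc/hosts")

    if ENTRY not in hosts.read_text():
        with hosts.open("a") as fh:
            fh.write(ENTRY + "\n")
    ```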

    An alternative, crazier idea is to host your own DNS: change the DNS setting in your network configuration to point at it, and have it forward to your org’s DNS upstream. Same problem as the hosts file, though: you’ll need to change that setting on every system that needs connectivity.
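
    A bare-bones sketch of that forwarding setup with made-up addresses; a real deployment would use a proper resolver with its own override records, this just shows the plumbing of relaying queries to the org DNS:

    ```python
    # Sketch: a UDP DNS forwarder. Clients point their DNS here; queries are
    # relayed verbatim to the org resolver (addresses are hypothetical).
    import socket

    UPSTREAM = ("10.0.0.53", 53)   # org DNS server (example address)
    LISTEN = ("0.0.0.0", 53)       # binding port 53 needs root/administrator

    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(LISTEN)

    while True:
        query, client = srv.recvfrom(512)           # classic UDP DNS size limit
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
            upstream.settimeout(2.0)
            upstream.sendto(query, UPSTREAM)
            try:
                answer, _ = upstream.recvfrom(4096)
            except socket.timeout:
                continue                            # drop it; the client will retry
        srv.sendto(answer, client)
    ```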

    Expanding on the own-DNS approach, you could go as far as hosting your own network: WiFi, or a switch if you need an Ethernet cable connection. You can buy used enterprise equipment cheaply, plug it in, configure it to point at your own DNS, and anyone connected to your network gets your settings. Of course this is full-on shadow IT and I would discourage you from pursuing it.

    A less crazy and rogue option is to use something like Tailscale (or similar), which comes with its own DNS (MagicDNS). You would need the agent installed on every client.


  • Here is my security point of view. A second instance would be too much overhead for just one use case of sharing a file. You have to decide how comfortable you are with exposing anything in your private network. I would personally not expose a Nextcloud instance, because it’s a complex application with many modules, each possibly having 0-day exploits.

    If your goal is to share a file and self-host, I would look into dedicated apps for that purpose. You can set up a simple microbin/privatebin on dedicated hardware in a DMZ network behind a firewall. You should run IDS/IPS on your open ports (pfSense/OPNsense have that, and it pairs nicely with CrowdSec). You could also look into Cloudflare Tunnels to expose your dedicated file-sharing app, but I would still use as much isolation as possible (ideally physical hardware) so that a breach can’t easily compromise your local network.

    Regardless, a self-hosted solution will always pose risks and management overhead if you want to run a tight setup. It’s much easier to use a public cloud solution; for example, Proton Drive is encrypted and you can share files with people via links.