• jordanwhite1@lemmy.world · 1 year ago

    I would have documented everything as I go.

    I am a hobbyist running a Proxmox server with a Docker host for my media server, a Plex host, a NAS host, and a Home Assistant host.

    I feel that if it were to break, it would take me a long time to rebuild.

    • bmarinov@lemmy.world · 1 year ago

      Ansible everything and automate as you go. It is slower, but if it’s not your first time setting something up it’s not too bad. Right now I literally couldn’t care less if the SD card in one of my Raspberry Pis dies, or if my monitoring backend needs to be reinstalled.
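      A minimal sketch of that approach (the host group, package names and paths are made up for illustration; it assumes plain Ansible against a Debian-based Pi):

      ```yaml
      # playbook.yml - hypothetical example: rebuild a monitoring Pi from scratch
      - hosts: monitoring_pis          # inventory group name is an assumption
        become: true
        tasks:
          - name: Install required packages
            ansible.builtin.apt:
              name:
                - prometheus-node-exporter
                - rsync
              state: present
              update_cache: true

          - name: Deploy node exporter settings
            ansible.builtin.copy:
              src: files/node-exporter.env   # hypothetical file kept in the repo
              dest: /etc/default/prometheus-node-exporter
              mode: "0644"
            notify: restart node exporter

        handlers:
          - name: restart node exporter
            ansible.builtin.service:
              name: prometheus-node-exporter
              state: restarted
      ```

      Re-provisioning after a dead SD card is then just `ansible-playbook -i inventory.ini playbook.yml` against a fresh OS image.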

      • Notorious@lemmy.link · 1 year ago

        IMO ansible is overkill for my homelab. All of my docker containers live on two servers, one remote and one at home. Both are built with docker compose, and the compose files and their data are backed up weekly to both servers and a third-party cloud backup. In the event one of them fails I have two copies of the data and could have everything back up and running in under 30 minutes.
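        A rough sketch of what that weekly backup can look like (the commenter doesn’t say which tools they use, so the rsync/restic combination, paths and hosts below are all assumptions):

        ```sh
        #!/bin/sh
        # weekly-backup.sh - hypothetical example, run weekly from cron
        set -eu

        SRC=/opt/stacks                      # compose files + bind-mounted data (assumed layout)
        REMOTE=backup@other-server:/backups  # the second server (assumed)

        # 1. mirror compose files and data to the other server
        rsync -a --delete "$SRC/" "$REMOTE/stacks/"

        # 2. push an encrypted copy to third-party cloud storage (restic is one option;
        #    assumes RESTIC_PASSWORD and S3 credentials are exported in the environment)
        restic -r s3:s3.amazonaws.com/my-backup-bucket backup "$SRC"
        ```

        A cron entry like `0 3 * * 0 /usr/local/bin/weekly-backup.sh` would run it every Sunday night.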

        I also don’t like that Ansible is owned by RedHat. They’ve shown recently they have zero care for their users.

        • echo@sopuli.xyz · 1 year ago

          if by “their users” you mean people who use rebuilds of RHEL, I guess

        • constantokra@lemmy.one · 1 year ago

          I didn’t realize that about Ansible. I’ve always thought it was overkill for me as well, but I figured I’d learn it eventually. Not anymore lol.

  • TechieDamien@lemmy.ml · 1 year ago

    I would have taken a deep dive into docker and containerised pretty much everything.

    • Toribor@corndog.uk · 1 year ago

      Converting my environment to be mostly containerized was a bit of a slow process that taught me a lot, but now I can try out new applications and configurations at such an accelerated rate it’s crazy. Once I got the hang of Docker (and Ansible) it became so easy to try new things, tear them down and try again. Moving services around, backing up or restoring data is way easier.

      I can’t overstate how impactful containerization has been to my self hosting workflow.

    • howrar@lemmy.ca · 1 year ago

      Same for me. I’ve known about Docker for many years now but never understood why I would want to use it when I can just as easily install things directly and just never touch them. Then I ran into dependency problems where two pieces of software required different versions of the same library. Docker just made this problem completely trivial.
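      As an illustration of why that works: each container ships its own userland, so the conflicting dependencies never meet. The images below are placeholders, not specific software:

      ```yaml
      # docker-compose.yml - two apps that would conflict if installed side by side on
      # the host, each pinned to the runtime/library versions it was built against
      services:
        app-old:
          image: ghcr.io/example/app-old:1.4   # hypothetical image built against libfoo 1.x
          volumes:
            - ./app-old-data:/data

        app-new:
          image: ghcr.io/example/app-new:2.7   # hypothetical image built against libfoo 2.x
          volumes:
            - ./app-new-data:/data
      ```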

    • ThorrJo@lemmy.sdf.org · 1 year ago

      Same, but I’ve never once touched Docker and am doing everything old skool on top of Proxmox. Others may or may not like this approach, but it has many of the benefits in terms of productivity (ease of experimentation, migration, upgrade etc)

  • tejrik@lemmy.sdf.org · 1 year ago

    I wouldn’t change anything, I like fixing things as I go. Doing things right the first time is only nice when I know exactly what I’m doing!

    That being said, in my current environment, I made a mistake when I discovered docker compose. I saw how wonderfully simple it made deployment and how it helped with version control, and decided to dump every single service into one singular docker-compose.yaml. Next time I would separate services into at least their relevant categories for ease of making changes later.
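    One way that split can look (the category names and services are just examples), with each directory holding its own compose file and data:

    ```
    stacks/
    ├── media/
    │   └── docker-compose.yaml      # jellyfin, the *arr apps, ...
    ├── monitoring/
    │   └── docker-compose.yaml      # grafana, prometheus, ...
    └── home/
        └── docker-compose.yaml      # home assistant, mqtt broker, ...
    ```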

    Better yet, I would automate deployment with Ansible… but that’s my next step in learning, and I can fix both mistakes as I go next time!

    • conrad82@lemmy.world · 1 year ago

      I do the same. I use a Caddy reverse proxy, and I find it useful to use the container name as the URL, with no ports exposed.
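      A minimal sketch of that pattern (service names and the domain are placeholders): only the proxy publishes ports, and it reaches the app by its compose service name over the shared network.

      ```yaml
      # docker-compose.yaml - only the reverse proxy publishes ports
      services:
        caddy:
          image: caddy:2
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./Caddyfile:/etc/caddy/Caddyfile

        jellyfin:
          image: jellyfin/jellyfin   # example app; note: no "ports:" section at all
      ```

      The Caddyfile then only needs something like `jellyfin.example.com { reverse_proxy jellyfin:8096 }`.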

      What is the benefit of separate files when making changes?

      • wraith@lemm.ee · 1 year ago

        If you have relevant containers (e.g. the *arr stack) then you can bring all of them up with a single docker compose command (or pull fresh versions etc.). If everything is in a single file then you have to manually pull/start/stop each container or else you have to do it to everything at once.
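        For example, with one directory (or one -f file) per stack, updating just the *arr stack looks like this, leaving everything else untouched (paths are assumptions):

        ```sh
        # update and restart only the *arr stack
        cd ~/stacks/arr
        docker compose pull
        docker compose up -d

        # or, with an explicit file from anywhere:
        docker compose -f ~/stacks/arr/docker-compose.yaml pull
        docker compose -f ~/stacks/arr/docker-compose.yaml up -d
        ```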

        • tejrik@lemmy.sdf.org · 1 year ago

          This. In addition, I’ve read that it’s considered best practice because it makes adding and removing services less of a pain.

          You’re not messing with stacks that benefit from extended uptime just to mess around with a few new projects. Since my wife uses networks that the homelab influences, changing things up would be the smarter long-term choice for me.

  • Toribor@corndog.uk · 1 year ago

    I should have learned Ansible earlier.

    Docker compose helped me get started with containers but I kept having to push out new config files and manually cycle services. Now I have Ansible roles that can configure and deploy apps from scratch without me even needing to back up config files at all.

    Most of my documentation has gone away entirely; I don’t need to remember things when they are defined in code.
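    Roughly what one of those roles can look like (the choice of app, paths and ports are just examples; the module names come from the community.docker collection):

    ```yaml
    # roles/uptime_kuma/tasks/main.yml - hypothetical role: data dir + container from scratch
    - name: Create data directory
      ansible.builtin.file:
        path: /opt/uptime-kuma
        state: directory
        mode: "0755"

    - name: Deploy the container
      community.docker.docker_container:
        name: uptime-kuma
        image: louislam/uptime-kuma:1
        restart_policy: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - /opt/uptime-kuma:/app/data
    ```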

  • ThorrJo@lemmy.sdf.org · 1 year ago

    Go with used & refurb business PCs right out of the gate instead of fucking around with SBCs like the Pi.

    Go with “1-liter” aka Ultra Small Form Factor right away instead of starting with SFF. (I don’t have a permanent residence at the moment so this makes sense for me)

    • constantokra@lemmy.one · 1 year ago

      Ah, but now you have a stack of Pis to screw around with, separate from all the stuff you actually use.

  • Brad Ganley@toad.work · 1 year ago

    For me:

    • Document things (configs, ports, etc) as I go
    • Uniform folder layout for everything (my first couple of servers were a bit wild-westy)
    • Choosing and utilizing some reasonable method of assigning ports to things. I do not even want to explain what I need to do when I forget what port something in this setup is using.
  • stanleytweedle@lemmy.world · 1 year ago

    Buy an actual NAS instead of a rat’s nest of USB hubs and drives. But now it works, so I’m too lazy and cheap to migrate it off.

  • Anarch157a@lemmy.world · 1 year ago

    I already did a few months ago. My setup was a mess: everything tacked onto the host OS, some stuff installed directly, other things as Docker containers, and the firewall was just a bunch of hand-written iptables rules…

    I got a newer motherboard and CPU to replace my ageing i5-2500K, so I decided to start from scratch.

    First order of business: Something to manage VMs and containers. Second: a decent firewall. Third: One app, one container.

    I ended up with:

    • Proxmox as VM and container manager
    • OPNsense as firewall. The server has 3 network cards (1 built-in, 2 on PCIe slots); the 2 add-on cards are passed through to OPNsense, and the built-in one is for managing Proxmox and for the containers.
    • A whole bunch of LXC containers running all sorts of stuff.

    Things look a lot more professional and clean, and it’s all much easier to manage.

      • oken735@yukistorm.com · 1 year ago

        Yes, you can pass through any GPU to containers pretty easily, and if you are starting with a new VM you can also pass through easily there, but if you are trying to use an existing VM you can run into problems.

      • Anarch157a@lemmy.world · 1 year ago

        Can’t say anything about CUDA because I don’t have Nvidia cards nor do I work with AI stuff, but I was able to pass the built-in GPU on my Ryzen 2600G to the Jellyfin container so it could do hardware transcoding of videos.

        You need the GPU drivers installed on the host OS, then you link the devices under /dev into the container. For AMD this is easy, because the drivers are open source and included in the distro (Proxmox is Debian-based); for Nvidia you’d have to deal with the proprietary stuff both on the host and in the containers.
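        For reference, on Proxmox that device linking usually comes down to a couple of lines in the container’s config (the container ID below is an example, and an unprivileged container also needs matching uid/gid mapping, which is omitted here):

        ```
        # /etc/pve/lxc/101.conf  (101 is just an example container ID)
        # allow access to the DRI character devices (major number 226)
        lxc.cgroup2.devices.allow: c 226:* rwm
        # bind-mount the host's GPU device nodes into the container
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
        ```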

  • das@lemellem.dasonic.xyz · 1 year ago

    I would have gone with an Intel CPU to make use of iGPU for transcoding and probably larger hard drives.

    I also would have written down my MariaDB admin password… Whoops

  • Showroom7561@lemmy.ca · 1 year ago

    Instead of a 4-bay NAS, I would have gone with a 6-bay.

    You only realize how expensive it is to expand your storage when you have to REPLACE HDDs rather than simply add more.

    • billm@lemmy.oursphere.space · 1 year ago

      Yes, but you’ll be wishing you had 8 bays when you fill the 6 :) At some point you have to replace disks to really increase space, so don’t make your RAID volumes consist of more disks than you can reasonably afford to replace at one time.

      Second lesson: if you have spare drive bays, use them as part of your upgrade strategy, not as additional storage. I started this last iteration with 6x3TB drives in a raidz2 vdev, and opted to add another 6x3TB vdev instead of biting the bullet and upgrading. To add more storage I now need to replace 6 drives. Instead, I built a second NAS to back up the primary, and am pulling all 12 disks and dropping back to 6. If/when I increase storage, I’ll drop 6 new ones in and MOVE the data instead of adding capacity.
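      In ZFS terms, the trade-off between the two approaches looks roughly like this (pool and disk names are placeholders; in practice you’d use /dev/disk/by-id paths):

      ```sh
      # original pool: one 6-disk raidz2 vdev
      zpool create tank raidz2 sda sdb sdc sdd sde sdf

      # option A: add a second 6-disk raidz2 vdev - more capacity, but now 12 disks to maintain
      zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

      # option B: grow the existing vdev by replacing each disk with a bigger one,
      # one at a time, letting the resilver finish between swaps
      zpool set autoexpand=on tank
      zpool replace tank sda sdm
      zpool status tank          # wait for resilver, then repeat for the next disk
      ```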

        • Luke@lemmy.nz · 1 year ago

          I’ve got the Argon ONE V2 with an M.2 drive. Works well, though I haven’t tested speeds. Not used as a NAS though.

      • Showroom7561@lemmy.ca · 1 year ago

        I’ve been pretty happy with my Synology NAS. Literally trouble-free, worry-free, and “just works”. My only real complaint is them getting rid of features in the Photos app, which is why I’m still on their old OS.

        But I’d probably build a second NAS on the cheap, just to see how it compares :)

        What OS would you go with if you had to build one?

        • Luke@lemmy.nz · 1 year ago

          I’m happy with Synology too, for the most part. But I like a bit more flexibility, so I’d probably build one and use TrueNAS or Unraid.

  • lemmy@lemmy.nsw2.xyz · 1 year ago

    Set up for high availability. I have a hard time taking things down now, since other people rely on my setup being on.

  • Nick@nickbuilds.net · 1 year ago

    Actually plan things and research. Too many of my decisions come back to bite me because I don’t plan out stuff like networking, resources, hard drive layouts…

    also documentation for sure

  • misaloun@reddthat.com · 1 year ago

    I always redo it lol, which is kind of a waste but I enjoy it.

    Maybe a related question is what I wish I could do if I had the time (which I will do eventually. Some I plan to do very soon):

    • self host wireguard instead of using tailscale
    • self host an ACME-like setup for self-signed certificates for TLS and HTTPS
    • self host encrypted git server for private stuff
    • set up a file watcher on clients to sync my notes on-save automatically using rsync (yes I know I can use syncthing. Don’t wanna!); there’s a rough sketch of this just below
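    A rough sketch of that last item, assuming inotify-tools on the client and SSH access to the server (the directory and host are placeholders):

    ```sh
    #!/bin/sh
    # watch-notes.sh - push notes to the server whenever a file is saved
    NOTES_DIR="$HOME/notes"         # local notes directory (assumed)
    DEST="user@server:notes/"       # remote rsync target (assumed)

    inotifywait -m -r -e close_write --format '%w%f' "$NOTES_DIR" |
    while read -r changed_file; do
        rsync -az "$NOTES_DIR/" "$DEST"
    done
    ```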
    • PhilBro@lemmy.world · 1 year ago

      WireGuard is super quick and easy to set up and use; I’d highly recommend doing that now. I don’t understand the recent obsession with Tailscale, apart from bypassing CGNAT.

      • dan@upvote.au · 1 year ago

        Tailscale is an abstraction layer built on top of WireGuard. It handles things like assigning IP addresses, sharing public keys, and building a mesh network without you having to do any manual work. People like easy solutions, which is why it’s popular.

        To manually build a mesh with Wireguard, every node needs to have every other node listed as a peer in their config. I’ve done this manually before, or you could automate it (eg using Ansible or a tool specifically for Wireguard meshes). With Tailscale, you just log in using one of their client apps, and everything just works automatically.
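        For a concrete picture, in a three-node mesh every node’s config carries a [Peer] block for each of the other two, something like this (keys, addresses and endpoints are placeholders):

        ```ini
        # /etc/wireguard/wg0.conf on node A (10.0.0.1); nodes B and C need the mirror image
        [Interface]
        Address = 10.0.0.1/24
        PrivateKey = <node-A-private-key>
        ListenPort = 51820

        # node B
        [Peer]
        PublicKey = <node-B-public-key>
        AllowedIPs = 10.0.0.2/32
        Endpoint = node-b.example.com:51820

        # node C
        [Peer]
        PublicKey = <node-C-public-key>
        AllowedIPs = 10.0.0.3/32
        Endpoint = node-c.example.com:51820
        ```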

    • dan@upvote.au · 1 year ago

      self host wireguard instead of using tailscale

      You can self-host a Headscale server, which is an open-source implementation of the Tailscale control server. The Tailscale client apps can connect to it.
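      The moving parts are roughly as follows (a sketch based on the Headscale docs; the port, volume path and domain depend on your config and are assumptions here):

      ```sh
      # run the Headscale control server (expects a config at ./config/config.yaml)
      docker run -d --name headscale \
        -v "$(pwd)/config:/etc/headscale" \
        -p 8080:8080 \
        headscale/headscale:latest serve

      # then point the normal Tailscale client at it instead of tailscale.com
      tailscale up --login-server https://headscale.example.com
      ```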

      • misaloun@reddthat.com · 1 year ago

        I don’t think there are any significant downsides. I suppose you are dependent on their infrastructure and uptime; if they ever go down, or for any reason stop offering their services, then you’re out of luck. But yeah, that’s not significant.

        The reason I want to do this is it gives me more control over the setup in case I ever wanted to customize it or the wireguard config, and also teaches me more in general, which will enable me to better debug.

        • dan@upvote.au · 1 year ago

          I suppose you are dependent on their infrastructure and uptime

          AFAIK their infra is only used for configuring the VPN. The VPN itself is a regular peer-to-peer Wireguard VPN. If their infra goes down while a VPN tunnel is connected, the tunnel should keep working. I’ve never tested that, though.

          You can self-host your own Headscale server to avoid using their infra.