• d00ery@lemmy.world · 9 months ago

    Pi4 with 2TB SSD running:

    • Portainer
    • Calibre
    • qBittorrent
    • Kodi

    HDMI cable straight to the living room Smart TV (which is not connected to the internet).

    Other devices access media (TV shows, movies, books, comics, audiobooks) over DLNA using VLC, except for e-readers, which just use the Calibre web UI.

    The main router is flashed with OpenWrt and runs a DNS adblocker. Ethernet runs to a 2nd router upstairs and to the main PC, plus there's a small WiFi repeater with Ethernet in the basement. It’s not a huge house, but it does have old thick walls which are terrible for WiFi propagation.
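    For the curious, OpenWrt's official `adblock` package is one common way to get DNS ad-blocking like this. A rough sketch (the commenter doesn't say which adblocker they use, so this is just one plausible setup):

```shell
# Install and enable the OpenWrt "adblock" package, which feeds
# blocklists into the router's DNS backend (dnsmasq by default).
opkg update
opkg install adblock

# Turn it on via UCI and restart the service
uci set adblock.global.adb_enabled='1'
uci commit adblock
/etc/init.d/adblock restart

# Show active blocklists and the number of blocked domains
/etc/init.d/adblock status
```

    These are router-side provisioning commands, so they only make sense on an OpenWrt device.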

  • WaltzingKea@lemmy.nz · 9 months ago

    Bad. I have a Raspberry Pi 4 hanging from an HDMI cable that goes up to a projector, with a 2TB SSD hanging off the Raspberry Pi. I host Nextcloud and Transmission on the RPi and use Kodi for viewing media through the projector.

  • Presi300@lemmy.world · 9 months ago

    I only use the highest of grade when it comes to hardware

    Case: found in the trash

    Motherboard: some random Asus AM3 board I got as a hand-me-down.

    CPU: AMD FX-8320E (8 core)

    RAM: 16GB

    Storage: 5x 2TB HDDs + a 128GB SSD, and a 32GB flash drive as the boot device

    That’s it… My entire “homelab”

  • rambos@lemm.ee · 9 months ago

    1) DIY PC (running everything)

    • MSI Z270-A PRO
    • Intel G3930
    • 16GB DDR4
    • ATX PSU 550W
    • 250GB SSD for OS
    • 500GB SSD for data
    • 12TB HDD for backup + media

    2) Raspberry Pi 4 4GB (running a 2nd Pi-hole instance)
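    The usual point of a second Pi-hole instance is redundancy: if DHCP hands out both resolvers, clients keep working when one Pi-hole is down. On a dnsmasq-based router that's a one-line option (the addresses below are made up for the example, not from the comment):

```
# dnsmasq.conf: advertise both Pi-hole instances as DNS servers
dhcp-option=option:dns-server,192.168.1.10,192.168.1.11
```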

  • iggy@lemmy.world · 9 months ago

    Internet:

    • 1G fiber

    Router:

    • N100 with dual 2.5G NICs

    Lab:

    • 3x N100 mini PCs as k8s control plane + Ceph mon/mds/mgr
    • 4x Aoostar R7 “NAS” systems (5700U / 32GB RAM / 20TB rust / 2TB SATA SSD / 4TB NVMe) as Ceph OSDs / k8s workers

    Network:

    • Hodgepodge of switches I shouldn’t trust nearly as much as I do
    • 3x 8 port 2.5G switches (1 with poe for APs)
    • 1x 24 port 1G switch
    • 2x Omada APs

    Software:

    • All the standard stuff for media archival purposes
    • Ceph for storage (using some manual tiering in cephfs)
    • K8s for container orchestration (deployed via k0sctl)
    • A handful of cloud-hypervisor VMs
    • Most of the lab managed by some tooling I’ve written in Go
    • Alpine Linux for everything

    All under 120W power usage
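    k0sctl drives a whole cluster from one declarative file over SSH. A minimal sketch of a `k0sctl.yaml` for a 3-controller / 4-worker layout like the one above (addresses and user are placeholders, not the poster's actual config):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    # One of the three N100 control-plane nodes
    - role: controller
      ssh:
        address: 10.0.0.11
        user: root
    # One of the four Aoostar R7 workers
    - role: worker
      ssh:
        address: 10.0.0.21
        user: root
```

    Running `k0sctl apply --config k0sctl.yaml` then installs or reconciles k0s across every listed host.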

    • slazer2au@lemmy.world · 9 months ago

      How are you finding the Aoostar R7? I’ve had my eye on it for a while, but there’s not much talk about it outside of YouTube reviews.

      • iggy@lemmy.world · 8 months ago

        They’ve been rock solid so far, even through the initial sync from my old file server (pretty intensive network and disk usage for about 5 days straight). I’ve only been running them for about 3 months though, so time will tell. They are like most mini PC manufacturers with funny names, though: I doubt I’ll ever get any sort of BIOS/UEFI update.

  • cow@lemmy.world · 9 months ago

    I have 5 servers in total. All except the iMac are running Alpine Linux.

    Internet

    Ziply Fiber 100Mbps small business internet. Two Asus AX82U routers running in AiMesh.

    Rack

    Raising Electronics 27U rack

    N3050 NUCs

    One is running mailcow, dnsmasq, and unbound; the other is mostly idle.
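    A LAN recursive resolver with unbound needs very little configuration. A minimal `unbound.conf` sketch (addresses are placeholders; the comment doesn't show the actual config):

```
server:
    # Listen on all interfaces, but only answer the LAN
    interface: 0.0.0.0
    access-control: 192.168.0.0/16 allow
    access-control: 0.0.0.0/0 refuse
```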

    iMac

    The iMac is set up by my 3D printers. I use it to do slicing, and I run BlueBubbles on it for texting from Linux systems.

    Family Server

    Hardware

    • i7-7820X
    • Rosewill rackmount case
    • Corsair water cooler
    • 2x 4TB drives
    • 2x 240GB SSDs
    • Gigabyte motherboard

    Mostly doing nothing; currently using it to mine Monero.

    Main Cow Server

    Hardware

    • R7 3900XT
    • Rosewill rackmount case
    • 3x 18TB drives
    • 2x 1TB NVMe SSDs
    • Gigabyte motherboard

    Services

    • ZFS 36TB Pool
    • Secondary DNS Server
    • NFS (nas)
    • Samba (nas)
    • Libvirtd (virtual machines)
    • forgejo (git forge)
    • radicale (caldav/carddav)
    • nut (network ups tools)
    • caddy (web server)
    • turnserver
    • minetest server (open source blockgame)
    • miniflux (rss)
    • freshrss (rss)
    • akkoma (fedi)
    • conduit (matrix server)
    • syncthing (file syncing)
    • prosody (xmpp)
    • ergo (ircd)
    • agate (gemini)
    • chezdav (webdav server)
    • podman (running immich, isso, peertube, vpnstack)
    • immich (photo syncing)
    • isso (comments on my website)
    • matrix2051 (matrix to irc bridge)
    • peertube (federated youtube alternative)
    • soju (irc bouncer)
    • xmrig (Monero mining)
    • rss2email
    • vpnstack
      • gluetun
      • qbittorrent
      • prowlarr
      • sockd
      • sabnzbd
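    A "vpnstack" like the above is typically built by running gluetun as the network namespace and attaching the other containers to it, so all their traffic goes out through the VPN. A hedged sketch of how that might look with podman (provider and credentials are placeholders; this isn't the poster's actual commands):

```shell
# gluetun provides the VPN tunnel; it needs NET_ADMIN and the tun device
podman run -d --name gluetun \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=mullvad \
  docker.io/qmcgaw/gluetun

# qbittorrent shares gluetun's network namespace, so it has no
# network path except through the VPN container
podman run -d --name qbittorrent \
  --network container:gluetun \
  lscr.io/linuxserver/qbittorrent
```

    If the gluetun container stops, everything attached to it loses connectivity, which is exactly the kill-switch behavior you want for torrenting.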
      • cow@lemmy.world · 7 months ago

        I kind of prefer Miniflux, but I maintain the FreshRSS package in Alpine, so I have an instance to test things.

  • RegalPotoo@lemmy.world · 9 months ago
    • An HP ML350p with 2x hyper-threaded 8-core Xeons (I forget the model number) and 256GB DDR3, running Ubuntu and K3s as the primary application host
    • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
    • A random mini PC I got for free from work, running VyOS as my border router
    • A Brocade ICX 6610-48p as the core switch

    The hardware is total overkill. Software-wise, everything runs in containers, deployed into Kubernetes using helmfile, Jenkins, and Gitea.
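    Helmfile lets you declare a whole set of Helm releases in one file and reconcile them with `helmfile apply`. A minimal sketch of a `helmfile.yaml` (chart and namespace are generic examples, not the poster's actual deployment):

```yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  # Each entry is a Helm release helmfile will install/upgrade
  - name: nginx
    namespace: web
    chart: bitnami/nginx
```

    A CI job (Jenkins in this setup) can then run `helmfile apply` on every push to keep the cluster in sync with the repo.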

  • Hemi03@lemmy.blahaj.zone · 9 months ago
    • Pico PSU
    • ASRock N100M
    • Eaton 3S Mini UPS
    • 250GB SATA SSD for the OS
    • 4x 4TB SATA SSDs
    • PCIe SATA splitter

    All in a small PC case

    Server is running YunoHost

  • synae[he/him]@lemmy.sdf.org · 9 months ago

    A 13-year-old former gaming computer with 30TB of storage in RAID 6 that runs the *arrs, SABnzbd, and Plex. Everything is managed by k3s except Plex.

    Also, a 3-node DigitalOcean k8s cluster which runs services that don’t need direct access to the 30TB of storage, such as Grocy, Jackett, Nextcloud, a Solid server, and soon a Lemmy instance :)

      • Dave@lemmy.nz · 9 months ago

        My instance’s image cache is like 230GB. Plus a bunch more for the db. Can confirm storage is needed.

        (unrelated question 😶 - anyone running pictrs 0.5 on local storage happily?)

      • synae[he/him]@lemmy.sdf.org · 9 months ago

        Thanks for the heads up.

        I plan on using DigitalOcean’s Spaces (S3-alike) where possible, and it’s intended to be a personal instance, at least to start: just for me to federate with others and subscribe to my communities. Given that, do you think it’ll still use much disk (block device) storage?

        Might be time to familiarize myself with DO’s disk pricing…

  • dan@upvote.au · 9 months ago

    At home - Networking

    • 10Gbps internet via Sonic, a local ISP in the San Francisco Bay Area. It’s only $40/month.
    • TP-Link Omada ER8411 10Gbps router
    • MikroTik CRS312-4C+8XG-RM 12-port 10Gbps switch
    • 2 x TP-Link Omada EAP670 access points with 2.5Gbps PoE injectors
    • TP-Link TL-SG1218MPE 16-port 1Gbps PoE switch for security cameras (3 x Dahua outdoor cams and 2 x Amcrest indoor cams). All cameras are on a separate VLAN that has no internet access.
    • SLZB-06 PoE Zigbee coordinator for home automation - all my light switches are Inovelli Blue Zigbee smart switches, plus I have a bunch of smart plugs. Aqara temperature sensors, buttons, door/window sensors, etc.

    Home server:

    • Intel Core i5-13500
    • Asus PRO WS W680M-ACE SE mATX motherboard
    • 64GB server DDR5 ECC RAM
    • 2 x 2TB Solidigm P44 Pro NVMe SSDs in ZFS mirror
    • 2 x 20TB Seagate Exos X20 in ZFS mirror for data storage
    • 14TB WD Purple Pro for security camera footage. Alerts SFTP’d to offsite server for secondary storage
    • Running Unraid, a bunch of Docker containers, a Windows Server 2022 VM for Blue Iris, and an LXC container for a Borg backup server.
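    For reference, ZFS mirrors like the two above are created with one command per pool. A sketch (pool and device names are placeholders, and on Unraid this would normally go through the UI rather than the shell):

```shell
# NVMe mirror for the fast pool; ashift=12 aligns to 4K sectors
zpool create -o ashift=12 fastpool mirror /dev/nvme0n1 /dev/nvme1n1

# Spinning-rust mirror for bulk data storage
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
```

    A 2-disk mirror gives the capacity of one disk but survives a single drive failure, which matches the 2x20TB-for-data layout described.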

    For things that need 100% reliability, like email, web hosting, and DNS hosting, I have a few VPSes “in the cloud”. The one for my email is an AMD EPYC with 16GB RAM, 100GB NVMe space, and a 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch (but with 40Gbps instead of 10Gbps) in Los Angeles.

    I’ve got a bunch of other VPSes, mostly for https://dnstools.ws/ which is an open-source project I run. It lets you perform DNS lookup, pings, traceroutes, etc from nearly 30 locations around the world. Many of those are sponsored which means the company provides them for cheap/free in exchange for a backlink.
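    At its core, the kind of lookup dnstools.ws performs is an ordinary resolver query. A minimal Python sketch (stdlib only; this is an illustration, not the project's actual code):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses a hostname resolves to,
    in the spirit of the site's DNS lookup tool."""
    # getaddrinfo entries are (family, type, proto, canonname, sockaddr);
    # the address is the first element of sockaddr.
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

# Resolving loopback needs no network access:
print(resolve("localhost"))
```

    Running the same query from many geographically spread VPSes is what lets the site show how DNS answers differ by location.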

    This Lemmy server is on another GreenCloudVPS system: their ninth-birthday special, which has 9GB RAM and 99GB NVMe disk space for $99 every three years ($33/year).

  • thejevans@lemmy.ml · 9 months ago

    https://pixelfed.social/p/thejevans/664709222708438068

    EDIT:

    Server:

    • AMD 5900x
    • 64GB RAM
    • 2x10TB HDD
    • RTX 3080
    • LSI-9208i HBA
    • 2x SFP+ NIC
    • 2TB NVMe boot drive

    Proxmox hypervisor:

    • TrueNAS VM (HBA PCIe passthrough)
    • HomeAssistant VM
    • Debian 12 LXC as SSH entrypoint and Ansible controller
    • Debian 12 VM with Ansible controlled docker containers
    • Debian 12 VM (GPU PCIe passthrough) with Jellyfin and other services that use GPU
    • Debian 12 VM for other docker stuff not yet controlled by Ansible and not needing GPU
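    "Ansible-controlled Docker containers" usually means tasks built on the `community.docker` collection. A hedged sketch of what one such task might look like (host group, container, and image are examples, not the poster's actual playbook):

```yaml
# playbook.yaml: manage a container on the Docker VM via Ansible
- hosts: docker_vms
  tasks:
    - name: Ensure the web container is running
      community.docker.docker_container:
        name: web
        image: docker.io/library/nginx:stable
        state: started
        restart_policy: unless-stopped
```

    Re-running the playbook is idempotent: Ansible only recreates the container if the desired state (image, options) has drifted.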

    Router: N6005 fanless mini PC, 2.5Gbit NICs, pfsense

    Switch: MikroTik CRS 8-port 2.5Gbit, 2-port SFP+

      • thejevans@lemmy.ml · 9 months ago

        I have a Kasm setup with blender and CAD tools, I use the GPU for transcoding video in Immich and Jellyfin, and for facial recognition in Immich. I also have a CUDA dev environment on there as a playground.

        I upgraded my gaming PC to an AMD 7900 XTX, so I can finally be rid of Nvidia and their gaming and Wayland driver issues on Linux.