• Scoopta@programming.dev · 6 months ago

    Should probably fix that, given we’ve been out of IPv4 for over a decade now and v6 is only becoming more widely deployed

    • PenisWenisGenius@lemmynsfw.com · 6 months ago (edited)

      I use IPv6 when possible, but it’s rarely possible. I’ve never had home internet that was IPv6-ready enough for my WAN address, when googling “what’s my IP”, to be anything besides an IPv4 address.

      Could I get IPv6 over otherwise non-IPv6-compatible hardware using a VPN?
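      Yes, tunneling IPv6 over an IPv4-only connection is exactly what VPN-based tunnel brokers do. A minimal sketch of a WireGuard client config, assuming a hypothetical provider that assigns you an address from its prefix and routes all your v6 traffic (the keys, addresses, and endpoint below are placeholders):

      ```ini
      [Interface]
      # IPv6 address the provider assigns you inside the tunnel (placeholder)
      Address = 2001:db8:1234::2/64
      PrivateKey = <your-private-key>

      [Peer]
      PublicKey = <provider-public-key>
      # Tunnel endpoint is reached over plain IPv4, so no native v6 is needed
      Endpoint = 203.0.113.1:51820
      # Route all IPv6 traffic through the tunnel
      AllowedIPs = ::/0
      PersistentKeepalive = 25
      ```

      Hurricane Electric’s free tunnel broker does the same thing with a 6in4 tunnel (IP protocol 41) instead of WireGuard, if your router can pass that.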

    • 0x0@programming.dev · 6 months ago

      we’ve been out of IPv4 for over a decade now

      Really? Haven’t had trouble allocating new VPSs with IPv4 as of late…

      • frezik@midwest.social · 6 months ago

        You’re probably in a country that got a ton of allocations in the 90s. If you were in a country that was a little late to build out its infrastructure, or even tried to set up a new ISP in just about any country, you’d have a much harder time.

    • renzev@lemmy.world (OP) · 6 months ago

      Agreed. Though I wonder if IPv6 will ever displace IPv4 in things like virtual networks (Docker, VPNs, etc.) where there’s no need for a bigger address space

      • Justin@lemmy.jlh.name · 6 months ago (edited)

        I’m using IPv6 on Kubernetes and it’s amazing. Every Pod has its own global IP address. There is no NAT and no giant ARP table slowing down the other machines on my network. Each of my nodes announces a /112 for itself to my router, letting it hand out addresses to over 65k pods. There is no feasible limit to the number of IP addresses I could assign to my containers and load balancers, and no routing overhead. I have no need for port forwarding on my router or worrying about dynamic IPs, since I just have a /80 block with no firewall that I assign to my public-facing load balancers.
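        For a sense of the scale those prefixes give you, the arithmetic is easy to check with Python’s `ipaddress` module (the `2001:db8::` prefixes below are documentation addresses, not my actual ranges):

        ```python
        import ipaddress

        # A /112 leaves 128 - 112 = 16 host bits: 65,536 addresses per node.
        node_prefix = ipaddress.ip_network("2001:db8:0:1::/112")
        print(node_prefix.num_addresses)  # 65536

        # A /80 leaves 48 host bits: ~281 trillion addresses for load balancers.
        lb_prefix = ipaddress.ip_network("2001:db8:0:2::/80")
        print(lb_prefix.num_addresses)  # 281474976710656

        # Individual pod addresses are just offsets into the node's prefix.
        pod_ip = node_prefix[300]
        print(pod_ip)  # 2001:db8:0:1::12c
        ```
        
        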

        Of course, I only have around 300 pods on my cluster, and realistically it’s not possible to run over a million containers in a current Kubernetes cluster due to other limitations. But it’s still a huge upgrade: less overhead, less complexity, and more room to scale.