
  • They're basically the exclusive target for GrapheneOS because of their feature set:

    Non-exhaustive list of requirements for future devices, which are standards met or exceeded by current Pixel devices:
    
        Support for using alternate operating systems including full hardware security functionality
        Complete monthly Android Security Bulletin patches without any regular delays longer than a week for device support code (firmware, drivers and HALs)
        At least 5 years of updates from launch for device support code with phones (Pixels now have 7) and 7 years with tablets
        Device support code updated to new monthly, quarterly and yearly releases of AOSP within several months to provide new security improvements (Pixels receive these in the month they're released)
        Linux 6.1, 6.6 or 6.12 Generic Kernel Image (GKI) support
        Hardware accelerated virtualization usable by GrapheneOS (ideally pKVM to match Pixels but another usable implementation may be acceptable)
        Hardware memory tagging (ARM MTE or equivalent)
        Hardware-based coarse grained Control Flow Integrity (CFI) for baseline coverage where type-based CFI isn't used or can't be deployed (BTI/PAC, CET IBT or equivalent)
        PXN, SMEP or equivalent
        PAN, SMAP or equivalent
        Isolated radios (cellular, Wi-Fi, Bluetooth, NFC, etc.), GPU, SSD, media encode / decode, image processor and other components
        Support for A/B updates of both the firmware and OS images with automatic rollback if the initial boot fails one or more times
        Verified boot with rollback protection for firmware
        Verified boot with rollback protection for the OS (Android Verified Boot)
        Verified boot key fingerprint for yellow boot state displayed with a secure hash (non-truncated SHA-256 or better)
        StrongBox keystore provided by secure element
        Hardware key attestation support for the StrongBox keystore
        Attest key support for hardware key attestation to provide pinning support
        Weaver disk encryption key derivation throttling provided by secure element
        Insider attack resistance for updates to the secure element (Owner user authentication required before updates are accepted)
        Inline disk encryption acceleration with wrapped key support
        64-bit-only device support code
        Wi-Fi anonymity support including MAC address randomization, probe sequence number randomization and no other leaked identifiers
        Support for disabling USB data and also USB as a whole at a hardware level in the USB controller
        Reset attack mitigation for firmware-based boot modes such as fastboot mode zeroing memory left over from the OS and delaying opening up attack surface such as USB functionality until that's completed
        Debugging features such as JTAG or serial debugging must be inaccessible while the device is locked
    

    From https://grapheneos.org/faq#device-support






  • Definitely overkill lol. But I like it. I haven’t found a more complete solution that doesn’t feel like a comp sci dissertation yet.

    The goal is pretty simple. Express as much as possible as helm values, k8s manifests, tofu, ansible, or cloud-init, in that order of preference, because as you go up the stack you get more state management for “free”. Stick that in git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 copies, 2 different media, 1 off-site).
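
    As a minimal sketch of why the top of that order wins: a Helm values override committed to git (the chart and values here are hypothetical) is the whole change, and Helm carries the release state, diffs, and rollbacks for you:

    ```yaml
    # values/nginx.yaml: hypothetical Helm values override committed to git.
    # Helm tracks releases and rollbacks, so "state management" comes free
    # compared to pushing the same change through ansible or cloud-init.
    replicaCount: 2
    image:
      tag: "1.27"
    service:
      type: ClusterIP
    ```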


  • Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3 to manage my OS deployments to bare metal from within k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install time or as cloud-init k8s objects. Ansible for the switches (as KubeOVN in Harvester gets better, the default separate hardware might be removed); I’m not brave enough for cross-planning that yet. For backups I use Velero and shoot that into the cloud encrypted, plus some nodes that I leave offline most of the time except to do backups and update them. I use Hauler manifests and a kube cronjob to grab images, helm charts, RPMs, and ISOs into local storage. I use SOPS to store the secrets I need to bootstrap in git. OpenTofu for application configs that are painful in helm. Ansible for everything else.
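
    Something like this Velero Schedule covers the nightly off-site part; it’s a sketch, and the names, timing, and storage location are placeholder assumptions, not my exact config:

    ```yaml
    # Hypothetical Velero Schedule: nightly backup of everything, shipped
    # to an encrypted off-site object store. All names are placeholders.
    apiVersion: velero.io/v1
    kind: Schedule
    metadata:
      name: nightly-offsite
      namespace: velero
    spec:
      schedule: "0 3 * * *"           # 03:00 daily
      template:
        includedNamespaces: ["*"]
        storageLocation: offsite-s3   # hypothetical BackupStorageLocation
        ttl: 720h0m0s                 # keep 30 days
    ```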

    For total rebuilds I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO which, assuming the network is up enough to configure, rebuilds from scratch; then I have a manual step to restore lost data.
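
    The cloud-init half of that is small; something like this rides on the ISO (the repo URL and bootstrap script are hypothetical placeholders):

    ```yaml
    #cloud-config
    # Minimal rebuild sketch baked into the Rocky/SLES installer ISO.
    packages:
      - git
    runcmd:
      # assumes the network came up far enough to reach the git remote
      - git clone https://git.example.com/homelab.git /opt/homelab
      - /opt/homelab/bootstrap.sh   # hypothetical script that replays the git config
    ```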

    That covers everything infra except physical layout in a git repo. I just got a PiKVM v4 on order along with a PiKVM switch, so hopefully I can get more of the junk onto Metal3 for proper power control too, and fewer iPXE shenanigans.
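
    Once the PiKVM is presenting a Redfish endpoint, the power-control side is just a Metal3 BareMetalHost object; this is a hypothetical sketch, with the BMC address, MAC, and names made up:

    ```yaml
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: lab-node-1
      namespace: metal3-system
    spec:
      online: true                      # Metal3 drives power state to match this
      bootMACAddress: "aa:bb:cc:dd:ee:01"
      bmc:
        # PiKVM can expose a Redfish BMC; address and creds are placeholders
        address: redfish-virtualmedia://10.0.0.50/redfish/v1/Systems/1
        credentialsName: lab-node-1-bmc-secret
    ```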

    Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and, if they pass, merging the staged branch into prod. I do a little of that manually already, but would really like to automate it. Once I do that, I’ll start running Renovate to grab the latest stable versions of stuff for me.
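
    The pipeline I’m after is basically this shape; a hypothetical GitHub-Actions-style sketch, where the wrapper scripts and branch names are assumptions:

    ```yaml
    name: staging-integration
    on:
      push:
        branches: [staging]
    jobs:
      integration:
        runs-on: self-hosted
        steps:
          - uses: actions/checkout@v4
          - name: Deploy mock lab as Harvester VMs
            run: ./ci/deploy-mock-lab.sh    # hypothetical wrapper script
          - name: Run integration tests
            run: ./ci/run-tests.sh          # hypothetical test harness
          - name: Promote staging to prod
            if: success()
            run: git push origin HEAD:prod  # fast-forward prod to the tested commit
    ```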


  • As THE USB-C PD evangelist, I have to say: fair. Like, PD EPR is definitely reaching the limits of the USB-C form factor to me, and data over copper is a dead end at some point too.

    I still want every device I have on it. Though as we scale past the 240-watt range (and I do…) or go to longer distances (also me), it’s just going to have to be another interface, and probably another medium for data, for the protocol. So far MPO for data and, honestly, pogo pins for power are the best I’m seeing.

    Again, for everything that’s not a serious power device (well pumps, servers, AC/heat pumps, power tools, etc.) or a serious data server/client, it’s fine, which is seriously impressive.

    Rant over. I also like the idea of better hardware stats reported to the OS. It’s one reason I fell in love with software RAID over hardware RAID.