I have too many machines floating around, some virtual, some physical, and they’re getting added and removed semi-frequently as I play around with different tools and try out ideas. One recurring pain point is that I have no easy way to manage SSH keys across them, and it’s a pain to deal with adding/removing/cycling keys. I know I can use AuthorizedKeysCommand
in sshd_config to make the system fetch a remote key for validation, and I know I could theoretically publish my pub key to GitHub or similar, but I’m wondering if there’s something more flexible/powerful where I can manage multiple users (essentially roles), such that each machine can be assigned a role and automatically allow access accordingly?
I’ve seen Keyper before, but the container hasn’t been updated in years, and the owner of the support Discord actively kicks everyone from the server, even for just asking questions.
Is there any other solution out there that would streamline this process a bit?
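For reference, the AuthorizedKeysCommand approach mentioned above looks roughly like this in sshd_config; the script path is hypothetical, and %u expands to the username being authenticated:

```
# /etc/ssh/sshd_config (illustrative)
AuthorizedKeysCommand /usr/local/bin/fetch-keys %u
AuthorizedKeysCommandUser nobody
```

sshd runs the command at login time and treats its stdout as the authorized_keys content, so the script can pull keys from anywhere.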
I would switch to certificate-based SSH authentication.
All the server keys get signed by your CA, and all clients also get signed by your CA. Everyone implicitly trusts each other through the CA, and it’s as safe as regular SSH keys.
You can also sign short-lived client keys if you want to make revocation easier; the servers don’t care, because now all they care about is that it’s a valid cert issued by the CA, and signing can be done entirely offline!
HashiCorp Vault can also help manage the above, but it’s also pretty easy to do manually.
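To make the certificate flow concrete, here’s a minimal sketch using plain OpenSSH tooling; all key names and principals here are made up for illustration:

```shell
# Generate the CA key pair (do this once; keep ca_key offline).
ssh-keygen -q -t ed25519 -f ca_key -N '' -C 'my-ssh-ca'

# Sign a server's host key so clients can trust it via the CA (-h = host cert).
ssh-keygen -q -t ed25519 -f host_key -N ''
ssh-keygen -q -s ca_key -I server1 -h -n server1.example.com host_key.pub
# -> produces host_key-cert.pub

# Sign a client's user key so servers can trust it via the CA.
ssh-keygen -q -t ed25519 -f user_key -N ''
ssh-keygen -q -s ca_key -I alice -n alice user_key.pub
# -> produces user_key-cert.pub

# Inspect the resulting certificate (key ID, principals, validity).
ssh-keygen -L -f user_key-cert.pub
```

On each server you’d then point TrustedUserCAKeys in sshd_config at the CA public key, and on each client add a @cert-authority line for the CA to known_hosts, so neither side needs per-key entries anymore.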
I do this, using the Smallstep CA/CLI to manage the lot. It’s amazing.
It’s such an underrated feature. It baffles me how people immediately turn to overly complicated solutions for a problem they don’t really have to solve, just because everyone assumes the only way is the commonly known default. Like OP, people immediately jump to the conclusion that you need extra software to manage the keys, rather than using another authentication method that’s natively supported, and keep filling their known_hosts file with junk, making the whole validation process useless because everyone just accepts whatever key the host presents.
It’s amazing how simple it is. Developer needs temporary access to debug a web server? Sure, here’s your 2h valid cert to log in as the web user on the server, don’t even need to actually log into the server to put their key in and then remove it. I mint a cert and it’s ready to go on whichever users and servers I specified in the cert. Can’t even gain persistence because regular authorized_keys is disabled and we have limited session durations.
I regularly leave people baffled at work because I come up with a clever, built-in, super simple solution to something they expected to just slap more scripts and software onto, working around the only way they know to use the software. Read your manpages in full, folks; it’ll save you so much work. Know what your software is capable of.
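A hedged sketch of that short-lived access flow with stock ssh-keygen; the names are illustrative, and -V +2h is what sets the expiry:

```shell
# One-time: the CA key (kept offline in practice).
ssh-keygen -q -t ed25519 -f ca_key -N ''

# The developer's key, signed for 2 hours, only valid as the "www" principal.
ssh-keygen -q -t ed25519 -f dev_key -N ''
ssh-keygen -q -s ca_key -I dev-debug -n www -V +2h dev_key.pub

# The cert shows its principals and validity window; it simply stops
# working after 2 hours, nothing to revoke or clean up.
ssh-keygen -L -f dev_key-cert.pub
```

To also kill the persistence angle described above, sshd_config on the server can set AuthorizedKeysFile none so only CA-signed certs are accepted.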
That’s a long rant, but you’re on point with it. I have a colleague who refuses to try new things because they don’t understand that it makes life easier; I do tend to find the solutions that are simpler and easier to work with.
Is Smallstep free to self-host? Looking at their pricing page, it’s kind of unclear, and their SaaS is pretty pricey.
It sure is. Give this a read: DIY-Single-Sign-On-for-SSH
I’ve been using https://github.com/warp-tech/warpgate for essentially this purpose. It does kind of put all of your eggs in one basket so don’t expose this to the Internet and probably keep at least one other machine that has all the keys. I haven’t had any catastrophic issues so far other than my host going down (unrelated to this tool).
You could use an LDAP and/or Kerberos solution to centralize user management. Alternatively, you could use Ansible.
To add to this: if everything is Linux-based, take a look at FreeIPA / RHEL IdM.
I quite like Tailscale SSH for this, but I don’t have as many machines, so not sure how it will scale. You can definitely assign roles here to allow/deny SSH between hosts in your fleet though.
Terrible idea of the day: you could use something like NFS and mount the share on all clients. On that share you can keep the latest keys, then use symlinks to update, etc.
Something like Puppet, Chef, or Ansible is likely a better choice.
You’re the devil.
Did we work together, maybe?
You could try SSH certificates using something like https://smallstep.com/sso-ssh/ ; essentially you delegate validation of your public key to an IdP, which your servers are configured to trust.
The other approach would be something like Ansible or Puppet to deploy trusted keys to all servers.
Hm… these are both interesting, but might be a bit overkill IMO.
I don’t think I’d need a CA and an intermediary step if all sshd needs to do is check whether a key is currently approved for this particular service; and I last looked at Chef/Puppet many years ago, and it was way too much orchestration work that we no longer need with Docker containers and smaller-footprint host OSes.
This is one of the jobs of OpenLDAP.
Are you initiating SSH connections from all these hosts?
If you just need to SSH to these hosts, use a single key and copy the public key only to the hosts you need to connect to. If you don’t want to copy the pubkeys to target hosts, use LDAP + SSSD or certificates.
Then, if you do need to initiate connections from these hosts, you can use an SSH agent: forward your agent and SSH on to the next host:
client> ssh -A host1
host1> ssh host2
host2>

client> ssh -A host1
host1> ssh -A host2
host2> ssh -A host3
host3>
I have an alias so trusted hosts can bounce through my authorization host and end up in a tmux session on the targeted host. It has logging and such, but mostly it’s for simplicity.
If I plan to use that connection a lot, there’s a script to cat my priv key through the relay.
I have an scp alias too, but that gets more complicated.
For more sensitive systems I have 2fa from gauth set up, works great.
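For anyone curious, the TOTP setup being described is typically PAM’s pam_google_authenticator module plus a couple of sshd_config switches; a rough sketch, with the exact policy as an assumption:

```
# /etc/pam.d/sshd
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config
KbdInteractiveAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
```

With AuthenticationMethods set like this, a login needs both a valid key and a valid TOTP code.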
This is a common pattern, typically called a “jump host” or “bastion host”.
a script to cat my priv key through the relay
When it comes to security, I typically recommend against rolling your own. SSH already has an agent forwarding option to do this securely, and the -J option to accomplish the same without even needing to forward the key. The agent can seem complex at first, but it’s actually pretty simple and worth learning.
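For example, hopping through two intermediate hosts is just `ssh -J host1,host2 host3`, and the same thing can be made permanent in ~/.ssh/config (host names here are placeholders):

```
# ~/.ssh/config
Host host3
    ProxyJump host1,host2
```

After that, a plain `ssh host3` routes through both hops automatically, with no keys or agents exposed on the intermediate machines.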
Feel free to message me if you have more questions, I’ve got lots of experience w/ SSH.
I did not know about -J; I rolled my own because I’ve been doing it forever, and many of my tricks (non-SSH ones included) aren’t as easily portable across different OSes.
For some reason ssh-copy-id has sometimes been failing for me lately because it can’t reach the agent, while cat always works; but I never learned much about ssh-agent, so let me look into that now, thanks for the tip!
Sometimes the obvious solution is the way to go.
Your idea sounds good: go ahead and publish your pubkey(s) to a fully public URL you control and can memorize.
Then you can stash or memorize the curl command needed to grab the key(s) and authorize them wherever needed.
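A rough sketch of that flow; the URL is hypothetical (GitHub, for instance, serves a user’s keys at https://github.com/&lt;username&gt;.keys), and the loop skips keys that are already authorized:

```shell
# Hypothetical source URL; in real use you'd fetch it with curl:
#   curl -fsSL https://example.com/my_keys > fetched_keys
# For this sketch, stand in for the download with a literal key line.
echo 'ssh-ed25519 AAAAexamplekey me@laptop' > fetched_keys

# Append each fetched key to authorized_keys, skipping exact duplicates.
touch authorized_keys
while IFS= read -r key; do
  grep -qxF -- "$key" authorized_keys || printf '%s\n' "$key" >> authorized_keys
done < fetched_keys
```

Running it repeatedly is safe: the grep check keeps authorized_keys free of duplicate entries.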
A lot of more complicated solutions are just fancy ways to safely move private keys around.
For my private keys, I prefer to generate a new one for each use case, and throw them out when I’m done with them. That way I don’t need a solution to move, share or store them.
Edit: Full disclosure - I do also use Ansible to deploy my public keys.
You could use LDAP with OpenLDAP, Keycloak, FreeIPA, etc. to set SSH keys for users.
If you want something simpler, you could use Ansible (or another config management tool), or just have a startup script that downloads the authorized_keys file from GitHub or wherever else you can store it.
And if you want something less simple, HashiCorp Vault supports dynamic SSH keys using certificates.
Some options:
- Use a build system like Foreman to automate the builds and put the key in place; it uses Puppet for config management after the build
- Use vanilla Puppet without Foreman
- Use Ansible
Ansible
I use ansible for that: https://docs.ansible.com/ansible/latest/collections/ansible/posix/authorized_key_module.html
Keys stored alongside my playbook in a git repository.
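A minimal playbook task for that module might look like this; the user name and key path are illustrative:

```yaml
- hosts: all
  tasks:
    - name: Ensure my current public key is authorized
      ansible.posix.authorized_key:
        user: deploy
        state: present
        key: "{{ lookup('file', 'keys/id_ed25519.pub') }}"
```

Flipping state to absent is also how you’d revoke a key fleet-wide in one run.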
all I know is that on NixOS you can declare the authorized keys for each user in the config
Yeah, the problem is that I have 2 physical servers, each with 5 to 10 VMs on it, and a bunch of other VMs scattered across different cloud providers; it gets tricky to edit the
~/.ssh/authorized_keys
file on each of them to reflect a new SSH key (i.e. a new machine on the “network”) or replace an existing SSH key (i.e. the annual key cycle).

Yeah, what I mean is that on NixOS you make one config for them all, and you’d just change the key in one spot.
You do realize that those machines are not necessarily NixOS, right? It’s best to separate SSH management from NixOS’s declarative nature, since what you would really want to be declarative is ACL rules, not network topology/SSH keys. For example, you can use Netbird or Tailscale and their respective SSH features.