This is a neat trick that people have been doing with YubiKeys for a long time, but from an operational-security perspective, if you have a fleet rather than just a couple of hosts, the win is only marginal compared with short-lived keys, certificates, and a phishing-proof IdP.
Seems a little pointless: your keys can't be stolen, but they can be instantly used by malware to persist across anything you have access to. The keys don't have any value in their own right; the access they provide does.
The idea with HSM-backed keys is that even in case of compromise, you can clean up without having to rotate the keys. It also makes auditing easier as you can ensure that if your machine was powered down or offline then you are guaranteed the keys weren't used during that timeframe.
That's still an improvement. In sophisticated attacks, attackers might well store stolen credentials and use them at a later, more opportune time.
Of course a real secure attention sequence would be preferable, such as requiring a Touch ID press on macOS for keys stored in the Secure Enclave. Not sure if TPM supports something similar when a fingerprint sensor is present?
Without a presence test (e.g. YubiKey touch) it's certainly not perfect. But it does close some real-world attacks: the key can only be used while your laptop is on (assuming a laptop, here).
And keys cannot be stolen from backups.
Or stolen without your knowledge when you left your laptop unguarded for 5min.
Not every attacker has persistent undetected access. If the key can be copied then there's no opportunity for the original machine's tripwires to be triggered by its use. Every second malware runs is a risk of it being detected. Not so, or not in the same way, with a copied key.
Android actually supports secure transaction confirmation on Pixel devices using a secure second OS that can temporarily take control of the screen and volume button as secure input and output! https://android-developers.googleblog.com/2018/10/android-pr...
This is really cool and goes beyond the usual steps of securing the key: it also handles "what you see is what you sign" and key-usage confirmation, which are normally left to the OS, where both input and output can be compromised much more easily.
We created Keeta Agent [0] to do this on macOS more easily (also works with GPG, which is important for things that don't yet support SSH Signatures, like XCode).
Since it just uses PKCS#11, it also works with tpm_pkcs11. Source for the various bits that are bundled is here [1].
Here's an overview of how it works:
1. Application asks to sign with GPG Key "1ABD0F4F95D89E15C2F5364D2B523B4FDC488AC7"
2. GPG looks at its key database and sees GPG Key "1ABD...8AC7" is a smartcard, reaches out to Smartcard Daemon (SCD), launching if needed -- this launches gnupg-pkcs11-scd per configuration
3. gnupg-pkcs11-scd loads the SSH Agent PKCS#11 module into its shared memory and initializes it and asks it to List Objects
4. The SSH Agent PKCS#11 module connects to the SSH Agent socket provided by Keeta Agent and asks it to List Keys
5. Key list is converted from SSH Agent protocol to PKCS#11 response by SSH Agent PKCS#11 module
6. Key list is converted from PKCS#11 response to gnupg-scd response by gnupg-pkcs11-scd
7. GPG reads the response and, if the key is found, asks the SCD (gnupg-pkcs11-scd) to Sign a hash of the Material
8. gnupg-pkcs11-scd asks the PKCS#11 module to sign using the specified object by its Object ID
9. PKCS#11 module sends a message to Secretive over the SSH Agent socket to sign the material using a specific key (identified by its Key ID) using the requested signing algorithm and raw signing (i.e., no hashing)
10. Response makes it back through all those same layers unmodified except for wrapping

(illustrated at [2])

[0] https://github.com/KeetaNetwork/agent

[1] https://github.com/KeetaNetwork/agent/tree/main/Agent/gnupg/...

[2] https://rkeene.org/tmp/pkcs-sign.png
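The wiring for steps 2–3 above typically lives in two GnuPG config files. A sketch only: the library path for the bundled SSH Agent PKCS#11 module is an assumption and depends on where your install places it:

```
# ~/.gnupg/gpg-agent.conf -- point GPG's smartcard handling at the bridge
scdaemon-program /usr/local/bin/gnupg-pkcs11-scd

# ~/.gnupg/gnupg-pkcs11-scd.conf -- point the bridge at the PKCS#11 module
# (the module path below is illustrative)
providers agent
provider-agent-library /usr/local/lib/ssh-agent-pkcs11.so
```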
I would love a world where I could put all my API keys in the TPM so malware couldn't gain persistent access to services after wiping my computer. This would be so easy if more providers used asymmetric keys, like through SSH or mTLS. Unfortunately, many don't, which means that stealing a single bearer token gives full access to services.
There's also the TPM speed issue. My computer takes ~500 ms to produce an ECC P-256 signature with the TPM, which becomes an issue when running scripts that perform git operations serially. This is a recurring problem that people tend to blame on export controls: https://stiankri.substack.com/p/tpm-performance
In some cases there is a workaround for bearer tokens. If the provider allows key/cert login to generate the token (either directly, or via OAuth), and the token can be generated with a short lifetime, you can build something pretty safe (certainly safer than having a non-expiring or long-TTL token in a wallet).
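A minimal sketch of that pattern. The endpoint, hostname, and `ttl` field are entirely invented for illustration (the real flow depends on your IdP); the client cert/key used for the mTLS login could themselves be hardware-backed:

```shell
# Hypothetical: authenticate with a client certificate (mTLS) and ask the
# identity provider for a deliberately short-lived bearer token.
curl --cert client.crt --key client.key \
     --data 'ttl=15m' \
     https://idp.example.com/token
```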
Apologies for asking this question here instead of actually doing the research, but it has always seemed to me that while putting keys in a secure environment helps against leakage of the private bits, there isn't a great story around making sure that only authorized requests get signed. Is this a stupid concern?
A YubiKey can require touch, and Secretive for the Apple Secure Enclave can require a touch with fingerprint ID. Some people disable these; it depends on your exact use case.
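The touch requirement is configurable per key slot. A sketch assuming the `ykman` CLI and a plugged-in key (commands won't run without the hardware present):

```shell
# Require a physical touch for every OpenPGP signature operation
ykman openpgp keys set-touch sig on

# For FIDO2-backed SSH keys, require user verification (PIN and/or touch)
# at key-generation time instead
ssh-keygen -t ed25519-sk -O verify-required
```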
Yes, but what's to stop a malicious actor from intercepting a signature request and substituting its own contents for the legitimate one? Yes, you would find out when your push was rejected, but that would be a bit late.
Anything PKCS#11 you can proxy. I'm using that on some systems: I have an old notebook with a Nitrokey HSM at home. It binds pkcs11-proxy to a local WireGuard interface, so I register the systems that should be able to use those keys with that notebook's WireGuard. They still need a PIN to unlock a session as well.
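Roughly like this, assuming the pkcs11-proxy project; the module path, WireGuard address, port, and environment-variable names are from memory and should be checked against your version:

```shell
# On the notebook with the HSM: export its PKCS#11 module, listening
# only on the WireGuard interface (10.10.0.1 is illustrative)
PKCS11_DAEMON_SOCKET="tcp://10.10.0.1:2345" \
    pkcs11-daemon /usr/lib/opensc-pkcs11.so

# On a registered client: point PKCS#11-aware tools at the proxy module
PKCS11_PROXY_SOCKET="tcp://10.10.0.1:2345" \
    ssh -I /usr/lib/libpkcs11-proxy.so user@host
```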
For Yubikey, this guide is worth looking at: https://github.com/drduh/yubikey-guide ("Community guide to using YubiKey for GnuPG and SSH - protect secrets with hardware crypto.")
It's also a bit outdated. OpenSSH supports FIDO2 natively, so all this gnupg stuff is unnecessary for ssh. One can even use yubikey-backed ssh keys for commit signing.
And the best thing is that you can create several different ssh keys this way, each with a different password, if that's something you prefer. Then you need to type the password _and_ touch the yubikey.
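For the commit-signing part, a runnable sketch. A software ed25519 key stands in for the hardware one so the commands work anywhere; with a FIDO2 key plugged in you would generate it with `ssh-keygen -t ed25519-sk` instead. File and repo names are illustrative:

```shell
# Stand-in for a hardware-backed key; swap in "-t ed25519-sk"
# when a FIDO2 token is plugged in.
ssh-keygen -q -t ed25519 -f ./id_signing -N "" -C "demo signing key"

# Configure one repo to sign commits with an SSH key instead of GPG
git init -q demo-repo
git -C demo-repo config gpg.format ssh
git -C demo-repo config user.signingkey "$(pwd)/id_signing.pub"
git -C demo-repo config commit.gpgsign true
```

Verifiers (GitHub, `git verify-commit`) then need the public key registered or listed in an allowed-signers file.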
These work flawlessly with the KeepassXC ssh-agent integration. My private keys are password protected, saved securely inside my password vault, and with my ssh config setup, I just type in the hostname and tap my Yubikey.
This assumes the server is running a recent enough OpenSSH, configured with this enabled. For Linux servers, sure. For routers, less obviously so.
We've got private Git repos only accessible through ssh (and the users' shell is set to git-shell) and it's SSH only through Yubikey. The challenge to auth happens inside the Yubikey and the secret never leaves the Yubikey.
This doesn't solve all the world's problems (like hunger and war), but at least people are definitely NOT committing to the repo without physically having the YubiKey and pressing it. Now, of course, a dev's computer may be compromised and he may confirm auth on his YubiKey and push things he didn't mean to, but that's a far cry from "we stole your private SSH key after you entered your passphrase on a Friday evening and are now pushing stuff in your name to 100 repos of yours over the weekend".
Keep a CA (constrained to your one identity) with a longish (90-day?) TTL on the TPM. Use it to sign short-lived (16h?) keys with your TPM, and use those as your working keys.
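A runnable sketch of that scheme using plain files; in the setup described, the CA private key would sit in the TPM rather than on disk, and names and lifetimes here are illustrative:

```shell
# Long-lived user CA (rotate every ~90 days; ideally TPM-resident)
ssh-keygen -q -t ed25519 -f ./user_ca -N "" -C "user CA"

# Fresh working key, certified for only 16 hours
ssh-keygen -q -t ed25519 -f ./work_key -N ""
ssh-keygen -q -s ./user_ca -I alice -n alice -V +16h ./work_key.pub

# Inspect the certificate's validity window
ssh-keygen -L -f ./work_key-cert.pub
```

The server then trusts the CA once via `TrustedUserCAKeys` in sshd_config, so the short-lived working keys never need to be distributed individually.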
This could make real sense for ssh host keys, since they need to be used without presence and they're generally tied to the lifetime of the machine anyway.
I saw a write up where someone successfully got sshd to use a host key from a fido2 yubikey without touch, but I can't find it...
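The likely shape of that setup, as an untested sketch: the token has to stay attached to the server, the no-touch flag must be baked in at key-generation time, and sshd-side support for `-sk` host keys depends on your OpenSSH version:

```shell
# Generate a host key on the FIDO2 token without a touch requirement
ssh-keygen -t ed25519-sk -O no-touch-required \
    -f /etc/ssh/ssh_host_ed25519_sk_key

# sshd_config: then reference it as a host key
# HostKey /etc/ssh/ssh_host_ed25519_sk_key
```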
As far as "TPM vs HSM", it is soooo much simpler to make a key pair with a fido2 hardware key:
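Something like the following, using standard OpenSSH flags (requires the token plugged in, so not runnable otherwise); `-O resident` stores the key handle on the token itself and `-O verify-required` demands PIN/touch per use:

```shell
# Generate a key pair whose private half lives on the FIDO2 token
ssh-keygen -t ed25519-sk -O resident -O verify-required

# On a new machine, pull the resident key stubs back off the token
ssh-keygen -K
```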
Or put them in a $2 FLOSS Gnuk token/smart card that you can carry with you and still have strong password protection and AES encrypted data at rest with KDF/DO: https://github.com/ran-sama/stm32-gnuk-usb-smartcard (you can get them for <$30).
Didn't Tailscale try to do something similar but found out quickly that TPMs 1) aren't as reliable as common wisdom makes them out to be, and 2) have gotchas when it comes to BIOS updates?
I can't find it now, but I believe someone from Tailscale commented on HN (or was it github?) on what they ran into and why the default was reverted so that things were not stored in the TPM.
EDIT: just saw the mention in the article about the BIOS updates.
I had a friend tell me once that his YubiKey is more secure than my authenticator app on my phone because my phone has this giant attack surface that his YubiKey doesn't. Yet the YubiKey inherits the entire attack surface of the computer it is plugged into, which is largely the same as, or worse than, my phone's.
I'm wondering why that doesn't apply here. The TPM holds the key to the cipher that is protecting your private keys. Someone uses some kind of RCE or LPE to get privileged access to your system. The malware then sits and waits for you to do something that requires access to your SSH keys. When you do, you are expecting whatever user prompts come up; the malware rides along on this expectation, gets hold of your private SSH keys, and stores them or sends them off somewhere. I'm not even positive they need a high degree of privilege on your box: if they can manipulate your invocation of the ssh client, by modifying your PATH or adding an ssh wrapper ahead of the real binary, this pattern will also work.
What am I gaining from using this method that I don't get from using a password on my ssh private key?
The promise of HSM, TPM and smart cards are that you have a tiny computer (microcontroller) where the code is easier to audit. Ideally a sealed key never leaves your MCU. The cryptographic primitives, secret keys and operations are performed in this mini-computer.
Further promises are RTC that can prevent bruteforce (forced wait after wrong password entry) or locking itself after too many wrong attempts.
A good MCU receives the challenge and only replies with the signature, if the password was correct. You can argue that a phone with a Titan security chip is a type of TPM too. In the end it doesn't matter. I chose the solution that works best for me, where I can either only have all keys in my smart card or an offline paper wallet too in a fireproof safe. The choice is the user's.
For SSH to use your keys a calculation has to be done using your private key and then send the results back to the remote site so it can validate that you got the results that prove you have your private key. The TPM and your yubikey do not do this calculation. They allow software on your computer to access the private key in plaintext form, perform this calculation, and then send the result (and then presumably overwrite the plaintext key in RAM). If your system has been compromised, then when this private key is provided to the host based software, it can be taken.
Yubikey (and nitrokey and other HSMs) are technically smart cards, which perform crypto operations on the card. This can be an issue when doing lots of operations, as the interface is quite slow.
Downvoted - this is false, sorry. The whole point of security keys (whether exposed via PKCS#11, or FIDO) is that the private key material never leaves the security key and instead the cryptographic operations are delegated to the key, just like a commercial HSM.
Technically, a private key that was imported (and is marked as exportable) to a PKCS#11 device can subsequently be re-exported (but even then, during normal operation the device itself handles the crypto), but a key generated on-device and marked as non-exportable guarantees the private key never leaves the physical device.
They can use the key as long as they can access your computer, but they shouldn't be able to get the secret key out of the TPM or Yubikey and use it elsewhere while your computer is off. That's the main point of HSMs.
Yeah but they already mentioned that they expect the attacker to hijack your ssh command so you'll touch it yourself, thinking you're authorizing something else than you actually are.
It does mean that they can't use the key a thousand times. But once? Yeah sure.
I know the title says "in your TPM chip" but the method described does not store your private key in the TPM, it stores it in a PKCS keystore which is encrypted by a key in your TPM. In actual use the plaintext of your private ssh key still shows up in your ssh client for validation to the remote host.
The recommended usage of a yubikey for ssh does something similar as otherwise your key consumes one of the limited number of slots on the key.
I really don't think this is true for FIDO2 like Yubikey. My understanding is that your ssh client gets a challenge from the server, reads the key "handle" from the private key file, and sends both to Yubikey. The device then combines its master key with the handle to get the actual private key, signs the challenge, and gives the result back to your ssh client. At no point does the private key leave the Yubikey.
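A conceptual sketch of that handle mechanism. This is NOT YubiKey's actual algorithm, just the general idea: one master secret that never leaves the device, plus an opaque per-credential handle stored in your "private key" file, deterministically yields the working key (modeled here as an HMAC):

```shell
# Master secret: never leaves the device.
# Handle: sits in the id_ed25519_sk file on disk, useless without the device.
MASTER_SECRET="on-device-master-secret"
HANDLE="opaque-handle-from-id_ed25519_sk"

# Device-side derivation, modeled as HMAC-SHA256(master, handle)
DERIVED=$(printf '%s' "$HANDLE" | \
    openssl dgst -sha256 -hmac "$MASTER_SECRET" | awk '{print $2}')
echo "derived key material: $DERIVED"
```

Because the derivation is deterministic, the device needs no per-credential storage, which is why non-resident FIDO2 keys are effectively unlimited in number.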
I don't know if you are missing anything. That's why I'm asking and making statements about how I understand the various processes to work. I want to understand how it is that the only device that interacts with the yubikey/TPM, when compromised, can't be subverted to the attacker's ends.
Or, alternatively, don't. Stuff in a TPM isn't for "security" in the abstract, it's fundamentally for authentication. Organizations want to know that the device used for connection is the one they expect to be connecting. It's an extra layer on top of "Organizations want to know the employee account associated with the connection".
"Your SSH keys" aren't really part of that threat model. "You" know the device you're connecting from (or to, though generally it's the client that's the mobile/untrusted thing). It's... yours. Or under your control.
All the stuff in the article about how the TPM contents can't be extracted is true, but missing the point. Yes, you need your own (outer) credentials to extract access to the (inner) credentials, which is no more or less true than just using your own credentials in the first place via something boring like a passphrase. It's an extra layer of indirection without value if all the hardware is yours.
TPMs and secure enclaves only matter when there's a third party watching[1] who needs to know the transaction is legitimate.
[1] An employer, a bank, a cloud service provider, a mobile platform vendor, etc... This stuff has value! But not to you.
TPMs can be useful to you as an individual if you're trying to protect against an evil maid attack. Although I think Linux isn't quite there yet with its support for it. The systemd folks are making progress though.
That only helps if you set a strong password as your TPM PIN. Otherwise it's hardware-bound with no access control, and just as susceptible to evil maid attacks as storing the keys directly in a file.
This may be bash-only, but a space before the command excludes it from history too.
Personally I like this, which also removes duplicate lines from history: export HISTCONTROL=ignoreboth:erasedups
> Not sure if TPM supports something similar when a fingerprint sensor is present?

The PIN can be an arbitrary string (a password).
A walkthrough of the FIDO2-key SSH setup: https://www.stavros.io/posts/u2f-fido2-with-ssh/
Well no thanks, that risk is much higher than what this is worth.
Depending on which authenticator app (or maybe this applies to all of them?), that data either is, or can be, backed up.
A yubikey cannot be cloned.[1]
> the malware rides along this expectation and gets ahold of your private SSH keys and stores them or sends them off somewhere.
Ah, this is where your misunderstanding lies. No, the crypto operation runs ON the TPM or yubikey. The actual secret key NEVER lives in RAM.
[1] You know what I mean. Of course in principle it can be. But not like a phone where it can literally be sent via scp.
https://wiki.archlinux.org/title/SSH_keys#Storing_SSH_keys_o...
And even the password can be forced to be re-entered by the agent for every use, if that level of security is wanted.
Which is what SSH keys are for?
The advantage of this approach is that malware can't just send off your private key file to its servers.