I like how the article describes how certificates work for both client and server. I know a little bit about it but what I read helps to reinforce what I already know and it taught me something new. I appreciate it when someone takes the time to explain things like this.
Thanks! I didn't intentionally write this for a broader audience (I didn't expect to see it while casually opening HN!). Our user base is quite diverse, so I try to find the balance between being too technical and over-explanatory. Glad it was helpful!
I would think it's more secure than clientAuth certs because if an attacker gets a misissued cert they'd have to actually execute a MitM attack to use it. In contrast, with a misissued clientAuth cert they can just connect to the server and present it.
Another fun fact: the Mozilla root store, which I'd guess the vast majority of XMPP servers are using as their trust store, has ZERO rules governing clientAuth issuance[1]. CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
[1] https://www.mozilla.org/en-US/about/governance/policies/secu...
> Is there a reason why dialback isn't the answer?
There are some advantages to using TLS for authentication as well as encryption, which is already a standard across the internet.
For example, unlike an XMPP server, CAs typically perform checks from multiple vantage points ( https://letsencrypt.org/2020/02/19/multi-perspective-validat... ). There is also a lot of tooling around TLS, ACME, CT logs, and such, which we stand to gain from.
In comparison, dialback is a 20-year-old homegrown auth mechanism, which is more vulnerable to MITM.
Nevertheless, there are some experiments to combine dialback with TLS. For example, checking that you get the same cert (or at least public key) when connecting back. But this is not really standardized, and can pose problems for multi-server deployments.
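For reference, here's a rough sketch (in Python, based on my reading of XEP-0185, so treat the exact recipe as an assumption) of how the dialback "magic string" is typically derived. The point is that possession of the key proves control of the claimed domain's authoritative server, not possession of a CA-issued key:

    import hashlib
    import hmac

    def dialback_key(secret: bytes, receiving: str, originating: str, stream_id: str) -> str:
        # XEP-0185 suggests HMAC-SHA256, keyed with the hex SHA-256 of a
        # server-local secret, over "receiving originating streamid".
        key = hashlib.sha256(secret).hexdigest().encode("ascii")
        text = f"{receiving} {originating} {stream_id}".encode("utf-8")
        return hmac.new(key, text, hashlib.sha256).hexdigest()

    # The originating server presents this key; the receiving server dials
    # back to the authoritative server for the claimed domain and asks it
    # to confirm or deny the key.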
> It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
Good job we haven't been doing this for a very long time by now :)
Sorry, it's late here and I guess I didn't word it well. Dialback (these days) always runs over a TLS-encrypted connection, as all servers enforce TLS.
The next question is how to authenticate the peer, and that can be done a few ways, usually either via the certificate PKI, via dialback, or something else (e.g. DNSSEC/DANE).
My comment about "combining dialback with TLS" was to say that we can use information from the TLS channel to help make the dialback authentication more secure (by adding extra constraints to the basic "present this magic string" that raw dialback authentication is based on).
How would dialback-over-TLS be "more vulnerable to MITM" though? I think that claim was what led to the confusion. I don't see how TLS-with-client-EKU is more secure than TLS-with-dialback.
Firstly, nobody is actually calling for authentication using client certificates. We use "normal" server certificates and validate the usual way, the only difference is that such a certificate may be presented on the "client" side of a connection when the connection is between two servers.
The statement that dialback is generally more susceptible to MITM is based on the premise that it is easier to MITM a single victim XMPP server (e.g. hijack its DNS queries or install an intercepting proxy somewhere on the path between the two servers) than it is to do the same attack to Let's Encrypt, which has various additional protections such as performing verification from multiple vantage points, always using DNSSEC, etc.
If an attacker gets a misissued cert not through BGP or DNS hijacks, but by exploiting a domain validation flaw in a CA (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=2011713) then it's trivial for them to use it as a client certificate, even if you're requiring the serverAuth EKU. On the other hand, dialback over TLS would require the attacker to also MitM the connection between XMPP servers, which is a higher bar.
The good news is that since Prosody requires the serverAuth EKU, the misissued cert would be in-scope of Mozilla's root program, so if it's discovered, Mozilla would require an incident report and potentially distrust the CA. But that's reactive, not proactive.
You're not wrong. PKI has better protections against MITM, dialback has better protections against certificate leaks/misissuance.
I think the ideal approach would be combining both (as mentioned, there have been some experiments with that), except when e.g. DANE can be used ( https://prosody.im/doc/modules/mod_s2s_auth_dane_in ). But if DANE can be used, the whole CA thing is irrelevant anyway :)
Firstly, nobody is actually calling for authentication using client certificates. We use "normal" server certificates and validate the usual way
I'm not sure I understand this point. You authenticate the data you receive using the client's certificate. How is that "nobody is calling for authentication using client certificates"? Maybe there's some nuance I'm missing here but if you're authenticating the data you're receiving based on the client's certificate, then how is that "validating the usual way"?
There is a lot of confusion caused by overlapping terminology in this issue.
By "client certificates" I mean (and generally take most others in this thread to mean) certificates which have been issues with the clientAuth key purpose defined in RFC 5280. This is the key purpose that Let's Encrypt will no longer be including in their certificates, and what this whole change is about.
However when one server connects to another server, all of TCP, TLS and the application code see the initiating party as a "client", which is distinct from say, an "XMPP client" which is an end-user application running on e.g. some laptop or phone.
The comment I was responding to clearly specified " I don't see how TLS-with-client-EKU [...]" which was more specific, however I used the more vague term "client certificates" to refer to the same thing in my response for brevity (thinking it would be clear from the context). Hope that clarifies things!
> CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
It has never been secure to rely on the Mozilla root store at all, or any root store for that matter, as they all contain certificate authorities which are in actively hostile countries or can otherwise be coerced by hostile actors. The entire security of the web PKI relies on the hope that if some certificate authority does something bad it'll become known.
> The entire security of the web PKI relies on the hope that if some certificate authority does something bad it'll become known.
Correct, but it's not a vain hope. There are mechanisms like certificate transparency that are explicitly designed to make sure any misbehavior does become known.
> The current CA ecosystem is *heavily* driven by web browser vendors (i.e. Google, Apple, Microsoft and Mozilla), and they are increasingly hostile towards non-browser applications using certificates from CAs that they say only provide certificates for consumption by web browsers.
Let's translate and simplify:
> The current CA ecosystem is Google. They want only Google applications to get certificates from CAs.
Huh? Google does not even make a web server, or any kind of major servers, unless you count GCP load balancers or whatever. You are confusing their control of the client (which is still significantly shared with Apple and Microsoft since they control OS-level certificate trusts) with the server side, who are the "customers" of the CA. Google has almost no involvement in that and couldn't care less what kind of code is requesting and using certificates.
Huh, again? Aren't people using Google's browser mostly to use Google services hosted on Google servers? Haven't you heard of Google the search engine, Google Maps, YouTube, Gmail, Google Docs, etc?
But this is about SSL certificates. Google may account for, say, half of web traffic, but there are billions of other servers that account for the other half, and Google has no interest in what web server or ACME client they are running, or much else. It is concerned with the client experience and how the client decides which certificates to trust.
Google already has its own CA that is used for its own systems as well as to issue certificates for GCP customers. They don't interact with Lets Encrypt or any other external CA as far as I know for their own services.
For decades there have been a few entities interested in actually providing a working and trustworthy PKI for the Internet - it's called the Web PKI because in practice the only interested parties are always browser vendors.
There are always plenty of people who aren't interested in doing any hard work themselves but are along for the ride, and periodically some of those people are very angry because a system they expended no effort to maintain hasn't focused on their needs.
The Web PKI wasn't somehow blessed by God, people made this. If you do the hard work you can make your own PKI, with your own rules. If you aren't interested in doing that work, you get whatever the people who did the work wanted. This ought to be a familiar concept.
> The CA/Browser Forum is a voluntary organization of Certification Authorities and suppliers of Internet browser and other relying‐party software applications.
IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).
If Google/Chrome doesn't want to allow it, good for them. But why do they get to dictate what others do?
No. HTTPS certificates are being abused for non-HTTPS purposes. CAs want to sell certificates for everything under the sun, and want to force those in the ecosystem to support their business, even though HTTPS certificates are not designed to be used for other things (mail servers, for example).
If CAs don't want hostility from browser companies over HTTPS certificates being used for non-HTTP/browser applications, they should build their own thing.
They weren't "HTTPS certificates" originally, just certificates. They may be "HTTPS certificates" today if you listen to some people. However there was never a line drawn where one day they weren't "HTTPS certificates" and the next day they were. The ecosystem was just gradually pushed in that direction because of the dominance of the browser vendors and the popularity of the web.
I put "HTTPS certificates" in quotes in this comment because it is not a technical term defined anywhere, just a concept that "these certificates should only be used for HTTPS". The core specifications talk about "TLS servers" and "TLS clients".
This is technically true, and nobody contested the CABF's focus on HTTPS TLS.
However, eventually, the CABF started imposing restrictions on the public CA operators regarding the issuance of non-HTTPS certificates. Nominally, the CAs are still offering "TLS certificates", but due to the pressure from the CABF, the allowed certificates are getting more and more limited, with the removal of SRVname a few years ago, and the removal of clientAuth that this thread is about.
I can understand the CABF position of "just make your own PKI" to a degree, but in practice that would require a LetsEncrypt level of effort for something that is already perfectly provided by LetsEncrypt, if it weren't for the CABF lobbying.
> CABF started imposing restrictions on the public CA operators regarding the issuance of non-HTTPS certificates.
The restriction is on signing non-web certificates with the same root/intermediate that is part of the WebPKI.
There's no rule (that I'm aware of?) that says the CAs can't have different signing roots for whatever use-case that are then trusted by people who need that use case.
> The CA/Browser Forum is a voluntary organization of Certification Authorities and suppliers of Internet browser and other relying‐party software applications.
IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).
It is a single member of the CAB that is insisting on changing the MAY to a MUST NOT for clientAuth. Why does that single member, Google-Chrome, get to dictate this?
Has Mozilla insisted on changing the meaning of §1.3 to basically remove "other relying‐party software applications"? Apple-Safari? Or any other of the "Certificate Consumers"?
The membership of the CAB collectively agrees to the requirements/restrictions they place on themselves, and those requirements (a) state both browser and non-browser use cases, and (b) explicitly allow clientAuth usage as a MAY; see §7.1.2.10.6 and §7.1.2.7.10.
> CAs want to sell certificates for everything under the sun
A serious problem with traditional CAs, which was partly solved by Let's Encrypt just giving them away. Everyone gradually realized that the "tying to real identity" function was both very expensive and of little value, compared to what people actually want which is "encryption, with reasonable certainty that it's not MITMd suddenly".
Where did you get that idea? These certs have always been intended for any TLS connection of any application. They are also in no way specific or "designed for" HTTPS. Neither the industry body formed from the CAs and software vendors, nor the big CAs themselves are against non-HTTPS use.
> Welcome to the CA/Browser Forum
>
> The Certification Authority Browser Forum (CA/Browser Forum) is a voluntary gathering of Certificate Issuers and suppliers of Internet browser software and other applications that use certificates (Certificate Consumers).
> Does Let’s Encrypt issue certificates for anything other than SSL/TLS for websites?
>
> Let’s Encrypt certificates are standard Domain Validation certificates, so you can use them for any server that uses a domain name, like web servers, mail servers, FTP servers, and many more.
PKI certificates weren't even intended for SSL; X.509 predates it.
X.509 was published on November 25, 1988; version 3 added support for "the web" as it was known at the time. One obvious use was for X.400 e-mail systems in the 1980s. Novell Netware adopted X.509.
It was originally intended for use with X.511, the "Directory Access Protocol", which LDAP was based on. You can still find X.500 heritage in Microsoft Exchange and Active Directory, although it is diminishing over time; e.g. Entra ID only has some affordances for backward compatibility.
Google has recently imposed a rule that CA roots trusted by Chrome must be used solely for the core server-authentication use case, and can't also be used for other stuff. They laid out the rationale here: https://googlechrome.github.io/chromerootprogram/moving-forw...
It's a little vague, but my understanding reading between the lines is that sometimes, when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases with different cost-benefit profiles on security vs. availability, and the browser vendors want that to stop happening.
Let's Encrypt could of course continue offering client certificates if they wanted to, but they'd need to set up a separate root for those certificates to chain up to, and they don't think there's enough demand for that to be worth it.
This sounds a lot like the "increasing hostility for non-web usecases" line in the OP.
In theory, Chrome's rule would split the CA system into a "for web browsers" half and a "for everything else" half - but in practice, there might not be a lot of resources to keep the latter half operational.
In practice this might just mean that applications designed to use web PKI certs start ignoring the value of the extendedKeyUsage extension. OP says Prosody already does this.
Well, if libraries like OpenSSL check extendedKeyUsage by default but provide an option to disable this, then most apps benefit from more stringent security, but ones like Prosody with unusual use cases can continue to make those use cases work. That doesn't sound like the worst thing in the world, necessarily? (I'm not sure how Prosody actually implemented this, though, or whether OpenSSL actually works that way.)
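For what it's worth, here's a sketch of roughly where that policy tends to live (Python's OpenSSL-backed ssl module; I haven't checked Prosody's actual code, so this is illustrative only):

    import ssl

    # Server side of an s2s link: demand a certificate from the connecting
    # peer, then apply our own policy to it after the handshake.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem", "server.key")  # our own identity (placeholder paths)
    ctx.load_verify_locations("roots.pem")           # e.g. the Mozilla bundle
    ctx.verify_mode = ssl.CERT_REQUIRED              # chain must validate

    # After accept()/wrap_socket(), conn.getpeercert() returns the peer's
    # certificate; an XMPP server can then match its subjectAltName against
    # the remote domain and deliberately accept a serverAuth-only EKU.
    # (A stack that enforces OpenSSL's "sslclient" purpose during the
    # handshake would need that check relaxed, e.g. via the C-level
    # X509_VERIFY_PARAM_set_purpose with X509_PURPOSE_ANY.)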
So that argues against including CAs that don't issue server authentication certificates. That's somewhat reasonable, although it does put non-browser use cases in an awkward position, since there isn't currently a standard distribution channel for trusted CAs that is independent of browsers.
But prohibiting certs from being marked for client usage is mostly unrelated to that goal because:
1. There are many non-web use cases for certificates that are only used for server authentication. And
2. There are use cases where it makes sense to use the same certificate used for web PKI as a client with mTLS to another server using web PKI, especially for federated communication.
>when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases
Do you (or anyone else) have an example of this happening?
After the WebPKI banned the issuance of new SHA-1 certificates due to the risk of collisions, several major payment processors (Worldpay[1], First Data[2], TSYS[3]) demanded to get more SHA-1 certificates because their customers had credit card terminals that did not support SHA-2 certificates.
They launched a gross pressure campaign, trotting out "small businesses" and charity events that would lose money unless SHA-1 certificates were allowed. Of course, these payment processors did billions in revenue per year and had years to ship out new credit card terminals. And small organizations could have and would have just gotten a $10 Square reader at the nearest UPS store if their credit card terminals stopped working, which is what the legacy payment processors were truly scared of.
The pressure was so strong that the browser vendors ended up allowing Symantec to intentionally violate the Baseline Requirements and issue SHA-1 certificates to these payment processors. Ever since, there has been a very strong desire to get use cases like this out of the WebPKI and onto private PKI where they belong.
A clientAuth EKU is the strongest indicator possible that a certificate is not intended for use by browsers, so allowing them is entirely downside for browser users. I feel bad for the clientAuth use cases where a public PKI is useful and which aren't causing any trouble (such as XMPP) but this is ultimately a very tiny use case, and a world where browsers prioritize the security of ordinary Web users is much better than the bad old days when the business interests of CAs and their large enterprise customers dominated.
But this has nothing to do with clientAuth: in this case the payment processor uses a server certificate and the terminals connect to the payment processor, not the other way around. So this change would not have prevented that, and I don't see what browsers can do to prevent it; after all, the exact same situation would have happened if the payment processors used an HTTPS-based protocol.
Yeah, the more I think about it the more futile this effort starts to look. The industry is investing tons of resources into building and maintaining an open, highly secure PKI ecosystem which allows any server on the public internet to cryptographically prove its identity, and Google wants to try to prevent anyone who's not a web browser from relying on that ecosystem? Seems impossible. The incentives are far too strong.
Google is hoping that after this change other TLS clients will go off and build their own PKI entirely separate from the web PKI, but in reality that would take way too much redundant effort when the web PKI already does 99% of what they want. What will actually happen is clients that want to use web certs for client authentication will just start ignoring the value of the extendedKeyUsage extension. The OP says Prosody already does this. I don't see how that's an improvement on the status quo.
Google Chrome (along with Mozilla, and eventually the other root stores) distrusted Symantec, despite being the largest CA at the time and frequently called "too big to fail".
Given how ubiquitous LE is, I think people will switch browsers first. There are plenty of non-Chrome browsers based on Chromium as well, and they can choose to trust LE despite Chrome's choices. Plus, with Symantec there was a good reason to distrust them. This is just Google flexing; there is no real reason to distrust LE, and non-WebPKI usage does not reduce security.
GP gave a very good reason that non-web-PKI reduces security, you just refused to accept it. Anybody who has read any CA forum threads over the past two years is familiar with how big of a policy hole mixed-use-certificates are when dealing with revocation timelines and misissuance.
"it's complicated" is not the same as "it's insecure". Google feels like removing this complexity improves security for web-pki. Improving security is not the same as saying something is insecure. Raising security for web-pki is not the same as caliming non-web-pki usage is insecure or is degrading security expectations of web-pki users. It's just google railroading things because they can. You can improve security by also letting Google decide and control everything, they have the capability and manpower. But we don't want that either.
Neither; I meant that if enough people panic and stop using Chrome, website operators need not worry much. Safari is the default on Macs and Edge is the default on Windows, and both can render any website that can't be accessed in Chrome. So it would make Chrome the browser that can't open half of the websites, instead of half of the websites out there suddenly being incompatible with Chrome. The power of numbers is on LE's side.
If they wanted, they absolutely could distrust LE. The trick is to distrust only certificates issued after a specific date (technically: with a "NotBefore" field after a specific point in time), so the certs already issued continue to work for the duration of their validity (until "NotAfter"). That way they can phase out even the biggest CAs. Moreover, they have the infrastructure in place and the playbook well rehearsed on other CAs already.
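A minimal sketch of that trick (Python with pyca/cryptography >= 42; the cutoff date is hypothetical):

    from datetime import datetime, timezone
    from cryptography import x509

    DISTRUST_AFTER = datetime(2026, 6, 1, tzinfo=timezone.utc)  # hypothetical cutoff

    def still_trusted(cert: x509.Certificate) -> bool:
        # NotBefore approximates issuance time: certs issued before the
        # cutoff keep working until their NotAfter; nothing new is trusted.
        return cert.not_valid_before_utc < DISTRUST_AFTER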
Even then, is the message "stop using Chrome after this date because half the internet will break" (because where will all those non-paying people go?), or "stop using LE and start paying someone for a free service"?
I bet Google themselves would be scared of antitrust lawsuits over this. Even if they weren't, I don't think they'll really go so far as to compromise the security of half of the internet just to get their way on this one small improvement.
The point about antitrust lawsuits I concur with, but LE is not the only free-as-in-beer ACME CA. For one, there's ZeroSSL, then Actalis and SSL.com. For some time BuyPass offered free certs, but it no longer does. Last but not least, Google itself has a Public CA that offers certs over ACME, a fact that I think would be a huge fulcrum for an antitrust suit. I would also expect all other CAs to deploy ACME endpoints to attract at least some part of the cake (note they're in the business of being vultures already). So the message will be "go find another CA, here are three examples, sorted randomly like the European first-boot UX, just change the URI in the certbot config".
Perhaps this shouldn't be left to the CA/B board; it has critical economic impact on many countries, it should be regulated by them?
Either way, I think LE has enough power to at least push back and see where things fall. Continuing to support users can't hurt them, until they truly have no other choice.
> [...] it has critical economic impact on many countries, it should be regulated by them?
This was exactly the point of the recent (2024) eIDAS update, which introduced EU Trusted Lists. The original draft mandated that browsers accept X.509 certs from CAs ("TSP"s) accredited in the EU by national bodies. Browsers were no longer supposed to be free to eject CAs from their root programs for any reason or no reason at all; in case of infractions they were supposed to report to a CAB or NAB that would make the final decision.
Browsers responded by lobbying, because the proposal also contained some questionable stuff like mandatory EV UI, which the browsers rightfully deprecated, and it also wasn't clear if they could use OneCRL and similar alternative revocation schemes for mitigation of ongoing attacks. The language was diluted.
Interestingly though, doesn't this threat become less credible the shorter certificate lifetimes get? Back in the day they could just do this and server admins would figure out how to switch to a new CA the next time they got around to renewing their certificate. Now though that's all automated, so killing a CA will likely nuke a bunch of sites.
This is a good point. I think it would still be discounted in favour of suggesting other CAs that users can switch to, but you're right: the promise was that cert management would be hands-off, and changing CAs is not hands-off in any ACME client that I know of. The best Google could do would be to shift the blame to LE/ISRG, because it was ISRG that promised this automation.
They can do this thanks to Certificate Transparency; otherwise a CA could sign whatever date they want. But if a CA colludes with a CT log, they can issue rogue certificates for targeted attacks.
Yes, that's all right; there's already a requirement that they submit to one Google CT log and one non-Google CT log. They thought about it already. The playbook I mentioned contains a specific threat against backdating certs: they say they'll distrust immediately if they detect it, and they have means of detecting backdating at significant scale (especially for LE, where they submit 100% of issued certs, not just the subset intended for consumption with Chrome).
> Let's Encrypt could of course continue offering client certificates if they wanted to, but they'd need to set up a separate root for those certificates to chain up to, and they don't think there's enough demand for that to be worth it.
Why an entire new root? Perhaps set up an ACME profile [1] where the requestor could ask for clientAuth if it would help their use case; the default would be off.
Google would be free to reject with-clientAuth HTTPS server certificates in their browser, but why should they care if an XMPP or SMTP server has such a certificate if the browser never sees it?
According to Google. Why do they get to dictate this?
Per the current (2.2.2) CAB requirements [1], §7.1.2.10.6, "CA Certificate Extended Key Usage": id-kp-clientAuth is a MAY.
If I were (say) Let's Encrypt I would (optionally?) allow it and dare Google/Chrome to remove my root certificate. Letting bullies get away with this kind of nonsense only encourages them.
Have you been reading the thread? https://news.ycombinator.com/item?id=46952590 there are a lot of reasons why browsers need to care about whether CAs are issuing insecure certificates to XMPP or SMTP servers (or credit card machines)
> […] there are a lot of reasons why browsers need to care about whether CAs are issuing insecure certificates to XMPP or SMTP servers (or credit card machines)
Why does having the clientAuth capability make a certificate "insecure"?
It is really great how they write "TLS use cases" and in fact mean HTTPS use cases.
CA/Browser Forum has disallowed the issuance of server certificates that make use of the SRVName [0] subjectAltName type, which obviously was a server use case, and I guess the only reason why we still are allowed to use the Web PKI for SMTP is that both operate on the server hostname and it's not technically possible to limit the protocol.
It would be perfectly fine to let CAs issue certificates for non-Web use-cases with a different set of requirements, without the hassle of maintaining and distributing multiple Roots, but CA/BF deliberately chose not to.
Based on previous history where people actually did call google's bluff to their regret, what happens is that google trusts all current certificates and just stops trusting new certs as they are issued.
Google has dragged PKI security into the 21st century kicking and screaming. Their reforms are the reason why PKI security is not a joke anymore. They are definitely not afraid to call CA companies bluff. They will win.
As a general rule in cryptography, a lot of vulnerabilities come from confusing the system by using a correct thing in the wrong context. Making it a rule that you have to use separate chains for separate purposes is a good rule from a general design standpoint.
> Making it a rule that you have to use separate chains for separate purposes is a good rule from a general design standpoint.
No it's not. It's a specific argument that's true only in specific cases. "You shouldn't handle knives" is equally a good rule from a general design standpoint, but nonsensical when you're a chef.
"You should have separate chains" is a reasonable decision when the ability to rotate out a compromised chain, and insulate some downtime from other chains/usages, is desirable. Needing to manage multiple cert chains is more overhead, making use and maintenance harder. It increases complexity.
Large companies have never been afraid of more overhead. It's their singular advantage.
Removing features someone is using, and calling it better security, when it doesn't actually meaningfully reduce or remove some risk, is weaponized incompetence. And sufficiently advanced incompetence is...
There's no world where anyone gains additional protection from a third-party compromise, or one where LE has one of its chains compromised but doesn't rotate all of them.
Except we didn't get a separate chain - all we got is that from now on software will just ignore the "client" flag and accept the "server" flag for client purposes, adding one more hack onto the pile of hacks that is the Internet.
Not forbidden, just not going to be a part of WebPKI.
It's one of those things that has just piggybacked on top of WebPKI and things just piggybacking is a bad idea. There have been multiple cases in the past where this has caused a lot of pain for making meaningful improvements (some of those have been mentioned elsewhere in this thread).
The current PKI system was designed by Netscape as part of SSL to enable secure connections to websites. It was never independent of the web. Of course PKIs and TLS have expanded beyond that.
"WebPKI" is the term used to refer to the PKI used by the web, with root stores managed by the browsers. Let's Encrypt is a WebPKI CA.
The idea of a PKI was of course designed independently, there are many very large PKIs beyond WebPKI. However the one used by browsers is what we call WebPKI and that has its own CAs and rules.
You're trying to make it sound like there has ever been some kind of a universal PKI that can be used for everything and without any issues.
Google didn't drag anyone anywhere without LE though.
Sure, they supported the nascent HTTPS very early on, but most of the web thought that certificates were "too expensive for the likes of us", and so only really banks and the like actually adopted HTTPS. Most of the internet was still HTTP only for years after HTTPS was available.
Only when LE came along and started offering free certificates and facilitated a massive uptake in HTTPS websites were Google ever in a position to default to marking HTTP as "insecure and dangerous".
I've got no figures, but I suspect that if LE were to dig their heels in, Google wouldn't dare risk half the internet not working in their browser. I'm sure there would be some people who didn't want to be collateral damage if there was a standoff and would switch to a CA that complied with Google's will, but I suspect most people would be happy to see Google challenged on this. And end users would hopefully discover that every other browser still worked, just Chrome had broken, and Chrome would quite rapidly fall out of favour.
While Google did do a lot of work on making HTTPS-by-default a thing, that is only a small part of what I'm referring to. Google did a huge amount of work to make HTTPS high quality, so that sites using it were actually secure. They raised standards for CAs significantly, took a much tougher line on CAs who violated the rules, and pushed certificate transparency hard (which is probably one of the most important developments for the security of the TLS ecosystem). Chrome was the first browser to support HSTS, which is very important for HTTPS to work in practice. Google maintains the HSTS preload list.
Google didn't just make TLS popular, they made it secure.
It's how it should work but the supposed CA browser "forum" has become a browser dictatorship. While there have been issues where CAs were dragging their feet, just letting Google dictate whatever they want is not the solution.
I’m disappointed that a competitor doesn’t exist that uses longevity of IP routing as a reputation validator. I would think maintaining routing of DNS to a static IP is a better metric for reputation. Having unstable infrastructure to me is a flag for fly by night operations.
The German NSA intercepted the jabber.ru server with a physical interposer device, issued themselves an LE certificate, and MITMed the service for months.
I wonder if this is a potential "off switch" for the internet. Just hit the root CA so they can't hand out renewed certificates; you only have to push them over for a week or so.
People will learn to press all the buttons with scary messages to ignore the wrong certificates. It may be a problem for credit cards and online shopping.
HSTS was specifically designed to block you from having any ignore buttons. (And Firefox refuses to implement a way to bypass it.)
But this is also why the current PKI mindset is insane. The warnings are never truly about a security problem, and users have correctly learned the warnings are useless. The CA/B is accomplishing absolutely nothing for security and absolutely everything for centralized control and platform instability.
The CA/B is basically some Apple and Google people plus a bunch of people who rubber stamp the Apple and Google positions. Everyone is culpable and it creates a self-fulfilling process. Everyone is the expert for their company's certificate policy so nobody can tell them it's dumb and everyone else can say they have no choice because the CA/B decided it.
Even Google and Apple from a corporate level likely have no idea what their CA/B reps are doing and would trust their expertise if asked, regardless of how many billions of dollars it is burning.
The CA/B has basically made itself accountable to nobody including itself, it has no incentives to balance practicality or measure effectiveness. It's basically a runaway train of ineffective policy and procedure.
The real takeaway is that there's never been a lot of real thought put into supporting client authentication - e.g. there's no root CA program for client certificates. To use a term from that discussion, it's usually just "piggybacked" on server authentication.
IMHO because putting both into a certificate just by convention (or what was the reason to still do it?) is not best practice for a CA that has the WebPKI in scope.
From experience, people often mistake the client authentication part for a substitute for user authentication, which it simply isn't, and then they are surprised that anyone with the certificate can log in...
Yeah, people with knowledge should know the difference, but I have seen this way too many times... The thing where I really see LE as problematic is revocation. Yes, revocation is broken, but the only working mechanism, OCSP stapling, was brought to the graveyard (aka made optional by the CA/B) with the argument of data privacy issues under the normal OCSP umbrella... So we're back to CRLs/proprietary browser revocation mechanisms such as CRLSets (https://www.grc.com/revocation/crlsets.htm#:~:text=What%20is...) combined with CT logs as a reactive measure, which simply don't work in practice/are too slow (e.g. remember the Fina CA/Cloudflare incident and how long it went unnoticed).
I have the feeling the driver for LE were rather the costs than the data privacy arguments brought up.
I can think of a few other ways that client certificates could work, but they have problems too:
1. Use DANE to verify the client certificate. But that requires DNSSEC, which isn't widely used. It would probably require new implementations of the handshake to check the client cert, and would add latency since the server has to do a DNS lookup to verify the client's cert.
2. When the server receives a request, it makes an HTTPS request to a well-known endpoint on the domain in the client cert's subject that contains a CA, then checks that the client cert is signed by that CA. The client generates its client cert with that CA (or even uses the same self-signed cert for both). This way the authenticity of the client CA is verified using the web PKI cert. But the implementation is kind of complicated, and it has an even worse latency problem than 1.
3. The server has an endpoint where a client can request a client certificate from that server, probably with a fairly short expiration, for a domain, with a CSR or equivalent. The server then responds by making an HTTPS POST to a well-known endpoint on the requested domain, containing a certificate signed by the server's own CA. But for that to work, the registration request needs to be unauthenticated, and could be vulnerable to DoS attacks. It also requires state on the client side, to connect the secret key with the final cert (unless the server generated a new secret key for the client, which probably isn't ideal). And the client should probably cache the cert until it expires.
And AFAIK, all of these would require changes to how XMPP and other federated protocols work.
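For (1), the lookup side is simple enough; a sketch with dnspython (hypothetical domain, and a real deployment must use a DNSSEC-validating resolver, otherwise the TLSA answer proves nothing):

    import dns.resolver  # dnspython

    # DANE for XMPP s2s: fetch TLSA records for the server-to-server port.
    answers = dns.resolver.resolve("_5269._tcp.xmpp.example.org", "TLSA")
    for tlsa in answers:
        # usage/selector/mtype describe how to compare the peer's cert,
        # e.g. 3 1 1 = SHA-256 of the end-entity SubjectPublicKeyInfo.
        print(tlsa.usage, tlsa.selector, tlsa.mtype, tlsa.cert.hex())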
Of these, (1) and (2) are already implemented in XMPP.
(1) just isn't that widely deployed due to low DNSSEC adoption and setup complexity, but there is a push to get server operators to use it if they can.
(2) is defined in RFC 7711: https://www.rfc-editor.org/rfc/rfc7711 however it has more latency and complexity compared to just using a valid certificate directly in the XMPP connection's TLS handshake. Its main use is for XMPP hosting providers that don't have access to a domain's HTTPS.
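For the curious, a rough sketch of the RFC 7711 (POSH) lookup; the service name and document shape here are from memory, so treat them as assumptions:

    import base64, hashlib, json, urllib.request

    def posh_fingerprints(domain, service="_xmpp-server._tcp"):
        # POSH: fetch fingerprints for the claimed domain over HTTPS, which
        # is itself authenticated by the web PKI.
        url = f"https://{domain}/.well-known/posh/{service}.json"
        with urllib.request.urlopen(url) as resp:
            doc = json.load(resp)
        return [base64.b64decode(fp["sha-256"]) for fp in doc.get("fingerprints", [])]

    def cert_matches(der_cert, fingerprints):
        # Compare the SHA-256 of the peer's DER certificate against the
        # fingerprints the domain vouched for.
        return hashlib.sha256(der_cert).digest() in fingerprints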
The second one doesn't seem excessively complicated and the latency could be mitigated by caching the CA for a reasonable period of time.
But if you're going to modify the protocol anyway then why not just put it in the protocol that a "server" certificate is to be trusted even if the peer server is initiating rather than accepting the connection? That's effectively what you would be doing by trusting the "server" certificate to authenticate the chain of trust for a "client" certificate anyway.
The complication of (2) is that it requires a server with a completely different protocol and port, that may or may not already be claimed by another server software than the XMPP server, to act in a specific way (e.g. use a compatible certificate).
The technical term for such cross-service requirements is "a giant pain in the ass".
That's assuming you're requiring the ordinary HTTPS port to be used. For that matter, why would it even need to use HTTPS? Have the peer make a TLS connection to the XMPP server to get the CA.
But it still seems like the premise is wrong. The protocol is server-to-server and the legacy concept that one of them is the "client" and needs a "client certificate" is inapplicable, so why shouldn't the protocol just specify that both peers are expected to present a "server certificate" regardless of which one initiated the connection?
As the founder of both projects, explaining the difference between the two projects is roughly 20% of my working day (okay, not quite 20% but sometimes it feels that way).
The problem here is that when alice@chat.example.com and bob@xmpp.example2.com talk to each other, chat.example.com asks "Are you xmpp.example2.com?" and xmpp.example2.com asks "Are you chat.example.com?"
If you strictly require the side that opens the TCP connection to only use client certs and require the side that gets the TCP connection to only use server certs, then workflows where both sides validate each other become impossible with a single connection.
You could have each server open a TCP connection to the other, but then you have a single conversation spread across multiple connections. It gets messy fast, especially if you try to scale beyond a single server -- the side that initiates the first outgoing connection has to receive the second incoming connection, so you have to somehow get your load balancer to match the second connection with the first and route it to the same box.
Then at the protocol level, you'd essentially have each connection's server send a random-number challenge to the client saying "I can't authenticate clients because they don't have certs, so please echo this back on the other connection where you're the server and I can authenticate you." The complexity and subtlety of this coordination dance seems like you're just asking for security issues.
If I was implementing XMPP I would be very tempted to say, "Don't be strict about client vs. server certs, let a client use a server cert to demonstrate ownership of a domain -- even if it's forbidden by RFC and even if we have to patch our TLS library to do it."
"This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline."
I don't think this is true. It's something that could be useful, with some sort of ACME-like automated issuance, but should definitely be issued from a non-WebPKI certificate authority.
> they just decided it wasn't worth the effort anymore
That seems disingenuous. Doesn't being in the client cert business now require a lot of extra effort that it didn't before, due entirely to Google's new rule?
Code can just ignore the EKU. Especially if the ecosystem consists of things that are already using certificates in odd ways, as it shouldn't be making outgoing connections without it in the first place.
Client authentication with publicly-trusted certificates (i.e. chaining to roots in one of the major 4 or 5 trust-store programs) is bad. It doesn't actually authenticate anything at all, and never has.
No-one that uses it is authenticating anything more than the other party has an internet connection and the ability, perhaps, to read.
No part of the Subject DN or SAN is checked. It's just that it's 'easy' to rely on an existing trust-store rather than implement something secure using private PKI.
Some providers who 'require' public TLS certs for mTLS even specify specific products and CAs (OV, EV from specific CAs) not realising that both the CAs and the roots are going to rotate more frequently in future.
A client cert can be stored, so it provides at least a little bit of identification certainty. It's very hard to steal or impersonate a specific client cert, so the site has a high likelihood of knowing you're the same person you were when you connected before (even though the initial connection may very well not have ID'd the correct person!). That has value.
But it also doesn't involve any particular trust in the CA either. Lets Encrypt has nothing to offer here so there's no reason for them to try to make promises.
Point being that if you get a valid TLS connection from a client cert, and then you get another valid connection from the same cert tomorrow, you can be very certain that the entity connecting is either the same software environment that connected earlier, or an attacker that has compromised it. You can be cryptographically certain that it is not an attacker that hasn't effected a full compromise of your client.
And there's value there, if you're a server. It's why XMPP wants federated servers to authenticate themselves with certificates in the first place.
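A minimal sketch of that continuity check (Python with pyca/cryptography), pinning the peer's public key on first contact and comparing it on later connections, trust-on-first-use style:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def spki_pin(cert: x509.Certificate) -> str:
        # Hash the SubjectPublicKeyInfo so the pin survives certificate
        # renewals that reuse the same key pair.
        spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest()

    # On reconnect: an identical pin means the same key holder (or someone
    # who fully compromised it); a changed pin means continuity is gone.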
I can't believe this was downvoted. Seriously, a certificate binds a public key to attributes (mainly the identity). If you don't need to use the attributes, you don't need a certificate!
> This is basically how Let's Encrypt / ACME accounts work
That's how they're implemented. How they "work" is a trivial pushbutton thing as documented by a well-known and trusted provider who cares deeply about simple user experience.
"Just self-sign a cert" is very much not the story XMPP wants their federated server operators to deal with.
I feel like using the web PKI for client authentication doesn't really make sense in the first place. How do you verify that the common name/subject alt name actually matches when using a client cert?
Using the web PKI for client certs seems like a recipe for disaster, where servers just verify that the cert is signed; but since anyone can get one signed, anyone can spoof.
And this isn't just hypothetical. I remember xmlsec (a library for validating XML signatures, primarily SAML) used to use the web PKI for signature validation in addition to the specified cert, which resulted in a lot of SAML bypasses where you could pass validation by signing the SAML response with any certificate from Let's Encrypt, including the attacker's.
A public CA checks it one-time, when it's being issued.
Most/all mTLS use-cases don't do any checking of the client cert in any capacity. Worse still, some APIs (mainly for finance companies) require things like OV and EV, but of course they couldn't check the Subject DN even if they wanted to.
If it's for auth, issue it yourself and don't rely on a third-party like a public CA.
A federated ecosystem of servers that need to verify each other based on their domain name as the identity is the prime use-case for a public CA to issue domain-verified client certificates. XMPP happens to be this ecosystem.
Rolling out a private PKI for XMPP, with a dedicated Root CA, would be a significant effort, essentially redoing all the hard work of LetsEncrypt, but without the major funding, thus ending up with an insecure solution.
We have made use of the public CAs, which have been issuing TLS certificates based on domain validation, for quite a few years now, since before the public TLS CAs were subverted into becoming public HTTPS-only CAs by Google and the CA/Browser Forum.
That's exactly what Prosody is doing, but it's a weird solution. Essentially, they're just ignoring the missing EKU flag and pretending it's there, violating the spec.
It seems weird to first remove the flag and then tell everyone to update their servers to ignore the removal. Then why remove it in the first place?
I think you're confusing different actors here. The change was made by the CA/B Forum, the recommendation is just how it is if you want to use a certificate not for the purposes intended.
But it does mean that the CA/B requirement change has zero positive effect on security of anything and only causes pointless work and breakage.
Or to put it another way, the pragmatic response of the XMPP community shows that the effect of the change is not to remove the clientAuth capability from any certs but to effectively add it to all serverAuth certs no matter what the certificate says.
Yes, this is what is happening. It isn't happening fast enough, so some implementations (especially servers that don't upgrade often enough, or running long-term-support OS flavors) will still be affected. This is the impact that the original article is warning about.
My point was that this is yet another change that makes TLS operations harder for non-Web use cases, with the "benefit" to the WebPKI being the removal of a hypothetical complexity, motivated by examples that indeed should have used a private PKI in the first place.
> A public CA checks it one-time, when it's being issued.
That's the same problem we have with server certs, and the general solution seems to be "shorter cert lifetimes".
> Worse still, some APIs (mainly for finance companies) require things like OV and EV, but of course they couldn't check the Subject DN if they wanted to.
Not an expert there, but isn't the point of EV that the CA verified the "real life entity" that requested the cert? So then it depends on what kind of access model the finance company was specifying for its API. "I don't care who is using my API as long as they are a company" is indeed a very stupid access model, but then I think the problem is deeper than just cert validation.
> "I don't care who is using my API as long as they are a company" is indeed a very stupid access model, but then I think the problem is deeper than just cert validation
It's not stupid if you reframe it as "you can only use my API if you give me a cryptographically verifiable trace to your legal identity".
That would be true if it worked, but I think there was the problem that EV names aren't always enough to trace back the legal entity? At least that's what I read; it might be wrong.
You are correct, and the answer is: no-one using publicly-trusted TLS certs for client authentication is actually doing any authentication. At best, they're verifying that the other party has an internet connection and perhaps the ability to read.
It was only ever used because other options are harder to implement.
It seems reasonable for server-to-server auth though? Suppose my server xmpp.foo.com already trusts the other server xmpp.bar.com. Now I get some random incoming connection. How would I verify that this connection indeed originates from xmpp.bar.com? LE-assigned client certs sound like a good solution to that problem.
> It seems reasonable for server-to-server auth though? Suppose my server xmpp.foo.com already trusts the other server xmpp.bar.com.
If you already trust xmpp.bar.com, then you probably shouldn't be using PKI, as PKI is a complex system to solve the problem where you don't have preexisting trust. (I suppose maybe PKI could be used to help with rolling over certs.)
No, the problem it was solving was "how do I verify that an incoming connection is actually from xmpp.bar.com and not from an impostor?"
You could also solve this with API keys or plain old authentication, but all of those require effort on xmpp.bar.com's side to specifically support your server.
Client certs seem better suited in that regard. A server can generate a trusted client cert once, and then everyone else can verify connections from that server without having to do any prior arrangements with it.
The public TLS PKI was never supposed to serve every use case and you know it. But let me point out when it was possible to get a public CA certificate for an XMPP server with SRVname and xmppAddr:
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 1096750 (0x10bc2e)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: C = IL, O = StartCom Ltd., OU = Secure Digital Certificate Signing, CN = StartCom Class 1 Primary Intermediate Server CA
            Validity
                Not Before: May 27 16:16:59 2015 GMT
                Not After : May 28 12:34:54 2016 GMT
            Subject: C = DE, CN = chat.yax.im, emailAddress = hostmaster@yax.im
            X509v3 extensions:
                X509v3 Subject Alternative Name:
                    DNS:chat.yax.im, DNS:yax.im, xmppAddr:chat.yax.im, dnsSRV:chat.yax.im, xmppAddr:yax.im, dnsSRV:yax.im
Ironically, this was the last server certificate I obtained pre-LetsEncrypt.
This post here is a demonstration that some non-WebPKI purpose is causing issues and complaints. This has happened before with the SHA-1 deprecation. The WebPKI does not want this burden and should not have this burden.
Let's Encrypt also provides value by providing signed TLS certificates that are enrolled in all major operating systems, and that can be used to authenticate any TLS server.
This is a significant and important use case that's totally ignored by the "WebPKI" proponents, and there is no alternative infrastructure that would provide that value if the WebPKI were, e.g., to add certificate constraints limiting issued certificates to TCP/443.
Too late for an edit: I read a bit more about how XMPP works, and I guess the cert is not really about network access controls or authenticating the connection, but about authenticating that the data is coming from the right server.
Is there any reason why things gravitate towards being web-centric, especially Google-centric?
Seeing that Google's browser policies triggered the LE change, and that most CAs really just focus on what websites need rather than on non-web services, isn't helpful, considering that browsers are now terribly inefficient (I mean, come on, 1GB of RAM for 3 tabs of Firefox whilst still buffering?!) while XMPP is significantly more lightweight and yet more featureful than, say, Discord.
Google dominates the space because they have an active, robust trust-store program that they manage well. Apple the same. Mozilla and Microsoft too (though to a lesser extent).
If any ecosystem - such as XMPP - wishes to, they could start their own root-program, but many simply copy what Chrome or Mozilla do and then are surprised when things change.
There might be some confusion here, as there is no refusal at all.
As stated in the blog post, we (Prosody) have been accepting (only) serverAuth certificates for a long time. However this is technically in violation of the relevant RFCs, and not the default behaviour of TLS libraries, so it's far from natural for software to be implementing this.
There was only one implementation discovered so far which was not accepting certificates unless they included the clientAuth purpose, and that was already updated 6+ months ago.
This blog post is intended to alert our users, and the broader XMPP community, about the issue that many were unaware of, and particularly to nudge server operators to upgrade their software if necessary, to avoid any federation issues on the network.
Sorry, I probably misunderstood; I thought there was resistance to updating the corresponding specs. I understand that non-XMPP specs might refuse to be updated, but at least this behavior could be standardized for XMPP specifically.
Yeah, the resistance is outside the XMPP community. However we have a long history of working with internet standards, and it's disappointing to now be in an era where "the internet" has become just a synonym for "the web", and so many interesting protocols and ideas get pushed aside because of the focus on browsers, the web and HTTPS.
The article literally talks about how one of the server implementations does exactly that:
> Does this affect Prosody?
> Not directly. Let’s Encrypt is not the first CA to issue server-only certificates. Many years ago, we incorporated changes into Prosody which allow server-only certificates to be used for server-to-server connections, regardless of which server started the connection. [...]
On the contrary, setting up a separate PKI for XMPP would be what is not pragmatic or even feasible at all. The pragmatic choice is to make use of the options available even if some people find them icky.
XMPP identifiers have domain names, so the XMPP server can check that the DNS SAN matches the domain name of the identifiers in incoming XMPP messages.
I've seen non-XMPP systems where you configure the DNS name to require in the client certificate.
It's possible to do this securely, but I agree entirely with your other comment that using a public PKI with client certs is a recipe for disaster because it's so easy and common to screw up.
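A minimal sketch of that SAN check (Python with pyca/cryptography; real code would also handle wildcards, IDNA, and the xmppAddr/SRVName SAN types):

    from cryptography import x509

    def cert_covers_domain(cert: x509.Certificate, domain: str) -> bool:
        # Accept the peer for a given XMPP domain only if the certificate's
        # subjectAltName carries a matching DNS name.
        try:
            sans = cert.extensions.get_extension_for_class(
                x509.SubjectAlternativeName).value
        except x509.ExtensionNotFound:
            return False
        names = sans.get_values_for_type(x509.DNSName)
        return domain.lower() in (n.lower() for n in names)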
Yes, definitely. Prosody supports DANE, but DNSSEC deployment continues to be an issue when talking about the public XMPP network at large. Ironically the .im TLD our own site is on still doesn't support it at all.
Trust chains. Some implementations would accept an LE certificate for foo.com as a valid login for foo.com or something like that, because they treated all trusted certs the same, whether issued by the service being authenticated to, or some other CA.
It might be possible to relay communications between two servers and have one of them act as a client without knowing. Handshake verification prevents that in TLS, but there could be similar attacks.
I really fail to understand or sympathize with Let's Encrypt limiting their certs like this. What is gained by slamming the door on applications other than servers being able to get certs?
In this case I do think it makes sense for servers to accept certs marked for server use, since it's an s2s use case. But this just feels like such an unnecessary clamping down. To have made certs finally plentiful & available for use... then to take that away? Bother!
The good news is that since Prosody requires the serverAuth EKU, the misissued cert would be in-scope of Mozilla's root program, so if it's discovered, Mozilla would require an incident report and potentially distrust the CA. But that's reactive, not proactive.
I think the ideal approach would be combining both (as mentioned, there have been some experiments with that), except when e.g. DANE can be used ( https://prosody.im/doc/modules/mod_s2s_auth_dane_in ). But if DANE can be used, the whole CA thing is irrelevant anyway :)
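For reference, a DANE-EE record for XMPP s2s lives at _5269._tcp.<domain>, and with the common "3 1 1" parameters the check reduces to hashing the peer cert's SubjectPublicKeyInfo. A minimal sketch, assuming the Python cryptography package and that the TLSA record data has already been fetched over DNSSEC:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def dane_ee_311_match(cert_der: bytes, tlsa_data: bytes) -> bool:
        # TLSA usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256):
        # the record data is the SHA-256 digest of the SubjectPublicKeyInfo
        cert = x509.load_der_x509_certificate(cert_der)
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).digest() == tlsa_data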
By "client certificates" I mean (and generally take most others in this thread to mean) certificates which have been issues with the clientAuth key purpose defined in RFC 5280. This is the key purpose that Let's Encrypt will no longer be including in their certificates, and what this whole change is about.
However when one server connects to another server, all of TCP, TLS and the application code see the initiating party as a "client", which is distinct from say, an "XMPP client" which is an end-user application running on e.g. some laptop or phone.
The comment I was responding to clearly specified " I don't see how TLS-with-client-EKU [...]" which was more specific, however I used the more vague term "client certificates" to refer to the same thing in my response for brevity (thinking it would be clear from the context). Hope that clarifies things!
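For concreteness: the key purposes in question are just OIDs in the certificate's extendedKeyUsage extension (serverAuth = 1.3.6.1.5.5.7.3.1, clientAuth = 1.3.6.1.5.5.7.3.2). A quick way to see what a given cert asserts, assuming the Python cryptography package:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # The EKU extension is a list of OIDs; membership tells you the purposes
    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    print("serverAuth:", ExtendedKeyUsageOID.SERVER_AUTH in eku)
    print("clientAuth:", ExtendedKeyUsageOID.CLIENT_AUTH in eku)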
It has never been secure to rely on the Mozilla root store at all, or any root store for that matter, as they all contain certificate authorities which are in actively hostile countries or can otherwise be coerced by hostile actors. The entire security of the web PKI relies on the hope that if some certificate authority does something bad, it'll become known.
Correct, but it's not a vain hope. There are mechanisms like certificate transparency that are explicitly designed to make sure any misbehavior does become known.
Let's translate and simplify:
> The current CA ecosystem is Google. They want only Google applications to get certificates from CAs.
Huh? Google does not even make a web server, or any kind of major server software, unless you count GCP load balancers or whatever. You are confusing their control of the client (which is still significantly shared with Apple and Microsoft, since they control OS-level certificate trust stores) with the server side, who are the "customers" of the CA. Google has almost no involvement in that and couldn't care less what kind of code is requesting and using certificates.
Google already has its own CA that is used for its own systems as well as to issue certificates for GCP customers. They don't interact with Let's Encrypt or any other external CA, as far as I know, for their own services.
There are always plenty of people who aren't interested in doing any hard work themselves but are along for the ride, and periodically some of those people are very angry because a system they expended no effort to maintain hasn't focused on their needs.
The Web PKI wasn't somehow blessed by God, people made this. If you do the hard work you can make your own PKI, with your own rules. If you aren't interested in doing that work, you get whatever the people who did the work wanted. This ought to be a familiar concept.
The title of the current (2.2.2) CAB standard is "Baseline Requirements for the Issuance and Management of Publicly‐Trusted TLS Server Certificates":
* https://cabforum.org/working-groups/server/baseline-requirem...
§1.3, "PKI Participants", states:
> The CA/Browser Forum is a voluntary organization of Certification Authorities and suppliers of Internet browser and other relying‐party software applications.
IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).
If Google/Chrome doesn't want to allow it, good for them. But why do they get to dictate what others do?
If CAs don't want hostility from browser companies for using "HTTPS certificates" for non-HTTP/browser applications, they should build their own thing.
I put "HTTPS certificates" in quotes in this comment because it is not a technical term defined anywhere, just a concept that "these certificates should only be used for HTTPS". The core specifications talk about "TLS servers" and "TLS clients".
There's loads of non-web, non-HTTPS TLS use cases; it's just that the CAB doesn't care about those (why should it?).
However, eventually, the CABF started imposing restrictions on the public CA operators regarding the issuance of non-HTTPS certificates. Nominally, the CAs are still offering "TLS certificates", but due to the pressure from the CABF, the allowed certificates are getting more and more limited, with the removal of SRVname a few years ago, and the removal of clientAuth that this thread is about.
I can understand the CABF position of "just make your own PKI" to a degree, but in practice that would require a Let's Encrypt level of effort for something that is already perfectly provided by Let's Encrypt, if it weren't for the CABF lobbying.
The restriction is on signing non-web certificates with the same root/intermediate that is part of the WebPKI.
There's no rule (that I'm aware of?) that says the CAs can't have different signing roots for whatever use-case that are then trusted by people who need that use case.
[citation needed]
The title of their current (2.2.2) standard is "Baseline Requirements for the Issuance and Management of Publicly‐Trusted TLS Server Certificates":
* https://cabforum.org/working-groups/server/baseline-requirem...
§1.3, "PKI Participants", states:
> The CA/Browser Forum is a voluntary organization of Certification Authorities and suppliers of Internet browser and other relying‐party software applications.
IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).
My citation is the membership of the CAB.
> IMHO "other relying-party software applications" can include XMPP servers (also perhaps SMTP, IMAP, FTPS, NNTP, etc).
This may be your opinion, but what's the representation of XMPP etc. software maintainers at the CAB?
> My citation is the membership of the CAB.
It is a single member of the CAB that is insisting on changing the MAY to a MUST NOT for clientAuth. Why does that single member, Google-Chrome, get to dictate this?
Has Mozilla insisted on changing the meaning of §1.3 to basically remove "other relying‐party software applications"? Apple-Safari? Or any other of the "Certificate Consumers":
* https://cabforum.org/working-groups/server/#certificate-cons...
The membership of the CAB collectively agrees to the requirements/restrictions they place on themselves, and those requirements (a) acknowledge both browser and non-browser use cases, and (b) explicitly allow clientAuth usage as a MAY; see §7.1.2.10.6, §7.1.2.7.10:
* https://cabforum.org/working-groups/server/baseline-requirem...
A serious problem with traditional CAs, which was partly solved by Let's Encrypt just giving them away. Everyone gradually realized that the "tying to real identity" function was both very expensive and of little value, compared to what people actually want, which is "encryption, with reasonable certainty that it's not suddenly MITMd".
From https://cabforum.org/
> Welcome to the CA/Browser Forum
>
> The Certification Authority Browser Forum (CA/Browser Forum) is a voluntary gathering of Certificate Issuers and suppliers of Internet browser software and other applications that use certificates (Certificate Consumers).
From https://letsencrypt.org/docs/faq/
> Does Let’s Encrypt issue certificates for anything other than SSL/TLS for websites?
>
> Let’s Encrypt certificates are standard Domain Validation certificates, so you can use them for any server that uses a domain name, like web servers, mail servers, FTP servers, and many more.
Are we really in an age where people don't remember that SSL was intended for many protocols, including MAIL?!
Do you think email works on web technology because you use a web client to access your mailbox?
Jesus Christ, formal education needs to come quickly to our industry.
X.509 was published on November 25, 1988; version 3 added support for "the web" as it was known at the time. One obvious use was for X.400 e-mail systems in the 1980s. Novell NetWare adopted X.509.
It was originally intended for use with X.511 "Directory Access Protocol", which LDAP was based on. You can still find X.500 heritage in Microsoft Exchange and Active Directory, although it's diminishing over time, and e.g. Entra ID only has some affordances for backward compatibility.
It just went away, upset. It might never come back.
It's a little vague, but my understanding reading between the lines is that sometimes, when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases with different cost-benefit profiles on security vs. availability, and the browser vendors want that to stop happening.
Let's Encrypt could of course continue offering client certificates if they wanted to, but they'd need to set up a separate root for those certificates to chain up to, and they don't think there's enough demand for that to be worth it.
In theory, Chrome's rule would split the CA system into a "for web browsers" half and a "for everything else" half - but in practice, there might not be a lot of resources to keep the latter half operational.
But prohibiting certs from being marked for client usage is mostly unrelated to that goal because:
1. There are many non-web use cases for certificates that are only used for server authentication. And
2. There are use cases where it makes sense to use the same certificate used for the web PKI as a client with mTLS to another server using the web PKI, especially for federated communication (see the sketch below).
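A sketch of case 2 with Python's standard ssl module — presenting the same cert you serve with when dialing out to a peer (the host, port, and file names are made up):

    import socket, ssl

    ctx = ssl.create_default_context()  # still verify the peer like any server
    # Present our own (serverAuth-issued) certificate on the client side too
    ctx.load_cert_chain("fullchain.pem", "privkey.pem")

    with socket.create_connection(("peer.example.org", 5269)) as sock:
        with ctx.wrap_socket(sock, server_hostname="peer.example.org") as tls:
            print(tls.version(), tls.getpeercert()["subject"])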
Do you (or anyone else) have an example of this happening?
They launched a gross pressure campaign, trotting out "small businesses" and charity events that would lose money unless SHA-1 certificates were allowed. Of course, these payment processors did billions in revenue per year and had years to ship out new credit card terminals. And small organizations could have and would have just gotten a $10 Square reader at the nearest UPS store if their credit card terminals stopped working, which is what the legacy payment processors were truly scared of.
The pressure was so strong that the browser vendors ended up allowing Symantec to intentionally violate the Baseline Requirements and issue SHA-1 certificates to these payment processors [1][2][3]. Ever since, there has been a very strong desire to get use cases like this out of the WebPKI and onto private PKI where they belong.
A clientAuth EKU is the strongest indicator possible that a certificate is not intended for use by browsers, so allowing them is entirely downside for browser users. I feel bad for the clientAuth use cases where a public PKI is useful and which aren't causing any trouble (such as XMPP) but this is ultimately a very tiny use case, and a world where browsers prioritize the security of ordinary Web users is much better than the bad old days when the business interests of CAs and their large enterprise customers dominated.
[1] https://groups.google.com/g/mozilla.dev.security.policy/c/RH...
[2] https://groups.google.com/g/mozilla.dev.security.policy/c/yh...
[3] https://groups.google.com/g/mozilla.dev.security.policy/c/LM...
Google is hoping that after this change other TLS clients will go off and build their own PKI entirely separate from the web PKI, but in reality that would take way too much redundant effort when the web PKI already does 99% of what they want. What will actually happen is clients that want to use web certs for client authentication will just start ignoring the value of the extendedKeyUsage extension. The OP says Prosody already does. I don't see how that's an improvement to the status quo.
How exactly?
TL;DR yes, 'tis a credible threat.
I bet Google themselves would be scared of anti-trust lawsuits over this. Even if they weren't, I don't think they'll really go so far as to compromise the security of half of the internet just to get their way on this one small improvement.
Either way, I think LE has enough power to at least push back and see where things fall. Continuing to support users can't hurt them, until they truly have no other choice.
This was exactly the point of the recent (2024) eIDAS update, which introduced EU Trusted Lists. The original draft mandated that browsers accept X.509 certs from CAs ("TSPs") accredited in the EU by national bodies. Browsers were no longer supposed to be free to just eject CAs from their root programs for any reason or no reason at all; in case of infractions they were supposed to report to the CAB or a NAB, which would make the final decision.
Browsers responded by lobbying, because the proposal also contained some questionable stuff like mandatory EV UI, which the browsers rightfully deprecated, and it also wasn't clear whether they could use OneCRL and similar alternative revocation schemes for mitigations of ongoing attacks. The language was diluted.
Why an entire new root? Perhaps set up an ACME profile [1] where the requestor could ask for clientAuth if their use case would be helped by it; the default would be off.
Google would be free to reject with-clientAuth HTTPS server certificates in their browser, but why should they care if an XMPP or SMTP server has such a certificate if the browser never sees it?
[1] https://datatracker.ietf.org/doc/draft-ietf-acme-profiles/
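Under that draft, profile selection is just an extra field in the ACME new-order payload, so the ask is small. A hypothetical order (the "clientauth" profile name is invented for illustration):

    # Hypothetical ACME newOrder payload per draft-ietf-acme-profiles;
    # the "clientauth" profile name is made up for illustration
    new_order = {
        "profile": "clientauth",
        "identifiers": [
            {"type": "dns", "value": "xmpp.example.org"},
        ],
    }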
> To qualify as a dedicated TLS server authentication PKI hierarchy under this policy:
> All corresponding unexpired and unrevoked subordinate CA certificates operated beneath an applicant root CA MUST:
> [...]
> when disclosed to the CCADB…
> [...]
> on or after June 15, 2025, include the extendedKeyUsage extension and only assert an extendedKeyUsage purpose of id-kp-serverAuth.
> [...]
> NOT contain a public key corresponding to any other unexpired or unrevoked certificate that asserts different extendedKeyUsage values.
https://googlechrome.github.io/chromerootprogram/policy-arch...
According to Google. Why do they get to dictate this?
Per the current (2.2.2) CAB requirements [1], §7.1.2.10.6, "CA Certificate Extended Key Usage": id-kp-clientAuth is a MAY.
If I were (say) Let's Encrypt I would (optionally?) allow it and dare Google/Chrome to remove my root certificate. Letting bullies get away with this kind of nonsense only encourages them.
[1] https://cabforum.org/working-groups/server/baseline-requirem...
Why does having the clientAuth capability make a certificate "insecure"?
The CA/Browser Forum has disallowed the issuance of server certificates that make use of the SRVName [0] subjectAltName type, which obviously was a server use case, and I guess the only reason we are still allowed to use the Web PKI for SMTP is that both operate on the server hostname and it's not technically possible to limit the protocol.
It would be perfectly fine to let CAs issue certificates for non-Web use-cases with a different set of requirements, without the hassle of maintaining and distributing multiple Roots, but CA/BF deliberately chose not to.
[0] https://community.letsencrypt.org/t/srvname-and-xmppaddr-sup...
The title alone tells you that they are fully aware that they are fucking others over and don't care one bit.
In a better world this kind of manipulative language would get you shamed and ostracized but somehow it's considered professional communication.
Calling Google's bluff and seeing if they would willingly cut their users off from half the web seems like an option here.
Based on previous history where people actually did call google's bluff to their regret, what happens is that google trusts all current certificates and just stops trusting new certs as they are issued.
Google has dragged PKI security into the 21st century kicking and screaming. Their reforms are the reason why PKI security is not a joke anymore. They are definitely not afraid to call CA companies' bluffs. They will win.
No it's not. It's a specific argument that's true only in specific cases. "You shouldn't handle knives" is an equally good rule from a general design standpoint, but nonsensical when you're a chef.
"You should have separate chains" is a reasonable decision when the ability to rotate out a compromised chain, and to insulate some downtime from other chains/usages, is desirable. Needing to manage multiple cert chains is more overhead, making use and maintenance harder. It increases complexity.
Large companies have never been afraid of more overhead. It's their singular advantage.
Removing features someone is using, and calling it better security, when it doesn't actually meaningfully reduce or remove some risk, is weaponized incompetence. And sufficiently advanced incompetence is...
There's no world where anyone gains additional protection from a third-party compromise. Or one where LE has one of its chains compromised but doesn't rotate all of them.
I'm curious what other use cases there have been for domain-validated client certs aside from XMPP.
It's one of those things that has just piggybacked on top of WebPKI and things just piggybacking is a bad idea. There have been multiple cases in the past where this has caused a lot of pain for making meaningful improvements (some of those have been mentioned elsewhere in this thread).
The PKI system was designed independently of the web, and the web used to be one use case of it. You're kind of turning that around here.
"WebPKI" is the term used to refer to the PKI used by the web, with root stores managed by the browsers. Let's Encrypt is a WebPKI CA.
You're trying to make it sound like there has ever been some kind of an universal PKI that can be used for everything and without any issues.
WebPKI is the name of a specific PKI system, whereas PKI is a generic term for any PKI.
Sure, they supported the nascent HTTPS very early on, but most of the web thought that certificates were "too expensive for the likes of us", and so only really banks and the like actually adopted HTTPS. Most of the internet was still HTTP only for years after HTTPS was available.
Only when LE came along and started offering free certificates and facilitated a massive uptake in HTTPS websites were Google ever in a position to default to marking HTTP as "insecure and dangerous".
I've got no figures, but I suspect that if LE were to dig their heels in, Google wouldn't dare risk half the internet not working in their browser. I'm sure there would be some people who didn't want to be collateral damage if there was a standoff and would switch to a CA that complied with Google's will, but I suspect most people would be happy to see Google challenged on this. And end users would hopefully discover that every other browser still worked, just Chrome had broken, and Chrome would quite rapidly fall out of favour.
Google didn't just make TLS popular, they made it secure.
It's for your own good dontchaknow!
Given that LE renews certs every few weeks that wouldn't take long
I do hate this vagueness. What cases? Identify at least one.
But this is also why the current PKI mindset is insane. The warnings are never truly about a security problem, and users have correctly learned the warnings are useless. The CA/B is accomplishing absolutely nothing for security and absolutely everything for centralized control and platform instability.
is it their fault?
with the structure of the browser market today: you do what Google or Apple tell you to, or you're finished as a CA
the "forum" seems to be more of a puppet government
Even Google and Apple from a corporate level likely have no idea what their CA/B reps are doing and would trust their expertise if asked, regardless of how many billions of dollars it is burning.
The CA/B has basically made itself accountable to nobody including itself, it has no incentives to balance practicality or measure effectiveness. It's basically a runaway train of ineffective policy and procedure.
https://cabforum.org/2025/06/11/minutes-of-the-f2f-65-meetin...
The real takeaway is that there's never been a lot of real thought put into supporting client authentication - e.g. there's no root CA program for client certificates. To use a term from that discussion, it's usually just "piggybacked" on server authentication.
"Let's Encrypt is just used for, like, webservers, right? Why do this other stuff webservers never use?"
Which does appear to be the thinking, though they blame Google, which also seems to have taken the "webservers in general don't do this, it's not important" stance - https://letsencrypt.org/2025/05/14/ending-tls-client-authent...
1. Use DANE to verify the client certificate. But that requires DNSSEC, which isn't widely used. It would probably require new implementations of the handshake to check the client cert, and would add latency, since the server has to do a DNS lookup to verify the client's cert.
2. When the server receives a request, it makes an HTTPS request to a well-known endpoint on the domain in the client cert's subject, which serves a CA certificate; it then checks that the client cert is signed by that CA. The client generates its client cert with that CA (or even uses the same self-signed cert for both). This way the authenticity of the client CA is verified using the web PKI cert (a rough sketch follows below). But the implementation is kind of complicated, and it has an even worse latency problem than 1.
3. The server has an endpoint where a client can request a client certificate from that server, probably with a fairly short expiration, for a domain, with a CSR or equivalent. The server then responds by making an HTTPS POST to a well-known endpoint on the requested domain, containing a certificate signed by the server's own CA. But for that to work, the registration request needs to be unauthenticated, and could possibly be vulnerable to DoS attacks. It also requires state on the client side, to connect the secret key with the final cert (unless the server generated a new secret key for the client, which probably isn't ideal). And the client should probably cache the cert until it expires.
And AFAIK, all of these would require changes to how XMPP and other federated protocols work.
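A rough sketch of option 2 from the list above, assuming the Python requests and cryptography packages; the well-known path here is invented for illustration:

    import requests
    from cryptography import x509

    def verify_client_cert(client_cert_der: bytes, domain: str) -> bool:
        # Fetch the domain's delegated CA over HTTPS, so its authenticity
        # rests on the domain's ordinary web PKI certificate
        resp = requests.get(
            f"https://{domain}/.well-known/xmpp-client-ca.pem", timeout=10)
        resp.raise_for_status()
        ca = x509.load_pem_x509_certificate(resp.content)
        client = x509.load_der_x509_certificate(client_cert_der)
        try:
            # Checks both the issuer name and the signature (cryptography >= 40)
            client.verify_directly_issued_by(ca)
            return True
        except Exception:
            return False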
(1) just isn't that widely deployed due to low DNSSEC adoption and setup complexity, but there is a push to get server operators to use it if they can.
(2) is defined in RFC 7711: https://www.rfc-editor.org/rfc/rfc7711 — however, it has more latency and complexity compared to just using a valid certificate directly in the XMPP connection's TLS handshake. Its main use is for XMPP hosting providers that don't have access to a domain's HTTPS.
But if you're going to modify the protocol anyway then why not just put it in the protocol that a "server" certificate is to be trusted even if the peer server is initiating rather than accepting the connection? That's effectively what you would be doing by trusting the "server" certificate to authenticate the chain of trust for a "client" certificate anyway.
The technical term for such cross-service requirements is "a giant pain in the ass".
But it still seems like the premise is wrong. The protocol is server-to-server and the legacy concept that one of them is the "client" and needs a "client certificate" is inapplicable, so why shouldn't the protocol just specify that both peers are expected to present a "server certificate" regardless of which one initiated the connection?
Your description is great :)
If you strictly require the side that opens the TCP connection to only use client certs and require the side that gets the TCP connection to only use server certs, then workflows where both sides validate each other become impossible with a single connection.
You could have each server open a TCP connection to the other, but then you have a single conversation spread across multiple connections. It gets messy fast, especially if you try to scale beyond a single server -- the side that initiates the first outgoing connection has to receive the second incoming connection, so you have to somehow get your load balancer to match the second connection with the first and route it to the same box.
Then at the protocol level, you'd essentially have each connection's server send a random-number challenge to the client, saying "I can't authenticate clients because they don't have certs, so please echo this back on the other connection, where you're the server and I can authenticate you." The complexity and subtlety of this coordination dance seems like you're just asking for security issues.
If I was implementing XMPP I would be very tempted to say, "Don't be strict about client vs. server certs, let a client use a server cert to demonstrate ownership of a domain -- even if it's forbidden by RFC and even if we have to patch our TLS library to do it."
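That's roughly what newer verification APIs let you express anyway: validate the initiating peer's chain under ordinary serverAuth rules for its claimed domain. A sketch assuming the Python cryptography package's verification API (version 42+):

    from cryptography.x509 import DNSName, load_pem_x509_certificates
    from cryptography.x509.verification import PolicyBuilder, Store

    def verify_initiating_peer(chain_pem: bytes, roots_pem: bytes, domain: str):
        certs = load_pem_x509_certificates(chain_pem)
        leaf, intermediates = certs[0], certs[1:]
        store = Store(load_pem_x509_certificates(roots_pem))
        # Validate the *initiating* peer exactly as if it were the server
        # for its claimed domain: chain building, name check, serverAuth EKU
        verifier = PolicyBuilder().store(store).build_server_verifier(
            DNSName(domain))
        return verifier.verify(leaf, intermediates)  # raises on failure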
"This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline."
TL;DR blame Google
As LE says, most users of client certs are doing mTLS, and so self-signed is fine.
That seems disingenuous. Doesn't being in the client cert business now require a lot of extra effort that it didn't before, due entirely to Google's new rule?
No one that uses it is authenticating anything more than that the other party has an internet connection and, perhaps, the ability to read. No part of the Subject DN or SAN is checked. It's just that it's 'easy' to rely on an existing trust store rather than implement something secure using a private PKI.
Some providers who 'require' public TLS certs for mTLS even specify specific products and CAs (OV or EV from specific CAs), not realising that both the CAs and the roots are going to rotate more frequently in future.
But it also doesn't involve any particular trust in the CA either. Let's Encrypt has nothing to offer here, so there's no reason for them to try to make promises.
If you're relying on a certificate for authentication - issue it yourself.
And there's value there, if you're a server. It's why XMPP wants federated servers to authenticate themselves with certificates in the first place.
(This is basically how Let's Encrypt / ACME accounts work)
There's DANE, but outside of maybe two countries it's impractical to set up, because DNS providers keep messing up DNSSEC.
That's how they're implemented. How they "work" is a trivial pushbutton thing as documented by a well-known and trusted provider who cares deeply about simple user experience.
"Just self-sign a cert" is very much not the story XMPP wants their federated server operators to deal with.
Using the web PKI for client certs seems like a recipe for disaster: servers would just verify that the cert is signed, but since anyone can get one signed, anyone can spoof.
And this isn't just hypothetical. I remember xmlsec (a library for validating XML signatures, primarily SAML) used to use the web PKI for signature validation in addition to the specified cert, which resulted in a lot of SAML bypasses where you could pass validation by signing the SAML response with any certificate from Let's Encrypt, including the attacker's.
This seems exactly like a reason to use client certs with public CAs.
You (as in, the server) cannot verify this at all, but a public CA could.
If it's for auth, issue it yourself and don't rely on a third-party like a public CA.
Rolling out a private PKI for XMPP, with a dedicated root CA, would be a significant effort, essentially redoing all the hard work of Let's Encrypt but without the major funding, and thus ending up with an insecure solution.
We have made use of the public CAs, which have been issuing TLS certificates based on domain validation, for quite a few years now, since before the public TLS CAs were subverted into becoming public HTTPS-only CAs by Google and the CA/Browser Forum.
Rolling out a change that removes the EKU check would not be that much effort however.
It seems weird to first remove the flag and then tell everyone to update their servers to ignore the removal. Then why remove it in the first place?
Or to put it another way, the pragmatic response of the XMPP community shows that the effect of the change is not to remove the clientAuth capability from any certs but to effectively add it to all serverAuth certs no matter what the certificate says.
The XMPP community can continue to adapt other infrastructure for their purposes and do the thing they do. It does not mean it has to be catered to.
My point was that this is yet another change that makes TLS operations harder for non-Web use cases, with the "benefit" to the WebPKI being the removal of a hypothetical complexity, motivated by examples that indeed should have used a private PKI in the first place.
That's the same problem we have with server certs, and the general solution seems to be "shorter cert lifetimes".
> Worse still, some APIs (mainly for finance companies) require things like OV and EV, but of course they couldn't check the Subject DN if they wanted to.
Not an expert there, but isn't the point of EV that the CA verified the "real life entity" that requested the cert? So then it depends on what kind of access model the finance company was specifying for its API. "I don't care who is using my API as long as they are a company" is indeed a very stupid access model, but then I think the problem is deeper than just cert validation.
It's not stupid if you reframe it as "you can only use my API if you give me a cryptographically verifiable trace to your legal identity".
No it isn't, and that's not the reason why cert lifetimes are getting smaller.
Cert lifetimes being smaller is to combat certs being stolen, not man in the middle attacks.
It was only ever used because other options are harder to implement.
If you already trust xmpp.foo.com, then you probably shouldn't be using PKI, as PKI is a complex system to solve the problem where you don't have preexisting trust. (I suppose maybe PKI could be used to help with rolling over certs)
You could also solve this with API keys or plain old authentication, but all of those require effort on xmpp.foo.com's side to specifically support your server.
Client certs seem better suited in that regard. A server can generate a trusted client cert once, and then everyone else can verify connections from that server without having to do any prior arrangements with it.
Last time I checked, Let's Encrypt was saying they provide free TLS certs, not free WebPKI certs. When did that change?
Let's Encrypt provides value by providing signed TLS certs that are enrolled in the WebPKI (i.e. trusted by browsers).
If they just provided a (not necessarily trusted) TLS cert, like what anyone can generate from the command line, nobody would use them.
This is a significant and important use case that's totally ignored by the "WebPKI" proponents, and there is no alternative infrastructure that would provide that value if the WebPKI were to e.g. decide to add certificate constraints limiting issued certificates to TCP/443.
The CA verifies the subject just like any server certificate, which is what LE has already been doing.
The server verifies the subject by checking that the name in the certificate matches the name the client is claiming to be.
So I guess that could make sense.
Yes, the reason is called "Chrome" and "90% market share"...
You don't have to do what the RFC says.
They do what Google says.