Observations on Responsible Vulnerability Disclosure in Practice
Over the past few months I’ve been collaborating with two talented Finnish security researchers, Tomi Koski and Joona Hoikkala, on getting the results of their vulnerability research published and recognized.
On the sidelines, I’ve also made a number of observations on how the responsible vulnerability disclosure process works, or doesn’t. From a researcher’s perspective the process can be quite frustrating, even when your findings are rock solid and important.
Researchers: The First Obstacle is Often the Steepest
Usually, as a researcher, the first and biggest obstacle you have to overcome is collaborating with the vendor: getting them to acknowledge, reproduce, fix, and publish the vulnerabilities you have discovered. This can take time and in some cases even lead to a situation where the vendor hides its head in the sand like an ostrich and pretends the vulnerabilities do not exist.
Quem superare potes interdum vince ferendo;
maxima enim est hominum semper patientia virtus.

Defeat one you could destroy with an enduring mind;
patience is the greatest virtue of humankind.
Of course, the more mature the vendor’s development processes are, the faster and better they usually react to your findings. I’m not saying that all vendors are hopeless, far from it, but discounting and discrediting researchers’ work is not the best approach they can take, especially when the findings can have serious implications for their business.
Vendors: Vulnerability Disclosure Process 101
If you represent a vendor and you don’t have an established product security track record, getting started doesn’t need to be complicated. The first step can be as easy as setting up a communication channel with potential reporters through a file called security.txt, as defined in RFC 9116. I think the most important motivation for this RFC is the following:
…how security researchers can locate contact information and vulnerability disclosure practices for organizations in order to report vulnerabilities.
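As a concrete sketch, a minimal security.txt could look like the following. The contact address and URLs are placeholders for illustration, not values from the RFC itself; per RFC 9116, Contact and Expires are the two required fields:

```
# Example security.txt (RFC 9116); all values below are placeholders.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en, fi
```

The file is served at the well-known path, so a researcher can locate it with nothing more than, e.g., curl https://example.com/.well-known/security.txt.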
Apart from the obvious, i.e. the contact information, the difficult bit for you will be coming up with a vulnerability disclosure policy that helps researchers figure out how to best work with you. A good starting point for writing a policy is, for example, the template put forth by CISA. It is geared towards US government agencies, but should be easily adaptable for any organization wishing to set one up.
I think some of the most important aspects of a vulnerability disclosure policy (VDP) are the following sections; a skeleton illustrating them follows the list:
- Guidelines: what is expected of a researcher either performing research or trying to report an issue they have discovered.
- Scope: what is in the scope of acceptable vulnerability research, e.g. domains, products, services.
- Test methods: what is out of scope, such as performing a DDoS attack or social engineering.
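As a rough sketch only, a policy built around these sections, loosely following the structure of the CISA template, might be outlined like this (all concrete values are made up for illustration):

```
Vulnerability Disclosure Policy (skeleton)

Guidelines    - good-faith research only; stop and report once a
                vulnerability is confirmed; do not access or exfiltrate
                user data beyond what is needed as proof.
Scope         - e.g. *.example.com and Product X versions 2.x and later.
Test methods  - out of scope: denial-of-service attacks, social
                engineering, physical attacks against offices or staff.
Reporting     - where to send reports (see security.txt above), what to
                include, and how quickly we acknowledge and respond.
```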
Doing this will help you get on your way, but the devil is in the details. One important thing to understand about a vulnerability disclosure policy is that it should reflect the best practice of coordinated vulnerability disclosure (CVD). Please bear in mind that your VDP is neither an NDA nor a contract that both parties sign and abide by. Moreover, you shouldn’t try to impose clauses in your policy that work against established norms in vulnerability coordination.
For example, if your policy imposes limits on what can be said about an issue after it has been fixed, it can be detrimental to the end goal, which usually entails going through the following steps in your CVD:
1. Come up with a mitigation for the issue, if possible.
2. Fix the issue.
3. Make the fix available to your users.
4. Publish the issue.
The last step is important so that even users who might not be your customers any more can learn about the issue and mitigate it. These could, for example, be customers who have a right to use a certain version of your software but no longer have a maintenance contract with you. They may be running a vulnerable version of your software and need to know about it.
Quoting the CISA Vulnerability Disclosure Policy template:
This policy describes what systems and types of research are covered under this policy, how to send us vulnerability reports, and how long we ask security researchers to wait before publicly disclosing vulnerabilities.
The bottom line is that the world needs to know about the vulnerabilities after they have been discovered (and fixed). One of the best proofs of you as a vendor taking things seriously is to apply for CVEs for the vulnerabilities, period. It is not a question of whether the vulnerabilities become public; it is a question of when.
To quote Jarkko Huttunen:
Hiding security issues does not help anyone but threat actors, as they couldn’t care less about your policies when they attack you or your customers.
Getting CVEs assigned for your security issues is important so that the world and your users can:
- identify specific issues with your software,
- see which versions they affect,
- understand what the impact is,
- and, most importantly, find out which version(s) contain the fixes.
Case Study: Integrity Checking — an Integral Part of Cyber Security
I’m an editor for an independent security blog called Public Exposure. In late June, I received a first draft of a write-up from Joona that was very intriguing. After a couple of rounds of light edits, I agreed with Joona and Tomi that we would publish the blog post on Tuesday, 2022-07-12, which happened to be Patch Tuesday, our usual publishing day.
Publishing the details of the vulnerabilities they had discovered was important so that MITRE could assign CVEs for them. The vendor had not done their part for CVD step 4 detailed above, so the only option left for the researchers was to apply for the CVEs themselves.
I’m not going to delve into the details here, as it is best that you read the post yourself.