In this post, I want to share my experience with writing reports for bug bounty programs.

#Important Note

It's important to note that these tips aren't a golden rule of reporting; I just want to share my experience and thoughts on what makes a well-written, informative report.


In my opinion, sections are very important in a report. Just think about the analyst who is reading it! A short summary of the vulnerability you found helps with the initial decision of whether the finding is a duplicate, and may even give the analyst some initial insight into the vector of the vulnerability.

When I'm writing my reports, I split them into four sections:

  • Overview
  • Proof Of Concept
  • Risk
  • Recommendation

#Overview

In this section I describe the vulnerability in a few words and give the analyst some initial insight into the finding.

#Proof Of Concept

This is the core of your report. I would mark this section as the most important one, because here you have to provide a clear understanding of the issue you found.

#Risk

Describe the severity and the actual risk of the finding.

#Recommendation

In this section, I suggest providing some information about a potential fix for the issue.

#Overview section - For a good impression

As described above, in this section you provide some initial information about the finding without any deep technical details. The purpose of this section is to help the analyst categorize your finding and decide whether it is unique or a dupe. Keep in mind that these are the first lines of your report, so -since we are all human- they give the analyst a good or a bad first impression of your report. To be clear, your overview should answer the following questions:

  • Who can exploit it?
  • What can be done?
  • What is the category of the vulnerability or issue?

If you provide the above information, I bet you will make a good impression on the reader!

#Proof Of Concept section - Simply the CORE

Now the analyst has the basic information about the report, so go for the real technical details. Don't bother them by copy/pasting some generic bullshit about a well-known issue. Trust me, the guys on the other side know a lot about vulnerabilities. They don't need any Wikipedia or OWASP description of SQL injection attacks. Just give them the exact information: what the affected function is, where you can successfully exploit the vulnerability, and how. It's very important to be aware of the issue and understand what you are actually reporting. Go through every step in detail. A step may seem trivial to you, but even really easy steps are important for the analyst.

I include a section inside the Proof Of Concept where the prerequisites of the attack are described. For example, in the case of an IDOR, I usually tell them what user levels are required and what objects have to be created in order to reproduce the issue. Obviously, if there is no special requirement to exploit the vulnerability, simply go straight to the steps.

I think the numbering method is very useful. Simply number your steps, because if the analyst needs some extra information, these incremental numbers can easily be referenced later.

When you are writing your POC, replicate every command or step as you go, to be sure you are not writing something wrong. Believe me, even the easiest command can be fucked up in a report, and it gives the reader a headache.

The screenshots. When you want to share some information in a screenshot, don't forget that the reader doesn't know what you are aiming at if you don't mark it on the screenshot. Use a well-visible color on the image to mark the important part! (Don't use yellow on a white background!)

Don't capture your whole screen! The analyst is not interested in your background image or the actual temperature of your CPU. Capture just the most important part! Always provide some information about the picture: what can we see in it? Sometimes it's not so straightforward.

When you think you are finished with the POC, TEST IT! In about 50% of my previous POCs, I had typos in the commands, used the wrong parameter, etc. It's very important to reproduce the issue based purely on your documentation!

#Risk - Is it important or not?

The aim of this section is to provide a real-world scenario: how the issue can be exploited and what its outcome is. You don't have to write a novel about a hacker, but you can give some information about the consequences of the issue.

I also provide a CVSS score and attack vector to give a general metric for the vulnerability I found.

If you are not familiar with CVSS scoring, please take a look at the following article, where all the details are provided.

If you really understand the issue you found, you will be able to determine its CVSS score. For the scoring I use the following calculator:

Let’s have a look at an authenticated SQL injection’s CVSS vector and score:

In this case, I would use the following vector string: AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N
What does it mean? The vector string provides a value for each parameter in the methodology. With the first calculator, you can easily open the vector by inserting the string into the URL:
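Since the vector string has such a regular metric:value layout, it can also be split up programmatically. A minimal sketch (not an official tool, just an illustration of the structure):

```python
# Parse a CVSS 3.0 vector string into a {metric: value} mapping.
def parse_cvss_vector(vector):
    return dict(part.split(":") for part in vector.split("/"))

metrics = parse_cvss_vector("AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N")
print(metrics["AV"])  # N (Network)
print(metrics["PR"])  # L (Low)
```

Each key in the resulting mapping is one of the metrics explained below.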

AV – This metric reflects the context by which vulnerability exploitation is possible. The Base Score increases the more remote (logically, and physically) an attacker can be in order to exploit the vulnerable component. In our case “AV:N” means “Network”, because we exploited the vulnerability on a web application over the internet, and no direct connection to the target is needed.

AC – This metric describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Such conditions may require the collection of more information about the target, the presence of certain system configuration settings, or computational exceptions. In our case the “AC:L” means “Low“, because we don’t need any special condition in order to successfully exploit the vulnerability.

PR – This metric describes the level of privileges an attacker must possess before successfully exploiting the vulnerability. The Base Score increases as fewer privileges are required. In our case “PR:L” means “Low”, because we need an authenticated but not highly privileged user in order to successfully exploit the vulnerability.

UI – This metric captures the requirement for a user, other than the attacker, to participate in the successful compromise of the vulnerable component. This metric determines whether the vulnerability can be exploited solely at the will of the attacker, or whether a separate user (or user-initiated process) must participate in some manner. The Base Score is highest when no user interaction is required. In our case “UI:N” means “None”, because we don't need any user interaction in order to successfully exploit the vulnerability.

S – Does a successful attack impact a component other than the vulnerable component? If so, the Base Score increases and the Confidentiality, Integrity and Availability metrics should be scored relative to the impacted component. In our case “S:U” means “Unchanged”, because the vulnerable component and the impacted component of the system are the same.

C – This metric measures the impact to the confidentiality of the information resources managed by a software component due to a successfully exploited vulnerability. Confidentiality refers to limiting information access and disclosure to only authorized users, as well as preventing access by, or disclosure to, unauthorized ones. In our case “C:H” means “High”, because we can dump the whole DB by exploiting the vulnerability.

I – This metric measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of information. In our case the “I:N” means “None“, because the data we dumped can’t be modified in the database.

A – This metric measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. It refers to the loss of availability of the impacted component itself, such as a networked service (e.g., web, database, email). Since availability refers to the accessibility of information resources, attacks that consume network bandwidth, processor cycles, or disk space all impact the availability of an impacted component. In our case “A:N” means “None”, because we aren't able to DoS the application or the DB by dumping the data.

Once we have selected the appropriate value for each parameter, we receive a score, which is 6.5 in our case. This means the vulnerability has Medium severity based on the CVSS 3.0 methodology.
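If you want to sanity-check the 6.5 yourself, the CVSS 3.0 base score equation for a Scope: Unchanged vector fits in a few lines. This is just a sketch using the published metric weights, not a replacement for the official calculator:

```python
import math

# CVSS 3.0 metric weights; the PR weights below are the Scope: Unchanged ones.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},  # shared by C, I and A
}

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope: Unchanged CVSS 3.0 vector."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    # CVSS rounds UP to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N
print(base_score("N", "L", "L", "N", "H", "N", "N"))  # 6.5
```

Running it on our example vector reproduces the calculator's 6.5.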

The scores have the following thresholds:

    • None: 0.0
    • Low: 0.1 – 3.9
    • Medium: 4.0 – 6.9
    • High: 7.0 – 8.9
    • Critical: 9.0 – 10.0
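These thresholds are easy to express as a small helper function (again, just a sketch):

```python
def severity(score):
    """Map a CVSS 3.0 base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity(6.5))  # Medium
```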

There is no need to “boost” your finding and give it a higher severity, because in the end it will always be adjusted to the real risk.

#Recommendation - Just if you are sure

This section is totally optional, and should only be used if you are at least 90% sure about the background of the issue and the proper method of patching it. If you have some extra information - maybe you use the same software and have already figured out how it can be fixed - you can provide it here.


#Your inputs and test data

It's really important to use demonstrative values in the input fields, and to avoid things like “haha u have been fcked”.

Why is it important?

Let's think about an XSS in some field. There is no additional information in the “haha” string, but using document.location, document.cookie, or alert("[NAMEOFTHEINPUTFIELD]") is much more valuable for further investigation.
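The idea is that the payload itself should document where it was injected. A hypothetical helper (the function and field names are mine, just to illustrate the labeling):

```python
# Build a self-describing XSS test payload that names the injected field,
# so the triggered alert immediately tells the analyst where it came from.
def labeled_xss_payload(field_name):
    return f'<script>alert("XSS in {field_name}")</script>'

print(labeled_xss_payload("comment_body"))
# <script>alert("XSS in comment_body")</script>
```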

The test data. Imagine a huge web application where you can create hundreds of different types of entities. Structured naming makes your life easier as well. For example: “Test_article_01”, “Test_comment_01”, “Test_dashboard_01” and so on…

Also, if you have a file upload opportunity, use file names that reflect their content, e.g. xss_payload_01.jpg.

It's really important to be structured even during your testing phase, because this approach makes writing your documentation much easier.


Companies aren't interested in the output of your Nessus or Burp scan. They can run those themselves, so they are expecting unique information.

Scanners are pretty awesome, but they generate a huge load and don't produce really valuable information about the target. Most programs don't allow you to use automated tools at all. Even if you are using a scanner and it gives you a result, ALWAYS manually review and validate the information!


Always keep in mind that a human is required to evaluate your report. This takes time for the analysts, so please be kind and don't ask them about your finding on a daily basis. They will reach out to you when any update about your report is available.