The second Web hacking challenge

To be honest, I had already found this flag before I even opened this challenge. I don’t want to spoil the game, though, so let’s go step by step.

When you face any challenge, the first step is always enumeration. Enumeration is the key: get familiar with the thing you are facing. What is it? What is it used for?

There are countless things to check, but if you are testing a website, you should look at the robots.txt file at the very beginning of your engagement. That’s what I did in the case of RingZer0 as well.

The robots.txt contained only the following entries:

User-agent: *
Disallow: /16bfff59f7e8343a2643bdc2ee76b2dc/ 
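Entries like this already hand an attacker a target list. As a quick sketch (parsing the text locally rather than fetching it; the content is the one shown above), you could pull out every Disallow path for manual inspection:

```python
# The robots.txt body found during enumeration
robots_txt = """\
User-agent: *
Disallow: /16bfff59f7e8343a2643bdc2ee76b2dc/
"""

# Extract every Disallow path -- these are prime targets to visit by hand
disallowed = [
    line.split(":", 1)[1].strip()
    for line in robots_txt.splitlines()
    if line.lower().startswith("disallow:")
]
print(disallowed)  # ['/16bfff59f7e8343a2643bdc2ee76b2dc/']
```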

What is the next step? Obviously, open that URI!

You get the flag. I had done the same earlier, I just didn’t know which challenge the flag belonged to. But the hint of this challenge, “Even Google cannot find this one”, made the solution obvious.

What is robots.txt?

Robots.txt is a text file with instructions for search engine crawlers. It defines which areas of a website crawlers are allowed to visit. In practice it usually works the other way around: instead of listing the allowed areas explicitly, it marks certain areas as off-limits. Using this simple text file, you can easily exclude entire domains, complete directories, one or more subdirectories, or individual files from search engine crawling. However, this file does not protect against unauthorized access in any way.

Robots.txt is stored in the root directory of a domain, so it is the first document a crawler opens when visiting your site. The file does not only control crawling: you can also include a link to your sitemap, which gives search engine crawlers an overview of all the existing URLs of your domain.
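For illustration, a typical robots.txt combining both roles might look like this (the paths and the sitemap URL are made up):

User-agent: *
Disallow: /admin/
Disallow: /tmp/

Sitemap: https://example.com/sitemap.xml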

So the Disallow entry means that well-behaved search engines and crawlers will not crawl or index the given URI. It does not mean the URI is inaccessible: anyone who reads the file can simply open the path in a browser, which is exactly how this flag is found.
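This behaviour can be sketched with Python’s standard `urllib.robotparser` module. The rules are the ones found on RingZer0; the hostname `example.com` is just a placeholder:

```python
from urllib.robotparser import RobotFileParser

# Feed the parser the same rules we found in the robots.txt
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /16bfff59f7e8343a2643bdc2ee76b2dc/",
])

# A well-behaved crawler asks before fetching -- and is told "no"
print(rp.can_fetch("*", "https://example.com/16bfff59f7e8343a2643bdc2ee76b2dc/"))  # False

# The rest of the site remains crawlable
print(rp.can_fetch("*", "https://example.com/"))  # True
```

Nothing in this mechanism stops a human from typing the "hidden" URL directly, which is the whole point of the challenge.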