Robots.txt Generator
Build a valid robots.txt file visually. Add rules, set crawl delay, include your sitemap — then copy or download.
Tips
- Use * as the user-agent to target all bots
- An empty Disallow allows the bot to crawl everything
- Disallow: / blocks the entire site
- Test your file in Google Search Console after uploading
What is a robots.txt File?
A robots.txt file is a plain text file placed at the root of your website (e.g. https://example.com/robots.txt) that instructs search engine crawlers which pages they should and shouldn't crawl. It is part of the Robots Exclusion Protocol, a standard followed by all major search engines including Google, Bing, and Yahoo.
While robots.txt does not guarantee a page won't appear in search results, it is essential for preventing crawlers from wasting crawl budget on low-value pages like admin panels, staging directories, or duplicate content.
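For instance, a minimal robots.txt that keeps all bots out of an admin area while allowing everything else might look like this (the paths and sitemap URL are illustrative):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```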
How to Use This Generator
1. Optionally enter your sitemap URL. This will add a Sitemap: directive to the output.
2. Click Add Rule to create a new rule block. Set the user-agent (use * for all bots).
3. Add Disallow and Allow paths as needed. Add multiple rules for different bots.
4. Copy the output or click Download to get the robots.txt file, then upload it to your site root.
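Before uploading, you can sanity-check the downloaded file with Python's standard urllib.robotparser module. The rules below are an illustrative sample, not output from this generator:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content -- substitute your generated file.
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check how a bot matching "*" would treat specific URLs.
print(rp.can_fetch("*", "https://example.com/admin/login"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))    # True
```

This is a quick local check; crawlers may interpret edge cases slightly differently, so also test in Google Search Console after uploading.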
Use Cases
Prevent crawlers from accessing /admin/, /staging/, and /dev/ paths to conserve crawl budget for important pages.
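A rule block covering those paths (directory names here are examples) would be:

```
User-agent: *
Disallow: /admin/
Disallow: /staging/
Disallow: /dev/
```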
Block /?s= and faceted navigation URLs like /shop/?color= to prevent duplicate content issues.
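Disallow paths match by prefix, so a query-string prefix in the path blocks every URL beginning with it. Major crawlers also honor the * wildcard in paths (standardized in RFC 9309). For example:

```
User-agent: *
Disallow: /?s=
Disallow: /shop/?color=
```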
Create separate rules for Googlebot, Bingbot, or AI scrapers. For example, you can allow Googlebot but block GPTBot from crawling your content.
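That per-bot setup could be expressed as two rule blocks, one per user-agent:

```
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /
```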
Add your sitemap URL so search engines can discover all your pages faster. This is recommended for every website with more than a handful of pages.
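The directive is a single absolute URL on its own line, and it applies to all crawlers regardless of which rule block it appears near (example.com is a placeholder):

```
Sitemap: https://example.com/sitemap.xml
```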