A robots.txt file is a text file that tells search engine crawlers which pages or files the crawler can or can't request from your site. It is used mainly to manage crawler traffic and avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google, because a blocked URL can still be indexed if other pages link to it. To keep a page out of search results, use a noindex directive or password protection instead.
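As a sketch of what such a file looks like, here is a minimal robots.txt (the directory paths are hypothetical; `User-agent: *` applies the rules to all crawlers):

```
User-agent: *
Disallow: /private/
Allow: /private/help/
```

The more specific `Allow` rule carves an exception out of the broader `Disallow`, so crawlers may fetch pages under /private/help/ while the rest of /private/ stays off-limits.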
Always ensure your robots.txt file is located in the root directory of your host (e.g., example.com/robots.txt); crawlers do not look for it in subdirectories. Use relative paths in Disallow and Allow directives, and absolute URLs in Sitemap directives. Remember that robots.txt is a public file: anyone can read it, so do not use it to hide sensitive information.
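To check how crawlers would interpret your rules before deploying them, Python's standard library includes a robots.txt parser. This sketch parses a hypothetical rule set locally (no network access) and tests whether given URLs are fetchable:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration
rules = """\
User-agent: *
Disallow: /private/
Allow: /public/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)  # parse rules from a list of lines instead of fetching over HTTP

# can_fetch(useragent, url) applies the matching rules for that user agent
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/data.html"))  # False
```

In production you would call `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` to fetch and parse the live file.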