• What is a Robots.txt file and how to implement it on a website

Robots.txt is a plain text file used in SEO to tell search engine spiders (crawlers) which website pages or folders they should not crawl or index. Googlebot and other well-behaved bots follow the instructions in the robots.txt file. We can block the whole website, a particular folder, or a single landing page. It is simply a set of instructions for search engine bots, for example https://www.abc.com/robots.txt. The robots.txt file must always be uploaded to the root folder of the website. We can then submit it in Google Webmaster Tools (Google Search Console).
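As a quick illustration, here is a minimal sample robots.txt file for a hypothetical site (the /admin/ folder and the sitemap URL are placeholder values, not taken from any real site):

User-agent: *
Disallow: /admin/
Sitemap: https://www.abc.com/sitemap.xml

Upload a file like this to the root of the domain so it is reachable at https://www.abc.com/robots.txt, then check it in Google Search Console.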

Robots.txt syntax

User-agent: This directive names the crawler the rules apply to (Googlebot, msnbot, Bingbot, etc.); an asterisk (*) targets all bots.
Disallow: This directive blocks bots from crawling the specified page or folder.
Allow: This directive explicitly permits crawling of a page or folder, even one inside a disallowed folder (see the last example below).
Crawl-delay: This directive asks bots to wait between requests; Googlebot ignores it, but some other crawlers honour it (see the sample after this list).
Sitemap: We add the sitemap URL to the robots.txt file so bots can easily find the sitemap and crawl all updated pages (see the sample after this list).
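The Crawl-delay and Sitemap directives are not shown in the blocking examples below, so here is a short sample; the 10-second delay and the sitemap URL are only illustrative values, and note that Googlebot does not obey Crawl-delay while Bingbot does:

User-agent: Bingbot
Crawl-delay: 10

Sitemap: https://www.abc.com/sitemap.xml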

Block the Whole Website:-
User-agent: *
Disallow: /

Allow Crawlers Full Access to the Website:-
User-agent: *
Disallow:

Block a Specific Folder:-
User-agent: *
Disallow: /example-subfolder/

Block a Specific Page:-
User-agent: *
Disallow: /example-subfolder/blocked-page.html

Block a Specific Crawler:-
User-agent: Googlebot
Disallow: /abc-subfolder/
Disallow: /abc-subfolder/index-page.html
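
The Allow directive from the syntax list can be combined with Disallow to open up a single page inside an otherwise blocked folder. The folder and page names below are only placeholders:

Allow a Specific Page inside a Blocked Folder:-
User-agent: *
Disallow: /example-subfolder/
Allow: /example-subfolder/public-page.html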