Here is what I found from Google:
Method 1:
Put simply, the googleon/googleoff tags tell the Google Search Appliance crawler when to start and stop indexing various parts of a web document. Consider the following example:
<p>This is normal (X)HTML content that will be indexed by Google.</p>
<!--googleoff: index-->
<p>This (X)HTML content will NOT be indexed by Google.</p>
<!--googleon: index-->
In this example, we see how the googleon/googleoff tags will prevent the second paragraph from being indexed by Google. Notice the “index” parameter, which may be set to any of the following:
- index — content surrounded by “googleoff: index” will not be indexed by Google
- anchor — anchor text for any links within a “googleoff: anchor” area will not be associated with the target page
- snippet — content surrounded by “googleoff: snippet” will not be used to create snippets for search results
- all — content surrounded by “googleoff: all” is treated with all three attributes: index, anchor, and snippet
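For instance, the anchor and snippet variants look like this (a quick sketch; the link target and footer text are made up for illustration):
<!--googleoff: anchor-->
<a href="target.html">This anchor text will not be associated with target.html</a>
<!--googleon: anchor-->
<!--googleoff: snippet-->
<p>This boilerplate text will not appear in search result snippets.</p>
<!--googleon: snippet-->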
Read more > https://perishablepress.com/tell-google-to-not-index-certain-parts-of-your-page/
Method 2: Using CSS
HTML:
<span class="sig">signature goes here</span>
CSS:
.sig {
  display: none;
}
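One caveat: Google can read CSS, so content hidden with display:none may still end up in the index. A variant discussed in the same thread is to inject the signature with JavaScript after the page loads, so it never appears in the static HTML (a minimal sketch; the element id and signature text are illustrative):
HTML:
<span id="sig"></span>
JavaScript:
<script>
// Fill in the signature client-side; crawlers that don't run
// JavaScript only see the empty span in the page source.
document.getElementById("sig").textContent = "signature goes here";
</script>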
Read more: http://webmasters.stackexchange.com/questions/16390/preventing-robots-from-crawling-specific-part-of-a-page
Method 3: Iframes
Content placed inside iframes, as well as content loaded via JavaScript, is usually not indexed:
<iframe src="sidebar.asp" width="100%" height="300"></iframe>
Here are the rules to add to the robots.txt file to block the spider from crawling the iframe source:
User-agent: *
Disallow: /sidebar.asp
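An alternative (a standard robots meta tag, not taken from the linked answers) is to mark the iframed page itself as noindex. Note that a page blocked by robots.txt is never fetched, so the crawler would not see this tag; use one approach or the other:
<!-- inside sidebar.asp -->
<meta name="robots" content="noindex">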