Although my site has approx 300 questions, the URLs indexed by search engines number more than 1000. This is because tag pages, category pages, question pages, activity pages etc. are all being indexed. So any Q2A user would want to block those pages from being indexed by search engines. Many similar questions suggest using "disallow: /example-page/" in robots.txt.
But be clear, guys: robots.txt tells search engines whether or not to crawl a page, while meta robots tags tell them whether or not to index it.
Adding a disallow rule for a page in robots.txt will not get that page de-indexed by search engines. You should add a meta robots tag instead: <meta name="robots" content="noindex,nofollow">.
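For example, a tag or activity page that you want kept out of the index would need this inside its <head> (which page types to target is up to you):

<meta name="robots" content="noindex,nofollow">

Some people prefer "noindex,follow" so that links on the page are still followed; either way, it is the noindex part that keeps the URL out of the results.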
I have seen many Q2A websites, but none of them perform very well in search engines, because they let every page get indexed and search engines hate that! I saw a lot of duplicate title and description issues in Google Webmaster Tools. If this were resolved (by adding meta tags via the admin panel or via custom function code), then believe me, more users would come to Q2A sites, and more users to the Question2Answer script as well.
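To make the "custom function" idea concrete, here is a rough sketch of how a Q2A layer plugin could append that tag on listing-type pages. This is only an illustration, not tested code: the file name, the registration call shown in the comment and the list of template names are my assumptions, so adjust them to the pages you actually want de-indexed.

<?php
/*
    Sketch of a layer file (e.g. qa-noindex-layer.php), registered from a plugin's
    qa-plugin.php with qa_register_plugin_layer('qa-noindex-layer.php', 'Noindex layer').
*/
class qa_html_theme_layer extends qa_html_theme_base
{
    function head_metas()
    {
        parent::head_metas(); // keep whatever Q2A already outputs in <head>

        // Template names here are assumptions; the intent is to cover the tag,
        // category, activity and user listing pages that cause the duplicates.
        $noindex = array('tag', 'tags', 'categories', 'activity', 'users');

        if (in_array($this->template, $noindex)) {
            $this->output('<meta name="robots" content="noindex,nofollow"/>');
        }
    }
}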
I hope the developers will take this as a serious issue.
Below is a glimpse of an SEOMOZ article regarding robots.txt and meta tags:
Blocking pages
There are a few ways to block search engines from accessing a given domain:
Block with Robots.txt
This tells the engines not to crawl the given URL, but tells them that they may keep the page in the index and display it in results.
Block with Meta NoIndex
This tells the engines they can visit the page, but they are not allowed to display the URL in results. (This is the recommended method.)
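For comparison, the robots.txt method from the excerpt looks like this (the path is only a placeholder):

User-agent: *
Disallow: /activity/

And note that once a URL is disallowed in robots.txt, crawlers never fetch the page at all, so they cannot even see a meta noindex tag placed on it. That is one more reason the meta tag is the right tool for the de-indexing problem described above.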