- Page with redirect
- Alternative page with proper canonical tag
- Crawled - currently not indexed
- Blocked by robots.txt
- Excluded by ‘noindex’ tag (see the sketch after this list)
- Google chose different canonical than user
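The ‘noindex’ exclusion is one you can verify yourself: fetch the page and look at both the X-Robots-Tag response header and any robots meta tags in the HTML. A minimal sketch in Python using only the standard library; the URL is a placeholder, swap in an excluded URL from your own report:

```python
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

url = "https://example.com/some-page"  # placeholder URL
with urllib.request.urlopen(url) as resp:
    # Google also honors a noindex sent via the X-Robots-Tag HTTP header
    header = resp.headers.get("X-Robots-Tag", "")
    body = resp.read().decode("utf-8", errors="replace")

parser = RobotsMetaParser()
parser.feed(body)

print("X-Robots-Tag header:", header or "(none)")
print("robots meta tags:", parser.directives or "(none)")
```

If either output contains "noindex", the exclusion is working as intended.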
I would examine each category and see which pages are actually excluded. For example, "Blocked by robots.txt": open the page, then check your robots.txt. Do you block that page or not?
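To answer that last question without eyeballing the file, you can test a reported URL against your live robots.txt with Python's standard library. A minimal sketch with placeholder URLs; note that Python's parser follows the original robots.txt draft and may not match Google's wildcard handling exactly, so treat Google's own robots.txt tester in Search Console as the authoritative check:

```python
import urllib.robotparser

# Point the parser at your live robots.txt (placeholder domain)
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Test the exact URL that Search Console reports as blocked
page = "https://example.com/some/excluded-page"
print(rp.can_fetch("Googlebot", page))  # False means robots.txt blocks it
```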
Google excludes about 3,500 pages from its index on my site, and for very good reasons: the ones you mentioned above. My website has fewer than 400 actual information pages. I would suffer massively from duplicate content issues if all those pages were in Google's index.