Here’s my recent experience with the Google Supplemental Index: A relatively new site I had launched was receiving some traffic from Google. All of a sudden, that traffic stopped. I ran a site: command and found that my site was still in the index, but there were now two copies of the homepage: http://www.mysite.com?q=a, which was in the main index, and http://www.mysite.com, which was in the supplemental index.
Clearly, my URL was placed in the supplemental index because http://www.mysite.com?q=a and http://www.mysite.com are quite similar (not identical, but apparently close enough for Google to treat them as duplicates), and Google happened to keep the version with the query string in the main index. Of course, all the backlinks point to http://www.mysite.com, so http://www.mysite.com?q=a ranked poorly (it had been in the top 5 and dropped into the 200s), which is why the site was no longer receiving traffic from Google.
Once I had identified the problem, the solution was straightforward: modify robots.txt so that Googlebot could no longer crawl http://www.mysite.com?q=a. This had the desired effect: 2-3 days later, http://www.mysite.com was back in the main index with its original ranking, and the site started receiving traffic from Google again.
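For reference, a robots.txt change along these lines might look like the following. This is a sketch, not my exact file; it assumes the only offending URL is the homepage with the q parameter, and it relies on Googlebot's support for the * wildcard in Disallow patterns:

```
# Hypothetical robots.txt sketch: block Googlebot from crawling
# any URL variant carrying the ?q= query string, so only the
# clean URL (http://www.mysite.com/) remains crawlable.
User-agent: Googlebot
Disallow: /*?q=
```

Note that this only prevents crawling of the query-string variant; the clean URL stays fully accessible, which is what lets it reclaim its place in the main index.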