MicrostockGroup
Microstock Photography Forum - General => Symbiostock => Symbiostock - Technical Support => Topic started by: cathyslife on September 02, 2013, 21:19
-
Last night I got an email from Google saying it couldn't crawl my site because it was missing the robots.txt file. I used Fetch as Google, and it found the site just fine, so I submitted it to the index. Tonight I got an email saying I have a large number of URLs that can't be accessed: 238, to be exact. Any idea what might be causing this? Or how I might fix it?
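For reference, here's the kind of minimal robots.txt Google was complaining about. This is just a sketch: the example.com sitemap line is a placeholder, not my actual domain, and you'd upload the file to your site root.

```shell
# Create a minimal robots.txt that allows all crawlers and points to a sitemap.
# NOTE: example.com is a placeholder -- substitute your own domain.
cat > robots.txt <<'EOF'
User-agent: *
Disallow:
Sitemap: https://example.com/sitemap.xml
EOF

# Show the result
cat robots.txt
```

An empty `Disallow:` means nothing is blocked, so Googlebot can crawl everything; the file just has to exist and return a 200 status.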
-
I would be interested to know the answer to this as well, as I have had similar emails, but when I tried to access the folders, they were usually there.
-
Since it happened the day we were troubleshooting, my best guess would be that the site was down when the Googlebot arrived. Then, because of the unusual activity, Bluehost throttled my account, which may have caused pages to load really slowly until the bot gave up. These might not all have happened at the same instant, but over the course of 1-2 days. But I'd like to hear ideas from someone with more knowledge about it than I have.
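If anyone wants to check whether Googlebot was actually getting server errors during the outage, something like this works against a combined-format access log. The sample log lines below are made up for illustration, and the real log location and format on Bluehost may differ:

```shell
# Two stand-in log lines in Apache combined format: one 503 (server error,
# e.g. from throttling) and one 200 (success), both from Googlebot.
cat > access.log <<'EOF'
66.249.66.1 - - [02/Sep/2013:21:19:00 +0000] "GET /robots.txt HTTP/1.1" 503 0 "-" "Googlebot/2.1"
66.249.66.1 - - [02/Sep/2013:21:20:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Googlebot/2.1"
EOF

# Count Googlebot requests that received a 5xx response.
# In combined format, field 9 is the HTTP status code.
awk '/Googlebot/ && $9 ~ /^5/ {n++} END {print n+0}' access.log
```

A nonzero count during the window Google reported would support the downtime/throttling theory.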
-
Anyone?
-
Didn't you answer the question already?
Crawl errors usually happen when a site has downtime or when links are broken.
-
Didn't you answer the question already?
Crawl errors usually happen when a site has downtime or when links are broken.
Thanks for confirming. I answered with my guess, which isn't the same as knowing. :)