Message # 1 | 01:53:41
https://thassos.ucoz.com -> https://thassos.one (using method 3 + 301 redirect from uCoz CP)

Will Google see the 301 permanent redirect and transfer each page's PageRank to the new domain?
Do you have any past experience with uCoz regarding this matter?

You are blocking https://thassos.ucoz.com with robots.txt!

Because some people say:

Blocking URLs in robots.txt that are also being redirected means the redirects will never be found.

Don't use robots to block duplicate content!

Often webpages are accessible by a number of different URLs (this is often true in content management systems like Drupal). The temptation is to block the unwanted URLs so that they are not crawled by Google.

In fact, the best way to handle multiple URLs is to use a 301 redirect and/or the canonical META tag.

Don't combine robots and 301s

Most commonly, people realise that Google is crawling webpages it shouldn't, so they block those pages using robots.txt and set up a 301 redirect to the correct pages, hoping to kill two birds with one stone (i.e. remove unwanted URLs from the index and pass PageRank juice to the correct pages). However, Google will not follow a 301 redirect on a page blocked with robots.txt.

This leads to a situation where the blocked pages hang around indefinitely because Google isn't able to follow the 301 redirect.
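This behaviour can be demonstrated with Python's standard-library robots.txt parser. The robots.txt contents and the page path below are assumptions for illustration, not the actual files served by uCoz:

```python
from urllib.robotparser import RobotFileParser

# Assumed blanket-blocking robots.txt, like the one described above.
BLOCKING_ROBOTS = """\
User-agent: *
Disallow: /
"""

blocked = RobotFileParser()
blocked.parse(BLOCKING_ROBOTS.splitlines())

# A robots-honoring crawler never fetches a disallowed URL, so it never
# receives the 301 response and the redirect stays invisible to it.
allowed = blocked.can_fetch("Googlebot", "https://thassos.ucoz.com/some-page")
print(allowed)  # False

# With an empty Disallow rule, the same URL may be fetched, so the
# crawler can see the 301 and follow it to the new domain.
OPEN_ROBOTS = """\
User-agent: *
Disallow:
"""
open_rules = RobotFileParser()
open_rules.parse(OPEN_ROBOTS.splitlines())
allowed_after_fix = open_rules.can_fetch("Googlebot", "https://thassos.ucoz.com/some-page")
print(allowed_after_fix)  # True
```

In other words, the redirect only helps if the crawler is first allowed to request the old URL at all.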

I don't want my high-ranking site to drop back to PageRank ZERO, be pushed to page 20+, and lose all my clients.
Because this is what is happening right now! I have lost all my positions, and Google is complaining that it can't reach the sub-domain.

So please, uCoz, unblock the robots.txt on thassos.ucoz.com and just apply the permanent 301 redirect!
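At the HTTP level, the requested setup simply means the old subdomain answers every request with a permanent redirect to the same path on the new domain. The path /hotels below is a hypothetical example:

```
GET /hotels HTTP/1.1
Host: thassos.ucoz.com

HTTP/1.1 301 Moved Permanently
Location: https://thassos.one/hotels
```

Googlebot can only receive this 301 response if robots.txt on thassos.ucoz.com does not forbid the request in the first place.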

If I am wrong, please correct me.

Added (27 Jun 2017, 01:53:41)
From Google: https://support.google.com/webmasters/answer/6033080

Update your robots.txt files:

On the source site, remove all robots.txt directives. This allows Googlebot to discover all redirects to the new site and update our index.
On the destination site, make sure the robots.txt file allows all crawling. This includes crawling of images, CSS, JavaScript, and other page assets, apart from the URLs you are certain you do not want crawled.
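Following that guidance, both robots.txt files would simply stop disallowing anything. A minimal sketch (the two files are shown together here for brevity; each would be served from its own domain):

```
# robots.txt on the source, https://thassos.ucoz.com/robots.txt -
# no Disallow rules, so Googlebot can reach every URL and see its 301:
User-agent: *
Disallow:

# robots.txt on the destination, https://thassos.one/robots.txt -
# likewise allow all crawling, including images, CSS and JavaScript:
User-agent: *
Disallow:
```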

Why doesn't uCoz know this, and why does it break sites?

Do I have to enable this?
Allow default subdomain indexing (by search engines):
This feature allows search engines to index the default subdomain. If a 301 redirect is set up, the default subdomain and the connected domain will be merged together. We advise you to enable this feature.

Message edited by Thassos - Tuesday, 27 Jun 2017, 02:09:53