How to fix /feed/ duplicate pages in Google Search Console

Google Search Console can report ‘Duplicate without user-selected canonical’ issues caused by indexing, where URLs ending in /feed/ appear after your page addresses. This happens when RSS feeds are included in the index, and if left unaddressed it can negatively impact your SEO.

Some people think this is not a big problem: as long as the original page has a canonical tag, the original is what gets exposed to search engines even if the feed is indexed. I used to think so too, but after recently reading some articles from overseas, I realized it is not good in terms of SEO.

RSS feeds get flagged as duplicate pages because the feed URL, in the form /feed/ or /feed.xml, serves nearly the same content as the original page. When Googlebot crawls these feeds, it therefore treats them as duplicate content.

Google may get confused about which page to prioritize when there are multiple pages with the same or similar content, which can result in lower rankings for those pages.

Therefore, every page should declare its original version with a canonical tag, and when duplicate pages do occur, the non-original pages should be blocked from being indexed.

To prevent the duplicate pages from occurring, I considered blocking addresses containing the /feed/ path in the robots.txt file. However, I had registered my RSS feed in Google Search Console, so I wondered how to block the path without breaking that.

Looking through various communities, some said that RSS is not important and the feeds should simply be blocked, while others said the duplicate page error itself is not a big problem.

Synthesizing those opinions, the general consensus was this: if duplicate pages occur because of RSS, block them through the robots.txt file; and if they are already exposed in search results, request removal through Google Search Console or redirect them to the original page.

So, based on this, let’s take a look at how we can solve the /feed/ duplicate page problem.

Apply a canonical tag


Canonical tags are used to tell search engines like Google that a particular page is the primary content. Therefore, when writing articles on your website, it is important to add canonical tags to prevent duplicate pages from being recognized as the original.

If you are using an SEO plugin in WordPress, the canonical tag is applied automatically. However, if you are not using an SEO plugin, or your site is not built on WordPress, you can add the tag below to let search engines know that your content is the original.

<link rel="canonical" href="https://yourwebsite.com/original-page-url" />

Adding this tag inside the page's <head> in HTML mode tells search engines that this page is the original. Then, if another page with the same content is indexed, it is treated as a duplicate and excluded from search results.
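
For reference, here is a minimal sketch of where the tag sits in a typical page; the title and URL are placeholders for your own post:

<head>
  <meta charset="utf-8">
  <title>Original Post Title</title>
  <!-- Tells search engines this URL is the primary version of the content -->
  <link rel="canonical" href="https://yourwebsite.com/original-page-url" />
</head>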

Block /feed/ in robots.txt

User-agent: *
Disallow: /feed/

Even if you apply a canonical tag, you may still run into the /feed/ duplicate page problem. The most reliable fix is to block those pages in your robots.txt file by adding the rules above.

Once you add those rules, however, Google Search Console will no longer be able to read your RSS feed. If you want to keep your RSS feed while still blocking duplicates, add the rules below instead.

Allow: /feed/atom/
Disallow: /*/feed/

With these rules, pages at /feed/ addresses are blocked while the /feed/atom/ address remains allowed. So if you submit /feed/atom/ when registering your RSS feed, the feed keeps working normally.

The important point here is that the Allow line must come above the Disallow lines, so that the feed you register in Google Search Console is not blocked.
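
Putting it all together, the complete robots.txt would look something like this:

User-agent: *
# Keep the Atom feed crawlable for RSS submission
Allow: /feed/atom/
# Block the main feed and the per-post/category feeds
Disallow: /feed/
Disallow: /*/feed/

Googlebot itself picks the most specific matching rule, so the Allow wins either way, but keeping it on top also satisfies crawlers that process rules in order.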


A convenient way to check whether the block has taken effect is to use the robots.txt Tester in Bing Webmaster Tools, under ‘Tools and Advanced Features’, rather than Google Search Console. This tool shows you the results in real time.
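
If you would rather check from your own machine, a short script using Python's built-in urllib.robotparser can do it. Note that this parser follows the original robots.txt spec and ignores Google's * wildcards, so it is only meaningful for the plain prefix rules; yourwebsite.com is a placeholder:

from urllib.robotparser import RobotFileParser

# Placeholder domain; point this at your own robots.txt.
robots = RobotFileParser("https://yourwebsite.com/robots.txt")
robots.read()  # fetches and parses the live file

# See how the rules apply to the feed URLs.
for url in ("https://yourwebsite.com/feed/",
            "https://yourwebsite.com/feed/atom/"):
    verdict = "allowed" if robots.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)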

Re-save your permalinks


If you are experiencing duplicate pages due to URL parameters added by plugins, themes, or other settings, there are several fixes. One of them is simply to go to Settings > Permalinks in your WordPress admin and click the Save Changes button. This works for the following reasons:

  1. Rewrite rules update: WordPress uses rewrite rules internally to create user-friendly addresses. When you change or save your permalink settings, these rules are regenerated and updated.
  2. .htaccess update: When you save your permalinks, WordPress automatically updates your .htaccess file, or creates a new one if necessary.
  3. URL parameter troubleshooting: Problems with URL parameters are most often caused by incorrect rewrite rules, and re-saving your permalinks re-applies the rules, which resolves most parameter-related issues.
  4. Cache invalidation: Your server's cache or a plugin cache can interfere with the rules. Re-saving your permalinks invalidates these caches and applies the new rules.

Clicking the Save Changes button regenerates the rewrite rules, updates the .htaccess file, and normalizes URL handling, including URL parameters. This process can resolve issues caused by unwanted parameters.
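
As a side note, if WP-CLI is available on your server, flushing the rewrite rules from the command line is equivalent to re-saving in the admin; the --hard flag also regenerates the .htaccess file:

wp rewrite flush --hard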

If you search on Google, you will find various communities suggesting other solutions, including setting noindex on the RSS feed. However, even when I applied that method, the problem was not solved.

So if duplicate pages occur at /feed/ addresses, the most reliable fix is to block them in robots.txt. The number of error pages in my Google Search Console had been growing because of this problem, and after blocking the feeds it went down.

