Looking for a technical SEO checklist? Technical SEO can be difficult to master for those who are not technically minded, yet it is essential to help your website rank higher on Google.
Therefore, I have created a simple technical SEO checklist and guide for those who find this side of SEO difficult. I will break down some of the most important technical SEO techniques into easily digestible explanations. You will learn what the technical elements are, why they are useful, which tools to use and how to carry out some technical SEO best practices.
1. Add an XML Sitemap to help Search Engines Crawl your Website
What is an XML Sitemap?
An XML sitemap is a file listing the URLs on your website that you want search engines such as Google to crawl and index. The XML sitemap tells search engines about your website's structure.
Your sitemap can usually be found at a URL such as: http://www.example.com/sitemap_index.xml
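For reference, a minimal sitemap file looks like this (the URL and date are illustrative, not taken from a real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want crawled and indexed -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Sitemap generators and plugins produce this file for you, so you rarely need to write it by hand.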
How to Create an XML Sitemap
When creating an XML sitemap, you can use this XML sitemap generator.
Once you are on the website, enter your website's URL (see www.example.com above), fill out the optional fields and click "Start". The XML sitemap generator crawls your website and redirects you to the generated sitemap details page. Copy your sitemap URL and submit it in your Google Search Console account: log in, open the "Sitemaps" report and paste the URL under "Add a new sitemap".
If your website is built on WordPress and you have the Yoast SEO plugin installed, you can generate your sitemap with the plugin. In the back end of your website, click on the SEO menu and open the XML sitemaps settings.
It is vital that the XML sitemap functionality checkbox is ticked; the rest of the settings come down to the website owner's preference. If there are pages you do not want to include in the sitemap, click on "Excluded Posts" and enter the page or post IDs you want left out.
The pages excluded here usually have no search value. For example, the Terms and Conditions page of a website does not need to be in a sitemap, so you would add its page or post ID to the exclusion list. You can find the ID by looking at the page source of the page you want to exclude, or by opening the page in the WordPress editor and checking the number in the browser address bar (e.g. post=123).
2. How to Block a Web Page from Being Crawled by Search Engines
A "noindex" meta tag is a piece of code entered into the head section of a page's HTML that prevents the page from being indexed and displayed in search results. You can also block a page with a robots.txt file. However, robots.txt only tells Google not to crawl the URL; the page can still be indexed and appear in search results, for example if other sites link to it. The best practice is therefore to apply a robots meta tag to any page you do not want listed: Google will crawl the page, see the tag and drop it from search results.
Use the following code to add a Robots Meta Tag:
<meta name="robots" content="noindex, nofollow" />
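By contrast, a robots.txt rule only blocks crawling, not indexing. A minimal example (the directory path is illustrative) that stops compliant bots from crawling everything under /private/:

```txt
User-agent: *
Disallow: /private/
```

Remember that a page blocked this way can still end up in search results, which is why the meta tag is the safer choice for keeping a page out of Google.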
If your website is built on WordPress, log into the back end of your site, click "Pages" and then select a page. Scroll down to the Yoast SEO section, open the "Advanced" tab and set "Allow search engines to show this Page in search results?" to "No". You can also set "Should search engines follow links on this Page?" to "No" to add nofollow.
3. Apply a Canonical Tag to Avoid Duplicate Content
What Is a Canonical Tag?
The canonical tag is an HTML tag that points Google to the preferred, original version of a page when copies of it exist. Canonical tags are used to help Google distinguish the original content from its duplicate pages.
Why Use a Canonical Tag?
When a website has duplicate content (one or more pages containing the same content as an existing page), search engine crawlers cannot tell which version to index. This is a significant issue because ranking signals such as links get split across the duplicates, diluting the page's authority, and Google may simply pick a version to rank that is not the one you want.
That is why it's important to apply a canonical tag to duplicate pages, pointing at the version you want Google to treat as the original. Google will then crawl and index the canonical version and consolidate the signals from its duplicates.
How to Create a Canonical Tag:
If your website is built on WordPress and you have the Yoast SEO plugin installed, you can quickly and easily add a canonical tag without touching any code. Log into the back end of your website, click "Pages" and select a page. Scroll down to the Yoast SEO section, open the "Advanced" tab and paste the URL of the preferred page into the "Canonical URL" field.
Note: If the page does not have duplicate content, you can leave the Canonical URL field blank.
However, if your website is not built on WordPress, you will have to work with some code. The rel="canonical" tag goes into the head section of the web page:
<link rel="canonical" href="https://blog.example.com/skirts/blue-skirts-are-awesome" />
This indicates the preferred URL to use to access the blue skirts post, so that the search results will be more likely to show users that URL structure.
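Continuing the example above, any duplicate variant of that post (say, one reached via a tracking parameter, which is an assumed scenario here) would carry the same tag in its head, pointing back at the preferred URL:

```html
<!-- head of .../blue-skirts-are-awesome?utm_source=newsletter (hypothetical variant) -->
<link rel="canonical" href="https://blog.example.com/skirts/blue-skirts-are-awesome" />
```

Google then consolidates the signals from every variant onto the one canonical URL.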
4. How to Crawl for Website Errors:
The first question you need to ask yourself here is why do a website crawl?
It is essential to crawl your site regularly to catch SEO errors you might not be aware of and to make sure the site is crawlable. Think of it as a health check: a crawl is one of the best ways to identify issues that may be negatively impacting your website's performance.
Use an SEO Crawling Tool:
SEO crawling tools are one of the best investments you can make for managing the performance of your website. They will quickly and easily help you identify any SEO errors that could have an adverse impact on your ranking. There are many SEO crawling tools available for digital marketers to choose from. Some of the best-known crawling tools in the SEO business today include:
- Screaming Frog SEO Spider
- Semrush Site Audit
- Ahrefs Site Audit
- Sitebulb
Identify Errors That Can Negatively Affect Your SEO Performance:
Here is a list of crucial errors your SEO crawling tool should help you identify:
- 4xx Errors
- Server 5xx Errors
- Duplicate URLs
- Duplicate Page Titles
- Missing Page Titles
- Missing Meta Descriptions
- Duplicate Meta Descriptions
- Duplicate H1 tags
- Missing H1 tags
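To make the checks above concrete, here is a toy sketch in Python of how a crawler might flag these errors once it has fetched each page. The `Page` fields and the `audit` logic are illustrative assumptions for this post, not how any particular tool works internally:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Page:
    """Data a crawler would collect for one fetched URL."""
    url: str
    status: int
    title: Optional[str] = None
    meta_description: Optional[str] = None
    h1s: List[str] = field(default_factory=list)


def audit(pages: List[Page]) -> List[Tuple[str, str]]:
    """Return (url, issue) pairs for the crawl errors listed above."""
    issues = []
    titles, descriptions = {}, {}
    for p in pages:
        if 400 <= p.status < 500:
            issues.append((p.url, "4xx error"))
        elif p.status >= 500:
            issues.append((p.url, "5xx server error"))
        if not p.title:
            issues.append((p.url, "missing page title"))
        else:
            titles.setdefault(p.title, []).append(p.url)
        if not p.meta_description:
            issues.append((p.url, "missing meta description"))
        else:
            descriptions.setdefault(p.meta_description, []).append(p.url)
        if not p.h1s:
            issues.append((p.url, "missing H1"))
        elif len(p.h1s) > 1:
            issues.append((p.url, "duplicate H1 tags"))
    # A title or description shared by several URLs is a duplicate on each.
    for bucket, label in ((titles, "duplicate page title"),
                          (descriptions, "duplicate meta description")):
        for urls in bucket.values():
            if len(urls) > 1:
                issues.extend((u, label) for u in urls)
    return issues
```

Real crawling tools do the fetching and HTML parsing for you; the point here is simply that each error in the list is a mechanical check over per-page data.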
To wrap up this blog post, I have identified:
- The importance of having an XML Sitemap to tell search engines about your website structure
- How to block a web page from being crawled by search engines by using a “no index” meta tag
- The importance of applying a canonical tag to avoid duplicate content
- How to crawl a website for errors.
Technical SEO can be difficult to comprehend, but I hope that from reading this blog post you have a much better understanding of how it all works. It is important to build all of the technical SEO elements I have listed above into your SEO strategy. Stay tuned for my next blog post, where I will discuss website errors in more detail after running a website crawl test. Contact us today for a technical SEO audit.