SEO SCORE- RICH RESULTS

Get your website on Google: Google automatically looks for sites to add to our index; you usually don't even need to do anything except post your site on the web. However, sometimes sites get missed. Check to see if your site is on Google and learn how to make your content more visible in Google Search. If your site isn't showing up, these are the most common reasons:

  • Your site isn't linked to by other sites on the web. See if you can get your site linked to by other sites (but please don't pay them to link to you; that could be considered a violation of Google's spam policies).
  • You've just launched a new site and Google hasn't had time to crawl it yet. It can take a few weeks for Google to notice a new site, or any changes in your existing site.
  • The design of the site makes it difficult for Google to crawl its content effectively. If your site is built on some other specialized technology, rather than HTML, Google might have trouble crawling it correctly. Remember to use text, not just images or video, on your site.
  • Google received an error when trying to crawl your site. The most common reasons for this are that your site requires a login, or that it blocks Google for some reason. Make sure that you can access your site in an incognito window.
  • Google missed it: Although Google crawls billions of pages, it's inevitable that we'll miss a few sites, especially small ones. Wait a while, and try to get linked from other sites.

    If you're feeling adventurous, you can add your site to Search Console to see if there's an error that might prevent Google from understanding your site. You can also send us your most important URLs to let us know we should crawl and potentially index them.

Follow the Google Search Essentials to make sure that you're fulfilling the site guidelines for appearing on Google.

Your number one priority is ensuring that your users have the best possible experience on your site. Think about what makes your site unique, valuable, or engaging. To help you evaluate your content, ask yourself the self-assessment questions in our guide to creating content that's helpful, reliable, and people-first. To make sure that you're managing your website using Google-friendly practices, read the Search Essentials.

Your Business Profile lets you manage how your business information appears across Google, including Search and Maps. Consider claiming your Business Profile.

Most searches are now done from mobile devices; make sure that your content is optimized to load quickly and display properly on all screen sizes. You can use tools such as Lighthouse to test if your page is mobile-friendly.

Modern users expect a secure online experience. Secure your website's connection with HTTPS.

SEOs (search engine optimizers) are professionals who can help you improve your website and increase visibility on search engines. Learn more about why and how to hire an SEO.

Depending on what your content is about, there are more ways you can get that content on Google, through the different avenues Google provides for content related to a business or person.

Here are a few basic questions to ask yourself about your website when you get started. You can find additional getting started information in the SEO Starter Guide.

To see if your pages are already indexed, search for your site in Google Search with a query like "site:example.com", substituting your own domain for example.com.

Here's how these things can help your Shopify store be found more easily on Google.

1. You want good RICH RESULTS.

  • Rich results are search listings that contain additional information beyond the standard URL, page title, and description. They also include visual enhancements and/or interactive features. For example, where a standard search listing shows only the link, title, and description, a rich result can also show star ratings, review count, price, and availability information.
  • Rich results are experiences on Google surfaces, such as Search, that go beyond the standard blue link; they can include carousels, images, or other non-textual elements. Test your publicly accessible page to see which rich results can be generated by the structured data it contains. These also include the carousel: multiple photos you can look through with arrows, like a slideshow. A carousel is a list-like rich result that people can swipe through on mobile devices; it displays multiple cards from the same site (also known as a host carousel). More info here: https://developers.google.com/search/docs/appearance/structured-data/carousel
  • For example, if you have a review listed with a product, the keywords in the review can surface in Google and link to your product. Test your pages here: https://search.google.com/test/rich-results

2. If you are using Shopify, HEADING 1 should be used ONLY for the top bar title area. Do not use it in the paragraph area: a second Heading 1 will conflict with the top title header in Google search results, which can leave the page hidden or ineligible for search engines. Use HEADING 2 as the top part of your paragraph in the description box, and use Heading 3, 4, and so on for the sections underneath, like a bulleted column. The headings stack on top of each other and show up in search engines according to heading level and keywords. A sketch of this structure follows below.
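For illustration, here is a minimal sketch of what that heading structure might look like in a product page's HTML. It assumes the theme renders the product title as the only H1 and that the description box starts at H2; the text and headings are placeholders, not Shopify-specific code.

    <!-- Product title rendered by the theme: the ONLY H1 on the page -->
    <h1>Hand-Poured Soy Candle</h1>

    <!-- Product description entered in the description box -->
    <h2>Why you'll love this candle</h2>
    <p>Small-batch, clean-burning soy wax scented with essential oils.</p>

    <h3>Scent notes</h3>
    <p>Lavender, cedar, and a hint of vanilla.</p>

    <h3>Burn time and care</h3>
    <p>Approximately 40 hours. Trim the wick before each use.</p>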

 

 

SEO titles vs. headlines

 

SEO titles aren’t always the same as the headlines that appear on the page itself. Title tags must be around 60 characters or fewer to appear in full on a SERP. Headlines can be longer and more detailed. For example, you might choose “12 Photos of Beautiful Orcas” as your title tag and “12 Photos of Beautiful Orcas Taken by This Year’s Photographer of the Year” as your on-page headline. 

Title tags and headlines are written differently within a web page’s HTML markup; headlines are indicated by the tag <h1> and SEO titles have the <title> tag, hence the name. 
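As a quick illustration using the orca example above, the two live in different parts of a page's HTML (a simplified sketch):

    <head>
      <!-- SEO title: shown as the clickable headline on the SERP -->
      <title>12 Photos of Beautiful Orcas</title>
    </head>
    <body>
      <!-- On-page headline: can be longer and more detailed -->
      <h1>12 Photos of Beautiful Orcas Taken by This Year's Photographer of the Year</h1>
    </body>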

Why are title tags important for SEO?

Title tags are a way to tell search engines what your page is about. Search engines like Google and Bing are designed to help search users discover relevant, useful information online. To deliver the best match for a user’s query, search engines analyze SEO titles, meta descriptions, page copy, and other elements of on-page SEO. They display the most relevant sites on a search results page. Describing your content accurately and using relevant keywords in your SEO title will help search engines match your content with interested readers. 

Paul Shapiro, head of technical SEO and SEO product management at Shopify, suggests brands keep this tip in mind when they include title tags. “Writing title tags requires a gentle balance of including relevant keywords and crafting something clickable for the SERP,” he says. “On top of that, you want to be aware of ensuring the most important part of the title tag doesn't get truncated on the SERP.” 

In addition to keywords, search engines also take click-through rate into account when ranking web content. Pages with higher click-through rates are seen as engaging and given algorithmic priority. Title tags are the most prominent copy displayed on search results pages, so it’s important for the copy to appeal to readers. A compelling title that appeals to the reader and gets them to click on your web page can help you achieve higher click-through rates, leading to a better search rank.

What makes a good SEO title tag? 

A good SEO title tag clearly describes a web page’s content. A perfect title tag does this while also intriguing the reader and inviting clicks. Here are some elements to keep in mind while creating title tags:

1. Represent the content accurately

Make sure your title includes your primary keyword, or the main term your audience searches for when looking for the content on your page. Keep the user experience in mind. If the content of your page doesn’t match the title, they’re likely to leave the page immediately. 

2. Give keywords prime real estate

Keyword placement matters. Keeping your target keyword toward the beginning of your SEO title will help catch the user's eye. 

3. Stand out

A good title can capture the attention of a user scrolling through search page results, so make an effort to differentiate your content from others. Consider using action words or descriptive language that appeals to the reader. Evocative language can improve your title tag’s effectiveness. 

4. Write unique titles

Avoid using the same title for multiple pages on your site. Each page should have a unique title tag to describe its content.

SEO title tag FAQ

Do title tags affect SEO?

A great title tag can help boost your search ranking and increase your click-through rate. Title tags tell search engine algorithms what your content is about. A concise, accurate title tag will help your content appear for relevant searches.

Are title tags a Google ranking factor?

Yes. Title tags tell search engines like Google what your page is about. A title tag alone won’t guarantee reaching the top of Google search results for any given term, but a well-crafted title paired with informative copy can increase your click-through rate and search ranking.

Should I include my brand name in the title tag?

Including your brand’s name in web page title tags can help build brand recognition and make readers more likely to click.

How often should I update or change my title tags?

Rewrite your title tags if you make significant changes to the page content or if the title itself is no longer up to date. For example, you’d want to update the title “Best Faberge Eggs of 2023” if you update the page content in 2024. You can also monitor performance and perform a title tag refresh if a post is not performing as well as expected.

Structured data markup that Google Search supports

Google uses structured data to understand the content on the page and show that content in a richer appearance in search results, which is called a rich result. To make your site eligible to appear as one of these rich results, follow the guide to learn how to implement structured data on your site. If you're just getting started, visit Understand how structured data works.

Product (Product, Review, Offer) structured data

When you add structured data to your product pages, Google search results (including Google Images and Google Lens) can show product information in richer ways. Users can see price, availability, review ratings, shipping information, and more right in search results.
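As a rough sketch (not a complete, production-ready implementation), Product structured data is typically added as a JSON-LD script in the page's HTML. The values below are placeholders; see the structured data type definitions referenced later in this section for the full list of required and recommended properties.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Hand-Poured Soy Candle",
      "image": "https://example.com/images/soy-candle.jpg",
      "description": "Small-batch soy candle with lavender and cedar.",
      "sku": "CANDLE-001",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "127"
      },
      "offers": {
        "@type": "Offer",
        "url": "https://example.com/products/soy-candle",
        "priceCurrency": "USD",
        "price": "24.00",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>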

Here's how shopping experiences may appear in Google Search results. This list is not exhaustive—Google Search is constantly exploring new and better ways to help people find what they're looking for, and the experiences may change over time.

Result types

There are two classes of result types: product snippets and merchant listing experiences.

Product snippets in search results

Product snippets are a richer form of presentation for snippets in search results than just text. They are used for products and product reviews, and can include additional information such as ratings, review information, price, and availability.

Merchant listing experiences rely on more specific data about a product, such as its price and availability. Only pages from which a shopper can purchase a product are eligible for merchant listing experiences, not pages with links to other sites that sell the product. Google may attempt to verify merchant listing product data before showing the information in search results.

There are several different merchant listing experiences.

If you provide additional product information beyond the required properties, your content may receive additional visual enhancements, helping your content to stand out in search results. See Structured data type definitions for all required and recommended product information.

Search result enhancements are shown at the discretion of each experience, and may change over time. For this reason, it is recommended to provide as much rich product information as is available, without concern for the exact experiences that will use it. Here are some examples of how merchant listing experiences may be enhanced (a markup sketch follows the list):

  • Ratings: Enhance the appearance of your search result by providing customer reviews and ratings.
  • Pros and Cons: Identify pros and cons in your product review description so they can be highlighted in search results.
  • Shipping: Share shipping costs, especially free shipping, so shoppers understand the total cost.
  • Availability: Provide availability data to help customers know whether you currently have a product in stock.
  • Price drop: Price drops are computed by Google by observing price changes for the product over time. Price drops are not guaranteed to be shown.
  • Returns: Share return information, such as your return policy, fees involved in returns, and how many days customers have to return a product.
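To hint at how the shipping and returns enhancements map to markup, here is a sketch of two properties that can be nested inside the "offers" object from the earlier Product example. The property names follow schema.org (OfferShippingDetails, MerchantReturnPolicy) and the values are placeholders; check Google's structured data type definitions for the exact requirements before relying on them.

    "shippingDetails": {
      "@type": "OfferShippingDetails",
      "shippingRate": {
        "@type": "MonetaryAmount",
        "value": "0.00",
        "currency": "USD"
      },
      "shippingDestination": {
        "@type": "DefinedRegion",
        "addressCountry": "US"
      }
    },
    "hasMerchantReturnPolicy": {
      "@type": "MerchantReturnPolicy",
      "applicableCountry": "US",
      "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
      "merchantReturnDays": 30,
      "returnFees": "https://schema.org/FreeReturn"
    }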

To provide rich product data to Google Search you can add Product structured data to your web pages, upload data feeds via Google Merchant Center and opt into free listings within the Merchant Center console, or both. This page focuses on the former.

Providing both structured data on web pages and a Merchant Center feed will maximize your eligibility to experiences and help Google correctly understand and verify your data. Some experiences combine data from structured data and Google Merchant Center feeds if both are available. For example, product snippets may use pricing data from your merchant feed if not present in structured data on the page. The Google Merchant Center feed documentation includes additional recommendations and requirements for feed attributes.

In addition to Google Search, learn more about eligibility to the Google Shopping tab by reading the data and eligibility requirements in Google Merchant Center.

Structured data is a standardized format for providing information about a page and classifying the page content. If you're new to structured data, you can learn more about how structured data works.

Here's an overview of how to build, test, and release structured data. For a step-by-step guide on how to add structured data to a web page, check out the structured data codelab.

  1. Add the required properties. Based on the format you're using, learn where to insert structured data on the page.
  2. Follow the guidelines.
  3. Validate your code using the Rich Results Test and fix any critical errors. Consider also fixing any non-critical issues that may be flagged in the tool, as they can help improve the quality of your structured data (however, this isn't necessary to be eligible for rich results).
  4. Deploy a few pages that include your structured data and use the URL Inspection tool to test how Google sees the page. Be sure that your page is accessible to Google and not blocked by a robots.txt file, the noindex tag, or login requirements. If the page looks okay, you can ask Google to recrawl your URLs.
  5. To keep Google informed of future changes, we recommend that you submit a sitemap. You can automate this with the Search Console Sitemap API.

MORE INFO: https://developers.google.com/search/docs/appearance/structured-data/product

In-depth guide to how Google Search works

Google Search is a fully-automated search engine that uses software known as web crawlers that explore the web regularly to find pages to add to our index. In fact, the vast majority of pages listed in our results aren't manually submitted for inclusion, but are found and added automatically when our web crawlers explore the web. This document explains the stages of how Search works in the context of your website. Having this base knowledge can help you fix crawling issues, get your pages indexed, and learn how to optimize how your site appears in Google Search.

Before we get into the details of how Search works, it's important to note that Google doesn't accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they're wrong.

Google doesn't guarantee that it will crawl, index, or serve your page, even if your page follows the Google Search Essentials.

Google Search works in three stages, and not all pages make it through each stage:

  1. Crawling: Google downloads text, images, and videos from pages it found on the internet with automated programs called crawlers.
  2. Indexing: Google analyzes the text, images, and video files on the page, and stores the information in the Google index, which is a large database.
  3. Serving search results: When a user searches on Google, Google returns information that's relevant to the user's query.

The first stage is finding out what pages exist on the web. There isn't a central registry of all web pages, so Google must constantly look for new and updated pages and add them to its list of known pages. This process is called "URL discovery". Some pages are known because Google has already visited them. Other pages are discovered when Google follows a link from a known page to a new page: for example, a hub page, such as a category page, links to a new blog post. Still other pages are discovered when you submit a list of pages (a sitemap) for Google to crawl.

Once Google discovers a page's URL, it may visit (or "crawl") the page to find out what's on it. We use a huge set of computers to crawl billions of pages on the web. The program that does the fetching is called Googlebot (also known as a crawler, robot, bot, or spider). Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site. Google's crawlers are also programmed such that they try not to crawl the site too fast to avoid overloading it. This mechanism is based on the responses of the site (for example, HTTP 500 errors mean "slow down").

However, Googlebot doesn't crawl all the pages it has discovered. Some pages may be disallowed for crawling by the site owner, and other pages may not be accessible without logging in to the site.

During the crawl, Google renders the page and runs any JavaScript it finds using a recent version of Chrome, similar to how your browser renders pages you visit. Rendering is important because websites often rely on JavaScript to bring content to the page, and without rendering Google might not see that content.

Crawling depends on whether Google's crawlers can access the site; issues such as server problems, network errors, or robots.txt rules blocking Googlebot can prevent pages from being crawled.

After a page is crawled, Google tries to understand what the page is about. This stage is called indexing and it includes processing and analyzing the textual content and key content tags and attributes, such as <title> elements and alt attributes, images, videos, and more.

During the indexing process, Google determines if a page is a duplicate of another page on the internet or canonical. The canonical is the page that may be shown in search results. To select the canonical, we first group together (also known as clustering) the pages that we found on the internet that have similar content, and then we select the one that's most representative of the group. The other pages in the group are alternate versions that may be served in different contexts, like if the user is searching from a mobile device or they're looking for a very specific page from that cluster.

***Search Engine Optimization (SEO) Starter Guide:

https://developers.google.com/search/docs/fundamentals/seo-starter-guide

****What Is Organic Search in Google Analytics? YOU WANT ORGANIC REACH** Organic search is a source of traffic in Google Analytics — one of the ways users get to your website.

 The term refers to unpaid listings on search engine results pages (SERPs), called organic search results.

https://www.webfx.com/blog/seo/what-is-organic-search-in-google-analytics/

Google also collects signals about the canonical page and its contents, which may be used in the next stage, where we serve the page in search results. Some signals include the language of the page, the country the content is local to, the usability of the page, and so on.

The collected information about the canonical page and its cluster may be stored in the Google index, a large database hosted on thousands of computers. Indexing isn't guaranteed; not every page that Google processes will be indexed.

Indexing also depends on the content of the page and its metadata, and not every page that Google crawls ends up being indexed.

When a user enters a query, our machines search the index for matching pages and return the results we believe are the highest quality and most relevant to the user's query. Relevancy is determined by hundreds of factors, which could include information such as the user's location, language, and device (desktop or phone). For example, searching for "bicycle repair shops" would show different results to a user in Paris than it would to a user in Hong Kong.

Based on the user's query, the search features that appear on the search results page also change. For example, searching for "bicycle repair shops" will likely show local results and no image results; however, searching for "modern bicycle" is more likely to show image results, but not local results. You can explore the most common UI elements of Google web search in our Visual Element gallery.

Search Console might tell you that a page is indexed, but you don't see it in search results; this can happen for a number of reasons.

YOU WANT A FAST PAGE SPEED! If your page is slow, Google may effectively hide it and see it as empty.

About PageSpeed Insights

PageSpeed Insights (PSI) reports on the user experience of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved.

PSI provides both lab and field data about a page. Lab data is useful for debugging issues, as it is collected in a controlled environment. However, it may not capture real-world bottlenecks. Field data is useful for capturing true, real-world user experience - but has a more limited set of metrics. See How To Think About Speed Tools for more information on the two types of data.

Real-user experience data in PSI is powered by the Chrome User Experience Report (CrUX) dataset. PSI reports real users' First Contentful Paint (FCP), First Input Delay (FID), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and Interaction to Next Paint (INP) experiences over the previous 28-day collection period. PSI also reports experiences for the experimental metric Time to First Byte (TTFB).

In order to show user experience data for a given page, there must be sufficient data for it to be included in the CrUX dataset. A page might not have sufficient data if it has been recently published or has too few samples from real users. When this happens, PSI will fall back to origin-level granularity, which encompasses all user experiences on all pages of the website. Sometimes the origin may also have insufficient data, in which case PSI will be unable to show any real-user experience data. more info: https://developers.google.com/speed/docs/insights/v5/about

 Website Crawling: The What, Why & How To Optimize

Not sure where to start when it comes to making sure your pages are crawled? From internal linking to instructing Googlebot, here's what to prioritize.

Crawling is essential for every website, large and small alike.

If your content is not being crawled, you have no chance to gain visibility on Google surfaces.

 

What Is Crawling In SEO

In the context of SEO, crawling is the process in which search engine bots (also known as web crawlers or spiders) systematically discover content on a website.

This may be text, images, videos, or other file types that are accessible to bots. Regardless of the format, content is exclusively found through links.

How Web Crawling Works

A web crawler works by discovering URLs and downloading the page content.

During this process, the crawler may pass the content over to the search engine index and will extract links to other web pages.

These found links will fall into different categorizations:

  • New URLs that are unknown to the search engine.
  • Known URLs that give no guidance on crawling. These will be periodically revisited to determine whether any changes have been made to the page's content and whether the search engine index needs updating.
  • Known URLs that have been updated and give clear guidance, such as a fresh lastmod timestamp in an XML sitemap. These should be recrawled and reindexed.
  • Known URLs that have not been updated and give clear guidance, such as an HTTP 304 Not Modified response (sketched after this list). These should not be recrawled or reindexed.
  • Inaccessible URLs that cannot or should not be followed, for example, those behind a login form or links blocked by a "nofollow" robots tag.
  • Disallowed URLs that search engine bots will not crawl, for example, those blocked by the robots.txt file.
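For the "not updated, clear guidance" case, the exchange looks roughly like this: the crawler revalidates a URL it already knows, and the server answers 304 with no body, so there is nothing to re-download or reindex. This is a simplified sketch with placeholder URL and date; real requests carry more headers.

    GET /blog/crawl-efficacy HTTP/1.1
    Host: example.com
    If-Modified-Since: Tue, 05 Mar 2024 10:00:00 GMT

    HTTP/1.1 304 Not Modified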

All allowed URLs will be added to a list of pages to be visited in the future, known as the crawl queue.

However, they will be given different levels of priority.

This is dependent not only upon the link categorization, but also on a host of other factors that determine the relative importance of each page in the eyes of each search engine.

Most popular search engines have their own bots that use specific algorithms to determine what they crawl and when. This means not all crawl the same.

Googlebot behaves differently from Bingbot, DuckDuckBot, Yandex Bot, or Yahoo Slurp.

Why It’s Important That Your Site Can Be Crawled

If a page on a site is not crawled, it will not be ranked in the search results, as it is highly unlikely to be indexed.

But the reasons why crawling is critical go much deeper.

Speedy crawling is essential for time-limited content.

Often, if it is not crawled and given visibility quickly, it becomes irrelevant to users.

For example, audiences will not be engaged by last week’s breaking news, an event that has passed, or a product that is now sold out.

But even if you don’t work in an industry where time to market is critical, speedy crawling is always beneficial.

When you refresh an article or release a significant on-page SEO change, the faster Googlebot crawls it, the faster you’ll benefit from the optimization – or see your mistake and be able to revert.

You can’t fail fast if Googlebot is crawling slowly.

Think of crawling as the cornerstone of SEO; your organic visibility is entirely dependent upon it being done well on your website.

Measuring Crawling: Crawl Budget Vs. Crawl Efficacy

Contrary to popular opinion, Google does not aim to crawl and index all content of all websites across the internet.

Crawling of a page is not guaranteed. In fact, most sites have a substantial portion of pages that have never been crawled by Googlebot.

If you see the exclusion “Discovered – currently not indexed” in the Google Search Console page indexing report, this issue is impacting you.

But if you do not see this exclusion, it doesn’t necessarily mean you have no crawling issues.

There is a common misconception about what metrics are meaningful when measuring crawling.

But the idea that more crawling is inherently better is completely misguided. The total number of crawls is nothing but a vanity metric.

Enticing 10 times the number of crawls per day doesn't necessarily correlate with faster (re)indexing of the content you care about. All it correlates with is putting more load on your servers, costing you more money.

The focus should never be on increasing the total amount of crawling, but rather on quality crawling that results in SEO value.

Crawl Efficacy Value

Quality crawling means reducing the time between publishing or making significant updates to an SEO-relevant page and the next visit by Googlebot. This delay is the crawl efficacy.

To determine the crawl efficacy, the recommended approach is to extract the created or updated datetime value from the database and compare it to the timestamp of the next Googlebot crawl of the URL in the server log files.

If this is not possible, you could consider calculating it using the lastmod date in the XML sitemaps and periodically querying the relevant URLs with the Search Console URL Inspection API until it returns a last crawl status.

By quantifying the time delay between publishing and crawling, you can measure the real impact of crawl optimizations with a metric that matters.

As the crawl efficacy delay decreases, new or updated SEO-relevant content is shown to your audience on Google surfaces faster.

If your site’s crawl efficacy score shows Googlebot is taking too long to visit content that matters, what can you do to optimize crawling?

Search Engine Support For Crawling

There has been a lot of talk in the last few years about how search engines and their partners are focused on improving crawling.

After all, it’s in their best interests. More efficient crawling not only gives them access to better content to power their results, but it also helps the world’s ecosystem by reducing greenhouse gases.

Most of the talk has been around two APIs that are aimed at optimizing crawling.

The idea is rather than search engine spiders deciding what to crawl, websites can push relevant URLs directly to the search engines via the API to trigger a crawl.

In theory, this not only allows you to get your latest content indexed faster, but also offers an avenue to effectively remove old URLs, which is something that is currently not well-supported by search engines.

Non-Google Support From IndexNow

The first API is IndexNow. This is supported by Bing, Yandex, and Seznam, but importantly not Google. It is also integrated into many SEO tools, CRMs & CDNs, potentially reducing the development effort needed to leverage IndexNow.

This may seem like a quick win for SEO, but be cautious.

Does a significant portion of your target audience use the search engines supported by IndexNow? If not, triggering crawls from their bots may be of limited value.

But more importantly, assess what integrating IndexNow does to server load versus the crawl efficacy improvement for those search engines. It may be that the costs are not worth the benefit.
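For reference, an IndexNow submission is a single HTTP request that pushes a list of URLs along with a site-verification key. The sketch below uses placeholder key and URLs; see indexnow.org for the exact specification and the list of participating endpoints.

    POST /indexnow HTTP/1.1
    Host: api.indexnow.org
    Content-Type: application/json; charset=utf-8

    {
      "host": "example.com",
      "key": "aaaa1111bbbb2222cccc3333dddd4444",
      "keyLocation": "https://example.com/aaaa1111bbbb2222cccc3333dddd4444.txt",
      "urlList": [
        "https://example.com/products/new-arrival",
        "https://example.com/blog/updated-guide"
      ]
    }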

Google Support From The Indexing API

The second API is the Google Indexing API. Google has repeatedly stated that the API can only be used to crawl pages with either JobPosting or BroadcastEvent markup. Many have tested this and found that submitting other URLs does trigger crawling.

By submitting non-compliant URLs to the Google Indexing API you will see a significant increase in crawling. But this is the perfect case for why “crawl budget optimization” and basing decisions on the amount of crawling is misconceived.

Because for non-compliant URLs, submission has no impact on indexing. And when you stop to think about it, this makes perfect sense.

You’re only submitting a URL. Google will crawl the page quickly to see if it has the specified structured data.

If so, then it will expedite indexing. If not, it won’t. Google will ignore it.

So, calling the API for non-compliant pages does nothing except add unnecessary load on your server and waste development resources for no gain.

Google Support Within Google Search Console

The other way in which Google supports crawling is manual submission in Google Search Console.

Most URLs that are submitted in this manner will be crawled and have their indexing status changed within an hour. But there is a quota limit of 10 URLs within 24 hours, so the obvious issue with this tactic is scale.

However, this doesn’t mean disregarding it.

You can automate the submission of URLs you see as a priority via scripting that mimics user actions to speed up crawling and indexing for those select few.

Lastly, for anyone who hopes clicking the ‘Validate fix’ button on ‘discovered currently not indexed’ exclusions will trigger crawling, in my testing to date, this has done nothing to expedite crawling.

So if search engines will not significantly help us, how can we help ourselves?

How To Achieve Efficient Site Crawling

There are five tactics that can make a difference to crawl efficacy.

1. Ensure A Fast, Healthy Server Response

A highly performant server is critical. It must be able to handle the amount of crawling Googlebot wants to do without any negative impact on server response time or erroring out.

Check that your site host status is green in Google Search Console, that 5xx errors are below 1%, and that server response times trend below 300 milliseconds.

2. Remove Valueless Content

When a significant portion of a website's content is low quality, outdated, or duplicated, it diverts crawlers from visiting new or recently updated content and contributes to index bloat.

The fastest way to start cleaning up is to check the Google Search Console pages report for the exclusion ‘Crawled – currently not indexed.’

In the provided sample, look for folder patterns or other issue signals. For the issues you find, fix them by merging similar content with a 301 redirect or deleting the content with a 404, as appropriate.

3. Instruct Googlebot What Not To Crawl

While rel=canonical links and noindex tags are effective at keeping the Google index of your website clean, they cost you in crawling.

While sometimes this is necessary, consider whether such pages need to be crawled in the first place. If not, stop Google at the crawling stage with a robots.txt disallow.

Find instances where blocking the crawler may be better than giving indexing instructions by looking in the Google Search Console coverage report for exclusions from canonicals or noindex tags.

Also, review the sample of ‘Indexed, not submitted in sitemap’ and ‘Discovered – currently not indexed’ URLs in Google Search Console. Find and block non-SEO-relevant routes such as the following (a sample robots.txt sketch follows the list):

  • Parameter pages, such as ?sort=oldest.
  • Functional pages, such as “shopping cart.”
  • Infinite spaces, such as those created by calendar pages.
  • Unimportant images, scripts, or style files.
  • API URLs.
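A sketch of what blocking routes like these might look like in robots.txt. The paths are placeholders; adapt them to your own URL structure, and be careful not to block resources your pages need in order to render.

    User-agent: *
    # Parameter-driven sorting/filtering pages
    Disallow: /*?sort=
    # Functional pages such as the shopping cart
    Disallow: /cart
    # Infinite calendar-style spaces
    Disallow: /calendar/
    # Internal API routes
    Disallow: /api/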

You should also consider how your pagination strategy is impacting crawling.

4. Instruct Googlebot On What To Crawl And When

An optimized XML sitemap is an effective tool to guide Googlebot toward SEO-relevant URLs.

Optimized means that it dynamically updates with minimal delay and includes the last modification date and time to inform search engines when the page was last significantly changed and whether it should be recrawled.
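A minimal sketch of such a sitemap entry, with the lastmod value updated whenever the page meaningfully changes (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/products/soy-candle</loc>
        <lastmod>2024-03-05T10:00:00+00:00</lastmod>
      </url>
    </urlset>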

5. Support Crawling Through Internal Links

We know crawling can only occur through links. XML sitemaps are a great place to start; external links are powerful but challenging to build in bulk at quality.

Internal links, on the other hand, are relatively easy to scale and have significant positive impacts on crawl efficacy.

Focus special attention on mobile sitewide navigation, breadcrumbs, quick filters, and related content links, ensuring none are dependent upon JavaScript (see the sketch below).
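For example, a crawlable internal link is a plain anchor with an href; a link that only exists as a JavaScript click handler may never be discovered. A simplified sketch:

    <!-- Crawlable: Googlebot can follow the href -->
    <a href="/collections/candles">Shop candles</a>

    <!-- Not reliably crawlable: no href for the crawler to follow -->
    <span onclick="window.location='/collections/candles'">Shop candles</span>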

  • Technical requirements: What Google needs from a web page to show it in Google Search.
  • The technical requirements cover the bare minimum that Google Search needs from a web page in order to show it in search results. There are actually very few technical things you need to do to a web page; most sites pass the technical requirements without even realizing it.

    • Create helpful, reliable, people-first content.
    • Use words that people would use to look for your content, and place those words in prominent locations on the page, such as the title and main heading of a page, and other descriptive locations such as alt text and link text.
    • Make your links crawlable so that Google can find other pages on your site via the links on your page.
    • Tell people about your site. Be active in communities where you can tell like-minded people about the services and products that you mention on your site.

 
