SEO title tag FAQ
Do title tags affect SEO?
A great title tag can help boost your search ranking and increase your click-through rate. Title tags tell search engine algorithms what your content is about. A concise, accurate title tag will help your content appear for relevant searches.
Are title tags a Google ranking factor?
Yes. Title tags tell search engines like Google what your page is about. A title tag alone won’t guarantee reaching the top of Google search results for any given term, but a well-crafted title paired with informative copy can increase your click-through rate and search ranking.
Should I include my brand name in the title tag?
Including your brand’s name in web page title tags can help build brand recognition and make readers more likely to click.
How often should I update or change my title tags?
Rewrite your title tags if you make significant changes to the page content or if the title itself is no longer up to date. For example, you’d want to update the title “Best Faberge Eggs of 2023” if you update the page content in 2024. You can also monitor performance and perform a title tag refresh if a post is not performing as well as expected.
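As a rough illustration of these guidelines, here is a small Python sketch that flags common title tag problems. The function name, the ~60-character threshold, and the warnings are all hypothetical; ~60 characters is a common rule of thumb for avoiding truncation in search results, not an official Google cutoff.

```python
# Hypothetical title-tag checker; the ~60-character limit is a rule of thumb
# for avoiding truncation in search results, not an official cutoff.
def check_title(title, brand=None, max_len=60):
    """Return a list of advisory warnings for a page's <title> text."""
    warnings = []
    if not title.strip():
        warnings.append("title is empty")
    if len(title) > max_len:
        warnings.append(f"title is {len(title)} chars; may be truncated after ~{max_len}")
    if brand and brand.lower() not in title.lower():
        warnings.append(f"brand '{brand}' not found in title")
    return warnings

print(check_title("Best Faberge Eggs of 2024 | Acme Antiques", brand="Acme Antiques"))  # []
```

A check like this could run in a CMS publishing hook, catching an out-of-date or missing title before the page goes live.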
Structured data markup that Google Search supports
Google uses structured data to understand the content on the page and show that content in a richer appearance in search results, which is called a rich result. To make your site eligible for appearance as one of these rich results, follow the guide to learn how to implement structured data on your site. If you're just getting started, visit Understand how structured data works.
https://developers.google.com/search/docs/appearance/structured-data/search-gallery
Product (Product, Review, Offer) structured data
When you add structured data to your product pages, Google search results (including Google Images and Google Lens) can show product information in richer ways. Users can see price, availability, review ratings, shipping information, and more right in search results.
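Product structured data is typically embedded as JSON-LD. As a sketch, the snippet below builds an illustrative blob in Python using the schema.org types named above (Product, Offer, AggregateRating); the product values are placeholders, and the exact required and recommended properties are listed in Google's structured data type definitions.

```python
import json

# Illustrative Product JSON-LD; all values are placeholders.
product_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Executive Anvil",
    "image": ["https://example.com/photos/anvil.jpg"],
    "description": "Sleeker than ACME's Classic Anvil.",
    "offers": {
        "@type": "Offer",
        "url": "https://example.com/anvil",
        "priceCurrency": "USD",
        "price": 119.99,
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.4,
        "reviewCount": 89,
    },
}

# Embed in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(product_ld, indent=2))
```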
Shopping experiences
Here's how shopping experiences may appear in Google Search results. This list is not exhaustive—Google Search is constantly exploring new and better ways to help people find what they're looking for, and the experiences may change over time. There are two classes of result types: product snippets and merchant listing experiences.
Product snippets in search results
Product snippets are a richer form of presentation for snippets in search results than just text. They are used for products and product reviews, and can include additional information such as ratings, review information, price, and availability.
Merchant listing experiences
Merchant listing experiences rely on more specific data about a product, such as its price and availability. Only pages from which a shopper can purchase a product are eligible for merchant listing experiences, not pages with links to other sites that sell the product. Google may attempt to verify merchant listing product data before showing the information in search results.
The merchant listing experiences include surfaces such as popular products, the shopping knowledge panel, Google Images, and Google Lens.
Result enhancements
If you provide additional product information beyond the required properties, your content may receive additional visual enhancements, helping your content to stand out in search results. See Structured data type definitions for all required and recommended product information.
Search result enhancements are shown at the discretion of each experience, and may change over time. For this reason, it is recommended to provide as much rich product information as available, without concern for the exact experiences that will use it. Here are some examples of how merchant listing experiences may be enhanced:
- Ratings: Enhance the appearance of your search result by providing customer reviews and ratings.
- Pros and Cons: Identify pros and cons in your product review description so they can be highlighted in search results.
- Shipping: Share shipping costs, especially free shipping, so shoppers understand the total cost.
- Availability: Provide availability data to help customers know when you currently have a product in stock.
- Price drop: Price drops are computed by Google by observing price changes for the product over time. Price drops are not guaranteed to be shown.
- Returns: Share return information, such as your return policy, fees involved in returns, and how many days customers have to return a product.
Providing product data to Google Search
To provide rich product data to Google Search, you can add Product structured data to your web pages, upload data feeds via Google Merchant Center and opt into free listings within the Merchant Center console, or both. This page focuses on the former.
Providing both structured data on web pages and a Merchant Center feed will maximize your eligibility for these experiences and help Google correctly understand and verify your data. Some experiences combine data from structured data and Google Merchant Center feeds if both are available. For example, product snippets may use pricing data from your merchant feed if it's not present in the structured data on the page. The Google Merchant Center feed documentation includes additional recommendations and requirements for feed attributes.
In addition to Google Search, learn more about eligibility for the Google Shopping tab by reading the data and eligibility requirements in Google Merchant Center.
How to add structured data
Structured data is a standardized format for providing information about a page and classifying the page content. If you're new to structured data, you can learn more about how structured data works.
Here's an overview of how to build, test, and release structured data. For a step-by-step guide on how to add structured data to a web page, check out the structured data codelab.
- Add the required properties. Based on the format you're using, learn where to insert structured data on the page.
- Follow the guidelines.
- Validate your code using the Rich Results Test and fix any critical errors. Consider also fixing any non-critical issues that may be flagged in the tool, as they can help improve the quality of your structured data (however, this isn't necessary to be eligible for rich results).
- Deploy a few pages that include your structured data and use the URL Inspection tool to test how Google sees the page. Be sure that your page is accessible to Google and not blocked by a robots.txt file, the noindex tag, or login requirements. If the page looks okay, you can ask Google to recrawl your URLs.
- To keep Google informed of future changes, we recommend that you submit a sitemap. You can automate this with the Search Console Sitemap API.
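The first step's "add the required properties" can be sketched as a pre-deploy sanity check. This is a hypothetical helper, not a Google tool; the REQUIRED set below is illustrative, and the real required properties depend on the structured data type (see Google's type definitions), so validation with the Rich Results Test is still needed.

```python
# Hypothetical pre-deploy check: confirm a JSON-LD blob carries the properties
# you intend before running it through the Rich Results Test. The REQUIRED set
# is illustrative only; real required properties vary by structured data type.
REQUIRED = {"@context", "@type", "name"}

def missing_properties(ld, required=REQUIRED):
    """Return the set of required property names absent from the JSON-LD dict."""
    return required - ld.keys()

snippet = {"@context": "https://schema.org", "@type": "Product"}
print(missing_properties(snippet))  # {'name'}
```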
MORE INFO: https://developers.google.com/search/docs/appearance/structured-data/product
In-depth guide to how Google Search works
Google Search is a fully-automated search engine that uses software known as web crawlers that explore the web regularly to find pages to add to our index. In fact, the vast majority of pages listed in our results aren't manually submitted for inclusion, but are found and added automatically when our web crawlers explore the web. This document explains the stages of how Search works in the context of your website. Having this base knowledge can help you fix crawling issues, get your pages indexed, and learn how to optimize how your site appears in Google Search.
A few notes before we get started
Before we get into the details of how Search works, it's important to note that Google doesn't accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they're wrong.
Google doesn't guarantee that it will crawl, index, or serve your page, even if your page follows the Google Search Essentials.
Introducing the three stages of Google Search
Google Search works in three stages, and not all pages make it through each stage:
- Crawling: Google downloads text, images, and videos from pages it found on the internet with automated programs called crawlers.
- Indexing: Google analyzes the text, images, and video files on the page, and stores the information in the Google index, which is a large database.
- Serving search results: When a user searches on Google, Google returns information that's relevant to the user's query.
Crawling
The first stage is finding out what pages exist on the web. There isn't a central registry of all web pages, so Google must constantly look for new and updated pages and add them to its list of known pages. This process is called "URL discovery". Some pages are known because Google has already visited them. Other pages are discovered when Google follows a link from a known page to a new page: for example, a hub page, such as a category page, links to a new blog post. Still other pages are discovered when you submit a list of pages (a sitemap) for Google to crawl.
Once Google discovers a page's URL, it may visit (or "crawl") the page to find out what's on it. We use a huge set of computers to crawl billions of pages on the web. The program that does the fetching is called Googlebot (also known as a crawler, robot, bot, or spider). Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site. Google's crawlers are also programmed such that they try not to crawl the site too fast to avoid overloading it. This mechanism is based on the responses of the site (for example, HTTP 500 errors mean "slow down").
However, Googlebot doesn't crawl all the pages it discovered. Some pages may be disallowed for crawling by the site owner, other pages may not be accessible without logging in to the site.
During the crawl, Google renders the page and runs any JavaScript it finds using a recent version of Chrome, similar to how your browser renders pages you visit. Rendering is important because websites often rely on JavaScript to bring content to the page, and without rendering Google might not see that content.
Crawling depends on whether Google's crawlers can access the site. Some common issues with Googlebot accessing sites include:
- Problems with the server handling the site
- Network issues
- robots.txt rules preventing Googlebot's access to the page
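On the robots.txt point, Python's standard library can evaluate robots.txt rules the same way a well-behaved crawler would. This sketch parses example rules from a string (the rules and URLs are made up), so no network access is needed.

```python
from urllib import robotparser

# Example robots.txt rules, parsed from a local string for illustration.
rules = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Googlebot's own group applies: /private/ is disallowed, everything else allowed.
print(rp.can_fetch("Googlebot", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post.html"))       # True
```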
Indexing
After a page is crawled, Google tries to understand what the page is about. This stage is called indexing and it includes processing and analyzing the textual content and key content tags and attributes, such as <title> elements and alt attributes, images, videos, and more.
During the indexing process, Google determines if a page is a duplicate of another page on the internet or canonical. The canonical is the page that may be shown in search results. To select the canonical, we first group together (also known as clustering) the pages that we found on the internet that have similar content, and then we select the one that's most representative of the group. The other pages in the group are alternate versions that may be served in different contexts, like if the user is searching from a mobile device or they're looking for a very specific page from that cluster.
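As a toy illustration of the clustering idea (this is not Google's actual algorithm), duplicate detection can be imagined as grouping pages whose main content normalizes to the same text, then picking one representative per group. The "shortest URL" heuristic below is a stand-in for the many signals Google actually weighs.

```python
from collections import defaultdict

# Invented example pages: two exact duplicates, one whitespace variant, one distinct page.
pages = {
    "https://example.com/shoes":        "Blue running shoes, size 10.",
    "https://example.com/shoes?ref=ad": "Blue running shoes, size 10.",
    "https://m.example.com/shoes":      "Blue running shoes,   size 10.",
    "https://example.com/hats":         "Wool winter hats.",
}

def normalize(text):
    """Collapse whitespace and case so trivially different copies compare equal."""
    return " ".join(text.lower().split())

clusters = defaultdict(list)
for url, content in pages.items():
    clusters[normalize(content)].append(url)

# Naive stand-in for "most representative": the shortest URL in each cluster.
for urls in clusters.values():
    canonical = min(urls, key=len)
    print(canonical, "<-", urls)
```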
MORE INFO: Search Engine Optimization (SEO) Starter Guide: https://developers.google.com/search/docs/fundamentals/seo-starter-guide
What is organic search in Google Analytics? Organic search is a source of traffic in Google Analytics, one of the ways users get to your website. The term refers to unpaid listings on search engine results pages (SERPs), called organic search results. (https://www.webfx.com/blog/seo/what-is-organic-search-in-google-analytics/)
Google also collects signals about the canonical page and its contents, which may be used in the next stage, where we serve the page in search results. Some signals include the language of the page, the country the content is local to, the usability of the page, and so on.
The collected information about the canonical page and its cluster may be stored in the Google index, a large database hosted on thousands of computers. Indexing isn't guaranteed; not every page that Google processes will be indexed.
Indexing also depends on the content of the page and its metadata. Some common indexing issues can include:
- The quality of the content on page is low
- Robots meta rules disallow indexing
- The design of the website might make indexing difficult
Serving search results
When a user enters a query, our machines search the index for matching pages and return the results we believe are the highest quality and most relevant to the user's query. Relevancy is determined by hundreds of factors, which could include information such as the user's location, language, and device (desktop or phone). For example, searching for "bicycle repair shops" would show different results to a user in Paris than it would to a user in Hong Kong.
Based on the user's query the search features that appear on the search results page also change. For example, searching for "bicycle repair shops" will likely show local results and no image results, however searching for "modern bicycle" is more likely to show image results, but not local results. You can explore the most common UI elements of Google web search in our Visual Element gallery.
Search Console might tell you that a page is indexed, but you don't see it in search results. This might be because:
- The content on the page is irrelevant to users' queries
- The quality of the content is low
- Robots meta rules prevent serving
Page speed also matters: slow pages provide a poor user experience and can hurt how your site performs in search.
About PageSpeed Insights
PageSpeed Insights (PSI) reports on the user experience of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved.
PSI provides both lab and field data about a page. Lab data is useful for debugging issues, as it is collected in a controlled environment. However, it may not capture real-world bottlenecks. Field data is useful for capturing true, real-world user experience - but has a more limited set of metrics. See How To Think About Speed Tools for more information on the two types of data.
Real-user experience data
Real-user experience data in PSI is powered by the Chrome User Experience Report (CrUX) dataset. PSI reports real users' First Contentful Paint (FCP), First Input Delay (FID), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and Interaction to Next Paint (INP) experiences over the previous 28-day collection period. PSI also reports experiences for the experimental metric Time to First Byte (TTFB).
In order to show user experience data for a given page, there must be sufficient data for it to be included in the CrUX dataset. A page might not have sufficient data if it has been recently published or has too few samples from real users. When this happens, PSI will fall back to origin-level granularity, which encompasses all user experiences on all pages of the website. Sometimes the origin may also have insufficient data, in which case PSI will be unable to show any real-user experience data.
MORE INFO: https://developers.google.com/speed/docs/insights/v5/about
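As a sketch, the PSI v5 API reports field data under a `loadingExperience` key, with a percentile and category per metric. The snippet below extracts those values from a response-shaped dict; the sample values are invented, and a real response would come from `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=...`.

```python
# Invented sample mimicking the shape of a PSI v5 `loadingExperience` section.
sample_response = {
    "loadingExperience": {
        "metrics": {
            "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2100, "category": "AVERAGE"},
            "CUMULATIVE_LAYOUT_SHIFT_SCORE": {"percentile": 4, "category": "FAST"},
        }
    }
}

def field_metrics(response):
    """Map each CrUX field metric to its (percentile, category) pair."""
    metrics = response.get("loadingExperience", {}).get("metrics", {})
    return {name: (m["percentile"], m["category"]) for name, m in metrics.items()}

print(field_metrics(sample_response))
```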
Website Crawling: The What, Why & How To Optimize
Not sure where to start when it comes to making sure your pages are crawled? From internal linking to instructing Googlebot, here's what to prioritize.
Crawling is essential for every website, large and small alike.
If your content is not being crawled, you have no chance to gain visibility on Google surfaces.
What Is Crawling In SEO
In the context of SEO, crawling is the process in which search engine bots (also known as web crawlers or spiders) systematically discover content on a website.
This may be text, images, videos, or other file types that are accessible to bots. Regardless of the format, content is exclusively found through links.
How Web Crawling Works
A web crawler works by discovering URLs and downloading the page content.
During this process, they may pass the content over to the search engine index and will extract links to other web pages.
These found links will fall into different categorizations:
- New URLs that are unknown to the search engine.
- Known URLs that give no guidance on crawling will be periodically revisited to determine whether any changes have been made to the page’s content, and thus the search engine index needs updating.
- Known URLs that have been updated and give clear guidance that they should be recrawled and reindexed, such as via an XML sitemap lastmod timestamp.
- Known URLs that have not been updated and give clear guidance that they should not be recrawled or reindexed, such as an HTTP 304 Not Modified response header.
- Inaccessible URLs that can not or should not be followed, for example, those behind a log-in form or links blocked by a “nofollow” robots tag.
- Disallowed URLs that search engine bots will not crawl, for example, those blocked by the robots.txt file.
All allowed URLs will be added to a list of pages to be visited in the future, known as the crawl queue.
However, they will be given different levels of priority.
This is dependent not only upon the link categorization but a host of other factors that determine the relative importance of each page in the eyes of each search engine.
Most popular search engines have their own bots that use specific algorithms to determine what they crawl and when. This means not all crawl the same.
Googlebot behaves differently from Bingbot, DuckDuckBot, Yandex Bot, or Yahoo Slurp.
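The crawl queue with priority levels described above can be sketched with a heap. This is a toy, not any search engine's real algorithm: the numeric priorities and URLs are invented, and real crawlers weigh far more signals.

```python
import heapq

# Toy crawl queue: lower number = higher priority.
crawl_queue = []

def enqueue(url, priority):
    heapq.heappush(crawl_queue, (priority, url))

enqueue("https://example.com/old-archive", 5)   # rarely changes: low priority
enqueue("https://example.com/", 1)              # homepage: revisit often
enqueue("https://example.com/new-post", 2)      # fresh URL found via a sitemap

# Pop URLs in priority order, as a crawler scheduler might.
crawl_order = [heapq.heappop(crawl_queue)[1] for _ in range(len(crawl_queue))]
print(crawl_order)
```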
Why It’s Important That Your Site Can Be Crawled
If a page on a site is not crawled, it will not be ranked in the search results, as it is highly unlikely to be indexed.
But the reasons why crawling is critical go much deeper.
Speedy crawling is essential for time-limited content.
Often, if it is not crawled and given visibility quickly, it becomes irrelevant to users.
For example, audiences will not be engaged by last week’s breaking news, an event that has passed, or a product that is now sold out.
But even if you don’t work in an industry where time to market is critical, speedy crawling is always beneficial.
When you refresh an article or release a significant on-page SEO change, the faster Googlebot crawls it, the faster you’ll benefit from the optimization – or see your mistake and be able to revert.
You can’t fail fast if Googlebot is crawling slowly.
Think of crawling as the cornerstone of SEO; your organic visibility is entirely dependent upon it being done well on your website.
Measuring Crawling: Crawl Budget Vs. Crawl Efficacy
Contrary to popular opinion, Google does not aim to crawl and index all content of all websites across the internet.
Crawling of a page is not guaranteed. In fact, most sites have a substantial portion of pages that have never been crawled by Googlebot.
If you see the exclusion “Discovered – currently not indexed” in the Google Search Console page indexing report, this issue is impacting you.
But if you do not see this exclusion, it doesn’t necessarily mean you have no crawling issues.
There is a common misconception about what metrics are meaningful when measuring crawling.
But the idea that more crawling is inherently better is completely misguided. The total number of crawls is nothing but a vanity metric.
Enticing 10 times the number of crawls per day doesn’t necessarily correlate with faster (re)indexing of the content you care about. All it correlates with is putting more load on your servers, costing you more money.
The focus should never be on increasing the total amount of crawling, but rather on quality crawling that results in SEO value.
Crawl Efficacy Value
Quality crawling means reducing the time between publishing or making significant updates to an SEO-relevant page and the next visit by Googlebot. This delay is the crawl efficacy.
To determine the crawl efficacy, the recommended approach is to extract the created or updated datetime value from the database and compare it to the timestamp of the next Googlebot crawl of the URL in the server log files.
If this is not possible, you could consider calculating it using the lastmod date in the XML sitemaps and periodically query the relevant URLs with the Search Console URL Inspection API until it returns a last crawl status.
By quantifying the time delay between publishing and crawling, you can measure the real impact of crawl optimizations with a metric that matters.
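The calculation above can be sketched in a few lines, assuming you can get a publish or update datetime from your CMS and a Googlebot hit from a common-log-format server log. The timestamps and log line below are invented, and in practice you would filter for verified Googlebot requests.

```python
from datetime import datetime

# Publish datetime from your CMS (invented for illustration).
published_at = datetime(2024, 5, 1, 9, 0, 0)

# A common-log-format line for the next Googlebot hit on the URL (also invented).
log_line = '66.249.66.1 - - [01/May/2024:15:30:00 +0000] "GET /new-post HTTP/1.1" 200 5120 "-" "Googlebot/2.1"'

# Pull the timestamp out of the [...] field, dropping the timezone offset.
stamp = log_line.split("[")[1].split("]")[0].split(" ")[0]   # "01/May/2024:15:30:00"
crawled_at = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S")

# Crawl efficacy: the delay between publishing and the next Googlebot crawl.
crawl_efficacy = crawled_at - published_at
print(crawl_efficacy)  # 6:30:00
```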
The lower the crawl efficacy value (that is, the shorter the delay), the faster new or updated SEO-relevant content will be shown to your audience on Google surfaces.
If your site’s crawl efficacy score shows Googlebot is taking too long to visit content that matters, what can you do to optimize crawling?
Search Engine Support For Crawling
There has been a lot of talk in the last few years about how search engines and their partners are focused on improving crawling.
After all, it’s in their best interests. More efficient crawling not only gives them access to better content to power their results, but it also helps the world’s ecosystem by reducing greenhouse gases.
Most of the talk has been around two APIs that are aimed at optimizing crawling.
The idea is rather than search engine spiders deciding what to crawl, websites can push relevant URLs directly to the search engines via the API to trigger a crawl.
In theory, this not only allows you to get your latest content indexed faster, but also offers an avenue to effectively remove old URLs, which is something that is currently not well-supported by search engines.
Non-Google Support From IndexNow
The first API is IndexNow. This is supported by Bing, Yandex, and Seznam, but importantly not Google. It is also integrated into many SEO tools, CRMs & CDNs, potentially reducing the development effort needed to leverage IndexNow.
This may seem like a quick win for SEO, but be cautious.
Does a significant portion of your target audience use the search engines supported by IndexNow? If not, triggering crawls from their bots may be of limited value.
But more importantly, assess what integrating on IndexNow does to server weight vs. crawl efficacy score improvement for those search engines. It may be that the costs are not worth the benefit.
Google Support From The Indexing API
The second one is the Google Indexing API. Google has repeatedly stated that the API can only be used to crawl pages with either JobPosting or BroadcastEvent markup. And many have tested this and proved this statement to be false.
By submitting non-compliant URLs to the Google Indexing API you will see a significant increase in crawling. But this is the perfect case for why “crawl budget optimization” and basing decisions on the amount of crawling is misconceived.
Because for non-compliant URLs, submission has no impact on indexing. And when you stop to think about it, this makes perfect sense.
You’re only submitting a URL. Google will crawl the page quickly to see if it has the specified structured data.
If so, then it will expedite indexing. If not, it won’t. Google will ignore it.
So, calling the API for non-compliant pages does nothing except add unnecessary load on your server and wastes development resources for no gain.
Google Support Within Google Search Console
The other way in which Google supports crawling is manual submission in Google Search Console.
Most URLs that are submitted in this manner will be crawled and have their indexing status changed within an hour. But there is a quota limit of 10 URLs within 24 hours, so the obvious issue with this tactic is scale.
However, this doesn’t mean disregarding it.
You can automate the submission of URLs you see as a priority via scripting that mimics user actions to speed up crawling and indexing for those select few.
Lastly, for anyone who hopes clicking the ‘Validate fix’ button on ‘discovered currently not indexed’ exclusions will trigger crawling, in my testing to date, this has done nothing to expedite crawling.
So if search engines will not significantly help us, how can we help ourselves?
How To Achieve Efficient Site Crawling
There are five tactics that can make a difference to crawl efficacy.
1. Ensure A Fast, Healthy Server Response
A highly performant server is critical. It must be able to handle the amount of crawling Googlebot wants to do without any negative impact on server response time or erroring out.
Check your site host status is green in Google Search Console, that 5xx errors are below 1%, and server response times trend below 300 milliseconds.
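The two thresholds above (5xx rate below 1%, response times trending under 300 ms) can be checked from parsed log records. The records below, as (status code, response time in milliseconds) pairs, are invented; real monitoring would parse your actual server logs.

```python
# Invented log records: (HTTP status, response time in ms).
records = [(200, 120), (200, 250), (301, 90), (500, 800)] + [(200, 180)] * 96

# Share of requests that errored with a 5xx status.
error_rate = sum(1 for status, _ in records if status >= 500) / len(records)

# Median response time as a rough "trend" indicator.
median_ms = sorted(ms for _, ms in records)[len(records) // 2]

print(f"5xx rate: {error_rate:.1%}")     # 1.0% -> right at the threshold
print(f"median response: {median_ms} ms")
```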
2. Remove Valueless Content
When a significant portion of a website’s content is low quality, outdated, or duplicated, it diverts crawlers from visiting new or recently updated content as well as contributes to index bloat.
The fastest way to start cleaning up is to check the Google Search Console pages report for the exclusion ‘Crawled – currently not indexed.’
In the provided sample, look for folder patterns or other issue signals. For the patterns you find, fix them by merging similar content with a 301 redirect or deleting content with a 404, as appropriate.
3. Instruct Googlebot What Not To Crawl
While rel=canonical links and noindex tags are effective at keeping the Google index of your website clean, they cost you in crawling.
While sometimes this is necessary, consider if such pages need to be crawled in the first place. If not, stop Google at the crawling stage with a robots.txt disallow.
Find instances where blocking the crawler may be better than giving indexing instructions by looking in the Google Search Console coverage report for exclusions from canonicals or noindex tags.
Also, review the sample of ‘Indexed, not submitted in sitemap’ and ‘Discovered – currently not indexed’ URLs in Google Search Console. Find and block non-SEO relevant routes such as:
- Parameter pages, such as ?sort=oldest.
- Functional pages, such as “shopping cart.”
- Infinite spaces, such as those created by calendar pages.
- Unimportant images, scripts, or style files.
- API URLs.
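A robots.txt sketch for blocking the non-SEO-relevant routes above. The paths and parameter names are hypothetical, so adapt them to your site, and take care not to block scripts or styles that Google needs to render your pages.

```txt
User-agent: *
# Parameter pages, e.g. ?sort=oldest
Disallow: /*?sort=
# Functional pages, such as the shopping cart
Disallow: /cart
# Infinite spaces created by calendar pages
Disallow: /calendar/
# API routes
Disallow: /api/
```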
You should also consider how your pagination strategy is impacting crawling.
4. Instruct Googlebot On What To Crawl And When
An optimized XML sitemap is an effective tool to guide Googlebot toward SEO-relevant URLs.
Optimized means that it dynamically updates with minimal delay and includes the last modification date and time to inform search engines when the page last was significantly changed and if it should be recrawled.
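A minimal sketch of generating such a sitemap with Python's standard library. The URLs and dates are placeholders; in a real setup, the lastmod values would come from your CMS and change only when a page changes significantly.

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Placeholder pages as (URL, last significant modification date).
pages = [
    ("https://example.com/", "2024-05-01"),
    ("https://example.com/new-post", "2024-05-03"),
]

urlset = ET.Element("urlset", xmlns=NS)
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

sitemap_xml = ET.tostring(urlset, encoding="unicode")
print(sitemap_xml)
```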
5. Support Crawling Through Internal Links
We know crawling can only occur through links. XML sitemaps are a great place to start; external links are powerful but challenging to build in bulk at quality.
Internal links, on the other hand, are relatively easy to scale and have significant positive impacts on crawl efficacy.
Focus special attention on mobile sitewide navigation, breadcrumbs, quick filters, and related content links – ensuring none are dependent upon JavaScript.
Technical requirements
The technical requirements cover the bare minimum that Google Search needs from a web page in order to show it in search results. There are actually very few technical things you need to do to a web page; most sites pass the technical requirements without even realizing it.
Key best practices
While there are many things you can do to improve your site's SEO, there are a few core practices that can have the most impact on your web content's ranking and appearance on Google Search:
- Create helpful, reliable, people-first content.
- Use words that people would use to look for your content, and place those words in prominent locations on the page, such as the title and main heading of a page, and other descriptive locations such as alt text and link text.
- Make your links crawlable so that Google can find other pages on your site via the links on your page.
- Tell people about your site. Be active in communities where you can tell like-minded people about your services and products that you mention on your site.