There is a lot of focus at the moment around UX on the web, and rightly so.
Google’s guidelines essentially boil down to: create websites for your users and don’t try to ‘game’ the system. In other words, write good content and make your websites easy to use. This should always be the baseline. Don’t create a website that you think will rank well by stuffing your search terms into the title and making sure your content hits the ‘correct’ keyword density. Focus on the user.
Having said that, there is one visitor to your website that you may need to pay attention to. The Googlebot.
The Googlebot is the crawler that reads web pages and follows links to your other content. It determines what is shown from your site on the search engine results page.
The Googlebot reads the content on your website, but it also reads metadata that the typical user doesn’t see. For example, the page description is what the Googlebot pulls from the page’s metadata to display on the search engine results page (SERP).
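As a quick illustration (the title and description text here are made up), this is where that metadata lives in the head of the page:

```html
<head>
  <!-- The clickable headline shown on the SERP -->
  <title>Be kind to your bots | Example Blog</title>
  <!-- The snippet the Googlebot can pull for the SERP -->
  <meta name="description"
        content="How metadata, structured data and hreflang help search engines understand your site.">
</head>
```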
SERP answer boxes
In the last few years, Google has started including answer boxes in its search engine results pages that give the user an answer to their query without the need to visit the actual site. While this is probably a better experience for the user, it’s not good if you are trying to earn advertising revenue from your site.
While there is not a lot you can do to change this, you can ensure that it is your website that is providing the answers.
Elsewhere on the SERP it is important that your website appears in the best way possible. Titles and page descriptions are truncated beyond a set length, so it is important to test them to make sure they display correctly.
If they are not right, Google can sometimes make a best guess as to what the content on the page is. This takes control away from you and prevents you from having consistency across your site in how it appears in search engines.
Metadata
Your site is crawled by robots; they used to be called ‘spiders’ in the more romantic age of the web. These bots (e.g. the Googlebot) love data, but more importantly, they love data about data. Another name for this is metadata.
For this blog post, examples of metadata could be the word count, the author, or the publish date.
This data exists in many different forms. For example, the title of this post is wrapped in an h1 tag; this is implicit metadata telling the bots that the sentence inside the h1 tag is what the post is about.
The title of the page is another example of implicit metadata.
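To make that concrete (with a placeholder heading), implicit metadata is nothing more than ordinary markup:

```html
<body>
  <!-- Implicit metadata: crawlers treat the h1 as the page's main topic -->
  <h1>Be kind to your bots</h1>
  <p>Article content…</p>
</body>
```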
You can also create explicit metadata. There are two main ways to do this: Schema.org markup (microdata) and JSON-LD (JSON for Linked Data).
Both of these technologies allow you to tell the bots exactly what the content is about. This is not only useful on the web with search engines, but is also a way of structuring data so that computers can understand it. For example, a calendar app might have access to an email confirming a hotel booking; if there is structured data in the email, the calendar could add your hotel stay automatically.
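As a sketch of what this can look like (the author name, date and word count below are placeholder values), explicit metadata for a blog post like this one could be expressed in JSON-LD, which sits in a script tag and never affects the visible content:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Be kind to your bots",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2018-05-01",
  "wordCount": 850
}
</script>
```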
This is a huge topic and there is a good article from Google showing the benefits.
Hreflang
This is another type of metadata that you can add to your page or sitemap. It helps to describe content on multi-language websites. Hreflang is especially useful when you have language-variant content; for example, US English and UK English.
It tells the bots which language and locale the content is intended for. SEO Moz have an up-to-date article on the topic.
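For example (assuming a site with US and UK English variants at these illustrative URLs), each page lists the full set of alternates, including itself, in its head:

```html
<!-- Every variant page carries the same set of alternates -->
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/pricing" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/pricing" />
<!-- Fallback for users whose locale matches neither variant -->
<link rel="alternate" hreflang="x-default" href="https://example.com/pricing" />
```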
Be kind to your bots
Technical SEO is a huge area, and so much of it is hidden in the secret world of search algorithms. Adding structured data to our websites gives them an extra layer of meaning: it turns content into a set of data that can be used by APIs and gives a richer experience to our users.