How to Make a Website Optimized for SEO


Now that you know what SEO is and the main factors Google takes into account when ranking a website, you need to learn what to do so that your page has a chance of ranking higher in the SERPs.

In this chapter we will talk about how to optimize the main ranking factors, as well as the main SEO problems that arise when optimizing a website and their possible solutions.

We will divide the topics of this chapter into four large blocks:

Accessibility
Indexability
Content
Meta tags

1. Accessibility

The first step in optimizing the SEO of a website is to allow search engines to access our content. That is, you have to check whether the website is visible to search engines and, above all, how they are seeing the page.

For several reasons that we will explain later, it may be the case that search engines cannot read a website correctly, which is an essential requirement for ranking.

Things to keep in mind for good accessibility

Robots.txt file
Robots meta tag
HTTP status codes
Sitemap
Web structure
JavaScript and CSS
Load speed
Robots.txt file

The robots.txt file is used to prevent search engines from accessing and indexing certain parts of a website. It is very useful for preventing Google from showing pages we do not want in the search results. For example, in WordPress, to keep crawlers out of the administration files, the robots.txt file would look like this:

Example
User-agent: *

Disallow: /wp-admin/

NOTE: You must be very careful not to block search engines' access to your entire website without realizing it, as in this example:

Example
User-agent: *

Disallow: /

We must verify that the robots.txt file is not blocking any important part of our website. We can do this by visiting the URL www.example.com/robots.txt, or through Google Webmaster Tools under "Crawl" > "robots.txt Tester".

The robots.txt file can also be used to indicate where our sitemap is, by adding a Sitemap line, usually at the end of the file.

Therefore, an example of full robots.txt for WordPress would look like this:

Example
User-agent: *

Disallow: /wp-admin/

Sitemap: http://www.example.com/sitemap.xml

If you want to go into more detail about this file, we recommend visiting the official website that documents the robots.txt standard.

Robots meta tag

The "robots" meta tag is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains.

When analyzing a page, you should check whether there is any meta tag that is mistakenly blocking these robots' access. This is an example of how these tags look in the HTML code:

Example
<meta name="robots" content="noindex, nofollow">

On the other hand, robots meta tags are very useful for preventing Google from indexing pages that do not interest you, such as pagination or filter pages, while still following their links so it can continue crawling our website. In that case the tag would look like this:

Example
<meta name="robots" content="noindex, follow">

We can check the meta tags by right-clicking on the page and selecting "View page source".

Or, if we want to go a little further, with the Screaming Frog tool we can see at a glance which pages of the whole website have this tag implemented. You can see it in the "Directives" tab, in the "Meta Robots 1" field. Once you have located all the pages carrying these tags, you just have to remove the ones that should not be there.

HTTP status codes

If a URL returns an error status code (404, 502, etc.), users and search engines will not be able to access that page. To identify these URLs we recommend that you also use Screaming Frog, because it quickly displays the status of all the URLs on your page.

TIP: Every time you run a new crawl in Screaming Frog, you can export the results to a CSV, so you can combine them all into a single Excel file later.

Sitemap

The sitemap is an XML file that contains a list of the pages of the site along with some additional information, such as how often each page changes its content, when it was last updated, and so on.

A small excerpt from a sitemap would be:

Example
<url>
  <loc>http://www.example.com/</loc>
  <changefreq>daily</changefreq>
  <priority>1.0</priority>
</url>
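For reference, a complete minimal sitemap wrapping that excerpt would look something like the sketch below (the URL is illustrative; the XML declaration and the urlset namespace are required by the sitemaps.org protocol):

Example
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>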

Important points to check with the Sitemap are:

It follows the protocols; otherwise Google will not process it properly
It is uploaded to Google Webmaster Tools
It is up to date: when you update your website, make sure all the new pages are in your sitemap
All the pages in the sitemap are being indexed by Google
If the website does not have a sitemap, we must create one by following four steps:

Generate an Excel file with all the pages that we want to be indexed; for this we can reuse the Excel file we created when checking the HTTP response codes
Create the sitemap; for this we recommend an online sitemap generator (simple and very complete)
Compare the pages in your Excel file with those in the sitemap, and remove the ones we do not want to be indexed
Upload the sitemap through Google Webmaster Tools
Web structure

If the structure of a website is too deep, it will be harder for Google to reach all its pages. It is recommended that the structure be no more than three levels deep (not counting the home page), since the Google bot has a limited time to crawl each website; the more levels it has to go through, the less time it has left to reach the deepest pages.

That is why it is always better to create a horizontal web structure rather than a vertical one.

Vertical structure

[Image: vertical web structure]

Horizontal structure

[Image: horizontal web structure]

Our advice is to make a diagram of the whole website in which you can easily see how many levels it has, from the home page to the deepest page, and calculate how many clicks it takes to reach each one.

Use Screaming Frog again to find out what level each page is on and whether it has links pointing to it.

JavaScript and CSS

Although in recent years Google has become much smarter at reading these technologies, we must be careful, because JavaScript can hide part of our content and CSS can scramble it by showing it in a different order from the one Google sees.

There are two methods to know how Google reads a page:

Plugins
The "cache:" command
Plugins

Plugins such as Web Developer or Disable-HTML help us see the web the way a search engine "crawls" it. To do this, open one of these tools and disable JavaScript. We do this because all drop-down menus, links and text must remain readable by Google.

Then we disable the CSS, since we want to see the actual order of the content, and CSS can change this completely.

The "cache:" command

Another way to know how Google sees a website is through the "cache:" command.

Enter "cache:www.example.com" in the search engine and click on "Text-only version". Google will show you a snapshot from which you can see how it reads the website and when it last accessed it.

Of course, for the "cache:" command to work correctly, our pages must already be in Google's index.

Once Google indexes a page for the first time, it determines how often it will revisit it looking for updates. This will depend on the authority and relevance of the domain the page belongs to and on how frequently it is updated.

Whether by means of a plugin or the "cache:" command, make sure you check the following points:

You can see all the menu links.
All links on the web are clickable.
All the text is still visible with CSS and JavaScript disabled.
The most important links are at the top.

Load speed

The Google bot has a limited time to crawl our site: the less time each page takes to load, the more pages it will manage to visit.

Also note that a very slow page load can make your bounce rate shoot up, so speed becomes a vital factor not only for ranking but also for a good user experience.

To check the loading speed of your website we recommend Google PageSpeed Insights; there you can see which problems slow your site down, along with the advice Google offers to fix them. Focus on the ones with high and medium priority.

2. Indexability

Once the Google bot has accessed a page, the next step is to index it. Indexed pages are included in an index, where they are sorted according to their content, their authority and their relevance, so that it is easier and faster for Google to retrieve them.

How do I check whether Google has indexed my website correctly?

The first thing you have to do to find out whether Google has indexed your website correctly is to perform a search with the "site:" command; this way, Google will give you the approximate number of pages of your website that it has indexed:

[Image: results of a Google "site:" search]

If you have linked your website to Google Webmaster Tools, you can also check the real number of indexed pages by going to Google Index > Index Status.

Since you know (more or less exactly) how many pages your website has, you can compare that figure with the number of pages Google has indexed. There are three scenarios:

The number in both cases is very similar: it means everything is in order.
The number that appears in the Google search is lower, which means Google is not indexing many of the pages. This happens because it cannot access all the pages of the website; to solve it, review the accessibility part of this chapter.
The number that appears in the Google search is higher, which means your website has a duplicate content problem. The most likely reason there are more indexed pages than actually exist on your website is that you have duplicate content, or that Google is indexing pages you do not want indexed.

Duplicate Content

Having duplicate content means that the same content is accessible through several URLs. It is a very common problem, often involuntary, and it can have negative effects on ranking in Google.

Here are the main reasons for duplicate content:

"Canonicalization" of the page
URL Parameters
Pagination

"Canonicalization" of the page
This is the most common cause of duplicate content and occurs when your home page has more than one URL:

Example
example.com

www.example.com

example.com/index.html

www.example.com/index.html

Each of these URLs points to the same page with the same content. If we do not tell Google which one is correct, it will not know which one to rank and may end up ranking exactly the version we do not want.

Solution. There are three options:
Set up a redirect on the server to make sure only one version of the page is shown to users.
Define which version we want to be the main one ("www" or "non-www") in Google Webmaster Tools.
Add a rel="canonical" tag in each version pointing to the one considered correct, as shown in the example below.
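As a minimal sketch (the URL is illustrative), the canonical tag would go in the <head> of every duplicate version and point to the URL we want Google to rank:

Example
<link rel="canonical" href="http://www.example.com/" />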
URL Parameters
There are many types of parameters, especially in e-commerce: product filters (colour, size, rating, etc.), sorting options (lowest price, relevance, highest price, grid view, etc.) and user sessions. The problem is that many of these parameters do not change the content of the page, which generates many URLs for the same content.

www.example.com/pens?color=black&price-from=5&price-to=10

In this example we find three parameters: color, minimum price and maximum price.

Solution
Add a rel="canonical" tag pointing to the original page; this way you will avoid any confusion on Google's part about which page is the original.
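As an illustrative sketch (URLs taken from the example above), the filtered URL would include in its <head> a canonical tag pointing to the clean listing page:

Example
<link rel="canonical" href="http://www.example.com/pens" />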

Another possible solution is to indicate, through Google Webmaster Tools > Crawl > URL Parameters, which parameters Google should ignore when indexing the pages of a website.

Pagination
When an article, product list, or tag or category page spans more than one page, duplicate content issues can appear even though the pages contain different content, because they are all focused on the same topic. This is a huge problem on e-commerce sites, where there can be hundreds of articles in the same category.

Solution
Currently the rel="next" and rel="prev" tags let search engines know which pages belong to the same category or publication, so that all the ranking potential can be focused on the first page.

How to use the rel="next" and rel="prev" tags

1. On the first page, add the rel="next" tag in the <head> section of the code:

<link rel="next" href="http://www.example.com/page-2.html" />
2. On all pages except the first and the last, add both the rel="next" and rel="prev" tags:

<link rel="prev" href="http://www.example.com/page-1.html" />
<link rel="next" href="http://www.example.com/page-3.html" />
3. On the last page, add only the rel="prev" tag:

<link rel="prev" href="http://www.example.com/page-4.html" />
Another solution is to find the pagination parameter in the URL and enter it in Google Webmaster Tools so that it is not indexed.

Cannibalization

Keyword cannibalization occurs when several pages on a website compete for the same keywords. This confuses the search engine, which cannot tell which one is most relevant for that keyword.

This problem is very common in e-commerce, because several versions of the same product all "attack" the same keywords. For example, if you sell a book in paperback, hardcover and digital editions, you will have three pages with practically the same content.

Solution
Create a main page for the product, from which the pages for the different formats can be reached, and include on each of those pages a canonical tag pointing to the main page (see the example below). The best approach will always be to focus each keyword on a single page to avoid any cannibalization problem.
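As an illustrative sketch (the product URLs are hypothetical), each format page would carry a canonical tag in its <head> pointing to the main product page:

Example
<!-- On example.com/book-hardcover, example.com/book-paperback and example.com/book-ebook -->
<link rel="canonical" href="http://www.example.com/book" />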

3. Content

In recent years it has become quite clear that, for Google, content is king. So let's give it a good throne.

Content is the most important part of a website: no matter how well optimized it is at the SEO level, if it is not relevant to the searches users perform, it will never appear in the top positions.

To make a good analysis of the content of our website you have several tools at your disposal, but in the end the most useful thing is to look at the page with JavaScript and CSS disabled, as explained above. This way you will see what content Google is actually reading and in what order it is arranged.

When analyzing the content of the pages you must ask yourself several questions that will guide you in the process:

Does the page have enough content? There is no standard measure of how much is "enough", but it should contain at least 300 words.
Is the content relevant? It must be useful to the reader: just ask yourself whether you would read it yourself. Be honest.
Does it have important keywords in the first few paragraphs? Besides these, we should use related terms, because Google is very effective at relating terms.
A PAGE WILL NEVER RANK FOR SOMETHING IT DOES NOT CONTAIN.
Does it have keyword stuffing? If the content of the page suffers from an excess of keywords, Google will not be amused. There is no exact number that defines a perfect keyword density, but Google's advice is to be as natural as possible.

 

Does it have spelling mistakes?

Is it easy to read? If reading it is not tedious, it will be fine. Paragraphs should not be too long, the font should not be too small, and it is advisable to include images or videos that reinforce the text. Always remember which audience you are writing for.
Can Google read the text on the page? We have to prevent the text from being embedded in Flash, images or JavaScript. We can verify this by looking at the text-only version of our page, entering cache:www.example.com in Google and selecting that version.
Is the content well structured? Does it have its corresponding H1, H2, etc. tags? Are the images well laid out?
Is it linkable? If we do not give users a way to share it, it is very likely that they will not. Include social sharing buttons in visible places on the page that do not interfere with the display of the content, whether it is a video, a photo or text.
Is it up to date? The more up-to-date your content is, the higher Google's crawl frequency on your website and the better the user experience.
TIP: You can create an Excel file with all the pages, their texts and the keywords you want to appear in them; this will make it easier to see where you should reduce or increase the number of keywords on each page.

 

4. Meta tags

Meta tags are used to convey information to search engines about what a page is about, so they can sort and display their results. These are the most important tags to keep in mind:

Title

The title tag is the most important element among the meta tags. It is the first thing that appears in the Google results.

When optimizing the title, keep in mind that:

The tag should be in the <head></head> section of the code.
Each page must have a unique title.
It should not exceed 70 characters; otherwise it will be cut off.
It must be descriptive with respect to the content of the page.
It must contain the keyword for which we are optimizing the page.
We should never stuff the title with keywords; this makes users suspicious and makes Google think we are trying to deceive it.

Another aspect to take into account is where to put the "brand", that is, the name of the website. It is usually placed at the end, to give more weight to the keywords, separating them from the site name with a hyphen or a vertical bar, as in the example below.
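Putting these guidelines together, a title following this pattern might look something like this (the keyword and brand name are placeholders):

Example
<title>Buy Ballpoint Pens Online - Example Store</title>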

Meta-description

Although it is not a critical ranking factor, it considerably affects the click-through rate in the search results.

For the meta description we follow the same principles as for the title, except that its length should not exceed 155 characters (a sketch is shown below). For both titles and meta descriptions we must avoid duplication; we can check this in Google Webmaster Tools > Search Appearance > HTML Improvements.
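A sketch of a meta description following these principles (the text is a placeholder and stays under 155 characters):

Example
<meta name="description" content="Buy ballpoint pens online from 5 to 10 euros. Free shipping, all colours available and delivery in 24 hours." />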

Meta Keywords

Meta keywords were once a very important ranking factor, but Google found out how easy they were to use to manipulate search results, so it removed them as a ranking factor.

H1, H2, H3... tags

The H1, H2, etc. tags are very important for a good information structure and a good user experience, since they define the hierarchy of the content, which improves SEO. We must give special importance to the H1 because it is usually at the top of the content, and the higher up a keyword is, the more importance Google will give it. A sketch of a heading hierarchy is shown below.
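A simplified sketch of what such a hierarchy might look like in the HTML (the headings are placeholders):

Example
<h1>Ballpoint pens: complete buying guide</h1>
<h2>Types of ballpoint pens</h2>
<h3>Gel pens</h3>
<h3>Rollerball pens</h3>
<h2>How to choose the right pen</h2>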

“Alt” tag in the image

The “alt” tag on the images is added directly into the image code itself.

Example
<img src="http://www.example.com/example.jpg" alt="descriptive keyword" />

This tag has to be descriptive with respect to the image and its content, since that is what Google reads when crawling it and one of the factors it uses to rank the image in Google Images.

Conclusion

You now know how to make a page optimized for SEO, and you know there are many factors to optimize if you want to appear in the top positions of the search results. Now you will surely ask yourself: which keywords will rank my website best?

Thanks for staying with us.

I am the founder of Techloge.com. My aim is to write about tech, affiliates, and digital marketing to spread knowledge among my readers.
