Seozone - 5 tips

  1. How valuable was Search in 2015? @SearchMATH
  2. @SearchMATH Turkey
  3. Have you ever intentionally browsed products at a store but decided to buy them online? Have you ever intentionally browsed products online but decided to buy them in-store? @SearchMATH 68% Yes / 32% No; 70% Yes / 30% No
  4. @SearchMATH “Web-rooming” is just as important as “showrooming”
  5. @SearchMATH 56% say the first thing they do when researching a purchase is to go to a search engine
  6. @SearchMATH
  7. @SearchMATH
  8. Today I’m going to look at “ON-THE-PAGE FACTORS”, AKA “TECHNICAL SEO” @SearchMATH
  9. ON-THE-PAGE FACTORS: • CONTENT • ARCHITECTURE • HTML
  10. Why does TECHNICAL SEO matter? @SearchMATH
  11. Ok… nice article @SearchMATH
  12. But how healthy is their ON-THE-PAGE SEO? @SearchMATH
  13. @SearchMATH Let’s test the User Experience (UX)
  14. But do 404s hurt my site? @SearchMATH
  15. YES
  16. @SearchMATH
  17. @SearchMATH
  18. [Chart] Website ‘Health’ impact on Commercial Performance (Website Health, rated 1 to 10, plotted against Commercial Performance, Low to High)
  19. What do the world’s top SEO experts say? @SearchMATH
  20. “Even a basic understanding of what to look for in technical SEO can get you far…
  21. …So many people today focus too heavily on off-page SEO, but if a site is technically flawed, it won't matter how many links you have or how good your content is.”
  22. Isolate & Dominate Your Base Metric: 1. Organic revenue 2. Visits compared to last month 3. Visits compared year-over-year http://searchengineland.com/win-battle-proving-seos-value-201328
  23. “The biggest problems we have are tech problems. Often webmasters [are] trying to be too clever and give confusing signals… John Mueller, Webmaster Trends Analyst, Google http://managinggreatness.com/2014/01/27/john-mueller-closing-keynote-at-smx-israel/
  24. …Send clear, consistent and obvious signals.” John Mueller, Webmaster Trends Analyst, Google http://managinggreatness.com/2014/01/27/john-mueller-closing-keynote-at-smx-israel/
  25. GOOGLE RECOMMENDS CRAWLING SEPARATELY In a Google Webmaster Hangout on 16th October, John Mueller recommended running a separate crawl to help identify and resolve technical issues that could be causing problems and delays when Google tries to crawl your site. “We get kind of lost crawling all of these unnecessary URLs and we might not be able to crawl your new updated content.”
  26. I’m going to show you a case study to remind us of some of the power at our disposal @SearchMATH
  27. Let’s begin… @SearchMATH
  28. Where do I start with a technical review? @SearchMATH
  29. Jon suggests we “CRAWL” @SearchMATH TIP #1 (a minimal crawl sketch follows the slide list)
  30. Independent web crawler software which imitates Googlebot to produce rich reports detailing opportunities to achieve perfect website architecture. @SearchMATH
  31. “CRAWL” = DISCOVERY @SearchMATH
  32. An embarrassing truth for too many web managers… @SearchMATH
  33. They don’t even know how many pages they have on their website @SearchMATH
  34. Let alone how many issues are lurking below the surface @SearchMATH
  35. Establish a clear picture of your website and then consider how it fits into the URL universe @SearchMATH
  36. Recommend 5 actions from your audit: • 3 Quick-wins • 2 Long-term wins @SearchMATH TIP #2
  37. What’s the story with these rel=canonical links? @SearchMATH
  38. “Including a rel=canonical link in your webpage is a strong hint to search engines about your preferred version to index among duplicate pages on the web.”
  39. “rel=canonical can be a bit tricky because it’s not very obvious when there’s a misconfiguration.”
  40. www.mavi.com rel=canonical to a Disallowed URL @SearchMATH
  41. Audit your canonical tags quarterly @SearchMATH TIP #3 (a canonical-check sketch follows the slide list)
  42. Your robots.txt file holds the power over crawlers. Monitor it for free with Robotto.org @SearchMATH TIP #4 (a robots.txt monitoring sketch follows the slide list)
  43. So, how can I fix this issue?
  44. Adding this directive in the robots.txt is quicker, cleaner and easier to manage than getting a meta noindex added to specific pages. @SearchMATH
  45. But ultimately the meta canonical needs amending to…
  46. DeepCrawl already supports the robots.txt noindex directive: check which pages are being noindexed in your report via Indexation > Non-Indexable Pages > Noindex Pages.
  47. Constantly optimise your architecture to improve crawl efficiency @SearchMATH TIP #5
  48. So, remember: 1) Crawl your website 2) 5 actions per audit 3) Audit your canonicals 4) Try robotto.org for FREE 5) Optimise your crawl efficiency @SearchMATH
  49. Thank you! Please say “hi” @SearchMATH
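
To make Tip #1 concrete, here is a minimal sketch of what "crawl = discovery" looks like in practice. It is not the crawler referenced in the deck; it is a hedged, standard-library Python example that starts at a placeholder homepage, follows internal links breadth-first, and records each URL's HTTP status code, which is already enough to answer "how many pages do we have?" and "which of them return 404?".

    # Minimal breadth-first crawl of one site: discovers internal URLs and
    # records their HTTP status codes (e.g. to spot 404s). Illustrative only;
    # a production audit would respect robots.txt, throttle itself, and handle
    # far more edge cases than this sketch does.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError

    class LinkParser(HTMLParser):
        """Collects href values from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=200):
        host = urlparse(start_url).netloc
        queue, seen, statuses = deque([start_url]), {start_url}, {}
        while queue and len(statuses) < max_pages:
            url = queue.popleft()
            try:
                req = Request(url, headers={"User-Agent": "audit-sketch/0.1"})
                with urlopen(req, timeout=10) as resp:
                    statuses[url] = resp.status
                    html = resp.read().decode("utf-8", errors="replace")
            except HTTPError as err:
                statuses[url] = err.code      # e.g. 404 or 500
                continue
            except URLError:
                statuses[url] = None          # unreachable
                continue
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href).split("#")[0]
                if urlparse(absolute).netloc == host and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return statuses

    if __name__ == "__main__":
        results = crawl("https://www.example.com/")   # placeholder start URL
        print(len(results), "URLs discovered")
        print(sum(1 for s in results.values() if s == 404), "returned 404")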
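
For Tip #3, and the www.mavi.com example of a rel=canonical pointing at a disallowed URL, one way to catch that class of misconfiguration is to compare each page's canonical target against the site's robots.txt rules. The sketch below is an illustration under assumptions (standard-library Python; the page and robots.txt URLs are placeholders), not the audit process used in the deck.

    # Flags pages whose rel=canonical target is blocked by robots.txt,
    # i.e. the contradictory signal the deck warns about. Illustrative only.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import Request, urlopen
    from urllib.robotparser import RobotFileParser

    class CanonicalParser(HTMLParser):
        """Grabs the href of <link rel="canonical">, if present."""
        def __init__(self):
            super().__init__()
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            if tag == "link":
                attrs = dict(attrs)
                if (attrs.get("rel") or "").lower() == "canonical":
                    self.canonical = attrs.get("href")

    def check_canonical(page_url, robots_url, user_agent="Googlebot"):
        robots = RobotFileParser(robots_url)
        robots.read()                                   # fetch and parse robots.txt
        req = Request(page_url, headers={"User-Agent": "audit-sketch/0.1"})
        with urlopen(req, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = CanonicalParser()
        parser.feed(html)
        if not parser.canonical:
            return page_url + ": no rel=canonical found"
        target = urljoin(page_url, parser.canonical)    # resolve relative hrefs
        if robots.can_fetch(user_agent, target):
            return page_url + " -> " + target + " [OK]"
        return page_url + " -> " + target + " [WARNING: canonical target is disallowed]"

    if __name__ == "__main__":
        # Placeholder URLs; feed in pages from your own crawl list.
        print(check_canonical("https://www.example.com/some-page/",
                              "https://www.example.com/robots.txt"))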
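
Tip #4 recommends monitoring robots.txt with Robotto.org. For the curious, the bare-bones version of such a monitor is sketched below (placeholder URL and snapshot path): fetch the live robots.txt, diff it against the last saved copy, and report any change. Run on a schedule, it gives an early warning when someone edits the file.

    # Bare-bones robots.txt change monitor: fetch the live file, diff it against
    # the previously saved snapshot, and store the new version. Illustrative only.
    import difflib
    from pathlib import Path
    from urllib.request import Request, urlopen

    ROBOTS_URL = "https://www.example.com/robots.txt"   # placeholder URL
    SNAPSHOT = Path("robots_snapshot.txt")              # placeholder local path

    def fetch_robots(url):
        req = Request(url, headers={"User-Agent": "audit-sketch/0.1"})
        with urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def main():
        current = fetch_robots(ROBOTS_URL)
        previous = SNAPSHOT.read_text(encoding="utf-8") if SNAPSHOT.exists() else ""
        if current == previous:
            print("robots.txt unchanged")
            return
        diff = difflib.unified_diff(previous.splitlines(), current.splitlines(),
                                    fromfile="previous", tofile="current", lineterm="")
        print("robots.txt changed:")
        print("\n".join(diff))
        SNAPSHOT.write_text(current, encoding="utf-8")  # keep the latest copy

    if __name__ == "__main__":
        main()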

Editor's Notes

  • Using a number of different channels to make a single purchase. An example of
    this kind of transaction is researching a product online and then buying it in
    store. In fact, more than 80% of all respondents conduct online research before
    they buy electronics, computers, books, music, and movies. Eighty-eight percent
    of US respondents said that they research a product online via their PC before
    buying (vs. 80% globally), and 73% of US respondents report that they
    research online when buying clothing, footwear, toys, and health and beauty
    products (vs. 60% globally). In other words, online research doesn’t just lead to
    online purchases, it’s also critical in leading to purchases through other
    channels and in driving traffic to physical brick-and-mortar outlets.
  • I love sightseeing – so that’s definitely covered
  • So I checked out a couple of sites and chose my target
  • Reasons to explore –

    Search engines only provide a limited amount of data. They are trying to be helpful, but don’t want to confuse beginners with complicated reports, or do anything which might not be in their own best interests.

    1 page every 2 seconds: 1M Pages

    Duplicate Content

    Thin Content

    Web Depth Opportunities

    Migration work is the riskiest time for websites – crawl staging environments

    Data storage example - Sears

    https://www.deepcrawl.com/knowledge/best-practice/7-compelling-reasons-not-to-use-google-search-console-on-its-own/
  • Christmas Eve, 1968 as Apollo 8 made its way around the dark side of the moon.

    A single photograph from Bill Anders
  • Earthrise
  • the most complete 3-D map of the local universe (out to a distance of 380 million light-years) ever created. Taking more than 10 years to complete, the 2MASS Redshift Survey (2MRS) also is notable for extending closer to the Galactic plane than previous surveys – a region that’s generally obscured by dust. 
    http://artdaily.com/news/47768/After-10-Years--Astronomers-Unveil-Most-Complete-3-D-Map-of-Local-Universe#.VkNLunbhDIU

    https://www.cfa.harvard.edu/news/2011-16
  • Unlike disallowed pages, noindexed pages don’t end up in the index and therefore won’t show in search results. Combine both in robots.txt to optimise your crawl efficiency: the noindex will stop the page showing in search results, and the disallow will stop it being crawled:
    Disallow: /example-page-1/
    Disallow: /example-page-2/
    Noindex: /example-page-1/
    Noindex: /example-page-2/
    This is the holy grail of robots control that SEOs have been looking for.

    SIMPLE, STRAIGHTFORWARD & POWERFUL
