The document discusses the importance of technical SEO and on-page factors like content, architecture, and HTML. It provides tips for technical SEO audits, including using independent web crawlers to discover technical issues, establishing the full scope of pages on a website, recommending quick and long-term fixes, regularly auditing canonical tags, using robots.txt to manage crawlers, and optimizing website architecture for efficient crawling. Quotes from SEO experts emphasize the importance of addressing technical issues and sending clear signals to search engines.
5. Have you ever intentionally browsed products at a store, but decided to buy them online? (Yes: 68%, No: 32%)
Have you ever intentionally browsed products online, but decided to buy them in-store? (Yes: 70%, No: 30%)
@SearchMATH
30. "Even a basic understanding of what to
look for in technical SEO can get you
far…
31. …So many people today focus too
heavily on off-page SEO, but if a site is
technically flawed, it won't matter how
many links you have or how good your
content is."
32. Isolate & Dominate Your Base Metric
1. Organic revenue
2. Visits compared to last month
3. Visits compared year-over-year
http://searchengineland.com/win-battle-proving-seos-value-201328
35. GOOGLE RECOMMENDS CRAWLING SEPARATELY
In a Google Webmaster Hangout on 16th October, John Mueller recommended running a separate
crawl to help identify and resolve technical issues that could be causing problems and delays
when Google tries to crawl your site.
“We get kind of lost crawling all of these unnecessary URLs and we might not be able to
crawl your new updated content.”
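Before spending crawl budget, it helps to know which URLs a crawler is even allowed to fetch. As a minimal sketch, Python's standard-library `urllib.robotparser` can replay a site's robots.txt rules against a list of URLs; the rules and URLs below are hypothetical examples, not taken from the talk.

```python
# Sketch: check which URLs Googlebot may fetch under a given set of
# robots.txt rules. Rules and URLs are illustrative examples only.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot
Disallow: /search
Disallow: /cart
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

urls = [
    "https://example.com/products/blue-widget",
    "https://example.com/search?q=widget",  # parameterised search URLs can waste crawl budget
    "https://example.com/cart",
]

for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "crawlable" if allowed else "blocked")
```

Blocking low-value URL patterns like these is one way to stop a crawler "getting lost" in unnecessary URLs, as in Mueller's quote above.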
36. I’m going to show you a
case study to remind us
of some of the power at
our disposal
@SearchMATH
Independent web crawler
software which imitates
Googlebot to produce rich
reports detailing
opportunities to achieve
perfect website architecture.
@SearchMATH
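The core of any such crawler is a breadth-first walk over a site's internal links. A minimal, self-contained sketch using only the Python standard library (the site contents and example.com URLs are hypothetical, and fetching is stubbed with an in-memory dictionary where a real crawler would make HTTP requests):

```python
# Minimal crawler sketch: BFS over internal links, collecting the set of
# discovered pages. Fetching is stubbed so the example is self-contained.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(self.base_url, href))

def extract_internal_links(html, base_url):
    """Return same-host links found in the page, resolved to absolute URLs."""
    extractor = LinkExtractor(base_url)
    extractor.feed(html)
    host = urlparse(base_url).netloc
    return [u for u in extractor.links if urlparse(u).netloc == host]

# Hypothetical site: URL -> HTML body (stands in for HTTP fetches).
SITE = {
    "https://example.com/": '<a href="/about">About</a> <a href="https://other.com/">x</a>',
    "https://example.com/about": '<a href="/">Home</a>',
}

def crawl(start):
    seen, queue = set(), [start]
    while queue:
        url = queue.pop(0)
        if url in seen or url not in SITE:
            continue
        seen.add(url)
        queue.extend(extract_internal_links(SITE[url], url))
    return seen

print(sorted(crawl("https://example.com/")))
```

A production crawler adds politeness delays, robots.txt checks, and a Googlebot-like user agent on top of this loop; the external link to other.com is deliberately filtered out, since an audit crawl stays on one host.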
59. “Including a rel=canonical link in your webpage is a
strong hint to search engines about your preferred
version to index among duplicate pages on the web.”
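Auditing canonicals starts with reading that hint out of each crawled page. A small sketch with the standard-library `html.parser` (the page markup and example.com URL are illustrative, not from the talk):

```python
# Sketch: extract the rel=canonical URL from a page's <head> -- the raw
# signal a canonical audit works from. Example HTML is illustrative.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

page = """
<html><head>
  <link rel="canonical" href="https://example.com/blue-widget">
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)
```

Run across a full crawl, this lets you flag pages whose canonical is missing, self-referencing when it shouldn't be, or pointing at a non-indexable URL.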
74. DeepCrawl already supports the robots.txt noindex directive: check which pages are
being noindexed in your report via Indexation > Non-Indexable Pages > Noindex
Pages.
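For reference, the robots.txt noindex directive mentioned above was never part of the official robots.txt standard, but in files that used it, it followed the same shape as a Disallow rule. The path below is a hypothetical example:

```
User-agent: *
Noindex: /internal-search/
```

Unlike Disallow, which blocks crawling, this unofficial directive was intended to ask that matching URLs be kept out of the index.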
77. So, remember:
1) Crawl your website
2) 5 actions per audit
3) Audit your canonicals
4) Try robotto.org for FREE
5) Optimise your crawl efficiency
@SearchMATH