Examining a Search Ranking Fluctuation

    Alright guys. This is not a list of search engine ranking factors. Think of it as a list of potential factors and a look at how the web appears from overhead; how the algorithms can see it. Some of these may affect rankings today, some may not. But even for the ones that don’t, expect them to be integrated soon. This is not search engine specific. Just possibilities.

    Whenever there’s a big shift in the search results, you need to know what the search engines see, and what they could possibly be changing, to try and find the source of your troubles or success. Your own list of potential factors may be different from mine (mine is heavily biased towards the type of sites I build), but whatever. Figured I’d put it out there anyways.

    • Site Ownership/Identification
      • Hosting/IPs – Websites obviously need to be hosted. It’s one of the things that restricts the building of truly solid link farms. Each IP has a few identifiable pieces of information: its C-class block, the company the IP block is registered to, and obviously the IP itself.
      • Registrar/WhoIs – No, they can’t access private whois information, even though Google is a registrar. Either way though, private whois will not exactly help you out on reinclusion requests. We also know for sure this information is archived to an extent (hence the issues with domain flipping that have arisen lately). In terms of connecting sites together, registrar can be significant. For example, this blog uses 1and1 (don’t get me started) private registration. Let’s say I have 2 sites on hostgator. With hundreds of registrars, and 256^4 (about 4.3 billion) IPs, statistically what do you think are the chances of 2 domains with private whois residing within 5 IPs of each other linking to each other? Astronomically low. This isn’t built into the algorithm yet as far as I know, but keep it in mind.
      • Linking by IP – The chances of 2 IPs linking back and forth to each other from different domains frequently are extraordinarily low unless the webmaster owns both servers. Once again, not known if this is a factor yet (a rough sketch of how such a check might look appears after this list).
    • OutBound Linking
      • Outbound Linking and Niche “base” Sites
        This is the theory that fixed Ask.com’s search engine. Look at the SEO sphere. There’s a distinct circle of sites that link between each other. You’ll see links going back and forth between myself and SEO ROI, and SEOMoz, and from there out to the rest of the SEO sphere. You’ll also see that sites linking into me have a tendency to link out to wickedfire and bluehat seo. This forms an intricate kind of web of authority. Sites being linked to most in the pyramid become the top sites. Sites at the bottom become more eligible to become major players when linked to from someone at the top. Most likely quite similar to Google’s “authority”, but perhaps not as niche specific as Ask’s once was.
      • Outbound Linking to Spammy Sites
        (Read: Linking to anywhere but Google or Wikipedia)
      • Use of NoFollow – I don’t for a second buy that Google is currently penalizing sites as being “SEOed” based on nofollow use. It defeats their end goal for nofollow. But it’s still excellent for spotting certain CMSes, like WordPress.
    • Inbound Links
      • Paid Links
        • Whored out Links: Examine the place you purchased your links from. Are there any other obviously paid links? Is your relevant link sitting next to a Viagra link? If so, congrats. There’s a chance someone reported the site as paid links.
        • Common Text: Contextual links rock. But is the article syndicated across 400 different identical blogs?
        • Common Location: Is the link in the footer? Rumor has it that GoogleBot uses the Gecko rendering engine. This means that yes, they can tell where the links are located.
      • Spammed Links
        • Similar Text – Though it’s unlikely this is a current factor, is there similar text around your spammed links? Perhaps a common username? Remember Google’s social API, which attempts to link together social profiles. It’s not a stretch to say this already does, or could someday, work on forums and all social news sites.
        • Common CMS/Footprinting – For link spam purposes, most sites are found by common footprints left by their CMS. However, that means there are footprints for the search engines to discover too. Or perhaps they’ve already classified and discounted links from super-spammable software.
        • Overspammed Locations – Think “guestbooks”. Ancient BBS implementations that haven’t seen a legitimate post in over 6 years. In the past, these have sped up indexing. Nowadays, while that remains true, I’ve noticed they have, if anything, a negative impact on rankings. So consider your links. Is there anything that just screams “link spam” about the locations?
      • Link Building Metrics
        • Link Velocity – The speed at which links are gathered, and how consistent that speed is. Ideally, a graph of when new links pop up should look like a bell curve, levelling out at some point on the way down (a quick way to eyeball this is sketched below, after the list).
        • Link Temporization – Certain links get removed by admins as link spam. The percentage of these is ideally low, as natural sites do not have a large percentage of their links removed.
        • Link Location – Web 2.0 is a game of temporary bumps. Frontpaging on a site like Digg creates a powerful link. But as soon as that link goes off the front page, some of the power is lost. Too much in the social media arena, and this can get messy. Beyond that, tags getting syndicated (wordpress.com for example keeps a feed of different tags), social bookmarks getting syndicated and later getting bumped off…there’s a lot that can change.
        • Keyword Variance – If you have all links with 100% identical anchor text…something is amiss. Google seems to be decent at realizing this already. Yahoo, not as good. Live? Well, I don’t need to go there. (A rough way to measure anchor concentration is sketched below as well.)
    • Domain Trust – Domain trust (and what occurs when it is lost) has always been a bit hard for me to analyze. But look at the parasite domains that get hit hard, then what happens after the fact. Is this changing? It has to, to a certain extent. But sites like Digg have been parasited hundreds of times, lost some trust, and yet still rank. It’s odd. Then other domains that have been parasited still rank for real key terms, but can’t be used to parasite again.
    • Internal SEO vs. External – The search engines have always been attempting to balance out the power of internal SEO (site structure, self-linking with proper anchor text, etc.) with external SEO (inbound links). This balance changes from time to time. Also, from search engine to search engine the emphasis is completely different. Live, for example, recently appears to be a sucker for internal links with a given anchor (or external links with identical anchors, no matter how many).
    • Content Freshness – Google has a love-hate relationship with freshness. Showing recent news results requires ranking things that could not possibly have gathered too many external links yet. This really opens it up to spam. But at the same time, it’s what users are looking for. Figure out how it was handled, how it is being handled, and if there’s any difference.
    • Internal Areas of Emphasis – Every search engine has a few things that are weighted internally to a different extent. Domains (exact/partial match), subdomains, titles, <h#> tags, etc. The balance in importance of these can change from time to time. It’s a bit hard to detect specifically, but a good thing to keep an overall eye on.
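
    A couple of the points above are easier to see in a few lines of code. First, the IP-proximity idea from the hosting and whois bullets: a minimal sketch in plain Python (the IPs, the 5-address window, and the function names are made up for illustration) of how two cross-linking domains could be flagged for sitting in the same C-class block or within a few addresses of each other, plus the back-of-envelope odds mentioned above.

      import ipaddress

      def same_c_class(ip_a, ip_b):
          """True if both addresses fall in the same /24 ("C-class") block."""
          block = ipaddress.ip_network(ip_a + "/24", strict=False)
          return ipaddress.ip_address(ip_b) in block

      def within_n_addresses(ip_a, ip_b, n=5):
          """True if the two addresses are within n of each other numerically."""
          return abs(int(ipaddress.ip_address(ip_a)) - int(ipaddress.ip_address(ip_b))) <= n

      # Made-up example: two cross-linking domains hosted suspiciously close together.
      print(same_c_class("208.109.14.2", "208.109.14.250"))      # True
      print(within_n_addresses("208.109.14.2", "208.109.14.5"))  # True

      # Back-of-envelope odds from the whois bullet: with 256^4 (about 4.3 billion)
      # possible IPv4 addresses, two unrelated servers landing within 5 addresses of
      # each other purely by chance is on the order of 11 / 256^4, roughly 2.6e-9.
      print(11 / 256 ** 4)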

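    Second, link velocity. Here’s a rough sketch, again plain Python with made-up dates (no particular backlink tool implied), that buckets when links were first seen by month so you can eyeball whether the curve looks like a natural bell or a sudden spike.

      from collections import Counter
      from datetime import date

      def monthly_link_velocity(first_seen_dates):
          """Count how many new links first appeared in each (year, month) bucket."""
          buckets = Counter((d.year, d.month) for d in first_seen_dates)
          return dict(sorted(buckets.items()))

      # Made-up link profile: ramps up and then tapers off, bell-curve style.
      dates = ([date(2008, 3, 15)] * 2 + [date(2008, 4, 10)] * 6 +
               [date(2008, 5, 20)] * 14 + [date(2008, 6, 5)] * 9 +
               [date(2008, 7, 1)] * 4)
      for month, count in monthly_link_velocity(dates).items():
          print(month, "#" * count)   # crude text histogram of link velocity
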
    This is by no means a complete list. But it’s a good starting place. Once again, I’ll say that not all of these appear to be in use now. These are just some things I mentally check through whenever I see a major flux in the search results. It’s far from an exact science.
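
    One last sketch, for the keyword-variance bullet mentioned above: a quick way to measure how concentrated a link profile’s anchor text is (plain Python again, with made-up anchors). A profile where nearly every link carries identical anchor text is the kind of thing the engines can spot trivially.

      from collections import Counter

      def anchor_text_profile(anchors):
          """Summarize how concentrated a link profile's anchor text is."""
          counts = Counter(a.strip().lower() for a in anchors)
          total = sum(counts.values())
          top_anchor, top_count = counts.most_common(1)[0]
          return {
              "total_links": total,
              "unique_anchors": len(counts),
              "top_anchor": top_anchor,
              "top_share": top_count / total,  # 1.0 means every link uses identical text
          }

      # Made-up profile: 3 of 5 links share the exact same anchor.
      print(anchor_text_profile(
          ["cheap widgets", "cheap widgets", "cheap widgets", "example.com", "click here"]
      ))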


    13 Responses to “Examining a Search Ranking Fluctuation”

    1. Vacation Rentals says:

      I thought that, since Google is a registrar, they had access to private whois information?

      I remember around 2006 Matt Cutts said it wasn’t using whois as part of the algorithm, but you’d think they would be using private whois info by now.

    2. admin says:

      Well, all whois privacy (as far as I can tell) is a fake corporate entity or domain used to shield the real owner. I see no way for that to be on the backend. Beyond that, it would mean that whois privacy from a legal standpoint would be void; if the US laws were too restrictive to get the privacy lifted for whatever reason, someone could just go to any other country that has a registrar. If anyone has more concrete information on this, feel free to chime in.

    3. Trevor Nash-Keller says:

      I have heard mixed opinions on this matter and have always wondered what the deal REALLY is…

    4. Terry says:

      Now that is a great post. I’ve always thought about the algorithms detecting CMS installations and that having an effect on rankings. And IPs! That just scares me right now.

    5. Traffic2MyPage.com says:

      I have to comment on my own site… your last point about internal areas of emphasis… my site moved from nowhere in the rankings to number 30 with adjustments to these elements for a decently competitive keyword.

    6. pligg.com says:

      Examining a Search Ranking Fluctuation : Slightly Shady SEO…

      Examining a Search Ranking Fluctuation…

    7. Jaan Kanellis says:

      Recently had to break this down for a client as well:

      Rankings are in everflux more than ever…hey, that sounds weird. Meaning the rankings constantly seem to be moving around for many 2-3 keyword phrases. So when these reports run, FD could be ranking #3 for “police sunglasses” and then when you check Google live we are ranked #9. There are many reasons for that:

      a. Google uses tens of thousands of datacenters. At any time you could be accessing a different one because of load balancing. So what you check today could be different from tomorrow. What your mom checks in California and what you check in Texas can also be different because you are each using different datacenters. Google is also known to test different algorithms on different datacenters at any time during the month. This makes the change even harder to pin down because you never know when or why they are doing it.

      b. Geo-location also makes a difference. Even if your mom in California is using the same datacenter as you in Texas, you can see different results because Google is trying to further “personalize” the search results to your location. They feel this is best for queries that are possibly affected by localization. Take searching for “plumbers” for instance. If you search for a plumber in California you probably don’t want to see plumbers from Texas in your SERPs. Google is becoming more transparent about this of late, testing a new feature in the SERPs that you can read about here: http://searchengineland.com/080730-163351.php

      c. Google Personalization can affect your SERPs when you are logged into your Google account. They can take any number of demographic items from your personal data to make the SERPs more customized to who you are.

      d. Lastly, Google Universal Search has probably changed the SERP landscape the most. With the addition of Related Searches, Google News, Images, Video, and Blog Posts in the SERP landscape, one would wonder how much room is left for those 10 spots per page.

    8. SEO Hosting Blog » Blog Archive » Improve WordPress Search - SEO Jobs Declined - Full Review On Search Engine Ranking says:

      [...] 4th, 2008, site author of Slightly Shady SEO, releases an article titled, Examining a Search Ranking Fluctuation. This article hit off very well among the Search Engine Community as it provided a very in depth [...]

    9. Gab Goldenberg says:

      Appreciate the link X – though when I saw the referrer, and read the first line, thought it might be to my latest bit of research. A site I own, plain html, got indexed with no links or submission. It was temporarily not on private whois and the email was one associated with other trusted sites I own. I’m betting that’s how it got indexed.


    10. Gab Goldenberg says:

      Oh, and temporal link analysis is being used, as shown by Branko, aka neyne, aka SEO Scientist: http://www.seo-scientist.com/unconventional-link-attributes.html

    11. The most interesting SEO articles of this week | echthelder.nl says:

      [...] At position 5 today, at position 2 tomorrow: fluctuations in search results. Search engine algorithms are changed fairly regularly. Slightlyshadyseo gives a detailed look into the possible reasons why search results fluctuate. An interesting look into the possible factors in the algorithms. slightlyshadyseo.com/index.php/examining-a-search-ranking-fluctuation/ [...]

    12. Sphinn Weekly - Week 5 | The Sphinn Blog says:

      [...] 5th August – Examining a Search Ranking Fluctuation (vangogh) XMCP list a number of factors that Google most likely considers when ranking a page/site or detecting spam. Direct Link: SlightlyShadySEO [...]

    13. BigSten says:

      +1 :)
