  • Affiliate Marketing, the Economy, and Maslow

    February 22nd, 2009 | admin

    Hey everyone. It’s been a while. I’ve typed out a few entries lately and ended up rejecting most of them for one reason or another. But here’s one for old time’s sake (no, the blog isn’t dead, I’m just busy). This entry is about the economy, psychology, and how to squeeze some conversions out of it all.

    The Economy & Hope

    Yeah, everyone knows the economy sucks. That’s not what this is about. It’s about the economy’s effect on the consumer mindset. A lot of people are broke, and those who aren’t broke aren’t spending. Except on hope. Hope turned out to be more than a presidential campaign motto, in my opinion. Not in the political sense, but in that it was the perfect motto for the American consumer/voter’s mindset.
    Sure enough, hope is what’s converting. I’d like to nominate it for product of the year, actually. It’s not the “hope for a perfect golf swing” variety, but rather “hope for money,” “hope for love,” etc. So how do we figure out what people are going to be buying?

    Affiliate Marketing, now with Excellent Maslow Goodness
    Alright. Really quickly, for those who don’t know: Maslow was a psychologist who created something called the hierarchy of needs. It’s basically a classification system for human needs and motivations, ranked by importance. They’re shown in the pyramid below.

    Maslow’s Hierarchy of Needs
    Ok. So the base holds the most basic needs. Keep in mind that sitting at the bottom of the pyramid does not necessarily create an increase in demand; it just means demand there should be more stable than for things toward the top in the current economy.

    Level 1 – Physiological
    Since it’s a bit tricky to sell water or food online (though it is done), we’ll rule those two out.
    “Homeostasis” is a person’s ability to regulate their internal systems, so for our purposes that means medication. Unfortunately, most of the health products available to affiliates (the ones that won’t earn their merchant a lovely phone call from the FTC) aren’t really built around homeostasis; they’re for external health.
    I could pretend all the “colon” weight loss offers counted as excretion, but really, the motivation behind those products was not the desire to poop.

    As for “sex,” there are a few obvious industries that come to mind (adult, dating, etc.). However, my reading of the pyramid is that they mean the ability and opportunity to have sex, so fleshlights and other supplemental products may not be included.

    Level 2 – Safety (The Important One)
    This is the one to pay attention to. It’s where a lot of the American public is mentally. The “new things” and excess of before were driven by consumers tying their view of their own success (esteem) to what they could, in theory, afford. Now it’s not about that. It’s about keeping their job, house, family, etc. So the products that convert are going to be within those basic areas, and aimed at those basic goals.

    Think about what the term “safety” implies. It’s preservation of the old, not creation of the new. Ordinarily that would be a problem, except that people also panic when they feel their safety (in any respect) is threatened. Ever get the feeling that you need to do something, but not know what to do?
    A lot of people feel like that right now. Give them a potential way out and they’ll likely take it.

    For example, let’s take a look at how online colleges are doing right now.

    Using Quantcast Data
    Phoenix.edu – In December (and as far back as the graph shows), the site was at around 2 million uniques per month. In January it was edging up on 4 million.
    Kaplan.edu – Previously averaged around 180,000 users per month. Last month it was estimated at around 474,000 users per month.

    Using Compete.com Data
    Kaplan.edu – Up 61% for the year, 128% for the month.
    Phoenix.edu – Up 184.6% for the year, 48.6% for the month.

    Some More Random Thoughts
    The economic conditions create a few more things that play pretty wildly with the marketplace. Whenever a big company goes out of business, look into the conditions under which it went out. Some companies have recurring customers who all become fair game. Others, like Circuit City, flood the marketplace with cheap goods as they liquidate their stock. Loads of fun. Either way, if you see one dive, don’t think “that niche is dead”; think “some company is going to benefit a lot from their old customers.”

    Oh yeah, and sorry if people have written similar entries lately. I haven’t had much time to read, so I wouldn’t know.


    The Lazy Man’s Link Spamming Program

    January 9th, 2009 | admin

    Ok, so lately I’ve been working on a lot of internal tools I can’t really talk about here (or they’d become useless), so I thought I’d talk about an old internal tool that I haven’t used in a while, one that served me well for getting things indexed quickly. Yes, it’s profiting off other blackhats, but hey. Most would do the same in a similar situation.
    Edit: Keep in mind this article is discussing HTTP proxies, NOT cgi proxies/myspace proxies. Also keep in mind most server hosts will not be ok with this, so find one that is if you want to do it. Sorry, I can’t recommend any.

    The Basics
    A large percentage of the people using link spammers like XRumer rely on lists of open proxies to spam forums/guestbooks/whatever. Meanwhile, I have a couple of IPs I don’t mind getting listed on Akismet, sitting on hosts that don’t care about link spam.
    So what we’re going to do is build a list of places to link spam, and actually drop our links along the way, without ever scraping footprints.

    A Quick Understanding of HTTP Proxies
    The protocol for talking to an HTTP proxy is very, very similar to a normal HTTP request. Essentially, the difference is that the full URL, domain included, appears in the request line. So it’s relatively easy to code an HTTP proxy: all you’re doing is opening a port, reading which domain and page the client wants, fetching it, and sending the response back. Not hard at all, right?
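    To make that concrete, here’s a minimal sketch in Python of the core of such a proxy: parsing a proxy-style request line and relaying the request to the origin server. The function names are my own, and there’s no threading, error handling, or link rewriting here; it’s an illustration of the protocol difference, not a finished tool.

```python
import socket
from urllib.parse import urlsplit

def parse_proxy_request(request_line):
    """Parse the first line of a proxy-style HTTP request.

    A normal request line is "GET /page HTTP/1.1"; a proxy request
    carries the full URL: "GET http://example.com/page HTTP/1.1".
    Returns (method, host, port, path).
    """
    method, url, _version = request_line.split(" ", 2)
    parts = urlsplit(url)
    return method, parts.hostname, parts.port or 80, parts.path or "/"

def forward(request_bytes):
    """Connect to the origin server, relay the request, and return
    whatever the server sends back."""
    first_line, rest = request_bytes.split(b"\r\n", 1)
    method, host, port, path = parse_proxy_request(first_line.decode())
    with socket.create_connection((host, port)) as upstream:
        # Rewrite the request line to the path-only form the origin
        # server expects, then relay the remaining headers/body as-is.
        upstream.sendall(f"{method} {path} HTTP/1.1\r\n".encode() + rest)
        chunks = []
        while True:
            data = upstream.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

    The listening side is just an accept loop that reads a request from the client socket, runs it through `forward`, and writes the result back.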

    The Software Modification
    Make a slight change to the software: when it reads the request from the client IP, have it parse out the GET and POST data. You’re looking for URLs, so that you can substitute your own URL into the post/get data, on the assumption that a request carrying a URL is posting a link somewhere. Here’s what you’re looking for:

    • Fields that start with http:// and have no spaces
      These are generally text fields: for example, someone signing up at a forum through the proxy and setting their profile website to “http://www.TheirLinkSpamDomain.com”. You swap in your own URL instead.
    • Fields with HTML and a <a href
      These are a bit trickier to parse properly, since you have to not only remove their link, but change the anchor text to reflect your own.
    • Fields with a URL in a GET variable
      A lot of these are dynamic output, so whatever’s in the get variable will get linked to on the page.
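    The three cases above can be sketched roughly like this. The domain and anchor text are hypothetical placeholders, and the regexes are deliberately simplified; real form data is messier.

```python
import re

MY_URL = "http://www.example-mydomain.com"   # placeholder substitute domain
MY_ANCHOR = "my anchor text"                 # placeholder anchor text

# Bare URLs in text fields or GET variables (cases 1 and 3).
URL_RE = re.compile(r"https?://[^\s&\"'<>]+")

# HTML anchors (case 2): both the href and the anchor text must change.
ANCHOR_RE = re.compile(
    r"""<a\s[^>]*href=["']?(https?://[^"'\s>]+)["']?[^>]*>(.*?)</a>""",
    re.IGNORECASE | re.DOTALL,
)

def rewrite_field(value):
    """Substitute our URL (and anchor text) into a single form field."""
    if ANCHOR_RE.search(value):
        return ANCHOR_RE.sub(f'<a href="{MY_URL}">{MY_ANCHOR}</a>', value)
    return URL_RE.sub(MY_URL, value)
```

    The proxy would run every GET variable and every POST field through `rewrite_field` before relaying the request upstream.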

    So a query that may have initially been

    POST http://www.targetforum.com/register2.php HTTP/1.1
    (junk header info here)
    username=whoever&homepage=http://www.TheirLinkSpamDomain.com

    Ends up getting modified to instead be

    POST http://www.targetforum.com/register2.php HTTP/1.1
    (junk header info here)
    username=whoever&homepage=http://www.YourDomain.com

    Get it? You’re not solving captchas, and you’re not scraping places to post. You’re just altering the data everyone else’s software hands you.

    So How Do Other People Find my Fake Proxy?
    Search Google for “online proxy checker”. You’re looking for sites like http://www.checker.freeproxy.ru/checker/ that check a proxy online and report whether it’s working. The companies behind these checkers use them to gather up proxies, which are then given away to some visitors and sold to paying customers. Submit to several of these sites, and make sure you pass validation. Over time, other proxy sites will scrape the lists you’ve been added to (and some run web scanners that will find you naturally), and people will begin to use your proxy. And you can switch the domain of every link anyone tries to submit through you.

    The End Result
    Eventually your server will max out at whatever level you allow. A dedicated server should easily handle a sizable number of simultaneous connections. Just let it run for a few days at a time. By the end, not only will your links be all over the net from the substitutions, but you’ll also have a sizable list of places to link spam yourself.

    Benefits and Disadvantages to this Method
    While the benefits are completely passive link dropping, a link spamming list that builds itself, and some pretty killer indexing times, it’s not a perfect method. First off, you lose control of where you’re dropping links, which means a lot of the links are going to be horrible. In addition, it eats an arseload of bandwidth, and it’s a bit tricky to keep the security angle tightened up. It’s not appropriate for most mainstream sites (especially since you lose control of the anchor text pretty frequently), but it does quite well on junk autogenned sites.

    Validation Precautions
    Watch the first several requests (from the proxy list services) and the first few XRumer proxy checks to make sure you’re validating as a proxy correctly. If a service tests using a specific link, you may want to whitelist it as a request that won’t be modified. For example, a lot of XRumer requests involve the string “proxyc” in the URL.
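    That whitelist check can be sketched as follows. Only the “proxyc” marker comes from the text above; the other strings and the function name are hypothetical placeholders you would replace with whatever you observe in real checker traffic.

```python
# Marker strings seen in proxy-checker test requests; requests containing
# them are passed through untouched so the proxy validates correctly.
# "proxyc" is from XRumer checks; the others are illustrative guesses.
PASSTHROUGH_MARKERS = ("proxyc", "proxycheck", "checker")

def should_modify(url):
    """Return True only for requests that don't look like a
    proxy-checker validation probe."""
    lowered = url.lower()
    return not any(marker in lowered for marker in PASSTHROUGH_MARKERS)
```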

    Security Precautions
    This is obviously a big security risk if it’s not handled properly. Record every domain accessed through your proxy, and keep a separate list of the ones where you were told to post. Over time, start disallowing certain domains.
    The first things to block are sites like Yahoo Mail and Gmail. There’s something called an internal mailer that e-mail spammers use to push mail out through webmail services. You don’t want to be the IP the webmail provider sees as the spammer, so disallow these sites early.
    The other thing you’re looking for is any type of e-commerce site, to make sure people don’t try to use fraudulent credit cards and whatnot. It’s a good idea to build up a keyword blacklist as well, so you can disconnect IPs that request pages with certain content (credit card numbers, etc.). For what it’s worth, I’ve never actually seen anyone try this on a proxy I’ve run; the closest was some ticket scalper automating Ticketmaster.
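    A rough sketch of those two blacklists follows. The domain list and the card-number pattern are illustrative placeholders, not a complete rule set.

```python
import re

# Domains refused outright (webmail services: you don't want to be
# the IP a mail provider sees pushing spam).
BLOCKED_DOMAINS = {"mail.google.com", "mail.yahoo.com", "hotmail.com"}

# Content patterns that trigger a disconnect; this one is a crude
# match for anything resembling a 13-16 digit card number.
BLOCKED_PATTERNS = [re.compile(r"\b(?:\d[ -]?){13,16}\b")]

def allow_domain(host):
    """True if the requested host is not on the domain blacklist."""
    return host.lower() not in BLOCKED_DOMAINS

def allow_content(body):
    """True if the request body trips none of the content patterns."""
    return not any(p.search(body) for p in BLOCKED_PATTERNS)
```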

    Also, many of the security issues can be solved by simply not allowing HTTPS connections. If you restrict the proxy to port 80, standard HTTP, most of the sites that pose a security risk are unusable anyway, so you don’t have to worry.

    Hope yall liked it,

    Google’s User Data Empire

    November 24th, 2008 | admin

    I’ve been holding off on doing this entry for a while, but with the introduction of SearchWiki, their aims are so clear to me that I just can’t hold off anymore. Google’s problems over the past two years have been the result of an algorithm overly based on links. They’ve finally hit their wall. With the latest batch of link buying platforms, their options for truly detecting it are dying out. One can call Google many things, but ignorant of the marketplace and of SEOs is not one of them. So they needed a response. Their response? User data. Lots of fucking user data.
    I know I’ve covered a similar topic before (how Google is essentially creating its own internet), but I wanted to do one specifically on user data.

    The Basic Layout of the Google User Data Empire

    • Google AdSense – AdSense has the unique ability to track without fear of repercussion. Why? Because any data it sends back can be used and archived in Google’s eternal battle against click fraud. This means it transmits everything from screen resolution to Flash availability/version (things that arguably have nothing to do with click fraud). Either way, it’s a window into millions and millions of hits on the internet daily. It’s targeted towards informational sites, though, not commercial sites (Google’s true interest).
    • Google Analytics – This is Google’s window into non-informational sites. It tracks an absolutely obscene amount of user data (actually, more than you can see or use in the Analytics panel). Without it, they’d have no window into sales-based sites, which would be handing traffic to the competition if they ran AdSense. Webmasters flock to this tool without realizing the danger of feeding Google all that information. Here’s a hint: it tracks conversion rates. Google currently takes anywhere from 2-5x the AdSense revenue it gives the website owner, which means that if you do PPC, you’re more or less at their mercy on how much you pay per click. Knowing how much you make per click via conversion tracking could, in theory, let them adjust your PPC expenses upward while keeping you just barely profitable. But once again, the real gold here is the ability to track the users.
    • Google Chrome – Chrome is an interesting creation. Google is a public company, which means they cannot create something like Chrome without a significant financial reason. The trick is that they’re already propping up Firefox with $59.5-70 million a year in donations (85% of Firefox’s revenue) to remain its default search engine. $70 million is jack shit to Google, so they definitely wouldn’t create Chrome simply to save on that, and they already get the ad revenue from Firefox searches, so that doesn’t make sense either. So why would they create Chrome?
      • Unique Identifier – Chrome generates a unique ID whether or not you agree to send your data to Google. If you agree, this ID gets transmitted. So what does that do? It lets them identify you regardless of where your computer is, and regardless of cookies. It’s truly the perfect information gatherer.
      • [Partially] Closed Source – I’m no open source junkie, but let’s not kid ourselves. The one primary difference between Firefox and Chrome is that Chrome is closed source. It’s based on Chromium, a BSD-licensed piece of software. The BSD license means you don’t have to open source your modifications to the code (unlike the GPL). So one has to run a sniffer to see the data Chrome sends out; you can’t simply read the source code. While initial versions don’t send out an excessive amount of data, I’m willing to bet user adoption will change that.
      • Typing Tracking – I just sniffed a Chrome request (opted in to transmit data). The page I visited was completely blank except for a fake 404 error. Magically, it created two requests to Google. One was a “Google Suggest” style query (which means yes, Google Suggest is used for tracking). The other was a curious query: it transmitted events (with generic names, so I don’t know what each stood for), a unique ID, and, interestingly enough, a variable called “rep”, presumably implying a user reputation level. A single typed-in domain created three of these “events”. I wonder what they are.
    • Google Checkout – One of a few ways Google is moving to identify real people. That is to say, it’s a way to tie an IP and a cookie/username to a real, 100% legit name. This is worth more than most could ever imagine. Not only is that person identified as someone with a credit card, but the billing address itself gives you the region the person is from and a probable demographic. Also used to tie back to a real identity is the much-debated Google Health, which can store an individual’s medical information.
    • Google Toolbar – Fantastic for identifying webmasters, the Google Toolbar is among their most powerful methods of gathering user data. How long do you think it will be before they turn users into unknowing cloaking checkers (click search results, omgz this PageRank request isn’t for the right domain)? Every single webpage you access, private or not, gets sent to Google for its PageRank check.
    • Google Android – The one set of data they couldn’t properly access before: phone habits. Note how aggressively they’ve pursued the cell phone market (iPhone, anyone?).
    • SearchWiki – Google’s latest addition, which lets you reorganize the search results. They say the data is used only for the user who changes it. Fun fact? That makes no sense. Google already has Bookmarks, and if you’re logged in and click “Web History” (and are opted in), it will show you the searches you’ve made and the results you’ve clicked. So there is absolutely no reason for SearchWiki to exist other than to alter search results and, more importantly, to gauge users’ reactions to commercial vs. informational sites.
    • Other Obvious Sources – Gmail(your contacts, your interests), the actual search results, and many more.

    Google justifies all of this with the idea that plenty of other companies have been gathering this data for some time. But there’s a difference. Those companies only had data from one source at a time. For Google, it’s different. Their specialty is organizing information. They have access to more avenues for user data than any other company in the history of the world, and the ability to connect every aspect of every person’s life. Log into Gmail on Android? Congrats, your phone number can now be tied to your home IP. Don’t search using Google? Between AdSense and Analytics, you’ve probably got a 35-50% chance of sending data to Google anyway with every page load. Did you buy something through an ad served by Google? With conversion tracking, they know you bought, and can tie that back to everything else.

    Why I’m Scared as a User
    I’m really beginning to get scared here. Even ignoring Google’s less-than-benevolent intentions, can anyone imagine a data breach? No company is truly secure. Four years ago, the entire member database of the largest porn network on the planet was available (passwords included) for a grand: over 500,000 records. There have been data breaches at pharmaceutical companies, leaking millions of customer records, down to the pill each person took and when the prescription was up. Government servers get compromised; credit bureaus get compromised. So why would Google be any different?

    Why I’m Scared as a Webmaster
    Google has an interesting issue: they have more user data than they can allow AdWords advertisers to target. This is an absolutely insane amount of information. So they’re left with three options.

    1. Enter the CPA Market – With their Google Affiliate Network, this seems like a likely path. Imagine a massive in-house program that can get clicks for dirt cheap (remember, Google takes a HUGE cut of AdSense revenue; surrendering that cut, they can afford conversion rates that would make normal PPCers cringe).
    2. Not Use the Data – Google is a publicly traded company. Their responsibility is to stockholders. So regardless of how warm and fuzzy they act toward the internet community at large, this option is not viable. Their privacy policies contradict the filth they spew at the consumer about how the data will and won’t be used. And guess which one is legally the reality? The privacy policy. They’re using the data, folks.
    3. Take Control from Advertisers – They can’t let me target based on all the data they have, so the alternative is to make the decisions for me, based on what they think is best. Well, sort of. Remember that Google automatically optimizes not for your conversions, but for click-through and profit on their end.

    I don’t understand how prominent geeks, normally so paranoid about spyware, can ignore Google. They function on a higher level than any spyware company in history, and do it all while winking at the webmaster community and acting like they’ll look out for us. “Don’t Be Evil” is the motto of a private company, not a public one. It’s the antithesis of the free market economy. What is good for the consumer is not good for the company, and that is especially true for an advertising company with access to this much data.

    Until next time,

    PS: Edited the entry to indicate that Chrome is partially closed source, though the open source aspects are, for the most part, Chromium. To clarify, here’s a line from Chrome’s TOS: “10.2 You may not (and you may not permit anyone else to) copy, modify, create a derivative work of, reverse engineer, decompile or otherwise attempt to extract the source code of the Software or any part thereof, unless this is expressly permitted or required by law, or unless you have been specifically told that you may do so by Google, in writing.”
