I’ve not been posting for some time now, mostly due to work and some personal issues. However, I do monitor HTNet constantly, and I’ve noticed that my installation of Spam Karma 2 has been working overtime, catching comment and trackback spam averaging over a hundred hits per day.
The positive aspect of this is that HTNet is presumably becoming more popular, hence the relentless efforts to spam it. The negative part, of course, is that all of this is a waste of computing resources.
Irresponsible attempts to get cheap publicity via automated spamming tools have proven to be mostly ineffective.
rel=&#8220;nofollow&#8221; attributes on links have pretty much stopped many spam web sites from receiving search engine referral &#8220;credit&#8221; (the most famous example being Google’s PageRank algorithm). However, the links are still human readable, meaning that people can be misled into visiting hostile web sites (irresponsible advertising and virus launchpads come to mind almost immediately).
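To illustrate the idea, here’s a minimal sketch of how a blog engine might neutralize links in user-submitted comments by stamping rel="nofollow" onto every anchor tag. This is just an illustration using a simple regex; a real implementation (like WordPress’s own filter) would use a proper HTML parser and check for an existing rel attribute first.

```python
import re

# Match the opening of any <a> tag in submitted comment HTML.
ANCHOR = re.compile(r'<a\s', re.IGNORECASE)

def add_nofollow(comment_html):
    """Rewrite every <a> tag to carry rel="nofollow" so search
    engines give the linked site no ranking credit."""
    return ANCHOR.sub('<a rel="nofollow" ', comment_html)

print(add_nofollow('<a href="http://spam.example/">cheap pills</a>'))
# The link still works for humans, but crawlers are told to ignore it.
```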
Thankfully, tools like SK2 and Akismet make keeping a spam-free blog a more feasible task. There are also solutions for more traditional email spam, like SpamAssassin and various implementations of Bayesian filters. However, all this obscures the real question: how was spam possible in the first place?
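For the curious, the Bayesian filtering technique behind tools like SpamAssassin’s Bayes module can be sketched in a few lines. This is a toy naive Bayes classifier with Laplace smoothing, not how any of the tools above actually implement it; the corpus and tokenizer here are deliberately simplistic.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs.
    Returns per-class word counts and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes with Laplace smoothing: P(spam | text)."""
    vocab = set(counts[True]) | set(counts[False])
    spam_words = sum(counts[True].values())
    ham_words = sum(counts[False].values())
    # Start from the (smoothed) prior odds of spam vs. ham.
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (spam_words + len(vocab))
        p_ham = (counts[False][word] + 1) / (ham_words + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

# Tiny hypothetical training corpus, for illustration only.
corpus = [
    ("buy cheap pills now", True),
    ("cheap pills online offer", True),
    ("meeting notes attached please review", False),
    ("lunch tomorrow with the team", False),
]
counts, totals = train(corpus)
print(spam_probability("cheap pills", counts, totals))      # well above 0.5
print(spam_probability("meeting tomorrow", counts, totals)) # well below 0.5
```

The real filters work the same way in spirit: words that appear disproportionately in spam push the score up, words typical of legitimate mail push it down, and the filter keeps learning as you flag messages.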
Like many communication mediums, blog comments, trackbacks and email were created to solve a problem. Coming up with a solution requires a balancing act of weighing costs against benefits, and most of the time the benefits overshadow the costs. Those costs are often seen as an affirmation of the old adage that &#8220;there’s no such thing as a free lunch&#8221;.
I’ll be the first to admit that I probably don’t have the brains to propose a new RFC that would be the endgame for spam. My comments here are just food for thought, and I appreciate any (non-spam :P) comments from my astute readers on this issue.