BLOG.PE-ELL.NET - Useless rambling...
:: 2010-01-28 20:01:59 ::

Quick and dirty benchmark...

So I decided to do a quick benchmark since I'm still probing the "NoSQL" projects looking for potentially useful stuff.  Now these results didn't really surprise me since I've been messing with these systems for a while.  But this is more for potential feedback and just so I don't forget.  Obviously the test is done in Perl and I force a new connection for every loop (closest to our production setup).

For those that are unfamiliar:

Memcached:  Memory-only key/value storage with TTL
MongoDB:    Memory/disk/replicated document storage that understands hashes/arrays (you can index anything), is completely binary safe and completely schema-less, and has a built-in filesystem
Redis:      Memory/disk/replicated key/value, list, set, and ordered set storage with optional TTL (much lighter protocol than Mongo)

MongoDB tends to offload work to the client in order to save server resources (sorting, etc. all happen client side).

CPU usage of server during run (via top):

    memcached    5-6%
    mongodb      4-7%
    redis        9-10%

[jstephens@ii52-27 (19:54:14) ~/test]$ perl
                 Rate     mongo     redis memcached
mongo           873/s        --       -3%      -16%
redis           901/s        3%        --      -14%
memcached      1044/s       20%       16%        --

    Redis server: 1.1.93-beta
    Client: Redis 0.0801

    MongoDB server: 1.1.4
    Client: MongoDB 0.26 (patched)

    Memcached server: 1.4.1
    Client: Cache::Memcached::Fast 0.14

Test script:


use strict;
use Cache::Memcached::Fast;
use Redis;
use MongoDB;
use Benchmark qw/ cmpthese /;

cmpthese(10000, {
    memcached => sub {
            my $m = new Cache::Memcached::Fast { servers => [ '' ], connect_timeout => 2, io_timeout => 1, max_failures => 3, failure_timeout => 2, };
            return "No memcached obj!" unless ref($m);

            my $p = gen_payload();
            $m->set( 'tkey', $p, 15 );
            my $v = $m->get('tkey');

            undef $m;
    },
    redis     => sub {
            my $r = Redis->new( server => '' );
            return "No redis server!" unless $r->ping();

            my $p = gen_payload();
            $r->set( 'tkey' => $p );
            $r->expire( 'tkey', 15 );
            my $v = $r->get('tkey');

            undef $r;
    },
    mongo     => sub {
            my $MONGO_CONN = MongoDB::Connection->new(host => "");
            my $MONGO_DB   = $MONGO_CONN->get_database("common");
            my $MONGO_COLL = $MONGO_DB->get_collection("common");
            return "WE SUCK!" unless ref($MONGO_COLL);

            my $p = gen_payload();

            $MONGO_COLL->update({_id => 'tkey'}, { _id => 'tkey', txt => $p }, {"upsert" => 1});
            my $v = $MONGO_COLL->find_one({_id => 'tkey'});

            undef $MONGO_COLL;
            undef $MONGO_DB;
            undef $MONGO_CONN;
    },
});

sub gen_payload {
    my $length = 2000;
    my $rand_length = 1;

    my @string;
    my @chars = ('a'..'f');
    my $possible_chars = scalar(@chars);
    push(@string, $chars[int(rand($possible_chars))]);

    my $end = $rand_length == 1 ? int(rand($length)) : $length;
    $end = 2 if $end <= 1;

    push @chars,(0..9,'/','+');
    $possible_chars = scalar(@chars);
    push(@string, $chars[int(rand($possible_chars))]) for(2..$end);

    return join('', @string);
}

:: 2010-01-02 16:00:48 ::

Asus O!Play...

This thing sucks.  I've tried so many times to get it to work correctly.  It can't seem to figure out an HDMI signal through an HDMI switch or through my receiver and only works when plugged directly into the projector.  Which is useless.  I finally got it to work at 1080i when hooked up to the bedroom LCD TV.  But then it can't connect to my workstation to stream videos/music.

So while out looking for a network bridge I picked up a Datage HD Media Player which, once hooked up, worked on the first try.  On all accounts.  And it even includes an HDMI cable (a decent one too) compared to the O!Play, which only includes some cheap component cables.

Don't buy it unless you hate yourself.

:: 2009-12-16 12:24:23 ::

Gearman testing

So basically I like Gearman and would like to use it.  I can think of a few things I could throw its way.  But I've discovered that with the current build I lose data along the way.  I usually still get a request on the worker (but not always), but it will have lost the arguments along the way.  Test files:

Make and install Gearman 1.10 and JSON::XS from CPAN then follow these simple steps:

sudo su
tar -xzvf gearman_test.tar.gz
cd gearman_test
perl [--json] [--storable]

sudo su
tar -xzvf gearman_test.tar.gz
cd gearman_test
perl --server="ip:port" [--json] [--storable] [--loops=x]  (the port is defaulted if you don't provide one)

Obviously make sure that your json/storable options match between the worker and the benchmark tool.  If you provide neither then it will skip serialization and just use a plain scalar.  The default for loops is 100000.
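The --json/--storable switches just pick how the job arguments get serialized before going over the wire.  The actual test files aren't shown here, so this is only a sketch of what each option amounts to (JSON::PP stands in for JSON::XS so it runs with core modules only):

```perl
use strict;
use warnings;
use JSON::PP;                    # core stand-in; the real test uses JSON::XS
use Storable qw(freeze thaw);    # core binary serializer

# The payload the benchmark tool would submit as the job argument.
my $args = { task => 'echo', seq => 42 };

# --json: portable text blob
my $json_blob = JSON::PP->new->canonical->encode($args);

# --storable: Perl-native binary blob
my $storable_blob = freeze($args);

# Neither flag: a plain scalar goes over the wire untouched.
my $plain_blob = 'just a string';

# The worker side reverses whichever serializer was agreed on.
my $from_json     = JSON::PP->new->decode($json_blob);
my $from_storable = thaw($storable_blob);
```

Either way, the serializer only changes the shape of the argument blob; the lost-arguments bug shows up regardless of which one is picked.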

If I do a localhost test I normally don't see problems.  It only happens when going across external interfaces, which implies maybe a latency issue or something.  None of the many other network-related tools we have running show any issues with our network.  And the number of faults will vary, but you seem to have to run it continuously for a minute or more to see it.  Example:

[jstephens@host (12:16:03) ~/gearman_test]$ perl --json --loops=10 --ip=xx.xx.xx.xx
10 loops of other code took: 0 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)
With 10 requests and  bad requests
[jstephens@host (12:16:11) ~/gearman_test]$ perl --json --loops=10000 --ip=xx.xx.xx.xx
10000 loops of other code took:21 wallclock secs ( 4.67 usr +  1.19 sys =  5.86 CPU) @ 1706.48/s (n=10000)
With 10000 requests and  bad requests
[jstephens@host (12:16:49) ~/gearman_test]$ perl --json --loops=100000 --ip=xx.xx.xx.xx
100000 loops of other code took:211 wallclock secs (46.41 usr + 14.81 sys = 61.22 CPU) @ 1633.45/s (n=100000)
With 99207 requests and 793 bad requests

[jstephens@host (12:20:26) ~/gearman_test]$ ping xx.xx.xx.xx
PING xx.xx.xx.xx (xx.xx.xx.xx) 56(84) bytes of data.
64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=61 time=0.178 ms
64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=61 time=0.199 ms
64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=61 time=0.159 ms
64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=61 time=0.167 ms
64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=61 time=0.176 ms
64 bytes from xx.xx.xx.xx: icmp_seq=6 ttl=61 time=0.196 ms
64 bytes from xx.xx.xx.xx: icmp_seq=7 ttl=61 time=0.153 ms
64 bytes from xx.xx.xx.xx: icmp_seq=8 ttl=61 time=0.170 ms
64 bytes from xx.xx.xx.xx: icmp_seq=9 ttl=61 time=0.183 ms
64 bytes from xx.xx.xx.xx: icmp_seq=10 ttl=61 time=0.177 ms
64 bytes from xx.xx.xx.xx: icmp_seq=11 ttl=61 time=0.189 ms
64 bytes from xx.xx.xx.xx: icmp_seq=12 ttl=61 time=0.159 ms
64 bytes from xx.xx.xx.xx: icmp_seq=13 ttl=61 time=0.175 ms
64 bytes from xx.xx.xx.xx: icmp_seq=14 ttl=61 time=0.185 ms
64 bytes from xx.xx.xx.xx: icmp_seq=15 ttl=61 time=0.188 ms
64 bytes from xx.xx.xx.xx: icmp_seq=16 ttl=61 time=0.208 ms
64 bytes from xx.xx.xx.xx: icmp_seq=17 ttl=61 time=0.172 ms
64 bytes from xx.xx.xx.xx: icmp_seq=18 ttl=61 time=0.179 ms
64 bytes from xx.xx.xx.xx: icmp_seq=19 ttl=61 time=0.369 ms
64 bytes from xx.xx.xx.xx: icmp_seq=20 ttl=61 time=0.202 ms
64 bytes from xx.xx.xx.xx: icmp_seq=21 ttl=61 time=0.163 ms

--- xx.xx.xx.xx ping statistics ---
21 packets transmitted, 21 received, 0% packet loss, time 20000ms
rtt min/avg/max/mdev = 0.153/0.187/0.369/0.046 ms

:: 2009-12-04 14:54:59 ::


So far my testing has been going pretty well.  I did have a minor issue with connections though.  It turned out to be a bug in the version 0.26 driver available from CPAN.  There is a fix for it in git already, but not on CPAN.  So if you need RPMs like I do this becomes an issue.  Fortunately I use cpan2rpm and can make patch files.  I've attached the patch and I also put it on git and here as mongodb_perl_0.26_close.patch for people to view.

For me this was a simple process to fix:

Then just re-install the RPM on the test servers.  Now I'm not running out of threads on the MongoDB server (due to open connections not being closed during DESTROY) and I'm getting semi-decent stats.  E.g.:

[jstephens@host (15:11:14) ~/test]$ perl
Comparison of NFS to MongoDB for reading/writing while updating a key on a data hash
All times in seconds
Event       Total   Avg       Max       Min    
NFS Read    1000    0.001090  0.009237  0.000720
NFS Write   1000    0.001703  0.009273  0.001063
Mongo Get   1000    0.001473  0.012552  0.001293
Mongo Set   1000    0.000499  0.003426  0.000441

The data is delimited text files (INI-like) on NFS filers, converted back and forth to hashes.  Storable would potentially be faster than parsing text strings, but it's less portable.  In the case of MongoDB I'm just using a simple wrapper library to control configuration/connection/set/get with error checking, etc.  NFS has an advantage here because the NFS client has built-in caching.  These numbers are against the 1.1.3 server, by the way, with a sample of 23 document ids, incrementing a key in the document each time (full read and write).
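The wrapper library isn't shown, but the NFS side of the benchmark boils down to a text-to-hash round-trip plus an increment.  A minimal sketch, assuming simple key=value lines (the real delimiter format isn't specified):

```perl
use strict;
use warnings;

# Parse key=value lines into a hashref (the format here is an
# assumption; the post only says "delimited text files").
sub text_to_hash {
    my ($text) = @_;
    my %h;
    for my $line (split /\n/, $text) {
        next unless $line =~ /^([^=]+)=(.*)$/;
        $h{$1} = $2;
    }
    return \%h;
}

# And back to text for writing out to the NFS filer.
sub hash_to_text {
    my ($h) = @_;
    return join "\n", map { "$_=$h->{$_}" } sort keys %$h;
}

my $doc = text_to_hash("counter=5\nname=test");
$doc->{counter}++;                # the per-iteration increment
my $out = hash_to_text($doc);     # "counter=6\nname=test"
```

With MongoDB the parse/serialize step disappears because the document is already a hash, which is part of why the Mongo Set numbers come out so low.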

:: 2009-11-28 21:49:02 ::

Making apache faster for mod_perl on a shared server...

So this isn't exactly new, but the documentation is a bit dumb.  Basically, on Apache2 with mod_perl you have a bunch of useless stats going on in the background all the time.  In my case the document root isn't local to the box, so the cost of random stats for non-existent files is higher than usual.  There is a hook to bypass this behavior, though, if you can get it to work (the documentation gave me pretty odd results, like everything being a 304, or seg faults).  It's referred to as a PerlTransHandler.  If you have a server that does nothing but mod_perl then you could easily override this completely, as they cover in their documentation.

In your conf you could do something like this:

PerlTransHandler Apache2::Const::OK

But that would cause every incoming request to bypass the mapping of the URL to a file.  Which is something I don't want to do.  If you mix mod_perl and non-mod_perl on the same server then you need to filter your responses from your handler appropriately.  So my conf looks like this:

PerlTransHandler MODPERL::killstat

Here is my example library:

package MODPERL::killstat;

use strict;
use Apache2::Const qw( OK DECLINED M_TRACE );

sub handler {
    my $r = shift;

    # ignore trace calls
    return DECLINED if $r->method_number == M_TRACE;

    # it's a known handler, don't try to crawl the file system, but return a local file instead so we don't 404
    if (ref($r) && $r->uri() =~ m#^/(some|mod_perl|handler|urls)#i) {
        $r->filename("/var/www/cgi-bin/index.cgi"); # local existing file
        return OK;
    }

    return DECLINED;
}

1;

And it makes my straces go from this:

22:48:46.540666 read(14, "GET /my/mod_perl/handler?plain_tex"..., 8000) = 567
22:48:46.540722 gettimeofday({1259218126, 540738}, NULL) = 0
22:48:46.540919 gettimeofday({1259218126, 540947}, NULL) = 0
22:48:46.540980 gettimeofday({1259218126, 540995}, NULL) = 0
22:48:46.541250 stat64("/var/www/my/mod_perl/handler", 0xbfb66c1c) = -1 ENOENT (No such file or directory)
22:48:46.541326 lstat64("/var/www", {st_mode=S_IFLNK|0777, st_size=20, ...}) = 0
22:48:46.541405 stat64("/var/www", {st_mode=S_IFDIR|0777, st_size=45056, ...}) = 0
22:48:46.541575 lstat64("/var/www/my", 0xbfb66c1c) = -1 ENOENT (No such file or directory)
22:48:46.541804 stat64("/var/www/mod_perl/handler", 0xbfb66aec) = -1 ENOENT (No such file or directory)
22:48:46.541937 lstat64("/var/www", {st_mode=S_IFLNK|0777, st_size=20, ...}) = 0
22:48:46.542036 stat64("/var/www", {st_mode=S_IFDIR|0777, st_size=45056, ...}) = 0
22:48:46.542132 lstat64("/var/www/mod_perl", 0xbfb66aec) = -1 ENOENT (No such file or directory)
22:48:46.542233 open("/proc/24468/statm", O_RDONLY|O_LARGEFILE)               = 1

To this:

21693 03:14:25.760119 read(0, "GET /my/mod_perl/handler?plain_tex"..., 8000) = 472
21693 03:14:25.760174 gettimeofday({1259234065, 760188}, NULL) = 0
21693 03:14:25.760334 gettimeofday({1259234065, 760347}, NULL) = 0
21693 03:14:25.760413 gettimeofday({1259234065, 760427}, NULL) = 0
21693 03:14:25.760715 stat64("/var/www/cgi-bin/index.cgi", {st_mode=S_IFREG|0775, st_size=4473, ...}) = 0
21693 03:14:25.760941 dup(1) = 14

:: 2009-11-17 14:03:14 ::

What I do....

It occurred to me that I never provided what I do for a living/hobby here.  Basically I'm a Sr. Perl Engineer on the Infrastructure team for Various, Inc (owns Friendfinder and related sites and is owned by Penthouse Media Group, Inc).  So my daily life includes these basic things:

We have ~2000 servers in production (and growing).  We're working on being more SOA, but it's taking time since we've never had a major revision of the code.  The many years of organic growth have left us with some interesting challenges, though.

Things I work with every day:

Things I work with on a regular basis:

Nothing too out of the ordinary for a web engineer.  The main challenge of this job is trying to make things run better.  Historically we've just thrown hardware at our problems (and that does work, to a point).

Basically, when you work in an environment that hosts Perl, PHP, Java, ASP.NET, FMS, and Erlang all serving traffic, and you partner with CDNs for most of your static content, there are a lot of things that can go wrong.  But even with optimized video streaming and CDNs we still push 6-8Gb from one data center (with a peak of 14Gb at one point) and 3-5Gb from our other data center, while maintaining 1M page views every 10 minutes on our larger site and averaging 12M banner impressions an hour.

Part of my responsibilities are:

Mostly, when you get sites to a level like this, you realize that while you can probably optimize your template engine a little more (we've done this a few times), the data aggregation needed to render pages usually hurts you the most.  And that a lot of open source software has no idea how to scale.

:: 2009-09-08 13:32:56 ::

Using data objects between PHP and Perl via Memcached

Came across a new problem: the desire to share data objects between Perl and PHP via Memcached.  This is a bit of an issue since the languages serialize data in different ways, and I either have to force them both to conform to the same format or modify one to deal with the other.  So I decided to use JSON, since there is easy support in both languages (and fairly fast parsers).

Granted, the easy solution is, anywhere in code that you want to move data back and forth, to just use JSON calls to convert your data to a string and then store it.  But what if we want to use OOP and not have to worry about everywhere in code that already calls existing code?  Then you'd only have to modify the code that creates the initial underlying object that you're wrapping.

For Perl I'm using Cache::Memcached::Fast due to its performance and its ability to easily override functionality.

For PHP I'm using the Memcached library because I think it's much better than the Memcache library that seems to commonly get installed.

Now in theory this should be fairly easy: set both sides to use JSON as the serializer and let 'er rip.  Turns out not so (man, I wish things were easy).  In Memcached, when you set an object you send four things: key name, flags, TTL, and data size.  The PHP and Perl libraries handle flags totally differently.  Perl took the 1/0 approach, where 0 is string and 1 is serialized (if you tell your object to use the wrong one, it's your problem).  PHP, on the other hand, has a hard-coded list of serializers and sets the flag to whatever is appropriate (6 in this case, for JSON).

So for strings, yes, they were quite compatible.  But where I needed it there was a total failure (strace showed both were doing the proper JSON encoding on the data).

After some deliberation I decided to modify the PHP side (Perl is our primary language, so gimp the less-used one).  I added a new flag to the Memcached object that works in conjunction with the JSON support.  Basically, if you're doing JSON and you set the compatibility flag, the flag gets set not to 6 but to 1, to match Perl.  This way both clients think the value is serialized and decode/encode the data correctly.
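The whole fight is over that flags field; the bytes on the wire were already identical.  A core-Perl sketch of the mismatch (flag values are the ones from this post; JSON::PP stands in for the faster parsers):

```perl
use strict;
use warnings;
use JSON::PP;

my $data = { a => 2342342, c => 'yo mama' };

# Both sides produce the same JSON bytes (canonical => stable key order)...
my $perl_bytes = JSON::PP->new->canonical->encode($data);
my $php_bytes  = JSON::PP->new->canonical->encode($data);

# ...but disagree on the flags sent with "set <key> <flags> <ttl> <bytes>".
my $F_PERL_SERIALIZED = 1;   # Cache::Memcached::Fast: 0 = string, 1 = serialized
my $F_PHP_JSON        = 6;   # PHP Memcached: hard-coded serializer id for JSON

# The patch: with the compatibility flag on, PHP stores flag 1 instead of 6,
# so both clients treat the value as serialized and decode it.
my $patched_php_flag = $F_PERL_SERIALIZED;
```

So the fix never touches the payload, only what each client writes into (and expects back from) the flags field.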

I looked at an Erlang library for Memcached (another language we use) and found they were using a random int for the memcached flags.  Still not sure what the point of that was, but perhaps I'll have an Erlang hack too in the future.

So the zip includes the two patch files for the Memcached source and two quick test files.  One is Perl and the other is PHP.  They assume that you have a local Memcached server running on the normal port.  You should see something like this:

[root@host (13:40:16) /local/tmp]# perl -f
Setting string
$VAR1 = 1;
Getting same string
$VAR1 = 'This is a string';
Setting hash object
$VAR1 = 1;
Getting hash object
$VAR1 = {
          'c' => 'yo mama',
          'a' => 2342342
Setting array object
$VAR1 = 1;
Getting array object
$VAR1 = [
Getting php string
$VAR1 = 'this is a string';
Getting php object
$VAR1 = {
          'c' => 12,
          'a' => 4
[root@host (13:40:21) /local/tmp]# php -f local_memcached_test.php
Have JSON: int(1)
Setting string
Getting same string
string(16) "this is a string"
Setting assoc array object
Getting assoc array object
object(stdClass)#2 (2) {
Setting array object
Getting array object
array(4) {
  string(1) "d"
  string(1) "e"
  string(8) "advwewge"
Getting Perl string
string(16) "This is a string"
Getting Perl object
object(stdClass)#2 (2) {
  string(7) "yo mama"

So now I never have to do all of the extra encoding outside of the objects, they just work together.  Lazy++

:: 2009-09-04 18:48:04 ::

Speeding up Perl...

So due to the size of our environment at work and trying to keep up with our traffic (our top site is <100 on Alexa and other major sites are all <4000, one of which is ~600), we have to look at many ways to optimize code.  In some cases you've done a lot to make your code more efficient but you're still suffering because of the language engine.  Since we're currently using Perl 5.8.8 on the majority of our production servers, it made sense to look further into how Perl was operating.  Now this patch won't be overtly useful for a lot of people, simply because their sites don't see the traffic.

So the basic issue is file system stats.  It seems that Perl really likes to stat things when trying to load a file.  First of all, your default @INC has 5 sub-versions of Perl in it (in this case you would get 5.8.8, 5.8.7, 5.8.6, 5.8.5, etc.) for all of your paths.  So leaving our additional lib paths out of the equation, your base install will have ~32 include paths that it must stat through until it finds the library that you're trying to load.

Now on top of that you also have this thing called a "pmc" file.  If you ever strace Perl doing things you'll see those going by too.  Those are an old feature for pre-compiled libraries, which Perl searches for before loading the .pm file instead.  So for the examples I'll give, this is basically what my initial @INC looked like:


So say I wanted to use a library that was in /usr/lib/perl5/5.8.8/, like File::Path.  Not only would Perl have to stat its way down most of the tree, it would also have to stat for a "pmc" file first in each directory.  While this seems trivial, because the Linux file system is pretty good at caching data, if you have a lot of libraries to load it can add a decent amount of overhead (uncached means an IO hit).
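You can see the multiplier without strace: for each module, Perl walks @INC in order, and (pre-patch) probes for a .pmc before each .pm.  A sketch of the candidate list it works through (the example @INC here is shortened and hypothetical):

```perl
use strict;
use warnings;

# A cut-down stand-in for the real @INC (which had ~32 entries).
my @inc = qw(
    /usr/lib/perl5/5.8.8/i386-linux-thread-multi
    /usr/lib/perl5/5.8.8
    /usr/lib/perl5/site_perl/5.8.8
);

# Module name to relative path: File::Path -> File/Path
(my $rel = 'File::Path') =~ s{::}{/}g;

my @probes;
for my $dir (@inc) {
    push @probes, "$dir/$rel.pmc";   # legacy pre-compiled check, done first
    push @probes, "$dir/$rel.pm";
}
# Two stats per directory; across ~32 real paths that's how a single
# module load piles up dozens of stats.
```

Dropping the old sub-version directories shortens the loop, and dropping the .pmc probe halves what's left, which is roughly the 47-to-9 improvement described below.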

When I first looked into this I discovered that one of our common CPAN modules required 47 stats in order to load.  Now it takes 9.  In our environment we don't upgrade perl (hence the previous sub-versions in @INC).  Now, there's a compile flag for removing the previous versions or setting them to something arbitrary, but everything I tried still gave me the old versions in @INC (and sometimes more).  So as a result I gave up on that approach and just created a patch file for our source RPM.

If you apply this patch it will remove all previous versions from @INC and remove the "pmc" file check when looking for libraries.  My @INC now looks like this:


NOTE: If you do upgrade Perl on your boxes you won't want to use the full patch, since your older libraries will be in the previous version directories.  But you could trim out everything but the removal of the "pmc" file check.


And yes, we use mod_perl to help keep our library reload rates as low as possible, but reloads will happen, so make them as painless as possible.

:: 2009-09-04 18:16:35 ::

New blog for rambling...

Basically wanted a place to talk or remind myself of technical things.  More posts to come.