[ic] Performance testing: timer variable?

Dan B db@cyclonehq.dnsalias.net
Mon, 12 Mar 2001 01:07:39 -0800


At 11:43 PM 3/9/2001 -0500, you wrote:
>On Fri, Mar 09, 2001 at 03:05:43PM -0800, Dan B wrote:
> > At 08:17 AM 3/9/2001 -0500, you wrote:
> > >On Fri, Mar 09, 2001 at 02:33:58AM -0800, Dan B wrote:
> > > > I am doing some performance testing/debugging, and would love to hear
> > > > comments from others about this.
> > > >
> > > > A) Testing the load time (seconds) for an entire page
> > > >          Is 'time wget $URL' a good indicator?
> > > >
> > > > B) Testing a specific block of code?
> > > >          What kind of [perl] would you put in front and in back to
> > > > take a before and after time sample, subtract it, and give the
> > > > "x.xxxx seconds passed" result?  Or another way altogether?
> > > >
> > > > C) I'm also wondering how much of a difference there is between
> > > > [query] and direct SQL querying (i.e. how much does the overhead of
> > > > perl, DBI, IC amount to).
> > >
> > >It's not clear what you want to know or what you define as "good
> > >performance" in your situation.  You need to define that first.
> >
> > Good point.  I'm looking for ways to measure "page display times" (when
> > network isn't an issue).  As far as "good performance", here's the
> > background info:
> > For hardware, we've got:
> >          2-way Xeon 1 GHz web server (1 GB RAM)
> >          4-way Xeon database server (PostgreSQL) with 4 GB RAM
> >          EMC Clariion fibrechannel SAN array
> >
> > With plans to scale out the webservers and do layer-4 www clustering (and
> > eventually try the Pgsql clustering code).
> >
> > For my application, good performance is sub-second page displays no matter
> > the concurrency load (1 concurrent connection or 1000+ concurrent).
> >
> > Basically I'm worried about performance that I can't fix by throwing more
> > hardware at the problem.  I need a good way to test the performance, and
> > that's what I'm hoping you guys can help me discover. :)  So it seems that
> > it would be very valuable to test the amount of time that it takes to
> > execute a given block of code in an .html so that I can find what areas
> > need tuning the most.
> >
> > But since you've piqued my interest, what are the other metrics (or
> > dimensions) for "performance" testing on an interchange website?  (KB
> > transmit size per HTTP SEND?  processor utilization maybe?  low/high
> > concurrency? etc?).
>
>Setting the reference platform is the first step.  Sometimes it is a
>laptop and rarely a high end mac workstation.  Usually it is a
>$1000 off the shelf windoze system from Staples on a dialup.  IE,
>Netscape, AOL.  We do everything on linux so that is covered, and
>we check with a Mac when we can get it working. Simple page render
>time.
>
>We want a 3 second render time.  Yeah, ugly.  Do we get there
>with everything, no way.  I don't know where you will get sub-second
>page displays in any credible testing environment.

Do you mean to say that 100 Mbps isn't a credible testing
environment?  :-)  I thought everyone browsed the net with a
cross-connected OC-192 into MAE-WEST?

At this point I'm worried about the performance of the site without network 
overhead.  That way I can be sure that a given page is slow because of my 
bad IML programming (vs. wondering if it was because 56k, etc.).  That's 
what I meant by sub-second. :-)
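For the "time wget" approach in question A, here's the sort of thing I have
in mind, done from Perl so the number excludes shell and wget startup.  This
is only a rough sketch: it assumes HTTP::Tiny and Time::HiRes are installed,
and the localhost URL is just a placeholder for the page under test.

```perl
#!/usr/bin/perl
# Rough sketch, assuming HTTP::Tiny and Time::HiRes are available;
# the localhost URL is only a placeholder for the page under test.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use HTTP::Tiny;

# Return (wall-clock seconds, body bytes, HTTP status) for a single GET.
sub time_fetch {
    my ($url) = @_;
    my $t0  = [gettimeofday];
    my $res = HTTP::Tiny->new(timeout => 5)->get($url);
    return (tv_interval($t0), length($res->{content} || ''), $res->{status});
}

my ($secs, $bytes, $status) =
    time_fetch($ARGV[0] || 'http://localhost/index.html');
printf "%.4f s  %d bytes  status %s\n", $secs, $bytes, $status;
```

Run it against localhost so the figure reflects Interchange parse time rather
than network latency, and ignore the first run or two while caches warm up.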

That said, I recently mentioned that I found the solution to my question in
Mike's Tips and Tricks, which I had saved to my hard drive (knowing it would
come in handy later) but neglected to reference until now.  It will be great
to see the Red Hat documentation army applied to the Interchange docs --
hopefully "Tips and Tricks" will be expanded and promoted.  Anyway, for the
sake of the list archives, I'll repost what I'm using to benchmark here
(from Mike's Tips and Tricks):

From: interchange-users-admin@minivend.com on behalf of Mike Heins
[mikeh@minivend.com]
Sent: Tuesday, November 21, 2000 12:58 PM
To: Interchange User List
Subject: [ic] Tips and Tricks -- optimizing lists

Area: Core
Category: Templates
Item: List optimization
[...]
------------
Benchmarking
------------
A non-precise benchmark of different iteration options can be done
with the following global UserTag. Place this in a file in the
usertag/ directory in the Interchange root:

UserTag benchmark Order start display
UserTag benchmark AddAttr
UserTag benchmark Routine <<EOR
my $bench_start;
my @bench_times;
sub {
     my ($start, $display, $opt) = @_;
     my @times = times();
     if($start or ! defined $bench_start) {
         $bench_start = 0;
         @bench_times = @times;
         for(@bench_times) {
             $bench_start += $_;
         }
     }
     my $current_total = 0;
     if($display or ! $start) {
         for(@times) {
             $current_total += $_;
         }
         unless ($start) {
             $current_total = sprintf '%.3f', $current_total - $bench_start;
             for(my $i = 0; $i < 4; $i++) {
                 $times[$i] = sprintf '%.3f', $times[$i] - $bench_times[$i];
             }
         }
         return $current_total if ! $opt->{verbose};
         return "total=$current_total user=$times[0] sys=$times[1] cuser=$times[2] csys=$times[3]";
     }
     return;
}
EOR

Then at the beginning of the code to check, call

         [benchmark start=1]

to start the measurement. At the end

         [benchmark]

will display the time used. Bear in mind that it is not precise, and
that there may be variation due to system conditions. Also, the longer
the times and the bigger the list, the better the comparison.

To see the system/user breakdown, do:

         [benchmark verbose=1]

In general, "user" time measures Interchange processing time and
the rest are indicative of the database access overhead, which can vary
widely from database to database.
[...]
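One caveat I'd add (mine, not Mike's): times() reports CPU seconds, not
wall-clock seconds, so time the process spends blocked -- waiting on a
remote database server, for instance -- may not show up at all.  A quick
sketch of the difference, assuming Time::HiRes is available:

```perl
#!/usr/bin/perl
# Sketch of CPU time vs. wall-clock time; assumes Time::HiRes is installed.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Block for 250 ms without burning CPU, then report both clocks.
sub cpu_vs_wall {
    my $t0 = [gettimeofday];
    my @c0 = times();
    select(undef, undef, undef, 0.25);   # sleep, a stand-in for a DB wait
    my $wall = tv_interval($t0);
    my @c1 = times();
    my $cpu = 0;
    $cpu += $c1[$_] - $c0[$_] for 0 .. 3;
    return ($wall, $cpu);
}

my ($wall, $cpu) = cpu_vs_wall();
printf "wall=%.3f cpu=%.3f\n", $wall, $cpu;
```

The wall clock shows the quarter second while times() barely moves, so the
[benchmark] tag is great for comparing IML/Perl work, but for total page
latency you still want a wall-clock measurement like 'time wget'.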

Thanks Mike!  (And thanks Christopher!)

-Dan


>minivend is **never** the issue.  Poor performance is almost always poor
>design or poor concept; the rest of the time it's not enough RAM.
>
>FWIW, my experience is that performance is relatively insensitive
>to the hardware you throw at it; generally our systems chug
>along under .1 load avg; that will spike without bound when we
>make a **mistake**.  Half the hardware or twice as much would
>not make a difference when a few robots chew into badly formed
>queries all at once.
>
>cfm
>
>--
>
>Christopher F. Miller, Publisher                             cfm@maine.com
>MaineStreet Communications, Inc         208 Portland Road, Gray, ME  04039
>1.207.657.5078                                       http://www.maine.com/
>Content management, electronic commerce, internet integration, Debian linux
>
>_______________________________________________
>Interchange-users mailing list
>Interchange-users@lists.akopia.com
>http://lists.akopia.com/mailman/listinfo/interchange-users

Dan Browning, Cyclone Computer Systems, danb@cyclonecomputers.com