[ic] Cluster and/or load balancing question

Dan Browning interchange-users@icdevgroup.org
Tue Jul 16 15:33:01 2002


At 12:24 PM 7/16/2002 -0600, you wrote:
>Quoting Dan Browning <dbml@kavod.com>:
>
> > At 09:04 AM 7/15/2002 +0200, you wrote:
> > >Dan Browning wrote:
> > >....
> > >
> > >>>>There are more that do that as well, just not people that are
> > >>>>on the list.
> > >>>
> > >>>
> > >>>Dear Mike,
> > >>>
> > >>>do you have a white paper on using IC in a cluster, with or
> > >>>without an SQL server?
> > >>>
> > >>>Thanks!
> > >>>
> > >>>Joachim
> > >>
> > >>I've thought of writing one, since it is really quite simple (well, for
> > >>IC's part anyway -- clusters in general are often non-intuitive).
> > >>However, the thought hasn't yet turned into action as yours is the first
> > >>query I've seen in a few months.
> > >
> > >Dan,
> > >
> > >would you be so kind as to give me, or the list, a short summary of
> > >your experience with IC in a cluster, if you have a little time?
> > >
> > >Thank you!
> > >
> > >Joachim
> >
> > I think clustering is great.  Two medium boxes are about as expensive as
> > one *big* box, you get more performance for the dollar, and the single
> > point of failure risk decreases somewhat.  I went with LVS and IP
> > affinity (so I could load balance SSL connections as well), plus a
> > monitoring program that notifies my cell phone via email whenever a
> > server goes down.
> >
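[The IP affinity mentioned above boils down to a stable mapping from client
source IP to one real server, so an SSL session keeps hitting the same box.
A toy sketch of the idea, with hypothetical server names; real LVS does this
in-kernel via ipvsadm's persistence option, not in userland like this:]

```python
import hashlib

REAL_SERVERS = ["web1", "web2", "web3"]  # hypothetical back-end pool

def pick_server(client_ip):
    # Hash the source IP so a given client always maps to the same
    # real server; SSL session state then never has to move boxes.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return REAL_SERVERS[int(digest, 16) % len(REAL_SERVERS)]
```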
> > I haven't clustered the database yet, but if I did I would probably try
> > DBI::Multiplex (I think that's what it's called) first.  As far as
> > Interchange is concerned, everything went great, except that the CPAN
> > CounterFile.pm module used by Interchange isn't NFS-safe yet (i.e., it
> > doesn't use fcntl locks).  However, it hasn't yet caused any problems
> > for us; and I suppose if it did, it could be patched to use fcntl.
>
>Actually Dan, I would have to disagree with that. We have run tests on 50
>sites for three months, running Apache & Interchange on one box and MySQL
>on another. The sites ran noticeably slower than when running them on one
>dual-processor box, to the tune of a 7-9 second delay on many pages before
>they were sent out.
>
>We have recently picked up a pure Intel server, right down to the case. This
>server came with a $12,000.00 US price tag. It is dual-processor with 6 Gigs
>of RAM and 4 Ultra 160 Seagate SCSI drives.
>
>On our development server, we have one Interchange page that takes 3-4
>minutes to send to the client. I really wish I could time its build, but it
>is a single page in our custom site editor, which has several modes it can
>be started in; anyway, that aside. On an idle Pentium III with 1 Gig of RAM
>the page takes close to 3 minutes before it starts to send. On the new
>server, it is now down to less than 1 second.
>
>Single-processor desktop boxes with Linux on them are not servers. AND
>PLEASE, I am not saying you are saying they are. Building 3 or 4 of them
>still won't match the I/O of a REAL server.
>
>I have over 200 Interchange sites running off that new server, and no one
>is affected by the others. Many small sites will fill up a converted
>desktop box. So the bang-for-your-buck argument doesn't hold when you add
>up how many sites can be run on that one box.

We have used the Intel Channel whiteboxes as well, including the ~$15,000
Xeon variety (ours is a quad Xeon server with 6 Ultra 160 Seagate Cheetahs
in RAID-50).  I think they have their place:

         * I/O-intensive workloads (i.e., data too big to fit in RAM, so
           you need fast disks)
         * Applications that can't be clustered.

However, I think the majority of e-commerce sites do not fit that
description, and you can get more performance per dollar from six $2,000
boxes than from one $12,000 box.  Once a site is loaded into memory, the
$2,000 box and the $12,000 box compete only on processor speed and memory
bandwidth.  The Xeons (e.g. 1.5GHz/256KB) have a definite leg up, perhaps
as much as 25% over an AMD 2.0GHz with 266MHz DDR, but the price is
certainly more than 25% higher.
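To make the per-dollar arithmetic concrete, here is the back-of-the-envelope
version.  The throughput numbers are purely illustrative placeholders, not
benchmarks of either machine; the point is the shape of the comparison,
which assumes near-linear scaling across the cluster:

```python
# Purely illustrative numbers -- NOT measurements of either box.
big_cost, big_rps = 12_000, 500        # one big SMP box
small_cost, small_rps = 2_000, 125     # one commodity box
n = 6                                  # cluster size

cluster_cost = n * small_cost          # 12,000: same total spend
cluster_rps = n * small_rps            # assumes near-linear scaling

big_per_dollar = big_rps / big_cost            # req/s per dollar, big box
cluster_per_dollar = cluster_rps / cluster_cost  # req/s per dollar, cluster
```

On these made-up figures the cluster wins on requests per dollar at equal
total cost; the real question is only how close to linear the scaling is.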

Regarding disk I/O, that is usually only an issue for session/ and tmp/
files (since most e-commerce sites don't have 1GB of data).  If disk I/O is
an issue, a single Cheetah disk will outperform a RAID-5 on writes (though
not a RAID-10 or RAID-50), while on reads it will be proportionately
slower.  However, if the reads are spread over 6 servers, you regain that
read speed while keeping the write speed.

Again, I think the big boxen definitely have their place, but my general
opinion is that a cluster provides more bang for the buck.

+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Dan Browning, Kavod Technologies <db@kavod.com>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fuch's Warning:
         If you actually look like your passport photo, you aren't well
enough to travel.