[ic] Iterations slow with mv_matchlimit incrementations
Gert van der Spoel
gert at 3edge.com
Sun Jul 5 14:52:59 UTC 2009
> -----Original Message-----
> From: interchange-users-bounces at icdevgroup.org [mailto:interchange-
> users-bounces at icdevgroup.org] On Behalf Of Grant
> Sent: Sunday, June 14, 2009 8:04 PM
> To: interchange-users at icdevgroup.org
> Subject: Re: [ic] Iterations slow with mv_matchlimit incrementations
> >> I do not have a categories table to test this ... But 1) your
> >> table probably has around 100-1000 max results, so you can put
> >> ml=999999999999999 and it won't make any difference. Then you feed
> >> that to the inner loop, where again you probably have 100-5000
> >> results per category, so again the 9999999 match limit does not
> >> really get reached anyway ...
> >> So your fast workaround eventually returns all products, but it
> >> returns them in pieces ... Less data to handle at once ...
> >> Anyway, in case you have a huge speed difference between 10 and
> >> 10000, then it could be your IC version (I've tested on 5.7.1), but
> >> if 10 and 10000 are similar in speed and the problem really is with
> >> the 999999, then perhaps you want to monitor your environment and
> >> check what happens (swap etc.).
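The "monitor your environment (swap etc.)" suggestion is normally a job for top or vmstat; as a rough, Linux-only illustration, here is a small Python sketch (assumed helper name, reading /proc) that samples a process's resident and swapped-out memory:

```python
import re

def proc_memory_kb(pid):
    """Return (VmRSS, VmSwap) in kB for a process, read from /proc.

    Illustrative Linux-only sketch. VmSwap is missing on older
    kernels, in which case 0 is reported for it.
    """
    fields = {"VmRSS": 0, "VmSwap": 0}
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            m = re.match(r"(VmRSS|VmSwap):\s+(\d+)\s+kB", line)
            if m:
                fields[m.group(1)] = int(m.group(2))
    return fields["VmRSS"], fields["VmSwap"]

if __name__ == "__main__":
    import os
    rss, swap = proc_memory_kb(os.getpid())
    print("RSS: %d kB, swapped: %d kB" % (rss, swap))
```

Sampling this around the slow loop would show whether the Interchange process is growing and pushing the box into swap, which is what the top output below suggests.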
> > Is this informative? It looks like the process which is running the
> > job is using quite a bit of memory, or maybe this much is normal?
> > Mem:  1028780k total,  973860k used,   54920k free,   91364k buffers
> > Swap: 2008116k total,   34188k used, 1973928k free,  290136k cached
> >
> >   PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
> > 31789 interc 20  0  261m 252m 6324 R 97.2 25.1 2:39.70 interchange
> > 31754 interc 20  0 69632  57m 3968 S  0.0  5.7 0:01.77
> > - Grant
> I think the above is evidence that this is a memory problem. If I use
> the workaround I mentioned, the process uses about 7% memory instead
> of 25%. Does this mean my job is too complex? Should I add memory to
> my server? I'm on IC 5.6.1.
> - Grant
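The workaround being discussed — fetching in limited batches instead of in one huge result set — trades one big allocation for many small ones, which fits the 25% vs. 7% memory numbers above. A hedged Python sketch of the idea (the `fetch_page` function is hypothetical, standing in for a loop search with a small mv_matchlimit and increasing offset):

```python
def fetch_page(all_rows, offset, limit):
    """Hypothetical stand-in for a search with a match limit:
    returns at most `limit` rows starting at `offset`."""
    return all_rows[offset:offset + limit]

def process_in_chunks(all_rows, limit, handle_row):
    """Process every row while holding only `limit` rows at a time,
    so peak memory is roughly proportional to `limit`, not to the
    full result count."""
    offset = 0
    while True:
        page = fetch_page(all_rows, offset, limit)
        if not page:
            break
        for row in page:
            handle_row(row)
        offset += limit

# Same total work, but a bounded working set per iteration.
seen = []
process_in_chunks(list(range(25)), limit=10, handle_row=seen.append)
print(len(seen))  # → 25
```

The total work is unchanged, which is why the chunked version is not necessarily faster end to end; it just avoids building the entire result in memory at once.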
> >> I also still do not understand why for you it is apparently working
> >> as: processing <long break> processing <long break> processing ...
> >> For me it 'thinks' and then puts a processing blob all at once on
> >> the screen.
> >>> So it seems like IC is getting bogged down when there are too many
> >>> matches in a loop search. Should that happen? Does it indicate a
> >>> problem somewhere in my system?
> >>> I tried many times to narrow the problem down to a certain section
> >>> of my "processing" code but I always got nowhere. I have the
> >>> problem in two separate loop searches of two different tables.
> >>> - Grant
Did you ever get around to finding out where things were slowing down and
to resolving it?
We were looking at the overall speed/performance of loops a bit today
(Peter, Phil, myself) and Peter gave some valuable tips, which can also be
found in the following pages, where you might find some other useful
information as well:
Mike had given the rather cryptic response a while back: "Two words: string
copying". I guess the first link gives a somewhat bigger explanation of what
he meant:
"In general, when you are displaying only one item (such as on a flypage) or
a small list (such as shopping cart contents), you can be pretty carefree in
your use of ITL tags. When there are thousands of items, though, you cannot;
each ITL tag requires parsing and argument building, and all complex tests
or embedded Perl blocks cause the Safe module to evaluate code.
The Safe module is pretty fast considering what it does, but it can only
generate a few thousand instances per second even on a fast system. And the
ITL tag parser can likewise only parse thousands of tags per CPU second.
What to do? You want to provide complex conditional tests but you don't want
your system to slow to a crawl. Luckily, there are techniques which can
speed up complex lists by orders of magnitude."
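The point about per-tag parsing and Safe evaluation can be illustrated outside Interchange: parsing a template expression once per item is far slower than parsing it once before the loop. An illustrative Python sketch (not ITL; `compile`/`eval` merely stands in for the ITL parser and the Safe module):

```python
import timeit

ITEMS = list(range(2000))
EXPR = "item * 2 + 1"  # stands in for an embedded test or Perl block

def per_item_eval():
    # Re-parse the expression for every item, as if each ITL tag
    # were parsed from scratch inside the loop body.
    return [eval(compile(EXPR, "<tag>", "eval"), {"item": i}) for i in ITEMS]

def precompiled_eval():
    # Parse once, evaluate many times: the expensive step is
    # hoisted out of the loop.
    code = compile(EXPR, "<tag>", "eval")
    return [eval(code, {"item": i}) for i in ITEMS]

slow = timeit.timeit(per_item_eval, number=5)
fast = timeit.timeit(precompiled_eval, number=5)
print("per-item: %.3fs  precompiled: %.3fs" % (slow, fast))
```

Both produce identical output; only the amount of repeated parsing differs, which is the same trade-off behind moving complex ITL tests and Perl blocks out of large lists.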
Perhaps some of the examples will give you some ideas on how to rewrite your
loops.
The basic example that you had given, with simply returning 'processing',
did not really make a difference per loop, I believe; it might have appeared
that way due to incorrect logging of the loop times ... Of course it will
take longer to show the "processing" result when the matchlimit goes from 10
to 999999, because the query itself will take longer to process.