[ic] Iterations slow with mv_matchlimit incrementations

Grant emailgrant at gmail.com
Sat Jul 11 06:08:14 UTC 2009


>> >> I do not have a categories table to test this ... But 1) your
>> >> categories table probably has roughly 100-1000 results at most, so you
>> >> can set ml=999999999999999 and it won't make any difference. Then you
>> >> feed that to the inner loop, where again you probably have 100-5000
>> >> results per category, so again the 9999999 match limit never really
>> >> gets reached anyway ...
>> >>
>> >> So your fast workaround eventually returns all products, but it breaks
>> >> the results up into pieces ... Less data to handle at once ...
>> >>
>> >> Anyway, in case you see a huge speed difference between 10 and 10000,
>> >> it could be your IC version (I've tested on 5.7.1), but if 10 and
>> >> 10000 are similar in speed and the problem really is with the 999999,
>> >> then perhaps you want to monitor your environment and check what
>> >> happens when you run the query (swap etc.).
>> >
>> > Is this informative?  It looks like the process which is running the
>> > job is using quite a bit of memory, or maybe this much is normal?
>> >
>> > Mem:   1028780k total,   973860k used,    54920k free,    91364k buffers
>> > Swap:  2008116k total,    34188k used,  1973928k free,   290136k cached
>> >
>> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> > 31789 interc    20   0  261m 252m 6324 R 97.2 25.1   2:39.70 interchange
>> > 31754 interc    20   0 69632  57m 3968 S  0.0  5.7   0:01.77 interchange
>> >
>> > - Grant
>>
>> I think the above is evidence that this is a memory problem.  If I use
>> the workaround I mentioned, the process uses about 7% memory instead
>> of 25%.  Does this mean my job is too complex?  Should I add memory to
>> my server?  I'm on IC 5.6.1.
>>
>> - Grant
>>
>>
>> >> I also still do not understand why it apparently works for you as:
>> >> processing <long break> processing <long break> processing <long break>
>> >>
>> >> For me it 'thinks' and then puts the processing blob on screen all at
>> >> once.
>> >>
>> >>
>> >>> So it seems like IC is getting bogged down when there are too many
>> >>> matches in a loop search.  Should that happen?  Does it indicate a
>> >>> problem somewhere in my system?
>> >>>
>> >>> I tried many times to narrow the problem down to a certain section of
>> >>> my "processing" code but I always got nowhere.  I have the problem in
>> >>> two separate loop searches of two different tables.
>> >>>
>> >>> - Grant
>
> Hi Grant,
>
> Did you ever get around to finding out where things were slowing down and
> optimizing them?
> We were looking at the overall speed/performance of loops a bit today
> (Peter, Phil, myself) and Peter gave some valuable tips, which can also be
> found on the following pages, where you might find some other useful
> information as well:
>
> http://www.icdevgroup.org/docs/optimization.html
> http://www.interchange.rtfm.info/icdocs/Interchange_tag_introduction.html#Interchange_tag_parsing_order
> http://www.interchange.rtfm.info/icdocs/Interchange_Perl_objects.html  and
> then specifically:
> http://www.interchange.rtfm.info/icdocs/Interchange_Perl_objects.html#Sql
>
> Mike had given a rather cryptic response a while back: "Two words: string
> copying". I guess the first link gives a somewhat fuller explanation of
> what he meant:
>
> "In general, when you are displaying only one item (such as on a flypage) or
> a small list (such as shopping cart contents), you can be pretty carefree in
> your use of ITL tags. When there are thousands of items, though, you cannot;
> each ITL tag requires parsing and argument building, and all complex tests
> or embedded Perl blocks cause the Safe module to evaluate code.
>
> The Safe module is pretty fast considering what it does, but it can only
> generate a few thousand instances per second even on a fast system. And the
> ITL tag parser can likewise only parse thousands of tags per CPU second.
>
> What to do? You want to provide complex conditional tests but you don't want
> your system to slow to a crawl. Luckily, there are techniques which can
> speed up complex lists by orders of magnitude."
>
> Perhaps some of the examples give you some ideas on how to rewrite your
> code.
> The basic example you had given, which simply returned 'processing', did
> not really make a difference per loop iteration, I believe; it might have
> appeared that way due to incorrect logging of the loop times ... Of course
> it will take longer to show the "processing" result when the matchlimit
> goes from 10 to 999999, because the query itself will take longer to run.
>
>
> CU,
>
> Gert
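
The "string copying" advice quoted above from the optimization page can be
sketched roughly as follows. This is only a sketch: the products table and
its sku/price/on_sale fields are illustrative, not taken from this thread,
and the exact conditional-tag syntax may differ between IC versions. The
slow version triggers tag parsing plus a Safe evaluation for every matched
row; the faster version enters embedded Perl once and pushes the per-row
test down into the query itself:

```
[comment]
    Slow: one ITL conditional per matched row, so Safe is invoked
    thousands of times when the match limit is large.
[/comment]
[loop search="fi=products/ra=yes/ml=100000"]
[if-loop-field on_sale]
    [loop-field sku]: [loop-field price]<br>
[/if-loop-field]
[/loop]

[comment]
    Faster: a single [perl] block, so Safe is entered once; the rows
    come back from one query on the $Db object and are formatted in
    plain Perl without further tag parsing.
[/comment]
[perl tables="products"]
    my $out = '';
    my $rows = $Db{products}->query(
        'SELECT sku, price FROM products WHERE on_sale = 1');
    for my $row (@$rows) {
        my ($sku, $price) = @$row;
        $out .= "$sku: $price<br>\n";
    }
    return $out;
[/perl]
```

The same idea should apply to the nested category/product loops discussed
earlier in the thread: collapse the inner loop into one query inside the
outer [perl] block rather than re-entering the tag parser per category.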

Thank you very much for persisting with this.  I'm hung up right now
(and lately) but I will get back on top of this ASAP.

Thanks again,
Grant



More information about the interchange-users mailing list