[ic] Iterations slow with mv_matchlimit incrementations

Gert van der Spoel gert at 3edge.com
Sun Jun 14 20:57:14 UTC 2009


> -----Original Message-----
> From: interchange-users-bounces at icdevgroup.org
> [mailto:interchange-users-bounces at icdevgroup.org] On Behalf Of Grant
> Sent: Sunday, June 14, 2009 8:04 PM
> To: interchange-users at icdevgroup.org
> Subject: Re: [ic] Iterations slow with mv_matchlimit incrementations
> 
> >> I do not have a categories table to test this ... But 1) your
> >> categories table probably has roughly 100-1000 rows at most, so you
> >> can put ml=999999999999999 and it won't make any difference. Then you
> >> feed that to the inner loop, where again you probably have 100-5000
> >> results per category, so again the 9999999 match limit never really
> >> gets reached anyway ...
> >>
> >> So your fast workaround eventually returns all products as well, but
> >> it breaks the returns up into pieces ... Less data to handle at once ...
> >>
> >> Anyway, in case you see a huge speed difference between 10 and 10000,
> >> it could be your IC version (I've tested on 5.7.1), but if 10 and
> >> 10000 are similar in speed and the problem really is with the 999999,
> >> then perhaps you want to monitor your environment and check what
> >> happens when you run the query (swap etc.).
> >
> > Is this informative?  It looks like the process which is running the
> > job is using quite a bit of memory, or maybe this much is normal?
> >
> > Mem:   1028780k total,   973860k used,    54920k free,    91364k buffers
> > Swap:  2008116k total,    34188k used,  1973928k free,   290136k cached
> >
> >  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > 31789 interc   20   0  261m 252m 6324 R 97.2 25.1   2:39.70 interchange
> > 31754 interc   20   0 69632  57m 3968 S  0.0  5.7   0:01.77 interchange
> >
> > - Grant
> 
> I think the above is evidence that this is a memory problem.  If I use
> the workaround I mentioned, the process uses about 7% memory instead
> of 25%.  Does this mean my job is too complex?  Should I add memory to
> my server?  I'm on IC 5.6.1.
> 

It also shows close to 100% CPU usage; you are not out of memory and it is
hardly touching your swap. So I don't know whether adding another GB of
memory will solve it. Or was this just a snapshot, and were memory and swap
steadily filling up?

And if the workaround works, why not use the workaround?
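
For reference, the per-category splitting I have in mind looks roughly like
the sketch below. This is not your code: the table names (categories,
products), the category column and the ml values are just assumptions for
illustration. Each inner search then only has to handle one category's rows
at a time instead of one enormous result set.

  [loop prefix=cat search="fi=categories/st=db/ra=yes/ml=1000"]
  [loop search="fi=products/st=db/sf=category/se=[cat-code]/ml=5000"]
  processing [loop-code]
  [/loop]
  [/loop]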

> - Grant
> 
> 
> >> I also still do not understand that for you it apparently works as:
> >> processing <long break> processing <long break> processing <long break>
> >>
> >> For me it 'thinks' and then puts the whole processing blob on screen
> >> at once.

You did not say how it works for you. When you run your loop, do you really
see 'processing <long break> processing <long break>'? Because for me it
does not go like that: it shows nothing on screen and then, wham, prints the
word 'processing' as many times as the match limit.
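
To be sure we are comparing the same thing, this is roughly the bare test I
ran (products is just my test table, substitute your own). As far as I know
Interchange interpolates the whole page before sending it, so I would expect
the output to arrive in one block either way:

  [loop search="fi=products/st=db/ra=yes/ml=10000"]processing
  [/loop]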


> >>
> >>
> >>> So it seems like IC is getting bogged down when there are too many
> >>> matches in a loop search.  Should that happen?  Does it indicate a
> >>> problem somewhere in my system?
> >>>
> >>> I tried many times to narrow the problem down to a certain section of
> >>> my "processing" code but I always got nowhere.  I have the problem in
> >>> two separate loop searches of two different tables.
> >>>
> >>> - Grant
> 