[ic] Large file uploads failing

Mike Heins <mike at perusion.com>
Tue Aug 12 20:30:21 UTC 2008


Quoting jimbo (jimbo at soundimages.co.uk):
> Gert van der Spoel wrote:
> 
> > Not totally sure but he is probably referring to the HammerLock directive:
> > http://www.interchange.rtfm.info/icdocs/config/HammerLock.html
> 
> I've monitored the upload using tcpdump and can confirm that, as far as
> Apache is concerned, the upload completes. Once the Interchange CGI link
> program is invoked, the Interchange process goes to 99% CPU, its memory
> usage bounces around like a yo-yo, and it eventually fails. Raising the
> Apache TimeOut directive value has no real benefit; eventually the
> process dies and I get a 500 error in any case.
> 
> I know this is not a network issue as I've set up a test bed on the 
> machine and all uploads are done using a small Perl script that 
> generates random large uploads.
> 
> > But you do not get any error messages/failures when doing the upload?
> 
> There are no messages in either the catalog error log or the Apache
> error log until the process fails, at which point Apache logs a
> "premature end of script headers" error.

What are you doing with the file? If you are parsing it, for instance by
putting it into a [tmp] or [seti] tag, then Interchange has to run the
whole thing through its page parser, and on a file that size that can
take a while....
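
If that's the case, one thing to try is handing the upload straight to
a file instead of pulling it through a tag. Something roughly like the
sketch below, from memory; the page, field, and file names are made up,
and you should check the value-extended docs for the exact options:

    <form action="[process]" method=POST enctype="multipart/form-data">
      <input type=hidden name=mv_todo     value=return>
      <input type=hidden name=mv_nextpage value=upload_done>
      <input type=file   name=upload_file>
      <input type=submit value="Upload">
    </form>

    [comment]
        On the upload_done page: write the upload straight to disk
        rather than interpolating it into [tmp]/[seti].
    [/comment]
    [value-extended name=upload_file outfile="tmp/upload_file.bin"]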

We would need to see the application code to go further. Also, how
much memory do you have?

If we can make it more efficient somehow and handle bigger sizes, that
would be nice, I guess. But HTTP upload of 20MB files is not really
the way I would choose to run my particular railroad.... 8-)
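
Incidentally, for anyone who wants to poke at this, the sort of
throwaway tester jimbo describes can be as small as the sketch below.
Untested, and the URL, field names, and size are placeholders:

    #!/usr/bin/perl
    # Write ~20MB of random bytes to a scratch file, then POST it as a
    # multipart/form-data upload and report the HTTP status.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request::Common qw(POST);

    my $size = 20 * 1024 * 1024;            # upload size in bytes
    my $file = '/tmp/random_upload.bin';    # scratch file

    open my $fh, '>', $file or die "open $file: $!";
    binmode $fh;
    for (1 .. $size / 1024) {
        print $fh pack 'C*', map { int rand 256 } 1 .. 1024;
    }
    close $fh;

    my $ua  = LWP::UserAgent->new(timeout => 600);
    my $res = $ua->request(POST 'http://localhost/cgi-bin/cat/process',
        Content_Type => 'form-data',
        Content      => [
            mv_nextpage => 'upload_done',
            upload_file => [ $file ],       # attach the file itself
        ],
    );
    print $res->status_line, "\n";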

-- 
Mike Heins
Perusion -- Expert Interchange Consulting    http://www.perusion.com/
phone +1.765.647.1295  tollfree 800-949-1889 <mike at perusion.com>

Prove you aren't stupid.  Say NO to Passport.


