[ic] Random PGP failures
NOW Website Coordinator wrote:
> I am having problems (I use IC 4.8.6) where sometimes the
> mv_credit_card_info is included with an order, sometimes not. The
> error log just says:
>
> [16/November/2002:12:16:43 -0500] store /cgi-bin/store/process.html
> PGP failed with status 0:
>
>
>
> which seems to be an unhelpful status code in PGP 6.5.8. A few
> times, the PGP message was written to pgptemp files and I was able to
> at least retrieve them from there, but the last dropped message was
> not written to a temp file.
>
>
> I've thought of switching to GPG, but people seem to have had the
> same problem with that program as well, at least as of June. I can't
> access the URL with some help that people reference:
>
> http://interchange.redhat.com/pipermail/interchange-users/200-March/018636.html
>
> nor can I find it on www.archive.org.
>
> I found the following thread in the archives (for some reason I can
> only get the Google archives to work; the Swish ones aren't working).
> Does anyone have more info about this? Should I find a traffic
> setting and set it to RPC?
>
> [ic] Random GPG failures
> Brian Kosick interchange-users@interchange.redhat.com
> Mon Jun 3 10:48:02 2002
>
> Just to let everyone know, this fix has indeed cured my GPG woes, at
> least so far; the site went all weekend without a single one. Quite an
> improvement versus 2-3 every day. A few things I noticed about the new
> conf: IC seems to take twice as much memory as before. The new
> processes are around 30-40MB/process, whereas previously they were
> ~20MB/process. Also, they seem to place less of a load on the CPU when
> they run.
>
> At 03:10 PM 5/31/02 -0400, you wrote:
> >At 01:02 PM 5/31/02 -0600, you wrote:
> >> > >Hi,
> >> > >
> >> > >I had somewhat of a similar problem, and went nuts. After posting
> >> > >on the Interchange list, I kindly got a response from Kevin Walsh,
> >> > >who pointed me to the following article. I resolved my issue:
> >> > >
> >> > >http://interchange.redhat.com/pipermail/interchange-users/200-March/018636.html
> >> >
> >> >Thanks Tin and Dan,
> >> >
> >> >I've implemented the RPC traffic mode and the MaxServers 0
> >> >suggestions. More specifically, I followed the article's solution.
> >> >(I liked the ifdef solution.) I'll watch to see if there are any
> >> >more errors, and let everyone know if this has been resolved.
> >>
> >>I just implemented it as well, but there is already an ifdef TRAFFIC
> >>section in the code, so you shouldn't have to add one. (I'm not sure
> >>of the effect if you do.) Just do a search from the top for TRAFFIC.
> >>You will find a line that says "Variable TRAFFIC low" (which you
> >>change from low to rpc), then the ifdef for "low" followed by one for
> >>"rpc". Just add the "MaxServers 0" line to the bottom of that ifdef
> >>section.
> >>
> >>By the way, doing an interchange -r every night reduces the
> >>frequency a lot but does NOT solve the problem, as I had posted
> >>earlier. Several days after that post, it happened again.
> >>
> >>Patrick
> >>
> >>_______________________________________________
> >>interchange-users mailing list
> >>interchange-users@interchange.redhat.com
> >>http://interchange.redhat.com/mailman/listinfo/interchange-users
> >
> >Oops Thanks for pointing that out, I removed the dup entry, and added
> >MaxServers to the appropriate place....
>
> _______________________________________________
> interchange-users mailing list
> interchange-users@icdevgroup.org
> http://www.icdevgroup.org/mailman/listinfo/interchange-users
This has been gone over many times; try searching the archives on "rpc".
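
For reference, the edit the quoted thread converges on looks roughly
like this in interchange.cfg. This is only a sketch: the thread names
just the TRAFFIC variable and the "MaxServers 0" line, so the ifdef
expressions and surrounding settings shown here are assumptions and
will differ between installs.

```
# interchange.cfg (sketch, based on the thread above)

# Change the traffic setting from "low" to "rpc":
Variable TRAFFIC rpc

ifdef @TRAFFIC =~ /low/i
    # existing low-traffic settings stay as shipped
endif

ifdef @TRAFFIC =~ /rpc/i
    # existing rpc-mode settings stay as shipped, then add
    # this line at the bottom of the section, as suggested:
    MaxServers 0
endif
```

After editing, restart Interchange so the new traffic mode takes
effect; as noted above, expect each process to use more memory in rpc
mode.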