I am beginning to wonder whether this is in the realm of 'bad behaviour'.
I was intending to ask whether we could test with the latest version of WebBooster. Could you send the jar to the email address above?
We had terrible timings, ranging from roughly 180 to 240 seconds.
I then started to look at the TCP/IP settings and manipulated the TIME_WAIT parameters. Not sure if they're relevant yet, but more tests will follow.
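As an aside, a quick way to see whether TIME_WAIT sockets are actually piling up during a run is to count them in the output of `netstat -an` (or `ss` on Linux). A minimal sketch; the command name and flags vary by platform, and the parsing just looks for the state token on each line:

```python
import subprocess

def count_time_wait(netstat_output: str) -> int:
    """Count sockets sitting in TIME_WAIT in raw netstat-style output."""
    return sum(
        1
        for line in netstat_output.splitlines()
        if "TIME_WAIT" in line.split()
    )

# Hypothetical usage -- run netstat and count (flags differ per OS):
# out = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout
# print(count_time_wait(out))
```

Sampling this before and after a batch of the slow operations would show whether connections are being torn down and left lingering.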
Regarding your comment on keep-alive connections: I'll do some more investigation into the exact behaviour of the poorly performing (or should that be poorly reported?) operations. I know there are redirects following the completion of the operation, and there's a possibility that the connection is being torn down (or not, as the case may be).
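One thing worth checking on those redirect responses is the Connection header, since that (together with the HTTP version) determines whether the socket is torn down afterwards. A small sketch of the standard HTTP/1.x persistence rules; the header lookup here is deliberately naive (exact-case key, simple substring match) and purely for illustration:

```python
def connection_persists(http_version: str, headers: dict) -> bool:
    """Standard HTTP/1.x rule: HTTP/1.1 defaults to keep-alive unless the
    server sends 'Connection: close'; HTTP/1.0 defaults to close unless
    it sends 'Connection: keep-alive'."""
    conn = headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        return "close" not in conn
    return "keep-alive" in conn

# e.g. an HTTP/1.1 redirect that forces a teardown:
# connection_persists("HTTP/1.1", {"Connection": "close"})  -> False
```

If the redirect responses carry `Connection: close`, each redirected operation would pay a fresh TCP handshake, which could account for some of the latency being reported.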
While I'm not the greatest fan of LoadRunner, I would have to say it's pretty foolproof in this area, and there are many options that can be changed in its simulation of web clients. We could try some of these, but I'm not sure that's the best way to get to the bottom of the latency. My mind remains open, as always.
Everything looks good with WebBooster.
Any comments on the overhead/impact of Boosterdebug=1? Would it alter the tests that much? Would it show socket tear-downs?
There's not a lot of caching allowed in this system.
And I'm not sure whether regular browser users will see any different behaviour.