#1 2020-09-14 12:31:32

ComingNine
Member
Registered: 2010-07-29
Posts: 294

Inconsistency between receive timeout param and actual HTTP timeout ?

It seems that a TSQLHttpClient descendant retries a service call after a certain amount of time has elapsed, and that the client then exits with an error code of 505 after a further amount of time. However, there appears to be no consistent relationship between the receive timeout parameter passed when the HTTP client is created and the timing of the second service call or of the 505 exit. This makes it confusing to choose a proper receive timeout value.

The compilable programs can be viewed at the gist. The server program is a simple shell wrapper. The client program tries to run the long-running shell command " date ; sleep 600 ".
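For reference, the relevant part of the client setup presumably looks something like the sketch below (mORMot 1, TSQLHttpClientWinHTTP). The port '8080' and the exact position of the trailing timeout arguments are assumptions from memory, so please verify against mORMotHttpClient.pas in your tree; the ReceiveTimeout column of the table below refers to the value passed there, expressed in seconds.

    uses
      SynCommons, mORMot, mORMotHttpClient;

    var
      Model: TSQLModel;
      Client: TSQLHttpClientWinHTTP;
    begin
      Model := TSQLModel.Create([], 'root');
      // Timeout values are given in milliseconds; the trailing optional
      // parameters of the constructor are (from memory - please check
      // mORMotHttpClient.pas) SendTimeout, ReceiveTimeout, ConnectTimeout.
      // 43000 ms = default 30 s + 13 s, i.e. the second row of the table below.
      Client := TSQLHttpClientWinHTTP.Create('localhost', '8080', Model,
        {aHttps=}false, {aProxyName=}'', {aProxyByPass=}'',
        {SendTimeout=}30000, {ReceiveTimeout=}43000, {ConnectTimeout=}30000);
      try
        // ... call the long-running service method here ...
      finally
        Client.Free;
        Model.Free;
      end;
    end;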

ReceiveTimeout            second service call    client exit (505)
------------------------  ---------------------  -------------------
default (30 s)            180 s (3 m)            480 s (8 m)
default + 13 s (43 s)     258 s (4 m 18 s)       558 s (9 m 18 s)
default + 15 s (45 s)     270 s (4 m 30 s)       510 s (8 m 30 s)
default + 17 s (47 s)     282 s (4 m 42 s)       522 s (8 m 42 s)
default + 70 s (100 s)    300 s (5 m)            600 s (10 m)
default + 100 s (130 s)   450 s (7 m 30 s)       690 s (11 m 30 s)

Table: data collected for HTTP timeout & retry

As the table above shows, with the default parameters for creating the HTTP client, the second service call occurs after 180 s have elapsed. The first four rows suggest that the second service call happens at six times the constructor's ReceiveTimeout value, e.g. (30 + 13) × 6 = 258 s. However, the 5th and 6th rows do not follow this pattern.

Could you help to comment on how to set up the receive timeout properly?
Many thanks!

Last edited by ComingNine (2020-09-14 12:58:49)


#2 2020-09-14 16:39:01

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,183
Website

Re: Inconsistency between receive timeout param and actual HTTP timeout ?

A timeout on the server side is a huge problem and should never occur, except in case of major process instability.
If I understand correctly, your remote method takes a long time to execute, and you want to find out where the timeout occurs.

This is not the correct way of implementing services, since it will consume and block a thread for the whole duration of the process, so the server thread pool will quickly be exhausted.
A remote request should take at most half a second.
You need to either use WebSockets and the "Saga" kind of callback, or use a request ID and let the client poll the server, e.g. every second, until the process has finished.
Relying on the HTTP layer's ability to reconnect/retry is not a good idea - just switch from Windows to Linux, or from plain sockets to libcurl, and the timeout processing won't be the same.
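To illustrate the second suggestion, here is a minimal, hypothetical sketch of a polling-based contract using mORMot 1 interface-based services. The names ILongRunner, StartJob and TryGetResult are made up for illustration, and the server-side implementation (running the shell command in a background thread and keeping its output under the returned job ID) is omitted.

    type
      // hypothetical contract: StartJob returns at once, so every single
      // HTTP request stays well below the HTTP timeouts
      ILongRunner = interface(IInvokable)
        ['{7F2D2A30-5D0B-4E7A-9C2B-1A2B3C4D5E6F}']
        /// start the work in a background thread and return a job identifier
        function StartJob(const Command: RawUTF8): Int64;
        /// non-blocking poll: returns true and fills Output once the job is done
        function TryGetResult(JobID: Int64; out Output: RawUTF8): boolean;
      end;

    // client side - Client is a TSQLHttpClient instance created as usual,
    // with the interface registered on both ends via ServiceDefine()
    var
      Runner: ILongRunner;
      jobID: Int64;
      output: RawUTF8;
    begin
      if Client.Services['LongRunner'].Get(Runner) then
      begin
        jobID := Runner.StartJob('date ; sleep 600');
        // each poll is a short request, so default HTTP timeouts are fine
        while not Runner.TryGetResult(jobID, output) do
          Sleep(1000); // poll about once per second, as suggested above
        writeln(output);
      end;
    end;

With this pattern every HTTP round-trip finishes quickly, so the duration of the shell command no longer interacts with ReceiveTimeout or with the HTTP layer's retry behaviour.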

