The Batch ability of TSQLRestClientURI is very powerful. But when BatchSend fails (e.g. its result is not HTTP_SUCCESS), how should we deal with this situation properly?
I have tried simply calling BatchSend again, but that does not seem to work at all.
Any help is appreciated.
After reading TSQLRestClientURI.BatchSend, I think it would be better to free fBatchCurrent only after the request to the HTTP server has succeeded:
function TSQLRestClientURI.BatchSend(var Results: TIDDynArray): integer;
begin
  if self <> nil then
  try
    result := BatchSend(fBatchCurrent, Results);
  finally
    // conditionally free fBatchCurrent, so that there is a chance to call BatchSend again
    if result = HTTP_SUCCESS then
      FreeAndNil(fBatchCurrent);
  end else
    result := HTTP_BADREQUEST;
end;
@AB, what do you think about this? Will it work? Thanks in advance.
This is by design.
The TSQLRestBatch.PrepareForSending method can be called only once.
So even if you keep fBatchCurrent alive, you won't be able to reuse it to retry the operation.
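For illustration, a retry therefore has to build a brand new TSQLRestBatch and re-add its content. Here is a minimal sketch, assuming a hypothetical TSQLRecordSample table class and an already connected Client: TSQLRestClientURI, and keeping in mind the warning below that a blind replay may write the same data twice:

procedure SendWithRetry(Client: TSQLRestClientURI;
  const Values: array of TSQLRecordSample; MaxRetries: integer);
var
  Batch: TSQLRestBatch;
  Results: TIDDynArray;
  i, retry: integer;
begin
  for retry := 1 to MaxRetries do
  begin
    // a TSQLRestBatch can be sent only once: build a fresh one for every attempt
    Batch := TSQLRestBatch.Create(Client, TSQLRecordSample, 1000);
    try
      for i := 0 to high(Values) do
        Batch.Add(Values[i], true); // true = send the record content
      if Client.BatchSend(Batch, Results) = HTTP_SUCCESS then
        exit; // every operation was accepted
    finally
      Batch.Free;
    end;
    SleepHiRes(500); // crude back-off before the next attempt
  end;
  raise Exception.Create('BatchSend failed after all retries');
end;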
Thank you, @AB.
If it works that way by design, I think it is a flaw of TSQLRestBatch. Is there any solution for this? Data loss is always a big issue for customers.
First of all, a batch should preferably run on the server side, encapsulated in a service. Even though it is available on the client side, running it there is not a safe practice, for sure.
In case of a batch problem, you don't know what caused the error: some of the data may already have been written before the issue (for instance, if a transaction was committed in the middle of a big batch).
So it is not plain data loss... Even if you could replay the batch, you might corrupt your data, write the same information twice, or break some unique constraint...
As stated in the framework documentation, the safe way of implementing such data recovery is to handle it at the business logic level.
Always prepare a dual-phase commit, or a rollback/inverse sequence, if you expect such problems to occur.
In practice, if the batch runs on the server side, especially with a local SQLite3 engine (as we recommend and use in production), the data will be safely written, even faster than with a regular database.
With a local SQLite3 instance (you may define several in the same process), you don't face any network problems.
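As a minimal sketch of that advice, the batch can be encapsulated in a routine running on the server, directly against its local SQLite3 engine; TSQLRecordSample and StoreSamplesOnServer are assumptions, and such a routine would typically be the body of an interface-based service method:

// runs on the server: no network involved, only the local SQLite3 engine
procedure StoreSamplesOnServer(Server: TSQLRestServerDB;
  const Values: array of TSQLRecordSample);
var
  Batch: TSQLRestBatch;
  Results: TIDDynArray;
  i: integer;
begin
  // 1000 = AutomaticTransactionPerRow: a transaction is committed every
  // 1000 rows, which makes local SQLite3 writes very fast
  Batch := TSQLRestBatch.Create(Server, TSQLRecordSample, 1000);
  try
    for i := 0 to high(Values) do
      Batch.Add(Values[i], true);
    if Server.BatchSend(Batch, Results) <> HTTP_SUCCESS then
      raise Exception.Create('local batch failed');
  finally
    Batch.Free;
  end;
end;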
Thank you, AB. I really appreciate your help, and the wonderful mORMot. I am thinking of using a local SQLite3 database on the client side, and then submitting the changes to the server side via a Batch.
My remaining problem is that the HTTP request of BatchSend occasionally returns 408 (HTTP request timeout). It is a little strange, as the client and server are located on the same computer.
You need to speed up the SQLite3 engine.
See https://synopse.info/files/html/Synopse … ml#TITL_60
And create an automated transaction for each batch.
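For reference, a minimal sketch of both tunings, assuming a TSQLRestServerDB instance named Server and the same hypothetical TSQLRecordSample class (property names as in the 1.18 framework; see the linked documentation for details):

// speed up the local SQLite3 engine
Server.DB.Synchronous := smOff;        // do not wait for the OS flush on each write
Server.DB.LockingMode := lmExclusive;  // keep the database file locked by this process
// and let each batch create an automatic transaction, e.g. one COMMIT every 10000 rows
Batch := TSQLRestBatch.Create(Server, TSQLRecordSample, 10000);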