OK - so just try to replace ID with RowID in your query.
SQLite3 doesn't have any ID column in its virtual table.
It only knows RowID.
Does that mean I can't do:
SQL := 'SELECT * FROM Table1 LEFT JOIN Table2 ON Table1.LinkID = Table2.ID WHERE ...';
TSQLRestClientURI.URI('root', 'GET', @Resp, Nil, @SQL);
-- or --
R := TSQLTable2.Create(SQLRestClientURI, 'ID=?', [AnID]);
-- or even --
R := TSQLTable2.Create(SQLRestClientURI, AnID); // perhaps this will use RowID?
As long as the tables were externally registered?
Did you call MapField('ID', 'YourExternalPrimaryKeyIDField')?
Actually, I relied on mORMot itself to create all needed tables and fields.
OK - so just try to replace ID with RowID in your query.
SQLite3 doesn't have any ID column in its virtual table.
It only knows RowID.
But how can I make it "transparent"? I would like to use it with SQLite3 or MSSQL alternatively. Is there some way to overcome that? FYI, I use 'ORDER BY ID DESC' to get the most recent row.
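On the SQLite side, `ORDER BY rowid DESC` behaves like the ID ordering, since rowid is the engine's built-in row identifier. A minimal sketch of just that SQLite behaviour (Python's sqlite3 used purely for illustration; table and column names are made up to mimic the thread's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DataJournal (DonorKey INTEGER)")
con.executemany("INSERT INTO DataJournal (DonorKey) VALUES (?)",
                [(10,), (20,), (30,)])

# rowid grows with each insert here, so ordering by it descending
# returns the most recently inserted row first
latest = con.execute(
    "SELECT DonorKey FROM DataJournal ORDER BY rowid DESC LIMIT 1").fetchone()[0]
print(latest)  # 30
```

Note this only demonstrates the SQLite half of the problem: MSSQL has no rowid pseudo-column at all, which is exactly why the query cannot stay "transparent" as written.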
Please also ensure you have the latest version of the source code and the SQLite 3 static files, because your SQLite3 version seems outdated.
I browsed through the commit history for hours before posting, but didn't find any changes related to that.
I'd prefer not to update too frequently; sometimes regressions creep in.
Thanks for your replies,
Please follow the forum rules and don't post huge pieces of code or logs into the forum threads.
Apologies for that. I just thought the log could be helpful, but I didn't find any other way to attach it.
Hello,
I have a nicely working app with SQLite, but when I registered it to work with an external MSSQL DB (via ODBC) it throws an exception and I can't figure out the reason.
ESQLite3Exception {"ErrorCode":1,"SQLite3ErrorCode":"secERROR","Message":"Error SQLITE_ERROR (1) [SELECT DonorKey FROM DataJournal WHERE DonorSiteIdentifier=? AND TableIdentifier=? ORDER BY ID DESC LIMIT 1] using 3.28.0 - no such column: ID, extended_errcode=1"} [] at ...
It says "no such column: ID, extended_errcode=1" but the column ID is there.
Here is how it is registered:
procedure RegisterExtDBTables(AModel: TSQLModel; AConnString: AnsiString);
var
  // ModelProps was not declared in the original snippet;
  // declared locally here (it may be a unit-level variable in the real code)
  ModelProps: TODBCConnectionProperties;
  T: TSQLRecordClass;
  I: Integer;
begin
  ModelProps := TODBCConnectionProperties.Create('', AConnString, '', '');
  for I := Low(AModel.Tables) to High(AModel.Tables) do
  begin
    T := AModel.Tables[I];
    VirtualTableExternalRegister(AModel, T, ModelProps,
      'XT' + GetDisplayNameFromClass(T));
    AModel.Props[T].ExternalDB.MapAutoKeywordFields;
  end;
end;
...
ConnStr := Format('DRIVER={SQL Server Native Client 10.0};' +
'SERVER=%s;DATABASE=%s;UID=%s;PWD=%s;WSID=%s;APP=%s', [AServer, ADbName,
AUser, APwd, ServerDB.Site, ExtractFileName(ParamStr(0))]);
ExtModel := TSQLModel.Create(ClientModel);
RegisterExtDBTables(ExtModel, ConnStr);
...
Svr := TSQLRestServerDB.Create(ExtModel, 'epark_ext_work.ext.db3');
...
Here is the line where the exception arises:
OneFieldValue(TSQLDataJournal, 'DonorKey',
'DonorSiteIdentifier=? AND TableIdentifier=? ORDER BY ID DESC', [],
[ADId, TableIndex], Result);
I am on Win10 with Lazarus 1.9.0 r63034 FPC 3.1.1 i386-win32-win32/win64
Best regards,
Below is the log file.
https://gist.github.com/synopse/3bda4a9 … 07544d6512
@MacIn,
AFAIK, http.sys is well recognized for its high performance, and since THttpApiWebSocketServer (THttpApiServer) takes advantage of it, it should be a good choice for this kind of heavy-duty work.
My advice is to have good unit tests and to continue with caution, because IMHO the authors have put the emphasis on rapid expansion of functionality rather than on robustness ... Either way, the Delphi/Win32 target should be on the safe side.
BTW, is the app oriented to a certain established protocol, or does it just need to serve many subscribers on a custom-made one?
It is not off-topic, but plainly wrong, since I have said the same thing in my question.
Oops, you've got me there! Thanks for pointing out that I've said something "plainly wrong" but "the same thing in (my) question".
And yes, a pool of threads can handle anything; even a single thread (which can be seen as a trivial case of a pool) can do anything...
You said also:
After first request "falls " into sleep mode, I'm expecting 2nd, 3rd etc to arrive, because server is cloned 8 times. But subsequent requests do not trigger the .onHttpRequest until first thread awakens. Other threads(servers) just do not return from
Err := Http.ReceiveHttpRequest(fReqQueue,ReqID,0,Req^,length(ReqBuf),bytesRead);
before sleep of first thread is over, like it was single-threaded...
I presume you've meant the call in the method THttpApiServer.Execute.
Actually this can happen only when ReceiveHttpRequest() is called in a synchronous manner (it is) and no request has arrived in the specified queue.
See: https://docs.microsoft.com/en-us/window … ttprequest
I suppose you should check the way you generate the requests first.
Regards,
Hi MacIn,
Perhaps it will be slightly off the 'Synopse' topic, but you cannot expect to handle 10K simultaneous connections with a pool of threads; Win32 has a practical limit of about 2K threads per process. High-performance servers like http.sys and nginx all work with some sort of 'callbacks' to overcome this limitation; http.sys works with IO completion ports, which are similar to callbacks. The other issue is that threads also carry significant overhead in memory usage and context switching. So, if you have such tough requirements, you should consider some kind of queueing, callbacks, or perhaps protothreads/coroutines.
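To illustrate the point (this is not mORMot code, just a hypothetical sketch): an event loop can multiplex thousands of concurrent "connections" on a single OS thread, which a thread-per-connection model cannot do near the ~2K-thread limit:

```python
import asyncio

async def handle(conn_id: int) -> int:
    # stand-in for non-blocking I/O: yields control instead of blocking a thread
    await asyncio.sleep(0)
    return conn_id

async def serve_many(n: int) -> int:
    # n concurrent "connections", all served cooperatively by one OS thread
    results = await asyncio.gather(*(handle(i) for i in range(n)))
    return len(results)

handled = asyncio.run(serve_many(10_000))
print(handled)  # 10000
```

The same idea underlies IO completion ports: the kernel notifies a small pool of workers when an operation completes, instead of parking one thread per pending request.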
Regards,
@danielkuettner,
I apologize (if that is possible at all); in my country we also have a saying: "no teeth are examined on a gift horse". You're perfectly right - this single issue can be considered a "hair in the soup".
We really have to admire people who provide their work for the benefit of others.
Perhaps I should change my attitude.
On the other hand, everybody on this forum can easily check that I've been trying to work with (and benefit from) the framework for the past 4 years. And I really did hit some issues.
To put it another way:
@ab,
I want to make a strong contribution to the development, in exchange for the benefits I have received (and will receive in the future). But there are things that hinder me, and I believe not only me but many other potential contributors as well. To name a few:
- The mammoth size of the main source files; they are beyond my comprehension;
- My unawareness of the development model; AFAIK there is only a development version, no stable or LTS one;
- My ignorance of the framework's historical background; in every project there are things you cannot explain without such context; IMHO there are a lot of decisions taken for reasons I do not fully understand;
I sincerely believe that if you give a little push against the obstacles above, I can be of real benefit to the cause.
Best regards,
@pvn,
I know that.
...As a consequence, the nested AnotherRecord property won't be a true class instance, but one ID trans-typed as TSQLRecord. ...
But that kind of guard will only prevent zero IDs (uninitialized ones). I am stunned by TSQLRecord.GetID's dual use of the self reference/fID as a TID, and its guessing based on magic constants.
@ab,
Why are you talking in 'patterns' again? What about the 'common sense' pattern? What about the VMT^ part in my previous reply? Asserting that a nil reference is usable just isn't sane!
Regards,
Also worth considering: maybe there is another case where those 'if Self = nil' guard clauses have a reason to stay. mORMot is a framework, and the clients (the code that the users of mORMot write) might be messy multi-threaded code that causes 'Self = nil' to happen?
I'm afraid that may be the case. If regressions happened at some point in the past, or considering the other "trick" from TSQLRecord.GetID, ab may have resorted to such decisions.
@edwinsn:
Guard clauses have nothing to do with that. Or, to be more specific: yes, there should have been a guard clause before using a nil reference, not after it.
The irony is that I had intended to refer to Marco's post in order to support my point. Have you noticed the title?
"When Self in Nil... You Know You are in Trouble"
And, of course, any sort of comment can be expected on any post; I would point to Jolyon Smith's comment, the fifth after the one you cited.
And no, a class instance being nil is perfectly valid, and checking it into methods is a way to make code shorter (no need to test if something<>nil then) and safer, especially with several expressions in a if clause.
A class instance being nil is a non-existing instance. You can call a (static) instance method on a nil pointer, although it's not the sort of thing you want to do deliberately. When that happens, execution continues along quite happily until it needs to access the instance data, and then it all goes bang. It is anything but safer. Since the VMT^ is part of the instance data (also non-existing), calling any virtual method is out of the question; for example:
SynCommons.TObjectListHashed.Add(...); override;
has a similar check (Self<>nil), which is pointless - the call won't reach it.
Furthermore, the two things are not semantically equivalent, i.e. the life-cycle stage of an object versus the default value of an object property. Or, to put it differently, the behavior of a non-existing object (?!) in the case of a static method.
It looks more like an attempt to bypass and prevent corruptions, instead of solving the real problem, whatever it may be.
So you meant "is not needed for me".
No, I meant "is not needed for me, but it has implications on me".
You have perfectly the right to ignore this kind of code.
Just use the IDValue property instead of ID in your programs.
I guess I have that right. But how do I know what to ignore until I have read every single line of the source? More than 65K lines in mormot.pas alone, and growing...
Best regards,
Edit:
@edwinsn: It took me a while to write my reply to @ab, and then I noticed your reply in between. Now I'll go through it.
I don't understand why you stated "it doesn't work in practice".
Simply: it does not work. I don't use TSQLRecord properties. I use my own ID generating scheme (forced IDs), and the program behaves differently on different machines (MSWin).
A class instance being nil happens.
Yes, it happens. But it shouldn't be used. It is a programming error. A big one.
I used to think that GetID is just the getter of an ID property. Can somebody explain this black magic to me (because it didn't work in practice):
function TSQLRecord.GetID: TID;
begin
  {$ifdef MSWINDOWS}
  if PtrUInt(self)<PtrUInt(SystemInfo.lpMinimumApplicationAddress) then
    // was called from a TSQLRecord property (sftID type)
    // (will return 0 if current instance is nil)
    result := PtrUInt(self) else
    // was called from a real TSQLRecord instance
    result := fID;
  {$else}
  if PtrUInt(self)<$100000 then // rough estimation, but works in practice
    result := PtrUInt(self) else
  try
    result := fID;
  except
    result := PtrUInt(self);
  end;
  {$endif}
end;
Further:
function TSQLRecord.GetHasBlob: boolean;
begin
  if Self=nil then
    result := false else
    result := RecordProps.BlobFields<>nil;
end;
In both snippets, the author assumes that the instance can be nil (!?). The same goes for 250+ other lines across mORMot.pas. How is that possible?
Regards,
"Creating the indexes after the insertion is the fastest" is a known fact.
See e.g. https://www.whoishostingthis.com/compar … ulk-insert
This is how all DB engines work: maintaining the index during insertions has a cost (locking the table, fragmenting the disk....), whereas creating the index from scratch is a much faster process - see also https://stackoverflow.com/a/3688755/458259
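The claim is easy to measure; a rough sqlite3 sketch (Python used for brevity; an in-memory DB, so real-world disk effects are understated, and the timings are only indicative):

```python
import sqlite3
import time

def bulk_insert(index_first: bool, rows: int = 50_000) -> float:
    """Insert `rows` values, creating the index either before or after."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (v INTEGER)")
    if index_first:
        con.execute("CREATE INDEX idx_v ON t (v)")  # maintained on every insert
    start = time.perf_counter()
    con.executemany("INSERT INTO t (v) VALUES (?)",
                    ((i,) for i in range(rows)))
    if not index_first:
        con.execute("CREATE INDEX idx_v ON t (v)")  # built once, in one pass
    con.commit()
    elapsed = time.perf_counter() - start
    assert con.execute("SELECT COUNT(*) FROM t").fetchone()[0] == rows
    con.close()
    return elapsed

maintained = bulk_insert(index_first=True)
deferred = bulk_insert(index_first=False)
```

On typical runs, deferred index creation wins, though the margin depends on row count, data order, and storage.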
Hmm, maybe... It depends on the (order of the) data you're inserting. And I wonder: doesn't creating an index lock the table too?
Maintaining secondary indexes can hurt performance, especially in SQLite3 with a lot of data.
See https://stackoverflow.com/questions/15778716
In our code, we tend to try using only the primary RowID/ID, or a dedicated separated R-Tree index if needed.
What's the difference with good ol' plain fopen()/fseek()/fread()/fwrite()/fclose() then?
Lately I wonder what apps you are dealing with; they must be very problem-specific ... to dump tons of information without provision for fast retrieval ... DAQ systems?
Regards,
Using batches will not affect speed when DB.Synchronous := smOff. mORMot uses Int64 as the PK, so you also can't benefit from the "WITHOUT ROWID" optimization, https://www.sqlite.org/withoutrowid.html. I'm not sure that the @ab statement:
For instance, from a blank database, inserting the data, then creating the indexes after the insertion is the fastest.
is true; rather, I think not.
Duplicates with UNIQUE INDEX will always give you exceptions. After all, what is the purpose? Fast inserts or fast search?
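On that last point, the failure mode is deterministic; a minimal sqlite3 sketch (Python; the schema here is made up to mirror the thread's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Barcodes (ID INTEGER PRIMARY KEY, Barcode TEXT UNIQUE)")
con.execute("INSERT INTO Barcodes (Barcode) VALUES ('333')")

# a duplicate value on the UNIQUE column always raises,
# regardless of how fast the inserts are
try:
    con.execute("INSERT INTO Barcodes (Barcode) VALUES ('333')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```

So the trade-off stands as stated: a UNIQUE index buys search speed and integrity at the price of failed inserts on duplicates.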
@edwinsn,
Thanks for sharing your experience!
Currently I'm having a hard time debugging my application, and to be honest, my code also contributed to the issue. As I wrote above, there are two alarming things in the log: the first is the "TransactionBegin timeout" and the second is that spurious "cannot rollback" thing.
While the former is the one I'm complaining about, the latter was actually provoked by my code. The problem lies in the TSQLDataBase.TransactionBegin() method, which detects the active transaction and, instead of throwing an exception like "Can't nest transactions", surprisingly rolls back the transaction and begins a fresh one! IMHO this is extremely insidious, because it prevents me (the developer) from detecting that something is wrong with my code (yes, it is) and throws away all changes made so far without any clue.
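For comparison, SQLite itself refuses a nested BEGIN instead of silently rolling back; a quick sketch (Python sqlite3 in autocommit mode, so BEGIN/COMMIT are passed through verbatim):

```python
import sqlite3

# isolation_level=None: the driver does not manage transactions itself,
# so BEGIN/ROLLBACK go straight to the SQLite engine
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("BEGIN")
try:
    con.execute("BEGIN")  # nested: SQLite raises instead of rolling back
    nested_refused = False
except sqlite3.OperationalError:
    nested_refused = True
con.execute("ROLLBACK")
print(nested_refused)  # True
```

That raised error is exactly the signal a developer needs to notice the bug; swallowing it and restarting the transaction hides it.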
Normally, my application works without writing a log. To get these findings I had to set up a special debug environment with 5 client computers and one server, and to record mouse/keyboard macros to simulate operator activity on each of the clients. The macros are replayed at higher speed to stress the server. Even then, I must wait at least one hour for the issue to show up. Then I start digging through a hundred megabytes of log text (@ab: yes, concurrency is a difficult thing, I agree).
And not surprisingly, the syncing code between the DBs was involved one more time; the logical operation needs to update the target table and the modification log table in one batch. That is the relational connection blamed so furiously by Mr Bouchez.
Best regards,
Even a professional RDBMS like Oracle or MSSQL may/will fail. What if the storage server is down, or the network unavailable?
Right. MSSQL can fail in some spectacular ways. But in most cases the reason is returned back to the caller.
In order to handle such a failure, the caller must *know* the reason. The batch may fail because of a duplicate PK, a unique index, an I/O error, etc. Different failures have different resolutions. In the case of mORMot, I don't know the exact reason because it is not marshaled back correctly (I've got text like "batch broken because of ...").
Even if I know that acquiring the transaction failed, I can't replay the batch, because its contents are already cleared. I think it is quite trivial to preserve the contents, even if the batch is closed for further modifications, and just add a "BatchResent" method, for example. It is a matter of a few lines of code, IMHO.
Further, I think the constant of 2000 ms for the begin-transaction timeout is very optimistic and should be much larger than that. Also, some (exponential) backoff should be provided in the retry loop. We're dealing with an RDBMS with limited transaction capabilities and locking-based (not versioning-based) conflict resolution.
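Such a retry loop with exponential backoff could look like the following (a hypothetical helper, not mORMot API; `flaky` stands in for transaction acquisition under contention):

```python
import random
import time

def retry_with_backoff(op, retries=5, base=0.05, cap=2.0):
    """Call op() until it succeeds, doubling a jittered, capped delay between tries."""
    for attempt in range(retries):
        try:
            return op()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the real error to the caller
            delay = min(cap, base * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids lockstep retries

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("TransactionBegin timeout")  # simulated contention
    return "committed"

result = retry_with_backoff(flaky, base=0.001)
print(result)  # committed
```

The jitter matters with several clients: without it, they all retry at the same instants and keep colliding on the same lock.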
Of course, any patch improving the existing framework code is welcome.
How can I make or suggest a patch?
Edit:
Please explain the right way to contribute to the project. Should I post patches here with a proper explanation, or make a branch in a repository and make the changes there? Have you adopted any procedures for collaboration, etc.?
Thank you!
Regards,
In mORMot batches, transactions are used mainly for performance, not for atomicity.
I see. Since the SQLite transaction overhead is added to every single statement (each runs in its own implicit transaction), you use the side effect of explicit transactions to skip low-level writes in between records. Why then all the fuss about the "12.3.3. Unit Of Work pattern" in your documentation?
If your data model expects transactions for proper state, it means that you have foreign references between tables.
Correct. What's wrong about it?
We use the "document sharding" or "aggregate" model (see the doc about it) instead of a relational model. This aggregate is the bound of the transactions.
Why should I change the model of my application just to work around a limitation?
Anyway, why must we discuss software architecture matters when those "batches" exist in the library and obviously have a concurrency issue?
Kind regards,
If you don't follow this SOA-only design (which is the way to properly implement any client-server process), you may indeed be subject to various problems.
Wow! It sounds like the beginning of a tough discussion.
First of all, why will I be subject to various problems when I don't follow a particular pattern? Second, I don't think that alleviates the issue with the transactions. The same thing may happen on the server as well, taking into account that there are several threads running.
I just have an application which uses mORMot as a DB layer. On some occasions I want to use it standalone; on others, I want to use it from several computers with a separate server. It is not a complicated one, and I don't want to write a middle/app layer for it. Nor will it have more than 10 clients or be published on the Internet to face security issues. Still, it has to be consistent, hasn't it? How do I achieve that without transactions (a.k.a. batches in your terminology)?
Kind regards,
Hi,
Is there anybody who uses mORMot batches in a production environment and has occasionally experienced Client.BatchSend() failures when the load increases? My case is exactly that.
See: https://synopse.info/forum/viewtopic.php?id=4782
If somebody has a plausible solution, I'd very much appreciate it if they could share it here, because for me it is kind of a showstopper. Recently I've found that on an installation with only 4 client computers and one mORMot server, on some occasions the clients fail to execute their batches and the database becomes inconsistent.
Further, I've found that it is not possible to re-send a batch, even if the failure is detected. You have just one chance to send it; after that, the batch contents are deleted. In the case of failure, there is not much information about the cause (only a status code of 500); there is information in the server log, but it is misleading.
The problem (for the failure) lies actually in the way the server acquires the DB transaction and the small constant timeout given. And it is aggravated by the fact that not much information is given back to the caller.
Although mORMot claims to be "made for speed and versatility", my latest experience shows that it can't properly handle deployments even as small as four clients.
Once more, I'll very much appreciate if anybody can share some similar experience.
Best regards,
Hi Arnaud,
Considering the above findings and my older posts:
https://synopse.info/forum/viewtopic.ph … 505#p26505
My opinion is that it is not reliable to use mORMot batches for UoW, or whatever we call the "atomicity" of logical operations. As far as I can see, the batches are just a means to speed up mass inserts into the DB.
The only workaround that comes to my mind is to serialize the objects on the client side and to use a dedicated method (enclosed in a begin/end transaction) on the server side, i.e. duplicating the TSQLRestServer.EngineBatchSend() functionality.
I'll appreciate your feedback on that issue. Thank you!
Kind regards,
1.18.4878
Lazarus 1.9.0
FPC 3.1.1 SVN r63034
i386-win32-win32
TSQLHttpServer kind: useHttpSocket
Database.AcquireWriteMode := amBackgroundThread; {if it makes a difference}
When using TSQLClientWinHTTP.BatchSend(Batch) against a TSQLRestServerDB (separate processes), there are failures and strange entries in the log. They start to appear when the load increases slightly - 4 or 5 clients. There are no failures when the server is accessed by a single client.
The log goes that way:
EXC EORMBatchException {"Message":"TSQLServerMIST.EngineBatchSend: TSQLServerMIST.TransactionBegin timeout"} at 00470A60 0046C7EF 00464B1B 00466B5B 0048F86D 005B1DE4 005B1808 005B146F 004264E5 004110B1 76C1343D 777D9802 777D97D5
20181116 14425804 trace DataModel.TSQLServerMIST(003274D8) EngineBatchSend json=4 KB add=0 update=0 delete=0 @AccessTags
20181116 14425804 warn DataModel.TSQLServerMIST(003274D8) {"EORMBatchException(08578680)":{"Message":"TSQLServerMIST.EngineBatchSend: TSQLServerMIST.TransactionBegin timeout"}} -> PARTIAL rollback of latest auto-committed transaction data=["automaticTransactionPerRow",2147483647,"POST@AccessTags~~{"RowID":88926204,"Cron":135469656680,...
20181116 14425804 - 02.010.165
20181116 14425804 debug DataModel.TSQLServerMIST(003274D8) TSQLRestRoutingREST.Error: { "errorCode":500, "error": {"msg":"did break BATCH process","EORMBatchException":{ "ClassName":"EORMBatchException", "Address":"00470A60 ", "Message": "TSQLServerMIST.EngineBatchSend: TSQLServerMIST.TransactionBegin timeout" }} }
20181116 14425804 + DataModel.TSQLServerMIST(003274D8).EngineBatchSend inlen=4100
20181116 14425804 srvr DataModel.TSQLServerMIST(003274D8) 192.168.0.240 PUT client/Batch Write=500 out=242 B in 5.18s
20181116 14425804 - 05.182.903
What I can understand is that there is a TRANSACTION BEGIN timeout, and that leads to a partial rollback of auto-committed data (there is no such "auto-committed" data in this case), then "did break BATCH", which is not true since the batch has not even started executing (starting the transaction failed). All these messages are misleading.
It seems that the source of the exception is:
function TSQLRestServer.EngineBatchSend()
...
if GetTickCount64>timeoutTix then
raise EORMBatchException.CreateUTF8(
'%.EngineBatchSend: %.TransactionBegin timeout',[self,RunningRest]);
The conclusion is that the increased load prevents the simultaneous processes from starting a transaction for 2000 milliseconds - which is in turn another "magic" constant at mORMot.pas:44423. My previous post was about replaying the batches - but there seems to be no simple way of doing that, nor of increasing the timeout for acquiring the transaction. I can't figure out a workaround for the issue.
Another alarming entry in the same log:
20181116 14425607 EXC ESQLite3Exception {"ErrorCode":1,"SQLite3ErrorCode":"secERROR","Message":"Error SQLITE_ERROR (1) [Step] using 3.25.2 - cannot rollback - no transaction is active, extended_errcode=1"} at 004A2C8F 004A261C 004A1A36 004A1AA0 0049F4D9 0049DC2C 0049B020 00470A28 0046C7EF 00464B1B 00466B5B 0048F86D 005B1DE4 005B1808 005B146F 004264E5 004110B1
20181116 14425607 SQL SynSQLite3.TSQLDatabase(0036CAE0) 63.64ms MISTd.db3 ROLLBACK TRANSACTION;
I'm not using explicit transactions, only batches. It must be some kind of misplaced try/except construct in mORMot.pas, or otherwise stated - a concurrency issue.
Best regards,
@ab
C'mon, it should be trivial to implement. Perhaps preserving the last Data returned from TSQLRestBatch.PrepareForSending()?
Regards,
Hello,
I would like to re-send a batch on failure, but it seems impossible. Subsequent calls return 500.
Please advise.
Thanks!
There was no exception in TSQLRestServerDB.InternalBatchStop in line 1933.
This part of the code only occurs if there is a single POST/INSERT pending.
It will now raise an exception via https://synopse.info/fossil/info/4e42b0c81b
For multiple POST/INSERT, I guess there is already an ESQLite3Exception raised in the loop.
Actually my code goes like this:
Batch := TSQLRestBatch.Create(Client, Nil, MaxInt);
...
for {several times} do begin
Batch.Add(Barcode, True,True);
Batch.Add(Journal, True,True);
end;
...
Client.BatchSend(Batch);
This effectively calls the InternalBatchStart()/InternalBatchStop() pair for every single row in the batch (I didn't know about second-level batching back then). So the ESQLite3Exception fires exactly in the call to InternalExecute() in TSQLRestServerDB.InternalBatchStop, line 1933. InternalExecute() catches it, and previously it went unnoticed by InternalBatchStop(). Now it re-raises it, though under a different name, and this is an improvement.
Is it OK now?
It is getting better
Now I'm concerned about the semantics of the boRollbackOnError batch option. I have tried my code and now it correctly rolls back the whole batch, but I didn't include boRollbackOnError in the batch options!
That happened because the call to InternalBatchStop() at line 43738 raises EORMBatchException well before the option is checked at line 43790; no matter whether the option was included or not, it gets caught at line 43810 and everything is rolled back.
Maybe an additional try...except is needed, with some quick logic to mute the EORMBatchException in case boRollbackOnError was not included.
PS: I noticed that PerformAutomaticCommit at line 43714 also calls InternalBatchStop, with possibly the same effect.
PPS: About the Results[] array and multiple POSTs to the same table:
Browsing through the TSQLRestServerDB methods, the InternalBatchStart/InternalBatchStop pair and the enclosed MainEngineAdd, I saw that all INSERTs are merged into one single INSERT of the form:
INSERT INTO table (col1,col2,...)
VALUES
(v1, v2,...),
(v1, v2,...),
...
Bearing in mind that MainEngineAdd() will return the ID from the batch JSON (whether it was generated or supplied doesn't matter), Results[count] will be assigned the returned value and OK := true. With the ESQLite3Exception raised at the time of InternalBatchStop, the whole statement will fail, i.e. no rows will be inserted since the last InternalBatchStart, but Results[n..n+m] will hold some 'good looking' IDs which in fact are not the IDs of successfully inserted rows. Thus, the caller cannot rely on the contents of Results[].
Regards,
Not quite sure it helps. Results[Count] will actually hold an ID from EngineAdd() when mPOST. Also, as shown in the stack trace, using different tables will call InternalBatchStart()/InternalBatchStop(), and the exception is precisely in InternalExecute() (#3), called by InternalBatchStop() (#4):
#1 SQLITE3_CHECK(78104964, 19, 0xd85e84 'Step') at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5345
#2 TSQLREQUEST__STEP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5264
#3 TSQLRESTSERVERDB__INTERNALEXECUTE(0x63b1084 'INSERT INTO Barcodes (RowID,BarcodeNumber,Barcode,Range,PriceID,UserID,Active,DateUpload) VALUES (:(3628054892):,:(1000114):,:(''333''):,:(''123''):,:(615024892):,:(580044892):,:(1):,:(135432434243):);', true, 0x0, 0x0, 0x0, 0x0, 0x0, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1190
#4 TSQLRESTSERVERDB__INTERNALBATCHSTOP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1933
...
InternalBatchStop() does not return anything after the exception is caught in InternalExecute(). It has no impact on Results[].
The exception gets caught in the except clause in mORMotSQLite3.pas, line 1212. Then TSQLRestServer.EngineBatchSend() continues execution normally and even processes the following "POST@..." records before committing. The except clause in mORMot.pas, line 43800, has no chance, since the exception is not re-raised.
Here is (almost) the same stack dump with SynopseCommit.inc='1.18.4314':
#0 fpc_raiseexception at :0
#1 SQLITE3_CHECK(78104964, 19, 0xd85e84 'Step') at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5345
#2 TSQLREQUEST__STEP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5264
#3 TSQLRESTSERVERDB__INTERNALEXECUTE(0x63b1084 'INSERT INTO Barcodes (RowID,BarcodeNumber,Barcode,Range,PriceID,UserID,Active,DateUpload) VALUES (:(3628054892):,:(1000114):,:(''333''):,:(''123''):,:(615024892):,:(580044892):,:(1):,:(135432434243):);', true, 0x0, 0x0, 0x0, 0x0, 0x0, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1190
#4 TSQLRESTSERVERDB__INTERNALBATCHSTOP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1933
#5 TSQLSERVERMIST__INTERNALBATCHSTOP(<error reading variable>) at C:\AI\_\MIST\zoo\Model\DataModel.pas:2107
#6 TSQLRESTSERVER__ENGINEBATCHSEND(<incomplete type>, 0x634b604 '["automaticTransactionPerRow",2147483647,"POST@Barcodes', 0x634bb40, 0, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:43734
#7 TSQLRESTSERVER__BATCH(0x62600a8, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:42499
#8 TSQLRESTSERVERURICONTEXT__EXECUTESOABYMETHOD(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:40513
#9 TSQLRESTSERVERURICONTEXT__EXECUTEORMWRITE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:40981
#10 TSQLRESTSERVERURICONTEXT__EXECUTECOMMAND(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:40397
#11 TSQLRESTSERVER__URI({URL = 0x4948ab4 'client/Batch', METHOD = 0x495225c 'PUT', INHEAD = 0xd548c0 'RemoteIP: 127.0.0.1', INBODY = 0x634b604 '["automaticTransactionPerRow",2147483647,"POST@Barcodes', OUTHEAD = 0x0, OUTBODY = 0x0, OUTSTATUS = 400, OUTINTERNALSTATE = 73, RESTACCESSRIGHTS = 0xbe10a0, LOWLEVELCONNECTIONID = 400, LOWLEVELFLAGS = [LLFSECURED]}, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:42088
#12 TSQLRESTSERVERNAMEDPIPERESPONSE__INTERNALEXECUTE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:44341
#13 TSQLRESTTHREAD__EXECUTE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:37236
#14 CLASSES$_$TSTRINGLIST_$__$$_SORT at :0
#15 ?? at :0
The reason I didn't specify a table in TSQLRestBatch.Create(Client, Nil, MaxInt) is that the batch actually contains 'POST@...' entries for several tables. I've simplified the example to post it in the forum.
Regards,
@ab
Do you consider this issue a negligible one? I believe it shows the broken 'atomicity' of the batch processing. Given the excerpt from the mORMot SAD Document, "12.3.3. Unit Of Work pattern":
The ability to handle several TSQLRestBatch classes in the same time will allow to implement the Unit Of Work pattern. It can be used to maintain a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems, especially in a complex SOA application with a huge number of connected clients.
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code.
The responsibilities of the Unit of Work are to:
- Manage transactions;
- Order the database inserts, deletes, and updates;
- Prevent concurrency problems;
- Group requests to maximize the database performance.
The value of using a Unit of Work pattern is to free the rest of your code from these concerns so that you can otherwise concentrate on business logic.
and some Web definition of UOW:
The Unit of Work pattern is used to group one or more operations (usually database operations) into a single transaction or “unit of work”, so that all operations either pass or fail as one.
Then I think TSQLRestBatch (with its partial commit) is not what a Unit of Work is expected to be. It is just a way to speed up database operations, at least for SQLite; I'm not quite sure about other RDBMS.
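For contrast, the all-or-nothing semantics described in the UoW definition above look like this at the SQL level (a Python sqlite3 sketch reusing the thread's Barcodes example; the schema is simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
con.execute("CREATE TABLE Barcodes (Barcode TEXT UNIQUE)")

con.execute("BEGIN")
try:
    for code in ("111", "222", "333", "333"):  # the last row violates UNIQUE
        con.execute("INSERT INTO Barcodes (Barcode) VALUES (?)", (code,))
    con.execute("COMMIT")
except sqlite3.IntegrityError:
    con.execute("ROLLBACK")  # all-or-nothing: the first three rows are discarded too

remaining = con.execute("SELECT COUNT(*) FROM Barcodes").fetchone()[0]
print(remaining)  # 0
```

Under a partial commit, by contrast, some of the earlier rows would survive the failure, which is exactly the inconsistency being discussed.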
I'll appreciate at least some feedback on that issue. Thank you for your time and attention!
Kind regards,
Hi there!
I am sending a batch to a TSQLRestServerDB:
Batch := TSQLRestBatch.Create(Client, Nil, MaxInt);
...
Batch.Add(Barcode, True,True); // several times
...
Client.BatchSend(Batch);
The corresponding batch content (JSON) is as follows:
'["automaticTransactionPerRow",2147483647,
"POST@Barcodes",{"RowID":3625024892,"BarcodeNumber":1000111,"Barcode":"111","Range":"2341","PriceID":615024892,"UserID":580044892,"Active":1,"DateUpload":135431771610},
"POST@Barcodes",{"RowID":3625034892,"BarcodeNumber":1000112,"Barcode":"222","Range":"2341","PriceID":615024892,"UserID":580044892,"Active":1,"DateUpload":135431771610},
"POST@Barcodes",{"RowID":3625044892,"BarcodeNumber":1000113,"Barcode":"333","Range":"2341","PriceID":615024892,"UserID":580044892,"Active":1,"DateUpload":135431771610},
"POST@Barcodes",{"RowID":3625054892,"BarcodeNumber":1000114,"Barcode":"333","Range":"2341","PriceID":615024892,"UserID":580044892,"Active":1,"DateUpload":135431771610}]'
The table 'Barcodes' has a unique index on the 'Barcode' field. Note that in the 4th post the field 'Barcode' has the same value as in the 3rd post. The sqlite3 engine throws an exception (as expected):
#0 fpc_raiseexception at :0
#1 SQLITE3_CHECK(77908916, 19, 0xd6e5cc 'Step') at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5330
#2 TSQLREQUEST__STEP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SynSQLite3.pas:5249
#3 TSQLRESTSERVERDB__INTERNALEXECUTE(0x5fbaa1c 'INSERT INTO Barcodes (RowID,BarcodeNumber,Barcode,Range,PriceID,UserID,Active,DateUpload) VALUES (:(3625054892):,:(1000114):,:(''333''):,:(''2341''):,:(615024892):,:(580044892):,:(1):,:(135431771610):);', true, 0x0, 0x0, 0x0, 0x0, 0x0, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1211
#4 TSQLRESTSERVERDB__INTERNALBATCHSTOP(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMotSQLite3.pas:1958
#5 TSQLSERVERMIST__INTERNALBATCHSTOP(<error reading variable>) at C:\AI\_\MIST\zoo\Model\DataModel.pas:2107
#6 TSQLRESTSERVER__ENGINEBATCHSEND(<incomplete type>, 0x5f3be64 '["automaticTransactionPerRow",2147483647,"POST@Barcodes', 0x5f3c3a0, 0, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:42945
#7 TSQLRESTSERVER__BATCH(0x5e500a8, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:41717
#8 TSQLRESTSERVERURICONTEXT__EXECUTESOABYMETHOD(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:39736
#9 TSQLRESTSERVERURICONTEXT__EXECUTEORMWRITE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:40204
#10 TSQLRESTSERVERURICONTEXT__EXECUTECOMMAND(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:39620
#11 TSQLRESTSERVER__URI({URL = 0x4918934 'client/Batch', METHOD = 0x4921f5c 'PUT', INHEAD = 0xd41838 'RemoteIP: 127.0.0.1', INBODY = 0x5f3be64 '["automaticTransactionPerRow",2147483647,"POST@Barcodes', OUTHEAD = 0x0, OUTBODY = 0x0, OUTSTATUS = 400, OUTINTERNALSTATE = 73, RESTACCESSRIGHTS = 0xbce280, LOWLEVELCONNECTIONID = 404, LOWLEVELFLAGS = [LLFSECURED]}, <error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:41306
#12 TSQLRESTSERVERNAMEDPIPERESPONSE__INTERNALEXECUTE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:43552
#13 TSQLRESTTHREAD__EXECUTE(<error reading variable>) at C:\AI\IDE\fpcupdeluxe\ccr\mORMotFPC\SQLite3\mORMot.pas:36529
#14 CLASSES$_$TSTRINGLIST_$__$$_SORT at :0
#15 ?? at :0
But this doesn't result in a (batch) transaction rollback, which IMHO is a serious flaw; instead it commits (!?) the first 3 inserts. I am using a batch intentionally for the enclosing transaction. At this point in my program I'm doing a data import and I want to mass-insert several thousand records, but without risking inconsistency in the cited column. How can I ensure that the whole batch will be committed or rolled back at once? Am I missing something?
Using: Lazarus 1.7 i386-win32-win32/win64, FPC 3.1.1, SynopseCommit.inc='1.18.3977'
Best regards,
Yes, commenting out 'Terminate; WaitFor;' worked out... Obviously the thread finishes without being terminated. What a surprise!
And what was that unfortunate fix of TSynThread.Destroy for?
Edit: Found this: https://synopse.info/forum/viewtopic.ph … 188#p19188
It seems that the cleanup procedure is not as good as it should be. IMHO forced suspend/resume/terminate of a thread is bad practice. Furthermore, TThread.Terminate has no purpose other than to inform TThread.Execute that it should complete its work. Calling it has no effect when Execute isn't running.
Hi ab,
the mORMot server application that I'm currently developing stops responding after a few days. What I actually observe is that the number of threads grows to more than 1800 and then TSynThread.Create() fails; the number of threads never decreases. I'm creating the server in the following way:
Server := TSQLHttpServer.Create(PublishPort, [Database], '+', useHttpSocket);
The effect is not observed when:
Server := TSQLHttpServer.Create(PublishPort, [Database], '+', useHttpApi);
The OS is Win10, Lazarus v1.7 2017-6-17, FPC 3.1.1 SVN Revision 52654, SynopseCommit.inc '1.18.3447'.
Lazarus+mORMot installed with fpcupdeluxe.
EDIT: Same with SynopseCommit.inc '1.18.3680'
Any idea what might be the cause?
Regards,
On July 7, 2016 all tests went OK using the newpascal.org distribution: http://synopse.info/forum/viewtopic.php … 989#p20989
Target was win32.
I guess the FPC version is not supported, or not properly configured.
See what Alfred obtained: http://blog.synopse.info/post/soaarmfpc
Any FPC > 2.6.4 breaks all my string operations, didn't it break yours?
And your automated test suite gives you a bit of false confidence...
I beg you pardon!
Can't even compile it on Intel 64-bit Linux FPC.
Define "even work", please!
@ab
I'm sure my experience is not of big concern, and you may skip this, but after almost 2 years of using mORMot the recap is as follows:
I'm stuck with Laz 1.4.4, FPC 2.6.4. Newer FPC versions break my projects by themselves (you probably know why).
You're pushing hard for the FPC IF RTTI; your new revisions are not compatible with 2.6.4 (but you still keep them compatible with D7).
You're fixing bugs in the main trunk (AFAIK) together with the improvements; there is no separate branch for LTS.
Since a lot of effort was made to make it quicker, a lot of inline x86 assembly was put in the way. There is a slight chance of making it work on another OS with the same arch, and no chance on a different processor. There are full-blown Linux servers on archs other than x86, a lot more stable and secure.
There are other subtle nuisances from the speed optimizations which make me believe that mORMot will not be easily ported, if ported at all.
And that is why I said "It is to my regret, too".
IMHO mORMot should have some stable REST client/server subset which is not tied to the interfaces. Method services are also useful, right? Why not a CrossPlatform server?
Regards,
@ab
No, that is not the issue, sorry ... It's just full of tricks which don't work on FPC. It can't work reliably even on the same OS/arch (Win/x86). It is to my regret, too.
Actually, you should port mORMot to FPC, not vice versa.
Regards,
Dear ab,
Perhaps I'm adding an unnecessary fuss here, but ... the FPC wiki says: String is the same as AnsiString (with {$H+}), and AnsiString is actually AnsiString(CP_ACP). In my case CP_ACP<>CP_UTF8, as it probably is in yours. One of my projects is already almost broken because of the new FPC way of dealing with strings. In that project I'm using mORMot, and since it requires FPC 3.0+, my forms got weird messages with half the letters readable and the other half cryptic.
Anyway, isn't it better for RawUTF8 to be AnsiString(CP_NONE), to bypass the internal conversions (despite its name)?
Hello,
I've tried to use FPC in the most neutral way that comes to my mind - through CrossPlatform units, but then I found the following declaration in the SynCrossPlatformREST.pas:
type
/// alias to share the same string type between client and server
RawUTF8 = string;
On the other hand, in SynCommons.pas:
{$ifdef HASCODEPAGE}
RawUTF8 = type AnsiString(CP_UTF8); // Codepage for an UTF8 string
{$else}
RawUTF8 = type AnsiString;
{$endif}
Although I know how to adjust the compiler directives to match both declarations, I am a bit confused about which is the right way of using FPC for making a mORMot client. Judging by this forum, FPC is not quite ready to be a full-blown mORMot server/client.
Please advise!
Hello,
I'm trying with SynCrossPlatformREST.TSQLRestClientHTTP.CallBackGetResult() to consume a method defined service, but I can't find a way to check the returned HTTP status. Previously I used the LastErrorCode property of the mORMot.TSQLRestClientURI, but there is no corresponding property into the cross platform unit. How can I check against the HTTP return codes?
Best regards,
There is an overload that returns Int64.
Besides, I don't use Delphi for daily work, just Delphi7 to check whether some mORMot bugs are FPC specific or general.
ISmartPointer<T> is not an option.
Hi ab,
The following code works on Delphi7 but fails with SIGSEGV on FPC 3.1.1
{$MODE DELPHI}
uses
SynCommons;
var
VPay: Variant;
S: String;
begin
S := 'pos';
VPay := _Obj([]);
if not VPay.Exists(S) then
DocVariantData(VPay).AddValue(S, _Obj(['Count', 0, 'Sum', 0]));
VPay.Value(S).Count := VPay.Value(S).Count + 1;
VPay.Value(S).Sum := VPay.Value(S).Sum + Int64(1000);
end.
The fault is on DocVariantData.AddValue(). What is wrong?
@danielkuettner
Thanks, that leads to:
Rest.OneFieldValue(TSQLTable, 'COUNT(*)', 'Status=?', [], [requiredStatus], Count);
And no need to destroy anything. Nice!
In fact, there is a special handling in TSQLRest.MultiFieldValue:
if (n=1) and IdemPChar(pointer(FieldName[0]),'COUNT(*)') then ...
(found with the debugger )
Found a workable solution (requires TSQLRestClient):
with Rest.ListFmt([TSQLTable], 'COUNT(*)', 'Status=%', [requiredStatus], []) do
begin
Result := GetAsInteger(1, 0);
Free;
end;
Anything simpler?
I would like to execute a simple query of the form:
SELECT COUNT(*) FROM Table WHERE <some-criteria>
assuming that the Table is included in the model and the criteria will be something following the conventions, like:
T := TSQLTable.Create(Rest, 'Status=?',[requiredStatus]);
but I want to receive just the count of records satisfying the criteria.
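In other words, the goal is a parameterized scalar query. Here is a sketch using Python's sqlite3 module as a language-neutral illustration (the Orders table and the Status column are purely hypothetical, not mORMot code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (ID INTEGER PRIMARY KEY, Status INTEGER)")
con.executemany("INSERT INTO Orders (Status) VALUES (?)", [(1,), (2,), (1,)])

required_status = 1
# COUNT(*) returns a single scalar, so no row set needs to be freed afterwards
count = con.execute("SELECT COUNT(*) FROM Orders WHERE Status=?",
                    (required_status,)).fetchone()[0]
print(count)  # 2
```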
Thanks,
Hi ab,
There is a bug in the TSQLRestBatch.Update() method, preventing it from working with IDs bigger than 2^31-1.
function TSQLRestBatch.Update(Value: TSQLRecord;
const CustomFields: TSQLFieldBits; DoNotAutoComputeFields: boolean): integer;
var Props: TSQLRecordProperties;
FieldBits: TSQLFieldBits;
ID, tableIndex: integer;
begin
result := -1;
if (Value=nil) or (fBatch=nil) then
exit;
ID := Value.IDValue;
if (ID<=0) or not fRest.RecordCanBeUpdated(Value.RecordClass,ID,seUpdate) then
exit; // invalid parameters, or not opened BATCH sequence
The local variable ID is declared as integer, while Value.IDValue is of type TID (an Int64). Bigger values overflow, and the subsequent if treats the result as a negative value.
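The truncation is easy to demonstrate outside Pascal; here is a sketch in Python using ctypes to emulate the 32-bit signed local variable, with an example ID above 2^31-1 (the same magnitude as the RowID values quoted elsewhere in this thread):

```python
import ctypes

big_id = 3625054892                      # a TID (Int64) value above 2^31-1
as_int32 = ctypes.c_int32(big_id).value  # what the 'integer' local ends up holding
print(as_int32)  # -669912404, so 'if (ID<=0)' exits and the update is skipped
```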
Regards,
Another little annoyance: the file SynVirtualDataset.pas is referred to in the uses clause of another unit as SynVirtualDataSet (note the difference in the case of the 'S'). Since Linux file names are case-sensitive, this caused a compile error. It passed after I renamed the file with the (in)correct case. The strangest thing is that I have since reverted the file name, but now I can't reproduce the error, so I can't tell you the name of the referring unit.
It is okay now. Thanks again!