Arnaud, please help me with an external database (the problem exists with Zeos DBO - I tested it with PostgreSQL and Firebird).
I create the external database via:
ConnectionDef := TSynConnectionDefinition.Create;
ConnectionDef.Kind := 'TSqlDBZeosConnectionProperties';
ConnectionDef.ServerName := 'zdbc:postgresql://myserver:myport';
{then set DatabaseName, User, Password as usual}
MyRest := TRestExternalDBCreate(MyModel, ConnectionDef, true, [regMapAutoKeywordFields, regClearPoolOnConnectionIssue]);
TRestServerDB(MyRest).Server.CreateMissingTables(0,[]);
{all the rest of the setup as usual}
Then I create a basic REST server with interfaces (as you recommend for fast transaction handling, I use the ORM only via REST interfaces on the clients):
BasicRest := TRestServerFullMemory.CreateWithOwnModel([TAuthGroup, TAuthUser], true, mykey);
BasicRest.Server.CreateMissingTables(0, []);
BasicRest.ServiceDefine(MySomeInterfaceClass, [MySomeInterface], sicShared);
BasicRest.ServiceDefine(MyOtherInterfaceClass, [MyOtherInterface], sicShared).SetOptions([], [optExecLockedPerInterface]);
BasicRest.ServiceDefine(MySyncInterfaceClass, [MySyncInterface], sicShared).SetOptions([], [optExecLockedPerInterface]);
So, everything as usual. Then the client connects and sends/receives data via the interfaces, etc.
If I use SQLite or MongoDB, everything works as expected. But when I use Zeos for the external DB, I am stuck in the following situation:
I have an ORM table with approx. 69946 rows in it. I receive some data from the client and process it for this ORM table in a server interface method, using one TRestBatch:
tmpBatch := TRestBatch.Create(MyServerORM.Orm, MyOrmClass, 0);
Then I fill the batch with values (approx. 300-1000 at once) - some with the Add method, some with the Update method (because the incoming data may be genuinely new, or just updated values for existing rows).
Then I start a transaction and send the batch to the ORM via
MyServerORM.Orm.BatchSend(tmpBatch, tmpDynArray)
and finally commit the transaction (or roll back, if BatchSend does not return HTTP_SUCCESS).
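In code, the send step looks roughly like this (a condensed sketch with simplified error handling; tmpDynArray is a TIDDynArray, the other names are from the snippets above):
// sketch: manual transaction around BatchSend, as described above
if MyServerORM.Orm.TransactionBegin(MyOrmClass) then
try
  if MyServerORM.Orm.BatchSend(tmpBatch, tmpDynArray) = HTTP_SUCCESS then
    MyServerORM.Orm.Commit
  else
    MyServerORM.Orm.RollBack;
except
  MyServerORM.Orm.RollBack;
  raise;
end;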
If I only add TOrms to the batch, or update just a few TOrms, the batch is sent (just the BatchSend call) very fast, about 24-50 ms for 900 TOrms in a batch.
But if I update a larger number of TOrms, the batch sending takes far too long, for example 5412 ms (345 TOrms added, 340 TOrms updated) or even 6568 ms (403 TOrms added, 409 TOrms updated). It is not the commit but the sending itself that takes so much time.
PostgreSQL or Firebird - it doesn't matter, it stalls the same way. mORMot is compiled with Zeos support, Zeos is compiled with mORMot 2 support, both from the latest trunk.
Is there any limitation in mORMot 2 for external databases? Or am I doing something wrong?
Offline
This is not as expected, for sure. We reached much higher numbers in our tests.
First, I guess you are messing with the transactions.
You should not create the transaction manually, but let the TRestBatch do it for you using the corresponding parameter: don't pass 0 but 100000 as AutomaticTransactionPerRow.
Perhaps the batch sending is slowed down because of nested transactions.
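Something along these lines (a sketch, reusing the variable names from your snippets; adjust the error handling to your needs):
// let the batch create its own transaction(s) - no manual
// TransactionBegin/Commit/RollBack around BatchSend
tmpBatch := TRestBatch.Create(MyServerORM.Orm, MyOrmClass, 100000);
try
  // ... fill the batch with Add()/Update() as before ...
  if MyServerORM.Orm.BatchSend(tmpBatch, tmpDynArray) <> HTTP_SUCCESS then
    raise Exception.Create('BatchSend failed');
finally
  tmpBatch.Free;
end;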
Then try our native mormot.db.sql.postgres PostgreSQL access, to see if the issue is linked to Zeos itself.
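For reference, switching the same TRestExternalDBCreate() setup to the native unit could look as follows - this is only a sketch: please double-check the exact Kind string resolution and the ServerName format expected by mormot.db.sql.postgres.
// assumption: mormot.db.sql.postgres is in the uses clause, so that the
// 'TSqlDBPostgresConnectionProperties' kind can be resolved by name,
// and ServerName accepts a plain host:port pair
ConnectionDef.Kind := 'TSqlDBPostgresConnectionProperties';
ConnectionDef.ServerName := 'myserver:myport';
// DatabaseName, User and password stay exactly as before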
Offline
Thank you, I'll try it tomorrow.
But why does it stall only when updating ORMs? Adding ORMs works very fast. Maybe I misunderstood the batch logic?
I call tmpBatch.Add(myOrmInstance, true) for a new ORM and tmpBatch.Update(myOrmInstance) for changed ORM values, and pass the whole instance in both cases (I use myOrmInstance := TMyOrmClass.CreateAndFillPrepare, then call myOrmInstance.FillOne and check myOrmInstance.ID to decide whether I need an Update (if myOrmInstance.ID > 0) or a new Add).
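A condensed sketch of that decision (the WHERE clause, field and variable names are placeholders, not my real code):
myOrmInstance := TMyOrmClass.CreateAndFillPrepare(
  MyServerORM.Orm, 'ExternalKey=?', [incomingKey]);
try
  if myOrmInstance.FillOne and (myOrmInstance.ID > 0) then
  begin
    // ... copy the incoming values into myOrmInstance ...
    tmpBatch.Update(myOrmInstance);     // existing row -> queue an update
  end
  else
  begin
    // ... copy the incoming values into myOrmInstance ...
    tmpBatch.Add(myOrmInstance, true);  // no match -> queue an insert (true = send content)
  end;
finally
  myOrmInstance.Free;
end;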
Offline
Can I mix Add and Update in one batch?
Offline
You can mix Add and Update in one batch, but the ORM won't be able to regroup them in bulk inserts/updates.
So it is best to group all Add() calls together, and all Update() calls together, if possible, within the same batch - see the sketch below.
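A sketch of that grouping, assuming the incoming records have already been split into two hypothetical lists, NewRecords and ChangedRecords:
// queue all inserts first, then all updates, so the ORM can emit
// grouped INSERT/UPDATE statements for the external backend
for i := 0 to high(NewRecords) do
  tmpBatch.Add(NewRecords[i], true);
for i := 0 to high(ChangedRecords) do
  tmpBatch.Update(ChangedRecords[i]);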
But my guess is that your performance problem comes mainly from the transactions.
Offline
But my guess is that your performance problem comes mainly from the transactions.
I start a transaction for my TOrm class immediately before calling BatchSend and commit/roll back immediately after calling BatchSend. There are no other transactions for this class at all.
I saw you recommend this way of handling transactions somewhere on this forum some time ago.
This stall does not happen with either SQLite or MongoDB, only with Zeos-powered databases.
I'll try the suggested solutions tomorrow and post the results, thank you!
Last edited by avavdoshin (2023-02-15 16:46:17)
Offline
Also, it works fast when the number of rows is lower (e.g. 10000-15000 rows).
As my table grows, updates take more and more time.
Offline
Missing indices? So the search during the update scans the whole table?
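If an index is indeed missing, one can be created server-side right after CreateMissingTables(), along these lines (a sketch; 'ExternalKey' is a placeholder for the field actually used in lookups):
// non-unique secondary index on the field used to find the rows to update
TRestServerDB(MyRest).Server.CreateSqlIndex(MyOrmClass, 'ExternalKey', false);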
Offline
Missing indices? So the search during the update scans the whole table?
It should search only via the ID, shouldn't it?
The DB was created by the mORMot engine via CreateMissingTables.
Offline
BTW, the automatic transaction has no effect for me, tested with Firebird.
Splitting Add and Update into different batches helps a lot, thank you.
I haven't tested mormot.db.sql.postgres yet.
Last edited by avavdoshin (2023-02-16 08:43:23)
Offline
Missing indices? So the search during the update scans the whole table?
Problem solved.
Arnaud, it was my mistake, sorry to bother you.
I send the data in a complicated way: I build my own batch on the client (select records from the LOCAL ORM SQLite database, split them into batches of 10000 records each, fill an in-memory REST storage with an in-memory table, then export its contents to a binary RawByteString and call a server interface method, passing this RawByteString).
On the server side I import this binary RawByteString back into another in-memory table in another in-memory REST storage and process it. Then I send the data to the SERVER ORM database.
I used the wrong model scheme, that's why it didn't work as expected. I have no idea why it didn't affect the MongoDB and SQLite backends, only the external tables created via TRestExternalDBCreate.
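In case it helps anyone else: a common way to avoid such model mismatches is to build the TOrmModel in a single function shared by the client and the server, something like this (a sketch; the table classes and root name are placeholders):
// same tables, in the same order, used on both sides
function CreateMyModel: TOrmModel;
begin
  result := TOrmModel.Create([TAuthGroup, TAuthUser, TMyOrmClass], 'root');
end;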
BTW, mormot.db.sql.postgres is much faster and more stable than the Zeos PostgreSQL implementation, thank you for the recommendation.
Last edited by avavdoshin (2023-02-16 13:09:32)
Offline
One of the goals of the mormot.db.sql.postgres development was to participate in the TechEmpower benchmark; as of today we are in the TOP 15 there. So yes - its overhead is minimal.
Offline