Solved completely in 2.1.5812, thank you!
Hello, Arnaud.
I updated mORMot from 2.1.5794 to 2.1.5808 for my server app (FPC 3.3.1 / Win64), and now the server's CPU consumption is 25-30% after any client connects.
With the same project on mORMot 2.1.5794, the server's CPU consumption is about 3-6%.
Can you please check what is broken in the update?
Now BatchUpdate() works with TRecordVersion, at least in the newly introduced regression tests.
Thank you so much!
I'll try ASAP
It looks like BatchAdd and BatchDelete update the Version field but BatchUpdate does not.
Arnaud, can you please try to fix this issue?
Not sure what you are trying to accomplish with the Version field. I use it for master/slave replication without any issues. I never checked the numbers, only the result of the replication.
It is difficult to help you with this fragment of code. Please provide a working example. But please follow the forum rules and do not paste long code directly into the forum.
I do not understand your batch approach. Please check the documentation, chapter 12.3. The batch process is pretty straightforward and works for me.
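Something as simple as this pattern works for me (a quick sketch from memory, not your code; TMyOrm and Rest: IRestOrm stand for your own class and server):

var
  batch: TRestBatch;
  rec: TMyOrm;
begin
  batch := TRestBatch.Create(Rest, TMyOrm, 1000); // automatic transaction every 1000 rows
  try
    rec := TMyOrm.Create;
    try
      rec.Name := 'some value';        // any published field of your class
      batch.Add(rec, {SendData=}true); // queue an insert
    finally
      rec.Free;
    end;
    if Rest.BatchSend(batch) <> HTTP_SUCCESS then
      writeln('BatchSend failed');
  finally
    batch.Free;
  end;
end;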
Hello and thanks for your answer.
First of all, sorry for my very lame English.
I use the Version field for periodic replication from my master database to slave database(s) (from the read-write master server to read-only servers).
I use batches because the data in srcOrm comes from other server(s); I process it and store the corresponding data in the master ORM. In a batch of data from srcOrm, some records may result in a dstOrm entry being added and some in a dstOrm entry being updated, but I need to process all of them in one transaction, for many reasons.
Here is a working example with results in csv: https://gist.github.com/avavdoshin/fd1d … f8f0638691
Record versions in this example don't change for updates, so those changes will not be replicated to the slave database.
if dstOrm.ID > 0 then
  tmpAddBatch.Delete(dstOrm.IDValue);
tmpAddBatch.Add(dstOrm, true);
Thanks for the idea, but I have two problems with this solution:
1. For some classes I need to keep the ID of dstOrm if it already exists, for referential integrity.
2. I synchronize my master db with the slave db periodically, so I have this problem: https://github.com/synopse/mORMot2/issues/147
(synchronization doesn't work if I create a record on the master db, delete it, and then synchronize with the slave db, because it tries to delete a record which doesn't exist in the slave db yet)
I use TRecordVersion. Since I haven't noticed anything negative yet, I haven't looked for a bug either. Can you give us any more information, such as:
Delphi 11.3, mORMot2 GitHub commit 5186
Or a whole example, as Martin Doyle has done here. Then it is easier to search for the error.
Thomas
Hello Thomas, thank you for your answer.
FPC 3.3.1, x86_64, Windows 10 (CodeTyphon 8.0), mORMot2 GitHub version 2.1.5479.
The whole example is too complicated; here is a simplified part of the code:
type
  TFSDBGlobalStates = class(TOrm)
  private
    { some attributes }
    FChangeTimeStamp: TUnixMSTime;
    FVersion: TRecordVersion;
  public
    class procedure InitializeTable(const Server: IRestOrmServer;
      const FieldNames: RawUtf8; Options: TOrmInitializeTableOptions); override;
  published
    { some properties }
    property ChangeTimeStamp: TUnixMSTime read FChangeTimeStamp write FChangeTimeStamp;
    property Version: TRecordVersion read FVersion write FVersion;
  end;
function FindOrm(aSrcOrm: TOrm; ormClass: TOrmClass): TOrm;
begin
  { some code }
  if ormClass = TFSDBGlobalStates then
    result := ormClass.CreateAndFillPrepare(LocalDBOrm.Orm,
      'ObjectGUIDid = ? and SourceGUIDid = ? and StateCodeID = ?',
      [FindGlobalDictID(SrvId, (aSrcOrm as TFSDBLocalStates).ObjectGUIDid, false),
       FindGlobalDictID(SrvId, (aSrcOrm as TFSDBLocalStates).SourceGUIDid, false),
       FindGlobalDictID(SrvId, (aSrcOrm as TFSDBLocalStates).StateCodeID, false)])
  { some code }
  else
    result := nil;
  if Assigned(result) then
    try
      result.FillOne;
    except
      FreeAndNilSafe(result);
    end;
end;
procedure CopyOrmClass(aSrcOrm: TOrm; var aDstOrm: TOrm);
begin
  { some code }
  if aSrcOrm.ClassName = 'TFSDBLocalStates' then
  begin
    { some code }
    (aDstOrm as TFSDBGlobalStates).ChangeTimeStamp :=
      (aSrcOrm as TFSDBLocalStates).ChangeTimeStamp;
  end;
  { some code }
end;
procedure MainProc;
var
  srcOrm, dstOrm: TOrm;
  tmpAddBatch, tmpUpdBatch: TRestBatch; // were missing from the var section
  sendResult: integer;
begin
  tmpAddBatch := TRestBatch.Create(LocalDBOrm.Orm, TFSDBGlobalStates, 0);
  tmpUpdBatch := TRestBatch.Create(LocalDBOrm.Orm, TFSDBGlobalStates, 0);
  { some code where I get srcOrm }
  try
    while srcOrm.FillOne do
    begin
      dstOrm := FindOrm(srcOrm, TFSDBGlobalStates);
      try
        if not Assigned(dstOrm) then
          dstOrm := TFSDBGlobalStates.Create;
        CopyOrmClass(srcOrm, dstOrm);
        if dstOrm.ID > 0 then
          tmpUpdBatch.Update(dstOrm)
        else
          tmpAddBatch.Add(dstOrm, true);
      finally
        FreeAndNilSafe(dstOrm);
      end;
    end;
  finally
    FreeAndNilSafe(srcOrm);
  end;
  { some code }
  if (tmpAddBatch.Count > 0) or (tmpUpdBatch.Count > 0) then
  begin
    { some code }
    if tmpAddBatch.Count > 0 then
      sendResult := LocalDBOrm.Orm.BatchSend(tmpAddBatch)
    else
      sendResult := HTTP_SUCCESS;
    if sendResult = HTTP_SUCCESS then
      if tmpUpdBatch.Count > 0 then
        sendResult := LocalDBOrm.Orm.BatchSend(tmpUpdBatch)
      else
        sendResult := HTTP_SUCCESS;
    { check for unsuccessful send removed for this example }
    if sendResult = HTTP_SUCCESS then
      LocalDBOrm.Orm.Commit;
  end;
end;
Hello, all!
Is anybody else using TRecordVersion in mORMot 2?
I'm stuck with TRecordVersion handling and batch updates.
SQLite db via sqlite3static.
Batch add works perfectly: TRecordVersion increments as expected.
But for batch updates via TRestBatch, only a few of the TOrms update their TRecordVersion field; for the others the TRecordVersion field stays unchanged. I can see that TRecordVersion doesn't change because I store a date/time stamp in my TOrm class and change it for every update operation.
Maybe I'm doing this the wrong way? First I load my TOrm from the DB via CreateAndFillPrepare, then I modify some fields and add the modified TOrm to the batch via TRestBatch.Add(TOrm, true) (repeating this for all the TOrms I need to change). Then I use BatchSend to send the changes to the db.
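To verify, I re-read rows after BatchSend, roughly like this (a sketch; TMyVersionedOrm with its Version: TRecordVersion and ChangeTimeStamp fields is a placeholder for my actual class, Rest is the IRestOrm):

rec := TMyVersionedOrm.Create(Rest, someID); // re-load one updated row by ID
try
  writeln('Version = ', Int64(rec.Version), ', stamp = ', rec.ChangeTimeStamp);
  // the stamp did change, so the update itself was written,
  // but Version stayed at its old value for most of the rows
finally
  rec.Free;
end;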
Is it the same if the slave is SQLite3?
No, there is no problem at all if the slave is SQLite3.
It fails in TRestOrmServerBatchSend.AutomaticCommit after the sequence POST-POST-POST-POST-DELETE-POST if the ID for the DELETE doesn't exist. I can't find the reason yet. There is no error message except "TSqlDBPostgresConnectionProperties.SharedTransaction( )".
Update: if I comment out the whole encDelete part in TRestOrmServerBatchSend.ExecuteValue, it fails too, with the same exception.
(I left the same message in the issues on GitHub.)
First of all I delete all databases (master and slave).
Then I create a new database on the master (SQLite) and create some ORM instances (TSomeOrm, TSomeOtherOrm).
Then I create TMyBlobOrm instances with blobs (just simple strings in the blobs) on the master; they get IDs 1, 2, 3, 4, 5, 6.
Then I delete the TMyBlobOrm with ID 4 on the master.
Then I create some other ORM instances (TSomeAnotherORM) on the master.
Then I create the SLAVE database (PostgreSQL) and synchronize from master (SQLite) to slave (PostgreSQL) (calling RecordVersionSynchroniseSlave from the slave).
In step-by-step debugging I get the error
"Unexpected TSqlDBPostgresConnectionProperties.SharedTransaction (1,2)" in mormot.db.sql line 3701.
Maybe it's because, in the batch prepared for sending, I see in the JSON that the DELETE goes before all other lines? But this record cannot be deleted; it doesn't exist on the slave server yet.
If I only create ORM instances on the master (with or without blobs, but without deleting any instance), synchronization works OK.
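If it helps, the repro skeleton is roughly this (a sketch, not the full program; TMyBlobOrm is assumed to have a published Blob field and a Version: TRecordVersion field, and the PostgreSQL slave server is set up elsewhere via TRestExternalDBCreate):

var
  master: TRestServerDB; // SQLite master
  slave: TRestServer;    // PostgreSQL slave, created elsewhere
  rec: TMyBlobOrm;
  i: integer;
begin
  master := TRestServerDB.Create(
    TOrmModel.Create([TSomeOrm, TSomeOtherOrm, TMyBlobOrm]), 'master.db');
  master.Server.CreateMissingTables(0, []);
  rec := TMyBlobOrm.Create;
  try
    for i := 1 to 6 do
    begin
      rec.Blob := FormatUtf8('blob %', [i]); // just a simple string in the blob
      master.Orm.Add(rec, true);             // rows get IDs 1..6
    end;
  finally
    rec.Free;
  end;
  master.Orm.Delete(TMyBlobOrm, 4); // this deletion is what triggers the failure
  { ... later, on the slave side: }
  slave.RecordVersionSynchroniseSlave(TMyBlobOrm, master.Orm);
end;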
Please try with https://github.com/synopse/mORMot2/commit/7609b12b
It would be slower, but should not raise any exception any more.
It works now, but only if there are no deleted rows. If I have any deleted row, the table doesn't synchronize at all.
Maybe it affects not only tables with blobs, I don't know; in my case I do not delete rows in the other tables.
Could you investigate a little and find out what actually happens?
The batch is prepared correctly, but something happens in BatchSend, I suppose, because in step-by-step debugging I get the error "TSqlDBPostgresStatement.ExecutePrepared: Invalid array type ftBlob on bound parameter #3" (it is one of my blob fields) in mormot.db.sql.postgres.pas line 656.
Arnaud, RecordVersionSynchroniseSlave doesn't synchronize tables with blob fields if the SLAVE db is PostgreSQL (tested with mormot.db.sql.postgres).
It affects synchronization only: if I use PostgreSQL on the master server, saving blobs works OK. Synchronization also works OK if the slave db is SQLite.
Missing indices? So the search during an update checks the whole table?
Problem solved.
Arnaud, it was my mistake, sorry to bother you.
I send data in a complicated way: I build my own batch on the client (select records from the LOCAL ORM SQLite database, split them into batches of 10000 records each, fill an in-memory REST storage with an in-memory table, then export its contents to a binary RawByteString and call a server interface method, passing this RawByteString).
On the server side I import this binary RawByteString back into another in-memory table in another in-memory REST storage and process it. Then I send the data to the SERVER ORM database.
I used a wrong model scheme; that's why it didn't work as expected. I have no idea why it doesn't affect the MongoDB and SQLite backends, only external tables created via TRestExternalDBCreate.
BTW, mormot.db.sql.postgres is much faster and more stable than the ZEOS Postgres implementation, thank you for the recommendation.
BTW, automatic transactions have no effect for me, tested with Firebird.
Splitting adds and updates into different batches helps a lot, thank you.
I haven't tested mormot.db.sql.postgres yet.
Missing indices? So the search during an update checks the whole table?
It should search only via ID, shouldn't it?
The DB was created by the mORMot engine via CreateMissingTables.
Also, it works fast when the number of rows is lower (e.g. 10000-15000 rows).
As my table grows, updates take more and more time.
But my guess is that your performance problem comes mainly from the transactions.
I start a transaction for my TOrm class immediately before calling BatchSend, and commit/rollback immediately after the BatchSend call. There are no other transactions for this class at all.
I saw you recommend this way of handling transactions somewhere on this forum some time ago.
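Concretely, this is the pattern I use (a reduced sketch; MyOrmClass, tmpBatch and Rest: IRestOrm are placeholders):

if Rest.TransactionBegin(MyOrmClass) then // one transaction around the whole batch
  try
    if Rest.BatchSend(tmpBatch) = HTTP_SUCCESS then
      Rest.Commit
    else
      Rest.RollBack;
  except
    Rest.RollBack;
    raise;
  end;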
This slowdown exists neither with SQLite nor with MongoDB, only with ZEOS-powered databases.
I'll try the suggested solutions tomorrow and post the results, thank you!
Can I mix adds and updates in one batch?
Thank you, I'll try it tomorrow.
But why does it stall only when updating ORMs? Adding ORMs works very fast. Maybe I misunderstood the batch logic?
I call tmpBatch.Add(myOrmInstance, true) for a new ORM and tmpBatch.Update(myOrmInstance) for changed ORM values, and pass the whole instance in both cases (I use myOrmInstance := TMyOrmClass.CreateAndFillPrepare, then call myOrmInstance.FillOne and check myOrmInstance.ID to decide whether I need to update (if myOrmInstance.ID > 0) or add a new one).
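In code the decision looks like this (sketch; FindOrm is a lookup helper like the one shown earlier in the thread, returning nil when nothing matches):

while srcOrm.FillOne do
begin
  dstOrm := FindOrm(srcOrm, TMyOrmClass); // existing row, or nil
  if not Assigned(dstOrm) then
    dstOrm := TMyOrmClass.Create;         // new row, so ID = 0
  try
    { copy fields from srcOrm to dstOrm }
    if dstOrm.ID > 0 then
      tmpBatch.Update(dstOrm)             // loaded from the DB: update
    else
      tmpBatch.Add(dstOrm, true);         // freshly created: add with data
  finally
    dstOrm.Free;
  end;
end;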
Arnaud, please help me with an external database (the problem exists with Zeos DBO; I tested with PostgreSQL and Firebird).
I create the external database via:
ConnectionDef := TSynConnectionDefinition.Create;
ConnectionDef.Kind := 'TSqlDBZeosConnectionProperties';
ConnectionDef.ServerName := 'zdbc:postgresql://myserver:myport';
{ then set DatabaseName, User, Password as usual }
MyRest := TRestExternalDBCreate(MyModel, ConnectionDef, true,
  [regMapAutoKeywordFields, regClearPoolOnConnectionIssue]);
TRestServerDB(MyRest).Server.CreateMissingTables(0, []);
{ all the rest of the setup }
Then I create a basic REST server for the interfaces (as you recommend; I use the ORM only via REST interfaces on the clients, for fast transaction handling):
BasicRest := TRestServerFullMemory.CreateWithOwnModel([TAuthGroup, TAuthUser], true, mykey);
BasicRest.Server.CreateMissingTables(0, []);
BasicRest.ServiceDefine(MySomeInterfaceClass, [MySomeInterface], sicShared);
BasicRest.ServiceDefine(MyOtherInterfaceClass, [MyOtherInterface], sicShared).SetOptions([], [optExecLockedPerInterface]);
BasicRest.ServiceDefine(MySyncInterfaceClass, [MySyncInterface], sicShared).SetOptions([], [optExecLockedPerInterface]);
So, everything as usual. Then clients connect, send/receive data via the interfaces, etc.
If I'm using SQLite or MongoDB, everything works as expected. But if I'm using Zeos for the external db, I'm stuck with the following situation:
I have an ORM table with approx. 69946 rows in it. I get some data from a client and process it for this ORM table in a server interface method. I have one TRestBatch:
tmpBatch := TRestBatch.Create(MyServerOrm.Orm, MyOrmClass, 0);
Then I fill my batch with values (approx. 300-1000 at once), some with the Add method, some with the Update method (because new data may be really new, or just updated values for old records).
Then I start a transaction and send my batch to the ORM via
MyServerOrm.Orm.BatchSend(tmpBatch, tmpDynArray)
and finally commit the transaction (or roll back, if BatchSend doesn't return HTTP_SUCCESS).
If I only add TOrms to the batch, or update just a few TOrms, my batch sends (just the BatchSend call) very fast, about 24-50 ms for 900 TOrms in the batch.
But if I update some TOrms, the batch sending takes far too long, for example 5412 ms (345 TOrms added, 340 TOrms updated) or even 6568 ms (403 TOrms added, 409 TOrms updated). Not committing, just sending takes that much time.
PostgreSQL or Firebird, it doesn't matter, it stalls the same way. mORMot compiled with ZEOS support, ZEOS compiled with mORMot2 support, both the latest trunk versions.
Is there any limitation in mORMot 2 for external databases? Or am I doing something wrong?
It works now, thank you!
I closed the issue.
We only validated it for a single TOrm table IIRC.
So it doesn't work for multiple TOrm tables?
Sad to hear that. I suppose it should be clarified in the documentation.
I have a few classes with a RecordVersion field (consts, dict, params, paramhistory) and want to periodically synchronize changes from the master server to the slave servers, but I'm stuck with wrong RecordVersion logic.
AFAIK RecordVersion is monotonic for the whole master server. All classes are saved to the database in mixed order and are often updated. So now I have these maximum RecordVersion values for my tables, for example:
dict: 100302
consts: 169333
params: 169125
paramhistory: 169226
If I call RecordVersionSynchroniseSlave for my classes in the order (just for example) dict, consts, params, paramhistory, the tables "params" and "paramhistory" are never synchronized, because of fRecordVersionMax, which (after synchronizing the consts class) is now 169333.
Maybe I'm doing replication the wrong way? I need offline synchronization; real-time synchronization is not suitable for me.
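For clarity, this is the call sequence I mean (a sketch; TDict/TConsts/TParams/TParamHistory stand for my classes, Slave is the slave TRestServer and MasterOrm the master's IRestOrm):

Slave.RecordVersionSynchroniseSlave(TDict, MasterOrm);         // table max version 100302
Slave.RecordVersionSynchroniseSlave(TConsts, MasterOrm);       // table max version 169333
Slave.RecordVersionSynchroniseSlave(TParams, MasterOrm);       // never synchronized: 169125 < 169333
Slave.RecordVersionSynchroniseSlave(TParamHistory, MasterOrm); // never synchronized: 169226 < 169333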
P.S. Sorry for my lame English.