Hi,
I'm trying to use master-slave replication for some tables, but it isn't working as I expected.
On the master database I have something like this:
TableA:
ID  Name         RecordVersion
1   John Cenna   5180
2   Terry Crews  5182

TableB:
ID  Game         RecordVersion
1   Dota 2       5170
2   Besiege      5171
So, when I call the replication for the first table, the records are synchronized properly:
FSlaveDatabase.RecordVersionSynchronizeSlave(TTableA, FMasterSQLHTTPConnection);
But when calling it for TTableB, no records are synchronized, and here's why: the internal method InternalRecordVersionMaxFromExisting loops through all tables of my slave model (which matches the master model, of course) and finds the max(RecordVersion) value across all tables, instead of considering only the current table.
That returns 5182 and, AFAIK, only records with RecordVersion > 5182 will then be retrieved from the master.
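Step by step, this is what I observe (the comments are my understanding of the current behaviour):

// first call: pulls the TableA rows, so the slave's max(RecordVersion) becomes 5182
FSlaveDatabase.RecordVersionSynchronizeSlave(TTableA, FMasterSQLHTTPConnection);
// second call: the maximum is computed over ALL tables, so it is still 5182,
// and only TableB rows with RecordVersion > 5182 would be pulled -> none
FSlaveDatabase.RecordVersionSynchronizeSlave(TTableB, FMasterSQLHTTPConnection);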
My question is: why is the RecordVersion not table-based?
Last edited by Fernandoprsbr (2016-01-08 18:25:10)
Offline
We had a similar problem when trying to replicate the master over an empty slave for the first time.
We solved it by adding a call to this function at the initialization of the slave server:
function TMyRestServer.RecordVersionSynchronizeSlaveIni(Master: TSQLRest;
  ChunkRowLimit: integer; OnWrite: TOnBatchWrite): integer;
var
  Table: TSQLRecordClass;
  RecordVersionIni, Tvers, Tprev: TRecordVersion;
  field: TSQLPropInfoRTTIRecordVersion;
  tx: integer;
begin
  Result := 1;
  // remember the slave's revision at startup, used as the baseline for every table
  RecordVersionIni := Self.RecordVersionCurrent();
  if RecordVersionIni = 0 then
    RecordVersionIni := -1;
  for tx := 0 to Self.Model.TablesMax do begin
    Table := Self.Model.Tables[tx];
    field := Table.RecordProps.RecordVersionField;
    if field <> nil then begin // only tables with a TRecordVersion field
      Tvers := 0;
      // reset the cached maximum before synchronizing this table
      Self.fRecordVersionMax := RecordVersionIni;
      repeat
        Tprev := Tvers;
        Tvers := inherited RecordVersionSynchronizeSlave(Table, Master, ChunkRowLimit, OnWrite);
        if Tvers < 0 then begin
          InternalLog('%.RecordVersionSynchronizeSlaveStart(%): REST failure',
            [self, Table], sllError);
          Result := 0;
          break;
        end;
      until Tprev = Tvers; // loop until no new revision is retrieved
    end;
  end;
  Self.fRecordVersionMax := 0;
end;
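A minimal usage sketch (the FSlaveServer and FMasterClient names and the 1000-row chunk limit are just illustrative assumptions):

// hypothetical call at slave start-up
if FSlaveServer.RecordVersionSynchronizeSlaveIni(FMasterClient, 1000, nil) = 0 then
  writeln('initial master/slave replication failed');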
Regards
Offline
Could anyone create a ticket with a link to this forum thread and a quick summary of the problem?
Here we use the feature on a single table per model, so perhaps we missed something.
The idea of RecordVersion is that it is global to all tables of the model, since it is just an increasing number.
Perhaps it should be one number per table...
Offline
I agree with the one-number-per-table approach:
- It would enable us to sync multiple tables.
- It would be easier to follow changes to the database (when debugging).
- We could sync some tables in one direction while syncing others in the opposite direction, without messing up RecordVersion
(e.g. "Stock" from HQ to Retail and "Invoices" from Retail to HQ at the same time; see the sketch below).
Offline
AB,
I think the record versioning approach is problematic in itself. The replication can cause inconsistent states for periods of time. Following the unit-of-work pattern and assigning the same version number to every record modified while processing a "unit of work" could ensure consistency. In simple cases, like updating a single record, it would be almost the same as record versioning.
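A conceptual sketch of what I mean (plain Pascal pseudo-code, not the current mORMot API; every name is made up):

// one version number allocated per business transaction ("unit of work")
UnitOfWorkVersion := AllocateNextVersion;
Order.Version := UnitOfWorkVersion;    // every record modified by the unit of work
Invoice.Version := UnitOfWorkVersion;  //   shares that very same number
SaveAll([Order, Invoice]);             // persisted atomically
// a slave synchronized up to UnitOfWorkVersion then sees either all of these
// changes or none of them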
Last edited by Leslie7 (2016-05-12 09:07:48)
Offline
In fact, we usually implement the "unit of work" pattern at the TSQLRecord level, using data sharding.
Each TSQLRecord is an aggregate, so there is no JOIN needed, and inconsistency should not occur.
Please see http://synopse.info/files/html/Synopse% … l#TITL_124
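For instance, an aggregate can be stored as a single TSQLRecord, with its detail rows sharded into the record itself as a TDocVariant field stored as JSON (a rough sketch; the TSQLOrder class and its properties are illustrative, not part of the framework):

type
  TSQLOrder = class(TSQLRecord)
  private
    fCustomer: RawUTF8;
    fLines: variant;   // TDocVariant array of order lines, persisted as JSON text
  published
    property Customer: RawUTF8 read fCustomer write fCustomer;
    property Lines: variant read fLines write fLines;
  end;

// reading or writing one TSQLOrder retrieves or persists the whole aggregate
// in a single statement, so no JOIN is involved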
Offline
This is only one way mORMot can be used, and it seems more like the exception; I think my concerns apply to all the rest. Sharding does not suit all purposes, and it is not a pattern that can be easily implemented cleanly, mainly because aggregates have practical size limits. Sometimes the "unit of work" is a selected group of aggregates which need to be processed together.
Last edited by Leslie7 (2016-05-12 21:01:02)
Offline
... in this case, if you need a "group of aggregates", these are not aggregates any more, in DDD terms!
You have to create a new aggregate for this particular bounded context of the domain model.
I have been using this pattern for years now, and I can assure you that it works very well, together with the CQRS and data sharding patterns, even when your data is big.
Truly ACID behavior with master/slave replication would need a much more complex mechanism than what is currently implemented.
I have neither the time nor the need to do it. Any input is welcome!
About the issue of the record version with multiple tables, could someone create a ticket?
Offline
I found the ticket at http://synopse.info/fossil/info/74e76e975a401
Thanks for the report.
I'll look into it.
Offline