Hi,
sorry to bother you, but I'm not able to find a solution: on the server side, is there a way to know which properties have been changed in a TSQLRecord before updating the database?
I'd like to write a log containing the timestamp and the old values of the modified properties.
Thank you very much.
Regards
Mario
Offline
This is ticket http://synopse.info/fossil/info/a78ffe992b
Perhaps this may also be of benefit on the client side, to transmit only the modified fields to the server. So it may be a good idea to implement it at TSQLRecord level.
On the server side, a global "content history" feature may be worth it. I mean, something which would track all changes, and store the field modifications in a separate table, so that the "change history" of each record could be viewed. A new dedicated TSQLFieldType may be defined, to display a global version number (the "global" counter which would help identify a state of the data). For huge fields (binary or text), it may be a good idea to store diff patches instead of the full content. For JSON fields, we may track sub-property changes (or just handle them as huge text fields).
Please refer to the "modeling" part of our introduction to DDD in the SAD 1.18 pdf, about the difficulty of "state modeling" in OOP. In short, OOP will by definition overwrite the previous states, and store only the latest values of each property. This is pretty convenient in most cases, but we may not want to forget previous states in other cases, so a "change tracking" feature may be convenient.
Offline
A global "content history" would be awesome. And if that's stored in a separate table, one could store the history in a separate DB easily by using virtual external tables... so it doesn't fill up your main db. Great.
Offline
Certainly clients may benefit as well.
As for the patch container, it could be, depending on the case, a single table, a specific table for each versioned table, or even a new variant field never exposed to clients: where changes are limited to very few properties and version retrieval is frequently requested, this could improve performance.
Offline
Hi ab,
I'm trying to inherit a class from TSQLRecord or TSQLRecordSigned.
I've thought of using OnUpdateEvent, but it is called AFTER the update. Could you give me any suggestion? Thank you very much
Mario
Offline
Yes, I've read it.
Insertion: no problem, old values are all null.
Deletion: I'll add a status field to mark the record as no longer valid: no need to write to the log.
Update: I need to know the old property values to save them in the log structure: are they available at this time?
Offline
I'm debugging... So finally I've realized that stateless is stateless!
Sorry: I'm a REST beginner.
If I've understood correctly, there are two chances to recognize changed values (I'll use your acronym):
a) return only the modified values from the client: good, but safe only with Delphi clients.
b) read from the table before applying the updates. In this case, the cache could help (a small sketch follows), but the documentation states that it should be used only when data doesn't change frequently, so...
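For reference, enabling the ORM cache takes a couple of lines; a minimal sketch, assuming a Server: TSQLRestServer instance and an illustrative TSQLInvoice table (neither name comes from this thread):

  // keep TSQLInvoice rows in the in-memory ORM cache
  Server.Cache.SetCache(TSQLInvoice);
  // optionally expire cached entries after 60 seconds
  Server.Cache.SetTimeOut(TSQLInvoice, 60000);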
Offline
Yes, there is no transaction in a RESTful model.
You can use transaction methods in our ORM, but it won't scale very well, and a pure stateless model is preferred.
For atomic writes (i.e. ensure that several values are written at once), you can use a BATCH on client side (or even on server side).
It will ensure that all write operations (on one or several tables) are sent at once to the DB.
And it will work with NoSQL databases like MongoDB, which do not support transactions.
In practice, reading from the DB will be very fast (at least for our SQLite3 engine), so you can safely read before write.
It would be much faster if it is made on the server side only, not from a client (whose connection would be much slower).
Within an interface-based SOA service, for instance.
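As an illustration, here is a minimal client-side BATCH sketch; Client is assumed to be a connected TSQLRestClientURI, TSQLInvoice is an illustrative table registered in the model, and the results array type may differ between revisions (e.g. TIntegerDynArray in older ones):

  var Batch: TSQLRestBatch;
      Invoice: TSQLInvoice;
      Results: TIDDynArray;
  begin
    Batch := TSQLRestBatch.Create(Client, TSQLInvoice);
    try
      Invoice := TSQLInvoice.Create;
      try
        Invoice.Total := 100;
        Batch.Add(Invoice, true);    // queue a first INSERT
        Invoice.Total := 200;
        Batch.Add(Invoice, true);    // queue a second INSERT
      finally
        Invoice.Free;
      end;
      // one single round trip: all queued writes are sent (and committed) at once
      Client.BatchSend(Batch, Results);
    finally
      Batch.Free;
    end;
  end;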
Offline
Well,
the idea is:
Create a new class, inherited from TSQLRecordSigned, with:
- a new huge field which will contain all the changes, not published;
- a StartDate and an EndDate published property containing the validity interval (UTC) of the values actually stored in the published properties: a row deletion could be marked by an EndDate of 0.
The version indicator will be a public property of type TTimeLog with UTC values.
Setting the indicator to a date will permit storing and retrieving values at the desired timestamp (updates will be accepted only for now or a future date: in this case the changes will be stored in the log and automatically applied when the date is reached). Setting the version TTimeLog to 0 will permit storing and retrieving values at the current date.
Define an interface with some common methods, e.g.:
- SetVersionDate(aDate: TTimeLog), which will set the version TTimeLog to aDate;
- ResetVersionDate, which will set the version property to 0;
- .....
This can be made in a new unit (I hope also for the not-published huge field), but how do I define the methods needed to maintain versioning on insert, update and delete, and to return the values at a desired timestamp on get? Any suggestion will be appreciated.
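To make the proposal more concrete, a rough declaration sketch of such a class could look as follows (all identifiers are illustrative and not part of mORMot; only TSQLRecordSigned, TTimeLog and RawUTF8 come from the framework):

  type
    TSQLRecordVersioned = class(TSQLRecordSigned)
    protected
      fStartDate: TTimeLog;    // beginning of the validity interval (UTC)
      fEndDate: TTimeLog;      // end of the validity interval (0 = deleted row)
      fChanges: RawUTF8;       // huge field storing all changes, not published
      fVersionDate: TTimeLog;  // version indicator (0 = current values)
    public
      // set the version TTimeLog to aDate, so reads/writes refer to that timestamp
      procedure SetVersionDate(aDate: TTimeLog);
      // reset the version indicator to 0, i.e. back to the current values
      procedure ResetVersionDate;
      property VersionDate: TTimeLog read fVersionDate;
    published
      property StartDate: TTimeLog read fStartDate write fStartDate;
      property EndDate: TTimeLog read fEndDate write fEndDate;
    end;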
Offline
Sounds just like http://synopse.info/fossil/tktview?name=a78ffe992b feature request to me.
Offline
Yes ab, it's my request, but... do you plan to implement it?
Offline
Global history of any table has been implemented.
For corresponding commits, see http://synopse.info/fossil/tkttimeline/a78ffe992b?y=ci
The main entry point is the new TSQLRecordHistory class type (used to store the history), and TSQLRestServer.TrackChanges() to initiate change tracking.
In short, you call TSQLRestServer.TrackChanges() with the table(s) to track. You can even define a dedicated TSQLRecordHistory inherited table, if you want to have several storage tables (but you can also keep all changes within a single table).
Then any modification made via the ORM (direct SQL changes are not handled) will be stored in the TSQLRecordHistory table, as JSON (in TSQLRecordHistory.SentDataJSON).
When the history grows, the JSON rows are gathered and compressed into a BLOB (in TSQLRecordHistory.History).
You can use TSQLRecordHistory.CreateHistory() to browse the change history, via the HistoryGetLast, HistoryCount or HistoryGet() methods.
I still want to tune the client-side update, so that only changed fields will be transmitted to the server, and stored in TSQLRecordHistory.SentDataJSON.
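A minimal usage sketch (TSQLInvoice, its Total field and the record ID are illustrative; Server is your TSQLRestServer instance, and exact method overloads may vary slightly between revisions):

  var Hist: TSQLRecordHistory;
      Last: TSQLInvoice;
  begin
    // typically done once at server startup: track all ORM changes of this table
    Server.TrackChanges([TSQLInvoice]);
    // ... after some ORM writes, browse the history of record #10
    Hist := TSQLRecordHistory.CreateHistory(Server, TSQLInvoice, 10);
    try
      writeln('Stored versions: ', Hist.HistoryCount);
      Last := TSQLInvoice.Create;
      try
        if Hist.HistoryGetLast(Last) then
          writeln('Latest tracked Total: ', Last.Total);
      finally
        Last.Free;
      end;
    finally
      Hist.Free;
    end;
  end;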
Offline
We just updated the SAD 1.18 pdf documentation about it, and wrote a blog article about this nice feature!
Offline
ab,
I've no words!
I cannot understand how you, sleeping like a mORMot, can do all that you do.
Well, clearly thanks, and also the adoption of a little mORMot, even if I'm still far from making a profit from my project!
Mario
Offline
AWESOME !!!!
This is a killer feature, along with the many others available in mORMot!
Offline
Good, but it is well deserved.
Offline
Hi Arnaud,
after a while....
I've changed TSQLRestClient.BeforeUpdateEvent in mORMot (and the related calls and update methods, to remove const from CustomFields)
from
function BeforeUpdateEvent(Value: TSQLRecord): Boolean; virtual;
to
function BeforeUpdateEvent(Value: TSQLRecord; var CustomFields: TSQLFieldBits): Boolean; virtual;
then I have overridden the method in my own TSQLRestClient:
function TMySQLRestClient.BeforeUpdateEvent(Value: TSQLRecord;
  var CustomFields: TSQLFieldBits): Boolean;
var I: integer;
    Empty: Boolean;
    OldData: TSQLRecord;
    ChgFields: TSQLFieldBits;
begin
  OldData := Value.RecordClass.Create(Self, Value);
  try
    ChgFields := CustomFields;
    Empty := IsZero(CustomFields);
    CustomFields := [];
    Result := false;
    with Value.RecordProps do
      for I := 0 to high(SimpleFields) do
        // compare not TSQLRawBlob/TSQLRecordMany fields
        with SimpleFields[I] do
          if Empty or (I in ChgFields) then
            if CompareValue(Value, OldData, false) <> 0 then begin
              include(CustomFields, I);
              Result := true;
            end;
  finally
    OldData.Free;
  end;
end;
Obviously, this could be implemented on the clients, but this way I don't lock data from clients when reading it first, and I save some bandwidth when reading the data before the update.
Do you think this could be implemented in mORMot?
Thanks.
regards
Mario.
Offline
It would read the previous content each time into OldData, so it is very inefficient.
Another pattern may be to store the previous content in Value (e.g. as JSON or binary), then compare it before writing.
Sadly, it would not know if the data has been updated in-between in another thread.
But I'm not sure it would be worth it, in practice, for several reasons:
1. Bandwidth is not the biggest issue, it is the number of calls. Saving a few bytes won't make a huge difference.
2. You would need to create several UPDATE statements, each with a different set of columns, so it would use more resources at the DB level, for almost no speed benefit (the record is handled as a whole, so a few columns less won't change anything).
3. The data may have been modified in another thread, so such an "optimized" update may reflect only your local record state, and the resulting content in the DB may be inconsistent.
4. You already have the ability to specify the modified fields by hand, in the CustomFields parameter of TSQLRestBatch.Update: this is the proper way of doing it, as sketched below.
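A minimal sketch of point 4 (Client, TSQLInvoice and its Total field are illustrative; a similar CustomFields parameter also exists on TSQLRest.Update):

  var Batch: TSQLRestBatch;
      Invoice: TSQLInvoice;
      Fields: TSQLFieldBits;
      Results: TIDDynArray;
  begin
    Invoice := TSQLInvoice.Create(Client, 10);   // retrieve record #10
    Batch := TSQLRestBatch.Create(Client, TSQLInvoice);
    try
      Invoice.Total := 100;
      // compute the field bits for the single column we actually changed
      TSQLInvoice.RecordProps.FieldBitsFromCSV('Total', Fields);
      // only that column will be part of the UPDATE statement
      Batch.Update(Invoice, Fields);
      Client.BatchSend(Batch, Results);
    finally
      Batch.Free;
      Invoice.Free;
    end;
  end;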
Offline
Hi Ab,
you are right!
I hope it's due to this oppressive heat and not to age....
Tx
Mario
Offline