mORMot and Open Source friends
Check-in [064dd5562c]

Overview
SHA1 Hash: 064dd5562c70906f538aa583a8b6e29a2509c856
Date: 2012-05-18 15:36:31
User: G018869
Comment: refactored/updated ORM documentation about Objects relationship (i.e. cardinality)
Changes

Changes to SQLite3/Documentation/Synopse SQLite3 Framework.pro

!    'Name LIKE ? AND Sex = ?',[],[Letters+'%',ord(Sex)]);
is the same as this code:
!  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
!    'Name LIKE :(%): AND Sex = :(%):', [QuotedStr(Letters+'%'),ord(Sex)]);
In both cases, the parameters will be inlined, in order to prepare the statements and improve execution speed.
We used the {\f1\fs20 QuotedStr} standard function to enclose the {\f1\fs20 Letters} parameter in quotes, as expected by the @*SQL@ syntax.
Of course, using '?' and bound parameters is much easier than '%' with manual {\f1\fs20 :(%):} inlining and {\f1\fs20 QuotedStr()} function calls. In your client code, you had better use '?' - but if you find some {\f1\fs20 ':(%):'} in the framework source code where a WHERE clause is expected, you won't be surprised.
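For instance, assuming {\f1\fs20 Letters='A'} and {\f1\fs20 ord(Sex)=1} (hypothetical sample values), both calls above should generate an inlined WHERE clause similar to:
$ Name LIKE :('A%'): AND Sex = :(1):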
: "One to one" or "One to many"
In order to handle "{\i @**One to one@}" or "{\i @**One to many@}" relationship between tables (i.e. normalized @**Master/Detail@ in a classical @*RDBMS@ approach), you could define {\f1\fs20 @*TSQLRecord@} @*published properties@ in the object definition.
For instance, you could declare classes as such:
!  TSQLMyFileInfo = class(TSQLRecord)
!  private
!    FMyFileDate: TDateTime;
!    FMyFileSize: Int64;
!  published
................................................................................
Or with a {\f1\fs20 with} statement:
!    with TSQLMyFileInfo.Create(Client,MyFile.FirstOne) do
!    try
!      // here you can access MyFileDate or MyFileSize
!    finally
!      Free;
!    end;
Up to now, there is no {\i @**Lazy Loading@} feature in our ORM for the {\f1\fs20 @*TSQLRecord@} classes. This may sound like a limitation, but it allows you to manage exactly the data to be retrieved from the server in your code, and to keep bandwidth and memory use as low as possible. {\i Lazy loading} is available by defining {\f1\fs20 @*TPersistent@, @*TCollection@} or {\i @*dynamic array@s} in the {\f1\fs20 TSQLRecord} @*published properties@: as we'll state in the following paragraphs, the content of those fields is stored and retrieved together with the other simple published fields (e.g. {\f1\fs20 integer, @*RawUTF8@}...).
The only case when some {\f1\fs20 TSQLRecord} instances are automatically created is for {\f1\fs20 TSQLRecordMany} published properties (see next paragraph).
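For instance, the master table could be declared as such - a minimal sketch matching the {\f1\fs20 MyFile.FirstOne} usage above (the actual declaration is not shown in this excerpt):
!  TSQLMyFile = class(TSQLRecord)
!  private
!    FFirstOne: TSQLMyFileInfo;
!  published
!    // stored as an INTEGER ID in the TSQLMyFile table, not as a nested object
!    property FirstOne: TSQLMyFileInfo read FFirstOne write FFirstOne;
!  end;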
: "Has many" and "has many through"
:  Many-to-many relationship
As @http://en.wikipedia.org/wiki/Many-to-many_(data_model) states:
{\i In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities (see also Entity-Relationship Model) A and B in which A may contain a parent row for which there are many children in B and vice versa. For instance, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. Because most database management systems only support one-to-many relationships, it is necessary to implement such relationships physically via a third and fourth junction table, say, AB with two one-to-many relationships A -> AB and B -> AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).}
From the record point of view, and to follow the @*ORM@ vocabulary (in Ruby on Rails, Python, or other {\i ActiveRecord} clones), we could speak of "has many" relationships.
In fact, a pivot table is created, containing two references to both related records. Additional information can be stored in this pivot table. It could be used, for instance, to store association time or corresponding permissions of the relationship. This is called a "has many through" relationship.
Martin Fowler (a well-known software design and agile methods expert) defines the Active Record pattern as:
{\i An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data.}
In short, this is the ORM approach.
In our framework, we started from the record point of view. Since the "has many through" relationships will need one record with optional additional fields, we'll start from this pivot table.
:  ORM implementation
A dedicated class, inheriting from the standard {\f1\fs20 @*TSQLRecord@} class (which is the base of all objects stored in our ORM), has been created, named {\f1\fs20 @*TSQLRecordMany@}. This table will turn the "many to many" relationship into two "one to many" relationships pointing in opposite directions. It shall contain at least two {\f1\fs20 TSQLRecord} (i.e. INTEGER) @*published properties@, named "{\f1\fs20 Source}" and "{\f1\fs20 Dest}" (those names are mandatory, because the ORM will search for exact matches). The first points to the source record (the one with a {\f1\fs20 TSQLRecordMany} published property), and the second to the destination record.
For instance:
! TSQLDest = class(TSQLRecord);
! TSQLSource = class;
!! TSQLDestPivot = class(TSQLRecordMany)
! private
!  fSource: TSQLSource;
................................................................................
!        Check(not MS.DestList.ManySelect(aClient,sID[i],dID[i]));
This code will delete one association out of four, and ensure that {\f1\fs20 ManySelect} will retrieve only the expected associations.
!  finally
!    MD.Free;
!    MS.Free;
!  end;
This will release associated memory, and also the instance of {\f1\fs20 TSQLDestPivot} created in the {\f1\fs20 DestList} property.
:  Automatic JOIN query
All those methods ({\f1\fs20 ManySelect, DestGetJoined...}) are used to retrieve the relations between tables from the pivot table point of view. This saves bandwidth, and can be used in most simple cases, but it is not the only way to perform requests on many-to-many relationships. And you may have several {\f1\fs20 @*TSQLRecordMany@} instances in the same main record - in this case, those methods won't help you.
It is very common, in the SQL world, to create a @**JOIN@ed request at the main "{\i Source}" table level, to combine records from two or more tables in a database. It creates a set that can be saved as a table or used as is. A JOIN is a means for combining fields from two or more tables by using values common to each. Writing such JOINed statements is not so easy by hand, especially since you'll have to work with several tables and specify the exact fields to be retrieved; if you have several pivot tables, it may become a nightmare. Let's see how our @*ORM@ will handle it.
A dedicated {\f1\fs20 FillPrepareMany} method has been added to the {\f1\fs20 @*TSQLRecord@} class, in conjunction with a new {\f1\fs20 constructor} named {\f1\fs20 CreateAndFillPrepareMany}. This particular method will:
- Instantiate all {\f1\fs20 Dest} properties of each {\f1\fs20 TSQLRecordMany} instance - so that the JOINed request will be able to populate those values directly;
- Create the appropriate {\f1\fs20 SELECT} statement, with an optional WHERE clause.
Here is the test included in our regression suite, working with the same database:
!Check(MS.FillPrepareMany(aClient,
................................................................................
!    Check(MS.DestList.AssociationTime=i);
!    Check(MS.DestList.Dest.fID=dID[i]);
!    Check(MS.DestList.Dest.SignatureTime=MD.fSignatureTime);
!    Check(MS.DestList.Dest.Signature=FormatUTF8('% %',[aClient.ClassName,i]));
!  end;
!!  MS.FillClose;
Note that in our case, an explicit call to {\f1\fs20 FillClose} has been added in order to release all {\f1\fs20 Dest} instances created by {\f1\fs20 FillPrepareMany}. This call is not mandatory if you call {\f1\fs20 MS.Free} directly, but it is required if the same {\f1\fs20 MS} instance is about to use some regular many-to-many methods, like {\f1\fs20 MS.DestList.ManySelect()} - it will prevent any GPF exception from occurring in code expecting the {\f1\fs20 Dest} property not to be a true instance, but a {\f1\fs20 pointer(DestID)} value.
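As a summary, a typical JOINed request over the pivot table could look like this minimal sketch - assuming, as in the regression tests, that {\f1\fs20 TSQLSource} publishes a {\f1\fs20 DestList: TSQLDestPivot} property, and that {\f1\fs20 DoSomethingWith} is a hypothetical routine:
!var MS: TSQLSource;
!begin
!  MS := TSQLSource.CreateAndFillPrepareMany(aClient,
!    'DestList.AssociationTime<>?',[],[0]);
!  try
!    while MS.FillOne do
!      // MS.DestList.Dest has been instantiated and filled by the JOINed request
!      DoSomethingWith(MS.DestList.Dest.Signature);
!    MS.FillClose; // release the Dest instances created by FillPrepareMany
!  finally
!    MS.Free;
!  end;
!end;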
: What about arrays, TPersistent, TCollection, TMyClass?
The "{\i has many}" and "{\i has many through}" relationship we just described does follow the classic process of rows association in a relational database, using a pivot table. This does make sense if you have some DB background, but it is sometimes not worth it. See for instance what we will state about the "sharding" technique @29@.
One drawback of this approach is that the data is split into several tables, and you should carefully take care of data integrity to ensure, for instance, that when you delete a record, all references to it are also deleted in the associated tables. Our @*ORM@ engine will take care of it, but it could fail sometimes, especially if you play directly with the tables instead of using high-level methods like {\f1\fs20 FillMany*} or {\f1\fs20 DestGetJoined}.
Another potential issue is that one business logical unit is split into several tables, therefore into several diverse {\f1\fs20 @*TSQLRecord@} and {\f1\fs20 @*TSQLRecordMany@} classes. From the @*ORM@ point of view, this could be confusing.
Starting with revision 1.13 of the framework, {\i @*dynamic array@s}, {\f1\fs20 @*TStrings@} and {\f1\fs20 @*TCollection@} can be used as @*published properties@ in the {\f1\fs20 TSQLRecord} class definition. This won't be strong enough to implement all possible "Has many" architectures, but could be used in most cases, when you need to add a list of records within a particular record, and when this list won't have to be referenced as a stand-alone table.
{\i @*Dynamic array@s} will be stored as @*BLOB@ fields in the database, retrieved with {\i Base64} encoding in the @*JSON@ content, then serialized using the {\f1\fs20 TDynArray} wrapper. Therefore, only Delphi clients will be able to use this field content: you'll lose the @*AJAX@ capability of the ORM, with the benefit of better integration with object pascal code. Some dedicated SQL functions have been added to the {\i SQLite} engine, like {\f1\fs20 @*IntegerDynArrayContains@}, to search inside this @*BLOB@ field content from the WHERE clause of any search (see @21@). Those functions are available from AJAX queries.
{\f1\fs20 @*TPersistent@, @*TStrings@} and {\f1\fs20 @*TCollection@} will be stored as TEXT fields in the database, following the {\f1\fs20 ObjectToJSON} function format (you can even serialize any @*TObject@ class, via a previous call to the {\f1\fs20 TJSONSerializer. @*RegisterCustomSerializer@} class method). This format contains only valid JSON arrays or objects: so it could be unserialized via an AJAX application, for instance.
About this (trolling?) subject, and why/when you should use plain Delphi objects or arrays instead of classic @*Master/Detail@ DB relationship, please read "{\i Objects, not tables}" and "{\i ORM is not DB}" paragraphs below.
:  Dynamic arrays fields
:   Dynamic arrays from Delphi Code
For instance, here is how the regression @*test@s included in the framework define a {\f1\fs20 @*TSQLRecord@} class with some additional {\i @*dynamic array@s} fields:
!  TFV = packed record
!    Major, Minor, Release, Build: integer;
!    Main, Detailed: string;
!  end;
!  TFVs = array of TFV;
!  TSQLRecordPeopleArray = class(TSQLRecordPeople)
!  private
!    fInts: TIntegerDynArray;
!    fCurrency: TCurrencyDynArray;
!    fFileVersion: TFVs;
!    fUTF8: RawUTF8;
!  published
!    property UTF8: RawUTF8 read fUTF8 write fUTF8;
!    property Ints: TIntegerDynArray index 1 read fInts write fInts;
!    property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency;
!    property FileVersion: TFVs index 3 read fFileVersion write fFileVersion;
!  end;
This {\f1\fs20 TSQLRecordPeopleArray} class inherits from {\f1\fs20 TSQLRecordPeople}, that is, it will add the {\f1\fs20 UTF8, Ints, Currency} and {\f1\fs20 FileVersion} fields to this root class.
Some content is added to the {\f1\fs20 PeopleArray} table, with the following code:
!var V: TSQLRecordPeople;
!    VA: TSQLRecordPeopleArray;
!    FV: TFV;
!  (...)
!  V2.FillPrepare(Client,'LastName=:(''Dali''):');
!  n := 0;
!  while V2.FillOne do
!  begin
!    VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople
The {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} methods are used to loop through all {\f1\fs20 People} table rows with a {\f1\fs20 LastName} column value equal to 'Dali' (with a @*prepared@ statement thanks to {\f1\fs20 :( ):}), then a {\f1\fs20 TSQLRecordPeopleArray} instance is initialized with those values, using a {\f1\fs20 FillFrom} method call.
!    inc(n);
!    if n and 31=0 then
!    begin
!      VA.UTF8 := '';
!!      VA.DynArray('Ints').Add(n);
!      Curr := n*0.01;
!!      VA.DynArray(2).Add(Curr);
!      FV.Major := n;
!      FV.Minor := n+2000;
!      FV.Release := n+3000;
!      FV.Build := n+4000;
!      str(n,FV.Main);
!      str(n+1000,FV.Detailed);
!!      VA.DynArray('FileVersion').Add(FV);
!    end else
!      str(n,VA.fUTF8);
The {\f1\fs20 n} variable is used to follow the {\f1\fs20 PeopleArray} row number: most of the time, its textual converted value is set in the {\f1\fs20 UTF8} column, and once every 32 rows, one item is added to each of the {\f1\fs20 VA} {\i @*dynamic array@} fields (from the {\f1\fs20 FV} record for {\f1\fs20 FileVersion}).
We could have used normal access to the {\f1\fs20 VA} and {\f1\fs20 FV} {\i dynamic arrays}, as such:
!     SetLength(VA.Ints,length(VA.Ints)+1);
!     VA.Ints[high(VA.Ints)] := n;
But the {\f1\fs20 DynArray} method is used instead, to allow direct access to the {\i dynamic array} via a {\f1\fs20 TDynArray} wrapper. Those two lines behave therefore the same as this code:
!      VA.DynArray('Ints').Add(n);
Note that the {\f1\fs20 DynArray} method can be used via two overloaded sets of parameters: either the field name ({\f1\fs20 'Ints'}), or an {\f1\fs20 index} value, as defined in the class definition. So we could have written:
!      VA.DynArray(1).Add(n);
since the {\f1\fs20 Ints} published property has been defined as such:
!    property Ints: TIntegerDynArray index 1 read fInts write fInts;
Similarly, the following line will add a {\f1\fs20 @*currency@} value to the {\f1\fs20 Currency} field:
!      VA.DynArray(2).Add(Curr);
And a more complex {\f1\fs20 TFV} record is added to the {\f1\fs20 FileVersion} field {\i dynamic array} with just one line:
!      VA.DynArray('FileVersion').Add(FV);
Of course, using the {\f1\fs20 DynArray} method is a bit slower than direct {\f1\fs20 SetLength / Ints[]} use. Using {\f1\fs20 DynArray} with an index should also be a bit faster than using {\f1\fs20 DynArray} with a textual field name (like {\f1\fs20 'Ints'}), with the benefit of fewer typing errors in the property name. But if you need to add a lot of items to a {\i dynamic array} quickly, you could use a custom {\f1\fs20 TDynArray} wrapper with an associated external {\f1\fs20 Count} value, or direct access to its content (like {\f1\fs20 SetLength + Ints[]}).
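Here is a minimal sketch of such a {\f1\fs20 TDynArray} wrapper with an associated external {\f1\fs20 Count} variable (names are ours, not from the regression tests):
!var Ints: TIntegerDynArray;
!    IntsCount, i: integer;
!    DA: TDynArray;
!begin
!  IntsCount := 0;
!  // with an external count, length(Ints) becomes a capacity, not the item count
!  DA.Init(TypeInfo(TIntegerDynArray),Ints,@IntsCount);
!  for i := 1 to 1000 do
!    DA.Add(i); // reallocates Ints geometrically, not at every call
!  SetLength(Ints,IntsCount); // truncate to the real item count when done
!end;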
Then the {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} loop ends with the following line:
!!    Check(Client.Add(VA,true)=n);
!  end;
This will add the {\f1\fs20 VA} fields content into the database, creating a new row in the {\f1\fs20 PeopleArray} table, with an {\f1\fs20 ID} following the value of the {\f1\fs20 n} variable. All {\i dynamic array} fields will be serialized as BLOB into the database table.
:21   Dynamic arrays from SQL code
In order to access the @*BLOB@ content of the dynamic arrays directly from @*SQL@ statements, some new @**SQL function@s have been defined in {\f1\fs20 TSQLDataBase}, named after their native simple types:
- {\f1\fs20 ByteDynArrayContains(BlobField,I64)};
- {\f1\fs20 WordDynArrayContains(BlobField,I64)};
- {\f1\fs20 @**IntegerDynArrayContains@(BlobField,I64)};
- {\f1\fs20 CardinalDynArrayContains(BlobField,I64)};
- {\f1\fs20 CurrencyDynArrayContains(BlobField,I64)} - in this case, {\f1\fs20 I64} is not the {\f1\fs20 @*currency@} value directly converted into an {\f1\fs20 Int64} value (i.e. not {\f1\fs20 Int64(aCurrency)}), but the binary mapping of the {\f1\fs20 currency} value, i.e. {\f1\fs20 aCurrency*10000} or {\f1\fs20 PInt64(@aCurrency)^};
- {\f1\fs20 Int64DynArrayContains(BlobField,I64)};
- {\f1\fs20 RawUTF8DynArrayContainsCase(BlobField,'Text')};
- {\f1\fs20 RawUTF8DynArrayContainsNoCase(BlobField,'Text')}.
Those functions allow direct access to the BLOB content like this:
!  for i := 1 to n shr 5 do
!  begin
!    k := i shl 5;
!!    aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
!!      FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs);
!    Check(length(IDs)=n+1-32*i);
!    for j := 0 to high(IDs) do
!      Check(IDs[j]=k+j);
!  end;
In the above code, the WHERE clause of the {\f1\fs20 OneFieldValues} method will use the dedicated {\f1\fs20 IntegerDynArrayContains} @*SQL function@ to retrieve all records containing the specified {\f1\fs20 integer} value {\f1\fs20 k} in their {\f1\fs20 Ints} BLOB column. With such a function, all the processing is performed Server-side, with no slow data transmission nor JSON/Base64 @*serialization@.
For instance, using such a SQL function, you are able to store multiple {\f1\fs20 @*TSQLRecord@. ID} field values into one {\f1\fs20 TIntegerDynArray} property column, and have direct search ability inside the SQL statement. This could be a very handy way of implementing "one to many" or "many to many" relationship, without the need of a pivot table.
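For instance, a minimal sketch of such a design could be (class and field names are hypothetical):
!  TSQLGroup = class(TSQLRecord)
!  private
!    fMemberIDs: TIntegerDynArray;
!  published
!    // stores the ID of every associated row, as one BLOB column
!    property MemberIDs: TIntegerDynArray index 1 read fMemberIDs write fMemberIDs;
!  end;
Retrieving all groups referencing a given {\f1\fs20 aMemberID} would then follow the very same pattern as above:
!  aClient.OneFieldValues(TSQLGroup,'ID',
!    FormatUTF8('IntegerDynArrayContains(MemberIDs,?)',[],[aMemberID]),IDs);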
Those functions were implemented to be as fast as possible. They won't create any temporary dynamic array during the search, but will directly access the BLOB raw memory content, as returned by the {\i SQlite} engine. The {\f1\fs20 RawUTF8DynArrayContainsCase / RawUTF8DynArrayContainsNoCase} functions will also search directly inside the BLOB. With a huge number of requests, this could be slower than using a {\f1\fs20 @*TSQLRecordMany@} pivot table, since the search won't use any index, and will have to read all BLOB fields during the request. But, in practice, those functions behave nicely with a relatively small amount of data (up to about 50,000 rows). Don't forget that BLOB column access is very optimized in {\i @*SQlite3@}.
For more complex dynamic array content handling, you'll have either to create your own @*SQL function@ using the {\f1\fs20 TSQLDataBase. RegisterSQLFunction} method and an associated {\f1\fs20 TSQLDataBaseSQLFunction} class, or to use a dedicated @*Service@ or @*stored procedure@ - see @22@ on how to implement it.
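As a minimal sketch, such a registration could look as follows - assuming a {\f1\fs20 Demo: TSQLDataBase} instance, and an overload expecting the dynamic array type information, a compare function, and the resulting SQL function name:
!  // SortDynArrayInteger is the standard integer compare function of SynCommons
!  Demo.RegisterSQLFunction(TypeInfo(TIntegerDynArray),SortDynArrayInteger,
!    'MyIntegerDynArrayContains');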
:  TPersistent/TCollection fields
For instance, here is the way the regression @*test@s included in the framework define a {\f1\fs20 @*TSQLRecord@} class with some additional {\f1\fs20 @**TPersistent@}, {\f1\fs20 @**TCollection@} or {\f1\fs20 TRawUTF8List} fields ({\f1\fs20 TRawUTF8List} is just a {\f1\fs20 TStringList}-like component, dedicated to handle the {\f1\fs20 @*RawUTF8@} kind of {\f1\fs20 string}):
!  TSQLRecordPeopleObject = class(TSQLRecordPeople)
!  private
!    fPersistent: TCollTst;
!    fUTF8: TRawUTF8List;
!  public
!    constructor Create; override;
!    destructor Destroy; override;
!  published
!    property UTF8: TRawUTF8List read fUTF8;
!    property Persistent: TCollTst read fPersistent;
!  end;
In order to avoid any memory leak or access violation, it's mandatory to initialize then release all internal property instances in the overridden {\f1\fs20 constructor} and {\f1\fs20 destructor} of the class:
!constructor TSQLRecordPeopleObject.Create;
!begin
!  inherited;
!  fPersistent := TCollTst.Create;
!  fUTF8 := TRawUTF8List.Create;
!end;
!
!destructor TSQLRecordPeopleObject.Destroy;
!begin
!  inherited;
!  FreeAndNil(fPersistent);
!  FreeAndNil(fUTF8);
!end;
Here is how the regression @*test@s are performed:
!var VO: TSQLRecordPeopleObject;
!  (...)
!if Client.TransactionBegin(TSQLRecordPeopleObject) then
!try
!  V2.FillPrepare(Client,'LastName=:(''Morse''):');
!  n := 0;
!  while V2.FillOne do
!  begin
!    VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople
!    inc(n);
!    VO.Persistent.One.Color := n+100;
!    VO.Persistent.One.Length := n;
!    VO.Persistent.One.Name := Int32ToUtf8(n);
!    if n and 31=0 then
!    begin
!      VO.UTF8.Add(VO.Persistent.One.Name);
!      with VO.Persistent.Coll.Add do
!      begin
!        Color := n+1000;
!        Length := n*2;
!        Name := Int32ToUtf8(n*3);
!      end;
!    end;
!!    Check(Client.Add(VO,true)=n);
!  end;
!  Client.Commit;
!except
!  Client.RollBack; // in case of error
!end;
This will add 1000 rows to the {\f1\fs20 PeopleObject} table.
First of all, the adding is nested inside a @**transaction@ call, to speed up @*SQL@ {\f1\fs20 INSERT} statements, via the {\f1\fs20 TransactionBegin} and {\f1\fs20 Commit} methods. Please note that the {\f1\fs20 TransactionBegin} method returns a {\f1\fs20 boolean} value, which should be checked in a multi-threaded or Client-Server environment (in this part of the test suite, content is accessed in the same thread, so checking the result is not mandatory, but it is shown here for accuracy). In the current implementation of the framework, transactions should not be nested. The typical transaction usage should be the following:
!if Client.TransactionBegin(TSQLRecordPeopleObject) then
!try
!  //.... modify the database content, raise exceptions on error
!  Client.Commit;
!except
!  Client.RollBack; // in case of error
!end;
In a @*Client-Server@ environment with multiple Clients connected at the same time, you can use the dedicated {\f1\fs20 TSQLRestClientURI.TransactionBeginRetry} method:
!if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
!  ...
Note that the transactions are handled according to the corresponding client @*session@: the client should keep the transaction block as short as possible (e.g. by using a @*batch@ command), since any write attempt by other clients will wait for the transaction to be released (with either a commit or a rollback).
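As a minimal sketch of such a batch sequence (assuming the {\f1\fs20 BatchStart / BatchAdd / BatchSend} methods of {\f1\fs20 TSQLRestClientURI}):
!var Results: TIntegerDynArray;
!begin
!  Client.BatchStart(TSQLRecordPeopleObject);
!  Client.BatchAdd(VO,true); // queued on the client side, not sent yet
!  // ... any other BatchAdd/BatchUpdate/BatchDelete calls
!  Client.BatchSend(Results); // transmitted at once, within a single transaction
!end;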
The fields inherited from the {\f1\fs20 @*TSQLRecord@} class are retrieved via {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} method calls, for rows with a {\f1\fs20 LastName} column matching {\f1\fs20 'Morse'}. The {\f1\fs20 TPersistent} property instance values are set ({\f1\fs20 VO.Persistent.One}), then, for every 32 rows, a new item is added to the {\f1\fs20 VO.Persistent.Coll} collection.
Here is the data sent for instance to the Server, when the item with {\f1\fs20 ID=32} is added:
${"FirstName":"Samuel Finley Breese31",
$"LastName":"Morse",
$"YearOfBirth":1791,
$"YearOfDeath":1872,
$"UTF8":["32"],
$"Persistent":{"One":{"Color":132,"Length":32,"Name":"32"},"Coll":[{"Color":1032,"Length":64,"Name":"96"}]}
$}
Up to revision 1.15 of the framework, the transmitted JSON content was not a true JSON object, but was sent as {\f1\fs20 @*RawUTF8@} TEXT values (i.e. every double-quote ({\f1\fs20 "}) character was escaped as {\f1\fs20 \"} - e.g. {\f1\fs20 "UTF8":"[\"32\"]"}). Starting with revision 1.16 of the framework, the transmitted data is a true JSON object, to allow better integration with an AJAX client. That is, the {\f1\fs20 UTF8} field is transmitted as a valid JSON array of strings, and {\f1\fs20 Persistent} as a valid JSON object with nested objects and arrays.
When all 1000 rows were added to the database file, the following loop is called once with direct connection to the DB engine, once with a remote client connection (with all available connection protocols):
!  for i := 1 to n do
!  begin
!    VO.ClearProperties;
!!    Client.Retrieve(i,VO);
!    Check(VO.ID=i);
!    Check(VO.LastName='Morse');
!    Check(VO.UTF8.Count=i shr 5);
!    for j := 0 to VO.UTF8.Count-1 do
!      Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5);
!    Check(VO.Persistent.One.Length=i);
!    Check(VO.Persistent.One.Color=i+100);
!    Check(GetInteger(pointer(VO.Persistent.One.Name))=i);
!    Check(VO.Persistent.Coll.Count=i shr 5);
!    for j := 0 to VO.Persistent.Coll.Count-1 do
!     with VO.Persistent.Coll[j] do
!     begin
!       k := (j+1) shl 5;
!       Check(Color=k+1000);
!       Check(Length=k*2);
!       Check(GetInteger(pointer(Name))=k*3);
!     end;
!  end;
All the magic is made in the {\f1\fs20 Client.Retrieve(i,VO)} method. Data is retrieved from the database as TEXT values, then unserialized from @*JSON@ arrays or objects into the internal {\f1\fs20 TRawUTF8List} and {\f1\fs20 TPersistent} instances.
When the {\f1\fs20 ID=33} row is retrieved, the following JSON content is received from the server:
${"ID":33,
$"FirstName":"Samuel Finley Breese32",
$"LastName":"Morse",
$"YearOfBirth":1791,
$"YearOfDeath":1872,
$"UTF8":"[\"32\"]",
$"Persistent":"{\"One\":{\"Color\":133,\"Length\":33,\"Name\":\"33\"},\"Coll\":[{\"Color\":1032,\"Length\":64,\"Name\":\"96\"}]}"}
In contrast with the POST content, this defines no valid nested JSON objects nor arrays, but {\f1\fs20 UTF8} and {\f1\fs20 Persistent} fields transmitted as JSON strings. This is a known limitation of the framework, due to the fact that it is much faster to retrieve the text directly from the database than to process it for this operation. For an AJAX application, it won't be difficult to use a temporary {\f1\fs20 string} property, and evaluate the JSON content from it, in order to replace the property with the corresponding object content. This implementation may change in the future.
:52  Custom TObject JSON serialization
Not only {\f1\fs20 TPersistent, TCollection} and {\f1\fs20 TSQLRecord} types can be serialized by writing all {\f1\fs20 published} properties.
In fact, any {\f1\fs20 @*TObject@} can be serialized as @*JSON@ in the whole framework: not only for the ORM part (for {\f1\fs20 published} properties), but also for SOA (as parameters of interface-based service methods). All JSON @**serialization@ is centralized in {\f1\fs20 ObjectToJSON()} and {\f1\fs20 JSONToObject()} (aka {\f1\fs20 TJSONSerializer.WriteObject}) functions.
In some cases, it may be handy to have a custom serialization, for instance if you want to manage some third-party classes, or to adapt the serialization scheme to a particular purpose, at runtime.
You can add a customized serialization of any {\f1\fs20 class}, by calling the {\f1\fs20 TJSONSerializer. @**RegisterCustomSerializer@} class method. Two callbacks are to be defined for a specific class type, and will be used to serialize or un-serialize the object instance. The callbacks are class methods ({\f1\fs20 procedure() of object}), and not plain functions (for some evolved objects, it may make sense to use a context during serialization).
In the current implementation of this feature, callbacks expect low-level implementation. That is, their implementation code shall follow function {\f1\fs20 JSONToObject()} patterns, i.e. calling low-level {\f1\fs20 GetJSONField()} function to decode the JSON content, and follow function {\f1\fs20 TJSONSerializer.WriteObject()} patterns, i.e. {\f1\fs20 aSerializer.Add/AddInstanceName/AddJSONEscapeString} to encode the class instance as JSON.
Note that the process is called outside the "{\f1\fs20 \{...\}}" JSON object layout, allowing any serialization scheme: even a class content can be serialized as a JSON string, JSON array or JSON number, on request.
For instance, we'd like to customize the serialization of this class (defined in {\f1\fs20 SynCommons.pas}):
!  TFileVersion = class
!  protected
!    fDetailed: string;
!    fBuildDateTime: TDateTime;
!  public
!    Major: Integer;
!    Minor: Integer;
!    Release: Integer;
!    Build: Integer;
!    BuildYear: integer;
!    Main: string;
!  published
!    property Detailed: string read fDetailed write fDetailed;
!    property BuildDateTime: TDateTime read fBuildDateTime write fBuildDateTime;
!  end;
By default, since it has been defined within {\f1\fs20 \{$M+\} ... \{$M-\}} conditionals, RTTI is available for the {\f1\fs20 published} properties (just as if it were inheriting from {\f1\fs20 TPersistent}). That is, the default JSON serialization will be for instance:
& {"Detailed":"1.2.3.4","BuildDateTime":"1911-03-14T00:00:00"}
This is what is expected when serialized within a {\f1\fs20 TSynLog} content, or for main use.
We would like to serialize this {\f1\fs20 class} as such:
& {"Major":1,"Minor":2001,"Release":3001,"Build":4001,"Main":"1","BuildDateTime":"1911-03-14"}
We will therefore define the {\i Writer} callback, as such:
!class procedure TCollTstDynArray.FVClassWriter(const aSerializer: TJSONSerializer;
!  aValue: TObject; aHumanReadable, aDontStoreDefault, aFullExpand: Boolean);
!var V: TFileVersion absolute aValue;
!begin
!  aSerializer.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Release,
!    'Build',V.Build,'Main',V.Main,'BuildDateTime',DateTimeToIso8601Text(V.BuildDateTime)]);
!end;
Most of the JSON serialization work will be made within the {\f1\fs20 AddJSONEscape} method, expecting the JSON object description as an array of name/value pairs.
Then the associated {\i Reader} callback could be, for instance:
!class function TCollTstDynArray.FVClassReader(const aValue: TObject; aFrom: PUTF8Char;
!  var aValid: Boolean): PUTF8Char;
!var V: TFileVersion absolute aValue;
!    Values: TPUtf8CharDynArray;
!begin
!  aValid := false;
!  aFrom := JSONDecode(aFrom,['Major','Minor','Release','Build','Main','BuildDateTime'],Values);
!  if aFrom=nil then
!    exit;
!  V.Major := GetInteger(Values[0]);
!  V.Minor := GetInteger(Values[1]);
!  V.Release := GetInteger(Values[2]);
!  V.Build := GetInteger(Values[3]);
!  V.Main := UTF8DecodeToString(Values[4],StrLen(Values[4]));
!  V.BuildDateTime := Iso8601ToDateTimePUTF8Char(Values[5]);
!  aValid := true;
!  result := aFrom;
!end;
Here, the {\f1\fs20 JSONDecode} function will un-serialize the JSON object into an array of {\f1\fs20 PUTF8Char} values, without any memory allocation (in fact, {\f1\fs20 Values[]} will point to un-escaped and #0 terminated content within the {\f1\fs20 aFrom} memory buffer), so decoding is very fast.
Then, the registration step will be defined as such:
!  TJSONSerializer.RegisterCustomSerializer(TFileVersion,
!    TCollTstDynArray.FVClassReader,TCollTstDynArray.FVClassWriter);
If you want to disable the custom serialization, you may call the same method as such:
!  TJSONSerializer.RegisterCustomSerializer(TFileVersion,nil,nil);
This will reset the JSON serialization of the specified class to the default serializer (i.e. writing of {\f1\fs20 published} properties).
The above code uses some low-level functions of the framework (i.e. {\f1\fs20 AddJSONEscape} and {\f1\fs20 JSONDecode}) to implement serialization as a JSON object, but you may use any other serialization scheme, if needed. That is, you may serialize the whole class instance as one JSON string or numerical value, or even a JSON array. It will depend on the implementation of the registered {\i Reader} and {\i Writer} callbacks.
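For instance, once the writer callback above has been registered, a minimal sketch of its use could be (assuming a plain {\f1\fs20 Create} is enough for this illustration):
!var FV: TFileVersion;
!    json: RawUTF8;
!begin
!  FV := TFileVersion.Create;
!  try
!    FV.Major := 1;
!    FV.Minor := 2001;
!    json := ObjectToJSON(FV); // will now call TCollTstDynArray.FVClassWriter
!  finally
!    FV.Free;
!  end;
!end;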
:29  Shared nothing architecture (or sharding)
Here is what {\i Wikipedia} states at @http://en.wikipedia.org/wiki/Shared_nothing_architecture
{\i A @**shared nothing architecture@ (SN) is a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. People typically contrast SN with systems that keep a large amount of centrally-stored state information, whether in a database, an application server, or any other similar single point of contention.}
As we just stated, in our ORM, high-level types like @*dynamic array@s or {\f1\fs20 @**TPersistent@} / {\f1\fs20 @**TCollection@} properties are stored as BLOB or TEXT inside the main data row. There is no external linked table, no {\i @*Master/Detail@} to maintain. In fact, each {\f1\fs20 @*TSQLRecord@} instance content could be made self-contained in our ORM.
When the server starts to have an increasing number of clients, such a data layout could be a major benefit. In fact, the so-called {\i @**sharding@}, or horizontal partitioning of data, is a proven solution for web-scale databases, such as those in use by social networking sites. How does {\i EBay} or {\i Facebook} scale with so many users? Just by {\i sharding}.
A simple but very efficient {\i sharding} mechanism could therefore be implemented with our ORM. In-memory databases, or our {\i BigTable} component, are good candidates for high-speed data processing. Even {\i SQLite} could scale very well in some cases.
Storing detailed data as JSON in BLOB or TEXT columns could at first sound like a wrong idea. It does break one widely accepted principle of the @*RDBMS@ architecture. But even {\i Google} had to break this dogma. And when {\i MySQL} or the like tries to implement sharding, it needs a lot of effort. Others, like the NoSQL {\i MongoDB}, are better candidates: they are not tied to the SQL flat scheme.
Therefore, on second thought, having at hand a shared nothing architecture could be a great advantage. Our ORM is already ready to break the table-oriented scheme of SQL.
: Calculated fields
It is often useful to handle some calculated fields: that is, having some field values computed when you set another field value. For instance, if you set an error code from an enumeration (stored in an INTEGER field), you may want the corresponding text (to be stored in a TEXT field). Or you may want a total amount to be computed automatically from some detailed records.
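For instance, a minimal sketch of such a calculated field could rely on a property setter (the class, enumeration and conversion below are hypothetical):
!type
!  TMyError = (meNone, meConnection, meTimeout);
!  TSQLMyRecord = class(TSQLRecord)
!  private
!    fError: TMyError;
!    fErrorText: RawUTF8;
!    procedure SetError(aValue: TMyError);
!  published
!    property Error: TMyError read fError write SetError;
!    property ErrorText: RawUTF8 read fErrorText write fErrorText;
!  end;
!
!procedure TSQLMyRecord.SetError(aValue: TMyError);
!begin
!  fError := aValue;
!  // computed on the Client side, before the record is transmitted
!  // (GetEnumName is declared in the standard TypInfo unit)
!  fErrorText := StringToUTF8(GetEnumName(TypeInfo(TMyError),ord(aValue)));
!end;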
This should not be done on the Server side. In fact, the framework expects the JSON transmitted from the client to be sent directly to the database layer, as stated by this code from the {\f1\fs20 SQLite3} unit:
!function TSQLRestServerDB.EngineUpdate(Table: TSQLRecordClass; ID: integer;
!  const SentData: RawUTF8): boolean;
!begin
!  if (self=nil) or (Table=nil) or (ID<=0) then

: Objects relationship: cardinality
All the previous code is fine if your application requires "flat" data. But most of the time, you'll need to define master/child relationships, perhaps over several levels. In data modeling, the {\i @**cardinality@} of one data table with respect to another is a critical aspect of database design. Relationships between data tables define {\i cardinality} when explaining how each table links to another.
In the relational model, tables can have the following {\i cardinality}, i.e. can be related as any of:
- "@*One to one@";
- "Many to one" (reverse of "@*One to many@");
- "Many to many" (or "@has many@").
Our {\i mORMot framework} handles all those kinds of {\i cardinality}.
:  "One to one" or "One to many"
As stated above, mapping a {\f1\fs20 TSQLRecord} field into an {\f1\fs20 integer} ID is a bit difficult to learn at first. It was the only way we found to define a "one to one" or "one to many" relationship within the class definition, without the property attribute features of the Delphi compiler (only introduced in its newer versions). The main drawback is that the compiler won't be able to identify at compile time some potential GPF issues at run time. It is up to the developer to write correct code, when dealing with {\f1\fs20 TSQLRecord} properties.
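As a minimal illustration of this pitfall (reusing the hypothetical {\f1\fs20 TSQLMyFile} / {\f1\fs20 TSQLMyFileInfo} classes sketched above, with {\f1\fs20 aMyFileID} as some known row ID):
!var MyFile: TSQLMyFile;
!    aID: PtrInt;
!begin
!  MyFile := TSQLMyFile.Create(Client,aMyFileID);
!  try
!    // the published property in fact stores pointer(ID), not a true instance:
!    aID := PtrInt(MyFile.FirstOne); // right - retrieves the stored ID value
!    // aDate := MyFile.FirstOne.MyFileDate; // wrong - would trigger a GPF
!  finally
!    MyFile.Free;
!  end;
!end;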
:  "Has many" and "has many through"
As @http://en.wikipedia.org/wiki/Many-to-many_(data_model) wrote:
{\i In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities (see also Entity-Relationship Model) A and B in which A may contain a parent row for which there are many children in B and vice versa. For instance, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. Because most database management systems only support one-to-many relationships, it is necessary to implement such relationships physically via a third and fourth junction table, say, AB with two one-to-many relationships A -> AB and B -> AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).}
From the record point of view, and to follow the @*ORM@ vocabulary (in Ruby on Rails, Python, or other {\i ActiveRecord} clones), we could speak of "@**has many@" relationships. In the classic RDBMS implementation, a pivot table is created, containing two references to both related records. Additional information can be stored within this pivot table: it could be used, for instance, to store the association time or the corresponding permissions of the relationship. This is called a "@**has many through@" relationship.
In fact, there are several families of ORM designs when implementing the "many to many" @*cardinality@:
- Map collections into {\f1\fs20 JOIN}ed queries from the ORM (i.e. pivot tables are abstracted as object lists or collections by the framework, to implement the "has many" relationship, but you will have to define lazy loading, and won't have the "has many through" relationship at hand);
- Explicitly handle pivot tables as ORM classes, and provide methods to access them (this allows both "has many" and "has many through" relationships);
- Store collections within the ORM class properties (data @*sharding@).
In the {\i mORMot framework}, we did not implement the 1st pattern, but the 2nd and 3rd:
- You can map the DB with dedicated {\f1\fs20 @*TSQLRecordMany@} classes, which allow a true pivot table to be available (that is the 2nd family), introducing true "has many through" cardinality;
- But for most applications, it is definitely easier to use {\f1\fs20 TCollection} (of {\f1\fs20 TPersistent} classes) or {\i dynamic arrays} within one {\f1\fs20 TSQLRecord} class, i.e. data sharding (the 3rd family).
Up to now, there is no explicit {\i @**Lazy Loading@} feature in our ORM. There is no native handling of {\f1\fs20 @*TSQLRecord@} collections or lists (as they appear in the first family of ORMs). This may sound like a limitation, but it allows you to manage exactly the data to be retrieved from the server in your code, and to keep bandwidth and memory use as low as possible. Use of a pivot table (via the {\f1\fs20 @*TSQLRecordMany@} kind of records) allows tuned access to the data, and implements an optimal {\i lazy loading} feature. Note that the only case when some {\f1\fs20 TSQLRecord} instances are automatically created by the ORM is for those {\f1\fs20 TSQLRecordMany} published properties.
:29   Shared nothing architecture (or sharding)
Defining a pivot table is a classic and powerful use of a relational database, and unleashes its power (especially when the linked data is huge).
But it is neither easy nor natural to handle properly, since it introduces some dependencies from the DB layer into the business model. For instance, it introduces some additional requirements, like constraint / integrity checking and table/class inter-dependency.
Furthermore, in real life, we do not have such separated storage: we store all details within the main data. So for a domain-driven design, which tries to map the real objects of its own domain, such a pivot table breaks the business logic. With today's computer power, we can safely implement a centralized way of storing data into our data repository.
Let us quote what {\i Wikipedia} states at @http://en.wikipedia.org/wiki/Shared_nothing_architecture
{\i A @**shared nothing architecture@ (SN) is a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. People typically contrast SN with systems that keep a large amount of centrally-stored state information, whether in a database, an application server, or any other similar single point of contention.}
As we stated in @26@, in our ORM, high-level types like @*dynamic array@s or {\f1\fs20 @**TPersistent@} / {\f1\fs20 @**TCollection@} properties are stored as BLOB or TEXT inside the main data row. There is no external linked table, no {\i @*Master/Detail@} to maintain. In fact, each {\f1\fs20 @*TSQLRecord@} instance content could be made self-contained in our ORM.
When the server starts to have an increasing number of clients, such a data layout could be a major benefit. In fact, the so-called {\i @**sharding@}, or horizontal partitioning of data, is a proven solution for web-scale databases, such as those in use by social networking sites. How does {\i EBay} or {\i Facebook} scale with so many users? Just by {\i sharding}.
A simple but very efficient {\i sharding} mechanism could therefore be implemented with our ORM. In-memory databases, or our {\i BigTable} component, are good candidates for high-speed data processing. Even {\i SQLite} could scale very well in most cases.
Storing detailed data as JSON in BLOB or TEXT columns could at first sound like a wrong idea. It does break one widely accepted principle of the @*RDBMS@ architecture. But even {\i Google} had to break this dogma. And when {\i MySQL} or the like tries to implement sharding, it needs a lot of effort. Others, like the NoSQL {\i MongoDB}, are better candidates: they are not tied to the SQL flat scheme.
Finally, this implementation pattern fits much better with a Domain-Driven Design.
Therefore, on second thought, having at hand a shared nothing architecture could be a great advantage. Our ORM is already ready to break the table-oriented scheme of SQL. Let us go one step further.
:    Arrays, TPersistent, TCollection, TMyClass
The "{\i has many}" and "{\i has many through}" relationship we just described follows the classic process of row association in a relational database, using a pivot table. This makes sense if you have some DB background, but it is sometimes not worth it.
One drawback of this approach is that the data is split into several tables, and you should carefully take care of data integrity to ensure, for instance, that when you delete a record, all references to it are also deleted in the associated tables. Our @*ORM@ engine will take care of it, but it could fail sometimes, especially if you play directly with the tables via SQL, instead of using high-level methods like {\f1\fs20 FillMany*} or {\f1\fs20 DestGetJoined}.
Another potential issue is that one business logical unit is split into several tables, therefore into several diverse {\f1\fs20 @*TSQLRecord@} and {\f1\fs20 @*TSQLRecordMany@} classes. From the @*ORM@ point of view, this could be confusing.
Starting with the revision 1.13 of the framework, {\i @*dynamic array@s}, {\f1\fs20 @*TStrings@} and {\f1\fs20 @*TCollection@} can be used as @*published properties@ in the {\f1\fs20 TSQLRecord} class definition. This won't be strong enough to implement all possible "Has many" architectures, but could be used in most case, when you need to add a list of records within a particular record, and when this list won't have to be referenced as a stand-alone table.
{\i @*Dynamic array@s} will be stored as @*BLOB@ fields in the database, retrieved with {\i Base64} encoding in the @*JSON@ content, the serialized using the {\f1\fs20 TDynArray} wrapper. Therefore, only Delphi clients would be able to use this field content: you'll loose the @*AJAX@ capability of the ORM, at the benefit of better integration with object pascal code. Some dedicated SQL functions have been added to the {\i SQLite} engine, like {\f1\fs20 @*IntegerDynArrayContains@}, to search inside this @*BLOB@ field content from the WHERE clause of any search (see @21@). Those functions are available from AJAX queries.
{\f1\fs20 @*TPersistent@, @*TStrings@} and {\f1\fs20 @*TCollection@} will be stored as TEXT fields in the database, following the {\f1\fs20 ObjectToJSON} function format (you can even serialize any @*TObject@ class, via a previous call to the {\f1\fs20 TJSONSerializer. @*RegisterCustomSerializer@} class method). This format contains only valid JSON arrays or objects: so it could be unserialized via an AJAX application, for instance.
About this (trolling?) subject, and why/when you should use plain Delphi objects or arrays instead of classic @*Master/Detail@ DB relationship, please read "{\i Objects, not tables}" and "{\i ORM is not DB}" paragraphs below.
:     Dynamic arrays fields
:      Dynamic arrays from Delphi Code
For instance, here is how the regression @*test@s included in the framework define a {\f1\fs20 @*TSQLRecord@} class with some additional {\i @*dynamic array@s} fields:
!  TFV = packed record
!    Major, Minor, Release, Build: integer;
!    Main, Detailed: string;
!  end;
!  TFVs = array of TFV;
!  TSQLRecordPeopleArray = class(TSQLRecordPeople)
!  private
!    fInts: TIntegerDynArray;
!    fCurrency: TCurrencyDynArray;
!    fFileVersion: TFVs;
!    fUTF8: RawUTF8;
!  published
!    property UTF8: RawUTF8 read fUTF8 write fUTF8;
!    property Ints: TIntegerDynArray index 1 read fInts write fInts;
!    property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency;
!    property FileVersion: TFVs index 3 read fFileVersion write fFileVersion;
!  end;
This {\f1\fs20 TSQLRecordPeopleArray} class inherits from {\f1\fs20 TSQLRecordPeople}, that is it will add {\f1\fs20 UTF8, Ints, Currency} and {\f1\fs20 FileVersion} fields to this root class.
Some content is added to the {\f1\fs20 PeopleArray} table, with the following code:
!var V: TSQLRecordPeople;
!    VA: TSQLRecordPeopleArray;
!    FV: TFV;
!  (...)
!  V2.FillPrepare(Client,'LastName=:(''Dali''):');
!  n := 0;
!  while V2.FillOne do
!  begin
!    VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople
The {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} method are used to loop through all {\f1\fs20 People} table rows with a {\f1\fs20 LastName} column value equal to 'Dali' (with a @*prepared@ statement thanks to {\f1\fs20 :( ):}), then initialize a {\f1\fs20 TSQLRecordPeopleArray} instance with those values, using a {\f1\fs20 FillFrom} method call.
!    inc(n);
!    if n and 31=0 then
!    begin
!      VA.UTF8 := '';
!!      VA.DynArray('Ints').Add(n);
!      Curr := n*0.01;
!!      VA.DynArray(2).Add(Curr);
!      FV.Major := n;
!      FV.Minor := n+2000;
!      FV.Release := n+3000;
!      FV.Build := n+4000;
!      str(n,FV.Main);
!      str(n+1000,FV.Detailed);
!!      VA.DynArray('FileVersion').Add(FV);
!    end else
!      str(n,VA.fUTF8);
The {\f1\fs20 n} variable is used to follow the {\f1\fs20 PeopleArray} number, and will most of the type set its textual converted value in the {\f1\fs20 UTF8} column, and once per 32 rows, will add one item to both {\f1\fs20 VA} and {\f1\fs20 FV} {\i @*dynamic array@} fields.
We could have used normal access to V{\f1\fs20 VA} and {\f1\fs20 FV} {\i dynamic arrays}, as such:
!     SetLength(VA.Ints,length(VA.Ints)+1);
!     VA.Ints[high(VA.Ints)] := n;
But the {\f1\fs20 DynArray} method is used instead, to allow direct access to the {\i dynamic array} via a {\f1\fs20 TDynArray} wrapper. Those two lines behave therefore the same as this code:
!      VA.DynArray('Ints').Add(n);
Note that the {\f1\fs20 DynArray} method can be used via two overloaded set of parameters: either the field name ({\f1\fs20 'Ints'}), either an {\f1\fs20 index} value, as was defined in the class definition. So we could have written:
!      VA.DynArray(1).Add(n);
since the {\f1\fs20 Ints} published property has been defined as such:
!    property Ints: TIntegerDynArray index 1 read fInts write fInts;
Similarly, the following line will add a {\f1\fs20 @*currency@} value to the {\f1\fs20 Currency} field:
!      VA.DynArray(2).Add(Curr);
And a more complex {\f1\fs20 TFV} record is added to the {\f1\fs20 FileVersion} field {\i dynamic array} with just one line:
!      VA.DynArray('FileVersion').Add(FV);
Of course, using the {\f1\fs20 DynArray} method is a bit slower than direct {\f1\fs20 SetLength / Ints[]} use. Using {\f1\fs20 DynArray} with an index should be also a bit faster than using {\f1\fs20 DynArray} with a textual field name (like {\f1\fs20 'Ints'}), with the benefit of perhaps less keyboard errors at typing the property name. But if you need to fast add a lot of items to a {\i dynamic array}, you could use a custom {\f1\fs20 TDynArray} wrapper with an associated external {\f1\fs20 Count} value, or direct access to its content (like {\f1\fs20 SetLength + Ints[]}).
Then the {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} loop ends with the following line:
!!    Check(Client.Add(VA,true)=n);
!  end;
This will add the {\f1\fs20 VA} fields content into the database, creating a new row in the {\f1\fs20 PeopleArray} table, with an {\f1\fs20 ID} following the value of the {\f1\fs20 n} variable. All {\i dynamic array} fields will be serialized as BLOB into the database table.
:21      Dynamic arrays from SQL code
In order to access the @*BLOB@ content of the dynamic arrays directly from @*SQL@ statements, some new @**SQL function@s have been defined in {\f1\fs20 TSQLDataBase}, named after their native simple types:
- {\f1\fs20 ByteDynArrayContains(BlobField,I64)};
- {\f1\fs20 WordDynArrayContains(BlobField,I64)};
- {\f1\fs20 @**IntegerDynArrayContains@(BlobField,I64)};
- {\f1\fs20 CardinalDynArrayContains(BlobField,I64)};
- {\f1\fs20 CurrencyDynArrayContains(BlobField,I64)} - in this case, {\f1\fs20 I64} is not the {\f1\fs20 @*currency@} value directly converted into an {\f1\fs20 Int64} value (i.e. not {\f1\fs20 Int64(aCurrency)}), but the binary mapping of the {\f1\fs20 currency} value, i.e. {\f1\fs20 aCurrency*10000} or {\f1\fs20 PInt64(@aCurrency)^};
- {\f1\fs20 Int64DynArrayContains(BlobField,I64)};
- {\f1\fs20 RawUTF8DynArrayContainsCase(BlobField,'Text')};
- {\f1\fs20 RawUTF8DynArrayContainsNoCase(BlobField,'Text')}.
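For instance, here is how such a binary-mapped {\f1\fs20 currency} parameter could be computed and bound (a sketch following the {\f1\fs20 OneFieldValues} pattern shown below, and assuming the {\f1\fs20 Currency} column of the {\f1\fs20 TSQLRecordPeopleArray} table defined above):
!var Curr: currency;
!    I64: Int64;
!    IDs: TIntegerDynArray;
!begin
!  Curr := 20.02;
!  I64 := PInt64(@Curr)^; // binary mapping, i.e. 200200 = Curr*10000
!  aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
!    FormatUTF8('CurrencyDynArrayContains(Currency,?)',[],[I64]),IDs);
!end;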
Those functions allow direct access to the BLOB content like this:
!  for i := 1 to n shr 5 do
!  begin
!    k := i shl 5;
!!    aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
!!      FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs);
!    Check(length(IDs)=n+1-32*i);
!    for j := 0 to high(IDs) do
!      Check(IDs[j]=k+j);
!  end;
In the above code, the WHERE clause of the {\f1\fs20 OneFieldValues} method will use the dedicated {\f1\fs20 IntegerDynArrayContains} @*SQL function@ to retrieve all records containing the specified {\f1\fs20 integer} value {\f1\fs20 k} in their {\f1\fs20 Ints} BLOB column. With such a function, all the processing is performed Server-side, with no slow data transmission nor JSON/Base64 @*serialization@.
For instance, using such a SQL function, you are able to store multiple {\f1\fs20 @*TSQLRecord@. ID} field values into one {\f1\fs20 TIntegerDynArray} property column, and have direct search ability inside the SQL statement. This could be a very handy way of implementing "one to many" or "many to many" relationships, without the need for a pivot table.
Those functions were implemented to be very efficient. They won't create any temporary dynamic array during the search, but will access the BLOB raw memory content directly, as returned by the {\i SQLite} engine. The {\f1\fs20 RawUTF8DynArrayContainsCase / RawUTF8DynArrayContainsNoCase} functions will also search directly inside the BLOB. With a huge number of requests, this could be slower than using a {\f1\fs20 @*TSQLRecordMany@} pivot table, since the search won't use any index, and will have to read all BLOB fields during the request. But, in practice, those functions behave nicely with a relatively small amount of data (up to about 50,000 rows). Don't forget that BLOB column access is very optimized in {\i @*SQlite3@}.
For more complex dynamic array content handling, you'll have to either create your own @*SQL function@ - using the {\f1\fs20 TSQLDataBase.RegisterSQLFunction} method and an associated {\f1\fs20 TSQLDataBaseSQLFunction} class - or use a dedicated @*Service@ or @*stored procedure@ - see @22@ on how to implement it.
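For instance, a custom {\f1\fs20 Contains()}-like SQL function could be registered for a given {\i dynamic array} type as such (a sketch assuming an overloaded {\f1\fs20 RegisterSQLFunction} method expecting the type information and a {\f1\fs20 TDynArraySortCompare} callback, and a {\f1\fs20 Demo: TSQLDataBase} instance):
!  Demo.RegisterSQLFunction(TypeInfo(TIntegerDynArray),SortDynArrayInteger,
!    'MyIntegerDynArrayContains');
The new {\f1\fs20 MyIntegerDynArrayContains()} function would then be callable from any SQL statement WHERE clause, just like the built-in functions listed above.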
:     TPersistent/TCollection fields
For instance, here is the way regression @*test@s included in the framework define a {\f1\fs20 @*TSQLRecord@} class with some additional {\f1\fs20 @**TPersistent@}, {\f1\fs20 @**TCollection@} or {\f1\fs20 TRawUTF8List} fields ({\f1\fs20 TRawUTF8List} is just a {\f1\fs20 TStringList}-like component, dedicated to handling the {\f1\fs20 @*RawUTF8@} kind of {\f1\fs20 string}):
!  TSQLRecordPeopleObject = class(TSQLRecordPeople)
!  private
!    fPersistent: TCollTst;
!    fUTF8: TRawUTF8List;
!  public
!    constructor Create; override;
!    destructor Destroy; override;
!  published
!    property UTF8: TRawUTF8List read fUTF8;
!    property Persistent: TCollTst read fPersistent;
!  end;
In order to avoid any memory leak or access violation, it's mandatory to initialize then release all internal property instances in the overridden {\f1\fs20 constructor} and {\f1\fs20 destructor} of the class:
!constructor TSQLRecordPeopleObject.Create;
!begin
!  inherited;
!  fPersistent := TCollTst.Create;
!  fUTF8 := TRawUTF8List.Create;
!end;
!
!destructor TSQLRecordPeopleObject.Destroy;
!begin
!  inherited;
!  FreeAndNil(fPersistent);
!  FreeAndNil(fUTF8);
!end;
Here is how the regression @*test@s are performed:
!var VO: TSQLRecordPeopleObject;
!  (...)
!if Client.TransactionBegin(TSQLRecordPeopleObject) then
!try
!  V2.FillPrepare(Client,'LastName=:(''Morse''):');
!  n := 0;
!  while V2.FillOne do
!  begin
!    VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople
!    inc(n);
!    VO.Persistent.One.Color := n+100;
!    VO.Persistent.One.Length := n;
!    VO.Persistent.One.Name := Int32ToUtf8(n);
!    if n and 31=0 then
!    begin
!      VO.UTF8.Add(VO.Persistent.One.Name);
!      with VO.Persistent.Coll.Add do
!      begin
!        Color := n+1000;
!        Length := n*2;
!        Name := Int32ToUtf8(n*3);
!      end;
!    end;
!!    Check(Client.Add(VO,true)=n);
!  end;
!  Client.Commit;
!except
!  Client.RollBack; // in case of error
!end;
This will add 1000 rows to the {\f1\fs20 PeopleObject} table.
First of all, the adding is nested inside a @**transaction@ call, to speed up @*SQL@ {\f1\fs20 INSERT} statements, via the {\f1\fs20 TransactionBegin} and {\f1\fs20 Commit} methods. Please note that the {\f1\fs20 TransactionBegin} method returns a {\f1\fs20 boolean} value, which should be checked in a multi-threaded or Client-Server environment (in this part of the test suite, content is accessed in the same thread, so checking the result is not mandatory, but it is shown here as good practice). In the current implementation of the framework, transactions should not be nested. The typical transaction usage should be the following:
!if Client.TransactionBegin(TSQLRecordPeopleObject) then
!try
!  //.... modify the database content, raise exceptions on error
!  Client.Commit;
!except
!  Client.RollBack; // in case of error
!end;
In a @*Client-Server@ environment with multiple Clients connected at the same time, you can use the dedicated {\f1\fs20 TSQLRestClientURI.TransactionBeginRetry} method:
!if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
!  ...
Note that the transactions are handled according to the corresponding client @*session@: the client should keep the transaction block as short as possible (e.g. by using a @*batch@ command, as sketched below), since any write attempt by the other clients will wait for the transaction to be released (with either a commit or a rollback).
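Here is how a short transaction could group all its writes into one @*batch@ sequence (a minimal sketch, assuming the {\f1\fs20 BatchStart / BatchAdd / BatchSend} methods of {\f1\fs20 TSQLRestClientURI}):
!var Results: TIntegerDynArray;
!  (...)
!if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
!try
!  Client.BatchStart(TSQLRecordPeopleObject);
!  Client.BatchAdd(VO,true); // queued on the client side, not sent yet
!  // ... queue all other modifications here
!  Client.BatchSend(Results); // send all queued operations at once
!  Client.Commit;
!except
!  Client.RollBack; // in case of error
!end;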
The fields inherited from the {\f1\fs20 @*TSQLRecord@} class are retrieved via {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} method calls, for the rows with a {\f1\fs20 LastName} column matching {\f1\fs20 'Morse'}. The {\f1\fs20 TPersistent} property instance values are then set ({\f1\fs20 VO.Persistent.One}), and, for every 32 rows, a new item is added to the {\f1\fs20 VO.Persistent.Coll} collection.
Here is the data sent for instance to the Server, when the item with {\f1\fs20 ID=32} is added:
${"FirstName":"Samuel Finley Breese31",
$"LastName":"Morse",
$"YearOfBirth":1791,
$"YearOfDeath":1872,
$"UTF8":["32"],
$"Persistent":{"One":{"Color":132,"Length":32,"Name":"32"},"Coll":[{"Color":1032,"Length":64,"Name":"96"}]}
$}
Up to revision 1.15 of the framework, the transmitted JSON content was not a true JSON object, but was sent as {\f1\fs20 @*RawUTF8@} TEXT values (i.e. every double quote ({\f1\fs20 "}) character was escaped as {\f1\fs20 \"} - e.g. {\f1\fs20 "UTF8":"[\"32\"]"}). Starting with revision 1.16 of the framework, the transmitted data is a true JSON object, to allow better integration with an AJAX client. That is, the {\f1\fs20 UTF8} field is transmitted as a valid JSON array of strings, and {\f1\fs20 Persistent} as a valid JSON object with nested objects and arrays.
Once all 1000 rows have been added to the database file, the following loop is called once with a direct connection to the DB engine, and once with a remote client connection (using all available connection protocols):
!  for i := 1 to n do
!  begin
!    VO.ClearProperties;
!!    Client.Retrieve(i,VO);
!    Check(VO.ID=i);
!    Check(VO.LastName='Morse');
!    Check(VO.UTF8.Count=i shr 5);
!    for j := 0 to VO.UTF8.Count-1 do
!      Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5);
!    Check(VO.Persistent.One.Length=i);
!    Check(VO.Persistent.One.Color=i+100);
!    Check(GetInteger(pointer(VO.Persistent.One.Name))=i);
!    Check(VO.Persistent.Coll.Count=i shr 5);
!    for j := 0 to VO.Persistent.Coll.Count-1 do
!     with VO.Persistent.Coll[j] do
!     begin
!       k := (j+1) shl 5;
!       Check(Color=k+1000);
!       Check(Length=k*2);
!       Check(GetInteger(pointer(Name))=k*3);
!     end;
!  end;
All the magic is performed by the {\f1\fs20 Client.Retrieve(i,VO)} method: data is retrieved from the database as TEXT values, then un-serialized from @*JSON@ arrays or objects into the internal {\f1\fs20 TRawUTF8List} and {\f1\fs20 TPersistent} instances.
When the {\f1\fs20 ID=33} row is retrieved, the following JSON content is received from the server:
${"ID":33,
$"FirstName":"Samuel Finley Breese32",
$"LastName":"Morse",
$"YearOfBirth":1791,
$"YearOfDeath":1872,
$"UTF8":"[\"32\"]",
$"Persistent":"{\"One\":{\"Color\":133,\"Length\":33,\"Name\":\"33\"},\"Coll\":[{\"Color\":1032,\"Length\":64,\"Name\":\"96\"}]}"}
In contrast with the POST content shown above, this defines no valid nested JSON objects nor arrays: the {\f1\fs20 UTF8} and {\f1\fs20 Persistent} fields are transmitted as JSON strings. This is a known limitation of the framework, due to the fact that it is much faster to retrieve the text directly from the database than to process it for this operation. For an AJAX application, it won't be difficult to use a temporary {\f1\fs20 string} property, and evaluate the JSON content from it, in order to replace the property value with the corresponding object content. This implementation may change in the future.
:52     Custom TObject JSON serialization
Not only the {\f1\fs20 TPersistent, TCollection} and {\f1\fs20 TSQLRecord} types can be serialized by writing all their {\f1\fs20 published} properties.
In fact, any {\f1\fs20 @*TObject@} can be serialized as @*JSON@ in the whole framework: not only for the ORM part (for {\f1\fs20 published} properties), but also for SOA (as parameters of interface-based service methods). All JSON @**serialization@ is centralized in {\f1\fs20 ObjectToJSON()} and {\f1\fs20 JSONToObject()} (aka {\f1\fs20 TJSONSerializer.WriteObject}) functions.
In some cases, it may be handy to have a custom serialization, for instance if you want to manage some third-party classes, or to adapt the serialization scheme to a particular purpose, at runtime.
You can add a customized serialization of any {\f1\fs20 class}, by calling the {\f1\fs20 TJSONSerializer. @**RegisterCustomSerializer@} class method. Two callbacks are to be defined for a specific class type, and will be used to serialize or un-serialize the object instances. The callbacks are class methods ({\f1\fs20 procedure() of object}), and not plain functions, since for some evolved objects, it may make sense to use a context during serialization.
In the current implementation of this feature, callbacks expect low-level implementation. That is, their implementation code shall follow function {\f1\fs20 JSONToObject()} patterns, i.e. calling low-level {\f1\fs20 GetJSONField()} function to decode the JSON content, and follow function {\f1\fs20 TJSONSerializer.WriteObject()} patterns, i.e. {\f1\fs20 aSerializer.Add/AddInstanceName/AddJSONEscapeString} to encode the class instance as JSON.
Note that the process is called outside the "{\f1\fs20 \{...\}}" JSON object layout, allowing any serialization scheme: even a class content can be serialized as a JSON string, JSON array or JSON number, on request.
For instance, we'd like to customize the serialization of this class (defined in {\f1\fs20 SynCommons.pas}):
!  TFileVersion = class
!  protected
!    fDetailed: string;
!    fBuildDateTime: TDateTime;
!  public
!    Major: Integer;
!    Minor: Integer;
!    Release: Integer;
!    Build: Integer;
!    BuildYear: integer;
!    Main: string;
!  published
!    property Detailed: string read fDetailed write fDetailed;
!    property BuildDateTime: TDateTime read fBuildDateTime write fBuildDateTime;
!  end;
By default, since this class has been defined within {\f1\fs20 \{$M+\} ... \{$M-\}} compiler directives, RTTI is available for its {\f1\fs20 published} properties (just as if it were inheriting from {\f1\fs20 TPersistent}). That is, the default JSON serialization will be for instance:
& {"Detailed":"1.2.3.4","BuildDateTime":"1911-03-14T00:00:00"}
This is what is expected when the instance is serialized within a {\f1\fs20 TSynLog} content, or for common use.
We would like to serialize this {\f1\fs20 class} as such:
& {"Major":1,"Minor":2001,"Release":3001,"Build":4001,"Main":"1","BuildDateTime":"1911-03-14"}
We will therefore define the {\i Writer} callback, as such:
!class procedure TCollTstDynArray.FVClassWriter(const aSerializer: TJSONSerializer;
!  aValue: TObject; aHumanReadable, aDontStoreDefault, aFullExpand: Boolean);
!var V: TFileVersion absolute aValue;
!begin
!  aSerializer.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Release,
!    'Build',V.Build,'Main',V.Main,'BuildDateTime',DateTimeToIso8601Text(V.BuildDateTime)]);
!end;
Most of the JSON serialization work will be performed within the {\f1\fs20 AddJSONEscape} method, which expects the JSON object description as an array of name/value pairs.
Then the associated {\i Reader} callback could be, for instance:
!class function TCollTstDynArray.FVClassReader(const aValue: TObject; aFrom: PUTF8Char;
!  var aValid: Boolean): PUTF8Char;
!var V: TFileVersion absolute aValue;
!    Values: TPUtf8CharDynArray;
!begin
!  aValid := false;
!  aFrom := JSONDecode(aFrom,['Major','Minor','Release','Build','Main','BuildDateTime'],Values);
!  if aFrom=nil then
!    exit;
!  V.Major := GetInteger(Values[0]);
!  V.Minor := GetInteger(Values[1]);
!  V.Release := GetInteger(Values[2]);
!  V.Build := GetInteger(Values[3]);
!  V.Main := UTF8DecodeToString(Values[4],StrLen(Values[4]));
!  V.BuildDateTime := Iso8601ToDateTimePUTF8Char(Values[5]);
!  aValid := true;
!  result := aFrom;
!end;
Here, the {\f1\fs20 JSONDecode} function will un-serialize the JSON object into an array of {\f1\fs20 PUTF8Char} values, without any memory allocation (in fact, {\f1\fs20 Values[]} will point to un-escaped and #0 terminated content within the {\f1\fs20 aFrom} memory buffer), so decoding is very fast.
Then, the registration step will be defined as such:
!  TJSONSerializer.RegisterCustomSerializer(TFileVersion,
!    TCollTstDynArray.FVClassReader,TCollTstDynArray.FVClassWriter);
If you want to disable the custom serialization, you may call the same method as such:
!  TJSONSerializer.RegisterCustomSerializer(TFileVersion,nil,nil);
This will reset the JSON serialization of the specified class to the default serializer (i.e. writing of {\f1\fs20 published} properties).
The above code uses some low-level functions of the framework (i.e. {\f1\fs20 AddJSONEscape} and {\f1\fs20 JSONDecode}) to implement serialization as a JSON object, but you may use any other serialization scheme, if needed. That is, you may serialize the whole class instance as just one JSON string or numerical value, or even a JSON array. It will depend on the implementation of the registered {\i Reader} and {\i Writer} callbacks.
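For instance, a {\i Writer} callback serializing the whole {\f1\fs20 TFileVersion} content as one single JSON string could be sketched as such (the {\f1\fs20 FVClassWriterAsString} method name is ours - this is only an illustrative sketch, not part of the regression tests):
!class procedure TCollTstDynArray.FVClassWriterAsString(const aSerializer: TJSONSerializer;
!  aValue: TObject; aHumanReadable, aDontStoreDefault, aFullExpand: Boolean);
!var V: TFileVersion absolute aValue;
!begin // will produce e.g. "1.2.3.4" as a single JSON string
!  aSerializer.Add('"');
!  aSerializer.AddString(FormatUTF8('%.%.%.%',[V.Major,V.Minor,V.Release,V.Build]));
!  aSerializer.Add('"');
!end;
Of course, the associated {\i Reader} callback should then expect a JSON string instead of a JSON object.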
:   ORM implementation via pivot table
Data sharding just feels natural, from the @*ORM@ point of view.
But defining a pivot table is a classic and powerful use of relational databases, and will unleash their power:
- When data is huge, you can query only for the needed data, without having to load the whole content (this is similar to {\i lazy loading} in ORM terminology);
- In a master/detail data model, it can sometimes be handy to access the detail records directly, e.g. for data consolidation;
- And, last but not least, the pivot table is the natural way of storing data associated with "@*has many through@" relationship (e.g. association time or corresponding permissions).
:    Introducing TSQLRecordMany
A dedicated class, inheriting from the standard {\f1\fs20 @*TSQLRecord@} class (which is the base of all objects stored in our ORM), has been created, named {\f1\fs20 @*TSQLRecordMany@}. This table will turn the "many to many" relationship into two "one to many" relationships pointing in opposite directions. It shall contain at least two {\f1\fs20 TSQLRecord} (i.e. INTEGER) @*published properties@, named "{\f1\fs20 Source}" and "{\f1\fs20 Dest}" (those names are mandatory, because the ORM will search for these exact property names): the first points to the source record (the one with a {\f1\fs20 TSQLRecordMany} published property) and the second to the destination record.
For instance:
! TSQLDest = class(TSQLRecord);
! TSQLSource = class;
!! TSQLDestPivot = class(TSQLRecordMany)
! private
!  fSource: TSQLSource;
................................................................................
!        Check(not MS.DestList.ManySelect(aClient,sID[i],dID[i]));
This code will delete one association out of four, and ensure that {\f1\fs20 ManySelect} will retrieve only the expected associations.
!  finally
!    MD.Free;
!    MS.Free;
!  end;
This will release associated memory, and also the instance of {\f1\fs20 TSQLDestPivot} created in the {\f1\fs20 DestList} property.
:    Automatic JOIN query
All those methods ({\f1\fs20 ManySelect, DestGetJoined...}) are used to retrieve the relations between tables from the pivot table point of view. This saves bandwidth, and can be used in most simple cases, but it is not the only way to perform requests on many-to-many relationships. And you may have several {\f1\fs20 @*TSQLRecordMany@} instances in the same main record - in this case, those methods won't help you.
It is very common, in the SQL world, to create a @**JOIN@ed request at the main "{\i Source}" table level, and combine records from two or more tables in a database. It creates a set that can be saved as a table or used as is. A JOIN is a means of combining fields from two or more tables, by using values common to each. Writing such JOINed statements is not so easy by hand, especially because you'll have to work with several tables, and specify the exact fields to be retrieved; if you have several pivot tables, it may quickly become a nightmare. Let's see how our @*ORM@ will handle it.
A dedicated {\f1\fs20 FillPrepareMany} method has been added to the {\f1\fs20 @*TSQLRecord@} class, in conjunction with a new {\f1\fs20 constructor} named {\f1\fs20 CreateAndFillPrepareMany}. This particular method will:
- Instantiate all {\f1\fs20 Dest} properties of each {\f1\fs20 TSQLRecordMany} instance - so that the JOINed request will be able to populate those values directly;
- Create the appropriate {\f1\fs20 SELECT} statement, with an optional {\f1\fs20 WHERE} clause - an illustrative example of such a generated statement follows.
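With the {\f1\fs20 TSQLSource} / {\f1\fs20 TSQLDestPivot} / {\f1\fs20 TSQLDest} classes defined above, the generated JOINed statement could look like this (an illustrative approximation - the exact column aliases and layout generated by {\f1\fs20 FillPrepareMany} may differ):
$ SELECT A.ID AID,B.ID BID,B.AssociationTime B01,
$  C.ID CID,C.SignatureTime C01,C.Signature C02
$  FROM Source A,DestPivot B,Dest C
$  WHERE B.Source=A.ID AND B.Dest=C.ID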
Here is the test included in our regression suite, working with the same database:
!Check(MS.FillPrepareMany(aClient,
................................................................................
!    Check(MS.DestList.AssociationTime=i);
!    Check(MS.DestList.Dest.fID=dID[i]);
!    Check(MS.DestList.Dest.SignatureTime=MD.fSignatureTime);
!    Check(MS.DestList.Dest.Signature=FormatUTF8('% %',[aClient.ClassName,i]));
!  end;
!!  MS.FillClose;
Note that in our case, an explicit call to {\f1\fs20 FillClose} has been added in order to release all the {\f1\fs20 Dest} instances created by {\f1\fs20 FillPrepareMany}. This call is not mandatory if you call {\f1\fs20 MS.Free} directly, but it is required if the same {\f1\fs20 MS} instance is about to use some regular many-to-many methods, like {\f1\fs20 MS.DestList.ManySelect()} - it will prevent any GPF exception from occurring in code expecting the {\f1\fs20 Dest} property not to be a true instance, but a {\f1\fs20 pointer(DestID)} value.
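As a summary, here is how a typical {\f1\fs20 CreateAndFillPrepareMany} request could be written (a minimal sketch reusing the {\f1\fs20 TSQLSource} class above - the nested '{\f1\fs20 DestList.Dest.}' field naming in the WHERE clause, and the {\f1\fs20 UseDest} consuming procedure, are ours):
!var MS: TSQLSource;
!begin
!  MS := TSQLSource.CreateAndFillPrepareMany(aClient,
!    'DestList.AssociationTime<>? and DestList.Dest.SignatureTime<>?',[],[0,0]);
!  try
!    while MS.FillOne do
!      // here MS.DestList.Dest is a true TSQLDest instance, set via the JOIN
!      UseDest(MS.DestList.Dest);
!  finally
!    MS.FillClose; // release the Dest instances created by FillPrepareMany
!    MS.Free;
!  end;
!end;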
: Calculated fields
It is often useful to handle some calculated fields, that is, to have some field values computed when you set another field value. For instance, if you set an error code from an enumeration (stored in an INTEGER field), you may want the corresponding text (to be stored in a TEXT field). Or you may want a total amount to be computed automatically from some detailed records - see the setter sketch below.
This should not be done on the Server side. In fact, the framework expects the JSON transmitted from the client to be applied directly to the database layer, as stated by this code from the {\f1\fs20 SQLite3} unit:
!function TSQLRestServerDB.EngineUpdate(Table: TSQLRecordClass; ID: integer;
!  const SentData: RawUTF8): boolean;
!begin
!  if (self=nil) or (Table=nil) or (ID<=0) then
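A natural Client-side way of implementing such a calculated field is a setter method on the {\f1\fs20 TSQLRecord} itself - here is a hypothetical sketch (the {\f1\fs20 TSQLErrorRecord} class is ours, and not part of the framework):
!  TSQLErrorRecord = class(TSQLRecord)
!  protected
!    fErrorCode: integer;
!    fErrorText: RawUTF8;
!    procedure SetErrorCode(const Value: integer);
!  published
!    property ErrorCode: integer read fErrorCode write SetErrorCode;
!    property ErrorText: RawUTF8 read fErrorText write fErrorText;
!  end;
!
!procedure TSQLErrorRecord.SetErrorCode(const Value: integer);
!begin // the text field is computed whenever the code field is set
!  fErrorCode := Value;
!  case Value of
!    0: fErrorText := 'Success';
!    1: fErrorText := 'Warning';
!  else fErrorText := 'Error';
!  end;
!end;
Both fields will then be transmitted and stored as any other published property, with no Server-side processing needed.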

Changes to SQLite3/SQLite3Commons.pas

begin
  result := inherited Info(aTypeInfo);
  if (result=nil) and AddInterface(aTypeInfo,sicClientDriven) then
    result := inherited Info(aTypeInfo);
end;


{ TServiceFactoryClient }

type
  /// map the stack memory layout at TInterfacedObjectFake.FakeCall()
  TFakeCallStack = packed record
    EDX, ECX, MethodIndex, EBP, Ret: Cardinal;
    Stack: array[word] of byte;
  end;
................................................................................
      smvDouble,smvDateTime: asm mov eax,V; fld  qword ptr [eax] end;  // in st(0)
      smvCurrency: asm mov eax,V; fild qword ptr [eax] end;  // in st(0)
      end;
    end;
  end;
end;




function TServiceFactoryClient.CallClient(const aMethod: RawUTF8;
  aErrorMsg: PRawUTF8; const aParams: RawUTF8; aResult: PRawUTF8;
  aClientDrivenID: PCardinal; aServiceCustomAnswer: PServiceCustomAnswer): boolean;
var uri,sent,resp,head: RawUTF8;
    Values: TPUtf8CharDynArray;
begin
  result := false;
begin
  result := inherited Info(aTypeInfo);
  if (result=nil) and AddInterface(aTypeInfo,sicClientDriven) then
    result := inherited Info(aTypeInfo);
end;


{ TInterfacedObjectFake (private for TServiceFactoryClient) }

type
  /// map the stack memory layout at TInterfacedObjectFake.FakeCall()
  TFakeCallStack = packed record
    EDX, ECX, MethodIndex, EBP, Ret: Cardinal;
    Stack: array[word] of byte;
  end;
................................................................................
      smvDouble,smvDateTime: asm mov eax,V; fld  qword ptr [eax] end;  // in st(0)
      smvCurrency: asm mov eax,V; fild qword ptr [eax] end;  // in st(0)
      end;
    end;
  end;
end;


{ TServiceFactoryClient }

function TServiceFactoryClient.CallClient(const aMethod: RawUTF8;
  aErrorMsg: PRawUTF8; const aParams: RawUTF8; aResult: PRawUTF8;
  aClientDrivenID: PCardinal; aServiceCustomAnswer: PServiceCustomAnswer): boolean;
var uri,sent,resp,head: RawUTF8;
    Values: TPUtf8CharDynArray;
begin
  result := false;