mORMot and Open Source friends
Check-in [064dd5562c]

Overview
SHA1 Hash: 064dd5562c70906f538aa583a8b6e29a2509c856
Date: 2012-05-18 15:36:31
User: G018869
Comment: refactored/updated ORM documentation about Objects relationship (i.e. cardinality)
Tags And Properties
Changes

Changes to SQLite3/Documentation/Synopse SQLite3 Framework.pro

!   'Name LIKE ? AND Sex = ?',[],[Letters+'%',ord(Sex)]);
is the same as this code:
! aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
!   'Name LIKE :(%): AND Sex = :(%):', [QuotedStr(Letters+'%'),ord(Sex)]);
In both cases, the parameters will be inlined, in order to prepare the statement
We used the {\f1\fs20 QuotedStr} standard function to embrace the {\f1\fs20 Lett
Of course, using '?' and bounds parameters is much easier than '%' and manual {\
> : Objects relationship: cardinality
> All previous code is fine if your application requires "flat" data. But most of
> In the relational model, tables can have the following {\i cardinality}, i.e. ca
> - "@*One to one@".
> - "Many to one" (rev. "@*One to many@");
> - "Many to many" (or "@has many@").
> Our {\i mORMot framework} handles all those kinds of {\i cardinality}.
: "One to one" or "One to many"
In order to handle "{\i @**One to one@}" or "{\i @**One to many@}" relationship
For instance, you could declare classes as such:
! TSQLMyFileInfo = class(TSQLRecord)
! private
!   FMyFileDate: TDateTime;
!   FMyFileSize: Int64;
! published
................
Or with a {\f1\fs20 with} statement:
! with TSQLMyFileInfo.Create(Client,MyFile.FirstOne) do
! try
!   // here you can access MyFileDate or MyFileSize
! finally
!   Free;
! end;
< Up to now, there is no {\i @**Lazy Loading@} feature in our ORM, for the {\f1\fs
< The only case when some {\f1\fs20 TSQLRecord} instance are automatically created
> Mapping a {\f1\fs20 TSQLRecord} field into an {\f1\fs20 integer} ID is a bit dif
: "Has many" and "has many through"
< : Many-to-many relationship
As @http://en.wikipedia.org/wiki/Many-to-many_(data_model) wrote:
{\i In systems analysis, a many-to-many relationship is a type of cardinality th
From the record point of view, and to follow the @*ORM@ vocabulary (in Ruby on R
< In fact, a pivot table is created, containing two references to both related rec
< Martin Fowler (a well-known software design and agile methods expert) defines th
< {\i An object that wraps a row in a database table or view, encapsulates the dat
< In short, this is the ORM approach.
< In our framework, we started from the record point of view. Since the "has many
> In fact, there are several families of ORM design, when implementing the "many t
> - Map collections into {\f1\fs20 JOIN}ed query from the ORM (i.e. pivot tables a
> - Explicitly handle pivot tables as ORM classes, and provide methods to access t
> - Store collections within the ORM classes property (data @*sharding@).
> In the {\i mORMot framework}, we did not implement the 1st implementation patter
> - You can map the DB with dedicated {\f1\fs20 @*TSQLRecordMany@} classes, which
> - But for most applications, it sounds definitively more easy to use {\f1\fs20 T
> Up to now, there is no explicit {\i @**Lazy Loading@} feature in our ORM. There
> :29 Shared nothing architecture (or sharding)
> Defining a pivot table is a classic and powerful use of relational database, and
> But it is not easy nor natural to properly handle it, since it introduces some d
> Furthermore, in real life, we do not have such a separated storage, but we store
> Let us quote what {\i wikipedia} states at @http://en.wikipedia.org/wiki/Shared_
> {\i A @**shared nothing architecture@ (SN) is a distributed computing architectu
> As we stated in @26@, in our ORM, high-level types like @*dynamic array@s or {\f
> When the server starts to have an increasing number of clients, such a data layo
> A simple but very efficient {\i sharding} mechanism could therefore be implement
> Storing detailed data in BLOB or in TEXT as JSON could first sounds a wrong idea
> Finally, this implementation pattern fits much better with a Domain-Driven Desig
> Therefore, on second thought, having at hand a shared nothing architecture could
> : Arrays, TPersistent, TCollection, TMyClass
> The "{\i has many}" and "{\i has many through}" relationship we just described d
> One drawback of this approach is that the data is split into several tables, and
> Another potential issue is that one business logical unit is split into several
> Starting with the revision 1.13 of the framework, {\i @*dynamic array@s}, {\f1\f
> {\i @*Dynamic array@s} will be stored as @*BLOB@ fields in the database, retriev
> {\f1\fs20 @*TPersistent@, @*TStrings@} and {\f1\fs20 @*TCollection@} will be sto
> About this (trolling?) subject, and why/when you should use plain Delphi objects
> : Dynamic arrays fields
> : Dynamic arrays from Delphi Code
> For instance, here is how the regression @*test@s included in the framework defi
> ! TFV = packed record
> !   Major, Minor, Release, Build: integer;
> !   Main, Detailed: string;
> ! end;
> ! TFVs = array of TFV;
> ! TSQLRecordPeopleArray = class(TSQLRecordPeople)
> ! private
> !   fInts: TIntegerDynArray;
> !   fCurrency: TCurrencyDynArray;
> !   fFileVersion: TFVs;
> !   fUTF8: RawUTF8;
> ! published
> !   property UTF8: RawUTF8 read fUTF8 write fUTF8;
> !   property Ints: TIntegerDynArray index 1 read fInts write fInts;
> !   property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency
> !   property FileVersion: TFVs index 3 read fFileVersion write fFileVersion;
> ! end;
> This {\f1\fs20 TSQLRecordPeopleArray} class inherits from {\f1\fs20 TSQLRecordPe
> Some content is added to the {\f1\fs20 PeopleArray} table, with the following co
> !var V: TSQLRecordPeople;
> !    VA: TSQLRecordPeopleArray;
> !    FV: TFV;
> !  (...)
> !  V2.FillPrepare(Client,'LastName=:(''Dali''):');
> !  n := 0;
> !  while V2.FillOne do
> !  begin
> !    VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople
> The {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} methods are used to loop through
> !    inc(n);
> !    if n and 31=0 then
> !    begin
> !      VA.UTF8 := '';
> !!     VA.DynArray('Ints').Add(n);
> !      Curr := n*0.01;
> !!     VA.DynArray(2).Add(Curr);
> !      FV.Major := n;
> !      FV.Minor := n+2000;
> !      FV.Release := n+3000;
> !      FV.Build := n+4000;
> !      str(n,FV.Main);
> !      str(n+1000,FV.Detailed);
> !!     VA.DynArray('FileVersion').Add(FV);
> !    end else
> !      str(n,VA.fUTF8);
> The {\f1\fs20 n} variable is used to follow the {\f1\fs20 PeopleArray} number, a
> We could have used normal access to the {\f1\fs20 VA} and {\f1\fs20 FV} {\i dynamic
> !  SetLength(VA.Ints,length(VA.Ints)+1);
> !  VA.Ints[high(VA.Ints)] := n;
> But the {\f1\fs20 DynArray} method is used instead, to allow direct access to th
> !  VA.DynArray('Ints').Add(n);
> Note that the {\f1\fs20 DynArray} method can be used via two overloaded set of p
> !  VA.DynArray(1).Add(n);
> since the {\f1\fs20 Ints} published property has been defined as such:
> !  property Ints: TIntegerDynArray index 1 read fInts write fInts;
> Similarly, the following line will add a {\f1\fs20 @*currency@} value to the {\f
> !  VA.DynArray(2).Add(Curr);
> And a more complex {\f1\fs20 TFV} record is added to the {\f1\fs20 FileVersion}
> !  VA.DynArray('FileVersion').Add(FV);
> Of course, using the {\f1\fs20 DynArray} method is a bit slower than direct {\f1
> Then the {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} loop ends with the followin
> !!   Check(Client.Add(VA,true)=n);
> !  end;
> This will add the {\f1\fs20 VA} fields content into the database, creating a new
> :21 Dynamic arrays from SQL code
> In order to access the @*BLOB@ content of the dynamic arrays directly from @*SQL
> - {\f1\fs20 ByteDynArrayContains(BlobField,I64)};
> - {\f1\fs20 WordDynArrayContains(BlobField,I64)};
> - {\f1\fs20 @**IntegerDynArrayContains@(BlobField,I64)};
> - {\f1\fs20 CardinalDynArrayContains(BlobField,I64)};
> - {\f1\fs20 CurrencyDynArrayContains(BlobField,I64)} - in this case, {\f1\fs20 I
> - {\f1\fs20 Int64DynArrayContains(BlobField,I64)};
> - {\f1\fs20 RawUTF8DynArrayContainsCase(BlobField,'Text')};
> - {\f1\fs20 RawUTF8DynArrayContainsNoCase(BlobField,'Text')}.
> Those functions allow direct access to the BLOB content like this:
> !  for i := 1 to n shr 5 do
> !  begin
> !    k := i shl 5;
> !!   aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
> !!     FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs);
> !    Check(length(IDs)=n+1-32*i);
> !    for j := 0 to high(IDs) do
> !      Check(IDs[j]=k+j);
> !  end;
> In the above code, the WHERE clause of the {\f1\fs20 OneFieldValues} method will
> For instance, using such a SQL function, you are able to store multiple {\f1\fs2
> Those functions were implemented to be very efficient for speed. They won't crea
> For more complex dynamic array content handling, you'll have either to create yo
> : TPersistent/TCollection fields
> For instance, here is the way regression @*test@s included in the framework defi
> ! TSQLRecordPeopleObject = class(TSQLRecordPeople)
> ! private
> !   fPersistent: TCollTst;
> !   fUTF8: TRawUTF8List;
> ! public
> !   constructor Create; override;
> !   destructor Destroy; override;
> ! published
> !   property UTF8: TRawUTF8List read fUTF8;
> !   property Persistent: TCollTst read fPersistent;
> ! end;
> In order to avoid any memory leak or access violation, it's mandatory to initial
> !constructor TSQLRecordPeopleObject.Create;
> !begin
> !  inherited;
> !  fPersistent := TCollTst.Create;
> !  fUTF8 := TRawUTF8List.Create;
> !end;
> !
> !destructor TSQLRecordPeopleObject.Destroy;
> !begin
> !  inherited;
> !  FreeAndNil(fPersistent);
> !  FreeAndNil(fUTF8);
> !end;
> Here is how the regression @*test@s are performed:
> !var VO: TSQLRecordPeopleObject;
> !  (...)
> !if Client.TransactionBegin(TSQLRecordPeopleObject) then
> !try
> !  V2.FillPrepare(Client,'LastName=:(''Morse''):');
> !  n := 0;
> !  while V2.FillOne do
> !  begin
> !    VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople
> !    inc(n);
> !    VO.Persistent.One.Color := n+100;
> !    VO.Persistent.One.Length := n;
> !    VO.Persistent.One.Name := Int32ToUtf8(n);
> !    if n and 31=0 then
> !    begin
> !      VO.UTF8.Add(VO.Persistent.One.Name);
> !      with VO.Persistent.Coll.Add do
> !      begin
> !        Color := n+1000;
> !        Length := n*2;
> !        Name := Int32ToUtf8(n*3);
> !      end;
> !    end;
> !!   Check(Client.Add(VO,true)=n);
> !  end;
> !  Client.Commit;
> !except
> !  Client.RollBack; // in case of error
> !end;
> This will add 1000 rows to the {\f1\fs20 PeopleObject} table.
> First of all, the adding is nested inside a @**transaction@ call, to speed up @*
> !if Client.TransactionBegin(TSQLRecordPeopleObject) then
> !try
> !  //.... modify the database content, raise exceptions on error
> !  Client.Commit;
> !except
> !  Client.RollBack; // in case of error
> !end;
> In a @*Client-Server@ environment with multiple Clients connected at the same ti
> !if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
> !  ...
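The retry pattern above can be completed as a full block, as a minimal sketch only: it reuses the same {\f1\fs20 Client} instance, class and try/except convention as the surrounding regression code, and assumes (as the text states) that {\f1\fs20 TransactionBeginRetry} simply re-attempts the transaction acquisition up to the given number of times before returning false:

```pascal
// hedged sketch: wrap a batch of writes in a retried transaction, so that
// concurrent clients do not make the transaction acquisition fail outright
if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
try
  // ... add or update TSQLRecordPeopleObject rows here ...
  Client.Commit;
except
  Client.RollBack; // in case of error
end;
// if TransactionBeginRetry returned false, no transaction could be
// acquired after 20 attempts, so the write should be postponed
```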
> Note that the transactions are handled according to the corresponding client @*s
> The fields inherited from the {\f1\fs20 @*TSQLRecord@} class are retrieved via {
> Here is the data sent for instance to the Server, when the item with {\f1\fs20 I
> ${"FirstName":"Samuel Finley Breese31",
> $"LastName":"Morse",
> $"YearOfBirth":1791,
> $"YearOfDeath":1872,
> $"UTF8":["32"],
> $"Persistent":{"One":{"Color":132,"Length":32,"Name":"32"},"Coll":[{"Color":1032
> $}
> Up to revision 1.15 of the framework, the transmitted JSON content was not a tru
> When all 1000 rows were added to the database file, the following loop is called
> !  for i := 1 to n do
> !  begin
> !    VO.ClearProperties;
> !!   Client.Retrieve(i,VO);
> !    Check(VO.ID=i);
> !    Check(VO.LastName='Morse');
> !    Check(VO.UTF8.Count=i shr 5);
> !    for j := 0 to VO.UTF8.Count-1 do
> !      Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5);
> !    Check(VO.Persistent.One.Length=i);
> !    Check(VO.Persistent.One.Color=i+100);
> !    Check(GetInteger(pointer(VO.Persistent.One.Name))=i);
> !    Check(VO.Persistent.Coll.Count=i shr 5);
> !    for j := 0 to VO.Persistent.Coll.Count-1 do
> !      with VO.Persistent.Coll[j] do
> !      begin
> !        k := (j+1) shl 5;
> !        Check(Color=k+1000);
> !        Check(Length=k*2);
> !        Check(GetInteger(pointer(Name))=k*3);
> !      end;
> !  end;
> All the magic is made in the {\f1\fs20 Client.Retrieve(i,VO)} method. Data is re
> When the {\f1\fs20 ID=33} row is retrieved, the following JSON content is receiv
> ${"ID":33,
> $"FirstName":"Samuel Finley Breese32",
> $"LastName":"Morse",
> $"YearOfBirth":1791,
> $"YearOfDeath":1872,
> $"UTF8":"[\"32\"]",
> $"Persistent":"{\"One\":{\"Color\":133,\"Length\":33,\"Name\":\"33\"},\"Coll\":[
> In contradiction with POST content, this defines no valid nested JSON objects no
> :52 Custom TObject JSON serialization
> Not only {\f1\fs20 TPersistent, TCollection} and {\f1\fs20 TSQLRecord} types can
> In fact, any {\f1\fs20 @*TObject@} can be serialized as @*JSON@ in the whole fra
> In some cases, it may be handy to have a custom serialization, for instance if y
> You can add a customized serialization of any {\f1\fs20 class}, by calling the {
> In the current implementation of this feature, callbacks expect low-level implem
> Note that the process is called outside the "{\f1\fs20 \{...\}}" JSON object lay
> For instance, we'd like to customize the serialization of this class (defined in
> ! TFileVersion = class
> ! protected
> !   fDetailed: string;
> !   fBuildDateTime: TDateTime;
> ! public
> !   Major: Integer;
> !   Minor: Integer;
> !   Release: Integer;
> !   Build: Integer;
> !   BuildYear: integer;
> !   Main: string;
> ! published
> !   property Detailed: string read fDetailed write fDetailed;
> !   property BuildDateTime: TDateTime read fBuildDateTime write fBuildDateTime;
> ! end;
> By default, since it has been defined within {\f1\fs20 \{$M+\} ... \{$M-\}} cond
> & {"Detailed":"1.2.3.4","BuildDateTime":"1911-03-14T00:00:00"}
> This is what is expected when serialized within a {\f1\fs20 TSynLog} content, or
> We would like to serialize this {\f1\fs20 class} as such:
> & {"Major":1,"Minor":2001,"Release":3001,"Build":4001,"Main":"1","BuildDateTime"
> We will therefore define the {\i Writer} callback, as such:
> !class procedure TCollTstDynArray.FVClassWriter(const aSerializer: TJSONSerializ
> !  aValue: TObject; aHumanReadable, aDontStoreDefault, aFullExpand: Boolean);
> !var V: TFileVersion absolute aValue;
> !begin
> !  aSerializer.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Releas
> !    'Build',V.Build,'Main',V.Main,'BuildDateTime',DateTimeToIso8601Text(V.Build
> !end;
> Most of the JSON serialization work will be made within the {\f1\fs20 AddJSONEsc
> Then the associated {\i Reader} callback could be, for instance:
> !class function TCollTstDynArray.FVClassReader(const aValue: TObject; aFrom: PUT
> !  var aValid: Boolean): PUTF8Char;
> !var V: TFileVersion absolute aValue;
> !    Values: TPUtf8CharDynArray;
> !begin
> !  aValid := false;
> !  aFrom := JSONDecode(aFrom,['Major','Minor','Release','Build','Main','BuildDat
> !  if aFrom=nil then
> !    exit;
> !  V.Major := GetInteger(Values[0]);
> !  V.Minor := GetInteger(Values[1]);
> !  V.Release := GetInteger(Values[2]);
> !  V.Build := GetInteger(Values[3]);
> !  V.Main := UTF8DecodeToString(Values[4],StrLen(Values[4]));
> !  V.BuildDateTime := Iso8601ToDateTimePUTF8Char(Values[5]);
> !  aValid := true;
> !  result := aFrom;
> !end;
> Here, the {\f1\fs20 JSONDecode} function will un-serialize the JSON object into
> Then, the registration step will be defined as such:
> ! TJSONSerializer.RegisterCustomSerializer(TFileVersion,
> !   TCollTstDynArray.FVClassReader,TCollTstDynArray.FVClassWriter);
> If you want to disable the custom serialization, you may call the same method as
> ! TJSONSerializer.RegisterCustomSerializer(TFileVersion,nil,nil);
> This will reset the JSON serialization of the specified class to the default ser
> The above code uses some low-level functions of the framework (i.e. {\f1\fs20 Ad
< : ORM implementation
> : ORM implementation via pivot table
> Data sharding just feels natural, from the @*ORM@ point of view.
> But defining a pivot table is a classic and powerful use of relational database,
> - When data is huge, you can query only for the needed data, without having to l
> - In a master/detail data model, sometimes it can be handy to access directly to
> - And, last but not least, the pivot table is the natural way of storing data as
> : Introducing TSQLRecordMany
A dedicated class, inheriting from the standard {\f1\fs20 @*TSQLRecord@} class (
For instance:
! TSQLDest = class(TSQLRecord);
! TSQLSource = class;
!! TSQLDestPivot = class(TSQLRecordMany)
! private
!   fSource: TSQLSource;
................
!  Check(not MS.DestList.ManySelect(aClient,sID[i],dID[i]));
This code will delete one association per four, and ensure that {\f1\fs20 ManySe
!  finally
!    MD.Free;
!    MS.Free;
!  end;
This will release associated memory, and also the instance of {\f1\fs20 TSQLDest
: Automatic JOIN query
All those methods ({\f1\fs20 ManySelect, DestGetJoined...}) are used to retrieve
It is very common, in the SQL world, to create a @**JOIN@ed request at the main
A dedicated {\f1\fs20 FillPrepareMany} method has been added to the {\f1\fs20 @*
- Instantiate all {\f1\fs20 Dest} properties of each {\f1\fs20 TSQLRecordMany} i
- Create the appropriate {\f1\fs20 SELECT} statement, with an optional WHERE cla
Here is the test included in our regression suite, working with the same databas
!Check(MS.FillPrepareMany(aClient,
................
!    Check(MS.DestList.AssociationTime=i);
!    Check(MS.DestList.Dest.fID=dID[i]);
!    Check(MS.DestList.Dest.SignatureTime=MD.fSignatureTime);
!    Check(MS.DestList.Dest.Signature=FormatUTF8('% %',[aClient.ClassName,i]));
Check(MS.DestList.Dest.Signature=FormatUTF8('% %',[aClient.ClassName,i])); 1195 ! end; 1497 ! end; 1196 !! MS.FillClose; 1498 !! MS.FillClose; 1197 Note that in our case, an explicit call to {\f1\fs20 FillClose} has been added i 1499 Note that in our case, an explicit call to {\f1\fs20 FillClose} has been added i 1198 : What about arrays, TPersistent, TCollection, TMyClass? < 1199 The "{\i has many}" and "{\i has many through}" relationship we just described d < 1200 One drawback of this approach is that the data is split into several tables, and < 1201 Another potential issue is that one business logical unit is split into several < 1202 Starting with the revision 1.13 of the framework, {\i @*dynamic array@s}, {\f1\f < 1203 {\i @*Dynamic array@s} will be stored as @*BLOB@ fields in the database, retriev < 1204 {\f1\fs20 @*TPersistent@, @*TStrings@} and {\f1\fs20 @*TCollection@} will be sto < 1205 About this (trolling?) subject, and why/when you should use plain Delphi objects < 1206 : Dynamic arrays fields < 1207 : Dynamic arrays from Delphi Code < 1208 For instance, here is how the regression @*test@s included in the framework defi < 1209 ! TFV = packed record < 1210 ! Major, Minor, Release, Build: integer; < 1211 ! Main, Detailed: string; < 1212 ! end; < 1213 ! TFVs = array of TFV; < 1214 ! TSQLRecordPeopleArray = class(TSQLRecordPeople) < 1215 ! private < 1216 ! fInts: TIntegerDynArray; < 1217 ! fCurrency: TCurrencyDynArray; < 1218 ! fFileVersion: TFVs; < 1219 ! fUTF8: RawUTF8; < 1220 ! published < 1221 ! property UTF8: RawUTF8 read fUTF8 write fUTF8; < 1222 ! property Ints: TIntegerDynArray index 1 read fInts write fInts; < 1223 ! property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency < 1224 ! property FileVersion: TFVs index 3 read fFileVersion write fFileVersion; < 1225 ! 
end; < 1226 This {\f1\fs20 TSQLRecordPeopleArray} class inherits from {\f1\fs20 TSQLRecordPe < 1227 Some content is added to the {\f1\fs20 PeopleArray} table, with the following co < 1228 !var V: TSQLRecordPeople; < 1229 ! VA: TSQLRecordPeopleArray; < 1230 ! FV: TFV; < 1231 ! (...) < 1232 ! V2.FillPrepare(Client,'LastName=:(''Dali''):'); < 1233 ! n := 0; < 1234 ! while V2.FillOne do < 1235 ! begin < 1236 ! VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople < 1237 The {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} method are used to loop through < 1238 ! inc(n); < 1239 ! if n and 31=0 then < 1240 ! begin < 1241 ! VA.UTF8 := ''; < 1242 !! VA.DynArray('Ints').Add(n); < 1243 ! Curr := n*0.01; < 1244 !! VA.DynArray(2).Add(Curr); < 1245 ! FV.Major := n; < 1246 ! FV.Minor := n+2000; < 1247 ! FV.Release := n+3000; < 1248 ! FV.Build := n+4000; < 1249 ! str(n,FV.Main); < 1250 ! str(n+1000,FV.Detailed); < 1251 !! VA.DynArray('FileVersion').Add(FV); < 1252 ! end else < 1253 ! str(n,VA.fUTF8); < 1254 The {\f1\fs20 n} variable is used to follow the {\f1\fs20 PeopleArray} number, a < 1255 We could have used normal access to V{\f1\fs20 VA} and {\f1\fs20 FV} {\i dynamic < 1256 ! SetLength(VA.Ints,length(VA.Ints)+1); < 1257 ! VA.Ints[high(VA.Ints)] := n; < 1258 But the {\f1\fs20 DynArray} method is used instead, to allow direct access to th < 1259 ! VA.DynArray('Ints').Add(n); < 1260 Note that the {\f1\fs20 DynArray} method can be used via two overloaded set of p < 1261 ! VA.DynArray(1).Add(n); < 1262 since the {\f1\fs20 Ints} published property has been defined as such: < 1263 ! property Ints: TIntegerDynArray index 1 read fInts write fInts; < 1264 Similarly, the following line will add a {\f1\fs20 @*currency@} value to the {\f < 1265 ! VA.DynArray(2).Add(Curr); < 1266 And a more complex {\f1\fs20 TFV} record is added to the {\f1\fs20 FileVersion} < 1267 ! 
VA.DynArray('FileVersion').Add(FV); < 1268 Of course, using the {\f1\fs20 DynArray} method is a bit slower than direct {\f1 < 1269 Then the {\f1\fs20 FillPrepare} / {\f1\fs20 FillOne} loop ends with the followin < 1270 !! Check(Client.Add(VA,true)=n); < 1271 ! end; < 1272 This will add the {\f1\fs20 VA} fields content into the database, creating a new < 1273 :21 Dynamic arrays from SQL code < 1274 In order to access the @*BLOB@ content of the dynamic arrays directly from @*SQL < 1275 - {\f1\fs20 ByteDynArrayContains(BlobField,I64)}; < 1276 - {\f1\fs20 WordDynArrayContains(BlobField,I64)}; < 1277 - {\f1\fs20 @**IntegerDynArrayContains@(BlobField,I64)}; < 1278 - {\f1\fs20 CardinalDynArrayContains(BlobField,I64)}; < 1279 - {\f1\fs20 CurrencyDynArrayContains(BlobField,I64)} - in this case, {\f1\fs20 I < 1280 - {\f1\fs20 Int64DynArrayContains(BlobField,I64)}; < 1281 - {\f1\fs20 RawUTF8DynArrayContainsCase(BlobField,'Text')}; < 1282 - {\f1\fs20 RawUTF8DynArrayContainsNoCase(BlobField,'Text')}. < 1283 Those functions allow direct access to the BLOB content like this: < 1284 ! for i := 1 to n shr 5 do < 1285 ! begin < 1286 ! k := i shl 5; < 1287 !! aClient.OneFieldValues(TSQLRecordPeopleArray,'ID', < 1288 !! FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs); < 1289 ! Check(length(IDs)=n+1-32*i); < 1290 ! for j := 0 to high(IDs) do < 1291 ! Check(IDs[j]=k+j); < 1292 ! end; < 1293 In the above code, the WHERE clause of the {\f1\fs20 OneFieldValues} method will < 1294 For instance, using such a SQL function, you are able to store multiple {\f1\fs2 < 1295 Those functions were implemented to be very efficient for speed. They won't crea < 1296 For more complex dynamic array content handling, you'll have either to create yo < 1297 : TPersistent/TCollection fields < 1298 For instance, here is the way regression @*test@s included in the framework defi < 1299 ! TSQLRecordPeopleObject = class(TSQLRecordPeople) < 1300 ! private < 1301 ! fPersistent: TCollTst; < 1302 ! 
fUTF8: TRawUTF8List; < 1303 ! public < 1304 ! constructor Create; override; < 1305 ! destructor Destroy; override; < 1306 ! published < 1307 ! property UTF8: TRawUTF8List read fUTF8; < 1308 ! property Persistent: TCollTst read fPersistent; < 1309 ! end; < 1310 In order to avoid any memory leak or access violation, it's mandatory to initial < 1311 !constructor TSQLRecordPeopleObject.Create; < 1312 !begin < 1313 ! inherited; < 1314 ! fPersistent := TCollTst.Create; < 1315 ! fUTF8 := TRawUTF8List.Create; < 1316 !end; < 1317 ! < 1318 !destructor TSQLRecordPeopleObject.Destroy; < 1319 !begin < 1320 ! inherited; < 1321 ! FreeAndNil(fPersistent); < 1322 ! FreeAndNil(fUTF8); < 1323 !end; < 1324 Here is how the regression @*test@s are performed: < 1325 !var VO: TSQLRecordPeopleObject; < 1326 ! (...) < 1327 !if Client.TransactionBegin(TSQLRecordPeopleObject) then < 1328 !try < 1329 ! V2.FillPrepare(Client,'LastName=:(''Morse''):'); < 1330 ! n := 0; < 1331 ! while V2.FillOne do < 1332 ! begin < 1333 ! VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople < 1334 ! inc(n); < 1335 ! VO.Persistent.One.Color := n+100; < 1336 ! VO.Persistent.One.Length := n; < 1337 ! VO.Persistent.One.Name := Int32ToUtf8(n); < 1338 ! if n and 31=0 then < 1339 ! begin < 1340 ! VO.UTF8.Add(VO.Persistent.One.Name); < 1341 ! with VO.Persistent.Coll.Add do < 1342 ! begin < 1343 ! Color := n+1000; < 1344 ! Length := n*2; < 1345 ! Name := Int32ToUtf8(n*3); < 1346 ! end; < 1347 ! end; < 1348 !! Check(Client.Add(VO,true)=n); < 1349 ! end; < 1350 ! Client.Commit; < 1351 !except < 1352 ! Client.RollBack; // in case of error < 1353 !end; < 1354 This will add 1000 rows to the {\f1\fs20 PeopleObject} table. < 1355 First of all, the adding is nested inside a @**transaction@ call, to speed up @* < 1356 !if Client.TransactionBegin(TSQLRecordPeopleObject) then < 1357 !try < 1358 ! //.... modify the database content, raise exceptions on error < 1359 ! Client.Commit; < 1360 !except < 1361 ! 
Client.RollBack; // in case of error < 1362 !end; < 1363 In a @*Client-Server@ environment with multiple Clients connected at the same ti < 1364 !if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then < 1365 ! ... < 1366 Note that the transactions are handled according to the corresponding client @*s < 1367 The fields inherited from the {\f1\fs20 @*TSQLRecord@} class are retrieved via { < 1368 Here is the data sent for instance to the Server, when the item with {\f1\fs20 I < 1369 ${"FirstName":"Samuel Finley Breese31", < 1370 $"LastName":"Morse", < 1371 $"YearOfBirth":1791, < 1372 $"YearOfDeath":1872, < 1373 $"UTF8":["32"], < 1374 $"Persistent":{"One":{"Color":132,"Length":32,"Name":"32"},"Coll":[{"Color":1032 < 1375 $} < 1376 Up to revision 1.15 of the framework, the transmitted JSON content was not a tru < 1377 When all 1000 rows were added to the database file, the following loop is called < 1378 ! for i := 1 to n do < 1379 ! begin < 1380 ! VO.ClearProperties; < 1381 !! Client.Retrieve(i,VO); < 1382 ! Check(VO.ID=i); < 1383 ! Check(VO.LastName='Morse'); < 1384 ! Check(VO.UTF8.Count=i shr 5); < 1385 ! for j := 0 to VO.UTF8.Count-1 do < 1386 ! Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5); < 1387 ! Check(VO.Persistent.One.Length=i); < 1388 ! Check(VO.Persistent.One.Color=i+100); < 1389 ! Check(GetInteger(pointer(VO.Persistent.One.Name))=i); < 1390 ! Check(VO.Persistent.Coll.Count=i shr 5); < 1391 ! for j := 0 to VO.Persistent.Coll.Count-1 do < 1392 ! with VO.Persistent.Coll[j] do < 1393 ! begin < 1394 ! k := (j+1) shl 5; < 1395 ! Check(Color=k+1000); < 1396 ! Check(Length=k*2); < 1397 ! Check(GetInteger(pointer(Name))=k*3); < 1398 ! end; < 1399 ! end; < 1400 All the magic is made in the {\f1\fs20 Client.Retrieve(i,VO)} method. 
Data is retrieved from the Server as JSON.
When the {\f1\fs20 ID=33} row is retrieved, the following JSON content is received from the Server:
${"ID":33,
$"FirstName":"Samuel Finley Breese32",
$"LastName":"Morse",
$"YearOfBirth":1791,
$"YearOfDeath":1872,
$"UTF8":"[\"32\"]",
$"Persistent":"{\"One\":{\"Color\":133,\"Length\":33,\"Name\":\"33\"},\"Coll\":[(...)
In contradiction with POST content, this defines no valid nested JSON objects nor arrays, but escaped JSON strings.
:52 Custom TObject JSON serialization
Not only {\f1\fs20 TPersistent, TCollection} and {\f1\fs20 TSQLRecord} types can be serialized.
In fact, any {\f1\fs20 @*TObject@} can be serialized as @*JSON@ in the whole framework.
In some cases, it may be handy to have a custom serialization, for instance if you need a specific layout.
You can add a customized serialization of any {\f1\fs20 class}, by calling the {\f1\fs20 TJSONSerializer.RegisterCustomSerializer} class method.
In the current implementation of this feature, callbacks expect low-level implementation of the serialization process.
Note that the process is called outside the "{\f1\fs20 \{...\}}" JSON object layout.
For instance, we'd like to customize the serialization of this class (defined in {\f1\fs20 SynCommons.pas}):
! TFileVersion = class
! protected
!   fDetailed: string;
!   fBuildDateTime: TDateTime;
! public
!   Major: Integer;
!   Minor: Integer;
!   Release: Integer;
!   Build: Integer;
!   BuildYear: integer;
!   Main: string;
! published
!   property Detailed: string read fDetailed write fDetailed;
!   property BuildDateTime: TDateTime read fBuildDateTime write fBuildDateTime;
! end;
By default, since it has been defined within {\f1\fs20 \{$M+\} ...
\{$M-\}} conditionals, it will be serialized from its {\f1\fs20 published} properties only, as:
& {"Detailed":"1.2.3.4","BuildDateTime":"1911-03-14T00:00:00"}
This is what is expected when serialized within a {\f1\fs20 TSynLog} content, or for main use within the framework.
We would like to serialize this {\f1\fs20 class} as such:
& {"Major":1,"Minor":2001,"Release":3001,"Build":4001,"Main":"1","BuildDateTime":(...)
We will therefore define the {\i Writer} callback, as such:
!class procedure TCollTstDynArray.FVClassWriter(const aSerializer: TJSONSerializer;
!  aValue: TObject; aHumanReadable, aDontStoreDefault, aFullExpand: Boolean);
!var V: TFileVersion absolute aValue;
!begin
!  aSerializer.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Release,
!    'Build',V.Build,'Main',V.Main,'BuildDateTime',DateTimeToIso8601Text(V.BuildDateTime)]);
!end;
Most of the JSON serialization work will be made within the {\f1\fs20 AddJSONEscape} method.
Then the associated {\i Reader} callback could be, for instance:
!class function TCollTstDynArray.FVClassReader(const aValue: TObject; aFrom: PUTF8Char;
!  var aValid: Boolean): PUTF8Char;
!var V: TFileVersion absolute aValue;
!    Values: TPUtf8CharDynArray;
!begin
!  aValid := false;
!  aFrom := JSONDecode(aFrom,['Major','Minor','Release','Build','Main','BuildDateTime'],Values);
!  if aFrom=nil then
!    exit;
!  V.Major := GetInteger(Values[0]);
!  V.Minor := GetInteger(Values[1]);
!  V.Release := GetInteger(Values[2]);
!  V.Build := GetInteger(Values[3]);
!  V.Main := UTF8DecodeToString(Values[4],StrLen(Values[4]));
!  V.BuildDateTime := Iso8601ToDateTimePUTF8Char(Values[5]);
!  aValid := true;
!  result := aFrom;
!end;
Here, the {\f1\fs20 JSONDecode} function will un-serialize the JSON object into an array of {\f1\fs20 PUTF8Char} values.
Then, the registration step will be defined as such:
! TJSONSerializer.RegisterCustomSerializer(TFileVersion,
!   TCollTstDynArray.FVClassReader,TCollTstDynArray.FVClassWriter);
If you want to disable the custom serialization, you may call the same method as such:
! TJSONSerializer.RegisterCustomSerializer(TFileVersion,nil,nil);
This will reset the JSON serialization of the specified class to the default serializer.
The above code uses some low-level functions of the framework (i.e. {\f1\fs20 AddJSONEscape} and {\f1\fs20 JSONDecode}).
:29 Shared nothing architecture (or sharding)
Here is what {\i wikipedia} states at @http://en.wikipedia.org/wiki/Shared_nothing_architecture@:
{\i A @**shared nothing architecture@ (SN) is a distributed computing architecture in which each node is independent and self-sufficient.}
As we just stated, in our ORM, high-level types like @*dynamic array@s or {\f1\fs20 TPersistent} / {\f1\fs20 TCollection} properties are stored as BLOB or TEXT within the main data row.
When the server starts to have an increasing number of clients, such a data layout could be a benefit.
A simple but very efficient {\i sharding} mechanism could therefore be implemented with this ORM.
Storing detailed data in BLOB or in TEXT as JSON could at first sound like a wrong idea.
On second thought, having at hand a shared nothing architecture could be a huge benefit.
: Calculated fields
It is often useful to handle some calculated fields. That is, having some field values computed from other fields.
This should not be done on the Server side. In fact, the framework expects the transmitted data to be consistent.
!function TSQLRestServerDB.EngineUpdate(Table: TSQLRecordClass; ID: integer;
!  const SentData: RawUTF8): boolean;
!begin
!  if (self=nil) or (Table=nil) or (ID<=0) then
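As a client-side alternative for calculated fields, the computation can simply take place in the property setters of the {\f1\fs20 TSQLRecord}. The following {\f1\fs20 TSQLInvoice} class is a hypothetical sketch (it is not one of the framework's samples), showing how a {\f1\fs20 Total} field stays consistent whenever {\f1\fs20 Quantity} or {\f1\fs20 UnitPrice} is assigned:
! TSQLInvoice = class(TSQLRecord)
! protected
!   fQuantity, fUnitPrice, fTotal: Currency;
!   procedure SetQuantity(const Value: Currency);
!   procedure SetUnitPrice(const Value: Currency);
! published
!   property Quantity: Currency read fQuantity write SetQuantity;
!   property UnitPrice: Currency read fUnitPrice write SetUnitPrice;
!   property Total: Currency read fTotal write fTotal; // persisted as usual
! end;
!
!procedure TSQLInvoice.SetQuantity(const Value: Currency);
!begin
!  fQuantity := Value;
!  fTotal := fQuantity*fUnitPrice; // keep the calculated field consistent
!end;
!
!procedure TSQLInvoice.SetUnitPrice(const Value: Currency);
!begin
!  fUnitPrice := Value;
!  fTotal := fQuantity*fUnitPrice;
!end;
Since the setters run before the record is sent to the Server, the transmitted JSON already contains a consistent {\f1\fs20 Total} value.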

Changes to SQLite3/SQLite3Commons.pas

begin
  result := inherited Info(aTypeInfo);
  if (result=nil) and AddInterface(aTypeInfo,sicClientDriven) then
    result := inherited Info(aTypeInfo);
end;


-{ TServiceFactoryClient }
+{ TInterfacedObjectFake (private for TServiceFactoryClient) }

type
  /// map the stack memory layout at TInterfacedObjectFake.FakeCall()
  TFakeCallStack = packed record
    EDX, ECX, MethodIndex, EBP, Ret: Cardinal;
    Stack: array[word] of byte;
  end;
...............................................................................
      smvDouble,smvDateTime: asm mov eax,V; fld qword ptr [eax] end; // in st(0)
      smvCurrency: asm mov eax,V; fild qword ptr [eax] end; // in st(0)
    end;
  end;
end;

+
+{ TServiceFactoryClient }
+
function TServiceFactoryClient.CallClient(const aMethod: RawUTF8;
  aErrorMsg: PRawUTF8; const aParams: RawUTF8; aResult: PRawUTF8;
  aClientDrivenID: PCardinal; aServiceCustomAnswer: PServiceCustomAnswer): boolean;
var uri,sent,resp,head: RawUTF8;
    Values: TPUtf8CharDynArray;
begin
  result := false;
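For context, application code never calls TServiceFactoryClient directly: a factory is resolved from the client's Services container, and the fake interface instance it returns routes each method call through CallClient(). The snippet below is an illustrative sketch only, assuming an ICalculator sample interface (as used in the framework's regression tests) registered on both client and server:

  var I: ICalculator;
  begin
    // register the interface client-side, matching the server-side contract
    Client.ServiceRegister([TypeInfo(ICalculator)],sicShared);
    // the returned fake instance routes each method call to CallClient()
    if Client.Services['Calculator'].Get(I) then
      Check(I.Add(10,20)=30); // executed on the server, via JSON marshalling
  end;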