I've looked at the TimerEnable code but I'm not sure if I'm following - if a process takes some time, and the timer is short, what happens? Is the timer callback called immediately, or is it enqueued until the already running process finishes?
Thanks,
Interesting. I've replaced the above part with this, and it appears not to leak, and it seems to be working correctly:
var
  aList: TSQLTableJSON;
begin
  aList := ServerDB.MultiFieldValues(TSomething, Fields, Where, [], Params);
  if aList <> nil then
    Result := TSynSQLTableDataSet.CreateOwnedTable(AOwner, aList);
...
I have a function that returns a Dataset to be used in some grid in my code. However, I've found out that I don't understand the life cycle of some variables.
// TSomething is a TSQLRecord descendant
b := TSomething.CreateAndFillPrepare(ServerDB, Where, [], Params,
  'FieldA,FieldB,FieldC');
try
  Result := TSynSQLTableDataSet.CreateOwnedTable(AOwner, b.FillTable); // using CreateTable instead of CreateOwnedTable leaks
finally
  // Freeing this variable makes it AV
  // b.Free;
end;
How can I return a DataSet without leaking that TSQLRecord descendant? Should I just keep a reference around to free it when the dataset is also freed? Is there a better way to return a TDataSet so my DevExpress grid keeps working?
I'm trying to come up with a solution for this (by the way, the updated documentation link is now https://synopse.info/files/html/api-1.1 … ADPROCESS).
Does anybody have some sample code that shows how to program a task to run at a *specific* time every day?
Excellent!
A related question: are parentheses, IS and NOT supported by the internal SQL adapter? Something like 'A=1 and B=1 and not (D is null or D = 0)'?
Thank you for your incredible work, Arnaud.
It was this (columns removed):
'SELECT RowID,...COLFROM,COLTO FROM SomeObject WHERE :(3001): BETWEEN COLFROM AND COLTO LIMIT 1'
It all went well until SynTable.TSynTableStatement.Create, around line 7616. Here the P buffer was
P = ':(3001): BETWEEN COLFROM AND COLTO LIMIT 1'
And in line 7616, GetPropIndex returns -1 and it causes Stmt.Where to remain empty, which causes the problem.
It seems that the problem only occurs when the parameter comes first in the WHERE clause, because if I do '1=1 AND ? BETWEEN COLFROM AND COLTO' it works without problems (although in that case the framework discarded the statement and switched engines because it considered it a complex statement).
I have a table with two values (COLFROM and COLTO). When I do the following:
ServerDB.Retrieve('? BETWEEN COLFROM AND COLTO', [], [SomeNumber], Result);
Using SQLite3 as the backend, the results are as expected, but using MySQL, it gets all rows and returns whichever comes first.
After some debugging, I found out that in this function, right after calling "InternalAdaptSQL", the StaticSQL variable loses its WHERE clause when using a MySQL server with Zeos (SQLite3 works fine). I *think* the problem may be in TSynTableStatement, because its Where property comes back empty.
function TSQLRestServer.EngineList(const SQL: RawUTF8; ForceAJAX: Boolean;
  ReturnedRowCount: PPtrInt): RawUTF8;
var Rest: TSQLRest;
    StaticSQL: RawUTF8;
begin
  StaticSQL := SQL;
  Rest := InternalAdaptSQL(Model.GetTableIndexFromSQLSelect(SQL,false),StaticSQL); // <-- here
  if Rest=nil then
    result := MainEngineList(SQL,ForceAJAX,ReturnedRowCount) else
    result := Rest.EngineList(StaticSQL,ForceAJAX,ReturnedRowCount);
end;
As a workaround, I replaced the code as follows, and it works as expected in both databases.
ServerDB.Retrieve('COLFROM <= ? AND COLTO >= ?', [], [SomeNumber,SomeNumber], Result);
I have a TSQLRestServerDB with a nice model, and it's working wonders.
Now, normally, in a classic database-backed application I would just simply do something like this:
select distinct InterestColumn from MyObject where SomeColumn is NULL
MyObject is a table managed by mORMot. How do I execute and fetch the list of unique InterestColumn? There can be quite a lot of MyObject rows in the database.
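A minimal sketch of what I have in mind (assuming the TSQLRecord class mapping the MyObject table is called TMyObject, and that TSQLRest.ExecuteList accepts a raw SELECT like this):

procedure LoadInterestValues;
var
  list: TSQLTableJSON;
  values: TRawUTF8DynArray;
  r: integer;
begin
  // ExecuteList sends the raw SELECT to the server-side engine; the table
  // class array lets the ORM resolve the MyObject table of the model
  list := ServerDB.ExecuteList([TMyObject],
    'select distinct InterestColumn from MyObject where SomeColumn is null');
  if list <> nil then
  try
    SetLength(values, list.RowCount);
    for r := 1 to list.RowCount do // row 0 holds the column names
      values[r - 1] := list.GetU(r, 0);
  finally
    list.Free;
  end;
end;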
Sounds more like a problem on Zeos side, since it involves a Zeos method.
I don't know the internals involved with this table name casing. Could you ask this on the Zeos support forum, please?
Yes, I already filed a bug report, but I also think there is some weird stuff happening on the mORMot side. I had to move on to other things for a while, but I'll revisit this topic later.
Did some extra debugging, and the issue seems to be table case sensitivity (it appears to be something related to Zeos). There is a variable lower_case_table_names but I don't know yet where it is set.
Edit: I think mormot tries to get the columns in SynDBZeos.pas, around line 720:
procedure TSQLDBZEOSConnectionProperties.GetFields(
  const aTableName: RawUTF8; out Fields: TSQLDBColumnDefineDynArray);
var meta: IZDatabaseMetadata;
    res: IZResultSet;
    n, i: integer;
    Schema, TableName: RawUTF8;
    sSchema, sTableName: string;
    F: TSQLDBColumnDefine;
    FA: TDynArray;
begin
  if GetDatabaseMetadata(meta) then begin
    SQLSplitTableName(aTableName, Schema,TableName);
    sSchema := UTF8ToString(Schema);
    { mormot does not create the Tables casesensitive but gives mixed cased strings as tablename
      so we normalize the identifiers to database defaults : }
    sTableName := meta.GetIdentifierConvertor.ExtractQuote(UTF8ToString(TableName));
    sTableName := meta.AddEscapeCharToWildcards(sTableName); //do not use "like" search patterns ['_','%'] so they need to be escaped
    res := meta.GetColumns('',sSchema,sTableName,'');
The line with sTableName := meta.GetIdentifierConvertor.ExtractQuote(UTF8ToString(TableName)) converts the table name to lowercase. This causes meta.GetColumns() to fail.
I'm getting the following error during CreateMissingTables() on a MySQL database hosted on Amazon. User has full permissions on this schema.
Error SQLITE_ERROR (1) [Step] using 3.34.0 - TSQLRestStorageExternal.Create: external table creation TStorageXX failed: GetFields() returned nil - SQL="StorageXX"
The first exception occurs in mORMotDB.pas, line 780:
SQL := fProperties.SQLCreate(fTableName,CreateColumns,false);
if SQL<>'' then
  if ExecuteDirect(pointer(SQL),[],[],false)<>nil then begin
    GetFields;
    if fFieldsExternal=nil then
      raise EORMException.CreateUTF8('%.Create: external table creation % failed:'+
        ' GetFields() returned nil - SQL="%"',[self,StoredClass,fTableName,SQL]);
    TableCreated := true;
  end;
If I break during the exception and check the database, the table creation was successful.
If I just ignore it and run the program a second time, I get a different error:
Error SQLITE_ERROR (1) [Step] using 3.34.0 - TSQLRestStorageExternal.Create: TStorageXX: unable to create external missing field StorageBoleta.Sucursal - SQL="ALTER TABLE StorageXX ADD Sucursal mediumtext character set UTF8"
This code runs locally without issues, but throws these errors when trying to connect to Amazon.
This is the Amazon information (captured with HeidiSQL):
Host: XXXXX.sa-east-1.rds.amazonaws.com
Network type: MySQL (TCP/IP)
Connected: Yes
Real Hostname: xxxxx
Server OS: Linux
Server version: 5.6.44-log - Source distribution
Connection port: 3306
Compressed protocol: No
Unicode enabled: Yes
SSL enabled: No
max_allowed_packet: 4,0 MiB
Client version (C:\Program Files\HeidiSQL\libmariadb.dll): 10.4.12
Uptime: 235 days, 09:38:38.0
Threads: 8
Questions: 17.212.537
Slow queries: 289
Opens: 253.268
Flush tables: 1
Open tables: 1.977
Queries per second avg: 0,846
For the moment, I added retry logic in the calling method:
repeat
  if Result.Retries > 0 then
    TThread.Sleep(FPreferences.WaitBetweenRetries);
  Result.PreviouslyAssignedNumber := ReserveRecord(Branch, Terminal);
  Inc(Result.Retries);
until (Result.PreviouslyAssignedNumber > 0) or (Result.Retries >= FPreferences.NumberOfRetries);
After some testing, I found out that the ReserveRecord() function takes about 50ms to complete. So I tried sending 5 requests at the same time, with 0 retries configured.
Start
Thread: 1 - Error obtaining record (2 ms., 1 retries)
Thread: 1 - Error obtaining record (0 ms., 1 retries)
Thread: 1 - Error obtaining record (0 ms., 1 retries)
Thread: 1 - Error obtaining record (0 ms., 1 retries)
Thread: 1 - Assigned number: 2290 (52 ms., 1 retries)
End
Adding 10ms delay, 2 retries:
Start
Thread: 1 - Error obtaining record (53 ms., 2 retries)
Thread: 1 - Error obtaining record (59 ms., 2 retries)
Thread: 1 - Error obtaining record (64 ms., 2 retries)
Thread: 1 - Assigned number: 2292 (146 ms., 2 retries)
Thread: 1 - Assigned number: 2291 (173 ms., 1 retries)
End
Adding 50ms delay, 10 retries:
Start
Thread: 1 - Assigned number: 2296 (139 ms., 2 retries)
Thread: 1 - Assigned number: 2297 (195 ms., 3 retries)
Thread: 1 - Assigned number: 2295 (208 ms., 1 retries)
Thread: 1 - Assigned number: 2298 (295 ms., 5 retries)
Thread: 1 - Assigned number: 2299 (359 ms., 6 retries)
End
In my current case, I have about 20 terminals, so in the worst-case scenario the server would receive 20 requests at the same time (and this is likely never going to happen), and a 100ms or so wait for a ticket to complete is not a big deal, so for my current needs this is OK. However, if I wanted to scale (for example, a server in the cloud serving thousands of terminals), what could be a good way to "reserve" a record? I imagine an in-memory concurrent cache (queue?) would do the trick, but I don't know how easy that is to do in Delphi and mORMot.
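To make the idea concrete, here is a minimal sketch of the kind of reservation pool I have in mind (all names are hypothetical; it assumes the available correlative numbers can be preloaded into memory at startup, with the database updated asynchronously afterwards):

uses
  System.SyncObjs, System.Generics.Collections;

type
  // hypothetical in-memory pool: preload the available numbers once,
  // then each request only pays for a very short critical section
  TNumberPool = class
  private
    FLock: TCriticalSection;
    FAvailable: TQueue<Int64>;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Preload(const Numbers: array of Int64);
    function TryReserve(out Number: Int64): boolean;
  end;

constructor TNumberPool.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FAvailable := TQueue<Int64>.Create;
end;

destructor TNumberPool.Destroy;
begin
  FAvailable.Free;
  FLock.Free;
  inherited;
end;

procedure TNumberPool.Preload(const Numbers: array of Int64);
var n: Int64;
begin
  FLock.Enter;
  try
    for n in Numbers do
      FAvailable.Enqueue(n);
  finally
    FLock.Leave;
  end;
end;

function TNumberPool.TryReserve(out Number: Int64): boolean;
begin
  FLock.Enter;
  try
    Result := FAvailable.Count > 0;
    if Result then
      Number := FAvailable.Dequeue; // mark it as used in the DB afterwards, outside the lock
  finally
    FLock.Leave;
  end;
end;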
Sending 1GB at a time is obviously not a good idea. You can check `TSQLRestBatch.SizeBytes` and when it reaches a certain number, for example 5MB, send it and start another batch for the remaining data.
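For instance, a rough sketch (TMyObj and the 5MB threshold are placeholders, and it assumes SizeBytes and BatchSend behave as just described):

procedure SendInChunks(const Objs: array of TMyObj);
var
  batch: TSQLRestBatch;
  i: integer;
begin
  batch := TSQLRestBatch.Create(ServerDB, TMyObj);
  try
    for i := 0 to high(Objs) do
    begin
      batch.Update(Objs[i]);
      if batch.SizeBytes >= 5 shl 20 then // ~5 MB accumulated
      begin
        ServerDB.BatchSend(batch); // flush this chunk to the server
        FreeAndNil(batch); // FreeAndNil from SysUtils keeps the finally safe
        batch := TSQLRestBatch.Create(ServerDB, TMyObj); // start the next chunk
      end;
    end;
    ServerDB.BatchSend(batch); // send whatever is left
  finally
    batch.Free;
  end;
end;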
Thank you, I'll take a look.
I'm using the following code to reserve a record for further processing (select an available record, then update it to ensure it is not used by another thread).
function ReserveRecord(BranchOffice, Terminal: RawUTF8): Int64;
var
  MyRecord: TSomeRecord;
  log: TSynLog;
begin
  log := TSynLog.Add;
  Result := -1;
  if ServerDB.TransactionBegin(TSomeRecord) then
  begin
    try
      MyRecord := TSomeRecord.Create;
      try
        if ServerDB.Retrieve('Available = 1', MyRecord) then
        begin
          MyRecord.Available := false;
          MyRecord.TrackId := 0;
          MyRecord.BranchOffice := BranchOffice;
          MyRecord.Terminal := Terminal;
          if ServerDB.Update(MyRecord) then
          begin
            Result := MyRecord.PreviouslyAssignedNumber;
          end;
        end;
        ServerDB.Commit();
      finally
        MyRecord.Free;
      end;
    except
      ServerDB.RollBack();
    end;
  end
  else
  begin
    log.log(sllError, 'Error acquiring transaction');
  end;
end;
When many requests arrive at the same time I get 'Error acquiring transaction' for most of them (18 out of 20). I understand some locking is involved. What would be a good way to minimize locking, or to retry without hammering the server?
You can specify the fields.
But note that the Blob content will be serialized as JSON during transmission, so it is only good for small Blob sizes.
This is interesting.
I need to do this operation once a day, and we are talking between 1,000 and 10,000 records with a blob size that may vary from 2KB up to 100KB (I don't have the numbers yet). Assuming the worst-case scenario, we are talking about 1GB in raw binary before any JSON transformation. Is it better to transfer the records one by one in this case?
Hello,
Is it possible to transfer BLOB fields in a Batch Update operation?
Given the following class:
TMyObj = class(TSQLRecord)
private
  FBody: TSQLRawBlob;
published
  property Body: TSQLRawBlob read FBody write FBody;
end;
I want to store an arbitrary stream that I receive from somewhere else.
AStream.Position := 0;
MyObj.Body := StreamToRawByteString(AStream);
And to retrieve:
AStream := RawByteStringToStream(obj.Body);
AStream.Position := 0;
Is this the right way to save a stream to a blob? Does this work for binary files?
I have a program that emits sales receipts. Correlative numbers for the receipt are assigned beforehand by the Tax office, and need to be distributed between branch offices and points of sales (terminals).
I found out that the most convenient way, once a correlative series has been received from the Tax office, is to "preload" all receipts, so the correlative numbers are already stored in the database. This way I can "move" series of numbers between branches and not worry about internet connectivity.
The problem is that the created receipt is a signed XML document, and it *needs* the correlative number beforehand.
So, to emit a single receipt, a terminal (client, POS) must be able to reserve the next available correlative number, then create the XML document, then update the record, and all of this in an atomic operation. This is my current design:
1. Read the first record that has the Branch and Terminal fields as null (unused)
2. Update the record with the appropriate values for Branch and Terminal (thus marking it as "used")
3. Create the receipt data (this needs the Correlative number obtained above)
4. Update the record with the generated receipt XML
Right now, I'm doing the following for the first part (obtaining the reserved receipt). While this seems to work, I wonder if there is a way to update a single record in an atomic way.
function GetReservedReceipt(Branch, Terminal: RawUTF8): Int64;
var
  Receipt: TReceipt;
  Id: TID;
begin
  Result := -1;
  if ServerDB.TransactionBegin(TReceipt) then
  begin
    try
      Receipt := TReceipt.Create;
      try
        if ServerDB.Retrieve
          ('Available is NULL and Branch is NULL and Terminal is NULL',
          Receipt) then
        begin
          Receipt.Available := false;
          Receipt.TrackId := 0;
          Receipt.Branch := Branch;
          Receipt.Terminal := Terminal;
          if ServerDB.Update(Receipt) then
          begin
            Result := Receipt.AssignedCorrelativeNumber;
          end;
        end;
        ServerDB.Commit();
      finally
        Receipt.Free;
      end;
    except
      ServerDB.RollBack();
    end;
  end;
end;
So I guess my question is: is there a way to update a single record based on criteria that may match multiple records?
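One idea I'm considering, as a sketch only (it assumes TSQLRest.Execute accepts a raw UPDATE on the server side, and that the SQLite3 engine runs the statement atomically; 'B01'/'T01' are placeholder values):

// hypothetical: claim exactly one free row in a single server-side statement,
// so no explicit transaction is needed around the read-then-update
ServerDB.Execute(
  'update Receipt set Branch=''B01'', Terminal=''T01'' ' +
  'where ID = (select ID from Receipt where Branch is null limit 1)');
// the claimed row could then be read back by its Branch/Terminal values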
I don't understand. Should I worry about cache updates? If another client retrieves any of the updated records right after this method is executed, will they get the updated ID?
I was about to post my code, and found out that, indeed, I was using the wrong object during the update.
This is my current code, which seems to be working fine:
procedure UpdateTrackIds(docs: TSomeList; TrackId: Int64);
var
  batchUpdater: TSQLRestBatch;
  i: Integer;
  fieldBits: TSQLFieldBits;
  rec: TMyRecord;
begin
  fieldBits := TMyRecord.RecordProps.FieldBitsFromRawUTF8
    ([StringToUTF8('TrackId')]);
  batchUpdater := TSQLRestBatch.Create(ServerDB, TMyRecord);
  try
    rec := TMyRecord.Create;
    try
      rec.TrackId := TrackId;
      for i := 0 to Length(docs) - 1 do
      begin
        rec.IDValue := docs[i].Id;
        batchUpdater.Update(rec, fieldBits);
      end;
    finally
      rec.Free;
    end;
    ServerDB.BatchSend(batchUpdater);
  finally
    batchUpdater.Free;
  end;
end;
Thanks, that really helps.
Anyway, apparently I'm doing something wrong, because the "Add" method always exits early, failing this check:
if PSQLRecordClass(Value)^<>fTable then
  exit else begin // '{"Table":[...,"PUT",{object},...]}'
The debugger evaluates PSQLRecordClass(Value)^ as TSQLRecordClass($204B0) (my record class derives from TSQLRecord).
What should I use to fix this?
I have some operations that require an update on a list of records that I retrieved from the database before.
In essence, I have a list of IDs for which I need to update a single field. I've been looking at TSQLRestBatch, but that needs the whole record loaded, I believe.
Is there a way to update a single field in a list of record Ids?
(Slowly making progress - by the way, when doing Google searches, I get flooded with commit messages from Fossil; I think that should be addressed.)
I did this, because my task lives inside my service (not in the server). Is this right?
someService := T.Create();
aRestServer.ServiceDefine(someService, [ISomeService]);
aRestServer.TimerEnable(someService.MyBackgroundTask, 360);
That's the question.
Hello,
Can anybody please share an example of how to use TimerEnable / TimerDisable? I cannot find any example or documentation about it. The comment in the source code says the following:
- define a task running on a periodic number of seconds in a background thread
- could be used to run background maintenance or monitoring tasks on this TSQLRest instance, at a low pace (typically every few minutes)
- will instantiate and run a shared TSynBackgroundTimer instance for this TSQLRest, so all tasks will share the very same thread
- you can run BackgroundTimer.EnQueue or ExecuteNow methods to implement a FIFO queue, or force immediate execution of the process
- will call BeginCurrentThread/EndCurrentThread as expected e.g. by logs
But there is no mention of how to actually use it. Does anyone have an example?
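Here is my current understanding, as a minimal sketch (it assumes the callback must match the TOnSynBackgroundTimerProcess signature declared in SynCommons.pas, and that TimerDisable unregisters it; the class and variable names are made up):

uses
  System.SyncObjs, // TWaitResult
  SynCommons;      // TSynBackgroundTimer, RawUTF8

type
  TMaintenance = class
  public
    procedure OnTimer(Sender: TSynBackgroundTimer; Event: TWaitResult;
      const Msg: RawUTF8);
  end;

procedure TMaintenance.OnTimer(Sender: TSynBackgroundTimer; Event: TWaitResult;
  const Msg: RawUTF8);
begin
  // periodic work goes here - it runs in the shared background thread,
  // not in the main thread
end;

// run OnTimer every 60 seconds on the shared TSynBackgroundTimer thread
aRestServer.TimerEnable(maintenance.OnTimer, 60);
// and to stop it later:
aRestServer.TimerDisable(maintenance.OnTimer);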
Do you mean overriding those methods in a descendant class?
TMyRestServer = class(TSQLRestServerFullMemory)
protected
  procedure BeginCurrentThread(Sender: TThread); override;
  procedure EndCurrentThread(Sender: TThread); override;
end;

procedure TMyRestServer.BeginCurrentThread(Sender: TThread);
begin
  inherited;
  CoInitialize(nil);
end;

procedure TMyRestServer.EndCurrentThread(Sender: TThread);
begin
  CoUninitialize;
  inherited;
end;
By the way, I just called CoInitialize() in the service method itself, as repeated calls in the same thread should not do anything extra, according to some guy on the internet. Is this right?
I'm using a `TSQLRestServerFullMemory` object, and a service that will internally use MSXML. I'm getting this error:
Debug Output:
onecore\com\combase\objact\objact.cxx(827)\combase.dll!767FF2A9: (caller: 767FE1DB) ReturnHr(20) tid(3664) 800401F0 CoInitialize has not been called.
Where should I call the CoInitialize / CoUninitialize methods in my server?
Okay - so no TObjectList but instead use Delphi arrays. Got it.
Right now I have a service that receives a custom object.
IMyService = interface(IInvokable)
  ['{E3186F91-F373-48A6-AE92-8931437D4AAF}']
  function MyMethod(const ClientId: RawUTF8; const MyClass: TMyClass): TCQRSResult;
end;
TMyClass contains a TObjectList<TAnotherObject>. What is the proper, transparent way to handle this? Register a custom deserializer? I cannot use the "ClassName" hack because I don't control the requests being sent to this service, and I find the custom collections method really cumbersome.
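Following the advice above to use dynamic arrays instead of TObjectList, this is roughly how I understand the T*ObjArray pattern (a sketch based on my reading of the documentation; it assumes TSynAutoCreateFields releases the array items and that RegisterObjArrayForJSON is available):

type
  TAnotherObjectObjArray = array of TAnotherObject;

  TMyClass = class(TSynAutoCreateFields) // frees its T*ObjArray items itself
  private
    FItems: TAnotherObjectObjArray;
  published
    property Items: TAnotherObjectObjArray read FItems write FItems;
  end;

// once at startup, so the serializer knows the item class of the array:
TJSONSerializer.RegisterObjArrayForJSON(
  TypeInfo(TAnotherObjectObjArray), TAnotherObject);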
Hello,
Is this still the case? I'm trying to serialize a TObjectList but I'm having a hard time understanding this.
I'm writing a smallish application that will work with some existing project.
Currently, my main application uses a MySQL database, and it's an old-fashioned client-server architecture. I'm using Delphi and AnyDAC in this application. This is an invoicing app.
Now, I'm in the process to create a small client application that will create legal electronic receipts, and save them in the existing database. My initial design is a *new* mORMot server application that will act as a proxy between the existing MySQL database and this new client. This is going well for the most part, but I have one problem: this client application sometimes will need to work *without* connectivity. I'm thinking of having a local storage (using SQLite3) and, when connectivity is available, push any records to the server.
Now, the question: is there a way to do this automagically with mORMot, or is this something that needs to be implemented in a custom way, for example, with a column in the database?
Ok, I had deleted that one in my quest to find the wrong file. In any case, I'll make sure the file remains there.
Using '.\sqlite3.obj' does not work, but thanks for the suggestion.
Found the issue - somewhere in my existing library path, there is a sqlite3.obj that is way old. I don't want to disturb my old old old AnyDAC libraries so I changed the {$L} directive in SynSQLite3Static.pas around line 340, but I'll have to keep an eye on it in the future.
{$ifdef MSWINDOWS}
  {$ifdef CPU64}
    {$L sqlite3.o} // compiled with C++ Builder 10.3 Community Edition bcc64
  {$else}
    {$L D:\delphi\mORMot\static\sqlite3.obj} // compiled with free Borland C++ Compiler 5.5
  {$endif}
Using Delphi Tokyo, Win32, trying to include SynSQLite3Static in my project but I'm getting the following errors:
[dcc32 Error] SynSQLite3Static.pas(787): E2065 Unsatisfied forward or external declaration: 'CodecGetReadKey'
[dcc32 Error] SynSQLite3Static.pas(788): E2065 Unsatisfied forward or external declaration: 'CodecGetWriteKey'
[dcc32 Error] SynSQLite3Static.pas(1026): E2065 Unsatisfied forward or external declaration: 'sqlite3_key'
[dcc32 Error] SynSQLite3Static.pas(1027): E2065 Unsatisfied forward or external declaration: 'sqlite3_rekey'
[dcc32 Error] SynSQLite3Static.pas(1036): E2065 Unsatisfied forward or external declaration: 'sqlite3_create_window_function'
[dcc32 Error] SynSQLite3Static.pas(1130): E2065 Unsatisfied forward or external declaration: 'sqlite3_serialize'
[dcc32 Error] SynSQLite3Static.pas(1132): E2065 Unsatisfied forward or external declaration: 'sqlite3_deserialize'
[dcc32 Error] SynSQLite3Static.pas(1138): E2065 Unsatisfied forward or external declaration: 'sqlite3_trace_v2'
[dcc32 Error] SynSQLite3Static.pas(1287): E2065 Unsatisfied forward or external declaration: 'sqlite3CodecAttach'
[dcc32 Error] SynSQLite3Static.pas(1287): E2065 Unsatisfied forward or external declaration: 'sqlite3CodecGetKey'
[dcc32 Error] SynSQLite3Static.pas(1287): E2065 Unsatisfied forward or external declaration: 'sqlite3_activate_see'
[dcc32 Fatal Error] BoletasLocalStorage.pas(13): F2063 Could not compile used unit 'SynSQLite3Static.pas'
My local mORMot directory is a fresh git clone from Github, and all paths are already in the search path. According to the documentation, "referring to SynSQLite3Static.pas in the uses clause of your project is enough to link the .obj/.o engine into your executable."
Do I need to add anything else to my project?
Yes, I'm slowly making progress.
I think people don't say this as often as we should - thank you for your incredible work, Arnaud.
Thanks, I had just found out by debugging line-by-line what the constructor needed. It was quicker than loading that file in my browser, let alone navigating it.
I'm having similar issues. What should I put in "Server" to connect to MySQL using ZEOS?
localhost:
Requested database driver was not found
mysql:localhost
Project SynDBExplorer.exe raised exception class EZSQLException with message 'SQL Error: Unknown database 'localhost' Code: 1049 Message: Connect to "localhost" as user "root"'.
mysql:127.0.0.1
Project SynDBExplorer.exe raised exception class EZSQLException with message 'SQL Error: Unknown database '127.0.0.1' Code: 1049 Message: Connect to "127.0.0.1" as user "root"'.
Of course, I can connect without problems with any other program (heidisql, my own anydac based programs, etc.)
Many people see for that and it's 2019, still not supported.
Please add.
I can see English is not your primary language, so here's a tip: your post is extremely rude.
So how should the ORM generate the above query for MS SQL?
I don't see the full query, but
SELECT DISTINCT TOP 10 ...
should work. It is worth noting that DISTINCT is applied first and then TOP (as it should be). Also, DISTINCT applies to all columns in the query, after grouping.
SQL Server's lack of "limit" support is not a limitation: MSSQL uses "select top 100 * from ..." instead of "select * from ... limit 100".
Perhaps just use plain records and dynamic arrays. Do not make something complicated, but define the simplest value objects possible, just holding the data values.
Make the naming match the reality, with explicit names: no abbreviations, and don't follow the SQL table field names.
Encapsulate all record/array type definitions in a single unit, with no link to SQL or whatever: they will be used for value transmission.
Then write another unit to retrieve the information from the databases, from SQL. You may identify some arrays (e.g. lookups) which may be reused by other structures: they may be used instead of a JOIN, to make the join at record/code level.
Then write the last unit taking the records as input, and generating a report (on screen or PDF) as output. You may end up creating some DDD-like data structures, which may be exposed as a service later on...
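To make this concrete, a tiny sketch of what such a value-object unit could look like (all names are invented for illustration):

unit ReportValues;

// plain value objects for the report: no SQL, no framework dependency
interface

uses
  SynCommons; // for RawUTF8

type
  // one row of customer data, named after its business meaning,
  // not after the legacy SQL column names
  TCustomerSales = packed record
    CustomerName: RawUTF8;
    BranchOffice: RawUTF8;
    TotalAmount: currency;
  end;
  TCustomerSalesDynArray = array of TCustomerSales;

implementation

end.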
This is what I want to do. I want to avoid SQL because I see this project as a learning opportunity (and I've been using SQL for over twenty years, so there's not much point in keeping at it.)
Hi,
I have to write a program to create a report based on data from three different data sources. I could go ahead and do the easy thing, but I want to seize the opportunity to learn more about mORMot.
Some facts about the program:
The three databases are quite old, and there is little to no normalization
They are all SQL Server databases, but I really don't want to limit my solution to that.
I have to get data from different tables (JOINs) but I wonder if there is a mORMotish way to do that.
Any tips? How should I create my data objects? One for each table, or is there a way to create compound objects from different tables in the database?
What other catches should I be aware of?
I'm loving this thread!
This is the most important topic for mORMot to really take off, in my opinion.
Cool!
Just curious: do you see a steady income from this kind of book? Have you considered releasing the book for free to gain traction, and then trying to earn something doing consulting and such?
@leus, because I tried.
What did you try? You clone a repository, make your local changes (including "git rm" and "git add" operations), then commit to your local branch. Then you open a pull request, and if it is approved, all changes (including added and deleted files) get merged in.
If changing contents inside a folder wasn't allowed, git wouldn't be very useful, don't you think?
Yes, but I cannot rename/drop folders inside my folder, and I don't know what other constraints there are.
Sure you can. Why do you think you can't do that?
It is just written everywhere in the documentation and the samples...
Search CreateMissingTables in https://synopse.info/files/html/Synopse … 01.18.html
Well, I did:
Create all missing tables, via the CreateMissingTables method - and not compute by hand a "CREATE TABLE IF NOT EXISTS..." SQL statement;
This is precisely what I don't want to do.
As you can see, there is no difference with using the local SQLite3 engine or a remote database engine.
From the Client point of view, you just call the usual RESTful CRUD methods, i.e. Add() Retrieve() Update() UnLock() Delete() - or their faster Batch*() revision - and you can even handle advanced methods like a FillPrepare with a complex WHERE clause, or CreateSQLMultiIndex / CreateMissingTables on the server side.
Even the creation of the table in the remote database (the 'CREATE TABLE...' SQL statement) is performed by the framework when the CreateMissingTables method is called, with the appropriate column properties according to the database expectations (e.g. a TEXT for SQLite3 will be a NVARCHAR2 field for Oracle).
Again, this sounds dangerously close to modifying the existing database.
Using this class will include the CreateMissingTables call to create both AuthGroup and AuthUser tables needed for authentication. But the resulting executable will be lighter: only 200 KB when compiled with Delphi 7 and our LVCL classes, for a full service provider.
There we go again, creating tables (perhaps I'm reading it wrong, and it means tables on the client side?)
Also, in the API reference:
procedure CreateMissingTables(user_version: cardinal=0; Options: TSQLInitializeTableOptions=[]); override;
Missing tables are created if they don't exist yet for every TSQLRecord class of the Database Model
- you must call this explicitly before having called StaticDataCreate()
- all table description (even Unique feature) is retrieved from the Model
- this method also creates additional fields, if the TSQLRecord definition has been modified; only field adding is available, field renaming or field deleting are not allowed in the framework (in such cases, you must create a new TSQLRecord type)
Still doesn't dispel my fear of it trying to modify the underlying database.
That's everything I could find in the document you linked regarding this function, and in the API reference. In fact, I had already read it. Perhaps I just don't understand it?
Arnaud, I don't mean to bash your documentation, but it is huge. Personally, I don't learn that way - I cannot consume a whole book and suddenly become an expert on the subject matter. I'm a slow learner, and I need to bite off small pieces to understand what I'm doing. The function name itself (CreateMissingTables) tells me that it wants to modify my database. The documentation doesn't state otherwise. The code, while well written, is extremely complicated and difficult to understand (as it should be; it is doing some amazing magic.) This is a perfect storm of documentation to be drowned in.
And some constructive criticism: RTFM is not always the best response, especially when the person asking the question shows that they are trying hard.
Anyways, I'll keep trying. Thanks for your awesome work!