@flydev, that sounds very interesting, can't wait to see what you have to offer!
I agree with MPV: if you plan to continually improve it, you might want to make it a standalone repo on GitHub, so that others can submit PRs.
Thanks @edwinsn
but I wanted to understand how it works.
There are different parts in the framework - ORM, web service and so on.
Take the ORM as an example: if you know how published properties work in Delphi, how JSON works, and know SQLite well enough (including basic db concepts and SQLite virtual tables), you can figure out how it works.
Actually I know very little about the framework - there are a bunch of modules that I've not used yet. That being said, given that v2's examples and documentation are still lacking, I'd suggest you learn from v1 - I believe everything is there for you to understand it: the full set of examples, the documentation and the source code.
I believe v2 is very similar to v1 in terms of concepts and architecture, except that the code organization, the class names and some of the implementation have changed, and a bunch of new features have been added - off the top of my head, I know there is a new QuickJS-based engine, LDAP, X.509, OpenSSL 3.x support, and so on. BTW, I think ab should include a "What's new in mORMot 2" section on the GitHub home page.
Don't be afraid, my friend. Let me share my real-life experience with mORMot 1 (even from before Stephan Bester wrote his excellent blog posts - not complete BTW, but still excellent). I have had success with mORMot 1 in several projects:
- I started with the examples.
- From time to time I encountered various questions/issues and all of them were addressed by one of the following:
- Check the documentation - you don't have to go through it in one sitting because it's HUGE, just find the info you need.
- Ask a question here - ab, the developer, and other users are super helpful and active.
- Submit an issue, and ab usually solves it within a day or a few days depending on the severity.
- And most of the time, check the source code and you'll get your answer.
One might have a steep learning curve with mORMot, and one might not like the coding style, but it's DEFINITELY worth it because it's so feature-rich, stable, fast and fully documented (the last point applies to v1 only, thus far, but I believe ab will fix it)! And the developer ab is so passionate and reliable, and the forum users are so active and helpful!
I hope you enjoyed your holidays
Mustache is great, especially your Pascal implementation - it's fast.
I think the difference is that Mustache is logic-less, while the Sempare template engine has features like for loops, which is sometimes a must-have, for example when the logic must be written outside of Pascal/Delphi code.
An alternative could be to backup the database in the background using the corresponding mORMot command: a backed-up database is already VACUUMed, IIRC.
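For example, something along these lines (a minimal sketch, assuming a TSQLRestServerDB instance and the TSQLDataBase.BackupBackground method from SynSQLite3.pas - please check the exact signature in your revision; the file name and step parameters are only examples):

procedure BackupInsteadOfVacuum(aServer: TSQLRestServerDB);
begin
  // copy 4096 pages per step, sleeping 10 ms between steps, no progress callback;
  // the backup copy is written defragmented, similar to a VACUUMed database
  aServer.DB.BackupBackground('backup.db3', 4096, 10, nil);
end;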
Strange, backup works, but the file size is the same...
In SQLiteStudio you can define your own collation (Tools - Open collations editor) and then open the DB as usual.
https://i.ibb.co/crSzwwb/1678775092.png
For the SYSTEMNOCASE you can try this snippet:
function system_nocase(a, b) {
    a = a.toUpperCase();
    b = b.toUpperCase();
    return (a < b ? -1 : (a > b ? 1 : 0));
}
Wow, didn't know about this before! Thanks zed!
Trying some selects on mORMot-created tables, SQLite Administrator gives me this error message:
SQL Error: no such collation sequence: SYSTEMNOCASE.
I think I should change my tool. Which free SQLite admin tool do you recommend?
There is a solution here: https://synopse.info/forum/viewtopic.ph … 320#p39320
I found a workaround for querying table rows that use mORMot-specific collations from external tools - SQLite supports the following SQL syntax:
select id, name collate nocase, datecreated from table1
In other words, append 'collate nocase' to each field that contains a non-standard collation. It might be a bit of a hassle, but it allows me to take advantage of mORMot's great features while still using a productive SQLite GUI like SQLiteStudio - what a great world!
If VACUUM on SynDBExplorer didn't change the disk file size, it means that it was not worth it.
Even after deleting a lot of records, it is likely that SQLite3 would manage to not waste any space.
But in this case, I'm sure a lot of records have been deleted and a lot of space should be freed by VACUUM.
Thanks for the suggestion, it seems there is no easier alternative...
Thanks ab.
So I tried executing VACUUM with SynDBExplorer, and the result was similar - it took a long time (over 15 minutes, I guess) and returned, but checking the db disk file, the size didn't change...
Maybe I'll have to try the backup API...
Yes, I use VACUUM only after deleting a lot of records.
@ab,
Is there a way to change all the collations in a SQLite database? Collations like WIN32CASE created by mORMot are preventing me from using highly productive database GUI tools like SQLiteStudio, so I want to change all the mORMot-specific collations to standard SQLite collations. What would you advise? Thanks.
Update 1: I assume TSQLRecordProperties.SetCustomCollation and TSQLRecordProperties.SetCustomCollationForAll apply to newly created records only, not to existing ones - is that correct?
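For context, this is roughly what I was planning to try (a sketch only; TSQLMyRecord is a placeholder for my actual record class, and my assumption is that it only influences how new tables are created):

// placeholder class name TSQLMyRecord; call this before the server creates its
// tables, so text fields are declared with the standard NOCASE collation
TSQLMyRecord.RecordProps.SetCustomCollationForAll(sftUTF8Text, 'NOCASE');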
Update 1: I was trying to see if the auto_vacuum pragma could get around the hanging issue, but it turns out the pragma cannot be changed on an existing db with existing tables: https://www.sqlite.org/pragma.html#pragma_auto_vacuum
Update 2: Is it possible that the custom collations used by mORMot are affecting this issue? When I try to execute 'VACUUM' with SQLiteStudio, I get the following error (we usually see this when we try to modify a database created by mORMot with an external tool):
Error while executing SQL query on database 'db1.sqlitedb': no such collation sequence: SYSTEMNOCASE
Hi ab,
I have a situation and I'm wondering if you can give me some advice.
I have an interface-based service (IMyDbApi) with a method `VacuumDb` which executes `vacuum` on the SQLite db. The db file is around 10 GB, and the vacuum operation takes a very long time until a "(501 - Server not reachable or broken connection)" error is raised:
exception class : EInterfaceFactoryException
exception message : TInterfacedObjectFakeClient.FakeCall(IMyDbApi.VacuumDb) failed: 'URI root/IMyDbApi.VacuumDb/8308 [] returned status 'Not Implemented' (501 - Server not reachable or broken connection)'.
main thread ($1f4c):
00a14386 MyProgram1.exe mORMot 53552 RaiseError
00a14713 MyProgram1.exe mORMot 53638 InternalProcess
00a14ac7 MyProgram1.exe mORMot 53707 TInterfacedObjectFake.FakeCall
010dd674 MyProgram1.exe BaseAccountEditFormU 245 TfrmBaseAccountEdit.TryDeleteAccount
Where do you think the issue is? Is there some kind of locking happening? The service method is quite simple:
function TMyDbApi.VacuumDb: Boolean; // TMyDbApi: the class implementing IMyDbApi (name assumed)
var
  mySvr: TSQLRestServer;
begin
  Result := False;
  mySvr := self.fServer;
  if mySvr.TransactionActiveSession <> 0 then
    Exit; // don't vacuum while a transaction is active
  Result := mySvr.Execute('vacuum'); // note: vacuum is slow!
end;
Thanks for sharing. When not using the ORM but the database layer directly, I also increase the value of BusyTimeout.
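For example (a sketch, assuming FDb is a TSQLDBSQLite3ConnectionProperties instance, that TSQLDBSQLite3Connection exposes its TSQLDataBase as DB, and that BusyTimeout is in milliseconds; the value is arbitrary):

// raise the SQLite busy timeout, so concurrent writers wait instead of failing
// immediately with SQLITE_BUSY (FDb and the 5000 ms value are assumptions)
(FDb.MainConnection as TSQLDBSQLite3Connection).DB.BusyTimeout := 5000;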
...Currently I can't use sqlite3 on real productions where long transactions are commonplace (because of sqlite3_busy error)...
Would you share with us some pseudo code showing how you deal with the sqlite3_busy error?
I guess you use a retrying mechanism?
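Something along these lines is what I have in mind (a rough, untested sketch; ExecSQLReturningCode is a hypothetical helper that returns the raw SQLite result code, SleepHiRes comes from SynCommons and the SQLITE_* constants from SynSQLite3):

// hypothetical helper ExecSQLReturningCode returns SQLITE_OK/SQLITE_DONE/SQLITE_BUSY/...
function ExecWithRetry(const aSQL: RawUTF8; aMaxAttempts: Integer): Boolean;
var
  attempt, rc: Integer;
begin
  Result := False;
  for attempt := 1 to aMaxAttempts do
  begin
    rc := ExecSQLReturningCode(aSQL);
    if rc <> SQLITE_BUSY then
    begin
      Result := (rc = SQLITE_OK) or (rc = SQLITE_DONE) or (rc = SQLITE_ROW);
      Exit;
    end;
    SleepHiRes(100 * attempt); // simple linear backoff before retrying
  end;
end;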
Great!
But note that this hctree database is a new DB format. It can open SQLite3 databases, but it is a new file layout.
I don't think it's an issue as of now...
Good news from the SQLite team:
Real high-concurrency & replication is coming to SQLite!
@ab,
Since you already marked mORMot 1 to be in maintenance mode, will you keep updating the SQLite engine for mORMot 1 in the future?
ab wrote:So consider defining NOSQLITE3STATIC conditional for your project on Win64, and use the external .dll.
I can see the sqlite3-64.dll only for win64. Is there a sqlite3.dll for win32 as well?
You are supposed to use static linking for Win32.
The json format is exactly the same between mormot 1 and 2.
@ab, can we say "mORMot 1 and 2 are binary-compatible"?
The problem is what you do with the returned error code.
As I said, the framework doesn't have to worry about how users use the returned error code, but the framework does need to return the error code after executing a SQL statement, doesn't it?
In my case, I want to check whether my 'DELETE FROM' statement was a success or not; if SQLite returned the "busy" code, I can initiate retries, and so on. But to reiterate - the framework doesn't have to worry about this, it just has to return the error code, via another field, via the function's return value, or by any other means you think appropriate.
So it won't help you switching to other DB easily.
It does help. If I keep using ISQLDBStatement, then to switch to another db all I need to do is change the `CheckStepCode` function; but if I use TSQLRequest, I'll have to change all the places where TSQLRequest is used. I think this is quite obvious.
I hope it makes sense.
The value returned by SQLite is of course SQLite-specific, but 'returning a result code after executing a SQL statement' is not.
For example, in TSQLDBOracleStatement.ExecutePrepared, StmtExecute returns an integer value.
For example, in TSQLDBFirebirdStatement, isc_dsql_execute also returns an integer value.
My point is that ExecutePrepared, no matter what the backend db is, should return a result code; the framework doesn't have to consider or worry about what the value means.
After checking out TSqlRequest, I think I won't use it, since it's SQLite-specific and won't allow me to switch to other database systems easily.
So I will keep using my current modification of `ISQLDBStatement`; if the PR mentioned above is not accepted, I can keep my local changes - it's a small modification and I can maintain it.
Edit:
I looked again at the source code. TSqlRequest.Execute will raise an exception on any error, i.e. if result <> SQLITE_ROW.
So I don't understand in which case you may have something interesting as result...
I'm not sure if I understand it correctly, but if the SQL being executed is something like "delete from table1", SQLite's Step function will return SQLITE_DONE, not SQLITE_ROW, and I have a `CheckStepCode` function like the following to check whether the "step" was a success or not:
function CheckStepCode(const aStatement: ISQLDBStatement): Boolean;
begin
  Result := not (aStatement.LastStepReturnedValue in SQLITE_ERRORS);
end;
I have a lot of code like the following that calls `CheckStepCode`:
var
  stmt: ISQLDBStatement;
begin
  stmt := FDb.NewThreadSafeStatementPrepared('delete from table1 where field1 = ?', False);
  stmt.Bind(1, aDoc.DocId);
  stmt.Step;
  Result := CheckStepCode(stmt);
end;
Thanks ab, I'll check TSqlRequest and feedback later.
It seems that I'm mixing up ISQLDBStatement.ExecutePrepared with ISQLDBRows.Step? Maybe I should use the former instead?
I guess so... So I went ahead and checked the implementation of 'TSQLDBSQLite3Statement.ExecutePrepared', and it has the same issue - it doesn't take into account result codes other than SQLITE_ROW:
try // INSERT/UPDATE/DELETE (i.e. not SELECT) -> try to execute directly now
  repeat // Execute all steps of the first statement
  until fStatement.Step<>SQLITE_ROW;
  fUpdateCount := DB.LastChangeCount;
finally
  ...
end;
So it seems that my proposed pull request should have been done in TSQLDBSQLite3Statement.ExecutePrepared?
More thoughts on the current design:
- Semantically, it only makes sense to check against SQLITE_ROW for ISQLDBRows.Step, but for ISQLDBStatement.Step, the term "statement" has a broader meaning (it can be any CRUD statement), so the return value should be an Integer.
- Semantically, ISQLDBRows should inherit from ISQLDBStatement, not the other way around as in the current design, because the term "statement" is more "low level" - it can be any CRUD statement; ISQLDBRows, on the other hand, semantically corresponds to a SELECT statement. In other words, the interface hierarchy should have been:
ISQLDBRows = interface(ISQLDBStatement)
Instead of the current design:
ISQLDBStatement = interface(ISQLDBRows)
Of course I'm not asking you to change the design - I can live with it. But of course I'd appreciate it if you accepted my GitHub pull request above
@ab,
I've added ISQLDBStatement.LastStepReturnedValue, without affecting any previously existing functions,
github PR: https://github.com/synopse/mORMot/pull/435
Update 1: I just checked `TSQLDBConnection.NewThreadSafeStatementPrepared`; it seems that caching of the statement objects is already taken care of, so calling NewThreadSafeStatementPrepared is already close to optimal, and there is no need for something like `Reset` as I suggested in the OP?
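In other words, if the statement cache works the way I think it does, re-acquiring the prepared statement inside the loop should already be cheap enough (a sketch reusing the names from my earlier snippet, not benchmarked):

var
  i: Integer;
  stmt: ISQLDBStatement;
begin
  for i := 0 to 10 do
  begin
    // served from the internal statement cache after the first call (my assumption)
    stmt := FDatabase.NewThreadSafeStatementPrepared('update ... where id=?', False);
    stmt.Bind(1, i);      // SynDB parameter indexes start at 1
    stmt.ExecutePrepared; // for non-SELECT statements
  end;
end;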
Imagine situations like executing `PRAGMA ...`: the result is SQLITE_DONE in case of a successful call, but inside `TSQLDBSQLite3Statement.Step`, only SQLITE_ROW is checked:
result := fStatement.Step=SQLITE_ROW;
So there is no way to check the real return value.
I suggest adding a new ISQLDBStatement.LastStepReturnValue: Integer;
@ab,
In situations like a loop, can an `ISQLDBStatement` be reused? Or do I have to get a new instance via `NewThreadSafeStatementPrepared` before calling `Bind*` and `Step` in each iteration?
By 'in a loop' I mean something like the following:
var
  i: Integer;
  stmt: ISQLDBStatement;
begin
  stmt := FDatabase.NewThreadSafeStatementPrepared('update...', False);
  for i := 0 to 10 do
  begin
    stmt.Bind(1, i); // SynDB parameter indexes start at 1
    stmt.Step;
    stmt.Reset; // possible?
  end;
end;
Does mORMot have something like a Reset method for ISQLDBStatement? Or do I have to call `NewThreadSafeStatementPrepared` to get a new instance in the for loop?
@ab,
I think ISQLDBStatement.Step should return an Integer value instead of a Boolean?
Otherwise, in the case of "update/delete" statements, how do we know whether the returned value is SQLITE_OK, SQLITE_ERROR or SQLITE_BUSY?
I'm talking about mORMot 1, not sure about mORMot 2
Note: I know there is the `UpdateCount` function, but sometimes the update count is legitimately zero - all the caller wants to know is whether the SQL was executed successfully by the engine.
OK, I was out reading the docs without a computer to test on when I posted this question.
Now I've just tested it: after calling TSQLDBSQLite3ConnectionProperties.MainConnection.Connect, the db file is created automatically.
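For the record, this is roughly what I tested (the file path is only an example; I pass the db file name as the first constructor parameter):

var
  props: TSQLDBSQLite3ConnectionProperties;
begin
  props := TSQLDBSQLite3ConnectionProperties.Create('c:\data\test.db3', '', '', '');
  try
    props.MainConnection.Connect; // the db file gets created here if it does not exist
  finally
    props.Free;
  end;
end;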
I've just started using direct SQLite db access (with TSQLDBSQLite3ConnectionProperties, TSQLDBSQLite3Connection and so on), without using the ORM layer.
If the db file does not exist yet, does TSQLDBSQLite3ConnectionProperties.Create auto-create it?
If not, how do I create the db file?
Thanks.
So the units are included twice?
No, just once, in the build-first package. As we know, a unit can only be included in one package.
So I found a solution for this issue.
According to the documentation: E2201 Need imported data reference ($G) to access '%s' from unit '%s' (Delphi)
The solution is to add the units that are missing the imported data reference to the build-first package (the one with the {$IMPORTEDDATA ON} directive).
For example, the following error is what I got today:
[dcc32 Error] E2201 Need imported data reference ($G) to access 'crcblock' from unit 'SynCrypto'
[dcc32 Error] E2201 Need imported data reference ($G) to access 'FillcharFast' from unit 'mORMot'
The solution was to add both SynCrypto.pas and mORMot.pas to my package that's supposed to be built and installed first.
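For reference, the build-first package source now looks roughly like this (the package name and paths are placeholders; adjust the requires clause to your project):

// roughly what the build-first package source looks like (names/paths are placeholders)
package MyMormotBasePackage;

{$IMPORTEDDATA ON} // the $G switch that E2201 complains about

requires
  rtl;

contains
  SynCrypto in '..\mORMot\SynCrypto.pas',
  mORMot in '..\mORMot\mORMot.pas';

end.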
Thanks, it works now.
Sometimes we have to bear with the Delphi compiler bugs/limitations, unfortunately...
I always use Google search instead of the search function provided by **any** forum software - how could forum software beat the big G in terms of text search?
An interesting discussion about an open source project for replicating SQLite:
LiteFS a FUSE-based file system for replicating SQLite (https://github.com/superfly/litefs)
Maybe put all the code in a github repository?
I know TDocVariant.NewFast is faster, but I'm still not very sure about when to use TDocVariant.NewFast or TDocVariant.New.
From the manual:
- by default, every internal value will be copied, so access of nested properties can be slow - if you expect the data to be read-only or not propagated into another place, set Options=[dvoValueCopiedByReference] will increase the process speed a lot
- in practice, you should better use _Obj()/_ObjFast() _Arr()/_ArrFast() functions or TDocVariant.NewFast()
The obscure part is this:
if you expect the data to be read-only or not propagated into another place...
My json is not read-only, but what does "not propagated into another place" mean?
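In the meantime, here are the two creation calls side by side, as I currently understand the defaults (just a sketch - please correct me if I'm wrong):

var
  v1, v2: Variant;
begin
  TDocVariant.New(v1);     // default options: nested values are copied by value
  TDocVariant.NewFast(v2); // includes dvoValueCopiedByReference -> faster nested access
end;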
Thanks.
Oh I see.
masterJson.MyArray := arrayVar;
is equivalent to:
_Safe(masterJson).Value['MyArray'] := arrayVar;
I read the docs again and experimented some more, and found that there are actually several ways to add an array of json objects to another json object.
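For example, a more compact construction like this also seems to work (a sketch using the _ObjFast/_ArrFast functions from SynCommons, with the values from my original question):

var
  master: Variant;
begin
  master := _ObjFast(['MyItems', _ArrFast([
    _ObjFast(['FileName', '111.txt', 'Position', 0]),
    _ObjFast(['FileName', 'abc.exe', 'Position', 1])])]);
  ShowMessage(JSONReformat(VariantSaveJSON(master)));
end;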
Thanks!
@ab,
I'm constructing a json object that consists of an array of json objects, like the following:
{
  "MyItems":
  [
    {
      "FileName": "111.txt",
      "Position": 0
    },
    {
      "FileName": "abc.exe",
      "Position": 1
    }
  ]
}
Here is my test code; the two questions are written in the comments:
procedure TForm4.btnTestAddArrayClick(Sender: TObject);
var
  i: Integer;
  itemVar: Variant;
  arrayVar: Variant;
  masterJson: Variant;
begin
  TDocVariant.New(arrayVar);
  TDocVariant.New(itemVar);
  TDocVariant.New(masterJson);
  for i := 0 to 1 do
  begin
    itemVar.FileName := 'file' + IntToStr(i);
    itemVar.Position := i;
    _Safe(arrayVar).AddItem(itemVar); // is there a better method for adding array item here?
  end;
  // the following two calls will cause 'Variant method calls not supported.' error, why?
  // masterJson.AddOrUpdateValue('ProjectItems', arrayVar);
  // masterJson.Value['ProjectItems'] := arrayVar;
  _Safe(masterJson).Value['MyArray'] := arrayVar; // ok
  ShowMessage(JSONReformat(VariantSaveJSON(masterJson)));
end;
Thanks for your help in advance!
That's how a great library gets popularized. Well done, tbo!
Is it about mORMot 1 or mORMot 2?
Great job and congratulations!
no problem, just providing the info.