The default MM keeps the memory allocated and does not release it back to the OS, as far as I know.
You are right.
Thank you for the notes.
Your guess about FillZero is right.
Are you testing on Windows 11 and seeing different results?
In theory the MM could allocate big blocks (128 MB) and release them back to the OS once all of their sub-blocks have been freed. But covering the general case could make that a complicated task.
That can be done, but it complicates the code quite a bit and makes it harder to read. Preferably it should be done by the MM.
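For illustration, here is a minimal sketch of that arena idea (this is not the fpcx64mm implementation; GetMem/FreeMem stand in for OS calls such as VirtualAlloc/VirtualFree, and all names are made up):

```pascal
program ArenaSketch;
{$mode objfpc}{$H+}

// Carve sub-blocks out of one big arena with a bump pointer, and give
// the whole arena back only when every sub-block has been freed.
// A real MM would also need thread safety, alignment, per-arena free
// lists, and a way to map a freed pointer back to its arena.

const
  ARENA_SIZE = 128 * 1024 * 1024; // 128 MB, as discussed above

type
  TArena = record
    Base: Pointer;
    Offset: PtrUInt;  // bump-pointer allocation position
    Live: Integer;    // number of sub-blocks still allocated
  end;

procedure ArenaInit(var A: TArena);
begin
  GetMem(A.Base, ARENA_SIZE); // stand-in for e.g. VirtualAlloc
  A.Offset := 0;
  A.Live := 0;
end;

function ArenaAlloc(var A: TArena; Size: PtrUInt): Pointer;
begin
  if A.Offset + Size > ARENA_SIZE then
    Exit(nil); // a real MM would open a new arena here
  Result := Pointer(PtrUInt(A.Base) + A.Offset);
  Inc(A.Offset, Size);
  Inc(A.Live);
end;

procedure ArenaFree(var A: TArena);
begin
  Dec(A.Live);
  if A.Live = 0 then
  begin
    // last sub-block gone: release the whole 128 MB in one OS call
    FreeMem(A.Base);
    A.Base := nil;
  end;
end;

var
  A: TArena;
  p1, p2: Pointer;
begin
  ArenaInit(A);
  p1 := ArenaAlloc(A, 1024);
  p2 := ArenaAlloc(A, 2048);
  ArenaFree(A);
  ArenaFree(A);
  if A.Base = nil then
    WriteLn('arena released');
end.
```

The appeal of this scheme is exactly what the discussion above suggests: freeing becomes one big OS call per arena instead of thousands of small ones.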
I made a test to verify it: https://gitlab.com/-/snippets/4771697
On my Windows 10 machine with FPC, blocks of 128 MB are much faster, but still much slower than FreeSingle on Linux (near 100 ms).
Here are the times on Windows:
32 GB, 1 MB blocks: 32768
  Allocate:   32 GB in 4.62s i.e. 6.9 GB/s
  FreeSingle: 32 GB in 2.57s i.e. 12.4 GB/s
  Allocate:   32 GB in 4.66s i.e. 6.8 GB/s
  FreeMulti:  32 GB in 2.87s i.e. 11.1 GB/s

32 GB, 4 MB blocks: 8192
  Allocate:   32 GB in 4.32s i.e. 7.4 GB/s
  FreeSingle: 32 GB in 2.49s i.e. 12.8 GB/s
  Allocate:   32 GB in 4.17s i.e. 7.6 GB/s
  FreeMulti:  32 GB in 1.41s i.e. 22.6 GB/s

32 GB, 64 MB blocks: 512
  Allocate:   32 GB in 4.15s i.e. 7.7 GB/s
  FreeSingle: 32 GB in 2.35s i.e. 13.5 GB/s
  Allocate:   32 GB in 3.89s i.e. 8.2 GB/s
  FreeMulti:  32 GB in 561.76ms i.e. 56.9 GB/s

32 GB, 128 MB blocks: 256
  Allocate:   32 GB in 3.96s i.e. 8.0 GB/s
  FreeSingle: 32 GB in 2.40s i.e. 13.3 GB/s
  Allocate:   32 GB in 3.85s i.e. 8.2 GB/s
  FreeMulti:  32 GB in 514.55ms i.e. 62.1 GB/s

32 GB, 256 MB blocks: 128
  Allocate:   32 GB in 4.07s i.e. 7.8 GB/s
  FreeSingle: 32 GB in 2.42s i.e. 13.1 GB/s
  Allocate:   32 GB in 3.93s i.e. 8.1 GB/s
  FreeMulti:  32 GB in 599.45ms i.e. 53.3 GB/s

32 GB, 1 GB blocks: 32
  Allocate:   32 GB in 4.04s i.e. 7.9 GB/s
  FreeSingle: 32 GB in 2.41s i.e. 13.2 GB/s
  Allocate:   32 GB in 3.96s i.e. 8.0 GB/s
  FreeMulti:  32 GB in 1.36s i.e. 23.4 GB/s
What do you think?
One other question about fpcx64mm: why does it allocate much more Private Bytes compared to the default MM?
Here are the numbers I get:
Default: Private Bytes: 32GB, Peak Working Set: 32GB
fpcx64mm: Private Bytes: 48GB, Peak Working Set: 32GB
Yes! My test is done with fpcx64mm. The default memory manager's allocation time is much slower. fpcx64mm allocation is very fast, but freeing this much memory still takes time, and I want to speed it up.
I just tested on Linux and it takes 100 ms to free, but on Windows nearly 4 seconds.
@ab yes. They are a couple of megabytes each. I hoped there was a way to speed it up that you may be aware of.
One program is allocating huge blocks of memory (32 GB in total), and I want to try freeing them from multiple threads to bring the time down from nearly 4 seconds.
But in my attempts, freeing seems to be completely serial.
FPC's default memory manager does not free it until the program closes; instead it keeps the memory for reuse.
Can it be done?
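For reference, a stripped-down version of what such a multi-threaded free could look like (a sketch only, with block count and size scaled well down from the 32 GB test; the names here are made up, not taken from the snippet):

```pascal
program FreeMultiSketch;
{$mode objfpc}{$H+}

// Split the allocated blocks into slices and let each worker thread
// call FreeMem on its own slice. The FPC heap (and fpcx64mm) is
// thread-safe, so concurrent FreeMem calls are allowed; whether they
// actually run in parallel depends on the MM's internal locking.

uses
  {$ifdef unix}cthreads,{$endif} Classes, SysUtils;

const
  BLOCK_COUNT = 64;
  BLOCK_SIZE  = 1024 * 1024; // 1 MB per block for this sketch
  THREADS     = 4;

var
  Blocks: array[0 .. BLOCK_COUNT - 1] of Pointer;

type
  TFreeThread = class(TThread)
  private
    fFirst, fLast: Integer; // slice of Blocks[] owned by this worker
  public
    constructor Create(First, Last: Integer);
    procedure Execute; override;
  end;

constructor TFreeThread.Create(First, Last: Integer);
begin
  fFirst := First;
  fLast := Last;
  inherited Create({CreateSuspended=}False);
end;

procedure TFreeThread.Execute;
var
  i: Integer;
begin
  for i := fFirst to fLast do
  begin
    FreeMem(Blocks[i]);
    Blocks[i] := nil;
  end;
end;

var
  i, per: Integer;
  w: array[0 .. THREADS - 1] of TFreeThread;
begin
  for i := 0 to BLOCK_COUNT - 1 do
  begin
    GetMem(Blocks[i], BLOCK_SIZE);
    FillChar(Blocks[i]^, BLOCK_SIZE, 0); // touch the pages for real
  end;
  per := BLOCK_COUNT div THREADS;
  for i := 0 to THREADS - 1 do
    w[i] := TFreeThread.Create(i * per, i * per + per - 1);
  for i := 0 to THREADS - 1 do
  begin
    w[i].WaitFor;
    w[i].Free;
  end;
  WriteLn('all blocks freed');
end.
```

If the MM serializes large-block release behind one lock (or behind the OS), this will show exactly the "completely linear" behavior described above, regardless of the thread count.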
I am working with FPC and on Windows.
I suggest adding many examples instead.
There are good ones in the "ThirdPartyDemos" directory that teach the basics step by step, but they are about server topics, and mORMot is much more than that.
As for test coverage, I would suggest increasing the examples' coverage, to teach newcomers (or old hands who don't know a topic like the RTTI units) how to use it.
These will also help people find features: many features are buried in the huge units, and if you don't know they exist, you will not find them unless you read the whole repo or see them used somewhere else. The examples can play a helpful role here.
@ab, did you forget to post the mORMot 2.3 stable release announcement on this forum? I saw it on the Lazarus forum by chance, but not here.
This post may be interesting: https://justine.lol/mutex/
It is based on https://github.com/google/nsync
No reason to change for mORMot, I think. But interesting nevertheless.
As I said, I like to query the list using an SQLite VTab, and I cannot change the algorithm of the VTab, if that is what you mean. SQLite's xBestIndex asks for an ordered result to speed up ORDER BY or GROUP BY, and that is the reason I asked the question.
I have a list of log values (which I cannot insert into SQLite: they are always changing, and inserting takes too much time), and I want to query them using an SQLite VTab, but as you know a VTab cannot use an SQLite index. So I thought about sorting them for xFilter, to speed up ORDER BY and GROUP BY. But for 100 million values, QuickSortInteger takes 8 s, and string values take even longer.
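One direction I am considering for the sort time: sorting the halves of the array in parallel threads and merging afterwards. Below is a minimal two-thread sketch of the idea (not mORMot code; all names made up, and a real version would recurse over more threads):

```pascal
program ParallelSortSketch;
{$mode objfpc}{$H+}

// Sort two halves of an integer array in two threads, then do a
// single merge pass. For 100M values the sort phase would roughly
// halve, at the cost of one extra O(n) merge and a temp buffer.

uses
  {$ifdef unix}cthreads,{$endif} Classes, SysUtils;

type
  TIntArray = array of Integer;
  PIntArray = ^TIntArray;

procedure QuickSort(var A: TIntArray; L, R: Integer);
var
  i, j, p, t: Integer;
begin
  i := L; j := R; p := A[(L + R) div 2];
  repeat
    while A[i] < p do Inc(i);
    while A[j] > p do Dec(j);
    if i <= j then
    begin
      t := A[i]; A[i] := A[j]; A[j] := t;
      Inc(i); Dec(j);
    end;
  until i > j;
  if L < j then QuickSort(A, L, j);
  if i < R then QuickSort(A, i, R);
end;

type
  TSortThread = class(TThread)
  private
    fData: PIntArray;
    fL, fR: Integer;
  public
    constructor Create(var Data: TIntArray; L, R: Integer);
    procedure Execute; override;
  end;

constructor TSortThread.Create(var Data: TIntArray; L, R: Integer);
begin
  fData := @Data; fL := L; fR := R;
  inherited Create(False);
end;

procedure TSortThread.Execute;
begin
  QuickSort(fData^, fL, fR); // disjoint ranges: no locking needed
end;

procedure Merge(var A: TIntArray; Mid: Integer);
var
  Tmp: TIntArray;
  i, j, k: Integer;
begin
  SetLength(Tmp, Length(A));
  i := 0; j := Mid + 1; k := 0;
  while (i <= Mid) and (j <= High(A)) do
  begin
    if A[i] <= A[j] then begin Tmp[k] := A[i]; Inc(i); end
    else begin Tmp[k] := A[j]; Inc(j); end;
    Inc(k);
  end;
  while i <= Mid do begin Tmp[k] := A[i]; Inc(i); Inc(k); end;
  while j <= High(A) do begin Tmp[k] := A[j]; Inc(j); Inc(k); end;
  A := Tmp;
end;

var
  A: TIntArray;
  i, mid: Integer;
  t1, t2: TSortThread;
begin
  SetLength(A, 1000);
  for i := 0 to High(A) do
    A[i] := Random(100000);
  mid := High(A) div 2;
  t1 := TSortThread.Create(A, 0, mid);
  t2 := TSortThread.Create(A, mid + 1, High(A));
  t1.WaitFor; t2.WaitFor;
  t1.Free; t2.Free;
  Merge(A, mid);
  for i := 1 to High(A) do
    if A[i - 1] > A[i] then
    begin
      WriteLn('not sorted');
      Halt(1);
    end;
  WriteLn('sorted');
end.
```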
In https://xxhash.com/ footer there is a list of clients.
I don't see any reason to change our AesNiHash with those XXH3_64 which are pretty much experimental and likely to be slower.
Why experimental?
I am looking for different ways of achieving this, and I was wondering if people here have any insight or code.
PasMP and OmniThread have samples for it, but I am hoping for other info.
Hi,
Does mORMot support ECH (https://blog.cloudflare.com/encrypted-client-hello) or have a plan for it?
Thank you very much. Your estimation seems solid.
Yes, a good chunk of the data is updated (hence the need to sync before letting the user know it is done), and most of the other requests would be read and not written (I should have been more clear).
You are right; there is no need to close the connections if the cache size is so low. I could keep them open indefinitely.
Having multiple servers answer write requests seems like the way to go, as if one goes down, the data will not be lost. And in this way, we can use queue and batch, as you said, to speed it up.
I assumed PostgreSQL does that, but I am not very familiar with it.
Here are the maximum estimates:
How many users? 1K/5K (User means separate database)
How many users at the same time? 2K/10K; for each database/user there would be 1 to 3 sub-users.
How many records per user per second? 1/2 per user (2K/10K for all users at max, in a second)
How much data per record? 1KB/8KB
So if each DB/user commits with a 2-second delay, there may be 10K to 20K records (at max) that can get lost if an OS crash happens.
I was thinking about relaying each record from the load balancer to two servers at the same time, so if one goes down, the other has a chance of committing.
Great advice. Yes, TRestBatch seems the way to go. I was, and am, mostly worried about returning success to the user and then losing their information in a crash, hence thinking about Full.
I should ask (as you may be one of the people who have used SQLite the most): compared to PostgreSQL, what is your go-to for servers with a lot of concurrent connections, or even for my case, with many databases for many users?
I looked at the benchmark (extdb-bench) tests you did, and SQLite is 4 to 5 times faster than PostgreSQL (900K/s vs 190K/s) when using Off with Batch and Transaction, but PostgreSQL is much faster (9.5K/s vs 190/s) if I want to use Full without a transaction (separate connections inserting one record at a time).
Thank you very much for the explanation. I will work on it more and see what I can do.
About SQLite smOff: I would lose data in case of an OS crash. Is there a better way?
Thank you for the helpful answer. Yes, it is in the wrong forum, but if I remember correctly, when I initially opened this topic, version 2 was in beta and there was no dedicated forum for it. If possible, please move it.
Those samples were life-saving; without them, I couldn't have found out what I should do. Thanks to Thomas and you for them. Martin's samples are great too.
I will look forward to the updates, but please create samples for them so a newbie like me knows how to use them. Your code is great and advanced; please don't assume that finding your way in it is a walk in the park. It is like being in a big library: too many great things to see and read, and it gets overwhelming.
Anyway, I tried my best to make it work. Can you please check it?
https://gitlab.com/-/snippets/3670020
Questions:
1- I didn't use interface-based services (SOA?) because I wanted to use the server with a browser client. Is that a right assumption?
2- I tried using TRestOrmServerDB, but it asks for a Rest instance; what should I give it? If a TRestServerDB, then why did you say I should use TRestOrmServerDB? I tried passing nil for the Rest parameter, but TRestOrm.InternalAdd raises an error because it needs fRest.
3- Is there a better way to handle caching?
4- If I don't set LockingMode and Synchronous, the speed is very low. LockingMode is fine, but setting Synchronous to smOff would be dangerous, I guess. Can I ask what you would suggest in production? I can use PostgreSQL too, but I guessed SQLite would be faster and less hassle.
5- What is the best way to handle locking when multiple connections come in and want to use one TRestServerDB/DB?
I am puzzled by the names of the classes.
What I understand is:
TRestServerDB is a REST server that has a DB.
TRestOrmServerDB should be a REST server that has a DB and ORM support, but it is actually the ORM of TRestServerDB, right?
And when should I use TRestServerDB or TRestOrmServerDB? I cannot understand their usage, even after reading the readme files.
And where should I create the TRestXDB if I am going to have multiple ones, one per user? In OnRequest of the HTTP server?
I know mORMot2 should be easier compared to 1, but I still get lost.
After a while, I am wandering around in the ORM part of mORMot (always funny that you call it little), looking for a way to implement that routing to multiple database files. So I guessed it was worth asking; maybe you did something for this in past years.
After a couple of years, I came back to this case again. Last time the project got canceled.
I am curious, @ab, did you make the cache mechanism or similar?
Honestly, I expected an even more powerful one. The mORMot results are fantastic.
Did you configure the server machine for such high performance?
Thanks for the commit.
Hi,
In the httpServerRaw project, there are sample numbers for a server. Can I ask what the hardware for that was? Or maybe include it in that comment, for the future?
Thank you!
I am trying the blog post sample and I cannot find the First() function in the sample code.
Ok, good to hear that. I tried two computers, one an x86 VM and one an M1. Both installed trunk for x86 with fpcupdeluxe, and that was it. I didn't do anything in particular.
My current version is Lazarus 3.99 (rev main_3_99-1261-gb2b86ebaef) FPC 3.3.1 x86_64-darwin-cocoa
Lazarus on macOS is not the best user experience, but was your problem a compiler issue?
Hello,
I tried a new macOS machine with the latest mORMot and FPC, and compiling the package fails with an error like: Linker: cmpl %rdx,-12(%rax)
In summary, I cannot compile mORMot.
Has anyone used it recently on a Mac? Are there any notes I should know about?
Oh believe me, compared to me, you are a god of C.
I will look into it.
Yes, I benchmarked all the compression libraries available in mORMot, and ZSTD was faster and had better compression. libdeflate is great, but ZSTD can do better.
Although I agree that libdeflate is a good way to go, especially for server needs. But for custom client and server code, ZSTD can be the better choice.
I could take care of adding ZSTD to mORMot 2 in the current TAlgoCompress style, but the problem is mostly preparing the best static libraries (including a custom memory manager like you have done for SQLite and others, or shared C lib code). I don't know C, and building their libraries is just a pain.
If it is possible for you to prepare that, I can write the code against the current official DLL.
Currently mORMot supports many compression algorithms, and I wanted to ask if there is a plan for ZSTD support?
@mpv I wanted to say thank you and I am following the work.
Hi,
I am curious about the state of working with Fossil and Git.
Looking at mORMot and mORMot2 repositories, it seems ab uses Fossil for V1 and GitHub for V2.
Can I ask, especially from ab, how did you find working with Fossil and syncing with Git? I like to work with Fossil, but the world uses Git and GitHub, so if I even do that, I probably need a way to sync them.
I want to know, from your experience, is it worth it?
Here is a post that may be interesting to check and compare with mORMot: https://www.corsix.org/content/fast-crc32c-4k
And here is another one for Apple M1: https://dougallj.wordpress.com/2022/05/ … -apple-m1/
Btw, I'm still amazed at how you made encryption code faster than OpenSSL. It is not simple code to even understand, let alone make this fast.
Well done!
Great! Thank you. I suggest adding them to the encryption code of the SQLite unit for the future; it is informative. Also, adding a line on how to prepare the JSON structure would be good. I can provide a patch if you see fit.
So the patch is needed mostly for WAL and for adding PRAGMA support; the other features seem not really needed.
Why did you say that AES-256 does not need a password-derived key? I couldn't find a reference for that.
When is it suggested to use OFB or CTR?
Exactly which bytes are not encrypted? From 0 to 16, or more, per page?
Can we have them encrypted too, to make the file look fully like noise? That would be useful when the file does not need to be read by tools other than the servers, and also for storing backups of the database without leaking any information.
As someone who has read it and knows exactly what it does, can you explain how this way of encryption works?
I guess every page is encrypted when written. But a little more detail on how each page is encrypted in relation to its neighboring pages, or on where the HMAC is stored, would help. Also, why are the first bytes of the file (the "SQLite format" header) not encrypted?
In summary, I would like to know more about how the SQLite encryption is implemented, without going too deep.
@ab, can I ask why you used the SQLite3MultipleCiphers VFS approach instead of adding a VFS at runtime using the SQLite API? I mean, why patch SQLite when you wrote most of the encryption anyway?
Could you add a VFS in Pascal like the sqlite3mc_vfs?
Thank you very much. I need to test your suggestion and will let you know if the cache is needed.
@ab can you help on this?
@JD thank you, although I need to have an independent file for each database.
@ab thanks for the notes. I want to do it with SQLite if possible. Can you elaborate on how I can route the class to different DBs while using interface-based services?
I was thinking about a virtual table, but maybe that is over-engineering? I guessed there might be something better in mORMot.
PS, I’m using mORMot 2 for this new project.
Hello,
I have multiple databases, one for each customer (they need to be independent databases for legal reasons), all with the same structure or model. And I would like to use interface methods to access them. The API should have independent methods called by the user, so no direct access to database tables.
What is the best way to route incoming requests to multiple databases?
To be clear, what is the best way to route requests like this:
User1 calls /api/logger/add?data=XYZ&key=user1_key
The server checks user1_key, finds that the data needs to be added to user1.db, and inserts it into that file.
Perhaps, in the future, the server will need to call another server (User1_Server) to do the actual insert. I think this part can be done with the master/slave architecture described in the documentation, but I appreciate any needed info.
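To make the routing question concrete, here is a minimal sketch of the lookup I have in mind, with a plain TUserDb placeholder instead of the real mORMot server class (all names hypothetical; in a real server the placeholder would be something like a per-user TRestServerDB opened on its own file):

```pascal
program RoutingSketch;
{$mode objfpc}{$H+}

// A guarded map from the user key to a per-user database object,
// opened lazily on first request. This only shows the routing
// bookkeeping; authentication and eviction are left out.

uses
  Classes, SysUtils, SyncObjs;

type
  TUserDb = class // stand-in for the real per-user database object
  public
    FileName: string;
    constructor Create(const aFileName: string);
  end;

constructor TUserDb.Create(const aFileName: string);
begin
  FileName := aFileName;
end;

var
  Lock: TCriticalSection;
  Dbs: TStringList; // key -> TUserDb, sorted for binary-search lookup

function ResolveDb(const UserKey: string): TUserDb;
var
  i: Integer;
begin
  Lock.Acquire;
  try
    i := Dbs.IndexOf(UserKey);
    if i >= 0 then
      Result := TUserDb(Dbs.Objects[i])
    else
    begin
      // first request for this user: open its own database file
      Result := TUserDb.Create(UserKey + '.db');
      Dbs.AddObject(UserKey, Result);
    end;
  finally
    Lock.Release;
  end;
end;

begin
  Lock := TCriticalSection.Create;
  Dbs := TStringList.Create;
  Dbs.Sorted := True;
  WriteLn(ResolveDb('user1_key').FileName); // user1_key.db
  WriteLn(ResolveDb('user2_key').FileName); // user2_key.db
  WriteLn(ResolveDb('user1_key').FileName); // cached instance reused
end.
```

The open question for me is where this resolution step best plugs into mORMot's request pipeline when using interface-based services.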
It was a copy-and-paste issue; I fixed it.
Is the style of the solution correct? Registering the class and using ParseNewInstance?
Here is my answer to my own question. I do not know if this is the best option, though.
procedure TItemReader.ReadItem(var Context: TJsonParserContext; Data: pointer);
var
  P, S: PUTF8Char;
  SP: RawUTF8;
  L: integer;
  C: TItemClass;
begin
  P := JsonObjectItem(Context.Json, 'CustomClass');
  if P = nil then
    raise Exception.Create('Invalid Item');
  JSONRetrieveStringField(P, S, L, False);
  SetString(SP, S, L);
  case SP of
    'ItemA': C := TItemA;
    'ItemB': C := TItemB;
  else
    raise Exception.Create('Unknown Item: ' + SP);
  end;
  Context.Info := Rtti.RegisterClass(C);
  // store the parsed instance into the value being deserialized
  TObject(Data^) := TRttiJson(Context.Info).ParseNewInstance(Context);
end;
And register it like this:
TRttiJson.RegisterCustomSerializer(TypeInfo(TItem), @ReadItem, nil);
I had a code with mORMot1 like:
TTextWriter.RegisterCustomJSONSerializer(TypeInfo(TItems), @ReadItem, nil);
and
function TItemReader.ReadItem(P: PUTF8Char; var AValue; out AValid: boolean): PUTF8Char;
var
V: TObject absolute AValue;
OldP, S: PUTF8Char;
SP: RawUTF8;
L: integer;
begin
AValid := False;
Result := nil;
if P = nil then
Exit;
OldP := P;
P := JsonObjectItem(P, 'CustomClass');
if P = nil then
raise Exception.Create('Invalid Item');
JSONRetrieveStringField(P, S, L, False);
SetString(SP, S, L);
case SP of
'ItemA': V := TItemA.Create;
'ItemB': V := TItemB.Create;
else
raise Exception.Create('Unknown Item: ' + SP);
end;
AValid := True;
Result := JSONToObject(AValue, OldP, AValid, nil, JSONTOOBJECT_TOLERANTOPTIONS);
end;
and now with mORMot 2, I cannot convert it.
I thought I should do the following, but it leads to unknown errors.
function TItemReader.ReadItem(P: PUTF8Char; var AValue; out AValid: boolean): PUTF8Char;
var
V: TObject absolute AValue;
OldP, S: PUTF8Char;
SP: RawUTF8;
L: integer;
begin
AValid := False;
Result := nil;
P := Context.Json;
OldP := P;
P := JsonObjectItem(P, 'CustomClass');
if P = nil then
raise Exception.Create('Invalid Item');
JSONRetrieveStringField(P, S, L, False);
SetString(SP, S, L);
case SP of
'ItemA': V := TItemA.Create;
'ItemB': V := TItemB.Create;
else
raise Exception.Create('Unknown Item: ' + SP);
end;
AValid := True;
Context.Json := JSONToObject(Instance, Context.Json, Context.Valid, nil, JSONTOOBJECT_TOLERANTOPTIONS);
end;
What is the correct way to unserialize an array of classes?
Note: I cannot use ParseNewObject, as the CustomClass property may not be the first property, and there are some other custom needs.
Now it is working. Thank you.
Just checked again,
The error can be reproduced with the latest mORMot2, SQLite revision 3.37.2, and the current FPC trunk.
Most mORMot tests fail; it seems to be a memory problem.
One interesting thing for me is why such an error is not caught by some check, and how it can mess with other parts of the code, even parts that seem unrelated.
I suspected the RTL patching, but activating NOPATCHRTL makes everything worse.
I was using an old trunk with the mORMot2 from two weeks ago (SQLite revision 3.37.2, 172d789404d400ed60447a2b757c5ac770906ae9) and everything was fine, but I pulled mORMot2 out of habit and got a memory crash...
So I updated my trunk, and now I have the problem I reported.
Unfortunately I'm stuck with Trunk (3.3.1) for generics and I cannot go back.