Hello, I'm trying to write an application that requires high-performance inserts from multiple clients using mORMot RestORM, so I created a benchmark to compare the performance of SQLite and PostgreSQL. However, because RestORM only uses a single connection to PostgreSQL, the speed of the application suffers greatly.
Here are the source code for the benchmark and the results on my computer.
The mormot2 and multithreadprocslaz packages are required to build and run the benchmark.
What is the proper way to use mORMot RestORM with PostgreSQL?
And if PostgreSQL + mORMot performance can be improved, what advantages would SQLite still have over PostgreSQL with mORMot? As a server database, I believe PostgreSQL is safer.
However, it's worth noting that PostgreSQL's performance is only equivalent to SQLite's with Sync Off. So, if PostgreSQL isn't fully flushing data to disk, there's a chance the data will be corrupted by a sudden crash.
Any input on this is welcome.
Your code is using manual inserts, even with the ORM, so it is not as tuned as it could be.
The fastest approach is to use our ORM, but in BATCH mode.
It will redirect the inserts to the fastest method on each database.
And you could switch from SQLite3 to PostgreSQL when you need to.
Ensure you read
https://synopse.info/files/html/Synopse … ml#TITL_59
- old and written for mORMot 1, but still relevant, especially for the SQLite3 access mode (off + exclusive is the way to go for performance).
About mORMot 2, see
https://blog.synopse.info/?post/2022/02 … erformance
- you can reach 1 million inserts per second using SQLite3, in off / exclusive mode, in your configuration, I guess.
- on Linux x86_64, also ensure you use our memory manager.
I guess you could get much more than what you got with your first attempt.
Note that multi-threading could help a little for a small number of records, but there will be contention in SQLite3 (which has a single writer), and also in PostgreSQL (which has hardware contention).
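As a minimal sketch of the "off + exclusive" setting recommended above (assuming mORMot 2 unit and property names; TOrmSample is a hypothetical TOrm class standing in for the benchmark record):

```pascal
uses
  mormot.orm.core,
  mormot.db.raw.sqlite3,
  mormot.rest.sqlite3;

var
  Model: TOrmModel;
  Server: TRestServerDB;
begin
  Model := TOrmModel.Create([TOrmSample]); // TOrmSample is hypothetical
  Server := TRestServerDB.Create(Model, 'bench.db');
  // "off + exclusive" mode, as discussed above:
  Server.DB.Synchronous := smOff;       // don't wait for fsync on each commit
  Server.DB.LockingMode := lmExclusive; // keep the file locked by this process
  Server.Server.CreateMissingTables;
  // ... run the benchmark against Server here ...
end;
```

With smOff, a power failure can lose the latest transactions (similar to the durability trade-off discussed for PostgreSQL in this thread), so it is a deliberate speed-vs-safety choice.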
For my use case, BATCH mode is not usable, as many users with many connections need the database.
When I tried the mORMot memory manager, it didn't make much of a difference (about 10%).
My main question is more about whether I can get more out of PostgreSQL with a multi-connection design. Is that the right approach, or is there something I should try with SQLite? Or, since the PostgreSQL numbers are similar to SQLite's off + exclusive mode, is there a chance of losing data with PostgreSQL?
Two more questions:
- Using PostgreSQL with mORMot across multiple connections/threads seems slower. Is there an option to change in mORMot to get maximum speed without BATCH, since BATCH is not useful in my case?
- Do you know anything about https://www.sqlite.org/src/vdiff?branch … concurrent? Can it be useful in this case?
My overall goal is to use the database that is best for multiple connections/threads, and to use it with mORMot if maximum speed is achievable that way.
Our MM makes more of a difference when it is really stressed: e.g. in multi-threaded processing, or when there are a lot of small allocations to do.
If most of the time is spent waiting for the DB to write to disk (SQLite3) or to send over the network (PostgreSQL), of course the MM won't change much.
I don't understand why BATCH mode is not usable.
It is just a way to prepare some ORM write operations, and send them at once and execute them in a single transaction.
With multiple users, you can e.g. share a TRestBatch instance on the server side, then fill it from several users and threads (using a lock), then send it at once, e.g. in a mORMot background thread, or after a number of operations.
I have done this several times, and it gave awesome numbers. Much faster than multiplying the connections.
Don't try to go in any other direction: you will get lower performance, and end up with something more complicated, less maintainable, and less open to change.
And put all DB processing on the server side, within a service. You need to think at a higher level, and use the framework for what it is good at.
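The shared-TRestBatch pattern described above could be sketched as follows (a hedged example, assuming mORMot 2 names such as TRestBatch, IRestOrm.BatchSend and TOSLock; TOrmSample and the procedure names are hypothetical):

```pascal
var
  Safe: TOSLock;    // mormot.core.os lock; init with Safe.Init once
  Batch: TRestBatch;

// called from any user/worker thread
procedure AddFromAnyThread(Rest: TRestServer; Rec: TOrmSample);
begin
  Safe.Lock;
  try
    if Batch = nil then
      // start a new batch: one transaction per 1000 rows
      Batch := TRestBatch.Create(Rest.Orm, TOrmSample, 1000);
    Batch.Add(Rec, {SendData=}true);
  finally
    Safe.UnLock;
  end;
end;

// called e.g. from a background timer thread, or after N operations
procedure FlushBatch(Rest: TRestServer);
var
  ToSend: TRestBatch;
begin
  Safe.Lock;
  try
    ToSend := Batch;  // detach the current batch under the lock
    Batch := nil;
  finally
    Safe.UnLock;
  end;
  if ToSend <> nil then
    try
      Rest.Orm.BatchSend(ToSend); // one round trip, one transaction
    finally
      ToSend.Free;
    end;
end;
```

The point of detaching the batch under the lock is that the (potentially slow) BatchSend call runs outside the critical section, so user threads are never blocked by the actual database write.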
Yes, I can use BATCH mode with a timer to open and close the transaction.
However, it adds a delay to the user's response, which is undesirable in my use case.
For the time being, I can't decide whether to use PostgreSQL directly, or SQLite together with mORMot.
It is a trade-off between SQLite's performance and PostgreSQL's safety.
@tihory,
From your description, I guess you might have misunderstood the batch updating mode in mORMot.
Basically, it's a cache: you put your updated, deleted, or newly added records in it, then send all the data in a single shot.
I don't use PostgreSQL with mORMot, so I can't comment on that.
Delphi XE4 Pro on Windows 7 64bit.
Lazarus trunk built with fpcupdeluxe on Windows with cross-compile for Linux 64bit.
The issue with BATCH mode is that if I set a timer to open and close it every 100 ms, the response time to the user, confirming that the data is truly committed, will increase by about 50 ms on average.
Although this may not seem like much, it makes choosing between PostgreSQL and SQLite more difficult. It also makes me wonder whether there is a way to use mORMot with PostgreSQL without BATCH mode and still obtain the same speed.
Correct me if I'm wrong, but it seems from your comments that your primary concern is the user UI stalling/locking/delaying due to waiting for database action to complete?
Yes. In my application, it is important to guarantee that the database has committed the data before sending a confirmation message to the user.
Let me rephrase: why exactly do you care if a batch transaction takes 50 ms? Does your client application lock up and stop responding?
why exactly do you care if a batch transaction takes 50 ms?
Although 50 ms is not a huge delay, PostgreSQL does not impose it across multiple connections, so I'm not sure which is better: PostgreSQL alone, or SQLite + mORMot.
Does your client application lock up and stop responding?
Yes, the client must wait until the action for its request is completed before it can issue another request.
The issue is that you're executing database requests in the same thread as the user UI, which is just bad all around. You should decouple your logic from the UI: database requests should be done in a separate thread. However, this requires some threading knowledge on your part. Please read: https://synopse.info/files/html/Synopse … #TITLE_304
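One common way to decouple the blocking call from the UI is an anonymous worker thread that synchronizes back only once the commit has been confirmed (a minimal sketch; Client, Rec, and ShowConfirmation are hypothetical names, and IRestOrm.Add is assumed from mORMot 2):

```pascal
// run the blocking ORM/HTTP call off the UI thread
TThread.CreateAnonymousThread(
  procedure
  var
    ID: TID;
  begin
    // blocks until the server has committed the record
    ID := Client.Orm.Add(Rec, {SendData=}true);
    TThread.Synchronize(nil,
      procedure
      begin
        ShowConfirmation(ID); // update the UI only after the commit
      end);
  end).Start;
```

The UI stays responsive during the 50 ms (or longer) round trip, and the confirmation message is still only shown once the data is really committed, which was the stated requirement.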
@tihory, your test benchmark is not complete: you don't test PQ ORM multi-threaded inserts. Try adding an HTTP server to the test with TRestHttpServer. TRestHttpServer.Create has an aThreadPoolCount parameter that defaults to 32 worker threads, each with its own PQ connection from the pool.
After
PQProps := TSqlDBPostgresConnectionProperties.Create('127.0.0.1', 'mormot_postgres_test', 'root', '1');
set
PQProps.ThreadingMode := tmThreadPool;
If you use
PQProps.ThreadingMode := tmMainConnection;
you will have only one DB connection, and the 32 worker threads will share that single connection via Lock/UnLock.
You will need multiple TRestHttpClient instances in multiple threads for the test.
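Putting those pieces together, a server setup along these lines could be sketched (hedged: assumes mORMot 2 names such as VirtualTableExternalRegisterAll and useHttpAsync; TOrmSample and the port are placeholders):

```pascal
var
  PQProps: TSqlDBPostgresConnectionProperties;
  Model: TOrmModel;
  Rest: TRestServerDB;
  Http: TRestHttpServer;
begin
  PQProps := TSqlDBPostgresConnectionProperties.Create(
    '127.0.0.1', 'mormot_postgres_test', 'root', '1');
  PQProps.ThreadingMode := tmThreadPool; // one connection per worker thread
  Model := TOrmModel.Create([TOrmSample]);
  // map the ORM tables to PostgreSQL instead of the local SQLite3 engine
  VirtualTableExternalRegisterAll(Model, PQProps);
  Rest := TRestServerDB.Create(Model, SQLITE_MEMORY_DATABASE_NAME);
  Rest.Server.CreateMissingTables;
  // aThreadPoolCount defaults to 32 worker threads
  Http := TRestHttpServer.Create('8080', [Rest], '+', useHttpAsync);
  // ... let the client threads hammer the server here ...
end;
```

With tmThreadPool, each HTTP worker thread gets its own PostgreSQL connection, which is the multi-connection behavior being discussed in this thread.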
Thank you very much, @ttomas.
I made a fresh gist following your advice for the benchmark, and mORMot created one connection per thread, but most of the time only one connection was actually sending requests to the PostgreSQL database.
Here is the gist of the new benchmark.
Last edited by tihory (2022-05-04 11:17:54)
@tihory, you don't publish results in rows/sec.
With small changes you can switch PQ to the SQLite3 ORM to compare results.
You don't stress the database enough with this micro-benchmark: you only see 1-2 connections inserting at a time.
Try increasing client threads to a realistic number, e.g. 100. Try with 10 worker threads and 100 client threads.
It would be nice if you had 1 client machine, 1 app server, and 1 PQ server on a LAN, on real hardware.
You can add in Client:
writeln('Start Client: ', Index); // Add
for I := Index * PerConnection to (Index + 1) * PerConnection - 1 do
begin
  Http.Add(Texts[I], True);
end;
writeln('End Client: ', Index); // Add
Just to check that all clients run in parallel.
Edit: Try to create separate client and server benchmark applications. Too many threads in one process will stress the memory manager.
Last edited by ttomas (2022-05-04 19:22:29)
@ttomas I made a new gist based on your proposal, which you can find here.
I ran the client on another PC and used my PC as the server, but nothing changed.
And the number of client threads made no difference.
Last edited by tihory (2022-05-07 06:55:31)
@tihory, I played with your gist benchmark, added SQLite3 and FirebirdSQL servers, and added BatchAdd in the client.
SQLite3 DB ORM : Records Per Second : 17623
SQLite3 DB ORM : Batch Records Per Second : 509113
PostgreSQL DB ORM : Records Per Second : 1295
PostgreSQL DB ORM : Batch Records Per Second : 108610
FirebirdSQL DB ORM : Records Per Second : 2605
FirebirdSQL DB ORM : Batch Records Per Second : 37846
This is a localhost test on my 3-4 year old laptop with an i3-8145U (2 cores, 4 HT threads).
On the server you can play with the PQ and FB defines.
On the client you have the BatchAdd define to switch client mode.
With your hardware you can expect much better req/sec.
Batch inserts are not a silver bullet for every case/project.
Without batch, every client sends 1000 HTTP requests, and most of the time is spent in HTTP parsing, object serialization/deserialization, and ORM mapping, with only a small part in the actual DB insert.
HTTP and the ORM add some overhead in this test, but this is much closer to a real server app.
My changes in gist https://gist.github.com/TTomas/619173ca … 3c9c290cd7
Last edited by ttomas (2022-05-07 20:30:31)