The state of async, summarized from the last 2 runs:
                 db       queries  fortunes  updates
sync
 -s28 -t8  -p    362,787  31,436   353,182   16,750
async
 -s28 -t8  -p    381,633  32,725   290,512   23,022
 -s56 -t1  -p    398,619  32,892   312,836   20,154
 -s1  -t56 -nop  371,371  31,766   330,803   19,995
I made a new PR 8232 that changes [async] to -s28 -t4 -p, keeps [async,nopin] the same, and adds binary array binding. Once we receive the results, we will be able to choose the best parameters for async.
TFB state
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores Date - Notes
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
17 1,045,953 362,716 30,896 353,131 16,568 6,994,573 6,060 2023-04-13 - 224 threads (8*28 eft, ts), update using VALUES (),().., removed Connection: Keep-Alive resp header
13 1,109,267 363,671 31,652 352,706 16,897 6,956,038 6,156 2023-04-24 - 224 threads (-s 28 -t8 -p), each server (with all threads) is pinned to a different CPU
7 1,109,693 381,633 32,725 353,182 23,022 6,975,086 6,634 2023-05-13 - 224 threads, added async test in -s 28 -t8 -p mode: db, queries & updates are async, fortunes is direct
We are #7 even with non-optimal thread/server count for async tests. And #2 in cached-queries
Tomorrow new results are expected - the async tests will be executed in `-s 56 -t 1 -p` and `-s 1 -t 56 --nopin` modes. I am waiting for the results with bated breath...
The TFB MR is merged. Results are expected on 2023-05-29.
No, I'm a zero in Python.
The env. variable is now correctly passed into the app container - see the modified dockerfile.
I suggest running it once without binary array binding, to have a basis for comparison.
The new MR 8207, based on the latest sources and with a new test case `-s 1 -t CPU*2 -nopin`, is ready.
I use the `unnest` pattern for /asyncUpdates (as in the prev. MR), because your implementation fails on the ?queries=501 test (too many parameters). In /rawupdates we use `unnest` if count>20, else `select from values` - but unnest alone works well, IMHO.
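For reference, here is a minimal sketch of the two SQL shapes mentioned above, written as Pascal constants (the table/column names follow the TFB World schema; the exact statements in the MR may differ):

const
  // `unnest` pattern: always exactly two int[] parameters, whatever the row count
  SQL_UPDATE_UNNEST =
    'UPDATE world SET randomnumber = v.r FROM ' +
    '(SELECT unnest($1::int[]) AS id, unnest($2::int[]) AS r) AS v ' +
    'WHERE world.id = v.id';
  // `values` pattern: two parameters per row, so the statement (and the number
  // of bound parameters) grows with the count - shown here for 3 rows
  SQL_UPDATE_VALUES =
    'UPDATE world SET randomnumber = v.r FROM ' +
    '(VALUES ($1::int, $2::int), ($3::int, $4::int), ($5::int, $6::int)) ' +
    'AS v(id, r) WHERE world.id = v.id';

With unnest the parameter count stays constant, which is why it copes with ?queries=501, where a per-row binding scheme can run out of parameters.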
I understand why updates are better for async - this is because of lower concurrency; in fact, in my environment I also got ~23K for updates.
I will make a new PR today with the latest sources and a new async test case with `-s 1 -t CPU*4`.
And YES - we are in TOP10 now!!!! Congratulations!!!!
I also do not fully understand the numbers, but we have what we have. TFB results for the first async implementation should appear on 2023-05-11; after this I'll make a PR with the new implementation and one more test case for async, so we will verify both the `-s CPU*2 -t 1 -p` and `-s 1 -t CPU*4` cases.
Tested the new async implementation on 2x Xeon(R) Silver 4214R CPU @ 2.40GHz. Each component is limited by taskset: the first 16 CPUs for the app, the second 16 for the db, and the third 16 for wrk (to emulate the three TFB servers).
The result is better than the initial async implementation (first table row). The best values are still for the -s 32 -t 1 -p mode.
See the table on Google Drive.
TFB state
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores Date - Notes
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
17 1,045,953 362,716 30,896 353,131 16,568 6,994,573 6,060 2023-04-13 - 224 threads (8*28 eft, ts), update using VALUES (),().., removed Connection: Keep-Alive resp header
13 1,109,267 363,671 31,652 352,706 16,897 6,956,038 6,156 2023-04-24 - 224 threads (-s 28 -t8 -p), each server (with all threads) is pinned to a different CPU
Thanks to the CPU pinning, we are now #13 (above .NET). Today's round started without a merge.
We hope that our MR with *async* test suite and improved Int64 JSON serialization will be merged in the next round.
It is very likely that we will be in the top 10 (and #1 in cached queries) after that.
BTW - a modification to libpq, similar to ours, has been applied by Postgres reviewers and should be included in Postgres v17 (in ~1 year). The next h2o test should also use a modified libpq that does not flush on every sync.
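As far as I can tell, the upstream change will land as a new entry point rather than a changed PQpipelineSync, so a Pascal import would look roughly like this (the name PQsendPipelineSync and its availability in the future PostgreSQL 17 libpq are my assumption - do not rely on it today):

type
  PPGconn = pointer;

// assumed upstream API: appends a Sync message to the pipeline WITHOUT flushing
// the output buffer, so many pipelined statements can leave in a single write
// syscall followed by one explicit PQflush
function PQsendPipelineSync(conn: PPGconn): integer; cdecl; external 'pq';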
I've updated TFB PR 8182 with the current sources (refactored PostgreSQL async DB) - a new async test suite is added. They usually merge on Saturday night, so we may participate with async in Monday's run.
Just tested the current implementation - for now the best results are with `-s CPU*2 -t 1 -p`. /asyncdb and /asyncfortunes are faster (+25%) compared to rawdb/rawfortunes.
I can wait until you implement `ExecuteAsyncPrepared`, or update the TFB PR with the current implementation - what is your opinion?
For now, the best /asyncfortunes result I was able to achieve is with servers=CPUCount*2, threads per server=1, pinned:
# taskset -c 0-15 ./raw12 -s 32 -t 1
....
num servers=32, threads per server=1, total threads=32, total CPU=48, accessible CPU=16, pinned=TRUE
taskset -c 31-47 ./wrk -H 'Host: 10.0.0.1' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 10 -c 512 --timeout 8 -t 16 "http://localhost:8080/asyncfortunes"
Requests/sec: 482751.02
This is +25%, which is very, VERY good, but I'm almost sure I need to play more with the parameters...
And we do not need the asoForceConnectionFlush option at all. Even with the modified libpq, PQgetResult will internally call flush first. In rawqueries, pConn.Flush can also be removed.
Please, see this PR
What cmd line parameters do you use to test (threads/servers/pinning)? On my server hardware the async* results (with servers=CPUCount, threads=8, pinning) are a little worse compared to raw*:
num servers=16, threads per server=8, total threads=128, total CPU=48, accessible CPU=16, pinned=TRUE, db=PostgreSQL
taskset -c 31-47 ./wrk -H 'Host: 10.0.0.1' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 10 -c 512 --timeout 8 -t 16 "http://localhost:8080/asyncfortunes"
Requests/sec: 353990.97
taskset -c 31-47 ./wrk -H 'Host: 10.0.0.1' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 10 -c 512 --timeout 8 -t 16 "http://localhost:8080/rawfortunes"
Requests/sec: 393226.26
BTW - nice pictures
About /asyncupdates - from my POV it is not correct to pipeline updates: in a realistic /updates scenario we should do all the selects together with the update in one transaction (even if TFB does not require this). But in our "async" model we can't do transactions at all (actually we can, but from a consistency POV it is not correct) - only atomic select operations.
This is why I'm considering doing the /async* endpoints only for db, queries and fortunes - I'll create a separate test case in benchmark_config.json with "approach": "Stripped" for such endpoints.
Am I right, or am I missing something?
I'll test it over the weekend or tonight - the server I'm testing on is busy (end of the month - people are creating reports, etc.)
There is a missing Connect call:
--- a/src/db/mormot.db.sql.postgres.pas
+++ b/src/db/mormot.db.sql.postgres.pas
@@ -1379,6 +1379,7 @@ begin
fProperties := Owner;
fStatements := TSynObjectListLightLocked.Create;
fConnection := fProperties.NewConnection as TSqlDBPostgresConnection;
+ fConnection.Connect;
fConnection.EnterPipelineMode;
end;
And it should be tested, because currently the result is always `{"id":0,"randomNumber":0}` and only 19 RPS per server.
Looking forward to it! We still have at least 7 days until the next merge request...
I know this cool article about async in .NET. In fact, the same steps were taken in JS. In the browser client for UnityBase I started with callbacks 13 years ago, then moved to an iterator-based Promise polyfill, then to Promises and finally to async/await.
In Pascal we need at least iterator support at the compiler level; without it the only option is callbacks, and that is hell... An example of a callback-based implementation is h2o.
I like our current implementation - at the app level, everything is quite simple. Complicating it to the level of manual implementation of asynchronization is likely to alienate potential users.
I'm still confident that we can find a way to improve the current implementation (and I'm working on it periodically) - we only need +200 composite points to get into the top 10 TFB...
TFB PR 8182 is ready - it should improve /cached-queries and maybe /queries also.
We can also avoid tmp and MoveFast, can't we? At least in writer.Add.
While looking at cached-queries performance I found a *VERY* unexpected thing:
TTextWriter.Add(Value: PtrInt) uses a fast lookup table for values < 999.
I decided to increase it to 9999 (TFB IDs are 0..10000) and... performance got worse.
If I comment out the lookup code, performance increases.
For cached-queries?count=100:
- no lookup: 511k RPS
- 999 lookup size: 503k RPS
- 9999 lookup size: 466k RPS
@ab - do you have any idea why? The relative numbers do not depend on CPU pinning, server count, thread count...
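For context, the technique in question looks roughly like this (an illustrative sketch only, not the actual TTextWriter code; AppendInt and SmallIntText are hypothetical names):

// assumes mormot.core.base and mormot.core.text in the uses clause
var
  SmallIntText: array[0..999] of RawUtf8; // tens of KB more if extended to 0..9999

procedure InitSmallIntText;
var
  i: PtrInt;
begin
  for i := 0 to high(SmallIntText) do
    SmallIntText[i] := UInt32ToUtf8(i); // pre-render the decimal text once
end;

procedure AppendInt(W: TTextWriter; Value: PtrInt);
begin
  if (Value >= 0) and (Value <= high(SmallIntText)) then
    W.AddString(SmallIntText[Value])    // table hit: a single small copy
  else
    W.Add(Value);                       // fallback: regular conversion loop
end;

One plausible explanation for the slowdown: a 1,000-entry table stays hot in L1/L2 cache under this workload, while a 10,000-entry one does not, so every miss costs more than the division loop it was meant to replace - but that is only a guess.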
About POrmCacheTable - of course we could put it in a field. But I doubt it would make any performance difference: it is an O(1) lookup.
It is called 400k times per second. Caching could give us the +0.1% performance boost we need to be #1... At least in my environment, this is what happens.
Unfortunately, rawcached breaks the rules. There are already discussions in TFB issues that such implementations should be banned - I don't want to take risks.
About pipelining DB requests for /db and /fortunes - this is an interesting idea. Actually, the top-rated frameworks do exactly that.
In this case we need:
- callback on HTTP server level and
- callback on DB level
Each server can use a single per-server DB connection and a new method on the DB layer: stmt.ExecutePipelining(maxCnt, timeout, callback);
stmt.ExecutePipelining can buffer up to maxCnt statements (or until the timeout), run them in a single pipeline and notify the callback for each caller - see the sketch below.
And finally we get callback hell (especially when handling exceptions) - I've seen this in old .NET and JavaScript before they implemented async/await at the runtime level.
But for benchmark purposes we can try.
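To make the buffering idea concrete, here is a tiny self-contained sketch with no DB at all - only the maxCnt/timeout/callback shape comes from the description above; every name is hypothetical:

program PipelineBufferDemo;
{$mode objfpc}{$H+}
// callers enqueue statements; the buffer is flushed either when maxCnt items
// are pending or (in a real server) when the timeout elapses, and each caller
// is then notified through its callback
uses
  SysUtils;

type
  TResultCallback = procedure(const RequestSql, FakeResult: string);

var
  Pending: array of string;

procedure OnResult(const RequestSql, FakeResult: string);
begin
  WriteLn('callback for [', RequestSql, '] -> ', FakeResult);
end;

procedure Flush(Callback: TResultCallback);
var
  i: integer;
begin
  if Length(Pending) = 0 then
    exit;
  // a real implementation would send all pending statements as one PG pipeline
  // (a single write) and then read the results back in order
  for i := 0 to High(Pending) do
    Callback(Pending[i], 'row #' + IntToStr(i));
  SetLength(Pending, 0);
end;

procedure Enqueue(const Sql: string; maxCnt: integer; Callback: TResultCallback);
begin
  SetLength(Pending, Length(Pending) + 1);
  Pending[High(Pending)] := Sql;
  if Length(Pending) >= maxCnt then
    Flush(Callback); // the timeout path would call Flush from the event loop
end;

var
  i: integer;
begin
  for i := 1 to 5 do
    Enqueue('SELECT x FROM world WHERE id=' + IntToStr(i), 3, @OnResult);
  Flush(@OnResult); // final flush, standing in for the timeout
end.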
In the current round we moved above actix and .NET Core.
The final results will be in 3 days; I expect we will be #15.
@ab - is it correct to compute POrmCacheTable once in the TRawAsyncServer constructor (instead of computing it every time here)? This should give us the few extra requests we need to be #1 in cached-queries...
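What I have in mind is roughly this (a sketch with stand-in types only - LookupCacheTable is hypothetical and stands for whatever expression the handler currently evaluates on every call):

program CacheFieldSketch;
{$mode objfpc}{$H+}

type
  POrmCacheTable = pointer; // stand-in for the real mORMot type

  TRawAsyncServer = class
  private
    fWorldCache: POrmCacheTable; // resolved once, in the constructor
    function LookupCacheTable: POrmCacheTable;
  public
    constructor Create;
    procedure CachedQueries; // stands for the /cached-queries handler
  end;

constructor TRawAsyncServer.Create;
begin
  inherited Create;
  fWorldCache := LookupCacheTable; // previously: evaluated on every request
end;

function TRawAsyncServer.LookupCacheTable: POrmCacheTable;
begin
  result := nil; // placeholder for the real cache accessor
end;

procedure TRawAsyncServer.CachedQueries;
begin
  // use fWorldCache directly here - saves one lookup per call
  // (~400k calls/second during cached-queries)
end;

var
  srv: TRawAsyncServer;
begin
  srv := TRawAsyncServer.Create;
  srv.CachedQueries;
  srv.Free;
end.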
Current TFB status
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores Date - Notes
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
17 1,045,953 362,716 30,896 353,131 16,568 6,994,573 6,060 2023-04-13 - 224 threads (8*28 eft, ts), update using VALUES (),().., removed Connection: Keep-Alive resp header
We are still #17, but the composite score improves with every new run. Also, we moved up from #7 to #3 in the cached-queries test.
Now we are trying CPU pinning - I expect a good improvement in /json and /cached-queries...
Today I ran the TFB tests 5 times (each run takes ~30 minutes) and the memory error did not occur (with the old sources, without GetAsText), so it's really a heisenbug.
It occurs NOT on server shutdown, but just after the wrk command ends - I think when sockets are closing... Will continue to investigate...
About command line parameters - nice code. Please look at PR 176 - I made a more Unix-style formatting of the help message.
HTTP pipelining fixed - thanks! I made a TFB PR 8153 with CPU pinning - let's wait for results.
The memory problem still exists. Today I caught it twice (out of 5-6 runs) - once after /db and once after /rawqueries - while running
./tfb --test mormot mormot-postgres-raw --query-levels 20 -m benchmark
Still can't reproduce it in a more "debuggable" way.
Also synced my latest TFB changes back into ex/techempower-bench/raw.pas - see PR 175 for mORMot2.
@ab - HTTP pipelining is currently broken. It was broken by the "added Basic and Digest auth" feature.
The last good commit is [1434d3e1] "prepare HTTP server authentications" - 2023-04-13 1:48. After that there is a series of commits that do not compile due to the new aAuthorize param for THttpServerRequestAbstract.Prepare, and the first commit that compiles responds only to the first pipelined request.
This can be verified using the console command below - it should return 2 "Hello, World!" responses:
(echo -en "GET /plaintext HTTP/1.1\nHost: foo.com\nConnection: keep-alive\n\nGET /plaintext HTTP/1.1\nHost: foo.com\n\n"; sleep 10) | telnet localhost 8080
@ttomas - thanks for the idea - I added a CONN param to the gist - the connection count for wrk; for plaintext 1024 is used (all frameworks show their best results with 1024).
@ab - I added a shebang to the gist (first line) - maybe your default shell is not bash. Also ensure you have the `bc` utility (apt install bc).
Nice to hear that our measurements with pinning match now... I do not understand why in your case json is better than plaintext - in my case plaintext is always better.
I will make a PR to TFB on Sunday (when the current run results for mormot appear) - then we can see what pinning gives us on real hardware. BTW, pinning is a common practice for async servers - even nginx has a worker affinity option in its config. In the TFB tests pinning is used at least by libreactor and H2O.
W/O pipelining (with cmem) the results are (note: count=100 for cached-queries, as in the TFB test):
         /json      /plaintext  /cached-queries?count=100
pinning  1,281,204  1,301,311   493,913
default  1,088,939  1,168,009   471,235
I put the program I use to create load for smoke tests in this gist. CORES2USE and CORES2USE_COUNT should be edited to match the CPUs used by wrk.
Actually json is not 4x slower, because plaintext is pipelined with 16 HTTP requests per packet, so there are 7,000,000/16 packets, and performance is limited by the 10G network.
I have analysed json under valgrind many times and currently do not see any possible improvements, except minimizing CPU migrations and context switches using CPU pinning.
Your results are strange to me... Did you try using the first 10 CPUs for the app and the second 10 for wrk? And please check that you are using cmem.
The TFB hardware is a single-socket CPU...
I run the tests on a 48-core server (2 sockets * 24 cores each) using
taskset -c 0-15 ./raw
num thread=8, total CPU=48, accessible CPU=16, num servers=16, pinned=TRUE, total workers=128, db=PostgreSQL
Postgres is limited to cores 15-31 by adding a systemd drop-in /etc/systemd/system.control/postgresql.service.d/50-AllowedCPUs.conf with the content
[Service]
AllowedCPUs=15-31
and wrk limited to last 16 cores
taskset -c 31-47 ./wrk
In this case results are
json 1,207,744
rawdb 412,057
rawfortunes 352,382
rawqueries?queries=20 48,465
cached-queries?count=100 483,290
db 376,684
queries?queries=20 32,878
updates?queries=20 22,016
fortunes 300,411
plaintext 3,847,097
while the same without pinning are
json 1,076,755
rawdb 409,145
rawfortunes 359,764
rawqueries?queries=20 47,887
cached-queries?count=100 456,215
db 395,335
queries?queries=20 33,542
updates?queries=20 22,148
fortunes 306,237
plaintext 3,838,749
There is a small degradation in the db-related tests, but the composite score is better. I plan to check pinning on the TFB hardware and decide what to do depending on the results. We can, for example, create a separate dockerfile with pinning for the non-db endpoints and without pinning for the db-related ones (as @ttomas proposes).
Added a CPU pinning feature to the TFB example - see mORMot2 PR #172. I will do the same PR for TFB after I get the results of the next run (should start on 2023-04-13, with the new update algo and the removed keep-alive header).
In this PR I add accessible-CPU analysis - for testing purposes, when we limit CPUs using `taskset`.
About the memory error - unfortunately this is all I currently have. If I enable logging it is not reproduced; currently it is reproduced ONLY during `./tfb --test mormot --query-levels 20 -m benchmark`, but not on every run.
I found that we do not initialize the global flags variable in raw.pas - maybe unexpected flags are added and this is the reason for our memory problems... Will fix it in the next MR (to both TFB and mORMot).
Also I verified my new idea - we create 28 servers with 8 threads each, and I bind all the threads of each server to the same CPU; on my hardware it gives a 1,002K -> 1,200K boost for /json (see the sketch below). Please give me access to TAsyncConnections.fThreads - MR #171 - it allows me to set the affinity mask from the TFB test program.
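For clarity, the pinning loop I mean is roughly this (a sketch only - it assumes MR #171 exposes each server's worker threads as Async.Threads[], and that a SetThreadCpuAffinity() helper is available; both names are assumptions, to be adjusted to the real API once merged):

// pin all 8 threads of each of the 28 server instances to the same CPU core
procedure PinServersToCores;
var
  s, t, cpu: PtrInt;
begin
  for s := 0 to high(fServers) do
  begin
    cpu := s mod SystemInfo.dwNumberOfProcessors;          // one core per server instance
    for t := 0 to high(fServers[s].Async.Threads) do       // assumed accessor from MR #171
      SetThreadCpuAffinity(fServers[s].Async.Threads[t], cpu); // assumed helper
  end;
end;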
The memory problem is reproduced when I run `./tfb --test mormot --query-levels 20 -m benchmark` - not on every run, randomly. And in random places...
I hadn't seen it before 2023-03-08 [46f5360a66]; it first appeared when I checked out commit [2ae346fe11b] (2023-03-14), so it was introduced somewhere between 08 and 14 March.
I'll try to get closer to the faulty commit using the bisect technique, but this is a long process...
Thanks! I updated the TFB MR. Will sync all changes back into mORMot after the new update algo is verified.
Starting from commit [2ae346fe11b91fbe6fa1945cf535abed3de99d37] (Mar 14, 2023) I observe memory problems.
It occurs randomly (sometimes after /db, sometimes after /json), but always after the wrk session is finished (on socket closing?).
I can't reproduce it in normal execution, only during tfb --benchmark.
It was also reproduced once on 2023-03-30 in the TFB environment - this is why this round does not contain cached-queries results.
glibc MM messages are:
- corrupted size vs. prev_size while consolidating
- double free or corruption (!prev)
I modified my prev. post after the round finished - we are #16 (all frameworks returned to the rating).
@ab - I found that here we add a `Connection: Keep-Alive` header for HTTP 1.1. This is not necessary - HTTP 1.1 is keep-alive by default.
So, I propose to replace
result^.AppendShort('Connection: Keep-Alive'#13#10#13#10);
by
result^.AppendCRLF;
Or, if you prefer, add an option for this.
I checked - this replacement works correctly and improves plaintext performance (maybe we will even get a beautiful 7M req/sec on the TFB hardware).
Current TFB results
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores Date - Notes
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
16 1,046,044 360,576 30,919 352,592 16,509 6,982,578 6,048 2023-03-30 - 224 threads (8*28 eft, ts), modified libpq, header `Server: M`
- tiny (<1%) improvement in plaintext and json (shortened Server header value)
- +1.5K (~5%) improvement in rawqueries (and rawupdates as a side effect), thanks to the modified libpq
I tried to use the `update table set .. from values (), ()` pattern for rawupdates in MR 8128. On my env it works better than the CASE and UNNEST patterns.
I also periodically make some tests by directly modifying libpq to improve db performance, so far without success.
@radexpol - it's not correct to compare commercial and open source projects. IMHO @ab provides the best support I have ever seen in the open source world. Thousands of questions have been answered in this forum (for free)
@claudneysessa - there is already a link to JavaScript auth example in this answer
Yes, I also saw the just-js code - it's good. But it is just a proof-of-concept, as the author notes, and the repository has not been maintained for a long time. In the last round just-js (like many others who implement the PG protocol by hand) failed because the TFB team changed the PG auth algo from MD5 to something else.
So my suggestion is to use libpq as much as possible, and to implement only a subset of methods, and only for the raw* tests.
About having a separate pool of DB connections: IMHO this will complicate everything, and I am not sure it gives better results. .NET, for example, has a separate DB thread pool, but their results are not better compared to our current implementation.
I am almost sure that removing the unneeded `poll` call in libpq gives us a very valuable boost.
P.S.
The PG auth problem is described here - https://github.com/TechEmpower/Framewor … ssues/8061
I attached a test zip file to https://github.com/synopse/mORMot/pull/444
The current round has ended.
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores Date - Notes
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
We are #11, mostly because many top-rated frameworks failed in this round. The good news is that we are VERY close to .NET now.
It looks like the next round will be without our latest changes, which caused a lot of discussion.
I found how to improve db-related performance, but such a change requires rewriting part of libpq in Pascal: currently, to get a result, libpq calls poll and then recv. The poll call can be avoided - it is used only to implement a timeout. On Linux we can use SO_RCVTIMEO for this instead. Such a change should improve the db round-trip by 10-30%.
So my idea is to use libpq for connection establishment, and then operate directly on the socket returned by PQsocket. I will do it little by little...
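The SO_RCVTIMEO part could look like this (a sketch assuming 64-bit Linux - the timeval layout and the SOL_SOCKET/SO_RCVTIMEO values below are the Linux ones; pgSocket is the fd returned by libpq's PQsocket()):

uses
  SysUtils, Sockets;

type
  TLinuxTimeVal = record
    tv_sec: PtrInt;  // time_t on 64-bit Linux
    tv_usec: PtrInt; // suseconds_t on 64-bit Linux
  end;

const
  LINUX_SOL_SOCKET = 1;
  LINUX_SO_RCVTIMEO = 20;

// after this call, a blocking recv() on pgSocket returns once the timeout
// elapses, so the extra poll() round-trip before each recv() is not needed
procedure SetRecvTimeout(pgSocket: longint; ms: integer);
var
  tv: TLinuxTimeVal;
begin
  tv.tv_sec := ms div 1000;
  tv.tv_usec := (ms mod 1000) * 1000;
  if fpsetsockopt(pgSocket, LINUX_SOL_SOCKET, LINUX_SO_RCVTIMEO,
       @tv, SizeOf(tv)) <> 0 then
    raise Exception.Create('setsockopt(SO_RCVTIMEO) failed');
end;

// usage: SetRecvTimeout(PQsocket(conn), 2000);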
Another patch for non-English file names in zip - see mORMot1 MR444. We discovered that the built-in zip on the latest Windows 10 Home puts such names into the Unicode Path Extra Field.
It would be good to port this to mORMot2 as well.
About libldap, it is a long story - first we used Synapse, but there were TLS problems there, then libcurl, but it also has known LDAP issues. So we switched to libldap. At least the ldapsearch utility (which is built on top of libldap) is well documented, and our customers can use it to verify their problems. libldap has worked for us for a long time.
Our latest TFB changes in PR 8057 generated some discussion (as expected)
About "with more threads per core, e.g. 16 instead of 8." - we can. But I almost sure this not help, because currently our server uses 100% CPU on rawdb. Let's wait for next round and after try with more threads.
About LDAP - I discover different MS implementations with different Windows Server versions. Also Azure AD (ADFS) has its own nuances
When you get tired of fighting it - here is how I use libldap (mormot1 compatible). I need only ldapbind (use it to verify user password), but I sure it's work in many scenarios. Here is URL's example and some troubleshooting.
And a small hack - set a server name to 'M' instead of default "mORMot (linux)". On 7 millions responses it's meter . Just saw that .net calls its server 'K' instead of 'Kestrel' for TFB tests
About use one connection for several threads - I don't like this idea (even if it may improve performance), because it's not "realistic" in terms of transaction. In TFB bench we don't need transactions, but in real life - yes. Several thread may want to commit/rollback their own transactions (in parallel) and this is impossible in single connection.
In fact, I still don't understand why our /rawdb result is twice lower compared to top "async" frameworks. This is abnormally. DB server CPU load is ~70% for our test in this case, so bottleneck is on our side. But there? In libpq select call`s? Or because all our threads are busy? No answer yet, but solving this problem is for sure a way to top10. I sure we can solve it without going to "async"
TFB's requirement to call Sync() after each pipeline step is highly debatable. Moreover - this discussion was started by the .net team, and they may have their reasons for doing so (after all, MS sponsors citrine hardware, so they can do it).
My opinion is that we do not require Sync() at all; the Postgres test authors have the same opinion with their test_nosync.
So I decided not to do a PR to libpq but to use a modified version (I placed it on GitHub). The new TFB PR 8057 is ready, based on the latest mORMot sources.
About the current state - the mORMot results for round 2023-03-16 are ready, but it seems all frameworks' results are lower in this round, so we can't be sure the binary bindings help. We moved one place up in composite score.
Good news - I found a way to improve PG pipelining performance (rawqueries, rawupdates)
The libpq PQpipelineSync function flushes the socket on each call (does a write syscall).
I traced the just-js implementation and observed that they send all the pipeline commands in one write syscall. After this I rebuilt libpq with the flush commented out - everything works correctly, and performance increased by +4K (10%) RPS for rawqueries and +1K for rawupdates. (I'm using a local Postgres; the gain should be bigger over the network.) This should add ~150 composite points.
Now I will either implement PQpipelineSync in Pascal (which needs access to internal libpq structures) or, if I can't, add the modified libpq into the dockerfile.
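To illustrate where the extra write() comes from, here is a minimal Pascal view of the flow (declarations trimmed to the documented libpq signatures; the library name in `external` may need adjusting to your linking setup):

type
  PPGconn = pointer;
  PPGresult = pointer;

function PQpipelineSync(conn: PPGconn): integer; cdecl; external 'pq';
function PQflush(conn: PPGconn): integer; cdecl; external 'pq';
function PQgetResult(conn: PPGconn): PPGresult; cdecl; external 'pq';

procedure RunPipeline(conn: PPGconn; count: integer);
var
  i: integer;
begin
  for i := 1 to count do
  begin
    // PQsendQueryPrepared(...) for statement i only fills libpq's output buffer
    PQpipelineSync(conn); // stock libpq: appends Sync AND flushes -> one write() per statement
  end;
  // with the modified libpq, PQpipelineSync only appends Sync, so all `count`
  // statements leave the process in this single write():
  PQflush(conn);
  // then the results are read back with PQgetResult() as usual
end;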