And we do not need the asoForceConnectionFlush option at all: even with the modified libpq, PQgetResult will internally call flush first. In rawqueries, `pConn.Flush;` can also be removed.
Please see this PR.
Last edited by mpv (2023-04-28 18:56:21)
Offline
So far, the best /asyncfortunes result I was able to achieve is with servers=CPUCount*2, threads per server=1, pinned:
# taskset -c 0-15 ./raw12 -s 32 -t 1
....
num servers=32, threads per server=1, total threads=32, total CPU=48, accessible CPU=16, pinned=TRUE
taskset -c 31-47 ./wrk -H 'Host: 10.0.0.1' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 10 -c 512 --timeout 8 -t 16 "http://localhost:8080/asyncfortunes"
Requests/sec: 482751.02
This is +25%, which is very, VERY good, but I am almost sure I need to play more with the parameters...
Offline
For DB queries, you need to use more cores for ./raw and fewer for wrk.
Perhaps https://github.com/synopse/mORMot2/commit/35dbef14
makes sense.
I guess there are some missing files in your PR.
Offline
Now I have merged your PR.
I did miss the line about Flush - now I got it.
My next step is to create a new per-connection ExecuteAsyncPrepared() method, in addition to the current per-properties pattern.
As a result, there should be no lock at all during the DB requests. The responses would be handled by a new async thread, one per connection thread, so one per core.
My guess is that we could try to have a more regular threading model, perhaps with a single server, and no pinning.
Offline
Just tested the current implementation - so far the best results are with `-s CPU*2 -t 1 -p`. /asyncdb and /asyncfortunes are faster (+25%) compared to rawdb/fortunes.
I can wait until you implement `ExecuteAsyncPrepared`, or update the TFB PR with the current implementation - what is your opinion?
Offline
I've updated TFB PR 8182 with the current sources state (refactored PostgreSQL async DB) - a new async test suite is added. They usually merge on Saturday night - so we may participate with async in Monday's run.
Offline
BTW - a modification to libpq, similar to ours, has been applied by Postgres reviewers and should be included in Postgres v17 (in ~1 year). The next h2o test should also use a modified libpq which does not flush on every sync.
Offline
Yes, better to have a round with the first algorithm of async writes.
I won't be able to deliver something stable until the next round.
-s CPU*2 -t 1 -p
somehow makes sense - but it is a pretty weird setting for sure.
Nice seeing the modified libpq.
We could identify the new endpoint and use it with a "pre-release" build - which no one could argue against. This is just the future official version.
Offline
TFB state
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
17 1,045,953 362,716 30,896 353,131 16,568 6,994,573 6,060 2023-04-13 - 224 threads (8*28 eft, ts), update using VALUES (),().., removed Connection: Keep-Alive resp header
13 1,109,267 363,671 31,652 352,706 16,897 6,956,038 6,156 2023-04-24 - 224 threads (-s 28 -t8 -p), each server (with all threads) are pinned to the different CPU
Thanks to the CPU pinning, we are now #13 (above .NET). Today's round started without a merge.
We hope that our MR with *async* test suite and improved Int64 JSON serialization will be merged in the next round.
It is very likely that we will be in the top 10 (and #1 in cached queries) after that.
Last edited by mpv (2023-05-01 11:30:11)
Offline
Your MR has been merged.
We will see in the next round what's up with the initial async process.
Here is a new threading model for the async process, with one async/pipelined connection per thread (in addition to the default non-pipelined connection per thread).
This thread and connection are only initialized if async methods are used - so there is no change for regular/non-pipelined connections.
Please try https://github.com/synopse/mORMot2/commit/44cc2507
and https://github.com/synopse/mORMot2/commit/6b4e1a98
From my tests, it gives better results with a single instance, no pinning or affinity, and around 2-8 threads per CPU core.
It makes sense to me that several instances and core pinning may not be mandatory for the best performance: we could let the kernel do its scheduling job...
Offline
Tested the new async implementation on 2x Xeon(R) Silver 4214R CPU @ 2.40GHz. Each component is limited by taskset: the first 16 CPUs for the app, the second 16 CPUs for the DB, and the third 16 CPUs for wrk - to emulate the three TFB servers.
The result is better than the initial async implementation (first table row). The best values are still for the -s 32 -t 1 -p mode.
See the table on Google Drive.
Offline
Thanks a lot for the numbers!
Which I don't fully understand, to be honest: with -s 32 my guess would be that the new raw14 implementation would be slower than the previous raw12.
But anyway, it sounds like a good step up with respect to the non-async version of the code.
We could try a round on TFB HW with -s 32 -t 1 -p then at least two rounds with -s 1 -t 64, one with hsoThreadSmooting and another without.
(with 32 or 64 numbers changed to match the TFB core counts, of course)
Offline
I also do not fully understand the numbers, but we have what we have. TFB results for the first async implementation should appear at 2023-05-11; after this I'll make a PR with the new implementation and one more test case for async, so we will verify both the `-s CPU*2 -t 1 -p` and `-s 1 -t CPU*4` cases.
Offline
I submitted this issue https://github.com/TechEmpower/Framewor … ssues/8205 to TFB.
Some of the frameworks or benchmarks are clearly cheating and are not robust HTTP servers at all.
Therefore, I proposed to validate whether the server is able to properly respond to HTTP/1.0 or Connection: Close input.
My guess is that a few frameworks should be marked as "Stripped" and disappear from the benchmark ranking - until the most basic HTTP behavior is implemented.
Some of the buggy/unrealistic benchmarks are part of the top #20 - e.g. several Rust implementations or even some asp.net core ones.
If you find it meaningful, you can comment on the issue too.
Or even propose a simple script to validate the fact (I am not fluent in bash/grep).
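For illustration, here is a rough Python sketch of such a check (the host, port and path are placeholders, not part of the TFB toolset): it sends a plain HTTP/1.0 request and an HTTP/1.1 request with `Connection: close`, and reports a failure if the server does not answer with a valid status line and then close the socket.

```python
#!/usr/bin/env python3
# Hypothetical probe - HOST/PORT/PATH must be adapted to the server under test.
import socket
import sys

HOST, PORT, PATH = "127.0.0.1", 8080, "/json"

def probe(request: bytes) -> bool:
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.sendall(request)
            data = b""
            while True:
                chunk = s.recv(65536)
                if not chunk:        # the server closed the socket, as it must
                    break
                data += chunk
    except OSError:                  # timeout: connection kept open, or no answer at all
        return False
    status_line = data.split(b"\r\n", 1)[0]
    return status_line.startswith(b"HTTP/1.") and b" 200 " in status_line

checks = {
    "HTTP/1.0":          b"GET " + PATH.encode() + b" HTTP/1.0\r\nHost: x\r\n\r\n",
    "Connection: close": b"GET " + PATH.encode() + b" HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n",
}
failed = [name for name, req in checks.items() if not probe(req)]
print("FAILED: " + ", ".join(failed) if failed else "all checks OK")
sys.exit(1 if failed else 0)
```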
I guess we will appear soon in https://www.techempower.com/benchmarks/ … d7dc0a0f74
And I hope we will be higher with the asynchronous database requests.
Offline
We shall wait and see :)
Offline
Numbers for this round did appear.
Not as good as I hoped.
They are pretty weird, not consistent with what we expected by using taskset on our hardware.
The async version is not faster than the blocking version, apart from the updates.
This is exactly the contrary of what we observed with taskset: no benefit for updates, but faster db/queries/fortunes.
My best guess is that
1) this first async mode was using a single async connection per server, which is not scaling so well.
2) pinning was not beneficial at all with the async way of execution - or we should also pin the async thread.
3) the raw config was not properly set up for async (to be verified in the logs when they become available)
But thanks to the better updates numbers, we are now higher in the composite ranking. We should be in the top #10 now.
@mpv
Perhaps we could now make a MR with the current state of the framework and raw source, i.e. with the new async code.
With whatever config `-s CPU*2 -t 1 -p` or `-s 1 -t CPU*4` you want.
I would not be surprised if `-s 1 -t CPU*4` were better than what we measured before.
Offline
I understand why updates are better for async - this is because of less concurrency - in fact, on my environment I also got ~23K for updates.
I will make a new PR today with the latest sources and a new async test case with `-s 1 -t CPU*4`.
And YES - we are in TOP10 now!!!! Congratulations!!!!
Offline
A new MR 8207 based on the latest sources and a new test case `-s 1 -t CPU*2 -nopin` is ready.
I use the `unnest` pattern for /asyncUpdates (as in the previous MR), because your implementation fails on the ?queries=501 test (too many parameters). In /rawupdates we use `unnest` if count>20, else `select from values` - but unnest alone works well, IMHO.
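For illustration only, a minimal psycopg2 sketch of the `unnest` batch-update pattern (the real benchmark code is Pascal; the DSN and the standard TFB World table/column names are assumptions here):

```python
import random
import psycopg2  # illustrative driver; the benchmark itself uses mORMot's Pascal binding

# placeholder DSN - adjust to the actual benchmark database settings
conn = psycopg2.connect("host=tfb-database dbname=hello_world "
                        "user=benchmarkdbuser password=benchmarkdbpass")

ids = random.sample(range(1, 10001), 20)
randoms = [random.randint(1, 10000) for _ in ids]

with conn, conn.cursor() as cur:
    # one statement for any batch size: unnest() zips the two arrays row by row,
    # so the parameter count stays at 2 even for ?queries=500
    cur.execute(
        "UPDATE World AS w SET randomNumber = v.r "
        "FROM unnest(%s::int[], %s::int[]) AS v(id, r) "
        "WHERE w.id = v.id",
        (ids, randoms))
```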
Offline
Nice!
It makes sense to use "unnest" - especially because this is what the previous async code did - with pretty good numbers.
And the new test case would give us results within the next round, without the need to wait for another one.
Perhaps https://github.com/synopse/mORMot2/commit/37fe0d8d may help for updates too.
It would reduce the number of memory allocations during the array binding.
Worth trying on the next MR.
Hope they will merge the request before the next round.
Edit: perhaps https://github.com/synopse/mORMot2/commit/f58c289e would help a little more too.
Edit2: I have added binary array binding for 32-bit and 64-bit parameters.
https://github.com/synopse/mORMot2/commit/fa3cd430
But from my tests, it is not really faster - as you already stated.
Offline
The env. variable is now correctly passed into the app container - see the modified dockerfile.
I suggest running it once without binary array binding, to have a basis for comparison.
Offline
Nice finding!
Sad that your MR was not taken into account before this new round.
Hope they will include it in the next!
Do you know enough Python to write a check that HTTP/1.0 is properly interpreted, for my issue?
Offline
TFB state
Weights 1.000 1.737 21.745 4.077 68.363 0.163
# JSON 1-query 20-q Fortunes Updates Plaintext Scores
38 731,119 308,233 19,074 288,432 3,431 2,423,283 3,486 2022-10-26 - 64 thread limitation
43 320,078 354,421 19,460 322,786 2,757 2,333,124 3,243 2022-11-13 - 112 thread (28CPU*4)
44 317,009 359,874 19,303 324,360 1,443 2,180,582 3,138 2022-11-25 - 140 thread (28CPU*5) SQL pipelining
51 563,506 235,378 19,145 246,719 1,440 2,219,248 2,854 2022-12-01 - 112 thread (28CPU*4) CPU affinity
51 394,333 285,352 18,688 205,305 1,345 2,216,469 2,586 2022-12-22 - 112 threads CPU affinity + pthread_mutex
34 859,539 376,786 18,542 349,999 1,434 2,611,307 3,867 2023-01-10 - 168 threads (28 thread * 6 instances) no affinity
28 948,354 373,531 18,496 366,488 11,256 2,759,065 4,712 2023-01-27 - 168 threads (28 thread * 6 instances) no hsoThreadSmooting, improved ORM batch updates
16 957,252 392,683 49,339 393,643 22,446 2,709,301 6,293 2023-02-14 - 168 threads, cmem, improved PG pipelining
15 963,953 394,036 33,366 393,209 18,353 6,973,762 6,368 2023-02-21 - 168 threads, improved HTTP pipelining, PG pipelining uses Sync() as required, -O4 optimization
17 915,202 376,813 30,659 350,490 17,051 6,824,917 5,943 2023-03-03 - 168 threads, minor improvements, Ubuntu 22.02
17 1,011,928 370,424 30,674 357,605 13,994 6,958,656 5,871 2023-03-10 - 224 threads (8 thread * 28 instances) eventfd, ThreadSmooting, update use when..then
11 1,039,306 362,739 29,363 354,564 15,748 6,959,479 5,964 2023-03-16 - 224 threads (8*28 eft, ts), update with unnest, binary binding
17 1,045,953 362,716 30,896 353,131 16,568 6,994,573 6,060 2023-04-13 - 224 threads (8*28 eft, ts), update using VALUES (),().., removed Connection: Keep-Alive resp header
13 1,109,267 363,671 31,652 352,706 16,897 6,956,038 6,156 2023-04-24 - 224 threads (-s 28 -t8 -p), each server (with all threads) are pinned to the different CPU
7 1,109,693 381,633 32,725 353,182 23,022 6,975,086 6,634 2023-05-13 - 224 threads, added async test in -s 28 -t8 -p mode: db, queries & updates is for async, fortunes for direct
We are #7 even with a non-optimal thread/server count for the async tests. And #2 in cached-queries.
Tomorrow new results are expected - the async tests will be executed in `-s 56 -t 1 -p` and `-s 1 -t 56 --nopin` modes. I am waiting for the results with bated breath...
Last edited by mpv (2023-05-23 10:54:32)
Offline
The state of async summarized from the last 2 runs:
                      db       queries  fortunes  updates
sync  -s28 -t8 -p     362,787  31,436   353,182   16,750
async -s28 -t8 -p     381,633  32,725   290,512   23,022
async -s56 -t1 -p     398,619  32,892   312,836   20,154
async -s1 -t56 -nop   371,371  31,766   330,803   19,995
I made a new PR 8232 with [async] changed to -s28 -t4 -p, the same [async,nopin], and binary array binding. After receiving the results, we will be able to choose the best parameters for async.
Offline
Any news?
Offline
It is a very "hot" summer in my country - something burns every day (the occupiers, thank God, have even more burning).
I decided to set the [async] test back to `-s 28 -t 8 -p` mode - this gives us the best results for `/update` (~23k), even if we do not fully understand the reason.
See PR 2803
Perhaps I will return to optimization closer to September
Offline
The official TFB Round 22 is expected in August 2023. I expect we will be #7 in the composite score rating.
BTW, in the last run we were #1 in Cached queries - see https://www.techempower.com/benchmarks/ … ched-query
Offline
To be #7 would be just amazing, because some top frameworks are more like proofs-of-concept than versatile software solutions.
The cached queries efficiency is a good showcase of what mORMot can do.
Thanks a lot mpv for the feedback and hard work.
BTW we pray and wish the best for you and your family.
Offline
Just to ask, why isn't SQLite used for this test?
Offline
Because the TFB team does not support it as a database.
Note that PostgreSQL is the main DB used by top frameworks tests, because it is the best scaling option.
Also because in order to maximize network performance, we create several instances of our HTTP server class in the process, so we share the DB among all those instances.
This is not the best pattern for SQLite3.
Offline
I understand, thank you. I still prefer sqlite for updates. It is very convenient and meets most needs, right?
Offline
https://tfb-status.techempower.com/ is up and running, but the current results are very strange - almost all frameworks (including mormot) lost up to 20% of RPS in /json and /plaintext, except two (with no changes in sources). It looks like "All animals are equal, but some are more equal than others" (hope I'm wrong)
Offline
After a while, they seem to have relocated the servers, and started running the tests again.
Nice @mpv to have you back online.
The run is not yet finished, but you are right: we seem to have a slight regression about /plaintext queries:
https://www.techempower.com/benchmarks/ … =plaintext
We used to be around 7M requests/second, like the best frameworks, and now we are at 5.5M/second.
I don't think it is a regression in our async server, because we still use the source code commit from last May.
There may be some weird artifact during the tests on their server... we will wait for the next rounds to see more clearly about the situation.
Perhaps they tuned the machine to favor asp.net... which is their sponsor IIRC.
Offline
@ab, something is wrong with the async server or the DB pool!
Running the /rawdb wrk test (/db and /asyncdb also) with ./raw -s 8 -t 8, 64 total threads, I expect 64 connections to PostgreSQL, but only 56 are found.
DB pool connections are servers*(threads-1), not servers*threads!
Running ./raw -s 8 -t 4, there are only 24 connections, not 32.
You can check active statements/connections with:
SELECT * from pg_stat_activity WHERE query='select id,randomNumber from World where id=$1'
When raw starts, we have 8 (servers) connections from the main thread to load the cache. Running wrk creates servers*(threads-1) connections.
Main thread connections + thread connections = servers*threads, but worker threads use only s*(t-1).
Source: /ex/techempower-bench, pulled 2 days ago, 23.08.2023.
raw -s 8 -t 8: htop shows 81 threads.
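For reference, a tiny Python check (with a placeholder DSN) that counts the worker connections using the pg_stat_activity query quoted above:

```python
import psycopg2  # illustrative; any client able to run the query above will do

# placeholder DSN - point it at the benchmark database
with psycopg2.connect("host=localhost dbname=hello_world user=benchmarkdbuser") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT count(*) FROM pg_stat_activity "
            "WHERE query = 'select id,randomNumber from World where id=$1'")
        print("worker connections:", cur.fetchone()[0])
        # observed here: servers * (threads - 1), e.g. 8 * (8 - 1) = 56, not 64
```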
Last edited by ttomas (2023-08-25 08:59:24)
Offline
@ttomas
I would say it is as expected.
One of the reading threads (R0) is reserved for the epoll queries, and does not make any DB connection.
Look at the thread names in htop (enable thread names in the htop setup via F2): you will see e.g. 8 sets of A, R0..R7, W threads for the "-s 8 -t 8" parameters.
A is for the "accept" blocking process. W is for the asynchronous writes of huge content (never used during TFB). R0 is the epoll thread. R1..R7 are the processing threads.
Offline
Not expected. What about "-s 8 -t 1" vs "-s 8 -t 2"? Both use the same number of DB pool connections (8): -t1 gives 10K req/sec with 25 threads, -t2 gives 7K req/sec with 33 threads. For -t 1, it looks like R0 makes the DB connections?
Offline
For -t 1 the single R0 thread does everything.
Note that for localhost tests, -s 1 should be used.
There is no benefit to multiple instances outside of multi-PC configurations, i.e. with the clients outside the server PC.
The setup of several instances and threads is very specific to the TFB hardware, which does not match a realistic use case.
Offline
Update:
We finished #10 in this round.
https://www.techempower.com/benchmarks/ … =composite
But the results seem weird, because there was some regression with respect to the previous rounds, not only for us (/plaintext from 7M/s to 5M/s) but for a lot of frameworks.
In short: we regressed less than the other frameworks, so we went up from rank #11 to #10.
We will see what the next round offers us.
I just hope that the TFB system will be stable enough to resume as in its previous setup.
Offline
Excellent :)
Offline
The TFB people clearly did something wrong with their new Citrine HW or SW setup.
No framework reaches 7,000,000 /plaintext requests per second any more.
https://www.techempower.com/benchmarks/ … =plaintext
In the previous setup, a lot of frameworks saturated the network interface, and were capped at 7,000,000 requests per second.
Now, asp.net core is at the top and no one is above 5,700,000 RPS.
I have created an issue - but with no real hope we could get any explanation...
https://github.com/TechEmpower/Framewor … ssues/8397
Offline
The performance issue was due to the aggressive soundproofing they did on the servers: they lacked proper ventilation, so the CPUs throttled...
I therefore closed the issue, which was well handled by the TFB people! I should have had more faith.
With the last two rounds, top frameworks (including mORMot) are back at 7,000,000 requests per second for /plaintext and the composite scores are as they were.
https://tfb-status.techempower.com/
Now mORMot is back at 12-13th rank in composite score: https://www.techempower.com/benchmarks/ … =composite
Perhaps @mpv we could make some new trials with the latest source code of the framework, because some fixes and optimizations have lately been included.
But since the next official round is around the corner (probably beginning of October), we may rather stay idle, and keep the source as it is. They won't accept any PR anyway until round 22.
https://github.com/TechEmpower/Framewor … 1728477796 states "we'll look to start the round run on 10/3."
Including our little mORMot this time!
We will see what the ranking becomes with some newly computed weights for this round - https://github.com/TechEmpower/Framewor … -algorithm
Offline
Let's keep the sources as they are for a while - in any case, PRs will not be merged until Round 22.
I expected that mormot [async] would demonstrate the same updates results as in the run from 2023-05-20 and that we would be #7, because all conditions are the same on our side - but we have what we have.
At least there will be something to work on in Round 23
Offline
Official Round 22 is completed and will be published soon. We are #12 in composite scores. Unfortunately, cached-queries did not complete (the server crashes with a double free), so some memory problem still exists. We are lucky that cached-queries is not counted in the composite score...
@ab - please see https://github.com/TechEmpower/Framewor … ssues/8501 - maybe you will have the inspiration to write a blog article?
Last edited by mpv (2023-10-18 14:32:22)
Offline
I have made some modifications to the HTTP async server.
Perhaps it could scale a little better. At least, some obscure regressions about HTTP/1.0 have been fixed - Apache Bench was not working any more!
I have tried to find out what happens with the /cached-queries memory issues.
Sadly I was not able to reproduce it at all... so any input is welcome!
We have now identified that the remaining bottleneck is for single row "select", i.e. /db or /fortunes, which are around 60% of the top frameworks, whereas other endpoints are 90% or more.
There may be a way to enhance the performance of such queries.
Anyway, I have also disabled HTTP pipelining by default, for security reasons.
There is a new hsoEnablePipelining to be set for our TFB server code:
https://github.com/synopse/mORMot2/comm … fad3cf123c
@mpv Could you make a pull request because this is a breaking change and would affect our numbers?
Offline
PR 8612 is ready.
The /cached-queries memory issue occurs too rarely (about once per 10 TFB runs) - I also can't reproduce it in my environment.
As for the performance of single select, I'm afraid that the only way to improve it is to switch the Postgres connection to non-blocking mode (PQsetnonblocking).
But IMHO this requires a huge rework of the framework to become event driven:
- we need a .NET-like pool of database connections (with minimum and maximum size) that is not based on the thread ID (ThreadSafeConnection), but manages a list of all connections to one database and can provide the caller with an unoccupied connection (or block if all connections are busy) - a minimal sketch of such a pool follows after this list
- Ideally, the database connection sockets waiting for results should be in the same epoll as the HTTP sockets, so we need a callback-based event loop backed by epoll (like in libuv). This is a complete redesign of the current HTTP server architecture
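For illustration only, a minimal Python sketch of such a pool - the PgPool name, the DSN and the sizing policy are made up and this is not mORMot API: connections live in a shared queue, any thread can acquire a free one, the pool grows up to a maximum, and callers block when everything is busy.

```python
import queue
import threading
import psycopg2  # illustrative driver only

class PgPool:
    """Toy connection pool, not bound to thread IDs (hypothetical, not mORMot code)."""
    def __init__(self, dsn, min_size=2, max_size=8):
        self._dsn = dsn
        self._max = max_size
        self._created = 0
        self._lock = threading.Lock()
        self._idle = queue.LifoQueue()          # free connections, most recent first
        for _ in range(min_size):
            self._idle.put(self._connect())

    def _connect(self):
        self._created += 1
        return psycopg2.connect(self._dsn)

    def acquire(self, timeout=None):
        try:
            return self._idle.get_nowait()      # fast path: an idle connection exists
        except queue.Empty:
            with self._lock:
                if self._created < self._max:   # grow the pool up to max_size
                    return self._connect()
        return self._idle.get(timeout=timeout)  # all busy: block until one is released

    def release(self, conn):
        self._idle.put(conn)

# usage (placeholder DSN)
pool = PgPool("dbname=hello_world user=benchmarkdbuser", min_size=4, max_size=64)
conn = pool.acquire()
try:
    with conn.cursor() as cur:
        cur.execute("select id,randomNumber from World where id=%s", (42,))
        print(cur.fetchone())
finally:
    pool.release(conn)
```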
Another IMHO:
An event-driven, callback-based architecture is the only choice we have with FPC. But it's a road to callback hell. I know very well what this means because I worked with callbacks in early JS for 5 years, up until Promises. Any complex logic based on callbacks is hell.
So my suggestion is to stay where we are. After all, we are now in the top 3 for ORM, and our code is production-ready, unlike many others in the top.
Offline
Yes, you are right, the problem comes from the fact that we use a per-thread Postgres connection, even for the async callbacks.
Then each async connection, as owned by TSqlDBPostgresAsync, has its own TSqlDBPostgresAsyncThread, waiting for the process to happen.
What we could do, for the async methods only:
1) switch to a connection pool, and not a per-thread connection
2) switch from one background thread per connection to a single background thread, using epoll to monitor all the pending async connection statements (a sketch of this idea follows below).
Then there would be no need to hack anything else in the web server or wherever...
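A rough sketch of option 2), here in Python with psycopg2's asynchronous mode and the selectors module (epoll-backed on Linux), only to illustrate one thread watching many pending statements - the real implementation would sit directly on libpq's non-blocking calls; the DSN and query are placeholders.

```python
import select
import selectors
import psycopg2
import psycopg2.extensions as ext

DSN = "dbname=hello_world user=benchmarkdbuser"   # placeholder

def wait_ready(conn):
    # complete the asynchronous connection handshake
    while True:
        state = conn.poll()
        if state == ext.POLL_OK:
            return
        if state == ext.POLL_READ:
            select.select([conn.fileno()], [], [])
        elif state == ext.POLL_WRITE:
            select.select([], [conn.fileno()], [])

# open a few non-blocking connections and fire one statement on each
sel = selectors.DefaultSelector()                 # epoll on Linux
pending = 0
for _ in range(4):
    conn = psycopg2.connect(DSN, async_=1)
    wait_ready(conn)
    cur = conn.cursor()
    cur.execute("select id,randomNumber from World where id=42")
    sel.register(conn.fileno(), selectors.EVENT_READ, (conn, cur))
    pending += 1

# a single loop in a single thread: wake up only when some DB socket is ready
while pending:
    for key, _ in sel.select():
        conn, cur = key.data
        state = conn.poll()
        if state == ext.POLL_OK:                  # this statement is finished
            print(cur.fetchone())
            sel.unregister(key.fd)
            conn.close()
            pending -= 1
        elif state == ext.POLL_WRITE:             # rare: query not fully sent yet
            sel.modify(key.fd, selectors.EVENT_WRITE, key.data)
        elif state == ext.POLL_READ:
            sel.modify(key.fd, selectors.EVENT_READ, key.data)
```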
Offline