This is really exciting stuff - well done guys!
@mpv
>For a long time we couldn't participate because we couldn't compile mORMot in a cloud environment. Now that FPC 3.2 is released it is possible.
I'm curious to understand what the problem was - just for my own understanding :-)
Thanks - and good luck.
Offline
@rdevine
>For a long time we couldn't participate because we couldn't compile mORMot in a cloud environment.
For a long time (while FPC 3.2 was not released) mORMot used an FPC compiled from master (we need RTTI patches available only in 3.2), but building the compiler takes too long. And having a benchmark use a compiler that is still in development is not good IMHO.
@ab - unfortunately the TFB workflow fails - see the logs:
VERIFYING QUERY COUNT FOR http://tfb-server:8080/db
--------------------------------------------------------------------------------
ERROR:root:Terminating process 21 by timeout of 20 secs.
FAIL for http://tfb-server:8080/db
Only 481 executed queries in the database out of roughly 512 expected.
The strange thing is that /rawdb is OK, so this is not because of a slow environment.
Offline
I think it is either because threads are spawned slowly, or because the test environment uses Ubuntu 18.04 (our Docker image is built on 20.04, and maybe there are some problems between the 18.04 kernel and 20.04 libraries).
I will set up a VM with 18.04 and verify on it.
Last edited by mpv (2022-07-21 15:14:45)
Offline
Do you have the same warnings when compiling mORMot?
I don't see them on my side - they are all false positives, and should be hidden by {%H-} in the source.
Weird.
What is even weirder is that http://tfb-server:8080/queries?queries=20 does pass - but not /db?
It does not make much sense... they use the same ORM method (retrieve by ID).
I doubt the issue comes from the test machine speed.
Spawning a thread is very fast on Linux - even on 18.04, IIRC.
Or perhaps other frameworks would have trouble too...
My guess is that there was a network problem between their containers.
Offline
Perhaps you could add a comment to https://github.com/TechEmpower/Framewor … /pull/7481
saying that it passes in your environment, with some console output to show it,
and that there may be some performance/timeout issue during the validation tests.
Perhaps the test machine has a small number of cores?
I have no problem here with 2 cores / 4 threads.
Offline
Something weird: when I use /rawqueries, the memory usage of the raw process grows to several GB!
It is not observable with /rawdb, /rawfortunes or /rawupdates, nor with other endpoints like /queries, /db or /fortunes.
And when I quit the program, no memory leak is reported...
I tracked the number of PostgreSQL connections, and I only have 32 of them, one per thread, during the whole process.
Edit: if I fix the following function as such, the memory waste disappears:
procedure TRawAsyncServer.getRandomWorlds(cnt: PtrInt; out res: TWorlds);
var
  conn: TSqlDBConnection;
  stmt: ISQLDBStatement;
  i: PtrInt;
begin
  SetLength(res{%H-}, cnt);
  conn := fDbPool.ThreadSafeConnection;
  for i := 0 to cnt - 1 do
  begin
    stmt := conn.NewStatementPrepared(WORLD_READ_SQL, true, true);
    stmt.Bind(1, RandomWorld);
    stmt.ExecutePrepared;
    if not stmt.Step then
      exit;
    res[i].id := stmt.ColumnInt(0);
    res[i].randomNumber := stmt.ColumnInt(1);
    stmt.ReleaseRows; // <- releasing the fetched rows here is what stops the memory growth
  end;
end;
Offline
I started setting up an environment close to the CI one (4 GB, Ubuntu 18 x64) and will test on it.
Adding `stmt.ReleaseRows` is enough - there is no need to re-prepare the statement in the loop.
Maybe we should add a `stmt.ReleaseRows` call directly into TSqlDBStatement.ExecutePrepared to prevent such mistakes?
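For illustration, the variant I mean as a sketch - same calls as the function above, just with the statement prepared once before the loop and ReleaseRows kept inside it (a sketch of the suggestion, with a hypothetical method name, not the committed code):
procedure TRawAsyncServer.getRandomWorldsPreparedOnce(cnt: PtrInt; out res: TWorlds); // hypothetical name
var
  conn: TSqlDBConnection;
  stmt: ISQLDBStatement;
  i: PtrInt;
begin
  SetLength(res{%H-}, cnt);
  conn := fDbPool.ThreadSafeConnection;
  // prepare the statement once, outside of the loop
  stmt := conn.NewStatementPrepared(WORLD_READ_SQL, true, true);
  for i := 0 to cnt - 1 do
  begin
    stmt.Bind(1, RandomWorld);
    stmt.ExecutePrepared;
    if not stmt.Step then
      exit;
    res[i].id := stmt.ColumnInt(0);
    res[i].randomNumber := stmt.ColumnInt(1);
    stmt.ReleaseRows; // release the fetched row before the next execution
  end;
end;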
Offline
I have discovered a memory leak in the HTTP server when connections are released after keep-alive on the server side.
Most of the time it is OK, but sometimes some connection instances are not released at shutdown.
I am able to reproduce it with 16384 clients from wrk, and with a small keep-alive (1000 ms at server initialization).
Up to now I didn't find the cause, but I will investigate further tomorrow.
My guess is that it is a tedious multi-thread bug.
Offline
I do not think such a leak is the reason the tests fail..
On my side I set up a VM with 4 GB RAM and 1 CPU on Ubuntu 18.04, and even on such a VM all tests pass, so I hope the reason for the failures is somewhere on the CI side.
I fixed the memory leak in /raw* and wrote to them - let's wait for the next CI run and see..
Last edited by mpv (2022-07-21 21:47:24)
Offline
I fixed the leak by adding a new explicit hsoFavorHttp10 option, to be used for HTTP/1.0 benchmarking.
It also greatly improved the performance of /plaintext on my PC.
https://github.com/synopse/mORMot2/commit/d9c7d1ff
@mpv
Please see some changes in
https://gist.github.com/synopse/f4b2c2e … 8aadff6c88
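If you want to try the option before looking at the gist, enabling it should just be a matter of adding it to the options set passed at server creation. A rough sketch from memory - the parameter order of THttpAsyncServer.Create and the uses clause are assumptions here, check raw.pas/the gist for the exact call:
uses
  mormot.net.server, mormot.net.async; // assumed units for THttpServerOptions / THttpAsyncServer

procedure StartServerSketch; // hypothetical helper, just for illustration
var
  server: THttpAsyncServer;
begin
  server := THttpAsyncServer.Create(
    '8080', nil, nil, 'raw',  // port, OnStart, OnStop, process name
    16,                       // server thread pool count (arbitrary value here)
    30000,                    // keep-alive timeout, in ms
    [hsoFavorHttp10]);        // <- the new option, for HTTP/1.0 benchmarking only
  try
    // ... register the routes and wait for termination as raw.pas does ...
  finally
    server.Free;
  end;
end;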
Offline
@ab - you forgot to commit the BATCH_DIRECT_ID definition (mormot.orm.server.pas(2225,23) Error: Identifier not found "BATCH_DIRECT_ID"). Please fix this on the master branch.
I can confirm that with the latest commit /plaintext improves almost twofold.
wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 15 -c 512 --timeout 8 -t 12 "http://localhost:8080/plaintext"
-------before hsoFavorHttp10 introduced
Requests/sec: 305198.18
Transfer/sec: 47.73MB
-------after hsoFavorHttp10 introduced
Requests/sec: 502743.52
Transfer/sec: 78.63MB
I also verified the other endpoints manually - everything works. So I am waiting for master to be fixed, and please make a release to allow me to verify using the tfb utility.
About the changes in the test sources:
- I prefer to use a Postgres URL to define the database connection - this way we can tweak connection params in the future. For example, in production I usually use `postgres://.....?tcp_user_timeout=3000&application_name=myapp`
- what is the reason for setting a 5 min keep-alive? 30 sec is enough IMHO. The benchmark runs its tests with -d 15 (15 seconds)
- I agree with you on setting HttpQueueLength=20000
Offline
And the bad news is that the second TFB pipeline fails again, now on `/queries` - https://github.com/TechEmpower/Framewor … focus=true
And I completely fail to understand why TFB kills some process (our process?) on a 20 sec timeout.
Last edited by mpv (2022-07-23 10:53:07)
Offline
I reproduced the TFB VERIFY failure on my PC by creating a bash script that emulates the load using the SIEGE bench tool, in the same manner as TFB's python --verify script.
Here is a gist with the load-generator bash script.
Test cases randomly become slow, usually on executing `updates?queries=2`, but once I saw it on `/json` (on latest master).
I think some issue with connection closing still exists at the HTTP server level. I noted it here in the context of HTTP/1.0; the SIEGE program starts/stops several times, so sockets are being closed massively.
@ab - please try to reproduce using my script. For now I can't find the reason...
Last edited by mpv (2022-07-24 06:39:02)
Offline
Yes, the problem is in the HTTP server - it's enough to run (in one sh file):
siege -c 512 -r 2 "http://localhost:8080/json" -R ~/dev/FrameworkBenchmarks/toolset/databases/.siegerc
siege -c 512 -r 2 "http://localhost:8080/json" -R ~/dev/FrameworkBenchmarks/toolset/databases/.siegerc
siege -c 512 -r 2 "http://localhost:8080/json" -R ~/dev/FrameworkBenchmarks/toolset/databases/.siegerc
siege -c 512 -r 2 "http://localhost:8080/json" -R ~/dev/FrameworkBenchmarks/toolset/databases/.siegerc
and on my PC the first three calls take ~0.2 s each, but the fourth one ~4 s.
Last edited by mpv (2022-07-24 07:03:07)
Offline
My finding: sockets remain in the CLOSE_WAIT state (lsof | grep CLOSE_WAIT) after the benchmark tool finishes. BTW, it doesn't matter which program creates the load, siege or wrk - wrk simply opens fewer connections.
I have seen the same problem with mORMot1 on some production systems and solved it by decreasing `net.ipv4.tcp_fin_timeout` at the kernel level.
But now I understand - this is our bug. I verified with nodeJS - no socket remains in CLOSE_WAIT after siege/wrk stops.
Offline
More info:
Since debugging across threads is hard, I just traced how the application handles 1 connection with 2 keep-alive requests in it:
strace -f -e trace=network,desc -s 10000 ./raw
-- and in another terminal
siege -c 1 -r 2 "http://localhost:8080/json" -R ~/dev/FrameworkBenchmarks/toolset/databases/.siegerc
We do not close the socket after recv() returns 0 (@ab - I remember we discussed this some years ago):
[pid 494030] <... accept4 resumed>0x7f7aa59a5c04, [110], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
[pid 494030] accept4(9, {sa_family=AF_INET, sin_port=htons(15120), sin_addr=inet_addr("127.0.0.1")}, [110->16], SOCK_NONBLOCK) = 11
[pid 494030] getsockopt(11, SOL_SOCKET, SO_SNDBUF, [2626560], [4]) = 0
[pid 494030] accept4(9, <unfinished ...>
[pid 494032] recvfrom(11, "GET /json HTTP/1.1\r\nHost: localhost:8080\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/5.0 (pc-x86_64-linux-gnu) Siege/4.0.4\r\nConnection: keep-alive\r\n\r\n", 32768, 0, NULL, NULL) = 172
[pid 494032] sendto(11, "HTTP/1.1 200 OK\r\nServer: mORMot2 (Linux)\r\nDate: Sun, 24 Jul 2022 18:54:07 GMT\r\nContent-Length: 27\r\nContent-Type: application/json\r\nConnection: Keep-Alive\r\n\r\n{\"message\":\"Hello, World!\"}", 184, MSG_NOSIGNAL, NULL, 0) = 184
[pid 494032] epoll_ctl(7, EPOLL_CTL_ADD, 11, {EPOLLIN|EPOLLPRI, {u32=2800033872, u64=140164762771536}}) = 0
[pid 494031] epoll_wait(7, [{EPOLLIN, {u32=2800033872, u64=140164762771536}}], 100, 1100) = 1
[pid 494032] recvfrom(11, "GET /json HTTP/1.1\r\nHost: localhost:8080\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/5.0 (pc-x86_64-linux-gnu) Siege/4.0.4\r\nConnection: keep-alive\r\n\r\n", 32768, 0, NULL, NULL) = 172
[pid 494032] sendto(11, "HTTP/1.1 200 OK\r\nServer: mORMot2 (Linux)\r\nDate: Sun, 24 Jul 2022 18:54:07 GMT\r\nContent-Length: 27\r\nContent-Type: application/json\r\nConnection: Keep-Alive\r\n\r\n{\"message\":\"Hello, World!\"}", 184, MSG_NOSIGNAL, NULL, 0) = 184
[pid 494031] epoll_wait(7, [{EPOLLIN, {u32=2800033872, u64=140164762771536}}], 100, 1100) = 1
[pid 494032] recvfrom(11, "", 32768, 0, NULL, NULL) = 0
[pid 494032] epoll_ctl(7, EPOLL_CTL_DEL, 11, 0x7f7aa589b6f0) = 0
[pid 494031] epoll_wait(7, [], 100, 1100) = 0
For example, nodeJS shuts the socket down after reading 0:
accept4(22, NULL, NULL, SOCK_CLOEXEC|SOCK_NONBLOCK) = 24
epoll_ctl(16, EPOLL_CTL_ADD, 24, {EPOLLIN, {u32=24, u64=24}}) = 0
read - writev - epoll_wait - read - writev - epoll_wait
read(24, "", 65536) = 0
shutdown(24, SHUT_WR) = 0
epoll_ctl(16, EPOLL_CTL_DEL, 24, 0x7fff61994964) = 0
close(24)
...
Last edited by mpv (2022-07-24 19:02:23)
Offline
Just a note, because I couldn't capture how mORMot handles this - the socket should also be closed after receiving ECONNRESET.
In nodeJS:
[pid 494680] read(25, 0x50f65c0, 65536) = -1 ECONNRESET (Connection reset by peer)
[pid 494680] epoll_ctl(16, EPOLL_CTL_DEL, 25, 0x7fff92fc38d4) = 0
[pid 494680] close(25)
Last edited by mpv (2022-07-24 19:10:32)
Offline
Yes, some years ago we said that we could avoid shutting down the socket in the context of a Linux server, because nginx did that. But we close it.
Perhaps it was plain wrong.
I also remember that I had to tune/change the linger option for some reason.
Offline
>Yes, some years ago we said that we could avoid shutting down the socket in the context of a Linux server, because nginx did that
Maybe this was about avoiding the shutdown of the listening socket in the case of systemd socket activation?
About closing the socket, I found this mORMot1/Windows topic where we decided to close it.
Node uses an interesting technique: when read/recv returns 0 (the other side closed the socket), it calls shutdown for writing
shutdown(24, SHUT_WR)
then removes the descriptor from epoll, and then calls close.
While on ECONNRESET it simply removes the descriptor and closes the socket.
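As a minimal FPC sketch of that close sequence (plain Sockets/Linux RTL calls, not the actual mORMot async server code - just to illustrate the pattern), assuming recv() has just returned 0 for the descriptor:
uses
  Sockets, Linux;

// graceful close once recv() has returned 0: send our own FIN first,
// then forget the descriptor in epoll, then release it -
// so nothing is left behind in CLOSE_WAIT
procedure GracefulClose(epollFd, sock: Integer);
begin
  fpShutdown(sock, 1);                          // 1 = SHUT_WR: stop the write side only
  epoll_ctl(epollFd, EPOLL_CTL_DEL, sock, nil); // remove it from the epoll set
  CloseSocket(sock);                            // finally close the fd
end;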
Offline
Reading happens in TPollAsyncSockets.ProcessRead.
If socket recv() returns 0, then connection.recv() returns nrClosed, then we call UnlockAndCloseConnection(), then CloseConnection(), then Stop()...
Then it calls either fRead.Unsubscribe(), if the connection has been registered in the epoll() queue (which happens after the 1st request),
or directly sock.ShutdownAndClose({rdwr=}false) - e.g. for a short-living HTTP/1.0 connection.
fRead.Unsubscribe() should then call socket.ShutdownAndClose({rdwr=}false), because fOwner.fUnsubscribeShouldShutdownSocket has been set.
But fOwner was NOT set! So it didn't close the socket.
Should be fixed by https://github.com/synopse/mORMot2/commit/cb4be409
Another few hours of debugging for a one-character typo.
Offline
Sockets are OK now, but there are problems with `updates` when I run siege_emulate.sh (if updates is called using curl, all is OK):
25.07.22 09:12:45.416 Enter 5 mormot.orm.server.TRestOrmServerBatchSend(7f3ce4b75600).EngineBatchSend TOrmWorld inlen=73
25.07.22 09:12:45.448 Exception OS 5 EAccessViolation (01) [R38080] at 66cf7f ../../src/db/mormot.db.sql.pas (3579) ../../src/db/mormot.db.sql.pas (3594) ../../src/orm/mormot.orm.sql.pas (1877) ../../src/orm/mormot.orm.server.pas (2266) ../../src/orm/mormot.orm.server.pas (1219) raw.pas (177) ../../src/net/mormot.net.server.pas (1457)
25.07.22 09:12:47.416 Exception 5 EOrmBatchException {Message:"TRestOrmServerBatchSend.EngineBatchSend: TRestStorageExternal.TransactionBegin timeout"} [R38080] at 72f66f ../../src/orm/mormot.orm.server.pas (1959) ../../src/orm/mormot.orm.server.pas (2266) ../../src/orm/mormot.orm.server.pas (1219) raw.pas (177) ../../src/net/mormot.net.server.pas (1457)
25.07.22 09:12:47.416 Trace 5 mormot.orm.server.TRestOrmServerBatchSend(7f3ce4b75600) EngineBatchSend json=73 B count=0 errors=0 post=0 simple=0 hex=0 hexid=0 put=0 delete=0 2s 0/s
25.07.22 09:12:47.416 Warning 5 {"EOrmBatchException(7f3ce5eea590)":{Message:"TRestOrmServerBatchSend.EngineBatchSend: TRestStorageExternal.TransactionBegin timeout"}} -> PARTIAL rollback of latest auto-committed transaction data={World~["automaticTransactionPerRow",10000,"u01~~[2188~6645],[381,2647]]}
25.07.22 09:12:47.416 Leave 5 02.003.984
P.S.
If I replace updates with rawupdates in siege_emulate.sh, all is OK, so the problem is in the ORM.
Last edited by mpv (2022-07-25 09:23:20)
Offline
https://github.com/synopse/mORMot2/commit/56cee1d4 should fix the thread-safety problem.
But there are other ORM issues in my last BatchSend refactoring.
I am currently working on it.
EDIT
- https://github.com/synopse/mORMot2/comm … 046d4a57ea fixes the memory leak problem I observed with a high number of short-living (typically HTTP/1.0) connections;
- https://github.com/synopse/mORMot2/comm … 710e793f63 restores proper ORM thread-safety
Now, on my side, I don't see any more trouble using wrk, ab or siege over the raw test program.
Offline
At last, with the commit that fixed the thread-safety of the server-side TRestBatch process, all my tests pass.
I can use a fixed commit hash for building the TFB Docker container, or a fixed release tag. Currently the docker build tries to download the latest release tag of mORMot2, but I am afraid future releases could break the TFB tests, and it would be bad if that happens just before Round 22.
If you do not plan massive updates in the near future, let's create a release; I will verify it again and use it in docker instead of latest. OK?
Last edited by mpv (2022-07-25 17:57:10)
Offline
Using a fixed tag instead of the latest release is the best option: other frameworks do it too.
SQLite3 3.39.2 is out, so I guess I could make a release tomorrow morning.
It could help the integration - hoping it passes the siege testing.
I would like to make the /updates test scale better with the ORM, because it is the only test lagging behind the "raw" version.
So once it has been optimized, we could stick to a version for Round 22.
Offline
Before Round 22 I plan to investigate performance using valgrind, strace and so on - maybe I will find some hidden bottlenecks.
Also I plan to switch the docker image base to ubuntu:22.04 after TFB upgrades their environment. (BTW I tried to upgrade my PC to 22.04 - too many problems yet, so I rolled back to 20.04.)
So we will do at least one more MR, and in it we can upgrade to a more recent mORMot.
For now I am waiting for the mORMot release tomorrow morning; then I will pin its tag in the docker build and push an MR to TFB.
Last edited by mpv (2022-07-25 18:16:38)
Offline
Here is the release
https://github.com/synopse/mORMot2/rele … g/2.0.3780
I have also made some minor modifications to raw.pas
https://gist.github.com/synopse/f4b2c2e … 8aadff6c88
For instance, I guess we don't need any transaction for the update which is always run as a single SQL statement on PostgreSQL.
Hope it passes the tests this time.
I don't see any problem anymore on my side.
But we will continue testing.
Offline
I removed `TSynLog.Family.Level := LOG_STACKTRACE` - logging is not permitted.
The strange thing so far: if I checkout tag 2.0.3780 I can build raw.pas from the Lazarus IDE, but if I run `setup_and_build.sh` locally or in a container I get the error
`mormot.core.fpcx64mm.pas(3498,6) Error: (5000) Identifier not found "ObjectLeaksCount"`
I do not understand why....
Last edited by mpv (2022-07-26 18:52:16)
Offline
The command-line compile command is:
fpc -MDelphi -Sci -Ci -O3 -g -gl -gw2 -Xg '-k-rpath=$ORIGIN' -k-L./bin -Tlinux -Px86_64 -veiq -v-n-h- -vm11047,6058,5092,5091,5060,5058,5057,5028,5024,5023,4081,4079,4055,3187,3124,3123,5059,5036,5089,5090 -Fi./bin/fpc-x86_64-linux/.dcu -Fi./libs/mORMot/src -Fi./libs/mORMot/src/core -Fi./libs/mORMot/src/db -Fi./libs/mORMot/src/rest -Fl./libs/mORMot/static/x86_64-linux -Fu./libs/mORMot/src/core -Fu./libs/mORMot/src/db -Fu./libs/mORMot/src/rest -Fu./libs/mORMot/src/crypt -Fu./libs/mORMot/src/app -Fu./libs/mORMot/src/net -Fu./libs/mORMot/src/lib -Fu./libs/mORMot/src/orm -Fu./libs/mORMot/src/soa -FU./bin/fpc-x86_64-linux/.dcu -FE./bin/fpc-x86_64-linux -o./bin/fpc-x86_64-linux/raw -dFPC_X64MM -dFPCMM_SERVER -B -Se1 ./src/raw.pas
In the IDE more units are on the search path, I think (the Lazarus package adds all of mORMot).
When compiled from the command line using the command above, the error is:
(3104) Compiling ./src/raw.pas
(3104) Compiling ./libs/mORMot/src/core/mormot.core.fpcx64mm.pas
mormot.core.fpcx64mm.pas(3498,6) Error: (5000) Identifier not found "ObjectLeaksCount"
Last edited by mpv (2022-07-26 19:03:16)
Offline
Ok, packages are evil...
The package defines:
-dNOSYNDBZEOS
-dNOSYNDBIBX
-dFPCMM_REPORTMEMORYLEAKS
-dFPCMM_SERVER
But when I compile from the command line I do not define FPCMM_REPORTMEMORYLEAKS, so
{$ifdef FPCMM_REPORTMEMORYLEAKS_EXPERIMENTAL}
var
ObjectLeaksCount: integer;
but the variable ObjectLeaksCount is used only under the FPCMM_REPORTMEMORYLEAKS_EXPERIMENTAL condition.
Should I wait for a fix, or define FPCMM_REPORTMEMORYLEAKS for the command-line build?
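In other words, it is the classic conditional-define mismatch - a generic illustration with invented names, not the actual mormot.core.fpcx64mm.pas code:
// compiles with no defines; compiling with -dCONDITION_B but without
// -dCONDITION_A reproduces the "Identifier not found" error
program IfdefMismatch;

{$ifdef CONDITION_A}
var
  SomeCounter: integer;  // declared only when CONDITION_A is defined
{$endif}

begin
  {$ifdef CONDITION_B}
  inc(SomeCounter);      // used under a different condition than the declaration
  {$endif}
end.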
Offline
I made an MR based on the 2.0.3780 release, with FPCMM_REPORTMEMORYLEAKS defined. Let's wait for approval.
Offline
After a little profiling of a wrk call with 12 threads and 512 connections
wrk -c 512 -t 12 "http://localhost:8080/db"
using my favorite valgrind, I found that 25% of the program time (unexpectedly!) is spent inside FindPendingFromTag, called with n ~350 while the new event count is ~24.
It's either a branch-predictor problem, or the assumption that O(n*m) stays small does not hold - in my test n*m is about 24*350 = 8400 comparisons per call.
Last edited by mpv (2022-07-27 18:14:51)
Offline
Good finding.
I just tried to remove the duplicate detection and ignore duplicates.
My first attempt was a simple append (move remaining + move new).
But in fact there are a lot of duplicates, so the pending list grows to a huge number, and then a lot of read errors appear.
So the performance is even worse. The longer the connections stay active, the more CPU it consumes. Not good.
I will try another approach tomorrow: just flagging the connection as already notified/pending may be enough, and O(1) in all cases.
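The flagging idea as a standalone sketch, with invented names (not the actual TPollSockets internals): keep one boolean per connection and test-and-set it, instead of scanning the whole pending list for every new epoll event.
type
  // minimal invented type, just to illustrate the O(1) idea
  TConnSketch = class
  protected
    fAlreadyPending: boolean; // set while the connection sits in the pending list
  public
    // returns true if the connection should be appended to the pending list,
    // false if it is already queued - no O(n*m) list scan needed
    function TryMarkPending: boolean;
    // to be called once the pending event has been processed
    procedure ClearPending;
  end;

function TConnSketch.TryMarkPending: boolean;
begin
  result := not fAlreadyPending;
  if result then
    fAlreadyPending := true; // the real code would need a lock or an atomic here
end;

procedure TConnSketch.ClearPending;
begin
  fAlreadyPending := false;
end;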
Offline
With the latest O(1) changes I've got 94868 RPS vs 93147 RPS before (+2%) for /db, and ~490000 vs 485000 RPS for /json. On server hardware the difference should be more visible.
I will continue to investigate perf (today is a crazy day - the @#$ russians have been launching missiles since 4:00, some of them landing very close to me).
Offline
Info: I guess https://github.com/synopse/mORMot2/commit/fed0021366c would allow using {#.} ... {/.} in the Mustache template.
Stay put and safe, Pavel!
Offline
Nice! And it improves /fortunes from 76701 RPS to 82399.
Also I removed an unneeded #10 in the mustache template - for a 15-second test our little mORMot responds 1 233 536 times, so these 11 bytes are converted into ~12 MB of traffic.
I will commit all improvements into https://github.com/pavelmash/FrameworkBenchmarks/pull/1
Offline
We pass the verification check on TFB. Now waiting for the PR to be merged. If this happens by tomorrow, then we will get benchmark results on real hardware during the next intermediate run (expected to start in 41 hours).
Warning: it would break the tests if it is run with the latest release tag.
I just included this feature today.
Yes - to test a specific commit, its hash should be pasted here and that line uncommented, and the next line commented.
Offline
With the latest sources, a possible optimization target is THttpRequestContext.ParseHeader (6.6% of the time for /db).
The simplest optimization IMHO is to remove 2 unnecessary headers from PARSEDHEADERS (SERVER-INTERNALSTATE and X-POWERED-BY).
A more complex one is to use ideas from picohttpparser - it is described here, starting from slide 31.
Also a small +500 RPS /db improvement in PR 104 - avoid FPC string concatenation.
Offline
I fixed the missing ':' in PR#140. I missed it because the `/db` endpoint actually does not return the TOrm JSON, but reformats it using FormatUTF8.
Maybe it would be better to introduce a new class method and rewrite `/db` as
ctxt.OutContent := TOrmWorld.RetrieveAsJson(fStore.Orm, RandomWorld);
?
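For context, a hypothetical sketch of the FormatUTF8 reformatting I mean (assuming the TOrmWorld class, fStore and ctxt from raw.pas; the method name and signature are invented, not the exact handler code):
procedure TRawAsyncServer.dbViaFormat(ctxt: THttpServerRequestAbstract); // hypothetical
var
  w: TOrmWorld;
begin
  w := TOrmWorld.Create(fStore.Orm, RandomWorld); // retrieve one row by ID via the ORM
  try
    // hand-made JSON so that the primary key is emitted as "id", not "ID"
    ctxt.OutContent := FormatUTF8('{"id":%,"randomNumber":%}',
      [w.IDValue, w.RandomNumber]);
  finally
    w.Free;
  end;
end;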
Last edited by mpv (2022-07-29 15:23:43)
Offline
The problem with direct retrieval would be that the ID won't be serialized as "id" but as "ID"...
Update:
I have committed a lot of enhancements to the mormot.db.sql.*.pas units,
trying to reduce memory allocation, especially when a single row of data is retrieved from the DB.
The code should now be more modular and, I hope, more performant.
About picohttpparser: a lot of what is included in h2o is already implemented in the mORMot HTTP server.
There is no memory allocation at all when running the /plaintext request.
Offline
Nice! Over these 10 days the results have improved - here is a comparison between mORMot from 2022-07-20 and 2022-07-30.
Max RPS:
┌──────────────┬────────────┬────────────┬────────────┐
│ (index) │mormot(0720)│mormot(0730)│ lithium │
├──────────────┼────────────┼────────────┼────────────┤
│ fortune │ 74318 │ 90500 │ 90064 │
│ plaintext │ 920198 │ 977024 │ 3388906 │
│ db │ 111119 │ 116756 │ 99463 │
│ update │ 10177 │ 10108 │ 25718 │
│ json │ 422771 │ 446284 │ 544247 │
│ query │ 106665 │ 113516 │ 94638 │
│ cached-query │ 384818 │ 416903 │ 528433 │
└──────────────┴────────────┴────────────┴────────────┘
Offline