#1 2013-11-09 02:08:21

moctes
Member
From: Mexico
Registered: 2013-05-11
Posts: 129

Benchmark information

Hi Arnaud,

Just curious, could you please post more info about the data access libraries used for the updated benchmarks at http://blog.synopse.info/post/2013/11/0 … ing-MS-SQL, I mean the versions of UniDAC, Zeos and FireDAC? I'm evaluating PostgreSQL to be used with mORMot, and I'm also looking for the fastest library. You said nice things about ZEOS in the post, and if my memory serves me well you were in touch with one of its core developers, so your feedback is very valuable to me. I have had experience with UniDAC, FireDAC is not included in my Delphi version, and as you know I would like to switch to FPC/Lazarus (server side at least), so maybe the options come down to UniDAC vs Zeos. I want to run the benchmarks on my own computer, which is why I'm asking: did you use the latest versions in your tests? Can you recommend one over the other? What is your experience with them? Or maybe you have a SynDBPostgreSQL.pas ;-) hidden somewhere?

Best regards

Offline

#2 2013-11-10 13:24:28

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

ZEOS is the latest 7.x version from SVN.
FireDAC is the version shipped with Delphi XE4.
UniDAC is 4.6.1 AFAIR.

We have not used or tested PostgreSQL with mORMot yet, but it was reported to work as expected (at least the SynDB* layer, perhaps not the whole ORM).
Any feedback is welcome!

From our tests, Zeos/ZDBC is pretty good for reading content, since our classes have direct access to the ZDBC layer, so they bypass the TDataSet bottleneck.
There was a ticket/feature request to create a SynDBPostgreSQL.pas, but we did not start it, since we do not need it (!) and ZDBC sounds like a pretty good option.
We have indeed had some very good exchanges with the Zeos guys, and found that the latest commits introduced some noticeable speed-ups; it sounds like the Zeos team is very reactive, and they are now also focusing on performance.
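For instance, querying PostgreSQL through SynDBZeos would look something like this (a rough sketch only: the ZDBC URI, credentials and table are placeholders, so double-check the URI syntax against your Zeos version):

    uses
      SynCommons, SynDB, SynDBZeos;

    procedure ListSampleNames(Props: TSQLDBConnectionProperties);
    var
      Rows: ISQLDBRows;
    begin
      // direct ZDBC access: no TDataSet involved
      Rows := Props.Execute('select Name from Sample where Amount>?', [100]);
      while Rows.Step do
        writeln(UTF8ToString(Rows.ColumnUTF8('Name')));
    end;

    // somewhere at startup - the ZDBC URI below is an assumption, adapt it to your setup:
    //   Props := TSQLDBZEOSConnectionProperties.Create(
    //     'zdbc:postgresql://localhost:5432/testdb', 'testdb', 'postgres', 'secret');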

What is your own feedback about PostgreSQL?
On the Windows platform, we found MSSQL Express pretty easy to work with, with amazing speed results (via our direct OleDB access).

Online

#3 2013-11-10 15:33:38

moctes
Member
From: Mexico
Registered: 2013-05-11
Posts: 129

Re: Benchmark information

Thanks for the info Arnaud. I'm trying to run your benchmarks and I'm only testing UniDAC and ZEOS; I saw on the Devart forums that they are on v5.1.3, so I will see if I can test with the latest version.

I'm not able to provide any feedback yet because I'm just starting with PostgreSQL. I have developed many applications using Firebird as the database server with great success, some also targeting MSSQL Express, and now I have a new project which will be very data intensive (expecting at least a million records in our main table during the first month in production). I'm very interested in some out-of-the-box PostgreSQL features like schemas, hstore, the json and xml data types, and replication, among others. I was a little worried about choosing PG, but I've heard nice things about it, and the National Institute of Geography and Statistics here in Mexico is using PG, so I think it is a safe bet.

I'll keep you informed of my findings.

Offline

#4 2013-11-10 18:24:22

mpv
Member
From: Ukraine
Registered: 2012-03-24
Posts: 1,571
Website

Re: Benchmark information

We use Zeos with PostgreSQL, but at the SynDB* level, not in the high-level mORMot ORM. There is no project using PostgreSQL in production yet, but our tests show good performance and stability. Just one remark: PostgreSQL is not tuned for use on the Windows platform (I mean the server part), so if anyone wants to run some performance tests, it is better to install PostgreSQL in a *nix environment. About AB's remark on MSSQL Express: this DB shows good results while the database is small (small for me = fits in memory). If the database is expected to be big (> 200 MB), PostgreSQL or Oracle is a very good choice IMHO.

Offline

#5 2013-11-10 19:27:45

moctes
Member
From: Mexico
Registered: 2013-05-11
Posts: 129

Re: Benchmark information

Thank you mpv, I have heard the same, although this answer on Stack Overflow makes me think the gap is closing: http://stackoverflow.com/questions/8368 … t-versions . Maybe in the future, when mORMot works fine in Lazarus and has an alternative to http.sys, I can switch to Linux on the server, but for now I don't want to have the mORMot server (Windows) and the database server (Linux) on separate machines; it's perhaps worth testing, but spending on two servers is another thing we would like to avoid.

What do you think of these numbers on Windows?

With: 

fUseCache := false; // caching is to be disabled - not found stable enough

{
    "Engine": "ZEOS",
    "CreateTableTime": "67.37ms",
    "NumberOfElements": 5000,
    "InsertTime": "4.48s",
    "InsertRate": 1115,
    "InsertBatchTime": "4.51s",
    "InsertBatchRate": 1107,
    "InsertTransactionTime": "3.47s",
    "InsertTransactionRate": 1438,
    "InsertBatchTransactionTime": "3.29s",
    "InsertBatchTransactionRate": 1517,
    "ReadOneByOneTime": "3.67s",
    "ReadOneByOneRate": 1358,
    "ReadAllVirtualTime": "76.52ms",
    "ReadAllVirtualRate": 65338,
    "ReadAllDirectTime": "56.28ms",
    "ReadAllDirectRate": 88836,
    "ClientCloseTime": "4.01ms"
}

With:
  fUseCache := true; // cache enabled for this run (the sample's original comment said it was not found stable enough)

{
    "Engine": "ZEOS",
    "CreateTableTime": "58.21ms",
    "NumberOfElements": 5000,
    "InsertTime": "2.09s",
    "InsertRate": 2386,
    "InsertBatchTime": "1.97s",
    "InsertBatchRate": 2532,
    "InsertTransactionTime": "1.22s",
    "InsertTransactionRate": 4087,
    "InsertBatchTransactionTime": "1.23s",
    "InsertBatchTransactionRate": 4041,
    "ReadOneByOneTime": "1.27s",
    "ReadOneByOneRate": 3932,
    "ReadAllVirtualTime": "51.69ms",
    "ReadAllVirtualRate": 96728,
    "ReadAllDirectTime": "36.01ms",
    "ReadAllDirectRate": 138819,
    "ClientCloseTime": "4.06ms"
}

Remember this is on a Win64 VM with 2 GB of RAM (1600 MHz DDR3) and a 2.7 GHz Core i7. I know I could use Boot Camp on my MBP, but I don't want to do that.

I don't know if the comment about cache stability is still valid, but I would like to know. Maybe tomorrow I will test against the latest UniDAC and can draw some conclusions.

BTW, I had to comment out the following lines from the tests:

    Test(nil,'','','','','SQLite3 (file full)',true,smFull);
    Test(nil,'','','','','SQLite3 (file off)',true,smOff);
    Test(nil,'','','','','SQLite3 (file off exc)',true,smOff,lmExclusive);

because I was getting an AV on:

    if (aServerName='') and
       not PropsClass.InheritsFrom(TODBCConnectionProperties) then begin

Just FYI, I'm only interested in the PG tests for now.

Regards

Offline

#6 2013-11-10 21:01:02

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Yes, there was an error introduced last Friday in the performance test sample.
See http://synopse.info/fossil/info/7e6e948677

The cache was enabled only for the DB engines which were tested as stable.
We encountered some stability issues with early versions of Zeos 7.x, so we chose to disable it by default.
Thanks to your feedback, we can now officially enable cache for PostgreSQL.
See http://synopse.info/fossil/info/3059c1ceef

Thanks a lot for your feedback!
smile
I suspect UniDAC will be slower than ZDBC direct access for the reading (due to TDataSet), and perhaps also for the insertion (depending on your Delphi version).

Online

#7 2013-11-11 12:56:00

mpv
Member
From: Ukraine
Registered: 2012-03-24
Posts: 1,571
Website

Re: Benchmark information

Actually mORMot works well with PostgreSQL. The worst thing about a Windows-based PostgreSQL install is that PostgreSQL forks a process (postgres.exe) for every connection. But thanks to the internal mORMot thread pool, we create all connections once, and the number of connections equals the number of working threads. So it is possible to use PostgreSQL under Windows.

moctes, did you configure PostgreSQL after install? The default PostgreSQL configuration is for a VERY low-end server... And I recommend using the 32-bit PostgreSQL version in your case (because of the 2 GB of RAM); it should be a little bit faster.
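For example, something like this in postgresql.conf could be a starting point for a small 2 GB box (just illustrative values, not a recommendation - measure on your own workload):

    # postgresql.conf - illustrative starting point for a small 2 GB machine
    shared_buffers = 256MB          # raise from the small default
    effective_cache_size = 1GB      # planner hint, not an allocation
    work_mem = 8MB                  # per sort/hash operation
    checkpoint_segments = 16        # fewer, larger checkpoints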

Offline

#8 2013-11-11 16:30:39

moctes
Member
From: Mexico
Registered: 2013-05-11
Posts: 129

Re: Benchmark information

Hi mpv, I didn't do any tuning of the PostgreSQL installation; the tests were run with the out-of-the-box configuration, so it can certainly be improved, but I was interested in a starting point to compare ZEOS and UniDAC (I don't know if it should be done in a different way). Once I make a decision I will start fine-tuning the DB engine. About 32-bit PG: I'm not using the 32-bit version because I have Win64 installed on this VM and wanted a 64-bit version as well, and the production environment will be 64-bit (Windows Server 2008 or 2012). As I'm new to PG, I needed to see possible shortcomings in an environment as similar to production as possible (if I can say such a thing ;-) ). I've read in another post that Arnaud commented that 64-bit support on the mORMot side was not complete, but I can live with that for a while.

Thank you for your advice I really appreciate it.

Offline

#9 2018-03-28 21:29:14

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

I wanted to evaluate Sqlite vs Firebird embedded.

In example 15, I switched to 10 records because it is a realistic scenario (an inserted "order" object usually has 7-10 records).

Insertion speed:

                        Direct  Batch   Trans   Batch Trans
SQLite3 (file full)     424     3518    2673    1756
SQLite3 (file full exc) 178     3324    2130    3278
ZEOS Firebird           924     2092    2942    2464
FireDAC Firebird        3795    11507   9182    11834

Why "FireDAC Firebird" is 3 times faster than "Zeos Firebird" and also 3 times faster than "SQLite full exclusive", since Sqlite access is maximum optimized in your framework? and I mean for "Trans" because I don't think I'll use "Batch" (anyway Firedac has also a very optimized batch).

Note: I need maximum safety, and since Firebird is implicitly in "Full" mode, it is only fair that SQLite is also in "Full" mode.

Thank you.

Last edited by emk (2018-03-28 21:33:53)

Offline

#10 2018-03-29 07:32:47

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Because SQLite3 waits for each record to be really written to disk, whereas obviously Firebird doesn't; its "Full" mode is much less reliable than SQLite's, for sure.
See https://www.sqlite.org/lockingv3.html
and https://synopse.info/files/html/Synopse … ml#TITL_60

Online

#11 2018-03-29 09:39:16

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

I read those pages from the SAD, but how do you explain the better performance of "FireDAC SQLite", since it is also tested with "smFull":

    {$ifdef USEFIREDAC}
    Test(TSQLDBFireDACConnectionProperties,FIREDAC_PROVIDER[dSQLite],'','','',' SQlite3',true,smFull);
    {$endif}

Results: (also, same 10 records)

FireDAC SQlite3    6644    2913    5117    5980

What is the "magic" behind Firedac Sqlite provider? Direct 15x, Trans 2x? I think this magic worth investigated and imported into mORMot smile

Offline

#12 2018-03-29 11:22:09

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

There is no magic, they just changed the default settings for better performance.
And there seems to be something in your system which makes the timing not very realistic (are you running it in a VM? The disk drivers in a VM are known not to be ACID and not to implement FlushFileBuffers as on a real system).

If you use the same settings - see SQLite3 (file off exc) below - we have better performance than FireDAC:

Insertion speed
	 		Direct	Batch	Trans	Batch+Trans
SQLite3 (file full)	91	3354	42227	67349
SQLite3 (file off)	143	9260	61751	123878
SQLite3 (file off exc)	17270	176348	74585	219346
SQLite3 (mem)		60141	226295	75340	233623
ZEOS SQlite3		68	1406	34188	93281
FireDAC SQlite3		12481	17488	37676	122952
UniDAC SQlite3		62	1725	19395	27923

And much better performance, for sure.

For the reading, it is even more obvious that FireDAC and its TDataSet is a bottleneck:

Read speed
 			By one	Virtual	Direct
SQLite3 (file full)	6858	423405	443301
SQLite3 (file off)	7474	446388	459389
SQLite3 (file off exc)	79483	452652	463821
SQLite3 (mem)		79249	449721	459263
ZEOS SQlite3		55367	196556	354032
FireDAC SQlite3		5757	92196	132303
UniDAC SQlite3		1514	78125	108875

When reading one row (which is a very common use case for an ORM), FireDac is more than 10 times slower than our native version...

I doubt we have anything to learn from the Delphi RTL and its DB.pas design...
And ZEOS has a much better performance than FireDAC, here.

Those numbers are extracted from yesterday's tests by miab3 - see https://synopse.info/forum/viewtopic.ph … 924#p26924

Online

#13 2018-03-29 12:50:22

miab3
Member
From: Poland
Registered: 2014-10-01
Posts: 188

Re: Benchmark information

@emk,

In the case of ZEOS, to radically speed up SQLite inserts, you must set:

ZConnection1.Properties.Add('synchronous=0');
ZConnection1.Properties.Add('locking_mode=EXCLUSIVE');

before connecting.
https://www.sqlite.org/pragma.html

Michal

Last edited by miab3 (2018-03-29 12:54:49)

Offline

#14 2018-03-29 13:46:07

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Yes, this is the Zeos way of setting what we call "file off exc" in SynSQLite3, and what FireDAC sets by default - which is wrong IMHO.
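For reference, a minimal sketch of how a TSQLRestServerDB would be switched to this "file off exc" mode (property and enum names from SynSQLite3.pas 1.18, as far as I remember):

    uses
      SynSQLite3, mORMotSQLite3;

    procedure UseFileOffExc(Server: TSQLRestServerDB);
    begin
      // what the benchmark calls "file off exc":
      // PRAGMA synchronous=OFF + PRAGMA locking_mode=EXCLUSIVE
      Server.DB.Synchronous := smOff;
      Server.DB.LockingMode := lmExclusive;
    end;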

Online

#15 2018-03-29 13:47:36

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

@ab
Yes, it was in a VM but these results are on metal (only 10 records - for me this is a realistic scenario):

Insertion speed
 	               Direct	Batch	Trans	Batch Trans
SQLite3 (file full)	128	1314	1150	1307
SQLite3 (file full exc)	204	1768	1643	1680
SQLite3 (file norm exc)	251	2079	1961	1831
SQLite3 (file off)	240	2711	2650	2796
SQLite3 (file off exc)	13531	20120	18691	19305
ZEOS SQlite3	        94	749	743	684
FireDAC SQlite3	        7057	9407	11428	12437
ZEOS Firebird	        1256	4290	4987	4286
FireDAC Firebird	3865	12004	9478	12484

ab wrote:

they just changed the default settings for better performance.

Even though they've changed the initial settings, I see that you test "FireDAC SQLite" with "smFull":

    {$ifdef USEFIREDAC}
    Test(TSQLDBFireDACConnectionProperties,FIREDAC_PROVIDER[dSQLite],'','','',' SQlite3',true,smFull);
    {$endif}

So, "Firedac Sqlite" is "smFull, exclusive" and has 11428 inserts/s(Trans), where your "SQLite3 (file full exc)" has 1643 inserts/s(Trans). So maybe they have bigger buffers or some sort of setting which increases inserts 8x.

ab wrote:

When reading one row (which is a very common use case for an ORM), FireDac is more than 10 times slower than our native version...

I don't dispute that the reading speed is very good, but for me writing speed is essential, because almost all reads happen on the client from a slave replica.

ab wrote:

I doubt we have anything to learn from the Delphi RTL and its DB.pas design...

Thank you for your great work; I'm trying to be constructive, and maybe from time to time we'll find something you missed smile

@miab3
I'm only interested in "smFull".

Offline

#16 2018-03-29 14:16:24

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

No, you are wrong: smFull is only used for the main SQLite3 engine used by the ORM, not for the underlying FireDAC SQLite3 instance.
There are TWO Sqlite3 engines involved here.
See https://synopse.info/files/html/Synopse … l#TITL_146
and https://synopse.info/files/html/Synopse … ml#TITL_30
And since smFull involves only virtual tables, not real tables, this setting doesn't have any impact on performance in this particular benchmark.
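Here is a rough sketch of how the two engines are wired together (unit and constructor names from mORMot 1.18 as I recall them, with placeholder file names - treat the details as assumptions, not as the exact sample code):

    uses
      SynCommons, SynDB, SynDBSQLite3, mORMot, mORMotDB, mORMotSQLite3;

    type
      TSQLSample = class(TSQLRecord)
      private
        fAmount: Integer;
      published
        property Amount: Integer read fAmount write fAmount;
      end;

    procedure RunWithExternalEngine;
    var
      Props: TSQLDBConnectionProperties;
      Model: TSQLModel;
      Server: TSQLRestServerDB;
    begin
      // engine #2: the external database reached through SynDB
      // (here a second SQLite3 file, but it could be FireDAC, Zeos, ODBC...)
      Props := TSQLDBSQLite3ConnectionProperties.Create('external.db3', '', '', '');
      Model := TSQLModel.Create([TSQLSample]);
      try
        // make TSQLSample a virtual table mapped to the external connection
        VirtualTableExternalRegisterAll(Model, Props);
        // engine #1: the ORM core's own SQLite3 file - it only hosts the virtual
        // tables here, so its smFull/smOff journaling mode barely matters
        Server := TSQLRestServerDB.Create(Model, 'core.db3');
        try
          Server.CreateMissingTables;
        finally
          Server.Free;
        end;
      finally
        Model.Free;
        Props.Free;
      end;
    end;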

Which means BTW that there is NO reason to use FireDAC Sqlite3, since it is slower and less maintained (its linked engine is versions older).
Here are the number that matters (comparing the engines with the same settings):

SQLite3 (file off exc)	13531	20120	18691	19305
FireDAC SQlite3	        7057	9407	11428	12437

FireDAC is definitively slower.
We put FireDAC SQLite3 in the benchmark, because we can, and it is informative to validate and test it - but it is useless when used with our ORM.

Online

#17 2018-03-29 18:03:28

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

ab wrote:

There are TWO SQLite3 engines involved here.
And since smFull involves only virtual tables, not real tables, this setting doesn't have any impact on performance in this particular benchmark.

I understand now.

2. Since the DB is the most expensive operation on the server: in your opinion, is it worth switching from Firebird embedded with its faster inserts to SQLite with its slower inserts? ("Trans" 4987 (Zeos Firebird) vs 1643 (full exc)).
Note 1: I started to look at SQLite since you made AES encryption for SQLite smile That's why I started benchmarking, but I'm somewhat upset that it is slower than Firebird, especially in the area I care about most: inserts.
Note 2: I have never had a corrupted database with Firebird.

Offline

#18 2018-03-30 07:17:14

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Use SQLite3 in "SQLite3 (file off exc)" mode: you will have the best performance, a smaller database file on disk, no external .dll dependency, and less risk of data corruption, even in this "off exc" mode.

Of course, your VM seems to have a weird behavior - check your VM settings, or change the VM software.
I don't see such a problem, e.g. with a VPS running Linux and FPC compilation.

Online

#19 2018-03-30 07:54:18

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

ab wrote:

Use SQLite3 in "SQLite3 (file off exc)"

I have never used SQLite, so I ask: if I somehow have to force-kill the server exe with multiple SQLite connections open in "off exc" mode, isn't there a big chance of corruption?

Offline

#20 2018-03-30 08:29:32

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Less than with Firebird, I think.
SQLite3 has a LOT of anti-corruption tests and code; it is even documented how you may corrupt something: read https://www.sqlite.org/howtocorrupt.html
Much more secure than Firebird, for sure.

You can see that corrupting a database in "off exc" mode would be very difficult.
Just delete the journal file, if needed, and it will open, even after a crash.
What may occur is that the last transaction is missing.

Online

#21 2018-03-30 16:44:04

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

Any chance for this scenario to happen?

Post some inserts in transaction "n" to the master server ("off exc"; maybe transaction "n" is in the journal, but not in the db), the replica client gets a message to fetch updates and immediately retrieves the updates from transaction "n" (maybe it gets them from the page cache), the master server crashes (and I delete the journal manually), and in the end the master server has "n-1" committed to its db while the replica has "n" committed to its db.

In other words, the question is: in "off exc" mode, can an update be read by a subsequent select before it is synced to disk?

EDIT: The replica db can check, when it takes updates, whether its "transaction id on replica" <= "n" (the transaction id on the master); otherwise it means a problem happened and manual intervention is needed.
But anyway, if somebody knows the answer to the previous question, it's good to know.

Last edited by emk (2018-03-30 19:48:11)

Offline

#22 2018-03-30 23:03:28

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

Sorry to bother you, but do you use "off exc" on your production server(s)?
Thank you.

Offline

#23 2018-03-31 15:28:43

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

If you have master/slave replication, the SQLite3 write mode on those doesn't matter much.
The idea is that the slaves may have to ask again for the updates, if their previous information was not fully returned and written.
I think there won't be any sync problem here. Of course, we may be wrong, and we expect some feedback!

And on production servers, we always use "off exc" mode, since it gives the best performance, and is secure enough for our purpose.
In practice, even after a catastrophic failure, deleting the journal file was the only action we had to take, and only when SQLite3 complained about the database file state.

Only for some databases which should maintain an exact state of information (the most sensitive data, e.g. involving money) do we use a dedicated SQLite3 file, with the default slow but safer settings.
Thanks to the mORMot ORM, you can easily have several SQLite3 databases, in several TSQLRestServerDB instances, or use TSQLRestStorageRemote for a set of tables.
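A simplified sketch of such a split (MainModel/MoneyModel and the file names are placeholders, and the freeing of the servers is left out):

    uses
      SynSQLite3, mORMot, mORMotSQLite3;

    procedure CreateStores(MainModel, MoneyModel: TSQLModel);
    var
      MainServer, MoneyServer: TSQLRestServerDB;
    begin
      // bulk data: fast "off exc" mode
      MainServer := TSQLRestServerDB.Create(MainModel, 'main.db3');
      MainServer.DB.Synchronous := smOff;
      MainServer.DB.LockingMode := lmExclusive;
      // sensitive data (e.g. money): its own file, default safer settings
      MoneyServer := TSQLRestServerDB.Create(MoneyModel, 'money.db3');
      // ... use the two servers, then free them when done
    end;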

If you expect 100% ACID behavior even if you pull the power plug, and expect the data to be on disk, "off exc" won't be enough.
Of course, that will only concern something like financial transactions. Most applications can afford to redo the same process. Our guess is that the problem, in such a case, will be more in the business logic and the unsynced state of services, not in the database itself.
But the database won't be broken or unavailable.

And anyway, SQLite3 has always sounded safer and more ACID than Firebird, due to much deeper testing and much wider use in a lot of diverse conditions.
Safer than most "enterprise" RDBMS, too. Try switching the power off of an Oracle or MSSQL database during processing, and weep. I've seen it several times, with the need to send the Oracle data disks to a sub-contractor just to mount them again...

True ACID and fully secure writes (e.g. for financial accuracy, or for computers embedded in rockets and subject to space radiation) would mean having duplicated stores, then comparing the several results, and only accepting a value e.g. if at least 2 of 3 stores agree...
But I doubt you will need something more than "off exc" for your applications.

Online

#24 2018-03-31 16:10:13

mpv
Member
From: Ukraine
Registered: 2012-03-24
Posts: 1,571
Website

Re: Benchmark information

I'll add my 5 cents. We have used SQLite3 for ~4 years for FTS indexes. We have >50 databases, each 10-300 MB in size, and 10-10,000 concurrent users. In 4 years we have had only one database corruption, and that was because our customer placed it on a Samba network share (which is a known way to corrupt an SQLite3 database).

Offline

#25 2018-04-03 08:38:54

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

Thank you for taking time for such long response.

I was under the impression that "smOff" is a very dangerous state, but since it is safe against application crashes, hardware crashes can be prevented, and, most importantly, after a crash usually deleting the journal puts me back in business, it sounds very good.
I think for the master server, where I have control, it is good to have "off exc", and for client replicas "normal exc" is better, because on the client side I don't know whether they have a UPS or anything like that.

Anyway, what I'm doing is work in progress and not in production, and sync between master and slave will be done with a message broker, but I'll come back with results once more testing is done.

Since I'll always have one database connection per database (multi-tenant), and I'll control with a critical section which thread gets the connection, should compiling with SQLITE_THREADSAFE=0 help (avoiding the mutex)? Do you have any benchmarks here? smile

Last edited by emk (2018-04-03 08:53:11)

Offline

#26 2018-04-03 10:00:31

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

We set SQLITE_THREADSAFE=0 in the first releases of SynSQLite3Static.pas, but after a while, since the SQLite3 core can use multiple threads for some queries (if the corresponding pragma option is forced), we defined SQLITE_THREADSAFE=1 for safety.

Acquiring a mutex from a single thread is very fast (it is an atomic CPU locked operation), and it is done only a few times during each query, so it won't be a bottleneck.
We didn't find any performance impact from setting SQLITE_THREADSAFE=1.

Online

#27 2018-04-03 14:00:15

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

1. Forgive my ignorance, but from your blog posts I learned that avoiding LOCK instructions is crucial for well-performing multi-threaded apps, because they flush the core caches; that's why mORMot avoids allocating memory, char conversion, etc. I read that atomic InterlockedXXX calls are cheap, but doesn't every InterlockedXXX generate a LOCK instruction? So isn't it better to get rid of them where they are not needed?

2. Since you compiled with SQLITE_THREADSAFE=1, can I change the settings to SQLITE_CONFIG_SINGLETHREAD from your code?

3. I see that every time you deserialize a JSON array, every object in the list is created. Wouldn't it be worth having a fixed small pool of that kind of object? Let's say I have a fixed small pool of 10 "OrderLines" objects. Most orders will have at most 10 OrderLines; otherwise, if one has e.g. 12, then 10 are taken from the pool and 2 are created. I say "fixed" so that the array is as fast as possible, without worrying about extra allocations/deallocations. A configuration setting with PoolCapacity could be created for every registered class. What do you think? Or is creating an object so fast that a pool isn't worth it?
EDIT: A configurable fixed pool per class per thread (no synchronization between threads, no allocation for extra objects).

I know that what I'm asking here is premature optimization, but it's good to know, and a few optimizations here and there make for better overall performance.

Last edited by emk (2018-04-03 15:39:30)

Offline

#28 2018-04-03 19:17:16

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

1. Only if you run it millions of times per second (e.g. when accessing an object in the ARC model).
Which is not the case within SQLite3.

2. I don't think you have to change anything.

3. If you use the ORM, FillPrepare/FillNext will reuse the very same ORM object.
If you use a dynamic array of records, then all records will be allocated at once, and all fields filled with minimal memory allocation.
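Something like this (a quick sketch, with TSQLOrder as a placeholder class - and note the method is named FillOne, if I remember correctly):

    uses
      mORMot;

    procedure SumOrders(Client: TSQLRest);
    var
      Order: TSQLOrder; // hypothetical TSQLRecord descendant with an Amount field
      Total: Int64;
    begin
      Total := 0;
      Order := TSQLOrder.CreateAndFillPrepare(Client, 'Amount>?', [100]);
      try
        // the very same Order instance is refilled for each returned row:
        // no per-row object allocation
        while Order.FillOne do
          inc(Total, Order.Amount);
      finally
        Order.Free;
      end;
    end;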

Don't assume anything, but check the numbers, e.g. the "Synopse ORM loop" or "Synopse record" numbers in http://blog.synopse.info/post/2015/02/0 … SON-parser and http://blog.synopse.info/post/2015/02/1 … SON-parser
You will find out how it is already optimized. And that the database will be the bottleneck, not the framework.
But any input is welcome!

Online

#29 2018-04-03 20:28:13

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

3. Sorry, my mistake, I didn't explain it correctly: please re-read point 3, but with the addendum that I was talking about deserializing object lists (you know, when an "Order" contains a TObjectList of "OrderLines").
In this procedure:

procedure TJSONToObject.HandleObjectList(Lst: TObjectList);

you have somewhere in the middle:

Item := ItemInstance.CreateNew;

With your vast experience, do you think it would be worth having a pool of objects for every type of TObjectList I use (e.g. "OrderLines": a pool of 10 records per thread)? So when the ItemInstance.CreateNew line is executed, it would retrieve an object from the pool if one is available, and otherwise create one?

Note: for me this deserialization happens on the server, because clients send a JSON object, which I deserialize, run some validations on, and then write to the db.

Offline

#30 2018-04-04 07:18:00

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

TObjectList is not easy to unserialize in the SOA or the ORM, since we need to specify the class of each item.
So it is not a good example.

In practice, if we want to use objects, we use T*ObjArray definitions.
So that we know the type.
And we usually inherit the class from TSynPersistent instead of TPersistent, which has much lower overhead to create and delete.
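For instance (a small sketch; the registration call is from mORMot 1.18 as far as I remember):

    uses
      SynCommons, mORMot;

    type
      TOrderLine = class(TSynPersistent)
      private
        fItemName: RawUTF8;
        fQty: Integer;
      published
        property ItemName: RawUTF8 read fItemName write fItemName;
        property Qty: Integer read fQty write fQty;
      end;
      TOrderLineObjArray = array of TOrderLine;

    procedure RegisterDTOTypes;
    begin
      // tell the serializer which class the dynamic array holds, so that
      // T*ObjArray fields and parameters are (un)serialized as JSON object arrays
      TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(TOrderLineObjArray), TOrderLine);
    end;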

If there is a lot of data, then we send some RawJSON (and if it is some output from the ORM, RawJSON in non-expanded format), then we use FillPrepare/FillNext on it.
As such, there is no allocation of the objects.

Also note that using the ORM cache, or maintaining a cache on the client side (with a revision number), is a naive - but in practice very efficient - way of obtaining real performance.
Sometimes, we used master/slave ORM replication, and/or bloom filters, to reduce the latency.
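A quick sketch of the ORM cache usage (method names from memory, with TSQLOrder as a placeholder class):

    uses
      mORMot;

    procedure EnableOrderCache(Client: TSQLRestClientURI);
    begin
      // enable the client-side JSON cache for the (hypothetical) TSQLOrder table
      Client.Cache.SetCache(TSQLOrder);
      // optionally let cached rows expire after 60 seconds
      Client.Cache.SetTimeOut(TSQLOrder, 60000);
      // subsequent Client.Retrieve() calls for TSQLOrder rows are served from the cache
    end;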

A pool of objects is a false good idea.
It is just another memory manager above the memory manager.
It has been popularized in Garbage Collector languages (mainly Java or C#) to bypass the collection cost...
But it is not worth it with Delphi, where FastMM4 allocates and deallocates memory blocks in a few CPU cycles of optimized asm.
Maintaining a pool would definitely not be faster, and would lead to new problems, especially on multi-threaded servers.
Ownership of the objects is sometimes difficult to track, especially when objects are passed by reference - and you end up making local copies of your pooled objects... what is the benefit here? wink

What you propose is clearly premature optimization to me.
Don't forget that we have been using mORMot in production for years, with runtime timing in the logs to find out the real bottlenecks under actual load (not some abstract microbenchmarks), and that instantiation (at least with FastMM4 + TSynPersistent) is almost never the bottleneck.

Online

#31 2018-04-05 09:01:36

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

ab wrote:

and that instantiation (at least with FastMM4 + TSynPersistent) is almost never the bottleneck.

I thought it was not worth it, because if it were, it would already have been implemented smile


ab wrote:

TObjectList is not easy to unserialize in the SOA or the ORM, since we need to specify the class of each item.

I need TObjectList because I want to have common code (at least DTOs and validation) between the Delphi client / mORMot server / future EWB client (which doesn't have T*ObjArray). Also, I see a lot of people on the forum trying to or needing to use TObjectList, so I propose a solution:

RegisterClass('OrderDetails', TOrderDetailsItemClass);
RegisterClass('OrderPayments', TOrderPaymentsItemClass);

in function JSON2Object, make a new option: j2oGetItemClassFromArrayName

{"Order" = {"Client" = "XYZ",
           "Date" = "2012-04-30T02:15:12.356Z",
           "Amount" = 1000,
           "OrderDetails" = [
                            {"itemName" = "Clock",  "qty" = 1, "Value" = 100},
                            {"itemName" = "AClock", "qty" = 1, "Value" = 900}
                            ],
           "OrderPayments" = [
                             {"type" = "cash", "value" = 600, Date = "2012-04-30T02:15:12.356Z"}
                             {"type" = "card", "value" = 400, Date = "2012-04-30T02:15:12.356Z"}
                             ]
           }
}

With this solution one could use many sub-TObjectLists, and it would solve the problem for a lot of people.

Offline

#32 2018-04-05 11:02:39

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Perhaps with the main class type, it would be better.

Online

#33 2018-04-05 17:08:38

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

Any plans?

Offline

#34 2018-04-05 18:55:18

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

For our projects, we define DTOs as records and arrays, which work very well with the main mORMot and cross-platform units.
Using DTOs with tuned and refined types, containing only the data needed at the application layer for a particular use case, is the way to go for any serious SOA.
If you are not familiar with DTOs, search the framework documentation and https://martinfowler.com/eaaCatalog/dat … bject.html

It would need to be implemented in both the normal mORMot and the cross-platform client units, so it is a lot of work for something we don't need.
So no plan.

Online

#35 2018-04-05 20:28:14

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

So you're saying you have 2 types of objects for one entity: DTOs (records & arrays) and Domain Objects (TSQLRecord with logic?)?
You have something like this: DO <-> DTO <-json-> DTO <-> DO <-> DB?

Please shed some light. Thank you.

EDIT: I read Index» mORMot Framework» Difference between DTO and entity
From that thread, this corresponds to my needs:   smile

I know people must use DDD/DTO for a reason, but so far I feel it's a little over-abstraction for my own purpose, I might be wrong

Last edited by emk (2018-04-06 06:50:43)

Offline

#36 2018-04-06 12:29:50

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

In DDD, "entity" has a meaning which is not the same as you use. wink
So I won't use "entity" in my answer.

DTOs are a kind of value object, defined not in the domain, but in the application/presentation layers.
They are used to avoid domain leaking, and coupling.
The main point about DDD is Domain preservation, mainly via uncoupling.

So yes, you have DO <-> DTO <-json-> DTO <-> DO in the application/presentation layer.
I can assure you that in the medium term, using DTOs is of huge benefit!
From the code point of view, with mORMot, it is just a matter of defining some types and interfaces in a well-bounded unit, completely uncoupled from the domain/business layer. Then you implement the interface in another class, using the domain objects. Object Pascal's strong typing makes it easy to marshal/translate the values.

It is also better for your clients to have only what they need, not the full potential of your Domain Objects and services - which may be clearly confusing.
Confusion easily comes from naming incompatibilities, or use cases not needed in a particular context.
Think of the Domain as your toolbox, which you use internally to provide services to your users. And you keep your tools internal, so that you stay efficient and in control.

Of course, all this will pretty much break the mind of any DB-centric programmer...
The worst is to leak your TSQLRecord!

Online

#37 2018-04-07 16:02:39

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

I'm doing what you say:

I have "onion" type layering - GUI layer uses "Controller" layer, which uses Domain layer. Domain doesn't know about "Controller", "Controller" doesn't know about GUI. In Domain layer I have defined PODO objects with business logic. Domain layer has persistence ignorance - I have some services implemented (every call is a Unit-of-Work) who Save the aggregate remotely or Load the aggregate from local replica. Domain objects are serialized directly by these services. I have to think, but for the moment I don't see how I'll benefit from DO->DTO->json instead of DO->json. I read the arguments but somehow doesn't apply to me..will see.

For me it is important that the DO has the same definition on the client, the server and the EWB client, so I can re-use the same business logic and validation rules (that's why I need TObjectList).
So I have common ground, Domain Objects which:
- inherit from TSynPersistent on the server,
- inherit from my TEntity on the Delphi client (every object/ObjectList also has an attached in-sync in-memory TDataSet, a TFDMemTable, for easy two-way binding to DB GUI widgets),
- inherit from TPersistent on the EWB client (every object/ObjectList also has an attached in-sync in-memory TDataSet (EWB-type) for easy two-way binding to DB GUI widgets).

The services for persisting differ for all 3, but the Domain Objects are the same (PODOs with business logic and validation).

I'm just pointing out that I'm not a hardcore DB programmer... not anymore... smile

... but still, my PODO objects are generated with a code generator (which I wrote) from the DB schema. The idea is that when I change the DB schema, I run the code generator (to generate classes with Set & Get), so I have a single point of change. Those generated "simple bags of getters & setters" are my DTOs, from which I inherit, adding business rules & validation, and they become my Domain Objects.

Last edited by emk (2018-04-07 18:51:15)

Offline

#38 2018-04-08 05:29:28

edwinsn
Member
Registered: 2010-07-02
Posts: 1,218

Re: Benchmark information

emk wrote:

EDIT: I read Index» mORMot Framework» Difference between DTO and entity
From that thread, this corresponds to my needs:   smile

I know people must use DDD/DTO for a reason, but so far I feel it's a little over-abstraction for my own purpose, I might be wrong

That was my question! smile
I probably should have done some research instead of asking, so today I went ahead and read some material, and I found that the Wikipedia page explains it very clearly:

A data transfer object (DTO[1][2]) is an object that carries data between processes. The motivation for its use is that communication between processes is usually done resorting to remote interfaces (e.g., web services), where each call is an expensive operation.

And this:

...the whole purpose of DTOs is to shift data in expensive remote calls.

I understand it this way: if a TSQLRecord corresponds to a database table row, a DTO corresponds to a "database view row". I can see the benefits of such an approach; for example, assume we have the following db tables:
- TSQLDepartment (with 10 fields)
- TSQLStaff (with 20 fields)

If the client needs the following data:
- Staff's first name and last name.
- The name of the department.
That's only 3 fields from the two tables described above. We have two ways to transfer the data from the server to the client:
- The non-DTO way: on the client side, read the TSQLStaff record and the corresponding TSQLDepartment record; this involves two network round-trips and 30 fields.
- The DTO way: on the server side we define an interface method, such as IServerApi.GetStaffFullNameAndDeptName, and the client side needs only one call to the API; this involves a single network round-trip and 3 fields, and moreover the client-side code logic can be significantly simplified (see the sketch below).
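Something like this, if I understand the mORMot interface-based services correctly (just a sketch: the names, the GUID and the record-as-result are my own assumptions, not actual framework code):

    uses
      SynCommons, mORMot;

    type
      // the DTO: only the three fields this use case needs
      TStaffNameDTO = packed record
        FirstName, LastName, DepartmentName: RawUTF8;
      end;

      // the application-layer contract exposed to the clients
      IStaffApi = interface(IInvokable)
        ['{3F2B6E1A-9C47-4D8B-A5E2-7B1C0D9F6A34}']
        function GetStaffFullNameAndDeptName(aStaffID: TID): TStaffNameDTO;
      end;

    // server side, assuming a TStaffApi class implements IStaffApi using the ORM:
    //   Server.ServiceDefine(TStaffApi, [IStaffApi], sicShared);
    // client side:
    //   Client.ServiceDefine([IStaffApi], sicShared);
    // then one resolved call to GetStaffFullNameAndDeptName = one network round-trip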

Just my humble conclusion after reading several Wikipedia pages.


Delphi XE4 Pro on Windows 7 64bit.
Lazarus trunk built with fpcupdelux on Windows with cross-compile for Linux 64bit.

Offline

#39 2018-04-08 11:55:23

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,657
Website

Re: Benchmark information

Good points.

And IMHO, in the long term, the main point is what I wrote above: uncoupling the business logic from the client side, so that both can evolve without interfering with the other.

Online

#40 2018-04-09 14:52:50

emk
Member
Registered: 2013-10-24
Posts: 96

Re: Benchmark information

First of all thank you for all responses.

I don't "feel" the need for DTOs, because I'm "cheating" - I'm Load-ing aggregates from local replica, so performance is very good.
As @ab says I'm certain that any abstraction will pay dividends sooner or later, I'm not just there to collect them smile

In the near future, when I get to needing to deserialize TObjectLists, I'll try my best as proposed and make a pull request, and maybe @ab will polish it. Maybe I'll even try to add an option j2oUseCocForArrayNames (Convention over Configuration), e.g. if the JSON array name is "OrderDetails", I would look for a class named 'T' + 'OrderDetails' + 'Class'... I have to think about it...

Last edited by emk (2018-04-09 17:02:05)

Offline

#41 2018-10-20 06:16:57

EgonHugeist
Member
From: Germany
Registered: 2013-02-15
Posts: 190

Re: Benchmark information

emk wrote:

I wanted to evaluate Sqlite vs Firebird embedded.

In example 15, I switched to 10 records because is a realistic scenario (inserting an "order" object it has usually 7-10 record).

Insertion speed:

                             Direct    Batch       Trans        Batch Trans
SQLite3 (file full)        424    3518        2673        1756
SQLite3 (file full exc)  178    3324        2130        3278
ZEOS Firebird             924    2092       2942        2464
FireDAC Firebird        3795  11507       9182       11834

Why "FireDAC Firebird" is 3 times faster than "Zeos Firebird" and also 3 times faster than "SQLite full exclusive", since Sqlite access is maximum optimized in your framework? and I mean for "Trans" because I don't think I'll use "Batch" (anyway Firedac has also a very optimized batch).

Note: I need maximum safety, and since Firebird is implicit in "Full" mode is fair that Sqlite to be also in "Full" mode.

Thank you.

There is a problem in the tests. The tests (when done, per section) drop the DBs if they are local files. When you made these benchmarks, Zeos went the official way of creating the db using isc_dsql_execute_immediate. This then forced Zeos to reconnect, because with this method we are connected as 'CS_NONE'. I did some research and found out that the formerly protected isc_create_database is now public (http://docwiki.embarcadero.com/InterBas … database()); using it allows creating the db with all til/encoding settings, and we just don't need a reconnection.

For maximum-performance tests, comment out the Firebird mORMot improvements in SynDBZeos. I think ab added them to help Zeos. That code is deprecated for me. I already sent a patch to AB, but it was never applied.

Regarding maximum safety:
Looking at http://docwiki.embarcadero.com/RADStudi … _(FireDAC)
we see FD supports nested transactions. Knowing a bit more about FB, this is possible only with SAVEPOINTS (running in at least one main transaction).
Ahem, what happens if your connection is lost and the transaction is not committed? Savepoints are part of the transaction:
https://firebirdsql.org/file/documentat … -savepoint
Let's hope FD calls isc_commit/isc_commit_retain once the last savepoint is done...

Last edited by EgonHugeist (2018-10-20 06:28:49)

Offline

#42 2018-10-20 09:45:46

miab3
Member
From: Poland
Registered: 2014-10-01
Posts: 188

Re: Benchmark information

Here are my test results for ZEOS 7.3.x svn 5190 with the "EgonHugeist patch to AB"
http://zeoslib.sourceforge.net/viewtopi … 87#p104487

Running tests using Synopse mORMot framework 1.18.4849, compiled with Delphi 10 Seattle 32 bit, against SQLite 3.25.2, at 2018-10-17 18:15:16.

Insertion speed
 			Direct	Batch	Trans	Batch Trans
SQLite3 (file full)	73	2838	53768	89346
SQLite3 (file off)	147	9840	62237	164674
SQLite3 (file off exc)	11976	154259	73410	207796
SQLite3 (mem)		60020	224890	75601	227231
ZEOS SQlite3		87	1541	44640	106403
FireDAC SQlite3		9645	12301	37121	118404
UniDAC SQlite3		52	1616	23273	32985
ODBC MSSQL2012		1128	14357	1732	18670
UniDAC MSSQL2012	1437	17064	1678	20251
ZEOS ODBC_W MSSQL2012	1460	33256	2171	32624
ZEOS ODBC_A MSSQL2012	1449	28768	2027	30814
FireDAC MSSQL2012	1418	5591	1864	29216
ZEOS Firebird		7516	65938	26292	66596 <===============
FireDAC Firebird	3425	46534	20706	56006 <===============
Oracle			907	57058	2172	58148
ZEOS Oracle		880	58706	2443	54585
FireDAC Oracle		874	43531	1787	42579
UniDAC Oracle		764	3946	1332	4203
ZEOS PostgreSQL		1050	45217	2028	39399
FireDAC PostgreSQL	1027	14279	2503	14842
UniDAC PostgreSQL	767	8071	986	11503
ZEOS MySQL		6035	38582	8228	59141
FireDAC MySQL		3234	38782	3700	39074
UniDAC MySQL		5996	23636	4477	25934


Read speed
 			By one	All Virtual	All Direct
SQLite3 (file full)	3724	439676	435502
SQLite3 (file off)	3791	442987	443458
SQLite3 (file off exc)	78545	459643	455124
SQLite3 (mem)		77551	455622	453432
ZEOS SQlite3		62566	199808	355467
FireDAC SQlite3		5756	97503	136317
UniDAC SQlite3		1090	75864	104539
ODBC MSSQL2012		1756	85752	206568
UniDAC MSSQL2012	1446	105712	167824
ZEOS ODBC_W MSSQL2012	1205	80923	170660
ZEOS ODBC_A MSSQL2012	1211	94918	168560
FireDAC MSSQL2012	1638	105015	166577
ZEOS Firebird		26917	89026	127424
FireDAC Firebird	2008	61853	78055
Oracle			2186	95172	188928
ZEOS Oracle		2650	87449	175870
FireDAC Oracle		1451	60606	80372
UniDAC Oracle		719	28289	34184
ZEOS PostgreSQL		1893	70532	155265
FireDAC PostgreSQL	461	33792	44514
UniDAC PostgreSQL	237	57595	100926
ZEOS MySQL		7138	154397	286565
FireDAC MySQL		2171	85075	117918
UniDAC MySQL		369	51724	71370

Michal

Offline

#43 2018-11-27 12:53:15

miab3
Member
From: Poland
Registered: 2014-10-01
Posts: 188

Re: Benchmark information

ZEOS 7.3.x svn 5313
http://zeoslib.sourceforge.net/viewtopi … 47#p106447
http://sourceforge.net/p/zeoslib/code-0 … sting-7.3/

Running tests using Synopse mORMot framework 1.18.4860, compiled with Delphi 10 Seattle 32 bit, against SQLite 3.25.2, at 2018-11-27 09:14:07.

Insertion speed
				Direct	Batch	Trans	Batch Trans
SQLite3 (file full)		62	4335	54429	82760
SQLite3 (file off)		517	31259	71766	202061
SQLite3 (file off exc)		11314	152966	72400	209205
SQLite3 (mem)			60101	229652	73614	226295
ZEOS SQlite3			67	2110	46035	79746
FireDAC SQlite3			9482	12553	37267	118511
UniDAC SQlite3			61	1772	22236	34090
ODBC MSSQL2012			1435	14415	1986	18067
UniDAC MSSQL2012		1462	18763	1668	19231
ZEOS ODBC_W MSSQL2012		1499	35019	2446	31852
ZEOS ODBC_A MSSQL2012		1517	35348	2057	35715
FireDAC MSSQL2012		1470	5827	1911	26669
ZEOS Firebird			5158	66785	27861	66728
FireDAC Firebird		2731	54825	20145	54545
Oracle				733	46402	2064	64095
ZEOS Oracle			815	52754	2081	53002
FireDAC Oracle			752	46507	1653	48615
UniDAC Oracle			685	4107	1560	4477
ZEOS PostgreSQL			912	38186	1942	40973
FireDAC PostgreSQL		712	5212	2009	11381
UniDAC PostgreSQL		538	4050	852	13102
ZEOS MySQL			4985	16075	3715	32526
FireDAC MySQL			3342	19277	3401	18525
UniDAC MySQL			3304	14231	4715	18938

Read speed
				By one	All Virtual	All Direct
SQLite3 (file full)		3706	438250	444404
SQLite3 (file off)		3822	439521	441345
SQLite3 (file off exc)		78668	399584	417083
SQLite3 (mem)			77823	458127	462791
ZEOS SQlite3			61574	222518	399297
FireDAC SQlite3			5683	97376	138404
UniDAC SQlite3			1085	79344	107849
ODBC MSSQL2012			2086	104558	163478
UniDAC MSSQL2012		1597	94970	140611
ZEOS ODBC_W MSSQL2012		933	115673	156098
ZEOS ODBC_A MSSQL2012		1155	118360	161035
FireDAC MSSQL2012		1747	94501	136885
ZEOS Firebird			28708	95361	134289
FireDAC Firebird		1870	62597	77464
Oracle				2653	64740	176559
ZEOS Oracle			2211	106055	183466
FireDAC Oracle			1541	61422	81511
UniDAC Oracle			843	33065	36976
ZEOS PostgreSQL			1697	94089	160148
FireDAC PostgreSQL		562	48998	52107
UniDAC PostgreSQL		234	53475	86867
ZEOS MySQL			6971	177588	331564
FireDAC MySQL			1993	78520	108443
UniDAC MySQL			3552	119849	187899


Running tests using Synopse mORMot framework 1.18.4860, compiled with Delphi 10 Seattle 64 bit, against SQLite 3.24.0, at 2018-11-27 09:21:59.

Insertion speed
				Direct	Batch	Trans	Batch Trans
SQLite3 (file full)		57	2512	40670	66908
SQLite3 (file off)		495	30510	69267	107374
SQLite3 (file off exc)		11665	156142	71470	212901
SQLite3 (mem)			56336	212585	71990	226788
ZEOS SQlite3			67	1878	46265	55469
FireDAC SQlite3			9525	12829	37122	132478
UniDAC SQlite3			61	1854	19687	35632
ODBC MSSQL2012			1281	16030	1674	20786
UniDAC MSSQL2012		1402	17388	1711	18541
ZEOS ODBC_W MSSQL2012		1429	27564	1858	33016
ZEOS ODBC_A MSSQL2012		1458	34153	1796	29961
FireDAC MSSQL2012		1437	6034	1784	29839
ZEOS Oracle			1071	44421	2314	55952
FireDAC Oracle			1012	42695	2106	43659
UniDAC Oracle			850	3569	1555	3697
ZEOS PostgreSQL			1077	36446	1883	35907
FireDAC PostgreSQL		1176	12731	2352	12742
UniDAC PostgreSQL		695	7451	952	11353
ZEOS MySQL			1639	21351	2404	37001
FireDAC MySQL			1621	29597	1783	24727
UniDAC MySQL			1726	17573	2319	15542

Read speed
				By one	All Virtual	All Direct
SQLite3 (file full)		3522	493339	483231
SQLite3 (file off)		3712	454752	477783
SQLite3 (file off exc)		74517	507305	466374
SQLite3 (mem)			78977	505305	502108
ZEOS SQlite3			58273	300228	418900
FireDAC SQlite3			5935	109984	135376
UniDAC SQlite3			1103	95615	125294
ODBC MSSQL2012			1780	81300	222133
UniDAC MSSQL2012		1494	119852	162284
ZEOS ODBC_W MSSQL2012		1200	109232	185096
ZEOS ODBC_A MSSQL2012		1145	98125	167341
FireDAC MSSQL2012		1697	107517	161430
ZEOS Oracle			2660	89236	182481
FireDAC Oracle			1721	60866	87765
UniDAC Oracle			975	35850	40155
ZEOS PostgreSQL			1735	82725	144279
FireDAC PostgreSQL		571	38526	52295
UniDAC PostgreSQL		231	79927	94614
ZEOS MySQL			2161	94128	205364
FireDAC MySQL			1304	60307	63490
UniDAC MySQL			1778	114006	149325

Michal

Offline
