You are not logged in.
That's true if you did not add the $USE_SYNCOMMONS define to your project base.
I've recently added full data streaming to Zeos. This means that with Zeos you can write/read LOBs directly from the database (if the driver offers an API for that: FB/IB, Oracle, Postgres OID LOBs, MySQL, ASA, OleDB). So we need a temporary buffer for the streams.
@ab could you apply the following patch to SynDBZeos (compatible with all Zeos versions):
Index: SynDBZeos.pas
===================================================================
--- SynDBZeos.pas (revision 8968)
+++ SynDBZeos.pas (working copy)
@@ -1267,9 +1267,11 @@
Len: NativeUInt; // required by Zeos for GetPAnsiChar out param (not PtrUInt)
procedure WriteIZBlob;
var blob: IZBlob;
+ Raw: RawByteString;
begin
blob := fResultSet.GetBlob(col+FirstDbcIndex);
- WR.WrBase64(blob.GetBuffer,blob.Length,true); // withMagic=true
+ raw := blob.GetString;
+ WR.WrBase64(Pointer(raw),Length(Raw),true); // withMagic=true
end;
{$ifend}
begin // take care of the layout of internal ZDBC buffers for each provider
As a workaround, just add the USE_SYNCOMMONS define to your project's base conditionals.
The strange part is that with this 'old' commit, the Zeos Put subroutine from the above is not used at all in my project.
All Zeos-internal batches had been broken for a long time, because the batch abilities weren't determined correctly. Now you're back on fast Zeos batch bindings, without the strings mORMot used to create for you. Those are slow for Firebird; for as long as FB has existed, the recommendation has been to use parameters and native bindings.
Could you please update Zeos from SVN and test? R6380, see https://sourceforge.net/p/zeoslib/code-0/6380/.
Is BinaryWireResultMode true by default?
Should we enable it?
It is enabled by default, and you may disable it for ColumnToJSON.
I have and use libpq.so.5 on my Ubuntu system.
Zeos is not able to find it. I have to specify the library name in the connection URI. See https://github.com/synopse/mORMot/blob/ … s.pas#L709
So it depends on the actual system it runs on...
On your system, you can specify libpq.so.4 I guess in the SynDBPostgresLibrary global variable.
I have enhanced TSQLDBPostgresLib to automatically try to load the .so.5 then the .so.4 - see https://synopse.info/fossil/info/d0d2158e9f
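The fallback can be sketched in a few lines of Python/ctypes (a hypothetical `load_first` helper, not the actual SynDB code; the library names are the ones discussed above):

```python
import ctypes

def load_first(names):
    """Try each shared-library name in order; return the first handle
    that loads, or None if none of them is present on this system."""
    for name in names:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    return None

# Mirrors the .so.5-then-.so.4 probing; yields None on systems without libpq.
pq = load_first(["libpq.so.5", "libpq.so.4"])
```

The same probe order also explains the behavior above: whichever versioned name resolves first wins, and only if both fail does the load error surface.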
Curious, since it's added to the plain-driver locations... Any ideas?
We won't support *beep* SQL with bulky comments in our unit.
We are not a one-to-one replacement for TDataSet / SQL statements, but support the SQL generated by our ORM, and dedicated clean SQL.
If there is too much SQL to rewrite, with a lot of *beep* comments in it - just skip SynDB and use Zeos.
Not a problem - that's why I wrote 'novice state'. Just document that you do not support *beep* SQL.
I may add a way to remove /* */ comments in any kind of SQL... But the ? as part of the table/column name - no thanks!
The same goes for 'extended' 80-bit float support - we have a single floating-point type, which is double, so we don't have to circumvent this problem.
Maybe a misunderstanding: those values cannot be converted from string into a double... I get an overflow exception if I use doubles.
I've uploaded the whole SynDBZeos.pas again, not as a patch - might be easier for you: https://www.file-upload.net/download-13 … s.pas.html
Please test and apply the changes.
@AB,
Hi, thanks. Good start. I'm not so familiar with Git... old-school SVN user.
I know about the forum rules so please consider: https://www.file-upload.net/download-13 … patch.html
this patch includes:
- real transaction handling for Zeos
- adds the possibility to pass parameters appended to the database name (starting with '?'), see TSQLDBZEOSConnectionProperties.URI
- changes TSQLDBZEOSStatement.ReleaseRows. IIUC its purpose is to free memory in the background; that's supported since 7.2 already
- fixes possibly wrong MatchPatterns for the metadata queries.
About the whole point of SynDBPostgres.
I agree with mpv's approach. It was never a replacement for SynDBZeos, but an alternative.
There always will be benefits for alternatives. I like very much https://github.com/github/putty/blob/master/sshzlib.c approach.
I do understand your POV. It's not that I'm against it; it just feels like wasting valuable manpower. From my POV it's not your project's task to embed every server-access technique. Usually this is a never-ending story: the servers change behaviors and add functionality, so it's a permanent maintenance job which will never end.
A lot of our users tend to rely on mORMot for long-term projects.
Having alternatives is confusing at first (there are several ways of accessing the same DB), but it tends to pay off in the long term.
We would like to continue SynDBZeos support as the primary DB layer for mORMot. It is much better than ODBC or others.
But as mORMot is not maintained by me only today (mpv and others make tremendous contributions), some of our users told us that depending on Zeos only may be a risk.
So we always provided alternatives.
That's why I am so sensitive. Those guys are not up to date on what has happened in the background in the meantime. They should open their minds and give Zeos a fair chance.
Zeos/ZDBC is a great library for sure. What Michael did in terms of performances and features is awesome. He took a slowly supported and bloated library into something huge but fast. The best DB layer for Delphi/FPC for sure.
But ZDBC is a big and complex beast, mainly due to having to support a lot of native DB types, compatibility with old code, and all kinds of metadata and features we don't need from the mORMot ORM.
Adding ColumnsToJSON() support in ZDBC was the nicest addition I have ever seen in any DB library - even more since it fulfilled our ORM needs and integrated our SynCommons unit! But Zeos with mORMot is not obvious to set up, since it is a 3rd party.
PostgreSQL is our business-class RDBMS of choice, so being able to access and try it with no dependency is a benefit.
But there is nothing stopping users from trying SynDBZeos and using it if they want to. On the contrary: if SynDBZeos is faster/better/shinier, just use it!
Thanks for the nice words, guys. We also keep FPC support a high priority.
We also switched to FPC/Linux as main platform for server-side, and we can easier optimize SynDBPostgres code on this target than the huge code base of ZDBC, which is more Delphi-centric for good reasons.
For instance, the FPC heap is much slower than FastMM, so any additional allocation hurts. ZDBC tends to allocate more than directly accessing the libpq library - even if Michael did a great job reducing memory consumption. Switching from the FPC heap to other libraries like glibc, Intel TBB or jemalloc is not a viable choice, since it induces instability in execution: the slightest memory problem just aborts the executable - check https://github.com/synopse/mORMot/blob/ … ligned.pas
Now, we rather benchmark and tune for FPC and Linux - not Delphi and Windows.
In terms of performance and stability, bypassing ZDBC sometimes helps.
For instance, for our ORM, our SQLite3 direct layer is faster (by a huge margin), and will always be faster than going through ZDBC. It not only bypasses ZDBC, but also our SynDB layer.
Probably true. If you have applied the patch, you could add the commented code to the benchmark tests (see TSQLDBZEOSConnectionProperties.URI),
so you can compare your external SQLite with Zeos in all your modes, and you have fair benchmarks.
About binary/text with libpq:
- text is much easier to implement and maintain - this is one of the reasons why SynDBPostgres is so small and was quick to develop;
- binary support is complex and slows down things when used in ColumnToJSON() - letting PG directly write its text values into JSON is easier and faster; and our ORM uses mainly ColumnToJSON() - so ZDbcPostgreSqlResultSet version helps but is not mandatory.
You can simply turn the binary wire ON/OFF by adding
'BinaryWireResultMode=true' (or =false)
- see the stmt parameter section in SynDBZeos.pas. Regarding the code size: yes, I'm not happy about it either. It is huge because we keep backward compatibility high - we have users still on old MySQL 3, FB 1 or PostgreSQL 7. But as you pointed out, there is no performance penalty.
EgonHugeist wrote: Looking at ab's approach, it's at a nice novice state. Zeos is full of examples where his technique would probably fail.
Could you please elaborate? I don't understand what you mean.
Zeos is entangled in obscure tests. There are users doing things I would never do, but it's not forbidden and should be supported. An example SQL string:
'/*?? do we find these ?question-marks? as parameter ? */'+
'select * from people?2 where p_id > ?'+LineEnding+
'and p?red = ?'+LineEnding+
'/* ? and those marks? Are they ignored too?'+LineEnding+
'? Are they ignored on a multi-line comment as well?*/'+LineEnding+
'-- ? and those marks? Are they ignored too? On a single line comment?'
All servers allow such *beep*. Test your question-mark replacement: there are exactly two parameters to replace.
I'll remove the old ColumnToJSON overload soon. So thanks for the latest patches; the tests run again. Even if the current SynDBPostgres does not find libpq on my system, and I have no PG instance running on the default port. There is work left, Pavel.
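A hedged sketch of such a scanner in Python (not the Zeos parser): it skips string literals, block and line comments, and any `?` glued to an identifier, and on the statement above it reports exactly two parameters:

```python
def find_params(sql):
    """Return the offsets of real '?' parameter markers, skipping
    '...' string literals, /* */ and -- comments, and '?' characters
    glued to identifiers (sketch only; no escaped-quote handling)."""
    positions = []
    i, n = 0, len(sql)
    while i < n:
        if sql[i] == "'":                    # string literal
            i = sql.find("'", i + 1)
            i = n if i < 0 else i + 1
        elif sql.startswith("/*", i):        # block comment
            j = sql.find("*/", i + 2)
            i = n if j < 0 else j + 2
        elif sql.startswith("--", i):        # single-line comment
            j = sql.find("\n", i + 2)
            i = n if j < 0 else j + 1
        elif sql[i] == "?":
            ident = lambda c: c.isalnum() or c == "_"
            glued = (i > 0 and ident(sql[i - 1])) or \
                    (i + 1 < n and ident(sql[i + 1]))
            if not glued:                    # people?2 / p?red are names
                positions.append(i)
            i += 1
        else:
            i += 1
    return positions

sql = ("/*?? do we find these ?question-marks? as parameter ? */"
       "select * from people?2 where p_id > ?\n"
       "and p?red = ?\n"
       "/* ? and those marks? Are they ignored too?\n"
       "? Are they ignored on a multi-line comment as well?*/\n"
       "-- ? and those marks? Are they ignored too? On a single line comment?")
assert len(find_params(sql)) == 2
```

The glued-identifier rule is an assumption of this sketch; a real dialect-aware parser would consult the server's identifier-quoting rules instead.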
Alternatives challenge us, and help to make better code
Performance isn't everything. But if I can help, I'll do it - hope you guys will too. ZeosLib received bad criticism regarding performance, most of it from the TDataSet users, but from you as well. And they were right in the past. In the meantime the TDataSet layer has also sped up very nicely; everything which is faster on DBC is logically faster for the TDataSets as well. I have no other components to compare against, but I would start from the premise that ZeosDBO nowadays has nice, comparable performance (and that's for free), even if all other component sets are optimized for the DS approach only.
Another reason for the binary wire was to suppress some issues PG has with strings. E.g.:
insert into double_values values('-2.2250738585072014E-308'),('-1.7976931348623157E+308'),('2.2250738585072014E-308')....
you can send/retrieve them. In the past we had the Extended type in our code. I dropped that type since x64 no longer has the REAL10 format. So we cannot read back the value over the string protocol with our native StrToFloat using double precision: there is exactly one digit too many before the exponent.
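For reference: those three literals are exactly the shortest round-trip representations of the IEEE-754 double min-normal and max values, so any correctly rounding double parser accepts them - which hints that the overflow comes from the conversion path rather than the values themselves. A quick check in Python (whose floats are IEEE doubles):

```python
import sys

# The literals from the INSERT above are exactly DBL_MIN (min normal)
# and DBL_MAX, printed with 17 significant digits.
assert float("2.2250738585072014E-308") == sys.float_info.min
assert float("-1.7976931348623157E+308") == -sys.float_info.max

# 17 significant digits are enough to round-trip any IEEE double exactly.
for s in ("-2.2250738585072014E-308", "-1.7976931348623157E+308",
          "2.2250738585072014E-308"):
    d = float(s)
    assert float(repr(d)) == d
```

This is only a statement about IEEE doubles; it does not dispute the Delphi behavior reported above, where a StrToFloat path routed through the 80-bit Extended type can raise an overflow.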
Thanks for the feedback about caching statements.
Worth thinking more about it.
Two hints:
- TDataSet/TQuery is the slowest path, anyway...
- SynDBZeos: the cache was also buggy, depending on the driver.
So the cache can be enabled on purpose, after proper testing - it is the user's responsibility.
By default, we may just disable it. Better safe than buggy.
Writing bad criticism is quickly done and very easy. What !is! buggy? Or !was! something buggy?
I start from the premise you mean !IT IS BUGGY!, otherwise you wouldn't consider disabling the stmt cache.
Any examples? You never opened a ticket, nor did you enter the Zeos forum to report problems.
I usually fixed everything I could see or which was reported. So such statements are not fair. <- without being rude
Nice job Pavel.
Believe me - the binary protocol gives us nothing in terms of performance or memory usage. Just PITAs with OIDs/htonl/doubles etc.
Well, this might be true for your example. Nowadays those numbers grow quickly. Moreover, the PG date (4 bytes) / time (8 bytes) / timestamp (8 bytes) / doubles / numerics are smaller than their string forms.
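The size claim can be sketched with the binary wire encodings (to my knowledge, the PG binary format sends DATE as a 4-byte day count and TIMESTAMP as an 8-byte microsecond count, both relative to 2000-01-01; the concrete values below are made up for illustration):

```python
import struct

# Binary wire encodings (network byte order, per the PG binary format):
date_bin = struct.pack("!i", 8780)            # DATE: 4-byte day offset
ts_bin   = struct.pack("!q", 758550896789012) # TIMESTAMP: 8-byte microseconds

# The same values in their text forms:
date_txt = b"2024-01-15"                      # 10 bytes
ts_txt   = b"2024-01-15 12:34:56.789012"      # 26 bytes

assert (len(date_bin), len(ts_bin)) == (4, 8)
assert len(date_bin) < len(date_txt) and len(ts_bin) < len(ts_txt)
```

So for temporal and numeric columns the binary form is a fraction of the text size, which is the point being made here; whether that outweighs the htonl/OID handling cost is the debate in this thread.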
@danielkuettner - I spoke with the Zeos maintainer maybe a year ago (and even contributed small fixes to Zeos). I know how the Postgres binary protocol is implemented in Zeos (it broke my application in some very specific places, so I'm currently on an old Zeos version).
That is true, but the binary wire protocol is optional, except for parameter bindings. You probably mean this: https://sourceforge.net/p/zeoslib/tickets/336/.
The main Zeos (Postgres) performance bottleneck (we fix here in SynDBZeos) is:
- too many interfaces (I mean Delphi interfaces)
The interfaces indeed had been a problem where we used virtual methods: the compiler produced duplicate offset calculations for the VMTs. In the meantime, Zeos no longer uses virtual methods in the most relevant places, so Zeos access through an interface has the same performance as using objects. Thus this argument does not count for me.
The main Zeos (Postgres) performance bottleneck (we fix here in SynDBZeos) is:
- a too-generic SQL statement parser (it's perfect - it can parse almost any SQL syntax - but the cost is complexity)
This parser is also no longer slow in 7.3. I've added PChar markers instead: no memory allocations are done, except to hold the markers in a list. The parser works perfectly to replace the question marks for Oracle and Postgres. Looking at ab's approach, it's at a nice novice state. Zeos is full of examples where his technique would probably fail.
The main Zeos (Postgres) performance bottleneck (we fix here in SynDBZeos) is:
- too many strings (a big problem for heavy multi-threading)
Now you make me curious. Too many strings? While you're using the string result and binding techniques? What do you mean?
Believe me - the binary protocol gives us nothing in terms of performance or memory usage. Just PITAs with OIDs/htonl/doubles etc.
Yesterday I found the reason for the performance penalty. I've already committed the changes. So on my side SynDBZeos is slightly faster.
Moreover, SynDBZeos is outdated. I sent AB several mails with fixes, but all of them were ignored last year.
Relevant changes in my maintained SynDBZeos:
- StartTransaction (including nested transactions), so the AutoCommit mode is no longer required.
- full out-param support for SPs without using IZCallableStatement
- better ColumnsToJSON with several JSON serialization options (mainly for date values)
- correctly determine the batch abilities on connect, so the define is obsolete.
@AB is https://synopse.info/fossil/info/9fe77d6f21 really necessary? No SQL standard any more? Is that syntax supported by UniDAC/FireDAC? It broke all Zeos compatibility.
@cybexr
FYI: fixed in trunk and all testing branches.
Maybe you'll find time to confirm it?
@mpv
I did look at your pull request. The line
fDataBase.SetReadOnly(False)
is not required; it's the Zeos default.
Regarding the patch for 7.2-patches, you should:
open a ticket on https://sourceforge.net/p/zeoslib/tickets/
tag it as a feature request and describe its purpose,
and add a diff/patch file.
I'm willing to merge if there is no other behavior change. That's it.
Regarding performance: in 7.3 all the interface calls to the IZPlainDriver descendants no longer exist (except for the dblib protocol - lack of time to do it).
So I would start from the premise that 7.3 is again a bit faster than 7.2 was.
@mpv
The implicit call to TZPostgresSQLConnection.Open triggered by setting the readonly property was fixed by me a month ago, see:
https://sourceforge.net/p/zeoslib/code-0/5453/tree/ - I ran into the same issue in one of my projects. That patch is already merged to trunk.
What is TSQLDBZEOSConnection.Open? Do you mean TSQLDBZEOSConnection.Connect? Huge surprise while running the tests: "Connect" is never called.
-> I tried to move the determination of the batch abilities (I'm not happy about your define solution) into TSQLDBZEOSConnection.Connect, but I failed. I sent AB an update for SynDBZeos; the last one was ignored, but maybe the latest one will be accepted.
Michael
Current Zeos 7.3 is NextGen-ready, btw.
@greedy
You don't need to do that today.
....
class function URI(aServer: TSQLDBDefinition;
const aServerName: RawUTF8; const aLibraryLocation: TFileName='';
aLibraryLocationAppendExePath: boolean=true): RawUTF8; overload;
...
Just use the third parameter and define your lib.
@EgonHugeist
for Oracle - mycolumn: varchar2(10) -
SynDBOracle will produce a dataset containing
<FIELD fieldtype="string.uni" attrname="mycolumn" WIDTH="22"/>
while SynDBZeos will produce a dataset like:
<FIELD attrname="mycolumn" fieldtype="bin.hex" SUBTYPE="Text"/>
Which dataset? Looking at that, I do understand your point of view.
@AB
Where is this information coming from? Do I miss a point in mORMot wonderland?
3. BTW, the ZDBC resultset seems to lose the field-length info, so it's not an alternative for me.
Could you please explain what that means? Which info is lost?
Resolved in SVN (Trunk/testing-7.3)
A test case was added; sorry for the inconsistency.
Everything *new* might be risky, but without a test case/ticket on the bug tracker or a post in the Zeos forum you shouldn't expect help immediately.
Have fun
I wanted to evaluate Sqlite vs Firebird embedded.
In example 15, I switched to 10 records because that is a realistic scenario (inserting an "order" object usually involves 7-10 records).
Insertion speed:
                          Direct   Batch   Trans   Batch Trans
SQLite3 (file full)          424    3518    2673          1756
SQLite3 (file full exc)      178    3324    2130          3278
ZEOS Firebird                924    2092    2942          2464
FireDAC Firebird            3795   11507    9182         11834
Why is "FireDAC Firebird" 3 times faster than "ZEOS Firebird", and also 3 times faster than "SQLite full exclusive", since SQLite access is maximally optimized in your framework? And I mean for "Trans", because I don't think I'll use "Batch" (anyway, FireDAC also has a very optimized batch).
Note: I need maximum safety, and since Firebird is implicitly in "Full" mode, it is fair that SQLite should also be in "Full" mode.
Thank you.
There is a problem in the tests. The tests (if ready, by section) drop the DBs if the DB is a local file. When you made these benchmarks, Zeos went the official way and created the DB using isc_dsql_execute_immediate. This then forced Zeos to reconnect, because with this method we're connected as 'CS_NONE'. I did some research and found out that the previously protected isc_create_database is now public (http://docwiki.embarcadero.com/InterBas … database()), which allows creating the DB with all the TIL/encoding settings, so we just don't need a reconnection.
For maximum-performance tests, comment out the Firebird mORMot improvements in SynDBZeos. I think ab made those to help Zeos; that code is deprecated for me. I already sent a patch to AB, but it was never applied.
Regarding maximum safety:
Looking at http://docwiki.embarcadero.com/RADStudi … _(FireDAC)
we see FD supports nested transactions. Knowing a bit more about FB, this is possible only with SAVEPOINTs (running inside at least one main transaction).
Ahem - what happens if your connection is lost and the transaction is not committed? Savepoints are part of the transaction:
https://firebirdsql.org/file/documentat … -savepoint
Let's hope FD calls isc_commit/isc_commit_retain once the last savepoint is done.
Your assumptions about DBs were correct in the 1980s, but DB storage engines have changed a lot since!
A BIGINT doesn't take more than a byte, in some DBs. Even CLOBs (nvarchar(max)) are faster and use less space than size-specific fields in modern DBs today.
See e.g. https://www.postgresql.org/docs/current … acter.html and the performance note.
?? That's a PG-only thing, and just an advantage for updates, IIRC. It has nothing to do with "modern". An index scan gets slower here as a side effect.
What's the default characterset of the database?
I can't confirm your findings.
One scenario I could imagine is an FB DB with character set 'NONE' and undefined collations on the field.
A Windows client would write raw data in a Windows codepage, while a *nix client expects UTF-8.
To be honest, we are aware of the 'CS_NONE' issue, and with 7.3 we already announced cutting support for all DB character sets where we'd need a crystal ball to guess the encodings.
See: https://sourceforge.net/p/zeoslib/wiki/ … 20release/
If your DB uses CS_NONE, make a dump and recreate the DB!
@AB there is no difference for the XE.1 compiler compared to the other Unicode IDEs.
Hi,
have a look at https://firebirdsql.org/file/documentat … types.html
section "Character Indexes".
Most DBs (except SQLite and Postgres) limit indexes to the size of the variable character string. As documented, the maximum length for an index equals one quarter of the page size.
Your field Nome is an unlimited blob, sub-type TEXT. Firebird can't create an index here. Moreover, fetching such data is slower than fetching a varchar(255), for all known DBs (except SQLite and Postgres).
Limit your field size to a value you actually need:
property Nome: RawUTF8 index 255 read fNome write fNome stored AS_UNIQUE
To be clear: you need to re-create this table or do the altering manually. Just changing the code as suggested would not resolve your problem.
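As a rule of thumb, the quarter-of-page-size limit translates like this (a hypothetical helper; the real Firebird limit also subtracts a few bytes of key overhead):

```python
def max_index_chars(page_size, bytes_per_char):
    """Approximate upper bound on the indexable character length in
    Firebird: an index key may not exceed a quarter of the page size
    (hypothetical helper; real limits subtract some key overhead)."""
    return (page_size // 4) // bytes_per_char

# 8 KB pages with UTF8 (worst case 4 bytes/char): roughly 512 characters,
# so a varchar(255) is safely indexable while an unlimited field is not.
assert max_index_chars(8192, 4) == 512
assert max_index_chars(16384, 1) == 4096
```

This also shows why `index 255` in the property declaration above is a safe choice across page sizes.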
@mpv,
this is correct, I'd overlooked this! It was my pull request that Arnaud applied. So I'm sorry for this inconsistency.
Hi JD,
{$if defined(ZEOS73UP) and defined(USE_SYNCOMMONS)}
fResultSet.ColumnsToJSON(WR);
{$ELSE}
...
as Arnaud wrote: the IZResultSet.ColumnsToJSON is part of Zeos, defined in ZDbcIntfs.pas.
It skips the interface calling chain and brings the best per-driver performance for writing the JSON contents.
I wonder about your regression. As the define shows, you can use the procedure only if both defines are enabled.
The ZEOS73UP define is located in Zeos.inc (included in SynDBZeos.pas) and is available only in the 7.3 branches.
The second define is yours to set in your project.
Check for old revisions of Zeos on your computer; it looks to me like you're mixing old files somewhere. Check the USE_SYNCOMMONS define too.
@hnb
great news for Zeos, thanks to all.
The problem is this:
FConnection := TSQLDBZEOSConnectionProperties.Create(TSQLDBZEOSConnectionProperties.URI(dMSSQL,'SERVER'),'DATABASE','sa', '');
FSQLDBRows := FConnection.Execute('SELECT NazwyMiast FROM Miasta WHERE NazwyMiast LIKE ? ', [Name]);
in combination with the LCL issue SystemCodePage := utf8;
Comment in ZDbcDBlib:
{note: this is a hack from a user-request of synopse project!
Purpose is to notify Zeos all Character columns are
UTF8-encoded. e.g. N(VAR)CHAR. Initial idea is made for MSSQL where we've NO
valid tdsType to determine (Var)Char(Ansi-Encoding) or N(Var)Char(UTF8) encoding
So this is stopping all encoding detections and increases the performance in
a high rate. If Varchar fields are fetched you should use a cast to N-Fields!
Else all results are invalid!!!!! Just to invoke later questions, reports!}
.....
The user attaches ANSI-encoded fields, not nvarchar fields.
It would help to use the sybdb.dll (your_zeos_folder\lib\freetds\32 bits + iconv) + the FreeTDS protocol + Connection.Properties.Values['ClientCodepage'] := 'UTF8';
I propose using the ODBC driver of SynDB, or the Zeos ODBC driver, as described in the \SQLite3\Samples\15 - External DB performance project.
@DDGG
I don't want to be rude, I just want to help... But
So it's ok to create a new TODBCConnectionProperties every time I need to query something?
this IS strange to me. (;
Nope, I don't think so. mORMot just needs the GetMoreResults logic to query all pending results. And as I said, it's a question of design.
I propose you drop the SPs and use native queries for the job, if that is acceptable for you.
@DDGG
the advice was: "Try Zeos instead",
but I'm asking myself if it would help (using the Callable stmt API instead)... mORMot simply isn't prepared/made for using multiple results...
AB, what do you think? FYI: that's why Zeos splits statement and result records... the two are not related.
Use the TZReadOnlyQuery in unidirectional mode.
Looking at the thrown error... Is it possible you fetch too many rows? Use chunks instead.
Regression in Zeos. Please update from SVN.
Works like a charm! Thanks Arnaud.
I don't know why you suggest that. A minimal simple case:
function ShowMyMissingName: RawJSON;
var
Item: Variant;
Item2: TDocVariantData;
begin
Item := _JSON('{"ID": 1,"Notation": "ABC", "Price": 10.1, "CustomNotation": "XYZ"}');
Item2.InitCopy(Item, []);
Item2.I['ID'] := 2;
Item2.Delete(Item2.GetValueIndex('CustomNotation'));
Result := VariantSaveJSON(Item);
end;
Result is:
{
"ID": 1,
"Notation": "ABC",
"Price": 10.1,
"": "XYZ"
}
You have an inconsistency, because all copies of the origin point to the same VName array. There is no difference between late binding and plain TDocVariant.
Each origin becomes invalid if I delete fields from the copy, or from the copy of a copy, etc.
The funny thing is, if you now add a new field+value to Item2, like:
function ShowMyMissingName: RawJSON;
var
Item: Variant;
Item2: TDocVariantData;
begin
Item := _JSON('{"ID": 1,"Notation": "ABC", "Price": 10.1, "CustomNotation": "XYZ"}');
Item2.InitCopy(Item, []);
Item2.I['ID'] := 2;
Item2.Delete(Item2.GetValueIndex('CustomNotation'));
Item2.AddValue('mORMot', 100);
Result := VariantSaveJSON(Item);
end;
the original item gets a new name for the value "XYZ":
{
"ID": 1,
"Notation": "ABC",
"Price": 10.1,
"mORMot": "XYZ"
}
where I would expect "CustomNotation".
Arnaud, I wouldn't bother you if there were a solution.
It is confusing, because changing the test case to:
function ShowMyMissingName: RawJSON;
var
Item, Arr: Variant;
begin
Arr := SynCommons._Arr([]);
Item := _JSON('{"ID": 1,"Notation": "ABC"}');
Item.Add('Price', 10.10);
Item.Add('CustomNotation', 'XYZ');
Arr.Add(_Copy(Item)); // <------------------with _Copy()
Item.Delete('Price');
Arr.Add(Item);
Result := VariantSaveJSON(Arr);
end;
doesn't help either.
OK, second test case for you (everything done twice, with _Copy() and _Unique()):
function ShowMyMissingName: RawJSON;
var
  Item, Item2, Item3, Arr: Variant;
begin
Arr := SynCommons._Arr([]);
Item := _JSON('{"ID": 1,"Notation": "ABC", "Price": 10.1, "CustomNotation": "XYZ"}');
Arr.Add(Item);
_Unique(Item); // as documented in https://synopse.info/files/html/Synopse … l#TITLE_43
Item.Delete('Price'); // this still affects element 0 in the array, because you still reference the same VName array
Item2 := _Copy(Item); // as you can see, I copy, not by ref
_Unique(Item2); // now I also make the item unique, which is redundant because _Copy() does this already
Item2.ID := 2;
Arr.Add(Item2); //i add second item to the array
Item3 := _Copy(Arr._(1)); //copy second element of Arr into Item3 (currently not assigned)
_Unique(Item3); //make item3 unique which is redundant too
Item3.Delete('CustomNotation'); //after doing this step we'll have a json parse error for element[0] in the array
Item3.ID := 3;
Arr.Add(Item3); //add item3 to array as third element
Result := VariantSaveJSON(Arr);
end;
Result is:
[{
"ID": 1,
"Notation": "ABC",
"": 10.1,
"": "XYZ"
}, {
"ID": 2,
"Notation": "ABC",
"": "XYZ"
}, {
"ID": 3,
"Notation": "ABC"
}]
Done as documented, I thought. But what has happened to my array??
Third case for you:
function ShowMyMissingName: RawJSON;
var
  Item, Arr: Variant;
begin
Arr := SynCommons._Arr([_JSON('{"ID": 1,"Notation": "ABC", "Price": 10.1, "CustomNotation": "XYZ"}'), _JSON('{"ID": 2,"Notation": "XYZ", "Price": 9.1, "CustomNotation": "aaaa"}')]);
Item := _Copy(arr._(0));
_Unique(Item); // make Item unique, which is redundant because of _Copy()
Item.Delete('CustomNotation');
Result := VariantSaveJSON(Arr);
end;
Result is:
[{
"ID": 1,
"Notation": "ABC",
"Price": 10.1,
"": "XYZ"
}, {
"ID": 2,
"Notation": "XYZ",
"Price": 9.1,
"CustomNotation": "aaaa"
}]
Done as documented too. Is this really expected?
I know this case isn't usual.
However, what do you think about this:
function TDocVariantData.Delete(Index: integer): boolean;
const RefCountOffSet = SizeOf(SizeInt)+SizeOf(Longint); // <- depends on compiler and target (just a proposal)
var N: TRawUTF8DynArray;
I: Integer;
begin
if cardinal(Index)>=cardinal(VCount) then
result := false else begin
dec(VCount);
if VName<>nil then begin
If PLongInt(NativeUInt(VName)-RefCountOffSet)^ > 1 then begin
System.SetLength(N, Length(VName)); //<- make a unique dynarr
for i := 0 to High(VName) do
if (Pointer(VName[i]) <> nil) or ((I < High(VName)) and (Pointer(VName[i+1]) <> nil)) then // two empty names lead to parsing errors
N[i] := VName[i] else
Break;
VName := N;
end;
VName[Index] := '';
end;
VarClear(VValue[Index]);
if Index<VCount then begin
if VName<>nil then begin
MoveFast(VName[Index+1],VName[Index],(VCount-Index)*sizeof(pointer));
PtrUInt(VName[VCount]) := 0; // avoid GPF
end;
MoveFast(VValue[Index+1],VValue[Index],(VCount-Index)*sizeof(variant));
TVarData(VValue[VCount]).VType := varEmpty; // avoid GPF
end;
result := true;
end;
end;
This would reduce the memory allocations to a minimum, but there are the offsets and the type of RefCount... which may differ per compiler + target.
Otherwise...
The only code that would work would be something like:
Item := _JSON(VariantSaveJSON(Arr._(0)));
Should I go this route?
PS:
To successfully reproduce the memory leak of an AnsiString, just add
Arr._(0).Add('Test', 10);
at the end of my test case.
Edit: weird - I can't reproduce the leak any more after restarting the IDE. But my report about a missing field name remains.
Hi Arnaud,
in some circumstances I've had reports of leaking AnsiStrings in my projects. I remember there was a forum discussion about it... but I can't find it any more.
I think those reports and my new issue have the same cause.
I've a little test case for you:
function ShowMyMissingName: RawJSON;
var
Item, Arr: Variant;
begin
Arr := SynCommons._Arr([]);
Item := _JSON('{"ID": 1,"Notation": "ABC"}');
Item.Add('Price', 10.10);
Item.Add('CustomNotation', 'XYZ');
Arr.Add(Item);
Item.Delete('Price');
Arr.Add(Item);
Result := VariantSaveJSON(Arr);
end;
While tracing the values in the Arr variant, you'll find a missing name for "Price" in element [0] of the JSON, whereas the value still exists.
Is this expected? Did I forget to set an option?
Looking at
procedure TDocVariant.CopyByValue(var Dest: TVarData; const Source: TVarData);
var S: TDocVariantData absolute Source;
D: TDocVariantData absolute Dest;
i: integer;
begin
//Assert(Source.VType=DocVariantVType);
if Dest.VType and VTYPE_STATIC<>0 then
VarClear(variant(Dest)); // Dest may be a complex type
D.VType := S.VType;
D.VOptions := S.VOptions;
D.VKind := S.VKind;
D.VCount := S.VCount;
pointer(D.VName) := nil; // avoid GPF
pointer(D.VValue) := nil;
if S.VCount=0 then
exit; // no data to copy
D.VName := S.VName; // names can always be safely copied
// slower but safe by-value copy
SetLength(D.VValue,S.VCount);
for i := 0 to S.VCount-1 do
D.VValue[i] := S.VValue[i];
end;
you assign D.VName from S.VName, which increments the array refcount but not the string refcounts...
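A minimal Python analogy of that by-reference name sharing (standing in for the VName dynarray; not mORMot code):

```python
# Two "documents" share one names list (by-reference copy), while the
# values were copied; blanking a name through the copy is visible in
# the original -- the same effect as the shared VName dynarray.
names = ["ID", "Notation", "Price", "CustomNotation"]
original = {"names": names, "values": [1, "ABC", 10.1, "XYZ"]}
copy = {"names": names, "values": list(original["values"])}

copy["names"][3] = ""              # Delete() blanks the name in the copy...
assert original["names"][3] == ""  # ...and the original loses it too
```

The values survive because their array really was copied; only the names array is aliased, which matches the "value present, name missing" JSON output reported above.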
The reason for this behavior:
function TDocVariantData.Delete(Index: integer): boolean;
begin
if cardinal(Index)>=cardinal(VCount) then
result := false else begin
dec(VCount);
if VName<>nil then
VName[Index] := ''; //EH: check the refcount ... before doing this and after -> nothing changed even if refcount > 1
VarClear(VValue[Index]);
if Index<VCount then begin
if VName<>nil then begin
MoveFast(VName[Index+1],VName[Index],(VCount-Index)*sizeof(Pointer));
NativeUInt(VName[VCount]) := 0; // avoid GPF
end;
MoveFast(VValue[Index+1],VValue[Index],(VCount-Index)*sizeof(variant));
TVarData(VValue[VCount]).VType := varEmpty; // avoid GPF
end;
result := true;
end;
end;
To me it seems you start from the premise that, on setting
VName[Index] := '';
the compiler checks the dynarray refcount and copies the array on changing an element. But I've noticed this doesn't happen. Check the refcount via
PLongInt(NativeUInt(VName)-8)^
and the string refcounts, of course.
Exchanging your assignment in TDocVariant.CopyByValue with:
// slower but safe by-value copy
SetLength(D.VName,Length(S.VName));
for i := 0 to High(S.VName) do
if Pointer(S.VName[i]) <> nil then
D.VName[i] := S.VName[i] else
Break;
resolves my problem. The compiler makes a unique array and increments the string refcounts as expected.
What do you think?
Edit:
I know you want a fast copy-by-ref, but this should be optional, applied only if dvoValueCopiedByReference is set. Am I wrong?
Cheers, Michael
Hi All,
well, this is exactly my issue too. The new LCL IDEs with FPC3 just use the {$codepage utf8} define. The result: all <Ansi>Strings (CP_ACP) have a default encoding of UTF-8. So I did this in a different way, by simply storing a ZOSCodePage which is initialized on startup like:
procedure SetZOSCodePage;
begin
{$IFDEF MSWINDOWS}
ZOSCodePage := GetACP; //available for Windows and WinCE
{$ELSE}
{$IFDEF WITH_DEFAULTSYSTEMCODEPAGE}
ZOSCodePage := Word(DefaultSystemCodePage); // for the Unicode IDEs and FPC 2.7+, but broken because of Lazarus
{$ELSE}
ZOSCodePage := CP_UTF8; //how to determine the current OS CP?
{$ENDIF}
{$ENDIF}
end;
Instead of trusting DefaultSystemCodePage I just use my localized one. Some more remarks: since the LCL is doing that, you'll need to check all casts, or better, use your own conversions like the ones I've introduced in Zeos.
Hope it helps
@Alfred
consider that ALL Lazarus/CodeTyphon IDEs currently use the {$codepage utf8} define, because of the wrong premise that every string is UTF8 encoded.
From my personal POV you'll never get stable behavior running unless you track that define, which then breaks our tests. The FPC DefaultSystemCodePage is always set to UTF8, so the failing regression tests would need to be UTF8 encoded !!IF!! this define was set. I have the same trouble on the Zeos side. My tests just make it possible to run in three environments plus some sub-settings (unrelated here): CP_ACP, CP_UTF8 and CP_UTF16. Zeos is able to provide a "floating" behavior ... up until the latest FPC3 (just for my tests; Zeos itself can do what I want!). So we'll tag the current 7.2-testing as an RC candidate in the next days.
Regards, Michael
Hello Alfred,
we (the Zeos team) are in the same trouble at the moment. Since the FPC3 generation all my non-US-ASCII tests are broken. Some of them I got running in the meantime by declaring constants like:
MyTestString: UnicodeString = 'äüö';
but up until now I could not find a common design for UTF8 or WIN1252 raw encoded strings which works for all Delphis and FPC 2.4..3.x.
So my current intention for our test suites is to continue with the UnicodeString design and replace const-declared values with vars that are initialized on startup. Ugly, I know...
Some remarks for the FPC users:
Delphi by default stores the developer's code page of the current Delphi environment in the project file, so the compiler always knows how to compile const strings that require conversion. From my POV this is the missing point in FPC: I do not understand how it is supposed to decide which encoding a constant string has...
Since FPC3 we've had so much trouble: memory leaks with RawByteStrings for a long time, and users of CodeTyphon 5.4...5.6 are in the same trouble too. We got loads of bad reviews, just because people didn't know anything about the broken FPC 3.x generation. The coming CT 5.7 is memory-leak free. Phew, good news for Zeos. (Personally I don't know why; the CT guys also patch the compiler, so CT 5.7 will have a replaced memory manager ...)
And of course I know about the damn slow string conversions FPC is doing. I just suppressed such things by introducing some mover functions behind defines in the current Zeos design. Which means: allocating a new RawByteString and moving the data is almost always faster and safer than trusting what current FPC does with the RBS strings. You won't find a single SetCodePage() call anywhere in the Zeos code, just because I've been in this trouble for a long time. And of course I can undef all those defines once they make me happy (:
Yes, this costs performance, but note such things are not relevant for Zeos+mORMot. It only comes into play when Zeos does the conversion control itself, e.g. for FreeTDS or FireBird+CS_NONE.
Regarding http://wiki.freepascal.org/FPC_Unicode_support ... I'm confused too. They write about the "source file codepage", but I can't locate it anywhere in a project. Also the intended "no conversion happens, just increase the refcount" (if the target or source string is a RawByteString) sounds very nice... but I can't confirm it behaves that way.
There are some more weird things like that... Just compare a PWideChar()^ wide character with a const-declared char like 'a' or #0. Or just move some WideChars from source to dest like PW1^ := PW2^ ... A performance nightmare, dead slow! Using PWord(PW1)^ := PWord(PW2)^ removes the performance leak.
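A sketch of that workaround as a small helper (the function name is mine; the speed claim is the observation above, not something I have benchmarked here):

```pascal
// Copies Count WideChars; the PWord casts use a raw 2-byte move and
// avoid the slower WideChar assignment path observed on some FPC versions.
procedure MoveWideChars(Src, Dst: PWideChar; Count: Integer);
var
  i: Integer;
begin
  for i := 0 to Count - 1 do
  begin
    PWord(Dst)^ := PWord(Src)^; // instead of Dst^ := Src^
    Inc(Src);
    Inc(Dst);
  end;
end;
```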
@shobits1
I wouldn't agree with saying mORMot or Zeos are not ready for FPC3... Rather, FPC3 isn't ready for such projects.
We missed loads of things pre-FPC3, but we had stable behavior up until 2.7.
Cheers, Michael
AFAIR Zeos ran into trouble on WinCE because FPC uses the $UNICODE define to indicate a Unicode platform instead of a Unicode IDE.
No idea if this issue still remains or if MarcoVV replaced the defines.
@sevo, did you try to compile Zeos on WinCE?
Michal is from Poland, so it should be CP1250 (Central European).
On my side the code compiles and runs fine with my whole compiler battery. No issues to be seen, just better performance results (:
A good point!
Since FPC 2.7 the official codepage-agnostic strings have been introduced.
Looking at SynCommons, all of RawUnicode, RawUTF8, WinAnsiString and RawByteString are wrongly declared as plain AnsiString for all non-Unicode targets, including FPC 2.7+.
Instead of setting the codepages, Arnaud should check the FPC version, or add a global property-based define as in Synopse.inc.
@All
since Zeos 7.3 I have made a little code join with/for the Synopse project.
There is a new define USE_SYNCOMMONS available which then directly uses SynCommons.TJSONWriter.
You can either add it to your compiler options or just uncomment the define in Zeos.inc.
This define finally brings the best performance for ZDBC + mORMot. SynCommons.TJSONWriter is very nicely optimized; because of this I decided to skip a Zeos-own approach and simply use the one from mORMot. Now Zeos is able to return JSON directly, so both projects benefit from the latest patches.
As you've noticed, Michal (miab3) also tested the two new Zeos ODBC drivers. I wrote them to make the leap for MSSQL + Zeos on non-Windows platforms, and because I wanted to know what gives the best MSSQL access rates ... OleDB is the winner in this domain.
Numbers are talking, guys ... from !my! POV there are no performance advantages in using FireDAC or UniDAC either (are there other advantages for mORMot?). Remembering the first results of the 7.1 series, Zeos access rates were between FireDAC and UniDAC; now most code is optimized and completely refactored.
Arnaud and I worked hand in hand to get the best performance running for his framework, and of course for Zeos itself too!
Note I'm not done with the complete refactoring; just most parts from ZCore up to ZDbc have been reworked from the ground up. Currently my Zeos coding time is rare compared to my previous activity, but 7.2-stable will come soon. (For those who do not trust the beta attribute ... it's up to Mark, Michal, Jan and me to tag 7.2 as a stable candidate. We'll do that once the latest fix is older than 3 weeks.)
Missing points:
I've noticed there are additional options in mORMot, such as limiting the lob sizes or ignoring them entirely.
As a workaround I propose using the JSONWriter.TSQLFieldBits/Fields for composing the JSON contents and simply skipping those fields. Any other ideas?
@miab3
thanks for attaching the results! I was out of time to set up the whole database battery!
Sadly I see the Zeos OleDB driver results are missing, or am I wrong? Could you test the OleDB driver + MSSQL too?
In addition, could you enable the USE_SYNCOMMONS define please?
@AB
I waited a while before announcing the define and the new drivers, until some final regressions had been fixed. (And because of lack of time I haven't yet tested each protocol in all configurations.)
===> THANKS for updating your documentation! This is great advertising for Zeos.
Cheers, Michael
@AB
I agree with Daniel.
In addition, consider that a connection should never expire (and be erased on the server side) within the TimeOutMinutes range if the connection was busy.
From my POV a statement should always reset fProperties.fConnectionTimeOutTicks whenever Prepare/ExecutePrepared is called: the server sees activity, so there is no reason to close the connection.
Daniel's second issue looks like the connection was closed while a query was being executed, which results in the posted error. If your statements reset the connection inactivity tick count, that issue shouldn't happen any more.
What do you think?
@itSDS
maybe I have to explain a bit more, even though it is documented in the links I posted before...
While the server is collecting the data, your client library (libmysql.dll, for example) gets a timeout because there is no response from the server. So the client lib raises the error "SQL Error: Lost connection to MySQL server during query", discards all handles and forces you to reconnect.
Continuing to think about changing "something" after retrieving the data leads nowhere.
It's not a problem of threads or anything else that happens after querying the rows...
The error just indicates too much DATA, or a bad/missing index, or oversized field structures.
Advice to resolve the issue:
1. Find out why your SELECT takes so long. A missing index? Too many rows? Field sizes too large, even where MySQL supports an index?
2. If there are too many rows, limit the data by querying it in more than one step, e.g. using FIRST/TOP or SKIP/OFFSET syntax.
3. Again, and as mpv mentioned: set the timeout options high enough so you don't run into that known behavior.
From my POV you're using threads to avoid wait timeouts in your main app, so you already know about the huge response time. What is the reason for it?
Did you increase the timeout options? Did you?
@AB
stop thinking your framework is running into an issue. Nope! NO multithreading problem...
@itSDS
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
please read the documentation. All of us are trying to support you.
Regarding Zeos usage: add your expected timeout props to the Zeos connection strings, see:
http://dev.mysql.com/doc/refman/5.0/en/ … tions.html
!!--->>> MYSQL_OPT_READ_TIMEOUT/MYSQL_OPT_WRITE_TIMEOUT
like 'MYSQL_OPT_WRITE_TIMEOUT=WhatEverYouWant_Seconds'
Have a look at ZDbcMySQL.pas -> procedure TZMySQLConnection.Open.
Such props need to be set before a connection is opened. No idea how FireDAC/UniDAC does the job. Docs?
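For plain Zeos this could look like the following sketch (TZConnection is the standard Zeos component; the host, database and 600-second values are assumptions for illustration, the option names are the ones from the MySQL docs above):

```pascal
uses ZConnection;

procedure OpenWithTimeouts;
var
  Conn: TZConnection;
begin
  Conn := TZConnection.Create(nil);
  try
    Conn.Protocol := 'mysql';
    Conn.HostName := 'localhost';
    Conn.Database := 'test';
    // must be set BEFORE Connect - see TZMySQLConnection.Open
    Conn.Properties.Values['MYSQL_OPT_READ_TIMEOUT']  := '600';
    Conn.Properties.Values['MYSQL_OPT_WRITE_TIMEOUT'] := '600';
    Conn.Connect;
    // ... run the long-running query here ...
  finally
    Conn.Free;
  end;
end;
```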
Hope it helps. Have fun comparing the access rates of FireDAC/UniDAC vs. Zeos 7.2+.
@Zed
you can also try my Zeos 7.3 OleDB approach.
You'll need to check out the testing-7.3 branch from svn://svn.code.sf.net/p/zeoslib/code-0/branches/testing-7.3
and create the properties like:
TSQLDBZEOSConnectionProperties.Create(
TSQLDBZEOSConnectionProperties.URI('OleDB[mssql]', '',''),
<DatabaseString>, <UserName>, <Password>)
Hope it helps..
@hnb
From my POV there is no reason to change the CCS (client character set). libpq perfectly converts Latin2 to UTF8 and vice versa.
Note: we made some performance changes to the string getters. Since mORMot is a UTF8-based framework, we (Arnaud & me) decided to change the IZResultSet getters to GetPAnsiChar getters for some RDBMS like PG, Oracle, SQLite and MySQL.
So what you're trying to do may lead to pain when composing the JSON contents.
You would also have to change the getter to GetUTF8String() for the PG protocol, which kills the performance (first allocate memory for a string, move the data from libpq into the string, move the content to the JSONWriter, free the memory), steps which ZDBC skips by using the pointer getters...
My advice: clean the badly encoded strings once in your DB and keep the mORMot/ZDBC code as it is.
Cheers, Michael
ZeosDevTeam
Hi Arnaud,
I can confirm your findings using Zeos alone. Running the same full Zeos tests, Delphi vs. FPC, FPC loses one full minute!
Personally I think the main reasons are the memory management and some slow code generation (see the FPC bugtracker, search for "slow code generated").
Some time ago I was tracking the memory manager of FPC. From what I remember:
Florian implemented the FastMM4 approach for AllocMem() and ReallocMem(), see http://free-pascal-general.1045716.n5.n … 09801.html
But all the other features are ignored. In addition I can't say how the FPC memory manager behaves for multi-threaded apps (scaling). Sorry, I can't find the mail where Florian Klämpfl wrote about ignoring all the other FastMM4 features, but I definitely remember reading it.
Note that some FPC compiler procs are written in optimized assembler code, most of them for Win32 only... This might be a reason too.
Close the Connection if NOBODY is connected..
We all have little free time, a family and a day-to-day job. The job you're asking for isn't done in 5 minutes, neither for Arnaud nor for me. FB is a complex API, even if it's ready (except the ARRAY DESC types) in Zeos; but changing the whole TA behavior affects ALL drivers, not FB only. That's the reason such requests are in the job queue but not done yet.
Be patient, help yourself by coding and contributing your findings, or ... $$$?
I believe you're talking about SET NOCOUNT {ON|OFF}. Yes, if you put it on the first line of the SP and you don't have any SELECT, then you'll get the output values without the need to call SQLMoreResults(). Thank you for reminding me of that! It solves the case with my key-generator SP.
Glad you got it! Even if it's not a full solution for SPs in general... it simply helps for "simple" SPs, but the issue with multiple result sets is still pending, I think.
Offtopic:
Arnaud, that's one of the reasons for the design/model of Zeos: the results and the statements are separate. Zeos can reuse the parameters of a prepared stmt, but for some drivers like ADO, MSSQL, SYBASE and MYSQL I can NOT reuse the IZResultSets. The core issue is that I don't know for sure whether GetMoreResults (Zeos) can retrieve another result set or not (unless I start heavy SQL parsing, which leads to pain and performance loss, and shouldn't be the task of Zeos from my POV). For drivers like SQLite, FireBird, PostgreSQL and Oracle I was able to reuse the IZResultSets (something similar to your SeekFirst option, which avoids rebuilding the column information, buffer allocations etc.).
I have often thought about changing and simplifying the design of Zeos (IZStatement + IZResultSet in one interface) into something similar to your TSQLRequest model, but exactly these cases (and some other Zeos-internal automations like the IZCachedResolver) have been stopping my ideas...
So let's see how you resolve this case... I was a bit surprised this issue never popped up here before. I'll stay tuned!
Some notes:
The behavior I described previously is also setting-dependent.
Getting the UpdateCount/RowCount as the first result set is optional. I have to study my MSSQL book again... a black hole in my head right now, I can't remember the option yet. Would this be interesting?
@alpinistbg
Regarding Zeos:
That's what the IZCallableStatement API is made for. There is a special C API for stored procedures (using the TDS protocol) only, but this isn't supported by mORMot yet.
I propose you use the ADO driver or, since Z7.3, the new OleDB driver, with a syntax like "ado[mssql]" as the protocol name.
Both are implemented to fetch multiple result sets.
Some providers do return multiple Results as RowSets:
1. RowSet: UpdateCount/RowCount
2. RowSet: a result set, part of the stored procedure, or, if more SELECTs were executed:
3. RowSet: another fetched result set
....
X. Last are the Out-Params.
This behavior is provider-dependent: some do support multiple result sets (e.g. MySQL, MSSQL, Sybase), others don't. Most providers execute everything but return only the last result...
You can use IZStatement.GetMoreResults: Boolean to check whether another RowSet is available. If so, use IZStatement.GetResultSet to retrieve the data.
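A minimal fetch loop using just those two methods could look like this sketch (interface names as quoted above; the procedure name and the error handling omissions are mine):

```pascal
uses ZDbcIntfs;

procedure FetchAllRowSets(const Stmt: IZStatement; const SQL: string);
var
  RS: IZResultSet;
begin
  // Execute returns True if the first result is a result set
  if Stmt.Execute(SQL) then
  begin
    RS := Stmt.GetResultSet;
    while RS.Next do
      ; // read the current row's columns here
  end;
  // loop over any further RowSets the provider returned
  while Stmt.GetMoreResults do
  begin
    RS := Stmt.GetResultSet;
    while RS.Next do
      ; // read the next RowSet's rows here
  end;
end;
```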
Great news, Arnaud.
Regarding the dyn-array params... Maybe I'm wrong, but this little Christmas gift could be related: http://bugs.freepascal.org/view.php?id=27206
See the rttichanges.patch patch.
The Zeos team wishes you a Merry Christmas and a healthy New Year.
@ttomas
Thanks for following my advice. The results are what I expected and what I wrote about several times.
Of course having no open TA would be nice, but for what exactly with FB? Again: this is documented behavior. You'll never ever be able to execute a stmt of any kind with FB unless a TA is open: FB/IB docs, pages 317 ... 336+. No transaction handle -> an error is raised. No SELECT, simply NOTHING. That's FB/IB.
A common access API using interfaces: I assume that's why Zeos behaves like it currently does, but I don't know that for sure. From my POV it is "all-time code". After reading your logs: case closed for 7.2! It might change when the minor version increases... the behavior is locked for 7.2.x. All TAs are started/leveled/committed or rolled back as I expect (:
Thank you for the IB-trace idea.
BTW:
@Daniel
what about the dead-lock issue? A week is over! Are you happy now?
Michael.