I also need this feature. For example, in TSQLDBConnection.NewStatementPrepared a database error goes nowhere, so it's impossible to get the error message raised at the database level (it ends up only in the log files). A solution may be to add a "var Err" parameter:
function TSQLDBConnection.NewStatementPrepared(const aSQL: RawUTF8;
  ExpectResults: Boolean; var Err: RawUTF8): TSQLDBStatement;
begin
  try
    result := NewStatement;
    result.Prepare(aSQL,ExpectResults);
  except
    on E: Exception do begin
      FreeAndNil(result);
      Err := StringToUTF8(E.Message);
    end;
  end;
end;

and push it up to the NewThreadSafeStatementPrepared level?
The first request from the browser is OPTIONS (without a body and, as I understand, without URL params). So it must be something like this:
procedure TSQLMyRestServerDB.URI(var Call: TSQLRestServerURIParams);
begin
  if Call.Method = 'OPTIONS' then begin // TODO: add an OPTIONS method to the TSQLURIMethod enum and to the StringToMethod function
    if {Call.InHead has an Origin header and this origin is allowed to call our server} then begin
      Call.OutHead := Trim(Trim(Call.OutHead)+
        #13#10'Access-Control-Allow-Headers: x-requested-with'+
        #13#10'Access-Control-Allow-Methods: POST, GET, DELETE, LOCK, OPTIONS'+
        #13#10'Access-Control-Max-Age: 1728000'+
        #13#10'Access-Control-Allow-Origin: http://www.exemplo.com');
    end else
      Call.OutHead := Trim(Trim(Call.OutHead)+#13#10'Access-Control-Allow-Origin: '); // DENY access
    Call.OutStatus := HTML_SUCCESS;
  end else
    inherited;
end;

2AB: The problem warleyalex describes really exists. The mORMot roadmap has a "JSONP support" item, but actually, for now, it is better to support CORS (https://developer.mozilla.org/en-US/doc … ntrol_CORS) by defining a public AllowedDomains property at the HTTPServer level and handling the OPTIONS request type...
2warleyalex: As a solution, on the HTTP server where your static files live (let it be myserver.com), configure redirection of requests from URL myserver.com/root* to mORMot_server_IP:port/root*, and make your AJAX requests to the URLs myserver.com/root*
I mean connecting to an external server (in the case of a distributed model); of course, for an in-process connection this is not needed
AB, I already have code for a direct connection to mORMot services on the server side from scripting via HTTP (a wrapper around TWinHTTP + a simple JS class implementing the protocol details) (something like http://nodejs.org/api/http.html ) - I will rewrite it in the new way and share it when we finish refactoring the main part.
When implementing events it is good to use the EventEmitter pattern (http://nodejs.org/api/events.html ) - the main differences from Delphi's OnSomething are:
1) the possibility to add several different listeners to one event - a very important thing. For example, I use it on the auth event (fired by the auth procedure of my mORMot-based server) to fill in custom user information. One subsystem does App.on('auth', {obtain additional info from LDAP}), another App.on('auth', {obtain additional info from a database table}) and so on.
2) the possibility to create new event types "on the fly" (something like SendMessage in Windows with a custom code), i.e. Class.fireEvent('myNewEvent', param1, param2) and, in another part of the code, Class.on('myNewEvent', function(){do something})
Anders Hejlsberg... I'll look at TypeScript once again!!
The reason I have not implemented a debugger yet is the new Debugger API I am waiting for from Mozilla - there is no reason to implement it the old way. As an editor I use (with great success) good old SynEdit on the Delphi side and CodeMirror on the web side (even with code insight)
For the last two weeks I have been working on refactoring my "scripting support" codebase to share it with mORMot. The work is in progress, but I will deploy the first results in a few days.
The main ideas are:
Why mORMot need scripting support
At the moment mORMot allows us to create well-designed, scalable, and fast applications. But often we need to customize the application logic differently for different customers. For example, if we develop an accounting application, the entities are the same for all customers (store, goods, departments, invoice and so on), but the transaction logic differs for every customer. In this case, if we write separate logic for every customer, we end up in "branch hell"; if we start to add options for every case, the number of options grows and the code becomes unmaintainable. Even if we move this logic into "customer specific" plugins (DLLs), we need to compile them and are unable to change "this one little thing" directly at the customer's workplace. So moving this type of logic to a scripting engine is a good idea. In this case we have one version of the program with the base functionality and different transaction-building scripts for different customers. Scripting is also a must for report generation. Creating a wrapper over SynPdf for the scripting engine gives us the possibility to create reports without changing the main application codebase.
My opinion is: only speed/memory-critical functionality must be "hardcoded"; everything else is work for the scripting engine. But this is IMHO.
Why JavaScript
There are many reasons to choose JavaScript as the scripting engine.
- It is the Web's assembler. Every programmer, in one way or another, knows JS.
- JS is a very powerful language. See for example the excellent article http://javascript.crockford.com/inheritance.html. (IMHO Crockford is the best author for learning JavaScript - "JavaScript: The Good Parts" is a "must read" book);
- There is a HUGE number of things written in JS: syntax sugar - CoffeeScript, TypeScript, Dart; template engines - jade, mustache...; SOAP and LDAP clients and many, many others. And the node.js libraries, of course.
- In the case of a Rich Internet Application we can directly share part of the logic between client and server (validation, template rendering, etc.) without any middleware.
- We can use another scripting language (DWS for example) translated to JS and run it under the JavaScript engine.
Why SpiderMonkey
I looked at many engines before choosing SpiderMonkey.
- pure Delphi BESEN http://code.google.com/p/besen/ (bad speed/memory performance compared to V8 & SM)
- V8 (C++, hard to integrate with Delphi)
- Microsoft scripting host (COM / poor API / bad performance / Windows only)
I chose SpiderMonkey because:
- Stability. It is the oldest, most heavily tuned engine available.
- Open source with HUGE community.
- C API – easy to integrate with Delphi
- ECMA-262 Edition 5 (and Harmony soon)
- It is very fast. Really VERY fast.
The road map
- First of all I started by converting the SpiderMonkey .h files to Delphi. All known conversions (http://code.google.com/p/delphi-javascript http://delphi.mozdev.org/javascript_bridge/) are based on the old SM engine (1.5 - no JIT and only ECMAScript 3). I converted SpiderMonkey 1.8.5, which gives us full ECMAScript 5 support and JIT.
- The second step is to create Delphi classes to deal with the engine easily - I will do it in a week.
- Third - implement the CommonJS Modules specification and the node.js "fs" module (it's easy) - this gives us the possibility to use a huge number of node.js modules.
- Create wrappers for mORMot classes (TSQLRecord and so on) to use scripting for implementing Interface Based Services (IBS). (Additionally, do something like a "contract" for IBS, maybe using JSON Schema, for easy use of IBS from an AJAX client.)
- Implement a debugger
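To illustrate the CommonJS step above: a loader essentially wraps each module's source in a function receiving exports/require/module, so anything attached to exports becomes the module's public API. A simplified sketch of the mechanism (a toy illustration, not the actual mORMot code):

```javascript
// A toy CommonJS loader: wrap the module source in a function, call it,
// and return whatever the module attached to its exports object.
function loadModule(source, require) {
  const module = { exports: {} };
  const fn = new Function('exports', 'require', 'module', source);
  fn(module.exports, require, module);
  return module.exports;
}

// what would normally live in a separate math.js file on disk
const mathSource = "exports.add = function (a, b) { return a + b; };";

const math = loadModule(mathSource, null);
console.log(math.add(2, 3)); // 5
```

A real loader additionally resolves file paths, caches loaded modules, and passes a working require so modules can load each other.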
The goal
I admire the success of Node.js. But I think it has one big architectural problem: being single-threaded, Node makes all of its API async, and async code becomes unreadable and "undebuggable". mORMot is multithreaded - it is easy to make JS work the same way the mORMot DB connection pool works, and to write JS code without a callback that calls a callback that calls a callback and so on. Also we have a great ORM engine - so let's use it in scripts. And we are native to Windows.
So the goal is: a Delphi-based, FAST, multithreaded server with ORM, compatible with node.js modules.
Suggestions are welcome!
Sorry for my English
By the way, Trello is a fully JavaScript solution - the server is NodeJS + MongoDB; the client is CoffeeScript, Backbone.js, the HTML5 History API, Mustache.
There is no need to check FileExists(FileName) at all. If the file does not exist, HTTP API sends a 404 automatically.
Great work! Just some tips:
1) To increase mORMot performance it is better to set the number of threads = CPU core count * 2 (or even = CPU core count in your case, because you do not do any calculation on the server side)
2) As AB said in a previous post, the most interesting things start when we add some kind of logic on the server side (JSON serialisation, database access) - in this case I think mORMot is MUCH better compared to the others (except Node, maybe, or a very optimized WCF solution that does not use the .NET connection pool/serialization)
3) I agree with AB - when using JMeter you need at least 2 clients to utilize all the server resources - this is the reason mORMot, Java, Node and WCF performance is nearly the same.
4) To be honest, it would be good to use the nodeJS cluster http://nodejs.org/api/cluster.html with the same thread count as in mORMot
Thanks for your work!
See this thread for ajax auth: http://synopse.info/forum/viewtopic.php?id=490
For simple (very simple) web server implementation see "\SQLite3\Samples\09 - HttpApi web server" example.
In your example mmo1.Lines.Text is of type string, but JSONToObject needs RawUTF8 as its input parameter. RawUTF8 is a UTF-8 encoded string, so the direct cast PUTF8Char(mmo1.Lines.Text) is not allowed!
So please read the comments in the interface part of mORMot.pas, where the JSONToObject function is defined.
In your case the correct code is:
var
  R: RawUTF8;
  isValid: boolean;
begin
  R := StringToUTF8(mmo1.Lines.Text); // !! IMPORTANT: put Text into a buffer (R) and perform the string to RawUTF8 conversion !!
  JSONToObject(d, PUTF8Char(R), isValid);
  if isValid then
    mmo1.Lines.Add(d.Name)
  else
    mmo1.Lines.Add('invalid JSON string');
end;
AB, did you see this string hashing algorithm? It's a 2012 algorithm winner http://arxiv.org/abs/1202.4961 - maybe it could be used in mORMot...
2Eric:
1) please look at this ticket http://synopse.info/fossil/tktview?name=73da2c17b1 - the same applies in THttpApi2Server.Execute
2) do you have plans to implement authentication?
and a question 2AB&Eric - do you have plans to merge the THttpApi2Server implementation into the mORMot repository? (even without auth at the HTTPAPI level it would give us the ability to solve, via THttpApi2Server.connectionID, the problem Chaa temporarily solved via RemoteIP in this post http://synopse.info/forum/viewtopic.php?pid=5809#p5809)
I can't find a good solution for this case, so in production I use nginx as a frontend (all requests to mORMot are redirected via the proxy_pass directive, static files are handled by nginx, the upload file size is limited by the client_max_body_size option). If you find a good solution for http.sys - please post it here.
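For reference, a minimal sketch of such an nginx frontend (the port, paths and the /root prefix here are illustrative examples, not my production values):

```nginx
server {
    listen 80;

    # static files are served by nginx itself
    location /static/ {
        root /var/www;
    }

    # everything under /root goes to the mORMot server
    location /root {
        proxy_pass http://127.0.0.1:888;
        client_max_body_size 8m;   # limit upload size before it reaches mORMot
    }
}
```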
Additionally, I found a good GUI utility for managing http.sys - see http://httpsysconfig.codeplex.com/.
If you run an HTTP API server you need to register the URL first (call THttpApiServer.AddUrl) under administrator permissions.
ApacheBench doesn't work if I make an HTTP request without Connection: Keep-Alive in the header
I compiled mORMot with the old sources (1.17) - and the problem is the same. So mORMot is OK; it is 99% certain that the cause is some Windows updates applied to my computer. On another computer everything works as expected with all mORMot versions.
Also I confirm that using ptIdentifiedInOnFile is stable and very good for readability - much better than a log per thread. Thanks for the good solutions!
Could you give some feedback about ptIdentifiedInOnFile?
I tested the latest version a little bit. It seems ptIdentifiedInOnFile is more stable than the old PerThreadLog = False (PerThreadLog = False always brought the server down in a 24-thread, million-request stress test)
But I found a strange issue - now ApacheBench doesn't work if I make an HTTP request without Connection: Keep-Alive in the header (in the previous 1.17 version everything worked fine) - I'll check it carefully tomorrow
Another issue - there is an Accept-Encoding header in the server response - it must not be present in a response - I will write a ticket.
Try it under Linux - it is much faster. In my case I have 2 computers - one with the mORMot server + database engine, and one with Linux - when I test mORMot with database access, ApacheBench utilizes 100% of the mORMot server's CPU in this configuration.
I run something like this:
ab.exe -c900 -n50000 -p getOneRecFromBigTable.json -T"application/json" "http://mash-w7:888/m3/AS/runList"
The only good tool I found for stress testing is ApacheBench (http://en.wikipedia.org/wiki/ApacheBench). It works well under a Unix/Linux environment.
Under Windows there are some limitations:
1) it works only against localhost
2) only up to 1000 concurrent connections
3) bad performance under a Windows virtual machine (I can't test on a Linux virtual machine yet)
General limitations:
1) It is impossible to test mORMot with authentication turned on, so I disable auth in mORMot for stress testing (ab supports only basic HTTP auth)
2) It can test only one request type at a time - but that is enough for stress testing
3) It can only test the HTTP mORMot server
For the best mORMot performance you must set the number of mORMot threads = number of cores x 2 (not CPUs but cores)
If you use TSynLog, set TSynLog.Family.PerThreadLog := ptOneFilePerThread; in the other cases (ptMergedInOneFile, ptIdentifiedInOnFile) you get custom exceptions under heavy multithreaded load (tested in version 1.17)
Another tool I use is MS Visual Studio - it has a "test project" feature, but in this case I need 5 client machines
mORMot is VERY FAST!
I redirect the password check to LDAP and do not store passwords in my database at all. So if users want to change their password, they change it in LDAP (a Windows Domain in my case).
It's not possible in current implementation.
In my project I modified the TSQLRestServer.Auth method (ugly, but I really need this) and redirect the user credential check to an LDAP server. But the user needs to provide a password.
To authenticate to the server automatically (without entering a password) you need to implement Kerberos support. I plan to do it, but later.
If you registered your URL (THttpApiServer.AddUrlAuthorize) with HTTPS=false, you must unregister it with HTTPS=FALSE and register it again with HTTPS=true
so call
THttpApiServer.AddUrlAuthorize('INFO',aPort, FALSE {UseHTTPS}, '*', true {OnlyDelete});
and then
THttpApiServer.AddUrlAuthorize('INFO',aPort, TRUE {UseHTTPS}, '*', FALSE {OnlyDelete});
Hope it helps
IMHO about inheritance:
I 100% agree with AB that the "JSON" serialization trick is not a good solution. Also I don't like the "Table per subclass" pattern because of the JOIN (OUTER JOIN!) operations.
But "Table per class hierarchy" is not very good either. This pattern creates 2 BIG problems in big databases:
1) a "sparse table" (I don't know exactly how to say it in English) - I mean: for a "regular employee" 3 fields are always NULL (hourlyRate, perDiem, ContractPeriod); for a "contract employee" 2 fields are always NULL (salary and bonus). In a real project we have many more fields, so many more NULLs in the table. Creating an index on such fields becomes ineffective. The tablespace is used ineffectively too.
2) many RDBMSs have a limit of 255 columns per table, so if we have many different Employee descendants, each with a unique set of fields, we can easily hit this limit. It is a real problem, for example, in IBM FileNet
For inheritance emulation in my project I use "table per class" as here,
but add a second table
EMP(id, name, location, class_name), and in every CRUD operation on RegularEmployee and ContractEmployee I modify (insert/update/delete) the corresponding record in the EMP table:
EMP.ID = REG_EMP.ID, EMP.name = REG_EMP.name, EMP.class_name = 'RegularEmployee' (I have one ID generator per database (not per table), so I can easily keep EMP.ID = REG_EMP.ID). In the current mORMot project it might be done by database triggers on the *Employee tables (but I rewrote TSQLRecord)
I define a read-only Employee = TSQLRecord that points to EMP - so I can get all my employees and reference an employee from another class.
I can easily select all RegularEmployee records without a JOIN
It's not a normalized database schema: some fields are duplicated, and I limit inheritance to one level (it is not possible to define ContractEmployee descendants), but this schema gives very good performance on big databases.
To apply this pattern to mORMot we need only 2 things:
1) One ID generator per database
2) Something like
TEmployee = class(TSQLRecordUnity) // <- we need to make it read-only
published
  name,
  location,
  class_name // <- this is something like the discriminator in Hibernate
end;

TRegularEmployee = class(TEmployee)
published
  salary,
  bonus
end;

TContractEmploye = class(TEmployee)
published
  hourlyRate,
  perDiem,
  ContractPeriod
end;

Then, in the TSQLRecord CRUD generation, check which descendant we have and generate 2 inserts/updates/deletes - one for TEmployee and one for our class. Sorry for my English.
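The "generate 2 statements per CRUD operation" idea above can be sketched like this (a JavaScript pseudo-generator for illustration; the function and its quoting are hypothetical, while the table and field names follow the post):

```javascript
// For every CRUD operation on a descendant class, also maintain the EMP
// master table, which carries the shared fields plus the discriminator.
function buildInsertStatements(className, tableName, rec) {
  const cols = Object.keys(rec).join(', ');
  const vals = Object.values(rec).map(v => `'${v}'`).join(', ');
  return [
    // 1) the master row: shared fields + class_name discriminator
    `INSERT INTO EMP (id, name, class_name) VALUES ('${rec.id}', '${rec.name}', '${className}')`,
    // 2) the descendant row: all fields of the concrete class
    `INSERT INTO ${tableName} (${cols}) VALUES (${vals})`
  ];
}

const stmts = buildInsertStatements('RegularEmployee', 'REG_EMP',
  { id: 1, name: 'John', location: 'Kyiv', salary: 1000, bonus: 100 });
console.log(stmts[0]);
console.log(stmts[1]);
```

Both statements share the same ID (one generator per database), so a query against EMP sees every employee, while REG_EMP can be selected without a JOIN.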
Hmmm. Really - TSQLRecord.ID is integer. I work with mORMot at a lower level, and at the SynDB level it's int64.
About int overflow - in my project there is one ID counter for all tables, and its first 3 digits are the customer number (I need to generate different IDs for different customers). So yes - I need bigint.
But even if I did not reserve the first 3 digits, int32 would not be enough - 42 million people live in my country, and everyone pays taxes every month, so int32 would last only about 1 year of production usage for a country-level tax system, for example....
About storing int - yes, it is much easier - I agree with you
@AB - I edited the previous post - it seems I again wrote something other than what I had in mind.
Such changes would require a huge refactoring of mORMot, I think. Let's wait for what AB says. As for me, the best choice is Int64, as it is in mORMot now. But I have remarks on your proposition:
1) TSQLRecordID = integer; is not very good, because if we eventually move to x64, integer may become int64 and recreating the primary keys of existing DB tables is a problem. So TSQLRecordID = int32 should be used in your case;
2) In my projects I found that int32 is not enough in real life. Really - I have had int32 primary key overflows in some production DBs, so to prevent future problems I recommend using Int64 as the primary key. But this is IMHO.
About JSON: JSON doesn't like cardinals because of the JavaScript internals (JSON is primarily used in JavaScript). For example, in Mozilla SpiderMonkey (I think in V8 too) a cardinal is stored internally as a double. Using Int64 is also a VERY big problem in JavaScript - it also becomes a double, so only Int64 values up to 16 digits long (2^53) are exactly representable.
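A quick demonstration of that 16-digit limit (runs in any modern JS engine; Number.MAX_SAFE_INTEGER is 2^53 - 1):

```javascript
// JS numbers are IEEE-754 doubles: integers are exact only up to 2^53.
const max = Number.MAX_SAFE_INTEGER;          // 9007199254740991 (16 digits)
console.log(max + 1 === max + 2);             // true: precision is already lost
console.log(JSON.parse('9007199254740993'));  // 9007199254740992 - the parsed
                                              // Int64 silently changed value
```

So an Int64 primary key that grows past 2^53 will be corrupted by any JS client that parses it as a number.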
AB, the blog title for http://blog.synopse.info/post/2012/10/1 … e-not-evil should be: "Interfaces are not evil; or are Delphi-ers the new Vampires?; or why MPV must study English"
I don't want to troll, much less to be rude - it's just because of MY translation problems! The idiom "the devil is in the details" simply means "details are important". Interfaces are good sometimes; for example they are very good for "between processes" communication between Delphi and C++ code (like in OleDB in mORMot), but in my opinion, for "inner process" use they make code harder to understand.
According to the documentation, which you can easily find here: http://synopse.info/fossil/wiki?name=Do … umentation (direct link http://synopse.info/files/pdf/Synopse%2 … 201.17.pdf ), mORMot doesn't use HTTP auth at all. Please read the documentation or see the comments in the source code (in the interface part of SQLite3Commons, TSQLRestServer.Create). In the sample project you are talking about, you can easily turn off auth by changing this line of code in unit2.pas
DB := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'), FALSE {aHandleUserAuthentication});
I'm absolutely new to the mocking concept, so I googled a little bit and found a good article that explains what "mocking" is. It may be useful for others (the article is about C++, but reading it gives a good understanding of the concept):
English version: http://code.google.com/p/googlemock/wiki/ForDummies
Russian version: http://code.google.com/p/googletest-tra … iesRussian
IMHO: The idea is good, but "the devil is in the details". To use mocking I must use interfaces. When I use interfaces I lose control of the code, because I don't see the implementation. Debugging and optimization become very hard. Especially if a beginner developer reads something like GoF (Gang of Four) and uses design patterns like Visitor, Decorator and so on both where necessary and where not, so that while debugging I don't understand at all which class actually implements the passed interface. As for me, this is the biggest problem of the .NET framework - developers use interfaces, don't look at the implementation (and often don't even have it in the sources), do not learn by reading someone else's code, and therefore produce monkey-code. This is only IMHO...
You can address 4Gb. See this topic http://synopse.info/forum/viewtopic.php?id=702
I use this feature in a real project.
Just for information - today I came across SCGI http://en.wikipedia.org/wiki/Simple_Com … _Interface. IMHO a very good candidate for a pure pascal version of TFastCGIServer. And IMHO more KISS and more mORMot'ish compared to FastCGI
Thanks for the change. I understand the issue is not in mORMot. So no need to be sorry for M$
About making the ProviderName property writable: yes, I see the constructor and understand that we need refactoring to make it possible
Uhhhhhh. Today I spent all day trying to resolve the same bug. After a long "dance with drums" I found this topic.... Really - after changing the provider to SQLNCLI10 the problem is solved.
2@AB - maybe we should change the default provider in TOleDBMSSQLConnectionProperties to SQLNCLI10?
Or make it possible to write to the TOleDBMSSQLConnectionProperties.ProviderName property while not connected? Then I could move the provider name to a config file...
Thanks!
This piece of code is funny:

if PWord(P)^=ord('*')+ord('/')shl 8 then begin
  PWord(P)^ := $2020;
  inc(P,2);
  break;
end;

I will remember it for future use. I saw something similar where you compare a string for false/true, but I forgot it
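The trick compares two characters with a single 16-bit comparison: on a little-endian CPU the word at P equals ord('*')+ord('/') shl 8 exactly when P[0]='*' and P[1]='/'. The same test spelled out in JavaScript, just for illustration:

```javascript
// Emulate PWord(P)^ = ord('*') + ord('/') shl 8 (little-endian byte order):
// the low byte is the first character, the high byte is the second one.
function isStarSlash(s, i) {
  const word = s.charCodeAt(i) | (s.charCodeAt(i + 1) << 8);
  return word === ('*'.charCodeAt(0) | ('/'.charCodeAt(0) << 8));
}

console.log(isStarSlash('x*/y', 1)); // true  - "*/" found at offset 1
console.log(isStarSlash('x**y', 1)); // false
```

The `$2020` assignment then overwrites both comment-terminator characters with two spaces in one write.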
I plan to implement such functionality in my mORMot-based project in a few weeks, for use with Ext.Direct (see https://www.sencha.com/products/extjs/extdirect and one Delphi implementation here: http://code.google.com/p/extpascal/). So I will do it and share the code here - I think this is what we need. About SOAP: in my opinion SOAP is too "heavy" for our little mORMot
In today's world, even at the enterprise level, SOAP has become unpopular. I know some big J2EE projects use JSON-RPC instead.
In some cases I store my program configuration in JSON format. For better human readability I add comments to the config files, like this:
/*
  This is the global configuration file
*/
{
  "serverPort": "888", // HTTP server port
  "serverDomainNames": "+",
  "handleStatic": true, // handle static files; if true - set staticRoot and staticFolder
  "staticRoot": "m3", // appRoot
  "staticFolder": /* folder for static files */ "X:\\Subsystems\\Components\\Clients\\Web\\"
}

But comments are not part of the JSON spec, so I wrote a small function to remove the comments from the string before passing it to the JSON parser.
AB, can you add this function to SynCommons?
/// remove comments from a text buffer before passing it to a JSON parser
// may be used to prepare configuration files for loading
// for example, we store the server configuration in the file Config.json and want to put some comments in this file;
// the loading code is then:
// var cfg: RawUTF8;
// cfg := StringFromFile(ExtractFilePath(paramstr(0)) + 'Config.json');
// removeCommentsFromJSON(PUTF8Char(cfg));
// pLastChar := SQLite3Commons.JSONToObject(sc, PUTF8Char(cfg), configValid);
// handles 2 types of comments:
// from // till the end of the line, and /* ..... */
procedure removeCommentsFromJSON(P: PUTF8Char);
begin
  if P=nil then
    exit;
  while P^<>#0 do begin
    case P^ of
    '"': begin // skip string content
      inc(P);
      while not (P^ in [#0, '"']) do
        if P^='\' then begin // skip escaped character, e.g. \" or \\
          inc(P);
          if P^<>#0 then
            inc(P);
        end else
          inc(P);
      if P^=#0 then
        exit; // unterminated string: do not read past the buffer
    end;
    '/': case P[1] of
      '/': begin // single-line // comment: replace by ' ' till end of line
        repeat
          P^ := ' ';
          inc(P);
        until P^ in [#0, #10, #13];
        continue;
      end;
      '*': begin // /* comment: replace by ' ' but keep CR/LF for correct line numbering (e.g. in error messages)
        P^ := ' ';
        P[1] := ' ';
        inc(P,2);
        while P^<>#0 do begin
          if (P^='*') and (P[1]='/') then begin
            P^ := ' ';
            P[1] := ' ';
            inc(P,2);
            break;
          end;
          if not (P^ in [#10, #13]) then
            P^ := ' ';
          inc(P);
        end;
        continue;
      end;
    end;
    end;
    inc(P);
  end;
end;

1) EInsufficientRtti occurs, for example, for some method signatures if you call TRttiMethod.MethodKind - for example for: myMethod(const Args: array of const): boolean;
2) When I talk about the slow performance of the new RTTI I mean calls to TRttiType.GetFields, GetProperties, GetIndexedProperties, GetMethods, TRttiMethod.GetAttributes and so on. For now I cache the results of these calls in my own structures for quick access later.
In general I agree with you - using the new RTTI is comfortable. Actually I use it. But as far as I know AB uses Delphi 7, and the mORMot road map includes migrating to FPC, so.....
I tried to work with class and property attributes in XE via the new RTTI classes (TRttiContext etc.) and ran into some problems:
1) An EInsufficientRtti error occurs - the new RTTI is not present in all cases
2) VERY slow performance of the new RTTI. Very slow.
3) It does not work in FPC
So in my opinion it is too early to use the new RTTI now.
SynDBLog is a global variable in the SynDB unit. There is a very good tool for Delphi called GExperts http://www.gexperts.org/ - it allows you to search the content of all files in a directory for some keyword, for example the mORMot source directory for the keyword SynDBLog. Or, in newer IDEs, use Ctrl+Shift+S
I think it's a bad idea. For example, if I install any database server or Linux distribution, there is no guest user in it, is there?
Everything is OK in this example. The MS SQL default transaction model is read committed, so the second transaction must wait until the commit or rollback of the first one. A good article about transaction isolation levels is here: http://www.postgresql.org/docs/9.1/stat … n-iso.html
You are right about the nginx proxy. I use nginx as a proxy and load balancer with great success in a real project. In my current project I use nginx for static files and as a proxy for the mORMot server. If THttpServer becomes compatible with FPC I will try to use it behind an nginx proxy - for me this is better than a single-threaded FastCGI solution.
But I don't understand how it is possible (to use THttpServer under Linux) - TSynThreadPool uses IOCP as far as I can see (CreateIoCompletionPort in TSynThreadPool.Initialize). Does some wrapper for this call exist in FPC? Or did you make another implementation for Linux?
FastCGI is a good idea. But as far as I see, the current implementation uses only one thread. This means 10 times slower compared to the current multithreaded implementation via http.sys. Am I right? (When I did not use THttpApiServer.Clone - which is the same as the current FastCGI implementation - I got 700 RPS (requests per second); when I set THttpApiServer.Clone(12) I got 8800 RPS). For testing I used ApacheBench (ab.exe)
If we create 12 FastCGI apps in a pool, then the question is shared data (fSessions and fStaticData in our case).....
About nginx - yes, it supports FastCGI.
About Postgres:
the closest analogue to bulk insert is the COPY operation http://www.postgresql.org/docs/8.3/inte … -copy.html (in our case via STDIN)
Also, the last version of Postgres supports JSON!! http://www.postgresql.org/docs/devel/st … -json.html. So we can get a select result directly in JSON format - it's funny
FPC support is great news! But one question - what about the THttpServerGeneric descendants under Linux? I mean THttpApiServer (needs http.sys) or THttpServer (needs IOCP). If we make THttpServerGeneric without asynchronous IO, do we need to change the architecture from a fixed number of threads to an unlimited thread pool (which is bad - every thread in the current implementation stores a database connection, and creating many DB connections is very costly)? Or do you have another idea?
For example, my beloved HTTP server nginx uses these models: http://nginx.org/en/docs/events.html
Also, if we migrate to Linux we can support PostgreSQL directly - under Linux it's the best production DB for now. Almost as good as (and sometimes even better than) Oracle in performance on big databases (1Tb and up), but free and open source.
this query syntax is different in every database
for MS SQL the syntax is: select top 1 ......
for Oracle: select .... where ROWNUM <= 1
About inserting - I agree with you - this is not good on the server side.
Always inserting rowCount=000000 (6 digits) is the way. I will code this feature, make performance tests and post the results here. After we get the test results we can decide whether to add this feature or not.
About removing rowCount for small requests: IMHO 15 additional bytes do not matter. But if we don't remove rowCount we will get identical behavior in all cases - that is more important.
For example, in the JS client part I simply write

if (result.rowCount > 0) {...}

If rowCount is optional I must write something like this

if (result.rowCount ? (result.rowCount > 0) : (result.data.length > 0)) {...}

AB, I wrote a ticket (http://synopse.info/fossil/tktview?name=0b142f4f28) about the new rowCount feature. Please look at it. I think if we close this ticket, "rowCount" becomes useful in the generic case. But I do not know how to do this in the current implementation; the only way I see is to write the columns after the rows and, in the reader, parse the result from the end of the JSON. Maybe you have a better idea......
If we write "rowCount": xx in TSQLTable.GetJSONValues, and in TSQLTableJSON.ParseAndConvert analyze whether it exists and, if so, use the rowCount value instead of calculating the row count, that would be enough for me.
Yes, I understand about the one-record case. And I would make the field definition optional. In some cases we need it, in others not. Let the default be FALSE - no field definition passed.