Hello,
I need assistance.
I would like to try the framework, but unfortunately when running TestSQL3 from the last few fossil leaves I got:
O:\Components\Synopse\SQLite3\TestSQL3.exe 0.0.0.0 (2012-04-30 17:15:54)
Host=HOSTX User=USERX CPU=2*9-6-5894 OS=13.1=6.1.7601 Wow64=1 Freq=2467851
TSynLog 1.16 2012-04-30T17:19:45
20120430 17194558 fail TTestLowLevelCommon(0216A8E8) Low level common: IdemPropName ""
  stack trace 0007431B SynCommons.TTestLowLevelCommon._IdemPropName (22547)
              001839D2 SQLite3SelfTests.SQLite3ConsoleTests (217)
              000076C4 System.@StartExe
What could be the problem?
Thanks.
Are you sure you retrieved the latest version from http://synopse.info/fossil, and ALL expected files?
See http://synopse.info/fossil/wiki?name=Get+the+source
Perhaps some common files or units are in the wrong version.
What is the Delphi compiler you are using?
It is working for me with Delphi 6, 7, 2007, 2010 and XE2.
Could you step through with the debugger and tell us which exact code line does not work?
I am running Win7 64bit, Delphi XE2 Update 1.
I am already running the latest version from fossil. No other (old) versions around.
The 10th test failed, in TTestLowLevelCommon._IdemPropName, found in SynCommons.pas:
22546: Check(UpperCaseU(WinAnsiToUTF8('aйзD'))='AECD');
My default locale is Cyrillic (Win-1251).
I tried converting SynCommons.pas to UTF-8 directly from the IDE (File Format), and also adding a UTF-8 BOM so the IDE would read it as UTF-8 text. Same failing results.
Could this be the problem?
If yes, how can I get around this?
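As a side note, the root cause can be illustrated outside Delphi. This is a minimal Python sketch (not mORMot code), assuming the test literal was authored under a Western European ANSI codepage: the same source byte is a different character under a Cyrillic locale, so the uppercase comparison fails.

```python
# Illustration (Python, not mORMot code): one ANSI byte from a source
# literal maps to different characters depending on the active codepage.
raw = b"\xe9"                    # a byte as stored in the .pas file
print(raw.decode("cp1252"))     # Western locale reads it as 'é'
print(raw.decode("cp1251"))     # Cyrillic locale reads it as 'й'
```

This is why converting the source file to UTF-8 with a BOM, so the IDE stops reinterpreting it through the system codepage, is the usual workaround.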
Other tests are also failing (pdf; one client-server test runs forever). I will post them once I resolve this issue. I guess all the failures may be related to the problem above.
Thanks.
Last edited by chapa (2012-05-01 12:59:28)
Thanks, easy fix. I guess the pdf failures are related to WinAnsiToUTF8, so I will not trace that test.
I am experiencing another major issue. About half a year ago I tried some of the Samples, found major performance problems, and gave up right after the first tries.
Now, after reading the SAD, I got excited. Good work.
But I faced the same overall performance problems, and have just traced the cause.
SynCrtSock.pas
3989:
function WinHttpSendRequest(hRequest: HINTERNET; pwszHeaders: PWideChar;
  dwHeadersLength: DWORD; lpOptional: Pointer; dwOptionalLength: DWORD;
  dwTotalLength: DWORD; dwContext: DWORD_PTR): BOOL; stdcall; external 'winhttp.dll';
I don't know why, but on my machine calling this function is EXTREMELY slow. Sometimes it takes more than a second for just 2-3 calls?!?
All antivirus programs are stopped, in case that may be related.
I tried on another Win7 machine and on Win2008RC2, and did not face the same issue. All worked fine.
Did anyone face such problems before? What could be the reason for such poor winhttp.dll performance?
Thanks.
Last edited by chapa (2012-05-01 14:27:39)
I'm getting about 3000 requests per second with the default test program.
And I also found out that WinHTTP is much faster than WinINet, in all configurations.
You can try with WinINet class, or with the direct WinSocket connection layer.
If it works with other Win7, it sounds like a specific configuration issue.
Did you use a proxy on this computer? Even if disabled by now?
Thanks for the interest, in any case.
Yes, sometimes I use a proxy; that was the first thing I checked when I saw the problem. But no proxy is configured right now.
Could that be the cause? It would be good if there were a way to reset all WinHTTP settings.
I will try to find the exact problem, because I want to get familiar with the framework and give it a try, even in production.
I will also be more than happy if I get a chance to contribute.
Thanks; I will also try WinINet and WinSocket with the same tests and let you know the results.
About WinHTTP proxy setting, see http://msdn.microsoft.com/en-us/library … s.85).aspx
You may try to reset the proxy settings to normal.
Or try to check your IP configuration.
Actually, I found what the problem is.
Nod32 is catching and processing every HTTP-like request, even if the port is not in the list of "Ports used by HTTP protocol". The framework's compression looks very "suspicious" to it, and processing of compressed HTTP content seems very slow compared to regular HTTP traffic.
Disabling Nod32 HTTP filters fixed the mess. Good to know.
One question: is TSQLRestServerFullMemory aware of CreateSQLMultiIndex? A query using TSQLRestServerFullMemory is very slow compared to TSQLRestServerDB with created indexes.
Thanks.
Good to know, about the Nod32 HTTP filter.
Is there no other solution than disabling the whole Nod32 HTTP filter? Adding a particular port to ignore, for instance?
No, TSQLRestServerFullMemory does not implement any index in its current implementation.
It just ignores them.
So with many records, it may be slower than SQLite3, which has a very effective index handling.
How many records do you store in TSQLRestServerFullMemory?
I may add indexes to the road map - it could definitely make sense.
I have not played much with the Nod32 options.
There is an exclusion list of specific domains that can be set to skip scanning. I did not try adding localhost or a remote host to that list.
Also, disabling in-depth heuristics and advanced heuristics may lead to much better performance, as Nod32 will then not try to interpret custom compressed content as a binary executable and detect low-level malware code inside the "unknown" binary content type.
By default, HTTP checking is enabled only for ports "80, 8080, 3128". I used two different ports >1024 for the tests, but there was no effect until the whole HTTP checking was disabled.
My use case is not common.
On the middle tier it is more like data processing than ORM, but I think I can benefit from using an ORM framework, even if I may take a small performance hit compared to the current custom implementation.
On the server side I have a dozen tables, a few of them with a couple of million objects. Nothing special. There I will benefit from using mORMot, with the fast SQLite3 engine and caching. RESTful services are what I need there.
But the middle tier needs something fast: an in-memory indexed implementation, which receives from client programs around 300-500k newly generated objects (only content, no internal ids) every 5 minutes.
For each incoming object, I must check: do I already have it in memory with an assigned server id?
- yes (99.9%): check if the content changed
  - changed (<1%): update the object on the server tier, using the known id and the new content
  - not changed (>99%): do nothing
- no (<0.01%): add it to the server, and store the content in memory along with the new server id
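The flow above could be sketched roughly as follows. This is a hypothetical Python illustration, not mORMot code: `ChangeTrackingCache` and the `server.add`/`server.update` calls are invented stand-ins for the middle-tier cache and the server-tier REST operations, and SHA-256 stands in for whatever content hash is chosen.

```python
import hashlib

class ChangeTrackingCache:
    """In-memory map of object key -> (server id, content hash)."""

    def __init__(self):
        self.by_key = {}

    def process(self, key, content, server):
        h = hashlib.sha256(content).hexdigest()
        known = self.by_key.get(key)
        if known is None:
            # not seen before (<0.01%): push to server, remember new id
            server_id = server.add(key, content)
            self.by_key[key] = (server_id, h)
            return "added"
        server_id, old_hash = known
        if old_hash == h:
            # content unchanged (>99%): do nothing
            return "unchanged"
        # content changed (<1%): update server using the known id
        server.update(server_id, content)
        self.by_key[key] = (server_id, h)
        return "updated"
```

Since the unchanged case dominates, almost every incoming object is handled with one hash computation and one dictionary lookup, with no server round trip.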
Since new and changed objects are a very small percentage of all processed incoming objects, I think it is good for this to be a separate tier.
If the cache were good enough, it would be easier to use a client-side cache of server objects, but I guess the cache serves another purpose, and its implementation is not fast enough to process such a number of objects efficiently.
I would like to implement this with mORMot functionality; maybe BigTable is suitable for this middleware? Or another mORMot implementation?
Thanks for your input.
It is very nice having feedback from users!
What I may do is use a SQLite3 table containing the objects content HASH, for checking about updates.
Perhaps just two or more integer hashes at once, e.g. Hash32 + Adler32 + crc32 + kr32 (all available in mORMot, in optimized versions), would avoid collisions, and be much faster than SHA-256 or SHA-1. If you want to be sure, just use the SHA-256 hasher supplied in SynCrypto: if it is computed on the client side, then transmitted as hexadecimal to the server, computation power won't be an issue.
You'll save a lot of memory, and with proper indexes over the ID, check will be very fast, and SQLite3 database file will be small (only a few bytes per row of data on the server).
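A rough sketch of the combined-hash idea (Python illustration only: zlib's `crc32`/`adler32` stand in for the Hash32/Adler32/crc32/kr32 Delphi routines mentioned above, which have no direct Python equivalent):

```python
import hashlib
import zlib

def combined_hash(data: bytes) -> tuple:
    # Two fast 32-bit hashes combined; a collision would have to occur
    # in both at once, which is far less likely than in either alone.
    return (zlib.crc32(data), zlib.adler32(data))

def strong_hash(data: bytes) -> str:
    # The "if you want to be sure" option: a cryptographic hash,
    # computed on the client side and transmitted as hexadecimal.
    return hashlib.sha256(data).hexdigest()
```

Either way, the server only ever stores and indexes the small fixed-size hash per row, not the object content itself.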
Note that there are several levels of caching in mORMot - see the documentation about this point, and http://blog.synopse.info/post/2012/02/14/ORM-cache
The built-in cache within mORMot at the TSQLRecord / CRUD level may fit your purpose, if the cached hashes are stored as a TSQLRecord.
But there are other levels of cache in mORMot.
There is no need of BigTable here - and BigTable has some known issues with update + pack in its current state. So relying on SQLite3 may be a good idea.
If all cache data fits in memory (IDs + hashes), perhaps a TSQLRestServerStaticInMemory may be faster than SQLite3 (but I'm not sure of that, since SQLite3 indexes are very optimized, and SQLite3 is very good when it deals mainly with reading data).
In all cases, SQLite3 caching implemented within mORMot as global JSON cache and also at statement level may help a lot if change is only about 1%. So TSQLRestServerStaticInMemory is perhaps less scalable than SQLite3 itself.