Looking deeper into your code I realized that there is nothing wrong with it, and that gave me the idea to solve my problem. It works perfectly with the 3rd-party server if TJWTHS256 is created with the right parameters. I had based my code on this line from the documentation, where the second parameter is 10:
j := TJWTHS256.Create('sec',10,[jrcIssuer,jrcExpirationTime,jrcIssuedAt,jrcJWTID],[],60);
Changing the second parameter to 0 is all it took to get it working.
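For reference, here is a minimal sketch of the working call, assuming the TJWTHS256 API from SynCrypto (issuer and claim values are only illustrative): with the second parameter at 0 the raw secret is used directly as the HMAC-SHA256 key, while a non-zero value first derives the key through that many PBKDF2 rounds, which an external verifier will not reproduce.
uses
  SynCommons, SynCrypto;
var
  jwt: TJWTHS256;
  token: RawUTF8;
begin
  // 0 PBKDF2 rounds: 'sec' itself becomes the HMAC key, as the 3rd-party server expects
  jwt := TJWTHS256.Create('sec', 0, [jrcIssuer,jrcExpirationTime,jrcIssuedAt,jrcJWTID], [], 60);
  try
    token := jwt.Compute([], 'myissuer'); // send the result as "Authorization: Bearer <token>"
  finally
    jwt.Free;
  end;
end;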
I think I can understand your code somewhat better now: THMAC_SHA256 is created with the private key and it can be updated later without resupplying the key.
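In case it helps others, a minimal sketch of how I read that, assuming the Init/Update/Done methods of THMAC_SHA256 in SynCrypto: the key is absorbed once by Init, and later Update calls only supply data.
uses
  SynCommons, SynCrypto;
var
  hmac: THMAC_SHA256;
  key, part1, part2: RawByteString;
  digest: TSHA256Digest;
begin
  key := 'sec';
  part1 := 'header.';
  part2 := 'payload';
  hmac.Init(pointer(key), length(key));       // the private key is captured here
  hmac.Update(pointer(part1), length(part1)); // later updates need only the data
  hmac.Update(pointer(part2), length(part2));
  hmac.Done(digest);                          // final HMAC-SHA256 of part1+part2
end;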
ab,
The result of my hacked code and the result of the original code are different. The result of the modified code is accepted by the 3rd-party server. I think it is worth checking.
"The private key is supplied at constructor class level"
I saw this in the code: it is used to calculate the 1st part of the token (the header), but it does not seem to be stored for later use.
Hi ab,
If my understanding is right, the current implementation of ComputeSignature does not work according to the standard. It should be something like this:
var signature: TSHA256Digest;
begin
  ...
  // result := fHeaderB64+payload+'.'+ComputeSignature(payload); << current code
  HMAC_SHA256(Private_key, fHeaderB64+payload, signature);
  result := fHeaderB64+payload+'.'+BinToBase64URI(@signature, SizeOf(signature));
In order not to break existing code it could support both ways.
Private_key does not seem to be available at this point.
Hi ab,
I am looking for an example on how to access a 3rd-party REST service with JWT. I do not suppose there is any yet, but it does not hurt to ask.
Happy New Year!
Leslie
OK, I found something useful: https://www.nginx.com/resources/admin-g … rse-proxy/
Just not sure if HTTPS requests can be redirected transparently to a mORMot server that handles "http" only...
Incoming HTTPS can be forwarded to a mORMot server, but the connection to it has to be upgraded to HTTP/1.1 to avoid performance issues.
This is an example with the required changes, which are the same for both the http and the https server sections.
Edit the site configuration in /etc/nginx/sites-enabled as root.
## Add upstream for keepalive
upstream http_backend {
    # ip:port for the backend servers
    server 127.0.0.1:8888;
    server 127.0.0.1:8889;
    # The number of inactive connections kept open. The oldest one is closed when the limit is reached.
    keepalive 100;
}

server {
    location /someurl {
        proxy_pass http://127.0.0.1:8888;
        # ....
        ## Add for keepalive START
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        # Remove the Connection header if the client sends it,
        # it could be "close" to close a keepalive connection
        proxy_set_header Connection "";
        ## Add for keepalive END
    }
}
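For completeness, the mORMot side listening on 127.0.0.1:8888 (the address the config above forwards to) can be as simple as the following sketch, assuming the TSQLHttpServer API from mORMotHttpServer, with an in-memory REST server used purely for illustration; on Linux the plain socket server is selected with useHttpSocket:
uses
  SynCommons, mORMot, mORMotHttpServer;
var
  aModel: TSQLModel;
  aRest: TSQLRestServerFullMemory;
  aHttp: TSQLHttpServer;
begin
  aModel := TSQLModel.Create([]);                   // no ORM tables needed for this sketch
  aRest := TSQLRestServerFullMemory.Create(aModel); // any TSQLRestServer descendant would do
  aHttp := TSQLHttpServer.Create('8888', [aRest], '+', useHttpSocket);
  try
    readln; // keep the console server running while Nginx forwards requests to it
  finally
    aHttp.Free;
    aRest.Free;
    aModel.Free;
  end;
end;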
For HTTP/1.0 requests Wireshark shows 5 frames before the Recv timeout occurs. I can send you the capture file if you want to investigate.
But for HTTPS the problem was something totally unexpected: I had kept the Nginx config file open in the editor for a while. Saved the file a couple of times and there was never any error. But after closing the editor and opening the file again all changes were gone. (The editor had root rights, there was no reason I am aware of for this.) So while I thought I was testing several settings for HTTPS, it turned out that it had the same partially setup configuration all the time. Brrrr.
Finally everything is now working perfectly with HTTP/1.1. Plus the responses from my new mORMot server feel snappier and use less CPU time than the previous solution.
I think I can define the problem more clearly now:
Nginx can set up the request it forwards to the backend server in several ways. These are the ones mORMot can handle and that I have tried:
HTTP/1.0 (default) --> mORMot HttpWebserver: after a ~5 sec timeout an error is displayed on the console, "Errno Recv()=11", and after that the response gets delivered back to the proxy.
HTTP/1.1 (this requires certain settings for Nginx) --> mORMot HttpWebserver: instant reply
There are two possible solutions:
After setting up the Nginx reverse proxy to forward both incoming HTTP & HTTPS requests via HTTP/1.1, this is what happens:
a) incoming HTTP request - Nginx sends HTTP/1.1.
b) incoming HTTPS request - Nginx sends HTTP/1.0.
Since b) looks to be a bug I have created a bug report for the Nginx team.
While this gets sorted out I am looking at the mORMot side to see whether the cause of "Errno Recv()=11" can be found and solved.
Yes, of course, but Nginx fills in the request header for v1.0. Is it possible to handle it as if it were v1.1 and "keep it alive" regardless of what the header says? Since the EWB server did not require any extra configuration and worked with Nginx out of the box, I assume this should be possible.
When Nginx forwards an HTTPS request to the mORMot server as an HTTP request it ignores "proxy_http_version 1.1", which is required for keep-alive. Could be a bug. So it seems there is no mORMot-only solution for this, unless it is somehow possible to upgrade to v1.1 from the mORMot side.
Still not quite through yet.
For HTTPS connections this solution does not seem to work. Maybe it is easier to handle it on the mORMot side: is it possible to tell THttpServer to ignore this setting in the header and force it to "keep-alive"?
After several misconceptions about what could be the root cause, I finally have the right answer.
With the help of Wireshark it turned out that when Nginx forwards the request, the Connection header is set to close the connection. It has to be keep-alive to get a fast response.
And this is how to do it.
Edit the site configuration in /etc/nginx/sites-enabled as root.
# Add upstream for keepalive
upstream http_backend {
    # ip:port for the backend servers
    server 127.0.0.1:8888;
    server 127.0.0.1:8889;
    # The number of inactive connections kept open. The oldest one is closed when the limit is reached.
    keepalive 100;
}

server {
    location /someurl {
        proxy_pass http://127.0.0.1:8888;
        # ....
        # Add for keepalive START
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        # Remove the Connection header if the client sends it,
        # it could be "close" to close a keepalive connection
        proxy_set_header Connection "";
        # Add for keepalive END
    }
}
I assume that the EWB server worked out of the box because it ignores this part of the header and keeps the connection alive anyway.
Considering that the EWB server works fine with Nginx as a reverse proxy, the mORMot server's response must be different in some small detail.
It seems that Nginx holds back the answer because it is still waiting for something from the mORMot server, which may never arrive; in that case the answer is finally sent to the client after a timeout occurs. So the speed issue may be solvable by fine-tuning Nginx. But the problem could also be something missing from the mORMot response, or a missing low-level signal ...
It is not TSQLDBZEOSConnectionProperties either. When sending a data URL to the mORMot server directly, the JSON answer is returned as fast as expected. Which leaves Nginx itself or the Nginx-mORMot communication as the suspect.
The Zeos-only test app is as fast as can be expected. It now seems like a mORMot issue within TSQLDBZEOSConnectionProperties.Execute.
I have noticed one difference between Firebird 2.1 & 2.5 regarding TSQLDBZEOSConnectionProperties.Create:
2.1 needed all information in the ServerName parameter.
2.5 worked when all params were assigned properly.
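For reference, and assuming the URI class helper from SynDBZeos (file path and credentials are placeholders), the 2.5-style call that worked looks roughly like this:
uses
  SynCommons, SynDB, SynDBZeos;
var
  props: TSQLDBZEOSConnectionProperties;
begin
  // Firebird 2.5: host(:port) goes into the server URI, while database,
  // user and password are passed as the separate constructor parameters
  props := TSQLDBZEOSConnectionProperties.Create(
    TSQLDBZEOSConnectionProperties.URI(dFirebird, 'localhost:3050'),
    '/var/db/mydata.fdb', 'SYSDBA', 'masterkey');
  try
    // props.Execute(...) and the other SynDB methods work as usual from here
  finally
    props.Free;
  end;
end;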
No, not yet. I was hoping that maybe it is just a configuration issue with mORMot. Maybe something like connection pooling is required ... My other idea was that maybe the client not matching the server version could be the problem.
Anyway, I know a little more now. After migrating the database to the Linux native Firebird 2.5 server - so Wine is gone now - the speed is at least as terrible. So it is definitely not Wine related.
I will check with a mORMot-less, Zeos-only test app now ...
Hi,
I have an unusual setup: mostly Windows apps running under Wine, but it works really fast.
Ubuntu 14.04 32Bit
Nginx (Linux) is the front end. It handles the static part and forwards the data requests to the EWB Webserver (Wine), which comes with Elevate Web Builder and uses ODBC for data connections to Firebird 2.1 (Wine).
Now that I have a little more time, the goal is to create a fully native, mORMot-based solution for Linux. The webserver is already up and running, but at this stage it still connects to the Firebird server (Wine, localhost, not embedded!). The problem is that a simple query takes 2-5 seconds to execute via TSQLDBZEOSConnectionProperties. Until now I have not had any performance or stability issues; Wine works surprisingly well, and the same queries took only a few ms with the EWB server. So even though this is a kinda strange setup, I find it hard to believe that Wine is at fault here. But I cannot really think of anything else either. Maybe someone with more experience ...
Wow, I am amazed at how fast you are fixing bugs.
Thanks a lot!
Leslie
Hi,
Delphi 2005 apparently does not like GetSQLite3Library being a class function.
[Error] SynSQLite3.pas(3132): E2356 Property accessor must be an instance field or method
This might be useful to read : http://synopse.info/files/html/Synopse% … #TITLE_524
RObyDP
You are focusing on speed, but compression ratio and CPU load are very important too. Can you share results with these included?
The gain in compression speed may easily be lost while transferring less-compressed data on the wire. A closer-to-real-life test would be to extend the test case by transferring the data over a WAN to a mORMot client.
Currently mORMot is at its best on 32-bit; it is excellent for cheap Linux VPSes. Probably most users would be interested in 32-bit solutions.
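To make that comparison concrete, here is a rough sketch of measuring ratio and time on a representative payload, assuming SynLZCompress, TPrecisionTimer and StringFromFile from SynCommons; the file name is a placeholder, and the same block can be repeated for the other algorithms:
uses
  SynCommons;
var
  json, lz: RawByteString;
  timer: TPrecisionTimer;
begin
  json := StringFromFile('sample.json'); // any representative, non-empty response body
  timer.Start;
  lz := SynLZCompress(json);
  writeln('SynLZ: ', (Int64(length(lz))*100) div length(json), '% of original in ', timer.Stop);
end;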
Erick, the link is not working.
Erick,
I have EWB too. Exploring SMS these days.
The SMS compiler is much more advanced than EWB's. It has a very interesting memory model: it supports the use of pointers and Delphi-style memory allocation instead of relying directly on the DOM, which makes it possible to write code shared with Delphi/FPC much more easily. This feature being new, I am not sure how stable it is.
On the other hand EWB is stronger on the GUI front, plus controls can be bound to datasets Delphi-style.
In-memory datasets with transaction handling are also more advanced in EWB.
IDE:
The SMS text editor is better, more like Delphi/Lazarus.
Property editors are lacking in both products for me.
EWB support is good quality, but releases often get heavily delayed.
I wish we could have the best of both worlds in one product.
Alex,
I have seen you posting a sample app at the SMS forum (MS Employee Directory). This could be useful sample code for many of us. Could you please share the source code for both server and client?
Ok, thanks.
AB,
One more question: is there any example I could look at about data binding when the data is supplied by a mORMot-based server to an Ionic-based UI? (UI rendering & saving forms)
This is VERY helpful. Thank you.
I still use SMS for my own projects, since I'm able to deploy applications on Android and iPhone (and any other system) as HTML5 app, by-passing the iTunes/GooglePlay stores
Could you please share a little more on the "by-passing" part?
My mobile apps are not distributed publicly and I am looking for efficient ways to distribute/update an HTML5 app without any "stores" involved. The target is mostly Android, but occasionally iOS or Windows Mobile may come into the picture. AFAIK installing to iOS requires purchasing an Apple developer ID. Reading your comment made me wonder if there is any way around this?
Ok, now I get what you mean. But imho it still does not help here. One problem is that not only fList is involved, fListInterfaceMethod is involved too. So we have to do similar things for the fListInterfaceMethod array --> 2 atomic operations in a non-atomic "parent" method. Having a look at the current implementation, it should be no problem for one thread to operate on fListInterfaceMethod while another thread is currently operating on fList.
fList and fListInterfaceMethod are not independent. They have a one-to-many relationship which can be exploited. If updated in the right order, the data structure they represent remains valid even between the two atomic operations. The trick is that when expanding the list, fListInterfaceMethod must be updated first and fList is to be updated last. I had a quick look at the code and it seems TServiceContainer.AddServiceInternal is where the changes could be done. When removing items the update order is reversed: fList must be updated first. When there are both updates and deletes it is a two-phase process: all updates first, and then all deletes in the second phase.
Possible problems here are more of an academic kind and should only arise if one is trying to Add/Delete the same interface from several threads at the same time.
This is easy. Since with my suggestion updates are forced to be serialized, it is enough to check whether the item to be added/removed already exists in NewList.
Leslie7 wrote: Interlocked commands allow you to check if the original value is in place at the time of the exchange. If not, you can restart the process in a loop until your modified list can be safely exchanged. Checking for a timeout in the loop is most likely not necessary for the normal use case, but probably better to include because of possible stress tests.
I know about Interlocked*** commands, but that would not help in this situation imho. By calling InterlockedExchangePointer you are exactly doing what the method name tells you: you are exchanging one pointer with another one, in a thread safe, atomic way. And that's the problem. Exchanging the pointer is atomic, but the DeleteInterface/AddInterface method is still a non-atomic operation. If 2 or more threads are calling DeleteInterface() method then you will face exactly the same problems as you do without the InterlockedExchangePointer call. It does not help. InterlockedExchangePointer only ensures that exchanging the fList pointer is an atomic operation, it does not care about the data "behind" that pointer. Various threads would "overwrite" fList again and again without seeing each others changes. Comparing to the old value doesn't help you either.
It works if it is done the right way.
The list that fList references is always entirely read-only.
For any change:
1. OldList := fList // two references to the same list
2. OldList is duplicated to NewList // two identical lists
3. changes are applied to NewList
4. if InterlockedCompareExchangePointer(pointer(fList), pointer(NewList), pointer(OldList)) <> pointer(OldList) then
// we know that it has been changed since we started --> start again with 1.
else
// we know that it has not been changed since we started, and InterlockedCompareExchangePointer has already replaced fList with NewList --> SUCCESS
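Put into illustrative code, a minimal sketch assuming the Win32 InterlockedCompareExchangePointer import, with a plain TStringList standing in for the real TServiceContainer internals:
uses
  Windows, Classes;
var
  fList: TStringList; // the published list: treated as strictly read-only once visible

procedure AddItemLockFree(const Item: string);
var
  OldList, NewList: TStringList;
begin
  repeat
    OldList := fList;                     // 1. snapshot the currently published list
    NewList := TStringList.Create;        // 2. duplicate it
    NewList.Assign(OldList);
    if NewList.IndexOf(Item) < 0 then     // 3. apply the change to the copy only
      NewList.Add(Item);
    if InterlockedCompareExchangePointer( // 4. publish only if fList did not change meanwhile
         pointer(fList), pointer(NewList), pointer(OldList)) = pointer(OldList) then
      break;
    NewList.Free;                         // lost the race: drop our copy and start over
  until false;
  // note: OldList may only be freed once no reader can still hold a reference to it
end;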
In my opinion there is no need to make the AddInterface() and DeleteInterface() methods thread safe at all. AddInterface() has existed for a very long time now, and there has never been any problem with it.
Calling DeleteInterface() is quite a low-level task, and you should have a good reason to do that. This feature is not intended to be called heavily anywhere in your server sources. The programmer is responsible for ensuring that AddInterface() and DeleteInterface() calls are done in thread-safe functions. Personally I use those features only inside a single thread-safe singleton class.
Requirements can be different from yours. Some servers need to be up truly 24/7, and it must be possible to change/extend their functionality without restarting or going offline for maintenance. This can be achieved via plugins and/or scripting, but this is the point where updating the list of interfaces in a thread-safe fashion comes into the picture. Currently a mORMot-based server must go offline to safely update the published interfaces.
Creating copies of those lists does not help here. That would be "thread safe" only in the sense that no access violations happen.
Imagine what happens if 2 threads are calling DeleteInterface at the same time, each one deleting another interface. Each call to DeleteInterface creates its own copy of the lists, deletes the interface and exchanges the pointer to that list. One of those DeleteInterface calls would be "lost" for sure, because at the time when Thread 2 is creating its private fList copy, Thread 1 hasn't exchanged the fList pointer yet. So, if the goal is to make this function thread safe then there is no other way than locking.
I'll write down some more thoughts/advice about that whole thing later...
Interlocked commands allow you to check if the original value is in place at the time of the exchange. If not, you can restart the process in a loop until your modified list can be safely exchanged. Checking for a timeout in the loop is most likely not necessary for the normal use case, but probably better to include because of possible stress tests.
To be thread-safe without locking, the DeleteInterface function can create modified copies of fList and fListInterfaceMethod, and then call InterlockedExchangePointer to replace the old lists with the new ones.
Or maybe create a copy of the whole TServiceContainer, and replace that.
This may still cause problems if any code is using the original object at the time of the exchange. It could work if all the routines accessing the object do so by assigning it to a local variable which is the only point of access from there. But freeing the object when not needed any more could be tricky in this case. Using interfaces (which are reference counted) should be the solution. It seems to me that thread safety cannot be accomplished without AB having to change some code.
Oz, is this implementation thread safe? Is this safe when the server is under load?
Thanks for links, I will start my work soon.
http://www.cromis.net/blog/2013/02/tany … container/
TAnyValue was my inspiration for management operators (Initialize, Finalize, Copy, Clone).
TOmniValue might be worth a look for ideas.
This is only one way mORMot can be used, and it seems more like the exception. I think my concerns apply to all the rest. Sharding does not suit all purposes and is not a pattern that can be easily implemented cleanly, mainly because aggregates have practical size limits. Sometimes the "unit of work" can be a selected group of aggregates which need to be processed together.
AB,
I think the record versioning approach in itself is problematic. The replication can cause inconsistent states for periods of time. Following the unit-of-work pattern and assigning the same version number to every record modified by processing a "unit of work" could ensure consistency. In simple cases like updating a single record it would be almost the same as record versioning.
ab,
It is good to know.
I am sure the current WebSockets implementation fits the specific needs of the framework very well. The question is whether it can be used with browsers (which implement the official standard) as clients. What about cross-platform clients?
My opinion is to use libuv to replace IOCP. The WebSocket part in mORMot is already implemented; we lack only async IO.
Pros for this lib are:
- battle tested by millions of users
- falls back to the optimal mechanism depending on platform: epoll, kqueue, IOCP, event ports
- C API
They both seem to be good. Maybe we should create a positive/negative list for both.
" fallback to the optimal mechanism depending on platform: epoll, kqueue, IOCP, event ports"
this goes for Websocket Plus Plus too. It can be linked with different libraries. Boost being one of them uses the best choice for every platform. I only outlined epoll because the platform most developer has to deal with beside Windows is Linux, so this is where mORMot could use the improvement the most.
I think the question here is more than just async IO, websockets support is a huge topic as well:
WSPP passes all the Autobhan tests, it implements most websocket features, much more than mORMot does. For example one I am interested in is scatter&gather. I would argue that what it has to offer on the websockets front carries quite a weight.
Yes, it is interesting in this context.
But it is a C++ library, so a flat C API is needed to consume it from Delphi or FPC...
Is anyone willing to do this?
I am not a C/C++ guy. But maybe the developer is willing to create a C API.
It has the ability to respond to HTTP requests other than WebSocket Upgrade requests.
I may be wrong, but it seems to me that it has all the low-level stuff based on epoll, and mORMot has all the high-level implementation to handle the HTTP requests in its own HTTP server. They seem like a good match.
It could kill more than two birds with one stone to provide an efficient WebSocket and HTTP server implementation for a lot of platforms with a single library. Having the core compiled with e.g. a GNU compiler could boost performance as well. You already have the SQLite .obj files compiled from C in the framework; WebSocket++ could become part of mORMot in the same manner.
What do you think of using http://gwan.com/ as our HTTP server under Linux?
Sounds like a great and proven system, to be used instead of http.sys (which sounds the best under Windows) and any FastCGI processes (e.g. Apache/NGinx/Lighttpd) under Linux.
Take a look at http://www.wikivs.com/wiki/G-WAN_vs_Nginx
AB,
How about this: https://github.com/zaphoyd/websocketpp
It seems to be a typical head office with local offices scenario. Search for "Replication use cases" in the documentation.
1. I can't use ORM(the app exists, so I'll not recode it), so I can't use mormot replication.
You do not have to change the existing code; ORM classes are just another way to access the same data. They can coexist with any DAC, though one has to be mindful of caching. You can write a simple generator to automatically create ORM classes from the database metadata. The framework handles it the other way around.
It is worth discovering what mORMot has to offer, e.g. interface-based services, two-way communication over WebSockets, callbacks via interfaces, live replication ...
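As an illustration of that coexistence, here is a sketch of mapping an ORM class onto an existing external table, assuming the usual TSQLRecord plus VirtualTableExternalRegister pattern from mORMotDB; class, field and table names are placeholders, and Props stands for whatever SynDB connection the application already uses:
uses
  SynCommons, SynDB, mORMot, mORMotDB;
type
  TSQLCustomer = class(TSQLRecord) // hand-written, or generated from the DB metadata
  private
    fName: RawUTF8;
    fBalance: Currency;
  published
    property Name: RawUTF8 read fName write fName;
    property Balance: Currency read fBalance write fBalance;
  end;
var
  Model: TSQLModel;
  Props: TSQLDBConnectionProperties; // the existing connection (e.g. the Zeos/Firebird one)
begin
  Model := TSQLModel.Create([TSQLCustomer]);
  // bind the class to the pre-existing CUSTOMER table, so ORM code and the
  // current data access code keep working against the same data
  VirtualTableExternalRegister(Model, TSQLCustomer, Props, 'CUSTOMER');
  // then create the TSQLRestServerDB with this Model as usual
end;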
What you proposed to use one big master-database and few replicated detail-database filtered by client has 2 drawbacks:
No, I meant one slave db for each client. But now I can see that you do not even need that since you already have the local office servers as slaves.
1. 6000db x 100 mb = will be already 600GB database and growing
You might want to check mORMot's up-and-coming big data support.
An SQLite database can be smaller than a Firebird database with the same data. It is worth creating a test export to see the ratio.
2. I can't use "fast download database". "Fast download database" means backup now "master-database", zip it and download it quickly to client to replace "detail-database"(less than 15 MB for a full 100 MB db). This is useful when synchonization was a long time ago and changes log is to long to be asked by SQL(part by part and buffered to client), or the client database is corrupted, or computer is reinstalled etc.
Even from a single database you can export & compress the data you want to have on your local server. Once again, interface-based services can be useful here.
mORMot tends to be well optimized. You may find its replication to be gentle enough with the bandwidth and SQLite to be fast enough at applying the updates. Search for "Data access benchmark" in the documentation.
SQLite does not support stored procedures, though, so it may not be suitable as an easy replacement for the local office servers. But in the cloud either SQLite or PostgreSQL can be a better candidate: because of the way they store records they tend to use less disk space and memory.
It is worth testing your usage scenarios with mORMot for speed and resource consumption before making design decisions.
"each slave database contains only client specific data" - All data commonly used by all clients and read only to them can be put to a separate database. Much easier to maintain this way. When the client needs its database these two are handed over.
emk,
Applying metadata changes to all these databases can be painful. If you have only a single executable it may be down for a while until all the updates are finished.
A single database, as AB has suggested, can save you a lot of headaches. I think the best would probably be to set up a selective master-slave replication where the master database has all the data and each slave database contains only client-specific data. mORMot supports replication where slave servers are online only for the time of the replication. You only need a second server which can act as the slave server on behalf of all the client databases. It could iterate through the client databases periodically and run the replications one by one. This way you can have the best of both architectures without the disadvantages. All clients can access the server any time. Also significantly fewer resources are used. You may even include both servers (Master & Slave) in a single executable.
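Here is a sketch of the slave side of that loop, assuming the RecordVersionSynchronizeSlave method described in the SAD documentation and a placeholder TSQLOrder table that publishes the required TRecordVersion field:
uses
  SysUtils, SynCommons, mORMot;
type
  TSQLOrder = class(TSQLRecord)   // placeholder: any replicated table needs
  private                         // a published TRecordVersion field
    fCustomerID: TID;
    fVersion: TRecordVersion;
  published
    property CustomerID: TID read fCustomerID write fCustomerID;
    property Version: TRecordVersion read fVersion write fVersion;
  end;

// called periodically by the second server, once per client slave database
procedure SynchronizeOneClient(SlaveRest: TSQLRestServer; MasterClient: TSQLRest);
begin
  // pulls everything changed on the master since the slave's last known version;
  // a negative result means the synchronization failed
  if SlaveRest.RecordVersionSynchronizeSlave(TSQLOrder, MasterClient) < 0 then
    raise Exception.Create('replication failed for this client');
end;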
Cheers,
Leslie
Avoiding losing the connection is probably the next best thing after automatic reconnection.
Calling TZConnection.Ping at intervals shorter than the timeout should be enough to keep the connection open in practice.
http://dev.mysql.com/doc/refman/5.7/en/mysql-ping.html
I am not sure whether it should be MYSQL_OPT_RECONNECT=TRUE or MYSQL_OPT_RECONNECT=1.
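A minimal sketch of the ping idea: a background thread calling the method mentioned above at a fixed interval. Note that in Zeos the component method may be named PingServer rather than Ping depending on the version, and this assumes the connection is not being used concurrently from another thread at that moment.
uses
  SysUtils, Classes, ZConnection;
type
  TKeepAliveThread = class(TThread)
  private
    fConn: TZConnection;
    fIntervalMS: Cardinal;
  protected
    procedure Execute; override;
  public
    constructor Create(aConn: TZConnection; aIntervalMS: Cardinal);
  end;

constructor TKeepAliveThread.Create(aConn: TZConnection; aIntervalMS: Cardinal);
begin
  fConn := aConn;
  fIntervalMS := aIntervalMS;  // keep this well below the MySQL wait_timeout
  inherited Create(false);
end;

procedure TKeepAliveThread.Execute;
begin
  while not Terminated do begin
    Sleep(fIntervalMS);
    if fConn.Connected then
      fConn.PingServer;  // the "TZConnection.Ping" above; a trivial SELECT would also do
  end;
end;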
Thanks!
I have searched the cross-platform units for 'TSQLRestBatch' and came up empty. I should have searched just for 'batch'.
AB,
Can TSQLRestBatch be used with the cross-platform units?
Ab,
"since each client is a server"
I think you probably have not understood the concept. I have not mentioned clients at all - clients are serviced by the frontend server, it is business as usual.
It is the server side which has two layers: one accessible from the outside world, while the other one - the backend - with the sensitive data is not.
"Just use a mORMot server for what you call a "client", and a mORMot client on the "server" side."
This is the normal logic for a frontend server to connect to a backend server. The one thing I would like to change, if possible, is how the connection is initiated. Normally the frontend server would initiate the connection as a client to the backend server. I would like the backend server to initiate the connection, which is then used by the frontend server to call the published services of the backend server. From the connection viewpoint the backend is the client and the frontend is the server. But from the viewpoint of the services the frontend is the client and the backend is the server.
This setup would shield the backend server from being hacked from a compromised frontend server.
I know this may seem like overkill, but there is data sensitive and valuable enough to be worth the extra mile.