Since mORMot stores session information in memory, what is the recommended way of handling multiple instances of a mORMot server on different machines for load balancing and/or redundancy? It seems the session signature would constantly change, requiring a new login every time a call reaches a different server instance.
For example:
First call --> Server A
Second call --> Server B (requires root/auth login to get new session signature)
Third call --> Server A (session signature was changed above, so call is now rejected from server A)
Etc.
Perhaps I'm missing something obvious, but the only solution I could come up with was to override some of the authentication methods so I could persist the session in a database table.
I also found that for the authentication to work, I had to set fIDCardinal to the signature returned from the database so it matches the one used by the other server instance.
For example:
procedure TMyCustomAuthSession.SetSignature(const aSignature: RawUTF8);
begin
  Assert(Length(aSignature) = 8);
  // set fIDCardinal to the signature returned from the database so it matches
  // the one used by the other server instance
  HexDisplayToCardinal(@aSignature[1], fIDCardinal);
end;
function TMyCustomRestServerAuthentication.RetrieveSession(
  Ctxt: TSQLRestServerURIContext): TAuthSession;
var
  Svr: TMyRestServer;
  SessionSignature: RawUTF8;
  AuthGroupId: Integer;
  MyAuthUser: TMySQLAuthUser;
  SQLAuthUser: TSQLAuthUser;
  WebServiceSession: TWebServiceSession;
begin
  // first check whether the session already exists in this instance's memory
  Result := inherited RetrieveSession(Ctxt);
  if Result = nil then
  begin
    Svr := Ctxt.Server as TMyRestServer;
    if UrlDecodeNeedParameters(Ctxt.Parameters, 'session_signature') then
    begin
      SessionSignature := Ctxt.InputUTF8['session_signature'];
      // the signature is the 8-char hexadecimal session ID
      Assert(Length(SessionSignature) = 8);
      WebServiceSession := nil;
      // otherwise look the session up in the shared database
      if Svr.GetDataAccessLayer.LookupUserBySessionSignature(
        SessionSignature, False, WebServiceSession, MyAuthUser) then
      try
        if MinutesBetween(NowUTC, WebServiceSession.LoginDateTime) > GetLoginTimeout then
        begin
          // the shared session has expired: discard it
          Svr.GetDataAccessLayer.WebServiceSessionDelete(
            WebServiceSession.WebServiceSessionId);
          MyAuthUser.Free;
          Result := nil;
        end
        else
        begin
          Svr.GetDataAccessLayer.WebServiceSessionUpdateLoginTime(
            WebServiceSession.WebServiceSessionId);
          AuthGroupId := Svr.MainFieldID(TSQLAuthGroup, 'Admin');
          // store the group ID in the published TSQLRecord property (mORMot convention)
          MyAuthUser.GroupRights := TSQLAuthGroup(AuthGroupId);
          SQLAuthUser := MyAuthUser;
          // recreate the in-memory session on this instance for subsequent calls
          Svr.SessionCreate(SQLAuthUser, Ctxt, Result);
          Result.User.GroupRights.SessionTimeout := GetLoginTimeout;
          (Result as TMyCustomAuthSession).SetSignature(SessionSignature);
        end;
      finally
        WebServiceSession.Free;
      end;
    end;
  end;
end;
Am I way off base here, or does this look like a reasonable solution? (It works as implemented.)
Thanks
Offline
You are right, such TAuthSession instances are implementation details of the transmission layer, and in practice tied to the server instance.
If you want something more global, create a global application session, with its own state - but it would be something else than TAuthSession, e.g. some shared data persisted as a TSQLRecord.
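A minimal sketch of such a shared record, persisted in the central database so that any node can read it (the class and field names below are just for illustration):

type
  // shared application-level session, stored centrally and visible to all nodes
  TSQLAppSession = class(TSQLRecord)
  private
    fUserName: RawUTF8;
    fToken: RawUTF8;
    fLastAccess: TModTime;
  published
    property UserName: RawUTF8 read fUserName write fUserName;
    property Token: RawUTF8 read fToken write fToken;
    // TModTime is filled automatically by the ORM at each write
    property LastAccess: TModTime read fLastAccess write fLastAccess;
  end;

Register it in the TSQLModel of every node, then the usual CRUD methods work against the central store.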
And I would not implement it at TAuthSession level, since it would reduce the performance a lot to access an external/centralized database.
You are voiding most of the mORMot benefits, by implementing such a solution, I'm afraid.
IMHO load-balancing at IP level is to be used only with stateless requests (e.g. return a static content, or some uncoupled information - see "stateless" in the doc).
Once you are using sessions, you become stateful.
And load-balancing does NOT help any more, whatever system it runs on: replicating the session state would void the benefit of load-balancing.
I guess you come from a background where load-balancing is a need, due to high response times or a small number of concurrent connections (e.g. a PHP server with a rendering time of half a second, like http://stackoverflow.com/questions/23283574 and its session management). A properly implemented mORMot server would have a millisecond response time, and handle thousands of concurrent connections.
So my proposal is that you should ask yourself: do I really need load balancing?
The load balancing is usually done within the mORMot server itself, which performs as fast as a proxy (e.g. nginx).
A fail-over mechanism may be implemented at application level.
That is, a node keeps its own data locally and pushes it to a global database (using e.g. real-time replication), so another node is able to take over the requests if the first server fails, or during a maintenance phase.
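As a rough sketch of such a setup, assuming the RecordVersion-based master/slave replication of the ORM (the exact method names and parameters below are an assumption to verify against your framework revision):

type
  // every replicated table needs a TRecordVersion published field,
  // monotonically increased by the ORM on each write
  TSQLInvoice = class(TSQLRecord)
  private
    fCustomer: RawUTF8;
    fAmount: Currency;
    fVersion: TRecordVersion;
  published
    property Customer: RawUTF8 read fCustomer write fCustomer;
    property Amount: Currency read fAmount write fAmount;
    property Version: TRecordVersion read fVersion write fVersion;
  end;

// on the stand-by node: catch up with the primary, then follow its changes
// in real time (the primary must publish them, e.g. over WebSockets)
procedure StartFailoverReplication(SlaveServer: TSQLRestServer;
  MasterClient: TSQLRestClientURI);
begin
  SlaveServer.RecordVersionSynchronizeSlave(TSQLInvoice, MasterClient);
  SlaveServer.RecordVersionSynchronizeSlaveStart(TSQLInvoice, MasterClient);
end;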
Online
Thank you for your perspective on this.
The problem is that we need geo-redundancy on the server, so there needs to be a server running in two separate data centers.
If you want something more global, create a global application session, with its own state - but it would be something else than TAuthSession, e.g. some shared data persisted as a TSQLRecord.
That's the path I first tried, but it became obvious that implementing my own user/session management and authentication was more difficult than simply overriding some of the default mORMot behavior.
(e.g., as mentioned here: http://synopse.info/forum/viewtopic.php?id=1474)
And I would not implement it at TAuthSession level, since it would reduce the performance a lot to access an external/centralized database.
...
IMHO load-balancing at IP level is to be used only with stateless requests (e.g. return a static content, or some uncoupled information - see "stateless" in the doc).
We are essentially creating an API (via REST URIs) for third parties to provision our system. It is built on an existing, mature MS SQL Server database with time-tested stored procedures, so we are not in a position to use mORMot as-is.
We do need to maintain state because we have our own security architecture to manage user group rights to various application functions, so I don't know how we'd ever be able to avoid database lookups when authenticating a new user-session.
We do persist those group rights in memory (loaded from a global instance at server startup, with periodic refreshes); otherwise it would definitely affect performance to look them up on every request. That's why my current implementation (as shown in my RetrieveSession method) first checks whether the session exists in memory, and only then checks the database and creates one for the next call, so I imagine performance shouldn't be affected too much.
Please let me know if I'm missing something else here that would severely impact performance.
The load balancing is usually done within the mORMot server itself, which performs as fast as a proxy (e.g. nginx).
Can you please explain this a bit more? I'm not clear on what you mean other than that mORMot performance is sufficient to remove the need for load-balanced servers.
I'd also be interested in anything more you can share (relevant links or otherwise) on scaling or load-balancing. How else would one scale a mORMot system (e.g., like scaling an ASP.NET MVC site, where all subscriber/auth information is also stored in the database)?
I suppose we could deploy server instances into different locations and just have a primary/secondary DNS in case the first one cannot be accessed, rather than try to save the session state...
In any case, I'm always open to a better/more efficient way of doing things...
Offline
See https://www.digitalocean.com/community/ … -balancing
For DNS (e.g. geographic) balancing, you would indeed have to maintain centralized storage.
IMHO you should just distinguish between mORMot sessions and your business sessions.
Let the mORMot clients create a session, possibly even with another user/group credential.
For instance, use sicClientDriven services: retrieve the current business information from a centralized session store, own the user session with a token (so you are the only one to use it), then keep all this business session information in the sicClientDriven class implementation. The sicClientDriven instance would be the cache on the server side.
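A rough sketch of that approach (the interface, its methods and the central-store lookup below are made-up placeholders; only ServiceDefine and sicClientDriven come from the framework):

type
  IBusinessSession = interface(IInvokable)
    ['{8D3F6E0A-1C45-4B7E-9A2D-5F1B0C7E4A21}']
    // loads the business session from the central store into this instance
    function Open(const UserName, Token: RawUTF8): boolean;
    function GetGroupRights: RawUTF8;
  end;

  // one instance per client (sicClientDriven): it becomes the per-session
  // cache of the business state on the server side
  TBusinessSession = class(TInterfacedObject, IBusinessSession)
  protected
    fUserName: RawUTF8;
    fRights: RawUTF8;
  public
    function Open(const UserName, Token: RawUTF8): boolean;
    function GetGroupRights: RawUTF8;
  end;

function TBusinessSession.Open(const UserName, Token: RawUTF8): boolean;
begin
  fUserName := UserName;
  // validate the token and load the rights from the centralized session store
  fRights := LoadRightsFromCentralStore(UserName, Token); // hypothetical helper
  result := fRights <> '';
end;

function TBusinessSession.GetGroupRights: RawUTF8;
begin
  result := fRights; // served from memory, no DB round-trip
end;

// at server startup:
//   Server.ServiceDefine(TBusinessSession, [IBusinessSession], sicClientDriven);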
OR
You may just set the TSQLRestServer.OnSessionCreate event to a custom method, which would retrieve the session information from the main DB (stored in your custom TAuthSession class, with all the fields you need), once the user is authenticated and the mORMot TAuthSession instance has been created.
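For the second option, a minimal sketch could look like this (the business fields and the two lookup calls are placeholders; the server has to be configured to instantiate the custom session class, as with your TMyCustomAuthSession above):

type
  // custom session class holding the business data loaded from the main DB
  TMyAuthSession = class(TAuthSession)
  protected
    fCompanyID: Integer;
    fRights: RawUTF8;
  public
    property CompanyID: Integer read fCompanyID write fCompanyID;
    property Rights: RawUTF8 read fRights write fRights;
  end;

procedure TMyRestServer.DoSessionCreate(Sender: TSQLRestServer;
  Session: TAuthSession; Ctxt: TSQLRestServerURIContext);
var
  s: TMyAuthSession;
begin
  // called once per session, after the user has been authenticated and the
  // mORMot TAuthSession instance has been created
  if Session is TMyAuthSession then
  begin
    s := TMyAuthSession(Session);
    s.CompanyID := LookupCompanyID(Session.User.LogonName); // hypothetical DAL call
    s.Rights := LookupRights(Session.User.LogonName);       // hypothetical DAL call
  end;
end;

// at server setup:
//   OnSessionCreate := DoSessionCreate;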
Online
First call --> Server A
Second call --> Server B (requires root/auth login to get new session signature)
Third call --> Server A (session signature was changed above, so call is now rejected from server A)
---------------------------
This is why Redis became popular: a distributed session store.
Offline
Or you can use a JWT to store the session information, and validate it on any Server.
For additional safety, you could use ES256 asymmetric algorithm and store the private keys in a dedicated Auth server.
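Just to illustrate the idea (using the simpler HS256 symmetric flavour here; the TJWTHS256 constructor parameters are an assumption to check against SynCrypto, and TJWTES256 would be used the same way with the ECC private key kept on the Auth server):

uses
  SynCommons, SynCrypto;

procedure JwtExample;
var
  jwt: TJWTHS256;
  token: RawUTF8;
  content: TJWTContent;
begin
  // the same shared secret (or, with ES256, only the public key) on every node
  jwt := TJWTHS256.Create('shared-secret', 1000,
    [jrcIssuer, jrcExpirationTime], [], 60); // 60 minutes expiration
  try
    // issued once at login, then sent by the client with every request
    token := jwt.Compute(['role', 'admin'], 'myissuer');
    // any node can validate it without a shared session store
    jwt.Verify(token, content);
    if content.result = jwtValid then
      writeln('valid token for role ', content.data.U['role']);
  finally
    jwt.Free;
  end;
end;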
Online
I use an nginx load balancer, which is able to do sticky sessions based on an IP hash (in the commercial version other algorithms are also available).
The config looks like this:
upstream myappgroup {
    server host-1 max_fails=2 fail_timeout=30;
    server host-2 max_fails=2 fail_timeout=30;
    ip_hash;
    keepalive 32;
}
server {
    location / {
        proxy_pass http://myappgroup;
    }
}
It has been working in production for many years.
Another option is a custom OnSessionCreate that stores the serialized TAuthSession class in Redis. I do not use a database for this because, under heavy load, intensive access to a single table with many delete/insert statements can cause problems (big redo logs, locking, etc.).
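As a sketch of that idea (fRedis and its SetValue method stand for whatever Redis client is used, they are not part of mORMot; ObjectToJSON is the framework serializer):

const
  SESSION_TTL_SEC = 30 * 60; // hypothetical session lifetime in Redis

procedure TMyRestServer.StoreSessionInRedis(Sender: TSQLRestServer;
  Session: TAuthSession; Ctxt: TSQLRestServerURIContext);
var
  json: RawUTF8;
begin
  // serialize the freshly created session so that another node can rebuild it
  json := ObjectToJSON(Session);
  // the key is derived from the numeric session ID, stored with a TTL
  fRedis.SetValue('session:' + UInt32ToUtf8(Session.IDCardinal), json, SESSION_TTL_SEC);
end;

// at server setup:
//   OnSessionCreate := StoreSessionInRedis;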
Offline