Hello everyone, I'm here to raise an old architectural question of mine, and I would like your opinion.
I use a microservices architecture for my applications.
Some time ago I opted to use mORMot massively in my backend projects. At the time, influenced by another programmer, we chose to build multi-tenant applications with a single-tenant database per tenant.
In mORMot I implement it as follows: each of my tenants is a root of my interface server:
server/tenant1
server/tenant2
server/tenant3
This architecture works very well, but I started to question it when I needed to make one microservice communicate with another.
Let me explain: some of my services need to create a new tenant at runtime. Since I use one database per service, this becomes a very complicated task. I'm using mORMot's callback structure to communicate between services.
This led me to wonder whether a multi-tenant application with a single multi-tenant database would not be more suitable within the mORMot infrastructure. Does anyone have a formed opinion on this?
It is difficult to understand your exact question.
The usual way is to have a front-end service which redirects to the proper service with the proper DB.
Each service instance can handle several DBs; you could then move a DB from one service instance to another, e.g. when the data grows, or to bring the service instance geographically closer to the consumer.
In practice, what I would do is:
1. The client connects to a redirection service with its organisation ID, which returns the URI of the server instance to connect to.
2. The client connects to its proper instance and works as usual with its data.
The idea is to put the URI in a cache and bypass step 1 in normal use. Only if the service rejects the client may its data have moved, in which case the client calls step 1 again to get a new/fixed URI.
This would also allow load balancing, and falling back to another server in case of problems.
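The redirection step could be sketched as a small mORMot interface-based service. This is only an illustration of the scheme above: the IOrgLocator name, its method, and the GUID are assumptions, not part of mORMot.

```pascal
type
  // Hypothetical redirection contract (step 1 above): given an
  // organisation ID, return the URI of the server instance that
  // currently hosts this organisation's database.
  IOrgLocator = interface(IInvokable)
    ['{8D2A9F10-6C4B-4E8D-A1B3-5F7E9C0D2A46}']
    function GetServerURI(aOrgID: Int64): RawUTF8;
  end;
```

The client would cache the returned URI and only call GetServerURI again when a server rejects it, to pick up a database that has moved.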
Hi Ab, thanks for the reply. My idea was as follows: each of my services is a different server, and within each service I have several tenants using that service. This architectural pattern is well known (a multi-tenant app with a single-tenant database per tenant).
Inside mORMot I can easily separate this by TSQLRestServerDB instance, where each of my tenants is a specific root.
The result is that each of my services (when I say service, I am referring to a business object) can have several databases linked to it.
I will explain using mORMot's own classes to make it easier to understand:
TSQLHttpServer(this represents my business service)--->TSQLRestServerDB1,TSQLRestServerDB2,TSQLRestServerDB3 ....(this represents the tenants' databases)
on another server
TSQLHttpServer(this represents my business service)--->TSQLRestServerDB1,TSQLRestServerDB2,TSQLRestServerDB3 ....(this represents the tenants' databases)
Do you see how I solve tenant isolation within mORMot's architecture?
I have a common service that does what you suggested: it returns the right root on the right server. However, when a new tenant starts using a service it was not yet part of on a given server (for example, when hiring one of our products), I need to instantiate a suitable TSQLRestServerDB for that tenant at runtime, otherwise the tenant does not even exist in the other service.
So my earlier question is: am I making this architecture unnecessarily complicated?
If I handled tenant isolation in the model instead, would it not be easier to develop, and to maintain later?
In this way:
TSQLHttpServer(this represents my business service)--->TSQLRestServerDB(here would have all tenants isolated by the model)
Don't expose the TSQLRestServerDB directly to the HTTP server.
Use a TSQLRestServerFullMemory with no tables for the HTTP server, then expose some services as interfaces or methods.
Then use a JWT, a service parameter, or mORMot authentication to identify the proper user, and get the corresponding DB.
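As a sketch of that pattern, a service method on the TSQLRestServerFullMemory gateway could resolve the caller's tenant database before touching the ORM. DatabaseForUser, TGatewayService and TSQLCustomer are hypothetical names used for illustration; ServiceContext is the per-thread context mORMot exposes to service implementations.

```pascal
function TGatewayService.GetCustomerName(aID: TID): RawUTF8;
var
  db: TSQLRestServerDB;
  cust: TSQLCustomer;
begin
  // DatabaseForUser is a hypothetical helper: it would inspect the
  // current session or JWT (e.g. via ServiceContext.Request) and
  // return the tenant's own TSQLRestServerDB instance.
  db := DatabaseForUser(ServiceContext.Request);
  // plain internal ORM access follows: no client connection involved
  cust := TSQLCustomer.Create(db, aID);
  try
    result := cust.Name;
  finally
    cust.Free;
  end;
end;
```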
Hi Ab, I didn't understand your statement; it went a little outside the context of my question. But about the suggestion: do you mean that I should have a TSQLRestServerFullMemory acting as a gateway that chooses the right database? At some point I will need to communicate with the TSQLRestServerDB, would I do that with a client connection?
But at some point I will need to communicate with the TSQLRestServerDB, would I do that with a client connection?
You can use the TSQLRestServerDB instance directly from the TSQLRestServerFullMemory services/methods implementations.
I'm also using such an approach; it works perfectly, and there is no need to worry about DB access since it is internal.
Hello Vitaly, let me see if I understand. Are you suggesting that I do not expose my TSQLRestServerDB via TSQLHttpServer, but use a TSQLRestServerFullMemory that connects as a client of my TSQLRestServerDB? Is there a way to expose my REST server layer without coupling it to a TSQLHttpServer?
Nope, in fact there is no need for any client to use TSQLRestServerDB. It has everything: ORM, batches; all (or almost all) of it is thread-safe. You can just create a TSQLRestServerDB instance in your TSQLRestServerFullMemory, or provide it there in different ways: through the constructor of a descendant, a property, a global variable, or something else. Every service/method of the TSQLRestServerFullMemory will then be able to use it and return the result to your REST client in the form of a DTO, plain data, or maybe even a TSQLRecord. Only the TSQLRestServerFullMemory is exposed to the HTTP server, so REST clients work only with it.
REST Client -----> TSQLRestServerFullMemory:SOA/Methods -----> TSQLRestServerDB (or many TSQLRestServerDB instances, as in your case, as far as I understood)
I think such an approach will allow adding new typical TSQLRestServerDB instances at runtime and routing users between them at the TSQLRestServerFullMemory level, which will not change regardless of the number of DBs. But again, that is as far as I understood your problem.
I think Vitaly was thinking of the following scenario. This is not a real example but shows the scheme.
Server side:
ITest = interface(IInvokable)
  ['{4A613FCE-3B0D-4582-97C5-4244B06C2006}']
  procedure GetAll(out pmoList: TSQLTestRecordObjArray);
end;

TTestService = class(TInterfacedObject, ITest)
public
  procedure GetAll(out pmoList: TSQLTestRecordObjArray);
end;

procedure TTestService.GetAll(out pmoList: TSQLTestRecordObjArray);
begin
  FDB.RetrieveListObjArray(pmoList, TSQLTestRecord, '', []);
end;

FDB := TSQLRestServerDB.Create(TSQLModel.Create([TSQLTestRecord]),
  ChangeFileExt(ParamStr(0), '.db3'), False);
FDB.CreateMissingTables;
FRest := TSQLRestServerFullMemory.CreateWithOwnModel([],
  {UserAuthentication=} True, ROOT_NAME);
FRest.CreateMissingTables;
FRest.ServiceDefine(TTestService, [ITest], sicShared);
FServer := TSQLHttpServer.Create(PORT_NAME, [FRest], 'localhost', useHttpSocket);

initialization
  TJSONSerializer.RegisterObjArrayForJSON([TypeInfo(TSQLTestRecordObjArray), TSQLTestRecord]);
  TInterfaceFactory.RegisterInterfaces([TypeInfo(ITest)]);
Client side:
Client := TSQLHttpClient.Create('localhost', PORT_NAME, Model);
Client.SetUser('User', 'synopse');
Client.ServiceDefine([ITest], sicShared);

var
  service: ITest;
  values: TSQLTestRecordObjArray;
begin
  service := Client.Service<ITest>;
  if service <> nil then
    service.GetAll(values);
With best regards
Thomas
I think Vitaly was thinking of the following scenario. This is not a real example but shows the scheme.
Yep, exactly. I was trying to express myself in words, but it seems your example explains everything better.
Hi tbo, thanks for your explanation. I understand this level of decoupling, but the situation gets complicated when I need to deal with several databases on a single server. Could the information about which database I should connect to come from the JWT? Even so, I would have to store a list of TSQLRestServerDB instances to know which database to persist to.
Is this level of decoupling for security or for architecture? If it is for security, I get the same level of decoupling simply by not exposing the CQRS services that actually use the databases. But I agree that the TSQLRecords are exposed and need to be protected by some type of authentication; in this case I am using JWT.
Last edited by fabiovip2019 (2021-03-12 19:44:37)
Another big advantage of not using a fixed root <-> TSQLRestServerDB mapping is that you can maintain a cache of the latest TSQLRestServerDB instances used.
You don't need to keep all TSQLRestServerDB instances open during the whole process. Each SQLite3 instance consumes memory, so it is better to release them when not needed.
A TSynDictionary can handle both DB lookup and deprecation, simply and efficiently; it is also thread-safe.
Another big advantage of not using a fixed root <-> TSQLRestServerDB mapping is that you can maintain a cache of the latest TSQLRestServerDB instances used.
You don't need to keep all TSQLRestServerDB instances open during the whole process. Each SQLite3 instance consumes memory, so it is better to release them when not needed.
A TSynDictionary can handle both DB lookup and deprecation, simply and efficiently; it is also thread-safe.
Sorry Arnaud, I can't figure out your last post. I have read it several times in the original and also translated into my native language with DeepL, but I didn't really understand it. Can you please describe it a bit more simply, maybe with the help of a few lines of source code? The last sentence made me lose the thread of the whole context.
With best regards
Thomas
Imagine you have a DB per Organisation ID = Int64 = TOrgID.
type
  TOrgID = Int64;
  TOrgIDDynArray = array of TOrgID;
  TSQLRestServerDBObjArray = array of TSQLRestServerDB;
...
TJSONSerializer.RegisterObjArrayForJSON(
  TypeInfo(TSQLRestServerDBObjArray), TSQLRestServerDB);
...
fDatabases := TSynDictionary.Create(TypeInfo(TOrgIDDynArray),
  TypeInfo(TSQLRestServerDBObjArray), false, 3600);
You can maintain a TSynDictionary to hold some TSQLRestServerDB instances in memory, and release them when not needed, after 1 hour (3600 seconds).
(You need to register TSQLRestServerDBObjArray on older Delphi; it is not needed on newer Delphi, or on FPC 3.2 with mORMot 2.)
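Under those definitions, a get-or-create lookup could look like the following sketch. The TMyGateway class, the GetTenantDB helper and the per-organisation file-naming scheme are assumptions for illustration only.

```pascal
// Hypothetical helper: return the cached TSQLRestServerDB for an
// organisation, creating and registering it on first use.
// FindAndCopy refreshes the entry's timeout, so frequently used
// databases stay in memory while idle ones are released after 1 hour.
function TMyGateway.GetTenantDB(aOrg: TOrgID): TSQLRestServerDB;
begin
  if not fDatabases.FindAndCopy(aOrg, result) then
  begin
    // assumed naming scheme: one SQLite3 file per organisation ID
    result := TSQLRestServerDB.Create(TSQLModel.Create([TSQLTestRecord]),
      Format('tenant%d.db3', [aOrg]), {HandleUserAuthentication=}False);
    result.CreateMissingTables;
    fDatabases.Add(aOrg, result);
  end;
end;
```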
You can maintain a TSynDictionary to hold some TSQLRestServerDB instances in memory, and release them when not needed, after 1 hour (3600 seconds).
Thank you for the explanation, Arnaud. I use TDynArrayHashed, but I hadn't noticed TSynDictionary yet. It seems I should take a closer look at it. Have a nice weekend.
With best regards
Thomas
TSynDictionary uses TDynArrayHashed internally for the keys, and TDynArray for the values.
It is thread-safe, features an optional timeout to deprecate old entries, and has a lot of low-level methods, including advanced searches and serialization.
Thanks Ab, I ended up following this same line of thought over the weekend and implemented it. My DBs were already separated by an Int64, so I had separated them via the root URI; the advantages really are many. The fact that you can cache an entire layer is fantastic. My only concern now is callbacks: before, each callback only reached the subscribers of the REST layer of its specific database, but now everyone will be subscribed to a single layer, right? Thanks to all of you, tbo, Vitaly and Ab, for the answers; they clarified many of my questions.
Hi Ab, can this proposed architecture, retrieving databases from a dictionary, be applied to the structure of mORMot's DDD layer? Isn't the cost of recreating the repositories every time too high?
I am referring to this:
aRestServer.ServiceContainer.InjectResolver(
  [TRepoUserFactory.Create(aRestServer),
   TRepoDomainEventFactory.Create(aRestServer)], True);
Would the TDDDRepositoryRestFactory builder then be invoked on each REST request?
Fábio, are you Brazilian?