Pages: 1
Hi @all,
I'm currently digging into the mORMot2 library and I'm trying to wrap my head around it.
I have already searched the forum for how to communicate between several microservices, and how to publish the microservices to the outside world.
I found that there is no need for, and no binding to, a Redis or MQTT backend.
But how are the processes managed, and how is the data sent to each microservice from my calling application?
In the included examples I did not find anything that points me in the right direction.
The demos always have a 1:1 relation, with 1 client and 1 server, or n clients and 1 server.
Do I need to publish each service on a separate port on my PC, and directly connect my App which wants to fetch data from different services to each Service:Port individually?
Thanks in advance,
Best regards
Bastian
Offline
Do I need to publish each service on a separate port on my PC, and directly connect my App which wants to fetch data from different services to each Service:Port individually?
You can register several services for one RestServer and one root name:
function TXRestServer.InitializeServices: Boolean;
begin
  Result := (ServiceDefine(TX1Service, [IX1], sicSingle) <> Nil);
  Result := Result and (ServiceDefine(TX2Service, [IX2], sicClientDriven) <> Nil);
end;
You can register several RestServers for one HttpServer and port:
FHttpServer := TRestHttpServer.Create(pmcPort, [FXRestServer, FYRestServer], '+' {DomainName}, useHttpSocket);
Depending on the root name for a RestServer, it could look like this (example: root for X-RestServer is "store" and for Y-RestServer is "admin"):
domain.com/store
domain.com/admin
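For illustration, a minimal client-side sketch of consuming one of those services. It assumes mORMot 2, reuses the IX1 interface and "store" root name from above, and makes up the host and port; the constructor parameter order follows the TRestHttpClient call quoted in this thread:

```pascal
uses
  mormot.core.base,
  mormot.orm.core,
  mormot.rest.http.client;

var
  Client: TRestHttpClient;
  X1: IX1;
begin
  // connect to the HttpServer publishing the X-RestServer under /store
  // ('localhost' and '8080' are assumptions)
  Client := TRestHttpClient.Create('localhost',
    TOrmModel.Create([], 'store'), '8080');
  try
    // register the interface client-side with the same instance mode
    Client.ServiceDefine([IX1], sicSingle);
    if Client.Services.Resolve(IX1, X1) then
      ; // call IX1 methods on X1 here
  finally
    Client.Free;
  end;
end;
```

The point is that both IX1 and IX2 are reachable through this single client, since they were registered on the same RestServer and root name.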
With best regards
Thomas
Last edited by tbo (2024-01-22 16:50:43)
Offline
Thanks for your reply, but how can I handle this when I have a setup like this:
Myservice1.exe
Myservice2.exe
Myservice3.exe
Guiapp.exe
Each separate executable has a limited, and therefore more easily testable, responsibility.
None of the services relies on another service.
And now I want to call functions from all three services from within my main application (guiapp.exe).
So basically more or less as if calling functions from several DLLs.
Offline
Thanks for your reply, but how can I handle this when I have a setup like this:
Myservice1.exe
Myservice2.exe
Myservice3.exe
Guiapp.exe
If you want to run multiple HttpServers (I see no advantage in this), you can instantiate the clients as follows:
TRestHttpClient.Create(ServerURI, TOrmModel.Create([], ROOT_NAME_SERVER), ServerPort, {Https=} (ServerPort = 443));
For an introduction to the topic, you can read the articles Introduction to method-based services and Introduction to interface-based services in the Delphi-Praxis forum.
With best regards
Thomas
Offline
But in this scenario all functions would be inside a monolithic server process. As in a normal REST server implementation, I could have several endpoints/routes, but I could not replace only one part/route while keeping the other modules unchanged. I would always need to deploy a completely new server binary.
I already started to read your threads in the German Delphi forum, but as far as I understood, all methods were called by one application (client) within one server application.
Maybe an approach like publisher/subscriber would describe the scenario: each process registers its publish and result routes in a central instance, and the guiapp only has to communicate with the server managing the pub/sub, like a Redis or MQTT server.
Best regards
Bastian
Last edited by Basti-Fantasti (2024-01-22 19:29:42)
Offline
But in this scenario all functions would be inside a monolithic server process. As in a normal REST server implementation, I could have several endpoints/routes, but I could not replace only one part/route while keeping the other modules unchanged. I would always need to deploy a completely new server binary.
No, you can specify the server URI and port and set up several REST clients as required.
Generally: how you organize your data is up to you. You can have one or more databases for all customers, or one or more databases for each customer. The interfaces only do your routing and the necessary administration. What happens in the background afterwards is up to you. Example: access via a pool of servers would be:
function TCustomServiceObject.GetReportRestOrm: IRestOrm;
begin
  with TAdminRestServer(Server) do
    Result := RestServerPool.FindReportRestServer(
      GetReportRestServerID(ServiceRunningContext.Request.Session)).Orm;
end;

function TReportService.UpdateSource(const pmcRowID: TID; const pmcSource: RawBlob): Boolean;
var
  orm: IRestOrm;
begin
  orm := GetReportRestOrm;
  if orm <> Nil then
    Result := orm.UpdateBlob(TOrmReport, pmcRowID, 'Source', pmcSource)
  else
    Result := False;
end;
Here you fetch the corresponding RestServer via a SessionID. The data can come from wherever you want. It can also be another service, created by you or by an outside service provider.
PS: My examples are simple to get started, but can also be easily expanded.
With best regards
Thomas
Last edited by tbo (2024-01-22 20:11:48)
Offline
Thanks for your detailed explanation.
Do you know if there's a sample project available which shows the REST server pool usage, and how to register an externally available service running in another process?
I just started to look into the mORMot framework, so there is a lot of new stuff and many possibilities to evaluate.
Best regards
Bastian
Last edited by Basti-Fantasti (2024-01-22 20:21:30)
Offline
Do you know if there's a sample project available which shows the rest server pool usage ...
Sorry, no. I have already written about this, but can't find the thread (I post in several forums). Perhaps you will find some inspiration in this thread. Another reader may have links at hand. The easiest way is to start from an example and incorporate the various techniques described. The help is very detailed.
With best regards
Thomas
Offline
Microservices with several servers usually use a reverse proxy server:

client N -> proxy(nginx) -> Myservice1
                         -> Myservice2
                         -> Myservice3
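As a sketch, such a fan-out could be configured in nginx like this; the ports and path prefixes are made up, and each Myservice*.exe would listen on its own loopback port:

```nginx
server {
    listen 80;

    # route by path prefix to the standalone service processes
    location /service1/ { proxy_pass http://127.0.0.1:8081/; }
    location /service2/ { proxy_pass http://127.0.0.1:8082/; }
    location /service3/ { proxy_pass http://127.0.0.1:8083/; }
}
```

The GUI application then talks to a single host and port, and the proxy dispatches to the right process.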
Last edited by ttomas (2024-01-22 22:15:55)
Offline
What is your definition and use of MicroServices?
Because what you write about seems a bit technical, i.e. how MicroServices are usually (and wrongly) implemented, but not what a MicroService should do.
MicroServices implemented around a main database are usually wrong. Over an event bus (Redis or another MQ) it makes more sense, but it is still not best practice.
Back to the basics:
https://www.martinfowler.com/articles/m … vices.html
https://blog.synopse.info/?post/2014/10 … SOA-mORMot
https://www.slideshare.net/ArnaudBouche … -meets-soa
https://www.slideshare.net/ArnaudBouche … ven-design
A few years ago I built a solution using mORMot between more than 500 (cheap) servers, with no problem.
It ran for several years, until the startup company sadly closed. But from the technical point of view it worked fine, able to manage up to one PB of data (each server had a 2 TB capacity).
There was a main mORMot server as frontend, then 500 servers communicating over WebSockets with the main server and with their peers, using interface-based services or HTTPS.
The message queues were part of the code, in plain Pascal, with local SQLite3 storage. No need for an external Redis instance, which quickly becomes a bottleneck.
So what are your actual needs?
Offline
The goal I want to achieve is to split up an old monolith of software into separate and standalone applications.
@ab After watching your great Ekon21 slides, the desired structure should be like shown on slide 60 or 62 of your SOLID Meets SOA presentation.
Each service should be responsible for one part of the existing software, e.g.
- database handling
- import/export or conversion of external file formats
- communication with external machines via sockets, 3rd-party DLLs or pipes (CNC machines)
- code generation
Right now, all of the above (and a lot more) is done in one executable.
This has all worked (for over two decades by now), but adding functionality or testing this software is becoming more and more of a pain; and if a bug is found or a new feature is implemented, the whole software has to be tested again.
The target software (package) will run on a single PC (no distribution over the network).
Scalability in terms of performance is not needed (for now): it's not necessary to spawn several instances of a worker to handle a higher workload.
The main advantages I see in the microservice approach are
- Easier way to implement new features/functions due to defined interfaces
- Significantly reduced complexity when testing the individual functions of the software
- Possibility of replacing individual services in the event of an error without having to replace the software as a whole and thus retest it
I can well imagine the definition of the interfaces and the outsourcing of functions to individual modules. Only the orchestration/management of the individual services from the app consuming all the services is not quite clear to me yet.
@tbo you wrote that it is possible to find the individual services using a session ID and integrate them accordingly. This approach sounds very promising.
I'll try searching the forums and Google to see if I can find an implementation of this approach.
@ab Alternatively, I have also considered an event-driven publish/subscribe model based on defined interfaces for managing the individual services.
Partners of ours have chosen this approach, optionally via a Redis database or an MQTT server.
If there is a mORMot-native approach to integrating several independent processes, such as via the "FindRestServer" function described by @tbo, this is a nice and lean approach without much overhead. If I understand this correctly, all potential endpoints can be addressed centrally, without explicitly sending a REST request to each separate endpoint (at least this is mapped or handled by the mORMot framework).
It also eliminates the need for an additional server to communicate with and manage the services.
Best regards
Bastian
Offline
- Easier way to implement new features/functions due to defined interfaces
OK
- Significantly reduced complexity when testing the individual functions of the software
OK
- Possibility of replacing individual services in the event of an error without having to replace the software as a whole and thus retest it
From my experience, it is always easier to have the software as a whole if you can. Even if you replace a single part of the software, you still need to test it as a whole. So I don't see any real benefit here.
My guess is that if you want cleaner code, and everything on the same server PC, then you don't actually need MicroServices, just SOLID code.
I would go with using interfaces as "software seams" in the existing process.
And once the interfaces are clearly defined, you can then switch to several processes. But I don't think having several executables is really needed, nor anything more complex.
https://www.slideshare.net/ArnaudBouche … conference
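Such a "software seam" could look like this (a sketch with hypothetical names): the callers depend only on the interface, so the in-process implementation can later be swapped for a mORMot client proxy resolving the same interface from a remote process, with no change to the calling code:

```pascal
type
  // the seam: callers depend only on this interface
  ICodeGenerator = interface
    ['{4D3E2F10-9A8B-4C7D-A1E2-5F6071829304}']
    function Generate(const Source: string): string;
  end;

  // today: an in-process implementation inside the monolith
  TLocalCodeGenerator = class(TInterfacedObject, ICodeGenerator)
  public
    function Generate(const Source: string): string;
  end;

function TLocalCodeGenerator.Generate(const Source: string): string;
begin
  Result := '// generated from ' + Source; // placeholder logic
end;
```

Later, ICodeGenerator can be registered as a mORMot interface-based service and resolved over the wire instead, behind the very same seam.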
But if you really need several processes, then I would just use some convention about the communication ports on the loopback between the processes.
Each process would have its own WebSockets server on a given port, with its own interface-based services.
That is all you need: no service registration is required.
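A sketch of such a convention, assuming mORMot 2; the port constants, the IDatabaseService interface and the class names are made up for illustration:

```pascal
uses
  mormot.core.interfaces,
  mormot.orm.core,
  mormot.rest.memserver,
  mormot.rest.http.server,
  mormot.rest.http.client;

const
  // fixed loopback port per service executable
  PORT_DATABASE = '8081';
  PORT_IMPORT   = '8082';
  PORT_CODEGEN  = '8083';

// in Myservice1.exe: publish its interface on its own port,
// using a WebSockets-capable server
DbServer := TRestServerFullMemory.CreateWithOwnModel([]);
DbServer.ServiceDefine(TDatabaseService, [IDatabaseService], sicShared);
DbHttp := TRestHttpServer.Create(PORT_DATABASE, [DbServer], '+',
  useBidirSocket);

// in guiapp.exe: one client per service, connecting to the
// agreed loopback port (no registry or broker needed)
DbClient := TRestHttpClient.Create('127.0.0.1',
  TOrmModel.Create([], 'root'), PORT_DATABASE);
DbClient.ServiceDefine([IDatabaseService], sicShared);
```

The GUI creates one such client per service executable; the "service discovery" is simply the shared port constants.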
Offline