Hi,
I just found out that, when checking mORMot's JWT capabilities on the jwt.io site under Libraries,
only the mORMot 1 implementation is listed, with all its limitations regarding supported algorithms.
Maybe you could add the mORMot 2 JWT implementation there, or replace the old entry, and update the GitHub repo link as well.
Here's the link:
Best regards
Bastian
As this topic is now a bit older, I thought I'd bring it up again.
Is there an update on the current Swagger support in the mORMot 2 framework, or an example showing how to integrate it?
Thanks for your valuable feedback.
I have my existing business logic separated from the mORMot framework (a lot of legacy code that needs a huge amount of refactoring and rewriting).
I agree, when refactoring the code it would absolutely make sense to split the workflows into separate interfaces to decrease complexity and simplify testing.
It would make sense to follow the steps you've described when cleaning up and extending the existing code and finally attaching it to the mORMot framework.
My intention was to use mORMot only for the SOA calls (mistakes and coffee are handmade).
Can you give me some further details about the resolver pattern you mentioned?
I tried to look it up, but couldn't find the information I needed on what problem it solves and in which way.
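From the docs and tests I've read so far, I guess the resolver/injection idea looks something like this - an untested sketch, assuming mORMot 2's TInjectableObject and CreateInjected from mormot.core.interfaces; ICalculator and TInvoiceLogic are made-up names:

uses
  mormot.core.base,
  mormot.core.interfaces;

type
  // hypothetical dependency of the business logic
  ICalculator = interface(IInvokable)
    ['{2A4E9F21-7C3B-4D8A-9E5F-1B6C8D0A3F47}']
    function Add(n1, n2: integer): integer;
  end;

  // business logic which only knows the interface, not the implementation
  TInvoiceLogic = class(TInjectableObject)
  private
    fCalculator: ICalculator;
  published
    // published interface properties are resolved by the injection mechanism
    property Calculator: ICalculator read fCalculator;
  end;

procedure TestWithStub;
var
  logic: TInvoiceLogic;
begin
  // the interface needs to be known to the factory first
  TInterfaceFactory.RegisterInterfaces([TypeInfo(ICalculator)]);
  // in a unit test, the dependency can be replaced by an auto-generated stub
  logic := TInvoiceLogic.CreateInjected(
    [ICalculator], // create a TInterfaceStub for this GUID
    [],            // no additional resolvers
    []);           // no pre-resolved instances
  try
    // logic.Calculator now points at the stub, not a real implementation
  finally
    logic.Free;
  end;
end;

Is that roughly the idea - the class never creates its dependencies itself, but gets them resolved from the outside (real service on the server, stub/mock in tests)?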
Thanks
Bastian
Hi,
I want/have to deploy a server app that runs on different servers in a company network (distributed all over the world).
I think about implementing it using the interface based approach.
I would have one interface that is implemented by the server app but runs on several servers.
How can I implement connectivity from my local calling app to an undefined number of servers which all publish the same interface?
In addition, I need to check all of them, one at a time.
It could also be that not all servers are running all the time, or that new servers are added later on.
The procedure would be more or less like this:
- connect to server 1
- check if the desired information is available
- disconnect
- connect to server 2
- check if the desired information is available
- disconnect
- ...
- once the decision is made, download the data from server 1 or server 2
The second use case would be to upload data to a single server or to a selectable set of servers.
I'm thinking of one client instance with one "Service" field to call the interface.
I could then look up the configured servers in my config file and poll all servers in a loop.
For each server, I would need to (see the sketch after this list):
- create the client object
- check whether the service is available/published by this server
- make the call
- free the service
- free the client
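Expressed in (untested) code, based on the interface-based service examples - TRestHttpClient, ServiceDefine and Services.Resolve are how I understood the mORMot 2 API; IDataLookup, the port and the root name are made up:

uses
  mormot.core.base,
  mormot.core.interfaces,
  mormot.orm.core,
  mormot.rest.http.client;

type
  // hypothetical contract implemented by every server
  IDataLookup = interface(IInvokable)
    ['{8F1D4C2A-3B5E-4F6A-8C9D-0E1F2A3B4C5D}']
    function HasData(const aKey: RawUtf8): boolean;
    function GetData(const aKey: RawUtf8): RawByteString;
  end;

function FindAndDownload(const aServers: array of RawUtf8;
  const aKey: RawUtf8): RawByteString;
var
  i: PtrInt;
  model: TOrmModel;
  client: TRestHttpClient;
  lookup: IDataLookup;
begin
  result := '';
  for i := 0 to high(aServers) do
  begin
    lookup := nil;
    // one short-lived client per configured server
    model := TOrmModel.Create([], 'root');
    client := TRestHttpClient.Create(aServers[i], '8080', model);
    try
      try
        client.ServiceDefine([IDataLookup], sicShared);
        if client.Services.Resolve(IDataLookup, lookup) then
          if lookup.HasData(aKey) then
          begin
            result := lookup.GetData(aKey); // decision made: download
            exit;
          end;
      except
        // server not running or service not published: try the next one
      end;
    finally
      lookup := nil; // release the service before freeing its client
      client.Free;
      model.Free;
    end;
  end;
end;

The upload use case would be the same loop, just calling an upload method on one or several of the resolved services instead of stopping after the first hit.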
What do you think?
Any help is greatly appreciated.
Best regards
Bastian
The goal I want to achieve is to split an old software monolith into separate, standalone applications.
@ab After watching your great Ekon21 slides, I'd say the desired structure is the one shown on slide 60 or 62 of your SOLID Meets SOA presentation.
Each service should be responsible for one part of the existing software, e.g.
- database handling
- import/export or conversion of external file formats
- communication with external machines via sockets, 3rd-party DLLs or pipes (CNC machines)
- code generation
Right now, everything listed above (and a lot more) is done in one executable.
This has all worked for over two decades now, but adding functionality or testing this software is becoming more and more of a pain - and if a bug is found or a new feature is implemented, the whole software has to be tested again.
The target software (package) will run on a single PC (no distribution over the network).
Scalability in terms of performance is not needed (for now) - it's not necessary to spawn several instances of a worker to handle a higher workload.
The main advantages I see in the microservice approach are:
- An easier way to implement new features/functions thanks to defined interfaces
- Significantly reduced complexity when testing the individual functions of the software
- Possibility of replacing individual services in the event of an error without having to replace the software as a whole and thus retest it
I can well imagine defining the interfaces and moving functions out into individual modules. Only the orchestration/management of the individual services by the app consuming them is not quite clear to me yet.
@tbo you wrote that it is possible to find the individual services using a session ID and integrate them accordingly. This approach sounds very promising.
I'll try searching the forums and Google to see if I can find an implementation of this approach.
@ab Alternatively, I have also considered an event-driven publish/subscribe model based on defined interfaces for managing the individual services.
Partners of ours have chosen this approach, optionally via a Redis database or an MQTT server.
If there is a mORMot-native approach to integrating several independent processes, such as the "FindRestServer" function described by @tbo,
that would be a nice and lean solution without much overhead. If I understand it correctly, all potential endpoints can be addressed centrally without explicitly sending a REST request to each separate endpoint (at least this part is mapped or handled by the mORMot framework).
It would also eliminate the need for an additional server to communicate with and manage the services.
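If I understand the pool idea correctly, several TRestServer instances (one per responsibility) could sit behind a single TRestHttpServer, so everything is reachable through one port and distinguished by the root URI. An untested sketch of that part, assuming TRestServerFullMemory and TRestHttpServer from mORMot 2 - I don't know FindRestServer's signature, so that part is not shown:

uses
  mormot.rest.memserver,    // TRestServerFullMemory
  mormot.rest.http.server;  // TRestHttpServer

var
  restImport, restCodeGen: TRestServerFullMemory;
  http: TRestHttpServer;
begin
  // one REST server per responsibility, each with its own root URI
  restImport := TRestServerFullMemory.CreateWithOwnModel([], false, 'import');
  restCodeGen := TRestServerFullMemory.CreateWithOwnModel([], false, 'codegen');
  // both are published by a single HTTP server on one port:
  //   http://localhost:8888/import/...  and  http://localhost:8888/codegen/...
  http := TRestHttpServer.Create('8888', [restImport, restCodeGen], '+', useHttpSocket);
end;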
Best regards
Bastian
Thanks for your detailed explanation.
Do you know if there's a sample project available which shows the REST server pool usage and how to register an externally available service hosted by another server running in another process?
I just started to look into the mORMot framework, so there's a lot of new stuff and many possibilities to evaluate.
Best regards
Bastian
But in this scenario, all functions would live inside one monolithic server process. As in a normal REST server implementation, I could have several endpoints/routes, but I could not replace just one part/route while keeping the other modules unchanged - I would always need to deploy a completely new server binary.
I have already started reading your threads in the German Delphi forum, but as far as I understood, all methods were called by one application (client) within one server application.
Maybe a publisher/subscriber approach would describe the scenario best: each process registers its publish and result routes with a central instance, and the GUI app only has to communicate with the server managing the pub/sub, like a Redis or MQTT server.
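As far as I've read, mORMot could even do this natively: an interface-typed parameter of a service method becomes an asynchronous callback when the connection is upgraded to WebSockets (useBidirSocket / TRestHttpClientWebsockets). Roughly what I have in mind, with made-up names and untested:

uses
  mormot.core.base,       // RawUtf8
  mormot.core.interfaces; // IInvokable

type
  // implemented by the subscriber (e.g. Guiapp.exe), invoked by the broker
  IResultSubscriber = interface(IInvokable)
    ['{5C6D7E8F-9A0B-4C1D-8E2F-3A4B5C6D7E8F}']
    procedure OnResult(const aRoute, aPayload: RawUtf8);
  end;

  // published by the central instance; each process registers its routes here
  IMessageBroker = interface(IInvokable)
    ['{1B2C3D4E-5F6A-4B7C-8D9E-0F1A2B3C4D5E}']
    procedure Publish(const aRoute, aPayload: RawUtf8);
    procedure Subscribe(const aRoute: RawUtf8;
      const aCallback: IResultSubscriber);
  end;

That would make the Redis/MQTT middleman optional, at the price of implementing the broker service ourselves.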
Best regards
Bastian
Thanks for your reply, but how can I handle this when I have a setup like this:
Myservice1.exe
Myservice2.exe
Myservice3.exe
Guiapp.exe
Each separate executable has a limited and therefore better testable responsibility.
None of the services has to rely on another service.
And now I want to call functions from all three services from within my main application (Guiapp.exe).
So basically it's more or less like calling functions from several DLLs.
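In (untested) code, I picture Guiapp.exe holding one REST client per service process, much like holding one handle per DLL - the port, the root name and IImportService below are made up, and Myservice2.exe/Myservice3.exe would follow the same pattern:

uses
  mormot.core.base,
  mormot.core.interfaces,
  mormot.orm.core,
  mormot.rest.http.client;

type
  // hypothetical contract published by Myservice1.exe
  IImportService = interface(IInvokable)
    ['{7E8F9A0B-1C2D-4E3F-8A5B-6C7D8E9F0A1B}']
    procedure ConvertFile(const aFileName: RawUtf8);
  end;

var
  importModel: TOrmModel;
  importClient: TRestHttpClient;
  import: IImportService;
begin
  importModel := TOrmModel.Create([], 'import');
  importClient := TRestHttpClient.Create('localhost', '11111', importModel);
  importClient.ServiceDefine([IImportService], sicShared);
  if importClient.Services.Resolve(IImportService, import) then
    import.ConvertFile('job.dxf'); // reads like a plain local method call
  // ... same pattern for Myservice2.exe and Myservice3.exe ...
end;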
Hi @all,
I'm currently digging into the mORMot 2 library and trying to wrap my head around it.
I have already searched the forum on how to communicate between several microservices and how to publish the microservices to the outside world.
I found that there is no need for (and no binding to) a Redis or MQTT backend.
But how are the processes managed, and how is the data sent to each microservice from my calling application?
In the included examples, I did not find anything that points me in the right direction.
The demos always have a 1:1 relation, with one client and one server, or n clients and one server.
Do I need to publish each service on a separate port on my PC, and directly connect the app that wants to fetch data from different services to each Service:Port individually?
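To make my question more concrete, this is how I currently picture one of the service executables - an untested sketch from the examples I've seen, with IMachineControl, the port and the root name being made up:

uses
  mormot.core.base,
  mormot.core.interfaces,
  mormot.rest.memserver,    // TRestServerFullMemory
  mormot.rest.http.server;  // TRestHttpServer

type
  IMachineControl = interface(IInvokable)
    ['{0D1E2F3A-4B5C-4D6E-8F7A-9B0C1D2E3F4A}']
    function Status: RawUtf8;
  end;

  TMachineControl = class(TInterfacedObject, IMachineControl)
  public
    function Status: RawUtf8;
  end;

function TMachineControl.Status: RawUtf8;
begin
  result := 'idle';
end;

var
  rest: TRestServerFullMemory;
  http: TRestHttpServer;
begin
  // each Myservice*.exe would publish its own interface on its own port/root
  rest := TRestServerFullMemory.CreateWithOwnModel([], false, 'machines');
  rest.ServiceDefine(TMachineControl, [IMachineControl], sicShared);
  http := TRestHttpServer.Create('11111', [rest], '+', useHttpSocket);
  try
    readln; // keep the console service running
  finally
    http.Free;
    rest.Free;
  end;
end.

And then the calling app would need one client per such Service:Port pair - or is there a leaner way?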
Thanks in advance,
Best regards
Bastian