#1 2020-06-11 18:02:47

EduardAppelhans
Member
Registered: 2020-04-14
Posts: 8

mORMot DB Server Layer with ORM similar to Calvin Transaction Layer

Hello Arnaud,

TMS WEB Core features out-of-the-box support for FaunaDB.

https://www.tmssoftware.com/site/blog.a … =660&s=dev

FaunaDB's approach seems to be based on Calvin.

http://cs.yale.edu/homes/thomson/public … gmod12.pdf

"Calvin is designed to serve as a scalable transactional layer above any storage system that implements a basic CRUD interface (create/insert, read, update, and delete)."


Can the mORMot DB server layer with its ORM be regarded as similar to the Calvin transaction layer?

regards

Eduard


#2 2020-06-11 19:21:01

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,655

Re: mORMot DB Server Layer with ORM similar to Calvin Transaction Layer

When you look at FaunaDB's pricing, it sounds as if their main target is remote databases of a few GB.
This is exactly what a SQLite3 engine can handle with no problem.
So you are perfectly right: mORMot ORM + SQLite3 is a perfect fit for such solutions. :)
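
As a minimal sketch of this fit (mORMot 1.18 units; the TSQLUser class and the data file name are just made up for illustration), a small ORM server persisting to a local SQLite3 file could look like:

program OrmSqlite3Sketch;

uses
  SynCommons, mORMot, mORMotSQLite3, SynSQLite3Static;

type
  // hypothetical table: one TSQLRecord class = one SQLite3 table
  TSQLUser = class(TSQLRecord)
  private
    fName: RawUTF8;
  published
    property Name: RawUTF8 read fName write fName;
  end;

var
  Model: TSQLModel;
  Server: TSQLRestServerDB;
  User: TSQLUser;
begin
  Model := TSQLModel.Create([TSQLUser]);
  try
    // the ORM owns its storage as a single local SQLite3 file
    Server := TSQLRestServerDB.Create(Model, 'data.db3');
    try
      Server.CreateMissingTables; // generate the SQL schema from the class
      User := TSQLUser.Create;
      try
        User.Name := 'Eduard';
        Server.Add(User, true); // one CRUD call = one ACID SQLite3 write
      finally
        User.Free;
      end;
    finally
      Server.Free;
    end;
  finally
    Model.Free;
  end;
end.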

IMHO, a huge DB is a very specific need. In most cases, you should not have a huge DB.
Having a big database does not mean you have a BigData solution.
BigData is a business of its own, e.g. when you collect logs from servers or (window) manufacturing machines, or when you collect video streams and then apply AI to them. In such cases, I don't suppose you would use FaunaDB: it would be far too pricey, and it was not really meant for parallel processing.
I guess FaunaDB's scaling ability is more of a marketing argument. If you "may" someday need a huge DB, FaunaDB "may" help you thanks to its horizontal scaling abilities, whereas with the regular vertical scaling of a SQL DB like Oracle or MSSQL, you won't scale as easily.

But I don't understand why someone would really need transactions at horizontal scale - unless your DB schema is broken and you use a pure relational model, whereas BigData is not meant for transactions... it is meant for big data.
Transactions mean atomic writing/updating of information. With big data, you just keep the data: you don't rewrite it, you only append the new incoming data, then you process it into reduced datasets to extract meaningful information.
For instance, if you want to anonymize some BigData content, you won't run an "UPDATE user SET name=hash(name)" statement; you will create a new dataset with only the data you need for a given task. And you won't need a transaction for this.
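
As a hedged sketch of that "new reduced dataset" idea (the table and column names are hypothetical), using the direct SQLite3 access of the TSQLRestServerDB instance from the first sketch, you would copy only the fields a task needs into a new table instead of updating rows in place:

// Server: TSQLRestServerDB, as in the first sketch above
// append once, read many: extract a task-specific projection
// instead of mutating the original rows
Server.DB.Execute(
  'CREATE TABLE UserForStats AS SELECT Country, YearOfBirth FROM User');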
If you really need some atomic writes, use the DDD notion of an Aggregate, which consists in putting all the data within the transaction boundaries into a single object, and updating it atomically - e.g. on MongoDB a document is always written atomically, with no explicit transaction needed.
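
With the mORMot ORM, a hedged sketch of such an aggregate (the TSQLOrder class and its fields are invented for illustration) stores the whole object, lines included, in one record, so a single Add or Update is one atomic write:

program AggregateSketch;

uses
  Variants, SynCommons, mORMot;

type
  // hypothetical DDD aggregate: the whole order (header + lines) lives
  // in one record, so one ORM Add/Update is one atomic write
  TSQLOrder = class(TSQLRecord)
  private
    fCustomer: RawUTF8;
    fLines: variant; // TDocVariant array, persisted as JSON text
  published
    property Customer: RawUTF8 read fCustomer write fCustomer;
    property Lines: variant read fLines write fLines;
  end;

var
  Order: TSQLOrder;
begin
  Order := TSQLOrder.Create;
  try
    Order.Customer := 'ACME';
    Order.Lines := _Arr([_Obj(['part','X1','qty',2]),
                         _Obj(['part','X2','qty',1])]);
    // a Server.Add(Order, true) call would then persist the whole
    // aggregate in a single atomic INSERT
  finally
    Order.Free;
  end;
end.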

In practice, if a piece of software isn't strictly BigData-centric but has a huge DB, you may be suffering from a wrong architecture.
In the pure SQL/relational model, every piece of data is put into the same big DB, which grows... grows... grows... until one day the hardware limitations of the server are reached. And you weep. :(
So I guess the first step is to refactor this storage layer into a less centralized approach, in two steps:
1) put the logic in the code, not in the DB;
2) create uncoupled microservices, each with its own (smaller) DB (see the sketch below).
Then you would probably need the transactional model on a few microservices only (like accounting, or parts tracking), but keep the huge data in a set of per-customer or per-region services.
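
A hedged sketch of step 2 (all class, root and file names are made up): each microservice hosts its own REST server over its own small SQLite3 file, instead of sharing one central DB:

program MicroServicesSketch;

uses
  SynCommons, mORMot, mORMotSQLite3, SynSQLite3Static;

type
  // hypothetical per-service tables
  TSQLInvoice = class(TSQLRecord)
  private
    fTotal: currency;
  published
    property Total: currency read fTotal write fTotal;
  end;

  TSQLPart = class(TSQLRecord)
  private
    fSerial: RawUTF8;
  published
    property Serial: RawUTF8 read fSerial write fSerial;
  end;

var
  AccountingModel, TrackingModel: TSQLModel;
  Accounting, Tracking: TSQLRestServerDB;
begin
  // each uncoupled microservice owns its URI root and its (smaller) DB file
  AccountingModel := TSQLModel.Create([TSQLInvoice], 'accounting');
  TrackingModel := TSQLModel.Create([TSQLPart], 'tracking');
  Accounting := TSQLRestServerDB.Create(AccountingModel, 'accounting.db3');
  Tracking := TSQLRestServerDB.Create(TrackingModel, 'tracking.db3');
  try
    Accounting.CreateMissingTables;
    Tracking.CreateMissingTables;
    // each server could then be published over HTTP and evolve on its own
  finally
    Tracking.Free;
    Accounting.Free;
    TrackingModel.Free;
    AccountingModel.Free;
  end;
end.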

If I understand it correctly, FaunaDB's point is to leverage GraphQL to store some unstructured (schema-less) JSON in its cloud storage.
It targets JavaScript developers, who mostly don't structure their data, but just need to persist some JSON remotely.
FaunaDB's marketing focuses on "time to market": no need to set up a DB instance, create users and so on. Just connect and play with the JSON.
In the long term, most node.js applications are written with such a schema-less mindset. As a result, they are a nightmare to maintain. To be honest, there was a data schema, but this schema lived in the coder's brain for a few days/weeks, to fit a particular purpose. When you change the developer, or change the purpose, you need to reverse-engineer the code and try to fit your new goal. Bad design and bad practice, for sure.

In mORMot, we don't like this pure schema-less approach. We like the data to have a structure, even if it is not relational. The mORMot ORM leverages object fields to give the storage a structure for the long term, and uses TDocVariant to store unstructured data when needed - see the sketch below.
:D
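
A minimal TDocVariant sketch (the JSON content is just an example), showing schema-less data handled as a variant with late-binding access and JSON serialization:

program DocVariantSketch;

{$APPTYPE CONSOLE}

uses
  Variants, SynCommons;

var
  doc: variant;
  name: string;
begin
  // parse arbitrary JSON into a TDocVariant document
  doc := _Json('{"name":"Eduard","tags":["orm","sqlite3"]}');
  name := doc.name;               // late-binding read -> 'Eduard'
  writeln(name);
  doc.posts := 8;                 // add a new field on the fly
  writeln(VariantSaveJSON(doc));  // serialize back to JSON
end.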

(this has become a long post... perhaps worth a blog entry)

