Today I hit a very unexpected error on a server: a developer made a mistake and a DB query returned all rows from a table (millions in my case) instead of a limited count.
When the server tried to serialize such a result into JSON, it crashed (out of memory in my case).
I propose adding an optional size-limit parameter to ISQLDBRow.FetchAllAsJSON and throwing if the size is exceeded - see PR #387.
Limiting by row count is a bad idea, because we can get a huge JSON even for a small number of rows (CLOB/BLOB/varchar(max) columns).
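The idea can be sketched outside of Pascal as well. Below is a minimal JavaScript illustration (not mORMot code; the names `fetchAllAsJSON` and `maxJsonSize` are hypothetical): the serialized size is accumulated row by row, and an error is thrown as soon as the limit is crossed, long before the process runs out of memory.

```javascript
// Hypothetical sketch of a size-limited FetchAllAsJSON-style function.
// rowIterator: any iterable of row objects; maxJsonSize: byte budget.
function fetchAllAsJSON(rowIterator, maxJsonSize) {
  const parts = [];
  let size = 2; // account for the surrounding "[" and "]"
  for (const row of rowIterator) {
    const chunk = JSON.stringify(row);
    size += chunk.length + 1; // +1 for the separating comma
    if (size > maxJsonSize) {
      // Fail fast instead of exhausting memory on a runaway result set
      throw new Error(`result set exceeds ${maxJsonSize} bytes of JSON`);
    }
    parts.push(chunk);
  }
  return '[' + parts.join(',') + ']';
}

// A runaway query yielding unbounded rows fails after a few iterations:
function* hugeResult() {
  for (let i = 0; ; i++) yield { id: i, payload: 'x'.repeat(1024) };
}
// fetchAllAsJSON(hugeResult(), 50 * 1024 * 1024); // throws, no OOM
```

Note that this limits by serialized size, not row count, so a few rows with large CLOB/BLOB columns trip the limit just as reliably as millions of small rows.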
Many thanks!
I will set this property to ~50 MB by default for my use case, mostly to prevent a JS engine crash on JSON.parse. It does not matter whether it is server-side SpiderMonkey / QuickJS or browser-side - all JS engines have memory limits. In SyNode we limit SpiderMonkey to 512 MB by default; the NodeJS default limit is 700 MB. Increasing such a JS engine memory limit leads to longer GC cycles.