#1 2021-03-16 17:08:15

mpv
Member
From: Ukraine
Registered: 2012-03-24
Posts: 1,564

Propose - limiting of the result size for ISQLDBRow.FetchAll

Today I got a very unexpected error on a server: a developer made a mistake, and a DB query returned all rows from a table (millions, in my case) instead of a limited count.
After trying to serialize such a result into JSON, the server crashed (out of memory, in my case).

I propose adding an optional size-limit parameter to ISQLDBRow.FetchAllAsJSON, and throwing if the size is exceeded - see PR #387

Limiting by row count is a bad idea, because we can get a huge JSON even for a small number of rows (CLOB/BLOB/varchar(max) columns).
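The actual implementation lives in the PR, but the idea above (cap the accumulated serialized size per row, not the row count) can be illustrated with a minimal Python sketch. All names here are hypothetical, not mORMot's API:

```python
import json

class ResultSizeExceeded(Exception):
    """Raised when the serialized result grows past the configured cap."""

def fetch_all_as_json(rows, max_bytes=50 * 1024 * 1024):
    """Serialize rows into one JSON array, aborting as soon as the
    output would exceed max_bytes. Checking the accumulated byte size
    (rather than the row count) also catches wide CLOB/BLOB rows."""
    parts = []
    size = 2  # account for the surrounding '[' and ']'
    for i, row in enumerate(rows):
        chunk = json.dumps(row)
        size += len(chunk) + (1 if i else 0)  # +1 for the comma separator
        if size > max_bytes:
            raise ResultSizeExceeded(
                f"result exceeds {max_bytes} bytes after {i + 1} rows")
        parts.append(chunk)
    return "[" + ",".join(parts) + "]"
```

The key point is that the check runs inside the fetch loop, so a runaway query fails fast with a clear error instead of exhausting memory during serialization.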


#2 2021-03-17 07:54:50

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,376

Re: Propose - limiting of the result size for ISQLDBRow.FetchAll

Please see my implementation in the PR.

Thanks for the feedback!


#3 2021-03-17 08:18:39

mpv

Re: Propose - limiting of the result size for ISQLDBRow.FetchAll

Many thanks!
I will set this property to ~50 MB by default for my use case, mostly to prevent a JS engine crash on JSON.parse. It does not matter whether it is server-side SpiderMonkey / QuickJS or browser-side - all JS engines have memory limits. In SyNode we limit SpiderMonkey to 512 MB by default; the NodeJS default limit is 700 MB. Increasing such a JS engine memory limit leads to longer GC cycles.


Powered by FluxBB