Fix decimal precision (decimal.InvalidOperation decimal.DivisionImpossible error) #207
Problem
Error message (this is from target-snowflake, but the problem is in the shared code):
The problem is that Singer's official tap-postgres uses `minimum`, `maximum`, and `multipleOf` to effectively report the scale of the column. For example,
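The original example schema is not reproduced here, but a hypothetical schema of that shape might look like this (the values are illustrative, not taken from the PR):

```python
# Illustrative only: a made-up property schema in the shape tap-postgres
# emits for a numeric column. `minimum`/`maximum` bound the value and
# `multipleOf` effectively carries the column's scale.
column_schema = {
    "type": ["null", "number"],
    "minimum": -1e38,
    "maximum": 1e38,
    "multipleOf": 1e-38,
}
```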
This breaks JSON schema validation: when validating `multipleOf`, it tries to compute `Decimal('0.000913808181253534') % Decimal('1E-38')`, and the default decimal precision in Python is too small (28, I believe).

Solution
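The failure and the effect of widening the precision can be reproduced in isolation (the numbers are the ones from the `multipleOf` check above; `localcontext` is used so the global context is left untouched):

```python
from decimal import Decimal, InvalidOperation, localcontext

value = Decimal("0.000913808181253534")
step = Decimal("1E-38")

# With the default precision of 28, the integer quotient value / step needs
# about 35 digits, so `%` raises InvalidOperation (DivisionImpossible).
with localcontext() as ctx:
    ctx.prec = 28
    try:
        value % step
    except InvalidOperation as exc:
        print("prec=28 raises:", exc)

# With enough precision, the same check succeeds and the remainder is zero,
# i.e. the value really is a multiple of 1E-38.
with localcontext() as ctx:
    ctx.prec = 40
    print("prec=40:", value % step == 0)  # → prec=40: True
```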
This is a well-known problem that has bitten several targets, and there are several solutions. One is simply to set the precision to something arbitrarily high, like 40. Another, which pipelinewise-target-postgres takes, is to simply disallow precision higher than some threshold.

Here, I ported a solution I wrote for meltano/target-postgres, which sets Python's decimal precision to be as large as it needs to be to match the schema. This solution was later ported to meltano/target-snowflake. I'm open to other solutions, though. I would request that any solution also be ported to datamill-co/target-snowflake.
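A minimal sketch of the schema-driven approach (the function names and the flat walk over `properties` are assumptions for illustration, not the actual meltano/target-postgres code):

```python
import decimal
from decimal import Decimal

def required_precision(maximum, multiple_of) -> int:
    """Digits needed left of the decimal point for the largest allowed
    value, plus digits needed right of it to represent the step exactly."""
    integer_digits = len(str(int(abs(Decimal(str(maximum))))))
    scale = -Decimal(str(multiple_of)).as_tuple().exponent
    return integer_digits + scale

def widen_decimal_precision(schema: dict) -> None:
    """Raise (never lower) the global decimal precision so that every
    `multipleOf` check in the schema can be computed without raising
    decimal.InvalidOperation (DivisionImpossible)."""
    ctx = decimal.getcontext()
    for prop in schema.get("properties", {}).values():
        if "multipleOf" in prop:
            needed = required_precision(prop.get("maximum", 1), prop["multipleOf"])
            if ctx.prec < needed:
                ctx.prec = needed
```

Called once per incoming `SCHEMA` message, before validation, this keeps the precision exactly as large as the schemas demand instead of picking an arbitrary constant.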