
Conversation

@himanshudube97 (Member) commented Sep 16, 2025

Summary by CodeRabbit

  • New Features

    • Charts now support an extra grouping dimension, enabling multi-series bar and line charts across both raw and aggregated data.
    • Pie charts can display per-record slices with values from the selected metric or a default count when no metric is chosen.
    • Table preview aligns with chart column selection for accurate previews.
  • Bug Fixes

    • Deduplicated axis/column selection prevents duplicate columns and ensures stable column order.
    • Improved slice naming and series labeling for clearer legends and unique identifiers.


coderabbitai bot commented Sep 16, 2025

Walkthrough

Adds logging of extra_dimension in charts API. In charts service, deduplicates selected columns, reworks column mapping, and implements consistent extra_dimension grouping across raw and aggregated chart paths, including multi-series bar/line handling and revised pie slice generation. Table preview aligns with new deduped column ordering. No API signatures changed.
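The multi-series grouping described above can be sketched roughly as follows. This is a standalone illustration, not the service's actual code: the dict rows and the `x_key`/`y_key`/`extra_key` parameter names are assumptions standing in for the payload's `x_axis`/`y_axis`/`extra_dimension` fields.

```python
def group_into_series(rows, x_key, y_key, extra_key):
    """Pivot flat rows into one series per distinct extra-dimension value."""
    grouped = {}   # extra value -> {x value -> y value}
    x_order = []   # preserve first-seen x order
    for row in rows:
        x, y, extra = row.get(x_key), row.get(y_key), row.get(extra_key)
        if x not in x_order:
            x_order.append(x)
        grouped.setdefault(extra, {})[x] = y
    series = [
        # missing x points stay None rather than being coerced to 0
        {"name": str(extra), "data": [vals.get(x) for x in x_order]}
        for extra, vals in grouped.items()
    ]
    return {"xAxisData": x_order, "series": series, "legend": [s["name"] for s in series]}
```

For example, rows keyed by state/population/gender would pivot into one series per gender, aligned on the shared state axis.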

Changes

Cohort / File(s) — Summary of Changes

  • API logging tweak — ddpui/api/charts_api.py
    Expanded a log statement in get_chart_data to include extra_dimension alongside dimension_col and metrics. No logic or control-flow changes.
  • Chart query, execution, and transform updates — ddpui/core/charts/charts_service.py
    - build_chart_query: deduplicate axis columns before projection
    - execute_chart_query: create an ordered, unique column list and mapping
    - transform_data_for_chart (raw/aggregated): add extra_dimension grouping; multi-series for bar/line; pie slices per record with adjusted naming/values
    - get_chart_data_table_preview: align column dedup and mapping with query building

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant API as Charts API
  participant Service as Charts Service
  participant DB as Data Source

  Client->>API: get_chart_data(request: chart_type, x/y, metrics, extra_dimension?)
  API->>Service: build_chart_query(params)
  activate Service
  Note right of Service: Deduplicate axis cols -> projection order
  Service->>DB: execute_chart_query(query)
  DB-->>Service: rows
  Service->>Service: transform_data_for_chart(rows, chart_type, extra_dimension?)
  alt raw or aggregated with extra_dimension
    Note right of Service: Group by extra_dimension -> multi-series
  else no extra_dimension
    Note right of Service: Single-series per chart_type
  end
  Service-->>API: chart series + x-axis + labels
  deactivate Service
  API-->>Client: response
sequenceDiagram
  autonumber
  participant Service
  participant DB

  Service->>Service: get_chart_data_table_preview(params)
  Note right of Service: Collect x/y/metrics/extra_dimension
  Service->>Service: Deduplicate columns (ordered)
  Service->>DB: query with projected unique columns
  DB-->>Service: rows
  Service-->>Service: Build column_mapping from ordered uniques
  Service-->>Service: Preview table data

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped — CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title "raw chart working" is directly related to the primary changes in this PR, which focus on fixing and enhancing raw chart behavior (notably updates in charts_service raw paths and a small logging change in charts_api). It is concise and indicates the main intent, but it is informal and could be more specific to help future readers quickly understand the exact scope of the change. Overall the title is not misleading and references a real part of the changeset.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch dashboard-fixes-ishan-him-27aug

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.




Comment @coderabbitai help to get the list of available commands and usage tips.


codecov bot commented Sep 16, 2025

Codecov Report

❌ Patch coverage is 0% with 84 lines in your changes missing coverage. Please review.
✅ Project coverage is 52.29%. Comparing base (ab57298) to head (c55e043).

Files with missing lines:
  • ddpui/core/charts/charts_service.py — patch coverage 0.00%, 84 lines missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1168      +/-   ##
==========================================
- Coverage   52.57%   52.29%   -0.28%     
==========================================
  Files          97       97              
  Lines       11358    11418      +60     
==========================================
  Hits         5971     5971              
- Misses       5387     5447      +60     

☔ View full report in Codecov by Sentry.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
ddpui/core/charts/charts_service.py (2)

1124-1144: Preview (raw): keep ordered de-dup and de-duplicate logic in one place.

Mirror the ordered logic and consider extracting a small helper to avoid drift between build_chart_query/execute_chart_query/preview.

-        columns_to_add = set()
-        column_order = []  # Preserve insertion order
-        if payload.x_axis:
-            if payload.x_axis not in columns_to_add:
-                columns_to_add.add(payload.x_axis)
-                column_order.append(payload.x_axis)
-        if payload.y_axis:
-            if payload.y_axis not in columns_to_add:
-                columns_to_add.add(payload.y_axis)
-                column_order.append(payload.y_axis)
-        if payload.extra_dimension:
-            if payload.extra_dimension not in columns_to_add:
-                columns_to_add.add(payload.extra_dimension)
-                column_order.append(payload.extra_dimension)
-
-        # Build mapping using the unique column order
-        for col_index, col_name in enumerate(column_order):
+        ordered_columns: list[str] = []
+        for col in [payload.x_axis, payload.y_axis, payload.extra_dimension]:
+            if col and col not in ordered_columns:
+                ordered_columns.append(col)
+
+        # Build mapping using the unique column order
+        for col_index, col_name in enumerate(ordered_columns):
             column_mapping.append((col_name, col_index))
             columns.append(col_name)

I can extract a helper like:

def _ordered_unique_columns(*cols: str | None) -> list[str]:
    out = []
    for c in cols:
        if c and c not in out:
            out.append(c)
    return out

and wire it in all three places. Want me to push that refactor?


557-579: Use ordered de-dup for raw columns and fix/remove the unused column_mapping

  • Replace the set+manual checks with an ordered de-dup (e.g. iterate [payload.x_axis, payload.y_axis, payload.extra_dimension] and append if not already present) in both places that build column_mapping in ddpui/core/charts/charts_service.py (around l.555 and l.1120).
  • execute_query currently ignores column_mapping — PostgresClient.execute and BigqueryClient.execute convert rows to dicts ([dict(row) for row in rows]), so the mapping is never applied. Either:
    • Implement mapping inside execute_query to convert tuple results -> dicts using column_mapping, or
    • Remove the column_mapping parameter and stop building it in execute_chart_query (also update execute_query's docstring).
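If the first option is chosen, applying the mapping inside execute_query could look roughly like this — a sketch only, since (as noted above) the real clients already return dict rows:

```python
def apply_column_mapping(rows, column_mapping):
    """Convert tuple rows to dicts using (column_name, index) pairs."""
    return [{name: row[idx] for name, idx in column_mapping} for row in rows]
```

This keeps the mapping logic in one place and makes the tuple-to-dict contract explicit.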
ddpui/api/charts_api.py (1)

516-516: Include extra_dimension in logs — good; consider DEBUG level to reduce noise.

This line is verbose for INFO (logs schema/table/metrics each request). Suggest DEBUG.

-    logger.info(
+    logger.debug(
         f"Columns - x_axis: {payload.x_axis}, y_axis: {payload.y_axis}, dimension_col: {payload.dimension_col}, extra_dimension: {payload.extra_dimension}, metrics: {payload.metrics}"
     )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ab57298 and c55e043.

📒 Files selected for processing (2)
  • ddpui/api/charts_api.py (1 hunks)
  • ddpui/core/charts/charts_service.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
ddpui/core/charts/charts_service.py (1)
ddpui/datainsights/query_builder.py (1)
  • add_column (31-34)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: checks (3.10, 6)

Comment on lines 621 to +679
    if payload.chart_type == "bar":
        if payload.computation_type == "raw":
            return {
                "xAxisData": [
                    convert_value(safe_get_value(row, payload.x_axis, null_label))
                    for row in results
                ],
                "series": [
                    {
                        "name": payload.y_axis,
                        "data": [
                            convert_value(
                                safe_get_value(row, payload.y_axis, null_label), preserve_none=True
                            )
                            for row in results
                        ],
                    }
                ],
                "legend": [payload.y_axis],
            }
            # Handle extra dimension for grouping/stacking in raw charts
            if payload.extra_dimension:
                # Group data by extra dimension
                grouped_data = {}
                x_values = set()

                for row in results:
                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
                    y_val = convert_value(
                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
                    )
                    extra_val = convert_value(
                        safe_get_value(row, payload.extra_dimension, null_label)
                    )

                    x_values.add(x_val)

                    if extra_val not in grouped_data:
                        grouped_data[extra_val] = {}
                    grouped_data[extra_val][x_val] = y_val

                # Build series data for each extra dimension value
                x_axis_data = sorted(list(x_values))
                series_data = []
                legend_data = []

                for extra_val, data_dict in grouped_data.items():
                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
                    series_data.append({"name": str(extra_val), "data": series_values})
                    legend_data.append(str(extra_val))

                return {
                    "xAxisData": x_axis_data,
                    "series": series_data,
                    "legend": legend_data,
                }
            else:
                # Simple raw chart without extra dimension
                return {
                    "xAxisData": [
                        convert_value(safe_get_value(row, payload.x_axis, null_label))
                        for row in results
                    ],
                    "series": [
                        {
                            "name": payload.y_axis,
                            "data": [
                                convert_value(
                                    safe_get_value(row, payload.y_axis, null_label),
                                    preserve_none=True,
                                )
                                for row in results
                            ],
                        }
                    ],
                    "legend": [payload.y_axis],
                }


⚠️ Potential issue

Bug: y-axis nulls become "Unknown" strings in raw bar.

Using safe_get_value on numeric y injects the null label into series data. Use row.get(...) and convert_value(..., preserve_none=True) to keep None numeric-safe.

-                    y_val = convert_value(
-                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                    )
+                    y_val = convert_value(row.get(payload.y_axis), preserve_none=True)

Also consider retaining original x order instead of sorted() if the dataset order is meaningful.
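A toy reproduction of the issue — `safe_get_value` and `convert_value` here are simplified stand-ins, not the repo's implementations; they only mimic the behavior the comment describes:

```python
def safe_get_value(row, key, null_label):
    # stand-in: substitutes the label for missing/None values
    val = row.get(key)
    return null_label if val is None else val

def convert_value(val, preserve_none=False):
    # stand-in: passes values through, keeping None when asked
    if val is None:
        return None if preserve_none else 0
    return val

row = {"revenue": None}
# the label leaks into what should be a numeric series:
bad = convert_value(safe_get_value(row, "revenue", "Unknown"), preserve_none=True)
# fetching the raw value keeps None numeric-safe:
good = convert_value(row.get("revenue"), preserve_none=True)
```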

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    if payload.chart_type == "bar":
-        if payload.computation_type == "raw":
-            return {
-                "xAxisData": [
-                    convert_value(safe_get_value(row, payload.x_axis, null_label))
-                    for row in results
-                ],
-                "series": [
-                    {
-                        "name": payload.y_axis,
-                        "data": [
-                            convert_value(
-                                safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                            )
-                            for row in results
-                        ],
-                    }
-                ],
-                "legend": [payload.y_axis],
-            }
-            # Handle extra dimension for grouping/stacking in raw charts
-            if payload.extra_dimension:
-                # Group data by extra dimension
-                grouped_data = {}
-                x_values = set()
-                for row in results:
-                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
-                    y_val = convert_value(
-                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                    )
-                    extra_val = convert_value(
-                        safe_get_value(row, payload.extra_dimension, null_label)
-                    )
-                    x_values.add(x_val)
-                    if extra_val not in grouped_data:
-                        grouped_data[extra_val] = {}
-                    grouped_data[extra_val][x_val] = y_val
-                # Build series data for each extra dimension value
-                x_axis_data = sorted(list(x_values))
-                series_data = []
-                legend_data = []
-                for extra_val, data_dict in grouped_data.items():
-                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
-                    series_data.append({"name": str(extra_val), "data": series_values})
-                    legend_data.append(str(extra_val))
-                return {
-                    "xAxisData": x_axis_data,
-                    "series": series_data,
-                    "legend": legend_data,
-                }
-            else:
-                # Simple raw chart without extra dimension
-                return {
-                    "xAxisData": [
-                        convert_value(safe_get_value(row, payload.x_axis, null_label))
-                        for row in results
-                    ],
-                    "series": [
-                        {
-                            "name": payload.y_axis,
-                            "data": [
-                                convert_value(
-                                    safe_get_value(row, payload.y_axis, null_label),
-                                    preserve_none=True,
-                                )
-                                for row in results
-                            ],
-                        }
-                    ],
-                    "legend": [payload.y_axis],
-                }
+    if payload.chart_type == "bar":
+        if payload.computation_type == "raw":
+            # Handle extra dimension for grouping/stacking in raw charts
+            if payload.extra_dimension:
+                # Group data by extra dimension
+                grouped_data = {}
+                x_values = set()
+                for row in results:
+                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
+                    y_val = convert_value(row.get(payload.y_axis), preserve_none=True)
+                    extra_val = convert_value(
+                        safe_get_value(row, payload.extra_dimension, null_label)
+                    )
+                    x_values.add(x_val)
+                    if extra_val not in grouped_data:
+                        grouped_data[extra_val] = {}
+                    grouped_data[extra_val][x_val] = y_val
+                # Build series data for each extra dimension value
+                x_axis_data = sorted(list(x_values))
+                series_data = []
+                legend_data = []
+                for extra_val, data_dict in grouped_data.items():
+                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
+                    series_data.append({"name": str(extra_val), "data": series_values})
+                    legend_data.append(str(extra_val))
+                return {
+                    "xAxisData": x_axis_data,
+                    "series": series_data,
+                    "legend": legend_data,
+                }
+            else:
+                # Simple raw chart without extra dimension
+                return {
+                    "xAxisData": [
+                        convert_value(safe_get_value(row, payload.x_axis, null_label))
+                        for row in results
+                    ],
+                    "series": [
+                        {
+                            "name": payload.y_axis,
+                            "data": [
+                                convert_value(
+                                    safe_get_value(row, payload.y_axis, null_label),
+                                    preserve_none=True,
+                                )
+                                for row in results
+                            ],
+                        }
+                    ],
+                    "legend": [payload.y_axis],
+                }
🤖 Prompt for AI Agents
In ddpui/core/charts/charts_service.py around lines 621 to 679, fix the raw
bar-chart branch so numeric y nulls are preserved as None (not replaced by the
null label) and original x order is retained when extra_dimension is present:
when extracting y values use row.get(payload.y_axis, None) (not safe_get_value)
and pass that to convert_value(..., preserve_none=True); for x values, preserve
input order instead of calling sorted() by collecting x values into a list while
tracking a seen set (append new x when first encountered) and use that ordered
list for xAxisData and for building series with data_dict.get(x_val, 0).
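The order-preserving collection the prompt describes can be sketched as follows (assuming dict rows; this is an illustration, not the service's code):

```python
def ordered_x_values(rows, x_key):
    """Collect distinct x values in their original row order."""
    seen = set()
    ordered = []
    for row in rows:
        x = row.get(x_key)
        if x not in seen:  # append each x the first time it appears
            seen.add(x)
            ordered.append(x)
    return ordered
```

Unlike sorted(), this keeps the dataset's own ordering, which matters for time-like or pre-sorted axes.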

Comment on lines 791 to 820
     elif payload.chart_type == "pie":
         if payload.computation_type == "raw":
-            # For raw data, count occurrences
-            value_counts = {}
-            for row in results:
-                key = handle_null_value(safe_get_value(row, payload.x_axis, null_label), null_label)
-                value_counts[key] = value_counts.get(key, 0) + 1
+            # For truly raw data, each record becomes a separate pie slice
+            pie_data = []
+            for i, row in enumerate(results):
+                x_value = handle_null_value(
+                    safe_get_value(row, payload.x_axis, null_label), null_label
+                )
+
+                if payload.y_axis:
+                    # Use Y-axis value as the slice value
+                    y_value = convert_value(
+                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
+                    )
+                    slice_value = y_value if y_value is not None else 0
+                    # Create unique slice name combining X and Y values
+                    slice_name = f"{x_value} ({slice_value})"
+                else:
+                    # If no Y-axis, each record gets value of 1
+                    slice_value = 1
+                    # Create unique slice name with index to avoid duplicates
+                    slice_name = f"{x_value} #{i+1}"
+
+                pie_data.append({"value": slice_value, "name": slice_name})
+
+            series_name = payload.y_axis if payload.y_axis else f"Records by {payload.x_axis}"
             return {
-                "pieData": [{"value": count, "name": name} for name, count in value_counts.items()],
-                "seriesName": payload.x_axis,
+                "pieData": pie_data,
+                "seriesName": series_name,
             }


⚠️ Potential issue

Bug: pie slice values can become non-numeric "Unknown".

For raw pie with y_axis, safe_get_value converts None to a string label, breaking ECharts numeric expectations. Fetch raw and preserve None.

-                    y_value = convert_value(
-                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                    )
+                    y_value = convert_value(row.get(payload.y_axis), preserve_none=True)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In ddpui/core/charts/charts_service.py around lines 791 to 820, the raw pie
branch currently converts None Y values into the null label string which
produces non-numeric slice values; change the Y-axis value retrieval to fetch
the raw value while preserving None (i.e., call safe_get_value for
payload.y_axis with preserve_none=True or otherwise avoid mapping None to
null_label), then pass that raw value into convert_value with preserve_none=True
so that None stays None and you can set slice_value = y_value if y_value is not
None else 0; keep slice_name logic but only include the numeric y_value when it
is not None to avoid embedding the null label into the numeric field.

Comment on lines 865 to 921
        if payload.computation_type == "raw":
            return {
                "xAxisData": [
                    convert_value(safe_get_value(row, payload.x_axis, null_label))
                    for row in results
                ],
                "series": [
                    {
                        "name": payload.y_axis,
                        "data": [
                            convert_value(
                                safe_get_value(row, payload.y_axis, null_label), preserve_none=True
                            )
                            for row in results
                        ],
                    }
                ],
                "legend": [payload.y_axis],
            }
            # Check if we have extra_dimension for grouping
            if payload.extra_dimension:
                # Group data by extra dimension
                grouped_data = {}
                x_values = set()
                for row in results:
                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
                    y_val = convert_value(
                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
                    )
                    extra_val = convert_value(
                        safe_get_value(row, payload.extra_dimension, null_label)
                    )

                    x_values.add(x_val)
                    if extra_val not in grouped_data:
                        grouped_data[extra_val] = {}
                    grouped_data[extra_val][x_val] = y_val

                # Build series data for each extra dimension value
                x_axis_data = sorted(list(x_values))
                series_data = []
                legend_data = []

                for extra_val, data_dict in grouped_data.items():
                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
                    series_data.append({"name": str(extra_val), "data": series_values})
                    legend_data.append(str(extra_val))

                return {
                    "xAxisData": x_axis_data,
                    "series": series_data,
                    "legend": legend_data,
                }
            else:
                # Simple raw chart without extra dimension
                return {
                    "xAxisData": [
                        convert_value(safe_get_value(row, payload.x_axis, null_label))
                        for row in results
                    ],
                    "series": [
                        {
                            "name": payload.y_axis,
                            "data": [
                                convert_value(
                                    safe_get_value(row, payload.y_axis, null_label),
                                    preserve_none=True,
                                )
                                for row in results
                            ],
                        }
                    ],
                    "legend": [payload.y_axis],
                }
        else:  # aggregated


⚠️ Potential issue

Bug: y-axis null handling in raw line mirrors bar issue.

Apply the same fix to avoid "Unknown" in numeric series.

-                    y_val = convert_value(
-                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                    )
+                    y_val = convert_value(row.get(payload.y_axis), preserve_none=True)
-                            "data": [
-                                convert_value(
-                                    safe_get_value(row, payload.y_axis, null_label),
-                                    preserve_none=True,
-                                )
-                                for row in results
-                            ],
+                            "data": [
+                                convert_value(row.get(payload.y_axis), preserve_none=True)
+                                for row in results
+                            ],
📝 Committable suggestion
Suggested change

-        if payload.computation_type == "raw":
-            return {
-                "xAxisData": [
-                    convert_value(safe_get_value(row, payload.x_axis, null_label))
-                    for row in results
-                ],
-                "series": [
-                    {
-                        "name": payload.y_axis,
-                        "data": [
-                            convert_value(
-                                safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                            )
-                            for row in results
-                        ],
-                    }
-                ],
-                "legend": [payload.y_axis],
-            }
-            # Check if we have extra_dimension for grouping
-            if payload.extra_dimension:
-                # Group data by extra dimension
-                grouped_data = {}
-                x_values = set()
-                for row in results:
-                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
-                    y_val = convert_value(
-                        safe_get_value(row, payload.y_axis, null_label), preserve_none=True
-                    )
-                    extra_val = convert_value(
-                        safe_get_value(row, payload.extra_dimension, null_label)
-                    )
-                    x_values.add(x_val)
-                    if extra_val not in grouped_data:
-                        grouped_data[extra_val] = {}
-                    grouped_data[extra_val][x_val] = y_val
-                # Build series data for each extra dimension value
-                x_axis_data = sorted(list(x_values))
-                series_data = []
-                legend_data = []
-                for extra_val, data_dict in grouped_data.items():
-                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
-                    series_data.append({"name": str(extra_val), "data": series_values})
-                    legend_data.append(str(extra_val))
-                return {
-                    "xAxisData": x_axis_data,
-                    "series": series_data,
-                    "legend": legend_data,
-                }
-            else:
-                # Simple raw chart without extra dimension
-                return {
-                    "xAxisData": [
-                        convert_value(safe_get_value(row, payload.x_axis, null_label))
-                        for row in results
-                    ],
-                    "series": [
-                        {
-                            "name": payload.y_axis,
-                            "data": [
-                                convert_value(
-                                    safe_get_value(row, payload.y_axis, null_label),
-                                    preserve_none=True,
-                                )
-                                for row in results
-                            ],
-                        }
-                    ],
-                    "legend": [payload.y_axis],
-                }
-        else:  # aggregated
+        if payload.computation_type == "raw":
+            # Check if we have extra_dimension for grouping
+            if payload.extra_dimension:
+                # Group data by extra dimension
+                grouped_data = {}
+                x_values = set()
+                for row in results:
+                    x_val = convert_value(safe_get_value(row, payload.x_axis, null_label))
+                    y_val = convert_value(row.get(payload.y_axis), preserve_none=True)
+                    extra_val = convert_value(
+                        safe_get_value(row, payload.extra_dimension, null_label)
+                    )
+                    x_values.add(x_val)
+                    if extra_val not in grouped_data:
+                        grouped_data[extra_val] = {}
+                    grouped_data[extra_val][x_val] = y_val
+                # Build series data for each extra dimension value
+                x_axis_data = sorted(list(x_values))
+                series_data = []
+                legend_data = []
+                for extra_val, data_dict in grouped_data.items():
+                    series_values = [data_dict.get(x_val, 0) for x_val in x_axis_data]
+                    series_data.append({"name": str(extra_val), "data": series_values})
+                    legend_data.append(str(extra_val))
+                return {
+                    "xAxisData": x_axis_data,
+                    "series": series_data,
+                    "legend": legend_data,
+                }
+            else:
+                # Simple raw chart without extra dimension
+                return {
+                    "xAxisData": [
+                        convert_value(safe_get_value(row, payload.x_axis, null_label))
+                        for row in results
+                    ],
+                    "series": [
+                        {
+                            "name": payload.y_axis,
+                            "data": [
+                                convert_value(row.get(payload.y_axis), preserve_none=True)
+                                for row in results
+                            ],
+                        }
+                    ],
+                    "legend": [payload.y_axis],
+                }
+        else:  # aggregated
🤖 Prompt for AI Agents
In ddpui/core/charts/charts_service.py around lines 865-921, the grouped
raw-path currently fills missing x-axis points with 0 (series_values =
[data_dict.get(x_val, 0) ...]) which causes numeric series to show
"Unknown"/incorrect bars; change the default to None so missing points remain
nulls (series_values = [data_dict.get(x_val, None) ...]) and ensure the grouped
y values are converted with preserve_none=True (leave that conversion in place)
so charts receive None for missing data instead of 0 or the string "Unknown".
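The None-default fill suggested in the prompt, sketched standalone (here `grouped` maps each extra-dimension value to a {x → y} dict; names are illustrative, not the service's):

```python
def build_series(grouped, x_axis_data):
    # missing x points stay None instead of being coerced to 0,
    # so charts render gaps rather than false zero values
    return [
        {"name": str(extra_val), "data": [data_dict.get(x, None) for x in x_axis_data]}
        for extra_val, data_dict in grouped.items()
    ]
```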
