---
author: Ahmad Al Hakim
date: 2025-09-03
title: Turn Jupyter Notebooks into Production Dashboards in Python
description: "Convert Jupyter notebooks into interactive, production-ready dashboards with Python and Reflex. Keep your pandas logic, add UI, and deploy in minutes."
image: /blog/jupyter_reflex.png
meta: [
    {
        "name": "keywords",
        "content": "Jupyter Notebook, Python dashboards, data science workflows, interactive dashboards, productionizing notebooks, Python web apps, data visualization, machine learning apps, data scientist guide, dashboard deployment"
    }
]
---

## The Data Scientist's Dilemma

Data scientists excel at analysis but struggle with productionization. You build sophisticated models in Jupyter notebooks, then face the dreaded request: "Can we make this a live dashboard?"

The usual options aren't great. Hand it off to engineers and wait months. Use limited dashboard tools that can't handle the complexity of your analysis. Or learn React, APIs, and deployment just to make your Python work interactive.

This guide shows a different path: transforming your Jupyter analysis directly into a production dashboard without leaving Python.

## Our Starting Point: The Jupyter Notebook

Let's work with a realistic scenario: analyzing customer churn using the IBM Telco dataset. Here's what a typical analysis notebook looks like:

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier

# Load the IBM Telco customer churn dataset
url = "https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv"
df = pd.read_csv(url)

print(f"Dataset shape: {df.shape}")

# Prepare columns and add some realistic features
df['last_login'] = pd.to_datetime("2022-01-01") + pd.to_timedelta(np.random.randint(0, 400, len(df)), unit='D')
df['usage_last_30d'] = np.random.randint(10, 500, len(df))
df['usage_prev_30d'] = np.random.randint(10, 500, len(df))
df['support_tickets'] = np.random.poisson(2, len(df))
df['plan_value'] = df['MonthlyCharges']
df['plan_type'] = df['Contract']
df['churned'] = df['Churn'].map({'Yes': 1, 'No': 0})

print(f"Churn rate: {df['churned'].mean():.2%}")

# Feature engineering
df['days_since_last_login'] = (pd.Timestamp.now() - df['last_login']).dt.days
# Ratio of recent to previous usage (values below 1 indicate a drop in usage)
df['usage_decline'] = df['usage_last_30d'] / (df['usage_prev_30d'] + 1e-5)

# Key insights
churn_by_plan = df.groupby('plan_type')['churned'].agg(['count', 'mean'])
print("\nChurn by Plan Type:")
print(churn_by_plan)

# Visualizations
fig, axes = plt.subplots(2, 2, figsize=(12, 8))

churn_by_plan['mean'].plot(kind='bar', ax=axes[0,0], title='Churn Rate by Plan')
df.boxplot('usage_decline', by='churned', ax=axes[0,1])
axes[0,1].set_title('Usage Decline: Churned vs Active')
sns.histplot(data=df, x='days_since_last_login', hue='churned', bins=20, ax=axes[1,0])
axes[1,0].set_title('Days Since Last Login Distribution')
sns.histplot(data=df, x='plan_value', hue='churned', kde=True, ax=axes[1,1])
axes[1,1].set_title('Plan Value Distribution')

plt.tight_layout()
plt.show()

# Predictive model
features = ['days_since_last_login', 'usage_decline', 'support_tickets', 'plan_value']
X = df[features]
y = df['churned']

rf = RandomForestClassifier(random_state=42)
rf.fit(X, y)

# Note: this is accuracy on the training data, shown only as a quick sanity check
print(f"\nModel accuracy: {rf.score(X, y):.3f}")
print("Feature importance:")
for feature, importance in zip(features, rf.feature_importances_):
    print(f"  {feature}: {importance:.3f}")
```

```python exec
import reflex as rx
from reflex_image_zoom import image_zoom

def render_image():
    return rx.el.div(
        image_zoom(
            rx.image(
                src="/blog/jupyter_plots.png",
                class_name="p-2 rounded-md h-auto",
                border=f"0.81px solid {rx.color('slate', 5)}",
            ),
            class_name="rounded-md overflow-hidden",
        ),
        rx.text(
            "Plots generated from Google Colab using Jupyter Notebook",
            class_name="text-sm text-slate-10 mt-2 italic",
        ),
        class_name="w-full flex flex-col rounded-md cursor-pointer",
    )
```

```python eval
rx.el.div(render_image(), class_name="py-6")
```
| 115 | + |
| 116 | +This notebook does what data scientists do every day: loads data, engineers features, explores patterns, and builds predictive models. The analysis works, the insights are valuable, but it's stuck in a static format. |
| 117 | + |
| 118 | +When stakeholders ask "Can we see this updating with fresh data?" you're back to the productionization problem. |
| 119 | + |
| 120 | +## The Productionization Problem |
| 121 | + |
| 122 | +Your notebook analysis is solid, but it has limitations. The plots are static images. The insights are buried in print statements. To see updated results, someone needs to rerun the entire notebook manually. |
| 123 | + |
| 124 | +Traditional solutions force you to choose between complexity and capability: |
| 125 | + |
| 126 | +**Flask + React**: Build a backend API, create React components, manage state, handle authentication. Weeks of work to recreate what you already built. |
| 127 | + |
| 128 | +**Streamlit**: Quick to deploy, but limited interactivity. Complex analyses don't translate well to Streamlit's widget-based approach. |
| 129 | + |
| 130 | +**Hand-off to engineering**: Wait months while engineers rebuild your analysis, often losing nuance in translation. |
| 131 | + |
| 132 | +None of these options preserve your existing work or let you iterate quickly. What if you could keep your Python analysis logic and just make it interactive? |
| 133 | + |
| 134 | +## Transforming to Reflex |
| 135 | + |
| 136 | +Here's how to transform our notebook into an interactive dashboard. Your data processing logic stays the same—we just add Reflex components around it. |
| 137 | + |
| 138 | +### Project Structure |
| 139 | + |
| 140 | +First, let's set up a proper Reflex project structure: |
| 141 | + |
| 142 | +```text |
| 143 | +churn-dashboard/ |
| 144 | +├── app/ |
| 145 | +│ ├── __init__.py |
| 146 | +│ ├── app.py |
| 147 | +│ ├── state.py |
| 148 | +│ └── components/ |
| 149 | +│ ├── __init__.py |
| 150 | +│ ├── kpi_card.py |
| 151 | +│ └── bar_chart.py |
| 152 | +├── assets/ |
| 153 | +├── requirements.txt |
| 154 | +└── rxconfig.py |
| 155 | +``` |

### Step 1: State Management (app/state.py)

Move your notebook's data processing logic into a Reflex state class:

```python
import reflex as rx
import pandas as pd
import logging
import asyncio


class DashboardState(rx.State):
    """The app state."""

    is_loading: bool = True
    total_customers: int = 0
    total_churn: int = 0
    churn_rate: float = 0.0
    avg_monthly_charges: float = 0.0
    churn_by_contract: list[dict[str, str | int]] = []
    churn_by_tenure: list[dict[str, str | int]] = []
    chart_view: str = "Contract"

    @rx.event(background=True)
    async def load_data(self):
        """Load and process the data from the URL."""
        async with self:
            self.is_loading = True
        try:
            await asyncio.sleep(0.5)
            # Load and clean the raw CSV (same steps as the notebook)
            url = "https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv"
            df = pd.read_csv(url)
            df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
            df.dropna(inplace=True)

            # Headline KPIs
            total_customers = len(df)
            total_churn = len(df[df["Churn"] == "Yes"])
            churn_rate = (
                total_churn / total_customers * 100 if total_customers > 0 else 0
            )
            avg_monthly_charges = df["MonthlyCharges"].mean()

            # Churn counts by contract type, reshaped into records for the bar chart
            churn_data_contract = (
                df.groupby("Contract")["Churn"].value_counts().unstack(fill_value=0)
            )
            churn_data_contract.reset_index(inplace=True)
            churn_data_contract.rename(
                columns={"No": "retained", "Yes": "churned"}, inplace=True
            )
            chart_data_contract = churn_data_contract.to_dict(orient="records")

            # Churn counts by tenure bucket
            bins = [0, 12, 24, 36, 48, 60, df["tenure"].max()]
            labels = ["0-12m", "13-24m", "25-36m", "37-48m", "49-60m", "61m+"]
            df["tenure_group"] = pd.cut(
                df["tenure"], bins=bins, labels=labels, right=True, include_lowest=True
            )
            churn_data_tenure = (
                df.groupby("tenure_group", observed=False)["Churn"]
                .value_counts()
                .unstack(fill_value=0)
            )
            churn_data_tenure.reset_index(inplace=True)
            churn_data_tenure.rename(
                columns={"No": "retained", "Yes": "churned"}, inplace=True
            )
            chart_data_tenure = churn_data_tenure.to_dict(orient="records")

            # Background tasks must re-acquire the state lock before mutating it
            async with self:
                self.total_customers = total_customers
                self.total_churn = total_churn
                self.churn_rate = round(churn_rate, 2)
                self.avg_monthly_charges = round(avg_monthly_charges, 2)
                self.churn_by_contract = chart_data_contract
                self.churn_by_tenure = chart_data_tenure
                self.is_loading = False
        except Exception as e:
            logging.exception(f"Failed to load data: {e}")
            async with self:
                self.is_loading = False

    @rx.event
    def set_chart_view(self, view: str):
        self.chart_view = view

    @rx.var
    def chart_data(self) -> list[dict[str, str | int]]:
        if self.chart_view == "Contract":
            return self.churn_by_contract
        return self.churn_by_tenure

    @rx.var
    def chart_title(self) -> str:
        if self.chart_view == "Contract":
            return "Customer Retention by Contract Type"
        return "Customer Retention by Tenure"

    @rx.var
    def chart_x_axis_key(self) -> str:
        if self.chart_view == "Contract":
            return "Contract"
        return "tenure_group"
```
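
One design note: the cleaning steps above now live only inside `load_data`. If you want the notebook and the dashboard to stay in sync, you could factor them into a small shared helper that both import. This is just a sketch under that assumption; `app/data.py` and `prepare_churn_data` are names introduced here, not part of the project above.

```python
# app/data.py (hypothetical shared helper; not part of the structure shown above)
import pandas as pd

TELCO_URL = "https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv"


def prepare_churn_data(url: str = TELCO_URL) -> pd.DataFrame:
    """Load the Telco CSV and apply the same cleaning used in the notebook and state."""
    df = pd.read_csv(url)
    df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
    df.dropna(inplace=True)
    return df
```

Both the notebook and `DashboardState.load_data` could then call `prepare_churn_data()` instead of repeating the cleaning logic.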

### Step 2: Chart Component (app/components/bar_chart.py)

Convert your matplotlib bar chart to an interactive Reflex chart:

```python
import reflex as rx
from app.state import DashboardState

def churn_bar_chart() -> rx.Component:
    """A bar chart showing churn by contract type."""
    return rx.el.div(
        rx.el.h3("Customer Retention by Contract Type"),
        rx.recharts.bar_chart(
            rx.recharts.cartesian_grid(vertical=False),
            rx.recharts.x_axis(data_key="Contract"),
            rx.recharts.y_axis(),
            rx.recharts.legend(),
            rx.recharts.bar(
                data_key="retained",
                name="Retained",
                fill="#3b82f6",
                stack_id="a",
            ),
            rx.recharts.bar(
                data_key="churned",
                name="Churned",
                fill="#ef4444",
                stack_id="a",
            ),
            data=DashboardState.churn_by_contract,
            height=300,
        ),
    )
```
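
`DashboardState` also exposes `chart_view`, `chart_data`, `chart_title`, and `chart_x_axis_key`, which the hard-coded component above doesn't use yet. If you want the chart to flip between the contract and tenure views, a minimal sketch might look like this (the `view_toggle` and `dynamic_churn_chart` names and the button styling are placeholders, not part of the original component):

```python
import reflex as rx
from app.state import DashboardState

def view_toggle() -> rx.Component:
    """Two buttons that switch which grouping the chart displays."""
    return rx.el.div(
        rx.el.button("By Contract", on_click=DashboardState.set_chart_view("Contract")),
        rx.el.button("By Tenure", on_click=DashboardState.set_chart_view("Tenure")),
        class_name="flex gap-2 mb-2",
    )

def dynamic_churn_chart() -> rx.Component:
    """Same stacked bar chart, but driven by the computed vars in DashboardState."""
    return rx.el.div(
        rx.el.h3(DashboardState.chart_title),
        view_toggle(),
        rx.recharts.bar_chart(
            rx.recharts.cartesian_grid(vertical=False),
            rx.recharts.x_axis(data_key=DashboardState.chart_x_axis_key),
            rx.recharts.y_axis(),
            rx.recharts.legend(),
            rx.recharts.bar(data_key="retained", name="Retained", fill="#3b82f6", stack_id="a"),
            rx.recharts.bar(data_key="churned", name="Churned", fill="#ef4444", stack_id="a"),
            data=DashboardState.chart_data,
            height=300,
        ),
    )
```

Because `chart_data`, `chart_title`, and `chart_x_axis_key` are computed vars, clicking either button updates the chart without any extra wiring.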

### Step 3: KPI Cards (app/components/kpi_card.py)

Create reusable metric cards to replace your print statements:

```python
import reflex as rx

def kpi_card(title: str, value: str | int, icon: str, color: str) -> rx.Component:
    """A reusable KPI card component.

    `icon` is a Lucide icon name (what rx.icon expects) and `color` is a set of
    Tailwind classes for the icon's badge.
    """
    return rx.el.div(
        rx.el.div(
            rx.icon(icon, class_name="w-6 h-6"),
            class_name=f"p-3 rounded-full {color}",
        ),
        rx.el.div(
            rx.el.p(title, class_name="text-xs font-medium text-gray-500"),
            rx.el.p(value, class_name="text-xl font-semibold text-gray-800"),
        ),
        class_name="flex items-center gap-4 p-4 bg-white border border-gray-200 rounded-xl shadow-sm",
    )
```

### Step 4: Main Dashboard (app/app.py)

Bring everything together into a dashboard:

```python
import reflex as rx
from app.state import DashboardState
from app.components.kpi_card import kpi_card
from app.components.bar_chart import churn_bar_chart

def index() -> rx.Component:
    """The main dashboard page."""
    return rx.el.main(
        rx.el.div(
            rx.el.h1("Telco Churn Dashboard"),
            # KPI Cards - replacing our print statements
            rx.el.div(
                kpi_card("Total Customers", DashboardState.total_customers, "users", "bg-blue-100 text-blue-600"),
                kpi_card("Total Churn", DashboardState.total_churn, "user-minus", "bg-red-100 text-red-600"),
                kpi_card("Churn Rate", f"{DashboardState.churn_rate}%", "trending-down", "bg-yellow-100 text-yellow-600"),
                kpi_card("Avg Monthly Bill", f"${DashboardState.avg_monthly_charges}", "dollar-sign", "bg-green-100 text-green-600"),
                class_name="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-4",
            ),
            # Chart - replacing our matplotlib plot
            churn_bar_chart(),
        ),
        on_mount=DashboardState.load_data,
    )

app = rx.App()
app.add_page(index)
```
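
One detail the page above doesn't use yet is the `is_loading` flag from `DashboardState`. If you want a spinner while the CSV downloads, one option is to wrap the content in `rx.cond`. This is only a sketch; the `dashboard_content` helper is a name introduced here for illustration:

```python
def dashboard_content() -> rx.Component:
    """The KPI grid and chart, rendered once load_data has finished."""
    return rx.el.div(
        rx.el.div(
            kpi_card("Total Customers", DashboardState.total_customers, "users", "bg-blue-100 text-blue-600"),
            kpi_card("Churn Rate", f"{DashboardState.churn_rate}%", "trending-down", "bg-yellow-100 text-yellow-600"),
            class_name="grid grid-cols-1 sm:grid-cols-2 gap-4",
        ),
        churn_bar_chart(),
    )

def index() -> rx.Component:
    """Variant of the page above with a loading state."""
    return rx.el.main(
        rx.el.h1("Telco Churn Dashboard"),
        rx.cond(
            DashboardState.is_loading,
            rx.spinner(),           # shown while the background event is running
            dashboard_content(),    # shown once the data is in state
        ),
        on_mount=DashboardState.load_data,
    )
```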

```python eval
rx.el.div(
    rx.image(
        src="/blog/jupyter_reflex_app_demo.gif",
        height="auto",
    ),
    rx.text(
        "Reflex Build demo app showing basic interactivity with the data.",
        class_name="text-sm text-slate-10 mt-2 italic",
    ),
    class_name="py-6",
)
```

Your notebook's pandas analysis logic stays intact; it just moves into the `load_data` method. The static matplotlib plots become interactive charts, and your print statements become clean KPI cards. The same insights, now accessible to anyone with a web browser.

If you want to try this dashboard live, you can do so on Reflex Build -> [Churn Dashboard](https://build.reflex.dev/gen/c100a12f-4f22-452a-8e3c-74cbf8baba98/)

You can edit, re-work, and improve it as you see fit!

## Deploying with Reflex

The final step is sharing your work. A dashboard is only valuable if others can access it, and deployment is where most data science projects stall.

With Reflex, deployment is built-in. You don’t need to worry about servers, Docker, or frontend builds. Your Python app can be published live with a single command:

```bash
reflex deploy
```

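The only configuration the project needs is the `rxconfig.py` at the repository root. For the layout above it can stay minimal; the one assumption here is that `app_name` must match the package directory (`app` in this structure):

```python
# rxconfig.py
import reflex as rx

config = rx.Config(
    app_name="app",  # must match the package that contains app.py
)
```
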
For details on how deployment works, see the [Cloud Deploy Docs](https://reflex.dev/docs/hosting/deploy-quick-start/).

## Wrapping Up

We started with a Jupyter notebook full of exploratory analysis: static plots and printouts that lived on your laptop. Then we showed how to transform that work into a production-grade dashboard with Reflex, keeping your Python workflow intact. Finally, we saw how easy it is to deploy and share your dashboard.

With this workflow, data scientists can go from notebook → live dashboard → deployed app in hours instead of weeks.

Next steps:

- Try deploying your own analysis.
- Explore more Reflex components for interactive UIs.
- Experiment with refreshing your data sources (see the sketch below).

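For that last point, one hedged option is a second background event added inside `DashboardState` (which already imports `asyncio`) that re-runs the data load on a timer. The `poll` name and the five-minute interval are arbitrary, and `prepare_churn_data` is the hypothetical helper sketched back in Step 1; you would trigger `poll` from `on_mount` alongside `load_data`:

```python
    @rx.event(background=True)
    async def poll(self):
        """Sketch: refresh the headline KPIs every five minutes."""
        while True:
            await asyncio.sleep(300)
            df = prepare_churn_data()  # hypothetical helper from app/data.py
            async with self:
                self.total_customers = len(df)
                self.total_churn = int((df["Churn"] == "Yes").sum())
                self.churn_rate = round(
                    self.total_churn / self.total_customers * 100, 2
                )
```

An infinite loop like this keeps one background task alive per client, so for heavier refresh work you would likely move the schedule into an external job instead.
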
The barrier between analysis and production is shrinking. With Reflex, your notebook insights can live on the web.