Conversation

@VsevolodX
Member

@VsevolodX commented Nov 30, 2025

Summary by CodeRabbit

  • Documentation
    • Added two end-to-end Bandgap Workflow Jupyter notebooks (minimal and extended) demonstrating environment setup and auth, material import and visualization, workflow selection and customization (add/modify relaxation and model subtypes), applying contexts to specific workflow units, configuring compute resources, creating/submitting/monitoring jobs, retrieving bandgap results, and plotting visualizations.

✏️ Tip: You can customize this high-level summary in your review settings.

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@coderabbitai

coderabbitai bot commented Nov 30, 2025

Note

Reviews paused

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Adds two new example Jupyter notebooks that demonstrate end-to-end bandgap workflows for twisted MoS2: environment and auth setup, material import/save, workflow selection and mutation (including relaxation and model subtype changes), context provisioning, compute/job creation and submission, monitoring, and result retrieval/visualization.

Changes

  • Cohort: Bandgap workflow notebooks
    File(s): other/materials_designer/workflows/run_bandgap_workflow_max.ipynb, other/materials_designer/workflows/run_bandgap_workflow_min.ipynb
    Summary: Two new example notebooks illustrating a complete bandgap workflow for twisted MoS2: environment/auth setup, material loading and saving, application/workflow selection and preview, adding/modifying subworkflows (e.g., relaxation, model subtype changes), applying contexts to units, compute configuration, job creation/submission/polling, result extraction, and basic visualization.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10–15 minutes

  • Verify notebook imports and referenced helper utilities resolve in the repo environment.
  • Confirm authentication placeholders (ACCOUNT_ID, AUTH_TOKEN, ORGANIZATION_ID) are non-sensitive examples.
  • Optionally run key cells to validate API payload shapes and job lifecycle snippets behave as shown.

Poem

🐰 I nudged the cells with velvet feet,

Twisted MoS2 where band edges meet,
I queued the run beneath the moon,
Numbers hopped back—bright afternoon,
A rabbit cheers each result so sweet.

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive. The title 'Feature/SOF-7759 Jode Mock NB' is vague and uses abbreviations that don't clearly convey the actual changes—adding two bandgap workflow example notebooks. Resolution: revise the title to be more descriptive, such as 'Add bandgap workflow example notebooks for twisted MoS2', to clearly communicate the main change to reviewers.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (4)

24-29: Empty section: add setup code or remove.

Section 1 promises environment and parameter setup but contains no code cell. Either add the necessary setup code (imports, constants, configuration) or remove this section and renumber the remaining sections.


76-101: Consider removing inline implementation notes.

The comments on lines 83-84 about missing JavaScript implementations are internal development notes that may confuse users of this example notebook. Consider either removing them or moving them to a separate documentation section if they're important for users to know.


152-162: Fix spacing in keyword argument.

Line 159 has inconsistent spacing around the = in compute = compute_config. Per PEP 8, keyword arguments should not have spaces around the equals sign.

Apply this diff:

-job = create_job(workflow=workflow, material=material, compute = compute_config, auth_config=auth_config)
+job = create_job(workflow=workflow, material=material, compute=compute_config, auth_config=auth_config)

169-182: Move PropertyEnum import to where it's used.

Line 175 imports PropertyEnum, but it's not used until line 197 in the next section. Moving the import to section 9 would improve code organization.

Remove from this cell:

-from mat3ra.prode import PropertyEnum
 job.run()
 job.wait_for_complete()

And add to the cell at lines 189-200:

+from mat3ra.prode import PropertyEnum
+
 # AFTER Finished
 # A class from Prode to handle results
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 76cf906 and c9bd9ee.

📒 Files selected for processing (1)
  • other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Continuous Testing and Docs Publication
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb

[error] 1-1: pre-commit: nbstripout hook modified notebook files and failed the commit (exit code 1). Updated notebook diffs were applied by the hook.

🪛 Ruff (0.14.6)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb

4-4: await statement outside of a function

(F704)

🔇 Additional comments (5)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (5)

54-69: LGTM!

The material loading logic is clear and appropriate for a demonstration notebook.


108-124: LGTM!

The workflow configuration is well-structured with clear method calls and helpful inline documentation.


131-145: LGTM!

The compute configuration is straightforward and appropriate for the example.


189-200: LGTM!

The results retrieval logic is clear and properly uses the PropertyEnum for specifying which properties to retrieve.


207-219: LGTM!

The visualization and display logic appropriately uses the results retrieved in the previous section.

Comment on lines 36 to 47
{
"metadata": {},
"cell_type": "code",
"source": [
"from mat3ra.api import ApiClient\n",
"# Log in to get the API token\n",
"auth_config = await ApiClient().login()"
],
"id": "23626cb27f6e7206",
"outputs": [],
"execution_count": null
},

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

jq '.metadata.language_info.version' other/materials_designer/specific_examples/run_bandgap_workflow.ipynb

Repository: Exabyte-io/api-examples

Length of output: 74


Update Python version and use sync API call or enable async support.

Top-level await is used in the code, but the notebook metadata specifies Python 2.7.6, which does not support async/await syntax (introduced in Python 3.5+). Either update the notebook metadata to Python 3.5+, or replace the async call with a synchronous alternative.
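
For a synchronous variant, a minimal sketch in Python; it assumes ApiClient().login() is the coroutine shown in the cell above and uses the standard-library asyncio.run wrapper (the login_sync name mentioned above is only a hypothetical alternative):

import asyncio
from mat3ra.api import ApiClient

# Run the login coroutine on a fresh event loop instead of awaiting at top level
auth_config = asyncio.run(ApiClient().login())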

🤖 Prompt for AI Agents
In other/materials_designer/specific_examples/run_bandgap_workflow.ipynb around
lines 36 to 47, the notebook uses top-level "await ApiClient().login()" but the
notebook metadata is set to Python 2.7.6 which lacks async/await support; either
update the notebook kernel metadata to a Python 3.5+ kernel (preferably
3.8/3.9+) so async/await are supported, or change the call to a synchronous
login variant (e.g., use ApiClient().login_sync() or call the login coroutine
via an event loop wrapper) and remove the top-level await; update metadata or
code accordingly and rerun the cell to confirm no syntax errors.

Comment on lines +220 to +242
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

⚠️ Potential issue | 🟠 Major

Fix Python version metadata inconsistency.

The notebook metadata shows conflicting Python versions:

  • Kernel specifies "Python 3" (line 223)
  • Language info specifies version "2.7.6" and "ipython2" (lines 230, 237)

Since the code uses Python 3 features (await/async), the language_info should be updated to reflect Python 3.

Apply this diff to fix the metadata:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.8.0"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
🤖 Prompt for AI Agents
In other/materials_designer/specific_examples/run_bandgap_workflow.ipynb around
lines 220 to 242, the notebook metadata is inconsistent (kernel is "Python 3"
but language_info claims Python 2.7.6 and ipython2); update language_info to
match Python 3 by changing "version" to a Python 3.x string (e.g., "3.8" or
"3.10"), set codemirror_mode "version" to 3, and change "pygments_lexer" from
"ipython2" to "ipython3" (leave kernelspec as-is), so the metadata consistently
reflects Python 3.

@coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (1)

312-323: Fix Python version metadata inconsistency.

The notebook metadata contains conflicting Python versions, which was previously flagged. The kernelspec indicates "Python 3" (line 308) but language_info specifies "2.7.6" and "ipython2". Since the code uses Python 3 features (await/async), the language_info must be updated to reflect Python 3.

Apply this diff to fix the metadata:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.8.0"
🧹 Nitpick comments (2)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (2)

205-209: Move unused import to where it's actually used.

The PropertyEnum import on line 205 is not used in this cell. It's actually used in the next cell (line 229). Move the import to that cell for better organization.

Apply this diff:

-from mat3ra.prode import PropertyEnum
 job.run()
 job.wait_for_complete()
 # job.check_status()

And add the import to the next cell (before line 229):

from mat3ra.prode import PropertyEnum
# AFTER Finished
# A class from Prode to handle results
results = job.get_results(PropertyEnum.BAND_GAP, PropertyEnum.BAND_STRUCTURE)

297-304: Remove empty code cell.

This empty code cell serves no purpose and should be removed to keep the notebook clean.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9bd9ee and 8ce197f.

📒 Files selected for processing (1)
  • other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.6)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb

8-8: await statement outside of a function

(F704)


50-50: Undefined name kgrid_context_provider

(F821)

🔇 Additional comments (5)
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb (5)

71-75: LGTM!

The material loading and visualization logic is clear and appropriate for the workflow.


163-168: LGTM!

The compute configuration is properly structured with appropriate queue and resource settings.


186-187: LGTM!

Job creation properly wires together the workflow, material, compute configuration, and authentication.


227-229: LGTM!

Result retrieval properly uses PropertyEnum to specify the desired properties.


247-250: LGTM!

Result visualization and display logic is appropriate for the workflow.

Comment on lines 47 to 53
"from mat3ra.api import ApiClient\n",
"\n",
"from examples.reproducing_publications.band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide import \\\n",
" unit_flowchart_id\n",
"\n",
"# Log in to get the API token\n",
"auth_config = await ApiClient().login()"

⚠️ Potential issue | 🟡 Minor

Remove unused import or clarify its purpose.

The imported unit_flowchart_id from line 49-50 is not used in this cell or anywhere else in the notebook. If it's intended for future use, consider adding a comment explaining its purpose, otherwise remove it.

Apply this diff to remove the unused import:

 from mat3ra.api import ApiClient
 
-from examples.reproducing_publications.band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide import \
-    unit_flowchart_id
-
 # Log in to get the API token
 auth_config = await ApiClient().login()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"from mat3ra.api import ApiClient\n",
"\n",
"from examples.reproducing_publications.band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide import \\\n",
" unit_flowchart_id\n",
"\n",
"# Log in to get the API token\n",
"auth_config = await ApiClient().login()"
"from mat3ra.api import ApiClient\n",
"\n",
"# Log in to get the API token\n",
"auth_config = await ApiClient().login()"
🧰 Tools
🪛 Ruff (0.14.6)

50-50: Undefined name kgrid_context_provider

(F821)

🤖 Prompt for AI Agents
other/materials_designer/specific_examples/run_bandgap_workflow.ipynb around
lines 47-53: the import of unit_flowchart_id from
examples.reproducing_publications.band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide
is unused; remove unit_flowchart_id from the import statement (or delete the
entire import line) to eliminate the unused import, or if it’s intended for
future use, keep it but add a concise comment in the same cell explaining its
purpose so reviewers know it’s intentional.

Comment on lines 93 to 109
"from mat3ra.wode.context_providers import (\n",
" PointsGridFormDataProvider, # exists - for k-points\n",
" PlanewaveCutoffsContextProvider, # exists - for cutoffs\n",
" SmearingContextProvider, # no JS implementation yet\n",
" BandsContextProvider # no JS implementation yet\n",
")\n",
"\n",
"\n",
"kgrid_provider = PointsGridFormDataProvider(material=material)\n",
"cutoffs_provider = PlanewaveCutoffsContextProvider(material=material)\n",
"smearing_provider = SmearingContextProvider(material=material)\n",
"bands_provider = BandsContextProvider(material=material)\n",
"\n",
"kpoints = kgrid_provider.get_dimensions() # or calculate_dimensions()\n",
"cutoff = cutoffs_provider.get_cutoff() # defaultECUTWFC, defaultECUTRHO\n",
"smearing = smearing_provider.get_smearing()\n",
"number_of_bands = bands_provider.get_number_of_bands()"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd /tmp/repo && find . -type f -name "*.py" | head -20

Repository: Exabyte-io/api-examples

Length of output: 126


🏁 Script executed:

cd /tmp/repo && rg "class SmearingContextProvider|class BandsContextProvider|class PointsGridFormDataProvider" -A 5 --max-count=10

Repository: Exabyte-io/api-examples

Length of output: 126


🏁 Script executed:

cd /tmp/repo && rg "def get_dimensions|def calculate_dimensions" -B 2 -A 3

Repository: Exabyte-io/api-examples

Length of output: 126


🏁 Script executed:

git ls-files | grep -E "(context|provider)" | head -20

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

rg "class SmearingContextProvider|class BandsContextProvider|class PointsGridFormDataProvider" -A 5

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

rg "def get_dimensions|def calculate_dimensions" -B 2 -A 3

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

find . -name "run_bandgap_workflow.ipynb" -type f

Repository: Exabyte-io/api-examples

Length of output: 138


🏁 Script executed:

python3 << 'EOF'
import json
import sys

try:
    with open('./other/materials_designer/specific_examples/run_bandgap_workflow.ipynb', 'r') as f:
        notebook = json.load(f)
    
    # Find cells with imports and provider usage
    for i, cell in enumerate(notebook.get('cells', [])):
        if cell.get('cell_type') == 'code':
            source = ''.join(cell.get('source', []))
            if 'PointsGridFormDataProvider' in source or 'SmearingContextProvider' in source or 'get_dimensions' in source:
                print(f"\n=== Cell {i} ===")
                print(source)
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)
EOF

Repository: Exabyte-io/api-examples

Length of output: 853


🏁 Script executed:

rg "SmearingContextProvider|BandsContextProvider" --type py

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

rg "def get_dimensions|def calculate_dimensions" --type py

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

rg "mat3ra|wode" --type py -A 2 -B 2 | head -50

Repository: Exabyte-io/api-examples

Length of output: 2696


🏁 Script executed:

find . -name "*.md" -o -name "*.rst" | xargs grep -l "SmearingContextProvider\|BandsContextProvider\|get_dimensions" 2>/dev/null | head -10

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

web_search
mat3ra wode PointsGridFormDataProvider SmearingContextProvider API documentation

Repository: Exabyte-io/api-examples

Length of output: 162


🌐 Web query:

mat3ra.wode PointsGridFormDataProvider get_dimensions API

💡 Result:

I can't find a public reference to "PointsGridFormDataProvider.get_dimensions" in the mat3ra.wode docs or source from my quick search. Do you want me to:

  • search deeper (PyPI package source / GitHub / docs) for the exact API and return code/docs, or
  • show how to call a similar "get_dimensions" function (example usage) if you can paste the package code or the class file?

🌐 Web query:

SmearingContextProvider BandsContextProvider mat3ra wode

💡 Result:

I’m not sure what you mean. Do you mean React context providers named SmearingContextProvider and BandsContextProvider (where did you see them)? What does “mat3ra wode” mean here — is that a transliteration of Arabic asking “what is this”? Please share the code or a link and I’ll look them up.


🏁 Script executed:

find . -name "requirements.txt" -o -name "pyproject.toml" -o -name "setup.py" | xargs cat 2>/dev/null | grep -i mat3ra

Repository: Exabyte-io/api-examples

Length of output: 280


🏁 Script executed:

rg "SmearingContextProvider|BandsContextProvider|PointsGridFormDataProvider" -l

Repository: Exabyte-io/api-examples

Length of output: 136


🏁 Script executed:

find . -name "*.ipynb" -type f | xargs grep -l "context_providers\|ContextProvider" 2>/dev/null

Repository: Exabyte-io/api-examples

Length of output: 138


Remove or clarify internal development notes from production notebook.

Lines 96-97 contain comments stating "no JS implementation yet", which are internal development notes about a JavaScript port. These are inappropriate in user-facing documentation and create confusion about the Python API status.

Additionally, line 106's comment "or calculate_dimensions()" suggests API uncertainty. If this reflects actual uncertainty about the correct method, verify with mat3ra-made package documentation and update the code accordingly; otherwise, remove the comment.
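
If the right method name must be resolved at runtime, one hedged option is to probe the provider object; a sketch, where kgrid_provider comes from the notebook cell quoted above and both method names are taken from its own comment (neither is confirmed against the mat3ra.wode API here):

# Prefer get_dimensions() if the installed mat3ra.wode version exposes it,
# otherwise fall back to calculate_dimensions()
method_name = "get_dimensions" if hasattr(kgrid_provider, "get_dimensions") else "calculate_dimensions"
kpoints = getattr(kgrid_provider, method_name)()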

Comment on lines 253 to 295
{
"cell_type": "code",
"execution_count": null,
"id": "20",
"metadata": {},
"outputs": [],
"source": [
"import pydot\n",
"import json\n",
"\n",
"# Example JSON data (you would load this from a file)\n",
"json_data = {\n",
" \"nodes\": [\n",
" {\"name\": \"A\", \"shape\": \"box\", \"label\": \"Start\"},\n",
" {\"name\": \"B\", \"shape\": \"box\", \"label\": \"Process 1\"},\n",
" {\"name\": \"C\", \"shape\": \"diamond\", \"label\": \"Decision\"},\n",
" {\"name\": \"D\", \"shape\": \"box\", \"label\": \"End\"}\n",
" ],\n",
" \"edges\": [\n",
" {\"source\": \"A\", \"target\": \"B\", \"label\": \"\"},\n",
" {\"source\": \"B\", \"target\": \"C\", \"label\": \"\"},\n",
" {\"source\": \"C\", \"target\": \"D\", \"label\": \"Yes\"},\n",
" {\"source\": \"C\", \"target\": \"B\", \"label\": \"No\"}\n",
" ]\n",
"}\n",
"\n",
"# Create a new directed graph\n",
"dot_graph = pydot.Dot(\"my_flowchart\", graph_type=\"digraph\")\n",
"\n",
"# Add nodes\n",
"for node_data in json_data[\"nodes\"]:\n",
" node = pydot.Node(node_data[\"name\"], shape=node_data[\"shape\"], label=node_data[\"label\"])\n",
" dot_graph.add_node(node)\n",
"\n",
"# Add edges\n",
"for edge_data in json_data[\"edges\"]:\n",
" edge = pydot.Edge(edge_data[\"source\"], edge_data[\"target\"], label=edge_data[\"label\"])\n",
" dot_graph.add_edge(edge)\n",
"\n",
"# Save the flowchart as a PNG\n",
"dot_graph.write_png(\"flowchart.png\")\n",
"print(\"Flowchart rendered to flowchart.png\")\n"
]

⚠️ Potential issue | 🟡 Minor

Remove unrelated pydot flowchart code.

This code cell appears to be test or example code for creating flowcharts with pydot, which is completely unrelated to the bandgap calculation workflow demonstrated in this notebook. It should be removed to maintain focus and clarity.

If flowchart visualization of the workflow is intended, it should be integrated with the actual workflow object created in earlier cells, not use hardcoded JSON data.

🤖 Prompt for AI Agents
In other/materials_designer/specific_examples/run_bandgap_workflow.ipynb around
lines 253 to 295, remove the unrelated pydot flowchart example cell that uses
hardcoded JSON (nodes/edges) because it is not part of the bandgap workflow;
either delete the entire cell or replace it with code that integrates a
visualization using the actual workflow object from earlier cells (e.g., build
the graph from the real workflow data structure and render that), ensuring no
stray test/example pydot JSON remains in the notebook.

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

369-386: Fix Python metadata to reflect Python 3 (duplicate of prior review)

Kernelspec declares “Python 3”, but language_info still has codemirror_mode.version: 2, pygments_lexer: "ipython2", and version: "2.7.6". Given the use of await and modern mat3ra APIs, this notebook is clearly Python 3; the current metadata is misleading and can confuse nbconvert, syntax highlighting, and tooling.

You can align the metadata with Python 3 by updating this block, for example:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.8.0"  // or whatever Python 3.x you actually use

(Adjust the exact version string to match your deployment.)

🧹 Nitpick comments (1)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

8-22: Align process overview with actual notebook sections

The overview lists steps 4–6 (import workflow builder, analyze material, create workflow configuration), but the body jumps from “3. Load the target material” directly to “5. Create workflow and set its parameters.” Consider either adding matching section headings/cells or updating the overview text so users aren’t looking for missing steps.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 13f23e8 and 8716dd8.

📒 Files selected for processing (1)
  • other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb

5-5: await statement outside of a function

(F704)

🔇 Additional comments (1)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

42-52: Top‑level await relies on Jupyter async support and conflicts with current metadata

This cell uses await at the top level, which is fine in modern IPython/Jupyter kernels but will (a) trigger Ruff’s F704 if linted as plain Python and (b) contradict the notebook metadata that still claims Python 2.7.6. Please:

  • Ensure this notebook is only run with a Python 3 kernel that supports top‑level await.
  • Update the language_info block to Python 3 (see separate metadata comment), and/or configure Ruff to skip F704 for notebooks if you lint them.

No code change is strictly required if the runtime is guaranteed, but clarifying this avoids confusing syntax errors when exporting or reusing the code as a script.

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (2)

356-366: Fix import: mat3ra.prove → mat3ra.prode

The import from mat3ra.prove import visualize_property will fail because mat3ra.prove doesn’t exist; earlier in the notebook you already use mat3ra.prode for PropertyEnum. Change the import to:

-# Visual library that can visualize any property defined in Prode
-from mat3ra.prove import visualize_property
+# Visual library that can visualize any property defined in Prode
+from mat3ra.prode import visualize_property

Without this change, the visualization cell will raise ModuleNotFoundError.


378-395: Align notebook metadata with Python 3 kernel

Kernelspec is Python 3, but language_info still claims IPython 2 / Python 2.7.6. Update metadata so tools see this as a Python 3 notebook, e.g.:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.8.0"
🧹 Nitpick comments (1)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

77-83: Fix section numbering to match the overview

The overview lists step 4 (“Import workflow builder and related analyzers”), but the body jumps from “## 3. Load the target material” to “## 5. Create workflow…”. Either add a step‑4 section or renumber this to keep the doc structure consistent.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8716dd8 and 1f7ff60.

📒 Files selected for processing (1)
  • other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb

5-5: await statement outside of a function

(F704)

🔇 Additional comments (1)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

42-52: Top‑level await is fine in Jupyter, but will fail in plain Python

The await ApiClient().login() at cell scope relies on IPython’s support for top‑level await. That’s OK for this notebook, but if you ever export/run this as a .py script you’ll need to wrap it in async def and call it via asyncio.run(...), or similar. Also explains Ruff’s F704 warning.
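
For reference, a script-friendly sketch of the same login flow (assuming the ApiClient().login() coroutine used in the notebook cell):

import asyncio
from mat3ra.api import ApiClient

async def main():
    # Same call as the notebook cell, awaited inside a coroutine
    return await ApiClient().login()

if __name__ == "__main__":
    auth_config = asyncio.run(main())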

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (2)

362-366: Fix import path: mat3ra.prove → mat3ra.prode

The import still uses a non‑existent module mat3ra.prove; the correct package in the Mat3ra stack is mat3ra.prode (ProDe — property definitions), which you already import earlier for PropertyEnum. Leaving this as‑is will raise ModuleNotFoundError when running the visualization cell.

Apply this minimal fix:

-# Visual library that can visualize any property defined in Prode
-from mat3ra.prove import visualize_property
+# Visual library that can visualize any property defined in Prode
+from mat3ra.prode import visualize_property

To double‑check module availability in your environment, you can run:

#!/bin/bash
python - << 'PY'
import importlib
print("prode:", importlib.util.find_spec("mat3ra.prode") is not None)
print("prove:", importlib.util.find_spec("mat3ra.prove") is not None)
PY

379-395: Update notebook language metadata to match Python 3 kernel

The notebook kernelspec declares Python 3, but language_info still claims Python 2.7.6 (codemirror_mode.version: 2, pygments_lexer: ipython2, version: "2.7.6"). This inconsistency can confuse tools and editors and was already flagged earlier.

Align language_info with Python 3, e.g.:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.8.0"

(Any accurate Python 3.x version string you actually use is fine.)

🧹 Nitpick comments (2)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (2)

41-41: Variable redefinition and redundant reuse of ACCOUNT_ID and OWNER_ID.

Line 41 redefines ACCOUNT_ID via a re-import, and line 129 redefines OWNER_ID locally, then again at line 402. This pattern is confusing and error-prone. Centralize credential setup and reuse it consistently.

Extract credentials to a single setup cell early in the notebook and reuse throughout, rather than redefining in multiple cells.

Also applies to: 129-129, 402-402
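
A sketch of such a single setup cell (the names mirror the notebook's; values are read once from the environment and reused everywhere):

import os

# Single source of truth for credentials; later cells reuse these names
ACCOUNT_ID = os.getenv("ACCOUNT_ID")
AUTH_TOKEN = os.getenv("AUTH_TOKEN")
ORGANIZATION_ID = os.getenv("ORGANIZATION_ID", "")
OWNER_ID = ORGANIZATION_ID or ACCOUNT_ID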


1-543: Add documentation on notebook compatibility and prerequisites.

The notebook relies heavily on JupyterLite-specific features (micropip, data_from_host) but lacks clear documentation on execution environment requirements. Users attempting to run this in standalone Jupyter will encounter confusing errors.

Add a markdown cell at the beginning (after the title) clarifying:

  • Required environment: JupyterLite (running in Pyodide/browser) or standard Jupyter with mat3ra packages pre-installed.
  • Credentials: ACCOUNT_ID, AUTH_TOKEN, ORGANIZATION_ID must be provided via environment variables or JupyterLite's data_from_host.
  • Expected duration: Approximate runtime estimate for the workflow.
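
Beyond the markdown cell, the environment requirement can also be enforced with a small guard cell; a sketch, reusing the notebook's existing sys.platform == "emscripten" check:

import sys

# Fail fast outside JupyterLite/Pyodide, where micropip and data_from_host exist
if sys.platform != "emscripten":
    raise RuntimeError(
        "This notebook expects a JupyterLite (Pyodide) environment; "
        "in standard Jupyter, install the mat3ra packages and set credentials manually."
    )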
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f7ff60 and be40a18.

📒 Files selected for processing (2)
  • other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1 hunks)
  • other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb

5-5: await statement outside of a function

(F704)

other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

7-7: await statement outside of a function

(F704)


8-8: await statement outside of a function

(F704)


11-11: await statement outside of a function

(F704)


15-15: Possible hardcoded password assigned to: "AUTH_TOKEN"

(S105)


23-23: Undefined name data_from_host

(F821)


24-24: Undefined name data_from_host

(F821)


41-41: Redefinition of unused ACCOUNT_ID from line 14

(F811)


228-228: Undefined name display

(F821)

🔇 Additional comments (4)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (3)

7-11: Await statements in notebook context are flagged but acceptable in JupyterLite.

Lines 7–11 use await outside of a function, which Ruff flags as F704. However, in JupyterLite (Pyodide), notebook cells are inherently async-compatible, so this is valid. Standard Jupyter kernels will raise a SyntaxError if executed. Consider adding a documentation cell clarifying that this notebook requires JupyterLite.

Verify that this notebook is intended exclusively for JupyterLite environments and not for standard Jupyter installations.

Also applies to: 40-49


455-457: Variable scope issue: jobs may not be accessible in later cell.

Line 455 iterates over jobs, but jobs is defined in a separate cell at line 421. While this is valid in Jupyter (cells share scope), the dependency is implicit and fragile. If cells are reordered or the job creation cell fails, line 455 will raise NameError.

Verify that cell execution order is enforced. Consider adding an assertion at line 455 to validate that jobs is defined and non-empty.
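
A sketch of such an assertion, placed at the top of the iteration cell:

# Fail with a clear message if the job-creation cell has not been run yet
assert "jobs" in globals() and jobs, "Run the job-creation cell first: jobs is undefined or empty"
for job in jobs:
    ...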


522-522: Built-in display() function used without explicit import.

Line 522 calls display(), which is a Jupyter built-in available in the notebook kernel by default. Ruff flags this as F821 (undefined name) because static analysis cannot infer Jupyter globals. This is not a runtime error in notebooks but indicates the code will fail in linting/type-checking environments.

Either import display from IPython.display at the top of the notebook or add a lint-ignore comment. For consistency, if display is used, prefer explicit import over relying on Jupyter's injected globals.

+from IPython.display import display
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1)

69-229: Workflow construction and job lifecycle example looks solid

The material loading, workflow creation/mutation (relaxation subworkflow, model subtype changes, unit context updates), compute configuration, job creation, submission, and results retrieval form a coherent, end‑to‑end example. The cell structure and comments are clear and should be easy for users to follow interactively.

Also applies to: 247-301, 319-345

Comment on lines 421 to 425
"jobs = job_endpoints.create_by_ids(\n",
" materials=material, workflow_id=workflow[\"_id\"], project_id=project_id, prefix=JOB_NAME, owner_id=OWNER_ID\n",
")\n",
"\n",
"display_JSON(jobs)"

⚠️ Potential issue | 🔴 Critical

Undefined variable reference: workflow["_id"] does not exist.

Line 422 references workflow["_id"], but earlier code (line 333) stores the workflow in saved_workflow_data. The variable workflow at line 205 is a Workflow object (not a dict) from mat3ra.wode.workflows.Workflow.create(), so indexing with ["_id"] will fail.

Apply this diff:

-    materials=material, workflow_id=workflow["_id"], project_id=project_id, prefix=JOB_NAME, owner_id=OWNER_ID
+    materials=material, workflow_id=saved_workflow_data["_id"], project_id=project_id, prefix=JOB_NAME, owner_id=OWNER_ID
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"jobs = job_endpoints.create_by_ids(\n",
" materials=material, workflow_id=workflow[\"_id\"], project_id=project_id, prefix=JOB_NAME, owner_id=OWNER_ID\n",
")\n",
"\n",
"display_JSON(jobs)"
"jobs = job_endpoints.create_by_ids(\n",
" materials=material, workflow_id=saved_workflow_data[\"_id\"], project_id=project_id, prefix=JOB_NAME, owner_id=OWNER_ID\n",
")\n",
"\n",
"display_JSON(jobs)"
🤖 Prompt for AI Agents
In other/materials_designer/workflows/run_bandgap_workflow_min.ipynb around
lines 421 to 425, the code uses workflow["_id"] which is invalid because
`workflow` is a Workflow object; instead reference the saved workflow dict
created earlier (saved_workflow_data) or the Workflow object's id attribute.
Replace workflow["_id"] with saved_workflow_data["_id"] (or use workflow.id if
you prefer the object) when calling job_endpoints.create_by_ids, ensuring the
variable name matches the earlier assignment at line ~333 so the correct
workflow id is passed.

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (2)

67-84: Avoid hardcoded credential placeholders and harden data_from_host handling.

ACCOUNT_ID = "ACCOUNT_ID" / AUTH_TOKEN = "AUTH_TOKEN" / ORGANIZATION_ID = "ORGANIZATION_ID" both trip security tooling (S105) and encourage users to paste secrets directly into the notebook. In addition, data_from_host is used without a guard; if it is missing under emscripten, this will crash even before you can show a helpful message.

Recommend:

  • Drop hardcoded credential placeholders.
  • Always read credentials from the environment.
  • Guard access to data_from_host and provide a clear error if it is not present.

For example:

-ACCOUNT_ID = "ACCOUNT_ID"
-AUTH_TOKEN = "AUTH_TOKEN"
-ORGANIZATION_ID = "ORGANIZATION_ID"
-
-import os
-import sys
-
-if sys.platform == "emscripten":
-    # Only works if launched within the Platform, otherwise requires redirect to Login
-    apiConfig = data_from_host.get("apiConfig")
-    os.environ.update(data_from_host.get("environ", {}))
-    os.environ.update(
-        dict(
-            ACCOUNT_ID=apiConfig.get("accountId"),
-            AUTH_TOKEN=apiConfig.get("authToken"),
-            ORGANIZATION_ID=apiConfig.get("organizationId") or "",
-        )
-    )
+import os
+import sys
+
+if sys.platform == "emscripten":
+    # Only works if launched within the Platform, otherwise requires redirect to Login
+    if "data_from_host" in globals():
+        apiConfig = data_from_host.get("apiConfig", {})
+        os.environ.update(data_from_host.get("environ", {}))
+        os.environ.update(
+            dict(
+                ACCOUNT_ID=apiConfig.get("accountId"),
+                AUTH_TOKEN=apiConfig.get("authToken"),
+                ORGANIZATION_ID=apiConfig.get("organizationId") or "",
+            )
+        )
+    else:
+        raise RuntimeError("Running under emscripten without data_from_host available")
+
+ACCOUNT_ID = os.getenv("ACCOUNT_ID")
+AUTH_TOKEN = os.getenv("AUTH_TOKEN")
+ORGANIZATION_ID = os.getenv("ORGANIZATION_ID", "")
+
+if not all([ACCOUNT_ID, AUTH_TOKEN]):
+    raise ValueError(
+        "Missing required credentials. Ensure ACCOUNT_ID and AUTH_TOKEN are set in the environment."
+    )

This keeps secrets in the environment, makes the JupyterLite branch robust, and gives users a clear failure mode when credentials are missing.


420-431: Fix workflow id usage and job submission (workflow["_id"] and jobs["_id"]).

There are a couple of issues in the job creation/submission flow:

  1. workflow_id=workflow["_id"] – workflow is a Workflow object created via Workflow.create(...), not a dict, so workflow["_id"] is very likely invalid. You already have the saved workflow document in saved_workflow_data from WorkflowEndpoints.create, which should hold the correct _id. (This was flagged in an earlier review and still appears unresolved.)

  2. job_endpoints.submit(jobs["_id"]) – jobs is subsequently iterated (for job in jobs and list comprehension for job_ids), so treating it as a single dict here is inconsistent and will fail if it is a list.

  3. materials=material – given the method name create_by_ids, this probably expects material ids (or a list of them), not the local material object. You already computed material_id from saved_material, which is safer to pass through.

A possible end‑to‑end fix:

 job_endpoints = JobEndpoints(*ENDPOINT_ARGS)
 
-jobs = job_endpoints.create_by_ids(
-    materials=material,
-    workflow_id=workflow["_id"],
-    project_id=project_id,
-    prefix=JOB_NAME,
-    owner_id=OWNER_ID,
-    compute=compute
-)
-
-display_JSON(jobs)
+jobs = job_endpoints.create_by_ids(
+    materials=[material_id],  # or [saved_material["_id"]]
+    workflow_id=saved_workflow_data["_id"],
+    project_id=project_id,
+    prefix=JOB_NAME,
+    owner_id=OWNER_ID,
+    compute=compute,
+)
+
+job_ids = [job["_id"] for job in jobs]
+display_JSON(jobs)
-job_endpoints.submit(jobs["_id"])
+job_endpoints.submit(job_ids)
-from utils.generic import wait_for_jobs_to_finish
-
-job_ids = [job["_id"] for job in jobs]
-
-wait_for_jobs_to_finish(job_endpoints, job_ids, poll_interval=60)
+from utils.generic import wait_for_jobs_to_finish
+
+wait_for_jobs_to_finish(job_endpoints, job_ids, poll_interval=60)

This aligns the workflow id with the saved workflow document, treats jobs consistently as a list, and passes material ids explicitly. Please double‑check these against the actual exabyte_api_client.JobEndpoints.create_by_ids / submit signatures in your version.

Also applies to: 449-449, 459-463

🧹 Nitpick comments (2)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (2)

125-137: Clean up ACCOUNT_ID imports/redefinitions to avoid confusion.

ACCOUNT_ID is defined from environment (or placeholders) and then imported again from utils.settings in multiple cells. This creates F811 “redefinition” warnings and makes it unclear which value is actually used.

Given you already derive credentials from environment in the login cell, you can rely on that single source of truth and avoid re‑importing ACCOUNT_ID:

-from utils.settings import ENDPOINT_ARGS, ACCOUNT_ID
+from utils.settings import ENDPOINT_ARGS
...
-from exabyte_api_client import WorkflowEndpoints
-from utils.settings import ENDPOINT_ARGS, ACCOUNT_ID
+from exabyte_api_client import WorkflowEndpoints
+from utils.settings import ENDPOINT_ARGS
...
-OWNER_ID = os.getenv("ORGANIZATION_ID") or ACCOUNT_ID
+OWNER_ID = os.getenv("ORGANIZATION_ID") or ACCOUNT_ID

(Here ACCOUNT_ID refers to the environment-derived variable from the login cell.) This keeps credential flow simpler and removes the static-analysis noise.

Also applies to: 328-334, 403-407


551-567: Consider updating notebook language_info metadata to match Python 3.

The notebook metadata advertises kernelspec.name = "python3" but language_info.version = "2.7.6" and pygments_lexer = "ipython2". This mismatch can confuse tools and some notebook consumers.

If this notebook is intended for Python 3 only (which it appears to be), consider updating the language_info block (or letting Jupyter overwrite it by re‑saving from a modern Python 3 kernel) so version and lexer reflect the actual runtime.
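
If you prefer to patch the metadata in place rather than re-save from a modern kernel, a stdlib-only sketch (the path is this notebook's; the version string is filled from the running interpreter):

import json
import sys

path = "other/materials_designer/workflows/run_bandgap_workflow_min.ipynb"
with open(path) as f:
    nb = json.load(f)

# Align language_info with the Python 3 kernel that actually runs the notebook
info = nb["metadata"]["language_info"]
info["version"] = f"{sys.version_info.major}.{sys.version_info.minor}"
info["pygments_lexer"] = "ipython3"
info["codemirror_mode"]["version"] = 3

with open(path, "w") as f:
    json.dump(nb, f, indent=1)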

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be40a18 and 8564836.

📒 Files selected for processing (1)
  • other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

7-7: await statement outside of a function

(F704)


8-8: await statement outside of a function

(F704)


11-11: await statement outside of a function

(F704)


15-15: Possible hardcoded password assigned to: "AUTH_TOKEN"

(S105)


23-23: Undefined name data_from_host

(F821)


24-24: Undefined name data_from_host

(F821)


41-41: Redefinition of unused ACCOUNT_ID from line 14

(F811)


234-234: Undefined name display

(F821)

Comment on lines 481 to 506
"import re\n",
"from utils.generic import get_property_by_subworkflow_and_unit_indicies\n",
"from exabyte_api_client.endpoints.properties import PropertiesEndpoints\n",
"\n",
"property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)\n",
"\n",
"results = []\n",
"# Iterate in general, but we have only 1 material here\n",
"for material in [saved_material]:\n",
" job = next((job for job in jobs if job[\"_material\"][\"_id\"] == material[\"_id\"]))\n",
" final_structure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, \"final_structure\", job, 0, 0)[\n",
" \"data\"\n",
" ]\n",
" pressure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, \"pressure\", job, 0, 0)[\"data\"][\"value\"]\n",
" unit_flowchart_id = job[\"workflow\"][\"subworkflows\"][1][\"units\"][1][\"flowchartId\"]\n",
" band_gap_direct = property_endpoints.get_direct_band_gap(job[\"_id\"], unit_flowchart_id)\n",
" band_gap_indirect = property_endpoints.get_indirect_band_gap(job[\"_id\"], unit_flowchart_id)\n",
"\n",
" results.append(\n",
" {\n",
" \"material_id\": material[\"_id\"],\n",
" \"angle_deg\": re.search(r\"(\\d+(?:\\.\\d+)?) degrees\", material[\"name\"]).group(1),\n",
" \"band_gap_direct\": band_gap_direct,\n",
" \"band_gap_indirect\": band_gap_indirect,\n",
" }\n",
" )"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's locate the notebook file
find . -name "run_bandgap_workflow_min.ipynb" -type f

Repository: Exabyte-io/api-examples

Length of output: 134


🏁 Script executed:

# Read the notebook to check the code around lines 481-506 and 524-538
# Notebooks are JSON files, so we'll examine the structure
wc -l ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 138


🏁 Script executed:

# Read the notebook around the specified lines to understand its structure
sed -n '475,510p' ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 1719


🏁 Script executed:

# Check lines 524-538 for the second occurrence
sed -n '520,545p' ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 941


🏁 Script executed:

# Search for usage of final_structure and pressure variables in the notebook
rg "final_structure|pressure" ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 332


🏁 Script executed:

# Check the beginning of the notebook for imports and check for display import
head -100 ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb | grep -i "display\|import"

Repository: Exabyte-io/api-examples

Length of output: 328


🏁 Script executed:

# Let's verify the regex pattern is correct by checking the material names structure
# Search for how saved_material is used and what the "name" field looks like
grep -n "saved_material\|material\[\"name\"\]" ./other/materials_designer/workflows/run_bandgap_workflow_min.ipynb | head -20

Repository: Exabyte-io/api-examples

Length of output: 229


Convert angle_deg to float, remove unused variables, and import display.

Three code quality issues need addressing:

  • angle_deg is a string from regex and causes lexicographic sorting in sort_values("angle_deg") and incorrect x-axis ordering in plots. Convert to float for proper numeric ordering.

  • final_structure and pressure are computed but never used; remove them to eliminate unnecessary work.

  • display() on line 524 relies on the Jupyter global without explicit import, affecting static analysis clarity. Import it explicitly from IPython.display.

Suggested changes at lines 481-506:

+    match = re.search(r"(\d+(?:\.\d+)?) degrees", material["name"])
+    angle_deg = float(match.group(1)) if match else None
+
     results.append(
         {
             "material_id": material["_id"],
-            "angle_deg": re.search(r"(\d+(?:\.\d+)?) degrees", material["name"]).group(1),
+            "angle_deg": angle_deg,
             "band_gap_direct": band_gap_direct,
             "band_gap_indirect": band_gap_indirect,
         }
     )

Remove lines computing unused final_structure and pressure:

-    final_structure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "final_structure", job, 0, 0)[
-        "data"
-    ]
-    pressure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "pressure", job, 0, 0)["data"]["value"]
     unit_flowchart_id = job["workflow"]["subworkflows"][1]["units"][1]["flowchartId"]

Add import at lines 524-538:

 from matplotlib import pyplot as plt
 import pandas as pd
+from IPython.display import display
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"import re\n",
"from utils.generic import get_property_by_subworkflow_and_unit_indicies\n",
"from exabyte_api_client.endpoints.properties import PropertiesEndpoints\n",
"\n",
"property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)\n",
"\n",
"results = []\n",
"# Iterate in general, but we have only 1 material here\n",
"for material in [saved_material]:\n",
" job = next((job for job in jobs if job[\"_material\"][\"_id\"] == material[\"_id\"]))\n",
" final_structure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, \"final_structure\", job, 0, 0)[\n",
" \"data\"\n",
" ]\n",
" pressure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, \"pressure\", job, 0, 0)[\"data\"][\"value\"]\n",
" unit_flowchart_id = job[\"workflow\"][\"subworkflows\"][1][\"units\"][1][\"flowchartId\"]\n",
" band_gap_direct = property_endpoints.get_direct_band_gap(job[\"_id\"], unit_flowchart_id)\n",
" band_gap_indirect = property_endpoints.get_indirect_band_gap(job[\"_id\"], unit_flowchart_id)\n",
"\n",
" results.append(\n",
" {\n",
" \"material_id\": material[\"_id\"],\n",
" \"angle_deg\": re.search(r\"(\\d+(?:\\.\\d+)?) degrees\", material[\"name\"]).group(1),\n",
" \"band_gap_direct\": band_gap_direct,\n",
" \"band_gap_indirect\": band_gap_indirect,\n",
" }\n",
" )"
import re
from utils.generic import get_property_by_subworkflow_and_unit_indicies
from exabyte_api_client.endpoints.properties import PropertiesEndpoints

property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)

results = []
# Iterate in general, but we have only 1 material here
for material in [saved_material]:
    job = next((job for job in jobs if job["_material"]["_id"] == material["_id"]))
    unit_flowchart_id = job["workflow"]["subworkflows"][1]["units"][1]["flowchartId"]
    band_gap_direct = property_endpoints.get_direct_band_gap(job["_id"], unit_flowchart_id)
    band_gap_indirect = property_endpoints.get_indirect_band_gap(job["_id"], unit_flowchart_id)
    match = re.search(r"(\d+(?:\.\d+)?) degrees", material["name"])
    angle_deg = float(match.group(1)) if match else None
    results.append(
        {
            "material_id": material["_id"],
            "angle_deg": angle_deg,
            "band_gap_direct": band_gap_direct,
            "band_gap_indirect": band_gap_indirect,
        }
    )
🤖 Prompt for AI Agents
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb around lines
481-506: convert the regex-captured angle_deg to a float (wrap the regex group
with float(...) so downstream sort_values("angle_deg") treats it numerically),
remove the unused computations for final_structure and pressure (delete those
two get_property_by_subworkflow_and_unit_indicies calls and their variables),
and explicitly import display by adding from IPython.display import display at
the top of the notebook so display() is statically valid.

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (5)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (3)

67-86: Avoid hardcoded credentials and guard data_from_host in auth setup.

This cell still hardcodes ACCOUNT_ID, AUTH_TOKEN, and ORGANIZATION_ID, triggers Ruff’s S105, and assumes data_from_host exists, which will raise NameError outside the platform. It also conflicts with the later ACCOUNT_ID import from utils.settings.

Consider instead loading credentials exclusively from the environment and guarding data_from_host:

-ACCOUNT_ID = "ACCOUNT_ID"
-AUTH_TOKEN = "AUTH_TOKEN"
-ORGANIZATION_ID = "ORGANIZATION_ID"
-
-import os
-import sys
-import json
-
-if sys.platform == "emscripten":
-    # Only works if launched within the Platform, otherwise requires redirect to Login
-    apiConfig = data_from_host.get("apiConfig")
-    os.environ.update(data_from_host.get("environ", {}))
-    os.environ.update(
-        dict(
-            ACCOUNT_ID=apiConfig.get("accountId"),
-            AUTH_TOKEN=apiConfig.get("authToken"),
-            ORGANIZATION_ID=apiConfig.get("organizationId", ""),
-            CLUSTERS=json.dumps(apiConfig.get("clusters", [])),
-        )
-    )
+import os
+import sys
+import json
+
+# Load credentials from environment; do not hardcode secrets in the notebook.
+ACCOUNT_ID = os.getenv("ACCOUNT_ID")
+AUTH_TOKEN = os.getenv("AUTH_TOKEN")
+ORGANIZATION_ID = os.getenv("ORGANIZATION_ID", "")
+
+if sys.platform == "emscripten" and "data_from_host" in globals():
+    # Only works if launched within the Platform, otherwise requires redirect to Login
+    apiConfig = data_from_host.get("apiConfig", {})
+    os.environ.update(data_from_host.get("environ", {}))
+    os.environ.update(
+        dict(
+            ACCOUNT_ID=apiConfig.get("accountId"),
+            AUTH_TOKEN=apiConfig.get("authToken"),
+            ORGANIZATION_ID=apiConfig.get("organizationId", ""),
+            CLUSTERS=json.dumps(apiConfig.get("clusters", [])),
+        )
+    )
+
+if not all([ACCOUNT_ID, AUTH_TOKEN]):
+    raise ValueError("Missing ACCOUNT_ID and/or AUTH_TOKEN; set them in the environment before running this cell.")

This removes hardcoded secrets, avoids NameError on data_from_host, and sidesteps the ACCOUNT_ID redefinition warning.


429-439: Fix job creation, submission, and lookup: wrong workflow id and inconsistent jobs type.

The job lifecycle code is currently not executable:

  • workflow_id=workflow["_id"] is invalid: workflow is a Workflow object, not a dict. Use the id from saved_workflow_data (or workflow.id if available).
  • jobs = SimpleNamespace(**jobs) breaks outright: create_by_ids typically returns a list of job dicts, and ** unpacking requires a mapping, so this line raises TypeError. Even for a single job dict, the resulting namespace is neither subscriptable nor iterable (see the sketch after this list).
  • job_endpoints.submit(jobs["_id"]) expects jobs to be a dict, but above you convert it to a namespace.
  • job_ids = [job["_id"] for job in jobs] and the later lookup next((job for job in jobs if job._meterial._id == material._id)) both rely on jobs being an iterable of job objects, which is no longer true once wrapped in SimpleNamespace. The _meterial attribute is also a typo.
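
A quick standalone repro of the namespace problem (plain Python, no API calls; the job shapes are made up for illustration):

from types import SimpleNamespace

jobs = [{"_id": "job-1"}, {"_id": "job-2"}]  # list-of-dicts shape create_by_ids typically returns

try:
    SimpleNamespace(**jobs)  # ** unpacking needs a mapping, not a list
except TypeError as exc:
    print(exc)

ns = SimpleNamespace(**jobs[0])  # even for a single job dict...
print(ns._id)   # attribute access works
# ns["_id"]     # TypeError: 'SimpleNamespace' object is not subscriptable
# list(ns)      # TypeError: 'SimpleNamespace' object is not iterable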

A more consistent pattern is to keep jobs as the list returned by create_by_ids, operate on dicts, and use the saved workflow id:

-from types import SimpleNamespace
-job_endpoints = JobEndpoints(*ENDPOINT_ARGS)
-
-jobs = job_endpoints.create_by_ids(
-    materials=material,
-    workflow_id=workflow["_id"],
-    project_id=project_id,
-    prefix=JOB_NAME,
-    owner_id=OWNER_ID,
-    compute=compute
-)
-
-jobs = SimpleNamespace(**jobs)
-
-display_JSON(jobs)
+job_endpoints = JobEndpoints(*ENDPOINT_ARGS)
+
+jobs = job_endpoints.create_by_ids(
+    materials=[material_id],  # or material_id, depending on API contract
+    workflow_id=saved_workflow_data["_id"],
+    project_id=project_id,
+    prefix=JOB_NAME,
+    owner_id=OWNER_ID,
+    compute=compute,
+)
+
+display_JSON(jobs)
-job_endpoints.submit(jobs["_id"])
-
-job_ids = [job["_id"] for job in jobs]
+job_ids = [job["_id"] for job in jobs]
+job_endpoints.submit(job_ids)
-for material in [saved_material]:
-    job = next((job for job in jobs if job._meterial._id == material._id))
+for material in [saved_material]:
+    job = next(
+        job
+        for job in jobs
+        if job.get("material", {}).get("_id") == material.get("_id")
+    )

Please double‑check the exact create_by_ids contract (whether it expects material ids, materials, or a single item) and adjust the materials argument and submit signature accordingly.

Also applies to: 461-475, 501-503


503-507: Streamline result extraction: drop unused properties, make angle_deg numeric, and import display.

This block has a few correctness and cleanliness issues:

  • final_structure and pressure are fetched but never used; they just add API calls and noise.
  • angle_deg is currently taken as a string from the regex group; sort_values("angle_deg") will then sort lexicographically, not numerically (see the sketch after this list).
  • display(df) relies on the implicit IPython global, which Ruff flags as undefined; an explicit import improves clarity.
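
To see the sorting issue concretely (values made up; only pandas required):

import pandas as pd

df = pd.DataFrame({"angle_deg": ["10.5", "2.0", "30.0"]})
print(df.sort_values("angle_deg")["angle_deg"].tolist())
# ['10.5', '2.0', '30.0'] -- lexicographic: '1' < '2' < '3'

df["angle_deg"] = df["angle_deg"].astype(float)
print(df.sort_values("angle_deg")["angle_deg"].tolist())
# [2.0, 10.5, 30.0] -- numeric order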

Suggested changes:

-    final_structure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "final_structure", job, 0, 0)[
-        "data"
-    ]
-    pressure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "pressure", job, 0, 0)["data"]["value"]
-
-    unit_flowchart_id = Workflow(job["workflow"]).get_unit_by_name(name="pw_scf").flowchart_id
+    unit_flowchart_id = Workflow(job["workflow"]).get_unit_by_name(name="pw_scf").flowchart_id
@@
-    results.append(
-        {
-            "material_id": material._id,
-            "angle_deg": re.search(r"(\d+(?:\.\d+)?) degrees", material.name).group(1),
-            "band_gap_direct": band_gap_direct,
-            "band_gap_indirect": band_gap_indirect,
-        }
-    )
+    match = re.search(r"(\d+(?:\.\d+)?) degrees", material.get("name", ""))
+    angle_deg = float(match.group(1)) if match else None
+
+    results.append(
+        {
+            "material_id": material.get("_id"),
+            "angle_deg": angle_deg,
+            "band_gap_direct": band_gap_direct,
+            "band_gap_indirect": band_gap_indirect,
+        }
+    )

And in the plotting cell:

-from matplotlib import pyplot as plt
-import pandas as pd
+from matplotlib import pyplot as plt
+import pandas as pd
+from IPython.display import display

This keeps the results table minimal, ensures numeric sorting/axes, and satisfies the static analysis warning about display.

Also applies to: 512-519, 537-541

other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (2)

373-379: Fix visualization import: mat3ra.prove → mat3ra.prode.

The import from mat3ra.prove import visualize_property points to a non‑existent module; the ProDe package is mat3ra.prode, which you already use above for PropertyEnum. As written, this cell will raise ModuleNotFoundError.

-# Visual library that can visualize any property defined in Prode
-# Use similar to Wave
-from mat3ra.prove import visualize_property
+# Visual library that can visualize any property defined in Prode
+# Use similar to Wave
+from mat3ra.prode import visualize_property

This aligns the import with the actual ProDe package and the earlier PropertyEnum usage.


396-406: Update notebook metadata to consistently reflect Python 3.

As in the minimal notebook, the metadata here mixes a Python 3 kernelspec with Python 2 language info (ipython2, version "2.7.6"). For tools and exporters that read language_info, this inconsistency can be misleading.

Align it with Python 3:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
@@
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.10.0"

(Use the actual Python 3.x version you run in your environment.)
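
If hand-editing the JSON is error-prone, the same fix can be scripted. A sketch using nbformat (the path is illustrative; run it with the interpreter you actually use for these notebooks):

import platform

import nbformat

path = "other/materials_designer/workflows/run_bandgap_workflow_max.ipynb"  # illustrative
nb = nbformat.read(path, as_version=4)
nb.metadata["language_info"].update(
    {
        "codemirror_mode": {"name": "ipython", "version": 3},
        "pygments_lexer": "ipython3",
        "version": platform.python_version(),  # record the interpreter in use
    }
)
nbformat.write(nb, path)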

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8564836 and d3fd230.

📒 Files selected for processing (2)
  • other/materials_designer/workflows/run_bandgap_workflow_max.ipynb (1 hunks)
  • other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_max.ipynb

5-5: await statement outside of a function

(F704)

other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

7-7: await statement outside of a function

(F704)


8-8: await statement outside of a function

(F704)


11-11: await statement outside of a function

(F704)


15-15: Possible hardcoded password assigned to: "AUTH_TOKEN"

(S105)


24-24: Undefined name data_from_host

(F821)


25-25: Undefined name data_from_host

(F821)


43-43: Redefinition of unused ACCOUNT_ID from line 14

(F811)


248-248: Undefined name display

(F821)

Comment on lines +563 to +580
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and get its size
wc -l other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 136


🏁 Script executed:

# Extract the metadata section from the notebook (around lines 563-580)
sed -n '560,585p' other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

Repository: Exabyte-io/api-examples

Length of output: 514


🏁 Script executed:

# Get the full metadata section to understand context better
python3 << 'EOF'
import json
import sys

try:
    with open('other/materials_designer/workflows/run_bandgap_workflow_min.ipynb', 'r') as f:
        notebook = json.load(f)
    
    # Print metadata
    print("=== Notebook Metadata ===")
    metadata = notebook.get('metadata', {})
    print(json.dumps(metadata, indent=2))
    
    # Print a sample of cells to check Python version usage
    print("\n=== Sample Cell Content (first 3 cells) ===")
    cells = notebook.get('cells', [])
    for i, cell in enumerate(cells[:3]):
        if cell.get('cell_type') == 'code':
            source = cell.get('source', [])
            print(f"\nCell {i}:")
            content = ''.join(source) if isinstance(source, list) else source
            print(content[:200])
            
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)
EOF

Repository: Exabyte-io/api-examples

Length of output: 732


Align notebook metadata with Python 3 kernel.

The metadata declares a Python 3 kernel (kernelspec.name: "python3") but contains conflicting Python 2 settings:

  • language_info.codemirror_mode.version: 2 (should be 3)
  • language_info.pygments_lexer: "ipython2" (should be "ipython3")
  • language_info.version: "2.7.6" (should be 3.x)

Update to reflect the actual Python 3 kernel:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
@@
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.10.0"

(Adjust the version string to match the Python version you use to run these notebooks.)

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
}
🤖 Prompt for AI Agents
In other/materials_designer/workflows/run_bandgap_workflow_min.ipynb around
lines 563 to 580, the notebook metadata claims a Python 3 kernel but still
contains Python 2 settings; update language_info.codemirror_mode.version to 3,
set language_info.pygments_lexer to "ipython3", and change language_info.version
to a Python 3.x string that matches the interpreter you use for these notebooks
(e.g., "3.8.10" or your actual version) so the metadata consistently reflects
Python 3.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (5)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (5)

532-554: Make angle_deg numeric and remove unused final_structure/pressure calls

In this results loop:

  • final_structure and pressure are computed but never used, incurring extra API calls for no benefit.
  • angle_deg is captured as a regex string; sort_values("angle_deg") will then sort lexicographically, not numerically.

You can both simplify and fix the ordering by:

-    # Find the job that matches this material - fix typo: _meterial -> _material
-    job = next((job for job in jobs if job._material._id == material._id))
-    final_structure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "final_structure", job, 0, 0)[
-        "data"
-    ]
-    pressure = get_property_by_subworkflow_and_unit_indicies(property_endpoints, "pressure", job, 0, 0)["data"]["value"]
-
+    # Find the job that matches this material
+    job = next((job for job in jobs if job._material._id == material._id))
+
@@
-    results.append(
-        {
-            "material_id": material._id,
-            "angle_deg": re.search(r"(\d+(?:\.\d+)?) degrees", material.name).group(1),
-            "band_gap_direct": band_gap_direct,
-            "band_gap_indirect": band_gap_indirect,
-        }
-    )
+    match = re.search(r"(\d+(?:\.\d+)?) degrees", material.name)
+    angle_deg = float(match.group(1)) if match else None
+
+    results.append(
+        {
+            "material_id": material._id,
+            "angle_deg": angle_deg,
+            "band_gap_direct": band_gap_direct,
+            "band_gap_indirect": band_gap_indirect,
+        }
+    )

This avoids unused work and ensures correct numeric sorting/plotting by angle.


572-576: Explicitly import display for clarity and to satisfy static analysis

display(df) relies on the Jupyter global, which Ruff reports as undefined (F821). Importing it makes the dependency explicit and silences the warning:

-from matplotlib import pyplot as plt
-import pandas as pd
+from matplotlib import pyplot as plt
+import pandas as pd
+from IPython.display import display

The rest of the cell can remain as-is.


598-615: Align notebook metadata with Python 3 kernel

The kernelspec declares "python3", but language_info still has Python 2 values (codemirror_mode.version: 2, pygments_lexer: "ipython2", version: "2.7.6"). This can confuse tools that inspect metadata.

Update to match the actual Python 3 interpreter you use, e.g.:

   "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
   },
@@
   "name": "python",
   "nbconvert_exporter": "python",
-  "pygments_lexer": "ipython2",
-  "version": "2.7.6"
+  "pygments_lexer": "ipython3",
+  "version": "3.10.0"

(Adjust the version string to your real Python 3.x version.)


87-106: Remove hardcoded placeholder credentials and guard access to data_from_host

You still (a) hardcode placeholder secrets and (b) rely on a JupyterLite-specific global data_from_host that Ruff reports as undefined. Even if they’re placeholders, this is noisy and brittle.

A safer pattern is:

  • Drop the placeholder assignments entirely.
  • Only touch data_from_host when it actually exists in globals.
  • Validate that ACCOUNT_ID and AUTH_TOKEN are present in the environment after the JupyterLite branch (and let utils.settings read from env).

Example patch:

-ACCOUNT_ID = "ACCOUNT_ID"
-AUTH_TOKEN = "AUTH_TOKEN"
-ORGANIZATION_ID = "ORGANIZATION_ID"
-
-import os
-import sys
-import json
-
-if sys.platform == "emscripten":
-    # Only works if launched within the Platform, otherwise requires redirect to Login
-    apiConfig = data_from_host.get("apiConfig")
-    os.environ.update(data_from_host.get("environ", {}))
-    os.environ.update(
-        dict(
-            ACCOUNT_ID=apiConfig.get("accountId"),
-            AUTH_TOKEN=apiConfig.get("authToken"),
-            ORGANIZATION_ID=apiConfig.get("organizationId", ""),
-            CLUSTERS=json.dumps(apiConfig.get("clusters", [])),
-        )
-    )
+import os
+import sys
+import json
+
+if sys.platform == "emscripten" and "data_from_host" in globals():
+    # Only works if launched within the Platform, otherwise requires redirect to Login
+    apiConfig = data_from_host.get("apiConfig", {})
+    os.environ.update(data_from_host.get("environ", {}))
+    os.environ.update(
+        dict(
+            ACCOUNT_ID=apiConfig.get("accountId"),
+            AUTH_TOKEN=apiConfig.get("authToken"),
+            ORGANIZATION_ID=apiConfig.get("organizationId", ""),
+            CLUSTERS=json.dumps(apiConfig.get("clusters", [])),
+        )
+    )
+
+if not (os.getenv("ACCOUNT_ID") and os.getenv("AUTH_TOKEN")):
+    raise ValueError(
+        "Missing required credentials. Ensure ACCOUNT_ID and AUTH_TOKEN are set in environment."
+    )

This removes the S105 “hardcoded password” smell and the F821 on data_from_host, and makes the notebook’s credential story clearer.


452-461: workflow is a Workflow object, not a dict; workflow["_id"] is invalid

Here:

jobs_response = job_endpoints.create_by_ids(
    materials=material,
    workflow_id=workflow["_id"],
    ...
)

workflow was created via Workflow.create(workflow_config) and is not subscriptable. You already have the saved workflow document in saved_workflow_data, which should carry the _id. Use that instead:

 jobs_response = job_endpoints.create_by_ids(
-    materials=material,
-    workflow_id=workflow["_id"],
+    materials=material,
+    workflow_id=saved_workflow_data["_id"],
     project_id=project_id,
     prefix=JOB_NAME,
     owner_id=OWNER_ID,
     compute=compute
 )

Without this, job creation will fail with a TypeError.

🧹 Nitpick comments (1)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (1)

40-49: Top-level await is fine for Jupyter but will keep triggering Ruff F704

This pattern (await micropip.install(...) at top level) is idiomatic for JupyterLite/Pyodide but Ruff flags it as F704. Consider either configuring Ruff to ignore F704 for notebooks, or documenting that this cell is intended for a JupyterLite runtime so the lint warning can be safely ignored.
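
For the configuration route, a per-file ignore is enough. A sketch for pyproject.toml (table name per recent Ruff releases; verify against the docs for the Ruff version pinned here):

[tool.ruff.lint.per-file-ignores]
"*.ipynb" = ["F704"]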

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d3fd230 and d5cb17b.

📒 Files selected for processing (1)
  • other/materials_designer/workflows/run_bandgap_workflow_min.ipynb (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
other/materials_designer/workflows/run_bandgap_workflow_min.ipynb

7-7: await statement outside of a function

(F704)


8-8: await statement outside of a function

(F704)


11-11: await statement outside of a function

(F704)


31-31: Possible hardcoded password assigned to: "AUTH_TOKEN"

(S105)


40-40: Undefined name data_from_host

(F821)


41-41: Undefined name data_from_host

(F821)


59-59: Redefinition of unused ACCOUNT_ID from line 30

(F811)


278-278: Undefined name display

(F821)

Comment on lines +147 to +162
"from utils.settings import ENDPOINT_ARGS, ACCOUNT_ID\n",
"from utils.generic import display_JSON\n",
"from exabyte_api_client.endpoints.materials import MaterialEndpoints\n",
"\n",
"\n",
"\n",
"OWNER_ID = os.getenv(\"ORGANIZATION_ID\") or ACCOUNT_ID\n",
"endpoint = MaterialEndpoints(*ENDPOINT_ARGS)\n",
"\n",
"RAW_CONFIG = material.to_dict()\n",
"fields = [\"name\", \"lattice\", \"basis\"]\n",
"CONFIG = {key: RAW_CONFIG[key] for key in fields}\n",
"\n",
"saved_material_response = endpoint.create(CONFIG, OWNER_ID)\n",
"saved_material = dict_to_namespace(saved_material_response)"
]

⚠️ Potential issue | 🔴 Critical

saved_material is a SimpleNamespace, so .get() will fail

dict_to_namespace converts the response dict into a SimpleNamespace, but later you call:

material_id = saved_material.get("_id")

SimpleNamespace does not implement .get, so this will raise AttributeError at runtime.

Use attribute access (or fall back via getattr) instead:

-saved_material_response = endpoint.create(CONFIG, OWNER_ID)
-saved_material = dict_to_namespace(saved_material_response)
+saved_material_response = endpoint.create(CONFIG, OWNER_ID)
+saved_material = dict_to_namespace(saved_material_response)
@@
-material_id = saved_material.get("_id")
+material_id = getattr(saved_material, "_id", None)

Also applies to: 179-181
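
For completeness, getattr with a default behaves like dict.get on a namespace; a tiny standalone illustration (values made up):

from types import SimpleNamespace

saved_material = SimpleNamespace(_id="mat-123", name="MoS2, twisted")
print(getattr(saved_material, "_id", None))      # 'mat-123'
print(getattr(saved_material, "formula", None))  # None -- no AttributeError, like dict.get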

🤖 Prompt for AI Agents
In other/materials_designer/workflows/run_bandgap_workflow_min.ipynb around
lines 147 to 162 (and also adjust the similar usage at 179-181), saved_material
is a SimpleNamespace so using saved_material.get("_id") will raise
AttributeError; replace .get(...) calls with attribute access
(saved_material._id) or use getattr(saved_material, "_id", None) to safely
obtain the value (and do the same for any other keys accessed via .get on
saved_material).

@VsevolodX
Member Author

@coderabbitai pause

@coderabbitai

coderabbitai bot commented Dec 4, 2025

✅ Actions performed

Reviews paused.
