dev: add make notebook
#2528
Conversation
```makefile
	@echo "Cleanup complete."

notebook: ## Launch Jupyter Notebook
	${POETRY} run pip install jupyter
```
Should we move this into a poetry dependency group? Similar to the docs.
I agree, and that's great, but should we also spin up the resources as part of this effort? We could even inject a notebook that imports Spark Connect, etc. (which won't be installed on a fresh install? I think this is a dev dependency; we probably want to double-check there to avoid scaring newcomers to the project).
Bonus idea: what if …
We could do getting started as a notebook! https://py.iceberg.apache.org/#getting-started-with-pyiceberg |
Yeah, we could do that. The integration test setup gives us two different catalogs (REST and HMS).
@kevinjqliu I would keep it simple and go with the preferred catalog: REST :)
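For illustration, a minimal sketch of what a getting-started cell in such a notebook could look like, assuming the integration-test REST catalog is reachable on localhost:8181 and MinIO on localhost:9000 with admin/password credentials (the endpoint, ports, and credentials are assumptions, not something defined by this PR):

```python
# Sketch: connect to the REST catalog started by the integration test setup.
# The URI, S3 endpoint, and credentials below are assumptions; adjust them to
# whatever the docker-compose environment actually exposes.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "rest",
    **{
        "type": "rest",
        "uri": "http://localhost:8181",
        "s3.endpoint": "http://localhost:9000",
        "s3.access-key-id": "admin",
        "s3.secret-access-key": "password",
    },
)

# List namespaces to verify the connection works.
print(catalog.list_namespaces())
```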
Rationale for this change
Add `make notebook` to spin up a Jupyter notebook. With Spark Connect (#2491) and our testing setup, we can quickly spin up a local env and, from the Jupyter notebook, connect to Spark easily.
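As a rough sketch of the kind of cell the notebook could contain, the snippet below attaches to a Spark Connect server; the `sc://localhost:15002` endpoint is PySpark's default Spark Connect port and is an assumption about how the local env is wired up, not something this PR defines:

```python
# Sketch: attach to a local Spark Connect server from inside the notebook.
# The endpoint below is the default Spark Connect port and is an assumption.
from pyspark.sql import SparkSession

spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

# Quick sanity check that the remote session responds.
spark.sql("SELECT 1 AS ok").show()
```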
Are these changes tested?
Are there any user-facing changes?