OrthographicCamera and/or Hook for Perspective Angle #31

Open
chrisjsewell opened this issue Jun 28, 2017 · 18 comments
Comments

@chrisjsewell
Contributor

Hey Maarten, great package!
I've been playing around with it to visualize electron densities from quantum computations:
https://chrisjsewell.github.io/ipyvolume_si_ech3

However, the large hard-wired perspective angle is a bit of a pain for viewing trends along certain directions. So it would be great, ideally, to have the option of an OrthographicCamera, or at least to have the PerspectiveCamera VIEW_ANGLE (in figure.js) linked to a Figure trait.

On a related note, until the last few weeks I had no experience with JS. Apparently it is an interpreted language, but if I try changing VIEW_ANGLE in the source code (figure.js), like I would for Python code, nothing changes. Are there extra steps I need to take? Do I need to compile something?

FYI, for my work, other nice-to-haves would be:

  • creating line objects (to show atomic bonds)
    • I know I can do this in pythreejs, but it would be nice to have it integrated
  • creating isosurfaces
    • or, fancier still, a movable/rotatable plane in the figure, linked to a plot that shows the density/isocurves for that slice.
@maartenbreddels
Collaborator

Hi Chris,

Thanks for sharing that with me; I always enjoy seeing how other people use ipyvolume, and getting feedback is really useful for future development.

I'm assuming you installed it from GitHub using pip install -e ., right?
If you run npm install from the js dir, it will update ipyvolume/static/index.js; then you need to refresh the browser/notebook. There are better ways to set up a development mode, using webpack --watch; I need to document that some day. The index.js file contains the full source code, with all dependencies, so changing the original source without regenerating this file will indeed not do much.

Try hacking it a bit and see if you can make a new .view_angle trait. Copy what is done for eye_separation and see if you can follow the logic. Feel free to ask questions on gitter, and before you know it you'll have your first PR for ipyvolume :).
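
For reference, a minimal sketch of what the Python side of such a trait could look like, following the usual ipywidgets pattern for synced traits (the class layout and the matching handling in figure.js are assumptions for illustration, not ipyvolume's actual code):

import ipywidgets as widgets
import traitlets

class Figure(widgets.DOMWidget):
    # ... existing synced traits such as eye_separation ...
    # hypothetical trait: figure.js would read this instead of the hard-wired VIEW_ANGLE
    view_angle = traitlets.CFloat(45.0).tag(sync=True)  # in degrees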

Lines are now supported in master, but not yet well documented. The scatter class has a connected property to draw lines between the points; set it to True and you should see lines.
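
For example, a minimal sketch (assuming the master branch described above and the usual import ipyvolume.pylab as p3 alias):

import numpy as np
import ipyvolume.pylab as p3

# three points; with connected=True, line segments are drawn between consecutive points
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
z = np.array([0.0, 0.0, 1.0])
p3.figure()
s = p3.scatter(x, y, z, size=2)
s.connected = True
p3.show()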

Isosurfaces are really something I'd like to have, and threejs has support for them, so it shouldn't be too complex to add. I like the idea of the plane, and I'm happy to accept a PR for that 👍 😉. No, seriously: I don't think I can make that myself soon (other things are higher on the priority list), but I would be happy to help.
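
In the meantime, a possible workaround (not ipyvolume's own isosurface support, just a sketch assuming a recent scikit-image is installed) is to extract the isosurface with marching cubes and draw the resulting mesh with plot_trisurf:

import numpy as np
import ipyvolume.pylab as p3
from skimage import measure

# a hypothetical scalar field on a regular grid
X, Y, Z = np.mgrid[-2:2:40j, -2:2:40j, -2:2:40j]
density = np.exp(-(X**2 + Y**2 + Z**2))

# vertices and triangles of the isosurface at the chosen level
# note: verts are in grid-index coordinates; rescale to data coordinates if needed
verts, faces, _, _ = measure.marching_cubes(density, level=0.5)
p3.figure()
p3.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces)
p3.show()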

@chrisjsewell
Contributor Author

Cheers!

Yeah, I'll see what I can do.
For now, I've worked out how to change it from source :) (yes, you just need to refresh the web page after changes). Changing VIEW_ANGLE to 5 and this.camera.position.z to 20 improves the situation.

I installed it with pip, in a conda environment, and just to note for posterity:

  • To change the rendered image in the notebook, I had to change: /anaconda/pkgs/ipyvolume-0.3.2-py27_0/share/jupyter/nbextensions/ipyvolume/index.js
  • To change the rendered image that saves to html or png/jpeg, I had to change: /anaconda/envs/ipyvolume/lib/python2.7/site-packages/ipyvolume/static/index.js

One last, minor (maybe) bug I noted is that, for certain values of data_min, I get some rendering artifacts in the figure. You can see it in the new version of https://chrisjsewell.github.io/ipyvolume_si_ech3. If I literally just change the initial value from 1.6 to 1.61, it disappears.

@vidartf
Contributor

vidartf commented Jun 29, 2017

FYI: there is also a three.js camera called "Combined camera" or something similar, which allows for easy switching between orthographic and perspective cameras (while retaining the view direction/size).

@maartenbreddels
Collaborator

I didn't know that, thanks for sharing that!

@chrisjsewell
Contributor Author

Hey @maartenbreddels, I will get round to adding this orthographic camera eventually! But for now, I thought I'd share that I've included an output from your project in my ipypublish package: https://github.com/chrisjsewell/ipypublish#embedding-interactive-html (hope this is OK).

The idea is to have a notebook cell with a static image of the widget in the output, and a path to the embedded HTML in the metadata, so that (a) if you export to LaTeX/PDF you get the static image, or (b) if you export to HTML/reveal slides you get the HTML. It works well, and it's awesome to have presentations with the ipyvolume renderings in them :) https://chrisjsewell.github.io/ipypublish/Example.slides.html#/9

@maartenbreddels
Collaborator

(odd, my last reply got lost, 2nd try)

Awesome work again, and interesting work on ipypublish! I guess you know about pylab.savefig(); what I've been thinking about is that it should be possible for the 'screenshots' to be made in really high resolution (say 4x or 10x the resolution), for publication quality.

For the camera part, feel free to make a PR; it doesn't need to be merge-ready. I can already give you some feedback and we can iterate on it.

cheers,

Maarten

@chrisjsewell
Contributor Author

Yeah, sounds good :) I think I saw someone else mention it, but if you could add an option in savefig to output to a PIL.Image or similar, that would be helpful, ta.

@maartenbreddels
Collaborator

Really nice idea, I had to implement that directly! :) (see 13ebff6)
Now you can do

p3.figure()
mesh = p3.examples.klein_bottle(uv=True)

Next cell:

mesh.texture = p3.screenshot()

and repeat the last cell many times 😉

@maartenbreddels
Collaborator

maartenbreddels commented Jul 23, 2017

You can now specify the width and height of the screenshot or figure as well (e74164a):

mesh.texture = p3.screenshot(width=100, height=100) # low res texture
#... 
p3.savefig('fig1.png', width=1024*4, height=1024*4) # high res 4k plot

@chrisjsewell
Contributor Author

Perfect, thanks :) and I'm a big fan of this idea as well: jupyter-widgets/pythreejs#109

@satra
Contributor

satra commented Aug 14, 2017

@maartenbreddels @chrisjsewell - this is a really awesome extension. I started using it for brain meshes and immediately ran into the projection issue. I tried a quick hack to uncomment the to_orthographic line in figure.js, but that by itself didn't work. I'm coming at this mostly from a user standpoint, but if there were some pointers as to how it could be enabled, I'd be happy to go down that trail.

@chrisjsewell
Contributor Author

So far I have just put in the fov hook and am using that; if you set fig.camera_fov = 1, then it's basically like orthographic.
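
A minimal usage sketch (assuming a build that includes the camera_fov trait):

import ipyvolume.pylab as p3

fig = p3.figure()
# ... add your scatters/meshes here ...
fig.camera_fov = 1  # a very small field of view looks essentially orthographic
p3.show()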

@satra
Contributor

satra commented Aug 14, 2017

Great, that looks much better.

[screenshot attached]

Is there a way to:

  1. extend the bounding box size?
  2. turn axes off? For example, could I keep just x and y and turn z off, or turn all axes off?

@satra
Contributor

satra commented Aug 14, 2017

Figured out the answer to 1:

fig.xlim = (-100, 100)
fig.ylim = (-100, 100)
fig.zlim = (-100, 100)

@chrisjsewell
Contributor Author

For 2, the easiest way is:

fig.style = {
    'axes': {
        'color': 'black',
        'label': {'color': 'black'},
        'ticklabel': {'color': 'black'},
        'visible': False,
    },
    'background-color': 'white',
    'box': {'visible': False},
}

@satra
Contributor

satra commented Aug 14, 2017

Thank you, that's super useful.

@maartenbreddels
Collaborator

Hi Satrajit,

Thanks for the positive feedback! For the bounding box, see also http://ipyvolume.readthedocs.io/en/latest/api.html#ipyvolume.pylab.xyzlim, although I'm surprised it's not contained in the bounding box, as it should be automatically; maybe a bug?
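
The equivalent of the three separate limit assignments above, as a minimal sketch using that helper (xyzlim sets the x, y and z limits in one call):

import ipyvolume.pylab as p3

p3.xyzlim(-100, 100)  # set xlim, ylim and zlim to (-100, 100) on the current figure
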
Styling, although supported, is a bit rough; I will need to work on that and document it.
Thx @chrisjsewell for answering!

cheers,

Maarten

@fberlinger

Hey Chris,

Did you find any solution for an orthographic camera, specifically for setting the axes orientations of the camera? I am able to set the camera to different positions within the simulated environment; however, it is always oriented such that it looks toward the center of the simulated environment (cube). I would love to change the camera orientation so that it is aligned with the direction of a moving particle that I simulate. In other words, I would attach the camera to one of the many simulated particles (position) and get that particle's view (orientation) of what's going on.

Thanks for any ideas!
Florian
