Model Discretization API draft #13
base: master
Conversation
```python
convolved_model = ConvolvedModel(source, psf)

# should there be a Convolved1DModel and Convolved2DModel?
# should this be an extension of the CompoundModel class?
```
Probably not exactly -- the `CompoundModel` class is specifically for handling models that are composed out of multiple models combined with arithmetic and/or composition operators. That said, I can see why you would suggest that here: you're basically convolving the source model with some convolution kernel, so in that sense you're composing the source model with the convolution operator.
So the way Perry and I talked about doing this is that instead of making a `ConvolvedModel` that takes the source as an argument, as proposed here, there might be some `ConvolutionModel` that is instantiated just with the desired kernel; a source model would then be run through it like:

```python
convolution = ConvolutionModel(psf)
convolved_model = source | convolution
```

This would require that `ConvolutionModel` override how the pipe operator is implemented for models, since it has a slightly different sense here than in normal compound models. In normal compound models, `model_1 | model_2` means "pass the output of `model_1` in as the input to `model_2`". However, in this case the input to the convolution model is the entire model on the LHS (as well as its input, if that makes sense). This special case could definitely be supported, though, I think. Last we talked about this I said I would whip up a prototype but haven't had a chance. I can show you more exactly what I mean later if that sounds interesting.

However, if the scheme I'm proposing doesn't work out, something like the above example could be made to work too. I just like the `source | convolution` scheme since it fits in more nicely overall with the existing syntax for compound models :)
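To make the pipe-operator idea concrete, here is a minimal, self-contained sketch of how a `ConvolutionModel`-like wrapper could capture the whole LHS model via Python's reflected-or hook. The class names follow the discussion above, but the implementation details (plain callables, `np.convolve`, the `__ror__` hook) are illustrative assumptions, not the actual astropy design:

```python
import numpy as np

class ConvolutionModel:
    """Hypothetical wrapper that holds only a convolution kernel."""

    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)

    def __ror__(self, source):
        # `source | convolution`: capture the entire LHS model (not its
        # output), which is the special pipe semantics described above.
        return ConvolvedModel(source, self.kernel)

class ConvolvedModel:
    """Result of `source | convolution`: evaluates source, then convolves."""

    def __init__(self, source, kernel):
        self.source = source
        self.kernel = kernel

    def __call__(self, x):
        # Evaluate the source model on the grid, then convolve discretely.
        values = self.source(np.asarray(x, dtype=float))
        return np.convolve(values, self.kernel, mode="same")

# Usage: a Gaussian-shaped source piped through a small smoothing kernel.
source = lambda t: np.exp(-0.5 * t**2)
convolution = ConvolutionModel([0.25, 0.5, 0.25])
convolved_model = source | convolution
x = np.linspace(-3, 3, 61)
y = convolved_model(x)
```

Because a plain function has no `__or__`, Python falls back to `ConvolutionModel.__ror__`, which is what lets the wrapper see the whole model rather than its output.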
To me, what a convolution operator implies in this context is that the input model must be returned as a regularly sampled set of values (effectively a table look-up) so that a discrete convolution can be applied. For example, if the input model is purely analytic (or even a table look-up on a different or irregular grid), the model must be evaluated on a grid that the convolver can use. In effect the convolver has to turn the input into a lookup-table model and operate on that. The convolver needs info on the sampling interval and range it is expected to deal with, and uses that to evaluate the input. Is there any other way to deal with it?
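A short sketch of the "turn the input into a lookup table, then convolve" step described above. The function name and signature are assumptions for illustration; only numpy is used:

```python
import numpy as np

def convolve_analytic(model, kernel, x_min, x_max, dx):
    """Evaluate an analytic model on a regular grid (the lookup-table step),
    then apply a discrete convolution on that grid. Sketch only: names,
    signature, and edge handling are assumptions."""
    # the convolver needs the sampling interval and range up front
    x = np.arange(x_min, x_max + dx / 2, dx)
    sampled = model(x)                       # analytic model -> lookup table
    convolved = np.convolve(sampled, kernel, mode="same")
    return x, convolved

# Usage: convolve a Gaussian with a small smoothing kernel on a 0.1 grid.
x, y = convolve_analytic(lambda t: np.exp(-0.5 * t**2),
                         kernel=np.array([0.25, 0.5, 0.25]),
                         x_min=-3.0, x_max=3.0, dx=0.1)
```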
@patti, @adonath – Triggered by astropy/astropy#3786, I'd like to discuss some use cases / issues.

One thing that's not yet spelled out here is how fit statistics like Cash, where the model needs to be integrated over the bin / pixel, would be implemented in astropy.modeling. Sherpa does this by adding a bool `integrate` flag to models and by having integrated dataspace classes like `DataInt1D`.

A more general question is what a Fitter should be in astropy.modeling once we add more complex fit statistics and error estimators. I guess it's similar to the `Fit` object in Sherpa, which combines data, model, fit statistic and error estimator classes (see the simple example here)?

Concerning the API for fit statistics, I have to admit I don't really understand the function & class approach in astropy.modeling that's used e.g. for the Cash statistic in astropy/astropy#3786. I'll have to look at that in more detail. The function signature

```python
def my_stat_func(data, model, staterror=None, syserror=None, weight=None):
```

explicitly lists arguments like `staterror` and `syserror`.

So does anyone have thoughts (or even suggestions / solutions) on how integral models should interface with fit statistics and fitters in astropy.modeling?
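For reference, a Cash statistic in the `my_stat_func` signature quoted above could look like the sketch below. The signature comes from the comment; the rest is a standard textbook Cash statistic, not astropy's or Sherpa's actual implementation, and it assumes `model` already holds bin-integrated predicted counts (the very point the comment raises):

```python
import numpy as np

def cash_stat(data, model, staterror=None, syserror=None, weight=None):
    """Cash fit statistic: C = 2 * sum(model - data * ln(model)).

    `data` are observed counts per bin; `model` are predicted counts per
    bin, assumed already integrated over each bin. The error arguments are
    unused here (Cash is a pure Poisson statistic) but kept to match the
    generic fit-statistic signature discussed above."""
    data = np.asarray(data, dtype=float)
    model = np.asarray(model, dtype=float)
    # valid only for strictly positive predicted counts
    return 2.0 * np.sum(model - data * np.log(model))

# Usage: statistic for two bins of observed vs. predicted counts.
value = cash_stat([3, 5], [3.0, 5.0])
```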
```python
# the convolution method can be set via string, to choose between the
# astropy.convolution methods
```
I think it might be better if `ConvolvedModel` is an abstract base class, and the "method" is treated using distinct classes, e.g. `FFTConvolvedModel(source, psf)` and `StandardConvolvedModel(source, psf)`. Then instead of the user passing in a function, they subclass `ConvolvedModel`.
There could still be a factory function or class that creates an object of a different type depending on the `method` parameter, as a convenience for the user.
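Putting the two suggestions together, here is a minimal sketch of the abstract-base-class scheme plus a string-dispatching factory. The class names `ConvolvedModel`, `StandardConvolvedModel`, and `FFTConvolvedModel` come from the comments above; the factory name, method strings, and all implementation details are assumptions:

```python
import numpy as np
from abc import ABC, abstractmethod

class ConvolvedModel(ABC):
    """Abstract base: holds a source model and a PSF kernel; subclasses
    choose the convolution algorithm."""

    def __init__(self, source, psf):
        self.source = source
        self.psf = np.asarray(psf, dtype=float)

    @abstractmethod
    def _convolve(self, values):
        """Convolve sampled model values with the PSF kernel."""

    def __call__(self, x):
        return self._convolve(self.source(np.asarray(x, dtype=float)))

class StandardConvolvedModel(ConvolvedModel):
    def _convolve(self, values):
        # direct (sliding-window) discrete convolution
        return np.convolve(values, self.psf, mode="same")

class FFTConvolvedModel(ConvolvedModel):
    def _convolve(self, values):
        # circular FFT convolution with the kernel zero-padded and centred;
        # agrees with the direct method away from the grid edges
        n = len(values)
        kernel = np.zeros(n)
        k = len(self.psf)
        kernel[:k] = self.psf
        kernel = np.roll(kernel, -(k // 2))
        return np.real(np.fft.ifft(np.fft.fft(values) * np.fft.fft(kernel)))

def make_convolved_model(source, psf, method="standard"):
    """Convenience factory dispatching on a `method` string (hypothetical)."""
    classes = {"standard": StandardConvolvedModel, "fft": FFTConvolvedModel}
    if method not in classes:
        raise ValueError(f"unknown convolution method: {method!r}")
    return classes[method](source, psf)

# Usage: both methods applied to the same Gaussian source.
x = np.linspace(-3, 3, 61)
gauss = lambda t: np.exp(-0.5 * t**2)
direct = make_convolved_model(gauss, [0.25, 0.5, 0.25], method="standard")(x)
fft = make_convolved_model(gauss, [0.25, 0.5, 0.25], method="fft")(x)
```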
For now I don't see any advantage of one method over the other (function vs. subclassing).

Unfortunately, I don't think I can make the hangout today due to a prior engagement, although I might make it to the tail end.

@patti @embray – One thing that's missing here right now: a way to use

@cdeil – can you clarify why the fit statistic is particularly relevant to the discretization question? It seems like maybe the discussion is better had in astropy/astropy#3786?
```python
###############################
# Model Image Computation API #
```
One really important thing here: adding fake noise to make simulated images. This really, really should use quantities, i.e. `effective_area = 100 * u.m**2`, `exposure_time = 5 * u.minutes`, `quantum_efficiency = 0.9`, and then something like `sky_image.simulate(effective_area, exposure_time, quantum_efficiency)` would yield an "image" that's an NDData with units in `u.ADU`.
(using `np.random.poisson` to produce the simulated image)
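A plain-numpy sketch of the proposed simulation step. The `simulate` name and its arguments follow the comment above; in the actual proposal they would carry `astropy.units` Quantities (m**2, s, dimensionless) and return an NDData in ADU, but plain floats are used here to keep the sketch self-contained, with the intended units noted in comments:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(sky_image, effective_area, exposure_time, quantum_efficiency):
    """Hypothetical simulation step: expected counts per pixel plus
    Poisson noise, as suggested in the comments above.

    sky_image           : photon flux per pixel  [photons / m**2 / s]
    effective_area      : [m**2]
    exposure_time       : [s]
    quantum_efficiency  : dimensionless
    """
    expected = sky_image * effective_area * exposure_time * quantum_efficiency
    # Poisson noise produces the simulated (integer-count) image
    return rng.poisson(expected)

# Usage: a flat 4x4 sky image, 100 m**2 area, 5-minute exposure, QE 0.9.
sky_image = np.full((4, 4), 0.01)            # photons / m**2 / s
counts = simulate(sky_image, effective_area=100.0,
                  exposure_time=300.0, quantum_efficiency=0.9)
```

Each pixel expects 0.01 * 100 * 300 * 0.9 = 270 counts before noise.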
@eteq – What exactly are you proposing here?
If the model evaluate returns a quantity, it'll just be used, i.e. the output sky image will also be a quantity.
Other than that, I don't think we should introduce checks or attach units ourselves, because it would be restricting.
Maybe Joe wants to render an image in flux units and Jane in counts units and Jim in surface brightness units?
Hi @chris, I think you're hitting on some of my confusion about the evaluate/integrate overlap during our discussion last week.

When the convolved_model object is called or evaluated is when the source and psf models are discretized and convolved. If evaluating at a single point, then only the psf model will be discretized on the larger grid. If the psf (or source) model is fixed during a fit, it should only be discretized once and then stored. Evaluation at points that are not at pixel centers should probably be interpolated, or have the option to be interpolated.

The integration method is defined by the discretization mode ('center', 'oversample', etc.). I vote to start calling the "discretization mode" the "integration method", since that's what it really is. So the evaluate method of the convolved model object will call the discretization function, which will in turn call the integrate function (if the mode is not 'center'). But the integrate function/method should be general enough to stand alone for any model and limits.

Does that make sense, or am I talking in circles? I'll work on detailing the use case in the API to reflect this.

Patti
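The 'center' vs. 'oversample' distinction above can be sketched in a few lines. The mode names come from the comment; the function name, signature, and sub-sampling scheme are assumptions for illustration:

```python
import numpy as np

def discretize(model, x, mode="center", factor=10):
    """Sketch of the 'integration method' idea: 'center' samples the model
    at pixel centers; 'oversample' averages `factor` sub-samples across each
    pixel, approximating the bin integral divided by the bin width."""
    x = np.asarray(x, dtype=float)
    if mode == "center":
        return model(x)
    elif mode == "oversample":
        dx = x[1] - x[0]                     # assumes a regular grid
        # midpoints of `factor` sub-bins spanning each pixel
        offsets = (np.arange(factor) + 0.5) / factor - 0.5
        sub = x[:, None] + offsets[None, :] * dx
        return model(sub).mean(axis=1)
    raise ValueError(f"unknown mode: {mode!r}")

# Usage: for f(t) = t**2 on unit pixels, the pixel at 0 has center value 0
# but bin-averaged value ~1/12, so the two modes differ visibly.
x = np.array([-1.0, 0.0, 1.0])
center = discretize(lambda t: t**2, x, mode="center")
oversampled = discretize(lambda t: t**2, x, mode="oversample", factor=100)
```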
model_discretization_api.py
A GSoC 2015 project to implement efficient model image evaluation and rasterization functionality in astropy. Mentors are @adonath and @cdeil. This is a rough draft of the proposed API for community discussion and input. Interested persons should join the upcoming discussion via Google Hangout (doodle poll).