Model Fitting Bias
The previous pages’ results looked purely at analytic expectations for chromatic biases, based on propagating expected shifts in PSF second moments into ellipticity defined as an unweighted combination of deconvolved surface brightness profile second moments. This approach assumes that PSF deconvolution will succeed exactly, something difficult to achieve in practice. On this page, we investigate a second way to estimate the impact of chromatic PSF biases, based on simulated images of galaxies affected by chromatic PSFs.
Ring test
The alternative way to estimate shear bias induced by chromatic effects is to simulate sheared galaxy images using the true (galactic) SED, and then attempt to recover the simulated galactic ellipticities while pretending that the galaxy instead had a stellar SED. A ring test (Nakajima & Bernstein 2007) is a specific prescription for such a suite of simulations designed to rapidly converge to the correct (though biased) shear calibration parameters. The test gets its name from the arrangement of galaxy shapes used in the simulated images, which form a ring in ellipticity space centered at the origin (i.e. |e| is constant), before any shear is applied. By choosing intrinsic ellipticities that exactly average to zero, the results of the test converge faster than for randomly (though isotropically) chosen ellipticities that only average to zero statistically.
The general procedure can be implemented as follows:
1. Choose an input “true” reduced shear \(g\).
2. Choose a pre-sheared galaxy ellipticity \(e^\mathrm{int}\) on the ring.
3. Compute the sheared galaxy ellipticity \( e^\mathrm{obs} = \frac{e^\mathrm{int} + g}{1 + g^* e^\mathrm{int}} \).
4. Generate the target image using the chromatic PSF, assuming the SED is galactic.
5. Fit a model to this target image while pretending the SED is instead stellar (do this by varying the model parameters that define the galaxy, including, e.g., \(e^\mathrm{obs}\)), and record the best-fit ellipticity.
6. Repeat steps 3-5 using the opposite pre-sheared ellipticity: \( e^\mathrm{int} \rightarrow -e^\mathrm{int} \).
7. Repeat steps 2-6 for as many values around the ellipticity ring as desired. Typically these values are uniformly spaced.
8. Average together all recorded output ellipticity \(e^\mathrm{obs}\) values. This average is the chromatically biased estimate \(\hat{g}\) of the shear.
9. Repeat steps 1-8 to map out the relation \(g \rightarrow \hat{g}\). The slope and intercept of this relation are then \(1+m\) and \(c\), the multiplicative and additive shear calibration parameters.
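To make the procedure concrete, here is a minimal sketch of the loop in Python. This is not Chroma's implementation: simulate_target and fit_ellipticity are hypothetical stand-ins for the image generation (step 4) and model fitting (step 5), and ellipticities and shears are represented as complex numbers.

```python
import numpy as np

def ring_test(g, n_ring, simulate_target, fit_ellipticity, e_mag=0.3):
    """Carry out steps 2-8 for a single input reduced shear g (complex)."""
    e_sum = 0.0
    # n_ring points over half the ring; the opposite ellipticities cover the rest.
    for beta in np.linspace(0.0, np.pi, n_ring, endpoint=False):
        e_int = e_mag * np.exp(1j * beta)              # step 2: point on the ring
        for e in (e_int, -e_int):                      # step 6: opposite ellipticity
            e_obs = (e + g) / (1.0 + np.conj(g) * e)   # step 3: apply the shear
            image = simulate_target(e_obs)             # step 4: galactic-SED image
            e_sum += fit_ellipticity(image)            # step 5: stellar-SED fit
    return e_sum / (2 * n_ring)                        # step 8: biased estimate g-hat

# Step 9: map out g -> g-hat; the slope and intercept give 1 + m and c.
# gs = [-0.02, -0.01, 0.0, 0.01, 0.02]
# ghats = [ring_test(complex(g), 6, simulate_target, fit_ellipticity).real
#          for g in gs]
# slope, intercept = np.polyfit(gs, ghats, 1)
```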
Implementation in Chroma
In Chroma, the bin/simulations/one_ring_test.py script carries out exactly the above procedure. Let’s run this script with the default options (i.e. no command line options).
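Assuming we’re working from the root of the Chroma repository (an assumption about the layout), the invocation is simply:

```
$ python bin/simulations/one_ring_test.py
```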
The script first reports information about the simulation it’s going to run: things like the size of the pixels, the zenith angle used for DCR, and the morphological and spectral properties of the simulated galaxy and PSF. Most of these properties are configurable. The lines labeled “analytic” and “ring” are the actual output of the program. The “analytic” line shows the estimates for the shear calibration parameters from the analytic formulae, and the “ring” line shows the values obtained from the ring test. In this case there’s a slight disagreement between the two, though clearly both estimates are in the same ballpark. The last two lines show the nominal maximum values of \(m\) and \(c\) tolerable for the DES and LSST surveys. Note that these are the final tolerances, including many systematic effects beyond chromatic PSF biases. We should therefore strive to correct chromatic effects to well below these thresholds.
Let’s look at what options are configurable for this test:
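If the script exposes its options through a standard argparse interface (an assumption here), the full list can be printed with the usual help flag:

```
$ python bin/simulations/one_ring_test.py --help
```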
There are lots of possible variations to investigate! Here’s a particularly interesting one:
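One way to run it, hypothesizing a --sersic_n flag for the Sersic index (the diagnostic filenames below suggest n is configurable; check the --help output for the exact spelling):

```
$ python bin/simulations/one_ring_test.py --sersic_n 0.5
$ python bin/simulations/one_ring_test.py --sersic_n 4.0
```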
That’s interesting. Changing the Sersic index of the simulated galaxy had no effect on the analytic results. That makes sense, since the analytic formulae depend only on the galaxy’s second moment radius \( r^2_\mathrm{gal} = I_{xx} + I_{yy} \), not on the details of its surface brightness profile.
The one_ring_test.py script is specifically designed to keep \( r^2_\mathrm{gal} \) fixed as the Sersic index is varied (though this can be changed using the --gal_HLR or --gal_convFWHM keyword arguments). The results of the ring test changed quite a bit, however. What’s going on? We can investigate using the command line option --diagnostic and the script plot_ring_diagnostic.py:
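For instance (the --diagnostic flag is named above; the exact calling conventions of plot_ring_diagnostic.py, including its location alongside one_ring_test.py, are assumptions on our part):

```
$ python bin/simulations/one_ring_test.py --sersic_n 0.5 --diagnostic
$ python bin/simulations/one_ring_test.py --sersic_n 4.0 --diagnostic
$ python bin/simulations/plot_ring_diagnostic.py
```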
We’ve created a bunch of diagnostic plots for each step of the two ring tests we’ve run. Let’s look at n0.5-g1-0.0-g2-0.0-beta1.0471975512.png, which corresponds to steps 4 and 5 of the ring test procedure where \(g = 0.0\), \(n = 0.5\) (i.e. the galaxy is Gaussian), and \(\beta = 1.0471975512\) (i.e. \(\pi/3\), the angle setting the orientation of the galaxy’s major axis with respect to the x axis).
Let’s take this figure one panel at a time. The first panel at the top left is the target galaxy model, post-shear but pre-PSF-convolution. To the right is the effective PSF when the SED is galactic. To the right of that is the convolved image, and finally the rightmost panel is the pixelized and convolved image. The second row shows the same images but for the fitting step of the ring test procedure (step 5). That is, the effective PSF is for the stellar SED instead of the galactic SED, and the model is the result of the fitting procedure; the images are the convolved and pixelized versions of these. Not much difference, huh? That’s just an indication that the chromatic biases we’re studying are tiny, and by extension that the requirements for LSST are incredibly strict.
The final row shows the residuals between the target image generated assuming a galactic SED and the best fit obtained while pretending the SED is stellar. Note that the scale bar for this row is different from that of the upper rows, and somewhat unusual in that it displays both positive and negative values logarithmically (with a linear transition region in the middle).
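A scale like this can be produced with, for example, matplotlib’s SymLogNorm, which is logarithmic outside a small linear band around zero. Whether Chroma’s plotting script uses exactly this is our assumption, but the following toy sketch illustrates the idea:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import SymLogNorm

# Toy residual map with both signs, spanning several decades in magnitude.
rng = np.random.default_rng(0)
residual = rng.normal(size=(64, 64)) * 10.0 ** rng.uniform(-6, -3, size=(64, 64))

# Logarithmic for |value| > linthresh, linear within [-linthresh, linthresh].
norm = SymLogNorm(linthresh=1e-6, vmin=-1e-3, vmax=1e-3)
plt.imshow(residual, norm=norm, cmap="RdBu_r")
plt.colorbar(label="residual")
plt.show()
```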
The residuals in the image columns say something about how well we were able to deconvolve the stellar-SED effective PSF from the target image. If the deconvolution had proceeded perfectly, then the image columns in both the “truth” and “fit” rows would be exactly equal to each other, and there wouldn’t be any residuals in the image columns. The fact that the residuals there are not zero says that the deconvolution is only approximate.
Another way to think about this is that when the Sersic profile convolved with the galactic-SED PSF is deconvolved by the stellar-SED PSF, the result is no longer exactly describable as a Sersic profile; it’s something more complicated. Since we’re restricting the best-fit model to be a Sersic profile, we therefore make some small amount of error in this deconvolution.
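We can make this concrete with a short GalSim sketch. Here Gaussians of slightly different widths stand in for the galactic-SED and stellar-SED effective PSFs (the real chromatic PSFs differ in shape as well as size, but the logic is the same):

```python
import galsim

gal = galsim.Sersic(n=4.0, half_light_radius=0.5)  # intrinsic galaxy profile
psf_gal = galsim.Gaussian(fwhm=0.70)   # stand-in for the galactic-SED effective PSF
psf_star = galsim.Gaussian(fwhm=0.69)  # stand-in for the stellar-SED effective PSF

# What we actually observe: the galaxy convolved with the galactic-SED PSF.
observed = galsim.Convolve(gal, psf_gal)

# Deconvolving by the *wrong* (stellar-SED) PSF does not recover a Sersic;
# it leaves Sersic x (psf_gal / psf_star), something more complicated, so a
# fit restricted to Sersic models necessarily incurs a small model bias.
deconvolved = galsim.Convolve(observed, galsim.Deconvolve(psf_star))
```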
Let’s see what the \(n=4.0\) case looks like:
The residuals are clearly larger in this case. That is, the bias due to our choice of model is larger when \(n = 4.0\) than when \(n = 0.5\). This explains why the ring test results are further from the analytic results for \(n = 4.0\).