
Re: Coma, work to date




  [duplicates of this message may appear -- apologies if they do.  MWR]

  Herb asks "how can one characterize coma in the Mark IV images?"

  The short answer: pick one of the following two methods:

     - measure the fraction of all light from a star which falls
          within an aperture of some fixed size, as a function
          of the star's distance from the optical axis.  I did a very
          brief, inadequate version of this in TN 59

     - measure the length -- from the bright core to the most distant
          visible portion of the tail -- of the PSF, as a function
          of a star's distance from the optical axis.  Express this
          length in micrometers, noting that each pixel is 15 micrometers
          in size

In each case, express the distance from the optical axis in degrees.
Recall that each pixel is about 7.5 arcseconds in size.
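
  To make the first measurement concrete, here is a minimal sketch in
Python of how one might compute it.  The image array, sky value, and
aperture radii are assumptions for illustration, not part of any
existing Mark IV software:

    import numpy as np

    PIXEL_SCALE_ARCSEC = 7.5     # approximate sky coverage of one pixel
    PIXEL_SIZE_MICRONS = 15.0    # physical size of one pixel

    def encircled_fraction(image, xc, yc, r_aper=3.0, r_total=10.0, sky=0.0):
        """Fraction of a star's light (above sky) inside a fixed aperture
        of radius r_aper pixels, relative to the light inside r_total."""
        ny, nx = image.shape
        y, x = np.mgrid[0:ny, 0:nx]
        r = np.hypot(x - xc, y - yc)
        flux_aper  = np.sum(image[r <= r_aper]  - sky)
        flux_total = np.sum(image[r <= r_total] - sky)
        return flux_aper / flux_total

    def axis_distance_degrees(xc, yc, x0, y0):
        """Distance of a star at (xc, yc) from the optical axis (x0, y0),
        converted from pixels to degrees."""
        r_pix = np.hypot(xc - x0, yc - y0)
        return r_pix * PIXEL_SCALE_ARCSEC / 3600.0

    def tail_length_microns(tail_length_pixels):
        """For the second method: convert a measured PSF tail length
        from pixels to micrometers."""
        return tail_length_pixels * PIXEL_SIZE_MICRONS

Plotting encircled_fraction (or the tail length) against
axis_distance_degrees for a set of stars would give the relation
described above.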

  An optical expert could take either of the above measurements and
try to figure out where the implementation of the lenses departed
from their optical design.

  Let me ask: is there anyone reading this who does optical design 
for a living (or as a hobby)?  If so, could that person please contact 
Tom Droege, get a copy of the specs for the V-band lens, and verify 
that, _if_ built according to spec, the lenses would really produce 
the intended results?  See the Mark IV Lens design page:

            http://www.hitide.com/ATM/eoc22a.html

for the quality expected for the lenses.  We should have put 90% of
the light within a circle of radius 20 microns (= 1.3 pixels),
even at the edge of the field.  Instead, we seem to put about 70%
of the light within a circle of radius 45 microns (= 3 pixels).
Ugh.
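
  (For reference, the pixel equivalents above are just the radii divided
by the 15-micron pixel size; a quick check in Python:)

    pixel_size = 15.0              # microns per pixel
    print(20.0 / pixel_size)       # spec radius: about 1.3 pixels
    print(45.0 / pixel_size)       # measured radius: about 3 pixels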

  If person A can measure the amount of coma in either of the two
ways mentioned above, and person B knows how to run a lens ray-tracing
program, then maybe person B could fiddle with the positions of the
lens elements and discover some simple mistake which might yield the
current image quality.  And then we could move the one misplaced
lens element and fix the images.

  Well, I can dream ...


----------------------------------------------------------------------

  Glenn asks:

> While busy with the above activities I began to wonder if we might use
> Mike G's "FlatComp" program (suitably modified) to compensate for coma and
> other effects in Mark IV images?? It uses a number of zones in Declination
> (usually eight) to produce correction factors based on magnitudes of Tycho
> stars in each band to adjust the photometric magnitudes and thus (possibly)
> compensate for the change in PSF's across the image due to "coma" and other
> effects.

  Well, in theory, sure.  We would need to 

      a) make a very careful map of the error in photometry as a function
         of (row, col) position on the chip

      b) apply the correction to each detection

  But there are complications.  

      - making the map requires lots and lots and lots of stars in each
        little patch of the image.  I speculate that it would take many
        more stars than appear in any single image to measure the
        correction to the required accuracy ...

      - ... which means that one must combine the results from a number
        of images ...

      - ... which means that all the images must have identical PSFs
        (no differences in tracking/trailing)

  Here's what I mean: suppose we decide that splitting each full image
into 10 x 10 patches will be good enough -- that is, the correction for
coma within each patch is constant (or varies by less than our desired
precision).
How many stars are there within each patch to use for the calibration
of the patch's correction?

  Recall that in the 200-second V-band exposure described in TN 59,
I found about 1300 stars.  That means roughly 13 stars per patch.
Is that enough?  No!

      - most of those stars are fainter than the Tycho catalog 
             (and the Tycho catalog isn't all that great as
              a photometric reference)

      - most of those stars are also so faint that the signal-to-noise
             ratio in their measurements will be poor ... which means
             that the correction derived from them, even with a 
             perfect reference catalog, would be pretty bad
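
  A back-of-the-envelope estimate makes this concrete: the uncertainty
in the mean correction for a patch shrinks only as the square root of
the number of stars, so 13 poorly measured stars cannot pin it down
very well.  (The per-star scatter below is just an assumed number for
faint stars with poor signal-to-noise.)

    import math

    n_stars_per_patch = 1300 / (10 * 10)      # about 13 stars per patch
    per_star_scatter  = 0.15                  # mag, assumed for faint, noisy stars
    error_of_mean = per_star_scatter / math.sqrt(n_stars_per_patch)
    print(n_stars_per_patch, error_of_mean)   # ~13 stars, ~0.04 mag per patch

An uncertainty of several hundredths of a magnitude in every patch is
probably not good enough if the goal is photometry at the few-percent
level.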

  Sure, the number of stars in an image can vary from place to place in
the sky, and one could try to use longer exposures to detect more
stars, and one could try to divide each image into a smaller number
of patches ... but I suspect that we can't use a single image to 
do the trick.

  If one wants to try this method of correction, one might

       - use 10, or 20, or 50 images of different parts of the sky
             to derive the correction factors in each patch
             (averaging the corrections from each image; see the
             sketch after this list)

       - take 20, 30, or 50 images of THE SAME AREA of the sky,
             offsetting the camera by some amount between each
             exposure; IF the sky remains clear throughout, 
             and IF there are no significant changes in extinction,
             then one can use the change in each star's instrumental
             magnitude as it moves from one position to another
             on the chip to derive the correction. 

  The second of these two methods, sometimes called a "grid test", is 
used by a number of astronomers to determine the accuracy of photometry 
across their CCD images.  The Sloan Digital Sky Survey is using grid 
tests to look for sources of error in the results from the Monitor 
Telescope, for example.  This method is described in a paper by 
J. Manfroid, "Astronomy and Astrophysics Supplement Series," 
vol. 113, p. 587 (Nov 1995).  You can find a 
scanned copy of the paper on the ADS Abstract Server:

        http://adsabs.harvard.edu/abstract_service.html

  I've written code to combine the instrumental magnitudes of 
the same set of stars in a "grid test" set of exposures in order to
determine the correction factors across a CCD.  I plan to run this
sort of test -- a lot -- on the Mark IV which comes to RIT.
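
  The basic idea behind such code can be sketched in a few lines of
Python.  Model each detection as (true magnitude of the star) +
(zeropoint of the exposure) + (term for the patch of the chip on which
it fell), then solve the whole set of measurements at once by least
squares.  The sketch below is only an illustration of that idea, not
the actual program:

    import numpy as np

    def solve_grid_test(star_id, exposure_id, patch_id, inst_mag):
        """Solve
            inst_mag = true_mag[star] + zeropoint[exposure] + patch_term[patch]
        for all detections by linear least squares.  The inputs are
        parallel arrays, one entry per detection; the IDs are small
        integers starting at zero."""
        star_id     = np.asarray(star_id)
        exposure_id = np.asarray(exposure_id)
        patch_id    = np.asarray(patch_id)
        inst_mag    = np.asarray(inst_mag, dtype=float)

        nstar  = star_id.max() + 1
        nexp   = exposure_id.max() + 1
        npatch = patch_id.max() + 1
        nmeas  = len(inst_mag)

        A = np.zeros((nmeas + 2, nstar + nexp + npatch))
        A[np.arange(nmeas), star_id] = 1.0
        A[np.arange(nmeas), nstar + exposure_id] = 1.0
        A[np.arange(nmeas), nstar + nexp + patch_id] = 1.0
        # two constraints remove the degeneracies: force the mean zeropoint
        # and the mean patch term to zero
        A[nmeas,     nstar:nstar + nexp] = 1.0
        A[nmeas + 1, nstar + nexp:]      = 1.0
        b = np.concatenate([inst_mag, [0.0, 0.0]])

        solution, *_ = np.linalg.lstsq(A, b, rcond=None)
        return (solution[:nstar],                  # fitted stellar magnitudes
                solution[nstar:nstar + nexp],      # per-exposure zeropoints
                solution[nstar + nexp:])           # per-patch correction terms

In this simplified model, the per-patch terms play the role of the
correction factors discussed above.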

  Note that there are no guarantees that this correction factor will
remain constant over a period of weeks or months.  One should really
calculate the correction factors periodically -- weekly, perhaps --
to see if they change. 

  In an ideal case, one might find that the correction factor can
be expressed as a simple function -- a low-order polynomial, perhaps --
of the position of a star on the CCD.  That is, one might find that

       correction  =  A + B*(row - row0) + C*(row - row0)*(row - row0)
                        + D*(col - col0) + E*(col - col0)*(col - col0)

where (row0, col0) is the center of the optical axis, probably close to the
center of the chip.  If one knew the optical design of the lens, 
one could even calculate and predict this correction factor a priori.
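
  Fitting such a polynomial to a set of measured per-star corrections
is a small least-squares problem.  Here is a minimal Python sketch; the
measured corrections and the location of the optical axis are inputs
one would have to supply:

    import numpy as np

    def fit_coma_polynomial(rows, cols, corrections, row0, col0):
        """Fit  correction = A + B*dr + C*dr^2 + D*dc + E*dc^2,
        where dr = row - row0 and dc = col - col0, by least squares.
        Returns the coefficients (A, B, C, D, E)."""
        dr = np.asarray(rows, dtype=float) - row0
        dc = np.asarray(cols, dtype=float) - col0
        M = np.column_stack([np.ones_like(dr), dr, dr**2, dc, dc**2])
        coeffs, *_ = np.linalg.lstsq(M, np.asarray(corrections, dtype=float),
                                     rcond=None)
        return coeffs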
  
  Keep in mind the following points:

      - even if you can derive a correction factor as a function of 
           position across the chip, measurements near the center
           will always be better than those near the edges, 
           because the light is more concentrated there -- which means
           that errors in sky subtraction have a smaller effect

      - the correction factor will depend sensitively on the size and
           shape of aperture used to measure stars.  Thus, if two
           people measure the same image with different apertures,
           they will not (in general) come up with identical correction
           factors

      - the correction factor will _probably_ be quite different for
           each Mark IV camera, so the results from one will _probably_
           not be good enough to correct measurements from another.
           Hmmm.  We might get lucky and find out that all the lenses
           really are identical, but I doubt it ...

  So, even a perfectly determined correction factor will NOT yield results
as good as those from a system with no coma.  Still, this could help quite
a bit.


                                           Michael Richmond