
Re: Everett and Howell Paper



On Tue, 6 Nov 2001 17:33:16 -0700, "Creager, Robert S"
<CreagRS@LOUISVILLE.STORTEK.COM> wrote:

>
>See questions inserted inline...
>
>> -----Original Message-----
>> From: Andrew Bennett [mailto:andrew.bennett@ns.sympatico.ca]
>> Sent: Tuesday, November 06, 2001 5:17 PM
>> To: tass@listserv.wwa.com
>> Subject: Re: Everett and Howell Paper
>> 
>> Actually, it doesn't need to be anything like that good. To
>> use the same set of reference stars, one needs to do better
>> than 1/10 image width if one puts up with losing 20% round
>> the edges. Similarly, with flat field errors of order 0.1
>> mags across the field, one should be down at the 0.001 mag level
>> if one only has to correct inside 1/10 x 1/10 image: basically 
>> the sort of thing Arne has already done in the MK III post 
>> processing.
>
>I don't understand your 1/10 references, like 'better than 1/10 image
>width'.  Is this tracking?
If the positions of the images on the sky are all contained
within 1/10 of the image size in both coordinates, you can use
a common reference area of 9/10 x 9/10. You lose only about
20% of the area. The present system can manage this only
with a lot of luck. The declination drive is not very
repeatable, and the polar axis alignment is currently awful.
I'm not too sure how well Tom can come up with the same
right ascension from night to night, even though he has
got it down to seconds of arc from image to image on the
same night ... with luck.
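
To spell out the arithmetic (a sketch only, nothing to do
with any actual pipeline code):

    # If every pointing lands within a fraction f of the image
    # width of the nominal centre, in both axes, the relative
    # shift between any two images is at most f, so the area
    # common to all of them is at least (1 - f)**2.
    def common_area_fraction(f):
        return (1.0 - f) ** 2

    print(common_area_fraction(0.1))  # 0.81 - lose ~20% round the edges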

>  'correct 1/10 x 1/10 image'?  I'm sure this will
>spring out at me if you speak more slowly ;-)

With data set "TOM", we were trying to calibrate
sources positioned essentially at random within
the image. This requires a flat field correction
with small relative errors between pixels separated
by the entire width of the image. The errors of
this correction initially run 0.1 magnitudes or more.
One attempts to remove this error using comparison
sources by any of a multiplicity of methods - I
fitted polynomials; others use sub-areas. However
one does it, there are large residual errors. I beat
the errors down to 0.008 magnitudes rms in one case,
as judged by the noise floor of variation for bright
sources. This is not very good - entirely useless for
planet hunting.
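
For concreteness, the polynomial method might look something
like this minimal Python sketch (my illustration only, not the
code actually used on TOM; the function name is made up):

    import numpy as np

    def fit_flat_residual(x, y, dmag, order=2):
        """Fit a 2-D polynomial surface to the magnitude residuals
        dmag of comparison stars at chip positions (x, y), and
        return the fitted model at those positions.
        x, y, dmag: 1-D numpy arrays of equal length."""
        terms = [x**i * y**j
                 for i in range(order + 1)
                 for j in range(order + 1 - i)]
        A = np.column_stack(terms)
        coeffs, *_ = np.linalg.lstsq(A, dmag, rcond=None)
        return A @ coeffs

    # Subtracting the model from dmag leaves the residual scatter;
    # for bright stars that scatter (~0.008 mag rms at best here)
    # sets the noise floor mentioned above.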

If the locations of a star on the various images are
concentrated in a smaller area of the CCD, the problem
is very much easier. The 0.1 magnitudes uncertainty
quoted above is basically a quadratic term, so across
1/10 of the image it falls by a factor of (1/10)^2, to
0.001 magnitudes, and the higher order terms are
reduced by larger factors still. If with a good deal
of effort one can beat an initial 0.1 magnitudes down
to 0.008 magnitudes rms, one should easily be able to
beat 0.001 magnitudes into invisibility. In fact, the
residual errors will be dominated by the inaccuracies
in measuring the local pixel-to-pixel sensitivity
variations.
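
Spelled out (same arithmetic as above; only the 0.1 and 0.001
figures come from the text, the rest is illustration):

    # A polynomial term of order k with amplitude A across the
    # full image falls to A * f**k across a fraction f of it.
    A, f = 0.1, 0.1
    for k in (2, 3, 4):
        print(k, A * f ** k)  # 0.001, 0.0001, 0.00001 magnitudes
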
>
>> Our flat fielding is entirely inadequate. We need some
>> sort of dome flat. The last time this was discussed, we 
>> turned up (among others) a design that produced circularly 
>> symmetrical illumination. IIRC this was good to around 
>> 0.1% (a few digits in a 12-bit system) and could be built 
>> to fit the MK IV lens. So flat fielding is a solved problem 
>> for pixel to pixel calibration (but not across a 4 degree
>> image!). We should do it.
>
>I understand the circularly symmetrical illumination, and I get the gist of
>the pixel to pixel calibration, but how would this information be used
>practically?

A more precise flat field gives more precise stars.
The present method, even ignoring the presence of
stars, collects maybe 10,000 electrons per pixel per
image. With say 30 images averaged, the resulting
flat has rms errors of 1/root(300,000), or about
0.002 magnitudes. OK - not too bad. But in the real
world you are not averaging but taking a median, and
the sky is full of stars to be got rid of, not to
mention little wispy clouds. This results in a big
increase in the noise. Worse, because of the star
removal, the noise is correlated between nearby
pixels, so you don't gain the full root(n) when a
star covers n pixels - the improvement is less. This
is not compatible with mmag photometry.
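
The shot-noise arithmetic, as a sketch (the electron counts are
from the text; the 1.0857 is the usual factor converting a
fractional flux error to magnitudes):

    import math

    electrons_per_image = 10_000   # per pixel, per image
    n_images = 30
    total = electrons_per_image * n_images        # 300,000 e-
    frac_err = 1.0 / math.sqrt(total)             # shot noise
    print(1.0857 * frac_err)                      # ~0.002 mag

    # A median with star/cloud rejection is noisier than this
    # ideal mean, and leaves the noise correlated between
    # neighbouring pixels.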

E & H used 120 images with 40,000 electrons/pixel, giving
0.0005 magnitudes uncertainty in the resulting master
flat, per pixel. And the pixels are independent, so the
resulting star magnitude uncertainties are 0.0005/root(n)
for an n-pixel star. Overkill for the MK IV, but there is
no reason not to do it!
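
And the same sketch for the E & H numbers (the root(n) step
assumes independent per-pixel errors, as above; the n values
are just examples):

    import math

    total = 120 * 40_000                       # 4.8e6 e- per pixel
    per_pixel = 1.0857 / math.sqrt(total)      # ~0.0005 mag
    for n in (4, 9, 16):                       # n-pixel star images
        print(n, per_pixel / math.sqrt(n))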

>
>Thanks,
>Rob
>

Andrew Bennett, Avondale Vineyard