Re: Everett and Howell Paper
I need to reply to all of this, but my brain has been foggy the last few
days. This happens when you get older. I just wait until things clear up
and I can think.
OK, I can do a better job of flat fielding. There is that flat field box
that I built, sitting in my cabinet. I have not set out to use it because I
was worried that it would not be as "flat" as the night sky. But it would
probably be flat in detail, if not across the whole picture; i.e., it would
make a flat that was good for fixing pixel-to-pixel variations, but not so
good from one side of the frame to the other. Which is more important?
As for polar alignment, that will never be great with TOM1 because of the
slide-out platform. TOM2 and TOM3 should be better when I get done with them.
I think the current data sets meet the 1/10 frame requirement, possibly
better. I recall 150 or so pixels over 2.5 hours while taking 56 exposures.
OK, I just measured it: for last night's run it was 80 pixels E-W and 22
pixels N-S between exposures 1 and 56, with 2 hours and 19 minutes between
the start of the first and the last exposure.
I could improve the E-W drift just by adjusting the RA drive. Seems to me
that Michael was thinking of using the fact that there was some movement to
make flat images. I think I quit trying to adjust it further once it got
into the pixel-per-frame range. It works out to 80 pixels in 8,340 seconds,
so it drifts about a pixel during each exposure. I guess I can make it
better.
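A quick sanity check of the drift arithmetic (a sketch in Python; the only inputs are the numbers quoted above, and the 55 inter-exposure intervals assume the 56 exposures are evenly spaced):

```python
# Drift figures quoted above: 80 px E-W, 22 px N-S between the starts
# of exposures 1 and 56, taken 2 h 19 min apart.
elapsed_s = 2 * 3600 + 19 * 60           # 8340 s
drift_ew_px, drift_ns_px = 80.0, 22.0

rate_ew = drift_ew_px / elapsed_s        # px/s of E-W drift
s_per_px = 1.0 / rate_ew                 # seconds to drift one pixel
cadence = elapsed_s / 55                 # mean start-to-start spacing

print(f"E-W drift: one pixel every {s_per_px:.0f} s")   # ~104 s
print(f"cadence {cadence:.0f} s -> "
      f"{rate_ew * cadence:.1f} px per exposure cycle")
```

So at the mean cadence the field slides about 1.5 px between exposure starts, consistent with "about a pixel per exposure" for exposures somewhat shorter than the cadence.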
At 07:14 PM 11/6/01 -0800, you wrote:
>Thanks for the info. I'll chew on this information a while.
>One piece I'm still missing is your comment about the
>circularly symmetrical illumination, and how it solves
>the pixel to pixel variation, but not across the field
>of view. What I'm thinking this means is that the
>flatfield differences between pixels 100,100 and
>100,101 can accurately be determined, but because of
>the 'polar' nature of the flatfield image, you don't
>know the relation between 100,100 and 1200,1200,
>where the illumination is different (not truly flat).
>Is that correct? If so, how would you treat this
>'polar' composite flatfield differently from the 'ideal'
>flatfield, vs twilight, vs night?
>---- Andrew Bennett <email@example.com>
> > On Tue, 6 Nov 2001 17:33:16 -0700, "Creager, Robert"
> > <CreagRS@LOUISVILLE.STORTEK.COM> wrote:
> > >
> > >See questions inserted inline...
> > >
> > >> -----Original Message-----
> > >> From: Andrew Bennett
> > >> Sent: Tuesday, November 06, 2001 5:17 PM
> > >> To: firstname.lastname@example.org
> > >> Subject: Re: Everett and Howell Paper
> > >>
> > >> Actually, it doesn't need to be anything like
> > >> that good. To use the same set of reference
> > >> stars, one needs to register to better than 1/10
> > >> image width, if one puts up with losing the
> > >> edges. Similarly, with flat field errors of ~0.1
> > >> mags across the field, one should be down at the
> > >> 0.001 mag level if one only has to correct inside
> > >> a 1/10 x 1/10 image - the sort of thing Arne has
> > >> already done in the MK IV processing.
> > >
> > >I don't understand your 1/10 references, like
> > >'better than 1/10 image width'. Is this tracking?
> > If the positions of the images on the sky are all
> > within 1/10 image size in both coordinates, you can
> > use a common reference area 9/10 x 9/10. You lose
> > only 19% of the area. The present system can manage
> > this only with a lot of luck. The declination drive
> > is not repeatable. The polar axis alignment is
> > currently rough. I'm not too sure how well Tom can
> > come up with the same right ascension from night to
> > night even though he has got it down to seconds of
> > arc from image to image on the same night ... with
> > luck.
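The 9/10 x 9/10 overlap figure is just area arithmetic; a minimal sketch:

```python
# If every frame lands within 1/10 of the image width of a common
# pointing in both axes, a 9/10 x 9/10 sub-frame is present on all
# of them, so the common reference area is:
usable_fraction = 0.9 * 0.9
lost_fraction = 1.0 - usable_fraction
print(f"common area: {usable_fraction:.0%}, "
      f"lost at the edges: {lost_fraction:.0%}")
# -> common area: 81%, lost at the edges: 19%
```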
> > > 'correct 1/10 x 1/10 image'? I'm sure this will
> > >spring out at me if you speak more slowly ;-)
> > With data set "TOM", we were trying to calibrate
> > sources essentially randomly positioned within
> > the image. This requires a flat field correction
> > with small relative errors between pixels separated
> > by the entire width of the image. The errors of
> > this correction initially run 0.1 magnitudes or
> > more. One attempts to correct this error by using
> > comparison sources by any of a multiplicity of
> > methods - I fitted polynomials; others use sub
> > areas. However one does it, there are large
> > residual errors. I beat the errors down to 0.008
> > magnitudes rms in one case, as judged by the noise
> > floor of variation for bright sources. This is not
> > very good - entirely useless for planet hunting.
> > If the locations of a star on the various images are
> > concentrated in a smaller area of the CCD, the
> > correction is very much easier. The above quoted 0.1
> > magnitudes uncertainty is basically a quadratic term
> > which reduces to 0.001 magnitudes across 1/10 image,
> > and the higher order terms are reduced by larger
> > factors. If with a good deal of effort one can beat
> > an initial 0.1 magnitudes down to 0.008 magnitudes
> > rms, one should be easily able to beat 0.001
> > magnitudes into invisibility. In fact, the residual
> > errors will be dominated by the inaccuracies in
> > measuring the local pixel-to-pixel sensitivity
> > variations.
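The quadratic-term scaling quoted above can be checked directly; a sketch (the 0.1 mag full-field figure is the one from the paragraph, not a new measurement):

```python
# A flat-field error dominated by a quadratic term that amounts to
# 0.1 mag across the full image width scales with the square of the
# baseline, so across 1/10 of the image it becomes:
full_field_err_mag = 0.1
fraction_of_field = 1.0 / 10.0
patch_err_mag = full_field_err_mag * fraction_of_field ** 2
print(f"{patch_err_mag:.3f}")   # -> 0.001 (mag)
# A cubic term would fall by (1/10)**3 = 1/1000, and so on:
# higher orders shrink even faster, as stated above.
```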
> > >
> > >> >
> > >> Our flat fielding is entirely inadequate. We need
> > >> some sort of dome flat. The last time this was
> > >> investigated, a search turned up (among others) a
> > >> design that produced circularly symmetrical
> > >> illumination. IIRC this was good to about 0.1% (a
> > >> few digits in a 12-bit system) and could be made
> > >> to fit the MK IV lens. So flat fielding is a
> > >> practical option for pixel to pixel calibration
> > >> (but not across a whole image!) We should do it.
> > >
> > >I understand the circularly symmetrical
> > >illumination, and I get the gist of the pixel to
> > >pixel calibration, but how would this information
> > >be used practically?
> > A more precise flat field gives more precise stars.
> > The present method, even ignoring the presence of
> > stars, measures maybe 10,000 electrons/pixel per
> > image. With, say, 30 images averaged, the resulting
> > flat has rms errors of 1/root(300,000), or 0.002
> > magnitudes. OK - not too bad. But in the real world
> > you are not averaging but doing a median, and the
> > sky is full of stars to be got rid of, not to
> > mention little wispy clouds. This results in a big
> > increase in the noise.
> > Worse, because of the star removal, the noise is
> > correlated for nearby pixels, so you don't gain
> > root(n) when the star covers n pixels - the
> > improvement is less. This is not compatible with
> > mmag photometry.
> > E & H used 120 images with 40,000 electrons/pixel,
> > giving 0.0005 magnitudes uncertainty in the
> > resulting flat. Per pixel. And the pixels are
> > independent, so the resulting star magnitude
> > uncertainties are down by a further root(n) for an
> > n-pixel star. Overkill for the MK IV, but there is
> > no reason not to do it!
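Both photon-noise figures above (0.002 mag for ~30 sky flats at ~10,000 electrons/pixel, and the E & H 0.0005 mag) follow from Poisson statistics; a sketch, using the standard ~1.0857 mag-per-unit-fractional-error conversion (the text above uses the fractional error directly, which rounds to the same values):

```python
import math

MAG_PER_FRACTIONAL_ERROR = 2.5 / math.log(10)   # ~1.0857

def flat_rms_mag(electrons_per_pixel_per_image: float,
                 n_images: int) -> float:
    """Photon-noise floor, in magnitudes, of a flat built by
    averaging n_images frames; ignores the extra noise from median
    stacking, star rejection and clouds discussed above."""
    total_electrons = electrons_per_pixel_per_image * n_images
    return MAG_PER_FRACTIONAL_ERROR / math.sqrt(total_electrons)

print(round(flat_rms_mag(10_000, 30), 4))    # 0.002  (current sky flats)
print(round(flat_rms_mag(40_000, 120), 4))   # 0.0005 (Everett & Howell)
```

The per-star gain mentioned above comes for free: with independent pixel errors, averaging the flat over an n-pixel star image reduces its contribution by a further factor of root(n).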
> > >
> > >Thanks,
> > >Rob
> > >
> > Andrew Bennett, Avondale Vineyard