
Re: downloading data



I'm dumb, but isn't there an ensemble solution for each frame?  
Wouldn't the error in the fit be a suitable error for each star in  
that frame? (i.e. the zeropoint error).

Maybe that's what you said...?

Michael

On Sep 28, 2005, at 9:42 AM, Michael Richmond wrote:

>
>
>   Andrew wrote:
>
>
>> One degree boxes now mean Sec(Declination) degrees in
>> RA so my boxes now overlap except on the equator. Just
>> another minor change to my code to ignore the stuff I have
>> read twice.
>>
>
>   I'm the guilty party who asked Michael S. to make this
> change.  Most of the other tools I use to request data
> from big databases (SIMBAD, NED, etc.) use the convention
> that a one-degree box covers a square one-degree region
> on the sky.  The old TASS interface yielded a square
> region near the equator, but an increasingly elongated
> rectangle as one moved towards the poles.  I think
> that the new convention will make analysis simpler for
> most users.
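>
>   To make the geometry concrete, here is a toy sketch in Python
> (purely illustrative -- the function name and numbers are not
> from the actual query code) of how a one-degree box on the sky
> turns into an RA interval that grows as sec(Dec):
>
>      import math
>
>      def box_to_ra_dec_range(ra_center, dec_center, box_deg=1.0):
>          """Return (ra_min, ra_max, dec_min, dec_max) for a box
>          covering box_deg x box_deg degrees *on the sky*.  The
>          width in RA coordinate degrees grows as sec(Dec), so
>          the box stays square on the sky instead of shrinking
>          toward the poles."""
>          half = box_deg / 2.0
>          # sec(Dec) stretch of the RA interval
>          ra_half = half / math.cos(math.radians(dec_center))
>          return (ra_center - ra_half, ra_center + ra_half,
>                  dec_center - half, dec_center + half)
>
>      # a "one-degree" box at Dec = +60 spans 2 degrees of RA,
>      # which is why boxes requested on a 1-degree RA grid now
>      # overlap everywhere except near the equator
>      print(box_to_ra_dec_range(180.0, 60.0))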
>
>
>> But some things have not changed. One is still given the
>> incorrect error estimates:
>>
>
>   I'm guilty of not moving quickly on this point.  I have
> been wanting for a long time to provide two tables of
> information:
>
>      1) table of scatter of Mark IV database magnitudes from
>              their mean value, as function of magnitude,
>              something like this (I'm making up values here)
>
>                  V mag         7       8      9     10    ....
>                  V uncert    0.03    0.05    0.07  0.10   ....
>
>              These values would be simple internal estimates
>              of the consistency of measurements which have run
>              through the ordinary pipeline (a sketch of this
>              calculation appears just after this list).
>
>      2) a similar table, but this time using magnitudes based
>              on an ensemble solution of all the stars within
>              a region considerably smaller than a full 4x4 degree
>              frame; perhaps a 1x1 degree area.
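>
>   As a rough sketch of the sort of calculation I have in mind for
> the first table (Python again; the bin width and the cut on the
> number of measurements are just placeholders): group the
> measurements by star, compute each star's scatter about its own
> mean, and average those scatters in one-magnitude bins.
>
>      from collections import defaultdict
>      from statistics import mean, pstdev
>
>      def scatter_table(measurements, bin_width=1.0):
>          """measurements: list of (star_id, v_mag) pairs pulled
>          from the Mark IV database.  Returns a dictionary of
>          {bin center: typical scatter}, where the scatter is a
>          star's std deviation about its own mean V, averaged
>          over all stars whose mean falls in that bin."""
>          by_star = defaultdict(list)
>          for star_id, v in measurements:
>              by_star[star_id].append(v)
>
>          bins = defaultdict(list)
>          for mags in by_star.values():
>              if len(mags) < 3:   # need a few points for a scatter
>                  continue
>              m = mean(mags)
>              b = round(m / bin_width) * bin_width
>              bins[b].append(pstdev(mags))
>
>          return {b: mean(s) for b, s in sorted(bins.items())}
>
>   The real table would probably use a median rather than a mean
> in each bin, so that a few genuine variables don't inflate the
> numbers, but the idea is the same.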
>
>   We know that there are systematic errors in the photometry as
> a function of a star's position on the frame  -- see TN 97.
> The first table would be dominated by those errors.  It would
> warn users of the scatter they might reasonably expect for a
> random star of a given brightness in a random field.
>
>   The second table would require that a new set of measurements
> be added to the database -- magnitudes based on ensemble solutions.
> I believe that these values would have somewhat smaller scatter
> from the mean, and so would be more useful for some purposes.
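>
>   For anyone who hasn't met the idea, here is a very
> stripped-down sketch (a toy in Python, not the real ensemble
> program) of what an ensemble solution does: it solves for one
> zeropoint per frame and one mean magnitude per star at the same
> time, so a star's scatter is measured after the frame-to-frame
> offsets have been removed.
>
>      from collections import defaultdict
>      from statistics import mean
>
>      def ensemble_solve(obs, n_iter=20):
>          """obs: list of (star_id, frame_id, instrumental_mag).
>          Alternately solve for one zeropoint per frame and one
>          mean magnitude per star -- the simplest flavor of an
>          ensemble solution.  The arbitrary overall constant is
>          pinned by the starting guess of zero for every
>          zeropoint."""
>          zp = defaultdict(float)    # frame zeropoints
>          star = {}                  # mean magnitude per star
>          for _ in range(n_iter):
>              # given the zeropoints, average each star's
>              # corrected magnitudes
>              per_star = defaultdict(list)
>              for s, f, m in obs:
>                  per_star[s].append(m - zp[f])
>              star = {s: mean(v) for s, v in per_star.items()}
>              # given the star means, a frame's zeropoint is the
>              # mean offset of its measurements from those means
>              per_frame = defaultdict(list)
>              for s, f, m in obs:
>                  per_frame[f].append(m - star[s])
>              zp = defaultdict(float, {f: mean(v)
>                                       for f, v in per_frame.items()})
>          return star, zp
>
>   A real ensemble code sets this up as a proper least-squares
> problem and can also report a formal error for each frame's
> zeropoint and each star's mean; the loop above is only meant to
> show the structure of the solution.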
>
>
>   I've been grabbing data in 1x1-degree blocks from the Mark IV
> database for the past few weeks, and just yesterday I finally
> finished (there are a few spots where the file transfer had
> problems, so I'll have to try again, but that's a minor issue).
> I have a script which has succeeded in running the ensemble
> photometry code on a few sample blocks -- so I could start that
> going on the entire set.  It will take several days to several
> weeks, I _think_.  It should be possible to create a very
> quick and only slightly dirty table of the second sort
> by examining the ensemble output for many blocks.
>
>   So, I think we might be able to move forward in a few
> little steps:
>
>        a) very soon: someone calculates the simple scatter from mean
>              for a large set of stars in the Mark IV database
>              (called "Table 1" above), and posts it to the
>              TASS E-mail list.
>
>        b) less soon: Michael Sallman (sorry, Michael, I don't mean
>              to force this on you, but I'm not sure anyone else
>              can do it easily) inserts this table into the database,
>              and creates a slightly modified query form (as an option,
>              without destroying the current form) which will
>
>                  - grab magnitudes and positions and dates and so forth
>                           just as it currently does, but ...
>                  - use this new table to estimate the uncertainty
>                           to be reported with each magnitude measurement
>                           instead of using the current uncertainty values
>
>              This means, for example, that a star with V=10.34 would
>              yield a big list of individual measurements, as it
>              currently does, but that the "uncertainty" value attached
>              to each line would be identically 0.06 mag (or whatever
>              the Table indicates).  A toy sketch of this lookup
>              appears just after this list.
>
>              No, this isn't the "right" way to estimate uncertainty
>              for some purposes; but it is probably the method which
>              will help users interpret and use the data best,
>              especially casual users.
>
>
>        c) sometime in future: I finish the ensemble analysis for
>              all (or most) of the blocks in the northern sky.  I then
>              create a big list of ensemble mag and uncertainty for
>              all stars.  Actually, there could be one big list
>              of just "mean mag" and uncertainty, and a second
>              enormous list of individual magnitude measurements
>              for each star.
>
>        d) further in future: somehow, these two lists are
>              made available for query by users.  Probably the
>              first list would be done first, since it would
>              be only a few gigabytes (at a guess).  It could
>              be placed into the existing Mark IV database,
>              or this information could be served from another
>              site via a similar interface -- whatever is
>              simpler.
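>
>   To show what I mean in step (b) by "use this new table to
> estimate the uncertainty", here is a toy lookup in Python (the
> table values are invented, just like the ones above): the
> uncertainty reported on each line is simply read off the table
> according to the star's V magnitude.
>
>      import bisect
>
>      # made-up Table 1: V magnitude bins and typical scatter
>      TABLE_V      = [7, 8, 9, 10, 11, 12]
>      TABLE_UNCERT = [0.03, 0.05, 0.07, 0.10, 0.15, 0.25]
>
>      def table_uncertainty(v_mag):
>          """Return the tabulated scatter for the bin nearest
>          v_mag, clamped to the ends of the table."""
>          i = bisect.bisect_left(TABLE_V, v_mag)
>          i = min(max(i, 0), len(TABLE_V) - 1)
>          # pick the nearer of the two neighbouring bins
>          if i > 0 and (abs(v_mag - TABLE_V[i - 1])
>                        < abs(v_mag - TABLE_V[i])):
>              i -= 1
>          return TABLE_UNCERT[i]
>
>      # every measurement of a star with V = 10.34 gets the same
>      # tabulated value attached (0.10 mag with the numbers above)
>      print(table_uncertainty(10.34))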
>
>
>
>   Andrew, would these two tables, and a single "typical" uncertainty
> value based on them, satisfy some or all of your needs?
>
>                                           Michael
>
>
>