We now have a large data set to work with. I am trying various things to
improve the results. If anyone else wants to investigate other
improvements, just let me know what kind of data set you need for your
investigation.
For example, I had a theory that removing the outliers would improve the
photometry. That is, plot all the points for a star, draw some radius, and
exclude the points outside this radius. When I looked at the measurements,
there were just no significant outliers, so this did not improve the
photometry.
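The radius-cut idea can be sketched in a few lines. This is a minimal, hypothetical version (the real pipeline's data layout is not shown here): each star's detections are (x, y) points, and anything farther than the chosen radius from the centroid is dropped.

```python
import math

def clip_outliers(points, radius):
    """Keep only detections within `radius` of the centroid of a star's points.

    `points` is a list of (x, y) positions; `radius` is in the same units.
    A single bad detection far from the rest will fall outside the cut.
    (Hypothetical interface -- a sketch of the idea, not the pipeline's API.)
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= radius]
```

Note that a gross outlier also drags the centroid toward itself, so in practice one might iterate the clip or use the median position instead of the mean.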
Next, I thought it would be fun to try to find asteroids. I pulled all
the single hits out of the data set, then grouped together those that were
close to each other. With a very large data set known to contain many
asteroids, I found only one: 6 points in a straight line. I even tried a
magnitude cut. Another dud idea.
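The straight-line test on a chain of single hits can be sketched as follows. This is a guess at the geometry (names and tolerance are mine, not from the pipeline): a moving object at roughly constant rate should leave detections that are nearly collinear.

```python
def collinear(points, tol=1e-3):
    """Test whether a chain of (x, y) detections lies on a straight line.

    Uses the cross product of the first segment with the vector to each
    later point; for a real moving object the residual should be near zero.
    `tol` is the allowed perpendicular offset per unit of base-segment length.
    (Hypothetical interface -- a sketch of the idea, not the pipeline's API.)
    """
    (x0, y0), (x1, y1) = points[0], points[1]
    dx, dy = x1 - x0, y1 - y0
    base = (dx * dx + dy * dy) ** 0.5
    for x, y in points[2:]:
        # cross product = perpendicular distance times base segment length
        if abs(dx * (y - y0) - dy * (x - x0)) > tol * base:
            return False
    return True
```

A fuller version would also require the timestamps to be consistent with constant motion along the line, which would cut down chance alignments.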
I am presently reducing a data set where I first remove the frames with a
large sigma. The theory is that the photometry is not as good when there
are clouds. The early indications are that this throws the baby out with
the bath water: most of the variable stars went away in the
process of eliminating the clouds. I will study this a bit more. The software
finds lots of things that vary that are probably not variable stars. There
are several large classes that I will try to discuss in a technical
note. For example, I would expect contrails to be a problem. They should
confuse the "compare to all the catalog stars" strategy. One sees
measurements that bump up and down over 10 minutes or so. Could this be a
contrail? I plan to start finding examples, tracing them back to the
images, and trying to work out what is going on. Michael, how about a
comparison strategy that gives more weight to catalog stars closer to the
measured star? I think computation time is not yet a problem, so we could
afford a fancier computation.
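The distance-weighted comparison could look something like this. Everything here is an assumption about the interface (the coordinate units, the 1/d^2 weighting, and the names are mine): the zero-point offset is averaged over comparison stars with weights that fall off with distance from the measured star, so a contrail that hits one corner of the frame has less effect on stars far from it.

```python
def weighted_zero_point(target, comps):
    """Zero-point offset from comparison stars, weighted toward nearby ones.

    `target` is the (x, y) position of the measured star; `comps` is a list
    of (x, y, delta_mag), where delta_mag = catalog_mag - instrumental_mag
    for each comparison star. Weights fall off as 1 / (d^2 + eps), so
    close stars dominate. (Hypothetical sketch, not the pipeline's API.)
    """
    tx, ty = target
    eps = 1e-6  # guard against a comparison star at zero distance
    num = den = 0.0
    for x, y, dmag in comps:
        w = 1.0 / ((x - tx) ** 2 + (y - ty) ** 2 + eps)
        num += w * dmag
        den += w
    return num / den
```

The 1/d^2 falloff is just one choice; a gentler kernel (or a hard cutoff radius) trades locality against the number of stars contributing to each zero point.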
OK, I am a novice at these things, so I am sure I am trying things that are
known not to work. But that is the only way I can get familiar with the
problems. I will just keep working away trying to make things
better. Since I am spending a lot of time looking at the data, I am seeing
problems. The only question is whether I can find ways to improve the
problem measurements. I figure I have about six months of work to do
before the engineering set will be ready to go out in a paper. Possibly I
won't improve it, but I will understand it better.
Michael, once you get a pipeline that uses Arne's catalog, I am ready to
run a large data set through it.
At 10:00 AM 6/21/02 -0700, you wrote:
> >The idea was that hopefully an object in outburst might just have
> >sufficiently different photocentre/whatever to appear at a slightly
> >different position as an object at quiescence and maybe show up as two
> >very close objects.
> Not likely; if you look at the typical periods and typical
>distances of outbursting objects like CVs, you will find that the
>orbital separation is in the milliarcsec range.
> >My position is that there are very few twins on the scale of things. You
> >will have to work hard to convince me to do anything until Arne gets us a
> >catalog with good positions that we can use as a reference.
> This has been given to Michael R.; it will be interesting to see
>the result of a master catalog match approach instead of internal
>astrometry. Note that there are at least two ways of getting twins: poor
>astrometry giving positions outside of matching boxes (this can be
>improved), and physical doubles with separation around the matching
>radius dimension (difficult to do anything about). The latter ones are
>those that I mentioned in the FASTT datasets. You will get more twins
>as you move into the galactic plane.
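The two twin mechanisms described above both come down to what happens at the matching radius. A minimal sketch of the catalog-match step (names and data layout are hypothetical, not from Michael's pipeline): each detection is matched to the nearest catalog star within the radius, and anything unmatched spawns a new entry, which is how poor astrometry or a close double produces a twin.

```python
import math

def match_to_catalog(detections, catalog, radius):
    """Match each (x, y) detection to the nearest catalog star within `radius`.

    Returns (matched, twins): detections outside every matching box become
    new "twin" entries. Poor astrometry pushes a detection outside its box;
    a physical double near the matching radius flips between two boxes.
    (Hypothetical interface -- a sketch of the idea, not the pipeline's API.)
    """
    matched, twins = [], []
    for px, py in detections:
        best, best_d = None, radius
        for i, (cx, cy) in enumerate(catalog):
            d = math.hypot(px - cx, py - cy)
            if d <= best_d:
                best, best_d = i, d
        (matched if best is not None else twins).append((px, py))
    return matched, twins
```

Against a master catalog with good positions, the first failure mode shrinks with the matching radius; the second, as noted above, is hard to do anything about.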