Re: We need to be 1,000 times faster.
>Some problems with Chris' scheme, though in general
>I agree that distributed database processing is what
>will be needed. First, a site such as NOFS or Tom's
>6X will produce more data per year than is currently in Michael's
>entire database. If Michael has problems updating it, then
>at the same level each site will have problems in maintaining
>their own individual database.
A 6x or 10x factor can be absorbed just by doing it on today's
hardware rather than that of a few years ago. Also, you don't have
the problem of transporting the data. If we automate the data import
further, we can just let it run all the time. A rewrite could gain
us another 2x at least; 10x is within reach.
> Second, while
>FITS tables format is certainly a convenient,
>standard way to keep data available that might be
>needed for queries, such tables are not SQL-accessible
You're right; maybe I was not clear. I was proposing
FITS only for site-to-site data exchange. The first
step upon getting a FITS file would be to convert it to
something else. Yes, FITS is bulky, but it compresses
down to the size of its information content.
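A minimal sketch of that first "convert it to something else" step, assuming
the FITS table rows have already been parsed into Python tuples (the column
names and sample values here are hypothetical, and a real pipeline would use
an actual FITS reader to supply the rows):

```python
import sqlite3

# Hypothetical rows already extracted from a FITS binary table
# (star name, Julian date, magnitude).
rows = [
    ("SS Cyg", 2451545.0, 8.2),
    ("SS Cyg", 2451546.0, 8.5),
]

# Load them into an SQL-accessible store so they can be queried,
# which the raw FITS table cannot be.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (star TEXT, jd REAL, mag REAL)")
conn.executemany("INSERT INTO obs VALUES (?, ?, ?)", rows)

# The data is now queryable with ordinary SQL.
count = conn.execute(
    "SELECT COUNT(*) FROM obs WHERE star = 'SS Cyg'"
).fetchone()[0]
```

The point is only that FITS serves as the exchange format on the wire, and
each site reloads the tables into whatever SQL engine it prefers on receipt.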
>and I wonder how a centralized server could handle them
>in an efficient manner. The total database is not that
>much different in size from SDSS.
The total size of the data is not the issue; disks are
cheap. The bottleneck is in processing, so my plan was
to process the data before shipping it rather than after,
as we do now.
>Let's see what ARNE
>brings to the table. Perhaps the time really has come
>to get Microsoft, etc. involved.
When we give them our requirement that the software needs
to be available to anyone who wants to use it, they may
not be interested. I for one will not write code for
MS SQL Server if that means I have to buy a license for
it at, what, $10K each? They'd be on their own.