OK, here is my quandary...
I have an application that needs a database to collect a
LARGE amount of research data. This application arranges
up to 200 "bodies" in a 3 dimensional space and basically
collects "telemetry" from them as they move around.
This "telemetry" comprises somewhere in the neighborhood
of 1400 data elements. This application would "feed" this
to the db at 20Hz over the period of an hour or so.
So basically, every 20th of a second, up to 200 records
would be written, each record would contain approx. 1400
fields. A 1 hour "engagement" would write some 14.4
million records. (20 (writes per second) * 3600 (seconds
per hour) * 200 (bodies))
Given SQL Server 2000 has a 1024 column limit, I will need
to break this data down into several normalized tables,
not a problem, and it should probably be done anyway. Here's my
question: can SQL Server 2000 keep up, assuming the machine
and drives are robust enough to handle the I/O?
Any and all help or advice is greatly appreciated!
Michael
|||Thanks Joe! It does sound like it should work; unfortunately,
I don't have any idea of the fields or their sizes yet, so I'm
not sure of the row sizes.
I did bookmark the site and will take a look. Will your material
be published there after the conference?
>--Original Message--
>I can do ~7,000 single-row inserts/sec with one insert
>statement per stored procedure call on a 2x2.4GHz Xeon
>server. This is a small table, ~12 columns, avg size
>80 bytes/row.
>By consolidating multiple single-row insert statements per
>stored procedure call, I can insert >18K rows/sec.
>By inserting multiple rows per statement, >29K rows/sec.
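(A rough illustration of the multi-row technique described above, with
invented table and column names: on SQL Server 2000 a multi-row insert
can be written with UNION ALL, since the multi-row VALUES syntax does
not exist in that version.)

-- Hypothetical telemetry table; each SELECT supplies one row.
INSERT INTO TelemetrySample (BodyId, SampleTime, PosX, PosY, PosZ)
SELECT 1, '2003-08-13 12:00:00.050', 10.1, 22.4, 3.7
UNION ALL
SELECT 2, '2003-08-13 12:00:00.050', 11.8, 20.0, 4.2
UNION ALL
SELECT 3, '2003-08-13 12:00:00.050', 9.9, 21.5, 3.1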
>In your case, it sounds like you may have to insert into
>more than one table to accommodate 1400 fields. Otherwise,
>I do not expect the larger size per row in bytes or
>columns to significantly increase the cost of the insert.
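(Sketch only, since the real fields and types aren't known yet: one way
to live with the 1024-column limit is to split each sample across two or
more tables that share the same key, for example:)

-- Hypothetical layout; BodyId + SampleTime identify one sample of one body.
CREATE TABLE TelemetryA (
    BodyId     int      NOT NULL,
    SampleTime datetime NOT NULL,
    Field0001  float NULL,
    -- ... roughly the first half of the 1400 fields ...
    CONSTRAINT PK_TelemetryA PRIMARY KEY (BodyId, SampleTime)
)

CREATE TABLE TelemetryB (
    BodyId     int      NOT NULL,
    SampleTime datetime NOT NULL,
    Field0701  float NULL,
    -- ... the remaining fields ...
    CONSTRAINT PK_TelemetryB PRIMARY KEY (BodyId, SampleTime)
)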
>I will talk on this subject at the fall SQL Server
>Magazine Connections conference (Oct 12-15),
>www.sqlconnections.com , if you are interested in more
>material.
>
>>--Original Message--
>>OK, here is my quandary...
>>I have an application that needs a database to collect a
>>LARGE amount of research data. This application arranges
>>up to 200 "bodies" in a 3 dimensional space and basically
>>collects "telemetry" from them as they move around.
>>This "telemetry" comprises somewhere in the neighborhood
>>of 1400 data elements. This application would "feed" this
>>to the db at 20Hz over the period of an hour or so.
>>So basically, every 20th of a second, up to 200 records
>>would be written, each record would contain approx. 1400
>>fields. A 1 hour "engagement" would write some 14.4
>>million records. (20 (writes per second) * 3600 (seconds
>>per hour) * 200 (bodies))
>>Given SQL Server 2000 has a 1024 column limit, I will need
>>to break this data down into several normalized tables,
>>not a problem, and it should probably be done anyway. Here's my
>>question: can SQL Server 2000 keep up, assuming the machine
>>and drives are robust enough to handle the I/O?
>>Any and all help or advice is greatly appreciated!
>>Michael
>|||On Wed, 13 Aug 2003 12:38:40 -0700, "Michael" <Mykliv@.hotmail.com>
wrote:
>Given SQL Server 2000 has a 1024 column limit, I will need
>to break this data down into several normalized tables,
>not a problem, and it should probably be done anyway. Here's my
>question: can SQL Server 2000 keep up, assuming the machine
>and drives are robust enough to handle the I/O?
Probably not a good idea to try to write the stuff in real time.
Some version of buffering the data to an ASCII file, then
doing a bulk load of the records every minute or ten, is likely
a better architecture (there was a similar thread to this on the
newsgroups recently ... or was that you?).
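(A minimal sketch of that buffering approach; the file path, table name,
and delimiters are assumptions. The application appends rows to a
delimited text file, and a scheduled job loads each completed file:)

-- Hypothetical file and table names; run every minute or ten on completed files.
BULK INSERT TelemetrySample
FROM 'D:\feed\telemetry_batch_0001.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n',
    TABLOCK,             -- a table lock helps the load qualify for minimal logging
    BATCHSIZE = 50000    -- commit in chunks rather than one huge transaction
)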
Another data architecture option is to store sets of data as
blobs that SQL Server will fetch but does not really understand
the structure of. This is often appropriate for spatial and/or
realtime apps. It can drastically reduce the work SQL Server
does inserting and selecting data, but at the cost of some
flexibility and power, of course.
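(If the individual fields never need to be queried inside the database,
one sketch of the blob idea, again with invented names, is to pack each
20th-of-a-second frame into an application-defined binary buffer and
store it in an image column, SQL Server 2000's blob type:)

CREATE TABLE TelemetryFrame (
    FrameTime datetime NOT NULL PRIMARY KEY,
    BodyCount int      NOT NULL,
    Payload   image    NOT NULL  -- application-defined packing of ~1400 fields per body
)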
J.
|||Thanks Bill. The machine will most likely be a 2x3 GHz
Xeon with ample memory. I will look into the Tigi
accelerators you mentioned, thanks!
I did figure that I/O was going to be an issue, but I
wanted to be sure SQL could handle the load before I
started down that road.
I am, after all, looking at writing some 20 billion fields
an hour here (14.4 million records * 1400 fields).
Thanks for your advice, your help, and your input!
Michael.
>--Original Message--
>Michael,
>SQL will not be an issue here. That many transactions with
>that large amount of data should not use an inordinate
>amount of processor or memory. A dual Xeon processor with
>2 GB memory should be more than adequate. However, your
>bottleneck will be very significant - physical disk I/O.
>Make sure you are using advanced physical storage
>techniques. You'll need to stripe a lot of drives together
>and mirror them if you need the redundancy. Use
>partitioned views across different RAID partitions if you
>need to.
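(A rough sketch of a local partitioned view; the member table names, the
choice of BodyId as the partitioning column, and the filegroups FG_RAID1
and FG_RAID2 are all assumptions. Each member table sits on its own
filegroup/RAID set, and the CHECK constraints let SQL Server 2000 route
inserts made through the view to the right table:)

CREATE TABLE Telemetry_0_99 (
    BodyId     int NOT NULL CHECK (BodyId BETWEEN 0 AND 99),
    SampleTime datetime NOT NULL,
    -- ... telemetry fields ...
    CONSTRAINT PK_T0 PRIMARY KEY (BodyId, SampleTime)
) ON FG_RAID1

CREATE TABLE Telemetry_100_199 (
    BodyId     int NOT NULL CHECK (BodyId BETWEEN 100 AND 199),
    SampleTime datetime NOT NULL,
    -- ... telemetry fields ...
    CONSTRAINT PK_T1 PRIMARY KEY (BodyId, SampleTime)
) ON FG_RAID2
GO
CREATE VIEW TelemetryAll AS
SELECT * FROM Telemetry_0_99
UNION ALL
SELECT * FROM Telemetry_100_199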
>Also, a SAN technology with a large write buffer will help
>a lot. If you have the budget, look at solid state
>accelerators such as Tigi:
>http://www.tigicorp.com/
>Once you've determined your size requirements, you'll be
>able to calculate your disk I/O throughput requirements.
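(Rough numbers only, since the field sizes aren't known yet: 200 bodies
* 20 Hz = 4,000 rows/sec. If the 1400 fields averaged 4 bytes each, that
is about 5.6 KB per row, or roughly 22 MB/sec of raw data before index
and transaction-log overhead.)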
>One more thing: look into using MSMQ (farmed across
>multiple servers) to level the load going into SQL Server.
>Hope this helps.
>Bill
>"Michael" <Mykliv@.hotmail.com> wrote in message
>news:02e201c361d2$7f707a10$a501280a@.phx.gbl...
>> OK, here is my quandary...
>> I have an application that needs a database to collect a
>> LARGE amount of research data. This application arranges
>> up to 200 "bodies" in a 3 dimensional space and basically
>> collects "telemetry" from them as they move around.
>> This "telemetry" comprises somewhere in the neighborhood
>> of 1400 data elements. This application would "feed" this
>> to the db at 20Hz over the period of an hour or so.