I am using SQL 2000 Developer Edition. I create a database on Box 1 and run a simple program that loops through all the records in a table. It works fine. (The code is VBA in MS Access, accessing the SQL table via ADO, if that matters.) I then detach the database and copy the data file, plus the log file, to Box 2.
I attach the database on Box 2 and run the same code to loop through all the records in the table. It fails with:
I/O error 998 (Invalid access to memory location) detected during read at offset 0x00000b8f1e0000 in file 'My DB File'
It does not always stop at the same point. When I check, it shows EOF to be true, even though a counter shows I am less than 30% of the way through the file.
Is there some form of utility to check the validity of the database? I ran a ShrinkDatabase on Box 1, then copied the file to Box 2 again. Same results.
Known differences:
- Hardware.
- OS. (Can that really matter?)
- Box 1 = Windows 2003 Server.
- Box 2 = Windows XP Pro SP1.
Both boxes run SQL 2000 Developer Edition SP3.
I'm pulling my hair out on this. Any help is greatly
appreciated.
Thank you.
Sounds like a hardware issue; I have seen similar problems just before a hard disk gives out. Have you run any disk-checking utilities against the problem machine? Have you checked the Event Viewer for operating-system and/or disk errors and warnings?
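It is also worth checking the database itself for corruption from the SQL Server side. A minimal sketch, assuming the attached database is named MyDB (a placeholder; substitute the real name):

```sql
-- Verify the allocation and structural integrity of every object
-- in the attached database. 'MyDB' is a placeholder name.
DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS
```

Any errors it reports on Box 2 (and not on Box 1) would point at the copied file or the hardware underneath it, rather than the client code.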
--
http://www.aspfaq.com/
(Reverse address to reply.)
"DB" <anonymous@.discussions.microsoft.com> wrote in message
news:180801c4d6fc$9cb189c0$a501280a@.phx.gbl...
> I am using SQL 2000 Dev edition. I create a DB on box
> 1. I run a simple program that loops through all records
> in a table. It works fine. (VBA Code in MS Access
> accessing the SQL table via ADO if it matters.) I then
> detach the database and copy the file, plus log file, to
> Box 2.
> I attach the database on Box 2 and run the same code to
> loop through all records in the table. It fails with an
> I/O error 998 (Invalid access to memory location)
> detected during read at offset 0x00000b8f1e0000 in
> file 'My DB File'
> It does not always stop at the same point. When I check,
> it shows EOF to be true - even though a counter shows I
> am less than 30% of the way through the file.
> Is there some form of utility to check the validity of
> the DB? I ran a ShrinkDatabase on box 1. Then copied
> the file to box 2 again. Same results.
> Known differences:
> - Hardware.
> - OS. (Can that really matter?)
> - Box1 = Win2003 Server.
> - Box2 = Win XP Pro SP1
> Both boxes are SQL 2000 Developers edition SP3.
> I'm pulling my hair out on this. Any help is greatly
> appreciated.
> Thank you.
>
Tuesday, March 27, 2012
Database Growth & Space Recovery Problem
I have done this experiment on one of the tables. There is a table called Build with an nvText field and a large number of records. I want to drop that column and recover the space. These are the results I got.
sp_spaceused 'Build' results (name, rows, reserved, data, index_size, unused):
1. Before deleting the nvText field:
   Build, 663211, 341440 KB, 339464 KB, 1944 KB, 32 KB
2. After deleting the nvText field:
   Build, 663211, 341440 KB, 339464 KB, 1944 KB, 32 KB
3. After running Shrink Database from Enterprise Manager:
   Build, 663211, 608280 KB, 604744 KB, 3456 KB, 80 KB
4. After running DBCC DBREINDEX ('Build', '', 70):
   Build, 663211, 124096 KB, 123392 KB, 712 KB, -8 KB
Can anyone please explain why, after step 3 (the shrink), the data column as well as index_size shows an increased figure, when logically it should be a reduced figure?
Regards,
Harcharan
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
One thing: you should be suspicious of the numbers, as you already are. sp_spaceused reports information from sysindexes, but sysindexes is NOT updated with every insert, update, or delete, so the numbers may not reflect the actual size.
For each sp_spaceused in your test, execute it like this:
exec sp_spaceused 'build', 'true'
The second parameter tells SQL Server to read through the table and update the estimates. It runs longer, but you will get true results. If the numbers still look funny, we'll try something else.
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
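The measure-then-reclaim sequence can be sketched end to end. Note that DBCC CLEANTABLE is my suggestion here, not something proposed in the thread: in SQL 2000, dropping a text/ntext column only marks it as dropped, and CLEANTABLE is what physically releases that space. 'MyDatabase' is a placeholder name; 'build' is the table from the post.

```sql
-- Get trustworthy numbers first: force a scan that corrects sysindexes.
EXEC sp_spaceused @objname = 'build', @updateusage = 'true'

-- Reclaim the space still held by the dropped nvText column.
-- ('MyDatabase' is a placeholder for the actual database name.)
DBCC CLEANTABLE ('MyDatabase', 'build')

-- Re-check the figures afterwards to see the real effect.
EXEC sp_spaceused @objname = 'build', @updateusage = 'true'
```

Running the shrink before reclaiming and reindexing tends to shuffle pages around and inflate fragmentation, which is consistent with the larger figures seen at step 3.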
"Harcharan Jassal" <hjjassal@.yahoo.com> wrote in message
news:uCtYWSBhEHA.3540@.TK2MSFTNGP10.phx.gbl...
> I have done this experiment on one of the tables. There is table called
> build havinf nvText Field with large no of records. I want to drop that
> column and recover space. These are the results I got.
> SP_SPACEUSED 'BUILD' Results
> name rows reserved data index_size unused
> 1. Before Deleting nvText Field
> Build 663211 341440 KB 339464 KB 1944 KB 32 KB
> 2. After Deleting nvText Field
> Build 663211 341440 KB 339464 KB 1944 KB 32 KB
> 3. After Executing the Shrink Database from Enterprise Manager.
> Build 663211 608280 KB 604744 KB 3456 KB 80 K
> 4. After Executing DBCC DBReindex (build,'',70)
> Build 663211 124096 KB 123392 KB 712 KB -8 KB
> Can anyone please explain me after executing step 3 i.e shrink data
> column as well as index_size shows an increased figure whereas logically
> it should be a reduced figure.
> Regards,
> Harcharan
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
Database Grows Abnormally
There are 350 tables in the database, and each hour 60 records are appended to each table. There is a nonclustered index on each table, but the database size grows abnormally (fragmentation). I exported and imported the database and the size was normal, but after inserting records again the size grows abnormally.
This DB is used in a real-time system, so downtime is impossible. When I used a clustered index, insertion was very slow, so I used a nonclustered index.
I used DBCC DBREINDEX and DBCC INDEXDEFRAG without any benefit for defragmenting.
Thanks to all.
Message posted via http://www.sqlmonster.com
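Before reindexing or defragmenting again, it helps to measure how fragmented an index actually is. A sketch using DBCC SHOWCONTIG; the table and index names are placeholders, not from the post:

```sql
-- Report fragmentation statistics (scan density, logical fragmentation)
-- for one index. 'MyTable' and 'MyIndex' are placeholder names.
DBCC SHOWCONTIG ('MyTable', 'MyIndex')
```

If scan density is already high, fragmentation is unlikely to explain the growth, which points toward file auto-growth instead.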
Is it possible the database is just auto-growing to accommodate the newly inserted data? I don't think fragmentation would itself be the cause of abnormal growth. Is it just the data file size that's growing, and not really the underlying data in the file(s)?
David Gugick
Imceda Software
www.imceda.com
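The question above, whether the files are growing rather than the data, can be checked directly. A sketch, run in the database in question:

```sql
-- Allocated size of each file; sysfiles reports size in 8 KB pages.
SELECT name, size * 8 AS file_size_kb
FROM sysfiles

-- Database-level totals: compare database_size with unallocated space.
EXEC sp_spaceused @updateusage = 'true'
```

A large "unallocated space" figure next to a stable data total would confirm auto-growth (or an oversized growth increment) rather than growing data.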
Friday, February 24, 2012
Database Design Assistance
I have 2 tables that will be created from flat files and recreated on a
daily basis with 300,000+ records and growing daily (tables will be
truncated and re-loaded via a batch process because we do not "own" the
data). I would like to extract the address information from these
tables and create a simple database design that will be relational and
efficient (there are additional Customer and Account fields but these
are the fields currently of interest).
Customer [table name]
TaxID [PK]
Addr1
Addr2
City
State
Zip
Country
MAddr1
MAddr2
MCity
MState
MZip
MCountry
Account [table name]
AccountNumber [PK]
TaxID
SecTaxID
Addr1
Addr2
City
State
Zip
Country
Every customer in the Customer table will have 2 addresses per TaxID, Mailing and Physical, even if they are the same. Every AccountNumber will have exactly 1 address (Mailing) associated with it. So if you are a customer with 2 accounts, you would have 1 record in the Customer table (with both the address and the mailing address populated) and 2 records in the Account table. All 4 addresses could be the same or unique.
Mailings will go out EITHER at the Account level OR at the Customer level, so I'm not sure whether it is beneficial to combine the addresses into one table, or to just create some sort of Account address table and another Customer address table; but then I get one-to-one relationships and I'm not sure how beneficial that is. This is how the data will be received from the vendor and there isn't any leeway there. Any assistance that could be provided would be greatly appreciated.

Hi
If you do not store all the addresses as received, you will probably end up with a data-cleansing exercise to make sure that you are not duplicating the same address input slightly differently. Moving the address out to another table will be a good idea if you have occasions to retrieve the customer/account information without the address details. I would start with a normalised design, and if performance is being severely compromised you can then de-normalise.
John
"jasonl22@.yahoo.com" wrote:
> I have 2 tables that will be created from flat files and recreated on a
> daily basis with 300,000+ records and growing daily (tables will be
> truncated and re-loaded via a batch process because we do not "own" the
> data). I would like to extract the address information from these
> tables and create a simple database design that will be relational and
> efficient (there are additional Customer and Account fields but these
> are the fields currently of interest).
> Customer [table name]
> TaxID [PK]
> Addr1
> Addr2
> City
> State
> Zip
> Country
> MAddr1
> MAddr2
> MCity
> MState
> MZip
> MCountry
> Account [table name]
> AccountNumber [PK]
> TaxID
> SecTaxID
> Addr1
> Addr2
> City
> State
> Zip
> Country
> Every Customer in Customer table will have 2 addresses per TaxID,
> Mailing and Physical even if they are the same. Every AccountNumber
> will have exactly 1 address (Mailing) associated with it. So, if you
> are a customer with 2 accounts, you would have 1 record in the customer
> table (with both address and Mailing address populated) and 2 records
> in the Account table. All 4 addresses could be the same or unique.
> Mailings will EITHER go out at the Account level OR the Customer level
> so I'm not sure if it is beneficial to combine the addresses into one
> table or just create some sort of Account address table and another
> customer address table but then i get one-to-one relationships and I'm
> not sure how beneficial that is. This is how the data will be received
> from the vendor and there isn't any leeway there. Any assistance that
> could be provided would be greatly appreciated.
So you import it exactly as it comes from your source. That is life, and how it is done.
From there, you have an Account table that has ONLY AccountNumber, TaxID, and maybe SecTaxID, depending on what SecTaxID is.
You will have an Address table with a primary key of (TaxID, AddressType):
AddressType
TaxID
Addr1
Addr2
City
State
Zip
You will have an AddressType table with:
AddressType (M or P)
AddressDescr (either "Mailing" or "Physical")

Jason,
I'd go for something like:
Customer [table name]
TaxID [PK]
Physical_address_id
-- both could point to the same row
Mailing_address_id
Since one and the same address can be both physical and mailing, I'd rather not have an address_type column in the address table.

My answer was more normalized......
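The address/address-type design described in the replies can be sketched in DDL. This is only an illustration of that design: the column sizes and data types are guesses, not from the thread.

```sql
-- Lookup of address kinds: 'M' = Mailing, 'P' = Physical.
CREATE TABLE AddressType (
    AddressType  char(1)     NOT NULL PRIMARY KEY,
    AddressDescr varchar(20) NOT NULL
)

-- One row per (customer, address kind); composite primary key,
-- as suggested above. Column sizes are illustrative guesses.
CREATE TABLE Address (
    TaxID       varchar(20) NOT NULL,
    AddressType char(1)     NOT NULL
        REFERENCES AddressType (AddressType),
    Addr1       varchar(60) NULL,
    Addr2       varchar(60) NULL,
    City        varchar(40) NULL,
    State       char(2)     NULL,
    Zip         varchar(10) NULL,
    Country     varchar(40) NULL,
    PRIMARY KEY (TaxID, AddressType)
)

-- Account keeps only its keys; its mailing address lives in Address,
-- reachable through TaxID.
CREATE TABLE Account (
    AccountNumber varchar(20) NOT NULL PRIMARY KEY,
    TaxID         varchar(20) NOT NULL,
    SecTaxID      varchar(20) NULL
)
```

Since the flat files are truncated and reloaded daily, the batch load would populate Address by unioning the physical and mailing columns from the staging rows, one row per distinct (TaxID, AddressType).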
Database Design Assistance
I have 2 tables that will be created from flat files and recreated on a
daily basis with 300,000+ records and growing daily (tables will be
truncated and re-loaded via a batch process because we do not "own" the
data). I would like to extract the address information from these
tables and create a simple database design that will be relational and
efficient (there are additional Customer and Account fields but these
are the fields currently of interest).
Customer [table name]
TaxID [PK]
Addr1
Addr2
City
State
Zip
Country
MAddr1
MAddr2
MCity
MState
MZip
MCountry
Account [table name]
AccountNumber [PK]
TaxID
SecTaxID
Addr1
Addr2
City
State
Zip
Country
Every Customer in Customer table will have 2 addresses per TaxID,
Mailing and Physical even if they are the same. Every AccountNumber
will have exactly 1 address (Mailing) associated with it. So, if you
are a customer with 2 accounts, you would have 1 record in the customer
table (with both address and Mailing address populated) and 2 records
in the Account table. All 4 addresses could be the same or unique.
Mailings will EITHER go out at the Account level OR the Customer level
so I'm not sure if it is beneficial to combine the addresses into one
table or just create some sort of Account address table and another
customer address table, but then I get one-to-one relationships and I'm
not sure how beneficial that is. This is how the data will be received
from the vendor and there isn't any leeway there. Any assistance that
could be provided would be greatly appreciated.
Hi
If you do not store all the addresses as received, you will probably end up
with a data cleansing exercise to make sure that you are not duplicating the
same address that is input slightly differently. Moving the address out to
another table will be a good idea if you have occasions to retrieve the
customer/account information without the address details. I would start with
a normalised design and, if performance is being severely compromised, you can
then de-normalise.
John
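John's cleansing point can be sketched with a crude normalization key for spotting the same address typed slightly differently. Everything here (the `address_key` helper and its abbreviation rules) is a hypothetical illustration, not a production routine; real cleansing would use USPS/CASS-style standardization.

```python
import re

def address_key(addr1, city, state, zip_code):
    """Crude normalization key for duplicate-address detection (a sketch;
    real cleansing would use proper postal standardization)."""
    text = " ".join(filter(None, (addr1, city, state, zip_code))).upper()
    text = re.sub(r"\bSTREET\b", "ST", text)    # common abbreviations
    text = re.sub(r"\bAVENUE\b", "AVE", text)
    text = re.sub(r"[^A-Z0-9 ]", "", text)      # drop punctuation
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

# Two slightly different inputs collapse to the same key:
a = address_key("1 Main Street.", "Austin", "TX", "78701")
b = address_key("1  MAIN ST", "Austin", "TX", "78701")
print(a == b)  # True
```

Comparing on such a key before inserting into a shared address table avoids storing near-duplicate rows.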
"jasonl22@.yahoo.com" wrote:
> I have 2 tables that will be created from flat files and recreated on a
> daily basis with 300,000+ records and growing daily (tables will be
> truncated and re-loaded via a batch process because we do not "own" the
> data). I would like to extract the address information from these
> tables and create a simple database design that will be relational and
> efficient (there are additional Customer and Account fields but these
> are the fields currently of interest).
> Customer [table name]
> TaxID [PK]
> Addr1
> Addr2
> City
> State
> Zip
> Country
> MAddr1
> MAddr2
> MCity
> MState
> MZip
> MCountry
> Account [table name]
> AccountNumber [PK]
> TaxID
> SecTaxID
> Addr1
> Addr2
> City
> State
> Zip
> Country
> Every Customer in Customer table will have 2 addresses per TaxID,
> Mailing and Physical even if they are the same. Every AccountNumber
> will have exactly 1 address (Mailing) associated with it. So, if you
> are a customer with 2 accounts, you would have 1 record in the customer
> table (with both address and Mailing address populated) and 2 records
> in the Account table. All 4 addresses could be the same or unique.
> Mailings will EITHER go out at the Account level OR the Customer level
> so I'm not sure if it is beneficial to combine the addresses into one
> table or just create some sort of Account address table and another
> customer address table but then i get one-to-one relationships and I'm
> not sure how beneficial that is. This is how the data will be received
> from the vendor and there isn't any leeway there. Any assistance that
> could be provided would be greatly appreciated.
>|||So you import it exactly as it comes from your source; that is life,
and how it is done.
From there, you have an Account table that ONLY has AccountNumber,
TaxID, and maybe SecTaxID, depending on what the heck SecTaxID is.
You will have an Address table. It will have a primary key of
(TaxID, AddressType):
AddressType
TaxID
Addr1
Addr2
City
State
Zip
You will have an AddressType table.
It will have:
AddressType (M or P)
AddressDescr (either "Mailing" or "Physical")|||Jason,
I'd go for something like
Customer [table name]
TaxID [PK]
Physical_address_id
Mailing_address_id
-- both could point to the same row
Since one and the same address can be both physical and mailing, I'd
rather not have an address_type column in the address table|||My answer was more normalized...
Sunday, February 19, 2012
Database design
I have got a design issue. I have got 4 tables with relationships. Now, users will edit records and save them, but I want to keep the values that existed before each edit was saved. That is, if a record is edited 50 times, the values from each edit need to be saved. How best can I achieve this?
I can see 2 answers; there may be more.
1. Fire a trigger on the update that would collect the old data and populate it somewhere else.
2. Use stored procedures to do all updating and inserting; then you can add code to each procedure that grabs the values before it updates.
I would go with the SP method, as triggers can get busy.|||Indeed ... create a history table where, each time an update happens, you save a record.|||Here is some nice generic audit trail trigger code:
http://www.nigelrivett.net/AuditTrailTrigger.html
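For comparison with the linked generic trigger, here is a minimal sketch of option 1: a trigger that copies the old row into a history table on every update. The schema and data are invented for illustration, and SQLite trigger syntax is used; SQL Server 2000 would instead use an AFTER UPDATE trigger reading the `deleted` pseudo-table, since it has no BEFORE UPDATE triggers.

```python
import sqlite3

# Hypothetical Customer table with a history table that receives the
# pre-update values via a trigger (SQLite syntax for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (
    CustomerID INTEGER PRIMARY KEY,
    Name       TEXT,
    City       TEXT
);
CREATE TABLE Customer_History (
    CustomerID INTEGER,
    Name       TEXT,
    City       TEXT,
    ChangedAt  TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER trg_Customer_Audit
BEFORE UPDATE ON Customer
BEGIN
    -- OLD holds the row as it was before the update lands
    INSERT INTO Customer_History (CustomerID, Name, City)
    VALUES (OLD.CustomerID, OLD.Name, OLD.City);
END;
""")

conn.execute("INSERT INTO Customer VALUES (1, 'Jason', 'Austin')")
conn.execute("UPDATE Customer SET City = 'Dallas' WHERE CustomerID = 1")
conn.execute("UPDATE Customer SET City = 'Houston' WHERE CustomerID = 1")

# Two updates -> two history rows, each holding the superseded value.
history = conn.execute(
    "SELECT City FROM Customer_History ORDER BY rowid").fetchall()
print(history)  # [('Austin',), ('Dallas',)]
```

If a record is edited 50 times, the history table accumulates 50 rows, which is exactly the requirement in the question; the stored-procedure approach achieves the same thing with the copy done explicitly in each procedure.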