Showing posts with label field. Show all posts

Tuesday, March 27, 2012

Database Growth & Space Recovery Problem

I have done this experiment on one of the tables. There is a table called
build having an nvText field with a large number of records. I want to drop that
column and recover the space. These are the results I got.
sp_spaceused 'Build' results (name / rows / reserved / data / index_size / unused):
1. Before deleting the nvText field:
   Build  663211  341440 KB  339464 KB  1944 KB  32 KB
2. After deleting the nvText field:
   Build  663211  341440 KB  339464 KB  1944 KB  32 KB
3. After executing Shrink Database from Enterprise Manager:
   Build  663211  608280 KB  604744 KB  3456 KB  80 KB
4. After executing DBCC DBREINDEX (build, '', 70):
   Build  663211  124096 KB  123392 KB  712 KB  -8 KB
Can anyone please explain why, after executing step 3 (the shrink), the
data figure as well as index_size shows an increase, when logically
they should be reduced?
Regards,
Harcharan
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
One thing is that you should be suspicious of the numbers, as you are..
sp_spaceused reports information from sysindexes, but the sysindexes info is
NOT updated with every insert, update or delete... So the numbers may not
reflect the actual size..
For each sp_spaceused in your test, execute it like this:
exec sp_spaceused 'build', 'true'
The second parameter tells SQL to read through the table and update the
estimates; it runs longer, but you will get true results... If the numbers
still look funny, we'll try something else...
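For reference, a sketch of that call with the parameter named explicitly, plus the DBCC command that refreshes the same sysindexes figures (the table name build comes from the post; everything else is standard syntax):

```sql
-- Refresh the space estimates while reporting (slower, but accurate):
EXEC sp_spaceused 'build', @updateusage = 'TRUE';

-- Or correct the sysindexes counts first, then report:
DBCC UPDATEUSAGE (0, 'build');   -- 0 means the current database
EXEC sp_spaceused 'build';
```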
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Harcharan Jassal" <hjjassal@.yahoo.com> wrote in message
news:uCtYWSBhEHA.3540@.TK2MSFTNGP10.phx.gbl...
> I have done this experiment on one of the tables. There is a table called
> build having an nvText field with a large number of records. I want to drop that
> column and recover the space. These are the results I got.
> SP_SPACEUSED 'BUILD' Results
> name rows reserved data index_size unused
> 1. Before Deleting nvText Field
> Build 663211 341440 KB 339464 KB 1944 KB 32 KB
> 2. After Deleting nvText Field
> Build 663211 341440 KB 339464 KB 1944 KB 32 KB
> 3. After Executing the Shrink Database from Enterprise Manager.
> Build 663211 608280 KB 604744 KB 3456 KB 80 K
> 4. After Executing DBCC DBReindex (build,'',70)
> Build 663211 124096 KB 123392 KB 712 KB -8 KB
> Can anyone please explain me after executing step 3 i.e shrink data
> column as well as index_size shows an increased figure whereas logically
> it should be a reduced figure.
> Regards,
> Harcharan
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!


Thursday, March 22, 2012

Database Field showing as zero in report footer but works in group footer

I'm fairly new to Crystal, so there may be an obvious solution, but I just can't find it. Any help whatsoever would be excellent and much appreciated...

Problem:

Database field shows correct value and works in formulas properly in group footer.
In Report footer this same database field shows as zero and therefore other formulas in the report footer do not calculate correctly.

Any solutions? Please. Any help would be greatly appreciated. I'm kind of in a time crunch.

I'm not sure why a database field would show the value in the group footer and come up zero in the report footer.

Any ideas??|||Sorry, don't know why either - the report footer always shows the final database record for me!
I have read of other people having this problem, and I believe that assigning the field to a shared variable has allowed the value to be used.|||I gave it a variable and now it works. How weird is that?!

Thanks for your help!

Database field length problem

Hi everyone
I had an Access database running as the source for a website, but it
has become too large to run correctly, so it has been ported to MS SQL.
The problem is that 4 of the fields were Memo fields in Access and as
such are 5000+ characters long each; this overflows the allowed row size
on the SQL server (the 8,060-byte row limit on an 8,192-byte page).

Is there a way round this without splitting those 4 fields into separate
tables? That would cause a truly major rewrite of the website.

Thanks for any help.
Further details available if required.|||If you use varchar, can you not set the max field length to 8000 characters?

I can't believe SQL has such limits, surely there's a way to automatically
use two rows for one record, or does this require additional programming in
ASP?

Cheers, Ash

"Peter" <peter@.iib.ws> wrote in message
news:81307dbc.0406250528.ae1d3d@.posting.google.com ...
> Hi everyone
> I had an access database running as the source for a website but it
> has become too large to run correctly so it has been ported to MS-SQL
> the problem is that 4 of the fields were Memo fields in access and as
> such are 5000+ characters long each this overflows the allowed size on
> the SQL server (8192)
> Is there a way round without splitting those 4 fields into seperate
> tales?? as this would cause a truly major re-write of the website
> Thanks for any help
> Further details available if required|||Hi there
When I try to import a flat text file where I have used the transform
tool to declare field sizes of 6000 for the four fields, I get the
following error if they are varchar:

cannot create a row of size 8366 which is greater than the allowable
maximum of 8060

If I map them as ntext I don't get an error, but the data is truncated.

Any ideas
Peter

*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!|||Peter Winning (peter@.iib.ws) writes:
> When i try to import a flat text file where i have used the transform
> tool to delare field sizes of 6000 for the four fields i get the
> following error if they are varchar
> cannot create a row of size 8366 which is greater than the allowable
> maximum of 8060
> If i map them as ntext i dont get an error but the data is truncated

ntext is probably the way to go, since that is the only way to have
more than 8060 bytes of data on one row.

Why your ntext data is truncated I don't know, but then again I don't
know how you import the data. A CREATE TABLE definition and a sample data
file could help. (You would have to pack the data file into a zip
file, since it surely would be wrecked by news transport, if you
posted it as text.)

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi
I am using Enterprise Manager to upload the database, and so far all
attempts, no matter how the fields are transformed, are still failing.
I have tried mapping the fields as varchar, nvarchar and ntext, with the
same results in every case, i.e. failure to complete or data truncation;
this has also been attempted on the server itself.
The upsizing wizard in Access also failed to do the job.

*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!|||Peter Winning (peter@.iib.ws) writes:
> I am using the enterprise manager to upload the database and so far all
> attempts no matter how the fields are transformed are still failing
> I have tried mapping the fields as vchar, nvchar and ntext with the same
> results in every case ie failure to complete or data truncation this has
> also been attempted on the server itself
> The upsizing wizard in access also failed to do the job

I have no idea what Enterprise Manager is up to when it imports data;
I didn't even know that it had a function for it, and even less have I
used it.

My general experience of EM, though, is that it tends to occlude some
syntax in order to be helpful; when things go over its head, it leaves
you alone in the dark.

I would try to import the file with BCP, but since I don't know what your
text file looks like, I cannot suggest the exact command line. I repeat
from my previous posting:

A CREATE TABLE definition and a sample data file could help. (You would
have to pack the data file into a zip file, since it surely would be
wrecked by news transport, if you posted it as text.)

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Thank you
I have to confess that I am a complete novice with SQL. What is "BCP",
and where would I find some help on how to use it?
The database is a flat-field one with about 30 fields, 4 of which are
memo fields with very large amounts of data in each.

Regards
Peter

*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!|||Peter Winning (peter@.iib.ws) writes:
> I have to confess that i am a complete novice with sql what is "BCP" and
> where would i find some help in how to use it.

BCP is a command-line tool that permits you to load large amounts of
data from files. The files can be text or binary. BCP is a bit
restricted in that the file has to be fairly square; that is, it is not
able to sort out headers, unless you can find a square hole to put them in.

You can read more about BCP in Books Online.

Another alternative is DTS (Data Transformation Services), which is a more
versatile load tool, though I have never used it myself.
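Staying inside Transact-SQL, BULK INSERT offers much the same loading capability as BCP (a sketch; the file path, table name and delimiters here are placeholders, not details from the thread):

```sql
-- Load a tab-delimited text file into an existing table.
BULK INSERT dbo.MyImportTable
FROM 'C:\data\export.txt'
WITH (
    FIELDTERMINATOR = '\t',  -- column separator in the file
    ROWTERMINATOR   = '\n',  -- one record per line
    FIRSTROW        = 2      -- skip a header row, if the file has one
);
```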

> The database is a flat field one with about 30 fields 4 of which are
> memo fields with very large amounts of data in each

Well, it is up to you. If you don't want to post a CREATE TABLE statement
for your table and a sample data file, you don't have to. But then you will
have to find out how to load your file with BCP on your own, because I
don't really feel like guessing your table and data.

If you look in the SQL Server Program group, there is "Import and
Export Data". This takes you to the DTS wizard, which may be able to
guide all the way. But as I said, I have not used DTS. Then again,
there are some nice people in microsoft.public.sqlserver.dts who might
be able to help you if you go that way. But they, too, might want the
table definition and sample data.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi

I don't think the DTS Import wizard will truncate the data if you import it
from an Access database! It could be that you are selecting the data in
Query Analyser which has a configurable value (Tools/Options/Results/Maximum
characters per column).

If you have a table such as

CREATE TABLE MyAccessTable (id int, Memo1 ntext, Memo2 ntext, Memo3 ntext, Memo4 ntext)

Then you can see the number of characters using:

SELECT id,
       datalength(Memo1)/2, datalength(Memo2)/2,
       datalength(Memo3)/2, datalength(Memo4)/2
FROM MyAccessTable

John

"Peter Winning" <peter@.iib.ws> wrote in message
news:40dc5ecd$0$16435$c397aba@.news.newsgroups.ws.. .
> Hi there
> When i try to import a flat text file where i have used the transform
> tool to delare field sizes of 6000 for the four fields i get the
> following error if they are varchar
> cannot create a row of size 8366 which is greater than the allowable
> maximum of 8060
> If i map them as ntext i dont get an error but the data is truncated
> Any ideas
> Peter
>
> *** Sent via Devdex http://www.devdex.com ***
> Don't just participate in USENET...get rewarded for it!|||I've just looked at our WebMail application we're using, it uses a SQL
database to store the messages which generally are over 8000 characters - it
uses NTEXT as the datatype, with a length of '16', how on EARTH does that
relate to a VARCHAR field that has to be set to 8000??

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns9514A53A31CFFYazorman@.127.0.0.1...
> Peter Winning (peter@.iib.ws) writes:
> > When i try to import a flat text file where i have used the transform
> > tool to delare field sizes of 6000 for the four fields i get the
> > following error if they are varchar
> > cannot create a row of size 8366 which is greater than the allowable
> > maximum of 8060
> > If i map them as ntext i dont get an error but the data is truncated
> ntext is probably the way to go, since that is the only way to have
> more than 8060 bytes of data on one row.
> Why your ntext data is truncated I don't know, but then again I don't
> know how import the data. A CREATE TABLE definition and a sample data
> file could help. (You would have to pack the data file into a zip
> file, since it surely would be wrecked by news transport, if you
> posted it as text.)
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp|||J. Hall (remove_this_ash@.a-hall.com) writes:
> I've just looked at our WebMail application we're using, it uses a SQL
> database to store the messages which generally are over 8000 characters
> - it uses NTEXT as the datatype, with a length of '16', how on EARTH
> does that relate to a VARCHAR field that has to be set to 8000??

16 is the length of the pointer that is stored within the row. The data
itself is stored on separate pages. A varchar value on the other hand is
stored within the row, and since a row can not host more than 8060 bytes
of data, there is an upper limit.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Excellent thanks for clearing that up.

Many thanks,

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns9517834BC4F9AYazorman@.127.0.0.1...
> J. Hall (remove_this_ash@.a-hall.com) writes:
> > I've just looked at our WebMail application we're using, it uses a SQL
> > database to store the messages which generally are over 8000 characters
> > - it uses NTEXT as the datatype, with a length of '16', how on EARTH
> > does that relate to a VARCHAR field that has to be set to 8000??
> 16 is the length of the pointer that is stored within the row. The data
> itself is stored on separate pages. A varchar value on the other hand is
> stored within the row, and since a row can not host more than 8060 bytes
> of data, there is an upper limit.
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

Database field length from dataset in vb.net

How to get the max length of numeric field in a DataSet?
I have a DataSet bound to an Access database. Is it possible to get the maximum length of numeric field of a table in the DataSet? Many fields in the database tables have maximum length values set in ...

A VB.NET or ADO.NET forum would be a better start, but just walk down the object hierarchy. (Note: the DataColumn property is MaxLength, not Length; it applies to text columns and returns -1 for numeric ones.)

myDataSet.Tables(0).Columns(0).MaxLength

database extension in MS SQL?

hello ... I am trying to copy 1 field from a table in a database called DB1 to the same table in DB2 .. I am not sure if this is possible. I found out from www.w3schools.com it's doable using SELECT INTO ... but it uses MS Access for example (Northwind.mdb) ... I wonder what's the database extension in MS SQL? Does anyone know where we can check the query syntax on the web? Can anyone tell me if it's correct? I don't have MS SQL at home .. so :(

======copy prodID from DB1 to DB2 table called production1 (both DB1 and DB2)=======

// this is done in SQL QUERY ANALYZER under DB1?

SELECT production1.prodID INTO production1 IN DB2 FROM DB1

=====create VIEW in DB2=======

// this has to be done in DB1 ... if I have multiple databases, is there any way to do it using something like SELECT INTO to create views in different databases?

CREATE VIEW production_id AS SELECT prodID from DB1

Assuming that you're running against a SQL Server instance, and not directly mounting the database files, you don't need to know the file names. If both databases are on the same server, you can reference tables in another database as [database].[owner].[table]. So, if you're logged into db2, you could do

insert into production1 select prodID from db1.dbo.production1
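The second part of the question (creating a view in DB2 that reads from DB1) can use the same three-part naming; a hedged sketch, assuming both databases live on the same instance and dbo owns the table:

```sql
-- Run while connected to DB2; the view reads DB1 through a three-part name.
CREATE VIEW production_id
AS
SELECT prodID
FROM DB1.dbo.production1;
```

SELECT * FROM production_id in DB2 then returns DB1's prodID values.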

Wednesday, March 7, 2012

Database design question - Using Field definition table

Hi,
We have a table similar to the following schema.
Product table
product_id int,
product_name varchar(50),
product_desc varchar(100),
product_price money,
product_sub_category_id int
We also have tables for product category and sub-category. We would
like to save audit trail information whenever a new product is added;
its name, description or price is changed; or it is moved to another
sub-category or category. We need to save the modifying user, the
modification datetime and some other audit info as well. There seem to
be two options for saving this data: either a large table that has all
the audit columns, or a field definition table. The field definition
table has a field id and an audit field column, and the audit table
will have product_id, modified field id, new value, old value, modified
user and modification date.
My question is: which design is more efficient? The first seems pretty
straightforward and easy to implement, but it stores redundant
information and grows quickly. The second seems to reduce redundancy,
but it ends up saving multiple rows if there are multiple updates at
one time. Also, we have to join to the same table repeatedly when we
write complex queries.
Any comments or suggestions would be appreciated.
Thanks,|||It sounds like you already understand the tradeoffs pretty well, at
least between the alternatives given.
Not knowing the details of your operation makes it hard to give
meaningful advice. Just looking at the given information, I have to
suspect that the most common change would be to product_price. It
also makes me wonder if the price does not belong in a price table
instead, with effective dates and who-changed-it-when information. In
that configuration the price table itself becomes its own audit table.
If changes to the rest of the columns are infrequent enough then that
would allow the simple approach of duplicating the entire row for the
other columns. When space permits I much prefer the simple approach,
as it makes using the audit data so much simpler.
Roy Harvey
Beacon Falls, CT
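Roy's price-table idea can be sketched roughly like this (names are invented for illustration; they are not from the poster's schema). Each price change inserts a row, so the table doubles as its own audit trail:

```sql
-- One row per price change; the history IS the audit trail.
CREATE TABLE product_price (
    product_id     int      NOT NULL,
    price          money    NOT NULL,
    effective_date datetime NOT NULL,
    changed_by     sysname  NOT NULL DEFAULT SUSER_SNAME(),  -- who changed it
    PRIMARY KEY (product_id, effective_date)
);

-- The current price of a product is its most recent effective row:
SELECT price
FROM product_price
WHERE product_id = 42            -- hypothetical product id
  AND effective_date = (SELECT MAX(effective_date)
                        FROM product_price
                        WHERE product_id = 42);
```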
On 5 Jul 2006 16:12:54 -0700, sreedhardasi@.gmail.com wrote:

>Hi,
>We have a table similar to the following schema.
>Product table
> product_id int,
> product_name varchar(50),
> product_desc varchar(100),
> product_price money,
> product_sub_category_id int
>We also have tables for product category and sub category. We would
>like to save audit trail information whenever a new product is added,
>changed name, description or price, and moved to another sub category
>or category. We need to save modified user, modification datetime and
>some other audit info as well. It seems there are two options available
>saving this data i.e. either using a large table that has all audit
>columns or using field definition table. The field definition table has
>a field id and audit field column and the audit table will have
>product_id, modified field id, new value, old value, modified user and
>modification date.
>My question is which design is efficient? It seems the first is pretty
>straight forward and easy to implement, but stores redundant
>information and grows quickly. The second solution seems to reduce
>redundancy, but end up saving multiple rows if there are multiple
>updates at one time. Also, we need to make join to the same table when
>we write complex queries.
>Any comments or suggestions would be appreciated.
>Thanks,|||Hi
I agree with Roy; you will benefit from having a price table. Much
easier to audit, instead of having triggers or something else to track
the info.
<sreedhardasi@.gmail.com> wrote in message
news:1152141174.756626.230500@.v61g2000cwv.googlegroups.com...
> Hi,
> We have a table similar to the following schema.
> Product table
> product_id int,
> product_name varchar(50),
> product_desc varchar(100),
> product_price money,
> product_sub_category_id int
> We also have tables for product category and sub category. We would
> like to save audit trail information whenever a new product is added,
> changed name, description or price, and moved to another sub category
> or category. We need to save modified user, modification datetime and
> some other audit info as well. It seems there are two options available
> saving this data i.e. either using a large table that has all audit
> columns or using field definition table. The field definition table has
> a field id and audit field column and the audit table will have
> product_id, modified field id, new value, old value, modified user and
> modification date.
> My question is which design is efficient? It seems the first is pretty
> straight forward and easy to implement, but stores redundant
> information and grows quickly. The second solution seems to reduce
> redundancy, but end up saving multiple rows if there are multiple
> updates at one time. Also, we need to make join to the same table when
> we write complex queries.
> Any comments or suggestions would be appreciated.
> Thanks,
>|||Thanks a lot for your suggestions. I think it is a good idea to have a
price table.
Uri Dimant wrote:
> Hi
> I agree with Roy , you will be benefit from having price's table . Much
> easier to audit instead of having triggers or something else to track the
> info
>
> <sreedhardasi@.gmail.com> wrote in message
> news:1152141174.756626.230500@.v61g2000cwv.googlegroups.com...

Database design question - Using Field definition table

Hi,
We have a table similar to the following schema.
Product table
product_id int,
product_name varchar(50),
product_desc varchar(100),
product_price money,
product_sub_category_id int
We also have tables for product category and sub category. We would
like to save audit trail information whenever a new product is added,
changed name, description or price, and moved to another sub category
or category. We need to save modified user, modification datetime and
some other audit info as well. It seems there are two options available
saving this data i.e. either using a large table that has all audit
columns or using field definition table. The field definition table has
a field id and audit field column and the audit table will have
product_id, modified field id, new value, old value, modified user and
modification date.
My question is which design is efficient? It seems the first is pretty
straight forward and easy to implement, but stores redundant
information and grows quickly. The second solution seems to reduce
redundancy, but end up saving multiple rows if there are multiple
updates at one time. Also, we need to make join to the same table when
we write complex queries.
Any comments or suggestions would be appreciated.
Thanks,It sounds like you already understand the tradeoffs pretty well, at
least between the alternatives given.
Not knowing the details of your operation makes it hard to give
meaningful advice. Just looking at the given information, I have to
suspect that the most common change would be to product_price. It
also makes me wonder if the price does not belong in a price table
instead, with effective dates and who-changed-it-when information. In
that configuration the price table itself becomes its own audit table.
If changes to the rest of the columns are infrequent enough then that
would allow the simple approach of duplicating the entire row for the
other columns. When space permits I much prefer the simple approach,
as it makes using the audit data so much simpler.
Roy Harvey
Beacon Falls, CT
On 5 Jul 2006 16:12:54 -0700, sreedhardasi@.gmail.com wrote:
>Hi,
>We have a table similar to the following schema.
>Product table
> product_id int,
> product_name varchar(50),
> product_desc varchar(100),
> product_price money,
> product_sub_category_id int
>We also have tables for product category and sub category. We would
>like to save audit trail information whenever a new product is added,
>changed name, description or price, and moved to another sub category
>or category. We need to save modified user, modification datetime and
>some other audit info as well. It seems there are two options available
>saving this data i.e. either using a large table that has all audit
>columns or using field definition table. The field definition table has
>a field id and audit field column and the audit table will have
>product_id, modified field id, new value, old value, modified user and
>modification date.
>My question is which design is efficient? It seems the first is pretty
>straight forward and easy to implement, but stores redundant
>information and grows quickly. The second solution seems to reduce
>redundancy, but end up saving multiple rows if there are multiple
>updates at one time. Also, we need to make join to the same table when
>we write complex queries.
>Any comments or suggestions would be appreciated.
>Thanks,|||Hi
I agree with Roy , you will be benefit from having price's table . Much
easier to audit instead of having triggers or something else to track the
info
<sreedhardasi@.gmail.com> wrote in message
news:1152141174.756626.230500@.v61g2000cwv.googlegroups.com...
> Hi,
> We have a table similar to the following schema.
> Product table
> product_id int,
> product_name varchar(50),
> product_desc varchar(100),
> product_price money,
> product_sub_category_id int
> We also have tables for product category and sub category. We would
> like to save audit trail information whenever a new product is added,
> changed name, description or price, and moved to another sub category
> or category. We need to save modified user, modification datetime and
> some other audit info as well. It seems there are two options available
> saving this data i.e. either using a large table that has all audit
> columns or using field definition table. The field definition table has
> a field id and audit field column and the audit table will have
> product_id, modified field id, new value, old value, modified user and
> modification date.
> My question is which design is efficient? It seems the first is pretty
> straight forward and easy to implement, but stores redundant
> information and grows quickly. The second solution seems to reduce
> redundancy, but end up saving multiple rows if there are multiple
> updates at one time. Also, we need to make join to the same table when
> we write complex queries.
> Any comments or suggestions would be appreciated.
> Thanks,
>|||Thanks a lot for your suggestions. I think it is a good idea to have a
price table.
Uri Dimant wrote:
> Hi
> I agree with Roy , you will be benefit from having price's table . Much
> easier to audit instead of having triggers or something else to track the
> info
>
> <sreedhardasi@.gmail.com> wrote in message
> news:1152141174.756626.230500@.v61g2000cwv.googlegroups.com...
> > Hi,
> >
> > We have a table similar to the following schema.
> >
> > Product table
> > product_id int,
> > product_name varchar(50),
> > product_desc varchar(100),
> > product_price money,
> > product_sub_category_id int
> >
> > We also have tables for product category and sub category. We would
> > like to save audit trail information whenever a new product is added,
> > changed name, description or price, and moved to another sub category
> > or category. We need to save modified user, modification datetime and
> > some other audit info as well. It seems there are two options available
> > saving this data i.e. either using a large table that has all audit
> > columns or using field definition table. The field definition table has
> > a field id and audit field column and the audit table will have
> > product_id, modified field id, new value, old value, modified user and
> > modification date.
> >
> > My question is which design is efficient? It seems the first is pretty
> > straight forward and easy to implement, but stores redundant
> > information and grows quickly. The second solution seems to reduce
> > redundancy, but end up saving multiple rows if there are multiple
> > updates at one time. Also, we need to make join to the same table when
> > we write complex queries.
> >
> > Any comments or suggestions would be appreciated.
> >
> > Thanks,
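The separate price table suggested above (one history row per price change) can be sketched with SQLite in Python. This is only an illustrative sketch: the `ProductPrice` table and its column names are hypothetical, not taken from the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Product table without the price column; prices live in their own table
cur.execute("""CREATE TABLE Product (
    product_id INTEGER PRIMARY KEY,
    product_name TEXT,
    product_sub_category_id INTEGER)""")

# Each price change is a new row, so the table doubles as the audit trail
cur.execute("""CREATE TABLE ProductPrice (
    product_id INTEGER,
    price REAL,
    modified_user TEXT,
    modified_at TEXT)""")

cur.execute("INSERT INTO Product VALUES (1, 'Widget', 10)")
cur.execute("INSERT INTO ProductPrice VALUES (1, 9.99, 'alice', '2006-07-01')")
cur.execute("INSERT INTO ProductPrice VALUES (1, 12.50, 'bob', '2006-07-05')")

# Current price = the most recent history row for that product
current = cur.execute("""SELECT price FROM ProductPrice
    WHERE product_id = 1
    ORDER BY modified_at DESC LIMIT 1""").fetchone()[0]
print(current)  # 12.5
```

The full price history stays queryable (who changed what, and when) without triggers or an extra audit table.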

Friday, February 24, 2012

Database design for status field

Hello,

I have a database design question. I want to store a status in a table. In the ASP.NET interface for the user this status will be adapted to the language of the user.

This is how it would look like:

StatusID 1 = "yes" for English, "oui" for French, "ja" for Dutch
StatusID 2 = "no" for English, "non" for French, "neen" for Dutch
StatusID 3 = "error" for English, "erreur" for French, "fout" for Dutch

I don't want to do the translation in ASP.NET, and it should be done in one query.

What would this look like in the database design?

WimVM, how about this,

if you are asking about how to design a table or tables, you could have one table (status) like this:

statusID is your primary key and three additional fields (english, french, dutch)

so for instance if you have an english speaker your "yes" would be returned by

declare @status int
set @status = (getYourIdFromApp)
select english from status where statusId = @status
hope this is helpful -- jp|||

Hello,

Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.

I need more flexibility for when I add or disable a language. In the above example you need to know which column corresponds to which language. Sorry for not mentioning this in the first place.

Thanks

|||

WimVM:

Hello,

Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.

I need more flexibility for when I add or disable a language. In the above example you need to know which column corresponds to which language. Sorry for not mentioning this in the first place.

Thanks

I still have not found a solution for this. It would be great if somebody could help me with this. Thanks.

|||

WimVM:

Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.

I need more flexibility for when I add or disable a language. In the above example you need to know which column corresponds to which language. Sorry for not mentioning this in the first place.

Hi Wim, I have some suggestions.

Add a new table to your database called Languages. This table will have 2 fields: LanguageID and LanguageName. The data would look like this:

LanguageID LanguageName
1 English
2 Dutch
3 French
4 German

Add another table called Statuses. This table will have 2 fields: StatusID and StatusName. This table would not necessarily be used in queries, but could be used to store metadata about the Statuses. The data would look like this:

StatusID StatusName
1 Yes Status
2 No Status


Add another table called StatusTranslation. This table will have 3 fields: StatusID, LanguageID, and StatusTranslation. The data would look like this:

StatusID LanguageID StatusTranslation
1 1 yes
2 1 no
1 2 ja
2 2 neen
1 3 oui
2 3 non
1 4 ja
2 4 nein

Now, assuming your main data table is called MyTable. This table might look something like this:

ID Column1 Column2 UserLanguageID StatusID
1 Value1 Value2 2 1
2 SomeValue SomeValue2 1 2
3 Value1 Value2 3 1

The query to pull out the status data, translated for the user's language, would look like this:

SELECT
ID,
StatusTranslation
FROM
myTable
INNER JOIN
StatusTranslation ON myTable.StatusID = StatusTranslation.StatusID AND myTable.UserLanguageID = StatusTranslation.LanguageID

And the result of that query would be:

ID StatusTranslation
1 ja
2 no
3 oui

This has made some assumptions, such as that you have the UserLanguageID in the same table as the StatusID. You'd need to adjust the queries and data tables to fit your situation, but perhaps this will help get you started.
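To sanity-check the schema and query above, here is a small sketch in Python using SQLite as a stand-in for SQL Server. The join condition matches on both the status and the user's language, using the LanguageID column from the StatusTranslation table definition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Languages (LanguageID INTEGER PRIMARY KEY, LanguageName TEXT);
CREATE TABLE StatusTranslation (StatusID INTEGER, LanguageID INTEGER,
                                StatusTranslation TEXT);
CREATE TABLE MyTable (ID INTEGER, UserLanguageID INTEGER, StatusID INTEGER);

INSERT INTO Languages VALUES (1,'English'),(2,'Dutch'),(3,'French'),(4,'German');
INSERT INTO StatusTranslation VALUES
  (1,1,'yes'),(2,1,'no'),(1,2,'ja'),(2,2,'neen'),
  (1,3,'oui'),(2,3,'non'),(1,4,'ja'),(2,4,'nein');
INSERT INTO MyTable VALUES (1,2,1),(2,1,2),(3,3,1);
""")

# Join on both the status and the user's language to pick one translation
rows = cur.execute("""
SELECT MyTable.ID, StatusTranslation.StatusTranslation
FROM MyTable
INNER JOIN StatusTranslation
  ON MyTable.StatusID = StatusTranslation.StatusID
 AND MyTable.UserLanguageID = StatusTranslation.LanguageID
ORDER BY MyTable.ID
""").fetchall()
print(rows)  # [(1, 'ja'), (2, 'no'), (3, 'oui')]
```

Adding or disabling a language is then just inserting or deleting rows, with no change to the table definitions or the query.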

For some background, you might like Alister Jones' (SomeNewKid) blog post: Localizing the Content, which leads to Karl Seguin's article: Creating multilingual websites - Part 2.

|||Thanks tmorton!

Friday, February 17, 2012

Database Defaults

I have a strange issue (well, I think it's strange) where I have created a field
(CreateDate) with a default of (getdate()); however, on the server where this
table is built, it is not populating CreateDate, and as a result I am
getting errors. Has anyone ever seen an issue like this where the field
defaults aren't getting populated?
Does the column allow nulls?
Is the insert statement inserting a blank string in that column, or
something else that shouldn't be inserted?
Are you receiving any errors?
can we see the insert script?
Simon Worth
Scott wrote:
> I have a strange (well I think its strange) where I have create a field
> (CreateDate) with a default of (getdate()); however, on the server where this
> table is built, it is not populating the CreateDate, and as a result I am
> getting errors, has anyone ever seen an issue like this were the field
> defaults aren't getting populated?|||"Simon Worth" wrote:
The field does NOT allow nulls, and I tested with a very simple script:
Insert into Codes
(Code,Description)
values ('0001','Test Code')
the error message that I received was that the field will not accept nulls;
the actual verbiage was "Cannot insert the value NULL into column
'CreateDate', table test_app.dbo.codes; column does not allow NULLs. Update
fails."
I use this all the time, and I am sure a lot of others do as well; this is the
first time I have ever run into anything like this
> Does the column allow nulls?
> Is the insert statement inserting a blank string in that column, or
> something else that shouldn't be inserted?
> Are you receiving any errors?
> can we see the insert script?
> Simon Worth|||Are you certain that the default constraint is still on the column? Try
verifying existence with sp_help 'Codes'.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Scott" <Scott@.discussions.microsoft.com> wrote in message
news:B5F635A5-6379-41C9-A849-BC10B57865A7@.microsoft.com...
>
> "Simon Worth" wrote:
> The field does NOT allow nuls, and I tested with a very simple script:
> Insert into Codes
> (Code,Description)
> values ('0001','Test Code')
> the error message that I received was that the field will not accept nulls
> the actual verbage was "Cannot insert the value NULL into column
> 'CreateDate',table test_app.dbo.codes, column does not allow NULLs.
> Update
> Fails."
> I use this all the time, and I am sure alot of others as well, this is the
> first time I have ever run into anything like this
>
>> Does the column allow nulls?
>> Is the insert statement inserting a blank string in that column, or
>> something else that shouldn't be inserted?
>> Are you receiving any errors?
>> can we see the insert script?
>> Simon Worth
>
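The behavior under discussion can be reproduced outside SQL Server. Here is a small illustrative sketch in Python using SQLite: with a default in place, an INSERT that omits the column succeeds and the default fills it in; with the default constraint missing from a NOT NULL column, the identical INSERT fails, which is consistent with the error Scott is seeing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# With a default, CreateDate is populated even though the INSERT omits it
cur.execute("""CREATE TABLE Codes (
    Code TEXT,
    Description TEXT,
    CreateDate TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP)""")
cur.execute("INSERT INTO Codes (Code, Description) VALUES ('0001', 'Test Code')")
create_date = cur.execute("SELECT CreateDate FROM Codes").fetchone()[0]

# Without the default constraint, the identical INSERT violates NOT NULL --
# consistent with the constraint having gone missing on the server
cur.execute("""CREATE TABLE Codes_NoDefault (
    Code TEXT,
    Description TEXT,
    CreateDate TEXT NOT NULL)""")
insert_failed = False
try:
    cur.execute(
        "INSERT INTO Codes_NoDefault (Code, Description) VALUES ('0001', 'Test Code')")
except sqlite3.IntegrityError:
    insert_failed = True
```

In SQL Server the equivalent check is Dan's suggestion: run sp_help 'Codes' and confirm a default constraint is still listed against the CreateDate column.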
