Wednesday, March 21, 2012
Database Engine Tuning Advisor Won't Suggest Any Changes :-)
I'm having a problem with the DETA in SQL Server 2005. Or at least I
think I am.
I have an asp.net web application that invokes lots of stored
procedures. There is one page that causes a whole load of stored
procedures to be fired off. I'm pretty sure that some of these
procedures could be optimised somehow.
I've done a trace using SQL Profiler. This basically involved me setting
up the trace and fannying about with the page for a few minutes,
performing the sort of actions that users will commonly perform.
In the DETA I set up a new session, selected the workload file and the
workload database and set it off on its merry way. I set it to consider
getting rid of all indexes (there are only indexes on the PK at the moment).
The thing is, I get no suggestions, which I'm surprised about.
The SProcs that are getting executed are search procs that take a few
different parameters. I had a feeling that this procedure, given the
number of times it's executed and the fact that it has no indexes other
than the PK would be ripe for some optimisation.
Can anyone tell me if:
- Perhaps the workload file is too small? I only fannied for a couple of
minutes. Could that make a difference?
- Is there anything else I could do?
Could I just be a god like query writer and not even know it :-)
Thanks to anyone who can advise
Kindest Regards
Simon|||Hi Simon,
First of all I'd say that DETA (2005) is much more reliable than the TA
(2000) and makes really good suggestions, from what I've seen playing with it.
Can you show us your stored procedure's code?
I did some testing and found that DETA does make a suggestion to create
an additional index on the searched column.
CREATE PROC usp_Test
@s VARCHAR(20)
AS
SELECT ProductDescription FROM Products WHERE ProductDescription LIKE @s + '%'
The output was about 1500 rows.
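For reference, the kind of index that suggestion amounts to is a nonclustered index on the searched column; a rough sketch (the index name is mine, not DETA's actual output):
CREATE NONCLUSTERED INDEX IX_Products_ProductDescription
ON Products (ProductDescription)
With no leading wildcard in the LIKE pattern, the optimizer can typically seek on such an index rather than scan the table, which is why DETA considers it worthwhile.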
"Simon Harvey" <nothanks@.hotmail.com> wrote in message
news:O8NYdEFJHHA.1008@.TK2MSFTNGP06.phx.gbl...
> Hi all,
> I'm having a problem with the DETA in SQL Server 2005. Or at least I think
> I am.
> I have an asp.net web application that invokes lots of stored procedures.
> There is one page that causes a whole load of stored procedures to be
> fired off. I'm pretty sure that some of these procedures could be
> optimised somehow.
> I've done a trace using SQL Profiler. This basically involved me setting
> up the trace and fannying about with the page for a few minutes,
> performing the sort of actions that users will commonly perform.
> In the DETA I set up a new session, selected the workload file and the
> workload database and set it off on its merry way. I set it to consider
> getting rid of all indexes (there are only indexes on the PK at the
> moment).
> The thing is, I get no suggestions, which I'm surprised about.
> The SProcs that are getting executed are search procs that take a few
> different parameters. I had a feeling that this procedure, given the
> number of times it's executed and the fact that it has no indexes other
> than the PK would be ripe for some optimisation.
> Can anyone tell me if:
> - Perhaps the workload file is to small? I only fannied for a couple of
> minutes. Could that make a difference?
> - Is there anything else I could do?
> Could I just be a god like query writer and not even know it :-)
> Thanks to anyone who can advise
> Kindest Regards
> Simon|||Hi,
Thanks for your reply. The SPROC is as follows:
Apologies for the formatting...I'm hoping it will look better on your
screen than it looks as I'm writing this :-)
Any suggestions on indexes would be very much appreciated. In terms of
context, the procedure is attempting to find properties that don't have
rental bookings in them between the dates passed in. The other
parameters are optional...
Thanks
Simon
CREATE PROCEDURE [dbo].[SearchAvailability]
@propertyIDOrAddress VARCHAR(75) = NULL,
@startDate VARCHAR(50) = NULL,
@endDate VARCHAR(50) = NULL,
@propertyTypeID smallint = NULL,
@noOfBedrooms smallint = NULL,
@estateID smallint = NULL,
@noOfResults smallint = 25
AS
SELECT
SharedDataStore.dbo.Properties.[ID],
SharedDataStore.dbo.Properties.[StrapLine],
SharedDataStore.dbo.Properties.[ShortDescription],
SharedDataStore.dbo.Properties.[AdditionalDetails],
SharedDataStore.dbo.Properties.[InternalNotes],
SharedDataStore.dbo.Properties.[PublicNotes],
SharedDataStore.dbo.Properties.[Address1],
SharedDataStore.dbo.Properties.[Address2],
SharedDataStore.dbo.Properties.[Address3],
SharedDataStore.dbo.Properties.[EstateID],
SharedDataStore.dbo.Estates.[Name] AS 'EstateName',
SharedDataStore.dbo.Properties.[City],
SharedDataStore.dbo.Properties.[Region],
SharedDataStore.dbo.Properties.[Country],
SharedDataStore.dbo.Properties.[PostalCode],
SharedDataStore.dbo.Properties.[Tel],
SharedDataStore.dbo.Properties.[IsCommitment],
SharedDataStore.dbo.Properties.[PropertyTypeID],
SharedDataStore.dbo.PropertyTypes.[Type] AS 'PropertyTypeName',
SharedDataStore.dbo.Properties.[Status],
SharedDataStore.dbo.PropertyStatusTypes.[Name] AS 'StatusName',
SharedDataStore.dbo.Properties.[NoOfBedrooms],
SharedDataStore.dbo.Properties.[NoOfBathrooms],
SharedDataStore.dbo.Properties.[SupplierID],
SharedDataStore.dbo.Organisations.[Name] AS 'SupplierName',
SharedDataStore.dbo.Properties.[IsDeleted],
SharedDataStore.dbo.Properties.[EnteredDate],
SharedDataStore.dbo.Properties.[EnteredBy],
SharedDataStore.dbo.Properties.[LastModified],
SharedDataStore.dbo.Properties.[LastModifiedBy]
FROM
SharedDataStore.dbo.Properties
LEFT JOIN SharedDataStore.dbo.PropertyTypes
ON SharedDataStore.dbo.Properties.PropertyTypeID = SharedDataStore.dbo.PropertyTypes.ID
LEFT JOIN SharedDataStore.dbo.Estates
ON SharedDataStore.dbo.Properties.EstateID = SharedDataStore.dbo.Estates.ID
LEFT JOIN SharedDataStore.dbo.Organisations
ON SharedDataStore.dbo.Properties.SupplierID = SharedDataStore.dbo.Organisations.ID
LEFT JOIN SharedDataStore.dbo.PropertyStatusTypes
ON SharedDataStore.dbo.Properties.Status = SharedDataStore.dbo.PropertyStatusTypes.ID
WHERE
[SharedDataStore].[dbo].[Properties].[ID] NOT IN
(
SELECT [Bookings].[PropertyID]
FROM Bookings
WHERE ArriveDate < CONVERT(DATETIME, @endDate, 101)
AND DepartDate > CONVERT(DATETIME, @startDate, 101)
AND ([Bookings].[IsDeleted] != 'true')
)
AND ([SharedDataStore].[dbo].[Properties].[IsDeleted] != 'true')
AND (([SharedDataStore].[dbo].[Properties].[Address1] LIKE '%' + @propertyIDOrAddress + '%') OR (@propertyIDOrAddress IS NULL))
AND (([SharedDataStore].[dbo].[Properties].PropertyTypeID = @propertyTypeID) OR (@propertyTypeID IS NULL))
AND (([SharedDataStore].[dbo].[Properties].NoOfBedrooms = @noOfBedrooms) OR (@noOfBedrooms IS NULL))
AND (([SharedDataStore].[dbo].[Properties].EstateID = @estateID) OR (@estateID IS NULL))
AND ([SharedDataStore].[dbo].[Properties].[Status] = 0) -- Property is active
ORDER BY [SharedDataStore].[dbo].[Properties].[Address1]|||Simon Harvey wrote: [snip]
> AND (([SharedDataStore].[dbo].[Properties].[Address1] LIKE '%' +
> @propertyIDOrAddress + '%') OR (@propertyIDOrAddress IS NULL))
> AND (([SharedDataStore].[dbo].[Properties].PropertyTypeID =
> @propertyTypeID) OR (@propertyTypeID IS NULL))
> AND (([SharedDataStore].[dbo].[Properties].NoOfBedrooms = @noOfBedrooms)
> OR (@noOfBedrooms IS NULL))
> AND (([SharedDataStore].[dbo].[Properties].EstateID = @estateID) OR
> (@estateID IS NULL))
Simon,
For these types of queries hardly any optimizations are possible because
of all the optional parameters. So I am not surprised that the DETA could
not come up with any suggestions (there probably aren't any useful index
suggestions).
You could consider using dynamic SQL to form the exact SQL query that is
needed for the particular situation. You might get more compilations,
but such a query can make use of indexes on the search column(s) that
is/are used.
Another method to achieve the same thing is to write a query for each
combination of the optional parameters and use IF/ELSE commands to
execute the right query.
Then you can create indexes on each search column and compound indexes
on the search column combinations that are used most.
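As a rough sketch of the dynamic SQL idea, using sp_executesql and just two of the optional parameters (table and column names are taken from your procedure; treat this as an outline, not a drop-in replacement):
DECLARE @sql NVARCHAR(2000)
SET @sql = N'SELECT p.ID, p.Address1, p.NoOfBedrooms, p.PropertyTypeID
FROM SharedDataStore.dbo.Properties p
WHERE p.IsDeleted != ''true'' AND p.Status = 0'
-- append only the conditions for parameters that were actually supplied
IF @propertyTypeID IS NOT NULL
SET @sql = @sql + N' AND p.PropertyTypeID = @propertyTypeID'
IF @noOfBedrooms IS NOT NULL
SET @sql = @sql + N' AND p.NoOfBedrooms = @noOfBedrooms'
SET @sql = @sql + N' ORDER BY p.Address1'
EXEC sp_executesql @sql,
N'@propertyTypeID smallint, @noOfBedrooms smallint',
@propertyTypeID, @noOfBedrooms
Because only the branches that are actually needed end up in the statement, the optimizer can use an index on, say, PropertyTypeID or NoOfBedrooms when that parameter is supplied.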
HTH,
Gert-Jan|||Hi there,
I was beginning to suspect that that was going to be the problem.
I did the SProc that way because it seemed a bit (though not much) neater.
I guess if performance becomes more of a problem I may have to look at
making it dynamic.
Many thanks for your advice
Kindest Regards
Simon
Thursday, March 8, 2012
database diagram in server explorer
You might want to post this question to the Visual Studio group. I can't recall if this facility is version dependent. They would be able to better answer your question.
Terrence Nevins
SQL Server Program Manager
|||Unfortunately, Management Studio 2005 and Visual Studio 2005 cannot work with diagrams on SQL Server 2000 servers.
Best regards,
Steve
Saturday, February 25, 2012
Database Design Question
For my app, the database consists of several relational tables that track all of the information about a job. I want to set it up so that if a job is completed, it gets archived.
Over the next 2 or 3 years, this system will likely grow to 100,000+ records.
My question is, should I (1) just use a flag to mark the job as archived, (2) make a flat record of the job and move it into one archive table, or (3) set up archive tables that mirror my production tables and copy the record to the archive tables keeping the relationships intact?
Does anyone have a method that has worked well for them in the past? What will keep my app running the fastest? Any suggestions would be appreciated. Thanks!|||I always prefer a single table with the jobs flagged as not active (1 or 0)
it makes for simpler querying later with just
select * from table1
where active = 1
and
when you want to see completed tasks
select * from table1
where active = 0
(less tables, less joins etc,... )|||Cool ... thanks!
Do you think 100,000+ records somewhere down the line will be a problem when running reports or doing searches if I just use a flag? I'm trying to design this w/ optimized tuning in mind because I don't know how long it will be in place.
When I flag a job as complete (bit 0,1) and then query w/
WHERE open=(0 or 1)
It still has to check every record, correct? How many records before there is a noticeable difference in the speed of the app?|||two ways to think of this
you could make the column a bit datatype to save on space; it stores 1, 0 or null (small and to the point) but you cannot create an index on this column
OR
create the column as tinyint (1 byte, largest value 255) and then you could index it
but indexing on heavily duplicated columns is not recommended
you also could create statistics on the flagged column
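For example (a rough sketch, using the table and column names from the earlier example):
CREATE STATISTICS st_table1_active ON table1 (active)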
I'm of the mind that you should be okay with this but just monitor performance during operations|||Thanks for your ideas!|||The optimizer will likely not use an index on a boolean column, regardless of whether it is bit, tinyint, or even char(1). There just isn't sufficient cardinality to make it worthwhile.|||true
this is an area where i have (gulp!) envied ORACLE
they have bitmap indexes that you can use on heavily duplicated columns like gender and yes\no
oh well maybe in yukon.|||Yeah, but that's not the point.
You could create an index on boolean values stored in a tinyint field, but the optimizer would likely ignore it because its not any more efficient than using a table scan. The cardinality is likely so low (high? I can never remember...) that the small performance boost gained by using the index is offset by the overhead of accessing the index.
I'm not an Oracle DBA, but I don't see how Oracle would gain much from indexing boolean values either.|||yes and using bitmap indexes on yes no or gender or small sets like rating can give you a boost
but you are right about the boolean but still i prefer the 1/0 flag to the splitting up of tables|||I agree. In most cases (not all) a flag is better than splitting the data.|||And the rambling goes on. Well, I don't know who would come up with an idea on creating an index just on a Gender or Yes/No field. And you guys keep on going on it. Kind of silly, would you really consider doing it? Its value WILL come to play if such a field is combined with something else.
And another thing, - YOU CAN CREATE AN INDEX ON A BIT FIELD, even by itself.|||Yes and no.
You can't create an index on a bit field through Enterprise Manager's table design form.
Also, Books Online states:
"Columns of type bit cannot have indexes on them. "
But you can create the index using SQL:
CREATE INDEX [IX_BitTest] ON [dbo].[YourTable]([BitValue]) ON [PRIMARY]
GO
...so there is a loophole or inconsistency in SQL server regarding bit values.
Lot's of people come up with the idea of indexing boolean fields because they don't have an in depth understanding of how SQL Server works. That's why they come to dbforums for advice, not ridicule Bob.|||Originally posted by blindman
...Lot's of people come up with the idea of indexing boolean fields because they don't have an in depth understanding of how SQL Server works. That's why they come to dbforums for advice, not ridicule Bob. Hey, you started first, I just try to survive here. Remember your comments about my inability to understand English? Help me out here, - weren't you ridiculing me?
Somet things I found funny, others I saw as an insult, but in neither of those posts were you even attempting to hide or retract your intent to ridicule. I guess you yourself can't take your own medicine, hey?! ;)
Originally posted by blindman
...You can't create an index on a bit field through Enterprise Manager's table design form.
Also, Books Online states:
"Columns of type bit cannot have indexes on them. "
Which Books Online are you reading, Lindman? Get on with the program, everybody else did!
SQL Server Books Online January 2004 Update (http://www.microsoft.com/downloads/details.aspx?FamilyId=A6F79CB1-A420-445F-8A4B-BD77A7DA194B&displaylang=en#filelist)|||YEAH BOB!!!!
rdjabarov does insulted....|||Always go with your strength.|||At the scale of 100,00 or so records, you'll certainly want to be able to index the field, so bit is out. Plus people writing code against boolean fields get confused about null evaluations and equalities, so it's better to use a more humane data type anyway.
tinyint will take the least amount of space and provide the fastest field evaluation available. Although, at such a small table size, neither is a major concern. Whether or not you are expanding a server tree in the Enterprise Manager during query execution will have a much greater effect on performance than how you optimize this field. I agree that standard indexing would have little effect, but I would use a clustered index with the status field as the first element and then your primary where condition field as the second element of the index. The optimizer is unable to overlook this index becuase it is reflected in the table's physicality.|||The optimizer does not overlook boolean indexes because it is lazy, or because it can't use them. It chooses to overlook them when it is more efficient NOT to use them. You could force it to use the index using a query hint, but forcing the use of the index does not necessarily make the query more efficient. The same goes for putting the boolean index at the front of the key.|||Originally posted by dbslave
tinyint will take the least amount of space
Isn't tinyint and char(1) the same size?|||Originally posted by Lindman
The optimizer does not overlook boolean indexes because it is lazy, or because it can't use them. It chooses to overlook them when it is more efficient NOT to use them... The optimizer DOES NOT overlook indexes when it is more efficient not to use them. That decision is based on STATISTICS associated with tables referenced by the query. Don't give too much credit to the Optimizer.
And again, according to Books Online (UPDATED!!!) and personal experience, you can have an index on a bit field. In fact, you can even have a clustered index on a bit field (wouldn't know what for though). The point is that you shouldn't have an index on a field which data is equally split 50/50. Such a field should be part of a composite index to make a difference.
Going back to the original question, whether to have a flag or a replica of the original table for archive purposes...It depends on how many records are going to be archived at a time, and how many records regardless of whether they are archived or not would be retrieved. And also, how wide is the table now.
The decision should depend on these questions because if you plan to archive thousands of rows at a time and the table is wide enough (judgment call, no particulat width is in mind), I'd go with a flag of bit datatype, since updating of 1 field will not kill my transaction log as opposed to inserting into a different table.|||Originally posted by rdjabarov
In fact, you can even have a clustered index on a bit field (wouldn't know what for though). The point is that you shouldn't have an index on a field which data is equally split 50/50. Such a field should be part of a composite index to make a difference.
Logically, yes. Physically, this can be useful for partitioning data.|||Originally posted by Brett Kaiser
Isn't tinyint and char(1) the same size?
Yes, but comparisons on a tinyint are way faster than on a char(1)|||Why would that be?
Got any links that discuss this?
and whats waaaaaaaaaay faster?|||Originally posted by dbslave
Logically, yes. Physically, this can be useful for partitioning data. You're planning to partition 2 records?|||Clustered does not necessarily mean unique.|||Originally posted by blindman
Clustered does not necessarily mean unique. Did you get your "real" Books Online? You have a lot to read...|||USE Northwind
GO
CREATE TABLE myTable99(Col1 int, Col2 int)
GO
CREATE CLUSTERED INDEX myTable99_IX1 ON myTable99(Col1)
GO
INSERT INTO myTable99 SELECT 1,1 UNION ALL SELECT 1,1
GO
SELECT * FROM myTable99
GO
DROP TABLE myTable99
GO
Now say you're sorry...
And I want you all to play nice or it's time out...
Oh, wait, that stuff doesn't start till I get home...
Sorry...|||Hey Brett, I wasn't arguing that clustered does not mean unique! I just don't like "him" butting into other people's conversation, just to make some pointless point (or however "he" put it).|||Oh good heavens, that's exactly why I participate. Having a dozen eyes available helps "keep me honest" because I know that there are many points of view, most of which are probably more "on track" at any given point in time than mine are.
While there are plenty of people that annoy me in one way or another, its still a good thing that they all participate (at least in my opinion). If one person annoys me too badly, I just filter them from my view for a while (until I cool off).
There's a lot of advice that is well meant, and well received that nearly sends me ballistic. I know how badly certain approaches can hammer things up with large (anything between 150 and 1200 Gb) databases, but those approaches often work fine in smaller, simpler environments. I have to keep reminding myself to give appropriate answers based on the questions!
-PatP|||Originally posted by Brett Kaiser
Isn't tinyint and char(1) the same size?
yes they are both 1 byte datatypes but
integer vs char|||I'm with Brett in never having read anything about tinyint being a faster comparison than char(1). White papers?|||boys, this is a non-smoking flight, put your guns down
tiny, small, why not bit? you'd have to have at least 9 of them before it reaches the size of the tinyint. what's the fuss about?|||If tinyint is indeed faster than char(1) then I'll use it in the future.|||oh that's just absurd! how can a datatype be faster than another datatype?|||Originally posted by ms_sql_dba
oh that's just absurd! how can a datatype be faster than another datatype? Way to go, she got you on that one :D :D :D :D :D|||Hey .. where was I when all this was going on ...
hmm ... have been busy lately ... but will be back soon ...|||she?|||Now that you point out .. i was also confused by "She" ...
Hmm .. i m getting confused a lot lately ...|||Originally posted by r937
she? The "ms" can be interpreted as many things,k among them: Microsoft, or as an honoric "Ms." that was quite popular among what were called the "rabidly liberated" in the 1970s. Ms. was used instead of Mrs. or Miss by a very small group of women that didn't want to reveal their marital status. I assume that rdjabarov read it in that context.
-PatP|||She got me?
I never claimed it was faster. The notion that tinyint was faster than char(1) was news to both Brett and I, and we simply asked if there was any documentation to support the assertion.
As usual, ms_sql_dba didn't bother to really read the previous posts. She keeps to her old habit of dropping in at the end of the thread and repeating something somebody else had already written 10 posts before, without adding a shred of value to the conversation.
Nyah, nyah, ya missed me! Ya missed me! PPPbbbbbtttt! :p
Grow up.|||It's got to be a full moon or something...
does ANYONE want to put forth WHY tinyint would be more efficient than char(1)?
I'm here for education (and social interaction, well I guess some types of it...the others type are a watse of time...life is too short)|||Originally posted by ms_sql_dba
boys, this is a non-smoking flight, put your guns down
tiny, small, why not bit? you'd have to have at least 9 of them before it reaches the size of the tinyint. what's the fuss about?
[With the guns blazing]
This is straight out of the Holy book
bit
If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte
tinyint
Integer data from 0 through 255. Storage size is 1 byte.
[/With the guns blazing]
first Get a copy of the Holy Book and then read it from end to end.|||OK, so that means...
1 bit = 1 byte
2 bits = 1 byte
3 bits = 1 byte
4 bits = 1 byte
5 bits = 1 byte
6 bits = 1 byte
7 bits = 1 byte
8 bits = 1 byte
But the values of bit is still 0 or 1 or null (which isn't a value at all)
And has very limited meaning, and I've never seen a good reason to index it..
And Char(1) and tinyint are still both 1 byte
And is it a misconception that 1 of the above is more effecient in an index than the other...|||Originally posted by Brett Kaiser
OK, so that means...
1 bit = 1 byte
2 bits = 1 byte
3 bits = 1 byte
4 bits = 1 byte
5 bits = 1 byte
6 bits = 1 byte
7 bits = 1 byte
8 bits = 1 byte
9 bit = 2 byte
10 bits = 2 byte
11 bits = 2 byte
12 bits = 2 byte
13 bits = 2 byte
14 bits = 2 byte
15 bits = 2 byte
16 bits = 2 byte
and so on ...
so 1 single bit storage space = tiny int storage space|||It was dbslave who originally stated that tinyint was faster, and since we haven't heard anything back from him I'm going to assume that was an error and recommend we drop the inquiry.|||i feel slightly responisble because it was me who brought it up.
but if you are gonna flame someone else then count me in
Mob Rules
string him up...|||Nah, I wasn't stringing him up. Everybody makes mistakes, or has made incorrect assumptions while wading through the masses of documentation for SQL server. I've learned a hell of a lot from this forum.|||Originally posted by blindman
I've learned a hell of a lot from this forum.
Ditto...
Just trying to realize either way...
My gut says no....|||My gut says no....
Thats wierd ... you still got guts|||Originally posted by blindman
She got me?
I never claimed it was faster. The notion that tinyint was faster than char(1) was news to both Brett and I, and we simply asked if there was any documentation to support the assertion.
As usual, ms_sql_dba didn't bother to really read the previous posts. She keeps to her old habit of dropping in at the end of the thread and repeating something somebody else had already written 10 posts before, without adding a shred of value to the conversation.
Nyah, nyah, ya missed me! Ya missed me! PPPbbbbbtttt! :p
Grow up. So, that makes 2 of us, me and ms_sql_dba. At least I am not alone!!!|||Not to jump in and strive for polarity, but there's been nothing I've seen that say that's true...
And are you going to the rainbow lounge?
Make sure you bring a lot of singles...
:D|||Originally posted by Enigma
Thats wierd ... you still got guts I'll bet that I've got more gut than anybody around here!
Somebody made a comment about a rather heated discussion here being an "international incident", and I pointed out that even I'm not that fat (yet).
-PatP|||...Sounds like a call for a pissing contest...|||Originally posted by rdjabarov
...Sounds like a call for a pissing contest... Well, I'd guess that I could "hold my own" (if you'll pardon the term) in that area too!
-PatP|||Try this for yourself. Run test few times to be sure table is in cache. You will be surprised.
create table test(i int identity primary key, a tinyint, b char)
set nocount on
declare @.c int
set @.c = 0
while @.c < 20000 begin
insert test(a,b)
values(0,'0')
insert test(a,b)
values(1,'1')
set @.c = @.c + 1 end
--test scan
declare @.t datetime
set @.t = getdate()
select a
from test
where a = 1
print 'tinyint: ' + cast(datediff(ms,@.t,getdate()) as varchar) + 'ms'
set @.t = getdate()
select b
from test
where b = '1'
print 'char: ' + cast(datediff(ms,@.t,getdate()) as varchar) + 'ms'
--test seek as run above script after adding these indexes:
create index test_a on test(a)
create index test_b on test(b)
Chars were slower before SQL Server version 7.|||I got about 700ms faster on tinyint after creating indexes.|||And about 1600 faster than using char if I replace tinyint with bit...|||Well, in the interests of scientific speculation:
-- CREATE TABLE test_datatype_speed_1(
-- ident INT IDENTITY PRIMARY KEY,
-- tinyint_noindex TINYINT,
-- char1_noindex CHAR(1),
-- bit_noindex BIT,
-- tinyint_index TINYINT,
-- char1_index CHAR(1),
-- bit_index BIT)
--
-- CREATE INDEX idx_test_datatype_speed_1_tinyint ON test_datatype_speed_1(tinyint_index)
-- CREATE INDEX idx_test_datatype_speed_1_char1 ON test_datatype_speed_1(char1_index)
-- CREATE INDEX idx_test_datatype_speed_1_bit ON test_datatype_speed_1(bit_index)
--
-- SET NOCOUNT ON
--
-- DECLARE @c INT
--
-- SELECT @c = 0
--
-- WHILE @c < 20000
-- BEGIN
--
-- INSERT test_datatype_speed_1(tinyint_noindex, char1_noindex, bit_noindex, tinyint_index, char1_index, bit_index)
-- VALUES(0,'0',0,0,'0',0)
-- INSERT test_datatype_speed_1(tinyint_noindex, char1_noindex, bit_noindex, tinyint_index, char1_index, bit_index)
-- VALUES(1,'1',1,1,'1',1)
-- SELECT @c = @c + 1
--
-- END
GO
SET STATISTICS IO ON
--test scan
DECLARE @t DATETIME
--Test tinyint_noindex
SELECT @t = GETDATE()
SELECT tinyint_noindex
FROM test_datatype_speed_1
WHERE tinyint_noindex = 1
PRINT 'tinyint_noindex: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
--Test char1_noindex
SELECT @t = GETDATE()
SELECT char1_noindex
FROM test_datatype_speed_1
WHERE char1_noindex = '1'
PRINT 'char1_noindex: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
--Test bit_noindex
SELECT @t = GETDATE()
SELECT bit_noindex
FROM test_datatype_speed_1
WHERE bit_noindex = '1'
PRINT 'bit_noindex: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
--Test tinyint_index
SELECT @t = GETDATE()
SELECT tinyint_index
FROM test_datatype_speed_1
WHERE tinyint_index = 1
PRINT 'tinyint_index: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
--Test char1_index
SELECT @t = GETDATE()
SELECT char1_index
FROM test_datatype_speed_1
WHERE char1_index = '1'
PRINT 'char1_index: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
--Test bit_index
SELECT @t = GETDATE()
SELECT bit_index
FROM test_datatype_speed_1
WHERE bit_index = '1'
PRINT 'bit_index: ' + CAST(DATEDIFF(ms,@t,GETDATE()) AS VARCHAR) + 'ms'
SET STATISTICS IO OFF
GO
This was the average result on my machine:
Table 'test_datatype_speed_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
tinyint_noindex: 30ms
Table 'test_datatype_speed_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
char1_noindex: 93ms
Table 'test_datatype_speed_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
bit_noindex: 63ms
Table 'test_datatype_speed_1'. Scan count 1, logical reads 30, physical reads 0, read-ahead reads 0.
tinyint_index: 63ms
Table 'test_datatype_speed_1'. Scan count 1, logical reads 30, physical reads 0, read-ahead reads 0.
char1_index: 93ms
Table 'test_datatype_speed_1'. Scan count 1, logical reads 30, physical reads 0, read-ahead reads 0.
bit_index: 60ms
Kind of interesting.
I'm running Windows 2003 Standard Edition/SQL Server 2000 Enterprise Edition/2ghz Xeon/1gb RAM|||I'm running Windows 2003 Standard Edition/SQL Server 2000 Enterprise Edition/2ghz Xeon/1gb RAM
i'm more impressed by your server.
oh yeah nice code too|||OK, cool, I'm up for testing things out...
Anyone got anything from M$ though?
Also, how curious...a non-indexed tinyint is 33 ms faster?
Doesn't make sense...
I bet the results are significantly different when we're talking volumes...|||I just got around to running HansVE's code, and sometimes tinyint was faster and sometimes char was faster. It was about 50/50, so it still looks like a wash to me, at least until I see some kind of documentation or at least some reasoning.
And indexes made no difference.|||How many records are you guys testing with? Also, testing against 1-field index with 1/0 or y/n is going to yield an index scan. And we all know that the number of index pages for all 3 datatypes (char, tinyint, bit) will be the same. In order to test it for practical real-life situations we at least need to agree that such a field needs to be tested with another field with higher selectivity than 50/50.|||Agreed. 50/50 may be a typical distribution of boolean data, but it is not always the case and performance may be different with different cardinality.
90/10?|||No, I was talking about combining this boolean field with another, let's say a 10-character field or a date, while creating an index, and then using this combination of fields in the WHERE clause or in a JOIN. Something like this:
create table t1 (f1 char(10) not null, f2 tinyint not null) -- f2 could equally be char(1) or bit
--here we need to insert a large number of records, about 100K, where f1 would be 10% duplicates and f2 would be 50% dups.
select * from t1 where f1 = <something> and f2 = 'y'/1/1|||If you read the whole post, the entire thing was centered around bit vs. tinyint. Then, the whole char(1) thing got thrown in. So, if we're talking true bit comparison, you should stick with 0 and 1. Otherwise, you're expanding the field to do a full-scale comparison of datatypes.
In this case, you would have to set baselines at all kinds of different selectivity levels, index level (fill-factors, clustered vs. non-clustered, single vs multi-column), scale, precision, datatype, etc.
After you set the baseline, you have just began the real work of analyzing the datatype performance. You would then have to compare across size of system, memory, load in tps, size of recordset per selectivity, etc.
It would be great fun, but it's probably a little out of the scope of the thread isn't it.
:)|||Oh no...A thread that has strayed....
What will happen next?
No more tequila?
AAAAAAAAAAAAAHHHHHHHHHHHHHHHHHHHHHHHHHHHH
:D|||Let the thread keep going as long as it is interesting and informative.
rdjabarov, can you adapt the prevous sample code to test or illustrate your conjecture?|||I was hoping Brett will do his "create mytable99..." thing and I'll tweak it ;)|||Spoken a like a true lazy DBA! (And I was hinting that YOU should do it because I didn't want to.)
In Enterprise Manager, right-click on the Procedures folder, select "New Stored Procedure", and choose the "Ask Brett to do it" button.|||use a cross join
:cool:|||Originally posted by blindman
Spoken a like a true lazy DBA! (And I was hinting that YOU should do it because I didn't want to.)
In Enterprise Manager, right-click on the Procedures folder, select "New Stored Procedure", and choose the "Ask Brett to do it" button.
That's coming out in Yukon
:D|||Let the thread keep going as long as it is interesting and informative.
We better just stick to the interesting part. :)|||Personally, I don't give a rats arse which is faster but I am surprised that the collation of the CHAR data type has not been mentioned.
A binary collation outperforms a non binary collation...
Thanks to Derrick's code and simply adding "COLLATE Latin1_General_BIN " against the CHAR columns reveals...
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
tinyint_noindex: 73ms
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
char1_noindex: 100ms
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 91, physical reads 0, read-ahead reads 0.
bit_noindex: 90ms
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0.
tinyint_index: 80ms
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0.
char1_index: 80ms
Table 'TEST_DATATYPE_SPEED_1'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0.
bit_index: 80ms|||(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 92, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
tinyint_noindex: 230ms
(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 92, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
char1_noindex: 360ms
(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 92, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
bit_noindex: 290ms
(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 31, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
tinyint_index: 183ms
(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 31, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
char1_index: 230ms
(20000 row(s) affected)
Table 'test_datatype_speed_1'. Scan count 1, logical reads 31, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
bit_index: 250ms|||Tried out the following
In Enterprise Manager, right-click on the Procedures folder, select "New Stored Procedure", and choose the "Ask Brett to do it" button.
It simply closed my SQL workbench and shut down the system ...
Weird !!! :D|||That's becasue it engages the margarita request task...
It involves shut down, turn off lights, catch train, fire up blender....
and ok...if we say tinyint IS faster, that's great...
Now add the join back to the code table to obtain the meaningful value for the surrogate key...|||Originally posted by Brett Kaiser
That's becasue it engages the margarita request task...
It involves shut down, turn off lights, catch train, fire up blender....
Take a train ... you must be living quite far from your office ...|||3 stops...can read 1 article...
town shuttle picks me up 3 houses away to the train...monthly pass US$36.00 a month
Company shuttle takes me to work...
reverse on the way home...
transmission blew on the piece o junk chrysler...after they replaced it under warranty...
No payement, No Gas, No Insurance, No hassle
beautiful Newark NJ|||Wish we had commuter rail in Ohio...|||Originally posted by blindman
Wish we had commuter rail in Ohio...
http://www.cota.com/cota/cotaweb/main.xml
Actually it's just proposed
http://www.cotafasttrax.com/nclrt_transit_large.php|||Originally posted by Enigma
Take a train ... you must be living quite far from your office ... I guess that depends on your definition of far...
I can work from a number of places. They range from zero to three hundred Km from home (home itself moves twice a week too, but that's another story).
Working downtown can be ugly. If you do it the hard way, the last 100 Km took significantly longer than the first 200 Km! The commuter rail makes the whole process a lot easier, faster, and safer.
-PatP|||km?
I thought you were in southern Cal...|||I created a monster.
Thank you all for the long, very educational discussion regarding my problem. I think I am all set.|||What about another approach? Instead of using binary information such as an archive flag, use e.g. a datetime attribute for creation and another for the closing or also archiving date/time. Have an index for each. Set the value of creation and close to the same value and exclude in every selection, don't use the NOT argument but the CREATION=CLOSE for active. The index seed will be optimal and you can access only archived or open or both and in addition every range of time. This is how you would manage temporal valid records in one to many relationships, e.g. portfolio constructions or account records which you may correct but not delete for tracability reasons.
Further, if you would need uniqueness you could use a timestamp as well, but then you have to define a second attribute not as a timestamp but as a (say) bigint. The timestamp field is filled automatically, a value cannot be assigned, but you can read it with an insert trigger; the inserted record will have the timestamp value, which you can access while inserting the record and write it to the second attribute, which would not be of type timestamp. This would make the values unique.
If you have to access the archived values very often and you are not happy with the performance, you have to store the latest timestamp used in a meta table. This will allow direct access of the latest and comparison with earlier closed ones.
Tried this with SQL Server 2000 and Sybase System 11, it is very fast.
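A rough sketch of that layout, with made-up names (a Jobs table is assumed from the original question):
CREATE TABLE Jobs (
JobID INT IDENTITY PRIMARY KEY,
-- ...other job columns...
CreatedOn DATETIME NOT NULL,
ClosedOn DATETIME NOT NULL -- set equal to CreatedOn while the job is still open
)
CREATE INDEX IX_Jobs_CreatedOn ON Jobs (CreatedOn)
CREATE INDEX IX_Jobs_ClosedOn ON Jobs (ClosedOn)
-- open (non-archived) jobs
SELECT JobID FROM Jobs WHERE ClosedOn = CreatedOn
-- archived jobs closed in a given range
SELECT JobID FROM Jobs WHERE ClosedOn > CreatedOn AND ClosedOn BETWEEN '20040101' AND '20040201'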
Regards|||News flash! Dead posts are rising from the grave to walk the Earth once again! Authorities urge all citizens to remain calm! Stayed tuned for further instructions...|||mambo,
Thanks for the additional insight ... I am still in the design phase on this project (it got pushed back), so your idea may prove to be useful. I'll check it out. Thanks!|||Wow. Blast to the past. lol Let us know what you finally end up doing Wigz. It's always fun to see 1800 year old posts resurrect themselves.
Database design question
I am working on a Budgeting web based application using ASP.NET 2 and SQL
Server 2000. The budgeting data will build up over a period of time and
these historical data will be used for decision making in future budget.
My questions:
1. what design approach should I use to store the historical data? Data
Mining or Datawarehouse? or other options?
2. Is there any built in tool in sql 2000 to keep track of the audit trail?
If yes, where they are saved?
Thanks for your help!
Hi
The choice of datawarehouse or datamining really needs deciding by analysing
and producing the requirements for the future needs so can't really be
answered with the level of information you have given.
Although SQL Server 2000 has no automatic method of auditing, it is possible
to implement something using triggers and there is a simple example in the
CREATE TRIGGER topic in Books Online. I would recommend that you do the
minimum amount of work required in the trigger and do any
aggregation/formatting... as an offline process. This will reduce the impact of
the trigger on any oltp activity. You can also get third party applications
that implement auditing for you such as Lumigent's auditdb
http://www.lumigent.com/products/auditdb.html
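As a rough sketch of that kind of trigger (the Budget table and its columns here are made up purely for illustration):
CREATE TABLE BudgetAudit (
AuditID INT IDENTITY PRIMARY KEY,
BudgetID INT NOT NULL,
OldAmount MONEY NULL,
NewAmount MONEY NULL,
ChangedBy SYSNAME NOT NULL DEFAULT SUSER_SNAME(),
ChangedOn DATETIME NOT NULL DEFAULT GETDATE()
)
GO
CREATE TRIGGER trg_Budget_Audit ON Budget -- Budget/Amount are hypothetical names
AFTER UPDATE
AS
-- keep the trigger light: just record the before/after values
INSERT INTO BudgetAudit (BudgetID, OldAmount, NewAmount)
SELECT d.BudgetID, d.Amount, i.Amount
FROM deleted d
JOIN inserted i ON i.BudgetID = d.BudgetID
GO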
HTH
John
"Mindy" wrote:
> Hi,
> I am working on a Budgeting web based application using ASP.NET 2 and SQL
> Server 2000. The budgeting data will build up over a course of period and
> these historical data will be used for decision making in future budget.
> My questions:
> 1. what design approach should I use to store the historical data? Data
> Mining or Datawarehouse? or other options?
> 2. Is there any built in tool in sql 2000 to keep track of the audit trail?
> If yes, where they are saved?
> Thanks for you help!
>
|||Thanks for the quick response. Can you send me the link to the online book on
Create Triggers topic?
"John Bell" wrote:
> Hi
> The choice of datawarehouse or datamining really need deciding by analysing
> and producing the requirements for the future needs so can't really be
> answered with the level of information you have given.
> Although SQL Server 2000 has no automatic method of auditing, it is possible
> to implement something using triggers and there is a simple example in the
> CREATE TRIGGER topic in Books Online. I would recommend that you do the
> mimimum amount of work require in the trigger and do any
> aggregation/formatting... as a ofline process. This will reduce the impact of
> the trigger on any oltp activity. You can also get third party applications
> that implement auditing for you such as Lumigent's auditdb
> http://www.lumigent.com/products/auditdb.html
> HTH
> John
>
> "Mindy" wrote:
|||You can download Books Online from here:
SQL Server Books Online
2005 -
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
2000 -
http://www.microsoft.com/downloads/details.aspx?familyid=a6f79cb1-a420-445f-8a4b-bd77a7da194b&displaylang=en
Then search for CREATE TRIGGER...
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"Mindy" <Mindy@.discussions.microsoft.com> wrote in message
news:49F456EE-FBC0-46AD-8257-089E6887AAFD@.microsoft.com...
> Thanks for the quick response. Can you send me the link to the online book
> on
> Create Triggers topic?
>
> "John Bell" wrote:
|||Hi
If you don't want to download books online check out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnanchor/html/sqlserver.asp?frame=true
John
"Mindy" wrote:
> Thanks for the quick response. Can you send me the link to the online book on
> Create Triggers topic?
>
> "John Bell" wrote:
> > answered with the level of information you have given.
> >
> > Although SQL Server 2000 has no automatic method of auditing, it is possible
> > to implement something using triggers and there is a simple example in the
> > CREATE TRIGGER topic in Books Online. I would recommend that you do the
> > mimimum amount of work require in the trigger and do any
> > aggregation/formatting... as a ofline process. This will reduce the impact of
> > the trigger on any oltp activity. You can also get third party applications
> > that implement auditing for you such as Lumigent's auditdb
> > http://www.lumigent.com/products/auditdb.html
> >
> > HTH
> >
> > John
> >
> >
> > "Mindy" wrote:
> >
> > > Hi,
> > >
> > > I am working on a Budgeting web based application using ASP.NET 2 and SQL
> > > Server 2000. The budgeting data will build up over a course of period and
> > > these historical data will be used for decision making in future budget.
> > >
> > > My questions:
> > > 1. what design approach should I use to store the historical data? Data
> > > Mining or Datawarehouse? or other options?
> > >
> > > 2. Is there any built in tool in sql 2000 to keep track of the audit trail?
> > > If yes, where they are saved?
> > >
> > > Thanks for you help!
> > >
Friday, February 24, 2012
Database design help
sql 2000 database and asp.
This is a call center environment where agents bid on shifts (schedules), ranked from most desirable to least, and the bids are processed based on seniority.
well, let's see, you'll need an agent table, a shift table, and a bid table|||Don't forget the junction tables...
What you really should do is interview the users and the data source providers and model the business...
THEN create the logical data model after all the Entities and attributes have been defined....|||Identifying and introducing properties is an on-going process, but of course the earlier the better ;)|||Thanks guys
i will let you guys knows as it goes.
The biggest issue I'm looking at so far is automating the process of assigning shifts to agents based on seniority. Agents with higher seniority get their highest-ranked shifts, and so on. Also, each shift has limited availability, e.g. shiftA will only be given to 20 agents, and so on.
Seniority is a unique number already assigned to an agent. I want to use stored procedures or functions to do that manipulation if possible, after all the data has been collected.|||Need to be careful with the "t" and "f" keys. It sounds like the agents queue up at the loo based on seniority, and only the first 20 agents get the best umm ... waste matter !!
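A rough sketch of how that assignment pass could look, purely as a starting point. All table, column and procedure names here are assumptions for illustration (and lower Seniority is assumed to mean more senior), not something defined in the thread:

CREATE TABLE Agent (
    AgentID   INT PRIMARY KEY,
    Seniority INT NOT NULL UNIQUE   -- assumption: lower number = more senior
)
CREATE TABLE Shift (
    ShiftID  INT PRIMARY KEY,
    Capacity INT NOT NULL           -- e.g. 20 agents for shift A
)
CREATE TABLE Bid (
    AgentID    INT NOT NULL REFERENCES Agent(AgentID),
    ShiftID    INT NOT NULL REFERENCES Shift(ShiftID),
    Preference INT NOT NULL,        -- 1 = most desirable
    PRIMARY KEY (AgentID, ShiftID)
)
CREATE TABLE Assignment (
    AgentID INT PRIMARY KEY REFERENCES Agent(AgentID),
    ShiftID INT NOT NULL REFERENCES Shift(ShiftID)
)
GO
CREATE PROC usp_AssignShifts
AS
DECLARE @AgentID INT, @ShiftID INT
-- Walk the agents from most to least senior
DECLARE agent_cur CURSOR FOR
    SELECT AgentID FROM Agent ORDER BY Seniority
OPEN agent_cur
FETCH NEXT FROM agent_cur INTO @AgentID
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ShiftID = NULL
    -- The agent's highest-ranked shift that still has capacity left
    SELECT TOP 1 @ShiftID = b.ShiftID
    FROM Bid b
    JOIN Shift s ON s.ShiftID = b.ShiftID
    WHERE b.AgentID = @AgentID
      AND s.Capacity > (SELECT COUNT(*) FROM Assignment a WHERE a.ShiftID = b.ShiftID)
    ORDER BY b.Preference
    IF @ShiftID IS NOT NULL
        INSERT INTO Assignment (AgentID, ShiftID) VALUES (@AgentID, @ShiftID)
    FETCH NEXT FROM agent_cur INTO @AgentID
END
CLOSE agent_cur
DEALLOCATE agent_cur
GO

This would run once after bidding closes; ties within the same seniority, and agents whose every bid is already full, would still need business rules of their own.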
Database design for status field
Hello,
I have a database design question. I want to store a status in a table. In the ASP.NET interface for the user this status will be adapted to the language of the user.
This is how it would look like:
StatusID 1 = "yes" for English, "oui" for French, "ja" for Dutch
StatusID 2 = "no" for English, "non" for French, "neen" for Dutch
StatusID 3 = "error" for English, "erreur" for French, "fout" for Dutch
I don't want to do the translation in ASP.NET and it should be done in one query.
How would it look like in the database design.
WimVM, how about this,
if you are asking about how to design a table or tables, you could have one table (status) like this:
statusID is your primary key, plus three additional fields (english, french, dutch)
so for instance if you have an english speaker your "yes" would be returned by
DECLARE @status INT
SET @status = (getYourIdFromApp)
SELECT english FROM status WHERE statusId = @status
hope this is helpful -- jp|||
Hello,
Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.
I would need a way that gives me more flexibility when I add or disable a language. In the above example you need to know which column corresponds to which LanguageID. Sorry for not informing you about this in the first place.
Thanks
|||
WimVM:
Hello,
Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.
I would need a way that I have more flexibility when I add or disable a language. In the above example you need to know the position of the value linked to the value. Sorry for not informing you about this in the first place.
Thanks
I still have not found a solution for this. It would be great if somebody could help me with this. Thanks.
|||
WimVM:
Thanks for your reply. This would indeed do it, nice and easy. What I forgot to say was that the language is set by an ID, LanguageID. This holds the value 1 for English, 2 for Dutch, 3 for French and 4 for German.
I would need a way that I have more flexibility when I add or disable a language. In the above example you need to know the position of the value linked to the value. Sorry for not informing you about this in the first place.
Hi Wim, I have some suggestions.
Add a new table to your database called Languages. This table will have 2 fields: LanguageID and LanguageName. The data would look like this:
LanguageID LanguageName
1 English
2 Dutch
3 French
4 German
Add another table called Statuses. This table will have 2 fields: StatusID and StatusName. This table would not necessarily be used in queries, but could be used to store metadata about the Statuses. The data would look like this:
StatusID StatusName
1 Yes Status
2 No Status
Add another table called StatusTranslation. This table will have 3 fields: StatusID, LanguageID, and StatusTranslation. The data would look like this:
StatusID LanguageID StatusTranslation
1 1 yes
2 1 no
1 2 ja
2 2 neen
1 3 oui
2 3 non
1 4 ja
2 4 nein
Now, assuming your main data table is called MyTable. This table might look something like this:
ID Column1 Column2 UserLanguageID StatusID
1 Value1 Value2 2 1
2 SomeValue SomeValue2 1 2
3 Value1 Value2 3 1
The query to pull out the status data, translated for the user's language, would look like this:
SELECT
    myTable.ID,
    StatusTranslation.StatusTranslation
FROM
    myTable
INNER JOIN
    StatusTranslation ON myTable.StatusID = StatusTranslation.StatusID
                     AND myTable.UserLanguageID = StatusTranslation.LanguageID
And the result of that query would be:
ID StatusTranslation
1 ja
2 no
3 oui
This has made some assumptions, such as that you have the UserLanguageID in the same table as the StatusID. You'd need to adjust the queries and data tables to fit your situation, but perhaps this will help get you started.
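If it helps, here is a minimal sketch of the three lookup tables described above, using the names from this reply; the data types and sizes are assumptions you would adjust to your own situation:

CREATE TABLE Languages (
    LanguageID   INT PRIMARY KEY,
    LanguageName NVARCHAR(50) NOT NULL
)
CREATE TABLE Statuses (
    StatusID   INT PRIMARY KEY,
    StatusName NVARCHAR(50) NOT NULL
)
CREATE TABLE StatusTranslation (
    StatusID          INT NOT NULL REFERENCES Statuses(StatusID),
    LanguageID        INT NOT NULL REFERENCES Languages(LanguageID),
    StatusTranslation NVARCHAR(50) NOT NULL,
    PRIMARY KEY (StatusID, LanguageID)
)

Adding or disabling a language then only means adding or removing rows in Languages and StatusTranslation; no table structure changes are needed.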
For some background, you might like Alister Jones' (SomeNewKid) blog post: Localizing the Content, which leads to Karl Seguin's article: Creating multilingual websites - Part 2.
Database design before starting
I would appreciate any help or suggestions on starting with database design for an asp.net site.
I know this question maybe sounds silly but I feel stuck and don't know where to start.
Yours sincerely
Andla
This is a very broad question.
First of all I suggest you think through what you want to accomplish with your websites, what kind of entities you'll have (e.g. customers, products, etc.) and what properties they'll have (like phone number, price, etc.)
After that you can move forward and think of the possible workflows of your webapplication, eg. purchasing a product (which would result in an order entity, with an address property)
As I said before, your question is kind of vague, hope this helped a bit..|||For some good grounding in relational database design theory, have a read of:
http://r937.com/relational.html
If you want more specific help then describe what kind of data you are anticipating or wanting to store in the database and, indeed, what database product you are using.
Sunday, February 19, 2012
Database Design
I am designing a WEB BASED Accounting Software with ASP and SQL Server. For this I need some help for the Database design. My design
is as follows.
I)User table: User_id, UserName....
Users (e.g. John Smith). Each User would contain the following group of
tables
a)Customers
b)Suppliers
c)Bank Accounts
d)Transactions
Tables under :
User_FinYear_Customers (e.g JohnSmith_02_03_Customers)
User_FinYear_Suppliers (e.g JohnSmith_02_03_Suppliers)
User_FinYear_BankAccounts (e.g JohnSmith_02_03_BankAccounts)
User_FinYear_Transactions (e.g JohnSmith_02_03_Transactions)
As a new user is created, all the above tables are created at run time.
These tables are created for each and every user. There can be more
than 4 tables (as mentioned above) for one user. These tables will
increase as more users are added. The only thing in support of this design
is that the record-fetching time for a particular user would be
minimal, and only that particular user's tables would be loaded into memory.
IS IT FEASIBLE TO CREATE ABOUT 20 TABLES FOR EACH NEW USER ADDED TO
THE DATABASE? WHICH MEANS IF THERE ARE 1000 USERS THERE WOULD BE 20000
TABLES IN THE DATABASE. THIS CASE CAN GO WORSE IF THERE ARE MORE THAN
1000 USERS. WHAT IS BETTER DATABASE DESIGN, MORE TABLES WITH LESS
RECORDS OR LESS TABLES WITH MORE NO.OF RECORDS?
An alternative design can be as follows
Tables:
Users, Customers, Suppliers, BankAccounts, Transactions ....and so
on.
User: User_Id, UserName, .....
Customers: User_Id, Customer_Id,.....
Suppliers: User_Id, Supplier_Id,....
BankAccounts: User_Id, BankAc_Id,....
Transactions: User_Id, Trans_Id.....
..
..
..
..
All these tables would be created at the design time only and as a new
user is created a record is added to the users table. When the user
adds Customer the record is added to the Customers table... and so
on... The problem with this design is that the Customers, Suppliers,
BankAccounts... etc. tables would contain records for all the users
and thus the record fetching time for a particular user increases as
many times as there are users in the Database. Another problem with
this design is that more than one user connected at run time
will access the same tables, and even for a single user the complete
table will be loaded in memory.
WHICH DESIGN SHOULD BE USED AS FAR AS SPEED OF SERVER IS CONCERNED?
PLEASE HELP WITH CONVINCING REASONS.
Clearly #2: less maintenance, and if you ever need overall queries or a customer
gets combined it should be a lot easier. Also, use Oracle, not the other one,
since this is an Oracle newsgroup.
Jim
"Rushikesh" <rbaiwar@.sify.com> wrote in message
news:2b29906c.0307142123.1d020801@.posting.google.c om...
> I am designing a WEB BASED Accounting Software with ASP and SQL
> Server. For this I need some help for the Database design. My design
> is as follows.
>
> I)User table: User_id, UserName....
> Users (e.g. John Smith) Each User would contain a following Group of
> tables
> a)Customers
> b)Suppliers
> c)Bank Accounts
> d)Transactions
>
> Tables under :
> User_FinYear_Customers (e.g JohnSmith_02_03_Customers)
> User_FinYear_Suppliers (e.g JohnSmith_02_03_Suppliers)
> User_FinYear_BankAccounts (e.g JohnSmith_02_03_BankAccounts)
> User_FinYear_Transactions (e.g JohnSmith_02_03_Transactions)
> As new user is created all the above tables are created at run time.
> These tables are created for each and every user. There can be more
> than 4 tables (as mentioned above) for one user. These tables will
> increase as more users are added. Only thing in support of this design
> is that, the record fetching time for a particular user would be
> minimum and the table for a particular user will only load in Memory.
> IS IT FEASIBLE TO CREATE ABOUT 20 TABLES FOR EACH NEW USER ADDED TO
> THE DATABASE? WHICH MEANS IF THERE ARE 1000 USERS THERE WOULD BE 20000
> TABLES IN THE DATABASE. THIS CASE CAN GO WORSE IF THERE ARE MORE THAN
> 1000 USERS. WHAT IS BETTER DATABASE DESIGN, MORE TABLES WITH LESS
> RECORDS OR LESS TABLES WITH MORE NO.OF RECORDS?
>
> An alternative design can be as follows
> Tables:
> Users, Customers, Suppliers, BankAccounts, Transactions ....and so
> on.
> User: User_Id, UserName, .....
> Customers: User_Id, Customer_Id,.....
> Suppliers: User_Id, Supplier_Id,....
> BankAccounts: User_Id, BankAc_Id,....
> Transactions: User_Id, Trans_Id.....
> .
> .
> .
> .
> All these tables would be created at the design time only and as a new
> user is created a record is added to the users table. When the user
> adds Customer the record is added to the Customers table... and so
> on... The problem with this design is that Customers,Suppliers,
> BankAccounts... etc tables would contain records for all the users
> and thus the record fetching time for a particular user increases as
> many times as there are users in the Database. Another problems with
> this design is that more than one user would be connected at run time
> will access the same tables, and for even a single user the complete
> table will be loaded in memory.
> WHICH DESIGN SHOULD BE USED AS FAR AS SPEED OF SERVER IS CONCERNED?
> PLEASE HELP WITH CONVINCING REASONS.|||"Rushikesh" <rbaiwar@.sify.com> wrote in message
news:2b29906c.0307142123.1d020801@.posting.google.c om...
> I am designing a WEB BASED Accounting Software with ASP and SQL
> Server. For this I need some help for the Database design. My design
> is as follows.
You may need a lot more than just these tables.
> and thus the record fetching time for a particular user increases as
> many times as there are users in the Database.
Oh no, it doesn't!
> will access the same tables, and for even a single user the complete
> table will be loaded in memory.
It is ridiculous if you write your code to do that.
> WHICH DESIGN SHOULD BE USED AS FAR AS SPEED OF SERVER IS CONCERNED?
> PLEASE HELP WITH CONVINCING REASONS.
Second.
Read a few texts about database design and normalization.
--
Cheers
Nuno Souto
wizofoz2k@.yahoo.com.au.nospam|||Rushikesh (rbaiwar@.sify.com) writes:
> I am designing a WEB BASED Accounting Software with ASP and SQL
> Server. For this I need some help for the Database design. My design
> is as follows.
As to whether you should use SQL Server or Oracle, I don't have an opinion.
I come from the SQL Server side, but these questions have the same answer
for any enterprise DBMS.
> I)User table: User_id, UserName....
> Users (e.g. John Smith) Each User would contain a following Group of
> tables
> a)Customers
> b)Suppliers
> c)Bank Accounts
> d)Transactions
This is a completely unacceptable solution, and in complete violation
of the relational model. Just forget about it.
> All these tables would be created at the design time only and as a new
> user is created a record is added to the users table. When the user
> adds Customer the record is added to the Customers table... and so
> on... The problem with this design is that Customers,Suppliers,
> BankAccounts... etc tables would contain records for all the users
> and thus the record fetching time for a particular user increases as
> many times as there are users in the Database. Another problems with
> this design is that more than one user would be connected at run time
> will access the same tables, and for even a single user the complete
> table will be loaded in memory.
Your assumptions here are not entirely correct. Or to be less polite: they
are flat wrong in places.
An enterprise DBMS is built for implementing this kind of solution.
With proper indexes, the difference in access time to a certain row
if you have 100 rows or a million rows in the table is negligible. Or if
you, for that matter, have 100 million rows.
Neither does an enterprise DBMS load an entire table into memory just
because a single row is accessed. I cannot speak for Oracle, but SQL
Server will read the pages you access into memory, and if one user is
very active, all his pages may be in cache, whereas the pages for a user
who is on vacation are only on disk. Pages per user? Ah, didn't I mention
indexes? It does seem reasonable, from what you mentioned, to have clustered
indexes on user ids.
--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
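To make that last point concrete, here is a sketch of one table from the second design with a clustered index on the user id. The User_Id and Customer_Id columns come from the original post; the CustomerName column, the index names and the data types are assumptions for illustration:

CREATE TABLE Customers (
    User_Id      INT NOT NULL,
    Customer_Id  INT NOT NULL,
    CustomerName VARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY NONCLUSTERED (Customer_Id)
)
-- Clustering on User_Id keeps all of one user's customers physically together,
-- so a query for one user touches only that user's pages, no matter how many
-- other users are in the table.
CREATE CLUSTERED INDEX CIX_Customers_User ON Customers (User_Id, Customer_Id)

The same pattern would apply to the Suppliers, BankAccounts and Transactions tables.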
Friday, February 17, 2012
Database Creation
Hello...
I want to develop a web site having two features
1. Online Shopping
2. Forums
I'm using SQL Server, ASP.NET and C#. Now the problem is how do I configure the databases: do I create a new database for each, or merge both into one database? If I create separate databases for each feature then users have to register twice, first for the forums and second for shopping. I don't want to do this...! I want users to register just once.
____________
Thanks in adv
Nauman Ahmed
Creating 2 separate databases will be better.
You can keep the registration info in one DB table and use this DB table for authentication from both applications. So, registration will be one time only
regards
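As an illustration of that suggestion (the database, table and column names here are only assumptions, not from the thread): put the Users table in one of the databases and reference it from the other with a three-part name, so both the shop and the forums authenticate against the same rows.

-- In the Forums database (assumed name)
CREATE TABLE Users (
    UserID       INT IDENTITY(1,1) PRIMARY KEY,
    UserName     NVARCHAR(50) NOT NULL UNIQUE,
    PasswordHash VARBINARY(64) NOT NULL
)
GO
-- In the Shopping database, look the user up in the shared table
-- via a cross-database (three-part) name
CREATE PROC usp_GetUserId
    @UserName NVARCHAR(50)
AS
    SELECT UserID
    FROM Forums.dbo.Users
    WHERE UserName = @UserName
GO

Both applications then validate logins against the same rows, so a user registers only once.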