Well, I've been given the big job of copying all the databases from an old
server to a new server, in order to provide better security, availability,
and performance.
My servers are in a DMZ, so I have to use Remote Desktop/Terminal Services
to connect to them.
1. I have two logical partitions on the server. Is it good practice to
store the OS and the SQL Server software itself on C:\ and all the data on
D:\? Will it help me in any way to achieve better performance? Can I
make separate directories for each database on D:\, and further extend that
into subdirectories for data and log files?
2. Should I copy all objects such as logins, maintenance plans, jobs, etc.
from the old server, or is it better practice to start the plans over
(create new plans) to achieve better results and only copy the databases?
3. What is a good strategy for backup plans? For log files? For primary
files?
4. How do I come up with a good disaster recovery plan? What do you need
to have in order to create a good DR plan? What is a good way to test it?
5. What is the best way to secure SQL Server? Who should have what
access? Which people should have access to the server itself? How can
I give people read-only access to the databases if they have access to the
server? Do they even need access to the server? How can they have only
read access to the SQL Server databases? What tools do I need? Since I
have to use Remote Desktop to connect to the servers, how can I give
clients that just want read access to all the data files, including log
files? What do they need installed, or what do they need to use, to achieve this?
6. Is there any way to come up with a roles scheme for certain users?
Let's say a particular group of users should have certain permissions; can
we create something like that? Does that need to be done at the OS level
rather than the SQL level?
I know this is asking for a lot, but it's really important to me. Your
valuable knowledge on all these issues would be much appreciated.
Thank you all very much
"Shash Goyal" <Shash703@.gmail.com> wrote in message
news:%23J1QB78tEHA.3916@.TK2MSFTNGP10.phx.gbl...
> Well, I've been given the big job of copying all the databases from an old
> server to a new server, in order to provide better security, availability,
> and performance.
> My servers are in a DMZ, so I have to use Remote Desktop/Terminal Services
> to connect to them.
> 1. I have two logical partitions on the server. Is it good practice to
> store the OS and the SQL Server software itself on C:\ and all the data on
> D:\? Will it help me in any way to achieve better performance? Can I
> make separate directories for each database on D:\, and further extend that
> into subdirectories for data and log files?
It really doesn't make a difference performance-wise if they're on the same
physical volume.
However, an argument can be made, for maintenance reasons, to at least put
the databases on the D: drive.
I would not create a separate directory for each DB, though.
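If you do go with the D: drive for everything, a minimal sketch of explicit file placement at creation time (the database name and paths here are illustrative, not from the original post):

```sql
-- Place data and log files explicitly on D: when creating the database
-- (database name and directory paths are hypothetical).
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_data, FILENAME = 'D:\SQLData\Sales_data.mdf' )
LOG ON
    ( NAME = Sales_log,  FILENAME = 'D:\SQLLogs\Sales_log.ldf' )
```

Shared D:\SQLData and D:\SQLLogs directories keep data and log files separated without needing a folder per database.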
> 2. Should I copy all objects such as logins, maintenance plans, jobs, etc.
> from the old server, or is it better practice to start the plans over
> (create new plans) to achieve better results and only copy the databases?
>
"It depends". It really does.
In my recent move, I moved all the logins, etc. There's a KB article on
this.
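One step that usually follows a login move: after restoring the databases on the new server, database users can end up "orphaned" from the recreated logins because the SIDs no longer match. A hedged SQL 2000-era sketch (database and user/login names are hypothetical):

```sql
-- List database users whose SIDs no longer match a server login
USE Sales
EXEC sp_change_users_login 'Report'

-- Remap one orphaned user to the server login of the same name
-- ('webuser' is a hypothetical name)
EXEC sp_change_users_login 'Update_One', 'webuser', 'webuser'
```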
> 3. What is a good strategy for backup plans? For log files? For primary
> files?
>
I prefer to back them up to a NAS via a UNC path. From there to tape is also
recommended. Do it as often as business requirements dictate.
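A minimal sketch of that approach, assuming a full backup plus periodic log backups (the server, share, and database names are made up; the SQL Server service account needs write permission on the share):

```sql
-- Full backup to a NAS share via UNC path, overwriting any prior set
BACKUP DATABASE Sales
TO DISK = '\\nas01\sqlbackup\Sales_full.bak'
WITH INIT

-- Transaction log backup, appended to the log backup file
BACKUP LOG Sales
TO DISK = '\\nas01\sqlbackup\Sales_log.trn'
WITH NOINIT
```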
> 4. How do I come up with a good disaster recovery plan? What do you need
> to have in order to create a good DR plan? What is a good way to test it?
>
First, determine your needs. Are you a 24/7 company expecting 100% uptime?
How much recovery time is allowed? (I.e., if you have to be up and running in
5 minutes, you may go with clustering or log shipping and a lot of additional
cost. If you can wait 5 hours, just restoring from a backup may be OK.)
Again, what are the business needs?
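Whatever plan you pick, test it by actually restoring. The restore-from-backup option might look like this sketch (file and database names match the hypothetical backup example above only as an assumption; the database must not be in use):

```sql
-- Restore the full backup but leave the database able to accept more logs
RESTORE DATABASE Sales
FROM DISK = '\\nas01\sqlbackup\Sales_full.bak'
WITH NORECOVERY

-- Apply the log backup(s), then bring the database online
RESTORE LOG Sales
FROM DISK = '\\nas01\sqlbackup\Sales_log.trn'
WITH RECOVERY
```

Timing a full test restore also tells you your real recovery time, which feeds straight back into the business-needs question.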
> 5. What is the best way to secure SQL Server?
MS has some white papers on this. Ideally, grant as few permissions as
possible.
> Who should have what
> access? Which people should have access to the server itself? And how can
> I give people read-only access to the databases if they have access to the
> server? Do they even need access to the server?
Generally not.
> How can they have only
> read access to the SQL Server databases?
Read up on database roles.
db_datareader may do what you want.
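A sketch of granting read-only access to a Windows group via the fixed db_datareader role (the group and database names are hypothetical; SQL 2000 syntax):

```sql
-- Create a server login for the Windows group, then a database user,
-- then add that user to the fixed read-only database role
EXEC sp_grantlogin 'DOMAIN\ReportReaders'
USE Sales
EXEC sp_grantdbaccess 'DOMAIN\ReportReaders'
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\ReportReaders'
```

This also touches on question 6: a Windows group mapped into a database role gives you a per-group permission scheme without doing anything at the OS level.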
> What tools do I need? Since I
> have to use Remote Desktop to connect to the servers, how can I give
> clients that just want read access to all the data files, including log
> files? What do they need installed, or what do they need to use, to achieve this?
> 6. Is there any way to come up with a roles scheme for certain users?
> Let's say a particular group of users should have certain permissions; can
> we create something like that? Does that need to be done at the OS level
> rather than the SQL level?
>
Well, I can't answer all your questions, but hopefully this gives you a
start.
> I know this is asking for a lot, but it's really important to me. Your
> valuable knowledge on all these issues would be much appreciated.
> Thank you all very much
>
|||For your answer to question #1, what's the reason you should not create a
separate directory for each DB?
As far as how critical the DBs are: the server holds all the data for
different websites, so I guess they're pretty critical. So what's the most
cost-effective DR plan we can establish?
Thanks for all your help so far
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:vQ_dd.313544$bp1.178867@.twister.nyroc.rr.com. ..[vbcol=seagreen]
> "Shash Goyal" <Shash703@.gmail.com> wrote in message
> news:%23J1QB78tEHA.3916@.TK2MSFTNGP10.phx.gbl...
availabilty,[vbcol=seagreen]
services[vbcol=seagreen]
to[vbcol=seagreen]
i[vbcol=seagreen]
> extending
> It really doesn't make a difference performance wise if they are the same
> physical volume.
> However, an argument can be made for maintenance to at least put the
> databases on the D: drive.
> I would not do a separate directory for each DB though.
>
> the
> (create
> "It depends". It really does.
> In my recent move, I moved all the logins, etc. There's a KB article on
> this.
Primary[vbcol=seagreen]
> I prefer to back them up to a NAS via a UNC. From there to tape is also
> recommended. Do as often as business requirements dictate.
the
> good
> First, determine your needs. Are you a 24/7 company expecting 100%
uptime.
> How much recovery time is allowed. (i.e. if you have to be up and running
in
> 5 minutes you may go with clustering or log-shipping and a lot of
additional[vbcol=seagreen]
> cost. If you can wait 5 hours, just restoring from a backup may be ok.)
> Again, what are the business needs?
>
> MS has some white papers on this. Ideally give as little permissions as
> possible.
> can
the[vbcol=seagreen]
> Generalyl not.
>
> Read up on DB Roles.
> DBdatareader may work for what you want.
log[vbcol=seagreen]
users?
> Can
> Well, I can't answer all your questions, but hopefully this gives you a
> start.
>
>
|||"Shash Goyal" <Shash703@.gmail.com> wrote in message
news:e$fJvu%23tEHA.3476@.TK2MSFTNGP14.phx.gbl...
> For your answer of question #1 whats the reason that you should not create
a
> separate directory for each DB?
No need in my book. Just extra path info to type, etc.
> As far as how critical the DBs are: the server holds all the data for
> different websites, so I guess they're pretty critical. So what's the most
> cost-effective DR plan we can establish?
>
Again, what's the cost if the sites go down?
I deal with sites where downtime is measured in thousands of dollars per
minute. Even then it was hard to justify a clustered server configuration.
(Which can run $50K and up. Since the list price for SQL Server 2000
Enterprise Edition is ~$20K per CPU license, it gets expensive very quickly.)
Before that, I had log shipping. It still required two servers, but I didn't
need a SAN or SQL 2000 EE licenses.
At one point the plan was simply, "make sure the hardware is really, really
robust."
So, again, how much can you pay?
As a consultant I could design plans costing anywhere from next to nothing
to $250K. It all depends on what a client needs and is willing to pay.
The usual test is to ask your business team, "What service level agreement
do I need to provide?" Then go away, figure out how much it will cost, and
go back to them. I find folks generally get much more realistic about their
needs very quickly.
(I.e., they may say, "We want 99.999% uptime, guaranteed." You come back
with a $250K price tag, and all of a sudden 99% uptime, which you can do
for, say, $25K, is MUCH more palatable to them.) :-)