
Friday, March 30, 2012

How to control DB size growth?

Hello,
My database is 1 GB and it has so far one table with 1 million records.
Once I tried to create the primary key in this table but it was rejected by
MS SQL since it was not unique. That's fine with me, but I noticed that my
database doubled in size to just over 2 GB. Then I created a clustered index
and the size of the file was around 3 GB. When I deleted the newly created
index, the size of the file stayed the same, almost reaching the limits that
I set up before. I would never have expected that my database would grow so
fast, since the same data in the Access database is just 450 MB.
Why does the data grow when the key or index fails to be created?
How do I control the size of the database?
Any help is greatly appreciated,
Les
> Why does the data grow when the key or index fails to be created?
It is growing because SQL Server is creating the index up to the point of
failure. Of course the creation takes some disk space. After the failure it
cleans up the data from the failed attempt, but by default it does not shrink
the database file. So the file stays as big as it was at the point of failure.

> How do I control the size of the database?
The best way to control it is to set a maximum size, but then you should set
an alert to monitor the size and notify you when the database is nearly full.
Best way to see how much space your data is taking is to set a TaskPad view
in Enterprise Manager.
(Select Database, go to menu View -> TaskPad).
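A minimal T-SQL sketch of that advice (the database and logical file names
below are placeholders, so adjust them to your own setup):

USE MyDb
GO
-- Cap the data file so it cannot grow past 2 GB (use your own logical file name).
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, MAXSIZE = 2048MB)
GO
-- See how much space the tables and indexes are actually using.
EXEC sp_spaceused
GO
-- Optionally reclaim the space left behind by the failed index build.
DBCC SHRINKFILE (MyDb_Data, 1024)  -- target size in MB
GO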
Danijel
"Tom" <tom@.killspam.com> wrote in message
news:UjSFd.48178$TN6.1797082@.news20.bellglobal.com ...
> Hello,
> My database is 1 GB and it has so far one table with 1 million records.
> Once I tried to create the primary key in this table but it was rejected
> by MS SQL since it was not unique. That's fine with me, but I noticed that
> my database doubled in size to just over 2 GB. Then I created a clustered
> index and the size of the file was around 3 GB. When I deleted the newly
> created index, the size of the file stayed the same, almost reaching the
> limits that I set up before. I would never have expected that my database
> would grow so fast, since the same data in the Access database is just 450
> MB.
> Why does the data grow when the key or index fails to be created?
> How do I control the size of the database?
> Any help is greatly appreciated,
> Les
>

how to continue on error

How can I cause my insert statement to skip over (without failing) rows
where there's a primary key constraint violation?

I've got a case where daily I insert >500k rows of daily data, where
the data is date and time stamped. So, for example, I have an insert
statement with constraint: WHERE date >= '5/20/05' AND date <
'5/21/05'. That takes care of one day's data (5/20).

However, the next day's data (5/21) will still have some time stamps
from the previous day. Therefore the statement needs to be something
like WHERE date >= '5/20/05' AND date <= '5/21/05'. The 5/20 data is
already loaded but I need to take the 5/21 data which just happens to
contain just a few rows of data marked 5/20 and insert it without
generating a primary key error from all the other 5/20 rows that are
already inserted.

-Dave

INSERT INTO TargetTable (key_col, ...)
SELECT S.key_col, ...
FROM SourceTable AS S
LEFT JOIN TargetTable AS T
ON S.key_col = T.key_col
WHERE T.key_col IS NULL
AND S.date >= '20050520' AND date < '20050522'

--
David Portas
SQL Server MVP
--|||The easy way is to limit the insert query to 11:59:59 of the previous
day. Then you tell your users, "this report contains all the data from
yesterday." In fact, if you're doing a report of some kind, this is
really the best way to do it because otherwise, you have incomplete
(and therefore bad) data for the current day.

Another way is to delete yesterday's data right before you run the
insert.|||Should the join run very slowly? If I do the insert with a standard
insert query it takes about 7 minutes. With the join query it runs and
doesn't seem to be able to finish. If I run the query on dates with no
data it finishes ok. Is my join incorrect since I can't use S.keyrow?

insert into final(keyRow, cell, recordDate, high_set )

SELECT CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20)+'|'+CONVERT(CHAR(3), S.bts_id)+'-'+CONVERT(CHAR(1),
S.sector_id)+'-'+CONVERT(CHAR(3), S.carrier_id) AS keyRow,
(CONVERT(CHAR(3), S.bts_id)+'-'+CONVERT(CHAR(1),
S.sector_id)+'-'+CONVERT(CHAR(3), S.carrier_id)) AS cell,
CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20)as recordDate, SUM(S.high_set_int) AS high_set

from SourceTable AS S
LEFT JOIN TargetTable AS T
ON keyRow = T.keyRow

WHERE T.keyRow IS NULL
AND S.record_date >= '5/06/2005' AND S.record_date < '5/07/2005' AND
convert (char(8), S.access_time,108) != '00:00:00'

GROUP BY CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20),
CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20)+'|'+
CONVERT(CHAR(3), S.bts_id)+'-'+CONVERT(CHAR(1),
S.sector_id)+'-'+CONVERT(CHAR(3), S.carrier_id)

ORDER BY CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20),
CONVERT(CHAR(16),dateadd(hh,datepart(hh, S.access_time),
S.record_date),20)+'|'+CONVERT(CHAR(3), S.bts_id)+'-'+
CONVERT(CHAR(1), S.sector_id)+'-'+CONVERT(CHAR(3), S.carrier_id),
S.cell|||
christopher.secord@.gmail.com wrote:
> The easy way is to limit the insert query to 11:59 59 of the previous
> day. Then you tell your users, "this report contains all the data from
> yesterday" In fact, if you're doing a report of some kind, this is
> really the best way to do it because otherwise, you have incomplete
> (and therefore bad) data for the current day.

Yes, I agree but the way the data is generated results in "today's"
data flat file containing some of yesterday's data. So although 99% of
yesterday's data is already in the db, the last little bit needs to be
added for completeness. It's not that nearly all users can't use
the 99% data for their purposes, but still the missing 1% needs to be
added for later complete, accurate reports.

> Another way is to delete yesterday's data right before you run the
> insert.

This puts the problem back 1 day because I would still need to add
yesterday's data which is in its own flat file which contains data from
the day before yesterday.

-David|||David Portas wrote:
> INSERT INTO TargetTable (key_col, ...)
> SELECT S.key_col, ...
> FROM SourceTable AS S
> LEFT JOIN TargetTable AS T
> ON S.key_col = T.key_col
> WHERE T.key_col IS NULL
> AND S.date >= '20050520' AND date < '20050522'

I'm thinking maybe the best thing to do is add another column to my
table that uniquely identifies the data from a particular day. Some of
the data from the particular flat file will be from the day before but
it won't matter because I'll use the new field in the where criteria
instead of the actual record dates.

Also thought about using NOT EXISTS somehow.

-Dave|||Make sure you have indexes on the columns that are being joined.|||(wireless200@.yahoo.com) writes:
> Should the join run very slowly? If I do the insert with a standard
> insert query it takes about 7 minutes. With the join query it runs and
> doesn't seem to be able to finish. If I run the query on dates with no
> data it finishes ok. Is my join incorrect since I can't use S.keyrow?

I don't understand that last question. What do you mean, you cannot
use S.keyrow?

A clustered index on S.record_date would be a good thing.

I would also replace the LEFT JOIN with NOT EXISTS. Not because this
is faster, but because it expresses what you mean.

Does the target table have an IDENTITY column? Else there is no reason at
all to have the ORDER BY clause. Removing that could also gain some
performance.
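
Roughly what that might look like for the pattern in this thread (a sketch
only: it reuses the SourceTable/TargetTable names from earlier in the thread,
while the some_value column, the index name, and the date range are made up):

-- A clustered index on the date column supports the range filter.
CREATE CLUSTERED INDEX ix_SourceTable_record_date ON SourceTable (record_date)
GO
-- Insert only the rows whose key is not already present, using NOT EXISTS.
INSERT INTO TargetTable (key_col, some_value)
SELECT S.key_col, S.some_value
FROM SourceTable AS S
WHERE S.record_date >= '20050506' AND S.record_date < '20050507'
  AND NOT EXISTS (SELECT * FROM TargetTable AS T WHERE T.key_col = S.key_col)
GO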

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Monday, March 12, 2012

How to configure Logshipping across the two forests

Hi,
I have configured Log shipping in my organisation which has two forests. One
forest has primary server and monitor server while the other forest has
secondary server. While configuring the Log shipping, read only database was
created successfully on the secondary server. Primary server is taking
transaction log backup on the specified time interval but it is not being
restored on the secondary server. What is the problem? How can I proceed?
Please guide me.
With regards,
Rajendra
You mentioned the secondary server is set to read only. Are there users in
there while you're trying to restore logs? If so, then the database is
already in a fully recovered state. From books online:
"It is not possible to apply a transaction log backup:
Unless the database or differential database backup preceding the
transaction log backup is restored first.
Unless all preceding transaction logs created since the database or
differential database were backed up are applied first.
If the database has already recovered and all outstanding transactions
have either been rolled back or rolled forward."
If you're trying to create a secondary reporting database, you might want to
try using replication instead. Depending on how you set it up, the secondary
data set can be kept more "up to date", and users can get in and out. Of
course, the database can't be read only, as replication will be writing to
it, but you can set all the users in the db_DataReader database role.
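If you go the replication route, read-only access is just a matter of role
membership; a sketch with placeholder names (the role is spelled db_datareader):

USE SecondaryReportingDb  -- placeholder database name
GO
EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'ReportUser'
GO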
"Rajendra" wrote:

> Hi,
> I have configured Log shipping in my organisation which has two forests. One
> forest has primary server and monitor server while the other forest has
> secondary server. While configuring the Log shipping, read only database was
> created successfully on the secondary server. Primary server is taking
> transaction log backup on the specified time interval but it is not being
> restored on the secondary server. What is the problem? How can I proceed?
> Please guide me.
> With regards,
> Rajendra
|||Users Can be in the database on a secondary server if the
database has been restored using STANDBY which is an
option in the maintenance plan wizard. This is why there
is also the option to remove the users before a log
restore.
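For reference, a standby restore looks roughly like this (the database name
and file paths are placeholders):

-- Applies a log backup but keeps the database readable between restores.
RESTORE LOG MyDb
FROM DISK = 'D:\LogShip\MyDb_tlog.bak'
WITH STANDBY = 'D:\LogShip\MyDb_undo.dat'
GO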
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Hi friends,
Thanx for the reply. But I was asking whether it is possible to configure
log shipping across the forests or not? I think the share name across the
forests is creating some problem, because I am not able to connect to SQL
Server by server name if the forest is other than the forest where my
computer resides. That's why I am connecting to SQL Server using the IP
address.
With regards
"Rajendra" wrote:

> Hi,
> I have configured Log shipping in my organisation which has two forests. One
> forest has primary server and monitor server while the other forest has
> secondary server. While configuring the Log shipping, read only database was
> created successfully on the secondary server. Primary server is taking
> transaction log backup on the specified time interval but it is not being
> restored on the secondary server. What is the problem? How can I proceed?
> Please guide me.
> With regards,
> Rajendra
|||Rajendra,
do you have a forest trust? If so, this should be possible. Can you see the
share at all? If not, perhaps you need to edit the hosts file for it to be
accessible.
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

How to configure distributor thread for being up while replicated database is down

Hi
I often deal with a problem when the replicated database is down due to an electric outage or any other scheduled type of downtime. The primary database is up, but the distribution agent goes down 'after 10 attempts to connect to replicated database'. That's the problem, I think, because not only does the distribution agent go into an undetermined state so that I have to restart the SQL Agent service, but I also need to go through the whole process of taking a snapshot and applying it to the replicated db. I would like it to accumulate transactions in the distribution database and, once the replicated db is up, distribute them.
I wonder if anybody can tell me how to configure the distribution thread so it does not go down after 10 unsuccessful attempts but stays up for a day or two.
Thank you, Gene.

Gene, why do you need to re-apply the snapshot, is your retention period very small? By default it's 72 hours, so if distribution agent can get back up within that time period, it should just pick up where it left off.

Unfortunately if your distribution agent is running in Continuous mode (which you mention since it goes down after 10 unsuccessful attempts), the workaround would be to change it so that the agent runs on a scheduled interval, like every minute. This way, in case the machine gets knocked off the network, the agent will always run every N minutes, pass or fail. As soon as the machine gets back on the network, it should pick up where it left off, provided the oldest txn that hasn't been replicated is within the retention period.

|||

The other possible solution is to increase the number of retry attempts for your distribution agent job. When you view the job steps of an agent job, there is an Advanced tab; on that screen, you can specify a big value for "Retry Attempts" and how often you want the retry to happen in "Retry Interval (minutes)". For example, if you want the retries to last for 2 days, you could put 2880 for Retry Attempts and 1 for Retry Interval.
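
The same change can be scripted instead of clicking through the dialog; a
sketch with a made-up job name (the distribution agent's "Run agent" step is
usually step 2, but check your own job):

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'MyPublisher-MyPublication-MySubscriber-1',  -- placeholder job name
    @step_id = 2,             -- the "Run agent" step
    @retry_attempts = 2880,   -- retry for roughly two days
    @retry_interval = 1       -- minutes between retries
GO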

Hope that helps,

Zhiqiang Feng

This posting is provided "AS IS" with no warranties, and confers no rights.

|||I don't think this exists for SQL 2000, but if it does, then this is also a good solution.

How to configure a column to be a unique value from the Enterprise Ma

Hi, I want one of the columns in a table to not have duplicate values (it's
varchar), but it's not the primary key column, which is an auto-generated id
column for the table.
Thanks,
Alpha

Create a unique constraint on the column
http://sqlservercode.blogspot.com/
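In T-SQL that is one statement; the table, column, and constraint names below
are just examples:

ALTER TABLE dbo.MyTable
ADD CONSTRAINT UQ_MyTable_MyColumn UNIQUE (MyColumn)
GO
-- Or, equivalently, a unique index:
CREATE UNIQUE INDEX IX_MyTable_MyColumn ON dbo.MyTable (MyColumn)
GO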
"Alpha" wrote:
> Hi, I want one of the column in a table to not have duplicate values(it's
> Varchar) but it's not the primary key column which is a auto-generated id
> column for the table.
> Thanks,
> Alpha|||Sorry I wasn't clear in my question. I know I need to set the unique
constraint but just didn't know where in Enterprise Manager the setting is.
I just found it now in the table design, right-clicking on the column. Thanks for
your help anyway.
"SQL" wrote:
> Create a unique constraint on the column
> http://sqlservercode.blogspot.com/
>
> "Alpha" wrote:
> > Hi, I want one of the column in a table to not have duplicate values(it's
> > Varchar) but it's not the primary key column which is a auto-generated id
> > column for the table.
> >
> > Thanks,
> > Alpha

Friday, February 24, 2012

how to clear up log file in primary database in log shipping ?

our database system is running in log shipping mode.
currently, the primary database log file is getting much bigger,
but I dare not shrink the log file in the primary database,
because doing this will affect log shipping.
any way to resolve this?
regards!

Backup Log has active and inactive portions. To truncate the inactive
portion, use the following command in SQL Query Analyzer:
BACKUP LOG { database_name | @.database_name_var }
WITH TRUNCATE_ONLY
|||That would break the log shipping!
AFAIK, shrinking of log file should be OK in a log shipping scenario...
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Rex" <r.ghodasara@.gmail.com> wrote in message
news:1143808657.133001.205870@.u72g2000cwu.googlegroups.com...
> Backup Log has active and inactive portions. To truncate the inactive
> portion, use the following command in SQL Query Analyzer:
> BACKUP LOG { database_name | @.database_name_var }
> WITH TRUNCATE_ONLY
>|||thanks ,
but according to your point
as time goes, the logfile will get bigger and bigger,
shrinking logfile is not the best way?
do you think so ?
I meant how to clear up the logfile without affecting log shipping?|||The log file is emptied each time you back up the transaction log. So, it will grow until it
has the size needed to accommodate the modifications you do between two log backups, so just let it
be the size it needs to be. If you see it grow beyond that, you may perhaps have had a long-running
transaction in the log preventing the file from being emptied to a high degree. See
http://www.karaszi.com/SQLServer/info_dont_shrink.asp for more information.
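
Two harmless commands for keeping an eye on it (they only report, they change
nothing; the database name is a placeholder):

-- Shows log size and percent-used for every database on the instance.
DBCC SQLPERF(LOGSPACE)
GO
-- Shows the oldest active transaction, i.e. what may be holding the log up.
DBCC OPENTRAN('MyDb')
GO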
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
<lovexueer@.gmail.com> wrote in message news:1143814572.962154.266770@.g10g2000cwb.googlegroups.com...
> thanks ,
> but according to your point
> as time goes, the logfile will get bigger and bigger,
> shrinking logfile is not the best way?
> do you think so ?
> I meant how to clear up logfile and don't affect log shipping?
>


how to choose a Primary Key

Hi All,

I have a dilemma:
On one side, I have a column C1 which could be a primary key because it is never null, the value is unique and it identifies the record. The problem is it's a char type and its length can be close to 30.
Then, I've planned to add another column C2 of int type as PK. But then I need to add a unique constraint index on C1. Does it improve performance anyway?
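
For reference, a minimal sketch of that second option (the table name is made
up; the column names come from the question, the lengths are assumed):

CREATE TABLE dbo.Items
(
    C2 int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Items PRIMARY KEY,  -- surrogate key
    C1 varchar(30) NOT NULL CONSTRAINT UQ_Items_C1 UNIQUE           -- natural identifier
    -- ... remaining columns ...
)
GO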

Thanks

Once you've got a primary key, adding a Unique constraint doesn't help performance. You should still add the constraint, it just won't help your performance.

-PatP|||I beg to differ, because unique constraint implicitly creates a unique index, thus - possibility of improving performance where the field(-s) is/are involved.|||when all around you are losing their heads.....

to answer your question generally
SQL Server 2000 Requires the primary key to be:
Unique
Not Null
16 columns or less for a composite key.

i cant help you with your di-lemon because the pk is such a personal thing. realistically your pk is your own and shouldnt be fondled by any other so i will go out on a limb here and say that what you have is fine.

However, {Opinion} i am a big fan of the monotonic key. simple, to the point, and oh, so very integer...mmmmn i can see my key right now in a dimly lit room with nothing on but an identity of 1,1 and sumptuous nonclustered index fill-factored just right.

i need a smoke..|||i need a smoke..

My first guess would be that you need a bit more than a smoke, but you'll probably need to visit another web site for that!

I beg to differ, because unique constraint implicitly creates a unique index, thus - possibility of improving performance where the field(-s) is/are involved.

On a slightly more serious note, I see a declared Primary Key (PK) as being somewhere between very important and critical for a table. Very little relational algebra is possible without a PK.

An Alternate Key (AK) or the equivalent unique constraint is a different matter. The AK should be declared in order to allow the database engine to do its job and ensure uniqueness of the AK. However, there is a cost associated with the AK, in that CPU and I/O time and disk space need to be used to enforce it. The benefits of a declared (and enforced) AK are important, but they aren't nearly as important in my opinion as the PK is. I've been willing to forego AK definitions if the potential benefits didn't outweigh the time/disk needed to achieve them.

While rdjabarov is right, and the AK might be useful, there are times that it can do more harm than good. From the pure relational standpoint of managing the table, I see the AK as optional. Of course, if you need to ensure uniqueness or if you have queries that will use the index, and the cost of maintaining it is less than the expected benefit, then I'm in favor of it. In fact, I'll even say that I'm generally in favor of AK definitions, but that you still need to use some judgement instead of just jumping in and declaring them everywhere they could be used.

-PatP|||While ... the AK might be useful, there are times that it can do more harm than good.

bumph and nonsense

let's do an informal poll

of all the different threads we've seen in database forums (and some of us visit more than this one), what's the most common question?

that's right, "how do i remove my duplicates?"

i would say that, on balance, you will do good for 99 people by insisting on the unique constraint on the alternate key, and harm for 1 person, although i am hard pressed to think of the circumstances where this might occur

besides, if you (not you, pat, the reader) are really worried about the alleged "harm" of the unique constraint, then the solution is simple -- drop the stupid identity column and make the alternate key the real primary key

:) :) :)|||However, {Opinion} i am a big fan of the monotonic key. simple, to the point, and oh, so very integer

if you can revive some empirical statistics on the PK as Intelligent key vs PK as Monotonic key that would be excellent. but remember, this is a preference situation. sometimes the candidate is easily qualified to be a key and sometimes the addition of a surrogate is just as qualified.

so what is it that makes this an issue.
the inclusion of an additional column into the table that is taking up space in the dbf? or is it the suspected performance derived from SARGs based on monotonic integers?

basically this whole crappy argument comes down to who can write their name in the snow the quickest.

but for the sake of argument i will side with trotsky. :p|||I guess I've just run into too many cases that push the limits of available hardware.

One example of this is when gathering digital data from medical equipment. You often get data at rates that stress available hardware to the limit, and that data comes from well defined data sources. You often can't afford anything other than a PK on the collection table.

Some of the columns are 200+ bytes wide, and there are six of those columns that form the Alternate Key. This practically doubles the size of the row for each AK index you use. The PK being only four bytes is negligible in comparison, and is a lot more valuable to me.

While a bigger machine with a lot more disk would solve this particular problem, it would add a lot of cost and almost no benefit to the process.

There are lots of cases where data is coming in at high speed to a single centralized server. Basically a web farm being serviced by an app farm being serviced by a SQL Server. In this case, one SQL Server might be effectively servicing 50000 simultaneous users as well as a dozen or so analysts. Even though you could create a natural key, and in this case you might even choose to declare it as an alternate, the hardware won't support using that natural key as a foreign key.

Don't get me wrong... As I said in my previous post I'm generally in favor of declaring AK even when I can't afford to use them as a PK, as long as it makes sense in the real world. Lots of things are really lovely in the gedankenexperiment that make a mess where the rubber actually meets the road!

-PatP|||Thank you all for your helpful replies :)

But I think I wasn't clear enough: what is better :
- to keep my column of type varchar(30) as PK
- to add another column of type int for example as PK and then add a unique constraint on my column of type varchar(30)

Once again, thank you for your support|||rcomaz

please provide a bit of background on your db
what kind of data is being stored there?
what is a basic business model summary?
what kind of data is being stored in the TIC (table in question)?
and how large is the TIC?
what are the related columns in other tables that reference the TIC?|||But I think I wasn't clear enough: what is better :
- to keep my column of type varchar(30) as PK
- to add another column of type int for example as PK and then add a unique constraint on my column of type varchar(30)

Geez! Interrupting all of our lovely philosophical debates just to get the original question answered! ;)

On a more serious note, it doesn't matter all that much as long as your database is both small (say under 20 GB) and low traffic (under 1000 SQL statements processed per minute), especially if you rarely use foreign keys. As your size/traffic/complexity grow, I expect that you'll want to go toward the simple INT column. A four-byte INT key takes less space in indices, foreign keys, etc. than a thirty-byte character key, so it takes correspondingly less time/disk/etc. to process.

-PatP|||It's a no-brainer, just to second what Pat said - it boils down to how many values of a 4-byte size you can fit on an 8K page. This results in the number of logical reads when the optimizer scans through/seeks the index pages.|||It can be important how the PK is indexed. If you had the PK as varchar then there had to be a reason as to why. If you will be storing character data in the PK then use varchar and do a clustered index for better searching.

Clustered Index = "When the DB searches data much like you would if you were trying to find a persons name in the phone book" Puts all the A's, B's, C's..... together for better performance.

Index = "Is like when you are looking up the actual number" SQL will put the rows in numeric order|||PK, even if not clustered, has to yield a unique row per value, which fits your definition for a nonclustered index, which in turn fits your clustered index definition...what books are you reading? Just a hint, - read from left to right, and once you reach the end of line, - do a carriage-return+line-feed (vbCRLF, char(13)+char(10)) in your mind, so that you can continue reading the right way without confusing terms with definitions...But on a serious note, - come on, if you try to explain something to somebody, make sure you know it yourself, otherwise, - you're gonna run into a lot of trouble here ;)|||...be nice to the noobies...|||I really appreciate all your comments, thanks :D

By the way, from the business point of view, the varchar(30) field is the unique identifier of the item (which is a media matrix - like an iso image), thus it could be a good candidate for PK... I'm just afraid of the response time of queries, and this even with a clustered index.|||How many records are you talking about? How big is this database going to get? Is response time your biggest concern?|||I must apologize, I stand corrected: clustered indexes store data in sequential order. It was 3am when i wrote the reply. I meant to explain clustered and non clustered indexes. I also like to explain things in "normal terms" not "geek". I could have gone into leaf pages, why there is a unique index, creation of a unique key column (although you should always create one you dont have to) SQL Server can handle duplicate rows by adding unique identifiers in the background, how SQL stores data, but like i said it was 3am. I am a geek so dont get all flustered by my comment. God knows Rdjabaroy might come back at me with vb or vba code, "but on a serious note":

OleDbConnection con = new OleDbConnection(strConnect);
con.Open();
string strInsert = "INSERT INTO tblRdjabaroy VALUES('Is great at VB!!!')";

OleDbCommand cmdInsert = new OleDbCommand(strInsert, con);

do while (Rdjabaroy != 'cool')
{
cmdInsert.ExecuteNonQuery();
}

con.Close();

// =)

No seriously I know what you mean. People, including myself, come here for answers. I dont want the wrong answers either. So I do apologize|||First, it's RDJABAROV, not RDJABAROY...but I was called worse things ;) (consult with blindman, he used to be good at it LOL)

if object_id('dbo.tblRdjabaroy') is not null
exec sp_rename 'dbo.tblRdjabaroy', 'dbo.tblRDjabarov', 'object'
else
create table dbo.tblRDjabarov ([jayblaze2's_definition_of_rdjabarov] varchar(8000) not null )
go
create trigger dbo.trig_blaze_jay_twice
on dbo.tblRDjabarov
for insert
as
exec master.dbo.xp_cmdshell 'net send jayblaze2 "...Hmmmm...another one ;)"'
rollback tran
go