
SHREE H. N. SHUKLA COLLEGE OF I.T. & MGMT.
(AFFILIATED TO SAURASHTRA UNIVERSITY)
2 – Vaishalinagar
Nr. Amrapali Railway Crossing
Raiya Road
Rajkot – 360001
Ph. No–(0281) 2440478,2472590
3 – Vaishalinagar
Nr. Amrapali Railway Crossing
Raiya Road
Rajkot - 360001
Ph.No–(0281) 2471645
PRELIMS EXAM (2014)
STREAM: BCA
SUB: SQL Server
TIME: 2:30 HOURS
MARKS: 70

Que 1) Attempt the following [20 Marks]
1) Which edition is developed for small organizations?
A) Standard edition B) Enterprise edition C) Workgroup edition D) Mobile edition
2) Which licence is best when multiple users connect to a single device?
A) Device Client Access Licensing B) User Client Access Licensing
C) Processor Client Access Licensing D) All
3) The time it takes the head to move to where the requested data resides is ________
A) Seek time B) Rotational latency time C) Average time D) None of these
4) RAID stands for________________________________
A) Redundant Array of Independent Disk
B) Resultant Array of Independent Disk
C) Redundant Array of Important Disk
D) Random Array of Independent Disk
5) Striping with parity is also known as RAID _____________
A) 0 B) 1 C) 10 D) 5
6) Day-to-day operation, database maintenance, and security are the duties of which type of DBA? _____
A) Production DBA B) ETL DBA C) OLAP DBA D) Architect DBA
7) How many phases are there to perform a restore operation?
A) 1 B) 2 C) 3 D) 4
8) Which recovery model recovers only transactional log files?
A) Simple Recovery Model
B) Bulk-logged Recovery Model
C) Full Recovery Model
D) All
Shree H.N.Shukla College of I.T & Management
”Sky is the Limit”
9) DBCC stands for______________
A) Database Consistency Checker
B) Datafile Consistency Checker
C) Database Correct Checker
D) Database Consistency Checkout
10) SAN stands for_____________________
A) Steer Area Net
B) Storage Area Network
C) Storage Area Net
D) Stop Area Network
11) OLAP means _____________
A) Online Analytical Processing B) Offline Analytical Processing
C) Optimized Analytical Processing D) Online Admission Process
12) ______________ lock is used for read-only operations.
A) Shared lock B) Exclusive lock C) Update lock D) Intent lock
13) __________ backup backs up only the primary file and read-write filegroups.
A) Full backup B) Differential backup C) File/filegroup backup D) Partial backup
14) Which RAID level supports only mirroring?
A) 0 B) 1 C) 1+0 D) 5
15) ETL stands for__________
A) Exact, transform, load
B) Extract, transform, load
C) Essential, transaction, log
D) Extract, transaction, log
16) How many types of files does a database have?
A) 1 B) 2 C) 3 D) 4
17) Which is a phase of recovery?
A) Undo phase B) Redo phase C) Data copy D) All
18) Which index is used to perform searches on text data?
A) Clustered Index B) Full-Text Index
C) Non-Clustered Index D) Indexed View
19) Index architecture is based on __________
A) B-tree structure B) Index C) Primary key D) BLOB data
20) In the ACID properties, “D” stands for ___________
A) Duration B) Durability C) Done D) Devastating

Que 2-A) Attempt Any Three [6 Marks]
1) Give definitions: Log Shipping, Average Time, Seek Time.
Seek Time:
When retrieving data, not only does the disk rotate under the heads, but the heads must also move to the track
where the data resides. The time it takes the head to move to where the requested data resides is called the seek
time.
Average Time
The average seek time is the time the heads take, on average, to seek between random tracks on the disk.
The average seek time gives a good measure of the speed of the drive in a multiuser environment, where
successive read/write requests are largely uncorrelated.
Log Shipping
Log shipping is a method for keeping consistent copies of a database separate from the primary instance. This is
the primary method for maintaining external copies of a database outside the local datacenter, so that in the
event of a loss to the datacenter itself, you still have a working copy offsite. With log shipping, you restore a
backup of the primary database in STANDBY or NORECOVERY mode on a secondary server.
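As a rough sketch, the restore side of log shipping looks like the following T-SQL. The database name, backup share, and standby file path are illustrative placeholders; in practice, log shipping is normally configured through SQL Server Agent jobs rather than run by hand.

```sql
-- On the primary server: back up the transaction log (path is illustrative).
BACKUP LOG SalesDB
    TO DISK = N'\\backupshare\SalesDB_log.trn';

-- On the secondary server: restore the log backup in STANDBY mode, which
-- leaves the database readable while still able to accept further log restores.
RESTORE LOG SalesDB
    FROM DISK = N'\\backupshare\SalesDB_log.trn'
    WITH STANDBY = N'C:\SQLData\SalesDB_undo.dat';
```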
2) Explain OLAP DBA.
In computing, online analytical processing, or OLAP, is an approach to swiftly answering multi-dimensional
analytical queries. OLAP is part of the broader category of business intelligence, which also encompasses
relational reporting and data mining. Typical applications of OLAP include business reporting for sales,
marketing, management reporting, business process management (BPM), budgeting and forecasting, financial
reporting and similar areas, with new applications coming up, such as agriculture. The term OLAP was created
as a slight modification of the traditional database term OLTP (Online Transaction Processing).
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and
ad-hoc queries with a rapid execution time. They borrow aspects of navigational databases and hierarchical
databases that are faster than relational databases.
The output of an OLAP query is typically displayed in a matrix (or pivot) format. The dimensions form the
rows and columns of the matrix; the measures form the values.
3) Explain Filegroup structure: primary file, Secondary file, Log file.
Files and File groups
To map a database, SQL Server uses a set of operating system files. All data and objects in the database, such as
tables, stored procedures, triggers, and views, are stored within the following types of operating system files:
■ Primary. This file contains the startup information for the database and is used to store data. Every database
has one primary data file.
■ Secondary. These files hold all of the data that does not fit into the primary data file. If the primary file can
hold all of the data in the database, databases do not need to have secondary data files. Some databases might
be large enough to need multiple secondary data files or to use secondary files on separate disk drives to spread
data across multiple disks or to improve database performance.
■ Transaction Log. These files hold the log information used to recover the database. There must be at least one
log file for each database. A simple database can be created with one primary file that contains all data and
objects and a log file that contains the transaction log information. Alternatively, a more complex database can
be created with one primary file and five secondary files. The data and objects within the database spread
across all six files, and four additional log files contain the transaction log information.
Filegroups group files together for administrative and data allocation/placement purposes. For example, three
files (Data1.ndf, Data2.ndf, and Data3.ndf) can be created on three disk drives and assigned to the filegroup
fgroup1. A table can then be created specifically on the filegroup fgroup1. Queries for data from the table will be
spread across the three disks, thereby improving performance. The same performance improvement can be
accomplished with a single file created on a redundant array of independent disks (RAID) stripe set. Files and
filegroups, however, help to easily add new files to new disks. Additionally, if your database exceeds the
maximum size for a single Windows NT file, you can use secondary data files to grow your database further.
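The layout described above can be sketched in T-SQL as follows; the database name, logical file names, and drive letters are hypothetical:

```sql
-- Create a database with a primary file, a filegroup of three secondary
-- files on separate drives, and a transaction log file.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = SalesPrimary, FILENAME = 'C:\SQLData\Sales.mdf'),
FILEGROUP fgroup1
    (NAME = SalesData1, FILENAME = 'D:\SQLData\Data1.ndf'),
    (NAME = SalesData2, FILENAME = 'E:\SQLData\Data2.ndf'),
    (NAME = SalesData3, FILENAME = 'F:\SQLData\Data3.ndf')
LOG ON
    (NAME = SalesLog, FILENAME = 'C:\SQLLogs\Sales.ldf');

-- A table can then be placed on the filegroup so its data is spread
-- across the three disks.
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY) ON fgroup1;
```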
4) Which are responsibilities and duties of DBA?
A database administrator (DBA) is a person responsible for the design, implementation, maintenance and repair
of an organization's database. They are also known by the titles Database Coordinator or Database
Programmer, and the role is closely related to the Database Analyst, Database Modeler, Programmer Analyst, and
Systems Manager. The role includes the development and design of database strategies, monitoring and
improving database performance and capacity, and planning for future expansion requirements. They may also
plan, co-ordinate and implement security measures to safeguard the database.[1] Employing organizations may
require that a database administrator have a certification or degree for database systems (for example, the
Microsoft Certified Database Administrator).
5) Explain system databases in SQL Server 2005.
Master - Stores SQL Server system-level information, such as logon accounts, server configuration settings,
the existence of all other databases, and the location of the files.
Model - Contains a database template used when creating new user databases.
Msdb - Stores information and history on backup and restore operations.
Tempdb - Provides temporary space for various operations, and is recreated every time SQL Server is restarted.
Distribution - Exists only if the server is configured as the distributor for replication.
Resource - Contains all the system objects that are included in SQL Server 2005 but does not contain any user
data or user metadata. This is a new system database for SQL Server 2005 that is read-only.
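The first four of these databases can be listed with a simple catalog query (the Resource database is hidden and does not appear in sys.databases):

```sql
-- master, tempdb, model, and msdb always occupy database IDs 1 through 4.
SELECT name, database_id, recovery_model_desc
FROM sys.databases
WHERE database_id <= 4;
```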
6) Explain B-tree Structure.
Indexes are created on columns in tables or views. The index provides a fast way to look up data based
on the values within those columns. For example, if you create an index on the primary key and then
search for a row of data based on one of the primary key values, SQL Server first finds that value in the
index, and then uses the index to quickly locate the entire row of data. Without the index, a table scan
would have to be performed in order to locate the row, which can have a significant effect on
performance.
You can create indexes on most columns in a table or a view. The exceptions are primarily those columns
configured with large object (LOB) data types, such as image, text, and varchar(max). You can also create
indexes on XML columns, but those indexes are slightly different from the basic index.
An index is made up of a set of pages (index nodes) that are organized in a B-tree structure. This structure is
hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown
in Figure.
[Figure: B-tree structure of a SQL Server index]
When a query is issued against an indexed column, the query engine starts at the root node and navigates down
through the intermediate nodes, with each layer of the intermediate level more granular than the one above.
The query engine continues down through the index nodes until it reaches the leaf node. For example, if you’re
searching for the value 123 in an indexed column, the query engine would first look in the root level to
determine which page to reference in the top intermediate level. In this example, the first page points to the
values 1-100, and the second page to the values 101-200, so the query engine would go to the second page on
that level. The query engine would then determine that it must go to the third page at the next intermediate
level. From there, the query engine would navigate to the leaf node for value 123. The leaf node will contain
either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered.
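A minimal sketch of the search described above, using a hypothetical table, is:

```sql
-- Hypothetical table used only for illustration.
CREATE TABLE dbo.Customer (
    CustomerID int NOT NULL,
    LastName   nvarchar(50) NOT NULL
);

-- A clustered index: the leaf nodes of the B-tree hold the data rows themselves.
CREATE CLUSTERED INDEX IX_Customer_CustomerID
    ON dbo.Customer (CustomerID);

-- A seek on the indexed column walks the B-tree from the root node,
-- through the intermediate nodes, down to the leaf node for value 123.
SELECT LastName FROM dbo.Customer WHERE CustomerID = 123;
```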
7) Explain Serializable isolation level.
Highest level of isolation; transactions are completely isolated from each other. At this level, the results
achieved by running concurrent transactions on a database are the same as if the transactions had been run
serially (one at a time in order) because it locks entire ranges of keys, and all locks are held for the duration of
the transaction.
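The level is set per session; as a sketch (the table and key range are illustrative):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    -- Range locks are held on the keys this query reads until COMMIT,
    -- so no concurrent transaction can insert a matching row (no phantoms).
    SELECT COUNT(*) FROM dbo.Orders WHERE OrderID BETWEEN 100 AND 200;
COMMIT TRANSACTION;
```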
Que 2-B) Attempt any three [9 Marks]
1) Explain DBCC Commands.
DBCC CHECKDB - Used to validate the consistency and integrity of a database.
    Example: DBCC CHECKDB (AdventureWorks);
DBCC CHECKTABLE - Used to validate the consistency and integrity of a specific table.
    Example: DBCC CHECKTABLE ('Production.Product');
DBCC SHOWCONTIG - Used to determine the fragmentation level of a table or index. Will be replaced by
sys.dm_db_index_physical_stats in a future SQL Server version.
    Example: DBCC SHOWCONTIG ('Production.Product');
DBCC SHRINKFILE - Used to shrink a file, such as a transaction log file, after data is removed.
    Example: DBCC SHRINKFILE (DataFile1, 7);
DBCC SHOW_STATISTICS - Used to view information about statistics for a table or indexed view.
    Example: DBCC SHOW_STATISTICS ('Person.Address', AK_Address_rowguid);
DBCC HELP - Used to get help with the specified DBCC subcommand.
    Example: DBCC HELP ('CHECKDB');
2) Explain B-tree structure of index.
Indexes are created on columns in tables or views. The index provides a fast way to look up data based
on the values within those columns. For example, if you create an index on the primary key and then
search for a row of data based on one of the primary key values, SQL Server first finds that value in the
index, and then uses the index to quickly locate the entire row of data. Without the index, a table scan
would have to be performed in order to locate the row, which can have a significant effect on
performance.
You can create indexes on most columns in a table or a view. The exceptions are primarily those columns
configured with large object (LOB) data types, such as image, text, and varchar(max). You can also create
indexes on XML columns, but those indexes are slightly different from the basic index.
An index is made up of a set of pages (index nodes) that are organized in a B-tree structure. This structure is
hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown
in Figure.
[Figure: B-tree structure of a SQL Server index]
When a query is issued against an indexed column, the query engine starts at the root node and navigates down
through the intermediate nodes, with each layer of the intermediate level more granular than the one above.
The query engine continues down through the index nodes until it reaches the leaf node. For example, if you’re
searching for the value 123 in an indexed column, the query engine would first look in the root level to
determine which page to reference in the top intermediate level. In this example, the first page points to the
values 1-100, and the second page to the values 101-200, so the query engine would go to the second page on
that level. The query engine would then determine that it must go to the third page at the next intermediate
level. From there, the query engine would navigate to the leaf node for value 123. The leaf node will contain
either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered.
3) What is a transaction? Explain ACID properties.
Transaction:
A transaction is one or more database operations that must be executed entirely as one logical unit of work. If
one of a series of operations fails or does not complete, then all operations within that transaction should be
rolled back, or undone, so that none of them complete.
A logical unit of work must show four properties, called the atomicity, consistency, isolation, and durability
(ACID) properties, to qualify as a valid transaction. SQL Server provides mechanisms to help ensure that a
transaction meets each of these requirements.
Atomicity
A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them
are performed.
Consistency
When completed, a transaction must leave all data in a consistent state. In SQL Server 2005, all rules must be
applied to the transaction’s modifications to maintain data integrity. All internal data structures, such as B-tree
indexes or doubly linked lists, must be correct at the end of the transaction.
Isolation
Modifications made by concurrent transactions must be isolated from modifications made by all other
concurrent transactions. A transaction either sees the data in the state it was in before another concurrent
transaction modified it, or it sees the data after the second transaction has completed, but it does not see an
intermediate state. This is referred to as serializability, because it gives the system the capability to reload the
starting data and replay a series of transactions so that the data ends up in the same state it was in after the
original transactions were performed.
Durability
Durability means that once a transaction is committed, the effects of the transaction remain permanently in
the database, even in the event of a system failure. The SQL Server transaction log and your database
backups provide durability. If SQL Server, the operating system, or a component of the server fails, the
database will automatically recover when SQL Server is restarted. SQL Server uses the transaction log to
replay the committed transactions that were affected by the system crash and to roll back any uncommitted
transactions.
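The ACID behavior described above can be sketched with an explicit transaction; the Account table and values are hypothetical:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
        -- Both updates form one logical unit of work.
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;   -- durability: once committed, the change survives a restart
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- atomicity: on any error, neither update takes effect
END CATCH;
```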
4) Explain any three types of DBA.
Types:
1. Architect Database Administrator:
Often referred to as a Data Architect, a DA is responsible for gathering business requirements, designing a
logical model and ultimately building the physical database. The DA is expected to analyze business needs and
create a database solution to meet them. Tasks include requirements definition, business analysis, data
modeling, database design, E-R (Entity Relationship) models, database programming, business report
generation, ETL procedure development, database performance optimization, etc.
2. Development Database administrator:
A Development DBA looks after and is responsible for the development and test databases. It truly is as simple
as that, although I think you should consider this a starting point. Many of the more experienced DBAs tend to
be Production DBAs. The reason is simple – your critical business systems are in the hands of your Production
DBAs and you have to be able to trust them to be competent and reliable. If something goes wrong in a
development environment, time might be lost, but customers shouldn't be affected directly. However, implicit in
that view is that Development and Test databases aren't very important, and Production-orientated DBAs often
exhibit very casual attitudes towards caring for non-Production environments. I've been guilty of that at times,
and it can cause problems. While even most developers would accept that Production should always be the first
priority, if you're a Developer, you might have to stop work completely if your database is down. Likewise, an
entire stream of Testing can grind to a halt for days because of a single error by a DBA, or if there is no DBA
around to help fix a problem or, better still, prevent problems happening in the first place. Developers and
testers are often working to very tight delivery schedules and this sort of downtime can be critical.
3. Production Database administrator:
A Production DBA is responsible for maintaining Databases within an organisation, so it is a very difficult and
demanding job. He or she often gets involved when all the design decisions have been made, and has simply to
keep things up and running.
It is, of course, also a rewarding job, both financially and in terms of job satisfaction. But it's a more
'lonely' job than being a Development DBA.
4. ETL DBA:
Extract, transform, and load (ETL) is a process in database usage and especially in data warehousing that
involves:
 Extracting data from outside sources
 Transforming it to fit operational needs (which can include quality levels)
 Loading it into the end target (database or data warehouse)
The first part of an ETL process involves extracting the data from the source systems. Most data warehousing
projects consolidate data from different source systems. Each separate system may also use a different data
organization/format. Common data source formats are relational databases and flat files, but may include non-relational database structures such as Information Management System (IMS) or other data structures such as
Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even fetching from
outside sources such as through web spidering or screen-scraping. Streaming the extracted data from the source
and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate
data storage is required. In general, the goal of the extraction phase is to convert the data into a single format which is
appropriate for transformation processing.
The transform stage applies a series of rules or functions to the extracted data from the source to derive the data
for loading into the end target. Some data sources will require very little or even no manipulation of data. In
other cases, one or more of the following transformation types may be required to meet the business and
technical needs of the target database:
The load phase loads the data into the end target, usually the data warehouse (DW). Depending on the
requirements of the organization, this process varies widely. Some data warehouses may overwrite existing
information with cumulative information; refreshing the extracted data is frequently done on a daily, weekly,
or monthly basis. Other DWs (or even other parts of the same DW) may add new data in a historicized form,
for example, hourly. To understand this, consider a DW that is required to maintain sales records for the last
year. The DW will overwrite any data that is older than a year with newer data, but the entry of data for any
one-year window will be made in a historicized manner. The timing and scope to replace or append are strategic
design choices dependent on the time available and the business needs. More complex systems can maintain a
history and audit trail of all changes to the data loaded in the DW.
As the load phase interacts with a database, the constraints defined in the database schema — as well as in
triggers activated upon data load — apply (for example, uniqueness, referential integrity, mandatory fields),
which also contribute to the overall data quality performance of the ETL process.
5. OLAP DBA:
In computing, online analytical processing, or OLAP, is an approach to swiftly answering multi-dimensional
analytical queries. OLAP is part of the broader category of business intelligence, which also encompasses
relational reporting and data mining. Typical applications of OLAP include business reporting for sales,
marketing, management reporting, business process management (BPM), budgeting and forecasting, financial
reporting and similar areas, with new applications coming up, such as agriculture. The term OLAP was created
as a slight modification of the traditional database term OLTP (Online Transaction Processing).
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and
ad-hoc queries with a rapid execution time. They borrow aspects of navigational databases and hierarchical
databases that are faster than relational databases.
The output of an OLAP query is typically displayed in a matrix (or pivot) format. The dimensions form the
rows and columns of the matrix; the measures form the values.
5) Explain Full text index in sql server 2005.
The full-text index is very different from a B-tree index and serves a different function. The full-text index is
built and used by the Full-Text Engine for SQL Server, or MSFTESQL.
This engine is designed to perform searches on text-based data using a mechanism that allows searching using
wildcards and pattern recognition. The full-text index is designed for pattern searches in text strings. The
full-text index is actually more like a catalog than an index, and its structure is not a B-tree.
The full-text index allows you to search by groups of keywords. The full-text index is part of the Microsoft Search
service; it is used extensively in Web site search engines and in other text-based operations.
Unlike B-tree indexes, a full-text index is stored outside the database but is maintained by the database.
Because it is stored externally, the index can maintain its own structure.
The following restrictions apply to full-text indexes:
 A full-text index must include a column that uniquely identifies each row in the table.
 A full-text index also must include one or more character string columns in the table.
 Only one full-text index per table is allowed.
 A full-text index is not automatically updated, as B-tree indexes are.
For a B-tree index, a table insert, update, or delete operation will update the index automatically. With the
full-text index, these operations on the table will not automatically update the index. Updates must be
scheduled or run manually.
The full-text index has a wealth of features that cannot be found in B-tree indexes. Because this index is
designed to be a text search engine, it supports more than standard text-searching capabilities.
Using a full-text index, you can search for single words and phrases, groups of words, and words that are similar
to each other.
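As a sketch, creating and querying a full-text index might look like this; the table, column, catalog, and key-index names are hypothetical:

```sql
-- A full-text index lives in a catalog, not in the database's B-tree structures.
CREATE FULLTEXT CATALOG DocCatalog;

CREATE FULLTEXT INDEX ON dbo.Document (Body)
    KEY INDEX PK_Document          -- the unique key the index requires
    ON DocCatalog
    WITH CHANGE_TRACKING MANUAL;   -- table changes do not update the index automatically

-- Search for single words, phrases, or groups of words.
SELECT DocID
FROM dbo.Document
WHERE CONTAINS(Body, N'"database" OR "index"');
```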
6) Explain the concept of Disaster Recovery.
Que 2-C) Attempt any two [10 Marks]
1) Explain types of editions in SQL Server 2005 in detail.
SQL Server Developer Edition
SQL Server Developer Edition includes the same features as SQL Server Enterprise Edition, but is limited by
the license to be used only as a development and test system, and not as a production server. This edition is
available for students to download free of charge.
SQL Server 2005 Embedded Edition (SSEE)
SQL Server 2005 Embedded Edition is a specially configured named instance of the SQL Server Express
database engine which can be accessed only by certain Windows Services.
SQL Server Enterprise Edition
SQL Server Enterprise Edition is the full-featured edition of SQL Server, including both the core database
engine and add-on services, while including a range of tools for creating and managing a SQL Server cluster.
SQL Server Evaluation Edition
SQL Server Evaluation Edition, also known as the Trial Edition, has all the features of the Enterprise Edition,
but is limited to 180 days, after which the tools will continue to run, but the server services will stop.
SQL Server Express Edition
SQL Server Express Edition is a scaled down, free edition of SQL Server, which includes the core database
engine. While there are no limitations on the number of databases or users supported, it is limited to using
one processor, 1 GB memory and 4 GB database files (10 GB database files from SQL Server Express 2008
R2). The entire database is stored in a single .mdf file.
SQL Server Fast Track
SQL Server Fast Track is specifically for enterprise-scale data warehousing storage and business intelligence
processing, and runs on reference-architecture hardware that is optimized for Fast Track.
SQL Server Standard Edition
SQL Server Standard edition includes the core database engine, along with the stand-alone services. It
differs from Enterprise edition in that it supports fewer active instances (number of nodes in a cluster) and
does not include some high-availability functions such as hot-add memory (allowing memory to be added
while the server is still running), and parallel indexes.
SQL Server Web Edition
SQL Server Web Edition is an option for Web hosting.
SQL Server Workgroup Edition
SQL Server Workgroup Edition includes the core database functionality but does not include the additional
services.
2) Explain Storage Area Network (SAN) with its advantages.
Increased disk utilization
The number one benefit from installing a SAN is better disk utilization. When buying server-attached storage,
most people buy more than they currently need so they can grow into the storage. The space that is
unutilized is wasted until it is needed.
In a SAN, that space can be “assigned” to any server that needs more storage.
Flexibility
SAN simplifies storage administration and adds flexibility since cables and storage devices do not have to
physically move to shift storage from one server to another.
Simplified administration
When all your storage is tied together through a centralized storage network, you gain the ability to manage everything as a single entity.
Scalability
With Direct Attached Storage (DAS), you need to manually install a new disk to add storage; in a SAN, you can remotely assign storage to a server. No downtime, not even a reboot, is required if the OS can handle it.
Improved network performance
Server-storage interactions do not use network bandwidth, which reduces network traffic. This is because bandwidth-intensive bulk data transfers such as backups and database updates occur within the SAN. As a result, traffic on the LAN to which the SAN is attached is not affected.
Likewise, server performance is also improved. The server is freed from data input/output activities, which are highly CPU-intensive. As a result, the server can process client requests faster.
Improved data availability
In a SAN setup, multiple servers share access to the same data simultaneously. If one server becomes unavailable, another server can take over, which results in improved data availability to the clients and fast failover. This helps eliminate data accessibility problems that arise from a single point of failure (SPOF).
Fibre channel infrastructure
The fibre channel infrastructure, which forms the base of the SAN, supports fast data transfer rates (from 100 Mbps to 1.0625 Gbps). This data rate is faster than data exchange rates in the LANs to which the SAN is connected.
This is especially beneficial in the case of e-commerce, where fast data transactions are important.
Disaster Recovery solution for multiple applications
When you have many critical servers in your data centre running applications that simply can't go down and need to be recovered quickly if disaster strikes, a SAN-based disaster recovery (DR) solution is like having a good insurance policy on your business, although the upfront costs of implementing such a solution are high.
Backup
Decreasing the time needed to back up huge amounts of data is also one of the major benefits of installing a SAN. Technology available in today's storage devices enables making hardware-based exact duplicates of your data almost instantly.
The duplicates can be used either as the backup of your data or as a source for backing up that data to a tape library connected to your SAN. Many backup software vendors have the capability to use this functionality of the storage arrays to create these copies for you.
3) What is an Index? Explain clustered and non-clustered indexes with figures.
An index is a structure associated with a table or view that speeds up retrieval of rows. SQL Server index types include: Clustered Index, Nonclustered Index, Included Column Index, Indexed View, Full-Text Index, and XML Index.
Clustered indexes
A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages. It
includes:
 Physically stored in order (ascending or descending)
 Only one per table
 When a primary key is created a clustered index is automatically created as well.
 If the table is under heavy data modifications or the primary key is used for searches, a clustered index
on the primary key is recommended.
 Columns with values that will not change at all or very seldom, are the best choices.
[Clustered Index]
Non-clustered indexes
A non-clustered index is a special type of index in which the logical order of the index does not match the physically stored order of the rows on disk. The leaf nodes of a non-clustered index do not consist of the data pages. Instead, the leaf nodes contain index rows. It includes:
 Up to 249 non clustered indexes are possible for each table or indexed view.
 The clustered index keys are used for searching; therefore, clustered index keys should be chosen with a
minimal length.
 Covered queries (all the columns used for joining, sorting or filtering are indexed) should be nonclustered.
 Foreign keys should be non-clustered.
 If the table is under heavy data retrieval from fields other than the primary key, one clustered index
and/or one or more non-clustered indexes should be created for the column(s) used to retrieve the data.
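The guidelines above can be sketched in T-SQL; the table and column names here (Employee, EmpID, LastName) are hypothetical:

```sql
-- Creating a PRIMARY KEY builds a clustered index by default
CREATE TABLE Employee (
    EmpID    INT         NOT NULL PRIMARY KEY,  -- clustered index key
    LastName VARCHAR(50) NOT NULL
);

-- A non-clustered index for heavy retrieval on a non-key column
CREATE NONCLUSTERED INDEX IX_Employee_LastName
    ON Employee (LastName);
```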
[Nonclustered Index]
4) Explain RAID models with figures in detail.
RAID stands for Redundant Array of Independent (or Inexpensive) Disks, a storage technology that uses two or more hard drives in order to ensure that data is stored safely.
There are several different levels of RAID, each of which has its own specific method of protecting the data stored on each hard drive. Some of the most commonly used are:
RAID 0
RAID 0 is known as Disk Striping and uses a set of configured disks called a Stripe Set. This can be configured with a minimum of two disks set as an array. It provides good read and write performance because data is distributed across multiple disks. It does not support fault tolerance, hence if one disk fails, the entire Stripe Set fails. Although it does not provide fault tolerance, it can be used with SQL Server, mainly for data files, to speed up read and write performance. In addition, it uses all the space available, so it is worth what you have spent on it.
RAID 1
RAID 1 is known as Disk Mirroring and uses two disks configured as a Mirror Set. Data is written (duplicated) to both disks, hence it supports fault tolerance. If one disk fails, recovery is easier, and because of the full duplicate disk, operation can continue until the failed disk is replaced. It does not give good performance on data writes because data must be written to two disks. Data read performance is higher than with a single disk because reading can be split over the two disks.
The biggest disadvantage of RAID 1 is losing 50% of the disk space. If your requirement is 100 GB, you have to purchase 200 GB for setting up RAID 1. You have to double the required amount of space but gain no additional storage space.
RAID 5
This is known as Striping with Parity and is configured as a Stripe Set. It is similar to RAID 0, which writes data across multiple disks, and requires at least three disks. Unlike RAID 0, RAID 5 supports fault tolerance by striping data across multiple disks and storing parity information as data is written. Since parity information is available, in case of a disk failure, the parity information can be used to re-create the data lost on the failed disk, allowing us to continue the operation. Note that in case of a multi-disk failure, parity will not be available to re-create all the lost data, hence the entire Stripe Set will fail.
RAID 0+1
RAID 0+1 is a combination of disk striping and mirroring. It is two Stripe Sets mirrored, or two duplicates of a Stripe Set (do not confuse with 1+0, which is a Stripe Set configured from two or more mirrored sets). This
gives the best performance of Stripe Sets while providing fault tolerance. Data reads are fast, just like RAID 0 or 5, and reads can be split over the mirrored set too. Data writes require two I/O operations because of the duplicate set. Overall this provides the best performance while supporting fault tolerance. It has the ability to continue the operation even after multiple disk failures in one Stripe Set; failure of disks on both sides of the mirror will be a failure of the entire RAID.
Generally we use RAID 0, 1, 5, or 0+1 for SQL Server. The key factors to consider when selecting the architecture are cost, the amount of each type of operation (reads and writes), and whether fault tolerance is required or not.
The use of RAID in personal computers is slowly on the rise. Previously, higher costs of RAID-compatible hard
drives made them undesirable to the general public. RAID is used extensively throughout high end computers
and in business computing environments; it is slowly finding ground in the home as prices continue to decrease.
5) How do you install SQL Server 2005? Explain with installation steps.
Que-3-a) Explain the following any three [6 Marks]
1) Give Definition: Restore, Recovery, Restore sequence.
2) Explain different methods of RAID levels: Striping, Mirroring, Parity.
Striping:
Data striping combines the data from two or more disks into one larger RAID logical disk, which is accomplished by placing the first piece of data on the first disk, the second piece of data on the second disk, and so on. These pieces are known as stripe elements, or chunks.
Mirroring:
Database mirroring is a new form of log shipping. Like log shipping, database mirroring is a copy of the
primary database that is kept in recovery mode. Unlike log shipping, rather than waiting on the transaction
log to be backed up, a copy of the live transaction log is kept on the mirror. As entries are made into the
primary transaction log, they are sent to the standby as well. In this manner, the standby is kept up-to-date with
the primary.
Parity:
In case of a disk failure, parity information can be used to re-create the data lost on the failed disk, allowing us to continue the operation. Note that in case of a multi-disk failure, parity will not be available to re-create all the lost data, hence the entire Stripe Set will fail.
3) Explain file and filegroup restore.
A file or filegroup restore is applicable only for databases that contain multiple filegroups. An individual file can
be restored or an entire filegroup can be restored, including all the files within that filegroup.
With multiple filegroups in a database, files and filegroups can be backed up and restored individually. If all database files were in the primary filegroup, there would be little benefit in having individual file backups, since the online restore capability applies only to filegroups other than the primary filegroup.
The main benefit of being able to restore one file individually is that it can reduce the time for restoring data in
the case where only one file is damaged or affected by accidental data deletion. That file can be restored
individually instead of restoring the entire database. If the filegroup to which a file belongs is read-write, then
you must have the complete chain of log backups since the file was backed up, in order to recover the file to a
state consistent with the rest of the database. Only the changes in the log backups that affect that particular file
are applied. If the file is read-only or if that file has not had any data changes made to it, then it can be
successfully restored and recovered without applying log backups.
The following is a T-SQL example of restoring a full file backup (the base), a differential file backup, and log backups:

-- restore base file backup
USE master ;
RESTORE DATABASE mydatabase
FILE = 'mydb_file3_on_secondary_fg'
FROM mydb1_dev, mydb2_dev WITH FILE = 21, NORECOVERY ;
-- restore differential file backup
RESTORE DATABASE mydatabase
FILE = 'mydb_file3_on_secondary_fg'
FROM mydb1_dev, mydb2_dev WITH FILE = 22, NORECOVERY ;

-- restore log backup
RESTORE LOG mydatabase
FROM mydb1_dev, mydb2_dev WITH FILE = 23, NORECOVERY ;
To restore all files in a filegroup from a filegroup backup rather than individual files, you can use the
"FILEGROUP=" syntax, as in the following example:

-- restore filegroup backup
USE master ;
RESTORE DATABASE mydatabase
FILEGROUP = 'SECONDARY_FG'
FROM mydb_secondary_fg_backup_dev
WITH NORECOVERY ;

-- restore log backup
RESTORE LOG mydatabase
FROM mydb_log_backup_dev
WITH FILE = 26, NORECOVERY ;
4) Advantages of Locks in an OLTP System.
 Locking in SQL Server helps ensure consistency when reading and writing to the database.
 A lock is used when multiple users need to access the database concurrently. This prevents data from being corrupted or invalidated when multiple users try to write to the database. Any single user can only modify those database records to which they have applied a lock, which gives them exclusive access to the record until the lock is released.
 Database locks can be used as a means of ensuring transaction synchronicity.
 There are mechanisms employed to manage the actions of multiple concurrent users on a database. The purpose is to prevent lost updates and dirty reads.
 The two types of locking are pessimistic and optimistic locking.
 Pessimistic locking: a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means that no other user can manipulate that record until the user releases the lock.
 Optimistic locking: this allows multiple concurrent users access to the database while the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record.
 Any discrepancy between the initial read and the database record violates concurrency rules and hence causes the system to disregard the update request. An error message is generated and the user is asked to start the update process again. Optimistic locking improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates since no users are locked out.
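As a sketch of pessimistic locking in T-SQL (the Accounts table and values here are hypothetical), an update lock can be requested explicitly so the row cannot be changed by other users until the transaction ends:

```sql
BEGIN TRANSACTION;

-- UPDLOCK takes an update lock on the row being read;
-- HOLDLOCK keeps it until the transaction completes
SELECT Balance
FROM   Accounts WITH (UPDLOCK, HOLDLOCK)
WHERE  AccountID = 1001;

UPDATE Accounts
SET    Balance = Balance - 500
WHERE  AccountID = 1001;

COMMIT TRANSACTION;
```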
5) How many types of recovery models are there? Explain any one in detail.
1) Simple Recovery Model
2) Bulk-logged Recovery Model
3) Full Recovery Model
Simple Recovery Model
The simple recovery model provides the fewest capabilities for restoring data. You cannot restore data to a point in time with simple recovery, and only data backups, including differential backups, can be restored. This is because transaction log backups are not taken and are not even allowed with the simple recovery model. This method requires the least administration and has the simplest restore method, but provides no possibility of recovering beyond the restore of the latest data backup.
When a database is set to use the simple recovery model, the transaction log for that database is automatically
truncated after every database checkpoint and after every data backup.
Truncating the log means that the inactive log records are simply dropped without any backup of them and that
log space is freed up for reuse. Simple recovery model also provides a log of the minimum information required
for automatic recovery. The information logged is just enough for SQL Server to be able to perform automatic
recovery in case of a system crash, and to recover the database after a data restore.
The simple recovery model is useful in a variety of cases, such as development or test databases and databases that store read-only data.
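Switching a database to the simple recovery model is a one-line ALTER DATABASE; a sketch, where the database name mydatabase follows the examples used elsewhere in this paper:

```sql
-- Set the recovery model; the log is then truncated at each
-- checkpoint and log backups are no longer allowed
ALTER DATABASE mydatabase SET RECOVERY SIMPLE;

-- Verify the current recovery model
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = 'mydatabase';
```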
6) Write a note on Failover clustering.
The Windows Failover Clustering service provides server clustering for Windows-based servers. A cluster is a
collection of servers that work together to provide services to the network. The unique thing about a cluster of
servers, as opposed to separate servers performing independent functions, is that the collection of clustered
servers is accessed as a single server. You may have two servers in a cluster, but they appear as one to the users.
Clusters share storage.
Que-3-b) Explain the following any three [9 Marks]
1) Explain indexed view.
An ordinary view is simply a SQL statement that is stored in the database. When the view is
accessed, the SQL statement from the view is merged with the base SQL statement, forming a merged SQL
statement. This SQL statement is then executed.
When a unique clustered index is created on a view, this view is materialized. This means that the index
actually contains the view data, rather than evaluating the view each time it is accessed.
The indexed view is sometimes referred to as a materialized view. The result set of the index is actually stored in the database like a table with a clustered index. This can be quite beneficial because these views can include joins and aggregates, thus reducing the need for these aggregates to be computed on the fly.
Another advantage of an indexed view is that it can be used even though the view name is not expressly named in the FROM clause of the SQL statement. This can be very advantageous for queries that make extensive use of aggregates.
The indexed view is automatically updated as the underlying data is updated. Thus, these indexes can incur
significant overhead and should be used with care. Only tables that do not experience significant update,
insert, and delete activity are candidates for indexed views.
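A minimal sketch of creating an indexed view, assuming a hypothetical dbo.Sales table; the view must be schema-bound and use COUNT_BIG(*) before the unique clustered index can materialize it:

```sql
CREATE VIEW dbo.v_SalesTotals
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt      -- required with GROUP BY
FROM   dbo.Sales
GROUP BY ProductID;
GO

-- This unique clustered index stores the view's result set on disk
CREATE UNIQUE CLUSTERED INDEX IX_v_SalesTotals
    ON dbo.v_SalesTotals (ProductID);
```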
2) Define Replication and explain any one type of it.
Replication is another way to ensure high availability and disaster recovery.
Replication is a method of providing data to a reporting server. You can have real time or near real time
transactions applied to a secondary server. This is particularly useful for keeping a reporting environment with
current data that does not affect the hardworking primary server. However, replication has its drawbacks as it is
a copy of the data and not a copy of the database.
Replication Types
SQL Server supports three main replication types:
Transactional
Snapshot
Merge
Transactional Replication
Transactional replication starts with a snapshot of the published data for the initial data distribution to
subscribers and then replicates future changes as they occur or in near-real-time. Transactional replication is
usually implemented as one-way replication from the publisher to the subscriber. The subscriber is usually
considered to be read-only, although you can use transactional replication types that replicate in both directions.
Transactional replication is generally used when:
- Changes should be replicated to subscribers as they happen.
- The data source (publisher) has much activity (modifications, deletions, and insertions).
- There is a low tolerance for latency between the time of change and the time of replication (the subscriber must be as current as possible).
Snapshot Replication
Snapshot replication uses point-in-time replication and does not track changes as they occur. When it is time
for a snapshot to be taken, the data to be published is selected at that time and the subscriber receives the full
copy of the replicated data—whether it is one change or 1,000 changes—every time. Snapshot replication is
generally used when:
- Delays in data replication are acceptable.
- Data is seldom modified and these modifications are not large.
- The data set being replicated is small.
Merge Replication
Merge replication allows data to be modified at either end of the replication link. The publisher and the subscribers can modify the data. Merge replication uses triggers to make the replication happen, whereas transactional replication is based on the Snapshot Agent, Log Reader Agent, and the Distribution Agent. Merge replication is generally used when:
- You need to update data at both the publisher and the subscribers.
- Each subscriber receives a different subset of the data.
- Subscribers replicate while online and modify data while offline.
5) Explain Read committed snapshot isolation level in detail.
New for SQL Server 2005, this is actually a database option, not a stand-alone isolation level. It determines the specific behaviour of the read committed isolation level. When this option is on, row versioning is used to take a snapshot of data. It provides data access with reduced blocking in a manner similar to read uncommitted isolation, but without allowing dirty reads.
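The option is enabled per database with ALTER DATABASE; a sketch, using the mydatabase name from the earlier examples (the change normally requires no other active connections in the database):

```sql
-- Read committed readers will use row versions instead of shared locks
ALTER DATABASE mydatabase SET READ_COMMITTED_SNAPSHOT ON;
```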
6) Explain XML index.
XML indexes are used to speed access to XML data. XML data is stored as BLOB (Binary Large Object) data in the
database. Unlike B-tree indexes, the XML index is designed to work with the exists statement.
XML indexes are defined as either primary XML indexes or secondary XML Indexes.
For each row in the BLOB, the index creates several rows. The number of rows in the index is roughly equivalent
to the number of nodes in the XML BLOB.
In order to have a secondary XML index, you must first have a primary XML index. The secondary XML indexes
are created on PATH, VALUE, and PROPERTY attributes of the XML BLOB data.
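As a sketch, assuming a hypothetical docs table with an XML column xmlcol (the table must already have a clustered primary key):

```sql
-- The primary XML index shreds the XML BLOB into index rows
CREATE PRIMARY XML INDEX PXML_docs
    ON docs (xmlcol);

-- A secondary index can then be built on the PATH attribute
CREATE XML INDEX SXML_docs_path
    ON docs (xmlcol)
    USING XML INDEX PXML_docs FOR PATH;
```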
7) Explain the concept of rebuilding and disabling indexes in SQL Server 2005.
Rebuilding Indexes
When data is added to or updated in the index, page splits occur. These page splits cause the physical structure
of the index to become fragmented. In order to restore the structure of the index to an efficient state, the index
needs to be rebuilt. The more fragmented the index, the more performance improvement will result from
rebuilding the index.
Disabling Indexes
With SQL Server you can now disable an index. An index is disabled via the ALTER INDEX DISABLE command. This
allows you to deny access to an index without removing the index definition and statistics. With a nonclustered
index or an indexed view, the index data is removed when the index is disabled.
Disabling a clustered index also disables access to the underlying table data, but the underlying table data is not
removed.
Disabling all other indexes on a table guarantees that only the existing index will be used.
This command is useful for testing since it does not require existing indexes to be rebuilt after the test.
In order to re-enable access to the index, and to the underlying data if it is a clustered index, run the command ALTER INDEX REBUILD or CREATE INDEX WITH DROP_EXISTING.
This command re-creates the index data, enables the index, and allows users to access that data.
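The disable/rebuild cycle described above looks like this in T-SQL; the index and table names here are hypothetical:

```sql
-- Deny access to the index; its definition and statistics are kept
ALTER INDEX IX_Employee_LastName ON Employee DISABLE;

-- Re-enable the index by re-creating its data
ALTER INDEX IX_Employee_LastName ON Employee REBUILD;
```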
8) Explain Page restore and point-in-time restore.
Que 3-C) Attempt any two [10 Marks]
1) Explain Isolation level in detail.
Read uncommitted
Lowest level of isolation. At this level, transactions are isolated just enough to ensure that physically
corrupted data is not read. Dirty reads are allowed because no shared locks are held for data reads, and
exclusive locks on data are ignored.
Read committed
Default level for SQL Server. At this level, reads are allowed only on committed data, so a read is blocked
while the data is being modified. Shared locks are held for reads, and exclusive locks are honored. Thus,
dirty reads are not allowed. There is a new database option that determines the behavior of read
committed, called read committed snapshot. By default the read committed
snapshot option is off, such that the read committed isolation level behaves exactly as described here.
Read committed snapshot (database option)
New for SQL Server 2005, this is actually a database option, not a stand-alone isolation level. It determines
the specific behaviour of the read committed isolation level. When this option is on, row versioning is used
to take a snapshot of data. Provides data access with reduced blocking in a manner similar to read
uncommitted isolation, but without allowing dirty reads.
Repeatable read
Level at which repeated reads of the same row or rows within a transaction achieve the same results. Until
a repeatable read transaction is completed, no other transactions can modify the data because all shared
locks are held for the duration of the transaction.
Snapshot isolation
New for SQL Server 2005. This isolation level uses row versioning to provide read consistency for an entire
transaction while avoiding blocking and preventing phantom reads. There is a corresponding database
option that must also be set to use this isolation level.
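A session selects its isolation level with SET TRANSACTION ISOLATION LEVEL; a sketch using a hypothetical Accounts table:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    -- Shared locks taken here are held until COMMIT, so repeating
    -- this read inside the transaction returns the same rows
    SELECT Balance FROM Accounts WHERE AccountID = 1001;
COMMIT TRANSACTION;

-- Snapshot isolation additionally needs the database option:
-- ALTER DATABASE mydatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
```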
2) Types of backup in detail.
Backups are an online process. When SQL Server performs a backup, the database being backed up remains
online for users to access. Backups generate additional load on the system and can block user processes, so
you absolutely want to schedule them during off-peak hours if possible to reduce overhead and contention as
much as possible.
There are many different types of backups available, such as:
Data Backups
The first major category of backups is data backups. A data backup includes an image of one or more data files
and enough log record data to allow recovery of the data upon restore.
Data backups include the following three types:
 Full database backup: Includes all data files in the database; a complete set of data. A complete set of file or filegroup backups can be equivalent to a full database backup.
 Partial backup: New for SQL Server 2005; includes the primary filegroup and any read-write filegroups, excluding any read-only filegroups by default.
 File or filegroup backup: Includes only the file or filegroup specified.
Full Database Backup
The full database backup is sometimes referred to simply as the “full backup”. A full database backup is a
backup of the entire database that contains all data files and the log records needed to recover the database to
the point in time when the backup completed. Full database backups should be part of the backup strategy for
all business-critical databases.
A full database backup contains the complete set of data needed to restore and recover a database to a
consistent state—so it can be thought of as a baseline. Other backups may be restored on top of a restored full
database backup—such as differential backups, partial backups, and log backups. However, all other backup
types require a full database backup to be restored before they can be restored. You cannot restore only a
differential, partial, or log backup by itself.
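A full database backup is a single BACKUP DATABASE statement; the disk path here is hypothetical:

```sql
BACKUP DATABASE mydatabase
TO DISK = 'D:\Backups\mydatabase_full.bak'
WITH INIT, NAME = 'mydatabase full backup';
```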
Partial Backup
The partial backup capability is new for SQL Server 2005. Partial backup is mainly intended for use with read-only databases that use the simple recovery model; however, it also works with the full and bulk-logged recovery models.
The partial backup always backs up both the primary filegroup and any filegroups that are read-write.
A read-write filegroup allows data modifications to the files in that filegroup, in contrast with a read-only filegroup, which allows only reads of that filegroup. Read-only filegroups are not backed up with a partial backup unless they are explicitly specified in the backup command. The primary filegroup cannot be individually set to read-only. To force the primary filegroup to be read-only, you can set the entire database to read-only.
The main purpose of the partial backup is to provide a faster and smaller backup for databases with one or more
read-only filegroups.
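A partial backup uses the READ_WRITE_FILEGROUPS keyword; a sketch with a hypothetical disk path:

```sql
-- Backs up the primary filegroup and all read-write filegroups;
-- read-only filegroups are skipped unless named explicitly
BACKUP DATABASE mydatabase READ_WRITE_FILEGROUPS
TO DISK = 'D:\Backups\mydatabase_partial.bak';
```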
File and Filegroup Backup
As an alternative to performing a full database backup of the entire database at once, you can choose to back up only one file or filegroup at a time. This assumes that the database contains multiple filegroups (in addition to the primary filegroup). An individual file within a filegroup can be backed up, or an entire filegroup, including all the files within that filegroup, can be backed up.
File or filegroup backups can be necessary when a database is so large that the backup must be done in parts
because it takes too long to back up the entire database at one time. Another potential benefit of having a file
backup is that if a disk on which a particular file resides fails and is replaced, just that file can be restored,
instead of the entire database.
To ensure that you can restore a complete copy of the database when needed, you must have either a full
database backup as a baseline or a complete set of full backups for each of the files and/or filegroups in the
database. A complete set of file or filegroup backups is equivalent to a full database backup. If you do not have
a full database backup and do not have a complete backup set of all files, then you will not be able to restore the
entire database.
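The FILE and FILEGROUP options of the BACKUP command select the unit to back up. A sketch with example names:

```sql
-- back up a single file ...
BACKUP DATABASE mydatabase
FILE = 'mydb_file3_on_secondary_fg'
TO DISK = 'D:\Backups\mydb_file3.bak' ;

-- ... or an entire filegroup, which includes all its files
BACKUP DATABASE mydatabase
FILEGROUP = 'SECONDARY_FG'
TO DISK = 'D:\Backups\mydb_secondary_fg.bak' ;
```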
Differential Backups
A differential backup backs up only the data that has changed since the last base backup.
A differential backup is not a stand-alone backup—there must be a full backup that the differential is based on,
called the base backup. Differential backups are a means of backing up data more quickly by backing up only
changes in data that occurred since the last base backup, resulting in a smaller backup than a full backup. This
may allow you to perform differential backups more frequently than you could perform full backups.
A differential backup can be created on the database, partial, file, or filegroup level. For smaller
databases, a full database differential is most common. For much larger databases, differential backups at the
file or filegroup level might be needed to save space and to reduce backup time and the associated system
overhead.
In addition to being faster and smaller than a full backup, a differential backup also makes the restore process
simpler. When you restore using differentials, you must first restore the full base backup. Then, you restore the
most recent differential backup that was taken. If multiple differentials were taken, you need to restore only the
most recent one, not all of them. No log backups need to be restored between the full and differential backups.
After the differential has been restored, then any log backups taken after the differential can be restored.
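A database-level differential is requested with the DIFFERENTIAL option of the BACKUP command (names are examples):

```sql
-- backs up only data changed since the last full (base) backup
BACKUP DATABASE mydatabase
TO DISK = 'D:\Backups\mydatabase_diff.bak'
WITH DIFFERENTIAL ;
```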
Log Backups
Log backups are required when a database uses the full or bulk-logged recovery models, or else the log file will
grow continually until the disk is full. The simple recovery model does not allow log backups because the log
is truncated automatically upon database checkpoints. The transaction log contains records of transactions that
are made to the database. A backup of the log is necessary for recovering transactions between data backups.
Data may be recovered to a point in time within the log backup as well, with the exception of log backups that
contain bulk-logged records—these must be restored to the end of the backup. Without log backups, you can
restore data only to the time when a data backup was completed. Log backups are taken between data backups
to allow point-in-time recovery.
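A log backup is taken with the BACKUP LOG command (names are examples):

```sql
-- transaction log backup; valid only under the full or bulk-logged model
BACKUP LOG mydatabase
TO DISK = 'D:\Backups\mydatabase_log.trn' ;
```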
Copy-Only Backups
There may be a situation in which you would like to create a backup of a file or database but do not want to
affect the current backup and restore procedures.
You can do this using a new backup type in SQL Server called a copy-only backup. It will leave the current
backup and restore information intact in the database and will not disturb the
normal sequence of backups that are in process.
To use copy-only backups, you must use T-SQL scripts with the BACKUP and RESTORE commands. Copy-only backups are not an option in SQL Server Management Studio.
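A hedged sketch of a copy-only backup in T-SQL (names are examples):

```sql
-- COPY_ONLY leaves the differential base and log chain undisturbed
BACKUP DATABASE mydatabase
TO DISK = 'D:\Backups\mydatabase_copyonly.bak'
WITH COPY_ONLY ;
```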
Full-Text Catalog Backups
SQL Server provides a new feature to back up full-text catalog data. The full-text data is backed up by default
with a regular backup. It is treated as a file and is included in the backup set with the file data. A full-text catalog
file can also be backed up alone without the database data. Use the BACKUP command to perform a full-text
catalog backup.
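As a sketch, the catalog can be addressed as a file whose logical name is formed from the catalog name; the catalog name below is hypothetical:

```sql
-- back up only the full-text catalog (SQL Server 2005 exposes the catalog
-- as a file named 'sysft_' plus the catalog name; 'MyCatalog' is an example)
BACKUP DATABASE mydatabase
FILE = 'sysft_MyCatalog'
TO DISK = 'D:\Backups\mydb_ftcatalog.bak' ;
```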
3) Explain the recovery models of SQL Server with their advantages and disadvantages.
4) What is Deadlock? Explain the types of locks in detail.
Deadlock refers to a specific condition in which two or more processes are each waiting for the other to release a resource, or more than two processes are waiting for resources in a circular chain.
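A classic deadlock can be sketched with two sessions that update the same two rows in opposite order (the accounts table and its values are hypothetical):

```sql
-- Session 1
BEGIN TRAN ;
UPDATE accounts SET bal = bal - 10 WHERE id = 1 ;  -- exclusive lock on row 1

-- Session 2, running concurrently
BEGIN TRAN ;
UPDATE accounts SET bal = bal + 10 WHERE id = 2 ;  -- exclusive lock on row 2

-- Session 1 now blocks, waiting for row 2 ...
UPDATE accounts SET bal = bal + 10 WHERE id = 2 ;

-- ... while Session 2 blocks, waiting for row 1: a circular wait.
UPDATE accounts SET bal = bal - 10 WHERE id = 1 ;
-- SQL Server's deadlock monitor detects the cycle and rolls back one
-- session as the deadlock victim (error 1205); the other proceeds.
```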
Different Types of Locks
1) Shared :- A shared lock is used for read-only operations, such as those performed with a SELECT statement. This mode allows concurrent transactions to read the same resource at the same time, but it does not allow any transaction to modify the resource.
Shared locks are released after the data has been read, unless the isolation level has been set to repeatable read or higher.
2) Update :- An update lock is used when an update might be performed on a resource. Only one transaction can hold an update lock on a resource at a time. SQL Server places update locks on the row, page, or table being read. If the transaction makes a modification, the update lock is converted to an exclusive lock; otherwise it is converted to a shared lock.
3) Exclusive :- An exclusive lock is used for operations that modify data, such as updates, inserts, and deletes. When an exclusive lock is held on a resource, no other transaction can read or modify the resource.
Note :- Other transactions may read the data without being blocked by the exclusive lock if a locking hint, the read uncommitted isolation level, or read committed snapshot isolation is used.
4) Intent :-
An intent lock is used to establish a locking hierarchy. The purpose of the intent lock is to protect lower-level resource locks, such as page and row locks, from being exclusively locked by another transaction through a higher-level resource lock, such as a table lock.
The intent lock on the table is acquired before any lower-level locks are acquired. This prevents a second transaction from acquiring an exclusive lock on that same table, which would block the intended page-level or row-level access by the first transaction.
5) Schema :- There are two categories of schema locks:
1) Schema modification
A schema modification lock is taken when someone is actually modifying the table or index schema. While this lock is held, no user can access the table.
2) Schema stability
A schema stability lock is taken when SQL Server needs to prevent a table or index from being modified.
6) Bulk Update (BU) :- Used when bulk-copying data into a table and the TABLOCK hint is specified.
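Lock acquisition can be observed with the sys.dm_tran_locks dynamic management view introduced in SQL Server 2005. This sketch assumes a hypothetical table mytable:

```sql
BEGIN TRAN ;
-- the TABLOCK and HOLDLOCK hints take a shared table-level lock
-- and keep it for the duration of the transaction
SELECT COUNT(*) FROM mytable WITH (TABLOCK, HOLDLOCK) ;

-- list the locks held by the current session
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID ;

ROLLBACK ;
```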
5) Explain the types of restore.
Complete Database, Differential Database, and Log Restores
A complete database restore is performed by restoring a full database backup. It restores all the files that existed
in the database at the time of the backup. A differential database restore can be applied after its base complete
database restore is performed with the NORECOVERY option.
If multiple differential database backups have been taken since the full database backup, only the most recent
differential backup needs to be restored. This is because each differential backup contains all changes since the
base backup, not since the last differential backup. (In some cases there may not be a differential backup to
apply, only log backups.)
The following is the basic T-SQL syntax for a complete database restore or a differential restore operation:
RESTORE DATABASE <database_name>
FROM <backup_device>
WITH FILE = n, [RECOVERY | NORECOVERY];
The following is the basic T-SQL syntax for a log restore operation:
RESTORE LOG <database_name>
FROM <backup_device>
WITH FILE = n, [RECOVERY | NORECOVERY];
A log restore applies a log backup by rolling forward transaction records. Multiple log backups may be applied one after the other as long as the NORECOVERY option is specified. When applying the last log backup, use the RECOVERY option to recover the database and bring it online.
Point-in-Time Restore
When using the full or bulk-logged recovery model, and thus taking regular log backups, it is possible to recover to a point in time within a log backup. The exception is that under the bulk-logged recovery model, if a particular log backup contains bulk-logged records, the entire log backup must be restored and point-in-time restore is not possible within that backup. A log backup taken under the bulk-logged model can be restored to a point in time only if it contains no bulk-logged records.
Point-in-time restore recovers only the transactions that occurred before the specified time within a log backup.
Point-in-time recovery can be accomplished using Management Studio or the RESTORE statement with the
STOPAT option. When using the RESTORE command in a restore sequence, you should specify the time to
stop with each command in the sequence, so you don’t have to identify which backups are needed to restore to
that point. SQL Server determines when the time has been reached and does not restore records after that point
but does recover the database. For example, here is a restore sequence using STOPAT to restore up to 1:15 p.m.
and recover the database (even though we do not know within which backup the records up to 1:15 p.m. reside):
-- restore database backup, stopping at 1:15 PM
RESTORE DATABASE [mydatabase]
FROM mydb1_dev, mydb2_dev
WITH STOPAT = 'May 17, 2006 1:15 PM', NORECOVERY ;
-- restore records from log backup 1
RESTORE LOG [mydatabase]
FROM mydblog_dev1
WITH STOPAT = 'May 17, 2006 1:15 PM', NORECOVERY ;
-- restore records from log backup 2
RESTORE LOG [mydatabase]
FROM mydblog_dev2
WITH STOPAT = 'May 17, 2006 1:15 PM', RECOVERY ;
File and Filegroup Restore
A file or filegroup restore is applicable only for databases that contain multiple filegroups. An individual file
can be restored or an entire filegroup can be restored, including all the files within that filegroup.
With multiple filegroups in a database, files and filegroups can be backed up and restored individually. If all
database files were in the primary filegroup, there would be little benefit of having individual file backups,
since the online restore capability applies only to filegroups other than the primary filegroup.
The main benefit of being able to restore one file individually is that it can reduce the time for restoring data in
the case where only one file is damaged or affected by accidental data deletion. That file can be restored
individually instead of restoring the entire database. If the filegroup to which a file belongs is read-write, then
you must have the complete chain of log backups since the file was backed up, in order to recover the file to a
state consistent with the rest of the database. Only the changes in the log backups that affect that particular file
are applied. If the file is read-only or if that file has not had any data changes made to it, then it can be
successfully restored and recovered without applying log backups.
The following is a T-SQL example of restoring a full file backup (the base), a differential file backup, and a log backup:
-- restore base file backup
USE master ;
RESTORE DATABASE mydatabase
FILE = 'mydb_file3_on_secondary_fg'
FROM mydb1_dev, mydb2_dev WITH FILE = 21, NORECOVERY ;
-- restore differential file backup
RESTORE DATABASE mydatabase
FILE = 'mydb_file3_on_secondary_fg'
FROM mydb1_dev, mydb2_dev WITH FILE = 22, NORECOVERY ;
-- restore log backup
RESTORE LOG mydatabase
FROM mydb1_dev, mydb2_dev WITH FILE = 23, NORECOVERY ;
To restore all files in a filegroup from a filegroup backup rather than individual files, you can use the FILEGROUP = syntax, as in the following example:
-- restore filegroup backup
USE master ;
RESTORE DATABASE mydatabase
FILEGROUP = 'SECONDARY_FG'
FROM mydb_secondary_fg_backup_dev
WITH NORECOVERY ;
-- restore log backup
RESTORE LOG mydatabase
FROM mydb_log_backup_dev
WITH FILE = 26, NORECOVERY ;
Page Restore
Page restores are possible only for databases using the full or bulk-logged recovery models, not with the simple
recovery model, and only available with SQL Server 2005 Enterprise Edition. This capability is provided in
order to recover a corrupted data page that has been detected by checksum or a torn write. SQL Server 2005 has
improved page-level error detection and reporting.
To restore a page, both the file ID number and the page ID number are needed. Use the RESTORE DATABASE statement to restore from the file, filegroup, or database that contains the page, and the PAGE option with <fileID:pageID>.
The following example restores four data pages (with IDs 89, 250, 863, and 1049) within file ID = 1. Note that
to complete the page restores, a log backup must be taken and then restored at the end of the restore sequence:
USE master ;
RESTORE DATABASE mydatabase
PAGE = '1:89, 1:250, 1:863, 1:1049'
FROM file1_backup_dev
WITH NORECOVERY ;
RESTORE LOG mydatabase FROM log_backup_dev1
WITH NORECOVERY ;
RESTORE LOG mydatabase FROM log_backup_dev2
WITH NORECOVERY ;
BACKUP LOG mydatabase TO current_log_backup_dev ;
RESTORE LOG mydatabase FROM current_log_backup_dev
WITH RECOVERY ;
There are a number of ways to identify the file and page ID of corrupted pages, including
viewing the SQL Server error log, and there are several limitations and considerations
that you should know before performing page restores.
Partial and Piecemeal Restore
As an enhancement to partial restores in SQL Server 2000, SQL Server 2005 allows piecemeal restores from
not only a full database backup but also from a set of individual filegroup backups. The purpose of a piecemeal
restore is to provide the capability to restore and recover a database in stages or by pieces, one filegroup at a
time.
The piecemeal restore sequence recovers data at the filegroup level. The primary filegroup must be restored in
the first stage as a partial restore (optionally along with any other secondary filegroups) using the PARTIAL
option of the RESTORE command, which indicates the beginning of a piecemeal restore. When the PARTIAL
option is specified in the command, the primary filegroup is implicitly selected. If you use PARTIAL for any
other stage in the restore sequence, the primary filegroup is implicitly selected and a new piecemeal restore
scenario begins. Therefore, PARTIAL must be used only in the very first restore statement of the sequence.
Here is an example of a restore sequence that begins a piecemeal (partial) restore, restores only the primary filegroup and one of the read-write secondary filegroups, and recovers those two filegroups only. The third filegroup will be marked offline and will not be accessible until it is restored and brought online. In the meantime, the first two filegroups are made available:
USE master ;
-- first create the log backup
BACKUP LOG mydatabase TO mydb_log_backup ;
-- begin the initial stage of a piecemeal restore with the primary filegroup restore
RESTORE DATABASE mydatabase
FILEGROUP = 'PRIMARY'
FROM mydbbackup
WITH PARTIAL, NORECOVERY ;
-- restore one of the secondary read-write filegroups
RESTORE DATABASE mydatabase
FILEGROUP = 'SECONDARY_FG_1'
FROM secondary_fg_backup
WITH NORECOVERY ;
-- restore the unbroken chain of log backups
RESTORE LOG mydatabase
FROM mydb_log_backup_dev1
WITH NORECOVERY ;
RESTORE LOG mydatabase
FROM mydb_log_backup_dev2
WITH NORECOVERY ;
After the primary filegroup is restored, it is brought online and any other filegroups that were not restored are
automatically marked offline and placed in a state of recovery pending. Any filegroups that are not damaged
and are read-only may be brought online without restoring the data.
BEST OF LUCK