What Is RDBMS (Relational Database Management System)?


Basic SQL Server Questions

What is RDBMS (Relational database management system)?

RDBMS is a database management system (DBMS) that is based on the relational model.
Relationships may be created and maintained across and among the data and tables. In a relational
database, relationships between data items are expressed by means of tables, and interdependencies
among these tables are expressed by data values.

What is the Difference between DBMS and RDBMS?

A DBMS has to be persistent: the data should remain accessible after the program that created it
has ended, or even after the application that created it has been restarted. A DBMS also has to
provide uniform methods, independent of any specific application, for accessing the stored
information.

RDBMS stands for Relational Database Management System. It adds the condition that the system
supports a tabular structure for the data, with enforced relationships between the tables. This
excludes databases that don't support a tabular structure or don't enforce relationships between
tables.

What are the properties of the RDBMS tables?

1. Values are atomic.

2. Column values are of the same kind.

3. Each row is unique.

4. Each column must have a unique name.

5. The sequence of columns is insignificant.

6. The sequence of rows is insignificant.

What is a Trigger?

A SQL trigger is a SQL statement, or a set of SQL statements, stored so that it is activated (fired)
when an event associated with a database table occurs. The event can be an INSERT, UPDATE,
or DELETE.
Sometimes a trigger is referred to as a special kind of stored procedure, in terms of the procedural
code inside its body.
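
A minimal sketch of an audit trigger (the dbo.Employee and dbo.EmployeeAudit tables here are hypothetical):

CREATE TRIGGER trg_Employee_Audit
ON dbo.Employee            -- hypothetical table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted/deleted are the pseudo-tables available inside a trigger
    INSERT INTO dbo.EmployeeAudit (EmpID, OldName, NewName, ChangedAt)
    SELECT d.EmpID, d.EmpName, i.EmpName, GETDATE()
    FROM deleted AS d
    JOIN inserted AS i ON d.EmpID = i.EmpID;
END;
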
What are the advantages and disadvantages of Triggers?

Advantages

Triggers provide an alternative way to check integrity.

Triggers can catch errors in business logic at the database level.

Triggers provide an alternative way to run scheduled tasks. With a SQL trigger, you don't have to
wait for a scheduled task to run; you can handle the task before or after changes are made to the
database table.

Triggers are very useful for auditing the changes of data in a database table.

Disadvantages

Triggers can only provide extended validation and cannot replace all validations. Some simple
validations should still be done at the application level.

SQL triggers execute invisibly from the client application that connects to the database server, so it
is difficult to figure out what is happening at the database layer.

SQL triggers run on every update made to the table, so they add workload to the database and can
cause the system to run slower.

What is Stored Procedure?

A stored procedure is a named group of SQL statements that have been previously created and stored
in the server database. Stored procedures accept input parameters so that a single procedure can be
used over the network by several clients using different input data.
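
A minimal sketch of such a procedure, assuming a hypothetical dbo.Employee table:

CREATE PROCEDURE dbo.usp_GetEmployeesByDept
    @DeptID int    -- input parameter supplied by each client
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmpID, EmpName
    FROM dbo.Employee
    WHERE DeptID = @DeptID;
END;
GO
-- different clients can reuse the same procedure with different input data:
EXEC dbo.usp_GetEmployeesByDept @DeptID = 10;
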

What is Normalization?

In relational database design, the process of organizing data to minimize redundancy of data in a table
is called normalization.

Mention few normalization forms?

1NF: Eliminate Repeating Groups

Make a separate table for each set of related attributes, and give each table a primary key. Each field
contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data

If an attribute depends on only part of a multi-valued key, remove it to a separate table (see the sketch after this list).

3NF: Eliminate Columns Not Dependent On Key

If attributes do not contribute to a description of the key, remove them to a separate table. All
attributes must be directly dependent on the primary key.

BCNF: Boyce-Codd Normal Form

If there are non-trivial dependencies between candidate key attributes, separate them out into distinct
tables.
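
As an illustration of the 2NF step above, here is a hedged sketch using hypothetical tables: in an OrderLine table keyed on (OrderID, ProductID), ProductName depends only on ProductID, so it is moved to its own table.

-- Before (violates 2NF): ProductName depends on only part of the key
-- OrderLine (OrderID, ProductID, ProductName, Qty)

-- After (2NF): the partially dependent attribute gets its own table
CREATE TABLE dbo.Product (
    ProductID   int PRIMARY KEY,
    ProductName varchar(50)
);
CREATE TABLE dbo.OrderLine (
    OrderID   int NOT NULL,
    ProductID int NOT NULL REFERENCES dbo.Product(ProductID),
    Qty       int,
    PRIMARY KEY (OrderID, ProductID)
);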

For more types Refer : http://en.wikipedia.org/wiki/Database_normalization

What is ACID Property?

ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee database
transactions are processed reliably. In the context of databases, a single logical operation on the data
is called a transaction. For example, a transfer of funds from one bank account to another, even
though that might involve multiple changes (such as debiting one account and crediting another), is a
single transaction.

Explain each characteristic of ACID Properties?

Atomicity

It requires that database modifications must follow an "all or nothing" rule. A transaction is said to
be atomic if, when one part of the transaction fails, the entire transaction fails and the database
state is left unchanged. It is critical that the database management system maintain the atomic
nature of transactions in spite of any application, DBMS, operating system, or hardware failure.

Consistency

This property ensures that the database remains in a consistent state; more precisely, it says that
any transaction will take the database from one consistent state to another consistent state. The
consistency property does not say how the DBMS should handle an inconsistency other than to
ensure the database is clean at the end of the transaction.

Isolation
It refers to the requirement that other operations cannot access data that has been modified during a
transaction that has not yet completed. The question of isolation occurs in case of concurrent
transactions (multiple transactions occurring at the same time). Each transaction must remain
unaware of other concurrently executing transactions, except that one transaction may be forced to
wait for the completion of another transaction that has modified data that the waiting transaction
requires.

Durability

It is the ability of the DBMS to recover the committed transaction updates against any kind of system
failure (hardware or software). Durability is the DBMS’s guarantee that once the user has been
notified of a transaction’s success the transaction will not be lost, the transaction’s data changes will
survive system failure, and that all integrity constraints have been satisfied, so the DBMS won’t need
to reverse the transaction.
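
The funds-transfer example above can be sketched in T-SQL as one atomic transaction (the dbo.Accounts table is hypothetical):

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;  -- debit
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;  -- credit
    COMMIT TRANSACTION;        -- durability: both changes survive together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- atomicity: neither change survives a failure
END CATCH;
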

What is De-normalization?

It is the opposite of normalization. De-normalization is the process of attempting to optimize the
performance of a database by adding redundant data.

What is a primary Key?

In relational database design, a primary key is used to uniquely identify each row in a table.

What is a candidate key ?

A candidate key is a field or combination of fields that can act as a primary key field for that table to
uniquely identify each record in that table.

What is a surrogate key ?

Surrogate keys are keys that have no "business" meaning and are solely used to identify a record in
the table. Such keys are either database- or system-generated values.

Examples: IDENTITY columns in SQL Server, GUIDs.

What are Superkeys?

A superkey is defined in the relational model as a set of attributes of a relation variable (relvar) for
which it holds that in all relations assigned to that variable there are no two distinct tuples (rows) that
have the same values for the attributes in this set. Equivalently a superkey can also be defined as a
set of attributes of a relvar upon which all attributes of the relvar are functionally dependent.
What’s the difference between a primary key and a unique key?

Both primary key and unique key enforce uniqueness of the column on which they are defined. But
by default a primary key creates a clustered index on the column, whereas a unique key creates a
nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs,
but a unique key allows one NULL only.
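
A small sketch of the difference, using a hypothetical table:

CREATE TABLE dbo.Product (
    ProductID   int NOT NULL PRIMARY KEY,  -- clustered index by default; NULLs not allowed
    ProductCode varchar(20) NULL UNIQUE    -- nonclustered index by default; one NULL allowed
);
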

What are the advantages and disadvantages of Surrogate Key ?

Pros:

1. Business logic is not in the keys.
2. Small 4-byte key (the surrogate key will most likely be an integer; SQL Server, for example,
requires only 4 bytes to store it, or 8 bytes if it is a bigint).
3. Joins are very fast.
4. No locking contention because of the unique constraint (this refers to the waits that develop
when two sessions are trying to insert the same unique business key), as the surrogates are
generated by the DB and are cached – very scalable.

Cons:

1. An additional index is needed. In SQL Server, the PK constraint always creates a unique index;
in Oracle, if an index already exists, PK creation will use that index for uniqueness enforcement
(so this is not a con in Oracle).
2. Cannot be used as a search key.
3. If it is database controlled, then for products that support multiple databases, different
implementations are needed; for example, IDENTITY in SQL Server 2000, BEFORE triggers and
sequences in Oracle, and identity/sequence in DB2 UDB.
4. Always requires a join when browsing the child table(s).

What are identity columns?

An Identity column is a column (field) in a database table that is made up of values generated by the
database. This is much like an AutoNumber field in Microsoft Access or a sequence in Oracle. Because
the concept is so important in database science, many RDBMS systems implement some type of
generated key, although each has its own terminology.

An identity column differs from a primary key in that its values are managed by the server and usually
cannot be modified. In many cases an identity column is used as a primary key, however this is not
always the case.
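
A minimal sketch of an identity column in SQL Server, using a hypothetical table:

CREATE TABLE dbo.OrderHeader (
    OrderID   int IDENTITY(1,1) PRIMARY KEY,  -- seed 1, increment 1; values generated by the server
    OrderDate datetime NOT NULL
);

INSERT INTO dbo.OrderHeader (OrderDate) VALUES (GETDATE());
SELECT SCOPE_IDENTITY();  -- returns the identity value generated by the last insert in this scope
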
What is View?

A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as
updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the
view was created with. It should also be noted that as data in the original table changes, so does data
in the view, as views are a way to look at part of the original table. The results of using a view are
not permanently stored in the database. The data accessed through a view is actually constructed
using a standard T-SQL SELECT command and can come from one to many different base tables,
or even other views.

What does @@error return?

Returns the error number for the last Transact-SQL statement executed.

Returns 0 if the previous Transact-SQL statement encountered no errors.

Returns an error number if the previous statement encountered an error. If the error was one of the
errors in the sys.messages catalog view, then @@ERROR contains the value from the
sys.messages.message_id column for that error. You can view the text associated with an @@ERROR
error number in sys.messages.
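
A typical usage pattern, assuming a hypothetical dbo.Orders table:

DELETE FROM dbo.Orders WHERE OrderID = 42;
IF @@ERROR <> 0
    PRINT N'The DELETE statement failed.';
ELSE
    PRINT N'The DELETE statement ran without error.';
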

What is the difference between temp tables and CTE and Table variables ?

Refer post: http://geekepisodes.com/sqlbi/2010/difference-between-temp-tables-and-table-variables-and-ctes/

What are functions?

A function in SQL Server is an object that performs an operation and returns a value.

What are deterministic or nondeterministic functions?

Functions are deterministic when they always return the same result any time they are called by using
a specific set of input values.

Ex: Replace(), Rtrim(), Substring()

 Functions are nondeterministic when they could return different results every time they are called,
even with the same specific set of input values.

Ex: Getdate(), newid()


Types of user defined functions (UDF) in SQL server ?

Different Kinds of User-Defined Functions created are:

Scalar User-Defined Function

A Scalar user-defined function returns one of the scalar data types like int, varchar, float etc. You pass
in 0 to many parameters and you get a return value.

Table-Valued User-Defined Function

A table-valued user-defined function returns a table data type and is an alternative to a view.
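
A sketch of both kinds, assuming a hypothetical dbo.Employee table:

-- Scalar UDF: returns a single value
CREATE FUNCTION dbo.fn_FullName (@First varchar(50), @Last varchar(50))
RETURNS varchar(101)
AS
BEGIN
    RETURN @First + ' ' + @Last;
END;
GO
-- Inline table-valued UDF: returns a table, usable like a parameterized view
CREATE FUNCTION dbo.fn_EmployeesByDept (@DeptID int)
RETURNS TABLE
AS
RETURN (SELECT EmpID, EmpName FROM dbo.Employee WHERE DeptID = @DeptID);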

Mention the datatypes not supported by scalar UDF?

1. text
2. ntext
3. image

 What are aggregate functions? Mention few.

A function that performs a computation on a set of values rather than on a single value. For example,
finding the average or mean of a list of numbers is an aggregate function.

Ex: sum(), avg(), count(), max(), Min()

What is a stored procedure?

A set of Structured Query Language (SQL) statements with an assigned name that’s stored in the
database in compiled form so that it can be shared by a number of programs.

Mention few commonly used System stored procedures ?

1. sp_helptext
2. sp_spaceused
3. sp_adduser
4. sp_help
5. sp_helpdb
6. sp_helpfile
7. sp_statistics
8. sp_table_privileges
9. sp_sqlexec
10. sp_tables
11. sp_who

and many more.

How to hide the number of records affected by a query?

By using "SET NOCOUNT ON".

Refer post: http://geekepisodes.com/sqlbi/2010/set-nocount-on-hide-the-number-of-records-affected-by-a-query/

What are linked servers ?

A linked server configuration enables SQL Server to execute commands against OLE DB data sources
on remote servers.

What are the advantages of linked servers ?

 Linked servers offer the following advantages:

1. Remote server access.
2. The ability to issue distributed queries, updates, commands, and transactions on
heterogeneous data sources across the enterprise.
3. The ability to address diverse data sources similarly.
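
Once a linked server has been configured (for example, with sp_addlinkedserver), it can be queried; the server, database, and table names below are hypothetical:

-- four-part name, executed as a distributed query
SELECT * FROM REMOTESRV.SalesDb.dbo.Customers;

-- pass-through query executed entirely on the remote server
SELECT * FROM OPENQUERY(REMOTESRV, 'SELECT CustomerID, Name FROM SalesDb.dbo.Customers');
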

 What is a view ?

A view is, in essence, a virtual table. It does not physically exist. Rather, it is created by a query
joining one or more tables.

What is indexed view?

A view with an index is known as an indexed view. The index stores the result set of the query
separately, as an object. This helps improve query processing, as the query optimizer can scan the
data in the indexed view rather than the tables used in the query that created the view. In the case
of a standard view, only the view definition is stored; the result set is not.
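
A minimal sketch, assuming a hypothetical dbo.Sales table (an indexed view requires SCHEMABINDING, and COUNT_BIG(*) when it uses GROUP BY):

CREATE VIEW dbo.vSalesSummary
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt
FROM dbo.Sales
GROUP BY ProductID;
GO
-- materializes the view's result set as a stored object
CREATE UNIQUE CLUSTERED INDEX IX_vSalesSummary ON dbo.vSalesSummary (ProductID);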
What is a synonym in SQL server?

A synonym is an alternate name for a schema-scoped object (most often a table). It helps hide the
underlying object's details from the end user, hence providing better security.
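
A small sketch; the database and table names are hypothetical:

CREATE SYNONYM dbo.Emp FOR OtherDb.dbo.EmployeeMaster;

-- end users query the synonym without seeing the underlying object's location
SELECT * FROM dbo.Emp;
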

What are the advantages of procedures over normal script ?

Stored procedures are faster in execution, as the plans are already cached in the memory.

When the procedure is modified, all clients automatically get the new version.

Stored procedures reduce network traffic and improve performance.

Stored procedures can be used to help ensure the integrity of the database.

What are GUIDs?

http://geekepisodes.com/sqlbi/2010/guid-globally-unique-identifier-in-sql-server/

What are joins ? mention different types and the results.

http://sql.geekepisodes.com/2010/differnce-between-inner-join-left-right-outer-join-and-full-outer-join-in-sql-server/

What is an OLE DB provider?

An OLE DB provider is a DLL that manages and interacts with a specific data source. An OLE DB data
source identifies the specific database that can be accessed through OLE DB.

What is the difference between a “where” clause and a “having” clause?

"Where" is a kind of restriction statement. You use the WHERE clause to restrict the rows coming
from the database; it is applied before the result set is built.

Ex:

SELECT * FROM EmployeeTable
WHERE EmpName = 'Smith'

"Having" is used after the GROUP BY clause. It works the same way as the WHERE clause, but on
the results after GROUP BY has been applied.

Ex:

SELECT EmpName, SUM(Salary)
FROM SalaryTable
WHERE SalaryDate > '01/01/2010'
GROUP BY EmpName
HAVING SUM(Salary) > 200000

What is the basic form of a SQL statement to read data out of a table?

The basic form to read data out of table is “SELECT * FROM table_name;”

What is a “constraint”?

A constraint allows you to apply simple integrity checks on a table. The primary types of constraints
currently supported by SQL Server are:

PRIMARY KEY/UNIQUE – enforces uniqueness of a particular table column.

FOREIGN KEY – validates that every value in a referenced column exists in a primary column of
another table.

CHECK – used to limit the values that can be placed in a column. Check constraints are used to
enforce domain integrity.

NOT NULL – enforces that the column will not accept null values. Not null constraints, like check
constraints, are used to enforce domain integrity.

*Note: DEFAULT – specifies a default value for a column in case an insert operation does not provide
one. Default is not a constraint in SQL Server (do remember this).
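
The constraint types above can be sketched in one hypothetical table definition (dbo.Dept is assumed to exist):

CREATE TABLE dbo.Employee (
    EmpID  int NOT NULL PRIMARY KEY,                      -- PRIMARY KEY; NOT NULL enforced
    Email  varchar(100) UNIQUE,                           -- UNIQUE
    DeptID int FOREIGN KEY REFERENCES dbo.Dept(DeptID),   -- FOREIGN KEY
    Salary money CHECK (Salary >= 0)                      -- CHECK
);
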

What is a “functional dependency”?

A functional dependency describes how one attribute in a table depends upon another. An attribute
B is functionally dependent on an attribute (or set of attributes) A if each value of A determines
exactly one value of B. For example, EmployeeID functionally determines EmployeeName: one
employee ID maps to exactly one name.

Why can a “group by” or “order by” clause be expensive to process?

Processing a "group by" or "order by" clause often requires the creation of temporary tables to hold
the intermediate results of the query, which, depending on the size of the result set, can be very
expensive.

What is a self join? Explain it with an example.

A self join is just like any other join, except that two instances of the same table are joined in the
query. It can be used with inner/outer joins.

Example:

Employees table which contains records of all the employees in the organisation including managers.
The query below fetches the details of all the employees and their manager details.

SELECT Emp.empname AS [Employee],
       ISNULL(Mgr.empname, 'No manager') AS [Manager]
FROM Employee Emp
LEFT OUTER JOIN Employee Mgr
    ON Emp.mgrid = Mgr.empid

You might have noticed that we used two instances of the same table “Employee” to fetch the final
results.

what are the differences between cross join and full outer join?

Cross Join:
No join conditions are specified.
Results in pairs of rows.
Results in the Cartesian product of the two tables.

Full Outer Join:
A combination of both left and right outer joins.
Results in every row from both of the tables, at least once.
Assigns NULL for unmatched fields.
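
A side-by-side sketch, using hypothetical tables:

-- CROSS JOIN: no join condition; Cartesian product of the two tables
SELECT a.Name, b.Name
FROM dbo.TableA AS a
CROSS JOIN dbo.TableB AS b;

-- FULL OUTER JOIN: every row from both tables at least once; NULLs where unmatched
SELECT a.Name, b.Name
FROM dbo.TableA AS a
FULL OUTER JOIN dbo.TableB AS b ON a.ID = b.ID;
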
What is “index covering” of a query?

Index covering is a term used when data can be fetched only by using indexes without touching the
underlying tables.
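
A sketch of a covering index, assuming a hypothetical dbo.Employee table:

CREATE NONCLUSTERED INDEX IX_Employee_DeptID
ON dbo.Employee (DeptID)
INCLUDE (EmpName, Salary);  -- non-key columns stored at the leaf level of the index

-- this query is "covered": every column it needs is in the index,
-- so the underlying table is never touched
SELECT EmpName, Salary FROM dbo.Employee WHERE DeptID = 10;
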

What is SQL Profiler?

SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of
Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to
analyze later.

How do you implement one-to-one, one-to-many and many-to-many relationships while designing
tables?

One-to-One relationship can be implemented as a single table and rarely as two tables with primary
and foreign key relationships. One-to-Many relationships are implemented by splitting the data into
two tables with primary key and foreign key relationships. Many-to-Many relationships are
implemented using a junction table with the keys from both the tables forming the composite primary
key of the junction table.
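
A sketch of a many-to-many junction table, using hypothetical Student/Course tables:

CREATE TABLE dbo.Student (StudentID int PRIMARY KEY, Name  varchar(50));
CREATE TABLE dbo.Course  (CourseID  int PRIMARY KEY, Title varchar(50));

CREATE TABLE dbo.StudentCourse (
    StudentID int NOT NULL REFERENCES dbo.Student(StudentID),
    CourseID  int NOT NULL REFERENCES dbo.Course(CourseID),
    PRIMARY KEY (StudentID, CourseID)  -- composite key formed from both parent keys
);
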

 What are user defined datatypes?

User-defined datatypes let you extend the base SQL Server datatypes by providing a descriptive
name and format to the database. For example, suppose your database has a column called
EmployeeName which appears in many tables, and in all these tables it should be varchar(50).

In this case you could create a user-defined datatype called UDD_EmployeeName of varchar(50) and
use it across all your tables.
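
A sketch using the UDD_EmployeeName example above (CREATE TYPE is the SQL Server 2005+ syntax; sp_addtype is the older equivalent):

CREATE TYPE dbo.UDD_EmployeeName FROM varchar(50) NOT NULL;

CREATE TABLE dbo.Employee (
    EmpID   int PRIMARY KEY,
    EmpName dbo.UDD_EmployeeName  -- same name and format everywhere it is used
);
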

What is difference between DELETE & TRUNCATE commands?

DELETE

- DELETE removes rows one at a time and records an entry in the transaction log for each
deleted row.
- The DELETE command removes rows from a table based on the condition we provide in a
WHERE clause.
- If you want to retain the identity counter, use DELETE. If you want to remove the table
definition along with its data, use the DROP TABLE statement.
- DELETE can be used with or without a WHERE clause.
- DELETE activates triggers.
- DELETE can be rolled back.
- DELETE is a DML command.
- DELETE does not reset the identity of the table.

TRUNCATE

- TRUNCATE actually removes all the rows from a table; there will be no data in the table
after we run the truncate command.
- TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
- TRUNCATE removes the data by deallocating the data pages used to store the table's data,
and only the page deallocations are recorded in the transaction log.
- TRUNCATE removes all rows from a table, but the table structure, its columns, constraints,
indexes and so on remain. The counter used by an identity column for new rows is reset to
the seed for the column.
- You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
- Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a
DELETE trigger.
- TRUNCATE can be rolled back only when it is issued inside an explicit transaction.
- TRUNCATE is a DDL command.
- TRUNCATE resets the identity of the table.
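
The two commands side by side, on hypothetical tables:

DELETE FROM dbo.Orders
WHERE OrderDate < '20090101';    -- row-by-row, fully logged, WHERE allowed, fires triggers

TRUNCATE TABLE dbo.OrderStaging; -- deallocates pages, resets the identity seed, no WHERE
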

What are the advantages of using Stored Procedures?

- Stored procedures can reduce network traffic and latency.
- Stored procedures promote code reuse.
- Stored procedures provide better security for your data, as the code used in the SP can be
hidden from end users.
- Stored procedure execution plans are cached, reducing server overhead on each execution.
- We can change stored procedure code without affecting clients.

What is a transaction?

A transaction is a logical unit of work in which all the steps must be performed, or none at all.

What are defaults?


A default is a value that will be used by a column, if no value is supplied to that column while inserting
data. IDENTITY columns and timestamp columns can’t have defaults bound to them.

What are cursors? different types of cursors ?

Cursors are objects that allow row-by-row processing of result sets.

Types of cursors: Static, Dynamic, Forward-only, Keyset-driven.
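
A minimal cursor loop, assuming a hypothetical dbo.Employee table:

DECLARE @Name varchar(50);

DECLARE emp_cursor CURSOR FAST_FORWARD FOR
    SELECT EmpName FROM dbo.Employee;

OPEN emp_cursor;
FETCH NEXT FROM emp_cursor INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;  -- process one row at a time
    FETCH NEXT FROM emp_cursor INTO @Name;
END;
CLOSE emp_cursor;
DEALLOCATE emp_cursor;
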

  

What are the disadvantages of cursors?

Disadvantages of cursors:

1. Each fetch lets you perform a set of tasks on only a single row/record at a time.

2. Each fetch results in a network roundtrip, whereas a normal SELECT query makes only one
roundtrip, however large the result set is.

3. Cursors are also memory intensive, because they require more resources and temporary storage.

4. Cursors are slower, as they result in more IO operations.

What is the system function to get the current user's user id?

USER_ID().

 What is Collation?

Collation refers to a set of rules that determine how data is sorted and compared. Character data is
sorted using rules that define the correct character sequence, with options for specifying case
sensitivity, accent marks, kana character types and character width.
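
A quick sketch of case sensitivity under two common collations:

SELECT CASE WHEN 'SQL' = 'sql' COLLATE Latin1_General_CI_AS
            THEN 1 ELSE 0 END AS CaseInsensitive,  -- 1: case-insensitive match
       CASE WHEN 'SQL' = 'sql' COLLATE Latin1_General_CS_AS
            THEN 1 ELSE 0 END AS CaseSensitive;    -- 0: case-sensitive mismatch
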

What is OLTP?

OLTP stands for Online Transaction Processing. OLTP systems' relational database designs use the
discipline of data modeling and generally follow the Codd rules of data normalization in order to
ensure absolute data integrity. Using these rules, complex information is broken down into its
simplest structures, where all of the individual atomic-level elements relate to each other and
satisfy the normalization rules.

What is the difference between Global and Local temp tables?


In SQL Server, a global temporary table (its name is prefixed with ##) is visible to all sessions. It is
dropped automatically when the session that created it ends and all other sessions have stopped
referencing it.

A local temporary table (its name is prefixed with a single #) is visible only to the connection that
created it, and exists only for the duration of that connection (or, if created inside a stored
procedure, for the duration of the procedure).
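
A sketch of the naming convention:

CREATE TABLE #LocalTemp   (ID int);  -- one #: visible only to the current session
CREATE TABLE ##GlobalTemp (ID int);  -- two ##: visible to all sessions until the
                                     -- creating session ends and references cease
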

What is SSRS?
SQL Server 2005 Reporting Services is a server-based reporting platform that you can use to create
and manage tabular, matrix, graphical, and free-form reports that contain data from relational and
multidimensional data sources. The reports that you create can be viewed and managed over a World
Wide Web-based connection. Reporting Services includes the following core components:

 A complete set of tools that you can use to create, manage, and view reports.
 A Report Server component that hosts and processes reports in a variety of formats. Output
formats include HTML, PDF, TIFF, Excel, CSV, and more.
 An API that allows developers to integrate or extend data and report processing in custom
applications, or create custom tools to build and manage reports.

The reports that you build can be based on relational or multidimensional data from SQL Server,
Analysis Services, Oracle, or any Microsoft .NET data provider such as ODBC or OLE DB. You can
create tabular, matrix, and free-form reports. You can also create ad hoc reports that use predefined
models and data sources.

Visually and functionally, the reports that you build in Reporting Services surpass traditional reporting
by including interactive and Web-based features. Some examples of these features include drill-down
reports that enable navigation through layers of data, parameterized reports that support content
filtering at run time, free-form reports that support content in vertical, nested, and side-by-side
layouts, links to Web-based content or resources, and secure, centralized access to reports over
remote or local Web connections.

Although Reporting Services integrates with other Microsoft technologies out-of-the-box, developers
and third-party vendors can build components to support additional report output formats, delivery
formats, authentication models, and data source types. The development and run-time architecture
was purposely created in a modular design to support third-party extension and integration
opportunities.

SSRS competes with Crystal Reports and other business intelligence tools, and is included in Express,
Workgroup, Standard, and Enterprise editions of Microsoft SQL Server as an install option. Reporting
Services was first released in 2004 as an add-on to SQL Server 2000. The second version was
released as a part of SQL Server 2005 in November 2005. The latest version was released as part of
SQL Server 2008 in August 2008.

Users can interact with the Report Server web service directly, or instead use Report Manager, a web-
based application that interfaces with the Report Server web service. With Report Manager, users can
view, subscribe to, and manage reports as well as manage and maintain data sources and security
settings. Reports can be delivered via e-mail or placed on a file system. Security is role-based and can
be assigned on an individual item, such as a report or data source, a folder of items, or site wide.
Security roles and rights are inherited and can be overloaded.

In addition to using the standalone Report Server that comes with SQL Server, RDL reports can also
be viewed using the ASP.NET ReportViewer web control or the ReportViewer Windows Forms control.
This allows reports to be embedded directly into web pages or .NET Windows applications. The
ReportViewer control processes reports in one of two ways: (a) server processing, where the report is
rendered by and obtained from the Report Server; and (b) local processing, where the control renders
the RDL file itself.

SQL Server Reporting Services also supports ad hoc reports: the designer develops a report schema
and deploys it on the reporting server, where the user can choose relevant fields/data and generate
reports. Users can then download the reports locally.

Advantages of SSRS

 'Direct' and efficient reporting access to information residing in both Oracle and MS SQL
Server databases.
 Faster (and therefore cheaper) production of reports on both relational and cube data.
 An easy to deploy centralised reporting infrastructure based on Microsoft Reporting Services.
 Faster delivery of information to the business, providing better decision support.
 Ability for the business to self-serve, edit and interact with information without having to rely
on IT or IS resources.
 Simple pricing model tailored for both entry and enterprise level installations, allowing for
inexpensive provision of Business Intelligence for the Masses and democratisation of
information.
 No need for expensive specialist skills.
 The beauty is that the entire report and data source definition is stored as a simple XML file.
This is the file the reporting engine uses to render reports. The elements and attributes
required for defining a report format are fully documented. Further, you can even add your
custom elements if you want to enrich available functionality. Most report writers available
today never provided this functionality.
 XML based report definition allows you to directly design reports programmatically and render
them. This was very difficult to achieve in currently available report writers.
 The default report designer is integrated with Visual Studio .NET so that you can create
application and its reports in the same environment.
 The report designer eliminates the traditional bands very effectively. It provides three types of
elements—Table, Matrix and List. Table is equivalent to the traditional report with header,
footer, detail and groups. You can have multiple tables rendering different data arranged side
by side!
 For each type of reporting element, you have to attach a dataset to it. Dataset is based upon
data source.
 The matrix is like a pivot table. It has rows, columns and cells containing computed data. Very
useful and very easy. I am sure all of you remember how much we have to struggle today to
create a simple cross-tab report. Write complex queries, struggle with table formatting and so
on. With this new tool, just drag the matrix on the report, specify row, column and data fields
and that’s it.
 The list is free form data. It can be descriptive and cannot be represented as a structured
table, somewhat like a data repeater. Whatever data elements you put in the list are repeated
for each row in the base dataset. This allows you to create free form layout reports that still
repeat for each data item.
 The report items can be recursive. For example, one list can contain another list. What’s more
one report can be defined as a sub-report of the base report. This provides more than just drill
down. The subreport also need not be hard coded. Parameters can be passed online to it
based upon the area of base report clicked.
 Now, about rendering. This is the most sophisticated part. By default rendering is in HTML. But
while you are viewing the report, you can simply click on the toolbar and render it in many
different ways.
 The most important part is that all the reports are stored on the central SQL Server database.
Usually, we have reports for each application stored and managed separately. This leads to a
lot of confusion and administrative headaches.
 The reports are viewed and administered by using a Web-based implementation of the entire
reporting engine. The default website provides a base structure which is folder based.
Typically you will have folders created for each application or user functionality.
 How do you access reports usually? Normally by instancing the report writer runtime. Here you
don't have to do that, because all reports are accessible in one of two ways:

- By specifying the URL identifying the report on the reportserver or


- By calling the Web service.

 The best part of the reporting server is that the entire functionality is exposed as a single Web
service! You can create, manage, and view reports using various methods provided by the web
service.
 The security is managed in a role-based manner and can be applied to folders as well as
reports.
 User can manage their own reporting needs by accessing reports ad-hoc or by subscribing to
the reports. Subscription based reports are automatically sent by mail to the users.
 All reports require parameters to be accepted from users. Here once parameters are defined,
the UI for these parameters is automatically generated.
 Finally, you have many ways of rendering the reports:
 HTML (MHTML)
 Excel
 Acrobat
 Tiff (image)
 XML
 CSV
 And of course, you can write your own rendering mechanism, if you like!
 Depending upon the type of output you want, some features are disabled. For example, HTML
reports do not support pagination, whereas XML and CSV do not support drilldown. This is
primarily due to the nature of the output format and not due to inadequate rendering
capabilities of the reporting services.

Minimum Requirements

Processor: Intel Pentium II or compatible, 500-megahertz (MHz) or higher processor.

Operating System: SQL Server 2000 Reporting Services can run on the following operating systems:

•Windows Server 2003, Standard Edition
•Windows Server 2003, Enterprise Edition
•Windows Server 2003, Datacenter Edition
•Windows 2000 Server with Service Pack 4 (SP4) or later
•Windows 2000 Advanced Server with SP4 or later
•Windows 2000 Datacenter Server with SP4 or later
•Windows XP Professional with Service Pack 1 (SP1) or later
•Windows 2000 Professional with SP4 or later

Database:

•SQL Server 2000 Standard Edition with SP3 or later
•SQL Server 2000 Enterprise Edition with Service Pack 3 (SP3) or later
•SQL Server 2000 Developer Edition with SP3 or later

Memory: 256 megabytes (MB) of RAM; 512 MB or more recommended.

Hard Disk:

•50 MB of available hard disk space for Report Server
•100 MB of available hard disk space for Microsoft .NET Framework 1.1
•30 MB of available hard disk space for Report Designer
•145 MB of available hard disk space for samples and Books Online

SQL Server Reporting Services (SSRS) Architecture


SSRS is a comprehensive and extensible reporting platform, which includes an integrated
set of processing components, programmatic interfaces, and tools. Processing components
are the basis for the multilayered architecture of SSRS and interact with each other to
retrieve data, process layout, render, and deliver a report to a target destination. SSRS
supports two categories of components:

Processors: Ensure the integrity of SSRS and supply an infrastructure that enables developers to
add new functionality (extensions). Processors themselves are not extendable in this release of
SSRS.

Extensions: Assemblies that are invoked by processors and perform specific processing
functionality, such as data retrieval. Developers can write custom extensions.

The Reporting Services architecture diagram is depicted in Figure 3.1; the components are
described in more detail later in this chapter. Arrows on the diagram show the data flow between
components "within" and "outside" of SSRS (external tools, applications, services, and utilities).

[Figure 3.1: Reporting Services architecture]

Reporting Services Components Summary

Programmatic interfaces: Provide access to SSRS functionality through SOAP and HTTP requests.

Report Processor: Facilitates a set of report generation operations from data retrieval to rendering.
The Report Processor invokes other components, such as data extensions, to assist with report
generation.

Data-processing extensions: Retrieve a report's data from a data source. Developers can develop
additional custom data-processing extensions.

Command-line utilities: Three utilities designed to assist with scripting of administrative tasks,
installed automatically during the Reporting Services install.

Rendering extensions: Transform the report's intermediate format (a combination of the report's
layout and data) into a device-specific format, such as HTML. Developers can create new rendering
extensions.

Report Server database: Stores report definitions, report metadata, report history, cached reports,
snapshots, resources, security settings, encrypted data, scheduling and delivery data, and more.

Report Server temporary database: Stores intermediate processing products, cache, and data.

Scheduling and Delivery Processor: Monitors for events (such as a timed subscription) and
collaborates with the Report Processor to render a report, and with delivery extensions to deliver
scheduled reports to a location specified in the subscription.

Delivery extensions: Deliver reports to specific devices, such as email or a file system. Developers
can create new delivery extensions.

Report Manager: Provides web-based report access and management capabilities. The default URL
that invokes Report Manager is http://<server>/reports.

Report Model Designer: Generates report models for use in Report Builder.

Report Builder: Provides drag-and-drop, easy-to-use report design functionality. Report Builder is
an ad hoc end-user report authoring and publishing tool executed on a client computer.

Report Designer: Allows developers to develop complex reports. Report Designer is a
comprehensive report authoring and publishing tool, hosted in Business Intelligence Development
Studio or Visual Studio.

Security extensions: Enable authentication and authorization of users and groups. Developers can
(excluding SQL Server Express edition) create new security extensions.

SQL Server Management Studio: Provides administrators with a Windows forms-based, integrated
environment to manage SQL Server components, including SSRS. From the report management
perspective, Management Studio has similar functionality to Report Manager, but provides
additional capabilities, such as consolidated web-farm management.

Reporting Services Configuration Tool: Provides administrators with functionality to start and stop
the Report Server Windows service and reconfigure report servers. This is a Windows Forms
application.

WMI provider: Provides a set of WMI interfaces to manage settings of a Report Server and assists
with SSRS instance discovery on a network.

Performance monitoring objects: Provide a view of SSRS Windows and web service performance.
The Reporting Services Architecture(briefly)

For a relatively new product, Reporting Services has a quite complex architecture. The full
Reporting Services architecture includes development tools, administration tools, and report
viewers. There are a number of ways to get to Reporting Services programmatically,
including SOAP and WMI interfaces.
Creating a SSRS Project

To create a report in Reporting Services, you must first create a report server project in
SQL Server Business Intelligence Development Studio where you will save your report
definition (.rdl) file and any other resource files that you need for your report.

To create a report server project using Business Intelligence Development Studio

 Open Business Intelligence Development Studio. Click Start, point to All Programs,
point to Microsoft SQL Server 2005 and select SQL Server Business Intelligence
Development Studio.
 On the File menu, point to New, and then click Project.
 In the Project Types list, click Business Intelligence Projects.
 In the Templates list, click Report Server Project. 
 In Name, type a name for the project.
 In Location, type a location for the project, or click the Browse button and select a
location.
 In Solution, type a name for the solution, or click the Browse button and select a
solution.
If a solution is currently open, select Add to Solution or Create new Solution. Add to Solution adds
the report server project to the current solution; Create new Solution closes the current solution
and creates a new one for the current project.
 Click OK to create the project.

Creating a shared data source


Before we start the report, we are going to build a shared data source i.e. a data source that is
common to, and can be used by, all of the reports on a reporting server. While each report can
contain its own connection information, it is good practice to use shared data sources, as this will
save you a lot of time and headaches. This way, you only have to set the connection information once.

For example, let's say your company has Development, Test and Production environments. If you put
the connection information into each report, you will have to change it each time when you publish to
Development, Test and Production. If you use a shared data source, you still have to set up a data
source object for each of three environments, but you can simply publish the report to each
environment, and they will automatically use the connection information associated with that
environment.

From the menu select Project > Add New Item > Select Data Source. On the General tab, provide a
name for the data source, "DataSource1". Leave the Type as Microsoft SQL Server. Click the Edit
button on the right, and enter the connection information for Reporting Demo, as before. Click OK
on the Shared Data Source screen and the data source is done.

Add a new report


From the menu select Project > Add New Item. Select Report and name it "FirstReportMan.rdl".

A new report will be added to the project, and the Report Designer will open at the Data tab. At this
point, let's take a closer look at the Report Designer tool.

At the top of the Report Designer window are three tabs: Data, Layout and Preview.

The Data tab is used to build data sources for your report.

The Layout tab is the physical report designer where you set up the header, the footer and the data
presentation of the report.

The Preview tab allows you to actually run the report from Visual Studio 2005, without having to
publish it to a report server first. If your report takes parameters, the Preview tab will ask you to fill
them out before it runs the report.

Building Reports

After adding new report to the project


The first thing we need to do is get data into our report. At the top of the Data tab, choose <new
dataset…> from the dropdown list. This opens a new window for adding a new dataset. Name the
dataset "DataSet1", and select the data source from the dropdown. Select StoredProcedure as the
command type, and enter the stored procedure name in the query string box. Click OK to close the
dialog. Test the dataset by clicking on the big red exclamation point at the top of the report
designer. This executes the query and displays the results. You can repeat these steps if you wish
to add multiple datasets to your report.

Setting up the report display


Next, switch to the Layout tab. This is where you actually build the report display. Start by dragging a
table from the Toolbox, onto the Body section of the report. By default, the table shows a header row
at the top, a detail row in the middle and a footer row at the bottom. You can add additional columns
to the table by right-clicking on one of the columns and selecting one of the two Insert Column
options.

Click on the Datasets tab under Toolbox on the left-hand side of the Visual Studio environment. You
should see the DataSet1 dataset. Expand it, select Region and drag it to the first cell of the body
row of the table. This will display the Region field in the first column. Now put the State in the
second column, the Gender in the third, and so on. Note that when you drag a column like
FirstName into the body row of the table, SSRS makes a guess as to what to call the column; it
calls it "First Name". You can click on the Preview tab to view the report.

Grouping
In a data region, you can group data by a single field, or create more complex expressions that
identify the data on which to group. Complex expressions include references to multiple fields or
parameters, conditional statements, or custom code.

Grouping Data in SSRS

1. Right-click on the table and go to Properties. This opens a new window.

2. In the Table Properties window, switch to the Groups tab, and then click the Add button.

3. You will find another window for Grouping and Sorting properties.

4. Under the "Group on" section, select an expression on which you want to group the data.

If you want to group data within another group, you can select a group expression for the parent group.

Adding dynamic sorting


It is incredibly easy to add sorting to the table. Open any report which contains the table. Navigate to
the Layout tab of the report designer. Click in the cell labeled First Name in the header of the table.
Right-click and select Properties. Switch to the Interactive Sort tab. Check the checkbox for 'Add
an interactive sort action to this textbox'. Next, set the Sort expression to the value that is displayed
in the column – in this case =Fields!FirstName.Value. Now repeat this process for the rest of the
report. 

Preview the report, and you should have interactive sorting on each column.
IIF() Function
The IIF() statement has the following format:

=IIF( expression to evaluate,
      what-to-return when the expression is true,
      what-to-return when the expression is false )

Parameter 1: must be a boolean expression.
Parameter 2: this value is returned when the expression is true.
Parameter 3: this value is returned when the expression is false.

IIF() works like an IF/ELSE construct.

Built-in Functions of SSRS


In Reporting Services, expressions are used to provide dynamic flexibility for controlling the content
and appearance of a report. Some expressions are created for you automatically. For example, when
you create a new report and drag fields from datasets onto report items, the values of text boxes are
automatically set to expressions that refer to the dataset fields by name. During report processing,
each expression evaluates to a single value that replaces the expression when a report is rendered.
Expressions are also used throughout the report definition to specify or calculate values for report item
properties, text box properties, parameters, queries, bookmarks, document maps, filters, and group
and sort definitions.

You create expressions on report items by entering text in text boxes, in properties, or through a
dialog box. An expression is written in Microsoft Visual Basic. An expression begins with an equal sign
(=) and consists of references to constants, operators, functions, and built-in report global collections
including fields and parameters. You can also create references to .NET Framework classes, custom
code, and assemblies external to report processing.

Examples

Aggregate(field expr [,scope])

Returns an array containing the values of the grouped field. For example:

=Code.AggrToString(Aggregate(Fields!Year.Value))

' In the Code element. The original sample is truncated after "If n <>";
' the separator handling below is the assumed completion of that line.
Public Function AggrToString(o As Object) As String
    Dim ar As System.Collections.ArrayList = o
    Dim sb As System.Text.StringBuilder = New System.Text.StringBuilder
    Dim n As Integer
    For n = 0 To ar.Count - 1
        sb.Append(ar(n))
        If n <> ar.Count - 1 Then sb.Append(", ")
    Next
    Return sb.ToString()
End Function

Asc(string)
Converts the first letter in the passed string to ANSI code.

Avg(field expr [,scope])


Returns the average value of the grouped field. Returns decimal if the argument type is decimal,
otherwise double.
CBool(object)
Converts the passed argument to Boolean.

CByte(string)
Converts the passed argument to Byte.

CCur(string)
Converts the argument to type Currency (really Decimal).

Choose(number, expr1, expr2, ... exprn)


Evaluates the number and returns the result of the corresponding exprn. For example, if number
evaluates to 3, then expr3 is returned.

CDate(string)
Converts a string to type DateTime

CDbl(object)
Converts the passed parameter to double.

Chr(int)
Converts the specified ANSI code to a character.

CInt(object)
Converts the argument to integer.

CLng(object)
Converts the argument to long.

Count(field expr [,scope])


Returns the number of values in the grouped field. Null values don't count.

Countrows([scope])
Returns the number of rows in the group.

Countdistinct(field expr [,scope])


Returns the number distinct values in the grouped field. Null values don't count.

CSng(object)
Converts the argument to Single.

CStr(object)
Converts the argument to String.

Day(datetime)
Returns the integer day of month given a date.

First(field expr [,scope])


Returns the first value in the group.

Format(string1 [,string2])
Format string1 using the format string2. Some valid formats include '#,##0', '$#,##0.00',
'MM/dd/yyyy', 'yyy-MM-dd HH:mm:ss'... string2 is a .NET Framework formatting string.

Hex(number)
Returns the hexadecimal value of a passed number.

Hour(datetime)
Returns the integer hour given a date/time variable.

Iif(bool-expr, expr2, expr3)


The Iif function evaluates bool-expr and when true returns the result of expr2 otherwise the result of
expr3. expr2 and expr3 must be the same data type.

InStr([ioffset,] string1, string2 [,icase])


1 based offset of string2 in string1. You can optionally pass an integer offset as the first argument.
You can also optionally pass a 1 as the last argument if you want the search to be case insensitive.

InStrRev(string1, string2[,offset[,case]])
1 based offset of string2 (second argument) in string1 (first argument) starting from the end of
string1. You can optionally pass an integer offset as the third argument. You can also optionally pass a
1 as the fourth argument if you want the search to be case insensitive.

Last(field expr [,scope])


Returns the last value in the group.

LCase(string)
Returns the lower case of the passed string.

Left(string, n)
Returns the left n characters from the string.

Len(string)
Returns the length of the string.

LTrim(string)
Removes leading blanks from the passed string.

Max(field expr [,scope])


Returns the maximum value in the group.

Mid
Returns the portion of the string (arg 1) denoted by the start (arg 2) and length (arg 3).

Min(field expr [,scope])


Returns the minimum value in the group.

Minute(datetime)
Returns the integer minute given a date/time variable.

Month(datetime)
Returns the integer month given a date.
MonthName(datetime [,abbr])
Gets the month name given a date. If the optional second argument is 'True', then the abbreviated
month name is returned.

Next(field expr [,scope])


Returns the value of the next row in the group.

Oct(number)
Returns the octal value of a specified number.

Previous(field expr [,scope])


Returns the value of the previous row in the group.

Replace
Returns a string replacing 'count' instances of the searched for text (optionally case insensitive)
starting at position start with the replace text. The function form is
Replace(string,find,replacewith[,start[,count[,compare]]]).

Right(string, number)
Returns a string of the rightmost characters of a string.

Rownumber()
Returns the row number.

RTrim(string)
Removes trailing blanks from string.

Runningvalue(field expr, string1 [,scope])


Returns the current running value of the specified aggregate function. string1 is an expression
returning one of the following aggregate function: "sum", "avg", "count", "max", "min", "stdev",
"stdevp", "var", "varp".

Second(datetime)
Returns the integer second given a date/time variable.

Space(number)
Returns a string containing the number of spaces requested.

Stdev(field expr [,scope])


Returns the standard deviation of the group.

Stdevp(field expr [,scope])


Returns the standard deviation of the group. Use stdevp instead of stdev when the group contains the
entire population of values.

StrComp(string1, string2 [,compare])

Compares the strings, optionally with case insensitivity. Returns -1 when string1 < string2, 0 when
string1 = string2, and 1 when string1 > string2.

String(number, char)
Return string with the character repeated for the length.

StrReverse(string)
Returns a string with the characters reversed.

Sum(field expr [,scope])


Returns the total of the group.

Switch(bool-expr, result1 [, bool-expr-n, result-n])

The arguments are pairs of expressions. When a bool-expr is true, its paired result is returned. Each
bool-expr-n is evaluated until one results in true, and then the corresponding result-n expression is
returned.

Today()
Return the current date/time on the computer running the report.

Trim(string)
Removes whitespace from beginning and end of string.

UCase(string)
Returns the uppercase version of the string.

Var(field expr [,scope])


Returns the variance of the group.

Varp(field expr [,scope])


Returns the variance of the group. Use varp instead of var when the group contains the entire
population of values.

Year(datetime)
Obtains the year from the passed date.

Weekday(datetime)
Returns the integer day of week given a date: 1=Sunday, 2=Monday, ..., 7=Saturday.

WeekdayName(iday [,abbr])
Returns the name of the day of week given the integer Weekday. The optional second argument will
return the abbreviated day of week if 'True'.

Custom functions

There are about 100 common functions in SSRS 2005, and they can handle most of what you will
need to do. Occasionally, however, you will still need to set an expression with more control than the
common functions can manage. So, to make SSRS even more flexible, you can write custom VB.NET
or C# functions and use them in expressions.

Open the report and navigate to the Layout tab. From the Report menu select Report Properties
and then jump to the Code tab.
We are going to write a custom function that returns a different color, depending on the value that is
passed to it. We will then use that function to set the background color for the status field in the
report detail table.
Copy the following code into the code window:
Public Function GetColor(ByVal status As String) As String
    If status = "1" Then
        Return "White"
    End If
    If status = "2" Then
        Return "Yellow"
    End If
    If status = "3" Then
        Return "Tomato"
    End If
End Function

Click OK and close the window.

Now that we have a function that returns color names we need to wire up that function to an
expression.
Click on the Customer Status cell and open the Properties window. Find the Background Color
property and choose Expression from the dropdown. Add the following line of code to create an
expression:
=code.GetColor(Fields!ChStatus.Value)
When the report runs and this expression is resolved, SSRS will call your GetColor function and pass
the value of the customer status for that instance of the row to the function. The function takes over
and decides which color should be returned. SSRS then uses that value as the value for the
background property for that cell.
Please note that custom functions must be called using =code.<myfunction>.
Now navigate to the Preview tab and run the report.

Subreports
A subreport is a report that is embedded into another report. Subreports can take parameters and
execute their own datasets. A subreport in SSRS is actually just another report; in fact, in SSRS you
can execute a subreport on its own.

To add a subreport to a report, you simply drag a subreport control onto the report and tell it which
report to display. If the subreport requires parameters, you have to tell the main report which value to
pass to the subreport.

Let's add a new report to the project and call it MainReport.rdl. Create a new dataset using the
shared datasource and the query:

SELECT EmployeeID, FirstName, LastName
FROM tblEmployee

Switch to the Layout tab. Drag a table on the report detail area. Set up the first column to display the
Employee's first name (by dragging that column from the Datasets tab into the Detail row) and set up
the second column to display the Employee's last name. Preview the report.

Create another report, and call this one SubReport.rdl. This time, create a dataset that uses the
shared data source, and use the following query text:

SELECT Address1, Address2, City, [State], ZIP

FROM tblEmployee

WHERE EmployeeID = @EmployeeID

In the Layout tab, use text boxes to create the address layout, as shown in the figure. You can
simply drag the text boxes onto the screen by clicking on the field in the Datasets tab and dragging
it onto the design surface. You will also note that when you do this, the expression used to set the
value property for the textbox uses the First() function. This function will use the value from the
first row returned by the dataset. All other rows returned are ignored.


 Drag another Textbox report item from the Toolbox window to the page footer area. Position
the text box in the bottom-left corner of the footer, under the UserId box.
 Configure the properties of the text box, as shown in the following table.

Property    Value
Name        txtPage
Width       3 in

 Type page: in the txtPage box. (Important: Be sure to type a colon and a space following the
label text.)
 Drag the Page Number field after the space following the colon. Type of (with a space before
and after the text) after the Page Number field, and then drag the Total Pages field to the end
of the expression.
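
After these steps, the txtPage text box holds an inline expression equivalent to the following sketch
(Globals!PageNumber and Globals!TotalPages are the built-in page-numbering globals, which are
valid only in page headers and footers):

="page: " & Globals!PageNumber & " of " & Globals!TotalPages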

Global variables

Let's go to the expression editor. Click on Globals in the first column of the expression editor. In the
second column, you will see the list of the global variables that are provided to the report.

In this example, we are going to use the global variables to create a page-numbering scheme for our
report. First, add a footer to the report. You enable this by going to the Layout tab and selecting the
Report Footer option from the Report main menu. Now that you have a footer, drag four text boxes
into the footer section. Set their font properties to 8 pt, bold. Set the text for each text box as
follows:

Page

=Globals!PageNumber

of

=Globals!TotalPages

Switch to the Preview tab and run the report. You will now have a footer on each page that looks
something like 'Page 1 of 3'.
Some other global variables are:

 ExecutionTime – the date and time at which the report execution began
 ReportServerUrl – this allows you to verify that you are getting your report from the correct
source
 UserID – this shows under which User ID the report was run

You can use these global variables to build some basic diagnostics into your reporting. You could also
add these variables to the report footer and use a parameter to hide or show them. That way, even in
production, your users can see a tidy, clean report, but you can see the same report with all the extra
information on it.
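
As a sketch of that idea, add a Boolean report parameter (the name ShowDiagnostics is an
assumption for illustration) and set the Hidden property of each diagnostic text box to the expression
below; the boxes then appear only when the parameter is set to True:

=Not Parameters!ShowDiagnostics.Value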

Adding drill downs

The drill down functionality in SSRS allows you to have areas of your report that can expand and
collapse, much like a tree view.

Create a new report with a simple table.

Navigate to the Layout tab and drag a new table control onto the page.

The first thing we need to do is to add a group to the table.

Click the table to highlight it, then right-click on the little box with three horizontal lines in it that
appears at the left side of the table, at the beginning of the detail row.

Select Insert Group. When the Group Dialog appears use =Fields!Region.Value as the expression, so that
the report now groups on Region. Now drag the Region field into the first column on the Group row.
Next, drag the State, Gender, Category and Population fields into the detail row. Add some formatting,
and preview the report.

We've now performed a grouping, but we still have to enable the drilldown.

Click on the table to highlight it. Right-click on the nub that appears, just to the left of the detail row.
Select Properties. When the Properties window opens, expand the visibility section. Set the Hidden
property to True and set the Toggle Item property to State. The Hidden property determines the
state of the row when the report is first run. If we set it to True then the data is collapsed and hidden.

By setting the Toggle Item property to State, a little + sign appears next to the State field when the
report runs, and the group expands and collapses like a tree view.

Deploying Reports
You can run your reports in the Visual Studio 2005 environment but, to make them useful, you must
publish them to a report server. The easiest way to do this is to have Visual Studio publish your
reports.

Right-click on Reports Project in the Solution Explorer and choose Properties.


This opens the server settings dialog box.

Set TargetServerURL to the URL of your report server. Use the TargetReportFolder property to set
up a folder for your reports. If the folder does not exist, the Publisher will create it for you.

The other key thing to pay attention to here is the OverwriteDataSources property. When this
property is set to "True", it will automatically copy over all your data source objects when you publish.
When set to "False", it will copy any new data source objects, but it will not overwrite existing ones.
This is important when dealing with development, test and production servers. You can publish a set
of data source objects to each server, pointed at the correct database, and never have to worry about
what database your reports are hitting in each environment.
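
As a concrete sketch, the deployment properties for a development server might look like this (the
server URL and folder name are assumptions for illustration):

TargetServerURL:       http://localhost/ReportServer
TargetReportFolder:    TestReports
OverwriteDataSources:  False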

From the Build menu select Deploy Solution. This will publish the project to the selected folder on
your report server. You can also deploy individual reports by right-clicking on the file in the Solution
explorer and selecting Deploy.

Once your report has been published, you can access and run it on your server through the browser at
http://servername/reportservername. From the Home page, you should be able to find the
folder you published to, with the reports in it. Select a report to run. At the top of the page you can
enter any values for report parameters, and then run the report. From here, you can also print or
export the report.
Local and Server Reports

Calling Reports Through .NET from Reporting Server:

A server report cannot be loaded directly into a web page. Instead, you display the report in the page
through a ReportViewer control.

The ReportViewer control is available in the ToolBox under the Data Controls tab.

<rsweb:ReportViewer ID="ReportViewer1" runat="server" ProcessingMode="Remote">

<ServerReport ReportServerUrl="http://ServerName/ReportServer"
ReportPath="/ReportsFolder/ReportName" />

</rsweb:ReportViewer>

The ProcessingMode property of the ReportViewer indicates whether the report is a server report or a local report.

To load a report from the Reporting Server, set the ReportViewer's ProcessingMode property to
Remote (ProcessingMode="Remote").

To load a report from the local machine, set the ProcessingMode property to Local
(ProcessingMode="Local").

Requesting a Server Report in C#:

ReportViewer1.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote;

ReportViewer1.ServerReport.ReportServerUrl =new Uri("http://ServerName/ReportServer");

ReportViewer1.ServerReport.ReportPath = "/ReportsFolder/ReportName";

For Windows Forms applications, the processing mode is set as follows:

ReportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Remote;
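
For a local report in the web ReportViewer, the equivalent C# sketch looks like this (the file name
Report1.rdlc, the dataset name DataSet1, and the GetEmployees() helper that returns a DataTable
are assumptions for illustration):

ReportViewer1.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Local;
ReportViewer1.LocalReport.ReportPath = "Report1.rdlc";
// The data source name must match the dataset name defined inside the .rdlc file.
ReportViewer1.LocalReport.DataSources.Add(
    new Microsoft.Reporting.WebForms.ReportDataSource("DataSet1", GetEmployees()));
ReportViewer1.LocalReport.Refresh();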

How to Configure a Report Server for Local Administration on Windows 7, Windows Vista and
Windows Server 2008

Deploying Reporting Services on Windows Vista and Windows Server 2008 requires
additional configuration steps if you want to administer a report server instance locally. Both
Windows Vista and Windows Server 2008 limit the overuse of elevated permissions by
removing administrator permissions when accessing applications. Because the operating
system removes permissions, members of the local Administrators group run most
applications as if they were using a standard user account.

While this practice improves the overall security of your system, it prevents you from using
the predefined, built-in role assignments that Reporting Services creates for local
administrators. However, with additional configuration on your part, you can effectively
manage report server content and operations using standard user permissions if you do the
following:

Add Reporting Services URLs to trusted sites. By default, Internet Explorer 7.0 on Windows
Vista and Windows Server 2008 runs in Protected Mode, a feature that blocks browser
requests from reaching high-level processes that run on the same computer. You can
disable protected mode for the report server applications by adding them as Trusted Sites.

Create role assignments that grant you, the report server administrator, permission to
manage content and operations without having to use the Run as administrator feature on
Internet Explorer. By creating role assignments for your Windows user account, you gain
access to a report server with Content Manager and System Administrator permissions
through explicit role assignments that replace the predefined, built-in role assignments that
Reporting Services creates for local administrators.

To configure local report server administration on Windows Vista and Windows Server 2008

1. Open a browser window with Run as administrator permissions. From the Start menu, click
All Programs, right-click Internet Explorer, and select Run as administrator.

2. Click Allow to continue.

3. In the URL address, enter the Report Manager URL.

4. Click Tools.

5. Click Internet Options.

6. Click Security.
7. Click Trusted Sites.

8. Click Sites.

9. Add http://<your-server-name>.

10. Clear the check box Require server verification (https:) for all sites in this zone
if you are not using HTTPS for the default site.

11. Click Add.

12. Click OK.

13. In Report Manager, on the Home page, click Properties.

14. Click New Role Assignment.

15. Type your Windows user account in this format: <domain>\<user>.

16. Select Content Manager.

17. Click OK.

18. Click Site Settings in the upper corner of the Home page.

19. Click Configure Site-wide security.

20. Click New Role Assignment.

21. Type your Windows user account in this format: <domain>\<user>.

22. Select System Administrator.

23. Click OK.

24. Close Report Manager.

25. Re-open Report Manager in Internet Explorer, without using Run as administrator.

Text Functions

Asc – Returns an Integer value representing the character code corresponding to a character.
=Asc(Fields!Description.Value)

AscW – Returns an Integer value representing the character code corresponding to a character.
=AscW(Fields!Description.Value)

Chr – Returns the character associated with the specified character code.
=Chr(65)

ChrW – Returns the character associated with the specified character code.
=ChrW(241)

Filter – Returns a zero-based array containing a subset of a String array based on specified
filter criteria.
=Filter(Parameters!MultivalueParameter.Value, "3", True, CompareMethod.Binary)

Format – Returns a string formatted according to instructions contained in a format String expression.
=Format(Globals!ExecutionTime, "Long Date")

FormatCurrency – Returns an expression formatted as a currency value using the currency
symbol defined in the system control panel.
=FormatCurrency(Fields!YearlyIncome.Value,0)

FormatDateTime – Returns a string expression representing a date/time value.
=FormatDateTime(Fields!BirthDate.Value,DateFormat.ShortDate)

FormatNumber – Returns an expression formatted as a number.
=FormatNumber(Fields!Weight.Value,2)

FormatPercent – Returns an expression formatted as a percentage (that is, multiplied by 100)
with a trailing % character.
=FormatPercent(Fields!Sales.Value/Sum(Fields!Sales.Value, "DataSet1"),0)

GetChar – Returns a Char value representing the character from the specified index in the
supplied string.
=GetChar(Fields!Description.Value, 5)

InStr – Returns an integer specifying the start position of the first occurrence of one string
within another.
=InStr(Fields!Description.Value, "car")

InStrRev – Returns the position of the first occurrence of one string within another, starting
from the right side of the string.
=InStrRev(Fields!Description.Value, "car")

Join – Returns a string created by joining a number of substrings contained in an array.
=Join(Parameters!MultivalueParameter.Value,",")

LCase – Returns a string or character converted to lowercase.
=LCase(Fields!Description.Value)

Left – Returns a string containing a specified number of characters from the left side of a string.
=Left(Fields!Description.Value,4)

Len – Returns an integer containing either the number of characters in a string or the number
of bytes required to store a variable.
=Len(Fields!Description.Value)

LSet – Returns a left-aligned string containing the specified string adjusted to the specified length.
=LSet(Fields!Description.Value,4)

LTrim – Returns a string containing a copy of a specified string with no leading spaces (LTrim),
no trailing spaces (RTrim), or no leading or trailing spaces (Trim).
=LTrim(Fields!Description.Value)

Mid – Returns a string containing a specified number of characters from a string.
=Mid(Fields!Description.Value,3,4)

Replace – Returns a string in which a specified substring has been replaced with another
substring a specified number of times.
=Replace(Fields!Description.Value,"tube","headlight")

Right – Returns a string containing a specified number of characters from the right side of a string.
=Right(Fields!Description.Value,4)

RSet – Returns a right-aligned string containing the specified string adjusted to the specified length.
=RSet(Fields!Description.Value,4)

RTrim – Returns a string containing a copy of a specified string with no leading spaces (LTrim),
no trailing spaces (RTrim), or no leading or trailing spaces (Trim).
=RTrim(Fields!Description.Value)

Space – Returns a string consisting of the specified number of spaces.
=Space(3)

Split – Returns a zero-based, one-dimensional array containing a specified number of substrings.
=Split(Fields!ListWithCommas.Value,",")

StrComp – Returns -1, 0, or 1, based on the result of a string comparison.
=StrComp(Fields!Description.Value,First(Fields!Description.Value))

StrConv – Returns a string converted as specified.
=StrConv(Fields!Description.Value,vbProperCase)

StrDup – Returns a string or object consisting of the specified character repeated the
specified number of times.
=StrDup(3,"M")

StrReverse – Returns a string in which the character order of a specified string is reversed.
=StrReverse(Fields!Description.Value)

Trim – Returns a string containing a copy of a specified string with no leading spaces (LTrim),
no trailing spaces (RTrim), or no leading or trailing spaces (Trim).
=Trim(Fields!Description.Value)

UCase – Returns a string or character containing the specified string converted to uppercase.
=UCase(Fields!Description.Value)
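
These functions can be nested within a single expression. For example, the following sketch
(assuming a Description field as in the samples above) trims the value, takes its first three
characters, and uppercases them:

=UCase(Left(Trim(Fields!Description.Value), 3))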
Date & Time Functions

CDate – Convert to Date.
=CDate(Fields!BirthDate.Value)

DateAdd – Returns a Date value containing a date and time value to which a specified time
interval has been added.
=DateAdd("d",3,Fields!BirthDate.Value)
=DateAdd(DateInterval.Day,3,Fields!BirthDate.Value)

DateDiff – Returns a Long value specifying the number of time intervals between two Date values.
=DateDiff("yyyy",Fields!BirthDate.Value,"1/1/2007")
=DateDiff(DateInterval.Year,Fields!BirthDate.Value,"1/1/2007")

DatePart – Returns an Integer value containing the specified component of a given Date value.
=DatePart("q",Fields!BirthDate.Value,0,0)
=DatePart(DateInterval.Quarter,Fields!BirthDate.Value,FirstDayOfWeek.System,FirstWeekOfYear.System)

DateSerial – Returns a Date value representing a specified year, month, and day, with the
time information set to midnight (00:00:00).
=DateSerial(DatePart("yyyy",Fields!BirthDate.Value)-10,DatePart("m",Fields!BirthDate.Value)+3,DatePart("d",Fields!BirthDate.Value)-1)
=DateSerial(DatePart(DateInterval.Year,Fields!BirthDate.Value)-10,DatePart("m",Fields!BirthDate.Value)+3,DatePart("d",Fields!BirthDate.Value)-1)
=DateSerial(2007,10,24)

DateString – Returns or sets a String value representing the current date according to your system.
=DateString()
=DatePart("m",DateString())

DateValue – Returns a Date value containing the date information represented by a string,
with the time information set to midnight (00:00:00).
=DateValue("January 15, 2007")

Day – Returns an Integer value from 1 through 31 representing the day of the month.
=Day(Fields!BirthDate.Value)

FormatDateTime – Returns a string expression representing a date/time value.
=FormatDateTime(Fields!BirthDate.Value, DateFormat.ShortDate)

Hour – Returns an Integer value from 0 through 23 representing the hour of the day.
=Hour(Fields!BirthDate.Value)

Minute – Returns an Integer value from 0 through 59 representing the minute of the hour.
=Minute(Fields!BirthDate.Value)

Month – Returns an Integer value from 1 through 12 representing the month of the year.
=Month(Fields!BirthDate.Value)

MonthName – Returns a String value containing the name of the specified month.
=MonthName(10,True)
=MonthName(Month(Fields!BirthDate.Value),False)
="The month of your birthday is " & MonthName(Month(Fields!BirthDate.Value))

Now – Returns a Date value containing the current date and time according to your system.
=Now()
="This time tomorrow is " & DateAdd("d",1,Now())
="This time tomorrow is " & DateAdd(DateInterval.Day,1,Now())

Second – Returns an Integer value from 0 through 59 representing the second of the minute.
=Second(Fields!BirthDate.Value)

TimeOfDay – Returns or sets a Date value containing the current time of day according to your system.
=TimeOfDay()
="Time of the day is " & TimeOfDay()

Timer – Returns a Double value representing the number of seconds elapsed since midnight.
=Timer()
="Number of seconds since midnight " & Timer()

TimeSerial – Returns a Date value representing a specified hour, minute, and second, with
the date information set relative to January 1 of the year 1.
=TimeSerial(DatePart("h",Fields!BirthDate.Value),DatePart("n",Fields!BirthDate.Value),DatePart("s",Fields!BirthDate.Value))
=TimeSerial(DatePart(DateInterval.Hour,Fields!BirthDate.Value),DatePart(DateInterval.Minute,Fields!BirthDate.Value),DatePart(DateInterval.Second,Fields!BirthDate.Value))
=TimeSerial(23,49,52)

TimeString – Returns or sets a String value representing the current time of day according to your system.
=TimeString()

TimeValue – Returns a Date value containing the time information represented by a string,
with the date information set to January 1 of the year 1.
=TimeValue("16:20:17")
=TimeValue(Fields!BirthDate.Value)

Today – Returns or sets a Date value containing the current date according to your system.
=Today()
="Tomorrow is " & DateAdd("d",1,Today())
="Tomorrow is " & DateAdd(DateInterval.Day,1,Today())

Weekday – Returns an Integer value containing a number representing the day of the week.
=Weekday(Fields!BirthDate.Value,0)
=Weekday(Fields!BirthDate.Value,FirstDayOfWeek.System)

WeekDayName – Returns a String value containing the name of the specified weekday.
=WeekdayName(2,True,0)
=WeekDayName(DatePart("w",Fields!BirthDate.Value),True,0)
=WeekDayName(DatePart(DateInterval.Weekday,Fields!BirthDate.Value),True,FirstDayOfWeek.System)

Year – Returns an Integer value from 1 through 9999 representing the year.
=Year(Fields!BirthDate.Value)
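
As a small combined example (again assuming a BirthDate field), an approximate age in years can
be computed with:

=DateDiff(DateInterval.Year, Fields!BirthDate.Value, Today())

Because DateDiff counts calendar-year boundaries rather than full years elapsed, this value is one
too high for anyone who has not yet had a birthday this year.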

Integration Services Data Transformations

SQL Server Integration Services transformations are the components in the data flow of a
package that aggregate, merge, distribute, and modify data. Transformations can also
perform lookup operations and generate sample datasets. This section describes the
transformations that Integration Services includes and explains how they work.

Business Intelligence Transformations:

The following transformations perform business intelligence operations such as cleaning
data, mining text, and running data mining prediction queries.

Slowly Changing Dimension Transformation


The transformation that configures the updating of a slowly changing dimension. The Slowly
Changing Dimension transformation coordinates the updating and inserting of records in
data warehouse dimension tables.

Important
The Slowly Changing Dimension Wizard only supports connections to SQL Server.

Fuzzy Grouping Transformation


The Fuzzy Grouping transformation performs data cleaning tasks by identifying rows of data
that are likely to be duplicates and selecting a canonical row of data to use in standardizing
the data.
Fuzzy Lookup Transformation
The transformation that looks up values in a reference table using a fuzzy match. The Fuzzy
Lookup transformation performs data cleaning tasks such as standardizing data, correcting
data, and providing missing values.

Term Extraction Transformation


The Term Extraction transformation extracts terms from text in a transformation input
column, and then writes the terms to a transformation output column. The transformation
works only with English text and it uses its own English dictionary and linguistic information
about English.

Term Lookup Transformation


The Term Lookup transformation matches terms extracted from text in a transformation
input column with terms in a reference table. It then counts the number of times a term in
the lookup table occurs in the input data set, and writes the count together with the term
from the reference table to columns in the transformation output. This transformation is
useful for creating a custom word list based on the input text, complete with word frequency
statistics.

Data Mining Query Transformation


The transformation that runs data mining prediction queries.

Row Transformations:

The following transformations update column values and create new columns. The
transformation is applied to each row in the transformation input.

Character Map Transformation


The Character Map transformation applies string functions, such as conversion from
lowercase to uppercase, to character data. This transformation operates only on column
data with a string data type.

You configure the Character Map transformation in the following ways:


         Specify the columns to convert.

         Specify the operations to apply to each column.

Copy Column Transformation


The transformation that adds copies of input columns to the transformation output. The
Copy Column transformation creates new columns by copying input columns and adding the
new columns to the transformation output.

Data Conversion Transformation


The transformation that converts the data type of a column to a different data type.

Derived Column Transformation


The Derived Column transformation creates new column values by applying expressions to
transformation input columns.

Export Column Transformation


The Export Column transformation reads data in a data flow and inserts the data into a file.
For example, if the data flow contains product information, such as a picture of each
product, you could use the Export Column transformation to save the images to files.

Import Column Transformation


The Import Column transformation reads data from files and adds the data to columns in a
data flow. Using this transformation, a package can add text and images stored in separate
files to a data flow.

Script Component
The Script component hosts script and enables a package to include and run custom script
code.

OLE DB Command Transformation


The OLE DB Command transformation runs an SQL statement for each row in a data flow.
For example, you can run an SQL statement that inserts, updates, or deletes rows in a
database table.
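
For example, the transformation could be configured with a parameterized statement like the sketch
below; OLE DB Command uses ? placeholders, which you then map to input columns in the
transformation editor (the table and column names are assumptions):

UPDATE tblEmployee SET LastName = ? WHERE EmployeeID = ?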

Rowset Transformations:

The following transformations create new rowsets. The rowset can include aggregate and
sorted values, sample rowsets, or pivoted and unpivoted rowsets.

Aggregate Transformation
The Aggregate transformation applies aggregate functions, such as Average, to column
values and copies the results to the transformation output. Besides aggregate functions, the
transformation provides the GROUP BY clause, which you can use to specify groups to
aggregate across.

Sort Transformation
The Sort transformation sorts input data in ascending or descending order and copies the
sorted data to the transformation output. You can apply multiple sorts to an input; each sort
is identified by a numeral that determines the sort order.

Percentage Sampling Transformation


The Percentage Sampling transformation creates a sample data set by selecting a
percentage of the transformation input rows. The sample data set is a random selection of
rows from the transformation input, to make the resultant sample representative of the
input.

Row Sampling Transformation


The Row Sampling transformation is used to obtain a randomly selected subset of an input
dataset. You can specify the exact size of the output sample, and specify a seed for the
random number generator.

Pivot Transformation
The Pivot transformation makes a normalized data set into a less normalized but more
compact version by pivoting the input data on a column value.

Unpivot Transformation
The Unpivot transformation makes an unnormalized dataset into a more normalized version
by expanding values from multiple columns in a single record into multiple records with the
same values in a single column.
Split and Join Transformations:

The following transformations distribute rows to different outputs, create copies of the
transformation inputs, join multiple inputs into one output, and perform lookup operations.

Conditional Split Transformation


The Conditional Split transformation can route data rows to different outputs depending on
the content of the data. The implementation of the Conditional Split transformation is
similar to a CASE decision structure in a programming language. The transformation
evaluates expressions, and based on the results, directs the data row to the specified
output. This transformation also provides a default output, so that if a row matches no
expression it is directed to the default output.
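
As an illustration, a Conditional Split might be configured with ordered cases like the following (the
column and output names are assumptions); the first expression a row satisfies determines its
output, and anything left over goes to the default output:

Output: LargeStates    Condition: [Population] >= 1000000
Output: SmallStates    Condition: [Population] < 1000000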

Multicast Transformation
The Multicast transformation distributes its input to one or more outputs. This
transformation is similar to the Conditional Split transformation. Both transformations direct
an input to multiple outputs. The difference between the two is that the Multicast
transformation directs every row to every output, and the Conditional Split directs a row to
a single output.

Union All Transformation


The Union All transformation combines multiple inputs into one output. For example, the
outputs from five different Flat File sources can be inputs to the Union All transformation
and combined into one output.

Merge Transformation

The Merge transformation combines two sorted datasets into a single dataset. The rows from
each dataset are inserted into the output based on values in their key columns.

Merge Join Transformation

The Merge Join transformation provides an output that is generated by joining two sorted
datasets using a FULL, LEFT, or INNER join.

Lookup Transformation

The Lookup transformation performs lookups by joining data in input columns with columns
in a reference dataset. You use the lookup to access additional information in a related table
that is based on values in common columns.

Cache Transform

The Cache Transform writes data from a connected data source in the data flow to a Cache
connection manager that saves the data to a cache file. The Lookup transformation can then
perform lookups on the data in the cache file.

Note: The Cache Transform writes only unique rows to the Cache connection manager.

Auditing Transformations:

Integration Services includes the following transformations to add audit information and
count rows.

Audit Transformation
The Audit transformation enables the data flow in a package to include data about the
environment in which the package runs.

Row Count Transformation


The Row Count transformation counts rows as they pass through a data flow and stores the
final count in a variable.

SSIS Tasks
Data Flow Task

 Data Flow Task: This task extracts data from a source, allows for transformations
of that data, and then loads the data into a target destination.

Data Preparation Tasks

 File System Task: This task allows the user to copy/move/delete files and
directories on a file system.
 FTP Task: This task allows the user to copy/move/delete files and directories over
FTP.
 Web Service Task: This task allows the user to execute a Web service method and
store the results.
 XML Task: This task is used to work with XML data. XSLT can be used along with
XPath to validate, compare and merge documents. The results of this can then be
stored.
 Data Profiling Task: This task can be used for checking and validating data quality.
Profiles can be set up and checked for various data quality issues, such as column
length issues, column patterns, column statistics, etc.

Workflow Tasks

 Execute Package Task: This task will run other SQL Server Integration Services
packages.
 Execute Process Task: This task will execute an application or batch file.
 Message Queue Task: This task allows you to send and receive messages between
SSIS packages, or to send messages to an application via an application queue. This
task uses Message Queuing (MSMQ).
 Send Mail Task: This task allows for email messages to be created and sent using
an SMTP server.
 WMI Data Reader Task: This task allows a package to query computer systems
(local and remote) for information about that computer, using WQL.
 WMI Event Watcher Task: This task watches for WMI events that have occurred
on a computer system, and allows the package to take an action if certain criteria
are met.

Scripting Tasks

 Script Task: This task can be used to program functions that are not available in
the standard SSIS tasks or transformations. In SSIS 2005 this task can be
programmed in VB .NET. In SSIS 2008 VB .NET and C# can be used to program a
Script Task.

SQL Server Maintenance Tasks

 Back Up Database Task: This task will allow you to back up one or more SQL
Server databases.
 Check Database Integrity Task: This task will allow you to check the integrity of
all the objects in one or many SQL Server databases.
 Execute SQL Server Agent Job Task: This task allows for the execution of a SQL
Server Agent job.
 Execute T-SQL Statement Task: This task is similar to the Execute SQL Task,
however it only supports Transact SQL Statements. It should be used for SQL Server
specific SQL statements.
 History Cleanup Task: This task allows for the cleanup of historical activity data. It
will clean up the history for database maintenance plans, backup activities, restore
activities and SQL Server Agent jobs.
 Maintenance Cleanup Task: This task allows for the cleanup of backup files, and
the reports of maintenance plans.
 Notify Operator Task: This task allows SSIS to notify SQL Server Agent operators.
They can be notified by email, pager, or net send.
 Rebuild Index Task: This task will rebuild an index or indexes on one or many
databases.
 Reorganize Index Task: This task will reorganize an index or indexes on one or
many databases.
 Shrink Database Task: This task will shrink the size of the SQL Server database
data and database log files.
 Update Statistics Task: This task will update the statistics for one or more tables
in one or more databases.

SQL Server Tasks

 Bulk Insert Task: This task offers an efficient way to copy large volumes of data.
 Execute SQL Task: This task allows the execution of a SQL statement. If the
statement returns results, they can be stored in a variable.
 Transfer Database Task: This task will copy or move a SQL Server database
between two instances of SQL Server. It can even be used to make a copy of a
database on the same server. Databases can be copied either online or offline.
 Transfer Error Messages Task: This task will transfer a single or multiple SQL
Server user-defined error messages between SQL Server instances. It can be set up
to transfer specific user messages or all error messages.
 Transfer Jobs Task: This task will transfer a single or multiple SQL Server Agent
jobs between SQL Server instances.
 Transfer Logins Task: This task will transfer a single or multiple SQL Server logins
between SQL Server instances.
 Transfer Master Stored Procedures Task: This task will transfer a single or
multiple SQL Server Master database stored procedures between SQL Server
instances.
 Transfer SQL Server Objects Task: This task will transfer a single or multiple SQL
Server database objects between SQL Server instances. Most of SQL Server's DDL
objects can be copied with this task.

Analysis Services Tasks

 Analysis Services Execute DDL Task: This task will run data definition language
statements on Analysis Services. This allows for creating, dropping, and altering
cubes, dimensions and mining models.
 Analysis Services Processing Task: This task will process Analysis Services
Cubes, Dimensions, and Mining Models.
 Data Mining Query Task: This task will run a DMX (Data Mining Extensions) query
that creates a prediction by running new data against an Analysis Services data
mining model.
Containers

 For Loop: Repeat a task a fixed number of times


 Foreach Loop: Repeat a task by enumerating over a group of objects
 Sequence: Group multiple tasks into a single unit for easier management

Working with Connection Managers

SSIS uses connection managers to integrate different data sources into packages. SSIS includes a
wide variety of different connection managers that allow you to move data around from place to place.
Table 1 lists the available connection managers.

Connection Manager    Handles

ADO             Connecting to ADO objects such as a Recordset.
ADO.NET         Connecting to data sources through an ADO.NET provider.
CACHE           Connecting to a cache, either in memory or in a file.
MSOLAP100       Connecting to an Analysis Services database or cube.
EXCEL           Connecting to an Excel worksheet.
FILE            Connecting to a file or folder.
FLATFILE        Connecting to delimited or fixed width flat files.
FTP             Connecting to an FTP data source.
HTTP            Connecting to an HTTP data source.
MSMQ            Connecting to a Microsoft Message Queue.
MULTIFILE       Connecting to a set of files, such as all text files on a particular hard drive.
MULTIFLATFILE   Connecting to a set of flat files.
ODBC            Connecting to an ODBC data source.
OLEDB           Connecting to an OLE DB data source.
SMOServer       Connecting to a server via SMO.
SMTP            Connecting to a Simple Mail Transfer Protocol server.
SQLMobile       Connecting to a SQL Server Mobile database.
WMI             Connecting to Windows Management Instrumentation data.


To create a Connection Manager, you right-click anywhere in the Connection Managers area of a
package in BIDS and choose the appropriate shortcut from the shortcut menu. Each Connection
Manager has its own custom configuration dialog box with specific options that you need to fill out.

To add some connection managers to your package, follow these steps:

1. Right-click in the Connection Managers area of your new package and select New OLE DB
Connection.
2. Click New to create a new data connection.
3. In the Connection Manager dialog box, select the SQL Native Client provider.
4. Select your test server and provide login information.
5. Select the Chapter16 database.
6. Click OK.
7. In the Configure OLE DB Connection Manager dialog box, click OK.
8. Right-click in the Connection Managers area of your new package and select New Flat File
Connection.
9. Enter DepartmentList as the Connection Manager Name.
10. Enter C:\Departments.txt as the File Name.
11. Check the Column Names in the First Data Row checkbox. Figure 1 shows the completed
General page of the dialog box.
12. Click the Advanced icon to move to the Advanced page of the dialog box
13. Click the New button.
14. Change the Name of the new column to DepartmentName.
15. Click OK.
16. Right-click the DepartmentList Connection Manager and select Copy.
17. Right-click in the Connection Managers area and select Paste.
18. Click on the new DepartmentList 1 connection to select it.
19. Use the Properties Window to change properties of the new connection. Change the Name
property to DepartmentListBackup. Change the ConnectionString property to
C:\DepartmentsBackup.txt.
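
For reference, the ConnectionString of a Flat File Connection Manager is just the file path, as in step
19 above, while an OLE DB Connection Manager stores a full provider string. A typical value for the
OLE DB connection created in steps 1 through 7 might look like the sketch below (the server name is
an assumption; SQLNCLI is the SQL Native Client provider selected in step 3):

Provider=SQLNCLI;Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Chapter16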

Building Data Flows

The Data Flow tab of the Package Designer is where you specify the details of any Data Flow tasks
that you've added on the Control Flow tab. Data Flows are made up of various objects that you drag
and drop from the Toolbox:

 Data Flow Sources are ways that data gets into the system. Table 1 lists the available data
flow sources.
 Data Flow Transformations let you alter and manipulate the data in various ways.
 Data Flow Destinations are the places that you can send the transformed data. Table 2 lists
the available data flow destinations.
Source      Use

ADO NET     Extracts data from a database using a .NET data provider
Excel       Extracts data from an Excel workbook
Flat File   Extracts data from a flat file
OLE DB      Extracts data from a database using an OLE DB provider
Raw File    Extracts data from a raw file (a proprietary Microsoft format)
XML         Extracts data from an XML file

Table 1: Data flow sources

Destination                  Use

ADO NET                      Sends data to a .NET data provider
Data Mining Model Training   Sends data to an Analysis Services data mining model
DataReader                   Sends data to an in-memory ADO.NET DataReader
Dimension Processing         Processes a cube dimension
Excel                        Sends data to an Excel worksheet
Flat File                    Sends data to a flat file
OLE DB                       Sends data to an OLE DB database
Partition Processing         Processes an Analysis Services partition
Raw File                     Sends data to a raw file
Recordset                    Sends data to an in-memory ADO Recordset
SQL Server Compact           Sends data to a SQL Server CE database
SQL Server                   Sends data to a SQL Server database

Table 2: Data flow destinations

Deploy SSIS Packages

To deploy SQL Server Integration Services packages, you use the Package Installation
Wizard. By using this wizard, you can deploy packages to one of two locations:

 To an instance of SQL Server.
 To the file system.

The procedures in this topic describe how to deploy packages to each of these
locations. Before you can run the Package Installation Wizard, you must copy the
deployment folder, which was created when you built a deployment utility, to the target
computer.

To deploy packages to an instance of SQL Server

1. Open the deployment folder on the target computer.

2. Double-click the manifest file, <project name>.SSISDeploymentManifest, to start the
Package Installation Wizard.

3. On the Deploy SSIS Packages page, select the SQL Server deployment option.

4. Optionally, select Validate packages after installation to validate packages after
they are installed on the target server.

5. On the Specify Target SQL Server page, specify the instance of SQL Server to
install the packages to and select an authentication mode. If you select SQL Server
Authentication, you must provide a user name and a password.

6. On the Select Installation Folder page, specify the folder in the file system for the
package dependencies that will be installed.

7. If the package includes configurations, you can edit configurations by updating
values in the Value list on the Configure Packages page.

8. If you elected to validate packages after installation, view the validation results of
the deployed packages.

To deploy packages to the file system

1. Open the deployment folder on the target computer.

2. Double-click the manifest file, <project name>.SSISDeploymentManifest, to start the
Package Installation Wizard.

3. On the Deploy SSIS Packages page, select the File system deployment option.

4. Optionally, select Validate packages after installation to validate the packages
after they are installed on the target server.

5. On the Select Installation Folder page, specify the folder in which to install
packages and package dependencies.

6. If the package includes configurations, you can edit updatable configurations by
updating values in the Value list on the Configure Packages page.

7. If you elected to validate packages after installation, view the validation results of
the deployed packages.
