What Is RDBMS (Relational Database Management System)?
RDBMS is a database management system (DBMS) that is based on the relational model.
Relationships may be created and maintained across and among the data and tables. In a relational
database, relationships between data items are expressed by means of tables, and interdependencies
among these tables are expressed by data values.
A DBMS has to be persistent: the data should remain accessible after the program that created it
ceases to exist, or even after the application that created it is restarted. A DBMS also has to provide
uniform methods, independent of any specific application, for accessing the stored information.
RDBMS stands for Relational Database Management System. It adds the condition that the system
supports a tabular structure for the data, with enforced relationships between the tables. This
excludes databases that do not support a tabular structure or do not enforce relationships between
tables.
What is a Trigger?
A SQL trigger is a SQL statement, or a set of SQL statements, that is stored so that it is activated or
fired when an event associated with a database table occurs. The event can be INSERT, UPDATE or
DELETE.
A trigger is sometimes referred to as a special kind of stored procedure, in terms of the procedural
code inside its body.
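As a sketch, a minimal AFTER INSERT audit trigger might look like the following (the Employee and EmployeeAudit tables and their columns are assumptions, not from the original text):

```sql
-- Hypothetical audit trigger: logs every row inserted into Employee.
CREATE TRIGGER trg_Employee_Insert
ON Employee
AFTER INSERT
AS
BEGIN
    INSERT INTO EmployeeAudit (EmployeeID, AuditDate)
    SELECT EmployeeID, GETDATE()
    FROM inserted;   -- "inserted" is the pseudo-table holding the new rows
END;
```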
What are the advantages and disadvantages of Triggers?
Advantages
A trigger can catch errors in business logic at the database level.
Triggers provide an alternative way to run scheduled tasks: instead of waiting for a scheduled job,
you can handle those tasks before or after changes are made to database tables.
Triggers are very useful for auditing changes to the data in a database table.
Disadvantages
Triggers can only provide extended validation and cannot replace all validations; some simple
validations still have to be done at the application level.
Triggers execute invisibly from the client application that connects to the database server, so it is
difficult to figure out what is happening at the database layer.
Triggers run on every update made to the table, which adds workload to the database and can
cause the system to run slower.
What is a Stored Procedure?
A stored procedure is a named group of SQL statements that have been previously created and stored
in the server database. Stored procedures accept input parameters, so that a single procedure can be
used over the network by several clients using different input data.
What is Normalization?
In relational database design, the process of organizing data to minimize redundancy of data in a table
is called normalization.
1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field
contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If attributes do not contribute to a description of the key, remove them to a separate table. All
attributes must be directly dependent on the primary key.
BCNF: Eliminate Dependencies Between Candidate Keys
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct
tables.
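The separation step above can be sketched in DDL. The Employee and Department tables and their columns are hypothetical; the point is that attributes describing a department move into their own table keyed by DeptID:

```sql
-- Unnormalized design (for contrast): DeptName and DeptLocation would be
-- repeated in every Employee row that belongs to the same department.

-- Normalized: department attributes depend on DeptID, not on EmployeeID,
-- so they move to a separate table.
CREATE TABLE Department (
    DeptID       int PRIMARY KEY,
    DeptName     varchar(50),
    DeptLocation varchar(50)
);

CREATE TABLE Employee (
    EmployeeID int PRIMARY KEY,
    Name       varchar(50),
    DeptID     int REFERENCES Department (DeptID)   -- one fact, one place
);
```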
What is ACID?
ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee database
transactions are processed reliably. In the context of databases, a single logical operation on the data
is called a transaction. For example, a transfer of funds from one bank account to another, even
though it might involve multiple changes (such as debiting one account and crediting another), is a
single transaction.
Atomicity
It requires that database modifications follow an “all or nothing” rule. A transaction is said to be
atomic if, when one part of the transaction fails, the entire transaction fails and the database state is
left unchanged. It is critical that the database management system maintains the atomic nature of
transactions in spite of any application, DBMS, operating system or hardware failure.
Consistency
This property ensures that the database remains in a consistent state; more precisely, any
transaction will take the database from one consistent state to another consistent state. The
consistency property does not say how the DBMS should handle an inconsistency, other than to
ensure that the database is clean at the end of the transaction.
Isolation
It refers to the requirement that other operations cannot access data that has been modified during a
transaction that has not yet completed. The question of isolation arises in the case of concurrent
transactions (multiple transactions occurring at the same time). Each transaction must remain
unaware of other concurrently executing transactions, except that one transaction may be forced to
wait for the completion of another transaction that has modified data the waiting transaction
requires.
Durability
It is the ability of the DBMS to recover the committed transaction updates against any kind of system
failure (hardware or software). Durability is the DBMS’s guarantee that once the user has been
notified of a transaction’s success the transaction will not be lost, the transaction’s data changes will
survive system failure, and that all integrity constraints have been satisfied, so the DBMS won’t need
to reverse the transaction.
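The funds-transfer example above can be sketched in T-SQL. The Account table, its columns and the account IDs are assumptions for illustration:

```sql
-- Hypothetical fund transfer: both updates succeed together or neither does.
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Account SET Balance = Balance - 100 WHERE AccountID = 1;  -- debit
    UPDATE Account SET Balance = Balance + 100 WHERE AccountID = 2;  -- credit
    COMMIT TRANSACTION;      -- durability: the committed changes survive failure
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;    -- atomicity: any failure leaves the database unchanged
END CATCH;
```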
What is De-normalization?
De-normalization is the process of intentionally adding redundant data to one or more tables, usually
to improve read performance at the cost of extra storage and update overhead. It is the reverse of
normalization.
What are the different keys in a relational database?
In relational database design, a primary key is used to uniquely identify each row in a table.
A candidate key is a field or combination of fields that can act as a primary key field for that table to
uniquely identify each record in that table.
Surrogate keys are keys that have no “business” meaning and are used solely to identify a record in
the table. Such keys are either database- or system-generated values.
A superkey is defined in the relational model as a set of attributes of a relation variable (relvar) for
which it holds that in all relations assigned to that variable there are no two distinct tuples (rows) that
have the same values for the attributes in this set. Equivalently a superkey can also be defined as a
set of attributes of a relvar upon which all attributes of the relvar are functionally dependent.
What’s the difference between a primary key and a unique key?
Both a primary key and a unique key enforce uniqueness of the column on which they are defined.
But by default a primary key creates a clustered index on the column, whereas a unique key creates
a nonclustered index by default. Another major difference is that a primary key doesn’t allow NULLs,
but a unique key allows one NULL.
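A minimal sketch of the two constraints side by side (the Employee table and its columns are hypothetical):

```sql
CREATE TABLE Employee (
    EmployeeID int NOT NULL,
    SSN        varchar(11) NULL,                      -- unique key: one NULL allowed
    CONSTRAINT PK_Employee PRIMARY KEY (EmployeeID),  -- clustered index by default
    CONSTRAINT UQ_Employee_SSN UNIQUE (SSN)           -- nonclustered index by default
);
```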
Pros:
Cons:
1. An additional index is needed. In SQL Server, the PK constraint always creates a unique
index; in Oracle, if an index already exists, PK creation will use that index for uniqueness
enforcement (not a con in Oracle).
2. Cannot be used as a search key.
3. If it is database controlled, for products that support multiple databases, different
implementations are needed, example: identity in SS2k, before triggers and sequences in
Oracle, identity/sequence in DB2 UDB.
4. Always requires a join when browsing the child table(s).
An identity column is a column (field) in a database table whose values are generated by the
database. This is much like an AutoNumber field in Microsoft Access or a sequence in Oracle. Because
the concept is so important in database science, many RDBMS products implement some type of
generated key, although each has its own terminology.
An identity column differs from a primary key in that its values are managed by the server and usually
cannot be modified. In many cases an identity column is used as a primary key, however this is not
always the case.
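In SQL Server this is the IDENTITY property. A short sketch, with a hypothetical Orders table:

```sql
CREATE TABLE Orders (
    OrderID   int IDENTITY(1, 1) PRIMARY KEY,   -- seed 1, increment 1
    OrderDate datetime
);

-- OrderID is generated by the server; the INSERT does not supply it.
INSERT INTO Orders (OrderDate) VALUES (GETDATE());

SELECT SCOPE_IDENTITY();  -- the identity value just generated in this scope
```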
What is a View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as
updating or deleting rows. Rows updated or deleted through the view are updated or deleted in the
table the view was created from. It should also be noted that as data in the original table changes, so
does the data in the view, since a view is just a way of looking at part of the original table. The results
of using a view are not permanently stored in the database. The data accessed through a view is
actually constructed using a standard T-SQL SELECT command and can come from one or many
different base tables, or even other views.
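A small sketch of an updatable view (the Employee table, its columns and the IsActive flag are assumptions):

```sql
-- A simple view exposing a subset of the hypothetical Employee table.
CREATE VIEW vw_ActiveEmployees
AS
SELECT EmployeeID, Name, DeptID
FROM Employee
WHERE IsActive = 1;

-- Updating through the view updates the underlying base table.
UPDATE vw_ActiveEmployees SET Name = 'Smith' WHERE EmployeeID = 1;
```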
What is @@ERROR?
@@ERROR returns the error number for the last Transact-SQL statement executed.
Returns an error number if the previous statement encountered an error. If the error was one of the
errors in the sys.messages catalog view, then @@ERROR contains the value from the
sys.messages.message_id column for that error. You can view the text associated with an @@ERROR
error number in sys.messages.
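A minimal usage sketch (the Employee table is hypothetical). Note that @@ERROR is reset by every statement, so it must be tested or saved immediately:

```sql
UPDATE Employee SET Salary = Salary * 1.1 WHERE EmployeeID = 1;
IF @@ERROR <> 0
    PRINT 'The previous statement raised an error.';

-- If the value is needed later, capture it right away:
DECLARE @err int;
SELECT @err = @@ERROR;
```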
What is the difference between temp tables, CTEs and table variables?
A temporary table (#table) is a real table created in tempdb; it can have indexes and statistics and
lives for the duration of the session (or stored procedure) that created it. A table variable (@table) is
also stored in tempdb but is scoped to the batch, has no statistics, and is best suited to small row
counts. A CTE (common table expression) is not a table at all: it is a named query definition that
exists only for the single statement that immediately follows it.
A function in SQL Server is an object that performs an operation and returns a value.
Functions are deterministic when they always return the same result any time they are called with a
specific set of input values.
Functions are nondeterministic when they can return different results each time they are called, even
with the same set of input values.
A scalar user-defined function returns one of the scalar data types, such as int, varchar or float. You
pass in zero or more parameters and you get a return value.
A table-valued user-defined function returns a table data type and is an alternative to a view.
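Sketches of both kinds of user-defined function (the function names, the Employee table and its columns are assumptions):

```sql
-- Scalar UDF: returns a single value.
CREATE FUNCTION dbo.fn_FullName (@First varchar(25), @Last varchar(25))
RETURNS varchar(51)
AS
BEGIN
    RETURN @First + ' ' + @Last;
END;
GO

-- Inline table-valued UDF: returns a table, used like a parameterized view.
CREATE FUNCTION dbo.fn_EmployeesByDept (@DeptID int)
RETURNS TABLE
AS
RETURN (SELECT EmployeeID, Name FROM Employee WHERE DeptID = @DeptID);
```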
1. text
2. ntext
3. image
What is an Aggregate Function?
An aggregate function performs a computation on a set of values rather than on a single value. For
example, finding the average or mean of a list of numbers is an aggregate function.
A set of Structured Query Language (SQL) statements with an assigned name that’s stored in the
database in compiled form so that it can be shared by a number of programs.
Some commonly used system stored procedures:
1. sp_helptext
2. sp_spaceused
3. sp_adduser
4. sp_help
5. sp_helpdb
6. sp_helpfile
7. sp_statistics
8. sp_table_privileges
9. sp_sqlexec
10. sp_tables
11. sp_who
What is a Linked Server?
A linked server configuration enables SQL Server to execute commands against OLE DB data sources
on remote servers.
What is a view?
A view is, in essence, a virtual table. It does not physically exist; rather, it is created by a query
joining one or more tables.
A view with an index is known as an indexed view. The index stores the result set of the query
separately as an object. This improves query processing, because the query optimizer can scan the
data in the indexed view rather than the tables used in the query that defines the view. In the case of
a standard view, only the view definition is stored; the result set is not.
What is a synonym in SQL server?
A synonym is an alternate name given to a schema object (most often a table). It helps hide the
underlying object's details from the end user, providing better security.
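A short sketch (the server, database and table names are hypothetical):

```sql
-- Hide a remote/underlying object behind a simpler local name.
CREATE SYNONYM dbo.Emp FOR RemoteServer.HRDatabase.dbo.Employee;

SELECT * FROM dbo.Emp;  -- resolved to the underlying table at run time
```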
What are the advantages of stored procedures?
Stored procedures are faster in execution, as their plans are already cached in memory.
When a procedure is modified, all clients automatically get the new version.
Stored procedures can be used to help ensure the integrity of the database.
What is an OLE DB provider?
An OLE DB provider is a DLL that manages and interacts with a specific data source. An OLE DB data
source identifies the specific database that can be accessed through OLE DB.
What is the difference between WHERE and HAVING?
WHERE is a restriction clause: it restricts the rows retrieved from the database, and it is applied
before the result set is grouped.
HAVING is used after a GROUP BY clause. It works the same way as the WHERE clause, but on the
grouped results produced by GROUP BY.
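A sketch contrasting the two clauses (the Payroll table and its columns are assumptions):

```sql
-- WHERE filters rows before grouping; HAVING filters groups after GROUP BY.
SELECT EmpName, SUM(Salary) AS TotalSalary
FROM Payroll
WHERE PayYear = 2010          -- row-level restriction, applied first
GROUP BY EmpName
HAVING SUM(Salary) > 50000;   -- group-level restriction, applied last
```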
What is the basic form of a SQL statement to read data out of a table?
The basic form to read data out of a table is “SELECT * FROM table_name;”
What is a “constraint”?
A constraint allows you to apply simple integrity checks on a table. The primary types of constraints
currently supported by SQL Server are:
FOREIGN KEY – validates that every value in the referencing column exists in the primary key column
of another table.
CHECK – limits the values that can be placed in a column. Check constraints are used to enforce
domain integrity.
NOT NULL – enforces that the column will not accept null values. Not null constraints, like check
constraints, are used to enforce domain integrity.
*Note: DEFAULT – specifies a default value for a column in case an insert operation does not provide
one. DEFAULT is not a constraint in SQL Server (do remember this).
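The constraint types above can be sketched in one table definition (the OrderItem and Orders tables and their columns are hypothetical):

```sql
CREATE TABLE OrderItem (
    OrderItemID int NOT NULL PRIMARY KEY,
    OrderID     int NOT NULL
        CONSTRAINT FK_OrderItem_Order
        FOREIGN KEY REFERENCES Orders (OrderID),   -- value must exist in Orders
    Quantity    int NOT NULL
        CONSTRAINT CK_OrderItem_Qty CHECK (Quantity > 0),  -- domain integrity
    CreatedOn   datetime DEFAULT GETDATE()   -- DEFAULT: supplied when omitted
);
```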
What is functional dependency?
A functional dependency describes how one attribute in a database depends upon another: attribute B
is functionally dependent on attribute A if each value of A determines exactly one value of B. For
example, EmployeeID functionally determines EmployeeName, since each employee ID corresponds
to exactly one name.
Processing of a GROUP BY or ORDER BY clause often requires the creation of temporary tables to
process the results of the query, which, depending on the result set, can be very expensive.
Self join is just like any other join, except that two instances of the same table will be joined in the
query. It can be used with inner / outer joins.
Example:
Consider an Employees table which contains records of all the employees in the organisation,
including managers. A query that fetches the details of all the employees together with their manager
details uses two instances of the same Employee table.
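A self-join of that kind might look like the following sketch (the Employee table and its ManagerID column are assumptions):

```sql
-- Each employee row joins to the row of its manager in the same table.
SELECT e.Name AS EmployeeName,
       m.Name AS ManagerName
FROM Employee AS e
LEFT OUTER JOIN Employee AS m      -- second instance of the same table
    ON e.ManagerID = m.EmployeeID; -- LEFT join keeps employees with no manager
```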
What are the differences between cross join and full outer join?
Cross Join :
No join conditions are specified.
Results in pairs of rows.
Results in Cartesian product of two tables.
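A sketch contrasting the two joins (all table and column names are hypothetical):

```sql
-- CROSS JOIN: no join condition; every row of one table is paired with
-- every row of the other (a Cartesian product).
SELECT a.Color, b.Size
FROM Colors AS a
CROSS JOIN Sizes AS b;

-- FULL OUTER JOIN: a join condition is required; matched rows are returned
-- plus unmatched rows from BOTH sides, with NULLs filling the missing side.
SELECT c.CustomerID, o.OrderID
FROM Customers AS c
FULL OUTER JOIN Orders AS o
    ON c.CustomerID = o.CustomerID;
```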
What is index covering?
Index covering is a term used when data can be fetched using only indexes, without touching the
underlying tables.
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of
Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to
analyze later.
One-to-One relationship can be implemented as a single table and rarely as two tables with primary
and foreign key relationships. One-to-Many relationships are implemented by splitting the data into
two tables with primary key and foreign key relationships. Many-to-Many relationships are
implemented using a junction table with the keys from both the tables forming the composite primary
key of the junction table.
User-defined datatypes let you extend the base SQL Server datatypes by providing a descriptive
name and format to the database. For example, suppose your database has a column called
EmployeeName which appears in many tables, and in all these tables it should be varchar(50).
In this case you could create a user-defined datatype called UDD_EmployeeName of varchar(50) and
use it across all your tables.
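That example can be sketched as (the Employee table is hypothetical; the type name comes from the text above):

```sql
CREATE TYPE UDD_EmployeeName FROM varchar(50) NOT NULL;

CREATE TABLE Employee (
    EmployeeID   int PRIMARY KEY,
    EmployeeName UDD_EmployeeName   -- same definition reused across tables
);
```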
DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each
deleted row.
Delete command removes the rows from a table based on the condition that we provide with a
WHERE clause.
If you want to retain the identity counter, use DELETE instead. If you want to remove table
definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE can be rolled back.
DELETE is DML Command.
DELETE does not reset identity of the table.
TRUNCATE
Truncate will actually remove all the rows from a table and there will be no data in the table
after we run the truncate command.
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data,
and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure, its columns, constraints,
indexes and so on, remains. The counter used by an identity for new rows is reset to the seed
for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.
TRUNCATE is minimally logged; inside an explicit transaction it can still be rolled back, but once
committed the individual rows cannot be recovered from the log.
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table
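The contrast above in two statements (the Employee table and DeptID column are hypothetical):

```sql
DELETE FROM Employee WHERE DeptID = 10;  -- logged per row, fires triggers,
                                         -- identity counter keeps its value

TRUNCATE TABLE Employee;                 -- deallocates data pages, no WHERE
                                         -- clause, identity reset to its seed
```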
What is a transaction?
A transaction is a logical unit of work in which, all the steps must be performed or none.
Disadvantages of cursors:
1. Each time, you can perform a set of tasks on only a single row/record from the cursor.
2. Each fetch can result in a network round trip, whereas a normal SELECT query makes only one
round trip, however large the result set is.
3. Cursors are also memory-intensive, because they require more resources and temporary storage.
What is the system function to get the current user’s user id? USER_ID().
What is Collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is
sorted using rules that define the correct character sequence, with options for specifying case
sensitivity, accent marks, kana character types and character width.
What is OLTP ?
OLTP stands for Online Transaction Processing. OLTP systems' relational database designs use the
discipline of data modeling and generally follow the Codd rules of data normalization in order to
ensure absolute data integrity. Using these rules, complex information is broken down into its
simplest structures, where all of the individual atomic-level elements relate to each other and satisfy
the normalization rules.
A local temporary table exists only for the duration of a connection or, if defined inside a compound
statement, for the duration of the compound statement.
What is SSRS?
SQL Server 2005 Reporting Services is a server-based reporting platform that you can use to create
and manage tabular, matrix, graphical, and free-form reports that contain data from relational and
multidimensional data sources. The reports that you create can be viewed and managed over a World
Wide Web-based connection. Reporting Services includes the following core components:
A complete set of tools that you can use to create, manage, and view reports.
A Report Server component that hosts and processes reports in a variety of formats. Output
formats include HTML, PDF, TIFF, Excel, CSV, and more.
An API that allows developers to integrate or extend data and report processing in custom
applications, or create custom tools to build and manage reports.
The reports that you build can be based on relational or multidimensional data from SQL Server,
Analysis Services, Oracle, or any Microsoft .NET data provider such as ODBC or OLE DB. You can
create tabular, matrix, and free-form reports. You can also create ad hoc reports that use predefined
models and data sources.
Visually and functionally, the reports that you build in Reporting Services surpass traditional reporting
by including interactive and Web-based features. Some examples of these features include drill-down
reports that enable navigation through layers of data, parameterized reports that support content
filtering at run time, free-form reports that support content in vertical, nested, and side-by-side
layouts, links to Web-based content or resources, and secure, centralized access to reports over
remote or local Web connections.
Although Reporting Services integrates with other Microsoft technologies out-of-the-box, developers
and third-party vendors can build components to support additional report output formats, delivery
formats, authentication models, and data source types. The development and run-time architecture
was purposely created in a modular design to support third-party extension and integration
opportunities.
SSRS competes with Crystal Reports and other business intelligence tools, and is included in Express,
Workgroup, Standard, and Enterprise editions of Microsoft SQL Server as an install option. Reporting
Services was first released in 2004 as an add-on to SQL Server 2000. The second version was
released as a part of SQL Server 2005 in November 2005. The latest version was released as part of
SQL Server 2008 in August 2008.
Users can interact with the Report Server web service directly, or instead use Report Manager, a web-
based application that interfaces with the Report Server web service. With Report Manager, users can
view, subscribe to, and manage reports as well as manage and maintain data sources and security
settings. Reports can be delivered via e-mail or placed on a file system. Security is role-based and can
be assigned on an individual item, such as a report or data source, a folder of items, or site wide.
Security roles and rights are inherited and can be overloaded.
In addition to using the standalone Report Server that comes with SQL Server, RDL reports can also
be viewed using the ASP.NET ReportViewer web control or the ReportViewer Windows Forms control.
This allows reports to be embedded directly into web pages or .NET Windows applications. The
ReportViewer control processes reports in one of two ways: (a) server processing, where the report is
rendered by and obtained from the Report Server; and (b) local processing, where the control renders
the RDL file itself.
SQL Server Reporting Services also supports ad hoc reports: the designer develops a report schema
and deploys it on the reporting server, where the user can choose relevant fields/data and generate
reports. Users can then download the reports locally.
Advantages of SSRS
'Direct' and efficient reporting access to information residing in both Oracle and MS SQL
Server databases.
Faster (and therefore cheaper) production of reports on both relational and cube data.
An easy to deploy centralised reporting infrastructure based on Microsoft Reporting Services.
Faster delivery of information to the business, providing better decision support.
Ability for the business to self-serve, edit and interact with information without having to rely
on IT or IS resources.
Simple pricing model tailored for both entry and enterprise level installations, allowing for
inexpensive provision of Business Intelligence for the Masses and democratisation of
information.
No need for expensive specialist skills.
The beauty is that the entire report and data source definition is stored as a simple XML file.
This is the file the reporting engine uses to render reports. The elements and attributes
required for defining a report format are fully documented. Further, you can even add your
custom elements if you want to enrich available functionality. Most report writers available
today never provided this functionality.
XML based report definition allows you to directly design reports programmatically and render
them. This was very difficult to achieve in currently available report writers.
The default report designer is integrated with Visual Studio .NET so that you can create
application and its reports in the same environment.
The report designer eliminates the traditional bands very effectively. It provides three types of
elements—Table, Matrix and List. Table is equivalent to the traditional report with header,
footer, detail and groups. You can have multiple tables rendering different data arranged side
by side!
For each type of reporting element, you have to attach a dataset to it. A dataset is based upon a
data source.
The matrix is like a pivot table. It has rows, columns and cells containing computed data. Very
useful and very easy. I am sure all of you remember how much we have to struggle today to
create a simple cross-tab report. Write complex queries, struggle with table formatting and so
on. With this new tool, just drag the matrix on the report, specify row, column and data fields
and that’s it.
The list is for free-form data that is descriptive and cannot be represented as a structured
table, somewhat like a data repeater. Whatever data elements you put in the list are repeated
for each row in the base dataset. This allows you to create free-form layout reports that still
repeat for each data item.
The report items can be recursive. For example, one list can contain another list. What's more,
one report can be defined as a sub-report of the base report. This provides more than just drill-
down. The subreport also need not be hard-coded; parameters can be passed to it at run time
based upon the area of the base report that was clicked.
Now, about rendering. This is the most sophisticated part. By default rendering is in HTML. But
while you are viewing the report, you can simply click on the toolbar and render it in many
different ways.
The most important part is that all the reports are stored on the central SQL Server database.
Usually, we have reports for each application stored and managed separately. This leads to a
lot of confusion and administrative headaches.
The reports are viewed and administered by using a Web-based implementation of the entire
reporting engine. The default website provides a base structure which is folder based.
Typically you will have folders created for each application or user functionality.
How do you access reports usually? By instancing the report writer runtime. Here you don't
have to do that, because all reports are accessible in one of two ways:
The best part of the reporting server is that the entire functionality is exposed as a single Web
service! You can create, manage, and view reports using various methods provided by the web
service.
The security is managed in a role-based manner and can be applied to folders as well as
reports.
User can manage their own reporting needs by accessing reports ad-hoc or by subscribing to
the reports. Subscription based reports are automatically sent by mail to the users.
All reports require parameters to be accepted from users. Here once parameters are defined,
the UI for these parameters is automatically generated.
Finally, you have many ways of rendering the reports:
HTML (MHTML)
Excel
Acrobat
Tiff (image)
XML
CSV
And of course, you can write your own rendering mechanism, if you like!
Depending upon the type of output you want, some features are disabled. For example, HTML
reports do not support pagination. Whereas, XML and CSV will not support drilldown. This is
primarily due to the nature of the output format and not due to inadequate rendering
capabilities of the reporting services.
Minimum Requirements
Operating System: SQL Server 2000 Reporting Services can run on the following operating systems:
Hard Disk:
•50 MB of available hard disk space for Report Server
•100 MB of available hard disk space for Microsoft .NET Framework 1.1
•30 MB of available hard disk space for Report Designer
•145 MB of available hard disk space for samples and Books Online
Component – Brief Description
Programmatic interfaces – Provides access to SSRS functionality through SOAP and HTTP requests.
Report Processor – Facilitates a set of report generation operations from data retrieval to rendering. The Report Processor invokes other components, such as data extensions, to assist with report generation.
Data-processing extensions – Retrieve a report's data from a data source. Developers can develop additional custom data-processing extensions.
Command-line utilities – Three utilities, designed to assist with scripting of administrative tasks, installed automatically during the Reporting Services install.
Rendering extensions – Transform the report's intermediate format (a combination of the report's layout and data) into a device-specific format, such as HTML. Developers can create new rendering extensions.
Report Server database – Stores report definitions, report metadata, report history, cached reports, snapshots, resources, security settings, encrypted data, scheduling and delivery data, and more.
Report Server temporary database – Stores intermediate processing products, cache, and data.
Scheduling and Delivery Processor – Monitors for events (such as a timed subscription), collaborates with the Report Processor to render a report, and with delivery extensions to deliver scheduled reports to a location specified in the subscription.
Delivery extensions – Deliver reports to specific devices, such as email or a file system. Developers can create new delivery extensions.
Report Manager – Provides web-based report access and management capabilities. The default URL that invokes Report Manager is http://<server>/reports.
Report Model Designer – Generates report models for use in Report Builder.
Report Builder – Provides drag-and-drop, easy-to-use report design functionality. Report Builder is an ad hoc end-user report authoring and publishing tool executed on a client computer.
Report Designer – Allows developers to develop complex reports. Report Designer is a comprehensive report authoring and publishing tool, hosted in Business Intelligence Development Studio or Visual Studio.
Security extensions – Enable authentication and authorization of users and groups. Developers can (excluding SQL Server Express edition) create new security extensions.
SQL Server Management Studio – Provides administrators with a Windows forms-based, integrated environment to manage SQL Server components including SSRS. From the report management perspective, Management Studio has similar functionality to Report Manager, but provides additional capabilities, such as consolidated web-farm management.
Reporting Services Configuration Tool – Provides administrators with functionality to start and stop the Report Server Windows service and reconfigure report servers. This is a Windows forms application.
WMI provider – Provides a set of WMI interfaces to manage settings of a Report Server and assists with SSRS instance discovery on a network.
Performance monitoring objects – Provide a view of SSRS Windows and web service performance.
The Reporting Services Architecture(briefly)
For a relatively new product, Reporting Services has a quite complex architecture. The full
Reporting Services architecture includes development tools, administration tools, and report
viewers. There are a number of ways to get to Reporting Services programmatically,
including SOAP and WMI interfaces.
Creating a SSRS Project
To create a report in Reporting Services, you must first create a report server project in
SQL Server Business Intelligence Development Studio where you will save your report
definition (.rdl) file and any other resource files that you need for your report.
Open Business Intelligence Development Studio. Click Start, point to All Programs,
point to Microsoft SQL Server 2005 and select SQL Server Business Intelligence
Development Studio.
On the File menu, point to New, and then click Project.
In the Project Types list, click Business Intelligence Projects.
In the Templates list, click Report Server Project.
In Name, type a name for the project.
In Location, type a location for the project, or click the Browse button and select a
location.
In Solution, type a name for the solution, or click the Browse button and select a
solution.
If a solution is currently open, select Add to Solution or Create New Solution. Add to
Solution adds the report server project to the current solution; Create New Solution
closes the current solution and creates a new solution for the current project.
Click OK to create the project.
For example, let's say your company has Development, Test and Production environments. If you put
the connection information into each report, you will have to change it each time when you publish to
Development, Test and Production. If you use a shared data source, you still have to set up a data
source object for each of three environments, but you can simply publish the report to each
environment, and they will automatically use the connection information associated with that
environment.
From the menu select Project > Add New Item > Select Data Source. On the General tab, provide a
name for the data source, "DataSource1". Leave the Type as Microsoft SQL Server. Click the Edit
button on the right, and enter the connection information for Reporting Demo, as before. Click OK on
the Shared Data Source screen and the data source is done.
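For reference, the connection information entered in the Edit dialog is stored as a connection string. A typical one looks like the following sketch, where the server and database names are placeholders for your own environment:

```text
Data Source=(local);Initial Catalog=ReportingDemo;Integrated Security=SSPI
```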
A new report will be added to the project, and the Report Designer will open at the Data tab. At this
point, let's take a closer look at the Report Designer tool.
At the top of the Report Designer window are three tabs: Data, Layout and Preview.
The Data tab is used to build data sources for your report.
The Layout tab is the physical report designer where you set up the header, the footer and the data
presentation of the report.
The Preview tab allows you to actually run the report from Visual Studio 2005, without having to
publish it to a report server first. If your report takes parameters, the Preview tab will ask you to fill
them out before it runs the report.
Building Reports
Click on the Datasets tab under Toolbox on the left-hand side of the Visual Studio environment. You
should see the DataSet1 dataset. Expand it, select Region and drag it to the first cell of the body
row of the table. This will display the Region field in the first column. Now put the State in the second
column, the Gender in the third, and so on. Note that, when you drag a column like FirstName
into the body row of the table, SSRS makes a guess as to what to call the column heading; it calls it
"First Name". You can click on the Preview tab to view the report.
Grouping
In a data region, you can group data by a single field, or create more complex expressions that
identify the data on which to group. Complex expressions include references to multiple fields or
parameters, conditional statements, or custom code.
3. You will find another window for Grouping and Sorting properties.
4. Under the Group on section, select an expression on which you want to group the data.
If you want to group data within another group, you can select a group expression for the parent group.
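A group expression is an ordinary report expression; for instance, to group on the Region field used earlier in this walkthrough:

```vb
=Fields!Region.Value
```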
Preview the report, and you should have interactive sorting on each column.
IIF() Function
The IIF() statement has the following format:
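The general form is IIF(condition, value-if-true, value-if-false). A small example (the field name here is illustrative, not from the sample database):

```vb
' Returns "Red" when the total is negative, "Black" otherwise
=IIF(Fields!Total.Value < 0, "Red", "Black")
```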
You create expressions on report items by entering text in text boxes, in properties, or through a
dialog box. An expression is written in Microsoft Visual Basic. An expression begins with an equal sign
(=) and consists of references to constants, operators, functions, and built-in report global collections
including fields and parameters. You can also create references to .NET Framework classes, custom
code, and assemblies external to report processing.
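As a sketch, a single expression can combine constants, operators, built-in functions and the Fields and Parameters collections. The field and parameter names below are hypothetical:

```vb
' Builds a label from a parameter value and an aggregated, formatted field value
="Sales for " & Parameters!RegionName.Value & ": " & Format(Sum(Fields!Sales.Value), "$#,##0.00")
```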
Examples
Asc(string)
Converts the first letter in the passed string to ANSI code.
CByte(string)
Converts the passed argument to Byte.
CCur(string)
Converts the argument to type Currency (really Decimal).
CDate(string)
Converts a string to type DateTime
CDbl(object)
Converts the passed parameter to double.
Chr(int)
Converts the specified ANSI code to a character.
CInt(object)
Converts the argument to integer.
CLng(object)
Converts the argument to long.
CountRows([scope])
Returns the number of rows in the group.
CSng(object)
Converts the argument to Single.
CStr(object)
Converts the argument to String.
Day(datetime)
Returns the integer day of month given a date.
Format(string1 [,string2])
Formats string1 using the format string2. Some valid formats include '#,##0', '$#,##0.00',
'MM/dd/yyyy', 'yyyy-MM-dd HH:mm:ss'. string2 is a .NET Framework formatting string.
Hex(number)
Returns the hexadecimal value of a passed number.
Hour(datetime)
Returns the integer hour given a date/time variable.
InStrRev(string1, string2[,offset[,case]])
1 based offset of string2 (second argument) in string1 (first argument) starting from the end of
string1. You can optionally pass an integer offset as the third argument. You can also optionally pass a
1 as the fourth argument if you want the search to be case insensitive.
LCase(string)
Returns the lower case of the passed string.
Left(string, n)
Returns the leftmost n characters of the string.
Len(string)
Returns the length of the string.
LTrim(string)
Removes leading blanks from the passed string.
Mid(string, start[, length])
Returns the portion of the string (arg 1) denoted by the start position (arg 2) and length (arg 3).
Minute(datetime)
Returns the integer minute given a date/time variable.
Month(datetime)
Returns the integer month given a date.
MonthName(month [,abbreviate])
Returns the month name for the given month number. If the optional second argument is 'True' then
the abbreviated month name will be returned.
Oct(number)
Returns the octal value of a specified number.
Replace
Returns a string replacing 'count' instances of the searched for text (optionally case insensitive)
starting at position start with the replace text. The function form is
Replace(string,find,replacewith[,start[,count[,compare]]]).
Right(string, number)
Returns a string of the rightmost characters of a string.
RowNumber([scope])
Returns the running row number.
RTrim(string)
Removes trailing blanks from string.
Second(datetime)
Returns the integer second given a date/time variable.
Space(number)
Returns a string containing the number of spaces requested.
String(number, char)
Return string with the character repeated for the length.
StrReverse(string)
Returns a string with the characters reversed.
Today()
Returns the current date on the computer running the report.
Trim(string)
Removes whitespace from beginning and end of string.
UCase(string)
Returns the uppercase version of the string.
Year(datetime)
Obtains the year from the passed date.
Weekday(datetime)
Returns the integer day of week (1=Sunday, 2=Monday, ..., 7=Saturday) given a date.
WeekdayName(iday [,abbr])
Returns the name of the day of week given the integer Weekday. The optional second argument will
return the abbreviated day of week if 'True'.
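Several of the functions above can be combined in one expression; for example, to render a date field as an abbreviated month name plus year (the field name is illustrative):

```vb
' Renders a date field such as a date in October 2007 as an abbreviated
' month name followed by the year
=MonthName(Month(Fields!OrderDate.Value), True) & " " & CStr(Year(Fields!OrderDate.Value))
```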
Custom functions
There are about 100 common functions in SSRS 2005, and they can handle most of what you will
need to do. Occasionally, however, you will still need to set an expression with more control than the
common functions can manage. So, to make SSRS even more flexible, you can write custom VB.NET
or C# functions and use them in expressions.
Open the report and navigate to the Layout tab. From the Report menu select Report Properties
and then jump to the Code tab.
We are going to write a custom function that returns a different color, depending on the value that is
passed to it. We will then use that function to set the background color for the status field in the
report detail table.
Copy the following code into the code window:
Public Function GetColor(ByVal status As String) As String
    If status = "1" Then
        Return "White"
    End If
    If status = "2" Then
        Return "Yellow"
    End If
    If status = "3" Then
        Return "Tomato"
    End If
    ' Fall back to a neutral color for unexpected status values
    Return "White"
End Function
Now that we have a function that returns color names we need to wire up that function to an
expression.
Click on the Customer Status cell and open the Properties window. Find the Background Color
property and choose Expression from the dropdown. Add the following line of code to create an
expression:
=code.GetColor(Fields!ChStatus.Value)
When the report runs and this expression is resolved, SSRS will call your GetColor function and pass
the value of the customer status for that instance of the row to the function. The function takes over
and decides which color should be returned. SSRS then uses that value as the value for the
background property for that cell.
Please note that custom functions must be called using =code.<myfunction>.
Now navigate to the Preview tab and run the report.
Subreports
A subreport is a report that is embedded into another report. Subreports can take parameters and
execute their own datasets. A subreport in SSRS is actually just another report; in fact, in SSRS you
can execute a subreport on its own.
To add a subreport to a report, you simply drag a subreport control onto the report and tell it which
report to display. If the subreport requires parameters, you have to tell the main report which value to
pass to the subreport.
Let's add a new report to the project and call it MainReport.rdl. Create a new dataset using the
shared data source and the following query:
FROM tblEmployee
Switch to the Layout tab. Drag a table on the report detail area. Set up the first column to display the
Employee's first name (by dragging that column from the Datasets tab into the Detail row) and set up
the second column to display the Employee's last name. Preview the report.
Create another report, and call this one SubReport.rdl. This time, create a dataset that uses the
shared data source, and use the following query text:
FROM tblEmployee
WHERE EmployeeID = @EmployeeID
In the Layout tab, use text boxes to create the address layout, as shown in the figure. You can simply
drag the text boxes onto the screen by clicking on the field in the Datasets tab and dragging it onto
the design surface. You will also note that when you do this, the expression used to set the value
property for the textbox uses the First() function. This function uses the value from the first row
returned by the dataset. All other rows returned are ignored.
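The generated value expression takes this form (the field and dataset names below are illustrative):

```vb
' Uses the Address value from the first row of the named dataset;
' all later rows are ignored
=First(Fields!Address.Value, "DataSet2")
```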
Drag another Textbox report item from the Toolbox window to the page footer area. Position
the text box in the bottom-left corner of the footer, under the UserId box.
Configure the properties of the text box, as shown in the following table.
Property    Value
Name        txtPage
Width       3 in
Type "page: " in the txtPage box. (Important: be sure to type a colon and a space following the
label text.)
Drag the Page Number field after the space following the colon. Type of (with a space before
and after the text) after the Page Number field, and then drag the Total Pages field to the end
of the expression.
Global variables
Let's go to the expression editor. Click on Globals in the first column of the expression editor. In the
second column, you will see the list of the global variables that are provided to the report.
In this example, we are going to use the global variables to create a page-numbering scheme for our
report. First, add a footer to the report. You enable this by going to the Layout tab and selecting the
Report Footer option from the Report main menu. Now that you have a footer, drag four text boxes
into the footer section. Set their font properties to 8 pt, bold. Set the text for each text box as
follows:
Page
=Globals!PageNumber
of
=Globals!TotalPages
Switch to the Preview tab and run the report. You will now have a footer on each page that looks
something like 'Page 1 of 3'.
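The same footer text can also be produced by a single text box whose value concatenates the two global variables:

```vb
' One text box instead of four: "Page 1 of 3"
="Page " & Globals!PageNumber & " of " & Globals!TotalPages
```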
Some other global variables are:
ExecutionTime – the date and time at which the report began executing
ReportServerUrl – this allows you to verify that you are getting your report from the correct
source
UserID – this shows under which user ID the report was run
You can use these global variables to build some basic diagnostics into your reporting. You could also
add these variables to the report footer and use a parameter to hide or show them. That way, even in
production, your users can see a tidy, clean report, but you can see the same report with all the extra
information on it.
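One way to sketch that hide/show behaviour is to drive each diagnostic text box's Hidden property from a Boolean report parameter (the parameter name here is hypothetical):

```vb
' Hidden property expression: the box is visible only when ShowDiagnostics is True
=Not Parameters!ShowDiagnostics.Value
```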
The drill down functionality in SSRS allows you to have areas of your report that can expand and
collapse, much like a tree view.
Navigate to the Layout tab and drag a new table control onto the page.
Click the table to highlight it, then right-click on the little box with three horizontal lines in it that
appears at the left side of the table, at the beginning of the detail row.
Select Insert Group. When the Group Dialog appears use =Fields!Region.Value as the expression, so that
the report now groups on Region. Now drag the Region field into the first column on the Group row.
Next, drag the State, Gender, Category and Population fields into the detail row. Add some formatting,
and preview the report.
We've now performed a grouping, but we still have to enable the drilldown.
Click on the table to highlight it. Right-click on the nub that appears, just to the left of the detail row.
Select Properties. When the Properties window opens, expand the Visibility section. Set the Hidden
property to True and set the Toggle Item property to State. The Hidden property determines the
state of the row when the report is first run: if we set it to True, the data is collapsed and hidden.
By setting the Toggle Item property to State, a little + sign appears next to the State field when the
report is run, and the row expands and collapses like a tree view.
Deploying Reports
You can run your reports in the Visual Studio 2005 environment but, to make them useful, you must
publish them to a report server. The easiest way to do this is to have Visual Studio publish your
reports.
Set TargetServerURL to the URL of your report server. Use the TargetReportFolder property to set
up a folder for your reports. If the folder does not exist, the Publisher will create it for you.
The other key thing to pay attention to here is the OverwriteDataSources property. When this
property is set to "True", it will automatically copy over all your data source objects when you publish.
When set to "False", it will copy any new data source objects, but it will not overwrite existing ones.
This is important when dealing with development, test and production servers. You can publish a set
of data source objects to each server, pointed at the correct database, and never have to worry about
what database your reports are hitting in each environment.
From the Build menu select Deploy Solution. This will publish the project to the selected folder on
your report server. You can also deploy individual reports by right-clicking on the file in the Solution
explorer and selecting Deploy.
Once your report has been published, you can access and run it on your server through the browser at
http://servername/reportservername. From the Home page, you should be able to find the
folder you published to, with the reports in it. Select a report to run. At the top of the page you can
enter any values for report parameters, and then run the report. From here, you can also print or
export the report.
Local and Server Reports
A server report cannot be loaded into a web page directly. Instead, you place a ReportViewer control
on the page and use it to show the report.
<rsweb:ReportViewer ID="ReportViewer1" runat="server" ProcessingMode="Remote">
<ServerReport ReportServerUrl="http://ServerName/ReportServer"
ReportPath="/ReportsFolder/ReportName" />
</rsweb:ReportViewer>
The ProcessingMode of the ReportViewer indicates whether the report is a server report or a local
report.
To load a report from the report server, set the ReportViewer's ProcessingMode property to
Remote (ProcessingMode="Remote").
To load a local report, set the ProcessingMode property to Local (ProcessingMode="Local").
// Web Forms:
ReportViewer1.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote;
ReportViewer1.ServerReport.ReportPath = "/ReportsFolder/ReportName";
// Windows Forms:
ReportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Remote;
Deploying Reporting Services on Windows Vista and Windows Server 2008 requires
additional configuration steps if you want to administer a report server instance locally. Both
Windows Vista and Windows Server 2008 limit the overuse of elevated permissions by
removing administrator permissions when accessing applications. Because the operating
system removes permissions, members of the local Administrators group run most
applications as if they were using the Standard User account.
While this practice improves the overall security of your system, it prevents you from using
the predefined, built-in role assignments that Reporting Services creates for local
administrators. However, with additional configuration on your part, you can effectively
manage report server content and operations using standard user permissions if you do the
following:
Add Reporting Services URLs to trusted sites. By default, Internet Explorer 7.0 on Windows
Vista and Windows Server 2008 runs in Protected Mode, a feature that blocks browser
requests from reaching high-level processes that run on the same computer. You can
disable protected mode for the report server applications by adding them as Trusted Sites.
Create role assignments that grant you, the report server administrator, permission to
manage content and operations without having to use the Run as administrator feature on
Internet Explorer. By creating role assignments for your Windows user account, you gain
access to a report server with Content Manager and System Administrator permissions
through explicit role assignments that replace the predefined, built-in role assignments that
Reporting Services creates for local administrators.
1. Open a browser window with Run as administrator permissions. From the Start
menu, click All Programs, right-click Internet Explorer, and select Run as
administrator.
4. Click Tools.
6. Click Security.
7. Click Trusted Sites.
8. Click Sites.
9. Add http://<your-server-name>.
10. Clear the check box Require server verification (https:) for all sites in this zone.
18. Click Site Settings in the upper corner of the Home page.
25. Re-open Report Manager in Internet Explorer, without using Run as administrator.
Text Functions
ChrW - Returns the character associated with the specified character code. =ChrW(241)
Filter - Returns a zero-based array containing a subset of a String array based on specified
filter criteria. =Filter(Parameters!MultivalueParameter.Value, "3", True,
CompareMethod.Binary)
GetChar Returns a Char value representing the character from the specified index in the
supplied string. =GetChar(Fields!Description.Value, 5)
InStr Returns an integer specifying the start position of the first occurrence of one string
within another. =InStr(Fields!Description.Value, "car")
InStrRev Returns the position of the first occurrence of one string within another, starting
from the right side of the string. =InStrRev(Fields!Description.Value, "car")
Left Returns a string containing a specified number of characters from the left side of a
string. =Left(Fields!Description.Value,4)
Len Returns an integer containing either the number of characters in a string or the number
of bytes required to store a variable. =Len(Fields!Description.Value)
LSet Returns a left-aligned string containing the specified string adjusted to the specified
length. =LSet(Fields!Description.Value,4)
LTrim Returns a string containing a copy of a specified string with no leading spaces
(LTrim), no trailing spaces (RTrim), or no leading or trailing spaces (Trim).
=LTrim(Fields!Description.Value)
Replace Returns a string in which a specified substring has been replaced with another
substring a specified number of times.=Replace(Fields!Description.Value,"tube","headlight")
Right Returns a string containing a specified number of characters from the right side of a
string. =Right(Fields!Description.Value,4)
RSet Returns a right-aligned string containing the specified string adjusted to the specified
length. =RSet(Fields!Description.Value,4)
StrDup Returns a string or object consisting of the specified character repeated the
specified number of times. =StrDup(3,"M")
StrReverse Returns a string in which the character order of a specified string is reversed.
=StrReverse(Fields!Description.Value)
Trim Returns a string containing a copy of a specified string with no leading spaces (LTrim),
no trailing spaces (RTrim), or no leading or trailing spaces (Trim).
=Trim(Fields!Description.Value)
DateAdd Returns a Date value containing a date and time value to which a specified time
interval has been added. =DateAdd("d",3,Fields!BirthDate.Value)
=DateAdd(DateInterval.Day,3,Fields!BirthDate.Value)
DateDiff Returns a Long value specifying the number of time intervals between two Date
values. =DateDiff("yyyy",Fields!BirthDate.Value,"1/1/2007")
=DateDiff(DateInterval.Year,Fields!BirthDate.Value,"1/1/2007")
DateSerial Returns a Date value representing a specified year, month, and day, with the
time information set to midnight (00:00:00).
=DateSerial(DatePart("yyyy",Fields!BirthDate.Value)-10, DatePart("m",Fields!
BirthDate.Value)+3,DatePart("d",Fields!BirthDate.Value)-1)
=DateSerial(DatePart(DateInterval.Year,Fields!BirthDate.Value)-10,DatePart("m",Fields!
BirthDate.Value)+3,DatePart("d",Fields!BirthDate.Value)-1)
=DateSerial(2007,10,24)
DateString Returns or sets a String value representing the current date according to your
system. =DateString() =DatePart("m",DateString())
DateValue Returns a Date value containing the date information represented by a string,
with the time information set to midnight (00:00:00). =DateValue("January 15, 2007")
Day Returns an Integer value from 1 through 31 representing the day of the month.
=Day(Fields!BirthDate.Value)
Hour Returns an Integer value from 0 through 23 representing the hour of the day.
=Hour(Fields!BirthDate.Value)
Minute Returns an Integer value from 0 through 59 representing the minute of the hour.
=Minute(Fields!BirthDate.Value)
Month Returns an Integer value from 1 through 12 representing the month of the year.
=Month(Fields!BirthDate.Value)
MonthName Returns a String value containing the name of the specified month.
=MonthName(10,True) =MonthName(Month(Fields!BirthDate.Value),False)
="The month of your birthday is " & MonthName(Month(Fields!BirthDate.Value))
Now Returns a Date value containing the current date and time according to your system.
=Now() ="This time tomorrow is " & DateAdd("d",1,Now())
="This time tomorrow is " & DateAdd(DateInterval.Day,1,Now())
Second Returns an Integer value from 0 through 59 representing the second of the minute.
=Second(Fields!BirthDate.Value)
TimeOfDay Returns or sets a Date value containing the current time of day according to
your system.=TimeOfDay() ="Time of the day is " & TimeOfDay()
Timer Returns a Double value representing the number of seconds elapsed since midnight.
=Timer() ="Number of seconds since midnight " & Timer()
TimeSerial Returns a Date value representing a specified hour, minute, and second, with
the date information set relative to January 1 of the year 1.
=TimeSerial(DatePart("h",Fields!BirthDate.Value),DatePart("n",Fields!
BirthDate.Value),DatePart("s",Fields!BirthDate.Value))
=TimeSerial(DatePart(DateInterval.Hour,Fields!
BirthDate.Value),DatePart(DateInterval.Minute,Fields!
BirthDate.Value),DatePart(DateInterval.Second,Fields!BirthDate.Value))
=TimeSerial(23,49,52)
TimeString Returns or sets a String value representing the current time of day according
to your system. =TimeString()
TimeValue Returns a Date value containing the time information represented by a string,
with the date information set to January 1 of the year 1. =TimeValue("16:20:17")
=TimeValue(Fields!BirthDate.Value)
Today Returns or sets a Date value containing the current date according to your system.
=Today() ="Tomorrow is " & DateAdd("d",1,Today())
="Tomorrow is " & DateAdd(DateInterval.Day,1,Today())
Weekday Returns an Integer value containing a number representing the day of the week.
=Weekday(Fields!BirthDate.Value,0)
=Weekday(Fields!BirthDate.Value,FirstDayOfWeek.System)
WeekDayName Returns a String value containing the name of the specified weekday.
=WeekdayName(2,True,0)
=WeekDayName(DatePart("w",Fields!BirthDate.Value),True,0)
=WeekDayName(DatePart(DateInterval.Weekday,Fields!
BirthDate.Value),True,FirstDayOfWeek.System)
Year Returns an Integer value from 1 through 9999 representing the year.
=Year(Fields!BirthDate.Value)
SQL Server Integration Services transformations are the components in the data flow of a
package that aggregate, merge, distribute, and modify data. Transformations can also
perform lookup operations and generate sample datasets. This section describes the
transformations that Integration Services includes and explains how they work.
Important
The Slowly Changing Dimension Wizard only supports connections to SQL Server.
Row Transformations:
The following transformations update column values and create new columns. The
transformation is applied to each row in the transformation input.
Script Component
The Script component hosts script and enables a package to include and run custom script
code
Rowset Transformations:
The following transformations create new rowsets. The rowset can include aggregate and
sorted values, sample rowsets, or pivoted and unpivoted rowsets.
Aggregate Transformation
The Aggregate transformation applies aggregate functions, such as Average, to column
values and copies the results to the transformation output. Besides aggregate functions, the
transformation provides the GROUP BY clause, which you can use to specify groups to
aggregate across.
Sort Transformation
The Sort transformation sorts input data in ascending or descending order and copies the
sorted data to the transformation output. You can apply multiple sorts to an input; each sort
is identified by a numeral that determines the sort order
Pivot Transformation
The Pivot transformation makes a normalized data set into a less normalized but more
compact version by pivoting the input data on a column value
Unpivot Transformation
The Unpivot transformation makes an unnormalized dataset into a more normalized version
by expanding values from multiple columns in a single record into multiple records with the
same values in a single column
Split and Join Transformations:
The following transformations distribute rows to different outputs, create copies of the
transformation inputs, join multiple inputs into one output, and perform lookup operations.
Multicast Transformation
The Multicast transformation distributes its input to one or more outputs. This
transformation is similar to the Conditional Split transformation. Both transformations direct
an input to multiple outputs. The difference between the two is that the Multicast
transformation directs every row to every output, and the Conditional Split directs a row to
a single output
Merge Transformation The Merge transformation combines two sorted datasets into a
single dataset. The rows from each dataset are inserted into the output based on values in
their key columns
Merge Join Transformation The Merge Join transformation provides an output that is
generated by joining two sorted datasets using a FULL, LEFT, or INNER join
Lookup Transformation The Lookup transformation performs lookups by joining data in
input columns with columns in a reference dataset. You use the lookup to access additional
information in a related table that is based on values in common columns.
Cache Transform This transformation writes data from a connected data source in the
data flow to a Cache connection manager, which saves the data to a cache file. The Lookup
transformation can then perform lookups on the data in the cache file.
Note: The Cache Transform writes only unique rows to the Cache connection manager.
Audit Transformation
The Audit transformation enables the data flow in a package to include data about the
environment in which the package runs.
SSIS Tasks
Data Flow Task
Data Flow Task: This task extracts data from a source, allows for transformations
of that data, and then the data is loaded into a target data destination.
File System Task: This task allows the user to copy/move/delete files and
directories on a file system.
FTP Task: This task allows the user to copy/move/delete files and directories over
FTP.
Web Service Task: This task allows the user to execute a Web service method and
store the results
XML Task: This task is used to work with XML data. XSLT can be used along with
XPath to validate, compare and merge documents. The results of this can then be
stored.
Data Profiling Task: This task can be used for checking and validating data quality.
Profiles can be set up and checked for various data quality issues such as column
length issues, column patterns, column statistics, etc.
Workflow Tasks
Execute Package Task: This task will run other SQL Server Integration Services
packages.
Execute Process Task: This task will execute an application or batch file.
Message Queue Task: This task allows you to send and receive messages between
SSIS packages, or to send messages to an application via an application queue. This
task uses Message Queuing (MSMQ)
Send Mail Task: This task allows for email messages to be created and sent using
an SMTP server.
WMI Data Reader Task: This task allows a Package to query, using WQL,
computer systems (local and remote) for information regarding that computer.
WMI Event Watcher Task: This task watches for WMI events that have occurred
on a computer system, and allows the package to take an action if certain criteria
are met.
Scripting Tasks
Script Task: This task can be used to program functions that are not available in
the standard SSIS tasks or transformations. In SSIS 2005 this task can be
programmed in VB .NET. In SSIS 2008 VB .NET and C# can be used to program a
Script Task.
Back Up Database Task: This task will allow you to back up one or many SQL
Server databases.
Check Database Integrity Task: This task will allow you to check the integrity of
all the objects in one or many SQL Server databases.
Execute SQL Server Agent Job Task: This task allows for the execution of a SQL
Server Agent job.
Execute T-SQL Statement Task: This task is similar to the Execute SQL Task,
however it only supports Transact SQL Statements. It should be used for SQL Server
specific SQL statements.
History Cleanup Task: This task allows for the cleanup of historical activity data. It
will clean up the history for database maintenance plans, backup activities, restore
activities and SQL Server Agent jobs.
Maintenance Cleanup Task: This task allows for the cleanup of backup files, and
the reports of maintenance plans.
Notify Operator Task: This task allows SSIS to notify SQL Server Agent operators.
They can be notified by email, pager, or net send.
Rebuild Index Task: This task will rebuild an index or indexes on one or many
databases.
Reorganize Index Task: This task will reorganize an index or indexes on one or
many databases.
Shrink Database Task: This task will shrink the size of the SQL Server database
data and database log files.
Update Statistics Task: This task will update the statistics for one or many tables
in one or many databases.
Bulk Insert Task: This task offers an efficient way to copy large volumes of data.
Execute SQL Task: This task allows the execution of a SQL statement. If the
statement returns results, they can be stored in a variable.
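For example, a single-row query like the sketch below could be run by the task, with the task's Result Set property set to Single row so the column is mapped to a package variable (the table name is an assumption):

```sql
-- One row, one column; map RowTotal to an SSIS variable in the task editor
SELECT COUNT(*) AS RowTotal FROM dbo.Department;
```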
Transfer Database Task: This task will copy or move a SQL Server database
between two instances of SQL Server. It can even be used to make a copy of a
database on the same server. Databases can be copied either online or offline.
Transfer Error Messages Task: This task will transfer one or more SQL Server
user-defined error messages between SQL Server instances. It can be set up to
transfer specific user messages or all error messages.
Transfer Jobs Task: This task will transfer one or more SQL Server Agent jobs
between SQL Server instances.
Transfer Logins Task: This task will transfer one or more SQL Server logins
between SQL Server instances.
Transfer Master Stored Procedures Task: This task will transfer one or more
stored procedures in the SQL Server master database between SQL Server
instances.
Transfer SQL Server Objects Task: This task will transfer one or more SQL
Server database objects between SQL Server instances. Most of SQL Server's DDL
objects can be copied with this task.
Analysis Services Execute DDL Task: This task will run data definition language
statements on Analysis Services. This allows for creating, dropping, and altering
cubes, dimensions, and mining models.
Analysis Services Processing Task: This task will process Analysis Services
Cubes, Dimensions, and Mining Models.
Data Mining Query Task: This task will run a DMX (Data Mining Extensions) query
that creates a prediction by running new data against an Analysis Services data
mining model.
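A DMX prediction query has the general shape sketched below; the model name, mining columns, data source, and input query are all assumptions:

```sql
-- PREDICTION JOIN matches new input rows against the trained mining model
SELECT t.CustomerKey, Predict([Bike Buyer]) AS PredictedBuyer
FROM [BikeBuyerModel]
PREDICTION JOIN
  OPENQUERY([Adventure Works DW],
            'SELECT CustomerKey, Age, Gender FROM DimCustomer') AS t
ON [BikeBuyerModel].Age = t.Age
   AND [BikeBuyerModel].Gender = t.Gender;
```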
Connection Managers
SSIS uses connection managers to integrate different data sources into packages. SSIS includes a
wide variety of different connection managers that allow you to move data around from place to place.
Table 1 lists the available connection managers.
1. Right-click in the Connection Managers area of your new package and select New OLE DB
Connection.
2. Click New to create a new data connection.
3. In the Connection Manager dialog box, select the SQL Native Client provider.
4. Select your test server and provide login information.
5. Select the Chapter16 database.
6. Click OK.
7. In the Configure OLE DB Connection Manager dialog box, click OK.
8. Right-click in the Connection Managers area of your new package and select New Flat File
Connection.
9. Enter DepartmentList as the Connection Manager Name.
10. Enter C:\Departments.txt as the File Name.
11. Check the Column Names in the First Data Row checkbox. Figure 1 shows the completed
General page of the dialog box.
12. Click the Advanced icon to move to the Advanced page of the dialog box.
13. Click the New button.
14. Change the Name of the new column to DepartmentName.
15. Click OK.
16. Right-click the DepartmentList Connection Manager and select Copy.
17. Right-click in the Connection Managers area and select Paste.
18. Click on the new DepartmentList 1 connection to select it.
19. Use the Properties Window to change properties of the new connection. Change the Name
property to DepartmentListBackup. Change the ConnectionString property to
C:\DepartmentsBackup.txt.
The Data Flow tab of the Package Designer is where you specify the details of any Data Flow tasks
that you've added on the Control Flow tab. Data Flows are made up of various objects that you drag
and drop from the Toolbox:
Data Flow Sources are ways that data gets into the system. Table 1 lists the available data
flow sources.
Data Flow Transformations let you alter and manipulate the data in various ways.
Data Flow Destinations are the places that you can send the transformed data. Table 2 lists
the available data flow destinations.
Data Mining Model Training Destination: Sends data to an Analysis Services data
mining model.
The procedures in this topic describe how to deploy packages to each of these
locations. Before you can run the Package Installation Wizard, you must copy the
deployment folder, which was created when you built a deployment utility, to the
target computer.
6. On the Select Installation Folder page, specify the folder in the file system for the
package dependencies that will be installed.
7. If you elected to validate packages after installation, view the validation results of
the deployed packages.