Assignment Elsa
Assignment: III
Prepared By:
NAME                    ID
1. Emebet Tamiru        DB/61/11
2. Kelem Aregahign      49
3. Hana Awulachew
4. Zina Dejene          DB/100/11
Requirements are divided into the following types:
Business requirements – high-level declarations of the goals,
objectives, or needs of the organization.
Business requirements example – “Productivity will grow by 5% in 2013.”
Stakeholder requirements – declarations of the needs of a particular
stakeholder or class of stakeholders.
Stakeholder requirement example – “The accounting department needs new
software that should provide the following functionalities…”
Solution requirements – functional – describe the capabilities the
system will provide, in terms of behaviors or operations: specific
information technology application actions or responses.
Functional requirement example – “The system shall provide the following
functionalities in the case of crediting an issued invoice…”
Solution requirements – non-functional – define conditions that are not
directly connected to the behavior or functionality of the solution,
but rather describe environmental conditions under which the solution
must remain effective, or qualities that the system must have.
Non-functional requirement example – “The system response time shall be
at most 2 seconds.”
Transition requirements – capabilities that the solution must have in
order to facilitate a transition from the current state of the enterprise to
the desired future state, but that will not be needed once that transition is
complete.
Transition requirement example – “The data migration shall require……”
The purpose of business requirements is to define a project’s business need,
as well as the criteria of its success. Business requirements describe why a
project is needed, whom it will benefit, when and where it will take place,
and what standards will be used to evaluate it.
Business requirements generally do not define how a project is to be
implemented; the requirements of the business need do not encompass a
project’s implementation details.
To compile business requirements, an analyst must first identify key
stakeholders, which will always include the business owners or project
sponsors. Very often, they will also include subject matter experts and the
end user/customer. BABOK 2.0 describes other stakeholders from whom an
analyst may elicit requirements as including regulators (who may impose
new regulatory requirements as a result of the project) and implementation
subject matter experts (who may be aware of capabilities currently present
in or easily added to existing systems). These stakeholders must be
thoroughly vetted and interviewed in order to complete a detailed discovery
of business requirements. Any existing documentation related to the project
must be thoroughly reviewed as well.
To better show what business requirements look like, we will use a project
example with which everyone is familiar—buying movie theater tickets.
Suppose that a chain of 400 movie theaters noticed a decline in ticket sales.
After surveying numerous customers, they found that long lines at their
ticket booths were causing customers to choose more convenient means of
entertainment. Customers stated that rather than go through the hassle of
waiting in line for 10 minutes, they would rather rent a movie or subscribe to
a movie rental service. After significant discovery in conjunction with
selected colleagues, the chain’s business analyst recommends a solution to
let customers buy their tickets online and print them ahead of time, thereby
saving time for customers and costs for the company.
The business requirements the analyst creates for this project would include
(but not be limited to):
Identification of the business problem (key objectives of the project), i.e.,
“Declining ticket sales require a strategy to increase the number of
customers at our theaters.”
Why the solution has been proposed (its benefits; why it will produce the
desired outcome of returning ticket sales to higher levels), i.e.,
“Customers have overwhelmingly cited the inconvenience of standing in
line as the primary reason they no longer attend our theater. We will
remove this impediment by enabling customers to buy and print their
theater tickets at home with just a few clicks.”
Key security features (again without details), i.e., “We will devise a
unique identifier for each ticket that will prohibit photocopies or
counterfeits.”
Criteria to measure the project’s success, such as: “This project will be
deemed successful if ticket sales return to 2008 levels within 12 months
of its launch.”
As with all requirements, business requirements should be:
Verifiable. Just because business requirements state business needs
rather than technical specifications doesn’t mean they need not be
demonstrable. Verifiable requirements are specific and objective. A
quality control expert must be able to check, for example, that the
system accommodates the debit, credit, and PayPal methods specified in
the business requirements. He or she could not do so if the requirements
were vaguer, e.g., “The system will accommodate appropriate
payment methods.” (“Appropriate” is subject to interpretation.)
Unambiguous, stating precisely what problem is being solved. For
example, “This project will be deemed successful if ticket sales increase
sufficiently,” is probably too vague for all stakeholders to agree on its
meaning at the project’s end.
Comprehensive, covering every aspect of the business need. Business
requirements are indeed big picture, but they are very thorough big
picture. In the aforementioned example, if the analyst assumed that the
developers would know to design a system that could accommodate
many times the number of customers the theater chain had seen at one
time in the past, but did not explicitly state so in the requirements, the
developers might design a system that could accommodate only 10,000
patrons at any one time without performance issues.
Remember that business requirements answer the what’s, not the how’s, but
they are meticulously thorough in describing those what’s. No business point
is overlooked. At a project’s end, the business requirements should serve as
a methodical record of the initial business problem and the scope of its
solution.
1. What’s a Business Rule?
A business rule is, at the most basic level, a specific directive that constrains
or defines a business activity. These rules can apply to nearly any aspect of a
business, in topics as diverse as supply chain protocols, data management
and customer relations. Business rules help to provide a more concrete set of
parameters for an operation or business process.
Business rules can be applied to computing systems and are designed to
help an organization achieve its goals. Software is used to automate
business rules using business logic.
A business rule is a statement that describes a business policy or procedure.
Business rules are usually expressed at the atomic level -- that is, they
cannot be broken down any further.
One way that business rules contribute to a clearer picture of any given
business process is through a kind of binary concept. Typically, business
theory experts see a business rule as either true or false. Here, business
rules can be used in business planning in many of the same ways that they
are used for algorithm development in programming. One example is the use
of business rules on a flow chart that clearly shows how a defined true or
false case will absolutely affect the next step in a business process.
Business rules can also be generated by internal or external necessity. For
example, a business can come up with business rules that are self-imposed
in order to meet leadership’s own goals, or in the pursuit of compliance with
external standards. Experts also point out that while there is a system of
strategic processes governing business rules, the business rules themselves
are not strategic, but simply directive in nature.
Business rules examples and definition
Business rules – A business rule is a specific, actionable, testable directive
that is under the control of an organization and that supports a business
policy. Particularly complex rules, or rules with a number of interrelated
dependencies, are often documented and managed separately.
Business rule example – “Only accountants will be allowed to issue
invoices”.
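Since, as noted above, software can automate business rules using business logic, here is a minimal, hypothetical T-SQL sketch of how this example rule could be enforced in SQL Server (the Employees and Invoices tables, their columns, and the role value are all assumptions):

    -- Hypothetical schema: dbo.Employees(EmployeeId, Role) and
    -- dbo.Invoices(InvoiceId, IssuedBy, ...), where IssuedBy references an employee.
    CREATE TRIGGER trg_OnlyAccountantsIssueInvoices
    ON dbo.Invoices
    AFTER INSERT
    AS
    BEGIN
        -- Reject the statement if any inserted invoice was issued by a non-accountant.
        IF EXISTS (SELECT 1
                   FROM inserted AS i
                   JOIN dbo.Employees AS e ON e.EmployeeId = i.IssuedBy
                   WHERE e.Role <> 'Accountant')
        BEGIN
            RAISERROR ('Only accountants are allowed to issue invoices.', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;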
3. Database scalability
Scalability is the capability of a system, network, or process to handle a
growing amount of work, or its potential to be enlarged in order to
accommodate that growth.
Database scalability is the ability of a database to handle changing
demands by adding/removing resources. Databases have adopted a host of
techniques to cope.
The initial history of database scalability was to provide service on ever
smaller computers. The first database management systems such as IMS ran
on mainframe computers. The second generation,
including Ingres, Informix, Sybase, RDB and Oracle emerged
on minicomputers. The third generation, including dBase and Oracle (again),
ran on personal computers.
Database scalability has three basic dimensions: amount of data, volume of
requests and size of requests. Requests come in many sizes: transactions
generally affect small amounts of data, but may approach thousands per
second; analytic queries are generally fewer, but may access more data. A
related concept is elasticity, the ability of a system to transparently add and
subtract capacity to meet changing workloads.
Types of Database Scalability
a. Vertical Database Scalability
To scale up usually refers to adding more physical resources—that is,
increasing CPU, memory, and storage for an existing server or adding a
bigger one. In essence with vertical scalability:
Application compatibility is prioritized—there’s no need for code
changes.
Administrative efforts are reduced with only a single system image to
manage.
Hardware configurations tend to be more expensive, although today’s
quickly evolving hardware provides incredibly efficient server
components with great price-performance ratios.
Software costs (typically charged by the number of cores) can
increase.
This approach also comes with at least a couple of limitations:
What happens when a workload cannot fit onto the best-equipped
hardware configuration?
What if a workload is highly variable? Why make an upfront investment
in an expensive, large-capacity system that could go underutilized
much of the time? For this reason alone, many cloud providers do not
rely solely on vertical scalability.
b. Horizontal Database Scalability
Horizontal scalability accommodates variable workloads by hosting data
across multiple databases. Unlike vertical scalability, scale-out approaches
can help reduce costs by making use of less sophisticated hardware
components, freeing resources for more in-application development and data
and system maintenance.
You can use any of several well-known approaches to scaling out data tiers.
(For example, see the Wikipedia topic.) The one you choose depends on your
workload and the applications supported by the data store. Most people
choose functional partitioning in which a data set is decomposed into
business or organizational functionalities.
Two of the most common scale-out techniques are as follows:
Data is fully replicated across all nodes. One primary copy accepts
changes, and multiple active replicas are typically read-only, as with
SQL Server’s Always On readable secondaries or Replication features.
Such configurations can be a good fit for read-intensive workloads such
as reporting, where readers can potentially connect to any server and
execute their queries. By contrast, writers can connect only to the
primary copy, causing a bottleneck in write-intensive workloads.
Read and write operations are distributed across a number of nodes.
By applying a distribution logic of some kind, a given transaction is
fully satisfied by entities residing on a single node.
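One well-known SQL Server realization of this second technique is a distributed partitioned view. The sketch below is purely illustrative; the linked server names (Node1, Node2), the SalesDb database, the Orders tables, and the OrderId ranges are all assumptions:

    -- Each node holds one member table; a CHECK constraint on the partitioning
    -- column bounds the rows that table may contain, e.g. on Node1:
    --   CREATE TABLE dbo.Orders_1 (OrderId INT PRIMARY KEY
    --       CHECK (OrderId BETWEEN 1 AND 1000000), ...);
    -- and on Node2 the equivalent table for OrderId 1000001 through 2000000.
    CREATE VIEW dbo.Orders
    AS
        SELECT * FROM Node1.SalesDb.dbo.Orders_1
        UNION ALL
        SELECT * FROM Node2.SalesDb.dbo.Orders_2;
    -- Thanks to the CHECK constraints, a query such as
    --   SELECT * FROM dbo.Orders WHERE OrderId = 42;
    -- is satisfied entirely by the single node that owns that range.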
4. What’s the difference between a conceptual data model, a logical
data model, and a physical data model?
a. Conceptual Data Model
The conceptual data model is a structured business view of the data required
to support business processes, record business events, and track related
performance measures. This model focuses on identifying the data used in
the business but not its processing flow or physical characteristics. This
model’s perspective is independent of any underlying business applications.
For example, it allows business people to view sales data, expense data,
customers, and products—business subjects that are in the integrated model
and outside of the applications themselves.
The conceptual data model represents the overall structure of data required
to support the business requirements independent of any software or data
storage structure. The characteristics of the conceptual data model include:
An overall view of the structure of the data in a business context.
Features that are independent of any database or physical storage
structure.
Objects that may not ever be implemented in physical databases. Some
concepts and processes will not find their way into physical models, but
they are needed for the business to understand and explain what is needed
in the enterprise.
Data needed to perform business processes or enterprise operations.
The conceptual data model thus gives business and IT a shared tool for
defining these business data needs.
b. Logical Data Model
The logical data model is the one used most in designing BI applications. It
builds upon the requirements provided by the business group. It includes a
further level of detail, supporting both the business system-related and data
requirements.
The business rules are incorporated into the logical data model, where they
form relationships between the various data objects and entities. As opposed
to a conceptual data model, which may have very general terms, the logical
data model is the first step in designing and building out the architecture of
the applications.
Like the conceptual data model, the logical data model is independent of
specific database and data storage structures. It uses indexes and foreign
keys to represent data relationships, but these are defined in a generic
database context independent of any specific DBMS product.
The characteristics of the logical data model include:
Features independent of specific database and data storage structures.
Specific entities and attributes to be implemented.
Identification of the business rules and relationships between those
entities and attributes.
Definitions of the primary keys, foreign keys, alternate keys, and
inversion entries.
The logical model is used as a bridge from the application designer’s view to
the database design and the developer’s specifications. This model should
be used to validate that the applications built from it fulfill the
business and data requirements.
c. Physical Data Model
Implementing the physical data model requires understanding the
characteristics and performance constraints of the database system being
used. Quite often, it is a relational database, and you will have to understand
how the tables, columns, data types, and the relationships between tables
and columns are implemented in the specific relational database product.
Even if it is another type of database (multidimensional, columnar, or some
other proprietary database), you need to understand the specifics of that
DBMS in order to implement the model.
Designing the physical data model requires in-depth knowledge of the
specific DBMS being used in order to:
Represent the logical data model in a database schema.
Add the entities and attribute definitions needed to meet operating
requirements.
Configure and tune the database for performance requirements.
The characteristics of the physical data model include:
DBMS-specific definitions.
Table, column, and other physical object definitions in the DBMS that
represent the entities and attributes in the logical data model. Column
attributes such as data types are defined and implemented differently
across specific DBMSs.
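As a concrete, purely hypothetical illustration, the DBMS-specific DDL below shows how a logical Customer entity might be implemented as a SQL Server table (every name and data type here is an assumption, not a prescription):

    -- One possible physical realization of a logical "Customer" entity in SQL Server.
    CREATE TABLE dbo.Customer
    (
        CustomerId INT IDENTITY (1, 1) NOT NULL,  -- surrogate primary key
        FullName   NVARCHAR (100)      NOT NULL,  -- logical attribute "Name"
        Email      NVARCHAR (255)      NULL,      -- candidate alternate key
        CreatedAt  DATETIME2 (0)       NOT NULL
            CONSTRAINT DF_Customer_CreatedAt DEFAULT SYSUTCDATETIME(),
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
    );
    -- Another DBMS would implement the same logical entity with its own types
    -- and storage options, which is precisely what makes this model physical.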
PARTITIONING IN SQL SERVER
Partitioning in SQL Server is not a new concept and has improved with
every new release of SQL Server. Partitioning is the process of dividing a
single large table into multiple logical chunks/partitions in such a way that
each partition can be managed separately without much overall impact on
the availability of the table.
Introduction
With every new release of SQL Server, partitioning has reached new heights
of improvement. For example, though we could create partitioned views for
better manageability and scalability as far back as SQL Server 7.0, SQL
Server 2005 introduced native support for table partitioning (more on this
later in this article). SQL Server 2008 introduced partition table parallelism
for better performance and better resource utilization (of modern
multi-processor hardware). With SQL Server 2012, we can now create up to
15,000 partitions on a single table.
Partitioning improves the manageability and availability of a table, as well
as the performance of the queries running against it. There are several
reasons why we need to use partitioning in SQL Server; some of them are
listed below:
Partition Tables and Indexes – Partitioned tables and indexes were
first introduced in SQL Server 2005 and enhanced further in SQL Server
2008 and SQL Server 2012. Partitioned tables/indexes are an Enterprise
Edition feature natively supported by the database engine. When we
create a partitioned table/index, data is horizontally divided into units
(called partitions) that can be spread across more than one filegroup
in a single database. As partitions of a table don’t cross the boundary
of the database, this is also called scale-up partitioning.
Though there are no strict rules that dictate when a table needs to be
partitioned, when a table grows big enough that manageability and
availability become a challenge, or when your users report slow query
performance against a large single table, you need to think about
partitioning it. Ideally, you should consider partitioning a single table
once it grows beyond 50 GB, but it all depends on your requirements and
environment.
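To show what native table partitioning looks like in practice, here is a minimal T-SQL sketch that partitions a hypothetical Orders table by year. The table, column names, and boundary values are assumptions, and all partitions are mapped to the PRIMARY filegroup only to keep the sketch short; a real design would typically use one filegroup per partition:

    -- 1. Partition function: maps OrderYear values to partition numbers.
    CREATE PARTITION FUNCTION pfOrderYear (INT)
        AS RANGE LEFT FOR VALUES (2008, 2009, 2010, 2011);  -- creates 5 partitions

    -- 2. Partition scheme: maps each partition to a filegroup.
    CREATE PARTITION SCHEME psOrderYear
        AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

    -- 3. Create the table on the partition scheme, partitioned by OrderYear.
    CREATE TABLE dbo.Orders
    (
        OrderId   INT             NOT NULL,
        OrderYear INT             NOT NULL,
        Amount    DECIMAL (10, 2) NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderYear, OrderId)
    ) ON psOrderYear (OrderYear);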
Manageability – Manageability of a partitioned table/index becomes easier,
as you can rebuild/reorganize the indexes of each partition separately.
You can manage each partition separately; for example, you can take a
backup of only the filegroups that contain partitions holding volatile data.
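For example, continuing the hypothetical Orders sketch above, a single partition’s index can be rebuilt on its own:

    -- Rebuild only partition 2 of the Orders primary key index,
    -- leaving all other partitions untouched.
    ALTER INDEX PK_Orders ON dbo.Orders
        REBUILD PARTITION = 2;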
Query Performance – The query optimizer uses several techniques to
optimize and improve query performance. For example:
Partition elimination – Partition elimination is a technique the query
optimizer uses to skip partitions that don’t contain data requested by the
query. For example, if a query requests data for only the years 2010 and
2011, only two partitions will be considered during query optimization and
execution; unlike with a single large table, where the optimizer would
consider the whole dataset, the other partitions (2008, 2009, and 2012)
are simply ignored.
Parallel Processing – The query optimizer can process each partition in
parallel, and multiple CPU cores can even work together on a single
partition. With this, the query optimizer tries to utilize modern hardware
resources efficiently. For example, if a query requests data for only the
years 2010 and 2011, only two partitions will be considered during query
optimization, and on an 8-core machine all 8 cores can work together to
produce the result from the two identified partitions (see the sketch
below).
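Continuing the hypothetical Orders sketch, the query below benefits from both techniques: only the 2010 and 2011 partitions are touched, and the MAXDOP hint lets up to 8 cores cooperate on the plan; $PARTITION can confirm which partition a given year maps to:

    -- Only the two partitions holding 2010 and 2011 are scanned.
    SELECT OrderYear, SUM(Amount) AS Total
    FROM dbo.Orders
    WHERE OrderYear IN (2010, 2011)
    GROUP BY OrderYear
    OPTION (MAXDOP 8);

    -- Which partition does a given year map to?
    SELECT $PARTITION.pfOrderYear (2011) AS PartitionNumber;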
Indexes – You can have different settings (e.g., FILLFACTOR) or different
numbers of indexes for each partition of a table. For example, the most
recent year’s partition will hold volatile, read-and-write-intensive data
used by OLTP applications, and hence should have the minimum number of
indexes, whereas older partitions will hold mostly read-only data used by
analytical applications, and hence you can create more indexes there to
make your analytical queries run faster.
Compression – Compression is a feature introduced with SQL Server 2008.
It minimizes the need for storage space at the cost of additional CPU
cycles whenever data is read or written. Again, the most recent year’s
partition will hold volatile, frequently accessed data, so ideally you
should not compress it, whereas the older partitions will not be accessed
frequently and hence can be compressed to minimize the storage space
requirement (see the sketch below).
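For instance, continuing the hypothetical Orders sketch, an older partition can be page-compressed individually while the current partition stays uncompressed (the partition number is an assumption):

    -- Page-compress only partition 1 (the oldest data) of the Orders table.
    ALTER TABLE dbo.Orders
        REBUILD PARTITION = 1
        WITH (DATA_COMPRESSION = PAGE);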
Minimized time for Backup/Restore – For a large table, normally only
the latest few partitions are volatile, so you can take regular backups of
the (read-write) filegroup that contains this volatile data and only
occasional backups of the (read-only) filegroups that contain non-volatile
data. This way, we can minimize the downtime window and reduce backup
and restore time.
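A filegroup-level backup of only the volatile data might look like the following sketch (the database name, filegroup name, and backup path are assumptions):

    -- Back up only the filegroup holding the current, volatile partitions.
    BACKUP DATABASE SalesDb
        FILEGROUP = 'FG_Current'
        TO DISK = 'D:\Backups\SalesDb_FG_Current.bak';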