Q1. With The Help of A Neat Diagram, Explain The Architecture of Virtualized Data Center
VMware vSphere virtualizes the entire IT infrastructure including servers, storage, and
networks.
VMware vSphere aggregates these resources and presents a uniform set of elements in the
virtual environment. With VMware vSphere, you can manage IT resources like a shared utility
and dynamically provision resources to different business units and projects.
You can use vSphere to view, configure, and manage these key elements. The following is a
list of the key elements:
■Computing and memory resources called hosts, clusters, and resource pools
■Virtual machines
A host is the virtual representation of the computing and memory resources of a physical
machine running ESX/ESXi. When two or more physical machines are grouped to work and
be managed as a whole, the aggregate computing and memory resources form a cluster.
Networks in the virtual environment connect virtual machines to one another and to the
physical network outside of the virtual datacenter.
Virtual machines can be designated to a particular host, cluster or resource pool, and a
datastore when they are created. After they are powered on, virtual machines consume
resources dynamically as the workload increases or give back resources dynamically as the
workload decreases.
Provisioning of virtual machines is much faster and easier than physical machines. New
virtual machines can be created in seconds. When a virtual machine is provisioned, the
appropriate operating system and applications can be installed unaltered on the virtual
machine to handle a particular workload as though they were being installed on a physical
machine. A virtual machine can be provisioned with the operating system and applications
installed and configured.
Resources get provisioned to virtual machines based on the policies that are set by the
system administrator who owns the resources. The policies can reserve a set of resources for
a particular virtual machine to guarantee its performance. The policies can also prioritize and
set a variable portion of the total resources to each virtual machine. A virtual machine is
prevented from being powered on and consuming resources if doing so violates the
resource allocation policies.
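To make the admission-control idea above concrete, here is a minimal PowerShell sketch (the pool capacity and reservation figures are invented, and this is not the actual vSphere API):

# Hypothetical reservation-based admission control check (illustrative only, not the vSphere API).
$poolCapacityMHz  = 20000                    # total CPU capacity of the resource pool
$existingReserved = @(4000, 6000, 5000)      # reservations of VMs that are already powered on
$newVmReservation = 6000                     # reservation requested by the VM being powered on

$totalAfterPowerOn = ($existingReserved | Measure-Object -Sum).Sum + $newVmReservation

if ($totalAfterPowerOn -gt $poolCapacityMHz) {
    Write-Output "Power-on denied: reserving $newVmReservation MHz would violate the resource allocation policy."
} else {
    Write-Output "Power-on allowed: $totalAfterPowerOn of $poolCapacityMHz MHz reserved."
}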
■ Lower costs – no need to purchase hardware or software, and you only pay for the service you use.
■ More flexibility – your organisation can customise its cloud environment to meet specific business needs.
■ High scalability – private clouds still afford the scalability and efficiency of a public cloud.
■ Control – your organisation can maintain a private infrastructure for sensitive assets.
■ Flexibility – you can take advantage of additional resources in the public cloud when you need them.
■ Cost-effectiveness – with the ability to scale to the public cloud, you pay for extra computing power only when needed.
■ Ease – transitioning to the cloud doesn’t have to be overwhelming because you can migrate gradually – phasing in workloads over time.
IaaS: Infrastructure-as-a-Service
Traditionally, to run business applications in your office and control your company website, you had to buy servers and other pricey hardware in order to run local applications and keep your business online.
But now, with IaaS, you can outsource your hardware needs to someone else. IaaS
companies provide off-site server, storage, and networking hardware, which you rent
and access over the Internet. Freed from maintenance costs and wasted office space,
companies can run their applications on this hardware and access it anytime.
Some of the biggest names in IaaS include Amazon, Microsoft, VMware, Rackspace
and Red Hat. While these companies have different specialties — some, like Amazon
and Microsoft, want to offer you more than just IaaS — they are connected by a
desire to sell you raw computing power and to host your website.
PaaS: Platform-as-a-Service
The second layer of the cloud is Platform-as-a-Service (PaaS), which is sometimes called middleware. The underlying idea of this category is that all of your company’s development can happen at this layer, saving you time and resources.
PaaS companies offer up a wide variety of solutions for developing and deploying applications over the Internet, such as virtualized servers and operating systems. This saves you money on hardware and also makes collaboration easier for a scattered workforce. Application hosting, storage, security, and app development collaboration tools all fall into this category.
Some of the biggest PaaS providers today are Google App Engine, Microsoft Azure, Salesforce’s Force.com, the Salesforce-owned Heroku, and Engine Yard.
SaaS: Software-as-a-Service
The third and final layer of the cloud is Software-as-a-Service, or SaaS. This layer is the one you’re most likely to interact with in your everyday life, and it is almost always accessible through a web browser. Any application hosted on a remote server and delivered to you over the Internet falls into this category.
Services that you consume completely from the web like Netflix, MOG, Google Apps,
Box.net, Dropbox and Apple’s new iCloud fall into this category. Whether these web services are used for business, pleasure or both, they’re all technically part of the cloud.
Common SaaS applications used for business include web-conferencing tools such as Citrix’s GoToMeeting.
Disaster recovery
Cloud computing, based on virtualization, takes a very different
approach to disaster recovery. With virtualization, the entire server,
including the operating system, applications, patches and data is
encapsulated into a single software bundle or virtual server. This entire
virtual server can be copied or backed up to an offsite data center and
spun up on a virtual host in a matter of minutes.
Since the virtual server is hardware independent, the operating
system, applications, patches and data can be safely and accurately
transferred from one data center to a second data center without the
burden of reloading each component of the server. This can
dramatically reduce recovery times compared to conventional (non-
virtualized) disaster recovery approaches where servers need to be
loaded with the OS and application software and patched to the last
configuration used in production before the data can be restored.
The cloud shifts the disaster recovery tradeoff curve: with cloud computing, disaster recovery becomes much more cost-effective, with significantly faster recovery times.
User access networks connect end-user devices, such as desktop and notebook
computers, printers and Voice-over-IP handsets to enterprise networks. Generally,
the user access network consists of a wiring plant between offices and per-floor
wiring closets and switches in each wiring closet, as well as the interconnection
between the wiring closets and building data centers.
The switches that connect directly to end-user devices are called “edge” or “access”
switches. The edge switch makes the first connection between the relatively
unreliable patch and internal wiring out to each user’s workstation and the more
reliable backbone network within the building. Each user may have only a single
connection to a single edge switch, but everything above the edge switch should be
designed with redundancy in mind. The edge switch is usually chosen based on two
key requirements: high port density and low per-port costs.
Low port costs are desirable because of the cost of patching and repatching devices
in end-user workspaces. If ports are expensive, and only a few ports are available,
then each time a user moves a workstation, printer or phone, someone has to go into
a wiring closet and repatch to their network port — a cost that quickly overwhelms
the savings of buying fewer ports. Since the primary purpose of an edge switch is to
move around Ethernet packets, there’s no reason to buy expensive “feature-full”
switches for most buildings.
High port density is desirable because of the costs associated with managing
them. Each switch is a manageable element, so more switches lead to greater
management complexity, associated costs and potential network downtime due to
human error.
Network managers achieve density in different ways, depending on the size of their
building and the number of devices that must connect to each wiring closet. Chassis
devices, which include blades (typically with 48 ports each) are popular and can scale
up to a large number of users. Switch stacking, which treats a cluster of individual
switches as a single distributed chassis with a high-speed interconnect, is a very
popular and economical alternative.
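As a rough back-of-the-envelope illustration of the port-density point (the device count below is made up), a quick calculation shows how many 48-port blades or stack members one wiring closet would need:

# Illustrative port-count arithmetic for one wiring closet (assumed figures).
$devicesInCloset = 300        # workstations, phones and printers served by this closet
$portsPerBlade   = 48         # typical blade or stackable switch size
$bladesNeeded    = [math]::Ceiling($devicesInCloset / $portsPerBlade)
Write-Output "$devicesInCloset devices need $bladesNeeded x $portsPerBlade-port blades, managed as one chassis or stack."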
If edge switches are chosen so that each wiring closet has only a single redundant
uplink, then the distribution layer is usually placed next to the network core, with a
minimum of two devices (one for each half of the redundant uplink) connecting to
each wiring closet.
Aggregation and distribution layer switches are usually selected over edge switches
for their greater reliability and larger feature set. While the aggregation/distribution
layer should always be redundant, devices at this layer should offer nonstop service,
such as in-service upgrades (software upgrades that don’t require reboots or
significant traffic interruption) and hot-swap fan and power supply modules.
Aggregation and distribution layer switches also have more stringent performance
requirements, including lower latency and larger MAC address table sizes. This is
because they may be aggregating traffic from thousands of users rather than the
hundreds that one would find in a single wiring closet.
For many network managers, a pair of core switches represents the top of their
network tree, the network backbone across which all traffic will pass. Although LANs
such as Ethernet are inherently peer to peer, most enterprise networks sink and source traffic from a data center (either local or in the WAN cloud) and, to a lesser extent, from the Internet.
This makes a large core switch a logical way to handle traffic passing between the
user access network and everything else. The advantage of a core switch is backplane
switching — the ability to pass traffic across the core without 1Gbps or even
10Gbps limits, achieving maximum performance.
Generally, the backbone of the network is where switching ends and routing begins,
with core switches serving as both switching and routing engines. In many cases,
core switches also have internal firewall capability as part of their routing feature set,
helping network managers segment and control traffic as it moves from one part of
the network to another.
Disaster Recovery
Although the concept -- and some of the products and services -- of cloud-
based disaster recovery is still nascent, some companies, especially SMBs,
are discovering and starting to leverage cloud services for DR. It can be an
attractive alternative for companies that may be strapped for IT resources
because the usage-based cost of cloud services is well suited for DR where
the secondary infrastructure is parked and idling most of the time.
Having DR sites in the cloud reduces the need for data center space, IT
infrastructure and IT resources, which leads to significant cost reductions,
enabling smaller companies to deploy disaster recovery options that were
previously only found in larger enterprises. "Cloud-based DR moves the
discussion from data center space and hardware to one about cloud
capacity planning," said Lauren Whitehouse, senior analyst at Enterprise
Strategy Group (ESG) in Milford, Mass.
But disaster recovery in the cloud isn't a perfect solution, and its
shortcomings and challenges need to be clearly understood before a firm
ventures into it. Security usually tops the list of concerns:
Are passwords the only option or does the cloud provider offer some
type of two-factor authentication?
And because clouds are accessed via the Internet, bandwidth requirements
also need to be clearly understood. There's a risk of only planning for
bandwidth requirements to move data into the cloud without sufficient
analysis of how to make the data accessible when a disaster strikes:
"If you use cloud-based backups as part of your DR, you need to design
your backup sets for recovery," said Chander Kant, CEO and founder at
Zmanda Inc., a provider of cloud backup services and an open-source
backup app.
Reliability of the cloud provider, its availability and its ability to serve your
users while a disaster is in progress are other key considerations. The choice
of a cloud service provider or managed service provider (MSP) that can
deliver service within the agreed terms is essential, and while making a
wrong choice may not land you in IT hell, it can easily put you in the
doghouse or even get you fired.
Reliability can be defined as the probability that a system will produce correct outputs up to
some given time t.[5] Reliability is enhanced by features that help to avoid, detect and repair
hardware faults. A reliable system does not silently continue and deliver results that include
uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption, for
example: by retrying an operation for transient (soft) or intermittent errors, or else, for
uncorrectable errors, isolating the fault and reporting it to higher-level recovery mechanisms
(which may failover to redundant replacement hardware, etc.), or else by halting the affected
program or the entire system and reporting the corruption. Reliability can be characterized in
terms of mean time between failures (MTBF), with reliability = exp(-t/MTBF).[5]
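The MTBF formula above can be evaluated directly. As a small worked example (the MTBF value is assumed for illustration):

# reliability = exp(-t / MTBF), with an assumed MTBF of 100,000 hours.
$mtbfHours   = 100000
$tHours      = 8760                      # one year of continuous operation
$reliability = [math]::Exp(-$tHours / $mtbfHours)
Write-Output ("Probability of running one year without failure: {0:P2}" -f $reliability)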
Availability means the probability that a system is operational at a given time, i.e. the amount
of time a device is actually operating as the percentage of total time it should be operating.
High-availability systems may report availability in terms of minutes or hours of downtime
per year. Availability features allow the system to stay operational even when faults do occur.
A highly available system would disable the malfunctioning portion and continue operating
at a reduced capacity. In contrast, a less capable system might crash and become totally
nonoperational. Availability is typically given as a percentage of the time a system is
expected to be available, e.g., 99.999 percent ("five nines").
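A quick conversion shows what a figure like "five nines" means in practice:

# Convert an availability percentage into allowed downtime per year.
$availability    = 0.99999                        # "five nines"
$minutesPerYear  = 365.25 * 24 * 60
$downtimeMinutes = (1 - $availability) * $minutesPerYear
Write-Output ("{0:P3} availability allows roughly {1:N1} minutes of downtime per year." -f $availability, $downtimeMinutes)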
Serviceability or maintainability is the simplicity and speed with which a system can be
repaired or maintained; if the time to repair a failed system increases, then availability will
decrease. Serviceability includes various methods of easily diagnosing the system when
problems arise. Early detection of faults can decrease or avoid system downtime. For
example, some enterprise systems can automatically call a service center (without human
intervention) when the system experiences a system fault. The traditional focus has been on
making the correct repairs with as little disruption to normal operations as possible.
Note the distinction between reliability and availability: reliability measures the ability of a system
to function correctly, including avoiding data corruption, whereas availability measures how often
the system is available for use, even though it may not be functioning correctly. For example, a
server may run forever and so have ideal availability, but may be unreliable, with frequent data
corruption.[6]
Physical faults can be temporary or permanent.
Permanent faults lead to a continuing error and are typically due to some physical failure
such as metal electromigration or dielectric breakdown.
Temporary faults include transient and intermittent faults.
Q9. Explain the layered architecture development of cloud platform for IaaS.
IaaS: Infrastructure-as-a-Service
Traditionally, to run business applications in your office and control your company website, you had to buy servers and other pricey hardware in order to run local applications and keep your business online.
But now, with IaaS, you can outsource your hardware needs to someone else. IaaS
companies provide off-site server, storage, and networking hardware, which you rent
and access over the Internet. Freed from maintenance costs and wasted office space,
companies can run their applications on this hardware and access it anytime.
Some of the biggest names in IaaS include Amazon, Microsoft, VMware, Rackspace
and Red Hat. While these companies have different specialties — some, like Amazon
and Microsoft, want to offer you more than just IaaS — they are connected by a
desire to sell you raw computing power and to host your website.
Datacentre management challenges
If you’re using spreadsheets or homegrown tools to manage your server information, you
probably already know the information stored can be outdated, inaccurate, or incomplete. This
can prove challenging when unplanned downtime requires troubleshooting, or when attempting
to map the power chain.
DCIM enables consistent and accurate record keeping, and provides instantaneous visual and
textual information to reduce the time it takes to locate assets and dependencies, thereby
reducing troubleshooting time.
In a dynamic data center it is almost impossible to understand how much space, power, and cooling you have; to predict when you will run out; to know which server is best for a new service; and to determine just how much power is needed to ensure uptime and availability.
DCIM tools that quickly model and allocate space and manage power and network connectivity are key to discovering hidden capacity while better managing the capacity you do possess.
It’s not enough to implement solutions that reduce operating expenses; you also have to prove it.
According to Uptime institute, “Going forward, enterprise data center managers will need to be
able to collect cost and performance data, and articulate their value to the business in order to
compete with third party offerings.”
A DCIM solution with dashboard and reporting tools capable of instantly aggregating data across several dimensions allows data center managers to quickly show stakeholders that the data center is moving toward full operational efficiency.
According to a NY Times article, “Most data centers, by design, consume vast amounts of energy
in an incongruously wasteful manner…online companies typically run their facilities at maximum
capacity around the clock…as a result, data centers can waste 90 percent or more of the
electricity they pull off the grid.”
A DCIM solution helps data center managers monitor energy consumption, cycle off servers during off hours, and identify candidates for consolidation.
DCIM solutions automate manual processes like workflow approvals and the assignment of
technicians to make adds, moves, and changes. This also assists with provisioning and auditing, all
while achieving operational savings.
Cloud architectures do not automatically grant security compliance for the end-user data or apps on them, so apps written for the cloud have to be secure in their own right. Some of the
responsibility for this does fall to cloud vendors, but the lion’s share of it is still in the lap of the
application designers. Cloud computing introduces another level of risks because essential
services are often outsourced to a third party, making it harder to maintain data integrity and
privacy.
2. Client incomprehension
We have probably passed the days when people thought clouds were just big server clusters, but that doesn’t mean we can ignore misconceptions about the cloud moving forward. There are too many misunderstandings about how public and private clouds work together, and about how easy it is to move from one kind of infrastructure to another. A good way to combat this is to present customers with real-world examples of what is possible and why, so that they can base their understanding on how things actually work.
3. Data Security
One of the major concerns associated with cloud computing is its dependency on the cloud
service provider. For uninterrupted and fast cloud service you need to choose a vendor with
proper infrastructure and technical expertise. Since you would be running your company’s assets and data from a third-party interface, ensuring data security and privacy is of utmost importance. Hence, when engaging a cloud service provider, always inquire about their cloud-
based security policies. However, cloud service companies usually employ strict data security
policies to prevent hacking and invest heavily in improved infrastructure and software.
Many applications have complex integration needs to connect to applications on the cloud
network, as well as to other on-premises applications. These include integrating existing cloud
services with existing enterprise applications and data structures. There is need to connect the
cloud application with the rest of the enterprise in a simple, quick and cost-effective way.
Integrating new applications with existing ones is a significant part of the process and cloud
services bring even more challenges from an integration perspective.
Businesses can save money on hardware but they have to spend more for the bandwidth. This
could be a low cost for small applications but can be significantly high for the data-intensive
applications. Delivering intensive and complex data over the network requires sufficient
bandwidth. Because of this many enterprises are waiting for a reduced cost, before switching to
the cloud services.
There are three types of cloud environments available – private, public and hybrid. The secret of
successful cloud implementation lies in choosing the most appropriate cloud set-up. Big companies feel safer with their vast data in a private cloud environment, while small enterprises often benefit economically by hosting their services in the public cloud. Some companies also prefer the
hybrid cloud because it is flexible, cost-effective and offers a mix of public and private cloud
services.
One of the major issues with cloud computing is its dependency on the service provider. The
companies providing cloud services charge businesses for utilizing cloud computing services
based on usage. Customers typically subscribe to cloud services to avail themselves of these services. For
uninterrupted and fast services one needs to choose a vendor with proper infrastructure and
technical expertise. You need a vendor who can meet the necessary standards. The service-level
agreement should be read carefully and understood in details in case of outage, lock-in-clauses
etc. Cloud service is any service made available to businesses or corporates from a cloud
computing provider’s server. In other words, cloud services are professional services that support
organizations in selecting, deploying and managing various cloud-based resources.
These challenges should not be considered as roadblocks in the pursuit of cloud computing. It is
rather important to give serious consideration to these issues and the possible ways out before adopting the technology. Cloud computing is rapidly gaining enterprise adoption, yet many IT
professionals still remain skeptical, for good reason. Issues like security and standards continue to
challenge this emerging technology. Strong technical skills will be essential to address the
security and integration issues in the long run. There are also issues faced while making
transitions from the on-premise set-up to the cloud services like data migration issues and
network configuration issues. But planning ahead can avoid most of these problems with cloud
configurations. The extent of the advantages and disadvantages of cloud services vary from
business to business, so it is important for any business to weigh these up when considering their
move into cloud computing.
Disaster recovery
Defining DR systems
First, let’s nail down some definitions. DR is defined primarily as the protection of data in a
secure facility (generally off-site from production machines) with the intent of saving the
data in case of the loss of a data center or major data systems. DR does not include failover
capability, which is the domain of high availability (HA) systems. We'll discuss HA systems in
detail in an upcoming column. Many DR systems also include HA functionality; so if you are
considering using both types of systems, keep that fact in mind.
Both HA and DR are part of the overall science of business continuity planning (BCP), which
is the implementation of HA and DR for data systems, along with human resources and
facilities management policies, to ensure that both your data and your employees are safe.
Many DR products are on the market today, so I won't look at specific packages. Instead, I'll
go over the characteristics that most available products share. Generally speaking, DR
systems are split into two main types, defined by the methodology used to replicate data
from one location to another: synchronous and asynchronous data transfer systems.
Both DR systems let you create up-to-the-second backup copies of your valuable production
data in another physical location. This allows the data to survive intact if the data center is
lost for some reason, such as in a flood or fire. Unlike tape backup systems, the data is current
and in a useable format, as it is already on a disk system and not stored on a tape, which
must be restored to disk. A data center in Houston can be secured with a data center in
Dallas, for example, allowing systems and people to be moved to another location and then
resume operations with a minimum of recovery issues.
For most applications and businesses, asynchronous DR technologies offer a much more cost-effective—and still quite sufficient—solution. A typical asynchronous system works as follows.
These systems are generally software-based and reside on the host server rather than on the attached storage array. They can protect both local and attached disk systems. In an asynchronous system, I/O requests are committed to the primary disk systems immediately, while a copy of that I/O is sent via some medium (usually TCP/IP) to the backup disk systems. Since there is no waiting for a commit signal from the remote systems, these systems can send a continuous stream of I/O data to the backup systems without slowing down I/O response time on the primary system.
Most asynchronous systems have some methodology to make sure that if something is lost
in transmission, it can be resent. Some can also make sure that transactions are written to
both disks in the same order, which is vital for database-driven applications. In addition,
since the usual method of transmission is TCP/IP, these systems have no real distance
limitations, and there's no limit to splitting the primary and backup systems across WAN
segments or subnets.
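The asynchronous write path described above can be sketched as follows. This is only a conceptual illustration, not a replication product's API; the sequence numbers stand in for the ordering and retransmission guarantees mentioned above.

# Conceptual sketch of asynchronous replication: the primary write completes immediately,
# while a sequence-numbered copy is queued for the remote site.
$replicationQueue = [System.Collections.Generic.Queue[psobject]]::new()

function Write-Block {
    param($Data)
    # 1. Commit to the primary disk right away (simulated here by a message).
    Write-Output "Committed to primary: $Data"
    # 2. Queue a copy for the backup site; no waiting for a remote acknowledgement.
    $replicationQueue.Enqueue([pscustomobject]@{ Seq = $replicationQueue.Count + 1; Data = $Data })
}

Write-Block -Data "transaction A"
Write-Block -Data "transaction B"

# A background sender would drain the queue in sequence order, resending anything lost in transit.
while ($replicationQueue.Count -gt 0) {
    $item = $replicationQueue.Dequeue()
    Write-Output "Replicating #$($item.Seq) to the backup site: $($item.Data)"
}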
Fibre Channel technology supports both fiber and copper cabling, but copper
limits Fibre Channel to a maximum recommended reach of 100 feet, whereas more
expensive fiber optic cables reach up to 6 miles. The technology was specifically
named Fibre Channel rather than Fiber Channel to distinguish it as supporting both
fiber and copper cabling.
The original version of Fibre Channel operated at a maximum data rate of 1 Gbps.
Newer versions of the standard increased this rate up to 128 Gbps, with 8, 16, and 32
Gbps versions also in use.
Fibre Channel does not follow the typical OSI model layering. It is split into five layers: FC-0 (the physical layer), FC-1 (the transmission and encoding layer), FC-2 (the framing and flow-control layer), FC-3 (the common services layer) and FC-4 (the protocol-mapping layer).
Fibre Channel networks have a historical reputation for being expensive to build,
difficult to manage, and inflexible to upgrade due to incompatibilities between
vendor products. However, many storage area network solutions use Fibre Channel technology. Gigabit Ethernet has since emerged as a lower-cost alternative for storage networks, and it can better take advantage of Internet standards for network management such as SNMP.
SAN
Storage area networks (SANs) are the most common storage networking architecture
used by enterprises for business-critical applications that need to deliver high throughput and low latency.
SANs make up about two-thirds of the total networked storage market. They are
designed to remove single points of failure, making SANs highly available and
resilient. A well-designed SAN can easily withstand multiple component or device
failures.
Arbitrated Loop, also known as FC-AL, is a Fibre Channel topology in which devices
are connected in a one-way loop fashion in a ring topology. Historically it was a
lower-cost alternative to a fabric topology. It allowed connection of many servers and computer storage devices without using the then very costly Fibre Channel switches.
The cost of the switches dropped considerably, so by 2007, FC-AL had become rare
in server-to-storage communication. It is however still common within storage
systems.
Arbitrated loop can be physically cabled in a ring fashion or using a hub. The physical
ring ceases to work if one of the devices in the chain fails. The hub on the other
hand, while maintaining a logical ring, allows a star topology on the cable level. Each
receive port on the hub is simply passed to next active transmit port, bypassing any
inactive or failed ports.
Fibre Channel hubs therefore have another function: They provide bypass circuits
that prevent the loop from breaking if one device fails or is removed. If a device is
removed from a loop (for example, by pulling its interconnect plug), the hub’s bypass
circuit detects the absence of signal and immediately begins to route incoming data
directly to the loop’s next port, bypassing the missing device entirely. This gives
loops at least a measure of resiliency—failure of one device in a loop doesn’t cause
the entire loop to become inoperable.
Fibre Channel over IP, or FCIP, is a tunnelling protocol used to connect Fibre Channel
(FC) switches over an IP network, enabling interconnection of remote locations. From
the fabric view, an FCIP link is an inter-switch link (ISL) that transports FC control and
data frames between switches.
FCIP routers link SANs to enable data to traverse fabrics without the need to merge
fabrics. FCIP as an ISL between Fibre Channel SANs makes sense in situations such as:
Where two sites are connected by existing IP-based networks but not dark
fibre.
Where IP networking is preferred because of cost or the distance exceeds the
FC limit of 500 kilometres.
Where the duration or lead time of the requirement does not enable dark
fibre to be installed.
FCIP ISLs have inherent performance, reliability, data integrity and manageability
limitations compared with native FC ISLs. Reliability measured in percentage of
uptime is on average higher for SAN fabrics than for IP networks. Network delays
and packet loss may create bottlenecks in IP networks. FCIP troubleshooting and
performance analysis requires evaluating the whole data path from FC fabric, IP LAN
and WAN networks, which can make it more complex to manage than other
extension options.
Protocol conversion from FC to FCIP can impact the performance that is achieved,
unless the IP LAN and WAN are optimally configured, and large FC frames are likely
to fragment into two Ethernet packets. The default maximum transfer unit (MTU) size
for Ethernet is 1,500 bytes, and the maximum Fibre Channel frame size is 2,172 bytes,
including FC headers. So, a review of the IP network’s support of jumbo frames is
important if sustained gigabit throughput is required. To determine the optimum
MTU size for the network, you should review IP WAN header overheads for network
resources such as the VPN and MPLS.
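The fragmentation concern can be checked with simple arithmetic; the tunnelling overhead below is an assumed, approximate figure.

# Will a full-size FC frame fit into one Ethernet packet after FCIP encapsulation?
$fcFrameBytes   = 2172       # maximum Fibre Channel frame, including FC headers
$tunnelOverhead = 98         # assumed FCIP + TCP + IP overhead (approximate)
foreach ($mtu in 1500, 9000) {
    $packets = [math]::Ceiling(($fcFrameBytes + $tunnelOverhead) / $mtu)
    Write-Output "MTU $mtu bytes: a full FC frame spans $packets Ethernet packet(s)."
}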
FCIP is typically deployed for long-haul applications that are not business-critical and
do not need especially high performance.
Software on both ends of the connection is configured to strip off the IP information, leaving the native Fibre Channel frame available.
iFCP is designed for customers who may have a wide range of Fibre
Channel devices (i.e. Host Bus Adapters, Subsystems, Hubs,
Switches, etc.), and want the flexibility to interconnect these devices
with an IP network. iFCP can interconnect Fibre Channel SANs with IP,
as well as allow customers the freedom to use TCP/IP networks in
place of Fibre Channel networks for the SAN itself. Through the
implementation of iFCP as a gateway-to-gateway protocol, these customers can maintain the benefit of their Fibre Channel devices while taking advantage of an IP network infrastructure.
FCoE transports Fibre Channel directly over Ethernet while being independent of the
Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and
FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre
Channel constructs, FCoE was meant to integrate with existing Fibre Channel
networks and management software.
The main application of FCoE is in data center storage area networks (SANs). FCoE
has particular application in data centers due to the cabling reduction it makes
possible, as well as in server virtualization applications, which often require many
physical I/O connections per server.
With FCoE, network (IP) and storage (SAN) data traffic can be consolidated using a single network. This consolidation can reduce the number of network interface cards, cables and switches required to connect servers to separate storage and IP networks, and can reduce power and cooling costs.
Data centers used Ethernet for TCP/IP networks and Fibre Channel for storage area
networks (SANs). With FCoE, Fibre Channel becomes another network protocol
running on Ethernet, alongside traditional Internet Protocol (IP) traffic. FCoE operates
directly above Ethernet in the network protocol stack, in contrast to iSCSI which runs
on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer, and will
not work across routed IP networks.
Since classical Ethernet had no priority-based flow control, unlike Fibre Channel,
FCoE required enhancements to the Ethernet standard to support a priority-based
flow control mechanism (to reduce frame loss from congestion). The IEEE standards body added these priorities through its Data Center Bridging task group.
Fibre Channel required three primary extensions to deliver the capabilities of Fibre Channel over Ethernet networks: encapsulation of native Fibre Channel frames into Ethernet frames, extensions to Ethernet itself so that frames are not routinely lost during periods of congestion, and a mapping between Fibre Channel N_Port IDs and Ethernet MAC addresses.
Computers can connect to FCoE with converged network adapters (CNAs), which
contain both Fibre Channel host bus adapter (HBA) and Ethernet network interface
controller (NIC) functionality on the same physical card. CNAs have one or more
physical Ethernet ports. FCoE encapsulation can be done in software with a
conventional Ethernet network interface card, however FCoE CNAs offload (from the
CPU) the low level frame processing and SCSI protocol functions traditionally
performed by Fibre Channel host bus adapters.
Here is a listing of technical definitions for iSCSI Connection & iSCSI Session.
Session: The group of TCP connections that link an initiator with a target form a
session (loosely equivalent to a SCSI I-T nexus). TCP connections can be added and
removed from a session. Across all connections within a session, an initiator sees one
and the same target.
SSID (Session ID): A session between an iSCSI initiator and an iSCSI target is defined
by a session ID that is a tuple composed of an initiator part (ISID) and a target part
(Target Portal Group Tag). The ISID is explicitly specified by the initiator at session
establishment. The Target Portal Group Tag is implied by the initiator through the
selection of the TCP endpoint at connection establishment. The Target Portal Group
Tag key must also be returned by the target as a confirmation during connection
establishment when TargetName is given.
CID (Connection ID): Connections within a session are identified by a connection ID.
It is a unique ID for this connection within the session for the initiator. It is generated
by the initiator and presented to the target during login requests and during logouts
that close connections.
ISID: The initiator part of the Session Identifier. It is explicitly specified by the initiator
during Login.
A session is established between an iSCSI initiator and an iSCSI target when the iSCSI
initiator performs a logon or connects with the target. The link between an initiator
and a target which contains the group of TCP connections forms a session. A session
is identified by a session ID that includes an initiator part and a target part.
Discovery-session: This type of session is used only for target discovery. The iSCSI
target may permit SendTargets text requests in such a session.
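The identifiers defined above fit together roughly as sketched below; the values are invented for illustration only. The SSID is the combination of the ISID and the Target Portal Group Tag, and each TCP connection inside the session carries its own CID.

# Illustrative layout of an iSCSI session and its connections (all values are made up).
$session = [pscustomobject]@{
    ISID                 = '0x00023D000001'     # initiator part, chosen at login
    TargetPortalGroupTag = 1                    # target part, implied by the chosen TCP endpoint
    Connections          = @(
        [pscustomobject]@{ CID = 1; Endpoint = '192.0.2.10:3260' },
        [pscustomobject]@{ CID = 2; Endpoint = '192.0.2.11:3260' }
    )
}
"SSID = ISID {0} + Target Portal Group Tag {1}; the session has {2} TCP connection(s)." -f `
    $session.ISID, $session.TargetPortalGroupTag, $session.Connections.Count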
Point-to-point Architecture: In this configuration, two nodes are connected directly to each
other. This configuration provides a dedicated connection for data transmission between
nodes. However, the point-to-point configuration offers limited connectivity and scalability
and is used in a DAS environment.
FC-Arbitrated Loop: In this configuration, the devices are attached to a shared loop. Each
device contends with other devices to perform I/O operations. The devices on the loop must
“arbitrate” to gain control of the loop. At any given time, only one device can perform I/O
operations on the loop. Because each device in a loop must wait for its turn to process an I/O
request, the overall performance in FC-AL environments is low.
Further, adding or removing a device results in loop re-initialization, which can cause a
momentary pause in loop traffic. As a loop configuration, FC-AL can be implemented without
any interconnecting devices by directly connecting one device to another two devices in a ring
through cables. However, FC-AL implementations may also use FC hubs through which the devices are connected in a star topology at the physical level while still forming a logical loop.
FC-Switched Fabric (FC-SW): In this configuration, switches are interconnected by inter-switch links (ISLs). ISLs enable the transfer of both storage traffic and fabric management traffic from one switch
to another. In FC-SW, nodes do not share a loop; instead, data is transferred through a
dedicated path between the nodes. Unlike a loop configuration, an FC-SW configuration
provides high scalability. The addition or removal of a node in a switched fabric is minimally
disruptive; it does not affect the ongoing traffic between other nodes.
Q1. What is cloud OS? What are the three key platforms of Microsoft Cloud OS?
Enumerate the four key goals of Microsoft Cloud OS.
The list of new features and enhancements in Windows Server 2012 Hyper-V (let
alone the rest of the OS) is staggering. A number of the features were designed for
building the compute resources of a cloud, and some even started life in Windows
Azure.
Prior to the release of System Center 2012, there were numerous independent System Center products that had some level of integration. Then came System Center 2012,
a suite (that some consider as a single product) that is made up of a deeply
integrated set of components. Used separately, each tool brings a lot to the Cloud
OS. When used together, System Center 2012 SP1 can revolutionize how a business consumes (or provides) cloud computing.
Virtual Machine Manager manages your fabric infrastructure for virtualization:
hosts, clusters and networks from the bare metal to the ultimate abstraction in
private clouds. The 2012 version has changed fundamentally from its
predecessor in the overall scope by now managing the entire fabric, creating
Hyper-V clusters from bare metal, managing resource and power optimization
natively, interfacing with Hyper-V, VMware ESX and Citrix XenServer hosts and orchestrating patching of clusters. There’s also a Service model that lets you deploy (and subsequently update) entire groups of related VMs for a multi-tier service.
Orchestrator (formerly known as Opalis) is a newcomer to the System Center 2012 suite, but it’s very important as it integrates and links the other components through automation. Via a Visio-like interface, Activities are linked together into Runbooks that can then automate IT processes on demand; Runbooks can be started from a web interface, from Service Manager, or from other System Center components.
Microsoft System Center App Controller is a new member of the System Center
family of products. Although other products in this suite can be implemented
independently of one another (with the ability to integrate, of course), App Controller
is highly dependent on System Center Virtual Machine Manager (VMM) or Windows
Azure. In case you aren't familiar with App Controller's purpose, let me make a brief
introduction.
App Controller is a product for managing applications and services that are deployed
in private or public cloud infrastructures, mostly from the application owner's
perspective. It provides a unified self-service experience that lets you configure,
deploy, and manage virtual machines (VMs) and services. Some people mistakenly
think that App Controller is simply the replacement for the VMM Self-Service Portal.
Although App Controller does indeed serve this function, and in some way can
replace the Self-Service Portal, its focus is different. VMM Self-Service portal was
used primarily for creating and managing VMs, based on predefined templates; App
Controller also focuses on services and applications. App Controller lets users focus
on what is deployed in the VM, rather than being limited to the VM itself.
To understand this concept, you need to be familiar with System Center 2012 Virtual Machine Manager (VMM 2012). Although this article is not about VMM, I must mention some important things
so you can get the full picture. VMM 2012 has significantly changed from VMM 2008
R2. VMM 2012 still manages and deploys hosts and VMs, but its main focus is on
private clouds and service templates. The end result is that an administrator or end
user can deploy a service or application to a private cloud even without knowing
exactly what lies beneath it.
I mentioned earlier that you can use App Controller to connect to both private and
public clouds. Connecting to a private cloud means establishing a connection to a
VMM 2012 Management Server. However, you can also add a Windows Azure
subscription to App Controller.
Target users for App Controller are not administrators, although some admin tasks
can be performed through the App Controller console. App Controller is intended to
be used by application or service owners: the people that deploy and manage an
application or service. (Don't confuse these folks with the end users that actually use
a service or application. End users should not be doing anything with App
Controller.) An owner might be an administrator, or an owner might be a developer
that needs a platform to test an application. The key point is self-servicing: App
Controller enables application owners to deploy new instances of a service or application on their own.
App Controller can't create or manage building blocks for VMs or services. Nor can it
be used to create new objects from scratch (except for service instances). Anything
you work with in App Controller must first be prepared in VMM. That means creating
VM templates, guest OS profiles, hardware profiles, application profiles and
packages, and logical networks, as well as providing Sysprepped .vhd files, ISO
images, and private cloud objects. To deploy services through App Controller, a
VMM administrator must create a service template and deployment configuration.
Self-service user roles also should be created in VMM and associated with one or
more private clouds and quotas.
App Controller doesn't have its own security infrastructure: It relies completely on
security settings in VMM, so available options for a user in App Controller depend
directly on the rights and permissions that are assigned to the user in VMM.
Authentication is performed by using a web-based form, but you can opt to use
Windows Authentication in Microsoft IIS to achieve single sign-on (SSO).
SC2012 ADVANTAGES
Here is a list of ten things that are new in Microsoft System Center 2012 R2:
2. Server support
System Center 2012 R2 server-side components prefer the latest server operating
system (OS), Windows Server 2012 R2. The major System Center component that
requires Windows Server 2012 R2 is SCVMM. Windows Server 2012 is a second
choice, and as a third choice, Windows Server 2008 R2 will host most components as
well. Orchestrator and DPM servers can still run even on Windows Server 2008. (Users
of the SharePoint-based Service Manager Self-Service Portal (SSP) must use
Windows Server 2008 or 2008 R2.)
Q6. What is virtual machine manager? What are the benefits of virtual machine
manager to an enterprise.
System Center Virtual Machine Manager enables increased physical server utilization
by making possible simple and fast consolidation on virtual infrastructure. This is
supported by consolidation candidate identification, fast Physical-to-Virtual (P2V)
migration and intelligent workload placement based on performance data and user-defined business policies (note: P2V migration capability was removed in SCVMM 2012 R2). VMM enables rapid provisioning of new virtual machines by the
administrator and end users using a self-service provisioning tool. Finally, VMM
provides the central management console to manage all the building blocks of a
virtualized data center.
Microsoft System Center 2016 Virtual Machine Manager was released in September
2016. This product enables the deployment and management of a virtualized, software-defined data center.
The latest release is System Center 2019 Virtual Machine Manager, which was released in March 2019. It added features in the areas of Azure integration,
computing, networking, security and storage.
Q9. What are the components of virtual machine manager? Explain each
component.
VMM COMPONENTS
Q10. What is virtual machine placement. What are the different ways of deploying
virtual machines.
When a virtual machine is deployed on a host, the process of selecting the most
suitable host for the virtual machine is known as virtual machine placement, or simply
placement. During placement, hosts are rated based on the virtual machine’s
hardware and resource requirements, the anticipated usage of resources, and
capabilities resulting from the specific virtualization platform. Host ratings also take
into consideration the placement goal: resource maximization on individual hosts,
load balancing among hosts, or whether the virtual machine is highly available. The
administrator selects a host for the virtual machine based on the host ratings.
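A simplified version of this host-rating logic might look like the sketch below; the rating formula and the host figures are hypothetical and are not VMM's actual placement algorithm.

# Hypothetical host-rating sketch: filter out hosts that cannot satisfy the VM's requirements,
# then rate the rest by the head-room left after placement (not VMM's real algorithm).
$vm = @{ CpuMHz = 4000; MemoryGB = 8 }

$hosts = @(
    [pscustomobject]@{ Name = 'HOST01'; FreeCpuMHz = 12000; FreeMemoryGB = 16 },
    [pscustomobject]@{ Name = 'HOST02'; FreeCpuMHz =  6000; FreeMemoryGB = 10 },
    [pscustomobject]@{ Name = 'HOST03'; FreeCpuMHz =  3000; FreeMemoryGB = 32 }
)

$rated = $hosts |
    Where-Object { $_.FreeCpuMHz -ge $vm.CpuMHz -and $_.FreeMemoryGB -ge $vm.MemoryGB } |
    Select-Object Name, @{ Name = 'Rating'; Expression = { ($_.FreeCpuMHz - $vm.CpuMHz) + 100 * ($_.FreeMemoryGB - $vm.MemoryGB) } } |
    Sort-Object Rating -Descending

$rated | Format-Table -AutoSize    # the administrator (or automatic placement) picks the top-rated host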
Automatic Placement
In the following cases, a virtual machine is automatically placed on the most suitable
host in a host group, in a process known as automatic placement:
During automatic placement, the configuration files for the virtual machine are
moved to the volume judged most suitable on the selected host. For automatic
placement to succeed, a virtual machine path must be configured on the
recommended volume. For more information, see About Default Virtual Machine
Paths.
But VM placement also concerns the VM’s storage and network. Consider a storage solution where you have several LUNs (or Storage Spaces) with different service levels: perhaps one LUN on HDDs in RAID 6 and another on SSDs in RAID 1. You don’t want a VM that requires intensive I/O to be placed on the HDD LUN.
Thanks to Virtual Machine Manager, we are able to deploy a VM in the right network and on the desired storage. Moreover, the VM can be constrained to be hosted on a specific hypervisor. In this topic, we will see how to deploy this kind of solution. I
assume that you have some knowledge about VMM.
Roles
WMI
PowerShell
System Center Virtual Machine Manager 2012 R2 has enormous PowerShell support.
Every task that you can perform on the SCVMM console can also be performed using
PowerShell. Also, there are some tasks in SCVMM that can only be performed using
PowerShell.
There are two ways in which you can access the PowerShell console for SCVMM:
The first technique is to launch it from the SCVMM console itself. Open the SCVMM
console in administrator mode and click on the PowerShell icon in the GUI console.
This will launch the PowerShell console with the imported virtualmachinemanager
PowerShell module:
Import-module virtualmachinemanager
This will import the cmdlets in the virtualmachinemanager module for administrative
use.
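Once the module is loaded, the VMM cmdlets become available in the session. As a hedged example (the server name below is a placeholder, and exact cmdlet availability depends on the console installation), a typical first interaction looks like this:

# Connect to a VMM management server and list its virtual machines.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmmserver01"        # "vmmserver01" is a placeholder name
Get-SCVirtualMachine | Select-Object Name, Status, VMHost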
Note: SCCM was formerly known as SMS (Systems Management Server), originally
released in 1994. In November 2007, SMS was renamed to SCCM and is sometimes
called ConfigMgr.
Users of SCCM can integrate with Microsoft Intune, allowing them to manage
computers connected to a business, or corporate, network. SCCM allows users to
manage computers running Windows or macOS, servers running Linux or Unix, and even mobile devices running the Windows, iOS, and Android operating systems.
The newest addition to the System Center Configuration Manager family is System Center 2012 Configuration Manager (SCCM 2012). There have been quite a few
improvements over its predecessor, System Center Configuration Manager 2007
(SCCM 2007). These improvements stretch from the big picture improvements in
hierarchy to the more granular improvements in custom client settings. This product
has made many leaps and bounds from its SMS days. Here are some of the big
picture changes that were made.
MOF changes
More than one management point (MP) per site, to extend the number of clients each site can handle and help with redundancy. Clients will choose which one they want based on capability and proximity.
This new tool improves the end-user experience by giving users a limited ability to manage settings for interacting with SCCM, empowering them with self-service. With the end user having the ability to set up “business hours”, this will help with reducing the
downtime associated with updates, software distribution, and OSD since the end user
decides when they want it done. This replaced the run advertised programs in SCCM
2007.
The big picture changes are going to make a very big impact Site/Hierarchy wide.
The granular changes being made on a client and feature level will surely make life
easier on admins. Here are some of the more critical changes made with SCCM 2012.
Permissions can now go across sites and be made granular with security scopes. This
is a much needed improvement since it was really difficult with previous versions to
separate permissions properly.
Once a change is made to client settings, it is applied hierarchy-wide through the default settings, or you can be granular and mix in some custom client settings for
specific collections. Remember, the custom client settings take precedence over the
default settings.
Packages contain source files that are run from the command line through programs.
Applications now have dependency intelligence built into the agent, which will automatically detect and install required dependencies before installing the application.
Configuration Manager provides you with tools and infrastructure you can use to create and
deploy operating system images for servers and virtual machines in your environment. It
does this using the same technologies as client management, including Windows Imaging
(WIM) and the Microsoft Deployment Toolkit (MDT), which offers additional customization
capabilities. These server images can also include enterprise applications, OEM device
drivers, and additional customizations needed for your environment.
Servers can be organized by group, user, or region to phase a deployment rollout. Servers
that are upgraded also have the option to migrate their user state information. Bootable
media containing operating system images can also be created, and this can be particularly
helpful in datacenters where PXE boot isn’t possible. Configuration Manager 2012 R2 can
store images as VHD files and optionally place them in a Virtual Machine Manager library
share together with App-V packages. Virtual Machine Manager can then use these library
objects to deploy preconfigured virtual machines or inject application packages into
application profiles and virtual machines.
When we deploy applications, we will come across a few of the elements of applications:
1) Application Information – This provides general information about the application such
as the name, description, version, owner and administrative categories. Configuration
Manager can read this information from the application installation files if it is present.
c) Windows Mobile Cabinet – Creates a deployment type from a Windows Mobile Cabinet
(CAB) file.
d) Nokia SIS file – Creates a deployment type from a Nokia Symbian Installation Source (SIS)
file.
When you deploy an application in CM 2012 you come across two things: Deployment Action and Deployment Purpose. Both of these are really important.
– Deployment Purpose – This is really important, you have an option to specify Deployment
purpose as “Available” or “Required“. If the application is deployed to a user, the user sees
the published application in the Application Catalog and can request it on-demand. If the
application is deployed to a device, the user sees it in Software Center and can install it on
demand.
Configuration Manager provides several methods that you can use to deploy an operating
system. There are several actions that you must take regardless of the deployment method
that you use:
Identify Windows device drivers that are required to start the boot image or install
the operating system image that you have to deploy.
Identify the boot image that you want to use to start the destination computer.
Use a task sequence to capture an image of the operating system that you will
deploy. Alternatively, you can use a default operating system image.
Distribute the boot image, operating system image, and any related content to a
distribution point.
Create a task sequence with the steps to deploy the boot image and the operating
system image.
Deploy the task sequence to a collection of computers.
Monitor the deployment.
There are several methods that you can use to deploy operating systems to Configuration
Manager client computers.
Bootable media deployments: Bootable media deployments let you deploy the operating
system when the destination computer starts. When the destination computer starts, it
retrieves the task sequence, the operating system image, and any other required content
from the network. Because that content is not included on the media, you can update the
content without having to re-create the media. For more information, see Create bootable
media.
Stand-alone media deployments: Stand-alone media deployments are useful in environments where it is not practical to copy an operating system image or other large packages over the network.
Pre-staged media deployments: Pre-staged media deployments let you deploy an operating
system to a computer that is not fully provisioned. The pre-staged media is a Windows
Imaging Format (WIM) file that can be installed on a bare-metal computer by the
manufacturer or at an enterprise staging center that is not connected to the Configuration
Manager environment.
Later in the Configuration Manager environment, the computer starts by using the boot
image provided by the media, and then connects to the site management point for available
task sequences that complete the download process. This method of deployment can reduce
network traffic because the boot image and operating system image are already on the
destination computer. You can specify applications, packages, and driver packages to include
in the pre-staged media.
Q1. What are the hardware, software and networking requirements of operations
manager 2012?
Every enterprise relies on its underlying services and applications for everyday
business and user productivity. SCOM is a monitoring and reporting tool that checks
the status of various objects defined within the environment, such as server
hardware, system services, operating systems (OSes), hypervisors and applications.
Administrators set up and configure the objects. SCOM then checks the relative
health -- such as packet loss and latency issues -- of each object and alerts
administrators to potential problems. Additionally, SCOM offers possible root causes
or corrective action to assist troubleshooting procedures.
SCOM uses traffic light color coding for object health states. Green is healthy, yellow
is a warning and red is a critical issue. (Gray can denote an item is under maintenance
or SCOM cannot connect to the object.) Administrators set a threshold for each
object's health states to determine if SCOM should issue an alert. For example, the
admin can set a disk drive as green/healthy when less than 70% of its capacity is filled, yellow/warning when 70% to 80% of capacity is filled, and red/critical when more than 80% of storage capacity is filled. The admin can adjust these levels when needed.
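The disk-capacity example above boils down to a simple threshold check. The function below only illustrates that logic; it is not how a SCOM monitor or management pack is actually authored.

# Illustrative health-state evaluation using the thresholds from the example above.
function Get-DiskHealthState {
    param([double]$PercentUsed)
    if     ($PercentUsed -gt 80) { 'Red / Critical' }
    elseif ($PercentUsed -ge 70) { 'Yellow / Warning' }
    else                         { 'Green / Healthy' }
}

65, 75, 85 | ForEach-Object { "{0}% used -> {1}" -f $_, (Get-DiskHealthState -PercentUsed $_) }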
System Center Service Manager provides you with an integrated platform for
delivering IT as a service through automation, self-service, standardization, and
compliance. System Center Orchestrator enables you to create and manage
workflows for automating cloud and datacenter management tasks. And Windows
Azure Pack lets you implement the Windows Azure self-service experience right
inside your own datacenter using your own hardware.
IT isn’t just about technology; it’s also about the people and processes that use those services. Employees don’t care about which Microsoft Exchange Server their Microsoft Outlook client gets their mail from; they just need to be able to get their mail so that
they can do their job. They also don’t want to know the details of how mail servers
are upgraded or patched, they just want the newest capabilities without any service
interruptions. From the user’s perspective, IT just delivers a service they depend on as
they perform their daily routine.
The design goal of System Center Service Manager is to provide organizations with
an integrated platform for delivering IT as a Service (ITaaS) through automation, self-
service, standardization, and compliance. Service Manager does this by enabling automation, self-service, standardization, and compliance across an organization’s IT processes.
Building automation
System Center Orchestrator can be used to create and manage workflows for
automating cloud and datacenter management tasks. These tasks might include
automating the creation, configuration, management, and monitoring of IT systems;
provisioning new hardware, software, user accounts, storage, and other kinds of
resources; and automating various IT services or operational processes.
Orchestrator provides end-to-end automation, coordination, and management using
a graphical interface to connect diverse IT systems, software, processes, and
practices. Orchestrator provides tools for building, testing, and managing custom IT
solutions that can streamline cloud and datacenter management. Orchestrator also
facilitates cross-platform integration of disparate hardware and software systems in
heterogeneous environments.
In cloud environments, Orchestrator runbooks can be used to service requests from customers or users. Orchestrator is also a valuable tool for
automating complex, repetitive tasks in traditional datacenter environments and can
simplify the management of a large, heterogeneous datacenter.