
The future of enterprise networking
How telecom operators need to accelerate to defend their enterprise networking competence

February 2018

We would like to extend a special thanks to the contributors from Deutsche Telekom.
Contents

Executive summary
1. Enterprise networks are at the heart of digitization
2. Network bandwidth and data volumes will increase five- to tenfold by 2022
3. New, high-performance, low-latency use cases begin to emerge
4. Enterprises evaluate network vendors along multiple dimensions
5. Networking and computing will eventually be integrated
6. Enterprises revert to a DIY approach to networking
7. Operators need to rethink their networking portfolio
Appendix: How changes in computing and storage technology impact enterprise networking

Authors:

Bela Virag, Partner, Telecommunication, Information, Media & Electronics, Vienna (virag.bela@adlittle.com)
Andrea Faggiano, Partner, Telecommunication, Information, Media & Electronics, Dubai (faggiano.andrea@adlittle.com)
Jonathan Rowan, Partner, Telecommunication, Information, Media & Electronics, London (rowan.jonathan@adlittle.com)
Agron Lasku, Principal, Telecommunication, Information, Media & Electronics, Stockholm (lasku.agron@adlittle.com)
Executive summary

Consumers and businesses around the world increasingly demand virtually


instantaneous digital experiences in whatever they do. Many industries have
already embraced the related opportunities in serving their customers
instantaneously and seamlessly, and in reengineering their business models and
digital production methods accordingly. The more companies digitize – whatever
that may mean in each individual context – the more instant digital interactions
matter. These expectations are the key driver behind industrial digitization.

Looking back, as digitization gained steam over the past decade, it also drove a
revolution in computing. Cloud computing began to spread, ways to leverage
massively parallel computing were invented, storage technologies accelerated
significantly, and new file systems and new ways to store and retrieve data
emerged, and so forth.

However, the products and services telecommunication operators provide to
enterprises have not evolved at the same pace. This is despite a few facts:

- Virtually all traffic (consumer and business) connects users to enterprises.
- All networks terminate with computers or servers of one form or another.
- Any acceleration in computing automatically places incremental demands on networks, too.
- Finally, we expect many new, high-performance use cases to emerge in the next five years.

Are today’s enterprise networks fit for the task?

The good news is that telecommunication service providers continue to increase
bandwidth in their access networks for both consumers and enterprises. Thus, we
expect consumed bandwidths and transported data volumes to increase five- to
tenfold over the next five years.

But when interviewing the executives responsible for infrastructure at enterprise
customers across industries, we learn that when they think about the future of
their enterprise networking, they do not think about bandwidth increases. To them,
bandwidth is by far no longer the most important aspect when buying a corporate
network. Automate-ability, availability, security and scalability all rank higher
than performance.

To enterprises, automation is no longer hype, but founded in functional and
security requirements. Automation must span all infrastructure aspects:
computing, storage, security and networking. Those operators believing that
SD-WAN is merely a cost-saving technology are off the mark: it is about enabling
enterprise infrastructure to support agile development approaches spanning
multiple cloud environments, providing significantly enhanced cyber-security
capabilities, and improving manageability for corporate customers.

Enterprise architects want to manage their networks via their own applications –
both network-oriented applications, e.g. for internal IT departments, and
non-network-oriented applications. Some will even issue design policies making
their applications network aware.

We believe operators must rethink their approach to enterprise networking and
offer a comprehensive and integrated portfolio of services meeting the entire
CIO infrastructure agenda: computing, storage, security and networking – or be
superseded by others that do.

1. Enterprise networks are at the heart
of digitization

We have observed that while the various aspects of computing and security are
evolving at blazing speeds, networks aren't. This is not to say they don't deliver
more throughput – on the contrary. But most networks have not evolved in their
capabilities. In this report we will describe what these capabilities could be.

Enterprise infrastructure executives have told us in interviews that many
operators, with all of their ICT growth ambitions, do not yet have a comprehensive
answer to the evolution of enterprise networking. If this is true, operators are
risking existing revenue streams and leaving money on the table for enhanced
services in the networking space. This misalignment between demand and supply
leaves many enterprises no option but to satisfy their networking needs
elsewhere. Enterprises demanding – beyond bandwidth – automation, security,
availability and overarching performance increasingly need to revert to
"do-it-yourself" (DIY) models, stitching together solutions from
non-telecom-operator networking companies.

"Are telecom operators missing out on advanced enterprise networking opportunities?"

Before diving deeper into the topic, we need to establish two thoughts:

1) Virtually all traffic (consumer and business) connects users to enterprises.
The share of peer-to-peer traffic is generally low.

Take sending an email as an example: this is traffic running between users and a
mail-server operator such as Microsoft, Google, ISPs, telecommunication
providers, et al. Another
[Figure 1: Three things that can be done to data – move data through time
("storage"), move data through space, and transform data – with security around
all three. The related questions: What is your (edge) datacenter strategy? What
is your content delivery strategy? What is your WAN optimization and security
strategy? Source: Arthur D. Little, expanding on John Leddy, Comcast, "the
smarter network"]

example could be file sharing (e.g., a video) – which requires companies such as
Dropbox, Amazon, Facebook and Google to support it. Very little traffic actually
runs between users. This is in stark contrast to what telecommunication operators
would normally do: transmission of voice (which is peer to peer), sending of an
SMS (in contrast to WhatsApp) or torrent downloads (in contrast to other
file-sharing platforms). But overall, we estimate that more than 90% of all data
traffic is not in a consumer-to-consumer context.

2) Enterprises need to satisfy the demand for an instantaneous end-user/device
experience.

They must manage their applications and infrastructures highly efficiently and
from an end-to-end perspective. Network connectivity is one part of this, and so
are cloud computing and data storage.

Essentially, there are three things enterprises do to data1:

- They move data through space: which is called routing.
- They move data through time: which is called storage.
- They transform data: which is called computing.

Of course, all of this needs to happen securely.

In this report we will outline how the increase in industrial digitization has
already led to new forms of computing and is pushing the limits of enterprise
networking.

1 Source: Arthur D. Little, expanding on John Leddy, Comcast, "the smarter network"
2. Network bandwidth and data volumes
will increase five- to tenfold by 2022

How it all started: Consumers enjoy broadband services (on ever smarter devices),
becoming faster and faster. We predict that, whatever the baseline and whatever
the access technology, speeds and volumes in access networks will increase five-
to tenfold by 2022.

- Copper access speeds are being accelerated via vectoring, G.now and eventually
G.fast to deliver 10 times the current performance.
- Passive optical-fiber networks are progressing to 4 x 10 Gbps symmetrical
performance with a 1:256 overbooking factor, delivering multi-Gbps speeds to end
customers (and there is a world outside of PON, too).
- DOCSIS 3.1 (cable networks) delivers 10 Gbps in the downstream and 2 Gbps in
the upstream, and will progress towards higher manageability and even more
bandwidth, with full duplex and 40 Gbps in the shared segment already agreed in
the standard.
- Mobile technologies will evolve to 4.5G (300 Mbps), 4.5G Pro (800 Mbps), 4.9G
(1 Gbps) and eventually 5G. The expected migration of these services by
appreciating consumers will lead to an increase in average connection speeds by
5 to 10 times in the next five years. And we expect mobile to play a greater role
in fixed access than it has so far.

Cisco, in its Visual Networking Index2, projects a fourfold traffic increase by
2022. But this projection excludes a rise in internet gaming, virtual & augmented
reality, artificial intelligence, security-as-a-service, immersive video, video
surveillance, robotics and other use cases that are hard to predict. We believe
the total volume transported may even grow five- to tenfold in the next five
years.

The increase in access speeds and traffic volumes is a good indication that
customers expect (and are willing to spend money on) a more instantaneous
experience. This drives the emergence of new, high-performance, low-latency use
cases.

2 http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/vni-hyperconnectivity-wp.html

3. New, high-performance, low-latency
use cases begin to emerge

Enterprises of all types – national or global, privately or publicly held,
whatever industry – drive digitization in order to improve customer experience,
innovate, disrupt or lower cost.

From our casework and dozens of interviews conducted especially on this topic
with industry executives responsible for their networks, we have seen new
demands emerge: broader availability of digital services (towards the customer
as well as internally) drives an increase in networking needs:

- All kinds of video and imagery applications: video sharing, video calling,
video games, video surveillance and visual sensors are large traffic and
bandwidth drivers for corporate networks.
- Migration of data from own-maintained storage and servers to the cloud,
leading to migration of traffic volume from intra-enterprise to the wide-area
network (WAN).
- Big-data-driven offerings and decision-making to optimize marketing, sales,
logistics and production drive low-latency3 needs.
- Real-time steering of self-controlling applications and remote control of
machines, robots, vehicles, trains, planes, etc., drive up the number of
connections as well as demand for low latency.
- Situational awareness for robotics, mining, security forces or emergency
response teams, physical plant security – especially if remote – etc., drives
the need to keep networks agile.
- Natural language processing for entertainment purposes, smart homes, customer
support, contact or incident categorization, customer experience measurements,
human-machine interfaces, etc., drives the need for network-embedded computing.
- AI-supported applications for data cleansing, real-time performance
improvement, scheduling, etc., drive all of the above.

[Figure 2: Why companies digitize. Companies digitize to improve CEX (by
engaging via simpler channels and more openly with customers), to innovate (by
opening own capabilities and assets up to third parties to leverage), to disrupt
(by leveraging own capabilities and assets to change other industries) and to
lower cost (by increasing staff productivity and machine utilization). Source:
Arthur D. Little]

3 Low latency in this context refers to the time passed between when a request is submitted and a response is generated (end to end).

4. Enterprises evaluate network vendors
along multiple dimensions

When enterprises rethink their networks in light of increased demand, they no
longer only consider bandwidth, but review all six dimensions: availability,
security, manageability, automate-ability, scalability and performance (see
Figure 3).

From the interviews we have held with executives responsible for the networking
infrastructure of enterprises, and from our case work, we have learned that,
while telecommunication operators often tout bandwidth – and they typically
refer to access bandwidth – enterprises have a much more holistic view of how
they define network performance. We have captured enterprise-networking
performance expectations in the following dimensions:

The figure illustrates how executives think of networking qualities and how they
expect the importance of each of the listed items to evolve in the near term. We
have indicated this trend with the red arrows on the dials. "Automate-ability"
is growing fastest in importance, followed by availability, security and
scalability, and trailed by performance and manageability considerations.

1) The need to automate networks results directly from the need to more
granularly and consistently manage network quality and networking resources.

The sheer volume of network ports enterprises manage in the most varied domains
(e.g., offices, production facilities, clouds and data centers), combined with
the increased need to segment the network4 for service quality and security,
leads to the need to automate network administration. Such automation must
include own networking assets (e.g., in the LAN5, WAN and datacenter (DC)
networks), as well as networking assets provided by various cloud service
providers and networks from telecommunication network operators. This either
calls for standardization and APIs, or for technology that can interface

[Figure 3: Aspects enterprises evaluate when procuring networks (with changes in importance):
- Automate-ability: automation of network configuration and the ability to call related functions automatically
- Availability: availability of promised network links and the related management of availability
- Security: network security functionality (IDS/IPS, firewalling, filtering, packet inspection, DDoS, malware, encryption, etc.)
- Scalability: elastic, scaling resources (routing, caching, processing, etc.)
- Performance: bandwidth, latency, jitter, packet loss, etc.
- Manageability: visibility and control of network resources; enabling overlay end-to-end service management
Source: Arthur D. Little]

4 Segmenting a network refers to the concept of logically splitting different traffic types within a network and treating each type differently. A single link may carry public WiFi, surveillance, employee internet, office collaboration, voice and corporate database traffic. Network administrators strive to treat each of these services differently across their networks.
5 LAN = Local Area Network: the network enterprises use inside their offices or factories.

with the different standards and vendors. In most cases, though, operators do
not offer such solutions. Many don't even offer automation solutions for their
own networks!

Customers we have interviewed manage tens or hundreds of thousands of ports with
dozens of services. They do so across multiple clouds and data centers, often in
international settings. To deliver services securely and at predefined quality
of service (e.g., database access, office IT), each service needs to be
performance-managed throughout the network and across all nodes. This effort
exceeds the ability of many if managed manually.

But there is also a second key driver for network automation: the need for an
agile infrastructure to empower agile enterprises. Companies have recognized
that, in order to remain competitive, they have to accelerate their software
deployment cycles. Today, more than 50% of all companies deploy software
quarterly at most. However, going forward, more than 50% aspire to deploy
software either weekly or monthly.

[Figure 4: Deployment frequency, share of companies (today vs. aspiration): more
than once per day 0 vs. 1; daily 5 vs. 3; weekly 12 vs. 26; monthly 19 vs. 27;
quarterly 22 vs. 16; (bi-)annually 33 vs. 17. Today, 55% deploy quarterly or
less often; in aspiration, 53% would deploy weekly or monthly. Source: Arthur D.
Little, based on RightScale's 2017 State of the Cloud]

As code is written and tested, it is moved from a development environment into a
test environment, an integration environment and eventually a production
environment. Each environment requires adjustments in access policies,
firewalls, DNSs6, routing policies, response simulation methods, test
automation, deployment automation, etc., in order for software to function
properly.

Therefore, the process of software development and deployment across distributed
environments (e.g., multiple clouds and data centers), as well as the related
reconciliation of the upstream environments on a daily, weekly or monthly basis,
requires a high degree of automation. Deploying software more frequently means
the maintenance of the development, integration and test environments requires
more effort – to an extent that only automation is really feasible (unless
security is jeopardized).
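The per-environment adjustments listed above lend themselves to automation. As a minimal sketch – the environment names, change types and the `plan_changes` helper are our illustration, not from the report, and a real setup would call vendor, cloud or SD-WAN controller APIs instead:

```python
# Illustrative sketch (ours): enumerating the network changes each
# environment needs when software is promoted through the pipeline.
# All environment names and change types are hypothetical.

CHANGES_PER_ENV = {
    "development": ["access policy", "DNS entry"],
    "test": ["access policy", "firewall rule", "DNS entry", "response simulation"],
    "integration": ["access policy", "firewall rule", "DNS entry", "routing policy"],
    "production": ["access policy", "firewall rule", "DNS entry",
                   "routing policy", "deployment automation"],
}

def plan_changes(service, target_env):
    """Return the change tickets that promoting `service` to `target_env` implies."""
    return [f"{service}: update {change} in {target_env}"
            for change in CHANGES_PER_ENV[target_env]]

# One full promotion cycle of a single service already implies 15 network
# changes; with weekly releases of dozens of services, only automation scales.
total = sum(len(plan_changes("order-api", env)) for env in CHANGES_PER_ENV)
print(total)  # 15
```

At quarterly cadence such a list can be worked through by hand; at daily or weekly cadence, generating and applying it automatically is the only feasible option – which is the report's point.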

[Figure 5: Automation needs along the software deployment pipeline. Classic
phases: plan; code & build; integrate; test & release; deploy & operate.
Required compute infrastructure: dev cloud; integration cloud; test cloud; run
cloud. Required network infrastructure: LAN; integration cloud & DC; test cloud
& DC; run cloud & DC. Required network changes per stage: network policy change,
DNS change, cloud routing change, network response simulation, network test,
automated deployment. At quarterly or monthly release cadence these changes can
be made manually, but at daily or weekly cadence they have to be automated.
Source: Arthur D. Little]

6 DNS = Domain Name System. DNS servers maintain a directory of names and machine addresses within a network.

2) After automation, considerations on availability, security and scalability
gain importance, too. Security, especially, is an unsolved question for many
enterprises – leading them to be ready to move beyond a DIY approach.

While availability is paramount to any infrastructure, it seems that corporates
have understood the fundamentals of availability management well: agreeing on
service levels, insisting on path redundancy and managing supplier performance.
This seems to be sufficient in most cases – if operators deliver as agreed,
rather than accepting the commercial penalty.

Despite major operators such as Vodafone and Deutsche Telekom having launched
dedicated cyber-security teams, many enterprises, so far, rely on self-made
approaches to cyber security. However, our discussions have shown that they are
considering adjusting this approach to include security-as-a-service. This is
mostly driven by the fact that enterprises have recognized the increase in
threat complexity and breadth, and the need for massive infrastructure to fend
off attacks – and they simply do not have the skill or budget to meet the
related security demand.

So far, the most sourced services relate to infrastructure-heavy parts of cyber
defense (e.g., DDoS7 and DNS protection). However, providing security is not an
isolated matter in a highly dynamic network. And it is definitely not isolated
from computing. Firewalling, identity management, federation and single-sign-on
solutions, fraud and malware protection, data-loss prevention, intrusion
detection and prevention, audits and forensics, etc., are all essential and need
to work hand in hand with computing and networking8.

Any security concept needs to be adjusted to the "network domain" it attempts to
secure. Because enterprise networks span multiple domains, such as data centers,
offices, cloud-service providers, suppliers, POSs, partners, campuses and
customer front ends, some traffic may need to be diverted from its direct links
to security providers before pursuing the original destination.
Security-as-a-service providers are developing diverse solutions for security in
such environments.

Eventually, we believe many enterprises will focus on their core competences and
source security services from specialist vendors offering security-as-a-service
functionality.

3) Surprisingly, the importance of network performance and manageability is not
expected to increase much.

We were quite surprised to find that network performance improvements ranked
third, after network automation. Network performance includes bandwidth,
latency, jitter, packet loss and similar topics. Contrary to our initial
expectations, our interview partners often stated that, to them, improving
bandwidth was mostly a question of negotiation. While this seems to be good
news, it is also driving prices down.

The one topic in the "performance" category that caught more interest was the
importance of managing and decreasing latency.

Enterprises reflect on this in two contexts. According to WebpageFX9 and other
sources, experiments show that users expect websites to load in three seconds or
less, while the top e-commerce sites take 5-10 seconds to load. Multiple tests
by Amazon, Google, Walmart, Tagman, Shopzilla and others have confirmed that
reducing page load time by 1 second leads to a 1%-7% increase in sales.

[Figure 6: User expectation: website load time. Top e-commerce sites load in
5-10 seconds, while users expect 3 seconds; every second shaved off improves
conversion by up to 1%-7%. Source: Arthur D. Little]

Latency, as enterprises look at it, is the time it takes for a client request to
receive a response. This time interval is impacted by two main factors:

a) the time it takes a request to travel through the network to the server and
back, and
b) the time it takes the servers to compute an appropriate response.
7 DDoS = Distributed Denial of Service. The attacker sends more requests than the defender can process, causing the defender's systems to stall. Attackers often use multiple devices to run the attack in a concerted manner (sometimes deploying tens or hundreds of thousands of hijacked machines), causing attacks of up to 1 terabit – 1,000 gigabits – per second.
8 An example of how security concepts are intertwined with computing: functions may need to make networks change their security domains at run time to satisfy regulatory constraints. Another is corporate sandboxes needing to be made available to outside developers, including access to cloud services, etc. – or, as illustrated above, software progressing through the development cycle into deployment.
9 www.webpagefx.com

Ad a) We have analyzed in great detail10 how delay builds up in the various
access technologies. To illustrate our findings, we assume a small, simple
website of 2.5 MB in size with limited intelligence. As expected, the faster the
access medium, the faster the site delivery, as you can see in the figure below.

[Figure 7: Time to load a 2.5 MB website by access technology, split into setup,
ramp-up and remaining data transmission: 3G-4G (10 Mbps) 2,463 ms; 5G (400 Mbps)
128 ms; DSL (20 Mbps) 1,258 ms; fiber (1 Gbps) 97 ms. In the faster media, the
setup and ramp-up phases account for most of the total time. Source: Arthur D.
Little]

Even though 5G is much faster than 3G or 4G, and fiber is much faster than DSL,
the most relevant observation is that the increased maximum bandwidth does not
linearly impact end-to-end performance. In faster media the relatively slow
setup and ramp-up phases increase in relevance – they make up two-thirds of the
overall time. And by the time the maximum bandwidth has been reached, the
network will have carried more than 50% of the payload already. Therefore,
carrying small payloads on TCP11-oriented networks quickly becomes inefficient.
The more requests and connections are established, the less the maximum
throughput matters.

In our example, the website will load in 2.5 seconds – excluding
server-processing time. But what if a single element is not readily available on
that server? This could be an external advertisement server, a system to suggest
the "next-best offer", or a user-authentication system which accesses centrally
stored user-profile data. It will slow the entire process down – sometimes
considerably.

But what if we are not talking about websites but enterprise IT? Let's think of
ERP systems, industrial control processes, IoT processes, cash registers in
supermarkets, tracking devices in logistics, measurements in predictive
maintenance, robotics, manufacturing more generally, and the like: in these
situations, the data exchanged is often very small and very frequent, with very
low latency demand. Since bandwidth will not combat delay in the case of small
amounts of data, different approaches are needed. These could include:

- WAN optimization
- static routes
- embedded/cached DNS lookups
- locally cached content
- no TLS/SSL handshakes
- moving servers closer to where they are needed
- etc.

Here again, operators seldom offer the above-mentioned services in their
portfolios – let alone integrate them with their cloud portfolios.

"Will network operators add computing intelligence to their networks and expose
network configuration to their customers?"

But what if the above solutions don't help (e.g., in manufacturing processes,
real-time monitoring and similar applications)? In these cases, enterprise IT
departments typically keep servers on site, e.g., in small, local data centers
or data rooms. This increases complexity and risk in regard to cyber security,
increases the operational effort to maintain the infrastructure, and foregoes
the scale efficiencies to be had in data centers or the cloud.

Telecom operators could overcome these issues by deploying clouds in which the
computing can be done in the network, close to the customers' sites, and in
which the networking configuration can be established by the client in an
automated fashion. This would enable operators to process the entire transaction
locally, while not having to manage the infrastructure.

Ad b) The second driver of latency is the time it takes computing environments
to produce desired results.

Beyond what we discuss on computing and storage technologies in the appendix,
there is one specific point impacting delay: the way software is written today.
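The disproportionate cost of setup and ramp-up for small payloads (factor a above) can be illustrated with a toy TCP slow-start model. This is our simplification, not a measurement: a fixed number of setup round trips, a congestion window that doubles each round trip, and transmission time at line rate ignored entirely.

```python
# Toy TCP slow-start model (ours; real stacks are more complex): the sender
# may put `cwnd` segments on the wire per round trip, doubling cwnd each
# time. For small payloads the total time is set by round trips, not by
# the line rate -- so a faster access medium barely helps.

SEGMENT_KB = 1.5  # roughly one MSS-sized segment

def transfer_ms(payload_kb, rtt_ms, setup_rtts=3):
    """Setup round trips (DNS, TCP, TLS) plus slow-start delivery rounds."""
    rtts, cwnd, sent = setup_rtts, 1, 0.0
    while sent < payload_kb:
        sent += cwnd * SEGMENT_KB  # segments delivered this round trip
        cwnd *= 2                  # slow start: window doubles
        rtts += 1
    return rtts * rtt_ms

# A 15 KB object over a 30 ms RTT path takes 7 round trips = 210 ms --
# in this model, identical on a 10 Mbps link and on a 1 Gbps link.
print(transfer_ms(15, rtt_ms=30))  # 210
```

In this model only a shorter RTT (a closer server) or fewer round trips (cached DNS, no TLS handshake, kept-alive connections) reduce the total – exactly the list of approaches above.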

10 Considering a technically relatively comprehensive perspective on the impact of TCP, DNS, TLS/SSL handshakes, frame alignment, scheduling, buffering, processing, and eventually the impact of slow-starting IP networks.
11 TCP = Transmission Control Protocol, part of the internet protocol family. It is used in most internet applications, such as web surfing and e-mail, and manages the connection between client and server.

By far not all software developers consider the effect of network performance on
application performance. Beyond that, they often use development frameworks12,
both self-made and generally available, which sometimes obfuscate how servers
and networks interact.

Let's use an example: suppose the task is to load a list of products and the
related price for each product. How this task is coded has a great impact on
overall performance. If the code is written so that the first query loads the
products and the following queries load the related price for each item, in a
setup of 10,000 products this will mean executing 10,001 queries. While there
may be other ways to code this, sometimes developers simply don't know how to,
don't worry too much about it, or the architectural setup doesn't allow them to
do so more efficiently.

The real issue is that developers working in the development environment will
not experience any meaningful delay, but once pushed into the production
environment, requests like this may even take minutes to execute.

In our interviews we asked if enterprises were beginning to include the concept
of "network-aware software" in their design policies, and the answer was a
sweeping "No!" The contrary is the case: software is written in environments
which are highly "local", with all software either on the same computer (e.g.,
the developer's), or at least in the same LAN. During the later stages of
software integration and testing, response-automation techniques are employed
which simulate the behavior of remote resources, only without delay. As a
result, software is seldom developed "network aware".

However, many executives responsible for network infrastructure are not
satisfied that software developers do not consider the network when coding.

What needs to happen for software to become location aware?

1. Developers need to consider network topologies and data localization prior to
designing their software.

2. Architects need to embrace the idea that geographically redundant data and
functions can – and maybe should – be maintained in real time. This means that
the concept of having a middleware- or enterprise-bus architecture needs to
evolve beyond the idea of keeping data and functions as singular as possible,
and include the idea of keeping data and functions maintained redundantly across
the entire computational footprint and updated in real time.

3. Finally, in more advanced settings, networks need to be addressable by
software. Software will need to be able to assess how to optimally reach data
and where to deploy computational requests, and possibly how to do so in
multiple locations simultaneously.
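The product/price example above is the classic "N+1 query" pattern. A minimal sketch – the table layout and the use of SQLite are our choices for illustration; the report names no technology:

```python
# The "N+1 queries" anti-pattern from the example: one query for the product
# list, then one further query per product for its price. In-memory SQLite
# hides the cost; over a WAN, every extra query adds a full round trip.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE prices (product_id INTEGER, price REAL);
""")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(i, f"item{i}") for i in range(1, 11)])
con.executemany("INSERT INTO prices VALUES (?, ?)",
                [(i, float(i)) for i in range(1, 11)])

def naive(con):
    """N+1 style: 1 + N queries -- 10,001 for the report's 10,000 products."""
    rows = con.execute("SELECT id, name FROM products ORDER BY id").fetchall()
    return [(name,
             con.execute("SELECT price FROM prices WHERE product_id = ?",
                         (pid,)).fetchone()[0])
            for pid, name in rows]

def batched(con):
    """Network-aware style: one JOIN, a single round trip to the database."""
    return con.execute("""SELECT p.name, pr.price FROM products p
                          JOIN prices pr ON pr.product_id = p.id
                          ORDER BY p.id""").fetchall()

assert naive(con) == batched(con)  # identical result, 11 queries vs. 1
```

A framework (see footnote 12) can emit the naive version without the developer ever seeing a query, which is exactly how such code reaches production unnoticed.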

12 Frameworks in this context are tools which simplify the software development process by offering simple ways to invoke and manage complex processes. An example could be a simple database request: the developer would simply formulate the database command, and the underlying framework would open a connection to the database, log on, execute the command and manage the response. While the developer writes a single or a few simple lines of code, all the handling of issues that may occur – failure of the database to respond, errors in the execution of the command, no results found, interruptions in the connection, request queuing, etc. – is managed by the underlying framework.

5. Networking and computing will
eventually be integrated

We already know that networking is a critical part of computing performance. Based on the following drivers, we predict that networks will become even more integrated into corporate computing environments:

- The increasing need for instantaneity
- Data and computing spreading across multiple environments
- The increasing need for an integrated approach to security
- The need for parallel processing and parallel storage, which require network automation and application-driven handling of network traffic

At the beginning of this paper, we stated that enterprises do three things to their data: transport it, store it and compute it – in a secure way. Given that networks sit between all of these actions, we believe networks need to offer functionality in all three areas. Below is a first set of services that might be offered by networks in the future.

Traditional functions which networks can provide include:

- Traffic routing and path computing
- Firewalling
- Policy control
- Traffic and function spawning
- Error messaging and performance diagnostics
- Service-oriented quality management

Storage functions could include:

- Caching and CDN
- Application acceleration
- Database services (e.g., NoSQL databases such as Cassandra)
- WAN optimization services

Computing functions could include:

- Application acceleration
- Microservices (Docker containers, Hadoop clusters, etc.)
- Transcoding
- Video or data analytics (e.g., security-camera image analytics)
- Real-time CEX support
- Status and control functions (e.g., for IoT applications)

However, this can only be a starting point for the discussion of what functionalities the future enterprise network should have.
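To make "path computing" concrete: at its core it means selecting routes against a service metric such as latency rather than plain hop count. A minimal sketch follows; the toy topology and its millisecond link latencies are invented for illustration.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over link latencies: returns (total latency, hop list)."""
    queue = [(0.0, src, [src])]  # (accumulated latency, node, path so far)
    seen = set()
    while queue:
        latency, node, path = heapq.heappop(queue)
        if node == dst:
            return latency, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_ms in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (latency + link_ms, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy topology: weights are one-way link latencies in milliseconds.
topology = {
    "branch": {"pop_a": 5, "pop_b": 9},
    "pop_a":  {"core": 2, "pop_b": 1},
    "pop_b":  {"core": 4},
    "core":   {"dc": 3},
}
# shortest_path(topology, "branch", "dc") picks branch->pop_a->core->dc at 10 ms
```

A production path computation element would of course work on live topology and per-class metrics, but the principle – the application asking the network for a path against a quality target – is the same.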

6. Enterprises revert to a DIY approach
to networking

In the absence of market-available solutions, the best-known internet giants began to build their own networking solutions 15 years ago. As early as 1999, Google realized that no network provider could meet its needs and started building its own networking capability. Today its data centers operate at a mind-boggling 1 petabit per second – 1 million gigabits. Amazon and Facebook followed suit.

Today, large enterprises, too, are formulating requirements which telecommunication network operators cannot respond to in a commercially sensible way. Thus, they, too, revert to DIY approaches of sourcing individual parts from software and network equipment providers and assembling them into their own tailor-made solutions. Can it be that network operators have lost the edge in networking to software companies?

We see many industrial players, in their attempts to digitize and cater to new demands and competitive pressures, contemplating the following questions:

1. How can they build integrated network and IT architectures?
2. How can they avoid being locked in to single network service providers?
3. How can they design networks to leverage peering, instead of international leased lines?
4. How can they apply/unify/automate security if computing environments span multiple clouds?
5. How can they manage network-caused application latency?

Enterprise customers have begun asking their network service providers for answers to these questions – or at least for roadmaps. Sadly, more often than not, they have found that telecom operators have not advanced their own networking technologies sufficiently. All the while, "new entrants" such as cloud-network providers or SD-WAN equipment vendors provide better answers.

It may be that telecommunication operators are overwhelmed with reformatting their networks. But if they don't catch up on the six performance dimensions shown in Figure 3, they invite hardware vendors, software players, systems integrators and cloud-native players to develop solutions for their clients – on the back of the network operator's assets and services. Operators would essentially leave the space wide open – maybe even leave their enterprise customers no choice but to look to other software and hardware vendors for more advanced enterprise networking. Players such as Cisco (after acquiring Meraki, Viptela, etc.), Riverbed, Masergy, Aryaka and many others are pushing their OTT solutions into the market.

Maybe a curiosity on the side: both Google and Amazon have begun to offer data-transfer appliances to support customers' migration into their respective cloud environments. Amazon's "Snowball" is available in 50–100 TB sizes, while Google's "Transfer Appliance" is available up to 480 TB for 1,800 USD plus shipping (500 USD). Their argument for "snail mail" is simple: transferring 10 PB of data (a typical enterprise volume) over a 100 Mbps or 1 Gbps link takes three to 27 years.

Figure 10: Data-transfer appliances (Source: Arthur D. Little and company websites)
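The "snail mail" arithmetic is easy to check (assuming full, sustained link utilization and ignoring protocol overhead):

```python
def transfer_years(petabytes, link_mbps):
    """Years needed to push the data through the link at full utilization."""
    bits = petabytes * 1e15 * 8          # decimal petabytes -> bits
    seconds = bits / (link_mbps * 1e6)   # link speed in megabits per second
    return seconds / (365 * 24 * 3600)

print(round(transfer_years(10, 100), 1))   # 10 PB over 100 Mbps -> ~25.4 years
print(round(transfer_years(10, 1000), 1))  # 10 PB over 1 Gbps   -> ~2.5 years
```

Real links, of course, carry production traffic alongside the migration, so the practical figures are even worse – hence the appliances.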

7. Operators need to rethink their
networking portfolio

Many operators are seeking growth in the B2B segment by providing ICT services. The B2B segment often contributes 20–35% of overall turnover, and accounts for 50% or more of growth expectations. Corporate data networks today typically amount to 4–7% of overall turnover – and are thus overlooked in strategic plans. This has two consequences:

1. Networking revenue streams remain under pressure.
2. Growth in other ICT areas does not compensate for declines in networking.

Let us be very clear: seeking growth in the ICT segment is a positive aspiration and a natural choice. Some operators have become very successful in this space. However, we have also seen many operators try to fish in foreign ponds while ignoring the many ICT opportunities that may come from a more comprehensive networking, computing and security portfolio.

Not prioritizing the risk of revenue decline in the corporate networking segment is a dangerous mistake which may amount to the loss of 20% of B2B turnover and nullify growth expectations.

We expect software-oriented networking demands to emerge from customers – which, in turn, will require network operators to implement software-defined networking. Technologies such as segment routing will become more widespread and will need to be exposed for use by enterprises. Beyond this, operators will need to install and expose path-computing capabilities, as well as full, end-to-end support of IPv6, to facilitate efficient routing.

We expect operators to develop commercial and technical strategies for their (edge) data centers, balancing resources between CPU, storage, GPU13 and memory. Many operators are likely overwhelmed with even anticipating what will be needed, and deploying these resources following a "build-it-and-they-will-come" model may be of little financial attractiveness. Some operators may resort to having established computing companies – the likes of Google – deploy micro-data-center environments for them. AT&T's CORD initiative may be a good example at hand.

We even see systems integrators such as Wipro, IBM and others develop networking capabilities – possibly even exploring the development of services based on their own 5G infrastructures. And we see internet giants enter this space. In the fields of IoT, virtual reality, artificial intelligence, machine learning and other (presently hyped) areas, telecommunication operators may fail to grab incremental value if they fail to provide related competence and capabilities in their networks.

Operators need to prepare for, set up and initiate key client engagements to understand infrastructural requirements in a future networked world. They will need to redesign their networking portfolios to reflect the shift in importance of the six network-quality criteria:

- Automate their network operations and offer related capabilities (network deployment and configuration automation) for enterprise customers to dock onto – including the introduction of the necessary asset, inventory and capacity management capabilities.
- Develop or partner for an integrated solution that spans network and cloud security domains.
- Provide a broader set of network functions in all functional domains (networking, storing and processing).
- Be agnostic to who provides the underlying access circuits (probably the most painful one), as customers operate their solutions in multi-country environments and like sourcing from multiple access-network providers.
- Getting the above done likely requires a high level of standardization across domains. Operators need to forfeit their proprietary methods and processes and join forces to develop a unified, cross-domain standard.

Doing so will force operators to invest in renewing their own networks, too, or to acquire related competencies from outside. In the first case, this may lead to a significant improvement in production cost (due to increased internal efficiency). In the second case, they may actually develop their own "over-the-top" approaches to serve the described customer demand for network automation. In both cases, operators need to develop commercial, operational and technical strategies for the future of enterprise networking – and possibly the future of networking as a whole.

13 GPU = graphics processing unit, a microchip designed to be particularly performant when handling graphical (2D and 3D) or otherwise highly parallelized computational tasks.

Appendix: How changes in computing
and storage technology impact enterprise
networking

Elaboration on changes in computing and storage technology and the impact on enterprise networking

a) Computing: To deliver enhanced computing performance, enterprises are increasingly utilizing modern computing methods in multiple cloud-computing environments.

RightScale1 shows that 85 percent of companies employ multiple cloud environments. The most used services today are relational database-as-a-service applications; the most planned services are containers2-as-a-service.

Figure 8: Enterprise cloud strategies, 2017 – multi-cloud (85 percent), single private, single public, no plans (Source: Arthur D. Little based on RightScale's 2017 State of the Cloud)

Cloud-infrastructure-based computing is comparatively more efficient than other methods of providing computing. Studies by Google and others have shown that 10 clusters of 1,000 servers are about 30 percent less efficient than one cluster of 10,000 servers. Thus, the larger the infrastructural pool enterprises use, the more efficient they will be in delivering computing results. Since cloud computing infrastructure is provided on an IaaS3, PaaS4 or even FaaS5 basis, enterprises can scale their computing needs in their own data centers, as well as in ever-larger cloud environments, exactly to the size they need – at any 100-millisecond interval!

This means companies can tune their environments to whatever performance requirements they may have, and still be exceptionally efficient without overprovisioning any kind of infrastructure. This ability, however, assumes that the data required for computation, and the application performing the computation, are available in real time – and geographically redundant.

b) Storage: Storage and retrieval technologies are rapidly evolving in terms of both hardware and software – they are not the problem. The problem is that data, the application and the processors are not in the same physical location.

Data-storage hardware is essentially evolving from spinning disks to memory chips, and memory chips themselves are being connected using ever-faster bus technologies, progressing from IDE6 to flash drives, and finally to NVMe7. During this progression, storage latency has been reduced by a factor of 1,000, to about 10 µs for NVMe drives versus HDDs8. Thus, 100 I/Os today fit into 1 ms. This makes NVMe only about 200 times slower than RAM9, and consequently it does not, by far, pose the same delay issue as old-fashioned HDDs, which were 20,000 times slower than RAM. The real slowdown is in getting the data to the processor and back. If applications and data are physically where the computing resources are, storage should no longer be the bottleneck.
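The latency ladder above can be cross-checked with rough, order-of-magnitude figures; the ~50 ns RAM access time is our illustrative assumption, while the 10 µs NVMe figure and the 200x and 20,000x ratios come from the text:

```python
RAM_S  = 50e-9   # ~50 ns RAM access time (illustrative assumption)
NVME_S = 10e-6   # ~10 us NVMe access latency (from the text)

print(round(NVME_S / RAM_S))     # -> 200: NVMe is ~200x slower than RAM
print(round(1e-3 / NVME_S))      # -> 100: ~100 NVMe I/Os fit into 1 ms
print(round(20_000 * RAM_S, 3))  # -> 0.001: a 20,000x-slower HDD sits at ~1 ms
```

The exact nanosecond figures vary by hardware generation, but the shape of the argument – storage closing the gap to memory while networks stand still – survives any reasonable choice of numbers.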

1 RightScale 2017 State of the Cloud Report
2 Containers in this context refer to fully encapsulated sets of functions that can be called from entitled applications and that will deliver responses. In contrast to virtual machines, they do not require their own operating system. Due to containers, formerly monolithic software stacks are being broken into their individual functions, reducing and clarifying the dependencies within software code and easing maintenance and scaling
3 Infrastructure-as-a-service: the provision of a fully managed computing infrastructure
4 Platform-as-a-service: the provision of execution environments (such as Java, various databases, etc.). Administrators are no longer concerned with hardware or operating systems, but with the performance of the provided platform
5 Function-as-a-service: individual functions, i.e., software code, provided as a service. In this case, administrators worry about neither hardware, operating system nor platform availability: they pay for the execution of software code
6 IDE = Integrated Drive Electronics, an interface to connect hard drives to mainboards
7 NVMe = Non-Volatile Memory Express, an interface which connects SSDs (solid-state disks) via PCI Express. This allows NVMe drives to perform about 10x faster than HDDs, with 1,000x lower latency
8 HDD = Hard Disk Drive

However, we believe there are reasons data may actually be moving further away from where it is being computed:

1. Parallel processing of vast and exponentially growing amounts of data leads to greater distances between the data and where it can be processed.
2. "Containerization" and the use of multiple clouds lead to software calling functions which may be sitting in distant environments.

Modern file systems allow for fast retrieval of massive amounts of data. Enterprises generating and using vast amounts of data employ systems such as the Hadoop Distributed File System and NoSQL databases such as Cassandra to store, sort, map and reduce large amounts of data – both structured and unstructured. These systems allow for highly scalable and widely distributed storage and retrieval – indeed, they are accelerated by the fact that they are distributed.

However, our observation is that, as storage arrays grow, they seem to be moving further away from processors. Having formerly been in-server, storage arrays were moved, as volumes grew, to in-rack systems, later to in-data-center arrays, and finally to the cloud. Since system performance depends on the slowest element of the system, retrieving even small elements required for computation may slow down the end-to-end processing.

As computing power is parallelized and distributed, and data storage is distributed across various environments, the issue of distance between data and CPU10 increases.

The above-quoted study from RightScale states that companies, on average, use 4.1 cloud environments: 1.8 public and 2.3 private clouds.

Figure 9: Average number of cloud environments used per company – 3.2 in 2016 (1.5 public, 1.7 private) versus 4.1 in 2017 (1.8 public, 2.3 private), a 28 percent increase (Source: Arthur D. Little based on RightScale's 2017 State of the Cloud)

Given that software is decreasingly a monolithic piece of code, but increasingly containerized into hundreds or thousands of functions, it becomes ever less likely that both data and function are in the same physical place.

While storage arrays, file systems, operating systems and even software have adapted to delays in getting data, networks and security haven't.

We have tried to size the problem of intra-data-center or intra-cloud traffic (assuming that each cloud sits in its own data center), and found that traffic running inside a data center is at least 10–25 times larger than the traffic running into that data center. Put differently, companies require 25 times the networking speed inside a data center as in the access. If we assume that 10 percent of data needs to be transported from one location to another, this would potentially more than double the WAN traffic.

Clearly, not every enterprise is already facing this issue, but if access speeds accelerate and computing performance requirements lead to parallel, multi-cloud computing in containers, a number of new questions arise:

- Where to store data and code?
- How often to store the same data and software code in geographically distributed locations?
- How to ensure that data and code are kept synchronous across the entire corporate footprint?
- How to transport both data and functions to avoid having the network become a bottleneck in delivering on customer expectations for instantaneous business interactions?
- How to secure data and functions as they transit multiple computing environments?
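The WAN-traffic sizing above can be made concrete with a small calculation; the 25x intra-data-center multiple and the 10 percent moved share are from the text, while the absolute access load is an illustrative assumption:

```python
access_gbps   = 10                # illustrative current access/WAN load
intra_dc_gbps = 25 * access_gbps  # text: intra-DC traffic up to ~25x the access
moved_share   = 0.10              # text: 10% of data crosses locations

# Data that leaves the data center rides the WAN on top of today's load.
extra_wan_gbps = moved_share * intra_dc_gbps
print(extra_wan_gbps / access_gbps)  # -> 2.5: the added WAN load alone is 2.5x
                                     #    today's, i.e. more than a doubling
```

Since the result is a ratio, it holds for any starting access load; only the 25x and 10 percent assumptions drive it.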

9 RAM = Random Access Memory, the main memory in a computer, generally referred to simply as "memory". There are other, faster types of memory in the L1–L3 CPU caches and on various other parts of computers (e.g., graphics cards, controllers), all of which are faster but typically bound to specialty applications
10 CPU = central processing unit, the microchip inside a computer which hosts the operating system and executes the software running on it. It also does most of the calculations needed

Contacts
If you would like more information or to arrange an informal discussion on the issues raised here and
how they affect your business, please contact:

Bela Virag / Karim Taga
virag.bela@adlittle.com

Italy
Giancarlo Agresti
agresti.giancarlo@adlittle.com

Sweden
Agron Lasku
lasku.agron@adlittle.com

Belgium
Gregory Pankert
pankert.gregory@adlittle.com

Japan
Shinichi Akayama
akayama.shinichi@adlittle.com

Singapore
Yuma Ito
ito.yuma@adlittle.com

China
Russell Pell
pell.russell@adlittle.com

Korea
Hoonjin Hwang
hwang.hoonjin@adlittle.com

Spain
Jesus Portal
portal.jesus@adlittle.com

Czech Republic
Dean Brabec
brabec.dean@adlittle.com

Latin America
Guillem Casahuga
casahuga.guillem@adlittle.com

Switzerland
Clemens Schwaiger
schwaiger.clemens@adlittle.com

France
Julien Duvaud-Schelnast
duvaud-schelnast.julien@adlittle.com

Middle East
Sander Koch
koch.sander@adlittle.com

Turkey
Coskun Baban
baban.coskun@adlittle.com

Germany
Michael Opitz
opitz.michael@adlittle.com

The Netherlands
Martijn Eikelenboom
eikelenboom.martijn@adlittle.com

UK
Jonathan Rowan
rowan.jonathan@adlittle.com

India
Srini Srinivasan
srinivasan.srini@adlittle.com

Norway
Diego MacKee
mackee.diego@adlittle.com

USA
Sean McDevitt
mcdevitt.sean@adlittle.com
The future of enterprise networking – How
telecom operators need to accelerate to defend
their enterprise networking competence

Arthur D. Little
Arthur D. Little has been at the forefront of innovation since
1886. We are an acknowledged thought leader in linking
strategy, innovation and transformation in technology-intensive
and converging industries. We navigate our clients through
changing business ecosystems to uncover new growth
opportunities. We enable our clients to build innovation
capabilities and transform their organizations.

Our consultants have strong practical industry experience combined with excellent knowledge of key trends and
dynamics. Arthur D. Little is present in the most important
business centers around the world. We are proud to serve most
of the Fortune 1000 companies, in addition to other leading
firms and public sector organizations.

For further information, please visit www.adl.com.

Copyright © Arthur D. Little 2018. All rights reserved.

www.adl.com/FutureOfEnterpriseNetworks
