Security and Frontend Performance
Breaking the Conundrum
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Security and Frontend Performance, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-491-97757-6
Table of Contents
2. HTTP Strict-Transport-Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
What Is HSTS? 5
Last Thoughts 7
4. Web Linking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Prefetch and Preload 23
Where Does Security Fit In? 24
Last Thoughts 25
5. Obfuscation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Learn from Our Attackers 27
Alternative Application: URL Obfuscation 28
URL Obfuscation Benefits 29
Last Thoughts 34
6. Service Workers: An Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . 35
What Are Service Workers? 35
Gotchas! 37
10. Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
What Did We Learn? 56
Last Thoughts 57
CHAPTER 1
Understanding the Problem
bilities for end users. Let's discuss some of these issues in detail before diving into techniques to address them.
Web Traffic
The latest trends suggest an accelerated increase in overall web traffic; more and more users access the Web through mobile and desktop devices. With the growth in web traffic, and ultimately bandwidth, end users continue to demand improved browsing experiences such as faster page loads. Keeping that in mind, we not only need to adapt our sites to handle additional user traffic, but we need to do so in a way that continues to deliver an optimal browsing experience for the end user.
One of the higher profile frontend issues arising today is the single point of failure. By definition, a single point of failure is a situation in which a single component in a system fails, resulting in a full system failure. Translated to websites, this occurs when a single delayed resource on a page blocks the rest of the page from loading in a browser. Blocking resources are generally responsible for this type of situation, because a site depends on executing these resources (e.g., JavaScript) before continuing to load the rest of the page. A single point of failure is more likely to occur with third party content, especially with the increase in web traffic and the obstacles in trying to deliver an optimal experience for the end user.
1 Takeaways from the 2016 Verizon Data Breach Investigations Report, David Bisson,
accessed October 13, 2016, http://www.tripwire.com/state-of-security/security-data-
protection/cyber-security/takeaways-from-the-2016-verizon-data-breach-investigations-
report.
Technology Trends
Based on the latest issues, we need solutions that bridge the gap and address both performance concerns and security holes at the browser level, and some of the latest technologies do just that. Service workers and HTTP/2 are both technologies aimed at improving the browsing experience; however, both are restricted to use over a secure connection (HTTPS). These technologies are ideal in demonstrating how solutions can improve both performance and security for any given website.
Other frontend techniques exist to help mitigate some of the security and performance vulnerabilities at the browser. Leveraging <iframe>, Content-Security-Policy, HTTP Strict-Transport-Security, and preload/prefetch directives proves to help protect sites from third party vulnerabilities that may result in performance degradation or content tampering.
Start at the Browser
The main idea behind all these technology trends and frontend techniques is to help provide a secure and optimal experience for the end user. But rather than focusing on what a content delivery network, origin, or web server can do, let's shift that focus to what the browser can do. Let's start solving some of these issues at the browser.

In the remaining chapters, we will go through each of the frontend techniques and technology trends mentioned at a high level in this chapter. We will review implementation strategies and analyze how these techniques help achieve an end user's expectation of a secure yet optimal experience, starting at the browser.
What Is HSTS?
The HTTP Strict-Transport-Security (HSTS) header is a security technique that instructs the browser to rewrite HTTP requests into HTTPS requests, ensuring a secure connection to the origin servers during site navigation. From the HTTP Archive, 56% of base pages are using the HTTP Strict-Transport-Security technique, and this number will continue to grow as HTTPS adoption grows. Not only does this header provide browser-level security, but it also proves to be a frontend optimization technique to improve the end user experience. By utilizing this header and the associated parameters, we can avoid the initial HTTP-to-HTTPS redirects so that pages load faster for the end user. As mentioned in High Performance Web Sites by Steve Souders, one of the top 14 rules for making websites faster is to reduce the number of HTTP requests. By eliminating HTTP-to-HTTPS redirects, we are essentially removing a browser request and loading the remaining resources sooner rather than later.
The example in Figure 2-1 demonstrates how the browser performs
a redirect from HTTP to HTTPS, either using a redirect at a proxy
level or at the origin infrastructure. The initial request results in a
302 Temporary Redirect and returns location information in the
headers, which directs the browser to request the same page over
HTTPS. In doing so, the resulting page is delayed for the end user
due to time spent and additional bytes downloaded.
The Parameters
In order to take advantage of the Strict-Transport-Security header from both a performance and security point of view at the browser, the associated parameters must be utilized. These parameters include the max-age, includeSubDomains, and preload directives:

Strict-Transport-Security:
    max-age=expireTime
    [; includeSubDomains]
    [; preload]
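To make the directives concrete, the following sketch assembles a header value from them. This is illustrative helper code of our own; the function name and the one-year max-age value are assumptions, not values from this book:

```javascript
// Sketch: build a Strict-Transport-Security header value from policy
// options. The parameter names mirror the directives described above.
function buildHstsHeader(options) {
  var parts = ['max-age=' + options.maxAge];
  if (options.includeSubDomains) parts.push('includeSubDomains');
  if (options.preload) parts.push('preload');
  return parts.join('; ');
}

// An illustrative policy: one year, covering all subdomains.
var header = buildHstsHeader({ maxAge: 31536000, includeSubDomains: true, preload: true });
console.log('Strict-Transport-Security: ' + header);
```

Once the browser has seen this header over HTTPS, it rewrites subsequent HTTP requests to the site into HTTPS requests for the lifetime of max-age, skipping the redirect round trip entirely.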
Last Thoughts
With a simple security technique, we eliminate man-in-the-middle
(MITM) attacks over a nonsecure connection. While this method
proves successful, the security protocol should be investigated to
ensure MITM attacks can be avoided over a secure connection as
well. Several SSL and TLS versions have exploitable vulnerabilities
that should be considered while moving to a secure experience and
deploying this security enhancement.
As a basic frontend technique, reducing the number of redirects, by default, reduces the number of browser requests, which can help to improve page load times. We are essentially moving the redirect from a proxy or origin infrastructure level to the browser for the next HTTP request. As with any additional header, developers are often worried about the size of requests being downloaded in the browser. The latest HTTP/2 provides header compression, which reduces the size of requests. Additionally, for nonchanging header values, HTTP/2 now maintains header state without having to re-send duplicate headers during a session. Given these new benefits, we can safely utilize additional security techniques such as Strict-Transport-Security without affecting overall page delivery performance. While a single HTTP header serves as a great example of bridging the gap, we will cover other techniques such as Content-Security-Policy to address both performance and security concerns in a similar manner.
Example 3-1. Google resource
<script
async
type="text/javascript"
src="https://www.googletagservices.com/tag/js/gpt.js">
</script>
Not only does the browser load the initial resource, but the browser continues to load subsequent embedded third party resources in Figure 3-1, which can lead us to the mentioned vulnerabilities.

At any point, these additional resources in Figure 3-1 can fail, slow the page load, or become compromised and deliver malicious content to end users. Major headlines indicate how often both ad content and vendor content are compromised, and this trend will only continue to grow as web traffic grows. Given the evolving nature of third party content, we as developers cannot prevent these situations from happening, but we can better adapt our sites to handle these situations when they occur.
Sandboxing
By definition, the sandboxing concept involves separating individual running processes for security reasons. Within the web development world, this technique allows developers to execute third party content with additional security measures in place, separate from first party content. With that, we can restrict third party domains from gaining access to the site and end users.

As mentioned, developers are well aware of the <iframe> tag; however, HTML5 introduced a new sandbox parameter shown in Example 3-5 that provides an additional layer of security at the browser while maintaining the performance benefits associated with the <iframe> tag. In a similar way, Content-Security-Policy provides us with the same method of sandboxing third party content through use of the header or <meta> tag equivalent as shown in Example 3-6.
Using the sandbox attribute alone, we can prevent scripts and/or plugins from running, which can be useful, for example, when loading third party images.

We are also given flexibility in how third parties can execute content on first party websites by using a whitelist method, which allows developers to protect end users by specifying exactly what third parties can display or control when loading content. The sandbox attribute has the following options:
allow-forms
allow-modals
allow-orientation-lock
allow-pointer-lock
allow-popups
allow-popups-to-escape-sandbox
allow-same-origin
allow-scripts
allow-top-navigation
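As a hedged illustration of the attribute in use (the frame URL and the chosen tokens are our own, not taken from Example 3-5):

```html
<!-- The frame may run scripts and submit forms, but without
     allow-same-origin it is treated as a unique origin, so it cannot
     read first party cookies, storage, or the parent page's DOM. -->
<iframe src="https://widgets.thirdparty.example/ad.html"
        sandbox="allow-scripts allow-forms">
</iframe>
```

The Content-Security-Policy sandbox directive accepts the same tokens, applying equivalent restrictions to the document that serves the header.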
Inline Code
When working with certain third parties such as analytics vendors,
developers are often given segments of code to insert inline into the
base pages of websites as shown in Example 3-9.
In doing so, not only are we introducing the risk of single point of
failure by allowing these resources to be downloaded without pre
Embedding inline code via the <iframe> tag ensures that third party content, such as the analytics code shown above, will not affect the remaining resource downloads. Again, using this method, we have the ability to avoid a single point of failure and protect end users at the browser in case of compromised content.
Generally, many sites, such as social media sites, include third party content, which is subsequently loaded with the current URL being used as a Referer header for these third parties as shown in Example 3-12. In doing so, we are essentially leaking information about the end user and his/her session to these third parties, so privacy is no longer guaranteed.
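One hedged sketch of limiting this leak, assuming the third party embed is loaded in an <iframe> (the URL is illustrative): the referrerpolicy attribute instructs the browser to omit the Referer header when requesting the framed content.

```html
<!-- With no-referrer, the framed request carries no Referer header,
     so the end user's current URL is not exposed to the third party. -->
<iframe src="https://social.thirdparty.example/embed.html"
        referrerpolicy="no-referrer">
</iframe>
```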
Last Thoughts
Overall, both the <iframe> tag and Content-Security-Policy techniques prove to be useful in situations that result in performance issues and/or security issues. More specifically, the newly introduced directives including sandbox, srcdoc, referrerpolicy, and referrer allow developers to improve the frontend user experience in a secure manner.

As mentioned in the beginning of this chapter, Content-Security-Policy is often overlooked due to the required maintenance and the added bytes to requests when downloaded in a browser. Again, with HTTP/2, we are given header compression and maintained header state, which allows more room for utilizing security techniques such as Strict-Transport-Security and Content-Security-Policy.

We have other ways in which we can utilize Content-Security-Policy along with other frontend optimization techniques to achieve similar security and performance benefits, which will be explored in the next chapter.
CHAPTER 4
Web Linking
ity) to achieve an improved frontend user experience without
degrading site performance.
Last Thoughts
While pairing Link and Content-Security-Policy techniques, we
are able to improve page delivery while applying distinct security
measures to particular types of resources, such as JavaScript objects
and stylesheet objects. All resources are not created equal so they
should not be treated equal with a global security policy. Script type
resources may require more security measures versus style type
resources, and so, the AS attribute provides a method to associate
policies on a per-resource type basis.
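For illustration, a response might pair a preload hint with a script-scoped policy along these lines (the header values are our own sketch, not from this book):

```http
Link: <https://first.party.example/app.js>; rel=preload; as=script
Content-Security-Policy: script-src 'self' https://first.party.example
```

The as=script token tells the browser what kind of resource it is fetching early, so the fetch is subject to the script-src policy rather than a global one.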
CHAPTER 5
Obfuscation
Example 5-1. Original URL
http://www.example.com/test?file=/../etc/passwd
Concept
Stepping away from the traditional application of obfuscation, we will now proxy and obfuscate third party URLs. Under normal circumstances, the browser parses a page and fetches third party resources from a third party provider as shown in Example 5-3. If we rewrite third party URLs to use a first party URL and an obfuscated path/filename as shown in Example 5-4, the flow will change with the introduction of a reverse proxy. The reverse proxy has been introduced in Figure 5-1 to provide a way to interpret obfuscated requests, which can be done through use of Varnish, Apache mod_rewrite functionality, any other reverse proxy that allows request rewrites, or simply a content delivery network. The browser will now parse a page and fetch obfuscated content, and the reverse proxy will then interpret the obfuscated request and fetch third party content on behalf of the end user's browser. In doing so, we will achieve both enhanced security and performance at the browser.
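A minimal sketch of the rewrite step in Node-style JavaScript. The base64url scheme, the /obf/ path prefix, and the origins are our own illustrative choices; any reversible mapping the reverse proxy understands would do:

```javascript
// Rewrite a third party URL into a first party obfuscated URL that a
// reverse proxy can later decode. base64url keeps the token path-safe.
function obfuscateUrl(thirdPartyUrl, firstPartyOrigin) {
  var token = Buffer.from(thirdPartyUrl).toString('base64url');
  return firstPartyOrigin + '/obf/' + token;
}

// The reverse proxy inverts the mapping before fetching the real resource.
function deobfuscateUrl(obfuscatedUrl) {
  var token = new URL(obfuscatedUrl).pathname.replace('/obf/', '');
  return Buffer.from(token, 'base64url').toString();
}
```

In practice the decoding happens server side: the proxy decodes the token and fetches the third party resource on the browser's behalf, so the page itself never exposes the vendor's domain.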
Caching
Introducing a reverse proxy into the flow provides an additional
layer of caching for third party content, which ultimately brings
resources closer to end users.
in the browser due to HTTP/1.1 properties. DNS Lookup is the time
spent to perform a domain lookup while Connection is the time
spent when the browser initiates a connection to the resolved
domain address. Both of these metrics contribute to how long a
browser will take to load a resource.
Content-Security-Policy
Recall that Content-Security-Policy is a security technique aimed at whitelisting known third party domains, while prohibiting unknown third party domains from accessing and executing content on a site. When used as a header, it can grow large and often becomes harder to maintain, as shown in Examples 5-5 and 5-6. Not only are we at risk of a large header delaying resource download time, but we are essentially exposing the sources of third party content as shown in the following examples. As mentioned earlier, attackers will target vendor content in order to bring a site down. That being said, we need to ensure information about vendor content is concealed as much as possible.
www-blogger-opensocial.googleusercontent.com
*.blogspot.com; report-uri /cspreport
Content-Security-Policy:
script-src 'self' *.obf1.firstparty.com 'unsafe-inline'
*.obf2.firstparty.com ; report-uri /cspreport
CHAPTER 6
Service Workers:
An Introduction
1 Using Service Workers, Mozilla Development Network, accessed October 13, 2016,
https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/
Using_Service_Workers
As displayed in Figure 6-1, a service worker lives in the browser while sitting in between the page/browser and the network, so that we can intercept incoming resource requests and perform actions based on predefined criteria. Because of this architecture, service workers address the need to handle offline experiences, or experiences with terrible network connectivity.
These event listeners include the install and activate events, which allow developers to set up any necessary infrastructure, such as offline caches, prior to resource handling. Once a service worker has been installed and activated on the first page request, it can then begin intercepting any incoming resource requests on subsequent page requests using functional events. The available events are fetch, sync, and push; we will focus on leveraging the fetch event in the next few chapters. The fetch event hijacks incoming network requests and allows the browser to fetch content from different sources based on the request, or even block content if desired.
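A minimal sketch of such a fetch handler. The blocked hostname is illustrative, and the decision logic is factored into a plain function; the listener wiring only runs inside a real service worker context:

```javascript
// Decide what to do with a requested URL. Illustrative policy: block one
// known-bad host, let everything else go to the network.
function chooseAction(url) {
  var blocked = ['bad.thirdparty.example'];
  return blocked.indexOf(new URL(url).hostname) !== -1 ? 'block' : 'network';
}

// Wiring as it would appear in a service worker file; `self` only exists
// in a worker context, so this is skipped elsewhere.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', function (event) {
    if (chooseAction(event.request.url) === 'block') {
      // Respond with an empty 403 instead of fetching the resource.
      event.respondWith(new Response('', { status: 403 }));
    }
    // Otherwise fall through: the browser performs the normal fetch.
  });
}
```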
2 Service Workers: An Introduction, Matt Gaunt, accessed October 13, 2016, http://
www.html5rocks.com/en/tutorials/service-worker/introduction
CHAPTER 7
Service Workers:
Analytics Monitoring
Let's jump right into the first application of service workers: analytics monitoring.
Figure 7-1. Performance monitoring tools
Each of these has unique capabilities bundled with the data and metrics they provide for measuring performance. Some measure performance over time, while others capture single snapshots or individual request transactions. And of course, some take the approach of real user monitoring (RUM), versus simulated traffic performance monitoring.
So what do third party analytics monitoring tools have to do with service workers? Typically, these tools expose their services via REST APIs. Given this approach, these tools are unable to track and provide data for offline experiences. As we advance in technology, year by year, we are constantly coming up with new ways to provide new types of experiences for our end users. If performance metrics have that much of an impact on the business and on the end users, then it's critical that we provide that data for offline experiences as well.
self.addEventListener('fetch', function(event) {
  // Find the registered analytics service and post a log message to it.
  navigator.services.match({name: 'analytics'}).then(function(port) {
    port.postMessage('log fetch');
  });
});
These service workers are then able to report these metrics when
connectivity is reestablished so that they can be consumed by the
service. Numerous implementation strategies exist for reporting the
metrics; for example, we can leverage background sync so that we
do not saturate the network with these requests once the user
regains connectivity.
Now let's broaden the scope from third party analytics tools to all third party content. More specifically, let's discuss how to control the delivery of third party content.
JavaScript resource and then performs some type of check based on
a predefined list of safe third party domains, or using a predefined
list of known bad third party domains. Essentially, the fetch event
uses some type of list that acts as a whitelist or blacklist.
fetch(policyRequest).then(function(response) {
  return response.text().then(function(text) {
    result = text.toString();
  });
});
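Once the policy text has been retrieved, a per-request check along these lines could be applied. This is a sketch; the comma-separated hostname format is our assumption about the policy file, not the book's:

```javascript
// Decide whether a requested URL's host appears in the fetched policy.
// `policyText` is assumed to be a comma-separated list of hostnames,
// acting as either a whitelist or a blacklist depending on site policy.
function hostListedIn(policyText, requestUrl) {
  var hosts = policyText.split(',').map(function (h) { return h.trim(); });
  return hosts.indexOf(new URL(requestUrl).hostname) !== -1;
}
```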
Analysis
Note the following method in particular:
getCounter(event, rspFcn);
This method fetches the current state of the counter for a third party domain. Remember that, for each fetch event, we can gather a fetch time for each resource. But the counter needs to be maintained globally, across several fetch events, which means we need to be able to beacon this data out to some type of data store so that we can fetch and retrieve it at a later time. The implementation details behind this method have not been included, but there are several strategies. For the purposes of the example, we were able to leverage Akamai's content delivery network capabilities to maintain count values for various third party domains.
Upon retrieving the counter value, we have a decision to make as
seen in the implementation:
updateCounter(event.request.url);
Again, the implementation details for this method have not been included, but you will need to be able to beacon out to a data store to increment this counter. If the resource did not hit the threshold value, then there is no need to update the counter. In both cases, we can store the third party content in the offline cache so that the next time a fetch event is triggered for the same resource, we have the option to serve that content from the cache.
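The decision flow can be sketched as follows. In the book's example, getCounter and updateCounter beacon out to an external data store; here an in-memory Map stands in so the control flow is visible end to end, and the names, thresholds, and return values are our own:

```javascript
// Track how many times a domain has exceeded the fetch-time threshold.
// In production this state would live in an external data store, not in
// worker memory, so it survives across fetch events and worker restarts.
var counters = new Map();

function recordFetch(domain, fetchTimeMs, thresholdMs, maxSlowFetches) {
  if (fetchTimeMs > thresholdMs) {
    var count = (counters.get(domain) || 0) + 1; // the updateCounter step
    counters.set(domain, count);
    // Once a domain has been slow too often, prefer the offline cache.
    if (count >= maxSlowFetches) return 'serve-from-cache';
  }
  return 'serve-from-network';
}
```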
Sample code
Example 8-3 shows a more complete example of the pseudocode in Example 8-2.
Last Thoughts
There are numerous ways to implement the getCounter and updateCounter methods, so long as there exists the capability to beacon out to some sort of data store. Also, Example 8-3 can be expanded to count the number of times a resource request has exceeded other metrics that are available for measurement (not just the fetch time).

In Example 8-3, we took extra precautions to ensure that third parties do not degrade performance and do not result in a single point of failure. By leveraging service workers, we make use of their asynchronous nature, so there is a decreased likelihood of any impact to the user experience or the DOM.
CHAPTER 9
Service Workers:
Other Applications
Input Validation
Input validation strategies typically involve client-side JavaScript,
server-side logic, or other content delivery network/origin logic in
an effort to not only prevent incorrect inputs or entries, but also to
prevent malicious content from being injected that could potentially
impact a site overall. The problem with some of the above strategies
is that a site still remains vulnerable to attacks.
With client-side JavaScript, anyone can look to see what input validation strategies are in place and find a way to work around them for different attacks, such as SQL injections, which could impact the end user's experience. With server-side logic or other content delivery network/origin features, the request has to go to the network before being validated, which could impact performance for the end user.
How can service workers mitigate some of these vulnerabilities? Let's use the service worker fetch handler to validate the input field and determine whether to forward or block a resource request. Of course, service workers can be disabled, as with JavaScript, but it is up to the developer to put backup server-side strategies in place as a preventative measure.
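As a hedged sketch, the check the fetch handler runs might look like the following. The pattern flags characters and keywords common in SQL injection probes; a real policy would be field-specific, and server-side validation remains the backstop:

```javascript
// Flag input values containing quote/comment characters or SQL keywords
// often seen in injection attempts. A fetch handler could block (or strip)
// a request whose parameters trip this check before it reaches the network.
function isSuspiciousInput(value) {
  return /['";]|--|\b(union|select|insert|drop)\b/i.test(value);
}
```

Because the check runs in the browser, obviously bad requests never cost a network round trip; requests that pass are still validated again at the server.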
Benefits of using service workers:
A Closer Look
Let's take a look at Example 9-2. During service worker registration, different GeoFence regions would be added, along with any additional offline caches for content.

Once the service worker is active, it can start listening for users or devices entering or leaving the GeoFence we set up during registration (Example 9-3).
Last Thoughts
Input validation and geo content control are just a couple more service worker applications, but the use cases and applications will continue to increase as we advance with this technology. The idea is to take backend solutions and bring them to the browser in an effort to mitigate some of the common security and performance issues we see today.
What Did We Learn?
Over the course of this book, we have explored several existing techniques as well as newer technologies to help achieve an optimal frontend experience that is also secure. Keep these simple yet powerful points in mind:
About the Authors
Sonia Burney has a background in software development and has been able to successfully participate in many roles throughout her years at Santa Clara University and in the tech world. Every role, at every company, has driven her to learn more about the tech industry, specifically with regards to web experience and development. While Sonia's background consists of mostly software development roles within innovative teams/companies, her current role at Akamai Technologies now includes consulting and discovering new solutions to challenging problems in web experience, specifically coming up with algorithms designed to improve the frontend experience at the browser. Outside of work, not only is she a dedicated foodie, but she enjoys traveling, running, and spending time with friends and family.
Sabrina Burney has worked in many different fields since graduating from Santa Clara University. She has a background in computer engineering and has always had a passion for technologies in the IT world. This passion stems from learning about newer tech being developed as well as enhancing tech that is already present and underutilized. While Sabrina currently works at Akamai Technologies, her experience inside and outside of Akamai includes roles in software development and web security, as well as more recently the web experience world. She is able to utilize her backgrounds in multiple fields to help improve the overall end user experience when it comes to navigating the Web. Sabrina's recent work is focused on third-party content and ways to improve the associated vulnerabilities and concerns; she has several patents pending in this subject area. Outside of work, she enjoys playing soccer with her fellow coworkers as well as traveling with her family.