
This is an electronic reprint of the original article.

This reprint may differ from the original in pagination and typographic detail.

Mohanty, Sunil Kumar; Premsankar, Gopika; Di Francesco, Mario


An evaluation of open source serverless computing frameworks

Published in:
Proceedings - IEEE 10th International Conference on Cloud Computing Technology and Science, CloudCom
2018

DOI:
10.1109/CloudCom2018.2018.00033

Published: 26/12/2018

Document Version
Peer-reviewed accepted author manuscript, also known as Final accepted manuscript or Post-print

Please cite the original version:


Mohanty, S. K., Premsankar, G., & Di Francesco, M. (2018). An evaluation of open source serverless computing
frameworks. In Proceedings - IEEE 10th International Conference on Cloud Computing Technology and
Science, CloudCom 2018 (pp. 115-120). Article 8591002 IEEE.
https://doi.org/10.1109/CloudCom2018.2018.00033

This material is protected by copyright and other intellectual property rights, and duplication or sale of all or
part of any of the repository collections is not permitted, except that material may be duplicated by you for
your research use or educational purposes in electronic or print form. You must obtain permission for any
other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not
an authorised user.



An evaluation of open source
serverless computing frameworks
Sunil Kumar Mohanty, Gopika Premsankar, Mario Di Francesco
Department of Computer Science, Aalto University, Finland

©2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract—Recent advancements in virtualization and software architecture have led to the new paradigm of serverless computing, which allows developers to deploy applications as stateless functions without worrying about the underlying infrastructure. Accordingly, a serverless platform handles the lifecycle, execution and scaling of the actual functions; these need to run only when invoked or triggered by an event. Thus, the major benefits of serverless computing are low operational concerns and efficient resource management and utilization. Serverless computing is currently offered by several public cloud service providers. However, there are certain limitations on the public cloud platforms, such as vendor lock-in and restrictions on the computation of the functions. Open source serverless frameworks are a promising solution to avoid these limitations and bring the power of serverless computing to on-premise deployments. However, these frameworks have not been evaluated before. Thus, we carry out a comprehensive feature comparison of popular open source serverless computing frameworks. We then evaluate the performance of selected frameworks: Fission, Kubeless and OpenFaaS. Specifically, we characterize the response time and ratio of successfully received responses under different loads and provide insights into the design choices of each framework.

Index Terms—serverless computing, function-as-a-service, Kubeless, Fission, OpenFaaS, performance evaluation

I. INTRODUCTION

Serverless computing is an emerging paradigm wherein software applications are decomposed into multiple independent stateless functions [1, 2]. Functions are only executed in response to triggers (such as user interactions, messaging events or database changes), and can be scaled independently as they are completely stateless. Hence, serverless computing is also sometimes referred to as function-as-a-service (FaaS) [3]. In this approach, almost all operational concerns are abstracted away from developers. In fact, developers simply write code and deploy their functions on a serverless platform [4]. The platform then takes care of function execution, storage, container infrastructure, networking, and fault tolerance. Additionally, the serverless platform takes care of scaling the functions according to the actual demand. Serverless computing has been identified as a promising approach for several applications, such as those for data analytics at the network edge [5, 6], scientific computing [7] and mobile computing [8].

In serverless computing, the infrastructure is generally managed by a third-party service provider or an operations team when using a private cloud. Currently, all major cloud service providers offer solutions for serverless computing, namely, Amazon Web Services (AWS) Lambda, Azure Functions, IBM Cloud Functions and Google Cloud Functions. However, these platforms require the functions to be written in a certain way, resulting in vendor lock-in [2, 9]. Moreover, developers have to rely on the serverless provider's release cycle and additional services from the cloud platform such as message queuing and data storage. They also have to comply with constraints on function code size, execution duration and concurrency [10].

Open source FaaS frameworks are a promising solution to bring the power of serverless computing on-premise. Such frameworks provide more flexibility (for deploying applications, configuring the framework, etc.) and thereby avoid vendor lock-in. For instance, open source frameworks can be deployed both on edge/fog devices as well as on the public cloud for distributed data analytics [5, 6]. In this regard, a serverless framework should be easy to set up, configure and manage; it should also provide certain performance guarantees. Although recent works have focused on serverless platforms in the public cloud [10, 11], none has evaluated open source FaaS frameworks. In contrast, this work provides a comprehensive feature comparison of popular open source serverless frameworks, namely, Kubeless [12], OpenFaaS [13], Fission [14] and Apache OpenWhisk [15]. Furthermore, it evaluates the performance (in terms of response time and ratio of successful responses) of these frameworks under different workloads and provides insights into the design choices behind them. It finally examines the impact of auto scaling on performance.

The rest of the article is organized as follows. Section II describes the considered frameworks and analyzes their features. Section III evaluates the performance of selected frameworks. Section IV reviews the related work. Finally, Section V provides concluding remarks as well as directions for future work.

II. OPEN SOURCE SERVERLESS COMPUTING FRAMEWORKS

This section describes four popular open source serverless frameworks, namely, Fission, Kubeless, OpenFaaS and OpenWhisk. We chose frameworks with at least 3,000 GitHub stars (a mark of appreciation from users). Table I summarizes their features. All the considered frameworks run each serverless function in a separate Docker container to provide isolation. OpenFaaS, Kubeless and Fission utilize a container orchestrator to manage the networking and lifecycle of the containers, whereas OpenWhisk may be deployed with or without an orchestrator. We present a short summary of the frameworks, highlighting their main components.
Feature | Kubeless | OpenWhisk | Fission | OpenFaaS
Open source license | Apache 2.0 [12] | Apache 2.0 [15] | Apache 2.0 [14] | MIT [13]
Framework development language | Go | Scala | Go | Go
Programming languages supported | Python, Node.js, Ruby, PHP, Go, Java, .NET and custom containers [16] | Javascript, Swift, Python, PHP, Java, Linux binaries (including Go) and custom containers [17] | Python, Node.js, Ruby, Perl, Go, Bash, .NET, PHP and custom containers [18] | Python, C#, Go, Node.js, Ruby and custom containers [19]
Auto scaling metric | CPU utilization, QPS and custom metrics [20] | QPS | CPU utilization [21] | CPU, QPS and custom metrics [22]
Container orchestrator | Kubernetes | No orchestrator required, Kubernetes supported [23] | Kubernetes | Kubernetes [24], Docker Swarm [25], extendable to other orchestrators [26]
Function triggers | http, event, schedule [27] | http [28], event [29], schedule [30] | http, event, schedule [31] | http [32], event [33]
Message queue integration | Kafka, NATS [34] | Kafka [35] | NATS, Azure storage queue [31] | NATS [33], Kafka [36]
Recommended monitoring tool | Prometheus [37] | statsd | Istio [38] | Prometheus [39]
CLI support | Yes | Yes | Yes | Yes
Industry support | Bitnami | IBM, Adobe, RedHat, Apache Software Foundation among others | Platform9 | VMWare [40]
GitHub stars | 3,009 [12] | 3,303 [15] | 3,412 [14] | 10,608 [13]
GitHub forks | 273 [12] | 629 [15] | 277 [14] | 767 [13]
GitHub contributors | 61 [12] | 120 [15] | 65 [14] | 68 [13]

TABLE I: Overview of features.

Fission is an open source serverless computing framework built on top of Kubernetes and using many Kubernetes-native concepts [14]. The framework executes a function inside an environment that contains a webserver and a dynamic language-specific loader required to run the function [18]. An executor controls how function pods are created and scaled. One of the main advantages of Fission is that it can be configured to run a pool of "warm" containers so that requests are served with very low latencies [41].

Kubeless is a Kubernetes-native serverless framework [12]. It uses Custom Resource Definitions (CRDs) [42] to extend the Kubernetes API and create functions as custom objects. This allows developers to use the native Kubernetes APIs to interact with the functions as if they were native Kubernetes objects. The language runtime is packaged in a container image. The Kubeless controller continuously watches for changes to function objects and takes the necessary action. For instance, if a function object is created, the controller creates a pod for the function and cleans up resources when the function object is deleted. A function's runtime is encapsulated in a container image and Kubernetes configmaps (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) are used to inject a function's code in the runtime.

OpenFaaS is an open source serverless framework for Docker and Kubernetes [13]. The OpenFaaS CLI is used to develop and deploy functions to OpenFaaS. Only the function and handler have to be supplied by the developer, and the CLI handles the packaging of the function into a Docker container. The container comprises a function watchdog, i.e., a webserver that acts as an entry point for function calls within the framework. An API gateway provides an external interface to the functions, collects metrics and handles scaling by interacting with the container orchestrator plugin.

OpenWhisk is an open source serverless computing framework initially developed by IBM and later part of the Apache Incubator project [15]. It is also the underlying technology of the Cloud Functions FaaS product on IBM's public cloud. The OpenWhisk programming model is based on three primitives: Action, Trigger and Rule [8]. Actions are stateless functions that execute code. Triggers are a class of events that can originate from different sources. Rules associate a trigger with an action. The scalability of functions is directly managed by the OpenWhisk controller.
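As a purely conceptual illustration of the Action/Trigger/Rule model (not OpenWhisk's actual API; the types and names below are hypothetical), the relationship between the three primitives can be sketched in Go as follows.

```go
package main

import "fmt"

// Action is a stateless function that receives event parameters
// and returns a result (conceptually similar to an OpenWhisk action).
type Action func(params map[string]string) map[string]string

// Rule binds the name of a trigger to an action: whenever the
// trigger fires, the associated action is invoked.
type Rule struct {
	Trigger string
	Action  Action
}

// fire looks up every rule bound to the given trigger and runs its action.
func fire(rules []Rule, trigger string, params map[string]string) {
	for _, r := range rules {
		if r.Trigger == trigger {
			fmt.Println(r.Action(params))
		}
	}
}

func main() {
	echo := func(params map[string]string) map[string]string { return params }
	rules := []Rule{{Trigger: "http-request", Action: echo}}
	fire(rules, "http-request", map[string]string{"msg": "hello"})
}
```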
III. EVALUATION

We evaluate the performance of Fission, Kubeless and OpenFaaS when deployed on a Kubernetes cluster. We choose Kubernetes as it is the only orchestrator supported by all the considered frameworks. We do not include OpenWhisk due to issues faced in setup and its minimal dependence on Kubernetes for orchestration tasks (the interested reader may refer to [43] for a performance evaluation of OpenWhisk).
Fig. 1: Median response time for each serverless framework with (a) 1 replica, (b) 25 replicas, and (c) 50 replicas.

This section first describes the experimental setup. Then it discusses the impact of the workload and auto scaling on the framework performance. Finally, it provides a summary of the observations from the results.

A. Experimental setup

We run the experiments on Google Kubernetes Engine (GKE, https://cloud.google.com/kubernetes-engine/). The deployment on GKE is similar to one on a custom Kubernetes cluster deployed on virtualized instances. The serverless framework interacts directly with the Kubernetes cluster manager. We use Kubernetes version 1.10.4-gke.2 (the latest version available at the time of writing) to set up a cluster with three worker nodes. Each worker node has 2 vCPUs, 7.5 GB RAM and runs the Container-Optimized OS. The cluster is set up in the europe-north1 region and all nodes are located within the same zone to minimize the network latency they experience. Unless otherwise stated, we deployed each framework with the default settings in the respective installation guide. We set up Fission version 0.8.0 and use the newdeploy executor [41] as it supports auto scaling of functions. We use Kubeless version 1.0.0-alpha.6 and the Nginx Ingress controller to provide routing to the functions. The OpenFaaS installation consists of the following components: gateway (v0.7.9), faas-netes (v0.5.1), Prometheus (v2.2.0), alert manager (v0.15.0-rc.0), queue worker (v0.4.3) and faas-cli (v0.6.9). The HTTP watchdog mode is used as this will be the default mode of OpenFaaS in the future.

We use the Apache Benchmark (ab) tool (https://httpd.apache.org/docs/2.4/programs/ab.html) to generate HTTP requests that invoke the functions deployed on each framework. We run the ab tool on a virtual machine (VM) located in the same zone as the Kubernetes cluster, again, to minimize network latency. The VM has 2 vCPUs, 7.5 GB RAM and runs Debian GNU/Linux 9.4 (stretch). We configure the ab tool to send 10,000 requests with different levels of concurrency (1, 5, 10, 20, 50 or 100 concurrent requests). The concurrency level affects the number of requests received simultaneously by the framework. We carry out the experiments according to the independent replication method with at least 5 iterations to achieve adequate statistical significance.

B. Impact of concurrent users

First, we measure the average response time and the ratio of successfully received responses under different levels of concurrent requests. Our aim is to isolate any performance issues due to the architecture of the framework itself. To this end, we write a simple function in Go that takes a string as input and sends the same string as the response. We choose this function to have minimal overhead in terms of the function logic and its dependencies. We deploy the function on each framework and invoke it through HTTP. We disable auto scaling and run a fixed number of function replicas (1, 25 or 50) in each experiment. By doing so, we avoid possible increases in response times when scaling out functions, i.e., when creating new function pods/containers. We repeat each experiment 10 times to improve the accuracy of the results.
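A minimal sketch of such an echo function, written here as a plain Go net/http handler purely for illustration, is shown below; the exact handler signature and packaging differ for each framework's runtime (Fission environment, Kubeless runtime, OpenFaaS watchdog), so this is indicative rather than the code actually deployed.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// echoHandler reads the request body and writes the same bytes back,
// mirroring the minimal string-echo behavior used in the experiments.
func echoHandler(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "cannot read request body", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK)
	w.Write(body)
}

func main() {
	http.HandleFunc("/", echoHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```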
Figure 1 shows the median response time across all iterations (i.e., 100,000 requests in total) for the different frameworks. The lowest median response time is achieved by Fission, with values around 2 ms in all scenarios. We observe that Kubeless and OpenFaaS maintain a response time below 80 ms across all scenarios. We also note that the response times do not show a significant change as the number of function replicas increases. In fact, functions deployed on Kubeless with 50 replicas for 100 concurrent requests obtain a response time slightly higher (by around 10 ms) than that with fewer function replicas. This indicates that it is possible to serve all requests for such a simple function with just one replica.

A closer examination of the results reveals that the response times for Fission have a significant number of outliers as the concurrency of requests increases over 50. In this respect, Figure 2 shows the response times (on a log scale) for 100 concurrent requests and one function replica for the three frameworks. OpenFaaS and Kubeless perform quite similarly and all responses are received within 400 ms. On the other hand, for Fission there are several outliers: 1,336 responses take more than 1 s and the longest response time goes up to 20 s.
Fig. 2: Response time with one function replica and 100 concurrent requests.

Framework (replicas) | 1 | 5 | 10 | 20 | 50 | 100 (concurrent users)
Kubeless (1) | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Kubeless (25) | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Kubeless (50) | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Fission (1) | 100.00 | 99.90 | 99.84 | 99.78 | 99.54 | 99.32
Fission (25) | 100.00 | 99.89 | 99.85 | 99.77 | 99.48 | 99.19
Fission (50) | 100.00 | 99.88 | 99.81 | 99.79 | 99.61 | 99.31
OpenFaaS (1) | 99.95 | 99.99 | 99.91 | 99.58 | 98.73 | 98.27
OpenFaaS (25) | 100.00 | 100.00 | 99.92 | 99.67 | 97.76 | 96.04
OpenFaaS (50) | 100.00 | 100.00 | 99.93 | 99.61 | 97.48 | 96.52

TABLE II: Success ratio (in %) of all requests for different serverless frameworks.

Fig. 3: Median response time as a function of time during auto scaling for (a) Fission, (b) Kubeless, and (c) OpenFaaS.

The large number of outliers for Fission pushes its average response time to 176 ms, whereas this behavior is not seen for OpenFaaS (74 ms) or Kubeless (79 ms). We also observe the same in other experiments with lower concurrency of requests and for higher numbers of function replicas. Thus, the performance of Fission deteriorates at high workloads regardless of the number of function replicas. We attribute this to the router component of Fission that forwards all incoming HTTP requests to the appropriate function. This component becomes a bottleneck as the workload increases. On the other hand, Kubeless relies on native Kubernetes components as far as possible: it utilizes the Kubernetes Ingress controller to route requests and balance the load. This component is at a more mature state, having been supported by Kubernetes since version 1.1 (available in 2015).

Next, we examine the ratio of successful responses for different levels of concurrency and number of function replicas (Table II). The table reports the success ratio over all ten iterations of the experiment. As the functions are invoked via HTTP, we consider any response without a 2xx response code as a failed request. We observe that Kubeless obtains the best performance with a 100% success ratio across all experiments, i.e., all HTTP responses were successfully received. Fission also manages to keep the success ratio above 99% even at higher levels of concurrency. However, we observe that the success ratio of OpenFaaS drops to 98% or below when the number of concurrent requests is 50 or more. Furthermore, the success ratio is higher when only one function replica is present. This trend was seen consistently across multiple runs. We attribute this to the architecture of OpenFaaS wherein every function call has to go through multiple steps, resulting in many different points of failure. For instance, the HTTP requests and responses need to be processed by the gateway, faas-netes and the watchdog. Hence, the gateway and faas-netes can become bottlenecks (due to design or engineering issues) when the rate of incoming requests is high.

C. Impact of auto scaling

We now examine the impact of auto scaling on the response time and the ratio of successfully received responses. We choose to scale functions based on CPU utilization as Fission supports only this scaling metric. Accordingly, we consider a CPU-intensive function (in Go) that multiplies a 1,000 by 1,000 matrix on each invocation. This allows us to reach the CPU utilization threshold faster than with the previous function. We start each iteration of the experiment with a single function replica and set the threshold for CPU utilization at 50%. This implies that a CPU utilization exceeding 50% should trigger the creation of more function replicas.
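A sketch of this kind of CPU-bound function is given below; it is illustrative only (the HTTP wiring and the fixed input values are assumptions), but it reflects the description above: each invocation performs a 1,000 by 1,000 matrix multiplication.

```go
package main

import (
	"fmt"
	"net/http"
)

const n = 1000 // matrix dimension, matching the experiments

// multiply performs a naive n x n matrix multiplication to load the CPU.
func multiply(a, b [][]float64) [][]float64 {
	c := make([][]float64, n)
	for i := 0; i < n; i++ {
		c[i] = make([]float64, n)
		for k := 0; k < n; k++ {
			for j := 0; j < n; j++ {
				c[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return c
}

// handler builds two constant matrices and multiplies them on every request.
func handler(w http.ResponseWriter, r *http.Request) {
	a := make([][]float64, n)
	b := make([][]float64, n)
	for i := 0; i < n; i++ {
		a[i] = make([]float64, n)
		b[i] = make([]float64, n)
		for j := 0; j < n; j++ {
			a[i][j], b[i][j] = 1.0, 1.0
		}
	}
	c := multiply(a, b)
	fmt.Fprintf(w, "done: c[0][0]=%f\n", c[0][0])
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```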
All frameworks use the Kubernetes Horizontal Pod Autoscaler (HPA, https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to perform scaling based on CPU utilization [20–22]. We use the ab tool to send 10,000 requests with 10 concurrent users and repeat each experiment 5 times.
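For reference, the HPA derives the desired replica count from the ratio of observed to target utilization; the sketch below is our own illustration of that documented scaling rule, not code from any of the evaluated frameworks.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the Horizontal Pod Autoscaler scaling rule:
// desired = ceil(current * observedUtilization / targetUtilization).
// With a 50% target, sustained utilization above 50% yields more replicas.
func desiredReplicas(current int, observedUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * observedUtil / targetUtil))
}

func main() {
	// One replica at 90% CPU with a 50% target scales out to two replicas.
	fmt.Println(desiredReplicas(1, 90, 50)) // prints 2
}
```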
All the frameworks leave the scaling decisions to the Kubernetes HPA. However, we notice that the ratio of successful responses and the distribution of response times varies between frameworks. Both Kubeless and OpenFaaS have a 100% success ratio across all experiments, whereas the success ratio for Fission is at 98.11%. Next, Figure 4 shows the distribution of response times over all the experiment runs. The median response time of OpenFaaS (1.1 s) is higher than the other two frameworks. Although Kubeless and Fission maintain a lower median response time (288 ms), the outliers reach a significantly higher value (up to 7 s). In fact, 50 responses (0.1%) took more than 3 seconds for Kubeless, whereas the occurrence of such outliers for the other frameworks is below 5.

Fig. 4: CDF of response time (in milliseconds) with auto scaling enabled for each serverless framework.

Next, we examine the variation of response time during a single iteration of the experiment. Accordingly, Figure 3 reports the values obtained by grouping all responses with a granularity of one second: their median response time as a solid line, as well as the corresponding minimum and maximum values as a gray band. In all cases, the median response time initially lies between 1 s and 1.5 s. Both Kubeless and Fission are able to scale out to more replicas approximately 100 s into the experiment and thus we see a reduced response time. However, Kubeless is able to maintain the low response time for a longer duration, whereas in the case of Fission the response time increases again after 260 s of the experiment run. OpenFaaS triggers a scaling request only after 200 s of the experiment. We also note that the total duration of the experiment is longer for OpenFaaS as the response time is quite high. This is because the ab tool waits for a response before sending more requests and the experiment only completes when all 10,000 requests have been sent.

D. Discussion

Our experimental results show that Kubeless has the most consistent performance across different scenarios. We attribute this to its simple architecture, the use of native Kubernetes components and its maturity. In fact, Kubeless has a version 1.0-alpha release whereas the other considered frameworks are at versions below 1.0. Clearly, all frameworks are under active development and are evolving rapidly. Nevertheless, our work is a first important step towards benchmarking the performance of serverless frameworks. Moreover, we note that some tuning is still required to achieve adequate performance, although serverless frameworks are expected to abstract away all scaling concerns from the developers [9]. With a simple "hello world" function, Kubeless and OpenFaaS maintain a low median and average response time below 80 ms. For more CPU-intensive functions, such as in the auto scaling experiments, the serverless framework itself may need to be scaled as well to avoid bottlenecks in individual components.

IV. RELATED WORK

Serverless computing is receiving increasing attention in academia [2, 7, 10, 43, 44]. Baldini et al. [2] summarize the general features of serverless platforms and describe open research problems in this area. Lynn et al. [10] present a feature analysis of seven enterprise serverless computing platforms, including AWS Lambda, Microsoft Azure Functions, Google Cloud Functions and OpenWhisk. Lee et al. [11] evaluate the performance of public serverless platforms by invoking CPU, memory and disk-intensive functions. They find that AWS Lambda outperforms other public cloud solutions. Furthermore, the authors highlight the cost-effectiveness of running functions on serverless platforms as compared to running them on traditional VMs. The authors also present a feature comparison of the public serverless platforms. Lloyd et al. [44] investigate the performance of functions deployed on AWS Lambda and Microsoft Azure Functions. They focus on the impact of infrastructure provisioning on public cloud platforms and identify variations in the functions' performance depending on the state (cold or warm) of the underlying VM or container. McGrath and Brenner [45] develop a prototype serverless platform implemented in .NET and using Windows containers for executing functions. The authors compare the performance of their prototype platform to AWS Lambda, Google Cloud Functions, Azure Functions and OpenWhisk. Shillaker [43] evaluates the response latency on OpenWhisk at different levels of throughput and concurrent functions. The author identifies research directions for improving start-up time in serverless frameworks by replacing containers with a new isolation mechanism in the runtime itself. However, none of these works specifically addresses open source serverless platforms (Fission, Kubeless, OpenFaaS).

V. CONCLUSION

This article analyzed the status of open source serverless computing frameworks. First, we carried out a comprehensive feature comparison of the most popular frameworks, Fission, Kubeless, OpenFaaS and OpenWhisk. Based on that, we found that OpenFaaS has the most flexible architecture with support for multiple container orchestrators and easy extendability.
Next, we evaluated the performance of Fission, Kubeless and OpenFaaS deployed on a Kubernetes cluster. Specifically, we characterized the response time and success ratio for functions deployed on these frameworks. We found that Kubeless has the most consistent performance across different scenarios. However, all frameworks are under active development and changes are expected before the alpha release of each framework. As future work, we aim to analyze the suitability of serverless computing for resource-constrained edge devices.

ACKNOWLEDGMENT

This work was partially supported by the Academy of Finland grant number 299222.

REFERENCES

[1] G. Adzic and R. Chatley, "Serverless computing: economic and architectural impact," in Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ACM, 2017, pp. 884–889.
[2] I. Baldini et al., "Serverless computing: Current trends and open problems," in Research Advances in Cloud Computing. Springer, 2017, pp. 1–20.
[3] A. Kanso and A. Youssef, "Serverless: beyond the cloud," in Proceedings of the 2nd International Workshop on Serverless Computing. ACM, 2017, pp. 6–10.
[4] B. Varghese and R. Buyya, "Next generation cloud computing: New trends and research directions," Future Generation Computer Systems, vol. 79, pp. 849–861, 2018.
[5] S. Nastic et al., "A serverless real-time data analytics platform for edge computing," IEEE Internet Computing, vol. 21, no. 4, pp. 64–71, 2017.
[6] A. Glikson, S. Nastic, and S. Dustdar, "Deviceless edge computing: extending serverless computing to the edge of the network," in Proceedings of the 10th ACM International Systems and Storage Conference. ACM, 2017, p. 28.
[7] E. Jonas, Q. Pu, S. Venkataraman, I. Stoica, and B. Recht, "Occupy the cloud: Distributed computing for the 99%," in Proceedings of the 2017 Symposium on Cloud Computing. ACM, 2017, pp. 445–451.
[8] I. Baldini, P. Castro, P. Cheng, S. Fink, V. Ishakian, N. Mitchell, V. Muthusamy, R. Rabbah, and P. Suter, "Cloud-native, event-based programming for mobile applications," in Proceedings of the International Conference on Mobile Software Engineering and Systems. ACM, 2016, pp. 287–288.
[9] A. Eivy, "Be wary of the economics of 'serverless' cloud computing," IEEE Cloud Computing, vol. 4, no. 2, pp. 6–12, 2017.
[10] T. Lynn, P. Rosati, A. Lejeune, and V. Emeakaroha, "A preliminary review of enterprise serverless cloud computing (function-as-a-service) platforms," in Cloud Computing Technology and Science (CloudCom), 2017 IEEE International Conference on. IEEE, 2017, pp. 162–169.
[11] H. Lee, K. Satyam, and G. C. Fox, "Evaluation of production serverless computing environments," in Proceedings of the 3rd International Workshop on Serverless Computing, 2018.
[12] "Kubeless GitHub," https://github.com/kubeless/kubeless, (Accessed: 06/28/2018).
[13] "OpenFaaS," https://github.com/openfaas/faas, (Accessed: 06/30/2018).
[14] "Fission," https://github.com/fission/fission, (Accessed: 06/30/2018).
[15] "OpenWhisk," https://github.com/apache/incubator-openwhisk, (Accessed: 06/30/2018).
[16] "Kubeless runtime variants," https://kubeless.io/docs/runtimes/, (Accessed: 03/18/2018).
[17] "OpenWhisk actions," https://github.com/apache/incubator-openwhisk/blob/master/docs/actions.md, (Accessed: 06/27/2018).
[18] "Fission: Environments," https://docs.fission.io/0.8.0/concepts/environments/, (Accessed: 06/30/2018).
[19] "OpenFaaS templates," https://docs.openfaas.com/cli/templates/, (Accessed: 06/30/2018).
[20] "Kubeless autoscaling," https://github.com/kubeless/kubeless/blob/master/docs/autoscaling.md, (Accessed: 06/27/2018).
[21] "Fission: Executors," https://github.com/fission/fission/blob/master/Documentation/docs-site/content/concepts/executor.en.md, (Accessed: 06/26/2018).
[22] "OpenFaaS: autoscaling," https://docs.openfaas.com/architecture/autoscaling/, (Accessed: 06/29/2018).
[23] "OpenWhisk deployment on Kubernetes," https://github.com/apache/incubator-openwhisk-deploy-kube, (Accessed: 06/27/2018).
[24] "faas-netes," https://github.com/openfaas/faas-netes, (Accessed: 06/29/2018).
[25] "faas-swarm," https://github.com/openfaas/faas-swarm, (Accessed: 06/29/2018).
[26] "faas-nomad," https://github.com/hashicorp/faas-nomad, (Accessed: 06/29/2018).
[27] "Kubeless architecture," http://kubeless.io/docs/architecture/, (Accessed: 03/18/2018).
[28] "Triggering IBM Cloud Functions on HTTP REST API calls," https://github.com/apache/incubator-openwhisk/blob/master/docs/triggers_rules.md, (Accessed: 06/27/2018).
[29] "OpenWhisk: creating triggers and rules," https://github.com/apache/incubator-openwhisk/blob/master/docs/triggers_rules.md, (Accessed: 06/27/2018).
[30] "IBM Cloud Functions: your first action, trigger and rule," https://github.com/IBM/ibm-cloud-functions-action-trigger-rule, (Accessed: 06/27/2018).
[31] "Fission: Trigger," https://docs.fission.io/0.8.0/concepts/trigger/, (Accessed: 06/30/2018).
[32] "OpenFaaS watchdog," https://github.com/openfaas/faas/tree/master/watchdog, (Accessed: 05/27/2018).
[33] "Queue worker for OpenFaaS - NATS Streaming," https://github.com/openfaas/nats-queue-worker, (Accessed: 06/29/2018).
[34] "Kubeless PubSub functions," https://kubeless.io/docs/pubsub-functions/, (Accessed: 06/30/2018).
[35] "OpenWhisk package for communication with Kafka or IBM Message Hub," https://github.com/apache/incubator-openwhisk-package-kafka, (Accessed: 06/27/2018).
[36] "OpenFaaS: Kafka connector," https://github.com/openfaas-incubator/kafka-connector, (Accessed: 06/29/2018).
[37] "Kubeless monitoring," https://kubeless.io/docs/monitoring/, (Accessed: 06/30/2018).
[38] "Fission: features," https://fission.io/features/, (Accessed: 06/30/2018).
[39] "OpenFaaS Workshop," https://github.com/openfaas/workshop, (Accessed: 06/29/2018).
[40] "OpenFaaS community," https://github.com/openfaas/faas/blob/master/community.md, (Accessed: 06/29/2018).
[41] "Fission: Executors," https://docs.fission.io/0.8.0/concepts/executor/, (Accessed: 06/30/2018).
[42] "Custom resources - Kubernetes," https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/, (Accessed: 06/27/2018).
[43] S. Shillaker, "A provider-friendly serverless framework for latency-critical applications," http://conferences.inf.ed.ac.uk/EuroDW2018/papers/eurodw18-Shillaker.pdf, (Accessed: 06/30/2018).
[44] W. Lloyd, S. Ramesh, S. Chinthalapati, L. Ly, and S. Pallickara, "Serverless computing: An investigation of factors influencing microservice performance," in Cloud Engineering (IC2E), 2018 IEEE International Conference on. IEEE, 2018, pp. 159–169.
[45] G. McGrath and P. R. Brenner, "Serverless computing: Design, implementation, and performance," in Distributed Computing Systems Workshops (ICDCSW), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 405–410.
