diff --git a/.settings/org.eclipse.core.resources.prefs b/.settings/org.eclipse.core.resources.prefs deleted file mode 100644 index 99f26c0..0000000 --- a/.settings/org.eclipse.core.resources.prefs +++ /dev/null @@ -1,2 +0,0 @@ -eclipse.preferences.version=1 -encoding/=UTF-8 diff --git a/.travis.yml b/.travis.yml index 7a1c2ba..2ec896d 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,4 +1,6 @@ language: scala scala: - - 2.10.2 + - 2.11.8 +jdk: + - oraclejdk8 script: "sbt ++$TRAVIS_SCALA_VERSION test" diff --git a/README.md b/README.md index 9e52e1a..460fea3 100644 --- a/README.md +++ b/README.md @@ -1,38 +1,36 @@ # Sentinel -![Sentinel](http://images.wikia.com/matrix/images/c/c2/Sentinel_Print.jpg) +**Sentinel** is boilerplate for TCP based servers and clients, built on Akka IO and Akka Streams. -## Overview +The framework's focus is to abstract away the nitty-gritty parts of stream based communication to provide a solution for reactive TCP communication with reasonable defaults. +Sentinel is designed for usage in persistent connection environments, making it less suited for things like HTTP and best suited for database clients and persistent communication stacks. -**Sentinel** is boilerplate for TCP based servers and clients through Akka IO (2.3). - -The implementation focusses on raw performance, using pipelines through multiple sockets represented by multiple workers (both client / server side). Sentinel is designed for usage in persistent connection environments, making it (currently) less suited for things like HTTP and best suited for DB clients / RPC stacks. - -Sentinel brings a unique symmetrical design through *Antennas*, resulting in the same request and response handling on both clients and servers. This not only makes it simple to share code on both sides, but also opens the possibility to inverse request & response flow from server to client.
- -In its current state, it's being used internally as a platform to test performance strategies for CPU and IO bound services. In the nearby future, Sentinel will fuel both [Raiku](http://github.com/gideondk/raiku) as other soon-to-be-released Akka based libraries. +Sentinel brings a symmetrical design through *Processors*, resulting in the same request and response handling on both clients and servers. This not only makes it simple to share code on both sides, but also opens the possibility to inverse the request & response flow from server to client. ## Status The current codebase of Sentinel can change heavily over releases. -In overall, treat Sentinel as pre-release alpha software. +Overall, treat Sentinel as alpha software. **Currently available in Sentinel:** -* Easy initialization of TCP servers and clients for default or custom router worker strategies; -* Supervision (and restart / reconnection functionality) on clients for a defined number of workers; -* Sequencing and continuing multiple client operations using `Tasks`; -* Streaming requests and responses (currently) based on Play Iteratees; -* Direct server to client communication through symmetrical signal handling design. +* Easy initialization of TCP clients, capable of handling both normal request and response based flows and streaming requests and responses. +* Connection pooling and management, with accompanying flow handling, for clients. +* Reactive handling of available hosts / endpoints on clients. +* Basic server template using the same constructs / protocol as the client. + +**The following is currently missing in Sentinel, but will be added soon:** + +* A far more solid test suite. +* Better error handling and recovery. +* Default functionality for callback based protocols. +* A more solid server implementation, with the possibility of direct server to client communication.
-The following is currently missing in Sentinel, but will be added soon: +**(Currently) known issues:** -* Replacement of `Iteratees` in favour of the upcoming *Akka Streams*; -* A far more solid test suite; -* Better error handling and recovery; -* Default functionality for callback based protocols; -* Streaming server to client communication. +* There is no active (demand) buffering process within the client; when a stream is requested, but not consumed, additional requests on the same socket aren't demanded and therefore not pulled into new requests. +* No real performance testing has been done yet, so consider things shaky. ## Installation You can install Sentinel through source (by publishing it into your local Ivy repository): @@ -44,25 +42,21 @@ You can install Sentinel through source (by publishing it into your local Ivy re Or by adding the repo:
"gideondk-repo" at "https://raw.github.com/gideondk/gideondk-mvn-repo/master"
-to your SBT configuration and adding the `SNAPSHOT` to your library dependencies: +to your SBT configuration and adding Sentinel to your library dependencies (currently only built against Scala 2.11):
libraryDependencies ++= Seq(
-  "nl.gideondk" %% "sentinel" % "0.6.0"
+  "nl.gideondk" %% "sentinel" % "0.8-M1"
 )
 
## Architecture -The internal structure of Sentinel relies on a *Antenna* actor. The Antenna represents the connection between a client and a server and handles both the outgoing commands as incoming replies and handles the events received from the underlying *TCP* actors. +The internal structure of Sentinel relies on the `Processor` BidiFlow. The Processor represents the connection between a client and a server and handles both outgoing commands and incoming events through a `ProducerStage` and `ConsumerStage`. -Within the antenna structure, two child actors are defined. One used for consuming replies from the connected host and one for the production of values for the connected host. - -Both clients as servers share the same antenna construction, which results in a symmetrical design for sending and receiving commands. When a message is received from the opposing host, a *resolver* is used to determine the action or reaction on the received event. Based on the used protocol (as defined in the underlying protocol pipeline), a host can process the event and decide whether the consume the received event or to respond with new values (as in a normal request -> response way). - -Once, for instance, a command is sent to a client (for a response from the connected server), the payload is sent to the opposing host and a reply-registration is set within the consumer part of the antenna. This registration and accompanying promise is completed with the consequential response from the server. +Both clients and servers share the same `Processor`, which results in a symmetrical design for sending and receiving commands. When a message is received from the opposing host, a `Resolver` is used to determine the action or reaction to the received event.
Based on the used protocol (which is defined as an additional `BidiFlow`, converting `ByteStrings` to `Events` and `Commands` to `ByteStrings`), a host can process the event and decide whether to consume the received event or to respond with new values (as in a normal request -> response way). ## Actions -The handle incoming events, multiple actions are defined which can be used to implement logic on top of the used protocol. Actions are split into consumer actions and producers actions, which make a antenna able to: +To handle incoming events, multiple actions are defined which can be used to implement logic on top of the used protocol. Actions are split into consumer actions and producer actions, which make a host able to: ### Consumer Actions `AcceptSignal`: Accept and consume an incoming signal and apply it to a pending registration @@ -75,8 +69,6 @@ The handle incoming events, multiple actions are defined which can be used to im `ConsumeChunkAndEndStream`: Consumes the chunk and terminates the stream (combination of the two above) -`Ignore`: Ignores the current received signal - ### Producer Actions `Signal`: Responds to the incoming signal with a new (async) signal @@ -85,52 +77,24 @@ The handle incoming events, multiple actions are defined which can be used to im `ProduceStream`: Produces a stream (`Source`) for the requesting hosts ## Synchronicity -Normally, Sentinel clients connect to servers through multiple sockets to increase parallel performance on top of the synchronous nature of *TCP* sockets. Producers and consumers implement a state machine to correctly respond to running incoming and outgoing streams, handling messages which don't impose treats to the message flow and stashing messages which could leak into the running streams. - -Because of the synchronous nature of the underlying semantics, you have to handle each receiving signal in a appropriate way.
Not handling all signals correctly could result in values ending up in incorrect registrations etc. - +Normally, Sentinel clients connect to servers through multiple sockets to increase parallel performance on top of the synchronous nature of *TCP* sockets. -## Initialization -### Pipelines -The Pipeline implementation available in Akka 2.2 is becoming obsolete in Akka 2.3 to be replaced with a (better) alternative later on in Akka 2.4. As it seemed that pipelines aren't the best solution for Akka, this currently leaves Akka 2.3 without a reactive *protocol layer*. To bridge the period until a definite solution is available, the "older" pipeline implementation is packaged along with Sentinel. - -The pipeline implementation focusses on the definition of pipes for both incoming as outgoing messages. In these pipelines, a definition is made how incoming or outgoing messages are parsed and formatted. +Because of the *synchronous* nature of the underlying semantics, you have to handle each received signal in an appropriate way. Not handling all signals correctly could result in values ending up in incorrect order etc. -Each of these *stages* can easily be composed into a bigger stage (`A => B >> B => C`) taking a the input of the first stage and outputting the format of the last stage.
Within Sentinel, the eventual output send to the IO workers is in the standard `ByteString` format, making it necessary that the end stage of the pipeline always outputs content of the `ByteString` type: - -```scala -case class PingPongMessageFormat(s: String) - -class PingPongMessageStage extends SymmetricPipelineStage[PipelineContext, - PingPongMessageFormat, ByteString] { - - override def apply(ctx: PipelineContext) = new SymmetricPipePair[PingPongMessageFormat, ByteString] { - implicit val byteOrder = ctx.byteOrder - - override val commandPipeline = { msg: PingPongMessageFormat ⇒ - Seq(Right(ByteString(msg.s))) - } - - override val eventPipeline = { bs: ByteString ⇒ - Seq(Left(PingPongMessageFormat(new String(bs.toArray)))) - } - } -} -``` +## Initialization ### Resolver The default resolver for a client is one that automatically accepts all signals. This default behaviour makes it able to handle basic protocols asynchronously without defining a custom resolver on the client side. It's easy to extend the behaviour on the client side for receiving stream responses by defining a custom `Resolver`: ```scala -import SimpleMessage._ trait DefaultSimpleMessageHandler extends Resolver[SimpleMessageFormat, SimpleMessageFormat] { - def process = { - case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream - - case x: SimpleError ⇒ ConsumerAction.AcceptError - case x: SimpleReply ⇒ ConsumerAction.AcceptSignal + def process(implicit mat: Materializer): PartialFunction[SimpleMessageFormat, Action] = { + case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream + case x: SimpleError ⇒ ConsumerAction.AcceptError + case x: SimpleReply ⇒ ConsumerAction.AcceptSignal + case SimpleCommand(PING_PONG, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply("PONG")) } } } @@ -142,112 +106,120 @@ In a traditional structure, a different resolver should be used on 
the server side: ```scala object SimpleServerHandler extends DefaultSimpleMessageHandler { - override def process = super.process orElse { + override def process(implicit mat: Materializer): PartialFunction[SimpleMessageFormat, Action] = { + case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream case SimpleCommand(PING_PONG, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply("PONG")) } - - case SimpleCommand(TOTAL_CHUNK_SIZE, payload) ⇒ ProducerAction.ConsumeStream { x: SimpleCommand ⇒ - s: Enumerator[SimpleStreamChunk] ⇒ - s |>>> Iteratee.fold(0) { (b, a) ⇒ b + a.payload.length } map (x ⇒ SimpleReply(x.toString)) + case SimpleCommand(TOTAL_CHUNK_SIZE, payload) ⇒ ProducerAction.ConsumeStream { x: Source[SimpleStreamChunk, Any] ⇒ + x.runWith(Sink.fold[Int, SimpleMessageFormat](0) { (b, a) ⇒ b + a.payload.length }).map(x ⇒ SimpleReply(x.toString)) } - case SimpleCommand(GENERATE_NUMBERS, payload) ⇒ ProducerAction.ProduceStream { x: SimpleCommand ⇒ val count = payload.toInt - Future((Enumerator(List.range(0, count): _*) &> Enumeratee.map(x ⇒ SimpleStreamChunk(x.toString))) >>> Enumerator(SimpleStreamChunk(""))) + Future(Source(List.range(0, count)).map(x ⇒ SimpleStreamChunk(x.toString)) ++ Source.single(SimpleStreamChunk(""))) } - case SimpleCommand(ECHO, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply(x.payload)) } } } ``` -Like illustrated, the `ProducerAction.Signal` producer action makes it able to respond with a Async response. Taking a function which handles the incoming event and producing a new value, wrapped in a `Future`. +As illustrated, the `ProducerAction.Signal` producer action makes it possible to respond with an async response, taking a function which handles the incoming event and produces a new value, wrapped in a `Future`.
-`ProducerAction.ConsumeStream` takes a function handling the incoming event and the Enumerator with the consequential chunks, resulting in a new value wrapped in a `Future` +`ProducerAction.ConsumeStream` takes a function handling the incoming `Source` with the subsequent chunks, resulting in a new value wrapped in a `Future`. -`ProducerAction.ProduceStream` takes a function handling the incoming event and returning a corresponding stream as a `Enumerator` wrapped in a `Future` +`ProducerAction.ProduceStream` takes a function handling the incoming event and returning a corresponding stream as a `Source` wrapped in a `Future`. ### Client After the definition of the protocol, a client is easily created: ```scala -Client.randomRouting("localhost", 9999, 4, "Ping Client", stages = stages, resolver = resolver) +val client = Client(Source.single(ClientStage.HostUp(Host("localhost", port))), SimpleHandler, false, OverflowStrategy.backpressure, SimpleMessage.protocol) ``` + +The client takes a `Source[HostEvent, Any]` as its *hosts* parameter. Using this stream of either `HostUp` or `HostDown` events, the client updates its connection pool to a potentially changing set of endpoints.
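Because *hosts* is an ordinary stream, endpoints can also be supplied dynamically. A sketch of a queue-backed host stream (`Host` and `ClientStage.HostUp` as used above; the endpoint address is hypothetical):

```scala
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.Source

// Queue-backed host stream: offer HostUp / HostDown events at runtime
// to grow or shrink the client's connection pool.
val hosts = Source.queue[ClientStage.HostEvent](16, OverflowStrategy.backpressure)
  .mapMaterializedValue { queue ⇒
    queue.offer(ClientStage.HostUp(Host("10.0.0.1", 9999))) // hypothetical endpoint
    queue // keep the materialized queue to push further host events later
  }
```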
+ +The Client then takes the `Resolver` as a parameter, a `shouldReact` parameter to configure whether the client should react to incoming events (for server to client communication), the to-be-used `OverflowStrategy` for incoming commands and the protocol `BidiFlow` to be used (`BidiFlow[Cmd, ByteString, ByteString, Evt, Any]`). The client has a set of configurable settings: ``` nl.gideondk.sentinel { client { host { max-connections = 32 max-failures = 16 failure-recovery-duration = 4 seconds auto-reconnect = true reconnect-duration = 2 seconds } input-buffer-size = 1024 } } ``` -Defining the host and port where the client should connect to, the amount of workers used to handle commands / events, description of the client and the earlier defined context, stages and resolver (for the complete list of parameters, check the code for the moment). - -You can use the `randomRouting` / `roundRobinRouting` methods depending on the routing strategy you want to use to communicate to the workers. For a more custom approach the `apply` method is available, which lets you define a router strategy yourself. +`max-connections`: defines the number of sockets to be opened per connected host. + +`max-failures`: defines the number of (socket) failures a host may encounter before the host is removed from the connection pool. + +`failure-recovery-duration`: period after which the failure rate is reset per connection. + +`auto-reconnect`: when set, `HostDown` events from the client (after disconnect) are fed back as `HostUp` events into the client for reconnection purposes. + +`reconnect-duration`: the reconnection delay. + +`input-buffer-size`: the input buffer size of the client (before the configured `OverflowStrategy` is applied).
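The protocol `BidiFlow` itself can be built from ordinary Akka Streams stages. A minimal sketch (assuming a simple length-field framed wire format and a hypothetical `Message` type, not Sentinel's bundled `SimpleMessage` protocol):

```scala
import akka.NotUsed
import akka.util.ByteString
import akka.stream.scaladsl.{ BidiFlow, Framing }

// Hypothetical Cmd / Evt type, for illustration only.
case class Message(payload: String)

// Serialization (Cmd ⇒ ByteString, ByteString ⇒ Evt) stacked on top of
// Akka Streams' simple length-field framing stage.
val protocol: BidiFlow[Message, ByteString, ByteString, Message, NotUsed] =
  BidiFlow.fromFunctions(
    (cmd: Message) ⇒ ByteString(cmd.payload),
    (bs: ByteString) ⇒ Message(bs.utf8String)
  ).atop(Framing.simpleFramingProtocol(1024))
```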
### Server -When the stages and resolver are defined, creation of a server is very straight forward: +When the protocol and resolver are defined, creation of a server is very straightforward: ```scala -Server(portNumber, SimpleServerHandler, "Server", SimpleMessage.stages) +Server("localhost", port, SimpleServerHandler, SimpleMessage.protocol.reversed) ``` -This will automatically start the server with the corresponding stages and handler, in the future, separate functionality for starting, restarting and stopping services will be available. +This will automatically start the server with the corresponding processor and handler. In the future, separate functionality for starting, restarting and stopping services will be available. ## Client usage -Once a client and / or server has been set up, the `?` method can be used on the client to send a command to the connected server. Results are wrapped into a `Task` containing the type `Evt` defined in the incoming stage of the client. +Once a client and / or server has been set up, the `ask` method can be used on the client to send a command to the connected server. Results are wrapped into a `Future` containing the type `Evt` defined in the incoming stage of the client. ```scala -PingPongTestHelper.pingClient ? PingPongMessageFormat("PING") -res0: Task[PingPongMessageFormat] +client.ask(SimpleCommand(PING_PONG, "PING")) res0: Future[SimpleMessageFormat] ``` -`Task` combines a `Try`, `Future` and `IO` Monad into one type: exceptions will be caught in the Try, all async actions are abstracted into a future monad and all IO actions are as pure as possible by using the Scalaz IO monad. - -Use `run` to expose the Future, or use `start(d: Duration)` to perform IO and wait (blocking) on the future. - -This bare bone approach to sending / receiving messages is focussed on the idea that a higher-level API on top of Sentinel is responsible to make client usage more comfortable.
+The bare-bones approach to sending / receiving messages is based on the idea that a higher-level API on top of Sentinel is responsible for making client usage more comfortable. ### Streamed requests / responses Sentinel's structure for streaming requests and responses works best with protocols which somehow *pad* chunks and terminators. As the resolver has to be sure whether to consume a stream chunk and when to end the incoming stream, length based header structures are difficult to implement. Unstructured binary stream chunks can however be matched by protocol implementations if they are fundamentally different than other chunks; simply ignoring initial length headers and for instance breaking on *zero terminators* could be a way to implement *non-padded* stream chunks. -#### Sending +#### Sending It's possible to stream content towards Sentinel clients by using the `sendStream` method, expecting the command to be sent to the server, accompanied by the actual stream: ```scala -c ?<<- (SimpleCommand(TOTAL_CHUNK_SIZE, ""), Enumerator(chunks: _*)) -res0: Task[SimpleCommand] +val stream = Source.single(SimpleCommand(TOTAL_CHUNK_SIZE, "")) ++ Source(List.fill(1024)(SimpleStreamChunk("A"))) ++ Source.single(SimpleStreamChunk("")) -c ?<<- Enumerator((SimpleCommand(TOTAL_CHUNK_SIZE, "") ++ chunks): _*) -res1: Task[SimpleCommand] +client.sendStream(stream) res0: Future[SimpleMessageFormat] ``` -The content within the *Enumerator* is folded to send each item to the TCP connection (returning in the `Evt` type, defined through the pipeline). +The content within the *Source* is sent over the TCP connection (returning in the `Evt` type, defined through the pipeline).
#### Receiving In the same manner, a stream can be requested from the server: ```scala -c ?->> SimpleCommand(GENERATE_NUMBERS, count.toString) -res0: Task[Enumerator[SimpleCommand]] +client.askStream(SimpleCommand(GENERATE_NUMBERS, "1024")) res0: Future[Source[SimpleMessageFormat, Any]] ``` -## Server usage -Although functionality will be expanded in the future, it's currently also possible to send requests from the server to the connected clients. This can be used for retrieval of client information on servers request, but could also be used as a retrieval pattern where clients are dormant after request, but respond to requests when necessary (retrieving sensor info per example). - -The following commands can be used to retrieve information: - -`?`: Sends command to *one* (randomly chosen) connected socket for a answer, resulting in one event. - -`?*`: Sends a command to all connected hosts, resulting in a list of events from each host individually. - -`?**`: Sends a command to all connected sockets, resulting in a list of events from all connected sockets. - -Simple server metrics are available through the `connectedSockets` and `connectedHosts` commands, returning a `Task[Int]` containing the corresponding count. # Credits The idea and internals for a large part of the client's connection pooling come from [Maciej Ciołek](https://github.com/maciekciolek)'s wonderful [akka-http-lb](https://github.com/codeheroesdev/akka-http-lb) library. # License -Copyright © 2014 Gideon de Kok +Copyright © 2017 Gideon de Kok Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License. - - -[![Bitdeli Badge](https://d2weczhvl823v0.cloudfront.net/gideondk/sentinel/trend.png)](https://bitdeli.com/free "Bitdeli Badge") - diff --git a/project/Build.scala b/project/Build.scala index 8778933..200282a 100755 --- a/project/Build.scala +++ b/project/Build.scala @@ -1,47 +1,54 @@ +import sbt.Keys._ import sbt._ -import Keys._ +import org.ensime.EnsimePlugin object ApplicationBuild extends Build { override lazy val settings = super.settings ++ Seq( name := "sentinel", - version := "0.6.0", + version := "0.8-M1", organization := "nl.gideondk", - scalaVersion := "2.10.2", + scalaVersion := "2.11.8", parallelExecution in Test := false, resolvers ++= Seq(Resolver.mavenLocal, "gideondk-repo" at "https://raw.github.com/gideondk/gideondk-mvn-repo/master", "Sonatype OSS Releases" at "http://oss.sonatype.org/content/repositories/releases/", "Sonatype OSS Snapshots" at "http://oss.sonatype.org/content/repositories/snapshots/", - + "Typesafe Snapshots" at "http://repo.typesafe.com/typesafe/snapshots/", "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"), - + publishTo := Some(Resolver.file("file", new File("/Users/gideondk/Development/gideondk-mvn-repo"))) ) + val akkaVersion = "2.4.11" + val appDependencies = Seq( - "org.scalaz" %% "scalaz-core" % "7.0.3", - "org.scalaz" %% "scalaz-effect" % "7.0.3", - "org.scalatest" % "scalatest_2.10" % "1.9.1" % "test", + "org.scalatest" %% "scalatest" % "3.0.0" % "test", + + "com.typesafe.akka" %% "akka-stream" % akkaVersion, + "com.typesafe.akka" %% "akka-stream-testkit" % akkaVersion, - "com.typesafe.play" %% "play-iteratees" % "2.2.0", - "com.typesafe.akka" % "akka-actor_2.10" % "2.3.0", - "com.typesafe.akka" %% "akka-testkit" % "2.3.0" + "com.typesafe.akka" %% "akka-actor" % akkaVersion, + "com.typesafe.akka" %% "akka-testkit" % akkaVersion % "test", + + "com.typesafe" % "config" % "1.3.0" ) - lazy val root = 
Project(id = "sentinel", - base = file("."), - settings = Project.defaultSettings ++ Seq( + lazy val root = Project( + id = "sentinel", + base = file(".") + ).settings(Project.defaultSettings ++ Seq( libraryDependencies ++= appDependencies, mainClass := Some("Main") - ) ++ Format.settings - ) + ) ++ EnsimePlugin.projectSettings ++ Format.settings) + } object Format { + import com.typesafe.sbt.SbtScalariform._ lazy val settings = scalariformSettings ++ Seq( @@ -66,4 +73,3 @@ object Format { setPreference(SpacesWithinPatternBinders, true) } } - diff --git a/project/build.properties b/project/build.properties index 37b489c..43b8278 100644 --- a/project/build.properties +++ b/project/build.properties @@ -1 +1 @@ -sbt.version=0.13.1 +sbt.version=0.13.11 diff --git a/project/plugins.sbt b/project/plugins.sbt index 337ed97..ecdd4fa 100755 --- a/project/plugins.sbt +++ b/project/plugins.sbt @@ -3,4 +3,11 @@ resolvers ++= Seq( "Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/" ) -addSbtPlugin("com.typesafe.sbt" % "sbt-scalariform" % "1.2.0") \ No newline at end of file +addSbtPlugin("com.typesafe.sbt" % "sbt-scalariform" % "1.2.0") + +addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.0.0-M12") + +// or clone this repo and type `sbt publishLocal` +resolvers += Resolver.sonatypeRepo("snapshots") + +addSbtPlugin("org.ensime" % "sbt-ensime" % "1.11.1") diff --git a/sbt b/sbt index 4a8d430..5f343ac 100755 --- a/sbt +++ b/sbt @@ -2,5 +2,3 @@ export SBT_OPTS="-XX:+UseNUMA -XX:-UseBiasedLocking -Xms3024M -Xmx3048M -Xss1M -XX:MaxPermSize=256m -XX:+UseParallelGC" sbt "$@" - - diff --git a/src/main/resources/application.conf b/src/main/resources/application.conf index 8b0979d..21480cd 100644 --- a/src/main/resources/application.conf +++ b/src/main/resources/application.conf @@ -2,10 +2,10 @@ akka.log-dead-letters-during-shutdown = off akka.log-dead-letters = off akka { - //loglevel = DEBUG + //loglevel = DEBUG io { tcp { -// trace-logging = on + // trace-logging = 
on } } } \ No newline at end of file diff --git a/src/main/resources/reference.conf b/src/main/resources/reference.conf index 2794e48..6609eb4 100644 --- a/src/main/resources/reference.conf +++ b/src/main/resources/reference.conf @@ -1,15 +1,17 @@ -nl { - gideondk { - sentinel { - sentinel-dispatcher { - mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox" - } - sentinel-antenna-dispatcher { - mailbox-type = "nl.gideondk.sentinel.AntennaMailbox" - } - sentinel-consumer-dispatcher { - mailbox-type = "nl.gideondk.sentinel.rx.ConsumerMailbox" - } - } - } +nl.gideondk.sentinel { + client { + host { + max-connections = 32 + max-failures = 16 + failure-recovery-duration = 4 seconds + auto-reconnect = true + reconnect-duration = 2 seconds + } + input-buffer-size = 1024 + parallelism = 32 + } + + pipeline { + parallelism = 32 + } } diff --git a/src/main/scala/akka/io/Pipelines.scala b/src/main/scala/akka/io/Pipelines.scala deleted file mode 100644 index eace52a..0000000 --- a/src/main/scala/akka/io/Pipelines.scala +++ /dev/null @@ -1,1165 +0,0 @@ -/** Copyright (C) 2009-2013 Typesafe Inc. - */ - -package akka.io - -import java.lang.{ Iterable ⇒ JIterable } -import scala.annotation.tailrec -import scala.util.{ Try, Success, Failure } -import java.nio.ByteOrder -import akka.util.ByteString -import scala.collection.mutable -import akka.actor.{ NoSerializationVerificationNeeded, ActorContext } -import scala.concurrent.duration.FiniteDuration -import scala.collection.mutable.WrappedArray -import scala.concurrent.duration.Deadline -import scala.beans.BeanProperty -import akka.event.LoggingAdapter - -/** Scala API: A pair of pipes, one for commands and one for events, plus a - * management port. Commands travel from top to bottom, events from bottom to - * top. All messages which need to be handled “in-order” (e.g. top-down or - * bottom-up) need to be either events or commands; management messages are - * processed in no particular order. 
- * - * Java base classes are provided in the form of [[AbstractPipePair]] - * and [[AbstractSymmetricPipePair]] since the Scala function types can be - * awkward to handle in Java. - * - * @see [[PipelineStage]] - * @see [[AbstractPipePair]] - * @see [[AbstractSymmetricPipePair]] - * @see [[PipePairFactory]] - */ -trait PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - - type Result = Either[EvtAbove, CmdBelow] - type Mgmt = PartialFunction[AnyRef, Iterable[Result]] - - /** The command pipeline transforms injected commands from the upper stage - * into commands for the stage below, but it can also emit events for the - * upper stage. Any number of each can be generated. - */ - def commandPipeline: CmdAbove ⇒ Iterable[Result] - - /** The event pipeline transforms injected event from the lower stage - * into event for the stage above, but it can also emit commands for the - * stage below. Any number of each can be generated. - */ - def eventPipeline: EvtBelow ⇒ Iterable[Result] - - /** The management port allows sending broadcast messages to all stages - * within this pipeline. This can be used to communicate with stages in the - * middle without having to thread those messages through the surrounding - * stages. Each stage can generate events and commands in response to a - * command, and the aggregation of all those is returned. - * - * The default implementation ignores all management commands. - */ - def managementPort: Mgmt = PartialFunction.empty -} - -/** A convenience type for expressing a [[PipePair]] which has the same types - * for commands and events. - */ -trait SymmetricPipePair[Above, Below] extends PipePair[Above, Below, Above, Below] - -/** Java API: A pair of pipes, one for commands and one for events. Commands travel from - * top to bottom, events from bottom to top. 
- * - * @see [[PipelineStage]] - * @see [[AbstractSymmetricPipePair]] - * @see [[PipePairFactory]] - */ -abstract class AbstractPipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - - /** Commands reaching this pipe pair are transformed into a sequence of - * commands for the next or events for the previous stage. - * - * Throwing exceptions within this method will abort processing of the whole - * pipeline which this pipe pair is part of. - * - * @param cmd the incoming command - * @return an Iterable of elements which are either events or commands - * - * @see [[#makeCommand]] - * @see [[#makeEvent]] - */ - def onCommand(cmd: CmdAbove): JIterable[Either[EvtAbove, CmdBelow]] - - /** Events reaching this pipe pair are transformed into a sequence of - * commands for the next or events for the previous stage. - * - * Throwing exceptions within this method will abort processing of the whole - * pipeline which this pipe pair is part of. - * - * @param cmd the incoming command - * @return an Iterable of elements which are either events or commands - * - * @see [[#makeCommand]] - * @see [[#makeEvent]] - */ - def onEvent(event: EvtBelow): JIterable[Either[EvtAbove, CmdBelow]] - - /** Management commands are sent to all stages in a broadcast fashion, - * conceptually in parallel (but not actually executing a stage - * reentrantly in case of events or commands being generated in response - * to a management command). - */ - def onManagementCommand(cmd: AnyRef): JIterable[Either[EvtAbove, CmdBelow]] = - java.util.Collections.emptyList() - - /** Helper method for wrapping a command which shall be emitted. - */ - def makeCommand(cmd: CmdBelow): Either[EvtAbove, CmdBelow] = Right(cmd) - - /** Helper method for wrapping an event which shall be emitted. - */ - def makeEvent(event: EvtAbove): Either[EvtAbove, CmdBelow] = Left(event) - - /** INTERNAL API: do not touch! 
- */ - private[io] val _internal$cmd = { - val l = new java.util.ArrayList[AnyRef](1) - l add null - l - } - /** INTERNAL API: do not touch! - */ - private[io] val _internal$evt = { - val l = new java.util.ArrayList[AnyRef](1) - l add null - l - } - - /** Wrap a single command for efficient return to the pipeline’s machinery. - * This method avoids allocating a [[scala.util.Right]] and an [[java.lang.Iterable]] by reusing - * one such instance within the AbstractPipePair, hence it can be used ONLY ONCE by - * each pipeline stage. Prototypic and safe usage looks like this: - * - * {{{ - * final MyResult result = ... ; - * return singleCommand(result); - * }}} - * - * @see PipelineContext#singleCommand - */ - def singleCommand(cmd: CmdBelow): JIterable[Either[EvtAbove, CmdBelow]] = { - _internal$cmd.set(0, cmd.asInstanceOf[AnyRef]) - _internal$cmd.asInstanceOf[JIterable[Either[EvtAbove, CmdBelow]]] - } - - /** Wrap a single event for efficient return to the pipeline’s machinery. - * This method avoids allocating a [[scala.util.Left]] and an [[java.lang.Iterable]] by reusing - * one such instance within the AbstractPipePair, hence it can be used ONLY ONCE by - * each pipeline stage. Prototypic and safe usage looks like this: - * - * {{{ - * final MyResult result = ... ; - * return singleEvent(result); - * }}} - * - * @see PipelineContext#singleEvent - */ - def singleEvent(evt: EvtAbove): JIterable[Either[EvtAbove, CmdBelow]] = { - _internal$evt.set(0, evt.asInstanceOf[AnyRef]) - _internal$evt.asInstanceOf[JIterable[Either[EvtAbove, CmdBelow]]] - } - - /** INTERNAL API: Dealias a possibly optimized return value such that it can - * be safely used; this is never needed when only using public API. 
- */ - def dealias[Cmd, Evt](msg: JIterable[Either[Evt, Cmd]]): JIterable[Either[Evt, Cmd]] = { - import java.util.Collections.singletonList - if (msg eq _internal$cmd) singletonList(Right(_internal$cmd.get(0).asInstanceOf[Cmd])) - else if (msg eq _internal$evt) singletonList(Left(_internal$evt.get(0).asInstanceOf[Evt])) - else msg - } -} - -/** A convenience type for expressing a [[AbstractPipePair]] which has the same types - * for commands and events. - */ -abstract class AbstractSymmetricPipePair[Above, Below] extends AbstractPipePair[Above, Below, Above, Below] - -/** This class contains static factory methods which produce [[PipePair]] - * instances; those are needed within the implementation of [[PipelineStage#apply]]. - */ -object PipePairFactory { - - /** Scala API: construct a [[PipePair]] from the two given functions; useful for not capturing `$outer` references. - */ - def apply[CmdAbove, CmdBelow, EvtAbove, EvtBelow] // - (commandPL: CmdAbove ⇒ Iterable[Either[EvtAbove, CmdBelow]], - eventPL: EvtBelow ⇒ Iterable[Either[EvtAbove, CmdBelow]], - management: PartialFunction[AnyRef, Iterable[Either[EvtAbove, CmdBelow]]] = PartialFunction.empty) = - new PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - override def commandPipeline = commandPL - override def eventPipeline = eventPL - override def managementPort = management - } - - private abstract class Converter[CmdAbove <: AnyRef, CmdBelow <: AnyRef, EvtAbove <: AnyRef, EvtBelow <: AnyRef] // - (val ap: AbstractPipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow], ctx: PipelineContext) { - import scala.collection.JavaConverters._ - protected def normalize(output: JIterable[Either[EvtAbove, CmdBelow]]): Iterable[Either[EvtAbove, CmdBelow]] = - if (output == java.util.Collections.EMPTY_LIST) Nil - else if (output eq ap._internal$cmd) ctx.singleCommand(ap._internal$cmd.get(0).asInstanceOf[CmdBelow]) - else if (output eq ap._internal$evt) ctx.singleEvent(ap._internal$evt.get(0).asInstanceOf[EvtAbove]) - else 
output.asScala - } - - /** Java API: construct a [[PipePair]] from the given [[AbstractPipePair]]. - */ - def create[CmdAbove <: AnyRef, CmdBelow <: AnyRef, EvtAbove <: AnyRef, EvtBelow <: AnyRef] // - (ctx: PipelineContext, ap: AbstractPipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow]) // - : PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] = - new Converter(ap, ctx) with PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - override val commandPipeline = { cmd: CmdAbove ⇒ normalize(ap.onCommand(cmd)) } - override val eventPipeline = { evt: EvtBelow ⇒ normalize(ap.onEvent(evt)) } - override val managementPort: Mgmt = { case x ⇒ normalize(ap.onManagementCommand(x)) } - } - - /** Java API: construct a [[PipePair]] from the given [[AbstractSymmetricPipePair]]. - */ - def create[Above <: AnyRef, Below <: AnyRef] // - (ctx: PipelineContext, ap: AbstractSymmetricPipePair[Above, Below]): SymmetricPipePair[Above, Below] = - new Converter(ap, ctx) with SymmetricPipePair[Above, Below] { - override val commandPipeline = { cmd: Above ⇒ normalize(ap.onCommand(cmd)) } - override val eventPipeline = { evt: Below ⇒ normalize(ap.onEvent(evt)) } - override val managementPort: Mgmt = { case x ⇒ normalize(ap.onManagementCommand(x)) } - } -} - -case class PipelinePorts[CmdAbove, CmdBelow, EvtAbove, EvtBelow]( - commands: CmdAbove ⇒ (Iterable[EvtAbove], Iterable[CmdBelow]), - events: EvtBelow ⇒ (Iterable[EvtAbove], Iterable[CmdBelow]), - management: PartialFunction[AnyRef, (Iterable[EvtAbove], Iterable[CmdBelow])]) - -/** This class contains static factory methods which turn a pipeline context - * and a [[PipelineStage]] into readily usable pipelines. - */ -object PipelineFactory { - - /** Scala API: build the pipeline and return a pair of functions representing - * the command and event pipelines. 
Each function returns the commands and
- * events resulting from running the pipeline on the given input, where the
- * sequence of events is the first element of the returned pair and the
- * sequence of commands the second element.
- *
- * Exceptions thrown by the pipeline stages will not be caught.
- *
- * @param ctx The context object for this pipeline
- * @param stage The (composite) pipeline stage from which to build the pipeline
- * @return a pair of command and event pipeline functions
- */
- def buildFunctionTriple[Ctx <: PipelineContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] //
- (ctx: Ctx, stage: PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow]) //
- : PipelinePorts[CmdAbove, CmdBelow, EvtAbove, EvtBelow] = {
- val pp = stage apply ctx
- val split: (Iterable[Either[EvtAbove, CmdBelow]]) ⇒ (Iterable[EvtAbove], Iterable[CmdBelow]) = { in ⇒
- if (in.isEmpty) (Nil, Nil)
- else if (in eq ctx.cmd) (Nil, Seq[CmdBelow](ctx.cmd(0)))
- else if (in eq ctx.evt) (Seq[EvtAbove](ctx.evt(0)), Nil)
- else {
- val cmds = Vector.newBuilder[CmdBelow]
- val evts = Vector.newBuilder[EvtAbove]
- in foreach {
- case Right(cmd) ⇒ cmds += cmd
- case Left(evt) ⇒ evts += evt
- }
- (evts.result, cmds.result)
- }
- }
- PipelinePorts(pp.commandPipeline andThen split, pp.eventPipeline andThen split, pp.managementPort andThen split)
- }
-
- /** Scala API: build the pipeline attaching the given command and event sinks
- * to its outputs. Exceptions thrown within the pipeline stages will abort
- * processing (i.e. will not be processed in following stages) but will be
- * caught and passed as [[scala.util.Failure]] into the respective sink.
- *
- * Exceptions thrown while processing management commands are not caught.
- *
- * @param ctx The context object for this pipeline
- * @param stage The (composite) pipeline stage from which to build the pipeline
- * @param commandSink The function to invoke for commands or command failures
- * @param eventSink The function to invoke for events or event failures
- * @return a handle for injecting events or commands into the pipeline
- */
- def buildWithSinkFunctions[Ctx <: PipelineContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] //
- (ctx: Ctx,
- stage: PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow])(
- commandSink: Try[CmdBelow] ⇒ Unit,
- eventSink: Try[EvtAbove] ⇒ Unit): PipelineInjector[CmdAbove, EvtBelow] =
- new PipelineInjector[CmdAbove, EvtBelow] {
- val pl = stage(ctx)
- override def injectCommand(cmd: CmdAbove): Unit = {
- Try(pl.commandPipeline(cmd)) match {
- case f: Failure[_] ⇒ commandSink(f.asInstanceOf[Try[CmdBelow]])
- case Success(out) ⇒
- if (out.isEmpty) () // nothing
- else if (out eq ctx.cmd) commandSink(Success(ctx.cmd(0)))
- else if (out eq ctx.evt) eventSink(Success(ctx.evt(0)))
- else out foreach {
- case Right(cmd) ⇒ commandSink(Success(cmd))
- case Left(evt) ⇒ eventSink(Success(evt))
- }
- }
- }
- override def injectEvent(evt: EvtBelow): Unit = {
- Try(pl.eventPipeline(evt)) match {
- case f: Failure[_] ⇒ eventSink(f.asInstanceOf[Try[EvtAbove]])
- case Success(out) ⇒
- if (out.isEmpty) () // nothing
- else if (out eq ctx.cmd) commandSink(Success(ctx.cmd(0)))
- else if (out eq ctx.evt) eventSink(Success(ctx.evt(0)))
- else out foreach {
- case Right(cmd) ⇒ commandSink(Success(cmd))
- case Left(evt) ⇒ eventSink(Success(evt))
- }
- }
- }
- override def managementCommand(cmd: AnyRef): Unit = {
- val out = pl.managementPort(cmd)
- if (out.isEmpty) () // nothing
- else if (out eq ctx.cmd) commandSink(Success(ctx.cmd(0)))
- else if (out eq ctx.evt) eventSink(Success(ctx.evt(0)))
- else out foreach {
- case Right(cmd) ⇒ commandSink(Success(cmd))
- case Left(evt) ⇒ eventSink(Success(evt))
- }
- }
- }
-
- /**
Java API: build the pipeline attaching the given callback object to its
- * outputs. Exceptions thrown within the pipeline stages will abort
- * processing (i.e. will not be processed in following stages) but will be
- * caught and passed as [[scala.util.Failure]] into the respective sink.
- *
- * Exceptions thrown while processing management commands are not caught.
- *
- * @param ctx The context object for this pipeline
- * @param stage The (composite) pipeline stage from which to build the pipeline
- * @param callback The [[PipelineSink]] to attach to the built pipeline
- * @return a handle for injecting events or commands into the pipeline
- */
- def buildWithSink[Ctx <: PipelineContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] //
- (ctx: Ctx,
- stage: PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow],
- callback: PipelineSink[CmdBelow, EvtAbove]): PipelineInjector[CmdAbove, EvtBelow] =
- buildWithSinkFunctions[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow](ctx, stage)({
- case Failure(thr) ⇒ callback.onCommandFailure(thr)
- case Success(cmd) ⇒ callback.onCommand(cmd)
- }, {
- case Failure(thr) ⇒ callback.onEventFailure(thr)
- case Success(evt) ⇒ callback.onEvent(evt)
- })
-}
-
-/** A handle for injecting commands and events into a pipeline. Commands travel
- * down (or to the right) through the stages, events travel in the opposite
- * direction.
- *
- * @see [[PipelineFactory#buildWithSinkFunctions]]
- * @see [[PipelineFactory#buildWithSink]]
- */
-trait PipelineInjector[Cmd, Evt] {
-
- /** Inject the given command into the connected pipeline.
- */
- @throws(classOf[Exception])
- def injectCommand(cmd: Cmd): Unit
-
- /** Inject the given event into the connected pipeline.
- */
- @throws(classOf[Exception])
- def injectEvent(event: Evt): Unit
-
- /** Send a management command to all stages (in an unspecified order).
- *
- */
- @throws(classOf[Exception])
- def managementCommand(cmd: AnyRef): Unit
-}
-
-/** A sink which can be attached by [[PipelineFactory#buildWithSink]] to a
- * pipeline when it is being built. The methods are called when commands,
- * events or their failures occur during evaluation of the pipeline (i.e.
- * when injection is triggered using the associated [[PipelineInjector]]).
- */
-abstract class PipelineSink[Cmd, Evt] {
-
- /** This callback is invoked for every command generated by the pipeline.
- *
- * By default this does nothing.
- */
- @throws(classOf[Throwable])
- def onCommand(cmd: Cmd): Unit = ()
-
- /** This callback is invoked if an exception occurred while processing an
- * injected command. If this callback is invoked, no other callbacks will
- * be invoked for the same injection.
- *
- * By default this will just throw the exception.
- */
- @throws(classOf[Throwable])
- def onCommandFailure(thr: Throwable): Unit = throw thr
-
- /** This callback is invoked for every event generated by the pipeline.
- *
- * By default this does nothing.
- */
- @throws(classOf[Throwable])
- def onEvent(event: Evt): Unit = ()
-
- /** This callback is invoked if an exception occurred while processing an
- * injected event. If this callback is invoked, no other callbacks will
- * be invoked for the same injection.
- *
- * By default this will just throw the exception.
- */
- @throws(classOf[Throwable])
- def onEventFailure(thr: Throwable): Unit = throw thr
-}
-
-/** This base trait of each pipeline’s context provides optimized facilities
- * for generating single commands or events (i.e. the fast common case of 1:1
- * message transformations).
- *
- * IMPORTANT NOTICE:
- *
- * A PipelineContext MUST NOT be shared between multiple pipelines, it contains mutable
- * state without synchronization. You have been warned!
- * - * @see AbstractPipelineContext see AbstractPipelineContext for a default implementation (Java) - */ -trait PipelineContext { - - /** INTERNAL API: do not touch! - */ - private val cmdHolder = new Array[AnyRef](1) - /** INTERNAL API: do not touch! - */ - private val evtHolder = new Array[AnyRef](1) - /** INTERNAL API: do not touch! - */ - private[io] val cmd = WrappedArray.make(cmdHolder) - /** INTERNAL API: do not touch! - */ - private[io] val evt = WrappedArray.make(evtHolder) - - /** Scala API: Wrap a single command for efficient return to the pipeline’s machinery. - * This method avoids allocating a [[scala.util.Right]] and an [[scala.collection.Iterable]] by reusing - * one such instance within the PipelineContext, hence it can be used ONLY ONCE by - * each pipeline stage. Prototypic and safe usage looks like this: - * - * {{{ - * override val commandPipeline = { cmd => - * val myResult = ... - * ctx.singleCommand(myResult) - * } - * }}} - * - * @see AbstractPipePair#singleCommand see AbstractPipePair for the Java API - */ - def singleCommand[Cmd <: AnyRef, Evt <: AnyRef](cmd: Cmd): Iterable[Either[Evt, Cmd]] = { - cmdHolder(0) = cmd - this.cmd - } - - /** Scala API: Wrap a single event for efficient return to the pipeline’s machinery. - * This method avoids allocating a [[scala.util.Left]] and an [[scala.collection.Iterable]] by reusing - * one such instance within the context, hence it can be used ONLY ONCE by - * each pipeline stage. Prototypic and safe usage looks like this: - * - * {{{ - * override val eventPipeline = { cmd => - * val myResult = ... - * ctx.singleEvent(myResult) - * } - * }}} - * - * @see AbstractPipePair#singleEvent see AbstractPipePair for the Java API - */ - def singleEvent[Cmd <: AnyRef, Evt <: AnyRef](evt: Evt): Iterable[Either[Evt, Cmd]] = { - evtHolder(0) = evt - this.evt - } - - /** A shared (and shareable) instance of an empty `Iterable[Either[EvtAbove, CmdBelow]]`. 
- * Use this when processing does not yield any commands or events as result. - */ - def nothing[Cmd, Evt]: Iterable[Either[Evt, Cmd]] = Nil - - /** INTERNAL API: Dealias a possibly optimized return value such that it can - * be safely used; this is never needed when only using public API. - */ - def dealias[Cmd, Evt](msg: Iterable[Either[Evt, Cmd]]): Iterable[Either[Evt, Cmd]] = { - if (msg.isEmpty) Nil - else if (msg eq cmd) Seq(Right(cmd(0))) - else if (msg eq evt) Seq(Left(evt(0))) - else msg - } -} - -/** This base trait of each pipeline’s context provides optimized facilities - * for generating single commands or events (i.e. the fast common case of 1:1 - * message transformations). - * - * IMPORTANT NOTICE: - * - * A PipelineContext MUST NOT be shared between multiple pipelines, it contains mutable - * state without synchronization. You have been warned! - */ -abstract class AbstractPipelineContext extends PipelineContext - -object PipelineStage { - - /** Java API: attach the two given stages such that the command output of the - * first is fed into the command input of the second, and the event output of - * the second is fed into the event input of the first. In other words: - * sequence the stages such that the left one is on top of the right one. 
- * - * @param left the left or upper pipeline stage - * @param right the right or lower pipeline stage - * @return a pipeline stage representing the sequence of the two stages - */ - def sequence[Ctx <: PipelineContext, CmdAbove, CmdBelow, CmdBelowBelow, EvtAbove, EvtBelow, EvtBelowBelow] // - (left: PipelineStage[_ >: Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow], - right: PipelineStage[_ >: Ctx, CmdBelow, CmdBelowBelow, EvtBelow, EvtBelowBelow]) // - : PipelineStage[Ctx, CmdAbove, CmdBelowBelow, EvtAbove, EvtBelowBelow] = - left >> right - - /** Java API: combine the two stages such that the command pipeline of the - * left stage is used and the event pipeline of the right, discarding the - * other two sub-pipelines. - * - * @param left the command pipeline - * @param right the event pipeline - * @return a pipeline stage using the left command pipeline and the right event pipeline - */ - def combine[Ctx <: PipelineContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] // - (left: PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow], - right: PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow]) // - : PipelineStage[Ctx, CmdAbove, CmdBelow, EvtAbove, EvtBelow] = - left | right -} - -/** A [[PipelineStage]] which is symmetric in command and event types, i.e. it only - * has one command and event type above and one below. - */ -abstract class SymmetricPipelineStage[Context <: PipelineContext, Above, Below] extends PipelineStage[Context, Above, Below, Above, Below] - -/** A pipeline stage which can be combined with other stages to build a - * protocol stack. The main function of this class is to serve as a factory - * for the actual [[PipePair]] generated by the [[#apply]] method so that a - * context object can be passed in. 
- * - * @see [[PipelineFactory]] - */ -abstract class PipelineStage[Context <: PipelineContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] { left ⇒ - - /** Implement this method to generate this stage’s pair of command and event - * functions. - * - * INTERNAL API: do not use this method to instantiate a pipeline! - * - * @see [[PipelineFactory]] - * @see [[AbstractPipePair]] - * @see [[AbstractSymmetricPipePair]] - */ - protected[io] def apply(ctx: Context): PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] - - /** Scala API: attach the two given stages such that the command output of the - * first is fed into the command input of the second, and the event output of - * the second is fed into the event input of the first. In other words: - * sequence the stages such that the left one is on top of the right one. - * - * @param right the right or lower pipeline stage - * @return a pipeline stage representing the sequence of the two stages - */ - def >>[CmdBelowBelow, EvtBelowBelow, BelowContext <: Context] // - (right: PipelineStage[_ >: BelowContext, CmdBelow, CmdBelowBelow, EvtBelow, EvtBelowBelow]) // - : PipelineStage[BelowContext, CmdAbove, CmdBelowBelow, EvtAbove, EvtBelowBelow] = - new PipelineStage[BelowContext, CmdAbove, CmdBelowBelow, EvtAbove, EvtBelowBelow] { - - protected[io] override def apply(ctx: BelowContext): PipePair[CmdAbove, CmdBelowBelow, EvtAbove, EvtBelowBelow] = { - - val leftPL = left(ctx) - val rightPL = right(ctx) - - new PipePair[CmdAbove, CmdBelowBelow, EvtAbove, EvtBelowBelow] { - - type Output = Either[EvtAbove, CmdBelowBelow] - - import language.implicitConversions - @inline implicit def narrowRight[A, B, C](in: Right[A, B]): Right[C, B] = in.asInstanceOf[Right[C, B]] - @inline implicit def narrowLeft[A, B, C](in: Left[A, B]): Left[A, C] = in.asInstanceOf[Left[A, C]] - - def loopLeft(input: Iterable[Either[EvtAbove, CmdBelow]]): Iterable[Output] = { - if (input.isEmpty) Nil - else if (input eq ctx.cmd) 
loopRight(rightPL.commandPipeline(ctx.cmd(0))) - else if (input eq ctx.evt) ctx.evt - else { - val output = Vector.newBuilder[Output] - input foreach { - case Right(cmd) ⇒ output ++= ctx.dealias(loopRight(rightPL.commandPipeline(cmd))) - case l @ Left(_) ⇒ output += l - } - output.result - } - } - - def loopRight(input: Iterable[Either[EvtBelow, CmdBelowBelow]]): Iterable[Output] = { - if (input.isEmpty) Nil - else if (input eq ctx.cmd) ctx.cmd - else if (input eq ctx.evt) loopLeft(leftPL.eventPipeline(ctx.evt(0))) - else { - val output = Vector.newBuilder[Output] - input foreach { - case r @ Right(_) ⇒ output += r - case Left(evt) ⇒ output ++= ctx.dealias(loopLeft(leftPL.eventPipeline(evt))) - } - output.result - } - } - - override val commandPipeline = { a: CmdAbove ⇒ loopLeft(leftPL.commandPipeline(a)) } - - override val eventPipeline = { b: EvtBelowBelow ⇒ loopRight(rightPL.eventPipeline(b)) } - - override val managementPort: PartialFunction[AnyRef, Iterable[Either[EvtAbove, CmdBelowBelow]]] = { - case x ⇒ - val output = Vector.newBuilder[Output] - output ++= ctx.dealias(loopLeft(leftPL.managementPort.applyOrElse(x, (_: AnyRef) ⇒ Nil))) - output ++= ctx.dealias(loopRight(rightPL.managementPort.applyOrElse(x, (_: AnyRef) ⇒ Nil))) - output.result - } - } - } - } - - /** Scala API: combine the two stages such that the command pipeline of the - * left stage is used and the event pipeline of the right, discarding the - * other two sub-pipelines. 
- * - * @param right the event pipeline - * @return a pipeline stage using the left command pipeline and the right event pipeline - */ - def |[RightContext <: Context] // - (right: PipelineStage[_ >: RightContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow]) // - : PipelineStage[RightContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] = - new PipelineStage[RightContext, CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - override def apply(ctx: RightContext): PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] = - new PipePair[CmdAbove, CmdBelow, EvtAbove, EvtBelow] { - - val leftPL = left(ctx) - val rightPL = right(ctx) - - override val commandPipeline = leftPL.commandPipeline - override val eventPipeline = rightPL.eventPipeline - override val managementPort: Mgmt = { - case x ⇒ - val output = Vector.newBuilder[Either[EvtAbove, CmdBelow]] - output ++= ctx.dealias(leftPL.managementPort(x)) - output ++= ctx.dealias(rightPL.managementPort(x)) - output.result - } - } - } -} - -object BackpressureBuffer { - /** Message type which is sent when the buffer’s high watermark has been - * reached, which means that further write requests should not be sent - * until the low watermark has been reached again. - */ - trait HighWatermarkReached extends Tcp.Event - case object HighWatermarkReached extends HighWatermarkReached - - /** Message type which is sent when the buffer’s fill level falls below - * the low watermark, which means that writing can commence again. - */ - trait LowWatermarkReached extends Tcp.Event - case object LowWatermarkReached extends LowWatermarkReached - -} - -/** This pipeline stage implements a configurable buffer for transforming the - * per-write ACK/NACK-based backpressure model of a TCP connection actor into - * an edge-triggered back-pressure model: the upper stages will receive - * notification when the buffer runs full ([[BackpressureBuffer.HighWatermarkReached]]) and when - * it subsequently empties ([[BackpressureBuffer.LowWatermarkReached]]). 
The upper layers should - * respond by not generating more writes when the buffer is full. There is also - * a hard limit upon which this buffer will abort the connection. - * - * All limits are configurable and are given in number of bytes. - * The `highWatermark` should be set such that the - * amount of data generated before reception of the asynchronous - * [[BackpressureBuffer.HighWatermarkReached]] notification does not lead to exceeding the - * `maxCapacity` hard limit; if the writes may arrive in bursts then the - * difference between these two should allow for at least one burst to be sent - * after the high watermark has been reached. The `lowWatermark` must be less - * than or equal to the `highWatermark`, where the difference between these two - * defines the hysteresis, i.e. how often these notifications are sent out (i.e. - * if the difference is rather large then it will take some time for the buffer - * to empty below the low watermark, and that room is then available for data - * sent in response to the [[BackpressureBuffer.LowWatermarkReached]] notification; if the - * difference was small then the buffer would more quickly oscillate between - * these two limits). 
- *
- */
-class BackpressureBuffer(lowBytes: Long, highBytes: Long, maxBytes: Long)
- extends PipelineStage[HasLogging, Tcp.Command, Tcp.Command, Tcp.Event, Tcp.Event] {
-
- require(lowBytes >= 0, "lowWatermark needs to be non-negative")
- require(highBytes >= lowBytes, "highWatermark needs to be at least as large as lowWatermark")
- require(maxBytes >= highBytes, "maxCapacity needs to be at least as large as highWatermark")
-
- // WARNING: Closes over enclosing class -- cannot be moved outside because of backwards binary compatibility
- // Fixed in 2.3
- case class Ack(num: Int, ack: Tcp.Event) extends Tcp.Event with NoSerializationVerificationNeeded
-
- override def apply(ctx: HasLogging) = new PipePair[Tcp.Command, Tcp.Command, Tcp.Event, Tcp.Event] {
-
- import Tcp._
- import BackpressureBuffer._
-
- private val log = ctx.getLogger
-
- private var storageOffset = 0
- private var storage = Vector.empty[Write]
- private def currentOffset = storageOffset + storage.size
-
- private var stored = 0L
- private var suspended = false
-
- private var behavior = writing
- override def commandPipeline = behavior
- override def eventPipeline = behavior
-
- private def become(f: Message ⇒ Iterable[Result]) { behavior = f }
-
- private lazy val writing: Message ⇒ Iterable[Result] = {
- case Write(data, ack) ⇒
- buffer(Write(data, Ack(currentOffset, ack)), doWrite = true)
-
- case CommandFailed(Write(_, Ack(offset, _))) ⇒
- become(buffering(offset))
- ctx.singleCommand(ResumeWriting)
-
- case cmd: CloseCommand ⇒ cmd match {
- case _ if storage.isEmpty ⇒
- become(finished)
- ctx.singleCommand(cmd)
- case Abort ⇒
- storage = Vector.empty
- become(finished)
- ctx.singleCommand(Abort)
- case _ ⇒
- become(closing(cmd))
- ctx.nothing
- }
-
- case Ack(seq, ack) ⇒ acknowledge(seq, ack)
-
- case cmd: Command ⇒ ctx.singleCommand(cmd)
- case evt: Event ⇒ ctx.singleEvent(evt)
- }
-
- private def buffering(nack: Int): Message ⇒ Iterable[Result] = {
- var toAck = 10
- var closed: CloseCommand = 
null - - { - case Write(data, ack) ⇒ - buffer(Write(data, Ack(currentOffset, ack)), doWrite = false) - - case WritingResumed ⇒ - ctx.singleCommand(storage(0)) - - case cmd: CloseCommand ⇒ cmd match { - case Abort ⇒ - storage = Vector.empty - become(finished) - ctx.singleCommand(Abort) - case _ ⇒ - closed = cmd - ctx.nothing - } - - case Ack(seq, ack) if seq < nack ⇒ acknowledge(seq, ack) - - case Ack(seq, ack) ⇒ - val ackMsg = acknowledge(seq, ack) - if (storage.nonEmpty) { - if (toAck > 0) { - toAck -= 1 - ctx.dealias(ackMsg) ++ Seq(Right(storage(0))) - } else { - become(if (closed != null) closing(closed) else writing) - ctx.dealias(ackMsg) ++ storage.map(Right(_)) - } - } else if (closed != null) { - become(finished) - ctx.dealias(ackMsg) ++ Seq(Right(closed)) - } else { - become(writing) - ackMsg - } - - case CommandFailed(_: Write) ⇒ ctx.nothing - case cmd: Command ⇒ ctx.singleCommand(cmd) - case evt: Event ⇒ ctx.singleEvent(evt) - } - } - - private def closing(cmd: CloseCommand): Message ⇒ Iterable[Result] = { - case Ack(seq, ack) ⇒ - val result = acknowledge(seq, ack) - if (storage.isEmpty) { - become(finished) - ctx.dealias(result) ++ Seq(Right(cmd)) - } else result - - case CommandFailed(_: Write) ⇒ - become({ - case WritingResumed ⇒ - become(closing(cmd)) - storage.map(Right(_)) - case CommandFailed(_: Write) ⇒ ctx.nothing - case cmd: Command ⇒ ctx.singleCommand(cmd) - case evt: Event ⇒ ctx.singleEvent(evt) - }) - ctx.singleCommand(ResumeWriting) - - case cmd: Command ⇒ ctx.singleCommand(cmd) - case evt: Event ⇒ ctx.singleEvent(evt) - } - - private val finished: Message ⇒ Iterable[Result] = { - case _: Write ⇒ ctx.nothing - case CommandFailed(_: Write) ⇒ ctx.nothing - case cmd: Command ⇒ ctx.singleCommand(cmd) - case evt: Event ⇒ ctx.singleEvent(evt) - } - - private def buffer(w: Write, doWrite: Boolean): Iterable[Result] = { - storage :+= w - stored += w.data.size - - if (stored > maxBytes) { - log.warning("aborting connection (buffer overrun)") - 
become(finished) - ctx.singleCommand(Abort) - } else if (stored > highBytes && !suspended) { - log.debug("suspending writes") - suspended = true - if (doWrite) { - Seq(Right(w), Left(HighWatermarkReached)) - } else { - ctx.singleEvent(HighWatermarkReached) - } - } else if (doWrite) { - ctx.singleCommand(w) - } else Nil - } - - private def acknowledge(seq: Int, ack: Event): Iterable[Result] = { - require(seq == storageOffset, s"received ack $seq at $storageOffset") - require(storage.nonEmpty, s"storage was empty at ack $seq") - - val size = storage(0).data.size - stored -= size - - storageOffset += 1 - storage = storage drop 1 - - if (suspended && stored < lowBytes) { - log.debug("resuming writes") - suspended = false - if (ack == NoAck) ctx.singleEvent(LowWatermarkReached) - else Vector(Left(ack), Left(LowWatermarkReached)) - } else if (ack == NoAck) ctx.nothing - else ctx.singleEvent(ack) - } - } - -} - -//#length-field-frame -/** Pipeline stage for length-field encoded framing. It will prepend a - * four-byte length header to the message; the header contains the length of - * the resulting frame including header in big-endian representation. - * - * The `maxSize` argument is used to protect the communication channel sanity: - * larger frames will not be sent (silently dropped) or received (in which case - * stream decoding would be broken, hence throwing an IllegalArgumentException). 
- */ -class LengthFieldFrame(maxSize: Int, - byteOrder: ByteOrder = ByteOrder.BIG_ENDIAN, - headerSize: Int = 4, - lengthIncludesHeader: Boolean = true) - extends SymmetricPipelineStage[PipelineContext, ByteString, ByteString] { - - //#range-checks-omitted - require(byteOrder ne null, "byteOrder must not be null") - require(headerSize > 0 && headerSize <= 4, "headerSize must be in (0, 4]") - require(maxSize > 0, "maxSize must be positive") - require(maxSize <= (Int.MaxValue >> (4 - headerSize) * 8) * (if (headerSize == 4) 1 else 2), - "maxSize cannot exceed 256**headerSize") - //#range-checks-omitted - - override def apply(ctx: PipelineContext) = - new SymmetricPipePair[ByteString, ByteString] { - var buffer = None: Option[ByteString] - implicit val byteOrder = LengthFieldFrame.this.byteOrder - - /** Extract as many complete frames as possible from the given ByteString - * and return the remainder together with the extracted frames in reverse - * order. - */ - @tailrec - def extractFrames(bs: ByteString, acc: List[ByteString]) // - : (Option[ByteString], Seq[ByteString]) = { - if (bs.isEmpty) { - (None, acc) - } else if (bs.length < headerSize) { - (Some(bs.compact), acc) - } else { - val length = bs.iterator.getLongPart(headerSize).toInt - if (length < 0 || length > maxSize) - throw new IllegalArgumentException( - s"received too large frame of size $length (max = $maxSize)") - val total = if (lengthIncludesHeader) length else length + headerSize - if (bs.length >= total) { - extractFrames(bs drop total, bs.slice(headerSize, total) :: acc) - } else { - (Some(bs.compact), acc) - } - } - } - - /* - * This is how commands (writes) are transformed: calculate length - * including header, write that to a ByteStringBuilder and append the - * payload data. The result is a single command (i.e. `Right(...)`). 
- *
- */
- override def commandPipeline =
- { bs: ByteString ⇒
- val length =
- if (lengthIncludesHeader) bs.length + headerSize else bs.length
- if (length > maxSize) Seq()
- else {
- val bb = ByteString.newBuilder
- bb.putLongPart(length, headerSize)
- bb ++= bs
- ctx.singleCommand(bb.result)
- }
- }
-
- /*
- * This is how events (reads) are transformed: append the received
- * ByteString to the buffer (if any) and extract the frames from the
- * result. In the end store the new buffer contents and return the
- * list of events (i.e. `Left(...)`).
- */
- override def eventPipeline =
- { bs: ByteString ⇒
- val data = if (buffer.isEmpty) bs else buffer.get ++ bs
- val (nb, frames) = extractFrames(data, Nil)
- buffer = nb
- /*
- * please note the specialized (optimized) facility for emitting
- * just a single event
- */
- frames match {
- case Nil ⇒ Nil
- case one :: Nil ⇒ ctx.singleEvent(one)
- case many ⇒ many reverseMap (Left(_))
- }
- }
- }
-}
-//#length-field-frame
-
-/** Pipeline stage for delimiter byte based framing and de-framing. Useful for string oriented protocols using '\n'
- * or 0 as delimiter values.
- *
- * @param maxSize The maximum size of the frame the pipeline is willing to decode. Not checked for encoding, as the
- * sender might decide to pass through multiple chunks in one go (multiple lines in case of a line-based
- * protocol)
- * @param delimiter The sequence of bytes that will be used as the delimiter for decoding.
- * @param includeDelimiter If enabled, the delimiter bytes will be part of the decoded messages. In the case of sends
- * the delimiter has to be appended to the end of frames by the user. 
It is also possible - * to send multiple frames by embedding multiple delimiters in the passed ByteString - */ -class DelimiterFraming(maxSize: Int, delimiter: ByteString = ByteString('\n'), includeDelimiter: Boolean = false) - extends SymmetricPipelineStage[PipelineContext, ByteString, ByteString] { - - require(maxSize > 0, "maxSize must be positive") - require(delimiter.nonEmpty, "delimiter must not be empty") - - override def apply(ctx: PipelineContext) = new SymmetricPipePair[ByteString, ByteString] { - val singleByteDelimiter: Boolean = delimiter.size == 1 - var buffer: ByteString = ByteString.empty - var delimiterFragment: Option[ByteString] = None - val firstByteOfDelimiter = delimiter.head - - @tailrec - private def extractParts(nextChunk: ByteString, acc: List[ByteString]): List[ByteString] = delimiterFragment match { - case Some(fragment) if nextChunk.size < fragment.size && fragment.startsWith(nextChunk) ⇒ - buffer ++= nextChunk - delimiterFragment = Some(fragment.drop(nextChunk.size)) - acc - // We got the missing parts of the delimiter - case Some(fragment) if nextChunk.startsWith(fragment) ⇒ - val decoded = if (includeDelimiter) buffer ++ fragment else buffer.take(buffer.size - delimiter.size + fragment.size) - buffer = ByteString.empty - delimiterFragment = None - extractParts(nextChunk.drop(fragment.size), decoded :: acc) - case _ ⇒ - val matchPosition = nextChunk.indexOf(firstByteOfDelimiter) - if (matchPosition == -1) { - delimiterFragment = None - val minSize = buffer.size + nextChunk.size - if (minSize > maxSize) throw new IllegalArgumentException( - s"Received too large frame of size $minSize (max = $maxSize)") - buffer ++= nextChunk - acc - } else if (matchPosition + delimiter.size > nextChunk.size) { - val delimiterMatchLength = nextChunk.size - matchPosition - if (nextChunk.drop(matchPosition) == delimiter.take(delimiterMatchLength)) { - buffer ++= nextChunk - // we are expecting the other parts of the delimiter - delimiterFragment = 
Some(delimiter.drop(nextChunk.size - matchPosition)) - acc - } else { - // false positive - delimiterFragment = None - buffer ++= nextChunk.take(matchPosition + 1) - extractParts(nextChunk.drop(matchPosition + 1), acc) - } - } else { - delimiterFragment = None - val missingBytes: Int = if (includeDelimiter) matchPosition + delimiter.size else matchPosition - val expectedSize = buffer.size + missingBytes - if (expectedSize > maxSize) throw new IllegalArgumentException( - s"Received frame already of size $expectedSize (max = $maxSize)") - - if (singleByteDelimiter || nextChunk.slice(matchPosition, matchPosition + delimiter.size) == delimiter) { - val decoded = buffer ++ nextChunk.take(missingBytes) - buffer = ByteString.empty - extractParts(nextChunk.drop(matchPosition + delimiter.size), decoded :: acc) - } else { - buffer ++= nextChunk.take(matchPosition + 1) - extractParts(nextChunk.drop(matchPosition + 1), acc) - } - } - - } - - override val eventPipeline = { - bs: ByteString ⇒ - val parts = extractParts(bs, Nil) - buffer = buffer.compact // TODO: This should be properly benchmarked and memory profiled - parts match { - case Nil ⇒ Nil - case one :: Nil ⇒ ctx.singleEvent(one.compact) - case many ⇒ many reverseMap { frame ⇒ Left(frame.compact) } - } - } - - override val commandPipeline = { - bs: ByteString ⇒ ctx.singleCommand(bs) - } - } -} - -/** Simple convenience pipeline stage for turning Strings into ByteStrings and vice versa. - * - * @param charset The character set to be used for encoding and decoding the raw byte representation of the strings. 
- */ -class StringByteStringAdapter(charset: String = "utf-8") - extends PipelineStage[PipelineContext, String, ByteString, String, ByteString] { - - override def apply(ctx: PipelineContext) = new PipePair[String, ByteString, String, ByteString] { - - val commandPipeline = (str: String) ⇒ ctx.singleCommand(ByteString(str, charset)) - - val eventPipeline = (bs: ByteString) ⇒ ctx.singleEvent(bs.decodeString(charset)) - } -} - -/** This trait expresses that the pipeline’s context needs to provide a logging - * facility. - */ -trait HasLogging extends PipelineContext { - /** Retrieve the [[akka.event.LoggingAdapter]] for this pipeline’s context. - */ - def getLogger: LoggingAdapter -} - -//#tick-generator -/** This trait expresses that the pipeline’s context needs to live within an - * actor and provide its ActorContext. - */ -trait HasActorContext extends PipelineContext { - /** Retrieve the [[akka.actor.ActorContext]] for this pipeline’s context. - */ - def getContext: ActorContext -} - -object TickGenerator { - /** This message type is used by the TickGenerator to trigger - * the rescheduling of the next Tick. The actor hosting the pipeline - * which includes a TickGenerator must arrange for messages of this - * type to be injected into the management port of the pipeline. - */ - trait Trigger - - /** This message type is emitted by the TickGenerator to the whole - * pipeline, informing all stages about the time at which this Tick - * was emitted (relative to some arbitrary epoch). 
- */ - case class Tick(@BeanProperty timestamp: FiniteDuration) extends Trigger -} - -/** This pipeline stage does not alter the events or commands - */ -class TickGenerator[Cmd <: AnyRef, Evt <: AnyRef](interval: FiniteDuration) - extends PipelineStage[HasActorContext, Cmd, Cmd, Evt, Evt] { - import TickGenerator._ - - override def apply(ctx: HasActorContext) = - new PipePair[Cmd, Cmd, Evt, Evt] { - - // use unique object to avoid double-activation on actor restart - private val trigger: Trigger = { - val path = ctx.getContext.self.path - - new Trigger { - override def toString = s"Tick[$path]" - } - } - - private def schedule() = - ctx.getContext.system.scheduler.scheduleOnce( - interval, ctx.getContext.self, trigger)(ctx.getContext.dispatcher) - - // automatically activate this generator - schedule() - - override val commandPipeline = (cmd: Cmd) ⇒ ctx.singleCommand(cmd) - - override val eventPipeline = (evt: Evt) ⇒ ctx.singleEvent(evt) - - override val managementPort: Mgmt = { - case `trigger` ⇒ - ctx.getContext.self ! Tick(Deadline.now.time) - schedule() - Nil - } - } -} -//#tick-generator - diff --git a/src/main/scala/akka/io/TcpPipelineHandler.scala b/src/main/scala/akka/io/TcpPipelineHandler.scala deleted file mode 100644 index abd9e79..0000000 --- a/src/main/scala/akka/io/TcpPipelineHandler.scala +++ /dev/null @@ -1,174 +0,0 @@ -/** Copyright (C) 2009-2013 Typesafe Inc. - */ - -package akka.io - -import scala.beans.BeanProperty -import scala.util.{ Failure, Success } -import akka.actor._ -import akka.dispatch.{ RequiresMessageQueue, UnboundedMessageQueueSemantics } -import akka.util.ByteString -import akka.event.Logging -import akka.event.LoggingAdapter - -object TcpPipelineHandler { - - /** This class wraps up a pipeline with its external (i.e. “top”) command and - * event types and providing unique wrappers for sending commands and - * receiving events (nested and non-static classes which are specific to each - * instance of [[Init]]). 
All events emitted by the pipeline will be sent to - * the registered handler wrapped in an Event. - */ - abstract class Init[Ctx <: PipelineContext, Cmd, Evt]( - val stages: PipelineStage[_ >: Ctx <: PipelineContext, Cmd, Tcp.Command, Evt, Tcp.Event]) { - - /** This method must be implemented to return the [[PipelineContext]] - * necessary for the operation of the given [[PipelineStage]]. - */ - def makeContext(actorContext: ActorContext): Ctx - - /** Java API: construct a command to be sent to the [[TcpPipelineHandler]] - * actor. - */ - def command(cmd: Cmd): Command = Command(cmd) - - /** Java API: extract a wrapped event received from the [[TcpPipelineHandler]] - * actor. - * - * @throws MatchError if the given object is not an Event matching this - * specific Init instance. - */ - def event(evt: AnyRef): Evt = evt match { - case Event(evt) ⇒ evt - } - - /** Wrapper class for commands to be sent to the [[TcpPipelineHandler]] actor. - */ - case class Command(@BeanProperty cmd: Cmd) extends NoSerializationVerificationNeeded - - /** Wrapper class for events emitted by the [[TcpPipelineHandler]] actor. - */ - case class Event(@BeanProperty evt: Evt) extends NoSerializationVerificationNeeded - } - - /** This interface bundles logging and ActorContext for Java. - */ - trait WithinActorContext extends HasLogging with HasActorContext - - def withLogger[Cmd, Evt](log: LoggingAdapter, - stages: PipelineStage[_ >: WithinActorContext <: PipelineContext, Cmd, Tcp.Command, Evt, Tcp.Event]): Init[WithinActorContext, Cmd, Evt] = - new Init[WithinActorContext, Cmd, Evt](stages) { - override def makeContext(ctx: ActorContext): WithinActorContext = new WithinActorContext { - override def getLogger = log - override def getContext = ctx - } - } - - /** Wrapper class for management commands sent to the [[TcpPipelineHandler]] actor. 
- */ - case class Management(@BeanProperty cmd: AnyRef) - - /** This is a new Tcp.Command which the pipeline can emit to effect the - * sending of a message to another actor. Using this instead of doing the send - * directly has the advantage that other pipeline stages can also see and - * possibly transform the send. - */ - case class Tell(receiver: ActorRef, msg: Any, sender: ActorRef) extends Tcp.Command - - /** The pipeline may want to emit a [[Tcp.Event]] to the registered handler - * actor, which is enabled by emitting this [[Tcp.Command]] wrapping an event - * instead. The [[TcpPipelineHandler]] actor will upon reception of this command - * forward the wrapped event to the handler. - */ - case class TcpEvent(@BeanProperty evt: Tcp.Event) extends Tcp.Command - - /** create [[akka.actor.Props]] for a pipeline handler - */ - def props[Ctx <: PipelineContext, Cmd, Evt](init: TcpPipelineHandler.Init[Ctx, Cmd, Evt], connection: ActorRef, handler: ActorRef) = - Props(classOf[TcpPipelineHandler[_, _, _]], init, connection, handler) - -} - -/** This actor wraps a pipeline and forwards commands and events between that - * one and a [[Tcp]] connection actor. In order to inject commands into the - * pipeline send a [[TcpPipelineHandler.Init.Command]] message to this actor; events will be sent - * to the designated handler wrapped in [[TcpPipelineHandler.Init.Event]] messages. - * - * When the designated handler terminates the TCP connection is aborted. When - * the connection actor terminates this actor terminates as well; the designated - * handler may want to watch this actor’s lifecycle. - * - * IMPORTANT: - * - * Proper function of this actor (and of other pipeline stages like [[TcpReadWriteAdapter]]) - * depends on the fact that stages handling TCP commands and events pass unknown - * subtypes through unaltered. There are more commands and events than are declared - * within the [[Tcp]] object and you can even define your own.
- */ -class TcpPipelineHandler[Ctx <: PipelineContext, Cmd, Evt]( - init: TcpPipelineHandler.Init[Ctx, Cmd, Evt], - connection: ActorRef, - handler: ActorRef) - extends Actor with RequiresMessageQueue[UnboundedMessageQueueSemantics] { - - import init._ - import TcpPipelineHandler._ - - // sign death pact - context watch connection - // watch so we can Close - context watch handler - - val ctx = init.makeContext(context) - - val pipes = PipelineFactory.buildWithSinkFunctions(ctx, init.stages)({ - case Success(cmd) ⇒ - cmd match { - case Tell(receiver, msg, sender) ⇒ receiver.tell(msg, sender) - case TcpEvent(ev) ⇒ handler ! ev - case _ ⇒ connection ! cmd - } - case Failure(ex) ⇒ throw ex - }, { - case Success(evt) ⇒ handler ! Event(evt) - case Failure(ex) ⇒ throw ex - }) - - def receive = { - case Command(cmd) ⇒ pipes.injectCommand(cmd) - case evt: Tcp.Event ⇒ pipes.injectEvent(evt) - case Management(cmd) ⇒ pipes.managementCommand(cmd) - case Terminated(`handler`) ⇒ connection ! Tcp.Abort - case Terminated(`connection`) ⇒ context.stop(self) - } - -} - -/** Adapts a ByteString oriented pipeline stage to a stage that communicates via Tcp Commands and Events. Every ByteString - * passed down to this stage will be converted to Tcp.Write commands, while incoming Tcp.Received events will be unwrapped - * and their contents passed up as raw ByteStrings. This adapter should be used together with TcpPipelineHandler. - * - * While this adapter communicates to the stage above it via raw ByteStrings, it is possible to inject Tcp Commands - * by sending them to the management port, and the adapter will simply pass them down to the stage below. Incoming Tcp Events - * that are not Received events will be passed downwards wrapped in a [[TcpPipelineHandler.TcpEvent]]; the [[TcpPipelineHandler]] will - * send these notifications to the registered event handler actor.
- */ -class TcpReadWriteAdapter extends PipelineStage[PipelineContext, ByteString, Tcp.Command, ByteString, Tcp.Event] { - import TcpPipelineHandler.TcpEvent - - override def apply(ctx: PipelineContext) = new PipePair[ByteString, Tcp.Command, ByteString, Tcp.Event] { - - override val commandPipeline = { - data: ByteString ⇒ ctx.singleCommand(Tcp.Write(data)) - } - - override val eventPipeline = (evt: Tcp.Event) ⇒ evt match { - case Tcp.Received(data) ⇒ ctx.singleEvent(data) - case ev: Tcp.Event ⇒ ctx.singleCommand(TcpEvent(ev)) - } - - override val managementPort: Mgmt = { - case cmd: Tcp.Command ⇒ ctx.singleCommand(cmd) - } - } -} diff --git a/src/main/scala/nl/gideondk/sentinel/Antenna.scala b/src/main/scala/nl/gideondk/sentinel/Antenna.scala deleted file mode 100644 index 772f40b..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Antenna.scala +++ /dev/null @@ -1,79 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.Future - -import akka.actor._ - -import akka.io._ -import akka.io.TcpPipelineHandler.{ Init, WithinActorContext } - -import processors._ - -class Antenna[Cmd, Evt](init: Init[WithinActorContext, Cmd, Evt], Resolver: Resolver[Evt, Cmd]) extends Actor with ActorLogging with Stash { - - import context.dispatcher - - def active(tcpHandler: ActorRef): Receive = { - val consumer = context.actorOf(Props(new Consumer(init)), name = "resolver") - val producer = context.actorOf(Props(new Producer(init)).withDispatcher("nl.gideondk.sentinel.sentinel-dispatcher"), name = "producer") - - context watch tcpHandler - context watch producer - context watch consumer - - def handleTermination: Receive = { - case x: Terminated ⇒ context.stop(self) - } - - def highWaterMark: Receive = handleTermination orElse { - case BackpressureBuffer.LowWatermarkReached ⇒ - unstashAll() - context.unbecome() - case _ ⇒ - stash() - } - - def handleCommands: Receive = { - case x: Command.Ask[Cmd, Evt] ⇒ - consumer ! x.registration - tcpHandler ! 
init.Command(x.payload) - - case x: Command.AskStream[Cmd, Evt] ⇒ - consumer ! x.registration - tcpHandler ! init.Command(x.payload) - - case x: Command.SendStream[Cmd, Evt] ⇒ - consumer ! x.registration - producer ! ProducerActionAndData(ProducerAction.ProduceStream[Unit, Cmd](Unit ⇒ Future(x.stream)), ()) - } - - def handleReplies: Receive = { - case x: Reply.Response[Cmd] ⇒ - tcpHandler ! init.Command(x.payload) - - case x: Reply.StreamResponseChunk[Cmd] ⇒ - tcpHandler ! init.Command(x.payload) - } - - handleTermination orElse handleCommands orElse handleReplies orElse { - case x: Registration[Evt, _] ⇒ - consumer ! x - - case init.Event(data) ⇒ { - Resolver.process(data) match { - case x: ProducerAction[Evt, Cmd] ⇒ producer ! ProducerActionAndData[Evt, Cmd](x, data) - case x: ConsumerAction ⇒ consumer ! ConsumerActionAndData[Evt](x, data) - } - } - - case BackpressureBuffer.HighWatermarkReached ⇒ { - context.become(highWaterMark, false) - } - } - } - - def receive = { - case Management.RegisterTcpHandler(tcpHandler) ⇒ - context.become(active(tcpHandler)) - } -} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/Client.scala b/src/main/scala/nl/gideondk/sentinel/Client.scala deleted file mode 100644 index 7a04bc2..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Client.scala +++ /dev/null @@ -1,170 +0,0 @@ -package nl.gideondk.sentinel - -import java.net.InetSocketAddress - -import scala.concurrent._ -import scala.concurrent.duration.{ DurationInt, FiniteDuration } - -import akka.actor._ -import akka.io._ -import akka.io.Tcp._ -import akka.routing._ - -import akka.util.ByteString - -import play.api.libs.iteratee._ - -trait Client[Cmd, Evt] { - import Registration._ - - def actor: ActorRef - - def ?(command: Cmd)(implicit context: ExecutionContext): Task[Evt] = ask(command) - - def ?->>(command: Cmd)(implicit context: ExecutionContext): Task[Enumerator[Evt]] = askStream(command) - - def ?<<-(command: Cmd, source: 
Enumerator[Cmd])(implicit context: ExecutionContext): Task[Evt] = sendStream(command, source) - - def ?<<-(source: Enumerator[Cmd])(implicit context: ExecutionContext): Task[Evt] = sendStream(source) - - def ask(command: Cmd)(implicit context: ExecutionContext): Task[Evt] = Task { - val promise = Promise[Evt]() - actor ! Command.Ask(command, ReplyRegistration(promise)) - promise.future - } - - def askStream(command: Cmd)(implicit context: ExecutionContext): Task[Enumerator[Evt]] = Task { - val promise = Promise[Enumerator[Evt]]() - actor ! Command.AskStream(command, StreamReplyRegistration(promise)) - promise.future - } - - def sendStream(command: Cmd, source: Enumerator[Cmd]): Task[Evt] = - sendStream(Enumerator(command) >>> source) - - def sendStream(source: Enumerator[Cmd]): Task[Evt] = Task { - val promise = Promise[Evt]() - actor ! Command.SendStream(source, ReplyRegistration(promise)) - promise.future - } -} - -object Client { - case class ConnectToServer(addr: InetSocketAddress) - - def defaultResolver[Cmd, Evt] = new Resolver[Evt, Cmd] { - def process = { - case _ ⇒ ConsumerAction.AcceptSignal - } - } - - def apply[Cmd, Evt](serverHost: String, serverPort: Int, routerConfig: RouterConfig, - description: String = "Sentinel Client", stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], workerReconnectTime: FiniteDuration = 2 seconds, resolver: Resolver[Evt, Cmd] = Client.defaultResolver[Cmd, Evt], lowBytes: Long = 100L, highBytes: Long = 5000L, maxBufferSize: Long = 20000L)(implicit system: ActorSystem) = { - val core = system.actorOf(Props(new ClientCore[Cmd, Evt](routerConfig, description, workerReconnectTime, stages, resolver)(lowBytes, highBytes, maxBufferSize)), name = "sentinel-client-" + java.util.UUID.randomUUID.toString) - core ! 
Client.ConnectToServer(new InetSocketAddress(serverHost, serverPort)) - new Client[Cmd, Evt] { - val actor = core - } - } - - def randomRouting[Cmd, Evt](serverHost: String, serverPort: Int, numberOfConnections: Int, description: String = "Sentinel Client", stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], workerReconnectTime: FiniteDuration = 2 seconds, resolver: Resolver[Evt, Cmd] = Client.defaultResolver[Cmd, Evt], lowBytes: Long = 100L, highBytes: Long = 5000L, maxBufferSize: Long = 20000L)(implicit system: ActorSystem) = { - apply(serverHost, serverPort, RandomRouter(numberOfConnections), description, stages, workerReconnectTime, resolver, lowBytes, highBytes, maxBufferSize) - } - - def roundRobinRouting[Cmd, Evt](serverHost: String, serverPort: Int, numberOfConnections: Int, description: String = "Sentinel Client", stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], workerReconnectTime: FiniteDuration = 2 seconds, resolver: Resolver[Evt, Cmd] = Client.defaultResolver[Cmd, Evt], lowBytes: Long = 100L, highBytes: Long = 5000L, maxBufferSize: Long = 20000L)(implicit system: ActorSystem) = { - apply(serverHost, serverPort, RoundRobinRouter(numberOfConnections), description, stages, workerReconnectTime, resolver, lowBytes, highBytes, maxBufferSize) - } -} - -class ClientAntennaManager[Cmd, Evt](address: InetSocketAddress, stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], Resolver: Resolver[Evt, Cmd])(lowBytes: Long, highBytes: Long, maxBufferSize: Long) extends Actor with ActorLogging with Stash { - val tcp = akka.io.IO(Tcp)(context.system) - - override def preStart = tcp ! 
Tcp.Connect(address) - - def connected(antenna: ActorRef): Receive = { - case x: Command[Cmd, Evt] ⇒ - antenna forward x - } - - def disconnected: Receive = { - case Connected(remoteAddr, localAddr) ⇒ - val init = TcpPipelineHandler.withLogger(log, - stages >> - new TcpReadWriteAdapter >> - new BackpressureBuffer(lowBytes, highBytes, maxBufferSize)) - - val antenna = context.actorOf(Props(new Antenna(init, Resolver)).withDispatcher("nl.gideondk.sentinel.sentinel-dispatcher")) - val handler = context.actorOf(TcpPipelineHandler.props(init, sender, antenna).withDeploy(Deploy.local)) - context watch handler - - sender ! Register(handler) - antenna ! Management.RegisterTcpHandler(handler) - - unstashAll() - context.become(connected(antenna)) - - case CommandFailed(cmd: akka.io.Tcp.Command) ⇒ - context.stop(self) // Bit harsh at the moment, but should trigger reconnect and probably do better next time... - - // case x: nl.gideondk.sentinel.Command[Cmd, Evt] ⇒ - // x.registration.promise.failure(new Exception("Client has not yet been connected to an endpoint")) - - case _ ⇒ stash() - } - - def receive = disconnected -} - -class ClientCore[Cmd, Evt](routerConfig: RouterConfig, description: String, reconnectDuration: FiniteDuration, - stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], Resolver: Resolver[Evt, Cmd], workerDescription: String = "Sentinel Client Worker")(lowBytes: Long, highBytes: Long, maxBufferSize: Long) extends Actor with ActorLogging { - - import context.dispatcher - - var addresses = List.empty[Tuple2[InetSocketAddress, Option[ActorRef]]] - - private case object InitializeRouter - private case class ReconnectRouter(address: InetSocketAddress) - - var coreRouter: Option[ActorRef] = None - - def antennaManagerProto(address: InetSocketAddress) = - new ClientAntennaManager(address, stages, Resolver)(lowBytes, highBytes, maxBufferSize) - - def routerProto(address: InetSocketAddress) =
context.actorOf(Props(antennaManagerProto(address)).withRouter(routerConfig).withDispatcher("nl.gideondk.sentinel.sentinel-dispatcher")) - - override def preStart = { - self ! InitializeRouter - } - - def receive = { - case x: Client.ConnectToServer ⇒ - if (!addresses.map(_._1).contains(x)) { - val router = routerProto(x.addr) - context.watch(router) - addresses = addresses ++ List(x.addr -> Some(router)) - coreRouter = Some(context.system.actorOf(Props.empty.withRouter(RoundRobinRouter(routees = addresses.map(_._2).flatten)))) - } - - case Terminated(actor) ⇒ - /* If router died, restart after a period of time */ - val terminatedRouter = addresses.find(_._2 == actor) - terminatedRouter match { - case Some(r) ⇒ - addresses = addresses diff addresses.find(_._2 == actor).toList - coreRouter = Some(context.system.actorOf(Props.empty.withRouter(RoundRobinRouter(routees = addresses.map(_._2).flatten)))) - log.debug("Router for: " + r._1 + " died, restarting in: " + reconnectDuration.toString()) - context.system.scheduler.scheduleOnce(reconnectDuration, self, Client.ConnectToServer(r._1)) - case None ⇒ - } - - case x: Command[Cmd, Evt] ⇒ - coreRouter match { - case Some(r) ⇒ - r forward x - case None ⇒ x.registration.promise.failure(new Exception("No connection(s) available")) - } - - case _ ⇒ - } -} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/Command.scala b/src/main/scala/nl/gideondk/sentinel/Command.scala deleted file mode 100644 index a2773fd..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Command.scala +++ /dev/null @@ -1,58 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.{ Future, Promise } - -import akka.actor.ActorRef - -import play.api.libs.iteratee._ - -trait Registration[Evt, A] { - def promise: Promise[A] -} - -object Registration { - case class ReplyRegistration[Evt](promise: Promise[Evt]) extends Registration[Evt, Evt] - case class StreamReplyRegistration[Evt](promise: Promise[Enumerator[Evt]]) extends 
Registration[Evt, Enumerator[Evt]] -} - -trait Command[Cmd, Evt] { - def registration: Registration[Evt, _] -} - -trait ServerCommand[Cmd, Evt] - -trait ServerMetric - -trait Reply[Cmd] - -object Command { - import Registration._ - - case class Ask[Cmd, Evt](payload: Cmd, registration: ReplyRegistration[Evt]) extends Command[Cmd, Evt] - case class Tell[Cmd, Evt](payload: Cmd, registration: ReplyRegistration[Evt]) extends Command[Cmd, Evt] - - case class AskStream[Cmd, Evt](payload: Cmd, registration: StreamReplyRegistration[Evt]) extends Command[Cmd, Evt] - case class SendStream[Cmd, Evt](stream: Enumerator[Cmd], registration: ReplyRegistration[Evt]) extends Command[Cmd, Evt] -} - -object ServerCommand { - case class AskAll[Cmd, Evt](payload: Cmd, promise: Promise[List[Evt]]) extends ServerCommand[Cmd, Evt] - case class AskAllHosts[Cmd, Evt](payload: Cmd, promise: Promise[List[Evt]]) extends ServerCommand[Cmd, Evt] - case class AskAny[Cmd, Evt](payload: Cmd, promise: Promise[Evt]) extends ServerCommand[Cmd, Evt] -} - -object ServerMetric { - case object ConnectedSockets extends ServerMetric - case object ConnectedHosts extends ServerMetric -} - -object Reply { - case class Response[Cmd](payload: Cmd) extends Reply[Cmd] - case class StreamResponseChunk[Cmd](payload: Cmd) extends Reply[Cmd] -} - -object Management { - trait ManagementMessage - case class RegisterTcpHandler(h: ActorRef) extends ManagementMessage -} - diff --git a/src/main/scala/nl/gideondk/sentinel/Config.scala b/src/main/scala/nl/gideondk/sentinel/Config.scala new file mode 100644 index 0000000..36bf29f --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/Config.scala @@ -0,0 +1,20 @@ +package nl.gideondk.sentinel + +import akka.actor.{ ActorSystem, ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider } +import com.typesafe.config.{ Config ⇒ TypesafeConfig } + +class Config(config: TypesafeConfig) extends Extension { + val producerParallelism = config.getInt("pipeline.parallelism") +} 
+ +object Config extends ExtensionId[Config] with ExtensionIdProvider { + override def lookup = Config + override def createExtension(system: ExtendedActorSystem) = + new Config(system.settings.config.getConfig("nl.gideondk.sentinel")) + override def get(system: ActorSystem): Config = super.get(system) + + private def config(implicit system: ActorSystem) = apply(system) + + def producerParallelism(implicit system: ActorSystem) = config.producerParallelism +} + diff --git a/src/main/scala/nl/gideondk/sentinel/Resolver.scala b/src/main/scala/nl/gideondk/sentinel/Resolver.scala deleted file mode 100644 index 8c1485f..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Resolver.scala +++ /dev/null @@ -1,6 +0,0 @@ -package nl.gideondk.sentinel - -trait Resolver[Evt, Cmd] { - - def process: PartialFunction[Evt, Action] -} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/Server.scala b/src/main/scala/nl/gideondk/sentinel/Server.scala deleted file mode 100644 index 4c90652..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Server.scala +++ /dev/null @@ -1,137 +0,0 @@ -package nl.gideondk.sentinel - -import java.net.InetSocketAddress - -import akka.actor._ -import akka.io._ -import akka.io.Tcp._ -import akka.util.{ Timeout, ByteString } - -import scala.concurrent.{ Future, Promise, ExecutionContext } -import scala.util.Random - -import akka.pattern.ask - -trait Server[Cmd, Evt] { - def actor: ActorRef - - def ?**(command: Cmd)(implicit context: ExecutionContext): Task[List[Evt]] = askAll(command) - - def ?*(command: Cmd)(implicit context: ExecutionContext): Task[List[Evt]] = askAllHosts(command) - - def ?(command: Cmd)(implicit context: ExecutionContext): Task[Evt] = askAny(command) - - def askAll(command: Cmd)(implicit context: ExecutionContext): Task[List[Evt]] = Task { - val promise = Promise[List[Evt]]() - actor ! 
ServerCommand.AskAll(command, promise) - promise.future - } - - def askAllHosts(command: Cmd)(implicit context: ExecutionContext): Task[List[Evt]] = Task { - val promise = Promise[List[Evt]]() - actor ! ServerCommand.AskAllHosts(command, promise) - promise.future - } - - def askAny(command: Cmd)(implicit context: ExecutionContext): Task[Evt] = Task { - val promise = Promise[Evt]() - actor ! ServerCommand.AskAny(command, promise) - promise.future - } - - def connectedSockets(implicit timeout: Timeout): Task[Int] = Task { - (actor ? ServerMetric.ConnectedSockets).mapTo[Int] - } - - def connectedHosts(implicit timeout: Timeout): Task[Int] = Task { - (actor ? ServerMetric.ConnectedHosts).mapTo[Int] - } -} - -class ServerCore[Cmd, Evt](port: Int, description: String, stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], - resolver: Resolver[Evt, Cmd], workerDescription: String = "Sentinel Client Worker")(lowBytes: Long, highBytes: Long, maxBufferSize: Long) extends Actor with ActorLogging { - - import context.dispatcher - - def wrapAtenna(a: ActorRef) = new Client[Cmd, Evt] { - val actor = a - } - - val tcp = akka.io.IO(Tcp)(context.system) - val address = new InetSocketAddress(port) - - var connections = Map[String, List[ActorRef]]() - - override def preStart = { - tcp ! Bind(self, address) - } - - def receiveCommands: Receive = { - case x: ServerCommand.AskAll[Cmd, Evt] if connections.values.toList.length > 0 ⇒ - val futures = Task.sequence(connections.values.toList.flatten.map(wrapAtenna).map(_ ? x.payload)).start - x.promise.completeWith(futures) - - case x: ServerCommand.AskAllHosts[Cmd, Evt] if connections.values.toList.length > 0 ⇒ - val futures = Task.sequence(connections.values.toList.map(x ⇒ Random.shuffle(x.toList).head).map(wrapAtenna).map(_ ? 
x.payload)).start - x.promise.completeWith(futures) - - case x: ServerCommand.AskAny[Cmd, Evt] if connections.values.toList.length > 0 ⇒ - val future = (wrapAtenna(Random.shuffle(connections.values.toList.flatten).head) ? x.payload).start - x.promise.completeWith(future) - - case ServerMetric.ConnectedSockets ⇒ - sender ! connections.values.flatten.toList.length - - case ServerMetric.ConnectedHosts ⇒ - sender ! connections.keys.toList.length - } - - def receive = receiveCommands orElse { - case x: Terminated ⇒ - val antenna = x.getActor - connections = connections.foldLeft(Map[String, List[ActorRef]]()) { - case (c, i) ⇒ - i._2.contains(antenna) match { - case true ⇒ if (i._2.length == 1) c else c + (i._1 -> i._2.filter(_ != antenna)) - case false ⇒ c + i - } - } - - case Bound ⇒ - log.debug(description + " bound to " + address) - - case CommandFailed(cmd) ⇒ - cmd match { - case x: Bind ⇒ - log.error(description + " failed to bind to " + address) - } - - case req @ Connected(remoteAddr, localAddr) ⇒ - val init = - TcpPipelineHandler.withLogger(log, - stages >> - new TcpReadWriteAdapter >> - new BackpressureBuffer(lowBytes, highBytes, maxBufferSize)) - - val connection = sender - - val antenna = context.actorOf(Props(new Antenna(init, resolver)).withDispatcher("nl.gideondk.sentinel.sentinel-dispatcher")) - context.watch(antenna) - - val currentAtennas = connections.get(remoteAddr.getHostName).getOrElse(List[ActorRef]()) - connections = connections + (remoteAddr.getHostName -> (currentAtennas ++ List(antenna))) - - val tcpHandler = context.actorOf(TcpPipelineHandler.props(init, connection, antenna).withDeploy(Deploy.local)) - - antenna ! Management.RegisterTcpHandler(tcpHandler) - connection ! 
Tcp.Register(tcpHandler) - } -} - -object Server { - def apply[Evt, Cmd](serverPort: Int, resolver: Resolver[Evt, Cmd], description: String = "Sentinel Server", stages: ⇒ PipelineStage[PipelineContext, Cmd, ByteString, Evt, ByteString], lowBytes: Long = 100L, highBytes: Long = 50 * 1024L, maxBufferSize: Long = 1000L * 1024L)(implicit system: ActorSystem) = { - new Server[Evt, Cmd] { - val actor = system.actorOf(Props(new ServerCore(serverPort, description, stages, resolver)(lowBytes, highBytes, maxBufferSize)).withDispatcher("nl.gideondk.sentinel.sentinel-dispatcher"), name = "sentinel-server-" + java.util.UUID.randomUUID.toString) - } - } -} diff --git a/src/main/scala/nl/gideondk/sentinel/Task.scala b/src/main/scala/nl/gideondk/sentinel/Task.scala deleted file mode 100644 index 6744778..0000000 --- a/src/main/scala/nl/gideondk/sentinel/Task.scala +++ /dev/null @@ -1,77 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.Await -import scala.concurrent.Future -import scala.concurrent.ExecutionContext.Implicits.global -import scala.concurrent.duration.Duration -import scala.util.Try - -import scalaz._ -import scalaz.Scalaz._ -import scalaz.effect.IO - -final case class Task[A](get: IO[Future[A]]) { - self ⇒ - def start: Future[A] = get.unsafePerformIO - - def run(implicit atMost: Duration): Try[A] = Await.result((start.map(Try(_)) recover { - case x ⇒ Try(throw x) - }), atMost) -} - -trait TaskMonad extends Monad[Task] { - def point[A](a: ⇒ A): Task[A] = Task((Future(a)).point[IO]) - - def bind[A, B](fa: Task[A])(f: A ⇒ Task[B]) = - Task(Monad[IO].point(fa.get.unsafePerformIO.flatMap { - x ⇒ - f(x).get.unsafePerformIO - })) -} - -trait TaskCatchable extends Catchable[Task] with TaskMonad { - def fail[A](e: Throwable): Task[A] = Task(Future.failed(e)) - - def attempt[A](t: Task[A]): Task[Throwable \/ A] = map(t)(x ⇒ \/-(x)) -} - -trait TaskComonad extends Comonad[Task] with TaskMonad { - implicit protected def atMost: Duration - - def cobind[A, B](fa: 
Task[A])(f: Task[A] ⇒ B): Task[B] = point(f(fa)) - - def cojoin[A](a: Task[A]): Task[Task[A]] = point(a) - - def copoint[A](fa: Task[A]): A = fa.run.get -} - -trait TaskFunctions { - - import scalaz._ - import Scalaz._ - - def apply[A](a: ⇒ Future[A]): Task[A] = Task(Monad[IO].point(a)) - - def sequence[A](z: List[Task[A]]): Task[List[A]] = - Task(z.map(_.get).sequence[IO, Future[A]].map(x ⇒ Future.sequence(x))) - - def sequenceSuccesses[A](z: List[Task[A]]): Task[List[A]] = - Task(z.map(_.get).sequence[IO, Future[A]].map { - x ⇒ - Future.sequence(x.map(f ⇒ f.map(Some(_)) recover { - case x ⇒ None - })).map(_.filter(_.isDefined).map(_.get)) - }) - -} - -trait TaskImplementation extends TaskFunctions { - implicit def taskMonadInstance = new TaskMonad {} - - implicit def taskComonadInstance(implicit d: Duration) = new TaskComonad { - override protected val atMost = d - } -} - -object Task extends TaskImplementation { -} diff --git a/src/main/scala/nl/gideondk/sentinel/client/Client.scala b/src/main/scala/nl/gideondk/sentinel/client/Client.scala new file mode 100644 index 0000000..6db231c --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/client/Client.scala @@ -0,0 +1,209 @@ +package nl.gideondk.sentinel.client + +import java.util.concurrent.TimeUnit + +import akka.NotUsed +import akka.actor.{ ActorSystem, ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider } +import akka.stream._ +import akka.stream.scaladsl.{ BidiFlow, Broadcast, Flow, GraphDSL, Merge, RunnableGraph, Sink, Source } +import akka.util.ByteString +import com.typesafe.config.{ Config ⇒ TypesafeConfig } +import nl.gideondk.sentinel.Config +import nl.gideondk.sentinel.client.Client._ +import nl.gideondk.sentinel.client.ClientStage.{ HostEvent, _ } +import nl.gideondk.sentinel.pipeline.{ Processor, Resolver } +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent._ +import scala.concurrent.duration._ +import scala.util.Try + +class ClientConfig(config: TypesafeConfig) extends 
Extension { + val connectionsPerHost = config.getInt("client.host.max-connections") + val maxFailuresPerHost = config.getInt("client.host.max-failures") + val failureRecoveryPeriod = Duration(config.getDuration("client.host.failure-recovery-duration").toNanos, TimeUnit.NANOSECONDS) + + val reconnectDuration = Duration(config.getDuration("client.host.reconnect-duration").toNanos, TimeUnit.NANOSECONDS) + val shouldReconnect = config.getBoolean("client.host.auto-reconnect") + + val clientParallelism = config.getInt("client.parallelism") + val inputBufferSize = config.getInt("client.input-buffer-size") +} + +object ClientConfig extends ExtensionId[ClientConfig] with ExtensionIdProvider { + override def lookup = ClientConfig + override def createExtension(system: ExtendedActorSystem) = + new ClientConfig(system.settings.config.getConfig("nl.gideondk.sentinel")) + override def get(system: ActorSystem): ClientConfig = super.get(system) + + private def clientConfig(implicit system: ActorSystem) = apply(system) + + def connectionsPerHost(implicit system: ActorSystem) = clientConfig.connectionsPerHost + def maxFailuresPerHost(implicit system: ActorSystem) = clientConfig.maxFailuresPerHost + def failureRecoveryPeriod(implicit system: ActorSystem) = clientConfig.failureRecoveryPeriod + + def reconnectDuration(implicit system: ActorSystem) = clientConfig.reconnectDuration + def shouldReconnect(implicit system: ActorSystem) = clientConfig.shouldReconnect + + def clientParallelism(implicit system: ActorSystem) = clientConfig.clientParallelism + def inputBufferSize(implicit system: ActorSystem) = clientConfig.inputBufferSize +} + +object Client { + + private def reconnectLogic[M](builder: GraphDSL.Builder[M], hostEventSource: Source[HostEvent, NotUsed]#Shape, hostEventIn: Inlet[HostEvent], hostEventOut: Outlet[HostEvent])(implicit system: ActorSystem) = { + import GraphDSL.Implicits._ + implicit val b = builder + + val delay = ClientConfig.reconnectDuration + val groupDelay = 
Flow[HostEvent].groupBy[Host](1024, { x: HostEvent ⇒ x.host }).delay(delay).map { x ⇒ system.log.warning(s"Reconnecting after ${delay.toSeconds}s for ${x.host}"); HostUp(x.host) }.mergeSubstreams + + if (ClientConfig.shouldReconnect) { + val connectionMerge = builder.add(Merge[HostEvent](2)) + hostEventSource ~> connectionMerge ~> hostEventIn + hostEventOut ~> b.add(groupDelay) ~> connectionMerge + } else { + hostEventSource ~> hostEventIn + hostEventOut ~> Sink.ignore + } + } + + def apply[Cmd, Evt](hosts: Source[HostEvent, NotUsed], resolver: Resolver[Evt], + shouldReact: Boolean, inputOverflowStrategy: OverflowStrategy, + protocol: BidiFlow[Cmd, ByteString, ByteString, Evt, Any])(implicit system: ActorSystem, mat: Materializer, ec: ExecutionContext): Client[Cmd, Evt] = { + val processor = Processor[Cmd, Evt](resolver, Config.producerParallelism) + new Client(hosts, ClientConfig.connectionsPerHost, ClientConfig.maxFailuresPerHost, ClientConfig.failureRecoveryPeriod, ClientConfig.inputBufferSize, inputOverflowStrategy, processor, protocol.reversed) + } + + def apply[Cmd, Evt](hosts: List[Host], resolver: Resolver[Evt], + shouldReact: Boolean, inputOverflowStrategy: OverflowStrategy, + protocol: BidiFlow[Cmd, ByteString, ByteString, Evt, Any])(implicit system: ActorSystem, mat: Materializer, ec: ExecutionContext): Client[Cmd, Evt] = { + val processor = Processor[Cmd, Evt](resolver, Config.producerParallelism) + new Client(Source(hosts.map(HostUp)), ClientConfig.connectionsPerHost, ClientConfig.maxFailuresPerHost, ClientConfig.failureRecoveryPeriod, ClientConfig.inputBufferSize, inputOverflowStrategy, processor, protocol.reversed) + } + + def flow[Cmd, Evt](hosts: Source[HostEvent, NotUsed], resolver: Resolver[Evt], + shouldReact: Boolean = false, protocol: BidiFlow[Cmd, ByteString, ByteString, Evt, Any])(implicit system: ActorSystem, mat: Materializer, ec: ExecutionContext) = { + val processor = Processor[Cmd, Evt](resolver, Config.producerParallelism) + type 
Context = Promise[Event[Evt]] + + val eventHandler = Sink.foreach[(Try[Event[Evt]], Promise[Event[Evt]])] { + case (evt, context) ⇒ context.complete(evt) + } + + Flow.fromGraph(GraphDSL.create(hosts) { implicit b ⇒ + connections ⇒ + import GraphDSL.Implicits._ + + val s = b.add(new ClientStage[Context, Cmd, Evt](ClientConfig.connectionsPerHost, ClientConfig.maxFailuresPerHost, ClientConfig.failureRecoveryPeriod, true, processor, protocol.reversed)) + + reconnectLogic(b, connections, s.in2, s.out2) + + val input = b add Flow[Command[Cmd]].map(x ⇒ (x, Promise[Event[Evt]]())) + val broadcast = b add Broadcast[(Command[Cmd], Promise[Event[Evt]])](2) + + val output = b add Flow[(Command[Cmd], Promise[Event[Evt]])].mapAsync(ClientConfig.clientParallelism)(_._2.future) + + s.out1 ~> eventHandler + input ~> broadcast + broadcast ~> output + broadcast ~> s.in1 + + FlowShape(input.in, output.out) + }) + } + + def rawFlow[Context, Cmd, Evt](hosts: Source[HostEvent, NotUsed], resolver: Resolver[Evt], + shouldReact: Boolean = false, + protocol: BidiFlow[Cmd, ByteString, ByteString, Evt, Any])(implicit system: ActorSystem, mat: Materializer, ec: ExecutionContext) = { + val processor = Processor[Cmd, Evt](resolver, Config.producerParallelism) + + Flow.fromGraph(GraphDSL.create(hosts) { implicit b ⇒ + connections ⇒ + + val s = b.add(new ClientStage[Context, Cmd, Evt](ClientConfig.connectionsPerHost, ClientConfig.maxFailuresPerHost, ClientConfig.failureRecoveryPeriod, true, processor, protocol.reversed)) + + reconnectLogic(b, connections, s.in2, s.out2) + + // Expose the event outlet (out1); out2 is already consumed by reconnectLogic + FlowShape(s.in1, s.out1) + }) + } + + trait ClientException + + case class InputQueueClosed() extends Exception with ClientException + + case class InputQueueUnavailable() extends Exception with ClientException + + case class IncorrectEventType[A](event: A) extends Exception with ClientException + + case class EventException[A](cause: A) extends Throwable + +} + +class Client[Cmd, Evt](hosts: Source[HostEvent, NotUsed], + 
connectionsPerHost: Int, maximumFailuresPerHost: Int, recoveryPeriod: FiniteDuration, + inputBufferSize: Int, inputOverflowStrategy: OverflowStrategy, + processor: Processor[Cmd, Evt], protocol: BidiFlow[ByteString, Evt, Cmd, ByteString, Any])(implicit system: ActorSystem, mat: Materializer) { + + type Context = Promise[Event[Evt]] + + val eventHandler = Sink.foreach[(Try[Event[Evt]], Promise[Event[Evt]])] { + case (evt, context) ⇒ context.complete(evt) + } + + val g = RunnableGraph.fromGraph(GraphDSL.create(Source.queue[(Command[Cmd], Promise[Event[Evt]])](inputBufferSize, inputOverflowStrategy)) { implicit b ⇒ + source ⇒ + import GraphDSL.Implicits._ + + val s = b.add(new ClientStage[Context, Cmd, Evt](connectionsPerHost, maximumFailuresPerHost, recoveryPeriod, true, processor, protocol)) + + reconnectLogic(b, b.add(hosts), s.in2, s.out2) + source.out ~> s.in1 + s.out1 ~> b.add(eventHandler) + + ClosedShape + }) + + val input = g.run() + + private def send(command: Command[Cmd])(implicit ec: ExecutionContext): Future[Event[Evt]] = { + val context = Promise[Event[Evt]]() + input.offer((command, context)).flatMap { + case QueueOfferResult.Dropped ⇒ Future.failed(InputQueueUnavailable()) + case QueueOfferResult.QueueClosed ⇒ Future.failed(InputQueueClosed()) + case QueueOfferResult.Failure(reason) ⇒ Future.failed(reason) + case QueueOfferResult.Enqueued ⇒ context.future + } + } + + def ask(command: Cmd)(implicit ec: ExecutionContext): Future[Evt] = send(SingularCommand(command)) flatMap { + case SingularEvent(x) ⇒ Future(x) + case SingularErrorEvent(x) ⇒ Future.failed(EventException(x)) + case x ⇒ Future.failed(IncorrectEventType(x)) + } + + def askStream(command: Cmd)(implicit ec: ExecutionContext): Future[Source[Evt, Any]] = send(SingularCommand(command)) flatMap { + case StreamEvent(x) ⇒ Future(x) + case SingularErrorEvent(x) ⇒ Future.failed(EventException(x)) + case x ⇒ Future.failed(IncorrectEventType(x)) + } + + def sendStream(stream: Source[Cmd, 
Any])(implicit ec: ExecutionContext): Future[Evt] = send(StreamingCommand(stream)) flatMap { + case SingularEvent(x) ⇒ Future(x) + case SingularErrorEvent(x) ⇒ Future.failed(EventException(x)) + case x ⇒ Future.failed(IncorrectEventType(x)) + } + + def sendStream(command: Cmd, stream: Source[Cmd, Any])(implicit ec: ExecutionContext): Future[Evt] = send(StreamingCommand(Source.single(command) ++ stream)) flatMap { + case SingularEvent(x) ⇒ Future(x) + case SingularErrorEvent(x) ⇒ Future.failed(EventException(x)) + case x ⇒ Future.failed(IncorrectEventType(x)) + } + + def react(stream: Source[Cmd, Any])(implicit ec: ExecutionContext): Future[Source[Evt, Any]] = send(StreamingCommand(stream)) flatMap { + case StreamEvent(x) ⇒ Future(x) + case SingularErrorEvent(x) ⇒ Future.failed(EventException(x)) + case x ⇒ Future.failed(IncorrectEventType(x)) + } +} diff --git a/src/main/scala/nl/gideondk/sentinel/client/ClientStage.scala b/src/main/scala/nl/gideondk/sentinel/client/ClientStage.scala new file mode 100644 index 0000000..307e023 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/client/ClientStage.scala @@ -0,0 +1,305 @@ +package nl.gideondk.sentinel.client + +import akka.actor.ActorSystem +import akka.stream._ +import akka.stream.scaladsl.{ BidiFlow, GraphDSL, RunnableGraph, Tcp } +import akka.stream.stage.GraphStageLogic.EagerTerminateOutput +import akka.stream.stage._ +import akka.util.ByteString +import akka.{ Done, stream } +import nl.gideondk.sentinel.pipeline.Processor +import nl.gideondk.sentinel.protocol.{ Command, Event } + +import scala.collection.mutable +import scala.concurrent._ +import scala.concurrent.duration._ +import scala.util.{ Failure, Success, Try } + +case class Host(host: String, port: Int) + +object ClientStage { + + trait ConnectionClosedException + + trait HostEvent { + def host: Host + } + + case class ConnectionClosedWithReasonException(message: String, cause: Throwable) extends Exception(message, cause) with 
ConnectionClosedException + + case class ConnectionClosedWithoutReasonException(message: String) extends Exception(message) with ConnectionClosedException + + case class HostUp(host: Host) extends HostEvent + + case class HostDown(host: Host) extends HostEvent + + case object NoConnectionsAvailableException extends Exception + +} + +import nl.gideondk.sentinel.client.ClientStage._ + +class ClientStage[Context, Cmd, Evt](connectionsPerHost: Int, maximumFailuresPerHost: Int, + recoveryPeriod: FiniteDuration, finishGracefully: Boolean, processor: Processor[Cmd, Evt], + protocol: BidiFlow[ByteString, Evt, Cmd, ByteString, Any])(implicit system: ActorSystem, mat: Materializer) + + extends GraphStage[BidiShape[(Command[Cmd], Context), (Try[Event[Evt]], Context), HostEvent, HostEvent]] { + + val connectionEventIn = Inlet[HostEvent]("ClientStage.ConnectionEvent.In") + val connectionEventOut = Outlet[HostEvent]("ClientStage.ConnectionEvent.Out") + val commandIn = Inlet[(Command[Cmd], Context)]("ClientStage.Command.In") + val eventOut = Outlet[(Try[Event[Evt]], Context)]("ClientStage.Event.Out") + + override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new TimerGraphStageLogic(shape) { + private val hosts = mutable.Map.empty[Host, Int] + private val hostFailures = mutable.Map.empty[Host, Int] + private val connectionPool = mutable.Queue.empty[Connection] + private val failures = mutable.Queue.empty[(Try[Event[Evt]], Context)] + private var antennaId = 0 + private var closingOnCommandIn = false + + override def preStart() = { + pull(connectionEventIn) + pull(commandIn) + schedulePeriodically(Done, recoveryPeriod) + } + + def nextId() = { + antennaId += 1 + antennaId + } + + def addHost(host: Host) = { + if (!hosts.contains(host)) { + hosts += (host -> 0) + pullCommand(true) + } + } + + def ensureConnections() = { + hosts + .find(_._2 < connectionsPerHost) + .foreach { + case (host, connectionCount) ⇒ + val connection = Connection(host, nextId()) + 
connection.initialize() + connectionPool.enqueue(connection) + hosts(connection.host) = connectionCount + 1 + } + + pullCommand(false) + } + + def pullCommand(shouldInitializeConnection: Boolean): Unit = + if (hosts.isEmpty && isAvailable(commandIn)) { + val (_, context) = grab(commandIn) + failures.enqueue((Failure(NoConnectionsAvailableException), context)) + + if (isAvailable(eventOut) && failures.nonEmpty) { + push(eventOut, failures.dequeue()) + } + + pull(commandIn) + } else if (isAvailable(commandIn)) { + connectionPool.dequeueFirst(_.canBePushedForCommand) match { + case Some(connection) ⇒ + val (command, context) = grab(commandIn) + connection.pushCommand(command, context) + connectionPool.enqueue(connection) + pull(commandIn) + + case None ⇒ if (shouldInitializeConnection) ensureConnections() + } + } + + def connectionFailed(connection: Connection, cause: Throwable) = { + val host = connection.host + val totalFailure = hostFailures.getOrElse(host, 0) + 1 + hostFailures(host) = totalFailure + system.log.warning(s"Connection ${connection.connectionId} to $host failed due to ${cause.getMessage}") + + if (hostFailures(host) >= maximumFailuresPerHost) { + system.log.error(cause, s"Dropping $host, failed $totalFailure times") + emit(connectionEventOut, HostDown(host)) + removeHost(host, Some(cause)) + } else { + removeConnection(connection, Some(cause)) + } + } + + def removeHost(host: Host, cause: Option[Throwable] = None) = { + hosts.remove(host) + hostFailures.remove(host) + connectionPool.dequeueAll(_.host == host).foreach(_.close(cause)) + + if (isAvailable(eventOut) && failures.nonEmpty) { + push(eventOut, failures.dequeue()) + } + + pullCommand(true) + } + + def removeConnection(connection: Connection, cause: Option[Throwable]) = { + hosts(connection.host) = hosts(connection.host) - 1 + connectionPool.dequeueAll(_.connectionId == connection.connectionId).foreach(_.close(cause)) + + if (isAvailable(eventOut) && failures.nonEmpty) { + push(eventOut, 
failures.dequeue()) + } + + pullCommand(true) + } + + setHandler(connectionEventOut, EagerTerminateOutput) + + setHandler(connectionEventIn, new InHandler { + override def onPush() = { + grab(connectionEventIn) match { + case HostUp(connection) ⇒ addHost(connection) + case HostDown(connection) ⇒ removeHost(connection) + } + pull(connectionEventIn) + } + + override def onUpstreamFinish() = () + + override def onUpstreamFailure(ex: Throwable) = + failStage(new IllegalStateException(s"Stream for ConnectionEvents failed", ex)) + }) + + setHandler(commandIn, new InHandler { + override def onPush() = pullCommand(shouldInitializeConnection = true) + + override def onUpstreamFinish() = { + if (finishGracefully) { + closingOnCommandIn = true + connectionPool.foreach(_.requestClose()) + } else { + connectionPool.foreach(_.close(None)) + completeStage() + } + } + + override def onUpstreamFailure(ex: Throwable) = + failStage(new IllegalStateException(s"Requests stream failed", ex)) + }) + + setHandler(eventOut, new OutHandler { + override def onPull() = + if (failures.nonEmpty) push(eventOut, failures.dequeue()) + else { + connectionPool + .dequeueFirst(_.canBePulledForEvent) + .foreach(connection ⇒ { + if (isAvailable(eventOut)) { + val event = connection.pullEvent + push(eventOut, event) + } + connectionPool.enqueue(connection) + }) + } + + override def onDownstreamFinish() = { + completeStage() + } + }) + + override def onTimer(timerKey: Any) = { + hostFailures.clear() + } + + case class Connection(host: Host, connectionId: Int) { + connection ⇒ + private val connectionEventIn = new SubSinkInlet[Event[Evt]](s"Connection.[$host].[$connectionId].in") + private val connectionCommandOut = new SubSourceOutlet[Command[Cmd]](s"Connection.[$host].[$connectionId].out") + private val contexts = mutable.Queue.empty[Context] + private var closing = false + + def canBePushedForCommand = connectionCommandOut.isAvailable + + def canBePulledForEvent = 
connectionEventIn.isAvailable + + def pushCommand(command: Command[Cmd], context: Context) = { + contexts.enqueue(context) + connectionCommandOut.push(command) + } + + def pullEvent() = { + val event = connectionEventIn.grab() + val context = contexts.dequeue() + + if (closing) { + close(None) + (Success(event), context) + } else { + connectionEventIn.pull() + (Success(event), context) + } + } + + def requestClose() = { + closing = true + if (contexts.length == 0) { + close(None) + } + } + + def close(cause: Option[Throwable]) = { + val exception = cause match { + case Some(cause) ⇒ ConnectionClosedWithReasonException(s"Failure to process request to $host at connection $connectionId", cause) + case None ⇒ ConnectionClosedWithoutReasonException(s"Failure to process request to $host connection $connectionId") + } + + contexts.dequeueAll(_ ⇒ true).foreach(context ⇒ { + failures.enqueue((Failure(exception), context)) + }) + + connectionEventIn.cancel() + connectionCommandOut.complete() + } + + def initialize() = { + connectionEventIn.setHandler(new InHandler { + override def onPush() = if (isAvailable(eventOut)) { + push(eventOut, connection.pullEvent) + } + + override def onUpstreamFinish() = { + removeConnection(connection, None) + } + + override def onUpstreamFailure(reason: Throwable) = reason match { + case t: TimeoutException ⇒ removeConnection(connection, Some(t)) + case _ ⇒ connectionFailed(connection, reason) + } + }) + + connectionCommandOut.setHandler(new OutHandler { + override def onPull() = pullCommand(shouldInitializeConnection = true) + + override def onDownstreamFinish() = { + () + } + }) + + RunnableGraph.fromGraph(GraphDSL.create() { implicit b ⇒ + import GraphDSL.Implicits._ + + val pipeline = b.add(processor + .flow + .atop(protocol.reversed) + .join(Tcp().outgoingConnection(host.host, host.port))) + + connectionCommandOut.source ~> pipeline.in + pipeline.out ~> connectionEventIn.sink + + stream.ClosedShape + }).run()(subFusingMaterializer) + + 
connectionEventIn.pull() + } + } + } + + override def shape = new BidiShape(commandIn, eventOut, connectionEventIn, connectionEventOut) +} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/pipeline/ConsumerStage.scala b/src/main/scala/nl/gideondk/sentinel/pipeline/ConsumerStage.scala new file mode 100644 index 0000000..c122094 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/pipeline/ConsumerStage.scala @@ -0,0 +1,138 @@ +package nl.gideondk.sentinel.pipeline + +import akka.stream._ +import akka.stream.scaladsl.Source +import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler } +import nl.gideondk.sentinel.protocol.ConsumerAction._ +import nl.gideondk.sentinel.protocol._ + +class ConsumerStage[Evt, Cmd](resolver: Resolver[Evt]) extends GraphStage[FanOutShape2[Evt, (Event[Evt], ProducerAction[Evt, Cmd]), Event[Evt]]] { + private val eventIn = Inlet[Evt]("ConsumerStage.Event.In") + private val actionOut = Outlet[(Event[Evt], ProducerAction[Evt, Cmd])]("ConsumerStage.Action.Out") + private val signalOut = Outlet[Event[Evt]]("ConsumerStage.Signal.Out") + + val shape = new FanOutShape2(eventIn, actionOut, signalOut) + + override def createLogic(effectiveAttributes: Attributes) = new GraphStageLogic(shape) with InHandler with OutHandler { + private var chunkSource: SubSourceOutlet[Evt] = _ + + private def chunkSubStreamStarted = chunkSource != null + + private def idle = this + + def setInitialHandlers(): Unit = setHandlers(eventIn, signalOut, idle) + + /* + * + * Substream Logic + * + * */ + + implicit def mat = this.materializer + + val pullThroughHandler = new OutHandler { + override def onPull() = { + pull(eventIn) + } + } + + val substreamHandler = new InHandler with OutHandler { + def endStream(): Unit = { + chunkSource.complete() + chunkSource = null + + if (isAvailable(signalOut) && !hasBeenPulled(eventIn)) pull(eventIn) + setInitialHandlers() + } + + override def onPush(): Unit = { + val chunk = grab(eventIn) 
+ + resolver.process(mat)(chunk) match { + case ConsumeStreamChunk ⇒ + chunkSource.push(chunk) + + case EndStream ⇒ + endStream() + + case ConsumeChunkAndEndStream ⇒ + chunkSource.push(chunk) + endStream() + + case Ignore ⇒ () + } + } + + override def onPull(): Unit = { + // TODO: Recheck internal flow; checking should be obsolete + if (!hasBeenPulled(eventIn)) pull(eventIn) + } + + override def onUpstreamFinish(): Unit = { + chunkSource.complete() + completeStage() + } + + override def onUpstreamFailure(reason: Throwable): Unit = { + chunkSource.fail(reason) + failStage(reason) + } + } + + def startStream(initialChunk: Option[Evt]): Unit = { + chunkSource = new SubSourceOutlet[Evt]("ConsumerStage.Event.In.ChunkSubStream") + chunkSource.setHandler(pullThroughHandler) + setHandler(eventIn, substreamHandler) + + initialChunk match { + case Some(x) ⇒ push(signalOut, StreamEvent(Source.single(x) ++ Source.fromGraph(chunkSource.source))) + case None ⇒ push(signalOut, StreamEvent(Source.fromGraph(chunkSource.source))) + } + } + + def startStreamForAction(initialChunk: Evt, action: ProducerAction.StreamReaction[Evt, Cmd]): Unit = { + chunkSource = new SubSourceOutlet[Evt]("ConsumerStage.Event.In.ChunkSubStream") + chunkSource.setHandler(pullThroughHandler) + setHandler(eventIn, substreamHandler) + setHandler(actionOut, substreamHandler) + + push(actionOut, (StreamEvent(Source.single(initialChunk) ++ Source.fromGraph(chunkSource.source)), action)) + } + + def onPush(): Unit = { + val evt = grab(eventIn) + + resolver.process(mat)(evt) match { + case x: ProducerAction.Signal[Evt, Cmd] ⇒ push(actionOut, (SingularEvent(evt), x)) + + case x: ProducerAction.ProduceStream[Evt, Cmd] ⇒ push(actionOut, (SingularEvent(evt), x)) + + case x: ProducerAction.ConsumeStream[Evt, Cmd] ⇒ startStreamForAction(evt, x) + + case x: ProducerAction.ProcessStream[Evt, Cmd] ⇒ startStreamForAction(evt, x) + + case AcceptSignal ⇒ push(signalOut, SingularEvent(evt)) + + case AcceptError ⇒ 
push(signalOut, SingularErrorEvent(evt)) + + case StartStream ⇒ startStream(None) + + case ConsumeStreamChunk ⇒ startStream(Some(evt)) + + case ConsumeChunkAndEndStream ⇒ push(signalOut, StreamEvent(Source.single(evt))) + + case Ignore ⇒ () + } + } + + def onPull(): Unit = { + if (!chunkSubStreamStarted && !hasBeenPulled(eventIn)) { + pull(eventIn) + } + } + + setHandler(actionOut, this) + + setInitialHandlers() + } +} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/pipeline/Processor.scala b/src/main/scala/nl/gideondk/sentinel/pipeline/Processor.scala new file mode 100644 index 0000000..11eea89 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/pipeline/Processor.scala @@ -0,0 +1,48 @@ +package nl.gideondk.sentinel.pipeline + +import akka.stream.{ BidiShape, Materializer } +import akka.stream.scaladsl.{ BidiFlow, Flow, GraphDSL, Merge, Sink } +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent.ExecutionContext + +case class Processor[Cmd, Evt](flow: BidiFlow[Command[Cmd], Cmd, Evt, Event[Evt], Any]) + +object Processor { + def apply[Cmd, Evt](resolver: Resolver[Evt], producerParallelism: Int, shouldReact: Boolean = false)(implicit ec: ExecutionContext): Processor[Cmd, Evt] = { + + val consumerStage = new ConsumerStage[Evt, Cmd](resolver) + val producerStage = new ProducerStage[Evt, Cmd]() + + val functionApply = Flow[(Event[Evt], ProducerAction[Evt, Cmd])].mapAsync[Command[Cmd]](producerParallelism) { + case (SingularEvent(evt), x: ProducerAction.Signal[Evt, Cmd]) ⇒ x.f(evt).map(SingularCommand[Cmd]) + case (SingularEvent(evt), x: ProducerAction.ProduceStream[Evt, Cmd]) ⇒ x.f(evt).map(StreamingCommand[Cmd]) + case (StreamEvent(evt), x: ProducerAction.ConsumeStream[Evt, Cmd]) ⇒ x.f(evt).map(SingularCommand[Cmd]) + case (StreamEvent(evt), x: ProducerAction.ProcessStream[Evt, Cmd]) ⇒ x.f(evt).map(StreamingCommand[Cmd]) + } + + Processor(BidiFlow.fromGraph[Command[Cmd], Cmd, Evt, Event[Evt], Any] { + GraphDSL.create() { 
implicit b ⇒ + import GraphDSL.Implicits._ + + val producer = b add producerStage + val consumer = b add consumerStage + + val commandIn = b add Flow[Command[Cmd]] + + if (shouldReact) { + val fa = b add functionApply + val merge = b add Merge[Command[Cmd]](2) + commandIn ~> merge.in(0) + consumer.out0 ~> fa ~> merge.in(1) + merge.out ~> producer + } else { + consumer.out0 ~> Sink.ignore + commandIn ~> producer + } + + BidiShape(commandIn.in, producer.out, consumer.in, consumer.out1) + } + }) + } +} diff --git a/src/main/scala/nl/gideondk/sentinel/pipeline/ProducerStage.scala b/src/main/scala/nl/gideondk/sentinel/pipeline/ProducerStage.scala new file mode 100644 index 0000000..8462577 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/pipeline/ProducerStage.scala @@ -0,0 +1,67 @@ +package nl.gideondk.sentinel.pipeline + +import akka.stream._ +import akka.stream.scaladsl.Source +import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler } +import nl.gideondk.sentinel.protocol.{ Command, SingularCommand, StreamingCommand } + +class ProducerStage[In, Out] extends GraphStage[FlowShape[Command[Out], Out]] { + private val in = Inlet[Command[Out]]("ProducerStage.Command.In") + private val out = Outlet[Out]("ProducerStage.Command.Out") + + val shape = new FlowShape(in, out) + + override def createLogic(effectiveAttributes: Attributes) = new GraphStageLogic(shape) { + var streaming = false + var closeAfterCompletion = false + + val defaultInHandler = new InHandler { + override def onPush(): Unit = grab(in) match { + case x: SingularCommand[Out] ⇒ push(out, x.payload) + case x: StreamingCommand[Out] ⇒ stream(x.stream) + } + + override def onUpstreamFinish(): Unit = { + if (streaming) closeAfterCompletion = true + else completeStage() + } + } + + val waitForDemandHandler = new OutHandler { + def onPull(): Unit = pull(in) + } + + setHandler(in, defaultInHandler) + setHandler(out, waitForDemandHandler) + + def stream(outStream: Source[Out, Any]): Unit = 
{ + streaming = true + val sinkIn = new SubSinkInlet[Out]("ProducerStage.Command.Out.ChunkSubStream") + sinkIn.setHandler(new InHandler { + override def onPush(): Unit = push(out, sinkIn.grab()) + + override def onUpstreamFinish(): Unit = { + if (closeAfterCompletion) { + completeStage() + } else { + streaming = false + setHandler(out, waitForDemandHandler) + if (isAvailable(out)) pull(in) + } + } + }) + + setHandler(out, new OutHandler { + override def onPull(): Unit = sinkIn.pull() + + override def onDownstreamFinish(): Unit = { + completeStage() + sinkIn.cancel() + } + }) + + sinkIn.pull() + outStream.runWith(sinkIn.sink)(subFusingMaterializer) + } + } +} diff --git a/src/main/scala/nl/gideondk/sentinel/pipeline/Resolver.scala b/src/main/scala/nl/gideondk/sentinel/pipeline/Resolver.scala new file mode 100644 index 0000000..450df42 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/pipeline/Resolver.scala @@ -0,0 +1,9 @@ +package nl.gideondk.sentinel.pipeline + +import akka.stream.Materializer +import nl.gideondk.sentinel.protocol.Action + +trait Resolver[In] { + def process(implicit mat: Materializer): PartialFunction[In, Action] +} + diff --git a/src/main/scala/nl/gideondk/sentinel/processors/Consumer.scala b/src/main/scala/nl/gideondk/sentinel/processors/Consumer.scala deleted file mode 100644 index 215ad6e..0000000 --- a/src/main/scala/nl/gideondk/sentinel/processors/Consumer.scala +++ /dev/null @@ -1,152 +0,0 @@ -package nl.gideondk.sentinel.processors - -import scala.collection.immutable.Queue -import scala.concurrent._ -import scala.concurrent.duration.DurationInt - -import akka.actor._ -import akka.io.TcpPipelineHandler.{ Init, WithinActorContext } -import akka.pattern.ask -import akka.util.Timeout - -import play.api.libs.iteratee._ - -import nl.gideondk.sentinel._ - -object Consumer { - - trait StreamConsumerMessage - - case object ReadyForStream extends StreamConsumerMessage - - case object StartingWithStream extends StreamConsumerMessage - - case 
object AskNextChunk extends StreamConsumerMessage - - case object RegisterStreamConsumer extends StreamConsumerMessage - - case object ReleaseStreamConsumer extends StreamConsumerMessage - - trait ConsumerData[Evt] - - case class ConsumerException[Evt](cause: Evt) extends Exception - - case class DataChunk[Evt](c: Evt) extends ConsumerData[Evt] - - case class ErrorChunk[Evt](c: Evt) extends ConsumerData[Evt] - - case class EndOfStream[Evt]() extends ConsumerData[Evt] - -} - -class Consumer[Cmd, Evt](init: Init[WithinActorContext, Cmd, Evt], streamChunkTimeout: Timeout = Timeout(5 seconds)) extends Actor with ActorLogging { - - import Registration._ - import Consumer._ - import ConsumerAction._ - - import context.dispatcher - - var hooks = Queue[Promise[ConsumerData[Evt]]]() - var buffer = Queue[Promise[ConsumerData[Evt]]]() - - var registrations = Queue[Registration[Evt, _]]() - var currentPromise: Option[Promise[Evt]] = None - - var runningSource: Option[Enumerator[Evt]] = None - - def processAction(data: Evt, action: ConsumerAction) = { - def handleConsumerData(cd: ConsumerData[Evt]) = { - hooks.headOption match { - case Some(x) ⇒ - x.success(cd) - hooks = hooks.tail - case None ⇒ - buffer :+= Promise.successful(cd) - } - } - - action match { - case AcceptSignal ⇒ - handleConsumerData(DataChunk(data)) - case AcceptError ⇒ - handleConsumerData(ErrorChunk(data)) - - case ConsumeStreamChunk ⇒ - handleConsumerData(DataChunk(data)) // Should eventually seperate data chunks and stream chunks for better socket consistency handling - case EndStream ⇒ - handleConsumerData(EndOfStream[Evt]()) - case ConsumeChunkAndEndStream ⇒ - handleConsumerData(DataChunk(data)) - handleConsumerData(EndOfStream[Evt]()) - - case Ignore ⇒ () - } - } - - def popAndSetHook = { - val worker = self - val registration = registrations.head - registrations = registrations.tail - - implicit val timeout = streamChunkTimeout - - registration match { - case x: ReplyRegistration[Evt] ⇒ 
x.promise.completeWith((self ? AskNextChunk).mapTo[Promise[ConsumerData[Evt]]].flatMap(_.future.flatMap { - _ match { - case x: DataChunk[Evt] ⇒ - Future.successful(x.c) - case x: ErrorChunk[Evt] ⇒ - Future.failed(ConsumerException(x.c)) - } - })) - case x: StreamReplyRegistration[Evt] ⇒ - val resource = Enumerator.generateM { - (worker ? AskNextChunk).mapTo[Promise[ConsumerData[Evt]]].flatMap(_.future).flatMap { - _ match { - case x: EndOfStream[Evt] ⇒ (worker ? ReleaseStreamConsumer) flatMap (u ⇒ Future(None)) - case x: DataChunk[Evt] ⇒ Future(Some(x.c)) - case x: ErrorChunk[Evt] ⇒ (worker ? ReleaseStreamConsumer) flatMap (u ⇒ Future.failed(ConsumerException(x.c))) - } - } - } - - runningSource = Some(resource) - x.promise success resource - } - } - - def handleRegistrations: Receive = { - case rc: Registration[Evt, _] ⇒ - registrations :+= rc - if (runningSource.isEmpty && currentPromise.isEmpty) popAndSetHook - } - - var behavior: Receive = handleRegistrations orElse { - case ReleaseStreamConsumer ⇒ - runningSource = None - if (registrations.headOption.isDefined) popAndSetHook - sender ! () - - case AskNextChunk ⇒ - val promise = buffer.headOption match { - case Some(p) ⇒ - buffer = buffer.tail - p - case None ⇒ - val p = Promise[ConsumerData[Evt]]() - hooks :+= p - p - } - sender ! 
promise - - case x: ConsumerActionAndData[Evt] ⇒ processAction(x.data, x.action) - - } - - override def postStop() = { - hooks.foreach(_.failure(new Exception("Actor quit unexpectedly"))) - } - - def receive = behavior -} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/processors/Producer.scala b/src/main/scala/nl/gideondk/sentinel/processors/Producer.scala deleted file mode 100644 index 51a8ad9..0000000 --- a/src/main/scala/nl/gideondk/sentinel/processors/Producer.scala +++ /dev/null @@ -1,147 +0,0 @@ -package nl.gideondk.sentinel.processors - -import scala.collection.immutable.Queue -import scala.concurrent.{ ExecutionContext, Future, Promise } -import scala.concurrent.duration.DurationInt -import scala.util.{ Failure, Success } - -import akka.actor._ -import akka.io.TcpPipelineHandler.{ Init, WithinActorContext } -import akka.pattern.ask -import akka.util.Timeout - -import scalaz._ -import Scalaz._ - -import play.api.libs.iteratee._ - -import nl.gideondk.sentinel._ - -object Producer { - trait HandleResult - case class HandleAsyncResult[Cmd](response: Cmd) extends HandleResult - case class HandleStreamResult[Cmd](stream: Enumerator[Cmd]) extends HandleResult - - trait StreamProducerMessage - case class StreamProducerChunk[Cmd](c: Cmd) extends StreamProducerMessage - - case object StartStreamHandling extends StreamProducerMessage - case object ReadyForStream extends StreamProducerMessage - case object StreamProducerEnded extends StreamProducerMessage - case object StreamProducerChunkReceived extends StreamProducerMessage - - case object DequeueResponse -} - -class Producer[Cmd, Evt](init: Init[WithinActorContext, Cmd, Evt], streamChunkTimeout: Timeout = Timeout(5 seconds)) extends Actor with ActorLogging with Stash { - import Producer._ - import ProducerAction._ - import context.dispatcher - - var responseQueue = Queue.empty[Promise[HandleResult]] - - def produceAsyncResult(data: Evt, f: Evt ⇒ Future[Cmd]) = { - val worker = self - 
val promise = Promise[HandleResult]() - responseQueue :+= promise - - for { - response ← f(data) map (result ⇒ HandleAsyncResult(result)) - } yield { - promise.success(response) - worker ! DequeueResponse - } - } - - def produceStreamResult(data: Evt, f: Evt ⇒ Future[Enumerator[Cmd]]) = { - val worker = self - val promise = Promise[HandleResult]() - responseQueue :+= promise - - for { - response ← f(data) map (result ⇒ HandleStreamResult(result)) - } yield { - promise.success(response) - worker ! DequeueResponse - } - } - - val initSignal = produceAsyncResult(_, _) - val initStreamConsumer = produceAsyncResult(_, _) - val initStreamProducer = produceStreamResult(_, _) - - def processAction(data: Evt, action: ProducerAction[Evt, Cmd]) = { - val worker = self - val future = action match { - case x: Signal[Evt, Cmd] ⇒ initSignal(data, x.f) - - case x: ProduceStream[Evt, Cmd] ⇒ initStreamProducer(data, x.f) - - case x: ConsumeStream[Evt, Cmd] ⇒ - val imcomingStreamPromise = Promise[Enumerator[Evt]]() - context.parent ! Registration.StreamReplyRegistration(imcomingStreamPromise) - imcomingStreamPromise.future flatMap ((s) ⇒ initStreamConsumer(data, x.f(_)(s))) - } - - future.onFailure { - case e ⇒ - log.error(e, e.getMessage) - context.stop(self) - } - } - - def handleRequest: Receive = { - case x: ProducerActionAndData[Evt, Cmd] ⇒ - processAction(x.data, x.action) - } - - def handleDequeue: Receive = { - case DequeueResponse ⇒ { - def dequeueAndSend: Unit = { - if (!responseQueue.isEmpty && responseQueue.front.isCompleted) { - // TODO: Should be handled a lot safer! - val promise = responseQueue.head - responseQueue = responseQueue.tail - promise.future.value match { - case Some(Success(v)) ⇒ - self ! v - dequeueAndSend - case Some(Failure(e)) ⇒ // Would normally not occur... 
- log.error(e, e.getMessage) - context.stop(self) - } - } - - } - dequeueAndSend - } - } - - def handleRequestAndResponse: Receive = handleRequest orElse handleDequeue orElse { - case x: HandleAsyncResult[Cmd] ⇒ context.parent ! Reply.Response(x.response) - case x: HandleStreamResult[Cmd] ⇒ - val worker = self - implicit val timeout = streamChunkTimeout - - (x.stream |>>> Iteratee.foldM(())((a, b) ⇒ (worker ? StreamProducerChunk(b)).map(x ⇒ ()))).flatMap(x ⇒ (worker ? StreamProducerEnded)) - - context.become(handleRequestAndStreamResponse) - - case x: StreamProducerMessage ⇒ - log.error("Internal leakage in stream: received unexpected stream chunk") - context.stop(self) - } - - def handleRequestAndStreamResponse: Receive = handleRequest orElse { - case StreamProducerChunk(c) ⇒ - sender ! StreamProducerChunkReceived - context.parent ! Reply.StreamResponseChunk(c) - case StreamProducerEnded ⇒ - sender ! StreamProducerChunkReceived - context.become(handleRequestAndResponse) - unstashAll() - case _ ⇒ stash() - } - - def receive = handleRequestAndResponse -} \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/Action.scala b/src/main/scala/nl/gideondk/sentinel/protocol/Action.scala similarity index 59% rename from src/main/scala/nl/gideondk/sentinel/Action.scala rename to src/main/scala/nl/gideondk/sentinel/protocol/Action.scala index 194ff39..117cd9e 100644 --- a/src/main/scala/nl/gideondk/sentinel/Action.scala +++ b/src/main/scala/nl/gideondk/sentinel/protocol/Action.scala @@ -1,40 +1,53 @@ -package nl.gideondk.sentinel +package nl.gideondk.sentinel.protocol -import scala.concurrent.Future +import akka.stream.scaladsl.Source -import play.api.libs.iteratee._ +import scala.concurrent.Future trait Action trait ProducerAction[E, C] extends Action + trait ConsumerAction extends Action object ProducerAction { + trait Reaction[E, C] extends ProducerAction[E, C] + trait StreamReaction[E, C] extends Reaction[E, C] - trait Signal[E, C] extends 
Reaction[E, C] { - def f: E ⇒ Future[C] + trait Signal[In, Out] extends Reaction[In, Out] { + def f: In ⇒ Future[Out] } - object Signal { - def apply[E, C](fun: E ⇒ Future[C]): Signal[E, C] = new Signal[E, C] { val f = fun } + trait ConsumeStream[E, C] extends StreamReaction[E, C] { + def f: Source[E, Any] ⇒ Future[C] } - trait ConsumeStream[E, C] extends StreamReaction[E, C] { - def f: E ⇒ Enumerator[E] ⇒ Future[C] + trait ProduceStream[E, C] extends StreamReaction[E, C] { + def f: E ⇒ Future[Source[C, Any]] } - object ConsumeStream { - def apply[E, A <: E, B <: E, C](fun: A ⇒ Enumerator[B] ⇒ Future[C]): ConsumeStream[E, C] = new ConsumeStream[E, C] { val f = fun.asInstanceOf[E ⇒ Enumerator[E] ⇒ Future[C]] } // Yikes :/ + trait ProcessStream[E, C] extends StreamReaction[E, C] { + def f: Source[E, Any] ⇒ Future[Source[C, Any]] } - trait ProduceStream[E, C] extends StreamReaction[E, C] { - def f: E ⇒ Future[Enumerator[C]] + object Signal { + def apply[E, C](fun: E ⇒ Future[C]): Signal[E, C] = new Signal[E, C] { + val f = fun + } + } + + object ConsumeStream { + def apply[Evt, Cmd](fun: Source[Evt, Any] ⇒ Future[Cmd]): ConsumeStream[Evt, Cmd] = new ConsumeStream[Evt, Cmd] { + val f = fun + } } object ProduceStream { - def apply[E, C](fun: E ⇒ Future[Enumerator[C]]): ProduceStream[E, C] = new ProduceStream[E, C] { val f = fun } + def apply[E, C](fun: E ⇒ Future[Source[C, Any]]): ProduceStream[E, C] = new ProduceStream[E, C] { + val f = fun + } } } @@ -42,14 +55,21 @@ object ProducerAction { case class ProducerActionAndData[Evt, Cmd](action: ProducerAction[Evt, Cmd], data: Evt) object ConsumerAction { + case object AcceptSignal extends ConsumerAction + case object AcceptError extends ConsumerAction + case object StartStream extends ConsumerAction + case object ConsumeStreamChunk extends ConsumerAction + case object EndStream extends ConsumerAction + case object ConsumeChunkAndEndStream extends ConsumerAction case object Ignore extends ConsumerAction + } case class 
ConsumerActionAndData[Evt](action: ConsumerAction, data: Evt) \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/protocol/Command.scala b/src/main/scala/nl/gideondk/sentinel/protocol/Command.scala new file mode 100644 index 0000000..55f528f --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/protocol/Command.scala @@ -0,0 +1,32 @@ +package nl.gideondk.sentinel.protocol + +import akka.actor.ActorRef +import akka.stream.scaladsl.Source + +import scala.concurrent.Promise + +trait Event[A] + +case class SingularEvent[A](data: A) extends Event[A] + +case class SingularErrorEvent[A](data: A) extends Event[A] + +case class StreamEvent[A](chunks: Source[A, Any]) extends Event[A] + +trait Registration[A, E <: Event[A]] { + def promise: Promise[E] +} + +object Registration { + + case class SingularResponseRegistration[A](promise: Promise[SingularEvent[A]]) extends Registration[A, SingularEvent[A]] + + case class StreamReplyRegistration[A](promise: Promise[StreamEvent[A]]) extends Registration[A, StreamEvent[A]] + +} + +trait Command[Out] + +case class SingularCommand[Out](payload: Out) extends Command[Out] + +case class StreamingCommand[Out](stream: Source[Out, Any]) extends Command[Out] \ No newline at end of file diff --git a/src/main/scala/nl/gideondk/sentinel/server/Server.scala b/src/main/scala/nl/gideondk/sentinel/server/Server.scala new file mode 100644 index 0000000..395c258 --- /dev/null +++ b/src/main/scala/nl/gideondk/sentinel/server/Server.scala @@ -0,0 +1,43 @@ +package nl.gideondk.sentinel.server + +import akka.actor.ActorSystem +import akka.stream.scaladsl.{ BidiFlow, Flow, GraphDSL, Sink, Source, Tcp } +import akka.stream.{ Materializer, FlowShape } +import akka.util.ByteString +import nl.gideondk.sentinel.pipeline.{ Processor, Resolver } + +import scala.concurrent.ExecutionContext +import scala.util.{ Failure, Success } + +object Server { + def apply[Cmd, Evt](interface: String, port: Int, resolver: Resolver[Evt], protocol: 
BidiFlow[ByteString, Evt, Cmd, ByteString, Any])(implicit system: ActorSystem, mat: Materializer, ec: ExecutionContext): scala.concurrent.Future[Tcp.ServerBinding] = { + + val handler = Sink.foreach[Tcp.IncomingConnection] { conn ⇒ + val processor = Processor[Cmd, Evt](resolver, 1, true) + + val flow = Flow.fromGraph(GraphDSL.create() { implicit b ⇒ + import GraphDSL.Implicits._ + + val pipeline = b.add(processor.flow.atop(protocol.reversed)) + + pipeline.in1 <~ Source.empty + pipeline.out2 ~> Sink.ignore + + FlowShape(pipeline.in2, pipeline.out1) + }) + + conn handleWith flow + } + + val connections = Tcp().bind(interface, port, halfClose = true) + val binding = connections.to(handler).run() + + binding.onComplete { + case Success(b) ⇒ println("Bound to: " + b.localAddress) + case Failure(e) ⇒ println("Failed to bind to " + interface + ":" + port + ": " + e.getMessage) + system.terminate() + } + + binding + } +} diff --git a/src/test/scala/nl/gideondk/sentinel/ClientSpec.scala b/src/test/scala/nl/gideondk/sentinel/ClientSpec.scala new file mode 100644 index 0000000..5cd8d5a --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ClientSpec.scala @@ -0,0 +1,59 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.{ ActorMaterializer, ClosedShape, OverflowStrategy } +import akka.stream.scaladsl.{ GraphDSL, RunnableGraph, Sink, Source } +import nl.gideondk.sentinel.client.ClientStage.NoConnectionsAvailableException +import nl.gideondk.sentinel.client.{ Client, ClientStage, Host } +import nl.gideondk.sentinel.pipeline.Processor +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent.{ Await, Promise, duration } +import duration._ +import scala.util.{ Failure, Success, Try } + +class ClientSpec extends SentinelSpec(ActorSystem()) { + "a Client" should { + "keep message order intact" in { + val port = TestHelpers.portNumber.incrementAndGet() + val server = ClientStageSpec.mockServer(system, port) + implicit val materializer = ActorMaterializer() + + val numberOfMessages = 100 + + val messages = (for (i ← 0 to numberOfMessages) yield 
(SingularCommand[SimpleMessageFormat](SimpleReply(i.toString)))).toList + val sink = Sink.foreach[(Try[Event[SimpleMessageFormat]], Promise[Event[SimpleMessageFormat]])] { case (event, context) ⇒ context.complete(event) } + + val client = Client.flow(Source.single(ClientStage.HostUp(Host("localhost", port))), SimpleHandler, false, SimpleMessage.protocol) + val results = Source(messages).via(client).runWith(Sink.seq) + + whenReady(results) { result ⇒ + result should equal(messages.map(x ⇒ SingularEvent(x.payload))) + } + } + + "handle connection issues" in { + val port = TestHelpers.portNumber.incrementAndGet() + val serverSystem = ActorSystem() + ClientStageSpec.mockServer(serverSystem, port) + + implicit val materializer = ActorMaterializer() + + type Context = Promise[Event[SimpleMessageFormat]] + + val client = Client(Source.single(ClientStage.HostUp(Host("localhost", port))), SimpleHandler, false, OverflowStrategy.backpressure, SimpleMessage.protocol) + + Await.result(client.ask(SimpleReply("1")), 5 seconds) shouldEqual (SimpleReply("1")) + + serverSystem.terminate() + Thread.sleep(100) + + Try(Await.result(client.ask(SimpleReply("1")), 5 seconds)) shouldEqual (Failure(NoConnectionsAvailableException)) + + ClientStageSpec.mockServer(system, port) + Thread.sleep(3000) + + Await.result(client.ask(SimpleReply("1")), 5 seconds) shouldEqual (SimpleReply("1")) + } + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/ClientStageSpec.scala b/src/test/scala/nl/gideondk/sentinel/ClientStageSpec.scala new file mode 100644 index 0000000..c899f85 --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ClientStageSpec.scala @@ -0,0 +1,128 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.scaladsl.{ Flow, GraphDSL, RunnableGraph, Sink, Source, Tcp } +import akka.stream.{ ActorMaterializer, ClosedShape, OverflowStrategy } +import akka.util.ByteString +import nl.gideondk.sentinel.client.ClientStage.{ HostEvent, 
NoConnectionsAvailableException } +import nl.gideondk.sentinel.client.{ ClientStage, Host } +import nl.gideondk.sentinel.pipeline.Processor +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent._ +import scala.concurrent.duration._ +import scala.util.{ Failure, Success, Try } + +object ClientStageSpec { + def mockServer(system: ActorSystem, port: Int): Unit = { + implicit val sys = system + import system.dispatcher + implicit val materializer = ActorMaterializer() + + val handler = Sink.foreach[Tcp.IncomingConnection] { conn ⇒ + conn handleWith Flow[ByteString] + } + + val connections = Tcp().bind("localhost", port, halfClose = true) + val binding = connections.to(handler).run() + + binding.onComplete { + case Success(b) ⇒ println("Bound to: " + b.localAddress) + case Failure(e) ⇒ println("Mock server failed to bind: " + e.getMessage) + system.terminate() + } + } + + def createCommand(s: String) = { + (SingularCommand[SimpleMessageFormat](SimpleReply(s)), Promise[Event[SimpleMessageFormat]]()) + } +} + +class ClientStageSpec extends SentinelSpec(ActorSystem()) { + + import ClientStageSpec._ + + "The ClientStage" should { + "keep message order intact" in { + val port = TestHelpers.portNumber.incrementAndGet() + val server = mockServer(system, port) + + implicit val materializer = ActorMaterializer() + + type Context = Promise[Event[SimpleMessageFormat]] + + val numberOfMessages = 1024 + + val messages = (for (i ← 0 to numberOfMessages) yield (createCommand(i.toString))).toList + val sink = Sink.foreach[(Try[Event[SimpleMessageFormat]], Context)] { case (event, context) ⇒ context.complete(event) } + + val g = RunnableGraph.fromGraph(GraphDSL.create(Source.queue[(Command[SimpleMessageFormat], Promise[Event[SimpleMessageFormat]])](numberOfMessages, OverflowStrategy.backpressure)) { implicit b ⇒ + source ⇒ + import GraphDSL.Implicits._ + + val s = b.add(new ClientStage[Context, SimpleMessageFormat, SimpleMessageFormat](32, 8, 2 seconds, true, Processor(SimpleHandler, 1, false), SimpleMessage.protocol.reversed)) 
+ + Source.single(ClientStage.HostUp(Host("localhost", port))) ~> s.in2 + source.out ~> s.in1 + + s.out1 ~> b.add(sink) + s.out2 ~> b.add(Sink.ignore) + + ClosedShape + }) + + val sourceQueue = g.run() + messages.foreach(sourceQueue.offer) + val results = Future.sequence(messages.map(_._2.future)) + + whenReady(results) { result ⇒ + sourceQueue.complete() + result should equal(messages.map(x ⇒ SingularEvent(x._1.payload))) + } + } + + "handle host up and down events" in { + val port = TestHelpers.portNumber.incrementAndGet() + val server = mockServer(system, port) + + implicit val materializer = ActorMaterializer() + + type Context = Promise[Event[SimpleMessageFormat]] + + val hostEvents = Source.queue[HostEvent](10, OverflowStrategy.backpressure) + val commands = Source.queue[(Command[SimpleMessageFormat], Context)](10, OverflowStrategy.backpressure) + val events = Sink.queue[(Try[Event[SimpleMessageFormat]], Context)] + + val (hostQueue, commandQueue, eventQueue) = RunnableGraph.fromGraph(GraphDSL.create(hostEvents, commands, events)((_, _, _)) { implicit b ⇒ + (hostEvents, commands, events) ⇒ + + import GraphDSL.Implicits._ + + val s = b.add(new ClientStage[Context, SimpleMessageFormat, SimpleMessageFormat](1, 8, 2 seconds, true, Processor(SimpleHandler, 1, false), SimpleMessage.protocol.reversed)) + + hostEvents ~> s.in2 + commands ~> s.in1 + + s.out1 ~> events + s.out2 ~> b.add(Sink.ignore) + + ClosedShape + }).run() + + commandQueue.offer(createCommand("")) + Await.result(eventQueue.pull(), 5 seconds).get._1 shouldEqual Failure(NoConnectionsAvailableException) + + hostQueue.offer(ClientStage.HostUp(Host("localhost", port))) + Thread.sleep(200) + + commandQueue.offer(createCommand("")) + Await.result(eventQueue.pull(), 5 seconds).get._1 shouldEqual Success(SingularEvent(SimpleReply(""))) + + hostQueue.offer(ClientStage.HostDown(Host("localhost", port))) + Thread.sleep(200) + + commandQueue.offer(createCommand("")) + Await.result(eventQueue.pull(), 5 
seconds).get._1 shouldEqual Failure(NoConnectionsAvailableException) + } + } +} diff --git a/src/test/scala/nl/gideondk/sentinel/ConsumerStageSpec.scala b/src/test/scala/nl/gideondk/sentinel/ConsumerStageSpec.scala new file mode 100644 index 0000000..6c7bcac --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ConsumerStageSpec.scala @@ -0,0 +1,236 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.scaladsl.{ Flow, GraphDSL, RunnableGraph, Sink, Source } +import akka.stream.testkit.{ TestPublisher, TestSubscriber } +import akka.stream.{ ActorMaterializer, Attributes, ClosedShape } +import nl.gideondk.sentinel.pipeline.ConsumerStage +import nl.gideondk.sentinel.protocol.SimpleMessage._ +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent._ +import scala.concurrent.duration._ + +class ConsumerStageSpec extends SentinelSpec(ActorSystem()) { + + val eventFlow = Flow[Event[SimpleMessageFormat]].flatMapConcat { + case x: StreamEvent[SimpleMessageFormat] ⇒ x.chunks + case x: SingularEvent[SimpleMessageFormat] ⇒ Source.single(x.data) + } + + val stage = new ConsumerStage[SimpleMessageFormat, SimpleMessageFormat](SimpleHandler) + + "The ConsumerStage" should { + "handle incoming events" in { + implicit val materializer = ActorMaterializer() + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.head[Event[SimpleMessageFormat]]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + Source.single(SimpleReply("")) ~> s.in + s.out1 ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result should equal(SingularEvent(SimpleReply(""))) + } + } + + "handle multiple incoming events" in { + implicit val materializer = ActorMaterializer() + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[Event[SimpleMessageFormat]]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + Source(List(SimpleReply("A"), SimpleReply("B"), 
SimpleReply("C"))) ~> s.in + s.out1 ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result should equal(Seq(SingularEvent(SimpleReply("A")), SingularEvent(SimpleReply("B")), SingularEvent(SimpleReply("C")))) + } + } + + "not lose demand that comes in while handling incoming streams" in { + implicit val materializer = ActorMaterializer() + + val inProbe = TestPublisher.manualProbe[SimpleMessageFormat]() + val responseProbe = TestSubscriber.manualProbe[Event[SimpleMessageFormat]]() + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.fromSubscriber(responseProbe)) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + Source.fromPublisher(inProbe) ~> s.in + s.out1 ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + g.withAttributes(Attributes.inputBuffer(1, 1)).run() + + val inSub = inProbe.expectSubscription() + val responseSub = responseProbe.expectSubscription() + + // Pull first response + responseSub.request(1) + + // Expect propagation towards source + inSub.expectRequest(1) + + // Push one element into stream + inSub.sendNext(SimpleStreamChunk("A")) + + // Expect element flow towards response output + val response = responseProbe.expectNext() + + val entityProbe = TestSubscriber.manualProbe[SimpleMessageFormat]() + response.asInstanceOf[StreamEvent[SimpleMessageFormat]].chunks.to(Sink.fromSubscriber(entityProbe)).run() + + // Expect a subscription is made for the sub-stream + val entitySub = entityProbe.expectSubscription() + + // Request the initial element from the sub-source + entitySub.request(1) + + // // Pull is coming from merged stream for initial element + // inSub.expectRequest(1) + + // Expect initial element to be available + entityProbe.expectNext() + + // Request an additional chunk + entitySub.request(1) + + // Merged stream is empty, so expect demand to be propagated towards the source + inSub.expectRequest(1) + + // Send successive element + 
inSub.sendNext(SimpleStreamChunk("B")) + + // Expect the element to be pushed directly into the sub-source + entityProbe.expectNext() + + responseSub.request(1) + + inSub.sendNext(SimpleStreamChunk("")) + entityProbe.expectComplete() + + // and that demand should go downstream + // since the chunk end was consumed by the stage + inSub.expectRequest(1) + } + + "correctly output stream responses" in { + implicit val materializer = ActorMaterializer() + + val chunkSource = Source(List(SimpleStreamChunk("A"), SimpleStreamChunk("B"), SimpleStreamChunk("C"), SimpleStreamChunk(""))) + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[SimpleMessageFormat]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + chunkSource ~> s.in + s.out1 ~> eventFlow ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result should equal(Seq(SimpleStreamChunk("A"), SimpleStreamChunk("B"), SimpleStreamChunk("C"))) + } + } + + "correctly output multiple stream responses" in { + implicit val materializer = ActorMaterializer() + + val items = List.fill(10)(List(SimpleStreamChunk("A"), SimpleStreamChunk("B"), SimpleStreamChunk("C"), SimpleStreamChunk(""))).flatten + val chunkSource = Source(items) + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[SimpleMessageFormat]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + chunkSource ~> s.in + s.out1 ~> eventFlow ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result should equal(items.filter(_.payload.length > 0)) + } + } + + "correctly handle asymmetrical message types" in { + implicit val materializer = ActorMaterializer() + + val a = List(SimpleReply("A"), SimpleReply("B"), SimpleReply("C")) + val b = List.fill(10)(List(SimpleStreamChunk("A"), SimpleStreamChunk("B"), SimpleStreamChunk("C"), SimpleStreamChunk(""))).flatten + val c = List(SimpleReply("A"), SimpleReply("B"), 
SimpleReply("C")) + + val chunkSource = Source(a ++ b ++ c) + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[SimpleMessageFormat]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + chunkSource ~> s.in + s.out1 ~> eventFlow ~> sink.in + s.out0 ~> Sink.ignore + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result should equal(a ++ b.filter(_.payload.length > 0) ++ c) + } + } + + "correctly output signals on event-out pipe" in { + implicit val materializer = ActorMaterializer() + + val items = List(SimpleCommand(PING_PONG, ""), SimpleCommand(PING_PONG, ""), SimpleCommand(PING_PONG, "")) + + val g = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[(Event[SimpleMessageFormat], ProducerAction[SimpleMessageFormat, SimpleMessageFormat])]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val s = b add stage + + Source(items) ~> s.in + s.out1 ~> Sink.ignore + s.out0 ~> sink.in + + ClosedShape + }) + + whenReady(g.run()) { result ⇒ + result.map(_._1).asInstanceOf[Seq[SingularEvent[SimpleMessageFormat]]].map(_.data) should equal(items) + } + } + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/FullDuplexSpec.scala b/src/test/scala/nl/gideondk/sentinel/FullDuplexSpec.scala deleted file mode 100644 index ae2b72c..0000000 --- a/src/test/scala/nl/gideondk/sentinel/FullDuplexSpec.scala +++ /dev/null @@ -1,69 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.ExecutionContext.Implicits.global - -import org.scalatest.WordSpec -import org.scalatest.matchers.ShouldMatchers - -import scalaz._ -import Scalaz._ - -import akka.actor._ -import akka.routing._ -import scala.concurrent.duration._ - -import protocols._ - -class FullDuplexSpec extends WordSpec with ShouldMatchers { - - import SimpleMessage._ - - implicit val duration = Duration(25, SECONDS) - - def client(portNumber: Int)(implicit system: ActorSystem) = Client.randomRouting("localhost", portNumber, 1, "Worker", 
SimpleMessage.stages, 5 seconds, SimpleServerHandler)(system) - - def server(portNumber: Int)(implicit system: ActorSystem) = { - val s = Server(portNumber, SimpleServerHandler, stages = SimpleMessage.stages)(system) - Thread.sleep(100) - s - } - - "A client and a server" should { - "be able to exchange requests simultaneously" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val action = c ? SimpleCommand(PING_PONG, "") - val serverAction = (s ?* SimpleCommand(PING_PONG, "")).map(_.head) - - val responses = Task.sequence(List(action, serverAction)) - - val results = responses.copoint - - results.length should equal(2) - results.distinct.length should equal(1) - } - - "be able to exchange multiple requests simultaneously" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - val secC = client(portNumber) - - val numberOfRequests = 1000 - - val actions = Task.sequenceSuccesses(List.fill(numberOfRequests)(c ? SimpleCommand(PING_PONG, ""))) - val secActions = Task.sequenceSuccesses(List.fill(numberOfRequests)(secC ? 
SimpleCommand(PING_PONG, ""))) - val serverActions = Task.sequenceSuccesses(List.fill(numberOfRequests)((s ?** SimpleCommand(PING_PONG, "")))) - - val combined = Task.sequence(List(actions, serverActions.map(_.flatten), secActions)) - - val results = combined.copoint - - results(0).length should equal(numberOfRequests) - results(2).length should equal(numberOfRequests) - results(1).length should equal(numberOfRequests * 2) - } - } -} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/ProcessorSpec.scala b/src/test/scala/nl/gideondk/sentinel/ProcessorSpec.scala new file mode 100644 index 0000000..be00796 --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ProcessorSpec.scala @@ -0,0 +1,52 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.scaladsl.{ GraphDSL, RunnableGraph, Sink, Source } +import akka.stream.{ ActorMaterializer, ClosedShape } +import nl.gideondk.sentinel.pipeline.Processor +import nl.gideondk.sentinel.protocol._ + +import scala.concurrent.Await +import scala.concurrent.duration._ + +class ProcessorSpec extends SentinelSpec(ActorSystem()) { + val processor = Processor[SimpleMessageFormat, SimpleMessageFormat](SimpleHandler, 1) + val serverProcessor = Processor[SimpleMessageFormat, SimpleMessageFormat](SimpleServerHandler, 1, true) + + "The Processor" should { + "correctly flow in a client / server like situation" in { + import nl.gideondk.sentinel.protocol.SimpleMessage._ + + implicit val materializer = ActorMaterializer() + + val pingCommand = SingularCommand[SimpleMessageFormat](SimpleCommand(PING_PONG, "")) + val zeroCommand = SingularCommand[SimpleMessageFormat](SimpleCommand(0, "")) + + val source = Source[SingularCommand[SimpleMessageFormat]](List(pingCommand, zeroCommand, pingCommand, zeroCommand)) + + val flow = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[Event[SimpleMessageFormat]]) { implicit b ⇒ + sink ⇒ + import GraphDSL.Implicits._ + + val client = 
b.add(processor.flow) + val server = b.add(serverProcessor.flow.reversed) + + source ~> client.in1 + client.out1 ~> server.in1 + + server.out1 ~> b.add(Sink.ignore) + server.out2 ~> client.in2 + + client.out2 ~> sink.in + + Source.empty[SingularCommand[SimpleMessageFormat]] ~> server.in2 + + ClosedShape + }) + + whenReady(flow.run()) { result ⇒ + result should equal(Seq(SingularEvent(SimpleReply("PONG")), SingularEvent(SimpleReply("PONG")))) + } + } + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/ProducerStageSpec.scala b/src/test/scala/nl/gideondk/sentinel/ProducerStageSpec.scala new file mode 100644 index 0000000..e49aaa0 --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ProducerStageSpec.scala @@ -0,0 +1,53 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.ActorMaterializer +import akka.stream.scaladsl._ +import nl.gideondk.sentinel.pipeline.ProducerStage +import nl.gideondk.sentinel.protocol.{ SimpleMessageFormat, SimpleReply, SingularCommand, StreamingCommand } + +import scala.concurrent._ +import scala.concurrent.duration._ + +object ProducerStageSpec { + val stage = new ProducerStage[SimpleMessageFormat, SimpleMessageFormat]() +} + +class ProducerStageSpec extends SentinelSpec(ActorSystem()) { + + import ProducerStageSpec._ + + "The ProducerStage" should { + "handle outgoing messages" in { + implicit val materializer = ActorMaterializer() + + val command = SingularCommand[SimpleMessageFormat](SimpleReply("A")) + val singularResult = Source(List(command)).via(stage).runWith(Sink.seq) + whenReady(singularResult) { result ⇒ + result should equal(Seq(SimpleReply("A"))) + } + + val multiResult = Source(List(command, command, command)).via(stage).runWith(Sink.seq) + whenReady(multiResult) { result ⇒ + result should equal(Seq(SimpleReply("A"), SimpleReply("A"), SimpleReply("A"))) + } + } + + "handle outgoing streams" in { + implicit val materializer = ActorMaterializer() + + val items = 
List(SimpleReply("A"), SimpleReply("B"), SimpleReply("C"), SimpleReply("D")) + val command = StreamingCommand[SimpleMessageFormat](Source(items)) + + val singularResult = Source(List(command)).via(stage).runWith(Sink.seq) + whenReady(singularResult) { result ⇒ + result should equal(items) + } + + val multiResult = Source(List(command, command, command)).via(stage).runWith(Sink.seq) + whenReady(multiResult) { result ⇒ + result should equal((items ++ items ++ items)) + } + } + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/RequestResponse.scala b/src/test/scala/nl/gideondk/sentinel/RequestResponse.scala deleted file mode 100644 index 627963d..0000000 --- a/src/test/scala/nl/gideondk/sentinel/RequestResponse.scala +++ /dev/null @@ -1,74 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.ExecutionContext.Implicits.global - -import org.scalatest.WordSpec -import org.scalatest.matchers.{ Matchers, ShouldMatchers } - -import scalaz._ -import Scalaz._ - -import akka.actor._ -import akka.routing._ -import scala.concurrent.duration._ -import scala.concurrent._ - -import play.api.libs.iteratee._ - -import protocols._ - -class RequestResponseSpec extends WordSpec with Matchers { - - import SimpleMessage._ - - implicit val duration = Duration(5, SECONDS) - - def client(portNumber: Int)(implicit system: ActorSystem) = Client.randomRouting("localhost", portNumber, 16, "Worker", SimpleMessage.stages, 5 seconds, SimpleServerHandler)(system) - - def server(portNumber: Int)(implicit system: ActorSystem) = { - val s = Server(portNumber, SimpleServerHandler, stages = SimpleMessage.stages)(system) - Thread.sleep(100) - s - } - - "A client" should { - "be able to request a response from a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val action = c ? 
SimpleCommand(PING_PONG, "") - val result = action.run - - result.isSuccess should equal(true) - } - - "be able to requests multiple requests from a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val numberOfRequests = 20 * 1000 - - val action = Task.sequenceSuccesses(List.fill(numberOfRequests)(c ? SimpleCommand(PING_PONG, ""))) - val result = action.run - - result.get.length should equal(numberOfRequests) - result.isSuccess should equal(true) - } - - "be able to receive responses in correct order" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val numberOfRequests = 90 * 1000 - - val items = List.range(0, numberOfRequests).map(_.toString) - val action = Task.sequenceSuccesses(items.map(x ⇒ (c ? SimpleCommand(ECHO, x)))) - val result = action.run.get - - result.map(_.payload) should equal(items) - } - } -} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/ServerClientSpec.scala b/src/test/scala/nl/gideondk/sentinel/ServerClientSpec.scala new file mode 100644 index 0000000..ebeef0d --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/ServerClientSpec.scala @@ -0,0 +1,39 @@ +package nl.gideondk.sentinel + +import akka.actor.ActorSystem +import akka.stream.{ ActorMaterializer, ClosedShape, OverflowStrategy } +import akka.stream.scaladsl.{ GraphDSL, RunnableGraph, Sink, Source } +import nl.gideondk.sentinel.client.ClientStage.NoConnectionsAvailableException +import nl.gideondk.sentinel.client.{ Client, ClientStage, Host } +import nl.gideondk.sentinel.pipeline.Processor +import nl.gideondk.sentinel.protocol._ +import nl.gideondk.sentinel.server.Server + +import scala.concurrent.{ Await, Promise, duration } +import duration._ +import scala.util.{ Failure, Success, Try } + +class ServerClientSpec extends SentinelSpec(ActorSystem()) { + "a Server 
and Client" should { + "correctly handle asymmetrical message types in a client, server situation" in { + import nl.gideondk.sentinel.protocol.SimpleMessage._ + + val port = TestHelpers.portNumber.incrementAndGet() + implicit val materializer = ActorMaterializer() + + type Context = Promise[Event[SimpleMessageFormat]] + + val server = Server("localhost", port, SimpleServerHandler, SimpleMessage.protocol.reversed) + val client = Client(Source.single(ClientStage.HostUp(Host("localhost", port))), SimpleHandler, false, OverflowStrategy.backpressure, SimpleMessage.protocol) + + val pingCommand = SimpleCommand(PING_PONG, "") + val generateNumbersCommand = SimpleCommand(GENERATE_NUMBERS, "1024") + val sendStream = Source.single(SimpleCommand(TOTAL_CHUNK_SIZE, "")) ++ Source(List.fill(1024)(SimpleStreamChunk("A"))) ++ Source.single(SimpleStreamChunk("")) + + Await.result(client.ask(pingCommand), 5 seconds) shouldBe SimpleReply("PONG") + Await.result(client.sendStream(sendStream), 5 seconds) shouldBe SimpleReply("1024") + Await.result(client.ask(pingCommand), 5 seconds) shouldBe SimpleReply("PONG") + Await.result(client.askStream(generateNumbersCommand).flatMap(x ⇒ x.runWith(Sink.seq)), 5 seconds) shouldBe (for (i ← 0 until 1024) yield (SimpleStreamChunk(i.toString))) + } + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/ServerRequestSpec.scala b/src/test/scala/nl/gideondk/sentinel/ServerRequestSpec.scala deleted file mode 100644 index ee9edb8..0000000 --- a/src/test/scala/nl/gideondk/sentinel/ServerRequestSpec.scala +++ /dev/null @@ -1,101 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.ExecutionContext.Implicits.global - -import org.scalatest.WordSpec -import org.scalatest.matchers.ShouldMatchers - -import scalaz._ -import Scalaz._ - -import akka.actor._ -import akka.routing._ -import scala.concurrent.duration._ - -import protocols._ -import akka.util.Timeout - -class ServerRequestSpec extends WordSpec with ShouldMatchers 
{ - - import SimpleMessage._ - - implicit val duration = Duration(5, SECONDS) - implicit val timeout = Timeout(Duration(5, SECONDS)) - - val numberOfConnections = 16 - - def client(portNumber: Int)(implicit system: ActorSystem) = Client.randomRouting("localhost", portNumber, numberOfConnections, "Worker", SimpleMessage.stages, 5 seconds, SimpleServerHandler)(system) - - def server(portNumber: Int)(implicit system: ActorSystem) = { - val s = Server(portNumber, SimpleServerHandler, stages = SimpleMessage.stages)(system) - Thread.sleep(100) - s - } - - "A server" should { - "be able to send a request to a client" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - Thread.sleep(500) - - val action = (s ? SimpleCommand(PING_PONG, "")) - val result = action.copoint - - result should equal(SimpleReply("PONG")) - } - - "be able to send a request to a all unique connected hosts" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - - val numberOfClients = 5 - List.fill(numberOfClients)(client(portNumber)) - - Thread.sleep(500) - - val action = (s ?* SimpleCommand(PING_PONG, "")) - val result = action.copoint - - result.length should equal(1) - } - - "be able to send a request to a all connected clients" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - - val numberOfClients = 5 - List.fill(numberOfClients)(client(portNumber)) - - Thread.sleep(500) - - val action = (s ?** SimpleCommand(PING_PONG, "")) - val result = action.copoint - - result.length should equal(numberOfClients * numberOfConnections) - } - - "be able to retrieve the correct number of connected sockets" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - - val numberOfClients = 5 - val clients = List.fill(numberOfClients)(client(portNumber)) - - 
Thread.sleep(500) - - val connectedSockets = (s connectedSockets).copoint - connectedSockets should equal(numberOfClients * numberOfConnections) - - val connectedHosts = (s connectedHosts).copoint - connectedHosts should equal(1) - - val toBeKilledActors = clients.splitAt(3)._1.map(_.actor) - toBeKilledActors.foreach(x ⇒ x ! PoisonPill) - Thread.sleep(500) - - val stillConnectedSockets = (s connectedSockets).copoint - stillConnectedSockets should equal(2 * numberOfConnections) - } - } -} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/StreamingSpec.scala b/src/test/scala/nl/gideondk/sentinel/StreamingSpec.scala deleted file mode 100644 index 587d611..0000000 --- a/src/test/scala/nl/gideondk/sentinel/StreamingSpec.scala +++ /dev/null @@ -1,98 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.ExecutionContext.Implicits.global - -import org.scalatest.WordSpec -import org.scalatest.matchers.ShouldMatchers - -import scalaz._ -import Scalaz._ - -import akka.actor._ -import akka.routing._ -import scala.concurrent.duration._ -import scala.concurrent._ - -import play.api.libs.iteratee._ - -import protocols._ - -class StreamingSpec extends WordSpec with ShouldMatchers { - - import SimpleMessage._ - - implicit val duration = Duration(5, SECONDS) - - def client(portNumber: Int)(implicit system: ActorSystem) = Client.randomRouting("localhost", portNumber, 2, "Worker", SimpleMessage.stages, 5 seconds, SimpleServerHandler)(system) - - def server(portNumber: Int)(implicit system: ActorSystem) = { - val s = Server(portNumber, SimpleServerHandler, stages = SimpleMessage.stages)(system) - Thread.sleep(100) - s - } - - "A client" should { - "be able to send a stream to a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val count = 500 - val chunks = List.fill(count)(SimpleStreamChunk("ABCDEF")) ++ List(SimpleStreamChunk("")) - val action = c 
?<<- (SimpleCommand(TOTAL_CHUNK_SIZE, ""), Enumerator(chunks: _*)) - - val localLength = chunks.foldLeft(0)((b, a) ⇒ b + a.payload.length) - - val result = action.run - - result.isSuccess should equal(true) - result.get.payload.toInt should equal(localLength) - } - - "be able to receive streams from a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val count = 500 - val action = c ?->> SimpleCommand(GENERATE_NUMBERS, count.toString) - - val stream = action.copoint - val result = Await.result(stream |>>> Iteratee.getChunks, 5 seconds) - - result.length should equal(count) - } - - "be able to receive multiple streams simultaneously from a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val count = 500 - val numberOfActions = 8 - val actions = Task.sequenceSuccesses(List.fill(numberOfActions)((c ?->> SimpleCommand(GENERATE_NUMBERS, count.toString)).flatMap(x ⇒ Task(x |>>> Iteratee.getChunks)))) - - val result = actions.map(_.flatten).copoint - - result.length should equal(count * numberOfActions) - } - - "be able to receive send streams simultaneously to a server" in new TestKitSpec { - val portNumber = TestHelpers.portNumber.getAndIncrement() - val s = server(portNumber) - val c = client(portNumber) - - val count = 500 - val chunks = List.fill(count)(SimpleStreamChunk("ABCDEF")) ++ List(SimpleStreamChunk("")) - val action = c ?<<- (SimpleCommand(TOTAL_CHUNK_SIZE, ""), Enumerator(chunks: _*)) - - val numberOfActions = 8 - val actions = Task.sequenceSuccesses(List.fill(numberOfActions)(c ?<<- (SimpleCommand(TOTAL_CHUNK_SIZE, ""), Enumerator(chunks: _*)))) - - val localLength = chunks.foldLeft(0)((b, a) ⇒ b + a.payload.length) - val result = actions.copoint - - result.map(_.payload.toInt).sum should equal(localLength * numberOfActions) - } - } -} \ No newline at end 
of file diff --git a/src/test/scala/nl/gideondk/sentinel/TaskSpec.scala b/src/test/scala/nl/gideondk/sentinel/TaskSpec.scala deleted file mode 100644 index 199854b..0000000 --- a/src/test/scala/nl/gideondk/sentinel/TaskSpec.scala +++ /dev/null @@ -1,51 +0,0 @@ -package nl.gideondk.sentinel - -import scala.concurrent.ExecutionContext.Implicits.global -import scala.concurrent.Future -import scala.concurrent.duration.{ Duration, SECONDS } - -import org.scalatest._ - -import org.scalatest.BeforeAndAfterAll -import org.scalatest.WordSpec -import org.scalatest.matchers.ShouldMatchers - -import Task.taskComonadInstance -import scalaz.Scalaz._ - -class TaskSpec extends WordSpec with ShouldMatchers { - implicit val timeout = Duration(10, SECONDS) - - "A Task" should { - "be able to be run correctly" in { - val task = Task(Future(1)) - task.copoint should equal(1) - } - - "be able to be sequenced correctly" in { - val tasks = Task.sequence((for (i ← 0 to 9) yield i.point[Task]).toList) - tasks.copoint.length should equal(10) - } - - "should short circuit in case of a sequenced failure" in { - val s1 = 1.point[Task] - val s2 = 2.point[Task] - val f1: Task[Int] = Task(Future.failed(new Exception(""))) - - val tasks = Task.sequence(List(s1, f1, s2)) - tasks.run.isFailure - } - - "should only return successes when sequenced for successes" in { - val s1 = 1.point[Task] - val s2 = 2.point[Task] - val f1: Task[Int] = Task(Future.failed(new Exception(""))) - - val f: Task[Int] ⇒ String = ((t: Task[Int]) ⇒ t.copoint + "123") - s1.cobind(f) - - val tasks = Task.sequenceSuccesses(List(s1, f1, s2)) - tasks.run.get.length == 2 - } - } -} diff --git a/src/test/scala/nl/gideondk/sentinel/TestHelpers.scala b/src/test/scala/nl/gideondk/sentinel/TestHelpers.scala index a38b037..5415906 100644 --- a/src/test/scala/nl/gideondk/sentinel/TestHelpers.scala +++ b/src/test/scala/nl/gideondk/sentinel/TestHelpers.scala @@ -1,77 +1,41 @@ package nl.gideondk.sentinel -import scala.concurrent._ -import 
scala.concurrent.ExecutionContext.Implicits.global - -import scala.util.Try - -import org.scalatest.BeforeAndAfterAll -import org.scalatest.WordSpec -import org.scalatest.matchers.ShouldMatchers - -import akka.io.{ LengthFieldFrame, PipelineContext, SymmetricPipePair, SymmetricPipelineStage } -import akka.routing.RoundRobinRouter -import akka.util.ByteString - -import Task._ - -import scalaz._ -import Scalaz._ +import java.util.concurrent.TimeUnit +import java.util.concurrent.atomic.AtomicInteger -import akka.actor._ +import akka.actor.ActorSystem +import akka.event.{ Logging, LoggingAdapter } import akka.testkit._ -import scala.concurrent.duration._ -import scala.concurrent._ +import org.scalatest.concurrent.ScalaFutures +import org.scalatest.time.Span +import org.scalatest.{ BeforeAndAfterAll, Matchers, WordSpecLike } -import java.util.concurrent.atomic.AtomicInteger +import scala.concurrent.duration.Duration +import scala.concurrent.{ Await, Future } +import scala.language.postfixOps -import protocols._ +abstract class SentinelSpec(_system: ActorSystem) + extends TestKit(_system) with WordSpecLike with Matchers with BeforeAndAfterAll with ScalaFutures { -import java.net.InetSocketAddress + implicit val patience = PatienceConfig(testKitSettings.DefaultTimeout.duration, Span(1500, org.scalatest.time.Millis)) + override val invokeBeforeAllAndAfterAllEvenIfNoTestsAreExpected = true + implicit val ec = _system.dispatcher -abstract class TestKitSpec extends TestKit(ActorSystem()) - with WordSpec - with ShouldMatchers - with BeforeAndAfterAll - with ImplicitSender { - override def afterAll = { - system.shutdown() - } -} + val log: LoggingAdapter = Logging(system, this.getClass) -object TestHelpers { - val portNumber = new AtomicInteger(10500) -} - -object BenchmarkHelpers { - def timed(desc: String, n: Int)(benchmark: ⇒ Unit) = { - println("* " + desc) - val t = System.currentTimeMillis - benchmark - val d = System.currentTimeMillis - t - - println("* - number of 
ops/s: " + n / (d / 1000.0) + "\n") + override protected def afterAll(): Unit = { + super.afterAll() + TestKit.shutdownActorSystem(system) } - def throughput(desc: String, size: Double, n: Int)(benchmark: ⇒ Unit) = { - println("* " + desc) + def benchmark[A](f: Future[A], numberOfItems: Int, waitFor: Duration = Duration(10, TimeUnit.SECONDS)): Unit = { val t = System.currentTimeMillis - benchmark + Await.result(f, waitFor) val d = System.currentTimeMillis - t - - val totalSize = n * size - println("* - number of mb/s: " + totalSize / (d / 1000.0) + "\n") + println("Number of ops/s: " + numberOfItems / (d / 1000.0) + "\n") } } -object LargerPayloadTestHelper { - def randomBSForSize(size: Int) = { - implicit val be = java.nio.ByteOrder.BIG_ENDIAN - val stringB = new StringBuilder(size) - val paddingString = "abcdefghijklmnopqrs" - - while (stringB.length() + paddingString.length() < size) stringB.append(paddingString) - - ByteString(stringB.toString().getBytes()) - } -} \ No newline at end of file +object TestHelpers { + val portNumber = new AtomicInteger(10500) +} diff --git a/src/test/scala/nl/gideondk/sentinel/protocol/SimpleMessage.scala b/src/test/scala/nl/gideondk/sentinel/protocol/SimpleMessage.scala new file mode 100644 index 0000000..42b2447 --- /dev/null +++ b/src/test/scala/nl/gideondk/sentinel/protocol/SimpleMessage.scala @@ -0,0 +1,101 @@ +package nl.gideondk.sentinel.protocol + +import akka.stream.Materializer +import akka.stream.scaladsl.{ BidiFlow, Framing, Sink, Source } +import akka.util.{ ByteString, ByteStringBuilder } +import nl.gideondk.sentinel.pipeline.Resolver + +import scala.concurrent.ExecutionContext.Implicits.global +import scala.concurrent.Future + +sealed trait SimpleMessageFormat { + def payload: String +} + +case class SimpleCommand(cmd: Int, payload: String) extends SimpleMessageFormat // 1 + + +case class SimpleReply(payload: String) extends SimpleMessageFormat // 2 + + +case class SimpleStreamChunk(payload: String) extends 
SimpleMessageFormat // 3 + + +case class SimpleError(payload: String) extends SimpleMessageFormat // 4 + +object SimpleMessage { + val PING_PONG = 1 + val TOTAL_CHUNK_SIZE = 2 + val GENERATE_NUMBERS = 3 + val CHUNK_LENGTH = 4 + val ECHO = 5 + + implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN + + def deserialize(bs: ByteString): SimpleMessageFormat = { + val iter = bs.iterator + iter.getByte.toInt match { + case 1 ⇒ + SimpleCommand(iter.getInt, new String(iter.toByteString.toArray)) + case 2 ⇒ + SimpleReply(new String(iter.toByteString.toArray)) + case 3 ⇒ + SimpleStreamChunk(new String(iter.toByteString.toArray)) + case 4 ⇒ + SimpleError(new String(iter.toByteString.toArray)) + } + } + + def serialize(m: SimpleMessageFormat): ByteString = { + val bsb = new ByteStringBuilder() + m match { + case x: SimpleCommand ⇒ + bsb.putByte(1.toByte) + bsb.putInt(x.cmd) + bsb.putBytes(x.payload.getBytes) + case x: SimpleReply ⇒ + bsb.putByte(2.toByte) + bsb.putBytes(x.payload.getBytes) + case x: SimpleStreamChunk ⇒ + bsb.putByte(3.toByte) + bsb.putBytes(x.payload.getBytes) + case x: SimpleError ⇒ + bsb.putByte(4.toByte) + bsb.putBytes(x.payload.getBytes) + case _ ⇒ + } + bsb.result + } + + val flow = BidiFlow.fromFunctions(serialize, deserialize) + + def protocol = flow.atop(Framing.simpleFramingProtocol(1024)) +} + +import nl.gideondk.sentinel.protocol.SimpleMessage._ + +object SimpleHandler extends Resolver[SimpleMessageFormat] { + def process(implicit mat: Materializer): PartialFunction[SimpleMessageFormat, Action] = { + case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream + case x: SimpleError ⇒ ConsumerAction.AcceptError + case x: SimpleReply ⇒ ConsumerAction.AcceptSignal + case SimpleCommand(PING_PONG, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply("PONG")) } + case x ⇒ println("Unhandled: " + x); ConsumerAction.Ignore + } +} + +object SimpleServerHandler extends 
Resolver[SimpleMessageFormat] { + def process(implicit mat: Materializer): PartialFunction[SimpleMessageFormat, Action] = { + case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream + case SimpleCommand(PING_PONG, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply("PONG")) } + case SimpleCommand(TOTAL_CHUNK_SIZE, payload) ⇒ ProducerAction.ConsumeStream { x: Source[SimpleStreamChunk, Any] ⇒ + x.runWith(Sink.fold[Int, SimpleMessageFormat](0) { (b, a) ⇒ b + a.payload.length }).map(x ⇒ SimpleReply(x.toString)) + } + case SimpleCommand(GENERATE_NUMBERS, payload) ⇒ ProducerAction.ProduceStream { x: SimpleCommand ⇒ + val count = payload.toInt + Future(Source(List.range(0, count)).map(x ⇒ SimpleStreamChunk(x.toString)) ++ Source.single(SimpleStreamChunk(""))) + } + case SimpleCommand(ECHO, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply(x.payload)) } + case x ⇒ println("Unhandled: " + x); ConsumerAction.Ignore + } +} \ No newline at end of file diff --git a/src/test/scala/nl/gideondk/sentinel/protocols/SimpleMessage.scala b/src/test/scala/nl/gideondk/sentinel/protocols/SimpleMessage.scala deleted file mode 100644 index aa62590..0000000 --- a/src/test/scala/nl/gideondk/sentinel/protocols/SimpleMessage.scala +++ /dev/null @@ -1,103 +0,0 @@ -package nl.gideondk.sentinel.protocols - -import scala.concurrent._ -import scala.concurrent.ExecutionContext.Implicits.global - -import akka.io._ -import akka.util.{ ByteString, ByteStringBuilder } - -import nl.gideondk.sentinel._ -import play.api.libs.iteratee._ - -trait SimpleMessageFormat { - def payload: String -} - -case class SimpleCommand(cmd: Int, payload: String) extends SimpleMessageFormat // 1 -case class SimpleReply(payload: String) extends SimpleMessageFormat // 2 -case class SimpleStreamChunk(payload: String) extends SimpleMessageFormat // 3 -case class SimpleError(payload: String) extends SimpleMessageFormat // 4 - -class 
PingPongMessageStage extends SymmetricPipelineStage[PipelineContext, SimpleMessageFormat, ByteString] { - override def apply(ctx: PipelineContext) = new SymmetricPipePair[SimpleMessageFormat, ByteString] { - implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN - - override val commandPipeline = { - msg: SimpleMessageFormat ⇒ - { - val bsb = new ByteStringBuilder() - msg match { - case x: SimpleCommand ⇒ - bsb.putByte(1.toByte) - bsb.putInt(x.cmd) - bsb.putBytes(x.payload.getBytes) - case x: SimpleReply ⇒ - bsb.putByte(2.toByte) - bsb.putBytes(x.payload.getBytes) - case x: SimpleStreamChunk ⇒ - bsb.putByte(3.toByte) - bsb.putBytes(x.payload.getBytes) - case x: SimpleError ⇒ - bsb.putByte(4.toByte) - bsb.putBytes(x.payload.getBytes) - case _ ⇒ - } - Seq(Right(bsb.result)) - } - - } - - override val eventPipeline = { - bs: ByteString ⇒ - val iter = bs.iterator - iter.getByte.toInt match { - case 1 ⇒ - Seq(Left(SimpleCommand(iter.getInt, new String(iter.toByteString.toArray)))) - case 2 ⇒ - Seq(Left(SimpleReply(new String(iter.toByteString.toArray)))) - case 3 ⇒ - Seq(Left(SimpleStreamChunk(new String(iter.toByteString.toArray)))) - case 4 ⇒ - Seq(Left(SimpleError(new String(iter.toByteString.toArray)))) - } - - } - } -} - -object SimpleMessage { - val stages = new PingPongMessageStage >> new LengthFieldFrame(1000) - - val PING_PONG = 1 - val TOTAL_CHUNK_SIZE = 2 - val GENERATE_NUMBERS = 3 - val CHUNK_LENGTH = 4 - val ECHO = 5 -} - -import SimpleMessage._ -trait DefaultSimpleMessageHandler extends Resolver[SimpleMessageFormat, SimpleMessageFormat] { - def process = { - case SimpleStreamChunk(x) ⇒ if (x.length > 0) ConsumerAction.ConsumeStreamChunk else ConsumerAction.EndStream - case x: SimpleError ⇒ ConsumerAction.AcceptError - case x: SimpleReply ⇒ ConsumerAction.AcceptSignal - } -} - -object SimpleClientHandler extends DefaultSimpleMessageHandler - -object SimpleServerHandler extends DefaultSimpleMessageHandler { - - override def process = super.process orElse { - 
case SimpleCommand(PING_PONG, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply("PONG")) } - case SimpleCommand(TOTAL_CHUNK_SIZE, payload) ⇒ ProducerAction.ConsumeStream { x: SimpleCommand ⇒ - s: Enumerator[SimpleStreamChunk] ⇒ - s |>>> Iteratee.fold(0) { (b, a) ⇒ b + a.payload.length } map (x ⇒ SimpleReply(x.toString)) - } - case SimpleCommand(GENERATE_NUMBERS, payload) ⇒ ProducerAction.ProduceStream { x: SimpleCommand ⇒ - val count = payload.toInt - Future((Enumerator(List.range(0, count): _*) &> Enumeratee.map(x ⇒ SimpleStreamChunk(x.toString))) >>> Enumerator(SimpleStreamChunk(""))) - } - case SimpleCommand(ECHO, payload) ⇒ ProducerAction.Signal { x: SimpleCommand ⇒ Future(SimpleReply(x.payload)) } - } -} \ No newline at end of file
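For readers skimming the new `protocol/SimpleMessage.scala` above: `serialize`/`deserialize` write a leading type byte (1 = `SimpleCommand`, 2 = `SimpleReply`, 3 = `SimpleStreamChunk`, 4 = `SimpleError`), a 4-byte big-endian command id for commands, and then the raw payload bytes; `Framing.simpleFramingProtocol(1024)` additionally length-prefixes each frame on the socket. A rough, dependency-free sketch of the command layout (plain `java.nio`, hypothetical `WireFormatSketch` name — an illustration, not the project's Akka code):

```scala
import java.nio.ByteBuffer

object WireFormatSketch {
  // Encode a command in the same shape the test protocol uses:
  // type byte 1, 4-byte big-endian command id, then the UTF-8 payload.
  def encodeCommand(cmd: Int, payload: String): Array[Byte] = {
    val body = payload.getBytes("UTF-8")
    val buf = ByteBuffer.allocate(1 + 4 + body.length)
    buf.put(1.toByte).putInt(cmd).put(body)
    buf.array()
  }

  // Decode back into (typeByte, commandId, payload).
  def decode(bytes: Array[Byte]): (Int, Int, String) = {
    val buf = ByteBuffer.wrap(bytes)
    val tpe = buf.get().toInt
    val cmd = buf.getInt()
    val rest = new Array[Byte](buf.remaining())
    buf.get(rest)
    (tpe, cmd, new String(rest, "UTF-8"))
  }
}
```

`ByteBuffer` defaults to big-endian, matching the `java.nio.ByteOrder.BIG_ENDIAN` declared in `SimpleMessage`; the length-prefix framing is deliberately left out here, since the Akka `Framing` stage handles it separately from message serialization.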