
Reactive Services, Reactive Manifesto and Microservices


Many disciplines of software development have come to the same conclusion: they are building systems that react to modern demands on services. Reactive services live up to the Reactive Manifesto. Reactive services are built to be robust, resilient, and flexible, and are written with modern hardware, virtualization, rich web clients, and mobile clients in mind.

The Reactive Manifesto outlines qualities of Reactive Systems based on four principles: Responsive, Resilient, Elastic and Message Driven. 

Responsiveness means the service should respond in a timely manner and never let clients or upstream services hang. A system failure should not cause a chain reaction of failures. A failure of a downstream system may cause a degraded response, but a response nonetheless.

Resilience goes hand in hand with responsiveness: the system should respond in a timely fashion even in the face of failures and errors. It can respond because it can detect that an async response is not coming back in time and serve up a degraded response (circuit breaker). It may be able to respond in spite of failure because it can use a replicated version of a failed downstream node. Failure and recovery are built into the system. Monitoring and spinning up new instances to aid in recovery may be delegated to another highly available resource. A key component of resilience is the ability to monitor known good nodes and to perform service discovery to find alternative upstream and downstream services.

Elasticity works with resilience. The ability to spin up new services, and for downstream and upstream services and clients to find the new instances, is vital to both the resilience and the elasticity of the system. Reactive Systems can react to changes in load by spinning up more services to share the load. Imagine a set of services for a professional soccer game that delivers real-time stats. During games, you may need to spin up many services; outside of game times, you may need just a few. A reactive system is a system that can increase and decrease resources based on demand. Just like with resilience, service discovery aids with elasticity, as it provides a mechanism for upstream and downstream services and clients to discover new nodes so the load can be spread across the services.

Message Driven: Reactive Systems rely on asynchronous message passing. This establishes boundaries between services (in-proc and out-of-proc) which allow for loose coupling (publish/subscribe, async streams, or async calls), isolation (one failure does not ripple through to upstream services and clients), and improved, responsive error handling. Messaging allows one to control throughput (re-route, spin up more services) by applying back-pressure and using back-pressure events to trigger changes that shape traffic through the queues. Messaging also allows for non-blocking handling of responses.
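To make the back-pressure idea concrete, here is a minimal, in-process sketch in plain Java (no particular messaging library; the class and method names are made up for illustration). A bounded queue forms the boundary between a producer and a consumer service, and a full queue becomes the back-pressure signal that can be used to shed load or trigger scaling.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BackPressureSketch {

    // The bounded queue is the message boundary between two services (in-proc here).
    private final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(1000);

    /* Producer side: a full queue signals back-pressure instead of blocking forever. */
    public boolean offerMessage(String message) throws InterruptedException {
        final boolean accepted = inbox.offer(message, 100, TimeUnit.MILLISECONDS);
        if (!accepted) {
            // Back-pressure event: shed load, re-route, or trigger scaling here.
            System.err.println("Back-pressure: inbox full");
        }
        return accepted;
    }

    /* Consumer side: drains messages without ever blocking the producer. */
    public String poll() throws InterruptedException {
        return inbox.poll(100, TimeUnit.MILLISECONDS);
    }
}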

A well-written microservice is more often than not going to apply the principles of the Reactive Manifesto. One could argue that a microservices architecture is just an extension of the Reactive Manifesto geared towards web services.

Reactive programming and functional reactive programming are related subjects. A system can be a reactive system without using a reactive programming model. Reactive programming is often used to coordinate asynchronous calls to multiple services as well as events and streams from clients and other systems.

An example: a client calls Service Z. Service Z calls Service A and Service B, but sends back only the combined results of Service A and Service C. The results of Service B are used to call Service C. Thus Z must call A and B, take the results of B and call C, then return the combined A/C result back to the client. All of these calls must be asynchronous, non-blocking calls, but we should be able to handle errors for A, B, or C, and handle timeouts so that the client does not hang when Z calls downstream services. The orchestration of calling many services requires some sort of reactive programming coordination. Frameworks like RxJava, RxJS, etc. were conceived to provide an object-reactive programming model to better work in an environment where there are events, streams, and asynchronous calls.
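Here is a minimal sketch of that A/B/C orchestration using Java 8's CompletableFuture rather than any particular framework (the callServiceX methods are hypothetical stand-ins for async service calls):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class OrchestrationSketch {

    public static void main(String... args) throws Exception {
        // Call A and B asynchronously; neither blocks the caller.
        final CompletableFuture<String> resultA =
                CompletableFuture.supplyAsync(OrchestrationSketch::callServiceA);
        final CompletableFuture<String> resultC =
                CompletableFuture.supplyAsync(OrchestrationSketch::callServiceB)
                        .thenApplyAsync(OrchestrationSketch::callServiceC); // B's result feeds C

        // Combine A and C into the single response for the client. The error handler
        // serves a degraded response (circuit-breaker style), and the timeout on get()
        // keeps the client from hanging on slow downstream services.
        final String combined = resultA
                .thenCombine(resultC, (a, c) -> a + "/" + c)
                .exceptionally(ex -> "degraded response")
                .get(1, TimeUnit.SECONDS);

        System.out.println(combined);
    }

    static String callServiceA() { return "A"; }  // stand-in for a remote call
    static String callServiceB() { return "B"; }  // stand-in for a remote call
    static String callServiceC(String b) { return "C(" + b + ")"; }
}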

QBit also provides a Reactor class to coordinate asynchronous calls using reactive programming, and it uses Java 8 lambda expressions to aid in this endeavor. QBit, the Java microservice lib, is a Java-first reactive programming environment that focuses on events, service discovery, and microservices. It implements the active object pattern, similar to Akka typed actors, in which a service lives behind a set of queues to handle events, method calls, responses, callbacks, etc. These services have location transparency, as they can be in-proc or on another server node via WebSocket or the event bus. The resilience comes from replication, which is readily possible with the service discovery and the event bus. The service discovery mechanism takes the health of the nodes into consideration, so unhealthy nodes are taken out of the service pools. QBit implements all the important parts of what it takes to build a reactive system of microservices in Java 8. QBit provides a natural environment to do reactive system development in Java. QBit is a reactive Java lib as well as a microservice lib.










Getting Consul to run on the Travis CI server using Gradle so we can run Consul integration tests

We were able to get our integration tests for consul to run on the Travis CI server.

We use consul as a service discovery system for our Microservice lib (QBit microservices). This allows us to get a list of healthy service peers that are up and able to handle streams of calls.
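For illustration, here is how a client could fetch that list of healthy peers straight from Consul's HTTP API (the /v1/health/service endpoint with the passing flag). This is a hedged sketch, assuming a local Consul agent on port 8500 and a service registered under the made-up name my-service:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConsulHealthLookup {

    public static void main(String... args) throws Exception {
        // ?passing filters the result down to instances whose health checks pass.
        final URL url = new URL("http://localhost:8500/v1/health/service/my-service?passing");
        final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            final StringBuilder json = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                json.append(line);
            }
            // The JSON array holds one entry per healthy node/service pair.
            System.out.println(json);
        }
    }
}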

The trick was to get consul running on our integration server; we use Travis CI.
First we define a new task and check whether the consul executable already exists.
Once we ensure consul exists, we execute it with the right command-line args.
We are using gradle to manage our build, as it seems to be the most flexible.
    def execFile = new File(project.rootDir, '/tmp/consul/bin/consul')

    def zipFile = new File(project.rootDir, '/tmp/consul/bin/consul.zip')
Then we need to find the right OS. (We only build on Linux and Mac.)
    def linuxDist = "https://dl.bintray.com/mitchellh/consul/0.5.0_linux_amd64.zip"
    def macDist = "https://dl.bintray.com/mitchellh/consul/0.5.0_darwin_amd64.zip"
    def zipURL = null
Create the parent folder to hold the zip and the bin.
    execFile.parentFile.mkdirs()

    if (execFile.parentFile.exists()) {
        println("${execFile.parentFile} created")
    }
Then we see if we can find the type of OS. We only support 64-bit Linux and Mac OS X, but as you can see, we could add more.
    if (System.getProperty("os.name").contains("Mac OS X")) {
        zipURL = macDist
        println("On mac")
    } else {
        zipURL = linuxDist
        println("On linux")

        def osArc = System.getProperty("sun.arch.data.model")
        def osName = System.getProperty("os.name")
        def osVersion = System.getProperty("os.version")

        println("os.arc Operating system architecture\t\t $osArc")
        println("os.name Operating system name\t\t $osName")
        println("os.version Operating system version\t\t $osVersion")
    }
There are lots of println(s) because who knows where someone might try to run this.
Copy the zip file from the URL:
    new URL(zipURL).withInputStream { i -> zipFile.withOutputStream { it << i } }


    for (int index = 0; index < 10; index++) {
        ant.sleep(seconds: 1)
        if (zipFile.exists()) {
            break
        }
        println("Waiting for download $zipURL")
    }
If the zip file exists, then unzip it, and change permissions so it is executable.
    if (zipFile.exists()) {

        println("${zipFile} ${zipFile.absoluteFile} ${zipFile.exists()} ${zipFile.size()}")
        println(execFile.parentFile)

        ant.unzip(src: zipFile, dest: execFile.parentFile)

        ant.exec(command: "/bin/sh -c \"chmod +x ${execFile}\"")
    } else {
        println("Unable to create file $zipFile from $zipURL")
    }
Lots of debugging info.
Once we download the zip, unpack it, and change permissions, we can execute consul.
    if (!execFile.exists()) {
        findItUnpackIt()
    }

    ant.exec(command: "/bin/sh -c \"${execFile} agent -server -bootstrap-expect 1 -data-dir /tmp/consul\"",
            spawn: true)

Pause for a bit to let consul run before we start our tests:
    for (int index = 0; index < 10; index++) {
        ant.sleep(seconds: 1)
        ant.echo(message: "Waiting for consul $index")
    }
The next trick was to wire this into the consul client subproject.
project('cluster:consul-client') {

    dependencies {
        compile project(":qbit:web:jetty")
    }

    task runConsul(type: RunConsul) << {
        println 'task'
    }

    test.dependsOn(runConsul)
    ...
}
Here is the full RunConsul task.
class RunConsul extends DefaultTask {

    def execFile = new File(project.rootDir, '/tmp/consul/bin/consul')

    def zipFile = new File(project.rootDir, '/tmp/consul/bin/consul.zip')

    def linuxDist = "https://dl.bintray.com/mitchellh/consul/0.5.0_linux_amd64.zip"
    def macDist = "https://dl.bintray.com/mitchellh/consul/0.5.0_darwin_amd64.zip"
    def zipURL = null

    def findItUnpackIt() {

        execFile.parentFile.mkdirs()

        if (execFile.parentFile.exists()) {
            println("${execFile.parentFile} created")
        }

        if (System.getProperty("os.name").contains("Mac OS X")) {
            zipURL = macDist
            println("On mac")
        } else {
            zipURL = linuxDist
            println("On linux")

            def osArc = System.getProperty("sun.arch.data.model")
            def osName = System.getProperty("os.name")
            def osVersion = System.getProperty("os.version")

            println("os.arc Operating system architecture\t\t $osArc")
            println("os.name Operating system name\t\t $osName")
            println("os.version Operating system version\t\t $osVersion")
        }

        new URL(zipURL).withInputStream { i -> zipFile.withOutputStream { it << i } }

        for (int index = 0; index < 10; index++) {
            ant.sleep(seconds: 1)
            if (zipFile.exists()) {
                break
            }
            println("Waiting for download $zipURL")
        }

        if (zipFile.exists()) {

            println("${zipFile} ${zipFile.absoluteFile} ${zipFile.exists()} ${zipFile.size()}")
            println(execFile.parentFile)

            ant.unzip(src: zipFile, dest: execFile.parentFile)

            ant.exec(command: "/bin/sh -c \"chmod +x ${execFile}\"")
        } else {
            println("Unable to create file $zipFile from $zipURL")
        }
    }

    @TaskAction
    void runIt() {

        if (!execFile.exists()) {
            findItUnpackIt()
        }

        ant.exec(command: "/bin/sh -c \"${execFile} agent -server -bootstrap-expect 1 -data-dir /tmp/consul\"",
                spawn: true)

        for (int index = 0; index < 10; index++) {
            ant.sleep(seconds: 1)
            ant.echo(message: "Waiting for consul $index")
        }
    }
}

Using QBit to create Java RESTful microservices


QBit Restful Microservices

Before we delve into QBit restful services, let's cover what we get from gradle's application plugin. In order to be a microservice, a service needs to run in a standalone process or a related group of standalone processes.

Gradle application plugin

Building a standalone application with gradle is quite easy. You use the gradle application plug-in.

Gradle build using java and application plugin

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8
version = '1.0'
mainClassName = "io.advantageous.examples.Main"

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
}
To round out this example, let's create a simple Main java class.

Simple main Java to demonstrate Gradle application plugin

package io.advantageous.examples;

public class Main {

    public static void main(String... args) {
        System.out.println("Hello World!");
    }
}
The project structure is as follows:

Project structure

$ tree
.
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── restful-qbit.iml
├── settings.gradle
└── src
    └── main
        └── java
            └── io
                └── advantageous
                    └── examples
                        └── Main.java
We use a standard maven style structure.
To build this application we use the following commands:

Building our application

$ gradle clean build

Output

:clean UP-TO-DATE
:compileJava
:processResources UP-TO-DATE
:classes
:jar
:assemble
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:check UP-TO-DATE
:build

BUILD SUCCESSFUL

Total time: 2.474 secs
To run our application we use the following:

Running our application

$ gradle run

Output

:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:run
Hello World!

BUILD SUCCESSFUL

Total time: 2.202 secs
There are three more commands that we care about:
  • installDist Installs the application into a specified directory
  • distZip Creates ZIP archive including libs and start scripts
  • distTar Creates TAR archive including libs and start scripts
The application plug-in allows you to create scripts to start a process. These scripts work on all operating systems. Microservices run as standalone processes. The gradle application plug-in is a good fit for microservice development.
Let's use the application plugin to create a dist zip file.

Using gradle to create a distribution zip

$ gradle distZip
Let's see where gradle put the zip.

Using find to see where the zip went

$ find . -name "*.zip"
./build/distributions/restful-qbit-1.0.zip
Let's unzip to a directory.

Unzipping to an install directory

$ mkdir /opt/example
$ unzip ./build/distributions/restful-qbit-1.0.zip -d /opt/example/
Archive: ./build/distributions/restful-qbit-1.0.zip
creating: /opt/example/restful-qbit-1.0/
creating: /opt/example/restful-qbit-1.0/lib/
inflating: /opt/example/restful-qbit-1.0/lib/restful-qbit-1.0.jar
creating: /opt/example/restful-qbit-1.0/bin/
inflating: /opt/example/restful-qbit-1.0/bin/restful-qbit
inflating: /opt/example/restful-qbit-1.0/bin/restful-qbit.bat
Now we can run it from the install directory.

Running from install directory

$ /opt/example/restful-qbit-1.0/bin/restful-qbit
Hello World!
Contents of restful-qbit startup script.
$ cat /opt/example/restful-qbit-1.0/bin/restful-qbit
#!/usr/bin/env bash

##############################################################################
##
##  restful-qbit start up script for UN*X
##
##############################################################################

# Add default JVM options here. You can also use JAVA_OPTS and RESTFUL_QBIT_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""

APP_NAME="restful-qbit"
APP_BASE_NAME=`basename "$0"`

# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"

warn ( ) {
    echo "$*"
}

die ( ) {
    echo
    echo "$*"
    echo
    exit 1
}

# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
  CYGWIN* )
    cygwin=true
    ;;
  Darwin* )
    darwin=true
    ;;
  MINGW* )
    msys=true
    ;;
esac

# For Cygwin, ensure paths are in UNIX format before anything is touched.
if $cygwin ; then
    [ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
fi

# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
    ls=`ls -ld "$PRG"`
    link=`expr "$ls" : '.*-> \(.*\)$'`
    if expr "$link" : '/.*' > /dev/null; then
        PRG="$link"
    else
        PRG=`dirname "$PRG"`"/$link"
    fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/.." >&-
APP_HOME="`pwd -P`"
cd "$SAVED" >&-

CLASSPATH=$APP_HOME/lib/restful-qbit-1.0.jar

# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
    if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
        # IBM's JDK on AIX uses strange locations for the executables
        JAVACMD="$JAVA_HOME/jre/sh/java"
    else
        JAVACMD="$JAVA_HOME/bin/java"
    fi
    if [ ! -x "$JAVACMD" ] ; then
        die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
    fi
else
    JAVACMD="java"
    which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi

# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
    MAX_FD_LIMIT=`ulimit -H -n`
    if [ $? -eq 0 ] ; then
        if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
            MAX_FD="$MAX_FD_LIMIT"
        fi
        ulimit -n $MAX_FD
        if [ $? -ne 0 ] ; then
            warn "Could not set maximum file descriptor limit: $MAX_FD"
        fi
    else
        warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
    fi
fi

# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
    GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi

# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
    APP_HOME=`cygpath --path --mixed "$APP_HOME"`
    CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`

    # We build the pattern for arguments to be converted via cygpath
    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
    SEP=""
    for dir in $ROOTDIRSRAW ; do
        ROOTDIRS="$ROOTDIRS$SEP$dir"
        SEP="|"
    done
    OURCYGPATTERN="(^($ROOTDIRS))"
    # Add a user-defined pattern to the cygpath arguments
    if [ "$GRADLE_CYGPATTERN" != "" ] ; then
        OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
    fi
    # Now convert the arguments - kludge to limit ourselves to /bin/sh
    i=0
    for arg in "$@" ; do
        CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
        CHECK2=`echo "$arg"|egrep -c "^-"`    ### Determine if an option

        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then    ### Added a condition
            eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
        else
            eval `echo args$i`="\"$arg\""
        fi
        i=$((i+1))
    done
    case $i in
        (0) set -- ;;
        (1) set -- "$args0" ;;
        (2) set -- "$args0" "$args1" ;;
        (3) set -- "$args0" "$args1" "$args2" ;;
        (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
        (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
        (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
        (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
        (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
        (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
    esac
fi

# Split up the JVM_OPTS And RESTFUL_QBIT_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
    JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $RESTFUL_QBIT_OPTS


exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" io.advantageous.examples.Main "$@"

Creating a simple RestFul Microservice

Let's create a simple HTTP service that responds with pong when we send it a ping, as follows.

Service that responds to this curl command

$ curl http://localhost:9090/services/pongservice/ping
"pong"
First let's import the qbit lib. We are using the SNAPSHOT but hopefully by the time you read this the release will be available.
Add the following to your gradle build.

Adding QBit to your gradle build

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8
version = '1.0'
mainClassName = "io.advantageous.examples.Main"

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.7.3-SNAPSHOT'
}
Define a service as follows:

QBit service

package io.advantageous.examples;


import io.advantageous.qbit.annotation.RequestMapping;

@RequestMapping
public class PongService {


    @RequestMapping
    public String ping() {
        return "pong";
    }

}
The @RequestMapping annotation defines the service as one that responds to an HTTP call. If you do not specify the path, then the lower-case name of the class and the lower-case name of the method become the path. Thus PongService.ping() becomes /pongservice/ping. To bind this service to a port we use a service server. A service server is a server that hosts services like our pong service.
Change the Main class to use the ServiceServer as follows:

Using main class to bind in service to a port

package io.advantageous.examples;

import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class Main {

    public static void main(String... args) {
        final ServiceServer serviceServer = ServiceServerBuilder
                .serviceServerBuilder()
                .setPort(9090).build();
        serviceServer.initServices(new PongService());
        serviceServer.startServer();
    }
}
Notice we pass an instance of the PongService to the initServices method of the service server. If we want to change the root address from "services" to something else, we could do this:

Changing the root URI of the service server

final ServiceServer serviceServer = ServiceServerBuilder
        .serviceServerBuilder()
        .setUri("/main")
        .setPort(9090).build();
serviceServer.initServices(new PongService());
serviceServer.startServer();
Now we can call this using curl as follows:

Use the curl command to invoke service via /main/pongservice/ping

$ curl http://localhost:9090/main/pongservice/ping
"pong"
QBit uses builders to make it easy to integrate QBit with frameworks like Spring or Guice or to just use standalone.

Adding a service that takes request params

Taking request parameters

package io.advantageous.examples;


import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {

    @RequestMapping("/add")
    public int add(@RequestParam("a") int a,
                   @RequestParam("b") int b) {

        return a + b;
    }

}
Notice the above uses @RequestParam, which allows you to pull request params as arguments to the method. If we pass a URL like http://localhost:9090/main/my/service/add?a=1&b=2, QBit will use 1 for argument a and 2 for argument b.

Adding the new service to Main

package io.advantageous.examples;

import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class Main {

    public static void main(String... args) {
        final ServiceServer serviceServer = ServiceServerBuilder
                .serviceServerBuilder()
                .setUri("/main")
                .setPort(9090).build();
        serviceServer.initServices(
                new PongService(),
                new SimpleService());
        serviceServer.startServer();
    }
}
When we load this URL:
http://localhost:9090/main/my/service/add?a=1&b=2
We get this response.
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

3

Working with URI params

Many think that URLs without request parameters are more search engine friendly, or that you can list things under a context, which makes it more RESTful. This is open to debate. I don't care about the debate, but here is an example of using URI params.

Working with URI params example

package io.advantageous.examples;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {
    ...
    ...
    @RequestMapping("/add2/{a}/{b}")
    public int add2(@PathVariable("a") int a,
                    @PathVariable("b") int b) {

        return a + b;
    }

}
Now we can pass arguments which are part of the URL path. We do this by using the @PathVariable annotation. Thus, given the following URL:
http://localhost:9090/main/my/service/add2/1/4
The 1 correlates to the "a" argument of the method and the 4 correlates to the "b" argument.
We would get the following response when we load this URL.

Working with URI params output

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

5
You can mix and match URI params and request params.

Working with URI params and request params example

package io.advantageous.examples;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {
    ...
    ...
    @RequestMapping("/add3/{a}/")
    public int add3(@PathVariable("a") int a,
                    @RequestParam("b") int b) {

        return a + b;
    }

}
This allows us to mix URI params and request params as follows:
http://localhost:9090/main/my/service/add3/1?b=8
Now we get this response:

Working with URI params and request params example

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

9

Fuller RESTful example

Let's create a simple Employee / Department listing application.

RESTful Employee and Department listing

package io.advantageous.examples.employees;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;

@RequestMapping("/dir")
public class EmployeeDirectoryService {


    private final List<Department> departmentList = new ArrayList<>();


    @RequestMapping("/employee/{employeeId}/")
    public Employee listEmployee(@PathVariable("employeeId") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Find the employee in the department. */
        if (departmentOptional.isPresent()) {
            return departmentOptional.get().employeeList()
                    .stream().filter(employee -> employee.getId() == employeeId)
                    .findFirst().get();
        } else {
            return null;
        }
    }


    @RequestMapping("/department/{departmentId}/")
    public Department listDepartment(@PathVariable("departmentId") final long departmentId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* Return the department if found. */
        if (departmentOptional.isPresent()) {
            return departmentOptional.get();
        } else {
            return null;
        }
    }


    @RequestMapping(value = "/department/", method = RequestMethod.POST)
    public boolean addDepartment(@RequestParam("departmentId") final long departmentId,
                                 @RequestParam("name") final String name) {
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department " + departmentId + " already exists");
        }
        departmentList.add(new Department(departmentId, name));
        return true;
    }


    @RequestMapping(value = "/department/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@RequestParam("departmentId") final long departmentId,
                               final Employee employee) {

        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (!departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department not found");
        }


        final boolean alreadyExists = departmentOptional.get().employeeList().stream()
                .anyMatch(employeeItem -> employeeItem.getId() == employee.getId());

        if (alreadyExists) {
            throw new IllegalArgumentException("Employee with id already exists " + employee.getId());
        }
        departmentOptional.get().addEmployee(employee);
        return true;
    }

}
To add three departments Engineering, HR and Sales:

Add Engineering, HR and Sales department with REST

$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=1&name=Engineering"
$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=2&name=HR"
$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=3&name=Sales"
Now let's add some employees into those departments.

Add some employees to the departments

curl -H "Content-Type: application/json" -X POST /
-d '{"firstName":"Rick","lastName":"Hightower", "id": 1}' /
http://localhost:9090/main/dir/departme\
nt/employee/?departmentId=1

curl -H "Content-Type: application/json" -X POST /
-d '{"firstName":"Diana","lastName":"Hightower", "id": 2}' /
http://localhost:9090/main/dir/departm\
ent/employee/?departmentId=2

curl -H "Content-Type: application/json" -X POST /
-d '{"firstName":"Maya","lastName":"Hightower", "id": 3}' /
http://localhost:9090/main/dir/departme\
nt/employee/?departmentId=3

curl -H "Content-Type: application/json" -X POST /
-d '{"firstName":"Paul","lastName":"Hightower", "id": 4}' /
http://localhost:9090/main/dir/departmen\
t/employee/?departmentId=3
Now let's list the employees. We can get the employees with the following curl.

Listing employees with curl

$ curl http://localhost:9090/main/dir/employee/1
{"firstName":"Rick","lastName":"Hightower","id":1}

$ curl http://localhost:9090/main/dir/employee/2
{"firstName":"Diana","lastName":"Hightower","id":2}

$ curl http://localhost:9090/main/dir/employee/3
{"firstName":"Maya","lastName":"Hightower","id":3}

$ curl http://localhost:9090/main/dir/employee/4
{"firstName":"Paul","lastName":"Hightower","id":4}
Now we can list departments with our RESTful API:

Listing departments with curl

$ curl http://localhost:9090/main/dir/department/1
{"id":1,"employees":[{"firstName":"Rick","lastName":"Hightower","id":1}]}

$ curl http://localhost:9090/main/dir/department/2
{"id":2,"employees":[{"firstName":"Diana","lastName":"Hightower","id":2}]}

$ curl http://localhost:9090/main/dir/department/3
{
"id": 3,
"employees": [
{
"firstName": "Maya",
"lastName": "Hightower",
"id": 3
},
{
"firstName": "Paul",
"lastName": "Hightower",
"id": 4
}
]
}
Some feel that this is a search-engine-friendly RESTful interface, although it is not truly RESTful. A true RESTful interface would have hyperlinks, but let's leave that for another discussion lest we bring on a debate akin to vi vs. emacs or Scala vs. Groovy.

Some theory

Generally speaking, people prefer the following (subject to much debate); a sketch of the PUT convention follows the list:
  • POST to add to a resource
  • PUT to update a resource
  • GET to read a resource
  • DELETE to remove an item from a list
  • End a group in the singular form with a slash as in department/
  • Use the name or id in the URI path to address a resource department/1/ would address the department with id 1.
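To round out the list, here is a hedged sketch of the PUT convention in the same style as the other handlers in this article. The article only demonstrates GET, POST, and DELETE, so RequestMethod.PUT is an assumption here, and updateDepartment is a made-up method that reuses the Department.setName setter shown in the full listing below.

    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.PUT)
    public boolean updateDepartment(@PathVariable("departmentId") final long departmentId,
                                    @RequestParam("name") final String name) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* PUT updates the addressed resource in place. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().setName(name);
            return true;
        } else {
            return false;
        }
    }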

Adding DELETE verb

You can use any of the HTTP verbs. Typically, as mentioned before, you use the DELETE verb to delete a resource as follows:

Mapping methods to DELETE verb

    @RequestMapping(value = "/employee", method = RequestMethod.DELETE)
    public boolean removeEmployee(@RequestParam("id") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }


    @RequestMapping(value = "/department", method = RequestMethod.DELETE)
    public boolean removeDepartment(@RequestParam("id") final long departmentId) {

        return departmentList.removeIf(department -> departmentId == department.getId());
    }
Now let's delete somebody.

Using DELETE from command line

curl -H "Content-Type: application/json" -X DELETE  /
http://localhost:9090/main/dir/employee?id=3

Mixing and matching request params and path variables

You can mix and match @PathVariable and @RequestParam, which is a quite common case and one that QBit just started supporting in the last release.

Mixing and matching request params and path variables example

    @RequestMapping(value = "/department/{departmentId}/employee", method = RequestMethod.DELETE)
    public boolean removeEmployeeFromDepartment(
            @PathVariable("departmentId") final long departmentId,
            @RequestParam("id") final long employeeId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }
In this very common case, you can use @PathVariable to address the department resource and then ask for a specific employee to be deleted only from this department.

Using curl to delete an employee from a specific department

curl -H "Content-Type: application/json" -X DELETE  \
http://localhost:9090/main/dir/department/3/employee?id=4

Full examples:

Listing

$ tree
.
├── addDepartments.sh
├── addEmployees.sh
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── removeEmployee.sh
├── restful-qbit.iml
├── settings.gradle
├── showEmployees.sh
└── src
    ├── main
    │   └── java
    │       └── io
    │           └── advantageous
    │               └── examples
    │                   ├── Main.java
    │                   ├── PongService.java
    │                   ├── SimpleService.java
    │                   └── employees
    │                       ├── Department.java
    │                       ├── Employee.java
    │                       └── EmployeeDirectoryService.java
    └── test
        └── java
            └── io
                └── advantageous
                    └── examples
                        └── employees
                            └── EmployeeDirectoryServiceTest.java

Department.java

package io.advantageous.examples.employees;

import java.util.ArrayList;
import java.util.List;

public class Department {

    private String name;
    private final long id;
    private final List<Employee> employees = new ArrayList<>();

    public Department(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public void addEmployee(final Employee employee) {
        employees.add(employee);
    }


    public boolean removeEmployee(final long id) {
        return employees.removeIf(employee -> employee.getId() == id);
    }

    public List<Employee> employeeList() {
        return employees;
    }


    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Employee.java

package io.advantageous.examples.employees;

public class Employee {

    private String firstName;
    private String lastName;
    private final long id;

    public Employee(String firstName, String lastName, long id) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public long getId() {
        return id;
    }
}

EmployeeDirectoryService.java

package io.advantageous.examples.employees;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;

@RequestMapping("/dir")
public class EmployeeDirectoryService {


    private final List<Department> departmentList = new ArrayList<>();


    @RequestMapping("/employee/{employeeId}/")
    public Employee listEmployee(@PathVariable("employeeId") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Find the employee in the department. */
        if (departmentOptional.isPresent()) {
            return departmentOptional.get().employeeList()
                    .stream().filter(employee -> employee.getId() == employeeId)
                    .findFirst().get();
        } else {
            return null;
        }
    }


    @RequestMapping("/department/{departmentId}/")
    public Department listDepartment(@PathVariable("departmentId") final long departmentId) {

        return departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst().get();
    }


    @RequestMapping(value = "/department/", method = RequestMethod.POST)
    public boolean addDepartment(@RequestParam("departmentId") final long departmentId,
                                 @RequestParam("name") final String name) {
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department " + departmentId + " already exists");
        }
        departmentList.add(new Department(departmentId, name));
        return true;
    }


    @RequestMapping(value = "/department/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@RequestParam("departmentId") final long departmentId,
                               final Employee employee) {

        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (!departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department not found");
        }


        final boolean alreadyExists = departmentOptional.get().employeeList().stream()
                .anyMatch(employeeItem -> employeeItem.getId() == employee.getId());

        if (alreadyExists) {
            throw new IllegalArgumentException("Employee with id already exists " + employee.getId());
        }
        departmentOptional.get().addEmployee(employee);
        return true;
    }


    @RequestMapping(value = "/employee", method = RequestMethod.DELETE)
    public boolean removeEmployee(@RequestParam("id") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }


    @RequestMapping(value = "/department", method = RequestMethod.DELETE)
    public boolean removeDepartment(@RequestParam("id") final long departmentId) {

        return departmentList.removeIf(department -> departmentId == department.getId());
    }


    @RequestMapping(value = "/department/{departmentId}/employee", method = RequestMethod.DELETE)
    public boolean removeEmployeeFromDepartment(
            @PathVariable("departmentId") final long departmentId,
            @RequestParam("id") final long employeeId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }

}

PongService.java

packageio.advantageous.examples;


importio.advantageous.qbit.annotation.RequestMapping;

@RequestMapping
publicclassPongService {


@RequestMapping
publicStringping() {
return"pong";
}

}

SimpleService.java

packageio.advantageous.examples;


importio.advantageous.qbit.annotation.PathVariable;
importio.advantageous.qbit.annotation.RequestMapping;
importio.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
publicclassSimpleService {

@RequestMapping("/add")
publicintadd(@RequestParam("a") inta,
@RequestParam("b") intb) {

return a + b;
}

@RequestMapping("/add2/{a}/{b}")
publicintadd2( @PathVariable("a") inta,
@PathVariable("b") intb) {

return a + b;
}

@RequestMapping("/add3/{a}/")
publicintadd3( @PathVariable("a") inta,
@RequestParam("b") intb) {

return a + b;
}


}

Main.java

packageio.advantageous.examples;

importio.advantageous.examples.employees.EmployeeDirectoryService;
importio.advantageous.qbit.server.ServiceServer;
importio.advantageous.qbit.server.ServiceServerBuilder;

publicclassMain {

publicstaticvoidmain(String... args) {
finalServiceServer serviceServer =ServiceServerBuilder
.serviceServerBuilder()
.setUri("/main")
.setPort(9090).build();
serviceServer.initServices(
newPongService(),
newSimpleService(),
newEmployeeDirectoryService());
serviceServer.startServer();
}
}

Microservices Architecture: How much was it influenced by Mobile applications?

Many of the articles and talks about microservices architecture leave out an important concept. They mention the influencers of microservices, but leave out one of the major influencers of microservices architecture: the mobile web and the proliferation of native mobile applications. ...





....
Frameworks like Akka, QBit, Vert.x, Ratpack, Node.js, and Restlets are more streamlined to support the communication/service backends of mobile application development. This is more of a reactive microservices architecture approach to development, focused on WebSocket and HTTP calls with JSON, and not on the classic three-tiered web application now named a monolith by the microservices architecture crowd (a term I do not like).
The main influencers of reactive microservices architecture are mobile applications (native and mobile web), richer web applications, cloud computing, the NoSQL movement, continuous X (integration and deployment), and the backlash to traditional eat-the-world SOA.
...

Read more about high-speed microservices, Java microservices architecture, and reactive microservices from a series of articles that we wrote.

Related links:
Read the full article at: Mobile Microservices

Microservices Architecture: VMware Releases Photon – a Cloud Native OS for microservices



VMware now has its own Linux distribution, 'Project Photon', as part of its microservices effort, which it calls "Cloud Native Application".


Microservices: Cloud Native Application

"The idea is that rather than rely on a monolithic application to do everything, one can instead create lightweight components that handle one part of the process previously baked into a single application." --The Register.
Now each component can be updated more often using a DevOps-driven release train instead of a larger, more rigid release train. Docker has become the poster boy of how you create microservices, although one does not need containerization to build microservices. You typically do need some sort of cloud/virtualization 2.0.
Containers like Docker use para-virtualization, which is more like chroot than a fully virtualized OS. This means that they can run closer to the actual hardware, and there are fewer levels of indirection between a containerized OS and a fully virtualized one. Docker instances inherit settings from the core OS, like allowed file handle limits, network configuration, etc. A Docker instance is more like a process that looks like an OS than a full VM instance.





Read more about high-speed microservices, Java microservices architecture, and reactive microservices from a series of articles that we wrote.


Microservices Runtime Statistics and Metrics


Reactive Microservices Architecture and Runtime Statistics & Metrics

Runtime statistics and metrics are important for distributed systems. Since microservices architectures tend to promote and encourage remote process communication, they are inherently distributed systems. Runtime statistics and metrics can include requests per second, available system memory, number of threads in use, open connections, failed authentications, expired tokens, and their ilk. If there is a parameter that is important to you, then you will want to track it. Given the complications of debugging a distributed system, you will find that runtime statistics of important parameters are a godsend.


Microservices Architecture Statistics
This is even more the case if you’re dealing with a lot of message queues. It can be difficult to determine where a message stopped being processed, and runtime statistics can help you track down issues.

Runtime statistics and metrics can also be a data stream into your big data systems. Understanding the types of requests and their counts, and being able to correlate those with time of day and events, can aid in understanding how people use your application. In the age of big data, data science, and microservices, one may conclude that runtime statistics are no longer an optional feature but a required feature for application development in an increasingly mobile and cloud world.

Just like logging became a must-have for applications, so have runtime statistics. Runtime statistics can be important for tracking errors, and when a certain threshold of errors occurs, a circuit breaker can be thrown open.

Remote calls and message buses can fail, or hang without a response until a timeout is reached. In the event of a system that is down, a multitude of timeouts can cause a cascading failure. The Circuit Breaker pattern can be used to prevent a catastrophic cascade. Runtime statistics can be used to track errors and trigger circuit breakers to open. You would want to use runtime statistics and circuit breakers with service discovery so that you can mark nodes as unhealthy.
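As a minimal sketch of that idea in plain Java (no particular library; the threshold and names are illustrative): an error counter fed by your stats engine trips the breaker, and an open breaker fails fast with a degraded response instead of waiting on timeouts.

import java.util.concurrent.atomic.AtomicInteger;

public class CircuitBreakerSketch {

    private final AtomicInteger recentErrors = new AtomicInteger();
    private volatile boolean open = false;
    private final int errorThreshold;

    public CircuitBreakerSketch(int errorThreshold) {
        this.errorThreshold = errorThreshold;
    }

    /* The stats engine feeds errors in; past the threshold, the breaker opens. */
    public void recordError() {
        if (recentErrors.incrementAndGet() >= errorThreshold) {
            open = true;
        }
    }

    /* A periodic health check or stats flush resets the window. */
    public void reset() {
        recentErrors.set(0);
        open = false;
    }

    public String call() {
        if (open) {
            return "degraded response"; // fail fast rather than pile up timeouts
        }
        return callDownstreamService();
    }

    private String callDownstreamService() {
        return "real response"; // stand-in for the remote call
    }
}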

You can use runtime statistics to do application-specific things like rate limiting a partner's application ID so that they do not consume your resources outside the bounds of their service agreements. Once you make microservices publicly available, you have to monitor and rate limit to collaborate effectively with partners. If you have ever used a public REST API, you are well aware of rate limiting, which may do things like limit the number of connections you are allowed to make and/or the number of certain requests that you are allowed to make in a given time period.
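A minimal fixed-window sketch of rate limiting keyed by application ID (the per-minute budget and the appId key are illustrative, not a real service agreement):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimiterSketch {

    private final Map<String, Window> windows = new ConcurrentHashMap<>();
    private final int maxRequestsPerMinute;

    public RateLimiterSketch(int maxRequestsPerMinute) {
        this.maxRequestsPerMinute = maxRequestsPerMinute;
    }

    /* Returns false once a partner's application ID exceeds its per-minute budget. */
    public boolean allow(final String appId) {
        final long currentMinute = System.currentTimeMillis() / 60_000;
        final Window window = windows.compute(appId, (id, w) ->
                (w == null || w.minute != currentMinute) ? new Window(currentMinute) : w);
        return window.count.incrementAndGet() <= maxRequestsPerMinute;
    }

    /* One counting window per application ID per minute. */
    private static final class Window {
        final long minute;
        final AtomicInteger count = new AtomicInteger();

        Window(long minute) {
            this.minute = minute;
        }
    }
}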
If you believe in the concepts of the Reactive Manifesto, then you will want to gather the runtime statistics that allow you to write reactive microservices.


QBit StatsService

QBit is a reactive microservices library that comes with a runtime statistics engine. QBit services are exposed via WebSocket RPC using JSON and via REST. The statistics engine is easy to query and use. QBit's stats engine can be integrated with StatsD for display and consumption of stats. There are tools and vendors who deal directly with StatsD feeds. You can also query QBit stats and use them to implement features like rate limiting, or spinning up new nodes when you detect things are getting overloaded.



StatsD the standard stats engine

StatsD is a network daemon for aggregating statistics, such as counters and timers, and shipping them over UDP to backend services, such as Graphite or Datadog. In less than 5 years since it was first introduced, StatsD has become an important tool to aid in debugging and monitoring microservices. If you are doing DevOps, then you are likely using StatsD.

StatsD was a simple daemon developed and released by Etsy. StatsD is used to aggregate and summarize application metrics. StatsD has a plethora of clients for various programming languages (Ruby, Python, Java, Erlang, Node, Scala, Go, Haskell, etc.). The StatsD daemon collects stats from these clients using a published wire protocol. StatsD is so popular that its protocol has become a universal protocol for application metrics collection. The Etsy StatsD daemon is the reference implementation, but there are other implementations, like the Go Stats Daemon and many more.

StatsD captures different types of metrics: Gauges, Counters, Timing Summary Statistics, and Sets. You can decorate your code to capture this type of data and report it.

A StatsD daemon listens to UDP traffic from StatsD clients. StatsD collects runtime statistics data over time and does periodic “flushes” of the data to analysis and monitoring engines you choose.
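Because the wire protocol is plain text over UDP, a client can be tiny. Here is an illustrative Java snippet that sends one counter increment to a local StatsD daemon on the conventional port 8125 (the bucket name myapp.requests is made up):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsDClientSketch {

    public static void main(String... args) throws Exception {
        // StatsD's wire format is "<bucket>:<value>|<type>"; "c" is a counter
        // (gauges use "g", timers use "ms").
        final byte[] payload = "myapp.requests:1|c".getBytes(StandardCharsets.UTF_8);

        try (DatagramSocket socket = new DatagramSocket()) {
            // Fire-and-forget: UDP never blocks the app and cannot cascade failure.
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("localhost"), 8125));
        }
    }
}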
Tools can turn your runtime statistics and metrics into actionable charts and alerts. Tools like Graphite are often used to visualize the state of microservices. Graphite is made up of Graphite-Web, which renders graphs and dashboards; Carbon, the metric processing daemons; and Whisper, a time-series database library.

There are other alternatives that QBit can integrate with as well like Coda Hale’s Metrics library which uses a Go Daemon.

StatsD seems to be the current champion of mind share, mainly due to its simplicity and fire-and-forget protocol. StatsD can't cause a cascading failure, and its client libs are very small.



Datadog and StatsD

Datadog allows importing StatsD data for graphing, alerting, and event correlation. They embedded the StatsD daemon within the Datadog Agent, so it is a drop-in replacement. Datadog added tagging to StatsD, which allows adding information to the metrics, like application version, event correlation, and more. Datadog is a monitoring service for IT, operations, development, and DevOps. It attempts to take input from many vendors, cloud providers, open source tools, and servers, and aggregate their data into actionable metrics.



StatsD and Kibana

Kibana is a flexible analytics and visualization platform for Elasticsearch. It provides real-time summaries and charting of streaming data from a variety of sources, including Logstash. Kibana has an intuitive interface which allows you to configure dashboards. Kibana can be used to graph data from Logstash, which uses Elasticsearch. Logstash has a plugin for StatsD. Kibana allows you to visualize streams of data in Elasticsearch from Logstash, es-hadoop, or third-party technologies like Apache Flume, Fluentd, and many others.



StatsD and SOLR and Banana

LucidWorks ported Kibana (the port is called Banana) and Logstash to work with Solr, so if you are a Solr shop, you have that as an option.

Conclusion

Runtime statistics and metrics are a very important component of microservices architecture. They help you debug, understand, and react to events in your application. They help you build circuit breakers. Make sure that runtime statistics are not treated like an afterthought in your microservices lib, but rather as part of the core. Tools like StatsD and Coda Hale Metrics allow you to gather metrics in a standard way. Tools like Graphite, Kibana, Datadog, and Banana help you understand the data and build dashboards. QBit, the Java microservices library, includes a queryable stats service which can feed into StatsD/Coda Hale Metrics or can be used to implement features like rate limiting.

Read more here:

User Experience and Microservices Monitoring

With microservices, which are released more often, you can try new features and see how they impact usage patterns. With this feedback, you can improve your application. It is not uncommon to employ A/B testing and multivariate testing to try out new combinations of features. Monitoring is more than just watching for failure. With big data, data science, and microservices, monitoring microservices runtime stats is required to know your application users. You want to know what your users like and dislike, and react.

Read more at Microservices Monitoring.

Debugging and Microservices Monitoring

Runtime statistics and metrics are critical for distributed systems, since microservices architectures use a lot of remote calls. Monitored microservices metrics can include requests per second, available memory, number of threads, number of connections, failed authentications, expired tokens, etc. These parameters are important for understanding and debugging your code. Working with distributed systems is hard. Working with distributed systems without reactive monitoring is crazy. Reactive monitoring allows you to react to failure conditions and ramp up services for higher loads.

Read more at Microservices Monitoring.

Circuit Breaker and Microservices Monitoring

You can employ the Circuit Breaker pattern to prevent a catastrophic cascade, and reactive microservices monitoring can be the trigger. Downstream services can be registered in a service discovery system so that you can mark nodes as unhealthy as well as react by rerouting in the case of outages. The reaction can be serving up a degraded version of the data or service, but the key is to avoid cascading failure. You don't want your services falling over like dominoes.

Cloud Orchestration and Microservices Monitoring

Reactive microservices monitoring enables you to detect heavy load and spin up new instances with the cloud orchestration platform of your choice (EC2, CloudStack, OpenStack, Rackspace, boto, etc.).

...
This allows you to write code that reacts to microservices metrics. QBit stats can be used to implement features like rate limiting, or spinning up new nodes when you detect things are getting overloaded. QBit can also feed stats into StatsD.

StatsD and Microservices Monitoring


StatsD is a network daemon for aggregating statistics, such as counters and timers, and shipping them over UDP to backend services, such as Graphite or Datadog. StatsD has many small client libs for Java, Python, Ruby, Node, etc. The StatsD server collects stats from clients using a published wire protocol. StatsD is the de facto standard. Although the Etsy StatsD server is the reference implementation (the first implementation was written in Perl), there are other implementations like the Go Stats Daemon, Datadog, and many more. StatsD captures different metric types: Gauges, Counters, Timing Summary Statistics, and Sets. You decorate your code to capture this type of data and report it. Although StatsD collects runtime statistics data over time and does periodic "flushes" of the data to the analysis and monitoring engines you choose, StatsD was originally written with Graphite in mind. Graphite is used to visualize the state of microservices. Graphite is made up of Graphite-Web (graph and dashboard rendering), Carbon (metric processing daemons), and Whisper (a time-series database library).

StatsD seems to be the current champion of mind share, mainly due to its simplicity and fire-and-forget protocol. StatsD can't cause a cascading failure, and its client libs are very small. There are other alternatives that QBit can integrate with as well, like Coda Hale's Metrics library, which uses a Go daemon.

StatsD can also dump its feed to Kibana or Banana via a Logstash plugin. You can use Kibana and Banana in place of Graphite. There is even commercial support for StatsD via Datadog, which allows monitoring, graphing, alerting, and event correlation. Datadog embedded the StatsD daemon within the Datadog Agent, so it is a drop-in replacement for StatsD. Datadog is a monitoring service for IT, operations, development, and DevOps. It attempts to take input from many vendors, cloud providers, open source tools, and servers, and aggregate their data into reactive, actionable metrics.

Reactive Microservices Monitoring


Reactive microservices monitoring is an essential ingredient of microservices architecture. You need it for debugging, knowing your users, working with partners, and building reactive systems that react to load and failures without cascading outages. Reactive microservices monitoring cannot be a hindsight decision. Build your microservices with monitoring in mind from the start. Make sure that the microservices lib that you use has monitoring of runtime statistics built in from the start, and that it is a core part of the library. StatsD and Coda Hale Metrics allow you to gather metrics in a standard way. Tools like Graphite, Kibana, Datadog, and Banana help you understand the data and build dashboards. QBit, the Java microservices library, includes a queryable stats service which feeds into StatsD/Coda Hale Metrics. QBit can also be used to create reactive features to do rate limiting or spin up new nodes. With big data, data science, and microservices, monitoring runtime stats is required to know your application users, know your partners, and know what your system will do under load.


Using Docker, Gradle to create Java docker distributions for java microservices draft 4


I have used Docker and Vagrant quite a lot to set up a series of servers. This is a real lifesaver when you are trying to do some integration tests and run into issues that would be hard to track down without running "actual servers". Running everything on one box is not the same as running many "servers". Even if your final deployment is VMware or EC2 or bare metal servers, Docker and Vagrant are great for integration testing and writing setup documentation.
I also tend to use gradle a lot these days and have grown quite fond of the application and distribution plugins. To me, the gradle application plugin and Docker (or Vagrant, or EC2 with boto) are an essential way of doing Java microservice development.
Before we get into Vagrant or Docker, let's try to do something very simple. Let's use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).

Using Gradle and the Gradle Application plugin and Docker

Gradle can create a distribution zip or tar file, which is an archive file with the libs and shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can just install all of this stuff into a directory of your choice.
What I typically do is this….
  • Create a dist tar file using gradle.
  • Create a dockerfile.
The docker file copies the dist tar to the container, untars it and then runs it inside of docker. Once you have a docker file, you can make a docker container that you can ship around. The gradle and docker files hold all of the config info that is common.
You may even have special gradle build options for different environments. Or your app talks to Consul or etcd on startup and looks up the environment-specific stuff like server locations so the docker binary dist can be identical. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.
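For example, a minimal sketch of reading one config value from Consul's KV HTTP API at startup might look like the following; the key name is hypothetical and a local Consul agent on its default port is assumed.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class ConsulConfigLookup {

    public static void main(String... args) throws Exception {
        // Hypothetical key; ?raw asks Consul for the bare value instead of the JSON/base64 envelope.
        final URL url = new URL("http://localhost:8500/v1/kv/myapp/server.location?raw");
        final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        try (InputStream in = connection.getInputStream()) {
            final String serverLocation = new Scanner(in, "UTF-8").useDelimiter("\\A").next();
            System.out.println("server.location = " + serverLocation);
        }
    }
}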
Our binary deliverable is the runnable docker container not a jar file or a zip.
The distZip, and/or distTar is just a way to package up our code and make it easy to shove into our docker container.
If you go the docker route, then the docker container is our binary (runnable) distribution not the tar or zip. We do not have to guess what JVM, because we configure the docker container with exactly the JVM we want to use. We can install any drivers or daemons or utilities that we might need from the Linux world into our container.
Think of it this way. With maven and/or gradle you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files and not only the right MySQL jar file but the actual right version MySQL server which can be packaged in the same runnable binary (the Linux Docker container).
Gradle application plugin generates a zip or tar file with everything we need and does not require a master Java process, or another repo cache of jars, etc. Between the gradle application plugin and docker, we do whatever we need to do with our binary configuration but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of the tests includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since it is just chroot-like and not a full virtual machine).
I think Docker, gradle and the gradle application plugin are your best option for creating fast integration tests. But of course if you have EC2/boto, Vagrant, etc., Docker is not the only option.

Gradle application plugin

Our first goal is to do the following. Use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started let's do some prework.
$ sudo mkdir /etc/myapp
$ sudo chown rhightower /etc/myapp
Do the same for /opt/myapp. Where rhightower is your username. :)

The Java app

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }

}
It is a simple Java app. It looks at a configuration file that has the port. The location of the configuration file is passed via a System property. If the System property is null, then it loads the config file from the current working directory.
When you run this program from an IDE, you will get.
The port is 8080
But we want the ability to create an /etc/myapp/conf.properties and an /opt/myapp install dir. To do this we will use the application plugin.

Creating an install directory with the application plugin

To create /etc/myapp/conf.properties and an /opt/myapp install dir, we will use the gradle application plugin.

gradle application plugin

apply plugin:'java'
apply plugin:'application'

mainClassName ='com.example.Main'
applicationName ='myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
mavenCentral()
}

task copyDist(type:Copy) {
dependsOn "installApp"
from "$buildDir/install/myapp"
into '/opt/myapp'
}

task copyConf(type:Copy) {
from "conf/conf.properties"
into "/etc/myapp/"
}


dependencies {
}
Running the copyDist task will also run the installApp task, which is provided by the application plugin that is configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.

Build dir layout of the myapp gradle project

.
├── build.gradle
├── conf
│   └── conf.properties
├── settings.gradle
└── src
└── main
└── java
└── com
└── example
└── Main.java

conf/conf.properties

port=8080
To build and deploy the project into /opt/myapp, we do the following:

Building and installing our app

$ gradle build copyDist
This creates this directory structure for the install operation.

Our app install

$ tree /opt/myapp/
/opt/myapp/
├── bin
│   ├── myapp
│   └── myapp.bat
└── lib
└── gradle-app.jar

To deploy a sample config we do this:

Copy sample config

$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.

Edit file and change property

$ nano /etc/myapp/conf.properties 
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
Change the properties file again. Run the app again.

Next up

Configuring logging under /etc/myapp/logging.xml.

Logging

SLF4J is the standard logging facade for Java. Logback is the successor to Log4j. The nice thing about SLF4J is that you can plug in java.util.logging, log4j or Logback underneath. For now, we are recommending Logback.
We are going to use Logback. Technically we are going to use SLF4J, and we are going to use the Logback implementation of it.
Logback allows you to set the location of the log configuration via a System property called logback.configurationFile.

Example setting logback via System property
java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file.
  • logback-core-1.1.3.jar
  • logback-classic-1.1.3.jar
  • slf4j-api-1.7.12.jar

Adding dependencies to gradle file

dependencies {
compile 'ch.qos.logback:logback-core:1.1.3'
compile 'ch.qos.logback:logback-classic:1.1.3'
compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with gradle needs to pass the location to our application. We do that with the applicationDefaultJvmArgs in the gradle build.

Adding logback.configurationFile System property to launcher script

applicationDefaultJvmArgs = [
"-Dmyapp.config.file=/etc/myapp/conf.properties",
"-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can store a logging config in our project so it gets stored in git.

./conf/logging.xml log config

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
</encoder>
</appender>

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/opt/logging/logs</file>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
</encoder>

<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
<MinIndex>1</MinIndex>
<MaxIndex>10</MaxIndex>
</rollingPolicy>

<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<MaxFileSize>2MB</MaxFileSize>
</triggeringPolicy>
</appender>

<logger name="com.example.Main" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT" />
<appender-ref ref="FILE" />
</logger>

<root level="INFO">
<appender-ref ref="STDOUT" />
</root>
</configuration>
Then we can add some tasks in our build script to copy it to the right location.

Scripts to copy logging script into correct location for install

task copyLogConf(type: Copy) {
from "conf/logging.xml"
into "/etc/myapp/"
}

task copyAllConf() {
dependsOn "copyConf", "copyLogConf"
}

To deploy our logging script run
gradle copyAllConf
Now after you install the logging config, you can turn it on or off.
Let's change our main method to use the logging configuration.

Main method that uses Logback to do logging.

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        logger.debug(String.format("The port is %s\n", properties.getProperty("port")));
    }

}

Next up after that

Configuring dockerfile

Raw Notes

allprojects {

group = 'mycompany.router'
apply plugin: 'idea'
apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'application'
version = '0.1-SNAPSHOT'

}


subprojects {


repositories {
mavenLocal()
mavenCentral()
}

sourceSets.main.resources.srcDir 'src/main/java'
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8

dependencies {
compile "io.fastjson:boon:$boonVersion"

testCompile "junit:junit:4.11"
testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
}

task buildDockerfile (type: Dockerfile) {
dependsOn distTar
from "java:openjdk-8"
add "$distTar.archivePath", "/"
workdir "/$distTar.archivePath.name" - ".$distTar.extension" + "/bin"
entrypoint "./$project.name"
if (project.dockerPort) {
expose project.dockerPort
}
if (project.jmxPort) {
expose project.jmxPort
}
}

task buildDockerImage (type: Exec) {
dependsOn buildDockerfile
commandLine "docker", "build", "-t", "mycompany/$project.name:$version", buildDockerfile.dockerDir
}


task pushDockerImage (type: Exec) {
dependsOn buildDockerfile
commandLine "docker", "push", "mycompany/$project.name"
}


task runDockerImage (type: Exec) {
dependsOn buildDockerImage
if (project.dockerPort) {
commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
} else {
commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
}
}


task runDocker (type: Exec) {
if (project.dockerPort) {
commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
} else {
commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
}
}

}


project(':sample-web-server') {

mainClassName = "mycompany.sample.web.WebServerApplication"

applicationDefaultJvmArgs = ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=${jmxPort}",
"-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false"]

dependencies {
compile "io.fastjson:boon:$boonVersion"

compile group: 'io.advantageous.qbit', name: 'qbit-boon', version: '0.5.2-SNAPSHOT'
compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'

testCompile "junit:junit:4.11"
testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
}

buildDockerfile {
add "$project.buildDir/resources/main/conf/sample-web-server-config.json", "/etc/sample-web-server/conf.json"
add "$project.buildDir/resources/main/conf/sample-web-server-config.ctmpl", "/etc/sample-web-server/conf.ctmpl"
add "$project.buildDir/resources/main/conf/sample-web-server-consul-template.cfg", "/etc/consul-template/conf/sample-web-server/sample-web-server-consul-template.cfg"
volume "/etc/consul-template/conf/sample-web-server"
volume "/etc/sample-web-server"
}

}


class Dockerfile extends DefaultTask {
def dockerfileInfo = ""
def dockerDir = "$project.buildDir/docker"
def dockerfileDestination = "$project.buildDir/docker/Dockerfile"
def filesToCopy = []

File getDockerfileDestination() {
project.file(dockerfileDestination)
}

def from(image="java") {
dockerfileInfo += "FROM $image\r\n"
}

def maintainer(contact) {
dockerfileInfo += "MAINTAINER $contact\r\n"
}

def add(sourceLocation, targetLocation) {
filesToCopy << sourceLocation
def file = project.file(sourceLocation)
dockerfileInfo += "ADD $file.name ${targetLocation}\r\n"
}

def run(command) {
dockerfileInfo += "RUN $command\r\n"
}

def volume(path) {
dockerfileInfo += "VOLUME $path\r\n"
}

def env(var, value) {
dockerfileInfo += "ENV $var $value\r\n"
}

def expose(port) {
dockerfileInfo += "EXPOSE $port\r\n"
}

def workdir(dir) {
dockerfileInfo += "WORKDIR $dir\r\n"
}

def cmd(command) {
dockerfileInfo += "CMD $command\r\n"
}

def entrypoint(command) {
dockerfileInfo += "ENTRYPOINT $command\r\n"
}

@TaskAction
def writeDockerfile() {
for (fileName in filesToCopy) {
def source = project.file(fileName)
def target = project.file("$dockerDir/$source.name")
target.parentFile.mkdirs()
target.delete()
target << source.bytes
}
def file = getDockerfileDestination()
file.parentFile.mkdirs()
file.write dockerfileInfo
}
}

Working with StatsD and Java


Run statsD daemon with docker

sudo docker run -d \
--name graphite \
-p 80:80 \
-p 2003:2003 \
-p 8125:8125/udp \
hopsoft/graphite-statsd
Make sure you upgrade to the latest docker. I had to update boot2docker and update docker to get the above to work.

Update boot2docker

boot2docker update

brew upgrade docker

I installed docker manually, so to get the latest version with brew, I had to do this.
brew install docker
brew link --overwrite docker

Run script (OSX/Windows)

while true
do
    echo -n "example.statsd.counter.changed:$(((RANDOM % 10) + 1))|c" | nc -w 1 -u 192.168.59.103 8125
done
Then you can view the dashboard data at: http://192.168.59.103/dashboard

Note to get the ip address of docker

$ boot2docker ssh
$ ifconfig
eth1 Link encap:Ethernet HWaddr 08:00:27:E1:F5:54
inet addr:192.168.59.103 Bcast:192.168.59.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fee1:f554/64 Scope:Link

Note to get the ip address of the container running statsd

First find the container id.
$ docker ps

Then use the id to look up the container.
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <CONTAINER ID FROM LAST STEP>
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9183f205a3aa hopsoft/graphite-statsd:latest "/sbin/my_init" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:2003->2003/tcp, 0.0.0.0:8125->8125/udp graphite

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' 9183f205a3aa
172.17.0.1

If everything goes well

(screenshot: the Graphite web UI)

To start with let's try out the Java client.

The Java client is based on the reference Java client from the etsy project.
public class UsingStatsDIncrement {

    public static void main(String... args) throws Exception {
        StatsdClient client = new StatsdClient("192.168.59.103", 8125);

        while (true) {
            client.increment("foo.bar.baz", 10, .1);
            Thread.sleep(1000);
        }
    }
}
I let this run for a while. Then I go to http://192.168.59.103/dashboard and look under "stats.foo.bar." in the nav tree. You may not really understand what you are seeing at first, but there is a graph that goes from 0 to 20 sort of randomly. It can even go all the way up to 30.
(screenshot: Graphite counter graph)
Changing the sleep to 100 ms instead of 1000 should yield some different results.
Checking....
It does...
(screenshot: Graphite counter graph with 100 ms sends)
Now when we read and re-read the docs, they will make more sense.
Now I changed 0.1 to 1.0 and let it run for a while and I get this nice flat line.
(screenshot: flat-line Graphite counter graph)
I added a gauge.
StatsdClient client = new StatsdClient("192.168.59.103", 8125);

int gaugeValue = 10;

while (true) {
    client.increment("foo.bar.baz", 10, 1.0);
    client.gauge("gauge.foo.bar.baz", gaugeValue++, 1.0);

    if (gaugeValue > 100) {
        gaugeValue = 20;
    }
    Thread.sleep(100);
}
(screenshot: Graphite gauge graph)
Then I added timings and started clicking around.
public static void main(String... args) throws Exception {
    StatsdClient client = new StatsdClient("192.168.59.103", 8125);

    int gaugeValue = 10;

    while (true) {
        client.increment("foo.bar.baz", 10, 1.0);
        client.gauge("gauge.foo.bar.baz", gaugeValue++, 1.0);
        client.timing("foo.bar.baz.mytiming", gaugeValue, 1.0);

        if (gaugeValue > 100) {
            gaugeValue = 20;
        }
        Thread.sleep(100);
    }
}
(screenshot: several Graphite charts including the timing data)

Concepts

A good description of concepts related to the domain model of statsD is documented here:
It is sparse but perhaps complete.
The core concepts for StatsD came from a 2008 blog post (according to the Etsy documentation). The early versions of StatsD seemed to use RRDtool and Ganglia, while today's StatsD tends to use Graphite and Whisper.
A good description of the wire protocol in more of a tutorial form can be found here:
StatsD was written to work with Graphite. Graphite is used to visualize the state of microservices. Graphite is made up of Graphite-Web, which renders graphs and dashboards; Carbon, the metric processing daemons; and Whisper, a time-series database library.
When you send a stat, you send these basic types:
  • c: indicates a "count"
  • g: indicates a gauge
  • s: a mathematical set
  • ms: a time span

Count

The count adds up the values that StatsD receives for a metric within the flush interval and sends the total value. StatsD will collect all of the data it receives during its ten second flush interval and add it together to send a single value for that time frame. For example, if a counter receives samples of 1, 2 and 3 during one interval, Graphite gets a single value of 6.

Gauge

The gauge tells the current level of something, like memory used or the number of threads in use. With a gauge you send the most recent value, and StatsD sends Carbon the same value until it gets a different one.

Set

With sets, you send a bunch of values to StatsD and it will count the number of unique values it received. Think of a set of enumerators: UP, DOWN, WARNING, CRITICAL, OK. You want to know how many distinct values occurred during the interval.

Time Span

With time spans, you can send StatsD timing values. StatsD sends the values to Carbon, which calculates averages, percentiles, standard deviation, sum, etc.
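Putting the four types together: every sample is a UDP payload of the form bucket:value|type. Here is a small sketch that just prints one sample of each type (the bucket names are made up):

public class StatsDWireFormatSamples {

    public static void main(String... args) {
        System.out.println("myapp.requests:1|c");     // count: add 1 to this bucket for the current flush interval
        System.out.println("myapp.threads:42|g");     // gauge: the current level is 42
        System.out.println("myapp.status:WARNING|s"); // set: count distinct values seen per interval
        System.out.println("myapp.latency:320|ms");   // timing: one observation of 320 milliseconds
    }
}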
A good description of using StatsD/Graphite:

Vertx 3 looks amazing, nice reactive microservices framework/lib

It looks like Vertx 3 is a very significant release. It fills many gaps and fixes the direction set by Vertx 2 (which was already very compelling). Vertx 3 seems even more targeted at the reactive microservice space. It adds support for pluggable messaging, it has more than one cluster manager, and it has async support for MySQL, Redis, PostgreSQL and MongoDB. (We are not using any of those, but at least there are patterns in play for Cassandra.)

There is also a web-style framework to make dealing with Vertx core HTTP a little smoother.

Anyway.. I used Vertx 2 quite a bit, and am very interested in Vertx 3. 

QBit is mostly based on Vertx2. Many ideas for QBit came straight out of working with Vertx2. I am a big fan.

I plan on porting QBit to Vertx3 very soon.

Also most of the examples are using Vertx2 more like a lib than using Vertx as described in the docs. I want to start to change that and have more examples of working with QBit inside of Vertx.

I really like the direction of having a pluggable cluster manager in Vertx 3. It was pluggable before, but there was only one plugin. I can see someone plugging in ZooKeeper, or Consul, or what have you. I tend to use Consul personally because service clients are often written in non-JVM environments (that I work with anyway) and Consul has clients for most non-JVM languages.

Apex looks like a nice way to fill the gaps between traditional web frameworks and something like Vertx. I like that it is separate (at least in the docs) and well documented. I could have really used that level of support with my first Vertx project (I think some of the features were there, but certainly not so clear cut).

The integration with Dropwizard Metrics looks spot on (Microservices Metrics, Stats and monitoring are essential for microservices). I hope to see some future integration with StatsD/Graphite. Not sure how easy/hard that would be. 

Messaging and integration support seems really nice. QBit does a lot of messaging. Early (unreleased) versions of QBit could plug into the Vertx pub/sub or use their own for the same JVM. QBit then just used its own (with the goal of one day working with Vertx again) plus a remote one based on Vertx WebSocket support. I was planning on porting QBit to work over Kafka. Now I am more likely to make it work with Vertx again, and then write a Vertx messaging plugin for Kafka.

Currently, I am using Consul (it is in theory pluggable) in QBit to find peer nodes and broadcast events to other peers using WebSocket support via Vertx 2. When I port QBit to Vertx 3, I will try to use the Vertx clustering support to find peers, etc. Not sure if that exists, but it would be something that I would be interested in working on. This way Vertx 3 handles the plumbing. 


QBit has this type of support that works with Vertx 2.

@RequestMapping("/todo-service")
publicclassTodoService {

@RequestMapping("/todo/count")
publicintsize() {...

@RequestMapping("/todo/")
publicList<TodoItem> list() {...



 Then it would add REST support (JSON only) for that object. 

It implements the full REST style URL mapping...

Adder Service using URI params

    @RequestMapping("/adder-service")
publicclassAdderService {

@RequestMapping("/add/{0}/{1}")
publicintadd(@PathVariableinta, @PathVariableintb) {...
}

REST is really about hypermedia links and less about the URLs, but most Java folks (that I have worked with) are more concerned about the URL style. (Usually not me per se.)

Using a microservice remotely with WebSocket

/* Start QBit client for WebSocket calls. */
final Client client = clientBuilder()
.setPort(7000).setRequestBatchSize(1).build();


/* Create a proxy to the service. */
final AdderServiceClientInterface adderService =
client.createProxy(AdderServiceClientInterface.class,
"adder-service");

client.start();



/* Call the service */
adderService.add(System.out::println, 1, 2);

It works with Vertx2 now. There is also a strongly typed message bus that works remotely (https://github.com/advantageous/qbit/wiki/%5BDetailed-Tutorial%5D-Using-event-channels-and-strongly-typed-event-bus-with-QBit-(The-employee-example)). The strongly typed message bus is using Consul to find remote nodes.


It needs this method from "clustering" / "service discovery" piece.

public interface ServiceDiscovery extends Startable, Stoppable {
...

    default List<ServiceDefinition> loadServices(final String serviceName) {

        return Collections.emptyList();
    }
 

...


public class ServiceDefinition {

    private final HealthStatus healthStatus;
    private final String id;
    private final String name;
    private final String host;
    private final int port;
    private final long timeToLive;


If I had a magic genie... and I could make three Vertx 3 wishes.


1) A service discovery pluggable thing on top of clustering support  (which may or may not already exist) 
3) Integration with StatsD as a metrics stats collection framework (http://rick-hightower.blogspot.com/2015/05/working-with-statsd-and-java.html)



Using Docker, Gradle to create Java docker distributions for Java microservices Part 1


Docker and Vagrant: great tools for onboarding new developers

Docker and Vagrant are used quite a bit to setup development environments quickly. Docker can even be the deployment container for your application. 
Docker and Vagrant are real lifesavers when you are trying to do some integration tests or just trying to onboard new developers. 
They can help you debug issues that would be hard to track down without running "actual servers". Running everything on one box is not the same as running many "servers", but we can get close with Docker and Vagrant. Even if your final deployment is VMWare or EC2 or bare metal servers, Docker and Vagrant are great for integration testing and writing setup documentation.
For this blog post, we will focus more on Docker. Perhaps Vagrant can be used for a future blog post.

Goals for the examples in this article

We will create an application distribution with gradle. We will deploy the application distribution locally with gradle. We will setup a docker image. We will deploy an application distribution with docker. We will run our new docker image with our application distribution in it.

Gradle

Gradle tends to get used a lot these days for new projects. Many have grown quite fond of the gradle application and distribution plugins. The gradle application plugin and docker (or Vagrant or EC2 with boto) are an essential part of Java microservice development.
Before we get into Docker, let's try to do something very simple. Let's use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files). Then we will build on this to create a Docker image.

Using Gradle and the Gradle Application plugin and Docker

Gradle can create a distribution zip or tar file, which is an archive file with the libs and shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can just install all of this stuff into a directory of your choice.
What I typically do is this….
  • Create a dist tar file using gradle.
  • Create a dockerfile.
Docker uses the Dockerfile to copy the distribution files to the Docker image. From the Dockerfile, you can make a docker container that you can ship around. The gradle and docker files hold all of the config info that is common.
You may even have special gradle build options for different environments. Or your app talks to Consul or etcd on startup and looks up the environment-specific stuff like server locations so the docker binary dist can be identical. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.

Background of why Docker and gradle application plugin advantages

Our binary deliverable is the runnable docker image not a jar file or a zip. A running docker image is called a container. 
The gradle application plugin is an easy way to package up our compiled code and make it easy to shove into our docker image so we can run it as a docker container. 
If you go the docker route, then the docker container is our binary (runnable) distribution not the tar or zip. We do not have to guess what JVM, because we configure the docker image with exactly the JVM we want to use. We can install any drivers or daemons or utilities that we might need from the Linux world into our container. 
Think of it this way. With maven and/or gradle you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files and not only the right MySQL jar file but the actual right version MySQL server which can be packaged in the same runnable binary (the Linux Docker container). 
Gradle application plugin generates a zip or tar file with everything we need, or installs everything we need into a folder. The gradle application plugin does not require a master Java process, or another repo cache of jars, etc. It is not a container and does not produce a container. We just get an easy way to run our Java process.
Between the gradle application plugin and docker, we can do whatever we need to do with our binary configuration but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of the tests includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since it is just a chroot-like container and not a full virtual machine).
Docker, gradle and the gradle application plugin are one of your best options for creating fast integration tests. But of course if you have EC2/boto, Vagrant, Chef, etc., Docker is not the only option.

Gradle application plugin

Our first goal is to do the following. Use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started let's do some prework.

Creating sample directories for config

$ sudo mkdir /etc/myapp
$ sudo chown rick /etc/myapp
Do the same for /opt/myapp. Where rick is your username. :)

The Java app

Next let's create a really simple Java app, since our focus is on the gradle build and the Dockerfile.

Really simple Java main app

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }

}
It is a simple Java app. It looks at a configuration file that has the port. The location of the configuration file is passed via a System property. If the System property is null, then it loads the config file from the conf directory under the current working directory.
When you run this program from an IDE, you will get.

Output

The port is 8080
But we want the ability to create an /etc/myapp/conf.properties and an /opt/myapp install dir. To do this we will use the gradle application plugin.
Before we use the application plugin to install our app, let's make sure we have the right install folders set up.

Prework to setup install folders

$ sudo mkdir /opt/
$ sudo mkdir /opt/myapp
$ sudo chown rick /opt/myapp
Replace rick with your username.

Creating an install directory with the application plugin

To create /etc/myapp/conf.properties and an /opt/myapp install dir, we will use the gradle application plugin.

gradle application plugin

apply plugin:'java'
apply plugin:'application'

mainClassName ='com.example.Main'
applicationName ='myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
mavenCentral()
}

task copyDist(type:Copy) {
dependsOn "installApp"
from "$buildDir/install/myapp"
into '/opt/myapp'
}

task copyConf(type:Copy) {
from "conf/conf.properties"
into "/etc/myapp/"
}


dependencies {
}
Running the copyDist task will also run the installApp task, which is provided by the application plugin that is configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.

Build dir layout of the myapp gradle project

.
├── build.gradle
├── conf
│   └── conf.properties
├── settings.gradle
└── src
└── main
└── java
└── com
└── example
└── Main.java

conf/conf.properties

port=8080
To build and deploy the project into /opt/myapp, we do the following:

Building and installing our app

$ gradle build copyDist
This creates this directory structure for the install operation.
When we are done, our installed application looks like this:

Our app install

$ tree /opt/myapp/
/opt/myapp/
├── bin
│   ├── myapp
│   └── myapp.bat
└── lib
└── gradle-app.jar

To deploy a sample config we do this:

Copy sample config

$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.

Edit file and change property

$ nano /etc/myapp/conf.properties 
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
The key point here is that it is printing out 9090 instead of 8080. This means it is reading the config under /etc/myapp and not the config that is included in the app.
Change the properties file again. Run the app again. Do you see the change? If not, check to make sure you are editing the right file.

Logging

Logging should be one of the first things that you set up on any project. If it is a distributed system, then you need to set up a distributed logging aggregator as well.
SLF4J is the standard logging facade for Java. Logback is the successor to Log4j. The nice thing about SLF4J is that you can plug in java.util.logging, log4j or Logback underneath. For now, we are recommending Logback.
We are going to use Logback. Technically we are going to use SLF4J, and we are going to use the Logback implementation of it.
Logback allows you to set the location of the log configuration via a System property called logback.configurationFile.

Example setting logback via System property
java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file for Logback.
  • logback-core-1.1.3.jar
  • logback-classic-1.1.3.jar
  • slf4j-api-1.7.12.jar

Adding Logback dependencies to gradle file

dependencies {
compile 'ch.qos.logback:logback-core:1.1.3'
compile 'ch.qos.logback:logback-classic:1.1.3'
compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with gradle needs to pass the location to our application. We do that with the applicationDefaultJvmArgs in the gradle build.

Adding logback.configurationFile System property to launcher script

applicationDefaultJvmArgs = [
"-Dmyapp.config.file=/etc/myapp/conf.properties",
"-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can store a logging config in our project so it gets stored in git.

./conf/logging.xml log config

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
</encoder>
</appender>

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/opt/logging/logs</file>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
</encoder>

<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
<MinIndex>1</MinIndex>
<MaxIndex>10</MaxIndex>
</rollingPolicy>

<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<MaxFileSize>2MB</MaxFileSize>
</triggeringPolicy>
</appender>

<logger name="com.example.Main" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT" />
<appender-ref ref="FILE" />
</logger>

<root level="INFO">
<appender-ref ref="STDOUT" />
</root>
</configuration>
Then we can add some tasks in our build script to copy it to the right location.

Scripts to copy logging script into correct location for install

task copyLogConf(type: Copy) {
from "conf/logging.xml"
into "/etc/myapp/"
}

task copyAllConf() {
dependsOn "copyConf", "copyLogConf"
}

task installMyApp() {
dependsOn "copyDist", "copyConf", "copyLogConf"
}

To deploy our logging script run
gradle copyAllConf
Now after you install the logging config, you can turn it on or off.
Let's change our main method to use the logging configuration instead of System.out.

Main method that uses Logback to do logging.

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        logger.debug(String.format("The port is %s\n", properties.getProperty("port")));
    }

}
Now when we run the app from the command line, we get:

Output from running the app

12:20:36,081 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
12:20:36,082 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@769c9116 - Registering current configuration as safe fallback point

conf 12:20:36.096 [main] DEBUG c.e.Main - The port is 9090

Installing Docker

You will need to install docker on your Mac OSX machine.
To do this use brew, a package manager for OSX.
Install brew by following the instructions on the brew site, then run:

$ sudo chown -R rick /usr/local
$ brew install caskroom/cask/brew-cask
$ brew cask install virtualbox
$ brew install docker
$ brew install boot2docker
$ boot2docker init
$ boot2docker up

Add the following to your ~/.profile

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/rick/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
For Windows and Linux, follow the install instructions for those OSes. Linux does not need boot2docker. OSX and Windows need boot2docker, which runs the docker daemon (the daemon currently only runs on Linux). On OSX and Windows, boot2docker runs inside VirtualBox.

Docker image folder

To facilitate docker usage, let's create an image folder with all the bits we need for our image.
The image folder will hold the docker image. It will be under the project dir.

Docker image dir Layout

$ pwd
$ tree
.
├── Dockerfile
├── buildImage.sh
├── etc
│   └── myapp
│   ├── conf.properties
│   └── logging.xml
├── opt
│   └── myapp
│   ├── bin
│   │   ├── myapp
│   │   ├── myapp.bat
│   │   └── run.sh
│   └── lib
│   ├── logback-classic-1.1.3.jar
│   ├── logback-core-1.1.3.jar
│   ├── myapp.jar
│   └── slf4j-api-1.7.12.jar
├── runContainer.sh
└── var
└── log
└── readme.md
We have added a task to our gradle script to copy the application files to this directory structure so we can easily deploy a docker image.
Let's look at the Dockerfile which contains the directives for Docker to build our image.

Dockerfile

We kept the Dockerfile really simple.

Dockerfile for myapp (projectDir/image/Dockerfile)

FROM java:openjdk-8

COPY opt /opt
COPY etc /etc
COPY var /var


ENTRYPOINT /opt/myapp/bin/run.sh
This creates an image from an existing image that has Java OpenJDK 8 already installed. The docker file copies opt, etc, and var into the Docker image.
To build this image, we run the following docker command:
$ docker build -t example/myapp:1.0-SNAP .
OK, so where do all of the files under image come from? Most of them you have seen before. Copy logging.xml and conf.properties over to etc; you can configure the image differently than your dev environment. To get the opt directory populated, we added a task to our gradle script. To simplify (standardize) the entry point, and to allow setting env variables as well as other Java system properties, we added a run.sh script.
#!/usr/bin/env bash
/opt/myapp/bin/myapp
Make the launch script executable. We specified that the launch script is the entry point (what Docker should run when we run the container), e.g., ENTRYPOINT /opt/myapp/bin/run.sh.

Make run.sh executable.

$ pwd
/Users/rick/github/myapp/image

$ chmod +x opt/myapp/bin/run.sh
Before you build it, you have to have jar files and run scripts from the gradle application plugin. 

Task to gradle script that copies application libs and start scripts into Docker Image

task copyDistToImage(type:Copy) {
dependsOn "installApp"
from "$buildDir/install/myapp"
into "$projectDir/image/opt/myapp"
}

Running copyDistToImage

$ gradle copyDistToImage
Once you copy the dist to the image directory, then you can build the image with docker build -t example/myapp:1.0-SNAP . as described above.
Once you install it, then you can run it.

Running Docker container

$  ./runContainer.sh 
conf 20:39:09.474 [main] DEBUG c.e.Main - The port is set to 9999
From the above run, you can see that I modified the port to 9999 in projectDir/image/etc/conf.properties.

Full gradle build file with copy to image command

apply plugin:'java'
apply plugin:'application'


def installOptDir="/opt/myapp"

def installConfDir="/etc/myapp"

mainClassName ='com.example.Main'
applicationName ='myapp'

applicationDefaultJvmArgs = [
"-Dmyapp.config.file=/etc/myapp/conf.properties",
"-Dlogback.configurationFile=/etc/myapp/logging.xml"]

repositories {
mavenCentral()
}

task copyDist(type:Copy) {
dependsOn "installApp"
from "$buildDir/install/myapp"
into installOptDir
}

task copyConf(type:Copy) {
from "conf/conf.properties"
into installConfDir

}


task copyLogConf(type:Copy) {
from "conf/logging.xml"
into installConfDir

}

task copyAllConf() {
dependsOn "copyConf", "copyLogConf"

}

task installMyApp() {
dependsOn "copyDist", "copyConf", "copyLogConf"

}

task copyDistToImage(type:Copy) {
dependsOn "installApp"
from "$buildDir/install/myapp"
into "$projectDir/image/opt/myapp"
}


dependencies {
compile 'ch.qos.logback:logback-core:1.1.3'
compile 'ch.qos.logback:logback-classic:1.1.3'
compile 'org.slf4j:slf4j-api:1.7.12'
}
We created an application distribution with gradle. We deployed the application distribution locally with gradle. We setup a docker image. We deployed an application distribution with docker. We ran a docker image with our application distribution in it.

Ideas for future article

Show how to link container

Setup consul


Docker and Gradle to create Java microservices part 2 Connecting containers with links


In the last article, we used gradle and docker to create a docker container that had our Java application running in it. (We pick up right where we left off, so go back to that article if you have not read it.)
This is great, but microservices typically talk to other microservices or other resources like databases. So how can we deploy our service to talk to another microservice? The docker answer to this is docker container links.
When we add links to another docker container, docker will add a domain name alias for the other container so we don't have to ship IP addresses around. For more background about Docker container links, check out their documentation.
This can help us with local integration tests and with onboarding new developers. It allows us to set up topologies of docker containers that collaborate with each other over the network.
For this example, we will have one Java application called client and one called server. The project structure will look just like the last project structure.

Running the client application

docker run --name client --link server:server -t -i example/client:1.0-SNAP
Notice that we pass --link server:server; this will add a DNS-like alias for server so we can configure the host address of our server app as server. In practice, you would want a more qualified name. When we start up the server, we will need to give it the docker name server with --name server so that the client can find its address.
Thus under images/etc/client/conf/conf.properties we would have:

Config for client app image images/etc/client/conf/conf.properties

port=9999
host=server
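Before wiring up the real client, you can sanity-check the link alias from inside the client container. A minimal sketch, assuming the container was started with --link server:server as shown above (docker writes a hosts entry for the alias):

import java.net.InetAddress;

public class CheckLinkAlias {

    public static void main(String... args) throws Exception {
        // Inside the linked client container, "server" resolves via the
        // hosts entry that docker --link created.
        final InetAddress address = InetAddress.getByName("server");
        System.out.println("server resolves to " + address.getHostAddress());
    }
}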

Client gradle build script

Our gradle build script for our client is essentially the same as our myapp example. The major difference is that we now depend on QBit, which is a microservice lib that makes it easy to work with HTTP clients and servers.

gradle build script for client

apply plugin:'java'
apply plugin:'application'
apply plugin:'idea'


def installOptDir="/opt/client"
def installConfDir="/etc/client"

mainClassName ='com.example.ClientMain'
applicationName ='client'

applicationDefaultJvmArgs = [
"-Dclient.config.file=/etc/client/conf.properties",
"-Dlogback.configurationFile=/etc/client/logging.xml"]

repositories {
mavenCentral()
}

task copyDist(type:Copy) {
dependsOn "installDist"
from "$buildDir/install/client"
into installOptDir
}

task copyConf(type:Copy) {
from "conf/conf.properties"
into installConfDir

}


task copyLogConf(type:Copy) {
from "conf/logging.xml"
into installConfDir

}

task copyAllConf() {
dependsOn "copyConf", "copyLogConf"

}

task installClient() {
dependsOn "copyDist", "copyConf", "copyLogConf"

}

task copyDistToImage(type:Copy) {
dependsOn "installDist"
from "$buildDir/install/client"
into "$projectDir/image/opt/client"
}


dependencies {

compile group:'io.advantageous.qbit', name:'qbit-vertx', version:'0.8.2'
compile 'ch.qos.logback:logback-core:1.1.3'
compile 'ch.qos.logback:logback-classic:1.1.3'
compile 'org.slf4j:slf4j-api:1.7.12'
}

The client main app

The client main class looks very similar to our myapp example. Except now it uses the host and port to connect to an actual server.
package com.example;

/**
 * Created by rick on 5/15/15.
 */

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

import io.advantageous.boon.core.Sys;
import io.advantageous.qbit.http.client.HttpClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static io.advantageous.qbit.http.client.HttpClientBuilder.httpClientBuilder;

public class ClientMain {

    static final Logger logger = LoggerFactory.getLogger(ClientMain.class);

    public static void main(final String... args) throws IOException {

        final String configLocation = System.getProperty("client.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();
        if (confFile.exists()) {
            properties.load(Files.newInputStream(confFile.toPath()));
        } else {
            properties.load(Files.newInputStream(new File("./conf/conf.properties").toPath()));
        }

        final int port = Integer.parseInt(properties.getProperty("port"));
        final String host = properties.getProperty("host");

        logger.info(String.format("The port is set to %d %s\n", port, host));

        final HttpClient httpClient = httpClientBuilder()
                .setHost(host).setPort(port).build();
        httpClient.start();

        for (int index = 0; index < 10; index++) {
            System.out.println(httpClient.get("/foo/bar").body());
            Sys.sleep(1_000);
        }

        Sys.sleep(1_000);

        httpClient.stop();
    }

}
It connects to the server with httpClient, does 10 HTTP GETs and prints the results to System.out. The point here is that it is able to find the server without hard-coding an IP address, as both the client and server are Docker container instances, which are ephemeral, elastic servers.

Server application

The server uses the same gradle build file as the myapp and client examples. It uses the same image directory and Dockerfile as those examples as well.
Even the Main class looks similar to the earlier two examples, as the focus is on gradle and Docker, not on our app per se.

ServerMain.java

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;
import io.advantageous.qbit.http.server.HttpServer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static io.advantageous.qbit.http.server.HttpServerBuilder.httpServerBuilder;

public class ServerMain {

    static final Logger logger = LoggerFactory.getLogger(ServerMain.class);

    public static void main(final String... args) throws IOException {

        final String configLocation = System.getProperty("server.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();
        if (confFile.exists()) {
            properties.load(Files.newInputStream(confFile.toPath()));
        } else {
            properties.load(Files.newInputStream(
                    new File("./conf/conf.properties").toPath()));
        }

        final int port = Integer.parseInt(properties.getProperty("port"));

        HttpServer httpServer = httpServerBuilder()
                .setPort(port).build();

        httpServer.setHttpRequestConsumer(httpRequest -> {
            logger.info("Got request " + httpRequest.address()
                    + " " + httpRequest.getBodyAsString());
            httpRequest.getReceiver()
                    .response(200, "application/json", "\"hello\"");
        });

        httpServer.startServer();
    }

}
We just start a server and send a body of "hello" when called. 
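Before baking the server into an image, you can smoke-test it from any JVM on the same box. A minimal sketch, assuming the server is configured to listen on port 9999 as the client config above expects:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class ServerSmokeTest {

    public static void main(String... args) throws Exception {
        // The server answers every request, so any path will do; port 9999 is an assumption
        // that matches the conf.properties used in this example.
        final URL url = new URL("http://localhost:9999/foo/bar");
        final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        try (InputStream in = connection.getInputStream()) {
            System.out.println(new Scanner(in, "UTF-8").useDelimiter("\\A").next());
        }
    }
}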

Dockerfile

There were some changes to the Dockerfile used to create the images. I ran into some issues with QBit and OpenJDK's version number (which has been fixed but not released), so I switched the example to use the Oracle 8 JDK.

Dockerfile using Oracle JDK 8

# Pull base image.
FROM ubuntu

# Install Java.
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections

RUN apt-get update
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update

RUN apt-get install -y oracle-java8-installer
RUN rm -rf /var/lib/apt/lists/*
RUN rm -rf /var/cache/oracle-jdk8-installer


# Define working directory.
WORKDIR /data

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle

COPY opt /opt
COPY etc /etc
COPY var /var


ENTRYPOINT /opt/client/bin/run.sh

Running server

To run the server docker container use the following:
docker run --name server -t -i example/server:1.0-SNAP
Note that we pass the --name server

Directory layout for client / server example

We don't repeat much from the first example, but hopefully the directory structure will shed some light on how we organized the applications and their docker image directories.
$ pwd
../docker-tut/networked-apps
$ tree
.
├── client
│   ├── build.gradle
│   ├── client.iml
│   ├── conf
│   │   ├── conf.properties
│   │   └── logging.xml
│   ├── gradle
│   │   └── wrapper
│   │       ├── gradle-wrapper.jar
│   │       └── gradle-wrapper.properties
│   ├── gradlew
│   ├── gradlew.bat
│   ├── image
│   │   ├── Dockerfile
│   │   ├── buildImage.sh
│   │   ├── env.sh
│   │   ├── etc
│   │   │   └── client
│   │   │       ├── conf.properties
│   │   │       └── logging.xml
│   │   ├── opt
│   │   │   └── client
│   │   │       ├── bin
│   │   │       │   ├── client
│   │   │       │   ├── client.bat
│   │   │       │   └── run.sh
│   │   │       └── lib
│   │   │           ├── boon-json-0.5.5.jar
│   │   │           ├── boon-reflekt-0.5.5.jar
│   │   │           ├── jackson-annotations-2.2.2.jar
│   │   │           ├── jackson-core-2.2.2.jar
│   │   │           ├── jackson-databind-2.2.2.jar
│   │   │           ├── log4j-1.2.16.jar
│   │   │           ├── logback-classic-1.1.3.jar
│   │   │           ├── logback-core-1.1.3.jar
│   │   │           ├── myapp.jar
│   │   │           ├── netty-all-4.0.20.Final.jar
│   │   │           ├── qbit-boon-0.8.2.jar
│   │   │           ├── qbit-core-0.8.2.jar
│   │   │           ├── qbit-vertx-0.8.2.jar
│   │   │           ├── slf4j-api-1.7.12.jar
│   │   │           ├── vertx-core-2.1.1.jar
│   │   │           └── vertx-platform-2.1.1.jar
│   │   ├── runContainer.sh
│   │   └── var
│   │       └── log
│   │           └── client
│   │               └── readme.md
│   ├── settings.gradle
│   └── src
│       └── main
│           └── java
│               └── com
│                   └── example
│                       └── ClientMain.java
└── server
    ├── build.gradle
    ├── conf
    │   ├── conf.properties
    │   └── logging.xml
    ├── gradle
    │   └── wrapper
    │       ├── gradle-wrapper.jar
    │       └── gradle-wrapper.properties
    ├── gradlew
    ├── gradlew.bat
    ├── image
    │   ├── Dockerfile
    │   ├── buildImage.sh
    │   ├── env.sh
    │   ├── etc
    │   │   └── server
    │   │       ├── conf.properties
    │   │       └── logging.xml
    │   ├── opt
    │   │   └── server
    │   │       ├── bin
    │   │       │   ├── run.sh
    │   │       │   ├── server
    │   │       │   └── server.bat
    │   │       └── lib
    │   │           ├── boon-json-0.5.5.jar
    │   │           ├── boon-reflekt-0.5.5.jar
    │   │           ├── jackson-annotations-2.2.2.jar
    │   │           ├── jackson-core-2.2.2.jar
    │   │           ├── jackson-databind-2.2.2.jar
    │   │           ├── log4j-1.2.16.jar
    │   │           ├── logback-classic-1.1.3.jar
    │   │           ├── logback-core-1.1.3.jar
    │   │           ├── netty-all-4.0.20.Final.jar
    │   │           ├── qbit-boon-0.8.2.jar
    │   │           ├── qbit-core-0.8.2.jar
    │   │           ├── qbit-vertx-0.8.2.jar
    │   │           ├── server.jar
    │   │           ├── slf4j-api-1.7.12.jar
    │   │           ├── vertx-core-2.1.1.jar
    │   │           └── vertx-platform-2.1.1.jar
    │   ├── runContainer.sh
    │   └── var
    │       └── log
    │           └── server
    │               └── readme.md
    ├── server.iml
    ├── settings.gradle
    └── src
        └── main
            └── java
                └── com
                    └── example
                        └── ServerMain.java
As you can see, we followed the first guide as a template quite rigorously.

Setting up Consul to run with Docker for Microservices Service Discovery

For services to find one another easily, Docker by itself only takes you so far (although that may be changing, e.g., with Docker Swarm).
There are many ways containers/microservices can find each other: etcd, ZooKeeper and Consul, to name just a few. 
Consul is one of many choices. Consul is nice for microservices service discovery because it has an HTTP/JSON (REST-ish) API that can be accessed from any programming language, and it has flexible support for health checks that feeds right into the service discovery.
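To make the HTTP/JSON point concrete, here is a minimal Java sketch that lists the services registered in Consul's catalog. The /v1/catalog/services endpoint is standard Consul; the host and port assume the consul-server1 container and boot2docker IP used later in this article.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConsulCatalogExample {

    public static void main(String[] args) throws Exception {
        // Standard Consul catalog endpoint; host/port assume the setup below.
        URL url = new URL("http://192.168.59.103:8500/v1/catalog/services");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // JSON map of service name -> tags
            }
        }
    }
}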
If you are new to Consul, a good place to start is this consul tutorial. This article won't cover the basics that are covered there. 
We follow the same basic flow/structure defined in the Java micro service article to create docker images and containers, and we use the docker linking described in part 2 of the gradle docker java micro service article.
The basic directory structure to set up three consul servers is shown here.
.
├── consul_server1
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │   └── consul.json
│   ├── opt
│   │   └── consul
│   │   ├── bin
│   │   │   ├── readme.md
│   │   │   └── run.sh
│   │   ├── data
│   │   ├── readme.md
│   │   └── web
│   │   ├── index.html
│   │   └── static
│   │   ├── application.min.js
│   │   ├── base.css
│   │   ├── base.css.map
│   │   ├── bootstrap.min.css
│   │   ├── consul-logo.png
│   │   ├── favicon.png
│   │   └── loading-cylon-purple.svg
│   ├── runContainer.sh
│   ├── ui
│   └── var
│   └── logs
│   └── consul
│   └── readme.md
├── consul_server2
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │   └── consul.json
│   ├── opt
│   │   └── consul
│   │   ├── bin
│   │   │   ├── readme.md
│   │   │   └── run.sh
│   │   ├── data
│   │   └── readme.md
│   ├── runContainer.sh
│   └── var
│   └── logs
│   └── consul
│   └── readme.md
├── consul_server3
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │   └── consul.json
│   ├── opt
│   │   └── consul
│   │   ├── bin
│   │   │   ├── readme.md
│   │   │   └── run.sh
│   │   ├── data
│   │   └── readme.md
│   ├── runContainer.sh
│   └── var
│   └── logs
│   └── consul
│   └── readme.md
├── readme.md
└── runSample.sh

Consul configuration file for server 1

consul-server1 is set up to launch in bootstrap mode. 

/consul-server1/etc/consul/consul.json

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server1",
  "server": true,
  "bootstrap": true
}

Consul server 1 launches the UI.

consul-server1 is also set up to launch the consul UI, which took a bit of doing.
Remember: the IP address is not localhost when you are using boot2docker.
To get the IP address of the actual host, we use $HOSTNAME, which contains the host name, and the getent utility to look up the actual IP address. We added this to the run script that launches consul.

consul-server1/opt/consul/bin/run.sh

/opt/consul/bin/consul agent \
    -config-file=/etc/consul/consul.json \
    -ui-dir=/opt/consul/web \
    -client=`getent hosts $HOSTNAME | cut -d' ' -f1`



This will make the UI available at http://192.168.59.103:8500/ui/#/test-dc/services for OS X (the address of the boot2docker VM).

Dockerfile

The Dockerfile is pretty standard.
# Pull base image.
FROM ubuntu

EXPOSE 8500

RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y unzip


COPY opt /opt
COPY etc /etc
COPY var /var

RUN wget https://dl.bintray.com/mitchellh/consul/0.5.1_linux_amd64.zip
RUN unzip 0.5.1_linux_amd64.zip
RUN mv consul /opt/consul/bin/


ENTRYPOINT /opt/consul/bin/run.sh

We just pull down consul and then launch the run.sh script shown earlier.
The only differences in the consul-server2 and consul-server3 images are the config and the fact that they don't run the web UI.

consul config for server 3

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server3",
  "server": true,
  "bootstrap": false,
  "retry_join": ["consul-server1"]
}

Run.sh script for server 2 and 3

/opt/consul/bin/consul agent -config-file=/etc/consul/consul.json  

Building the image

To build the docker image (consul-server2 shown here), run:

$ docker build -t example/consul-server2:1.0-SNAP .

Starting the container

To start consul server 1 use this command.

Start the docker container for consul-server1

$ docker run --name consul-server1 -i -t  example/consul-server1:1.0-SNAP
To start the other two containers, we need to tell them how to find server 1. Once they find it, they remember it. :)

Launching docker container with link to consul-server1 for consul-server2 and consul-server3

$ docker run --name consul-server2 --link consul-server1:consul-server1  \
-t -i example/consul-server2:1.0-SNAP
At this point you should be able to launch the UI and see all of the nodes in the nodes tab.
Check it out.

Perf QBit versus Spring Boot

The first question people usually ask me is: how does QBit compare to X?
Where X is the person's favorite framework, or something they once read an article about. Sometimes the questions are interesting. Sometimes people are comparing things that do not make sense.
Often that X is Spring Boot.
So how does QBit compare? Keep in mind that we have not done any serious perf tuning of QBit. We are more focused on features, and you could use QBit on a Spring project. I have used QBit running inside of Spring Boot. QBit is a lib. It can run anywhere. QBit can run with Guice. You can run QBit inside of Vert.x.
Out of the box QBit REST compares to out of the box Spring REST as follows:

QBit code

package hello;

import io.advantageous.qbit.annotation.RequestMapping;

import java.util.Collections;
import java.util.List;

import static io.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;

/**
 * Example of a QBit Service
 * <p>
 * created by rhightower on 2/2/15.
 */
@RequestMapping("/myservice")
public class MyServiceQBit {

    @RequestMapping("/ping")
    public List<String> ping() {
        return Collections.singletonList("Hello World!");
    }

    public static void main(String... args) throws Exception {
        endpointServerBuilder()
                .setPort(6060)
                .build()
                .initServices(new MyServiceQBit()).startServer();
    }

}

Spring code

package hello;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Scope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.Collections;
import java.util.List;

@RestController
@EnableAutoConfiguration
@Scope
public class MyServiceSpring {

    @RequestMapping(value = "/services/myservice/ping",
            produces = "application/json")
    @ResponseBody
    List<String> ping() {
        return Collections.singletonList("Hello World!");
    }

    public static void main(String[] args) throws Exception {

        SpringApplication.run(MyServiceSpring.class, args);
    }

}
The code looks similar. Keep in mind that QBit only does JSON over HTTP or WebSocket. Also, QBit is focused on queuing and messaging, not just HTTP (it has more similarities with Akka than with Spring Boot). Spring supports many more content types and options. Also keep in mind that QBit can happily run inside of Spring. QBit is just a library. Spring Boot uses Tomcat, and Tomcat, being Java EE, does things that QBit will never do and does not need to do. 
But, people ask.
So how do they compare when we are talking about performance?

Spring Boot

$  wrk -c 2000 -d 10s http://localhost:8080/services/myservice/ping
Running 10s test @ http://localhost:8080/services/myservice/ping
2 threads and 2000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.03ms 2.12ms 31.37ms 67.26%
Req/Sec 13.25k 1.61k 16.33k 65.50%
264329 requests in 10.05s, 46.43MB read
Socket errors: connect 0, read 150, write 0, timeout 0
Requests/sec: 26312.11
Transfer/sec: 4.62MB

Here we see that Spring Boot handles just over 26K TPS. This is just running it with wrk on OSX. You can expect higher speeds with a properly tuned Linux TCP/IP stack.

P.S. You can make Spring Boot 30% to 50% faster by using Jetty instead of Tomcat.

QBit RAW

$ wrk -c 2000 -d 10s http://localhost:6060/services/myservice/ping
Running 10s test @ http://localhost:6060/services/myservice/ping
2 threads and 2000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.61ms 18.09ms 553.68ms 99.37%
Req/Sec 48.78k 3.20k 52.86k 93.50%
971148 requests in 10.02s, 80.58MB read
Requests/sec: 96910.31
Transfer/sec: 8.04MB


As you can see, QBit runs well over 3x faster; in fact it is approaching 4x faster. (Wait to see what happens when we start optimizing QBit.)
Now you are thinking: but this is not a fair test. You are right. It is not fair to QBit. QBit has to funnel everything through the same thread; Spring does not.
QBit ensures that only one thread at a time runs the ping operation, so you can put stateful things inside of QBit services (a toy illustration of this idea follows). You can't do that with the Spring Boot example. QBit is easier to program. Thus QBit has to do a lot more work, and it is still over 3x faster. The underlying network speed comes from Vert.x, which QBit uses as its network transport toolkit. (Thank you Mr. Fox.)
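The serialization guarantee is the interesting part. The sketch below is not QBit's actual implementation; it just shows the idea with a plain single-threaded executor: because every call runs on the same thread, the service can keep plain mutable fields with no synchronization.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SingleThreadedService {

    private long counter; // plain field, no synchronization needed

    // All calls are funneled through this one thread.
    private final ExecutorService queue = Executors.newSingleThreadExecutor();

    public Future<Long> ping() {
        // Calls execute one at a time, in order, on the same thread.
        return queue.submit(() -> ++counter);
    }

    public static void main(String[] args) throws Exception {
        SingleThreadedService service = new SingleThreadedService();
        for (int i = 0; i < 1000; i++) service.ping();
        System.out.println(service.ping().get()); // prints 1001
        service.queue.shutdown();
    }
}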
QBit also lets you easily run the same service over WebSocket, and when you do, you will get much higher throughput.
Performance breakdown:
  • QBit WebSocket speed 580K TPS (running on OSX macbook pro)
  • QBit HTTP speed 90K TPS (running on OSX macbook pro)
  • Spring Boot REST 26K TPS (running on OSX macbook pro)
Now, this does not mean much by itself. But I have built very real things with QBit and Vert.x: things which handle millions and millions of users using a fraction of the servers that similar services need, which is how I feed my family and pay rent.
QBit is a lot more than this silly perf test. QBit has a health system, a stats system, integrates with Consul, integrates with StatsD, and much more. Here is a non-exhaustive list.
  • QBit implements an easy-to-use reactive API with which you can manage async calls to any number of other systems.
  • QBit allows sharded in-memory services for CPU intensive needs.
  • QBit allows pooled/round-robin in-memory services for IO intensive needs.
  • QBit's stats system can be clustered so you can do rate limiting across a pool of servers.
  • You can tweak QBit's queuing system to handle 200M+ TPS internal queuing.
  • QBit has a high-speed, type safe event bus which can connect to other systems or just run really fast in the same process.
It is no mistake that QBit annotations look a lot like Spring MVC annotations. I spent a lot of years using Spring and Spring MVC. I like it.
It is also no mistake that you do not need Spring to run QBit. This allows QBit to be used in projects that are not using Spring (they do exist).

Think of QBit as a message-based queuing system which can do a lot of really cool stuff with service discovery, events, reactive programming, etc., but does it in such a way that it will feel comfortable to Java developers who have used Spring or Java EE. 

What is QBit?

For a quick overview check out these two slide decks: QBit Early Slide Deck and QBit Java Microservices Lib. For support from the community see the qbit google group. There is also a ton of documentation in the QBit WIKI. The wiki home page does a good job of covering QBit basics and tries to touch on every area. If you are looking more for the high-level what-is-QBit-trying-to-solve, then look no further than Java Microservices Architecture, High Speed Microservices, Reactive Microservices, and Microservice Monitoring.
Here is a high-level, bullet-list-style description (after a few more paragraphs, we get right to the bulleted lists).
QBit looks like Spring MVC but is more like Akka with a lot of inspiration from Vert.x.
QBit's core ideas were formed while using Vert.x at a large media company to handle 100M users on a fraction of the usual servers: 13, though it could have been done with 6. (A similar app in scope from another vendor used 2,000 servers; a similar Java EE app from a competing media company used 150.) For that project the service was capable of handling 2,000,000 CPU-intensive requests per second on 10 servers. (We were only able to test the service to 400K TPS across the cluster because the F5 started to implode, but we tested a single node at 150K TPS using a cloud load tester that simulated clients from iOS, Android and the web; 150K was the limit of the cloud load tool's license agreement.)
The ideas were expanded while using QBit to build an OAuth-based application-id rate limiter.
Uses messaging, event busses, and queues. Similar in architecture to Vert.x/Akka. Not quite as complex as Akka, not quite as low level as Vert.x.
A service looks just like a Java class. It has methods. It has fields. The fields do not need to be synchronized because only one thread can access the service at a time. Service calls are sent in batches over a queue (internally). For CPU-intensive services there are low-level checks to see if the service is busy. If the service is busy, the batch sizes get larger, which means there is less overhead from low-level thread-safe hand-off constructs, which means it runs faster. Batching also helps with remote services to maximize IO bandwidth. (A toy sketch of the batching idea follows.)
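This is not QBit's queue implementation, just the batching idea in miniature using a standard BlockingQueue: the consumer pays the hand-off cost once per batch rather than once per message.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchingQueueExample {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

        Thread consumer = new Thread(() -> {
            List<String> batch = new ArrayList<>();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // Block for the first message, then grab the rest in bulk.
                    batch.add(queue.take());
                    queue.drainTo(batch);
                    System.out.println("processing batch of " + batch.size());
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly
            }
        });
        consumer.start();

        // The busier the queue, the bigger the batches get on their own.
        for (int i = 0; i < 100_000; i++) queue.put("msg-" + i);
        Thread.sleep(200);
        consumer.interrupt();
    }
}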
If you are building high-speed in-memory services, QBit is an excellent option. If you are building async services, QBit is an excellent option. If you are building services that must work with streams of calls or messages, QBit is an excellent option. And if you are building microservices and want back-pressure detection, circuit breakers, and coordinated calls to any number of other internal/external services, QBit is an excellent option.

Tier 1: Core Features

  1. Fast queue lib (200M messages a second)
  2. Fast service queue lib (internal queue to queue, typed actor like)
  3. The proxy-to-service-queue programming model is idiomatic Java (smaller learning curve).
  4. Ability to handle queue states (queue empty, queue busy, queue limit, etc.)
  5. Reactor (call coordination, callbacks, timeout detection, async call workflow). Easy to use. Focused on services and handling calls.
  6. Worker services can be round-robin workers (pooled; each member has its own state but is stateless with respect to the client session) or sharded workers (calls are sharded so that in-memory services can split their data into shards, and any single shard of in-memory data is only accessed by one thread; see the sketch after this list).
Note: Tier 1 support is similar to Akka typed Actors. A service is a Java class that sits behind one or two queues (request/response queues).
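Again, this is not QBit's API, just the sharding idea reduced to a few lines: route each call by a key so the same key always lands on the same single-threaded worker, and that worker's state never needs locking.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShardedWorkers {

    private final ExecutorService[] shards;

    public ShardedWorkers(int shardCount) {
        shards = new ExecutorService[shardCount];
        for (int i = 0; i < shardCount; i++) {
            shards[i] = Executors.newSingleThreadExecutor();
        }
    }

    /** Same key always maps to the same shard, hence the same thread. */
    public void submit(String key, Runnable work) {
        int shard = Math.abs(key.hashCode() % shards.length);
        shards[shard].execute(work);
    }

    public void shutdown() {
        for (ExecutorService shard : shards) shard.shutdown();
    }

    public static void main(String[] args) {
        ShardedWorkers workers = new ShardedWorkers(4);
        workers.submit("user-42", () ->
                System.out.println("handled on " + Thread.currentThread().getName()));
        workers.shutdown(); // lets queued work finish, then exits
    }
}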

Tier 2: Core Features

  1. EventService: Pub/Sub events (works with service queue lib to deliver events) (20M TPS)
  2. StatsService: collect stats (high-speed, can be clustered)
  3. HealthService: collects health status of service queues (new)
  4. ServiceDiscovery service (Consul or JSON file watcher)

Tier 2: IO: Http Web Server / Http Web Client:

  1. Simplified HTTP dev, async
  2. One liners for sending and receiving HTTP requests/responses
  3. HttpClient / HttpServer
  4. Easy to use WebSocket lib for servers and clients
  5. Many utility functions and builders for building complex REST calls (e.g., postGzipJson).
  6. Thread safe access to handlers just like QBit services

Tier 3: REST/HTTP

  1. Builds support for REST on top of Tier 1 and Tier 2
  2. Focused on JSON-only responses and bodies (uses a high-speed JSON parser)
  3. Supports a subset of the Spring MVC annotations (@RequestMapping)
  4. An easier, smaller set. The main focus is on REST services with JSON.
  5. Faster than mainstream (4x faster than Spring/Tomcat): 96K TPS to 200K TPS
  6. Thread safe access to handlers just like QBit services

Tier 3: Remote call proxy

  1. Easy to use. Use Builders to build a proxy
  2. Remoting with WebSocket
  3. Programming model same as ServiceQueue
  4. Wire protocol is JSON and ASCII
  5. We use the fastest JSON parser on the JVM
  6. Proxy similar to local proxy
  7. 4x to 5x faster than QBit REST/HTTP (should be 10x faster) 400K TPS (800K messages a second)
  8. Program to interfaces. Simple.
  9. A service can have both remote and local proxy interfaces

Admin Features:

  1. Metadata provider (creates a large catalog of metadata about services, used for mapping REST calls; replaced an older, hackier approach)
  2. REST end point to see status of internal services (rudimentary http page as well)
  3. Auto-register health checks with services
  4. Auto-register rudimentary stats with services (counts, sampling for timings, creation, queue size, etc.; periodic and low overhead)

EventService

  1. Pub/Sub events (works with service queue lib to deliver events) (20M TPS)
  2. Easy-to-use API: register(), subscribe(), consume()
  3. Internal app messaging. Used by ServiceDiscovery, for example.
  4. You could use it to create workflows, or just to monitor an event
  5. Async.
  6. A Channel interface is used to implement the listener and to send messages. A Channel is just a plain Java interface, so it is easy to see who listens to an event from the IDE; all you have to do is a find-usages (see the sketch after this list).
  7. You do not have to use the proxy channel. A channel name can be any String.
  8. The EventService can be clustered! You can send events to all other nodes, and you can use it to replicate calls or events to other nodes. The clustering uses Consul.io to get the list of service members in a service group.
  9. By default, it is not clustered. You can easily integrate with any 3rd-party messaging system; the EventService has an event replication mechanism to replicate event messages to other pipes (Kafka, JMS, RabbitMQ, etc.).
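The channel-as-interface point is easiest to see in code. The sketch below is not QBit's EventService API, just the pattern: the channel is an ordinary Java interface, so an IDE find-usages on NewsChannel (a made-up example channel) shows every listener.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ChannelExample {

    /** The event channel as a plain Java interface. */
    interface NewsChannel {
        void onNews(String story);
    }

    /** Trivial dispatcher that fans a message out to all listeners. */
    static class NewsDispatcher implements NewsChannel {
        private final List<NewsChannel> listeners = new CopyOnWriteArrayList<>();

        void subscribe(NewsChannel listener) { listeners.add(listener); }

        @Override
        public void onNews(String story) {
            for (NewsChannel listener : listeners) listener.onNews(story);
        }
    }

    public static void main(String[] args) {
        NewsDispatcher dispatcher = new NewsDispatcher();
        dispatcher.subscribe(story -> System.out.println("listener got: " + story));
        dispatcher.onNews("QBit 0.8 released");
    }
}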

StatsService

  1. Tracks counts, timings, and levels
  2. Can be clustered (with the QBit ServiceDiscovery service and Consul)
  3. Can be integrated with StatsD
  4. Can be used to implement things like high-speed rate limiting based on a header (see the sketch after this list)
  5. Can be used to implement things like tracking the system and implementing back-pressure cool-off
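Here is a rough idea of header-based rate limiting built on plain counters. This is not QBit's StatsService API, and the fixed-window counting below is deliberately simplistic (the window roll is racy under heavy concurrency); it is only meant to show the shape of the technique.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class HeaderRateLimiter {

    private final int limitPerWindow;
    private final long windowMillis;
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    public HeaderRateLimiter(int limitPerWindow, long windowMillis) {
        this.limitPerWindow = limitPerWindow;
        this.windowMillis = windowMillis;
    }

    /** Returns true if the request with this app-id header may proceed. */
    public boolean allow(String appIdHeader) {
        long now = System.currentTimeMillis();
        if (now - windowStart > windowMillis) { // roll the window (simplistic)
            counts.clear();
            windowStart = now;
        }
        AtomicInteger count = counts.computeIfAbsent(appIdHeader, k -> new AtomicInteger());
        return count.incrementAndGet() <= limitPerWindow;
    }

    public static void main(String[] args) {
        HeaderRateLimiter limiter = new HeaderRateLimiter(2, 1000);
        System.out.println(limiter.allow("app-1")); // true
        System.out.println(limiter.allow("app-1")); // true
        System.out.println(limiter.allow("app-1")); // false
    }
}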

ServiceDiscovery

  1. Keeps track of services
  2. Lets you find services
  3. Lets you know when services have been added or removed
  4. Removes services that are unhealthy
  5. Uses Consul.io, but can also just poll a JSON file for easy integration with other tooling (etcd, Chef push, etc.); a polling sketch follows this list
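The JSON-file fallback can be as simple as watching the file's modification time. This sketch is not QBit's implementation; the file path and the five-second interval are made-up examples.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class JsonFileDiscovery {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path file = Paths.get("/etc/services.json"); // hypothetical location
        long lastModified = 0;
        while (true) {
            long modified = Files.getLastModifiedTime(file).toMillis();
            if (modified != lastModified) { // re-read only when the file changes
                lastModified = modified;
                String json = new String(Files.readAllBytes(file));
                System.out.println("service list changed: " + json);
                // parse the JSON and update the in-memory service registry here
            }
            Thread.sleep(5_000); // poll interval
        }
    }
}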

HealthService

  1. Internal single-node tracking (where a node means a process)
  2. Tracks N number of internal health nodes
  3. Services can auto-register with the health system.
  4. Uses TTLs. It is a watchdog-type service (see the sketch below).
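A TTL watchdog reduces to a map of last check-in times. Again, this is not QBit's HealthService API, just the mechanism: anything that has not checked in within the TTL is considered unhealthy.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlHealthWatchdog {

    private final long ttlMillis;
    private final Map<String, Long> lastCheckIn = new ConcurrentHashMap<>();

    public TtlHealthWatchdog(long ttlMillis) { this.ttlMillis = ttlMillis; }

    /** A service calls this periodically to signal liveness. */
    public void checkIn(String serviceName) {
        lastCheckIn.put(serviceName, System.currentTimeMillis());
    }

    /** Healthy only if the service checked in within the TTL. */
    public boolean isHealthy(String serviceName) {
        Long last = lastCheckIn.get(serviceName);
        return last != null && System.currentTimeMillis() - last <= ttlMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlHealthWatchdog watchdog = new TtlHealthWatchdog(100);
        watchdog.checkIn("event-service");
        System.out.println(watchdog.isHealthy("event-service")); // true
        Thread.sleep(150);
        System.out.println(watchdog.isHealthy("event-service")); // false
    }
}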