
Curlable stats and health checks... for QBit

You can wire in stats and get a single "ok" endpoint for your services.

You can use AdminBuilder to create an admin server with these utilities.
final AdminBuilder adminBuilder = AdminBuilder.adminBuilder();

final ServiceEndpointServer adminServer =
        adminBuilder.build();

adminServer.startServer();

final HealthServiceAsync healthService = adminBuilder.getHealthService();
healthService.register("foo", 1, TimeUnit.DAYS);
healthService.checkInOk("foo");
healthService.register("bar", 1, TimeUnit.DAYS);
healthService.checkInOk("bar");
You can also register for service queue health checks.
    @Bean
    public AdminBuilder qbitAdminBuilder() {

        final int port = environment.getProperty("qbit.admin.server.port", Integer.class);
        final String host = environment.getProperty("qbit.admin.server.host", String.class);

        final AdminBuilder adminBuilder = AdminBuilder.adminBuilder()
                .setPort(port).setHost(host);

        return adminBuilder;
    }


    @Bean
    public ServiceEndpointServer adminServiceEndpointServer(
            final AdminBuilder adminBuilder) {
        final ServiceEndpointServer adminServer =
                adminBuilder.build();

        adminServer.startServer();
        return adminServer;
    }

....

final Integer healthCheckTTL = env.getProperty("qbit.app.healthCheckTTLSeconds", Integer.class);
...

final ServiceBuilder serviceBuilder = ServiceBuilder.serviceBuilder();

final AdminBuilder qbitAdminBuilder =
        applicationContext.getBean("qbitAdminBuilder", AdminBuilder.class);
final HealthServiceAsync healthServiceAsync =
        qbitAdminBuilder.getHealthServiceBuilder().buildHealthSystemReporter();

serviceBuilder.registerHealthChecksWithTTLInSeconds(qbitAdminBuilder.getHealthService(), longName,
        healthCheckTTL == null ? 5 : healthCheckTTL);

....
You can also register services for stats collection (the stats system has a StatsD replicator, so you can replicate stats to StatsD):
final ServiceBuilder serviceBuilder = ServiceBuilder.serviceBuilder();
...
final StatsCollector statsCollector = applicationContext.getBean("qbitStatsCollector", StatsCollector.class);

serviceBuilder.registerStatsCollections(longName, statsCollector, flushTimeSeconds, sampleEvery);
Once you do this, then you can query for health status.
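For example, you can curl the admin endpoints. The sketch below assumes QBit's default admin endpoints and an example admin port of 7777; treat the exact paths and port as assumptions that may differ by QBit version and configuration.

$ curl http://localhost:7777/__admin/ok
true

$ curl http://localhost:7777/__admin/all-nodes/

$ curl http://localhost:7777/__admin/healthy-nodes/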
The ok endpoint returns true if all internal service queues are running, and false if any are failing.
You can also get the list of all nodes and the list of healthy nodes (the two lists below are identical because every node is checking in OK):
[
"api.proxy.unpackerService",
"api.proxy.healthService",
"api.proxy.forwarderService",
"api.proxy.bouncerService"
]
[
"api.proxy.unpackerService",
"api.proxy.healthService",
"api.proxy.forwarderService",
"api.proxy.bouncerService"
]
The extended stats look like this:
[
{
"name":"api.proxy.unpackerService",
"ttlInMS":10000,
"lastCheckIn":1434003690275,
"status":"PASS"
},
{
"name":"api.proxy.healthService",
"ttlInMS":10000,
"lastCheckIn":1434003690275,
"status":"PASS"
},
{
"name":"api.proxy.forwarderService",
"ttlInMS":10000,
"lastCheckIn":1434003690275,
"status":"PASS"
},
{
"name":"api.proxy.bouncerService",
"ttlInMS":10000,
"lastCheckIn":1434003690275,
"status":"PASS"
}
]
Registering for health checks can be done with the service bundle builder and the service endpoint server builder as well:
final ServiceBundleBuilder serviceBundleBuilder =
        ServiceBundleBuilder.serviceBundleBuilder();

final ServiceBundle serviceBundle = serviceBundleBuilder
        .setStatsCollector(statsCollector)
        .build();

serviceBundle.start();

serviceBundle.addService(new MyService());
Every service added to the bundle will get stats support.

Using QBit to create Java RESTful microservices


QBit Restful Microservices

Before we delve into QBit restful services, let's cover what we get from gradle's application plugin. In order to be a microservice, a service needs to run in a standalone process or a related group of standalone processes.

Gradle application plugin

Building a standalone application with gradle is quite easy. You use the gradle application plug-in.

Gradle build using java and application plugin

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8
version = '1.0'
mainClassName = "io.advantageous.examples.Main"

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
}
To round out this example, let's create a simple Main java class.

Simple main Java to demonstrate Gradle application plugin

package io.advantageous.examples;

public class Main {

    public static void main(String... args) {
        System.out.println("Hello World!");
    }
}
The project structure is as follows:

Project structure

$ tree
.
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── restful-qbit.iml
├── settings.gradle
└── src
    └── main
        └── java
            └── io
                └── advantageous
                    └── examples
                        └── Main.java
We use a standard maven style structure.
To build this application we use the following commands:

Building our application

$ gradle clean build

Output

:clean UP-TO-DATE
:compileJava
:processResources UP-TO-DATE
:classes
:jar
:assemble
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:check UP-TO-DATE
:build

BUILD SUCCESSFUL

Total time: 2.474 secs
To run our application we use the following:

Running our application

$ gradle run

Output

:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:run
Hello World!

BUILD SUCCESSFUL

Total time: 2.202 secs
There are three more commands that we care about:
  • installDist Installs the application into a specified directory
  • distZip Creates ZIP archive including libs and start scripts
  • distTar Creates TAR archive including libs and start scripts
The application plug-in allows you to create scripts to start a process. These scripts work on all operating systems. Microservices run as standalone processes. The gradle application plug-in is a good fit for microservice development.
Let's use the application plugin to create a dist zip file.

Using gradle to create a distribution zip

$ gradle distZip
Let's see where gradle put the zip.

Using find to see where the zip went

$ find . -name "*.zip"
./build/distributions/restful-qbit-1.0.zip
Let's unzip to a directory.

Unzipping to an install directory

$ mkdir /opt/example
$ unzip ./build/distributions/restful-qbit-1.0.zip -d /opt/example/
Archive: ./build/distributions/restful-qbit-1.0.zip
creating: /opt/example/restful-qbit-1.0/
creating: /opt/example/restful-qbit-1.0/lib/
inflating: /opt/example/restful-qbit-1.0/lib/restful-qbit-1.0.jar
creating: /opt/example/restful-qbit-1.0/bin/
inflating: /opt/example/restful-qbit-1.0/bin/restful-qbit
inflating: /opt/example/restful-qbit-1.0/bin/restful-qbit.bat
Now we can run it from the install directory.

Running from install directory

$ /opt/example/restful-qbit-1.0/bin/restful-qbit
Hello World!
Contents of restful-qbit startup script.
$ cat /opt/example/restful-qbit-1.0/bin/restful-qbit
#!/usr/bin/env bash

##############################################################################
##
## restful-qbit start up script for UN*X
##
##############################################################################

# Add default JVM options here. You can also use JAVA_OPTS and RESTFUL_QBIT_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""

APP_NAME="restful-qbit"
APP_BASE_NAME=`basename "$0"`

# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"

warn ( ) {
    echo "$*"
}

die ( ) {
    echo
    echo "$*"
    echo
    exit 1
}

# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
CYGWIN* )
cygwin=true
;;
Darwin* )
darwin=true
;;
MINGW* )
msys=true
;;
esac

# For Cygwin, ensure paths are in UNIX format before anything is touched.
if $cygwin ; then
    [ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
fi

# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
    ls=`ls -ld "$PRG"`
    link=`expr "$ls" : '.*-> \(.*\)$'`
    if expr "$link" : '/.*' > /dev/null; then
        PRG="$link"
    else
        PRG=`dirname "$PRG"`"/$link"
    fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/.." >&-
APP_HOME="`pwd -P`"
cd "$SAVED" >&-

CLASSPATH=$APP_HOME/lib/restful-qbit-1.0.jar

# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ;then
if [ -x "$JAVA_HOME/jre/sh/java" ] ;then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ] ;then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1|| die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi

# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ;then
MAX_FD_LIMIT=`ulimit -H -n`
if [ $? -eq 0 ] ;then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ;then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ] ;then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi

# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
    GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi

# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
    APP_HOME=`cygpath --path --mixed "$APP_HOME"`
    CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`

    # We build the pattern for arguments to be converted via cygpath
    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
    SEP=""
    for dir in $ROOTDIRSRAW ; do
        ROOTDIRS="$ROOTDIRS$SEP$dir"
        SEP="|"
    done
    OURCYGPATTERN="(^($ROOTDIRS))"
    # Add a user-defined pattern to the cygpath arguments
    if [ "$GRADLE_CYGPATTERN" != "" ] ; then
        OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
    fi
    # Now convert the arguments - kludge to limit ourselves to /bin/sh
    i=0
    for arg in "$@" ; do
        CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
        CHECK2=`echo "$arg"|egrep -c "^-"`    ### Determine if an option

        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then    ### Added a condition
            eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
        else
            eval `echo args$i`="\"$arg\""
        fi
        i=$((i+1))
    done
    case $i in
        (0) set -- ;;
        (1) set -- "$args0" ;;
        (2) set -- "$args0" "$args1" ;;
        (3) set -- "$args0" "$args1" "$args2" ;;
        (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
        (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
        (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
        (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
        (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
        (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
    esac
fi

# Split up the JVM_OPTS and RESTFUL_QBIT_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
    JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $RESTFUL_QBIT_OPTS


exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" io.advantageous.examples.Main "$@"

Creating a simple RESTful microservice

Let's create a simple HTTP service that responds with "pong" when we send it a ping, as follows.

Service that responds to this curl command

$ curl http://localhost:9090/services/pongservice/ping
"pong"
First let's import the QBit lib. We are using the SNAPSHOT, but hopefully by the time you read this the release will be available.
Add the following to your gradle build.

Adding QBit to your gradle build

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8
version = '1.0'
mainClassName = "io.advantageous.examples.Main"

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.7.3-SNAPSHOT'
}
Define a service as follows:

QBit service

package io.advantageous.examples;


import io.advantageous.qbit.annotation.RequestMapping;

@RequestMapping
public class PongService {


    @RequestMapping
    public String ping() {
        return "pong";
    }

}
The @RequestMapping annotation defines the service as one that responds to an HTTP call. If you do not specify the path, then the lowercase name of the class and the lowercase name of the method become the path. Thus PongService.ping() becomes /pongservice/ping. To bind this service to a port, we use a service server. A service server is a server that hosts services like our pong service.
Change the Main class to use the ServiceServer as follows:

Using main class to bind in service to a port

package io.advantageous.examples;

import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class Main {

    public static void main(String... args) {
        final ServiceServer serviceServer = ServiceServerBuilder
                .serviceServerBuilder()
                .setPort(9090).build();
        serviceServer.initServices(new PongService());
        serviceServer.startServer();
    }
}
Notice we pass an instance of PongService to the initServices method of the service server. If we want to change the root address from "services" to something else, we could do this:

Changing the root URI of the service server

final ServiceServer serviceServer = ServiceServerBuilder
        .serviceServerBuilder()
        .setUri("/main")
        .setPort(9090).build();
serviceServer.initServices(new PongService());
serviceServer.startServer();
Now we can call this using curl as follows:

Use the curl command to invoke service via /main/pongservice/ping

$ curl http://localhost:9090/main/pongservice/ping
"pong"
QBit uses builders to make it easy to integrate QBit with frameworks like Spring or Guice or to just use standalone.

Adding a service that takes request params

Taking request parameters

package io.advantageous.examples;


import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {

    @RequestMapping("/add")
    public int add(@RequestParam("a") int a,
                   @RequestParam("b") int b) {

        return a + b;
    }

}
Notice the above uses @RequestParam, which allows you to pull request params in as arguments to the method. If we pass a URL like http://localhost:9090/main/my/service/add?a=1&b=2, QBit will use 1 for argument a and 2 for argument b.

Adding the new service to Main

package io.advantageous.examples;

import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class Main {

    public static void main(String... args) {
        final ServiceServer serviceServer = ServiceServerBuilder
                .serviceServerBuilder()
                .setUri("/main")
                .setPort(9090).build();
        serviceServer.initServices(
                new PongService(),
                new SimpleService());
        serviceServer.startServer();
    }
}
When we load this URL:
http://localhost:9090/main/my/service/add?a=1&b=2
We get this response.
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

3

Working with URI params

Many think that URLs without request parameters are more search engine friendly, or that listing things under a context makes them more RESTful. This is open to debate. I don't care about the debate, but here is an example of using URI params.

Working with URI params example

package io.advantageous.examples;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {
    ...
    ...
    @RequestMapping("/add2/{a}/{b}")
    public int add2(@PathVariable("a") int a,
                    @PathVariable("b") int b) {

        return a + b;
    }

}
Now we can pass arguments that are part of the URL path. We do this by using the @PathVariable annotation. Thus in the following URL:
http://localhost:9090/main/my/service/add2/1/4
the 1 corresponds to the "a" argument of the method and the 4 corresponds to the "b" argument.
We would get the following response when we load this URL.

Working with URI params output

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

5
You can mix and match URI params and request params.

Working with URI params and request params example

package io.advantageous.examples;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {
    ...
    ...
    @RequestMapping("/add3/{a}/")
    public int add3(@PathVariable("a") int a,
                    @RequestParam("b") int b) {

        return a + b;
    }

}
This allows us to mix URI params and request params as follows:
http://localhost:9090/main/my/service/add3/1?b=8
Now we get this response:

Working with URI params and request params example

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1

9

Fuller RESTful example

Let's create a simple Employee / Department listing application.

RESTful Employee and Department listing

package io.advantageous.examples.employees;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;
import java.util.function.Predicate;

@RequestMapping("/dir")
public class EmployeeDirectoryService {


    private final List<Department> departmentList = new ArrayList<>();


    @RequestMapping("/employee/{employeeId}/")
    public Employee listEmployee(@PathVariable("employeeId") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Find the employee in the department. */
        if (departmentOptional.isPresent()) {
            return departmentOptional.get().employeeList()
                    .stream().filter(employee -> employee.getId() == employeeId)
                    .findFirst().get();
        } else {
            return null;
        }
    }


    @RequestMapping("/department/{departmentId}/")
    public Department listDepartment(@PathVariable("departmentId") final long departmentId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        if (departmentOptional.isPresent()) {
            return departmentOptional.get();
        } else {
            return null;
        }
    }


    @RequestMapping(value = "/department/", method = RequestMethod.POST)
    public boolean addDepartment(@RequestParam("departmentId") final long departmentId,
                                 @RequestParam("name") final String name) {
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department " + departmentId + " already exists");
        }
        departmentList.add(new Department(departmentId, name));
        return true;
    }


    @RequestMapping(value = "/department/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@RequestParam("departmentId") final long departmentId,
                               final Employee employee) {

        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (!departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department not found");
        }

        final boolean alreadyExists = departmentOptional.get().employeeList().stream()
                .anyMatch(employeeItem -> employeeItem.getId() == employee.getId());

        if (alreadyExists) {
            throw new IllegalArgumentException("Employee with id already exists " + employee.getId());
        }
        departmentOptional.get().addEmployee(employee);
        return true;
    }

}
To add three departments Engineering, HR and Sales:

Add Engineering, HR and Sales department with REST

$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=1&name=Engineering"
$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=2&name=HR"
$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=3&name=Sales"
Now let's add some employees into those departments.

Add some employees to the departments

curl -H "Content-Type: application/json" -X POST \
    -d '{"firstName":"Rick","lastName":"Hightower", "id": 1}' \
    "http://localhost:9090/main/dir/department/employee/?departmentId=1"

curl -H "Content-Type: application/json" -X POST \
    -d '{"firstName":"Diana","lastName":"Hightower", "id": 2}' \
    "http://localhost:9090/main/dir/department/employee/?departmentId=2"

curl -H "Content-Type: application/json" -X POST \
    -d '{"firstName":"Maya","lastName":"Hightower", "id": 3}' \
    "http://localhost:9090/main/dir/department/employee/?departmentId=3"

curl -H "Content-Type: application/json" -X POST \
    -d '{"firstName":"Paul","lastName":"Hightower", "id": 4}' \
    "http://localhost:9090/main/dir/department/employee/?departmentId=3"
Now let's list the employees. We can get the employees with the following curl.

Listing employees with curl

$ curl http://localhost:9090/main/dir/employee/1
{"firstName":"Rick","lastName":"Hightower","id":1}

$ curl http://localhost:9090/main/dir/employee/2
{"firstName":"Diana","lastName":"Hightower","id":2}

$ curl http://localhost:9090/main/dir/employee/3
{"firstName":"Maya","lastName":"Hightower","id":3}

$ curl http://localhost:9090/main/dir/employee/4
{"firstName":"Paul","lastName":"Hightower","id":4}
Now we can list departments with our RESTful API:

Listing departments with curl

$ curl http://localhost:9090/main/dir/department/1
{"id":1,"employees":[{"firstName":"Rick","lastName":"Hightower","id":1}]}

$ curl http://localhost:9090/main/dir/department/2
{"id":2,"employees":[{"firstName":"Diana","lastName":"Hightower","id":2}]}

$ curl http://localhost:9090/main/dir/department/3
{
"id": 3,
"employees": [
{
"firstName": "Maya",
"lastName": "Hightower",
"id": 3
},
{
"firstName": "Paul",
"lastName": "Hightower",
"id": 4
}
]
}
Some feel that this is a search-engine-friendly RESTful interface, although it is not. A true RESTful interface would have hyperlinks, but let's leave that for another discussion lest we bring on a debate akin to vi vs. emacs or Scala vs. Groovy.

Some theory

Generally speaking, people prefer the following (subject to much debate; see the curl sketch after this list):
  • POST to add to a resource
  • PUT to update a resource
  • GET to read a resource
  • DELETE to remove an item from a list
  • End a group in the singular form with a slash as in department/
  • Use the name or id in the URI path to address a resource department/1/ would address the department with id 1.
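Mapped onto the department resource above, the conventions our service actually implements look roughly like this (illustrative URLs only; the service does not implement PUT):

$ curl -X POST "http://localhost:9090/main/dir/department/?departmentId=4&name=Marketing"    # add a department
$ curl http://localhost:9090/main/dir/department/4/                                          # read a department
$ curl -X DELETE "http://localhost:9090/main/dir/department?id=4"                            # remove a department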

Adding DELETE verb

You can use any of the HTTP verbs. Typically as mentioned before you use the DELETE verb to delete a resource as follows:

Mapping methods to DELETE verb

    @RequestMapping(value = "/employee", method = RequestMethod.DELETE)
    public boolean removeEmployee(@RequestParam("id") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }


    @RequestMapping(value = "/department", method = RequestMethod.DELETE)
    public boolean removeDepartment(@RequestParam("id") final long departmentId) {

        return departmentList.removeIf(department -> departmentId == department.getId());
    }
Now let's delete somebody.

Using DELETE from command line

curl -H "Content-Type: application/json" -X DELETE \
    http://localhost:9090/main/dir/employee?id=3

Mixing and matching request params and path variables

You can mix and match @PathVariable and @RequestParam, which is a quite common case and one that QBit just started supporting in the last release.

Mixing and matching request params and path variables example

    @RequestMapping(value = "/department/{departmentId}/employee", method = RequestMethod.DELETE)
    public boolean removeEmployeeFromDepartment(
            @PathVariable("departmentId") final long departmentId,
            @RequestParam("id") final long employeeId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }
In this very common case, you can use the path variable to address the department resource and then ask for a specific employee to be deleted only from that department.

Using curl to delete an employee from a specific department

curl -H "Content-Type: application/json" -X DELETE  \
http://localhost:9090/main/dir/department/3/employee?id=4

Full examples:

Listing

$ tree
.
├── addDepartments.sh
├── addEmployees.sh
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── removeEmployee.sh
├── restful-qbit.iml
├── settings.gradle
├── showEmployees.sh
└── src
    ├── main
    │   └── java
    │       └── io
    │           └── advantageous
    │               └── examples
    │                   ├── Main.java
    │                   ├── PongService.java
    │                   ├── SimpleService.java
    │                   └── employees
    │                       ├── Department.java
    │                       ├── Employee.java
    │                       └── EmployeeDirectoryService.java
    └── test
        └── java
            └── io
                └── advantageous
                    └── examples
                        └── employees
                            └── EmployeeDirectoryServiceTest.java

Department.java

package io.advantageous.examples.employees;

import java.util.ArrayList;
import java.util.List;

public class Department {

    private String name;
    private final long id;
    private final List<Employee> employees = new ArrayList<>();

    public Department(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public void addEmployee(final Employee employee) {
        employees.add(employee);
    }


    public boolean removeEmployee(final long id) {
        return employees.removeIf(employee -> employee.getId() == id);
    }

    public List<Employee> employeeList() {
        return employees;
    }


    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Employee.java

package io.advantageous.examples.employees;

public class Employee {

    private String firstName;
    private String lastName;
    private final long id;

    public Employee(String firstName, String lastName, long id) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public long getId() {
        return id;
    }
}

EmployeeDirectoryService.java

package io.advantageous.examples.employees;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;
import java.util.function.Predicate;

@RequestMapping("/dir")
public class EmployeeDirectoryService {


    private final List<Department> departmentList = new ArrayList<>();


    @RequestMapping("/employee/{employeeId}/")
    public Employee listEmployee(@PathVariable("employeeId") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Find the employee in the department. */
        if (departmentOptional.isPresent()) {
            return departmentOptional.get().employeeList()
                    .stream().filter(employee -> employee.getId() == employeeId)
                    .findFirst().get();
        } else {
            return null;
        }
    }


    @RequestMapping("/department/{departmentId}/")
    public Department listDepartment(@PathVariable("departmentId") final long departmentId) {

        return departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst().get();
    }


    @RequestMapping(value = "/department/", method = RequestMethod.POST)
    public boolean addDepartment(@RequestParam("departmentId") final long departmentId,
                                 @RequestParam("name") final String name) {
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department " + departmentId + " already exists");
        }
        departmentList.add(new Department(departmentId, name));
        return true;
    }


    @RequestMapping(value = "/department/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@RequestParam("departmentId") final long departmentId,
                               final Employee employee) {

        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findAny();
        if (!departmentOptional.isPresent()) {
            throw new IllegalArgumentException("Department not found");
        }

        final boolean alreadyExists = departmentOptional.get().employeeList().stream()
                .anyMatch(employeeItem -> employeeItem.getId() == employee.getId());

        if (alreadyExists) {
            throw new IllegalArgumentException("Employee with id already exists " + employee.getId());
        }
        departmentOptional.get().addEmployee(employee);
        return true;
    }


    @RequestMapping(value = "/employee", method = RequestMethod.DELETE)
    public boolean removeEmployee(@RequestParam("id") final long employeeId) {

        /* Find the department that has the employee. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.employeeList().stream()
                        .anyMatch(employee -> employee.getId() == employeeId)).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }


    @RequestMapping(value = "/department", method = RequestMethod.DELETE)
    public boolean removeDepartment(@RequestParam("id") final long departmentId) {

        return departmentList.removeIf(department -> departmentId == department.getId());
    }


    @RequestMapping(value = "/department/{departmentId}/employee", method = RequestMethod.DELETE)
    public boolean removeEmployeeFromDepartment(
            @PathVariable("departmentId") final long departmentId,
            @RequestParam("id") final long employeeId) {

        /* Find the department by id. */
        final Optional<Department> departmentOptional = departmentList.stream()
                .filter(department -> department.getId() == departmentId).findFirst();

        /* Remove the employee from the department. */
        if (departmentOptional.isPresent()) {
            departmentOptional.get().removeEmployee(employeeId);
            return true;
        } else {
            return false;
        }
    }

}

PongService.java

package io.advantageous.examples;


import io.advantageous.qbit.annotation.RequestMapping;

@RequestMapping
public class PongService {


    @RequestMapping
    public String ping() {
        return "pong";
    }

}

SimpleService.java

package io.advantageous.examples;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;

@RequestMapping("/my/service")
public class SimpleService {

    @RequestMapping("/add")
    public int add(@RequestParam("a") int a,
                   @RequestParam("b") int b) {

        return a + b;
    }

    @RequestMapping("/add2/{a}/{b}")
    public int add2(@PathVariable("a") int a,
                    @PathVariable("b") int b) {

        return a + b;
    }

    @RequestMapping("/add3/{a}/")
    public int add3(@PathVariable("a") int a,
                    @RequestParam("b") int b) {

        return a + b;
    }

}

Main.java

package io.advantageous.examples;

import io.advantageous.examples.employees.EmployeeDirectoryService;
import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class Main {

    public static void main(String... args) {
        final ServiceServer serviceServer = ServiceServerBuilder
                .serviceServerBuilder()
                .setUri("/main")
                .setPort(9090).build();
        serviceServer.initServices(
                new PongService(),
                new SimpleService(),
                new EmployeeDirectoryService());
        serviceServer.startServer();
    }
}

Microservices Architecture: How much was it influenced by Mobile applications?

Many of the articles and talks about microservices architecture leave out an important concept. They mention the influencers of microservices, but leave out one of the major influencers of microservices architecture: the mobile web and native mobile applications. ...





....
Frameworks like Akka, QBit, Vert.x, Ratpack, Node.js, and Restlets are more streamlined to support the communication/service backends of mobile application development. This is more of a reactive microservices architecture approach to development, focused on WebSocket and HTTP calls with JSON, and not on the classic three-tiered web application, now named a monolith by the microservices architecture crowd (a term I do not like).
The main influencers of reactive microservices architecture are mobile applications (native and mobile web), richer web applications, cloud computing, the NoSQL movement, continuous X (integration and deployment), and the backlash to traditional eat-the-world SOA.
...

Read more about high-speed microservices, Java microservices architecture and reactive microservices from a series of articles that we wrote.

Related links:
Read the full article at: Mobile Microservices

Microservices Architecture: VMware Releases Photon – a Cloud Native OS for microservices


Microservices Architecture: VMware Releases Photon – a Cloud Native OS for microservices


VMware now has its own Linux distribution, 'Project Photon', as part of its microservices effort, which it calls "Cloud Native Application".


Microservices: Cloud Native Application

"The idea is that rather than rely on a monolithic application to do everything, one can instead create lightweight components that handle one part of the process previously baked into a single application." --The Register.
Now each component can be updated more often using a DevOps-driven release train instead of a larger, more rigid release train. Docker has now become the poster boy for how you create microservices, although one does not need containerization to build microservices. You typically do need some sort of cloud/virtualization 2.0.
Containers like Docker use para-virtualization, which is more like chroot than a fully virtualized OS. This means that they can run closer to the actual hardware, and there are fewer levels of indirection between a containerized OS and a fully virtualized one. Docker instances inherit settings from the core OS like allowed file handle limits, network configuration, etc. A Docker instance is more like a process that looks like an OS than a full VM instance.



Microservices Architecture: VMware Releases Photon – a Cloud Native OS for microservices.


Read more about high-speed microservices, Java microservices architecture and reactive microservices from a series of articles that we wrote.

Related links:

Microservices Runtime Statistics and Metrics


Reactive Microservices Architecture and Runtime Statistics & Metrics

Runtime statistics and metrics are important for distributed systems. Since microservices architectures tend to promote and encourage remote process communication, they are inherently distributed systems. Runtime statistics and metrics can include requests per second, available system memory, the number of threads in use, open connections, failed authentications, expired tokens, and their ilk. If there is a parameter that is important to you, then you will want to track it. Given the complications of debugging a distributed system, you will find that runtime statistics for important parameters are a godsend.


Microservices Architecture Statistics
This is even more the case if you’re dealing with a lot of message queues. It can be difficult to determine where a message stopped being processed, and runtime statistics can help you track down issues.

Runtime statistics and metrics can also be a data stream into your big data systems. Understanding types of requests and their counts, and being able to correlate those with time of day and events, can aid in understanding how people use your application. In the age of big data, data science, and microservices, one may conclude that runtime statistics are no longer an optional feature, but a required feature for application development in an increasingly mobile and cloud world.

Just like logging became a must-have for applications, so have runtime statistics. Runtime statistics can be important for tracking errors, and when a certain threshold of errors occurs, a circuit breaker can be thrown open.

Remote calls and message buses can fail, or hang without a response until a timeout is reached. In the event of a system that is down, a multitude of timeouts can cause a cascading failure. The Circuit Breaker pattern can be used to prevent a catastrophic cascade. Runtime statistics can be used to track errors and trigger circuit breakers to open. You would want to use runtime statistics and circuit breakers with service discovery so that you can mark nodes as unhealthy.
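To make the circuit breaker idea concrete, here is a minimal sketch of a counting breaker in plain Java. This is not QBit's API; the class name and thresholds are hypothetical, and a production breaker would also bound trial calls in the half-open state.

import java.util.concurrent.atomic.AtomicInteger;

/** Minimal circuit breaker sketch: trips open after too many errors. */
public class SimpleCircuitBreaker {

    private final int errorThreshold;            // errors allowed before tripping
    private final long retryAfterMillis;         // how long to stay open
    private final AtomicInteger errorCount = new AtomicInteger();
    private volatile long openedAt = -1;         // -1 means the circuit is closed

    public SimpleCircuitBreaker(int errorThreshold, long retryAfterMillis) {
        this.errorThreshold = errorThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    /** True if calls may proceed; after the retry window, lets a trial call through. */
    public boolean allowRequest() {
        if (openedAt == -1) {
            return true;
        }
        if (System.currentTimeMillis() - openedAt > retryAfterMillis) {
            openedAt = -1;                       // half-open: allow a trial request
            errorCount.set(0);
            return true;
        }
        return false;
    }

    /** Report a successful remote call; resets the error count. */
    public void recordSuccess() {
        errorCount.set(0);
    }

    /** Report a failed remote call; trips the breaker at the threshold. */
    public void recordError() {
        if (errorCount.incrementAndGet() >= errorThreshold) {
            openedAt = System.currentTimeMillis();
        }
    }
}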

You can use runtime statistics to do application-specific things like rate limiting a partner's application ID so that they do not consume your resources outside of the bounds of their service agreements. Once you make microservices publicly available, you have to monitor and rate limit to collaborate effectively with partners. If you have ever used a public REST API, you are well aware of rate limiting, which may do things like limit the number of connections you're allowed to make and/or limit the number of certain requests that you are allowed to make in a given time period.
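As a rough illustration of that rate limiting idea, here is a fixed-window limiter keyed by application ID in plain Java. The class is hypothetical (not a QBit or vendor API), and real gateways typically use token buckets backed by shared state such as Redis.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Fixed-window rate limiter keyed by partner application ID. */
public class AppIdRateLimiter {

    private final int maxRequestsPerWindow;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public AppIdRateLimiter(int maxRequestsPerWindow, long windowMillis) {
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.windowMillis = windowMillis;
    }

    /** Returns true if this app ID is still within its window's budget. */
    public boolean tryAcquire(final String appId) {
        final long now = System.currentTimeMillis();
        final Window window = windows.compute(appId, (id, current) ->
                (current == null || now - current.start > windowMillis)
                        ? new Window(now) : current);   // rotate expired windows
        return window.count.incrementAndGet() <= maxRequestsPerWindow;
    }

    private static final class Window {
        final long start;
        final AtomicInteger count = new AtomicInteger();
        Window(long start) { this.start = start; }
    }
}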
If you believe in the concepts of the reactive manifesto, then you will want to gather the runtime statistics that allow you to write reactive microservices.


QBit StatsService

QBit is a reactive microservices library that comes with a runtime statistics engine. QBit services are exposed via WebSocket RPC using JSON and REST. The statistics engine is easy to query and use. The QBit stats engine can be integrated with StatsD for display and consumption of stats. There are tools and vendors who deal directly with StatsD feeds. You can also query QBit stats and use them to implement features like rate limiting, or spinning up new nodes when you detect things are getting overloaded.



StatsD the standard stats engine

StatsD is a network daemon for aggregating statistics, such as counters and timers, and shipping them over UDP to backend services, such as Graphite or Datadog. In less than 5 years since it was first introduced, StatsD has become an important tool to aid in debugging and monitoring microservices. If you are doing DevOps, then you are likely using StatsD.

StatsD was a simple daemon developed and released by Etsy. StatsD is used to aggregate and summarize application metrics. StatsD has a plethora of clients for various programming languages (Ruby, Python, Java, Erlang, Node, Scala, Go, Haskell, etc.). The StatsD daemon collects stats from these clients using a published wire protocol. StatsD is so popular that its protocol has become a universal protocol for application metrics collection. The Etsy StatsD daemon is the reference implementation, but there are other implementations like Go Stats Daemon and many more.

StatsD captures different types of metrics: Gauges, Counters, Timing Summary Statistics, and Sets. You can decorate your code to capture this type of data and report it.

A StatsD daemon listens to UDP traffic from StatsD clients. StatsD collects runtime statistics data over time and does periodic “flushes” of the data to analysis and monitoring engines you choose.
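Because the wire protocol is plain text over UDP, a client can be tiny. The sketch below sends one counter increment to a local StatsD daemon on the default port 8125 using only the JDK; the metric name is made up.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Sends a single StatsD counter increment over UDP. */
public class StatsDExample {

    public static void main(String... args) throws Exception {
        // StatsD wire format: <metric.name>:<value>|<type>
        // "c" = counter; gauges use "g", timers "ms", sets "s".
        final byte[] payload = "myapp.requests:1|c".getBytes(StandardCharsets.UTF_8);

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("localhost"), 8125)); // default StatsD port
        }
    }
}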
Tools can turn your runtime statistics and metrics into actionable charts and alerts. Tools like Graphite are often used to visualize the state of microservices. Graphite is made up of Graphite-Web, which renders graphs and dashboards; Carbon, the metric processing daemons; and Whisper, a time-series database library.

There are other alternatives that QBit can integrate with as well like Coda Hale’s Metrics library which uses a Go Daemon.

StatsD seems to be the current champion of mind space, mainly due to its simplicity and fire-and-forget protocol. StatsD can't cause a cascading failure, and its client libs are very small.



Datadog and StatsD

Datadog allows importing StatsD data for graphing, alerting, and event correlation. They embedded the StatsD daemon within the Datadog Agent, so it is a drop-in replacement. Datadog added tagging to StatsD, which allows attaching information to the metrics like application version, event correlation, and more. Datadog is a monitoring service for IT, Operations, Development and DevOps. It attempts to take input from many vendors, cloud providers, open source tools, and servers, and aggregate their data into actionable metrics.



StatsD and Kibana

Kibana is a flexible analytics and visualization platform for Elasticsearch. It provides real-time summary and charting of streaming data from a variety of sources, including Logstash. Kibana has an intuitive interface which allows you to configure dashboards. Kibana can be used to graph data from Logstash, which uses Elasticsearch. Logstash has a plugin for StatsD. Kibana allows you to visualize streams of data in Elasticsearch from Logstash, es-hadoop, or 3rd party technologies like Apache Flume, Fluentd, and many others.



StatsD and SOLR and Banana

LucidWorks ported Kibana (the port is called Banana) and Logstash to work with SOLR, so if you are a SOLR shop, you have that as an option.

Conclusion

Runtime statistics and metrics are a very important component of microservices architecture. They help you debug, understand, and react to events in your application. They help you build circuit breakers. Make sure that runtime statistics are not treated like an afterthought in your microservices lib, but rather as part of the core. Tools like StatsD and Coda Hale Metrics allow you to gather metrics in a standard way. Tools like Graphite, Kibana, DataDog and Banana help you understand the data and build dashboards. QBit, the Java Microservices Library, includes a queryable stats service which can feed into StatsD/Coda Hale Metrics or can be used to implement features like rate limiting.

Read more here:

User Experience and Microservices Monitoring



User Experience and Microservices Monitoring
With microservices, which are released more often, you can try new features and see how they impact user usage patterns. With this feedback, you can improve your application. It is not uncommon to employ A/B testing and multi-variant testing to try out new combinations of features. Monitoring is more than just watching for failure. With big data, data science, and microservices, monitoring microservices runtime stats is required to know your application's users. You want to know what your users like and dislike, and react.

Read more at Microservices Monitoring.

Debugging and Microservices Monitoring



Debugging and Microservices Monitoring
 
Runtime statistics and metrics are critical for distributed systems, since microservices architectures use a lot of remote calls. Monitoring microservices metrics can include requests per second, available memory, number of threads, number of connections, failed authentications, expired tokens, etc. These parameters are important for understanding and debugging your code. Working with distributed systems is hard. Working with distributed systems without reactive monitoring is crazy. Reactive monitoring allows you to react to failure conditions and ramp up services for higher loads.

Read more at Microservices Monitoring.

Circuit Breaker and Microservices Monitoring


Circuit Breaker and Microservices Monitoring
 
You can employ the Circuit Breaker pattern to prevent a catastrophic cascade, and reactive microservices monitoring can be the trigger. Downstream services can be registered in a service discovery system so that you can mark nodes as unhealthy, as well as react by rerouting in the case of outages. The reaction can be serving up a deprecated version of the data or service, but the key is to avoid cascading failure. You don't want your services falling over like dominoes.

Cloud Orchestration and Microservices Monitoring


Cloud Orchestration and Microservices Monitoring
 
Reactive microservices monitoring would enable you to detect heavy load and spin up new instances with the cloud orchestration platform of your choice (EC2, CloudStack, OpenStack, Rackspace, boto, etc.).

...
This allows you to write code that reacts to microservices metrics. QBit stats can be used to implement features like rate limiting, or spinning up new nodes when you detect things are getting overloaded. QBit can also feed stats into StatsD.

StatsD and Microservices Monitoring


StatsD is a network daemon for aggregating statistics, such as counters and timers, and shipping them over UDP to backend services, such as Graphite or Datadog. StatsD has many small client libs for Java, Python, Ruby, Node, etc. The StatsD server collects stats from clients using a published wire protocol. StatsD is the de facto standard. Although the Etsy StatsD server is the reference implementation (the first implementation was written in Perl), there are other implementations like Go Stats Daemon, Datadog, and many more. StatsD captures different metrics: Gauges, Counters, Timing Summary Statistics, and Sets. You decorate your code to capture this type of data and report it. Although StatsD collects runtime statistics data over time and does periodic "flushes" of the data to analysis and monitoring engines you choose, StatsD was originally written with Graphite in mind. Graphite is used to visualize the state of microservices. Graphite is made up of Graphite-Web (graph and dashboard rendering), Carbon (metric processing daemons), and Whisper (a time-series database library).

StatsD seems to be the current champion of mind space, mainly due to its simplicity and fire-and-forget protocol. StatsD can't cause a cascading failure, and its client libs are very small. There are other alternatives that QBit can integrate with as well, like Coda Hale's Metrics library, which uses a Go daemon.

StatsD can also dump its feed to Kibana or Banana via a Logstash plugin. You can use Kibana and Banana in place of Graphite. There is even commercial support for StatsD via Datadog, which allows monitoring, graphing, alerting, and event correlation. Datadog embedded the StatsD daemon within the Datadog Agent, so it is a drop-in replacement for StatsD. Datadog is a monitoring service for IT, Operations, Development and DevOps. It attempts to take input from many vendors, cloud providers, open source tools, and servers, and aggregate their data into reactive, actionable metrics.

Reactive Microservices Monitoring

Reactive Microservices Monitoring

Reactive microservices monitoring is an essential ingredient of microservices architecture. You need it for debugging, knowing your users, working with partners, and building reactive systems that react to load and failures without cascading outages. Reactive microservices monitoring cannot be a hindsight decision. Build your microservices with monitoring in mind from the start. Make sure that the microservices lib that you use has monitoring of runtime statistics built in from the start. Make sure that it is a core part of the microservices library. StatsD and Coda Hale Metrics allow you to gather metrics in a standard way. Tools like Graphite, Kibana, DataDog and Banana help you understand the data and build dashboards. QBit, the Java Microservices Library, includes a queryable stats service which feeds into StatsD/Coda Hale Metrics. QBit can also be used to create reactive features to do rate limiting or spin up new nodes. With big data, data science, and microservices, monitoring microservices runtime stats is required to know your application users, know your partners, know what your system will do under load, etc.


Using Docker, Gradle to create Java docker distributions for java microservices draft 4


Using Docker, Gradle to create Java docker distributions for java microservices draft 4

I have used Docker and Vagrant quite a lot to set up a series of servers. This is a real lifesaver when you are trying to do some integration tests and run into issues that would be hard to track down without running "actual servers". Running everything on one box is not the same as running many "servers". Even if your final deployment is VMware or EC2 or bare metal servers, Docker and Vagrant are great for integration testing and writing setup documentation.
I also tend to use gradle a lot these days and have grown quite fond of the application and distribution plugins. To me, the gradle application plugin and Docker (or Vagrant or EC2 with boto) are sort of an essential way of doing Java microservice development.
Before we get into Vagrant or Docker, let's try to do something very simple. Let's use the gradle plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).

Using Gradle and the Gradle Application plugin and Docker

Gradle can create a distribution zip or tar file, which is an archive file with the libs and shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can just install all of this stuff into a directory of your choice.
What I typically do is this….
  • Create a dist tar file using gradle.
  • Create a dockerfile.
The dockerfile copies the dist tar to the container, untars it, and then runs it inside of Docker. Once it is a dockerfile, you can make a docker container that you can ship around. The gradle build and the dockerfile hold all of the config info that is common.
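As a rough sketch, the dockerfile for this flow can be as small as the one below. The archive name and paths are hypothetical; ADD is used because Docker auto-extracts a local tar archive into the image.

FROM java:openjdk-8

# ADD auto-extracts a local tar archive into the image
ADD build/distributions/myapp-1.0.tar /

# Run the start script generated by the gradle application plugin
WORKDIR /myapp-1.0/bin
ENTRYPOINT ["./myapp"]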
You may even have special gradle build options for different environments. Or your app talks to Consul or etcd on startup and looks up the environment-specific stuff like server locations, so the docker binary dist can be identical. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.
Our binary deliverable is the runnable docker container, not a jar file or a zip.
The distZip and/or distTar task is just a way to package up our code and make it easy to shove into our docker container.
If you go the docker route, then the docker container is our binary (runnable) distribution, not the tar or zip. We do not have to guess which JVM, because we configure the docker container with exactly the JVM we want to use. We can install any drivers or daemons or utilities that we might need from the Linux world into our container.
Think of it this way. With maven and/or gradle you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files, and not only the right MySQL jar file but the actual right version of the MySQL server, packaged in the same runnable binary (the Linux Docker container).
The gradle application plugin generates a zip or tar file with everything we need and does not require a master Java process, or another repo cache of jars, etc. Between the gradle application plugin and Docker, we do whatever we need to do with our binary configuration, but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of your tests includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since a container is just chroot-like, not a full virtual machine).
I think Docker, gradle, and the gradle application plugin are your best option for creating fast integration tests. But of course if you have EC2/boto, Vagrant, etc., Docker is not the only option.

Gradle application plugin

Our first goal is to do the following. Use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started let's do some prework.
$ sudo mkdir /etc/myapp
$ sudo chown rhightower /etc/myapp
Do the same for /opt/myapp, where rhightower is your username. :)

The Java app

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }

}
It is a simple Java app. It looks at a configuration file that has the port. The location of the configuration file is passed via a system property. If the system property is null, then it loads the config file from the current working directory.
When you run this program from an IDE, you will get:
The port is 8080
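If you want to point the app at a specific config file when running it by hand, set the system property yourself. The classpath below is illustrative and depends on where your build puts the compiled classes.

$ java -Dmyapp.config.file=/path/to/conf.properties \
       -cp build/classes/main com.example.Main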
But we want the ability to create an /etc/myapp/conf.properties file and an /opt/myapp install dir. To do this we will use the application plugin.

Creating an install directory with the application plugin

To create /etc/myapp/conf.properties and an /opt/myapp install dir, we will use the gradle application plugin.

gradle application plugin

apply plugin: 'java'
apply plugin: 'application'

mainClassName = 'com.example.Main'
applicationName = 'myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into '/opt/myapp'
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into "/etc/myapp/"
}


dependencies {
}
Running the copyDist task will also run installApp, which is provided by the application plugin configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.

Build dir layout of the myapp gradle project

.
├── build.gradle
├── conf
│   └── conf.properties
├── settings.gradle
└── src
    └── main
        └── java
            └── com
                └── example
                    └── Main.java

conf/conf.properties

port=8080
To build and deploy the project into /opt/myapp, we do the following:

Building and installing our app

$ gradle build copyDist
This creates this directory structure for the install operation.

Our app install

$ tree /opt/myapp/
/opt/myapp/
├── bin
│   ├── myapp
│   └── myapp.bat
└── lib
    └── gradle-app.jar

To deploy a sample config we do this:

Copy sample config

$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.

Edit file and change property

$ nano /etc/myapp/conf.properties 
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
Change the properties file again. Run the app again.

Next up

Configuring logging under /etc/myapp/logging.xml.

Logging

SLF4J is the standard logging facade for Java. Logback is the successor to Log4j. The nice thing about SLF4J is you can use the built-in logging, Log4j, or Logback underneath it. For now, we are recommending Logback.
We are going to use Logback. Technically we are going to use SLF4J, and we are going to use the Logback implementation of it.
Logback allows you to set the location of the log configuration via a System property called logback.configurationFile.
Example setting logback via a System property
java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file.
  • logback-core-1.1.3.jar
  • logback-classic-1.1.3.jar
  • slf4j-api-1.7.12.jar

Adding dependencies to gradle file

dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with gradle needs to pass the location to our application. We do that with the applicationDefaultJvmArgs in the gradle build.

Adding logback.configurationFile System property to launcher script

applicationDefaultJvmArgs = [
"-Dmyapp.config.file=/etc/myapp/conf.properties",
"-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can store a logging config in our project so it gets stored in git.

./conf/logging.xml log config

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/opt/logging/logs</file>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </encoder>

        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
            <MinIndex>1</MinIndex>
            <MaxIndex>10</MaxIndex>
        </rollingPolicy>

        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>2MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <logger name="com.example.Main" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </logger>

    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
Then we can add some tasks in our build script to copy it to the right location.

Tasks to copy the logging config into the correct location for install

task copyLogConf(type: Copy) {
from "conf/logging.xml"
into "/etc/myapp/"
}

task copyAllConf() {
dependsOn "copyConf", "copyLogConf"
}

To deploy our logging config, run:
$ gradle copyAllConf
Now after you install the logging config, you can turn it on or off.
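For instance, to silence console logging you could raise the root level in /etc/myapp/logging.xml and restart the app. A minimal illustrative edit (OFF and DEBUG are standard Logback levels):

<root level="OFF">
    <appender-ref ref="STDOUT" />
</root>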
Let's change our main method to use the logging configuration.

Main method that uses SLF4J/Logback to do logging.

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;


import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        /* Use SLF4J parameterized logging instead of String.format. */
        logger.debug("The port is {}", properties.getProperty("port"));
    }

}

Next up after that

Configuring dockerfile

Raw Notes

allprojects {

group = 'mycompany.router'
apply plugin: 'idea'
apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'application'
version = '0.1-SNAPSHOT'

}


subprojects {


repositories {
mavenLocal()
mavenCentral()
}

sourceSets.main.resources.srcDir 'src/main/java'
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8

dependencies {
compile "io.fastjson:boon:$boonVersion"

testCompile "junit:junit:4.11"
testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
}

task buildDockerfile (type: Dockerfile) {
dependsOn distTar
from "java:openjdk-8"
add "$distTar.archivePath", "/"
workdir "/$distTar.archivePath.name" - ".$distTar.extension" + "/bin"
entrypoint "./$project.name"
if (project.dockerPort) {
expose project.dockerPort
}
if (project.jmxPort) {
expose project.jmxPort
}
}

task buildDockerImage (type: Exec) {
dependsOn buildDockerfile
commandLine "docker", "build", "-t", "mycompany/$project.name:$version", buildDockerfile.dockerDir
}


task pushDockerImage (type: Exec) {
dependsOn buildDockerfile
commandLine "docker", "push", "mycompany/$project.name"
}


task runDockerImage (type: Exec) {
dependsOn buildDockerImage
if (project.dockerPort) {
commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
} else {
commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
}
}


task runDocker (type: Exec) {
if (project.dockerPort) {
commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
} else {
commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
}
}

}


project(':sample-web-server') {

mainClassName = "mycompany.sample.web.WebServerApplication"

applicationDefaultJvmArgs = ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=${jmxPort}",
"-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false"]

dependencies {
compile "io.fastjson:boon:$boonVersion"

compile group: 'io.advantageous.qbit', name: 'qbit-boon', version: '0.5.2-SNAPSHOT'
compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'

testCompile "junit:junit:4.11"
testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
}

buildDockerfile {
add "$project.buildDir/resources/main/conf/sample-web-server-config.json", "/etc/sample-web-server/conf.json"
add "$project.buildDir/resources/main/conf/sample-web-server-config.ctmpl", "/etc/sample-web-server/conf.ctmpl"
add "$project.buildDir/resources/main/conf/sample-web-server-consul-template.cfg", "/etc/consul-template/conf/sample-web-server/sample-web-server-consul-template.cfg"
volume "/etc/consul-template/conf/sample-web-server"
volume "/etc/sample-web-server"
}

}


class Dockerfile extends DefaultTask {
def dockerfileInfo = ""
def dockerDir = "$project.buildDir/docker"
def dockerfileDestination = "$project.buildDir/docker/Dockerfile"
def filesToCopy = []

File getDockerfileDestination() {
project.file(dockerfileDestination)
}

def from(image="java") {
dockerfileInfo += "FROM $image\r\n"
}

def maintainer(contact) {
    dockerfileInfo += "MAINTAINER $contact\r\n"
}

def add(sourceLocation, targetLocation) {
filesToCopy << sourceLocation
def file = project.file(sourceLocation)
dockerfileInfo += "ADD $file.name ${targetLocation}\r\n"
}

def run(command) {
dockerfileInfo += "RUN $command\r\n"
}

def volume(path) {
dockerfileInfo += "VOLUME $path\r\n"
}

def env(var, value) {
dockerfileInfo += "ENV $var $value\r\n"
}

def expose(port) {
dockerfileInfo += "EXPOSE $port\r\n"
}

def workdir(dir) {
dockerfileInfo += "WORKDIR $dir\r\n"
}

def cmd(command) {
dockerfileInfo += "CMD $command\r\n"
}

def entrypoint(command) {
dockerfileInfo += "ENTRYPOINT $command\r\n"
}

@TaskAction
def writeDockerfile() {
for (fileName in filesToCopy) {
def source = project.file(fileName)
def target = project.file("$dockerDir/$source.name")
target.parentFile.mkdirs()
target.delete()
target << source.bytes
}
def file = getDockerfileDestination()
file.parentFile.mkdirs()
file.write dockerfileInfo
}
}

QBit: Intercepting method calls, grabbing the HTTP request, using BeforeMethodCall, AOP-like features with QBit

Recently someone asked me if you could capture the request parameters from a request with QBit REST support. You can.
QBit has this interface.

BeforeMethodCall

package io.advantageous.qbit.service;

import io.advantageous.qbit.message.MethodCall;

/**
 * Use this to register for before method calls for services.
 * <p>
 * created by Richard on 8/26/14.
 *
 * @author rhightower
 */
public interface BeforeMethodCall {

    boolean before(MethodCall call);
}
With this BeforeMethodCall interface you can intercept a method call. If you register it with a ServiceQueue via the ServiceBuilder, then the method interception happens on the same thread as the service queue method calls.
If you return false from the before method, then the call will not be made. You can also intercept calls at the ServiceBundle and ServiceEndpointServer levels using the EndpointServerBuilder and the ServiceBundleBuilder. When you register a BeforeMethodCall with a service bundle or an endpoint server, it gets called before the method is enqueued to the actual service queue. When you register a BeforeMethodCall with a service queue, it gets called right before the method gets invoked, in the same thread as the service queue, i.e., in the service thread, which is most useful for capturing the HttpRequest.
But let's say that you want to access the HttpRequest object to do something special with it. Perhaps read the request params.
This is possible. One merely has to intercept the call. Every Request object has a property called originatingRequest. A MethodCall is a Request object, as is an HttpRequest. This means that you just have to intercept the call with BeforeMethodCall, grab the MethodCall, and then use it to get the HttpRequest.

Service example

/**
 * Created by rhightower.
 */
@RequestMapping("/api")
public class PushService {

    private final ThreadLocal<HttpRequest> currentRequest;

    public PushService() {
        this.currentRequest = new ThreadLocal<>();
    }


    @RequestMapping(value = "/event", method = RequestMethod.POST)
    public void event(final Callback<Boolean> callback, final Event event) {

        final HttpRequest httpRequest = currentRequest.get();
        System.out.println(httpRequest.address());
        System.out.println(httpRequest.params().size());
        ...
    }
Now in the main method, we will need to construct the service and then register the service with the endpoint.
Notice the private final ThreadLocal<HttpRequest> currentRequest; because we will use that to store the current http request.

Register a ServiceQueue with an end point server

final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();

final PushService pushService = new PushService();


final ServiceEndpointServer serviceEndpointServer =
        managedServiceBuilder.getEndpointServerBuilder()
                .setUri("/")
                .build();

...


final ServiceQueue pushServiceQueue = ...
serviceEndpointServer.addServiceQueue("/api/pushservice", pushServiceQueue);
Notice when we create the service queue separately we have to register the address it is bound under.
When we create the service queue (pushServiceQueue) for pushService, we want to tell it to use the same response queue as our endpoint server and register the beforeCall lambda to capture the HttpRequest from the MethodCall.

Creating a lambda expression to populate the currentRequest from the originatingRequest of the MethodCall (call)

final ServiceQueue pushServiceQueue = managedServiceBuilder
        .createServiceBuilderForServiceObject(pushService)
        .setResponseQueue(serviceEndpointServer.serviceBundle().responses())
        .setBeforeMethodCall(call -> {

            pushService.currentRequest.set((HttpRequest) call.originatingRequest());
            return true;
        })
        .buildAndStart();
The full example is a bit longer as it has some other things not mentioned in this article.
public class Event {

    private final String name;

    public Event(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

}

....


import io.advantageous.qbit.annotation.*;
import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.http.request.HttpRequest;
import io.advantageous.qbit.reactive.Callback;
import io.advantageous.qbit.reactive.Reactor;
import io.advantageous.qbit.reactive.ReactorBuilder;
import io.advantageous.qbit.server.ServiceEndpointServer;
import io.advantageous.qbit.service.ServiceQueue;

import java.util.concurrent.TimeUnit;



/**
 * Created by rhightower.
 */
@RequestMapping("/api")
public class PushService {


    private final Reactor reactor;
    private final StoreServiceClient storeServiceClient;

    private final ThreadLocal<HttpRequest> currentRequest;

    public PushService(final Reactor reactor,
                       final StoreServiceClient storeServiceClient) {
        this.reactor = reactor;
        this.storeServiceClient = storeServiceClient;
        this.currentRequest = new ThreadLocal<>();
    }

    @RequestMapping("/hi")
    public String sayHi() {
        return "hi";
    }

    @RequestMapping(value = "/event", method = RequestMethod.POST)
    public void event(final Callback<Boolean> callback, final Event event) {

        final HttpRequest httpRequest = currentRequest.get();

        System.out.println(httpRequest.address());

        System.out.println(httpRequest.params().baseMap());
        storeServiceClient.addEvent(callback, event);

    }

    @QueueCallback({QueueCallbackType.LIMIT, QueueCallbackType.EMPTY, QueueCallbackType.IDLE})
    public void load() {

        reactor.process();
    }


    public static void main(String... args) {


        /* Using new snapshot 2. */
        final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();

        final StoreService storeService = new StoreService();


        final ServiceQueue serviceQueue = managedServiceBuilder.createServiceBuilderForServiceObject(storeService)
                .buildAndStartAll();

        final StoreServiceClient storeServiceClient = serviceQueue.createProxyWithAutoFlush(StoreServiceClient.class,
                100, TimeUnit.MILLISECONDS);


        final PushService pushService = new PushService(ReactorBuilder.reactorBuilder().build(),
                storeServiceClient);

        final ServiceEndpointServer serviceEndpointServer = managedServiceBuilder.getEndpointServerBuilder()
                .setUri("/")
                .build();

        final ServiceQueue pushServiceQueue = managedServiceBuilder
                .createServiceBuilderForServiceObject(pushService)
                .setResponseQueue(serviceEndpointServer.serviceBundle().responses())
                .setBeforeMethodCall(call -> {

                    pushService.currentRequest.set((HttpRequest) call.originatingRequest());
                    return true;
                })
                .buildAndStart();

        serviceEndpointServer.addServiceQueue("/api/pushservice", pushServiceQueue);


        serviceEndpointServer.startServer();

        /* Wait for the service to shutdown. */
        managedServiceBuilder.getSystemManager().waitForShutdown();

    }

}
...

public class StoreService {

    public boolean addEvent(final Event event) {

        return true;
    }

}
...

import io.advantageous.qbit.reactive.Callback;

public interface StoreServiceClient {

    void addEvent(final Callback<Boolean> callback, final Event event);
}
...
import io.advantageous.boon.json.JsonFactory;
import io.advantageous.qbit.http.HTTP;

import static io.advantageous.boon.core.IO.puts;

public class TestMain {

    public static void main(final String... args) throws Exception {


        HTTP.Response hello = HTTP.jsonRestCallViaPOST("http://localhost:9090/api/event", JsonFactory.toJson(new Event("hello")));

        puts(hello.body(), hello.status());
    }
}

Calling Cassandra async from QBit using the Reactor, CallbackBuilder, and Callbacks

Cassandra offers an async API as does QBit. Cassandra uses Google Guava. QBit uses QBit. :)
How do you combine them so you do not have to create a worker pool in QBit to make async calls to Cassandra?
Let's say you have a Cassandra service like so...

Example Cassandra service

import com.datastax.driver.core.*;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import io.advantageous.qbit.annotation.*;

import java.util.Map.Entry;
import java.util.concurrent.atomic.AtomicBoolean;

import io.advantageous.qbit.reactive.Callback;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.datastax.driver.core.exceptions.QueryExecutionException;
import com.datastax.driver.core.exceptions.QueryValidationException;
...

public class CassandraService {


    private final Logger logger = LoggerFactory.getLogger(CassandraService.class);
    private final CassandraCluster cluster;
    private final CassandraConfig config;
    private final Session session; // only one per keyspace
    private final AtomicBoolean isConnected = new AtomicBoolean(false);

    /**
     * Configure the client to connect to the cluster.
     * @param config config
     */
    public CassandraService(final CassandraConfig config) {

        ...
    }




    public void executeAsync(final Callback<ResultSet> callback, final Statement stmt) {
        final ResultSetFuture future = this.session.executeAsync(stmt);

        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override
            public void onSuccess(ResultSet result) {
                callback.accept(result);
            }

            @Override
            public void onFailure(Throwable t) {
                callback.onError(t);
            }
        });

    }
Note that Futures support in the Cassandra driver comes from Google's Guava library. DataStax has a nice tutorial on using the Cassandra async API with Guava.
In this example we have a service called EventStorageService which stores an event into Cassandra. Most of the plumbing and the table DDL for the Event have been omitted. This is not a Cassandra tutorial by any means.
Note that in the onSuccess of the FutureCallback we call the QBit callback's accept method. A QBit callback is a Java 8 consumer (interface Callback<T> extends Consumer<T>), which is probably what FutureCallback would have been if it were created post Java 8. You can also see that if FutureCallback.onFailure gets called, the code delegates to onError. Fairly simple.
Now we have another service call this service. In this example CassandraService is a thin wrapper over the Cassandra API.

Example service that uses the cassandra service

public class EventStorageService {

    private final Logger logger = LoggerFactory.getLogger(EventStorageService.class);

    private final CassandraService cassandraService;


    private final Reactor reactor;

    public EventStorageService(final CassandraService cassandraService,
                               final Reactor reactor) {
        this.cassandraService = cassandraService;
        logger.info(" Event Storage Service is up ");

        if (reactor != null) {
            this.reactor = reactor;
        } else {
            this.reactor = ReactorBuilder.reactorBuilder().build();
        }

    }


    @RequestMapping(value = "/event", method = RequestMethod.POST)
    public void addEventAsync(final Callback<Boolean> statusCallback, final Event event) {
        logger.debug("Storing Event async {} ", event);
        final EventStorageRecord storageRec = EventConverter.toStorageRec(event);

        final Callback<ResultSet> callback = reactor.callbackBuilder()
                .setCallback(ResultSet.class, resultSet ->
                        statusCallback.accept(resultSet != null))
                .setOnTimeout(() -> statusCallback.accept(false))
                .setOnError(error -> statusCallback.onError(error))
                .build(ResultSet.class);

        this.addEventStorageRecordAsync(callback, storageRec);


    }




    public void addEventStorageRecordAsync(final Callback<ResultSet> callback,
                                           final EventStorageRecord storageRec) {
        logger.info("Storing the record with storage-key {} async ", storageRec.getStorageKey());

        if (storageRec != null) {

            SimpleStatement simpleStatement = ...;
            cassandraService.executeAsync(callback, simpleStatement);

        }


    }

Note that QBit uses a callbackBuilder so the constituent parts of a callback can be lambda expressions.
Callback is a rather simple interface that builds on Java 8 Consumer and adds timeout and error handling.

Callback

public interface Callback<T> extends Consumer<T> {

    default void onError(Throwable error) {

        LoggerFactory.getLogger(Callback.class)
                .error(error.getMessage(), error);
    }


    default void onTimeout() {

    }

}
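Because Callback<T> extends Consumer<T> and accept is the only abstract method, a callback can be written as a lambda, or as an anonymous class when you want custom error handling. A minimal sketch (the Boolean payload here is just for illustration):

// accept(T) is the single abstract method, so a lambda works.
final Callback<Boolean> statusCallback = ok ->
        System.out.println(ok ? "stored" : "store failed");

// Or override the default error handling with an anonymous class.
final Callback<Boolean> verbose = new Callback<Boolean>() {
    @Override
    public void accept(Boolean ok) {
        System.out.println("result: " + ok);
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
};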
The Reactor is a class to manage timeouts, schedule periodic tasks, and handle other service call coordination. We initialize the Reactor in the constructor of the EventStorageService as seen in the previous code listing. We use the callbackBuilder created from the Reactor as it will register the callbacks with the reactor for timeouts and such.
To enable the reactor, we must call it from the service queue callback methods for idle, limit, and empty. One merely needs to call reactor.process from the callback, and it will periodically check for timeouts and such.

Calling reactor process to process callbacks and handle timeouts

@QueueCallback({
        QueueCallbackType.LIMIT,
        QueueCallbackType.IDLE,
        QueueCallbackType.EMPTY})
public void process() {
    reactor.process();
}

Underneath the covers.

The Reactor uses AsyncFutureCallback, which is both a runnable Future and a Callback, and therefore a Consumer. Rather than invent our own async API or functional API, we decided to lean on Java 8 and build on the shoulders of giants.

Reactor uses AsyncFutureCallback internally, and CallbackBuilder really builds an AsyncFutureCallback

public interface AsyncFutureCallback<T> extends Runnable, Callback<T>, Future<T> {

    Exception CANCEL = new Exception("Cancelled RunnableCallback");

    boolean checkTimeOut(long now);

    void accept(T t);

    void onError(Throwable error);

    void run();

    @Override
    boolean cancel(boolean mayInterruptIfRunning);

    @Override
    boolean isCancelled();

    @Override
    boolean isDone();

    @Override
    T get();

    @SuppressWarnings("NullableProblems")
    @Override
    T get(long timeout, TimeUnit unit);


    default boolean timedOut(long now) {

        return !(startTime() == -1 || timeOutDuration() == -1) && (now - startTime()) > timeOutDuration();
    }

    default long timeOutDuration() {
        return -1;
    }


    default long startTime() {
        return -1;
    }

    default void finished() {

    }


    default boolean isTimedOut() {
        return false;
    }
}

Working with Service Pools - working with SOLRJ from a service pool (microservices)


Working with Service Pools - working with SOLRJ from a service pool



In a truly reactive world, one can expect that all APIs are async. However, at times we have to integrate with legacy services and legacy APIs like JDBC.
There are times when you will need worker pools. If you are dealing with IO and the API is not async, then you will want to wrap the API in a service that you can access from a Service pool.
In this example, we will use SOLRJ API to access SOLR.

Example SOLR service

public class SolrServiceImpl implements SolrService {


    /**
     * Create SolrCalypsoDataStore with config file.
     *
     * @param solrConfig solrConfig
     */
    public SolrServiceImpl(final SolrConfig solrConfig, ...) {

        logger.info("SOLR Calypso Exporter Service init {}", solrConfig);
        healthServiceAsync.register(HEALTH_NAME, 20, TimeUnit.SECONDS);
        this.solrConfig = solrConfig;
        connect();
    }

    ...

    /**
     * Connect to solr.
     */
    private void connect() {

        ...
    }


    @Override
    public void storeEvent(Event event) {
        store(event);
    }

    @Override
    public void storeTimeSeries(TimeSeries timeSeries) { store(timeSeries); }


    @Override
    public void get(final Callback<String> callback, final @RequestParam(value = "q", required = true) String queryParams) {
        callback.accept(doGet(queryParams));
    }

    private boolean store(final Object data) {

        logger.info("store():: importing calypso data event into solr {}",
                data);

        if (connectedToSolr) {

            SolrInputDocument doc = SolrServiceHelper.getSolrDocument(data);

            try {
                UpdateResponse ur = client.add(doc);
                if (solrConfig.isForceCommit()) {
                    client.commit();
                }

            } catch (Exception e) {
                ...
            }

            return true;
        } else {
            ...
            return false;
        }
    }

    /**
     * Proxy the request to solr.
     * @param queryParams query params
     * @return solr response body
     */
    public String doGet(@RequestParam(value = "q", required = true) String queryParams) {

        queryParams = queryParams.replaceAll("\\n", "");

        logger.debug("Processing query params: {} ", queryParams);
        String solrQueryUrl = this.solrConfig.getSolrQueryUrl() + queryParams;

        logger.info("solr request Built {} ", solrQueryUrl);

        String result = null;
        try {
            result = IOUtils.toString(new URI(solrQueryUrl));

        } catch (IOException | URISyntaxException e) {
            logger.error("Failed to get solr response for queryUrl {} ", solrQueryUrl, e);
        }

        return result;
    }



    @QueueCallback(QueueCallbackType.SHUTDOWN)
    public void stop() {

        logger.info("Solr Client stopped");
        try {

            this.client.close();
            this.connectedToSolr = false;
        } catch (IOException e) {
            logger.warn("Exception while closing the solr client ", e);
        }

    }
}
Pretty simple; it is mainly an example. Now we want to access this from multiple threads since SOLR calls can block.
To do this we will use a RoundRobinServiceWorkerBuilder which creates a RoundRobinServiceWorker. To get more background on workers in QBit read sharded service workers and service workers.
RoundRobinServiceWorker is a start-able service dispatcher (Startable, ServiceMethodDispatcher) which can be registered with a ServiceBundle. A ServiceMethodDispatcher is an object that can dispatch method calls to a service.
final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();

final CassandraService cassandraService = new CassandraService(config.cassandra);


/* Create the round robin dispatcher with 16 threads. */
final RoundRobinServiceWorkerBuilder roundRobinServiceWorkerBuilder = RoundRobinServiceWorkerBuilder
        .roundRobinServiceWorkerBuilder().setWorkerCount(16);

/* Register a callback to create instances. */
roundRobinServiceWorkerBuilder.setServiceObjectSupplier(()
        -> new SolrServiceImpl(config.solr));

/* Build and start the dispatcher. */
final ServiceMethodDispatcher serviceMethodDispatcher = roundRobinServiceWorkerBuilder.build();
serviceMethodDispatcher.start();

/* Create a service bundle and register the serviceMethodDispatcher with the bundle. */
final ServiceBundle bundle = managedServiceBuilder.createServiceBundleBuilder().setAddress("/").build();
bundle.addServiceConsumer("/solrWorkers", serviceMethodDispatcher);
final SolrService solrWorkers = bundle.createLocalProxy(SolrService.class, "/solrWorkers");
bundle.start();

/* Create other end points and register them with service endpoint server. */
final SolrServiceEndpoint solrServiceEndpoint = new SolrServiceEndpoint(solrWorkers);
final EventStorageService eventStorageService = new EventStorageService(cassandraService);

//final EventManager eventManager = managedServiceBuilder.getEventManager(); In 0.8.16+
final EventManager eventManager = QBit.factory().systemEventManager();
final IngestionService ingestionService = new IngestionService(eventManager);



managedServiceBuilder.getEndpointServerBuilder().setUri("/").build()
        .initServices(cassandraService,
                eventStorageService,
                ingestionService,
                solrServiceEndpoint
        )
        .startServer();
Notice this code that creates a RoundRobinServiceWorkerBuilder.

Working with RoundRobinServiceWorkerBuilder

/* Create the round robin dispatcher with 16 threads. */
final RoundRobinServiceWorkerBuilder roundRobinServiceWorkerBuilder = RoundRobinServiceWorkerBuilder
        .roundRobinServiceWorkerBuilder().setWorkerCount(16);
Above we are creating the builder and setting the number of workers for the round robin dispatcher. The default is to set the number equal to the number of available CPUs. Next we need to tell the builder how to create the service impl objects as follows:

Registering a callback to create instance of the service.

/* Register a callback to create instances. */
roundRobinServiceWorkerBuilder.setServiceObjectSupplier(()
        -> new SolrServiceImpl(config.solr));
NOTE: Use RoundRobinServiceWorkerBuilder when the services are stateless (other than connection state), and use ShardedServiceWorkerBuilder if you must maintain sharded state (caches or some such).
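For contrast, a sketch of what the sharded variant might look like. The shardedServiceWorkerBuilder() factory and setter names below are assumptions made by analogy with the round-robin builder shown above, not verified API:

/* Hypothetical sketch: a sharded dispatcher (method names assumed
   by analogy with RoundRobinServiceWorkerBuilder). */
final ShardedServiceWorkerBuilder shardedServiceWorkerBuilder = ShardedServiceWorkerBuilder
        .shardedServiceWorkerBuilder().setWorkerCount(16);

shardedServiceWorkerBuilder.setServiceObjectSupplier(() -> new SolrServiceImpl(config.solr));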
ServiceBundle knows how to deal with a collection of addressable ServiceMethodDispatchers. Thus, to use the RoundRobinServiceWorker we need to use a service bundle. Therefore, we create a service bundle and register the service worker with it.

Registering the roundRobinServiceWorker with a service bundle

/* Build and start the dispatcher. */
final ServiceMethodDispatcher serviceMethodDispatcher = roundRobinServiceWorkerBuilder.build();
serviceMethodDispatcher.start();

/* Create a service bundle and register the serviceMethodDispatcher with the bundle. */
final ServiceBundle bundle = managedServiceBuilder.createServiceBundleBuilder().setAddress("/").build();
bundle.addServiceConsumer("/solrWorkers", serviceMethodDispatcher);
final SolrService solrWorkers = bundle.createLocalProxy(SolrService.class, "/solrWorkers");
bundle.start();
Service bundles do not auto flush, and we are using an interface from a service bundle from our SolrServiceEndpoint instance. Therefore, we should use a Reactor. A QBit Reactor is owned by a service that is sitting behind a service queue (ServiceQueue). You can register services to be flushed with a reactor, you can register for repeating jobs with the reactor, and you can coordinate callbacks with the reactor. The reactor has a process method that needs to be periodically called during idle times, when batch limits are met (queue is full), and when the queue is empty. We do that by calling the process method as follows:

SolrServiceEndpoint using a reactor object to manage callbacks and flushes

@RequestMapping(value = "/storage/solr", method = RequestMethod.ALL)
public class SolrServiceEndpoint {


    private final SolrService solrService;
    private final Reactor reactor;

    public SolrServiceEndpoint(final SolrService solrService) {
        this.solrService = solrService;
        reactor = ReactorBuilder.reactorBuilder().build();
        reactor.addServiceToFlush(solrService);

    }

    @OnEvent(IngestionService.NEW_EVENT_CHANNEL)
    public void storeEvent(final Event event) {
        solrService.storeEvent(event);
    }

    @OnEvent(IngestionService.NEW_TIMESERIES_CHANNEL)
    public void storeTimeSeries(final TimeSeries timeSeries) {
        solrService.storeTimeSeries(timeSeries);
    }


    /**
     * Proxy the request to solr.
     *
     * @param queryParams query params
     */
    @RequestMapping(value = "/get", method = RequestMethod.GET)
    public void get(final Callback<String> callback, final @RequestParam(value = "q", required = true) String queryParams) {
        solrService.get(callback, queryParams);
    }


    @QueueCallback({QueueCallbackType.EMPTY, QueueCallbackType.IDLE, QueueCallbackType.LIMIT})
    public void process() {
        reactor.process();
    }
}
Notice that the process method of SolrServiceEndpoint uses the @QueueCallback annotation and enums (@QueueCallback({QueueCallbackType.EMPTY, QueueCallbackType.IDLE, QueueCallbackType.LIMIT})), and then all it does is call reactor.process. In the constructor, we registered the solrService service proxy with the reactor.

Registering the solrService with the reactor

public SolrServiceEndpoint(final SolrService solrService) {
    this.solrService = solrService;
    reactor = ReactorBuilder.reactorBuilder().build();
    reactor.addServiceToFlush(solrService);

}

Understanding ManagedServiceBuilder to create Microservices in QBit that support Docker, Heroku, Swagger, Consul, and StatsD with near 0 config

QBit integrates easily with Consul, StatsD, and Swagger. In addition, QBit has its own health system, stats engine, and meta-data engine.

Swagger is for code generations of REST clients in Python, Java, Ruby, Scala, etc.
Consul is for health monitoring and service discovery (among other things).
StatsD is for microservice stats and metrics monitoring.
To make configuring QBit easier we did two things: 1) we added support for Spring Boot (which we have not released yet), and 2) we created the ManagedServiceBuilder, which we will show below.
The ManagedServiceBuilder simplifies construction of QBit endpoints by registering all services and endpoints with the system service so that they are shut down correctly when you CTRL-C or kill an app gracefully.
In addition, ManagedServiceBuilder allows enabling of StatsD, Swagger, and Consul in one simple step. By default, ManagedServiceBuilder is configured to run in an environment like Docker or Heroku.
Let's show a simple example:

Hello World REST service in QBit

package com.mammatustech;


import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.annotation.RequestMapping;

@RequestMapping("/hello")
public class HelloWorldService {


    @RequestMapping("/hello")
    public String hello() {
        return "hello " + System.currentTimeMillis();
    }

    public static void main(final String... args) {
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder().setRootURI("/root");

        /* Start the service. */
        managedServiceBuilder.addEndpointService(new HelloWorldService())
                .getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health end-points and meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Servers started");


    }
}
The above is a simple REST service. It has one REST method.

Hello REST method

@RequestMapping("/hello")
public class HelloWorldService {


    @RequestMapping("/hello")
    public String hello() {
        return "hello " + System.currentTimeMillis();
    }
QBit uses the same style REST methods as Spring MVC REST support. QBit only supports JSON as body params and return types.
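To illustrate the mapping style, here is a minimal sketch of a POST method that takes a JSON body. The TodoService and Todo class are hypothetical, invented for this example; the annotations follow the patterns shown elsewhere in this article:

@RequestMapping("/todo")
public class TodoService {

    /* The JSON request body is deserialized into the Todo parameter,
       and the boolean return value is serialized back to the caller as JSON. */
    @RequestMapping(value = "/item", method = RequestMethod.POST)
    public boolean add(final Todo todo) {
        return todo != null;
    }
}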
The gradle file to compile this is as follows:

build.gradle

group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin: 'java'


compileJava {
    sourceCompatibility = 1.8
}

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'io.advantageous.qbit', name: 'qbit-admin', version: '0.8.16-RC2-SNAPSHOT'
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.8.16-RC2-SNAPSHOT'

}
Change the version to a release after 0.8.16 or build the snapshot.
The main method starts up the end point on port 8080 and then starts up an admin server on port 7777.

Main method starts up endpoint and admin server

...

public static void main(final String... args) {
    final ManagedServiceBuilder managedServiceBuilder =
            ManagedServiceBuilder.managedServiceBuilder().setRootURI("/root");

    /* Start the service. */
    managedServiceBuilder.addEndpointService(new HelloWorldService())
            .getEndpointServerBuilder()
            .build().startServer();

    /* Start the admin builder which exposes health end-points and meta data. */
    managedServiceBuilder.getAdminBuilder().build().startServer();

    System.out.println("Servers started");


}
Since this class has a main method, you should be able to run it from your IDE. The admin server exposes health end-points and meta-data. The default port for an end point is 8080, but you can override it by setting the environment variable PORT or WEB_PORT.
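For example, to run the service on port 9090 (the jar path below is hypothetical; use whatever artifact your build produces):

$ PORT=9090 java -cp myapp.jar com.mammatustech.HelloWorldService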
By default, we enable endpoints to manage the server for health and stats in a way that works in a Heroku or Docker environment. There are methods on ManagedServiceBuilder to disable health checks, etc. There are also methods on ManagedServiceBuilder to turn on StatsD and Consul support.
Let's look at the endpoints we have exposed so far.

Actual Service end point

$ curl http://localhost:8080/root/hello/hello
"hello 1438470713526"

Swagger meta endpoint

$ curl http://localhost:7777/__admin/meta/
{
    "swagger": "2.0",
    "info": {
        "title": "application title goes here",
        "description": "Description not set",
        "contact": {
            "name": "ContactName not set",
            "url": "Contact URL not set",
            "email": "no.contact.email@set.me.please.com"
        },
        "version": "0.1-NOT-SET",
        "license": {
            "name": "licenseName not set",
            "url": "http://www.license.url.com/not/set/"
        }
    },
    "host": "localhost:8888",
    "basePath": "/root",
    "schemes": [
        "http",
        "https",
        "wss",
        "ws"
    ],
    "consumes": [
        "application/json"
    ],
    "produces": [
        "application/json"
    ],
    "paths": {
        "/hello/hello": {
            "get": {
                "operationId": "hello",
                "produces": [
                    "application/json"
                ],
                "responses": {
                    "200": {
                        "description": "returns",
                        "schema": {
                            "type": "string"
                        }
                    }
                }
            }
        }
    }
}
You can import the above into the Swagger editor and generate clients in Python, Perl, PHP, Ruby, Java, C# and more. By the way, there are ways to configure all the parameters that say "set me" or some variation of the above.
There is a health endpoint to make working in Docker, Heroku, or other similar cloud environments (EC2, VMware cloud, OpenStack) easy.

Health system endpoint

$ curl http://localhost:8080/__health
"ok"
This deceptively simple end-point will check every endpoint server, service, service queue, etc. to see if they are healthy, and you can register your own health checks. This covers not just your REST services but all of the IO services, nano services, etc. that they depend on. We could write a whole article on just the HealthService, which is preconfigured with ManagedServiceBuilder.

Other end points of note

Admin Endpoint ok

 $ curl http://localhost:7777/__admin/ok
The above returns true if all registered health systems are healthy.
A node is a service, service bundle, queue, or server endpoint that is being monitored.

All nodes by name (health and unhealthy)

    $ curl http://localhost:7777/__admin/all-nodes/

List healthy nodes by name

 $ curl http://localhost:7777/__admin/healthy-nodes/

List complete node information

$ curl http://localhost:7777/__admin/load-nodes/

List stats information for Heroku and Docker style environments

 $ curl http://localhost:8080/__stats/instance

Consul

Let's say you want to use Consul. There is one method to enable it.

Enabling Consul

        managedServiceBuilder.enableConsulServiceDiscovery("dc1", "localhost", 8500);
If you want to use the default host and port:

Enabling Consul with one argument

        managedServiceBuilder.enableConsulServiceDiscovery("dc1");
Just do the above before you create your first endpoint server or service queue. The main endpoint will automatically register with Consul and periodically check in its health. It will even check with the internal health system to see if all of the nodes (service queues, endpoints, etc.) are healthy and pass that information to Consul.

Enabling StatsD

Enabling StatsD is also easy

Enabling StatsD

        managedServiceBuilder.getStatsDReplicatorBuilder()
.setHost("somehost").setPort(9000);
managedServiceBuilder.setEnableStats(true);
Just do the above before you create your first endpoint server or service queue. There are default stats gathered for all Service Queues and Endpoint servers.
ManagedServiceBuilder is one-stop shopping for writing cloud-friendly microservices in QBit.

Request filtering based on headers in QBit - Filtering requests with HttpRequest shouldContinue predicate

We added support for doing things that you would normally do in a ServletFilter or its ilk. We had the hook there already, and Predicate already allowed you to chain Predicates, but that did not address the fact that we had started to use Predicates to wire in health check and stats check endpoints. We added a mechanism to create chains of predicates: the first one that returns false stops the chain from processing.
The HTTP server allows you to pass a predicate.
setShouldContinueHttpRequest(Predicate<HttpRequest> predicate)
The predicate allows for things like security interception: look for an auth header, and reject the request if the auth header is not in place.
Predicates are nest-able.
It is often the case that you will want to run more than one predicate.
To support this, we added addShouldContinueHttpRequestPredicate(final Predicate<HttpRequest> predicate) to the HttpServerBuilder.
The HttpServerBuilder will keep a list of predicates, and register them with the HttpServer when it builds the http server.
You can add your own predicates or replace the default predicate mechanism.

HttpServerBuilder

private RequestContinuePredicate requestContinuePredicate = null;

public RequestContinuePredicate getRequestContinuePredicate() {
    if (requestContinuePredicate == null) {
        requestContinuePredicate = new RequestContinuePredicate();
    }
    return requestContinuePredicate;
}

public HttpServerBuilder setRequestContinuePredicate(final RequestContinuePredicate requestContinuePredicate) {
    this.requestContinuePredicate = requestContinuePredicate;
    return this;
}

public HttpServerBuilder addShouldContinueHttpRequestPredicate(final Predicate<HttpRequest> predicate) {
    getRequestContinuePredicate().add(predicate);
    return this;
}



public class RequestContinuePredicate implements Predicate<HttpRequest> {

    private final CopyOnWriteArrayList<Predicate<HttpRequest>> predicates = new CopyOnWriteArrayList<>();

    public RequestContinuePredicate add(Predicate<HttpRequest> predicate) {
        predicates.add(predicate);
        return this;
    }

    @Override
    public boolean test(final HttpRequest httpRequest) {
        boolean shouldContinue;

        for (Predicate<HttpRequest> shouldContinuePredicate : predicates) {
            shouldContinue = shouldContinuePredicate.test(httpRequest);
            if (!shouldContinue) {
                return false;
            }
        }
        return true;
    }
}
We added a bunch of unit tests to make sure this actually works. :)
We created an example to show how this works.
package com.mammatustech;


import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.http.request.HttpRequest;
import io.advantageous.qbit.http.server.HttpServerBuilder;

import java.util.function.Predicate;

/**
 * Default port for admin is 7777.
 * Default port for main endpoint is 8080.
 *
 * <pre>
 * <code>
 *
 * Access the service:
 *
 * $ curl http://localhost:8080/root/hello/hello
 *
 * The above will respond "shove off".
 *
 * $ curl --header "X-SECURITY-TOKEN: shibboleth" http://localhost:8080/root/hello/hello
 *
 * This will get your hello message.
 *
 * To see swagger file for this service:
 *
 * $ curl http://localhost:7777/__admin/meta/
 *
 * To see health for this service:
 *
 * $ curl http://localhost:8080/__health
 * Returns "ok" if all registered health systems are healthy.
 *
 * OR if same port endpoint health is disabled then:
 *
 * $ curl http://localhost:7777/__admin/ok
 * Returns "true" if all registered health systems are healthy.
 *
 *
 * A node is a service, service bundle, queue, or server endpoint that is being monitored.
 *
 * List all service nodes or endpoints:
 *
 * $ curl http://localhost:7777/__admin/all-nodes/
 *
 *
 * List healthy nodes by name:
 *
 * $ curl http://localhost:7777/__admin/healthy-nodes/
 *
 * List complete node information:
 *
 * $ curl http://localhost:7777/__admin/load-nodes/
 *
 *
 * Show service stats and metrics:
 *
 * $ curl http://localhost:8080/__stats/instance
 * </code>
 * </pre>
 */
@RequestMapping("/hello")
public class HelloWorldService {


    @RequestMapping("/hello")
    public String hello() {
        return "hello " + System.currentTimeMillis();
    }

    public static void main(final String... args) {
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder().setRootURI("/root");

        final HttpServerBuilder httpServerBuilder = managedServiceBuilder.getHttpServerBuilder();

        /* We can register our security token checker here. */
        httpServerBuilder.addShouldContinueHttpRequestPredicate(HelloWorldService::checkAuth);

        /* Start the service. */
        managedServiceBuilder.addEndpointService(new HelloWorldService())
                .getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health end-points and meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Servers started");


    }

    /**
     * Checks to see if the header <code>X-SECURITY-TOKEN</code> is set to "shibboleth".
     * @param httpRequest http request
     * @return true if we should continue, i.e., auth passed; false otherwise.
     */
    private static boolean checkAuth(final HttpRequest httpRequest) {

        /* Only check URIs that start with /root/hello. */
        if (httpRequest.getUri().startsWith("/root/hello")) {

            final String x_security_token = httpRequest.headers().getFirst("X-SECURITY-TOKEN");

            /* If the security token is set to "shibboleth" then continue processing the request. */
            if ("shibboleth".equals(x_security_token)) {
                return true;
            } else {
                /* Security token was not what we expected so send a 401 auth failed. */
                httpRequest.getReceiver().response(401, "application/json", "\"shove off\"");
                return false;
            }
        }
        return true;
    }
}
To exercise this and show that it is working, let's use curl.

Pass the header token

$ curl --header "X-SECURITY-TOKEN: shibboleth" http://localhost:8080/root/hello/hello
"hello 1440012093122"

No header token
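
Without the security token, the checkAuth predicate rejects the request with a 401 and the "shove off" body:

$ curl http://localhost:8080/root/hello/hello
"shove off"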

Health request

You may wonder why and how health comes up in this conversation. It is clear really: EndpointServerBuilder and ManagedServiceBuilder configure the health system as a should-you-continue Predicate as well.

EndpointServerBuilder

public class EndpointServerBuilder {

    public EndpointServerBuilder setupHealthAndStats(final HttpServerBuilder httpServerBuilder) {

        if (isEnableStatEndpoint() || isEnableHealthEndpoint()) {
            final boolean healthEnabled = isEnableHealthEndpoint();
            final boolean statsEnabled = isEnableStatEndpoint();


            final HealthServiceAsync healthServiceAsync = healthEnabled ? getHealthService() : null;

            final StatCollection statCollection = statsEnabled ? getStatsCollection() : null;

            httpServerBuilder.addShouldContinueHttpRequestPredicate(
                    new EndPointHealthPredicate(healthEnabled, statsEnabled,
                            healthServiceAsync, statCollection));
        }


        return this;
    }

package io.advantageous.qbit.server;

import io.advantageous.boon.json.JsonFactory;
import io.advantageous.qbit.http.request.HttpRequest;
import io.advantageous.qbit.service.health.HealthServiceAsync;
import io.advantageous.qbit.service.stats.StatCollection;

import java.util.function.Predicate;

public class EndPointHealthPredicate implements Predicate<HttpRequest> {

    private final boolean healthEnabled;
    private final boolean statsEnabled;
    private final HealthServiceAsync healthServiceAsync;
    private final StatCollection statCollection;


    public EndPointHealthPredicate(boolean healthEnabled, boolean statsEnabled,
                                   HealthServiceAsync healthServiceAsync, StatCollection statCollection) {
        this.healthEnabled = healthEnabled;
        this.statsEnabled = statsEnabled;
        this.healthServiceAsync = healthServiceAsync;
        this.statCollection = statCollection;
    }

    @Override
    public boolean test(final HttpRequest httpRequest) {

        boolean shouldContinue = true;
        if (healthEnabled && httpRequest.getUri().startsWith("/__health")) {
            healthServiceAsync.ok(ok -> {
                if (ok) {
                    httpRequest.getReceiver().respondOK("\"ok\"");
                } else {
                    httpRequest.getReceiver().error("\"fail\"");
                }
            });
            shouldContinue = false;
        } else if (statsEnabled && httpRequest.getUri().startsWith("/__stats")) {

            if (httpRequest.getUri().equals("/__stats/instance")) {
                if (statCollection != null) {
                    statCollection.collect(stats -> {
                        String json = JsonFactory.toJson(stats);
                        httpRequest.getReceiver().respondOK(json);
                    });
                } else {
                    httpRequest.getReceiver().error("\"failed to load stats collector\"");
                }
            } else if (httpRequest.getUri().equals("/__stats/global")) {
                /* We don't support global stats, yet. */
                httpRequest.getReceiver().respondOK("{\"version\":1}");
            } else {

                httpRequest.getReceiver().notFound();
            }
            shouldContinue = false;
        }

        return shouldContinue;

    }
}

QBit: Restful URI patterns URLs with resources



Restful URI patterns URLs with resources


Http methods

It is common, if you are updating an object, to do a PUT, and if you are adding a new object, to do a POST.
The GET method should never modify data. Use POST to add and PUT to modify. Use DELETE to remove an item.
It also makes a lot of sense to organize your resources into collections.
A collection of employees in a Restful URL is generally /${rootURI}/employee/. To add to a list of employees you would either POST or PUT to /${rootURI}/employee/.
Let's say you wanted to add a phone number to an employee who resided in a certain department. You could POST or PUT the phone number at this location: /department/{departmentId}/employee/{employeeId}/phoneNumber/.

Example HR system that uses resource aware URLs to provide a REST interface

package io.advantageous.qbit.service.rest.endpoint.tests.services;


import io.advantageous.qbit.annotation.PathVariable;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.service.rest.endpoint.tests.model.Department;
import io.advantageous.qbit.service.rest.endpoint.tests.model.Employee;
import io.advantageous.qbit.service.rest.endpoint.tests.model.PhoneNumber;

import java.util.*;
import java.util.function.Predicate;

@RequestMapping("/hr")
public class HRService {

    final Map<Integer, Department> departmentMap = new HashMap<>();


    @RequestMapping("/department/")
    public List<Department> getDepartments() {
        return new ArrayList<>(departmentMap.values());
    }

    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)
    public boolean addDepartments(@PathVariable("departmentId") Integer departmentId,
                                  final Department department) {

        departmentMap.put(departmentId, department);
        return true;
    }

    @RequestMapping(value = "/department/{departmentId}/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@PathVariable("departmentId") Integer departmentId,
                               final Employee employee) {

        final Department department = departmentMap.get(departmentId);

        if (department == null) {
            throw new IllegalArgumentException("Department " + departmentId + " does not exist");
        }

        department.addEmployee(employee);
        return true;
    }

    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}", method = RequestMethod.GET)
    public Employee getEmployee(@PathVariable("departmentId") Integer departmentId,
                                @PathVariable("employeeId") Long employeeId) {

        final Department department = departmentMap.get(departmentId);

        if (department == null) {
            throw new IllegalArgumentException("Department " + departmentId + " does not exist");
        }

        Optional<Employee> employee = department.getEmployeeList().stream().filter(new Predicate<Employee>() {
            @Override
            public boolean test(Employee employee) {
                return employee.getId() == employeeId;
            }
        }).findFirst();

        if (employee.isPresent()) {
            return employee.get();
        } else {
            throw new IllegalArgumentException("Employee with id " + employeeId + " Not found ");
        }
    }


    public Map<Integer, Department> getDepartmentMap() {
        return departmentMap;
    }


    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}/phoneNumber/",
            method = RequestMethod.POST)
    public boolean addPhoneNumber(@PathVariable("departmentId") Integer departmentId,
                                  @PathVariable("employeeId") Long employeeId,
                                  PhoneNumber phoneNumber) {

        Employee employee = getEmployee(departmentId, employeeId);
        employee.addPhoneNumber(phoneNumber);
        return true;
    }



    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}/phoneNumber/")
    public List<PhoneNumber> getPhoneNumbers(@PathVariable("departmentId") Integer departmentId,
                                             @PathVariable("employeeId") Long employeeId) {

        Employee employee = getEmployee(departmentId, employeeId);
        return employee.getPhoneNumbers();
    }

}
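To exercise the HR service, you could use curl. The commands below assume the service is started on port 8080 with no extra root URI, and the JSON field names are hypothetical since the Department and Employee classes are not shown:

$ curl -X POST -H "Content-Type: application/json" -d '{"name":"engineering"}' http://localhost:8080/hr/department/1/

$ curl -X POST -H "Content-Type: application/json" -d '{"id":42,"firstName":"Bob"}' http://localhost:8080/hr/department/1/employee/

$ curl http://localhost:8080/hr/department/1/employee/42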

Reactor tutorial | reactively handling async calls with QBit Reactive Microservices

$
0
0

QBit reactive programming with the Reactor

Reactive Microservices Background

One of the key tenets of a microservices architecture is the ability to be asynchronous. This is important because you want to make the best use of your hardware. There is little point in starting up thousands of threads that are waiting on IO. Instead you can have fewer CPU threads and use an async model.
An asynchronous programming model is not in and of itself a reactive programming model. You need an asynchronous model before you can have a truly reactive model. In order to have a reactive model, you need to be able to coordinate asynchronous calls.
Imagine you have three services. One of the services is client facing. By client facing, we mean the public web or, for an internal app, the end point that the client app talks to. Let's call this client facing service Service A.
For example, let's say Service A performs an operation on behalf of the client, and this operation needs to call Service B; after it calls Service B, it needs to take the result of Service B and call Service C. Then Service A takes the combined results of Service B and Service C and returns those back to the client. These of course are all nonblocking asynchronous calls.
Let's summarize: the Client calls a method on Service A. Service A calls a method on Service B. Then, when the result from the Service B method invocation comes back, Service A calls a method on Service C, passing results from Service B as a parameter to the call to Service C. The combined results from Service B and Service C are then processed by Service A, and finally Service A passes the response that depended on calls to Service B and Service C back to the Client.
The reactive nature comes into play in that we need to coordinate the call to Service C to happen after the call to Service B. And we need to maintain enough of the context from the original call to return results to the original client. At this point we are still mostly talking about an asynchronous system, not really a reactive system per se. There are language constructs in Java to capture the context of the call: either a lambda expression or an anonymous class.
Where a reactive system starts to come into play is what happens if Service B or Service C takes too long to respond, or if the total operation of Service A takes too long to respond. You need a way to detect when asynchronous calls do not come back in the allotted period of time. If you do not have this, the client can continue to hold onto a connection that is not responding, and there are hardware limitations on how many open connections you can have. Now let's say the Client is not a client app but rather another service that is calling Service A. You do not want a rolling backup of waiting connections if a downstream service like Service B stops responding. The ability to handle a non-responsive system is what makes a reactive system reactive. The system has to be able to react to things like timeouts or downstream failures.
Now the call sequence that was just described is a fairly simple one. A more complicated call sequence might involve many downstream services, and perhaps calls that rely on calls that rely on calls that then decide which other calls to make. It might make sense to have a service-internal cache that can cache results of the calls and coordinate a filtered response based on N number of calls. However complex the call sequence, the basic principle that you can't leave the client hanging still applies. At some point one has to determine that the call sequence is not going to be successful, and at that point a response, even if it's an error response, must return to the client. The main mission of a reactive system is to not have a cascading failure.
Again, the main mission of a reactive system is to not have a cascading failure.
In the case of the cache, which doesn't have to be a real cache at all, one may need a mechanism to purge the cache and/or keep it warmed up. Perhaps instead of making frequent calls to Service B, Service A can be notified by Service B via an event that an item of interest has changed, and Service A can ask ahead of time for the things it needs from Service B before the client asks for them.
Things like async call coordination, handling async call timeouts, coordinating complex async calls, populating caches based on events, and having periodic jobs to manage real-time stats, cache eviction, and complex call coordination are all needed. A system that provides these things is a reactive system. In QBit the main interface to this reactive system is the Reactor.

QBit Background

QBit is a service-oriented, reactive microservice library. QBit revolves around having a Java-idiomatic service architecture. Your services are Java classes. These services are guaranteed to only be called by one thread at a time. Since there are strong guarantees of thread safety, your services can maintain state. Since your services can maintain state, they can do things like keep an internal cache, where the internal cache might just be a tree map or a hash map or your own data structure. Stateful services can also do things like report statistics, since they can easily keep counters.
If you have a CPU intensive service that needs to maintain state, QBit allows you to shard services in the same JVM. There are built-in shard rules to shard based on method call arguments and you can create your own shard rules easily.
Data safety can be accomplished by using tools like Cassandra and Kafka, or by simply having services that replicate to another service peer. You can set the service up so it does not mutate its internal state until a call to Kafka, an update to Cassandra, or a call to a replica succeeds. The calls to the replica, async store, or transactional message bus will be asynchronous calls.
QBit enables the development of in-memory services, IO-bound services, or both running in the same JVM. QBit provides a clustered event bus (using idiomatic Java, i.e., interfaces and classes), a fast batched queuing system based on streams of calls, sharded services, round-robin services, as well as exposing services via REST/JSON or WebSocket (pluggable remoting), not to mention a ServiceDiscovery mechanism so services can find peers for replication. QBit services can automatically be enrolled in the QBit health system or the QBit stats system. QBit provides a
HealthService, ServiceDiscovery, EventService, and a StatService. The StatService can be integrated with StatsD to publish passive stats. Or you can query the stats engine and react to the stats (counts, timings, and levels). The StatService is a reactive stats system that can be clustered: your services can publish to it, query it, and react based on the results. You can implement things like rate limiting and react to an increased rate of something. The ServiceDiscovery system integrates with the HealthSystem and Consul to roll up each of the internal services that make up your microservice and publish the composite availability of your microservice to a single HTTP endpoint or a dead man's switch in Consul (TTL). In short, without going into a ton of detail, QBit fully embraces microservices, down to publishing REST interfaces as Swagger meta-data to enable API gateways.
Whether QBit is calling another async service, calling another QBit async service (remote or local), or using a pool of services to call a blocking IO service, one thing is clear: you need async call coordination.

QBit Reactor to reactively manage async microservice calls

First and foremost, the Reactor ensures that async calls come in on the same thread as the method calls and event publication that the ServiceQueue already handles not a foreign thread so the callback handlers are thread safe. The Reactor is more or less a utility class to manage async calls and periodic jobs.
The Reactor works in concert with a ServiceQueue to manage async calls and schedule periodic jobs. Recall that events and method calls that come through a ServiceQueue are guaranteed to come in on the same thread, so a ServiceQueue-based service is inherently thread safe. This is not a new idea: DCOM supported this with active objects and apartment-model threading, Akka supports the same concept with typed Actors, and the LMAX architecture for trading uses the same principle (although souped up and highly optimized for high-speed trading). As it turns out, CPUs are fairly fast, and you can do a lot of operations per second on a single thread, often quite a bit more than the IO hardware can handle.
Thus, if both events and method calls come in on the same thread, what happens when we call into another service or use a library that has a callback or some sort of async future? The callback or async future will come back on a foreign thread. We need a way to get that callback to come back on the same thread as the ServiceQueue. This is where the Reactor comes into play: the Reactor ensures that callbacks happen on the same thread as the Service running in a ServiceQueue.
If you adopt the QBit model, you embrace the fact that services can be stateful, even if the state is only counters and caches. You are in effect embracing in-memory services. This does not force you to manage state in a Java class, but it allows you to manage state and makes things like counters and stats collection child's play.
The missing link is managing callbacks so that they also come back on the same thread as the ServiceQueue. The Reactor allows callbacks to be handled like events and method calls.

Reactor to manage async calls

package io.advantageous.qbit.reactive;


public class Reactor {


    /** Add an object that is auto flushed.
     *
     * @param serviceObject a service object that will be auto-flushed.
     */
    public void addServiceToFlush(final Object serviceObject) {
        ...
    }

    /** Add a task that gets repeated.
     *
     * @param repeatEvery repeat every time period
     * @param timeUnit    unit for repeatEvery
     * @param task        task to perform
     */
    public void addRepeatingTask(final long repeatEvery, final TimeUnit timeUnit,
                                 final Runnable task) {
        ...
    }

    public CallbackBuilder callbackBuilder() {
        return CallbackBuilder.callbackBuilder(this);
    }

    public CoordinatorBuilder coordinatorBuilder() {
        return CoordinatorBuilder.coordinatorBuilder(this);
    }


    public void process() {
        ...
    }

}
You do not always need to create a callback via the Reactor. However, if you want to mutate the state of a service running on a ServiceQueue from a callback handler, you will want to use a Reactor. Also, the Reactor makes it convenient to have callbacks with timeouts. Those are the two use cases for the Reactor: you want to enforce a timeout, or you want to ensure that the callback executes on the same thread as the method calls and events so that access to the member variables of the service is thread safe.

HRService and DepartmentRepo example using Reactor

Let’s create a small example to show how it all ties in.
We have the following components and classes and interfaces:
  • HRService (Human Resources service) that is exposed via REST
  • DepartmentRepo which stores departments in long-term storage
  • Department, a department object
  • DepartmentRepoAsync which is the async interface to DepartmentRepo
  • Reactor which coordinates calls to DepartmentRepo
  • HRServiceMain which constructs the servers and service queues (wiring)
Let's look at HRService first. HRService (Human Resource Service) is a service that runs on a ServiceQueue thread.

HRService

/** This is the public REST interface to the Human Resources services.
 *
 */
@RequestMapping("/hr")
public class HRService {

    private final Map<Integer, Department> departmentMap
            = new HashMap<>();

    private final Reactor reactor;
    private final DepartmentRepoAsync departmentRepoAsync;

    /**
     * Construct a new HR REST Service.
     * @param reactor reactor
     * @param departmentRepoAsync async interface to DepartmentStore
     */
    public HRService(final Reactor reactor,
                     final DepartmentRepoAsync departmentRepoAsync) {
        this.reactor = reactor;
        this.reactor.addServiceToFlush(departmentRepoAsync);
        this.departmentRepoAsync = departmentRepoAsync;
    }

    /**
     * Add a new department
     * @param callback callback
     * @param departmentId department id
     * @param department department
     */
    @RequestMapping(value = "/department/{departmentId}/",
            method = RequestMethod.POST)
    public void addDepartment(final Callback<Boolean> callback,
                              @PathVariable("departmentId") Integer departmentId,
                              final Department department) {

        final Callback<Boolean> repoCallback = reactor.callbackBuilder()
                .setCallback(Boolean.class, succeeded -> {
                    departmentMap.put(departmentId, department);
                    callback.accept(succeeded);
                }).build();

        //TODO improve this to handle timeout and error handling.
        departmentRepoAsync.addDepartment(repoCallback, department);

    }

    /** Register to be notified when the service queue is idle, empty,
     or has hit its batch limit.
     */
    @QueueCallback({QueueCallbackType.EMPTY,
            QueueCallbackType.IDLE, QueueCallbackType.LIMIT})
    private void process() {

        /** Call the reactor to process callbacks. */
        reactor.process();
    }
}
To use the Reactor, you must do the following: 1) register collaborating services with addServiceToFlush, and 2) call the reactor's process method from a @QueueCallback method of the service that registers for idle, empty, and limit notifications. The Reactor's process method handles registered coordinators, repeating jobs, collaborating service queue flushes, and callback timeouts, and it runs callbacks on the same thread as the service queue. Now every time we make a call to our collaborating service, we will use the callback builder from the reactor (reactor.callbackBuilder) so the reactor can manage the callback and detect if it times out. Let's break this down.
First we register the collaborating services with addServiceToFlush.

register collaborating services with addServiceToFlush

public HRService(final Reactor reactor,
                 final DepartmentRepoAsync departmentRepoAsync) {
    ...
    this.reactor.addServiceToFlush(departmentRepoAsync);
Next we call the reactor’s process method from a @QueueCallback method that registers for idle, empty and limit notification.

call the reactor’s process from a @QueueCallback method

/** Register to be notified when the service queue is
 idle, empty, or has hit its batch limit.
 */
@QueueCallback({QueueCallbackType.EMPTY,
        QueueCallbackType.IDLE, QueueCallbackType.LIMIT})
private void process() {

    /** Call the reactor to process callbacks. */
    reactor.process();
}


This literally means: if the queue is idle or empty, or we have reached the batch size limit, then run the reactor's process method. This works for most use cases, but you could opt to call reactor.process after some other important event, or after X number of calls to a certain method (a sketch follows). The reactor's process method is where it manages the service flushes, callbacks, periodic jobs, etc.
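For instance (a sketch, not from the original example; the endpoint and counter are hypothetical), you could also drive the reactor after every N calls to a busy method, in addition to the queue callbacks:

@RequestMapping("/hr")
public class HRService {
    ...
    private long callCount; // hypothetical counter, not in the original example

    @RequestMapping("/department/count")
    public void departmentCount(final Callback<Integer> callback) {
        callCount++;
        callback.accept(departmentMap.size());

        /* Also drive the reactor every 100 calls, in case the
           queue stays busy and rarely goes idle or empty. */
        if (callCount % 100 == 0) {
            reactor.process();
        }
    }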
DepartmentRepo, which stores departments in long-term storage, is for now just a simple class to keep the discussion moving forward.

DepartmentRepo which stores departments

package com.mammatustech.hr;

import java.util.HashMap;
import java.util.Map;

/**
 * Represents a storage repo. Imagine this is talking to MongoDB or
 * Cassandra. Perhaps it is also indexing the department name via
 * SOLR. It does all of this and then returns when it is finished.
 * If this in turn called other services, it would use a Callback instead of
 * returning a boolean.
 */
public class DepartmentRepo {

    private final Map<Long, Department> departmentMap = new HashMap<>();


    /**
     * Add a department.
     * @param department department we are adding.
     * @return true if we successfully stored the department
     */
    public boolean addDepartment(final Department department) {

        departmentMap.put(department.getId(), department);
        return true;
    }
}
For now, imagine it writing to a database such as Cassandra or LevelDB or something.
Since this is such a simple version, we don't even need a Callback, but we do need one when we call it. (Later we will coordinate multiple calls.)
DepartmentRepoAsync is the async interface to DepartmentRepo; it allows async access even though DepartmentRepo does not technically need it yet.

DepartmentRepoAsync which is the async interface to DepartmentRepo

package com.mammatustech.hr;


import io.advantageous.qbit.reactive.Callback;

/**
 * Async interface to the DepartmentRepo internal service.
 *
 */
public interface DepartmentRepoAsync {

    /**
     * Add a department to the repo.
     * @param callback callback which returns the success code async.
     * @param department department to add
     */
    void addDepartment(final Callback<Boolean> callback,
                       final Department department);

}
There is nothing special about the Department object.

Department Object

package com.mammatustech.hr;

import java.util.ArrayList;
import java.util.List;

public class Department {

    private final long id;
    private final String name;
    private final List<Employee> employeeList;

    public Department(long id, String name, List<Employee> employeeList) {
        this.id = id;
        this.name = name;
        this.employeeList = employeeList;
    }

    public void addEmployee(Employee employee) {
        employeeList.add(employee);
    }

    public List<Employee> getEmployeeList() {
        return new ArrayList<>(employeeList);
    }

    public long getId() {
        return id;
    }
}
HRServiceMain constructs the servers and service queues and starts them up. It is the bootstrap class.

HRServiceMain wires up DepartmentRepo and HRService

/**
 * Default port for admin is 7777.
 * Default port for main endpoint is 8080.
 *
 * <pre>
 * <code>
 *
 * Access the service:
 *
 * $ curl http://localhost:8888/v1/...
 *
 *
 * To see swagger file for this service:
 *
 * $ curl http://localhost:7777/__admin/meta/
 *
 * To see health for this service:
 *
 * $ curl http://localhost:8888/__health
 * Returns "ok" if all registered health systems are healthy.
 *
 * OR if same port endpoint health is disabled then:
 *
 * $ curl http://localhost:7777/__admin/ok
 * Returns "true" if all registered health systems are healthy.
 *
 *
 * A node is a service, service bundle, queue, or server endpoint that is being monitored.
 *
 * List all service nodes or endpoints
 *
 * $ curl http://localhost:7777/__admin/all-nodes/
 *
 *
 * List healthy nodes by name:
 *
 * $ curl http://localhost:7777/__admin/healthy-nodes/
 *
 * List complete node information:
 *
 * $ curl http://localhost:7777/__admin/load-nodes/
 *
 *
 * Show service stats and metrics
 *
 * $ curl http://localhost:8888/__stats/instance
 * </code>
 * </pre>
 */
public class HRServiceMain {

    public static void main(final String... args) throws Exception {

        /* Create the ManagedServiceBuilder which
           manages a clean shutdown, health, stats, etc. */
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder()
                        .setRootURI("/v1") //Defaults to services
                        .setPort(8888); //Defaults to 8080 or environment variable PORT


        /* Build the reactor. */
        final Reactor reactor = ReactorBuilder.reactorBuilder()
                .setDefaultTimeOut(1)
                .setTimeUnit(TimeUnit.SECONDS)
                .build();


        /* Build the service queue for DepartmentRepo. */
        final ServiceQueue departmentRepoServiceQueue =
                managedServiceBuilder
                        .createServiceBuilderForServiceObject(
                                new DepartmentRepo()).build();

        departmentRepoServiceQueue
                .startServiceQueue()
                .startCallBackHandler();

        /* Build the remote interface for department repo. */
        final DepartmentRepoAsync departmentRepoAsync =
                departmentRepoServiceQueue
                        .createProxy(DepartmentRepoAsync.class);


        /* Start the service. */
        managedServiceBuilder.addEndpointService(
                new HRService(reactor, departmentRepoAsync)) //Register HRService
                .getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health
           end-points and swagger meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("HR Server and Admin Server started");

    }
}
You can run this example by going to the Reactor Example on GitHub. There is even a REST client generated with Swagger to exercise this example: the HRService client generated with Swagger.
Thus far we have only handled making the callback from DepartmentRepo happen on the same thread as the ServiceQueue of HRService. We have not really handled the timeout case.
To handle the timeout case, we need to register an onTimeout handler with the callbackBuilder as follows.

Registering an onTimeOut to handle timeouts

    @RequestMapping(value ="/department/{departmentId}/", 
method =RequestMethod.POST)
publicvoid addDepartment(finalCallback<Boolean> callback,
@PathVariable("departmentId") Integer departmentId,
finalDepartment department) {


finalCallback<Boolean> repoCallback = reactor.callbackBuilder()
.setCallback(Boolean.class, succeeded -> {
departmentMap.put(departmentId, department);
callback.accept(succeeded);
}).setOnTimeout(() -> { //handle onTimeout
//callback.accept(false); // one way

// callback.onTimeout(); //another way
/* The best way. */
callback.onError(
newTimeoutException("Timeout can't add department "+ departmentId));
}).setOnError(error -> { //handle error handler
callback.onError(error);
}).build();

departmentRepoAsync.addDepartment(repoCallback, department);
Notice that now we handle not only the callback, but also whether there was a timeout. You could just return false by calling callback.accept(false), but since a timeout is an exceptional case, we opted to create an Exception and pass it to callback.onError(…). The other option is to call the default onTimeout handler, but by using onError to report the timeout, we are able to pass some additional context information about the timeout.
In addition to handling the timeout, we handle the error case. If we don't handle the timeout and the error case, then when there is a timeout or an error the REST client will hold on to the connection until the HTTP connection times out. We don't want the client to hold on to the connection for a long time, as that could lead to a cascading failure if a downstream service fails while upstream services or clients hold on to connections waiting for their HTTP connections to time out. Bottom line: handle timeouts and errors by sending a response to the client (even if the client is only an upstream service). Don't let the client hang. Prevent cascading failures.

Coordinating multiple calls

Let's take this a step further. Let's say that instead of calling one service when addDepartment gets called, we call three services: AuthService, DepartmentCassandraRepo and DepartmentSolrIndexer. First we want the HRService to call the AuthService to see if the user identified by userName is authorized to add a department. The doAddDepartment method gets called if auth succeeds. Remember, this is merely an example to show what async call coordination looks like. Then doAddDepartment calls the DepartmentCassandraRepo repo to store the department; if that is successful, it stores the department in the department cache (departmentMap), notifies the clientCallback, and then calls DepartmentSolrIndexer to index the department so that it is searchable.

AuthServiceImpl

package com.mammatustech.hr;

import io.advantageous.qbit.reactive.Callback;

public interface AuthService {

    void allowedToAddDepartment(Callback<Boolean> callback,
                                String username,
                                int departmentId);

}
...
package com.mammatustech.hr;

import io.advantageous.qbit.reactive.Callback;

public class AuthServiceImpl implements AuthService {

    public void allowedToAddDepartment(final Callback<Boolean> callback,
                                       final String username,
                                       final int departmentId) {

        ...
    }

}

DepartmentCassandraRepo to store departments

package com.mammatustech.hr;

import io.advantageous.boon.core.Sys;

import java.util.HashMap;
import java.util.Map;

/**
 * Represents a storage repo. Imagine this is talking to
 * Cassandra.
 */
public class DepartmentCassandraRepo {
    ...


    /**
     * Add a department.
     * @param callback callback which returns the success code async.
     * @param department department we are adding.
     */
    public void addDepartment(final Callback<Boolean> callback,
                              final Department department) {
        ...
    }
}

DepartmentSolrIndexer to index departments

package com.mammatustech.hr;

import io.advantageous.boon.core.Sys;

import java.util.HashMap;
import java.util.Map;

/**
 * Represents a SOLR indexer. Imagine this is talking to
 * SOLR.
 */
public class DepartmentSolrIndexer {
    ...


    /**
     * Add a department.
     * @param callback callback which returns the success code async.
     * @param department department we are adding.
     */
    public void addDepartment(final Callback<Boolean> callback,
                              final Department department) {
        ...
    }
}

HRService REST interface

/** This is the public REST interface to the Human Resources services.
 *
 */
@RequestMapping("/hr")
public class HRService {

    private final Map<Integer, Department> departmentMap =
            new HashMap<>();

    private final Reactor reactor;
    private final DepartmentRepoAsync solrIndexer;
    private final DepartmentRepoAsync cassandraStore;
    private final AuthService authService;

    /**
     * Construct a new HR REST Service.
     * @param reactor reactor
     * @param cassandraStore async interface to DepartmentStore
     * @param solrIndexer async interface to SOLR Service
     */
    public HRService(final Reactor reactor,
                     final DepartmentRepoAsync cassandraStore,
                     final DepartmentRepoAsync solrIndexer,
                     final AuthService authService) {
        this.reactor = reactor;
        this.reactor.addServiceToFlush(cassandraStore);
        this.reactor.addServiceToFlush(solrIndexer);
        this.reactor.addServiceToFlush(authService);
        this.cassandraStore = cassandraStore;
        this.solrIndexer = solrIndexer;
        this.authService = authService;
    }

    /**
     * Add a new department
     * @param clientCallback callback
     * @param departmentId department id
     * @param department department
     */
    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)
    public void addDepartment(final Callback<Boolean> clientCallback,
                              @PathVariable("departmentId") Integer departmentId,
                              final Department department,
                              @HeaderParam(value = "username", defaultValue = "noAuth")
                              final String userName) {

        final CallbackBuilder callbackBuilder = reactor.callbackBuilder()
                .setOnTimeout(() -> {
                    clientCallback.onError(
                            new TimeoutException("Timeout can't add department "
                                    + departmentId));
                }).setOnError(clientCallback::onError);


        authService.allowedToAddDepartment(callbackBuilder.setCallback(Boolean.class, allowed -> {
            if (allowed) {
                doAddDepartment(clientCallback, callbackBuilder, department);
            } else {
                clientCallback.onError(new SecurityException("Go away!"));
            }
        }).build(), userName, departmentId);


    }

    private void doAddDepartment(final Callback<Boolean> clientCallback,
                                 final CallbackBuilder callbackBuilder,
                                 final Department department) {

        final Callback<Boolean> callbackDeptRepo = callbackBuilder.setCallback(Boolean.class, addedDepartment -> {

            departmentMap.put((int) department.getId(), department);
            clientCallback.accept(addedDepartment);

            solrIndexer.addDepartment(indexedOk -> {
            }, department);
        }).build();

        cassandraStore.addDepartment(callbackDeptRepo, department);

    }

    /** Register to be notified when the service queue is idle, empty, or has hit its batch limit.
     */
    @QueueCallback({QueueCallbackType.EMPTY, QueueCallbackType.IDLE, QueueCallbackType.LIMIT})
    private void process() {

        /** Call the reactor to process callbacks. */
        reactor.process();
    }
}
The key to this is the shared callback builder.
/**
 * Add a new department
 * @param clientCallback callback
 * @param departmentId department id
 * @param department department
 */
@RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)
public void addDepartment(final Callback<Boolean> clientCallback,
                          @PathVariable("departmentId") Integer departmentId,
                          final Department department,
                          @HeaderParam(value = "username", defaultValue = "noAuth")
                          final String userName) {

    final CallbackBuilder callbackBuilder = reactor.callbackBuilder()
            .setOnTimeout(() -> {
                clientCallback.onError(
                        new TimeoutException("Timeout can't add department "
                                + departmentId));
            }).setOnError(clientCallback::onError);
Notice how we break the methods down and functionally decompose them so that things are easier to read; witness doAddDepartment and how it is called.

Breaking down callback handling

        authService.allowedToAddDepartment(callbackBuilder.setCallback(Boolean.class, allowed -> {
            if (allowed) {
                doAddDepartment(clientCallback, callbackBuilder, department);
            } else {
                clientCallback.onError(new SecurityException("Go away!"));
            }
        }).build(), userName, departmentId);
...

private void doAddDepartment(final Callback<Boolean> clientCallback,
                             final CallbackBuilder callbackBuilder,
                             final Department department) {

    final Callback<Boolean> callbackDeptRepo = callbackBuilder.setCallback(Boolean.class, addedDepartment -> {

        departmentMap.put((int) department.getId(), department);
        clientCallback.accept(addedDepartment);

        solrIndexer.addDepartment(indexedOk -> {
        }, department);
    }).build();

    cassandraStore.addDepartment(callbackDeptRepo, department);

}

Callback builder specifying timeouts

The CallbackBuilder allows you to specify timeouts for calls.

Specifying timeouts per CallbackBuilder

final CallbackBuilder callbackBuilder = reactor.callbackBuilder()
        .setOnTimeout(() -> {
            clientCallback.onError(
                    new TimeoutException("Timeout can't add department " + departmentId));
        }).setOnError(clientCallback::onError)
        .setTimeoutDuration(200)
        .setTimeoutTimeUnit(TimeUnit.MILLISECONDS);

Working with repeating tasks

@RequestMapping("/hr")
publicclassHRService {
...
/**
* Construct a new HR REST Service.
* @param reactor reactor
* @param cassandraStore async interface to DepartmentStore
* @param solrIndexer async interface to SOLR Service
*/
publicHRService(finalReactorreactor,
finalDepartmentRepoAsynccassandraStore,
finalDepartmentRepoAsyncsolrIndexer,
finalAuthServiceauthService) {
...
this.reactor.addRepeatingTask(1, TimeUnit.SECONDS, () -> {
manageCache();
});
}
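The manageCache method itself is not shown; as a purely hypothetical sketch, it might cap the size of the department cache once a second:

/** Hypothetical cache management; the eviction policy here is illustrative only. */
private void manageCache() {
    final int maxEntries = 10_000;
    if (departmentMap.size() > maxEntries) {
        departmentMap.clear(); // crude eviction; a real policy might use LRU or TTLs
    }
}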

Getting started with QBit Microservices Lib Batteries-Included Part 2

If you are new to QBit, it might make more sense to skim the overview first. We suggest reading the landing page of the QBit Microservices Lib's wiki for background on QBit. This will let you see the forest while the tutorials are inspecting the trees. There are also many documents linked from the wiki landing page as well as in the footer section of the tutorials.

This is part two in this tutorial series (see Part 1).
QBit is small and wicked fast, but it comes batteries included.
QBit comes with Service Discovery, health monitoring, realtime service stats, reactive async call management, job control, and an event bus built in. QBit is very similar to the typed Actor model, but streamlined for microservices and using idiomatic Java constructs.
Already, even the simple Hello World example has runtime stats support, health monitoring that can be integrated with service discovery, and more.
To expose end points to some of these services, we merely have to create an admin end point as follows:

Turning on the admin end point

public static void main(final String... args) {
    final ManagedServiceBuilder managedServiceBuilder =
            ManagedServiceBuilder.managedServiceBuilder().setRootURI("/root");

    /* Start the service. */
    managedServiceBuilder.addEndpointService(new HelloWorldService())
            .getEndpointServerBuilder()
            .build().startServer();

    /* Start the admin builder which exposes health end-points and meta data. */
    managedServiceBuilder.getAdminBuilder().build().startServer();

    System.out.println("Servers started");


}
We just turn on the admin.

Turning on Admin Support

/* Start the admin builder which exposes health end-points and meta data. */
managedServiceBuilder.getAdminBuilder().build().startServer();

Health

By default, when you use the ManagedServiceBuilder you get a health check on the same port as your main web service port. Microservices health and stats are very important in a Microservices Architecture. This is part of the batteries-included approach of the QBit Microservices lib: if it is important in a Microservices Architecture, it is supported by the QBit Java Microservices Lib.

To see health for this service:

$ curl http://localhost:8080/__health
"ok"
Your web service port is settable by calling setPort on the ManagedServiceBuilder or by passing in the environment variable PORT or WEB_PORT.
The health check is not a simple endpoint that always returns "ok" with a status 200. It actually checks whether all ServiceQueue services (internal and exposed) are still running. If a ServiceQueue or ServiceServerEndpoint does not check in with the health system, the health endpoint will return a 500 code with the message "fail". Services can also mark themselves as unhealthy, and mark themselves as healthy again if they recover.
The health check endpoint is on by default, but it can be disabled (managedServiceBuilder.setEnableLocalHealth(false)).
ManagedServiceBuilder is set up so that your microservice just runs as expected in EC2, Heroku or Docker. ManagedServiceBuilder makes it easy to hook up your service to Heroku health checks, Load Balancer health checks, Nagios, Consul, etc.
Side note: We strongly recommend you use Consul for microservices service discovery and health. If you are not familiar with Consul, we wrote a tutorial: Consul for Microservices Architecture Service Discovery and Health.
This health end point is on by default with or without the admin. It can be disabled.
Once you turn on the admin port, you can also see stats on the admin port as well as health. (You could, for example, disable the __health endpoint on the main port and still have access to health via the admin port.) We will have a full tutorial on health support; it is beyond the scope of this batteries-included microservice tutorial.

See health over admin back-end port.

$ curl http://localhost:7777/__admin/ok
"true"
The above returns "true" if all registered health systems nodes are healthy.
A node is a Service (SerivceQueue, service actor), ServiceBundle (a group of services), queue, or ServiceServerEndpoint (a ServiceBundle that is exposed via REST and WebSocket) that is being monitored. Later we will show you how to list all service nodes or endpoints. This example only has one.
Before we show the stats, let's hit the service a few times with wrk.

Hitting the service a few times so we can show stats

$ wrk -t 2 -d 2s -c 1000  http://localhost:8888/root/hello/hello
Running 2s test @ http://localhost:8888/root/hello/hello
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.18ms    7.78ms 308.89ms   99.66%
    Req/Sec    41.64k     7.89k   55.16k    57.50%
  165635 requests in 2.02s, 14.53MB read
  Socket errors: connect 0, read 250, write 0, timeout 0
Requests/sec:  82109.35
Transfer/sec:      7.20MB
Recall that wrk is a load testing tool.
Now let's ask the admin to show some stats.

Show service stats and metrics

$ curl http://localhost:8080/__stats/instance 

Output showing microservice stats and metrics

{
  "MetricsKV": {
    "apptitle.hostname.jvm.thread.peak.count": 33,
    "apptitle.hostname..jvm.mem.heap.free": 232961928,
    "apptitle.hostname.jvm.thread.count": 33,
    "apptitle.hostname.jvm.mem.non.heap.max": -1,
    "apptitle.hostname.jvm.os.load.level": 3,
    "apptitle.hostname.jvm.mem.heap.max": 3817865216,
    "HelloWorldService": 1,
    "apptitle.hostname.jvm.mem.heap.used": 24463480,
    "apptitle.hostname.jvm.mem.non.heap.used": 18423344,
    "apptitle.hostname.jvm.thread.daemon.count": 12,
    "apptitle.hostname.jvm.mem.total": 257425408,
    "apptitle.hostname.jvm.thread.started.count": 35
  },
  "MetricsMS": {
    "HelloWorldService.callTimeSample": [
      21039,
      14608,
      7701
    ]
  },
  "MetricsC": {
    "HelloWorldService.startBatchCount": 17900,
    "HelloWorldService.receiveCount": 447169
  },
  "version": 1
}
Local stats collection is on by default if you use the ManagedServiceBuilder. It can be disabled (managedServiceBuilder.setEnableLocalStats(false)).
This again is so you can see critical stats about your service before you set things up in Grafana or Graphite using StatsD, which we will cover later.
The stats system can be passive (StatsD, Grafana, Graphite). The QBit stats system can also be clustered, shared, and queryable, so that the stats become realtime analytics that can be reacted upon (reactive programming). We will cover those later in this tutorial series; they are beyond the scope of this tutorial.
Side Note: We have used the stats system in production to provide application key rate limiting with OAuth headers. We have also written custom plugins (the stats system is pluggable) to provide stats in preexisting stats systems that clients were already using. Every core service in QBit has an interface that can be replaced with your own implementation.
Notice: The stats are not just for the JVM but for every service actor (ServiceQueue) running in the system.
QBit Microservice Lib takes KPIs, runtime stats, and the health of microservices very seriously (see Microservices Monitoring for the thoughts behind the QBit approach). If you take microservices seriously, then you need a library that supports microservices monitoring and KPIs as core to its internals.
We will cover stats more when we cover setting up StatsD/Graphite. This microservice tutorial series will also show how to set up the application names and such (the keys and names of the stats).
This endpoint is nice if you want to implement a pull model for stats collection (versus a push model like StatsD) so that all services can publish stats, and background jobs can aggregate them and push them into a time series database using the REST/microservice friendly JSON interface. (We prefer StatsD, but depending on the number of nodes you are running that might be more difficult.)
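As a rough sketch of such a puller (the endpoint path is the one shown above; the host list and everything else is illustrative):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

/** Sketch: poll each service's stats endpoint and hand the JSON
 *  to an aggregator (not shown) that writes to a time series DB. */
public class StatsPuller {

    public static void main(String[] args) throws Exception {
        // Illustrative host list; in practice this might come from ServiceDiscovery.
        final String[] hosts = {"http://localhost:8080"};

        for (final String host : hosts) {
            final URL url = new URL(host + "/__stats/instance");
            final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setConnectTimeout(1_000);
            connection.setReadTimeout(1_000);

            try (InputStream in = connection.getInputStream();
                 Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name())) {
                final String json = scanner.useDelimiter("\\A").next();
                System.out.println(host + " -> " + json); // aggregate/push instead
            }
        }
    }
}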

Microservice API Gateway

QBit Microservices Lib creates on-the-fly proxies that can do high-speed WebSocket calls. QBit's WebSocket support, which uses JSON and ASCII, is often faster than many competitors' solutions that use binary protocols and regular sockets. This is one way that QBit provides support for Microservice API Gateways. (We are planning similar support for on-the-fly REST interfaces.)
In addition to the WebSocket support for Microservice API Gateways, QBit provides access via REST and Swagger. QBit tracks a rich set of meta-data about the microservice endpoints, which it can expose via its Swagger support. Once the API is exposed as Swagger, it is easy to generate Python, Ruby, Scala, or Objective-C clients. This is the very definition of a Microservice API gateway.
Swagger provides a spec and a RESTful representation for the meta-data about your Microservice API gateways. Swagger has the largest ecosystem of API tooling (a lot of it open source) and is supported by thousands of developers. With a Swagger-enabled API, QBit gets interactive documentation, client SDK generation, API Gateway documentation, and additional discoverability. We will cover this more when we cover API gateways.

To see swagger file for this service:

$ curl http://localhost:7777/__admin/meta/

Output Swagger

{
  "swagger": "2.0",
  "info": {
    "title": "application title goes here",
    "description": "Description not set",
    "contact": {
      "name": "ContactName not set",
      "url": "Contact URL not set",
      "email": "no.contact.email@set.me.please.com"
    },
    "version": "0.1-NOT-SET",
    "license": {
      "name": "licenseName not set",
      "url": "http://www.license.url.com/not/set/"
    }
  },
  "host": "localhost:8888",
  "basePath": "/root",
  "schemes": [
    "http",
    "https",
    "wss",
    "ws"
  ],
  "consumes": [
    "application/json"
  ],
  "produces": [
    "application/json"
  ],
  "paths": {
    "/hello/hello": {
      "get": {
        "operationId": "hello",
        "summary": "no summary",
        "description": "no description",
        "produces": [
          "application/json"
        ],
        "responses": {
          "200": {
            "description": "no return description",
            "schema": {
              "type": "string"
            }
          }
        }
      }
    }
  }
}
Most QBit REST features can be exposed via Swagger. We support the full array of Swagger features so you can develop in your polyglot Microservice environment.

YAML version of Microservice Hello World interface in Swagger

swagger: '2.0'
info:
  title: application title goes here
  description: Description not set
  contact:
    name: ContactName not set
    url: Contact URL not set
    email: no.contact.email@set.me.please.com
  version: 0.1-NOT-SET
  license:
    name: licenseName not set
    url: 'http://www.license.url.com/not/set/'
host: 'localhost:8888'
basePath: /root
schemes:
  - http
  - https
  - wss
  - ws
consumes:
  - application/json
produces:
  - application/json
paths:
  /hello/hello:
    get:
      operationId: hello
      summary: no summary
      description: no description
      produces:
        - application/json
      responses:
        '200':
          description: no return description
          schema:
            type: string
You can import the JSON file into a Swagger Editor and generate all sorts of clients.
Later we will show more about this and how to set the description, summary and return descriptions as well as the other data and documents about the microservice that we can expose via Swagger.
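For example, you might save the Swagger JSON and feed it to the swagger-codegen CLI (shown as a sketch; the target language and output directory are illustrative):

$ curl http://localhost:7777/__admin/meta/ > swagger.json
$ swagger-codegen generate -i swagger.json -l python -o ./hr-client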
We also have some Swagger-generated examples for our TODO microservice example and our RESTful Microservice Resource Example.

More about the admin

Show all nodes

$ curl http://localhost:7777/__admin/all-nodes/

All of the services running in this JVM

["HelloWorldService"]
There is only one at this point.
We can show all of the running nodes (service actors, REST endpoints, service bundles) from this admin endpoint.
If you want to see just the healthy nodes:

List healthy nodes by name:

$ curl http://localhost:7777/__admin/healthy-nodes/
["HelloWorldService"]
Keep in mind that the QBit health system integrates with the QBit ServiceDiscovery service, so if a service becomes unhealthy it can be unregistered in the ServiceDiscovery system. You can see this in action with QBit's clustered event bus, which uses ServiceDiscovery and its Consul implementation to provide a clustered event bus that removes nodes and discovers nodes via Consul. Nodes can be removed if they become unhealthy. It works and has been used in production for quite some time.
You can also get the complete information about the health of nodes.

List complete node information:

$ curl http://localhost:7777/__admin/load-nodes/

Complete information about Health of all services

[
{
"name": "HelloWorldService",
"ttlInMS": 10000,
"lastCheckIn": 1440877933151,
"status": "PASS"
}
]

Conclusion

The QBit Microservice lib comes with batteries-included microservice development support, from real-time queryable stats for real-time analytics, to health monitoring, polyglot API gateways, and service discovery. QBit supports a true Microservice Architecture.