
Getting started with QBit Microservice Lib Part 1

QBit is a reactive programming lib for building microservices with JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic REST and WebSocket based, cloud-friendly web services. QBit is SOA evolved for mobile and cloud computing. QBit is a small, lightweight lib that provides support for ServiceDiscovery, Health, a reactive StatService, typed events, and Java idiomatic reactive programming for Microservices.

If you are new to QBit, it might make more sense to skim the overview first. We suggest reading the landing page of the QBit Microservices Lib's wiki for background on QBit. This will let you see the forest while the tutorials are inspecting the trees. There are also a lot of documents linked off of the wiki landing page as well as in the footer section of the tutorials.

Getting started with QBit Microservice Lib Part 1

QBit is small and wicked fast. This example uses a Gradle build, so if you are new to Gradle, learn Gradle first.

Setup gradle build file

group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'application'

mainClassName = "com.mammatustech.HelloWorldService"

compileJava {
    sourceCompatibility = 1.8
}

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'io.advantageous.qbit', name: 'qbit-admin',
            version: '0.9.0-M1'
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx',
            version: '0.9.0-M1'
}

Java code for hello world QBit

package com.mammatustech;

import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.annotation.RequestMapping;

@RequestMapping("/hello")
public class HelloWorldService {

    @RequestMapping("/hello")
    public String hello() {
        return "hello " + System.currentTimeMillis();
    }

    public static void main(final String... args) {
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder()
                        .setRootURI("/root");

        /* Start the service. */
        managedServiceBuilder.addEndpointService(new HelloWorldService())
                .getEndpointServerBuilder()
                .build().startServer();
    }
}
Run the app.

Run the app

$ gradle run

Hit the app with curl

$ curl http://localhost:8080/root/hello/hello
"hello 1440742489358"

Hit the app a lot with wrk

$ wrk -d 5s -t 2 -c 1000 http://localhost:8080/root/hello/hello
Running 5s test @ http://localhost:8080/root/hello/hello
2 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 17.65ms 22.96ms 427.36ms 97.57%
Req/Sec 33.33k 7.75k 43.10k 75.00%
319154 requests in 5.06s, 28.00MB read
Requests/sec: 63083.97
Transfer/sec: 5.53MB

Conclusion

You can find more information by reading the tutorials and looking at the wiki. If you are new to QBit, please take some time to skim the overview of QBit Microservices. After you skim the overview, look into the QBit batteries-included microservice lib to get a feel for how QBit supports the full ethos of Microservices Architecture (as it supports monitoring, health checks, service discovery, API gateways, etc.). Check out the introduction to the QBit reactive programming model to get a feel for the depth of QBit and how QBit supports coordinating async callbacks. To get a real feel for what it means to be an idiomatic Java lib for microservices, read about the Event Bus and typed event bus. Keep in mind that QBit is pluggable: you can plug in additional event buses (Kafka, Aeron, etc.) as well as plug in different protocol parsers (Thrift, MessagePack) or JSON parsers (Jackson, GSON, etc.), but still keep the same idiomatic Java programming model. This is just the start of your QBit journey. There is much to explore.

Introduction to Apache Spark Part 1

The reason people are so interested in Apache Spark is that it puts the power of Hadoop in the hands of developers. It is easier to set up an Apache Spark cluster than a Hadoop cluster. It runs faster. And it is a lot easier to program. It puts the promise and power of Big Data and real-time analysis in the hands of the masses. With that in mind, let's introduce Apache Spark in this quick tutorial.

Google search interest for Apache Spark has skyrocketed recently, indicating a wide range of interest. (108,000 searches in July according to the Google AdWords tool, about ten times more than Microservices.)


Apache Spark, an open source cluster computing system, is growing fast. Apache Spark has a growing ecosystem of libraries and frameworks that enable advanced data analytics. Apache Spark's rapid success is due to its power and ease of use. It is more productive and has a faster runtime than the typical MapReduce-based analysis. Apache Spark provides in-memory, distributed computing. It has APIs in Java, Scala, Python, and R. The Spark ecosystem is shown below.

See the full Apache Spark Tutorial.


QBit Microservices Java Lib RESTful and Swagger-ific API Gateway - Tutorial Part 3



If you are new to QBit, it might make more sense to skim the overview first. We suggest reading the landing page of the QBit Microservices Lib's wiki for background on QBit. This will let you see the forest while the tutorials are inspecting the trees. There are also a lot of documents linked off of the wiki landing page as well as in the footer section of the tutorials.

QBit Microservices Java Lib RESTful and Swagger-ific API Gateway Support - Part 3

QBit Microservices Lib provides three ways to remotely talk to microservices out of the box: WebSocket, HTTP REST, and the QBit event bus. (Communication is also pluggable, so it is easy to send QBit calls or events over a message bus or other means.)
This tutorial is going to focus on QBit and its REST support. It covers QBit REST support and its support for runtime stats and API Gateway support with Swagger. It just works.

Here is an example program that we are going to examine.

REST API for Microservices

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;

@RequestMapping(value = "/todo-service", description = "Todo service")
public class TodoService {

    private final Map<String, Todo> todoMap = new TreeMap<>();

    @RequestMapping(value = "/todo", method = RequestMethod.POST,
            description = "add a todo item to the list", summary = "adds todo",
            returnDescription = "returns true if successful")
    public boolean add(final Todo todo) {
        todoMap.put(todo.getId(), todo);
        return true;
    }

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE,
            description = "Deletes an item by id", summary = "delete a todo item")
    public void remove(@RequestParam(value = "id", description = "id of Todo item to delete")
                       final String id) {
        todoMap.remove(id);
    }

    @RequestMapping(value = "/todo", method = RequestMethod.GET,
            description = "List all items in the system", summary = "list items",
            returnDescription = "return list of all items in system")
    public List<Todo> list() {
        return new ArrayList<>(todoMap.values());
    }
}
Let's start with the add method.

Add TODO

    @RequestMapping(value = "/todo", method = RequestMethod.POST,
            description = "add a todo item to the list", summary = "adds todo",
            returnDescription = "returns true if successful")
    public boolean add(final Todo todo) {
        todoMap.put(todo.getId(), todo);
        return true;
    }
The RequestMapping is inspired from Spring MVC's REST support. In fact, if you use Spring's annotation, QBit can use it as is.

RequestMapping annotation

package io.advantageous.qbit.annotation;
...

/**
 * Used to map Service method to URIs in an HTTP like protocol.
 * @author Rick Hightower
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(value = {ElementType.METHOD, ElementType.TYPE})
public @interface RequestMapping {

    /**
     * Primary mapping expressed by this annotation.
     * For HTTP, this would be the URI, or the part of the URI after the parent URI context,
     * be it a Servlet app context or some other parent context.
     *
     * @return a request mapping, URIs really
     */
    String[] value() default {};

    /**
     * HTTP request methods must be:
     * GET, POST, or WebSocket.
     *
     * @return RequestMethods that are supported by this end point
     */
    RequestMethod[] method() default {RequestMethod.GET};

    /**
     * Used to document endpoint
     * @return description
     */
    String description() default "no description";

    /**
     * Used to document endpoint
     * @return return description
     */
    String returnDescription() default "no return description";

    /**
     * Used to document endpoint
     * @return summary
     */
    String summary() default "no summary";
}
Both Boon and QBit have the same philosophy on annotations. You don't have to use our annotations. You can create your own and as long as the class name and attributes match, we can use the annotations that you provide. This way if you decide to switch away from Boon or QBit, it is easier and you can keep your annotations.
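As a minimal sketch of that idea (the package name is hypothetical, and the attribute names simply mirror the QBit annotation shown above), an annotation of your own that QBit could pick up by name might look like this:

package com.example.annotations;  // hypothetical package, not part of QBit or Boon

import java.lang.annotation.*;

/**
 * Your own RequestMapping annotation. Because the simple class name and the
 * attribute names match, QBit can read it reflectively just like its own annotation.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface RequestMapping {

    String[] value() default {};

    String description() default "no description";
}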
QBit added description, summary, and returnDescription attributes to our RequestMapping annotation. QBit does this to capture the meta-data and expose it for API Gateway style client generation and documentation using tools like Swagger. QBit is not tied to Swagger. It has its own service meta-data format, but QBit converts its service meta-data to the Swagger format to get access to the wealth of Swagger tools and clients. Swagger, and tools like Swagger, make documenting your API easy and accessible. QBit fully embraces generating service meta-data and Swagger because it embraces the concepts behind building Microservice API Gateways, which are an essential part of Microservices Architecture to support web and mobile Microservices.
The add method specifies the HTTP method as POST, and the URI is mapped to "/todo".

Add TODO

    @RequestMapping(value = "/todo", method = RequestMethod.POST, ...)
    public boolean add(final Todo todo) {
        todoMap.put(todo.getId(), todo);
        return true;
    }
You can specify the HTTP methods POST, GET, DELETE, PUT, etc. Generally speaking, you use GET to read, POST to create, PUT to update or add to a list, and DELETE to destroy, delete, or remove. It is a bad idea to use a GET to modify something (just in case a tool crawls your web service).
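For example, here is a hedged sketch of what a PUT mapping for updating an existing item inside the TodoService above might look like (the update method itself is illustrative and not part of the tutorial's service):

    // Hypothetical update operation: PUT modifies an existing Todo item.
    @RequestMapping(value = "/todo", method = RequestMethod.PUT,
            description = "update an existing todo item", summary = "update todo",
            returnDescription = "returns true if the item existed and was replaced")
    public boolean update(final Todo todo) {
        // Map.replace only succeeds if the id is already present.
        return todoMap.replace(todo.getId(), todo) != null;
    }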
The next method is a DELETE operation which removes a Todo item.

Remove TODO

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE, ...)
    public void remove(@RequestParam(value = "id", description = "id of Todo item to delete")
                       final String id) {
        todoMap.remove(id);
    }
Notice that the remove method also takes an id of the Todo item, which we want to remove from the map. The @RequestParam is also modeled after the one from Spring MVC as that is the one that most people are familiar with. It just pulls the id from a request parameter.
Notice that the method returns void. Whenever you provide a method that returns void, QBit does not wait for the method to return; it sends an HTTP response code 202 Accepted right away. If you want the service to capture any exceptions that might occur and send the exception message, then return anything but void from the method call. Returning void means fire and forget, which is very fast, but there is no way to notify the client of errors.
Let's show an example that would report errors.

Remove TODO with error handling and a return

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public boolean remove(final @RequestParam("id") String id) {
        Todo remove = todoMap.remove(id);
        return remove != null;
    }
If this operation for any reason throws an exception, then the client will get the exception message and an HTTP response code of 500 Internal Server Error.
In a real service you might have to call a downstream service to save the actual Todo item in Cassandra or a relational database. In this case, you would use a Callback as follows:

Callback version of remove

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback,
                       final @RequestParam("id") String id) {
        Todo remove = todoMap.remove(id);
        callback.accept(remove != null);
    }
A Callback allows you to respond to a request asynchronously so that the method call does not block the IO threads. We will cover callbacks in more detail later in the tutorial series.
Side Note: To learn more about Callbacks, go to the QBit wiki and search for Callback in the Pages sidebar. To see Cassandra and SOLR async examples of Callbacks, go to reactively handling async callbacks with QBit Reactive Microservices.
Lastly, in this example we have the list method.

List Todos

    @RequestMapping(value = "/todo", method = RequestMethod.GET, ...)
    public List<Todo> list() {
        return new ArrayList<>(todoMap.values());
    }
Since the list method is not modifying the Todo items but merely returning them, we can return the entire list.

Using Callbacks

In general, you use a Callback when handling downstream services that may be doing additional processing or IO in separate services, so that you can return the result to the original client in a non-blocking way.
This service could be rewritten using Callbacks as follows:

REST API with callback for Microservices

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;
import io.advantageous.qbit.reactive.Callback;

import java.util.*;


@RequestMapping("/todo-service")
public class TodoService {

    private final Map<String, Todo> todoMap = new TreeMap<>();

    @RequestMapping(value = "/todo", method = RequestMethod.POST)
    public void add(final Callback<Boolean> callback, final Todo todo) {
        todoMap.put(todo.getId(), todo);
        callback.accept(true);
    }

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback,
                       final @RequestParam("id") String id) {
        Todo remove = todoMap.remove(id);
        callback.accept(remove != null);
    }

    @RequestMapping(value = "/todo", method = RequestMethod.GET)
    public void list(final Callback<ArrayList<Todo>> callback) {
        callback.accept(new ArrayList<>(todoMap.values()));
    }
}
Again callbacks would make more sense if we were talking to a downstream service that did additional IO as follows:

Callback

    @RequestMapping(value = "/todo", method = RequestMethod.POST)
    public void add(final Callback<Boolean> callback, final Todo todo) {
        todoMap.put(todo.getId(), todo);
        todoRepo.add(callback, todo);
    }
Of course there are a lot more details to cover than this, and we will cover them in due course.
Remember: To learn more about Callbacks, go to the QBit wiki and search for Callback in the Pages sidebar. To see Cassandra and SOLR async examples of Callbacks, go to reactively handling async callbacks with QBit Reactive Microservices. Or just continue reading this tutorial series.

How to run the example

You can find the complete code listing for the microservice Todo service with Callbacks.
Running the todo service with gradle is shown as follows:

Running the todo service

$ gradle run
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:run
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Todo Server and Admin Server started
The TODO service has a gradle build file that uses the Gradle application plugin as follows:
group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'application'

mainClassName = "com.mammatustech.todo.TodoServiceMain"

compileJava {
    sourceCompatibility = 1.8
}

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'io.advantageous.qbit', name: 'qbit-admin', version: '0.9.0.M2'
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.9.0.M2'
}
In the main method that launches the app we setup some meta-data about the service so we can export it later via Swagger.

TodoServiceMain

package com.mammatustech.todo;

import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.meta.builder.ContextMetaBuilder;

public class TodoServiceMain {

    public static void main(final String... args) throws Exception {

        /* Create the ManagedServiceBuilder which manages a clean shutdown, health, stats, etc. */
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder()
                        .setRootURI("/v1")   // Defaults to /services
                        .setPort(8888);      // Defaults to 8080 or environment variable PORT

        /* Context meta builder to document this endpoint. */
        ContextMetaBuilder contextMetaBuilder = managedServiceBuilder.getContextMetaBuilder();
        contextMetaBuilder.setContactEmail("lunati-not-real-email@gmail.com");
        contextMetaBuilder.setDescription("A great service to show building a todo list");
        contextMetaBuilder.setContactURL("http://www.bwbl.lunati/master/of/rodeo");
        contextMetaBuilder.setContactName("Buffalo Wild Bill Lunati");
        contextMetaBuilder.setLicenseName("Commercial");
        contextMetaBuilder.setLicenseURL("http://www.canttouchthis.com");
        contextMetaBuilder.setTitle("Todo Title");
        contextMetaBuilder.setVersion("47.0");

        managedServiceBuilder.getStatsDReplicatorBuilder().setHost("192.168.59.103");
        managedServiceBuilder.setEnableStatsD(true);

        /* Start the service. */
        managedServiceBuilder.addEndpointService(new TodoService())  // Register TodoService
                .getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health end-points and swagger meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Todo Server and Admin Server started");
    }
}
The Todo class is just a POJO.

Todo class

package com.mammatustech.todo;

public class Todo {

    private String id;

    private final String name;
    private final String description;
    private final long createTime;

    public Todo(String name, String description, long createTime) {
        this.name = name;
        this.description = description;
        this.createTime = createTime;

        this.id = name + "::" + createTime;
    }

    public String getId() {
        if (id == null) {
            this.id = name + "::" + createTime;
        }
        return id;
    }

    public String getName() {
        return name;
    }

    public String getDescription() {
        return description;
    }

    public long getCreateTime() {
        return createTime;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Todo todo = (Todo) o;

        if (createTime != todo.createTime) return false;
        return !(name != null ? !name.equals(todo.name) : todo.name != null);
    }

    @Override
    public int hashCode() {
        int result = name != null ? name.hashCode() : 0;
        result = 31 * result + (int) (createTime ^ (createTime >>> 32));
        return result;
    }
}
The TodoService is as follows.

TodoService without callbacks

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;

import java.util.*;


/**
 * Default port for admin is 7777.
 * Default port for main endpoint is 8080.
 *
 * <pre>
 * <code>
 *
 * Access the service:
 *
 * $ curl http://localhost:8888/v1/...
 *
 *
 * To see the swagger file for this service:
 *
 * $ curl http://localhost:7777/__admin/meta/
 *
 * To see health for this service:
 *
 * $ curl http://localhost:8888/__health
 * Returns "ok" if all registered health systems are healthy.
 *
 * OR if same port endpoint health is disabled then:
 *
 * $ curl http://localhost:7777/__admin/ok
 * Returns "true" if all registered health systems are healthy.
 *
 *
 * A node is a service, service bundle, queue, or server endpoint that is being monitored.
 *
 * List all service nodes or endpoints:
 *
 * $ curl http://localhost:7777/__admin/all-nodes/
 *
 *
 * List healthy nodes by name:
 *
 * $ curl http://localhost:7777/__admin/healthy-nodes/
 *
 * List complete node information:
 *
 * $ curl http://localhost:7777/__admin/load-nodes/
 *
 *
 * Show service stats and metrics:
 *
 * $ curl http://localhost:8888/__stats/instance
 * </code>
 * </pre>
 */
@RequestMapping(value = "/todo-service", description = "Todo service")
public class TodoService {

    private final Map<String, Todo> todoMap = new TreeMap<>();

    @RequestMapping(value = "/todo", method = RequestMethod.POST,
            description = "add a todo item to the list", summary = "adds todo",
            returnDescription = "returns true if successful")
    public boolean add(final Todo todo) {
        todoMap.put(todo.getId(), todo);
        return true;
    }

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE,
            description = "Deletes an item by id", summary = "delete a todo item")
    public void remove(@RequestParam(value = "id", description = "id of Todo item to delete")
                       final String id) {
        todoMap.remove(id);
    }

    @RequestMapping(value = "/todo", method = RequestMethod.GET,
            description = "List all items in the system", summary = "list items",
            returnDescription = "return list of all items in system")
    public List<Todo> list() {
        return new ArrayList<>(todoMap.values());
    }
}

QBit comes with a lib to easily (and quickly) make REST calls.

Calling todo service from Java using QBit's http client support

package com.mammatustech.todo;

import io.advantageous.boon.json.JsonFactory;
import io.advantageous.qbit.http.HTTP;

public class HttpClient {

    public static void main(final String... args) throws Exception {

        for (int index = 0; index < 100; index++) {

            HTTP.postJSON("http://localhost:8888/v1/todo-service/todo",
                    JsonFactory.toJson(new Todo("name" + index,
                            "desc" + index, System.currentTimeMillis())));
            System.out.print(".");
        }
    }
}
You can also make async calls with QBit's http client lib.

Swagger generation

You can import the JSON meta data into Swagger.
$ curl http://localhost:7777/__admin/meta/
{
"swagger":"2.0",
"info": {
"title":"application title goes here",
"description":"A great service to show building a todo list",
"contact": {
"name":"Buffalo Wild Bill Lunati",
"url":"http://www.bwbl.lunati/master/of/rodeo",
"email":"lunati-not-real-email@gmail.com"
},
"version":"47.0",
"license": {
"name":"Commercial",
"url":"http://www.canttouchthis.com"
}
},
"host":"localhost:8888",
"basePath":"/v1",
"schemes": [
"http",
"https",
"wss",
"ws"
],
"consumes": [
"application/json"
],
"definitions": {
"Todo": {
"properties": {
"id": {
"type":"string"
},
"name": {
"type":"string"
},
"description": {
"type":"string"
},
"createTime": {
"type":"integer",
"format":"int64"
}
}
}
},
"produces": [
"application/json"
],
"paths": {
"/todo-service/todo": {
"get": {
"operationId":"list",
"summary":"list items",
"description":"List all items in the system",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"return list of all items in system",
"schema": {
"type":"array",
"items": {
"$ref":"#/definitions/Todo"
}
}
}
}
},
"post": {
"operationId":"add",
"summary":"adds todo",
"description":"add a todo item to the list",
"produces": [
"application/json"
],
"parameters": [
{
"name":"body",
"in":"body",
"required":true,
"schema": {
"$ref":"#/definitions/Todo"
}
}
],
"responses": {
"200": {
"description":"returns true if successful",
"schema": {
"type":"boolean"
}
}
}
},
"delete": {
"operationId":"remove",
"summary":"delete a todo item",
"description":"Deletes an item by id",
"parameters": [
{
"name":"id",
"in":"query",
"description":"id of Todo item to delete",
"type":"string"
}
],
"responses": {
"202": {
"description":"returns success",
"schema": {
"type":"string"
}
}
}
}
}
}
}
The Swagger JSON meta-data can be imported into the swagger editor.
swagger: '2.0'
info:
  title: Todo Title
  description: A great service to show building a todo list
  contact:
    name: Buffalo Wild Bill Lunati
    url: 'http://www.bwbl.lunati/master/of/rodeo'
    email: lunati-not-real-email@gmail.com
  version: '47.0'
  license:
    name: Commercial
    url: 'http://www.canttouchthis.com'
host: 'localhost:8888'
basePath: /v1
schemes:
  - http
  - https
  - wss
  - ws
consumes:
  - application/json
definitions:
  Todo:
    properties:
      id:
        type: string
      name:
        type: string
      description:
        type: string
      createTime:
        type: integer
        format: int64
produces:
  - application/json
paths:
  /todo-service/todo:
    get:
      operationId: list
      summary: list items
      description: List all items in the system
      produces:
        - application/json
      responses:
        '200':
          description: return list of all items in system
          schema:
            type: array
            items:
              $ref: '#/definitions/Todo'
    post:
      operationId: add
      summary: adds todo
      description: add a todo item to the list
      produces:
        - application/json
      parameters:
        - name: body
          in: body
          required: true
          schema:
            $ref: '#/definitions/Todo'
      responses:
        '200':
          description: returns true if successful
          schema:
            type: boolean
    delete:
      operationId: remove
      summary: delete a todo item
      description: Deletes an item by id
      parameters:
        - name: id
          in: query
          description: id of Todo item to delete
          type: string
      responses:
        '202':
          description: returns success
          schema:
            type: string
You can see that the descriptions, summary, and returnDescription from the @RequestMapping and @RequestParam annotations are exposed in the generated Swagger documentation.
You can also generate working client libs from Swagger using the JSON that QBit provides. We did this, and it works for both the Callback version of the TODO list and the non-callback version. The source code for the generated Swagger REST TODO client is on GitHub.

Using Swagger generated client to talk to TODO service

DefaultApi defaultApi = new DefaultApi();

Todo todo = new Todo();
todo.setDescription("Show demo to group");
todo.setName("Show demo");
todo.setCreateTime(123L);

defaultApi.add(todo);

List<Todo> list = defaultApi.list();

list.forEach(new Consumer<Todo>() {
    @Override
    public void accept(Todo todo) {
        System.out.println(todo);
    }
});

Curl, Stats and health

You can access the service via curl commands.

Getting a list of TODO items using REST curl call

$ curl http://localhost:8888/v1/todo-service/todo
[{"id":"name0::1441040038414","name":"name0","description":"desc0",
"createTime":1441040038414}, ...
You can inquire about the health of its nodes using the admin port.

Using admin port to check Todo services health

$ curl http://localhost:7777/__admin/load-nodes/
[{"name":"TodoService","ttlInMS":10000,"lastCheckIn":1441040291968,"status":"PASS"}]
Remember that this will list all service actors (ServiceQueues) and ServiceServerEndpoints (REST and WebSocket services).
If you are looking for a yes/no answer to health for Nagios, Consul, or some load balancer, then you can use:

Health status yes or no?

 $ curl http://localhost:8888/__health
"ok"
Or use the admin port version of this:

Health status yes or no? on admin port

$ curl http://localhost:7777/__admin/ok
true
To get the stats (after I ran some load testing):

Getting runtime stats of the TODO microservice

$  curl http://localhost:8888/__stats/instance

Output of getting runtime stats of the TODO microservice

{
"MetricsKV": {
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free":219638816,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.std":0.5,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.count":8,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.median":21867208,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.min":60,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.std":1023408.06,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.median":300,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.peak.count":32,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.mean":21437416,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.std":18688268,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.max":3817865216,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.mean":61136300,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count":31,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.count":9,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.std":1.1659224,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.min":173839288,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level":4,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.std":2.5819888,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.min":19162464,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes":9,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.mean":300,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.total":257425408,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count":35,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.median":32,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.count":2,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.max":35,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.mean":5,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.median":4,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.median":191758000,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.std":154.91933,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.max":83586120,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.count":9,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.max":32,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.mean":4.125,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds":540,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used":37786592,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.median":65667408,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used":22026992,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.count":10,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.median":5,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.max":22026992,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.mean":31.5,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.count.min":31,
"TodoService":1,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.max":238262944,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.min":1,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.mean":196289104,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.count":10,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.median":35,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.min":34,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.used.min":18501664,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.count":2,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.max":6,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.minutes.max":9,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.daemon.count":11,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.free.count":10,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.std":0.5,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.non.heap.max":-1,
"todo.title.millenniumfalcon.mammatustech.com.jvm.os.load.level.min":2,
"todo.title.millenniumfalcon.mammatustech.com.jvm.up.time.seconds.max":540,
"todo.title.millenniumfalcon.mammatustech.com.jvm.mem.heap.used.std":18688268,
"todo.title.millenniumfalcon.mammatustech.com.jvm.thread.started.count.mean":34.5
},
"MetricsMS": {
"TodoService.callTimeSample": [
167643,
9502
],
"todo.title.millenniumfalcon.mammatustech.com.jvm.gc.collector.ps.scavengecollection.time": [
11,
11
]
},
"MetricsC": {
"todo.title.millenniumfalcon.mammatustech.com.jvm.gc.collector.ps.scavengecollection.count":2,
"TodoService.startBatchCount":175,
"TodoService.receiveCount":260
},
"version":1
}
Remember that you can disable this local collection of stats (read the batteries included tutorial for more information).

Conclusion

We covered how to expose a service by mapping the service and its methods to URIs.
Next up we will show how to create a resourceful REST scheme.

QBit, Microservices REST lib, working with Map<String, Object> for types

When working with REST endpoints, at times you want extra capability from one REST endpoint. To support polymorphic subtypes or operations, you may want to send a wrapper Map that gives some context information, for example information about what types the sub-maps are.
Let's show a quick example of this using the TodoService from the previous examples. The complete code for this example can be found here Todo Map.

Example of using a Map instead of a strongly typed POJO

import io.advantageous.boon.core.reflection.MapperSimple;


    @RequestMapping(value = "/todo", method = RequestMethod.POST,
            description = "add a todo item to the list", summary = "adds todo",
            returnDescription = "returns true if successful")
    public boolean add(final Map<String, Object> todoMap) {

        String id = todoMap.get("id").toString();

        String name = (String) todoMap.get("name");

        String description = (String) todoMap.get("description");

        Long createTime = (Long) todoMap.get("createTime");

        Map<String, Object> parent = (Map<String, Object>) todoMap.get("parent");

        MapperSimple simple = new MapperSimple();

        Category category = simple.fromMap(parent, Category.class);
        Todo todo = new Todo(name, description, createTime, category);

        todoMap.put(id, todo);

        return true;
    }
Notice that the add method now takes a Map<String, Object> instead of a Todo POJO. We can pull items out of the map, which equate to properties of a Todo item.
For this example, I changed the Todo to have a property of type Category.

New Property of type Category

@Description("A `TodoItem`.")
public class Todo {

    @Description("Holds the description of the todo item")
    private final String description;

    @Description("Holds the name of the todo item")
    private final String name;

    private String id;

    private final long createTime;

    private final Category parent;

    ...
}

public class Category {

    private final String name;

    public Category(String name) {
        this.name = name;
    }
}
The category payload is expressed as another JSON object, aka a Java Map: Map<String, Object> parent = (Map<String, Object>) todoMap.get("parent");.
We can pull out the sub map which represents the category and use a MapperSimple to convert the map into a POJO (Category).

Converting a map to a POJO


Map<String, Object> parent = (Map<String, Object>) todoMap.get("parent");

MapperSimple simple = new MapperSimple();

Category category = simple.fromMap(parent, Category.class);
QBit has both a MapperSimple and a MapperComplex, which both implement Mapper. A Mapper converts maps into objects. The MapperComplex allows you to ignore properties, decide how to read fields, etc.
This example was kept fairly simple. You could imagine a more involved example where you read fields and then decide which POJO to instantiate from a list of subclasses.
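As a hedged sketch of that idea (the Task and Appointment classes and the "kind" discriminator field are made up for illustration; only MapperSimple.fromMap is taken from the example above), you could dispatch on a field in the map before converting it:

// Hypothetical polymorphic dispatch: inspect a discriminator field in the
// incoming map, then convert the sub-map to the matching POJO with MapperSimple.
Map<String, Object> item = (Map<String, Object>) todoMap.get("item");
MapperSimple mapper = new MapperSimple();

final Object converted;
switch (item.get("kind").toString()) {                     // "kind" is an assumed field
    case "task":
        converted = mapper.fromMap(item, Task.class);        // assumed subclass
        break;
    case "appointment":
        converted = mapper.fromMap(item, Appointment.class); // assumed subclass
        break;
    default:
        throw new IllegalArgumentException("Unknown kind: " + item.get("kind"));
}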

Do you want to understand QBit microservices lib? But not sure where to start?


Start with the overview:


Look at the tutorials:


(Coming soon Scala/sbt versions of all of the samples which are now Java/gradle.)

You want to be a pro at QBit Microservices...

READ THIS:


Now read this:




QBit supports swagger, websocket, REST, service discovery, high-speed queryable stats (that can be replicated to StatsD), etc. 
QBit is built from the ground up to support Java/Scala microservices: async, service discovery, reactive metrics, api gateway development, distributed health checks, and run well in Cloud/PaaS/Container environments, etc.


Coming soon integration with Kafka (event bus, and async method call routing) and Redis.
(Currently distributed event bus works with websocket and Consul or PUSH JSON service discovery).


Introduction to Apache Spark Part 1 for Real-Time Data Analytics


Introduction to Apache Spark Part 1

By Fadi Maalouli and Rick Hightower

Overview

Apache Spark, an open source cluster computing system, is growing fast. Apache Spark has a growing ecosystem of libraries and frameworks that enable advanced data analytics. Apache Spark's rapid success is due to its power and ease of use. It is more productive and has a faster runtime than the typical MapReduce-based analysis. Apache Spark provides in-memory, distributed computing. It has APIs in Java, Scala, Python, and R. The Spark ecosystem is shown below.
The entire ecosystem is built on top of the core engine. The core enables in-memory computation for speed, and its API has support for Java, Scala, Python, and R. Streaming enables processing streams of data in real time. Spark SQL enables users to query structured data, and you can do so with your favorite language; a DataFrame sits at the core of Spark SQL, holding data as a collection of rows where each column in a row is named. With DataFrames you can easily select, plot, and filter data. MLlib is a machine learning framework. GraphX is an API for graph-structured data. This was a brief overview of the ecosystem.
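As a quick, hedged sketch of Spark SQL from the Scala shell (using the Spark 1.4-era API this article targets; people.json is an assumed sample file and sc is the shell's SparkContext):

// Load JSON into a DataFrame, then filter and select columns by name.
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val people = sqlContext.read.json("people.json")   // assumed sample file

people.printSchema()                    // inspect the inferred schema
people.filter(people("age") > 21)       // keep rows where age > 21
      .select("name", "age")
      .show()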
A little history about Apache Spark:
  • Originally developed in 2009 in the UC Berkeley AMP lab, it became open source in 2010, and it is now a top-level Apache Software Foundation project.
  • Has about 12,500 commits made by about 630 contributors (as seen on the Apache Spark Github repo).
  • Mostly written in Scala.
  • Google search interest for Apache Spark has skyrocketed recently, indicating a wide range of interest. (108,000 searches in July according to the Google AdWords tool, about ten times more than Microservices.)
  • Some of Spark's distributors: IBM, Oracle, DataStax, BlueData, Cloudera...
  • Some of the applications that are built using Spark: Qlik, Talen, Tresata, atscale, platfora...
  • Some of the companies that are using Spark: Verizon, NBC, Yahoo, Spotify...
The reason people are so interested in Apache Spark is that it puts the power of Hadoop in the hands of developers. It is easier to set up an Apache Spark cluster than a Hadoop cluster. It runs faster. And it is a lot easier to program. It puts the promise and power of Big Data and real-time analysis in the hands of the masses. With that in mind, let's introduce Apache Spark in this quick tutorial.

Download Spark, and How to use the interactive shell

A great way to experiment with Apache Spark is to use the available interactive shells. There is a Python Shell and a Scala shell.
To download Apache Spark, go here and get the latest pre-built version so we can run the shell out of the box.
Right now Apache Spark is at version 1.4.1, released on July 15, 2015.

Unzip Spark

tar -xvzf ~/spark-1.4.1-bin-hadoop2.4.tgz

To run the Python shell

cd spark-1.4.1-bin-hadoop2.4
./bin/pyspark
We won't use the Python shell here in this section.
The Scala interactive shell runs on the JVM, so it lets you use Java libraries.

To run the Scala shell

cd spark-1.4.1-bin-hadoop2.4
./bin/spark-shell
You should see something like this:

The Scala shell welcome message

Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.4.1
/_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
15/08/24 21:58:29 INFO SparkContext: Running Spark version 1.4.1
The following is a simple exercise just to get you started with the shell. You might not understand what we are doing right now but we will explain in detail later. With the Scala shell, do the following:

Create a textFile RDD from the README file in Spark

val textFile = sc.textFile("README.md")

Get the first element in the RDD textFile

textFile.first()

res3: String = # Apache Spark
You can filter the RDD textFile to return a new RDD that contains all the lines with the word Spark, then count its lines.

Filtered RDD linesWithSpark and count its lines

val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark.count()

res10: Long = 19
To find the line with the most words in the RDD textFile, do the following. Using the map method, map each line in the RDD to a number by splitting the line on spaces and counting the words. Then use the reduce method to find the line that has the most words.

Find the line in the RDD textFile that has the most words

textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)

res11: Int = 14
The longest line has 14 words.
You can also use Java libraries, for example the Math.max() method, because the arguments to map and reduce are Scala function literals.

Importing Java methods in the Scala shell

import java.lang.Math
textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))

res12: Int = 14
We can also easily cache data in memory. Let's cache the filtered RDD linesWithSpark:
linesWithSpark.cache()
res13: linesWithSpark.type = MapPartitionsRDD[8] at filter at <console>:23

linesWithSpark.count()
res15: Long = 19
This was a brief overview on how to use the Spark interactive shell.

RDDs

Spark enables users to execute tasks in parallel on a cluster. This parallelism is made possible by one of the main components of Spark, the RDD. An RDD (Resilient Distributed Dataset) is a representation of data. An RDD is data that can be partitioned on a cluster (sharded data, if you will). The partitioning enables the execution of tasks in parallel. The more partitions you have, the more parallelism you can get. The diagram below is a representation of an RDD:
Think of each column as a partition; you can easily assign these partitions to nodes on a cluster.
In order to create an RDD, you can read data from external storage, for example from Cassandra, Amazon Simple Storage Service (S3), HDFS, or any data source that offers a Hadoop input format. You can also create an RDD by reading a text file, an array, or JSON. On the other hand, if the data is local to your application, you just need to parallelize it; then you will be able to apply all the Spark features to it and do analysis in parallel across the Apache Spark cluster. To test it out, with a Scala Spark shell:

Make a RDD thingsRDD from a list of words

val thingsRDD = sc.parallelize(List("spoon", "fork", "plate", "cup", "bottle"))

thingsRDD: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[11] at parallelize at <console>:24

Count the word in the RDD thingsRDD

thingsRDD.count()

res16: Long = 5
In order to work with Spark, you need to start with a SparkContext. When you are using a shell, the SparkContext already exists as sc. When we call the parallelize method on the SparkContext, we get an RDD that is partitioned and ready to be distributed across nodes.
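A small hedged sketch from the shell (the partition count of 4 is arbitrary): parallelize can take an explicit number of partitions, and you can inspect how many an RDD has.

// Ask for 4 partitions explicitly; more partitions allow more parallelism.
val thingsRDD = sc.parallelize(List("spoon", "fork", "plate", "cup", "bottle"), 4)

// Inspect the number of partitions (Spark 1.4-era API).
thingsRDD.partitions.size   // res: Int = 4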
What can we do with an RDD?
With an RDD, we can either transform data or take actions on that data. With a transformation we can change the data's format, search for something, filter data, etc. With actions we materialize results: we pull data out, collect data, and even count().
For example, let's create an RDD textFile from the text file README.md available in Spark; this file contains lines of text. When we read the file into the RDD with textFile, the data gets partitioned into lines of text, which can be spread across the cluster and operated on in parallel.

Create RDD textFile from README.md

val textFile = sc.textFile("README.md")

Count the lines

textFile.count()

res17: Long = 98
The count of 98 represents the number of lines in the README.md file.
We will get something that looks like the partitioned RDD diagram shown earlier.
Then we can filter out all the lines that have the word Spark and create a new RDD linesWithSpark that contains that filtered data.

Create the filtered RDD linesWithSpark

val linesWithSpark = textFile.filter(line => line.contains("Spark"))
Using the previous diagram, where we showed what a textFile RDD looks like, the RDD linesWithSpark will look like the following:
It is worth mentioning that we also have what is called a pair RDD. This kind of RDD is used when we have key/value paired data, for example fruits matched with their colors:
We can execute a groupByKey() transformation on the fruit data to get:
pairRDD.groupByKey()

Banana [Yellow]
Apple [Red, Green]
Kiwi [Green]
Figs [Black]
This transformation grouped two values (Red and Green) under one key (Apple). These are examples of transformations so far. A sketch of building such a pair RDD in the shell is shown below.
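Here is a minimal sketch of building that pair RDD in the shell; the fruit/color pairs are reconstructed from the grouped output above:

// Build a pair RDD of (fruit, color) and group the values by key.
val pairRDD = sc.parallelize(List(
  ("Banana", "Yellow"),
  ("Apple", "Red"),
  ("Apple", "Green"),
  ("Kiwi", "Green"),
  ("Figs", "Black")))

pairRDD.groupByKey().collect().foreach(println)
// (Banana,CompactBuffer(Yellow))
// (Apple,CompactBuffer(Red, Green))
// ...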
Once we have filtered an RDD, we can collect/materialize its data and make it flow into our application; this is an example of an action. Once we do this, all the data in the RDD is gone, but we can still call some operations on the RDD's data since it is still in memory.

Collect or materialize the data in linesWithSpark RDD

linesWithSpark.collect()
It is important to note that every time we call an action in Spark, for example a count() action, Spark will go over all the transformations and computations done up to that point and then return the count; this will be somewhat slow. To fix this problem and increase performance, you can cache an RDD in memory. This way, when you call an action time after time, you won't have to start the process from the beginning; you just get the results of the cached RDD from memory.

Caching the RDD linesWithSpark

linesWithSpark.cache()
If you would like to remove the RDD linesWithSpark from memory, you can use the unpersist() method:

Deleting linesWithSpark from memory

linesWithSpark.unpersist()
Otherwise Spark automatically deletes the oldest cached RDD using least recently used (LRU) eviction.
Here is a list to summarize the Spark process from start to end:
  • Create an RDD from some sort of data.
  • Transform the RDD's data, by filtering for example.
  • Cache the transformed or filtered RDD if it needs to be reused.
  • Do some actions on the RDD, like pulling the data out, counting, storing data in Cassandra, etc.
Here is a list of some of the transformations that can be used on an RDD:
  • filter()
  • map()
  • sample()
  • union()
  • groupByKey()
  • sortByKey()
  • combineByKey()
  • subtractByKey()
  • mapValues()
  • keys()
  • values()
Here is a list of some of the actions that can be performed on an RDD:
  • collect()
  • count()
  • first()
  • countByKey()
  • saveAsTextFile()
  • reduce()
  • take(n)
  • collectAsMap()
  • lookup(key)
For the full lists with their descriptions, check out the following Spark documentation.
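As a small sketch that combines several of the transformations and actions above, here is the classic word count over the README.md file used earlier (reduceByKey is standard Spark, though not in the short list above):

// Word count: flatMap to words, map to (word, 1) pairs, then sum the counts per word.
val counts = sc.textFile("README.md")
  .flatMap(line => line.split(" "))
  .filter(word => word.nonEmpty)
  .map(word => (word, 1))
  .reduceByKey((a, b) => a + b)

// take(n) is an action; nothing actually runs until an action is called.
counts.take(10).foreach(println)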

Have a team who wants to get started with Apache Spark?
This two-day course introduces experienced developers and architects to Apache Spark. Developers will be enabled to build real-world, high-speed, real-time analytics systems. This course has extensive hands-on examples. The idea is to introduce the key concepts that make Apache Spark such an important technology. This course should prepare architects, development managers, and developers to understand the possibilities with Apache Spark.




QBit and Vertx3 : Best of both worlds for Microservices

QBit supports Vertx 3. This allows you to create a service which can also serve up web pages and web resources for an app (an SPA). Prior to this, QBit was more focused on just being a REST microservices lib, i.e., routing HTTP calls and WebSocket messages to Java methods. Rather than reinvent the wheel, QBit now supports Vertx 3.
The QBit support for Vertx 3 exceeds the support for Vertx 2.
QBit allows REST style support via annotations.

Example of QBit REST style support via annotations.

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback,
                       final @RequestParam("id") String id) {
        Todo remove = todoMap.remove(id);
        callback.accept(remove != null);
    }
QBit, microservices lib, also provides integration with Consul, a typed event bus (which can be clustered), and really simplifies complex reactive async callback coordination between services, and a lot more. Please read through the QBit overview.
History: QBit at first only ran inside of Vertx 2. Then we decided (a client-driven decision) to make it stand alone, and we lost the ability to run it embedded inside of Vertx (we did not need it for any project on the road map). QBit was heavily inspired by Vertx and Akka.
Now you can use QBit features and Vertx 3 features via a mix and match model. You do this by setting up routers and/or a route in Vertx 3 to route to an HttpServer in QBit, and this takes about 1 line of code.
This means we can use Vertx 3's chunking, streaming, routing, etc. for complex HTTP support, HTTP auth, its Shiro integration, and so on, as well as use Vertx 3 as a normal HttpServer to serve up resources. When we want REST-style async callbacks, we can use QBit for routing REST calls to Java methods (as well as routing WebSocket messages to Java methods). We can access all of the features of Vertx 3.
(Recall: QBit was originally written as a Vertx 2 add-on lib, but then we had clients that wanted to run in standalone and clients who wanted to use it with Servlets / Embedded Jetty. This is more coming back home versus a new thing. We also had pressure to add systems for microservices like monitoring, service discovery, health checks, etc. We did this at the same time Vertx 3 was adding similar features to support microservices.).
You can run QBit standalone and if you do, it uses Vertx 3 like a network lib, or you can run QBit inside of Vertx 3.
We moved this up the priority wish list for QBit for two reasons. We were going to start using Vertx support for DNS to read DNS entries for service discovery in a Heroku-like environment. It made no sense to invest a lot of time in the Vertx 2 API when we were switching to Vertx 3 in the short term. We also had some services that needed to serve up an SPA (Single Page App), so we had to extend the support for Vertx anyway or add these features to QBit (which it sort of has, but that is not really its focus, so we would rather just delegate that to Vertx 3), and it made no sense to do that with Vertx 2.
Also, the Vertx 3 environment and community is a very vibrant one with many philosophies shared with QBit. Let's cover where the Vertx 3 integration and QBit come in.

Vertx 3 Integration and QBit, microservices lib integration, details. 
We added a new class called VertxHttpServerBuilder (extends HttpServerBuilder), which allows one to build a QBit HTTP server from a Vertx object, a Vertx HttpServer, and optionally from a Vertx router or a Vertx route.
Note that you can pass a QBit HttpServerBuilder or a QBit HttpServer to a QBit EndpointServerBuilder to use that builder or HttpServer instead of the default. VertxHttpServerBuilder is a QBit HttpServerBuilder, so you construct it, associate it with Vertx, and then inject it into EndpointServerBuilder. This is how we integrate with the QBit REST/WebSocket support. If you are using QBit REST with Vertx, that is one integration point.
Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default. If you want to use QBit REST and QBit Swagger support with Vertx, then you would want to use ManagedServiceBuilder with this class.
Here are some docs taken from our JavaDocs for QBit VertxHttpServerBuilder. VertxHttpServerBuilder also allows one to pass a shared Vertx object if running inside of the Vertx world. It also allows one to pass a shared Vertx HttpServer if you want to use more than just QBit routing. If you are using Vertx routing, or you want to limit this QBit HttpServer to one route, then you can pass a route.
Note: QBits Vertx 2 support is EOL. We will be phasing it out shortly.
Here are some code examples on how to mix and match QBit and Vertx3.

Usage

Creating a QBit HttpServer that is tied to a single vertx route

HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer)  // vertxHttpServer is the shared Vertx HttpServer
        .setRoute(route).build();
httpServer.start();

Creating a QBit HttpServer server and passing a router so it can register itself as the default route

Router router = Router.router(vertx);  // Vertx router
Route route1 = router.route("/some/path/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    // enable chunked responses because we will be adding data as
    // we execute over other handlers. This is only required once and
    // only if several handlers do output.
    response.setChunked(true);
    response.write("route1\n");

    // Call the next matching route after a 5 second delay
    routingContext.vertx().setTimer(5000, tid -> routingContext.next());
});

// Now install our QBit Server to handle REST calls.
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer)  // the shared Vertx HttpServer
        .setRouter(router);

HttpServer httpServer = vertxHttpServerBuilder.build();
httpServer.start();
Note that you can pass an HttpServerBuilder or an HttpServer to EndpointServerBuilder to use that builder or HttpServer instead of the default. If you are using QBit REST with Vertx, that is one integration point.

EndpointServerBuilder integration

// Like before
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer)  // the shared Vertx HttpServer
        .setRouter(router);

// Now just inject it into the endpointServerBuilder before you call build
HttpServer httpServer = vertxHttpServerBuilder.build();
endpointServerBuilder.setHttpServer(httpServer);
Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default.
If you want to use QBit REST and QBit Swagger support with Vertx, then you would want to use ManagedServiceBuilder with this class.

ManagedServiceBuilder integration

// Like before
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer)  // the shared Vertx HttpServer
        .setRouter(router);

// Now just inject it into the managedServiceBuilder before you call build
HttpServer httpServer = vertxHttpServerBuilder.build();
managedServiceBuilder.setHttpServer(httpServer);
Read Vertx guide on routing for more details Vertx Http Ext Manual.

Where do we go from here

QBit has a health system, and a microservices stats collections system. Vertx 3 provided similar support. QBit has an event bus. Vertx has an event bus. There is no reason why QBit can't provide Vertx implementations of its event bus (this is how the QBit event bus started), or for that matter integrate with Vertx's health system or its stats collection system. QBit has its own service discovery system with implementations that talk to DNS, Consul, or just monitor JSON files to be updated (for Chef Push, or Consul, etcd pull model). There is no reason QBit could not provide an implementation of its Service Discovery that worked with Vertx's clustering support. All of the major internal services that QBit provides are extensible with plugins via interfaces. There is plenty of opportunity for more integration of QBit and Vertx.
QBit and Vertx have both evolved to provide more and more support for microservices and there is a lot of synergy between the two libs.
QBit can also play well with Servlets, Spring MVC, Spring Boot, and other lightweight HTTP libs. QBit comes batteries included.

Find out more information on QBit here.

Rick's thoughts on Scala book (Part 1)

$
0
0
Here is my stream of consciousness as I take notes and share thoughts on the Scala book.

Scala book redux
I am reading through the book Programming in Scala: A Comprehensive Step by Step Guide, but this time I am actually using Scala in my day job, so I think it will be more meaningful. Also, I have written two of my own Python-envy, Scala-envy, Groovy-envy functional programming libs in Java with tons of other utilities (one I threw away, it was horrible, and the other one is Boon). I also now have some experience with Java 8 lambda expressions and functional programming, and some experience with Python functional programming, so I feel like the book makes more sense than it did when I attempted a few goes at it before.
Java vs. Scala
The Scala book, which predates Java 8, uses a name-exists example and shows a for loop vs. Scala's exists.

Scala

val name: String = "Hello World"
val nameHasUpperCase = name.exists(_.isUpper)
println(nameHasUpperCase)

Java

final String name = "Hello World";
final boolean nameHasUpperCase = name.chars().mapToObj(i -> (char) i)
        .anyMatch(c -> c.isUpperCase(c));
System.out.println(nameHasUpperCase);
The issue with Java is mainly that String.chars() returns an IntStream instead of a stream of characters, and I can't fathom why.
We can close the verbosity gap by using a static import and a method reference.
final String name = "Hello World";
final boolean nameHasUpperCase = name.chars().mapToObj(i -> (char) i)
        .anyMatch(Character::isUpperCase);
out.println(nameHasUpperCase);
Clearly Scala still wins. Scala is easier to read and smaller. But the difference is smaller if you choose Java 8. (Or a library like Boon where this type of operation would be a one liner in Java as well.)
Scala's big win here (in this small example) is mainly due to its implicit typing, and to String.chars returning an IntStream instead of a stream of characters. Java would lose less badly with a more complete lib for handling Strings.

Type system

For instance, Alan Kay, the inventor of the Smalltalk language, once remarked: "I'm not against types, but I don't know of any type systems that aren't a complete pain, so I still like dynamic typing."[13] We hope to convince you in this book that Scala's type system is far from being a "complete pain." -- Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 536-538). Artima Press. Kindle Edition.
I find that larger systems always need strong typing, which could be less of an issue with Microservices. Perhaps. But I find that when I write code in Groovy, I use types. When I write code in Python, I add type information to the heredocs and to the comments. So I often wonder: if I always feel the need to add type information (from being burned a lot on larger Python projects), then why not have types in the system? And if you are going to have types in the system, then why not make them implicit like Scala (and others)?
I agree with the Scala approach.

Implicit arguments

This book is older, so again it misrepresents Java by not including the limited type inference (the diamond operator) added in Java 7 and present in Java 8. This book did come out after Java 7.
Clearly, it should be enough to say just once that x is a HashMap with Ints as keys and Strings as values; there's no need to repeat the same phrase twice. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 564-567). Artima Press. Kindle Edition.
val x = new HashMap[Int, String]()
final Map<Integer, String> x = new HashMap<>();
Note that I prefer the Scala syntax here. This has been adopted by the two other JVM programming languages that adopted Scala's implicit typing. Implicit and strong typing will be Scala's legacy. It is terse, yet unlike dynamic typing, exact.
What I don't like about Scala is this:

Scala operators instead of methods

val x = new HashMap[Int, String]()
x += (1 -> "foo")

println(x.get(1).get)
Versus Java.

Java methods instead of operators

final Map<Integer, String> x = new HashMap<>();
x.put(1, "foo");
out.println(x.get(1));
At least I don't like it yet. Strangely, I feel that Python's operators for Maps make sense, as do Groovy's, but Scala's look weird. There is so much prior art dealing with associative arrays, dictionaries and hash maps that this seems to be an odd choice at best. Also, I am not a fan of using operators when a method will do. I saw this in C++ and in Python. It tends to make code very hard to read unless you are working with a really well-known lib. I feel this has so much potential for abuse and leads to unreadable code. This goes against my instincts to comment and carefully think about method names. I would likely use the standard built-in libs' operators (no choice for many things), but shy away from using this feature (mis-feature).
I also wonder why Scala supports non-implicit types. You can specify the type.
val y: Map[Int, String] = new HashMap[Int, String]()
One wonders what dark corners of the language exist where you would want to use an explicit type when an implicit one seems to work so well. My guess would be interface design. We will see. Still on chapter 1.
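To make my interface-design guess concrete, here is a minimal sketch (my own, not from the book) of where an explicit type annotation would earn its keep: widening the declared type so callers see the trait rather than the concrete class.

import scala.collection.mutable

// Without an annotation, the inferred type is the concrete HashMap
val inferred = new mutable.HashMap[Int, String]()

// With the explicit (wider) type, callers only see the Map trait,
// so the concrete implementation can be swapped later
val y: mutable.Map[Int, String] = new mutable.HashMap[Int, String]()
y += (1 -> "foo")
println(y(1))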
I did write my first real world microservice in Scala. But I have to admit. There are things that I wrote to make the compiler happy that I do not fully understand which is why I am reading the book (other times I more skimmed it). I can't stand using a language the wrong way. Unless I do it for a good reason (performance).

Scala influences

At the surface level, Scala adopts ... syntax from Java and C#, which in turn borrowed ... from C and C++. Expressions, statements, and blocks are mostly as in Java, as is the syntax of classes, packages and imports. (What did it borrow from C#?) (Basic type system from Java, libs, execution model.) ... (Its) uniform object model (is from) Smalltalk (mention of Ruby to make Ruby guys happy I suppose). Its Universal nesting ...(is from) in Algol, Simula (Beta and gbeta). Its uniform access principle for method invocation and field selection... from Eiffel. Its ... functional programming is ... similar... to ML family of languages (SML, OCaml, and F#)... Many higher-order functions in Scala's standard library are also present in ML or Haskell. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 577-585). Artima Press. Kindle Edition.
I can see why so many CS majors love Scala. It incorporates many of their favorite languages that they are forced to learn in school, which are not used in the industry very much at all. Look up all of those languages other than C#, Java and Ruby on the Tiobe index, and they are not very popular. What, no mention of Icon? You are missing out on the Tucson AZ market. (Scala is 27th on the list with .7 percent. Groovy is at .5 percent. Scala is the only "cool" language that makes it into the top 50. I imagine Scala is going to keep climbing due to Apache Spark.)
I find that Ruby, Perl, JavaScript and Python developers who would never dream of touching Java sort of like Scala. This is one reason for me to learn it. The other is Apache Spark. I run into a lot of Python developers. Scala does not bring on the hatred that Java does from Python folks. (I like Python, Groovy and Java a lot.) Given this, it seems that more thought would have been given to adopting closer syntax support for the Python/JavaScript ways of indexing maps (associative arrays).

Functions

object MainScala {

  def max(x: Int, y: Int): Int = if (x > y) x else y
  ...
  val abc = max(1, 2)
  println(s"Max $abc")
public class Main {

  public static int max(int x, int y) {
    return (x > y) ? x : y;
  }
  ...
  final int abc = max(1, 2);
  out.printf("Max %d\n", abc);
Those are roughly equivalent at this point. I prefer the return type after the parameter list. This makes reading/finding the method name easier. (Point goes to Scala).
The Scala version can be simplified further as Scala can infer the type. The book does not mention this (older book, or covered later).
def max(x: Int, y: Int) = if (x > y) x else y
I am very used to the C/C++/Java ternary operator, so its use does not bother me. I can see how the Scala if/else would be more appealing to Python and Ruby developers.
Performance wise: The Java version is using primitives not objects so unless Scala does some magic in a tight loop the Java version is going to be much better with a lot less GC pressure. This will not concern Python and Ruby devs. It concerns me. I do write things that at times need a bit of punch when it comes to performance (parsers, event system, queue systems, etc.). If I did adopt Scala, I would still use Java for core things that needed performance.
The book just covered how the return type is optional. :)
While loops, says the book, are not the preferred style. I had to use a while loop once. I was moving Java code to Scala. Good to know that there is a better way.
They just covered imperative while loop style vs. foreach. There are times in Java when I get the underlying array and I iterate using a for / loop. I do it when I know something is going to iterate in one or more loops. The difference when you are working with millions of users per node can be pretty large. I can see where I would write some features in Java to avoid the extra GC just like where I would avoid iterators and streams in Java 8 in certain places. It is good there is a Java escape valve in Scala. You won't need it often, but when you do. :)
That covers the first two chapters.

Scala book chapter 3 review, notes, stream of consciousness (Part 2)

Scala 2 Book Chapter 3 review.
More of my random thoughts as I go through the Scala book.
Going through Chapter 3, ....
val numNames = Array("zero", "one", "two", "three")
numNames(0) = "0"

for (i <- 0 to numNames.length - 1) {
  val name = numNames(i)
  println(s"name i $i = $name")
}
For some reason I did not think that Scala had a for loop, as most of the examples I see use a while loop. But it does.
Java version of this:
final String[] numNames = {"zero", "one", "two", "three"};
numNames[0] = "0";

for (int i = 0; i < numNames.length; i++) {
  out.printf("name i %d = %s \n", i, numNames[i]);
}
Other than Java not supporting implicit typing (which it sort of does on the right-hand side of the assignment, which is oddly inconsistent), there is not much difference LOC-wise between these two. The Scala one, if it were iterating over millions and millions in a tight loop that was getting called in a microservice a million times a second, would cause a lot more GC given the "even Integers are objects" mantra (I would imagine, but a good HotSpot and Scala compiler could optimize most of that away.. do they? Based on the perf of JSON parsers written in pure Scala, I have doubts, but I am no Scala expert).
Scala doesn't technically have operator overloading, because it doesn't actually have operators - Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 923-924). Artima Press. Kindle Edition.
Which is technically what Python and C++ have, but they call it operator overloading and so should you. Prior art. But, you know, whatever.
The book brags about the consistency of always using the object model.
Let's cover that a bit.
Here are two different ways to access a Map in scala.
    x += (1 -> "foo")
    x(1) = "foo2"
Here are five different ways to iterate through the list of numNames.
val numNames = Array("zero", "one", "two", "three")
numNames(0) = "0"

for (i <- 0 to numNames.length - 1) {
  val name = numNames(i)
  println(s"name i $i = $name")
}

for (i <- 0.to(numNames.length - 1)) {
  val name = numNames(i)
  println(s"name i $i = $name")
}

var i = 0
while (i < numNames.length) {
  val name = numNames(i)
  println(s"name i $i = $name")
  i += 1
}


i = 0
numNames.foreach(name => {
  println(s"name i $i = $name")
  i += 1
})


i = 0
for (name <- numNames) {
  println(s"name i $i = $name")
  i += 1
}
Note the only difference between the first and second way is how I invoke the to method. One way is the operator way. One way is the normal Java . method invoke way.
I find Scala about as consistent at doing things as Perl so far, but I am new. I could see two developers writing the same program completely different.
I do find a lot of value in Scala. Its consistent Object method invocation is not one of them (so far).

List and Cons

::: :: Yuck!
Appending lists. Just as simple as this:
In the expression "1 :: twoThree", :: is a method of its right operand, the list, twoThree. ...If a method is used in operator notation, such as a * b, the method is invoked on the left operand, as in a.*(b)—unless the method name ends in a colon. If the method name ends in a colon, the method is invoked on the right operand. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 985-997). Artima Press. Kindle Edition.
I guess I will get used to this. Right now I am not liking it much. I prefer Python's view of consistency.
I thought I understood what they said.
I tried it first in Java.
final List<String> aList = asList("1", "2", "3");
final List<String> bList = asList("4", "5", "6");

final List<String> both = new ArrayList<>();

both.addAll(aList);
both.addAll(bList);

out.printf("Both %s\n", both);
I got this
Both [1, 2, 3, 4, 5, 6]
Then I tried the same thing in Scala.
val aList = List("1", "2", "3")
val bList = List("4", "5", "6")

val both = aList :: bList

println(s"Both $both")
I got this
Both List(List(1, 2, 3), 4, 5, 6)
Ok.. that was not what I wanted. Let me try the extra colon :::.
val aList = List("1", "2", "3")
val bList = List("4", "5", "6")

val both = aList ::: bList

println(s"Both $both")
So the extra colon (:::) is what you use if you want to concatenate the lists.
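Here is my own small sketch of what I think the colon rule from the quote means in practice; the desugared forms in the comments are my reading, not something chapter 3 spells out:

val twoThree = List(2, 3)

// :: ends in a colon, so it is invoked on the RIGHT operand (the list)
val oneTwoThree = 1 :: twoThree                 // same as twoThree.::(1)

// ::: also ends in a colon; it concatenates whole lists
val oneToFive = List(1, 2) ::: List(3, 4, 5)    // same as List(3, 4, 5).:::(List(1, 2))

println(oneTwoThree)   // List(1, 2, 3)
println(oneToFive)     // List(1, 2, 3, 4, 5)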
Do you remember how += worked with Map? Certainly += makes even more sense with List, right?
    both += "new item" // DOES NOT WORK
Ok.. += only works for maps and not for lists. Consistent. Not.
So how does one add an item to a list? I guess we find out in chapter 16, but chapter 3 gives us some hints and I am not happy so far.
Let's compare again Java with what we know in chapter 3.
final List<String> aList = asList("1", "2", "3");
final List<String> bList = asList("4", "5", "6");

final List<String> both = new ArrayList<>();

both.addAll(aList);
both.addAll(bList);

both.add("foo");
out.printf("Both %s\n", both);
val aList = List("1", "2", "3")
val bList = List("4", "5", "6")


val both = aList ::: bList

val newList = both :+ "foo"
println(s"Both $newList")
To append to a list we need a ListBuffer, which is only mentioned in this chapter but not covered.
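Since the chapter only name-drops ListBuffer, here is a minimal sketch (my own, based on the scala.collection.mutable docs, not the book) of how appending presumably works with it:

import scala.collection.mutable.ListBuffer

val buf = ListBuffer("1", "2", "3")
buf += "4"                     // append in place, so ListBuffer does get +=
buf.prepend("0")               // add to the front
val asImmutable = buf.toList   // back to an immutable List when you are done
println(asImmutable)           // List(0, 1, 2, 3, 4)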
The rest of the list operations look pretty cool. I like the drop, dropRight, etc. I added these to Boon for Java (not called that.. they are called split after the Python lingo for these operations), and they work with the Java util collections.

Tuples

I see the importance of Tuples. (Python has them and I get this).
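For my own notes, a small sketch (mine, not the book's code) of the tuple syntax as I understand it so far:

val pair = (99, "Luftballons")   // a Tuple2[Int, String]
println(pair._1)                 // 99
println(pair._2)                 // Luftballons

// Tuples are handy for returning more than one value from a method
def minMax(values: Array[Int]) = (values.min, values.max)
val (lo, hi) = minMax(Array(3, 1, 4, 1, 5))
println(s"$lo .. $hi")           // 1 .. 5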

Set

Yeah.. They have +=. I hope ListBuffer has +=. I assume it does. After I read more and try more out... my earlier criticisms are no longer valid. I sort of get the two operations for adding lists, as one is for mutable lists and the other is for immutable lists. I am also understanding the implementation details of Tuple a bit better with the discussion of (1)->"Foo", which is actually a method that returns a tuple.
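A quick sketch of the two things I just mentioned (my own experimenting, not the chapter's text): a mutable Set supporting +=, and -> being an ordinary method that returns a tuple.

import scala.collection.mutable

val jetSet = mutable.Set("Boeing", "Airbus")
jetSet += "Lear"                      // += works because the Set is mutable
println(jetSet.contains("Cessna"))    // false

// 1 -> "Foo" is just a method call that builds a tuple
val entry: (Int, String) = 1 -> "Foo"
val sameEntry = (1).->("Foo")         // the desugared form
println(entry == sameEntry)           // true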
That said, bear in mind that neither vars nor side effects are inherently evil. Scala is not a pure functional language that forces you to program everything in the functional style. Scala is a hybrid imperative/functional language. You may find that in some situations an imperative style is a better fit for the problem at hand, and in such cases you should not hesitate to use it. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1269-1272). Artima Press. Kindle Edition.
This is true. And I agree with this. We might disagree with when/how this is true , i.e., where the line is drawn. For performant programs, you are trying to reduce buffer copies, but buffer copies are better for functional programming. This is to say that functional programming has its place, but depending on the type of software you are building (in-memory rules engine, JSON parser), functional programming might not be the best fit. The Martin Thompson talks come to mind (Mechanical Sympathy).
Prefer vals, immutable objects, and methods without side effects. Reach for them first. Use vars, mutable objects, and methods with side effects when you have a specific need and justification for them. -- Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1275-1277). Artima Press. Kindle Edition.
I agree with this. Even if you are programming Java, prefer finals to non-finals. More and more this is the direction that I was moving to with my Java programming.
Now let's compare the Java version (my Java version) to an example like the one in the book.
Here is my text file.
Fox jumped over the moon
Love is in the air
Death and taxes, the two constants in life
To be or not to be that is the question
Tis nobler to suffer the arrows of tyrants

Here is the Java version that follows the example in the book.
package foo;

import org.boon.IO;
import java.util.Arrays;
import java.util.List;
import static java.lang.System.out;
import static org.boon.Str.lpad;
import static org.boon.Str.str;

public class ReadFileShowWithLines {


    public static int widthOfLength(final String s) {
        return Integer.toString(s.length()).length();
    }

    public static void main(String... args) {

        if (args.length > 0) {
            final List<String> lines = IO.readLines(args[0]);
            final String longestLine = lines.stream().reduce(
                    (a, b) -> a.length() > b.length() ? a : b).get();

            final int maxWidth = widthOfLength(longestLine);

            for (String line : lines) {
                final int numSpaces = maxWidth - widthOfLength(line);
                out.println(lpad(str(line.length()), numSpaces, ' ') + " | " + line);
            }

        }
    }

}
output
24 | Fox jumped over the moon
18 | Love is in the air
42 | Death and taxes, the two constants in life
39 | To be or not to be that is the question
42 | Tis nobler to suffer the arrows of tyrants

Here is the Scala version.
package foo

import scala.io.Source

object ReadFileShowWithLinesScala {

  def widthOfLength(s: String) = s.length.toString.length

  def main(args: String*) {
    if (args.nonEmpty) {
      val lines = Source.fromFile(args(0)).getLines().toList

      val longestLine: String = lines.reduceLeft(
        (a, b) => if (a.length > b.length) a else b
      )

      val maxWidth: Int = widthOfLength(longestLine)
      for (line <- lines) {
        val numSpaces: Int = maxWidth - widthOfLength(line)
        val padding = " " * numSpaces
        println(padding + line.length + " | " + line)
      }
    }
  }

}
output
24 | Fox jumped over the moon
18 | Love is in the air
42 | Death and taxes, the two constants in life
39 | To be or not to be that is the question
42 | Tis nobler to suffer the arrows of tyrants
The Scala version is shorter. Partly due to Source being a nice lib utility, and partly due to Scala not needing extra libs to do the same. Take out the imports and they get closer, but Scala still wins.
def widthOfLength(s: String) = s.length.toString.length
public static int widthOfLength(final String s) {
    return Integer.toString(s.length()).length();
}
Scala is shorter.
Here is where there is not as much difference.
if (args.nonEmpty) {
vallines=Source.fromFile(args(0)).getLines().toList

vallongestLine:String= lines.reduceLeft(
(a, b) =>if (a.length > b.length) a else b
)

valmaxWidth:Int= widthOfLength(longestLine)
for (line <- lines) {
valnumSpaces:Int= maxWidth - widthOfLength(line)
valpadding=""* numSpaces
println(padding + line.length +" | "+ line)
}
}
if (args.length >0) {
finalList<String> lines =IO.readLines(args[0]);
finalString longestLine = lines.stream().reduce(
(a, b) -> a.length() > b.length() ? a : b).get();

finalint maxWidth = widthOfLength(longestLine);

for (String line : lines) {
finalint numSpaces = maxWidth - widthOfLength(line);
out.println(lpad(str(line.length()), numSpaces, '') +" | "+ line);
}
}
Here is get longest line in Java:
final String longestLine = lines.stream().reduce(
        (a, b) -> a.length() > b.length() ? a : b).get();
Here is get longest line in Scala
val longestLine: String = lines.reduceLeft(
  (a, b) => if (a.length > b.length) a else b
)
Scala has to make two fewer hops. And for some reason the Java version of reduce returns an Optional, and although Scala has Option, it opts not to use it (pun intended). Java is a bit bigger because it is backwards compatible with the older Collection lib and added a stream instead of adding reduce directly to Collections (this is more of a library choice than a language difference). And it opts to use Optional. In other words, they are linguistically very similar: a ternary operator in Java, and Scala if blocks that return values (like Python, btw).

Chapter 4 Scala book review and notes by Rick Hightower (Part 3)


Chapter 4 Scala book review and notes by Rick Hightower
Chapter 4 Scala book review and notes by Rick Hightower. (Note link is to second edition book, but I am reading the first edition because that is the one I own).
Notes and random thoughts. Any opinions stated in this document are not carved in stone. Read at your own risk.
Chapter 3 spoke of the ills of var versus val. Chapter 4's ChecksumAccumulator uses a var. Earlier I noticed the use of semicolons in the Scala examples, which I thought strange given Scala does not need them. Here I see methods that use return, which Scala does not need. When I write Scala, I do it with IntelliJ so it can warn me not to use these superfluous concepts. I guess the author(s) did not have IntelliJ available.
Spoke too soon. They just dropped the return (but the semicolons in the other examples were never mentioned, so they probably won't be there in the second edition).
Ok, so vars and vals are public by default, which I think means that Scala generates accessor methods for sum by default (a getter and a setter). That is what it seemed to do when I was using it with Java.
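To check my own understanding (my reading, not the book's wording): for a public var Scala generates a getter named after the field and a setter named field_=, and for a val just the getter. A tiny sketch:

class Accumulator {
  var sum = 0      // compiler generates sum and sum_= accessor methods
  val limit = 100  // compiler generates only a limit getter
}

val acc = new Accumulator
acc.sum = 5        // really calls acc.sum_=(5)
println(acc.sum)   // really calls the sum getter: 5
println(acc.limit)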
The recommended style for methods is in fact to avoid having explicit, and especially multiple, return statements. Instead, think of each method as an expression that yields one value, which is returned. This philosophy will encourage you to make methods quite small, to factor larger methods into multiple smaller ones. On the other hand, design choices depend on the design context, and Scala makes it easy to write methods that have multiple, explicit returns if that's what you desire. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1408-1412). Artima Press. Kindle Edition.
Ok. Tons of one-liner methods that are just expressions. Oh joy. That will take some getting used to.
As mentioned in Chapter 1, one way in which Scala is more object-oriented than Java is that classes in Scala cannot have static members. Instead, Scala has singleton objects. A singleton object definition looks like a class definition, except instead of the keyword class you use the keyword object. Listing 4.2 shows an example. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1455-1458). Artima Press. Kindle Edition.
So static methods are bad, and we use singleton objects instead.
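Roughly, the Scala equivalent of a class full of static helpers looks like this (my sketch, loosely modeled on the book's ChecksumAccumulator idea, not its actual code):

// A singleton object instead of static members
object ChecksumHelper {
  def calculate(s: String): Int =
    s.foldLeft(0)((sum, ch) => sum + ch.toByte) & 0xFF
}

// Called just like a static method in Java
println(ChecksumHelper.calculate("Every value is an object."))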
I too was once a singleton lover. If you prevent yourself from doing singletons for 1 year straight, you will find out one thing....singletons are evil. If you have not prevented yourself for 1 straight year, you have no right to talk. That is like never eating ice cream and saying you know the fries are better. Maybe the fries are better but shut up until you have tried the ice cream first! You can still represent the real world and only have one thing (though naturally you just need to make sure you only instantiate one). That is usually done in some top level module and passed down to services (ie. Dependency Injection). The worst thing I have found out about singletons is when it comes to wanting to write reset code (specifically in tests). In a no singleton system, I can just new Service() and if it was designed right, the use of static keyword is rarely used except maybe for constants and all state will be reset....Great for the test setup method. So, you are suggesting that your reset is accomplished through the constructor? This forces the constructor to look for previously existing conditions on the real world device and do the appropriate things to knock the device down before setting it back up again. Okay, I can buy that. Now, how do you "make sure" that only one instance of the device is present in your system? --http://c2.com/cgi/wiki?SingletonsAreEvil
I find the dogmatic proclamations of evilness are evil and don't see much difference between static methods which are associated with the Class object and having an object Foo notation in a language (there are pros and cons to both approaches). I respect his thoughts, but...
My point is... So this book says don't use static methods and singletons are good, and classic Spring DI, IoC, etc. injections and traditional Java say that singletons are evil. Joy. No wonder people quit being developers and become managers. So much conflicting dogma to deal with.
I actually don't care. When in Rome (Scala), I will use object singleton, when back in Java/Spring world I will not use singleton and when not in Spring/Guice world of DI,.... When back in Boon/QBit with no Spring world I will use static methods where they make sense, and design by interface most of the time. I even design by interface when I write Python, and it drives my Python pals batty. I believe in the concept. I believe in mock objects, and testability.
I finally learned how to write a proper main method. I have been calling my Scala main from a Java main method. (My hidden super power is to hack things to work until I find time to do it the right way. First make it work. Then fix it up to work right. It is a skill brought on by always meeting schedule deadlines even if it means sacrificing personal health, and always cleaning up tech debt even if it is months later after 1.0 has been delivered and the project has not been canceled. There is the right way, and there is the right now way. You have to pick your battles. End users don't care if you are using the right way. They care if their app works. You care because later you have to maintain this ball of goo, and your name and reputation is on it. I write comments a lot on contracts because I want to be able to hand this code off to someone else when I move on, and/or when I come back to this code in three months or a year, I want to be able to read what I wrote. Which reminds me, I add fewer comments to QBit than I do to code I write on contracts. The QBit microservices lib is my calling card. It should have a lot more comments.)
object ReadFileShowWithLinesScala {

  def main(args: Array[String]) {
    if (args.nonEmpty) {
      val lines = Source.fromFile(args(0)).getLines().toList

      val longestLine: String = lines.reduceLeft(
        (a, b) => if (a.length > b.length) a else b
      )

      val maxWidth: Int = widthOfLength(longestLine)
      for (line <- lines) {
        val numSpaces: Int = maxWidth - widthOfLength(line)
        val padding = " " * numSpaces
        println(padding + line.length + " | " + line)
      }
    }
  }

  def widthOfLength(s: String) = s.length.toString.length
}
Why I had to wait until chapter 4!
I don't have to do this shit now.
package foo;

import scala.collection.JavaConversions;
import scala.collection.mutable.Buffer;
import java.util.Arrays;
import java.util.List;

public class ReadFileShowWithLines {


    public static void main(String... args) {

        ... // Java code

        // This was how I was launching Scala code since they decided
        // not to show me how to do this until Chapter 4.
        // BTW I have some production code that does this because I could not find this online easily.
        Buffer<String> seq = JavaConversions.asScalaBuffer(Arrays.asList(args));
        ReadFileShowWithLinesScala.main(seq);
    }

}
Thanks Chapter 4. Thanks for finally telling me what a Scala main method looks like. I tried everything and gave up and called it from Java.
One difference between Scala and Java is that whereas Java requires you to put a public class in a file named after the class—for example, you'd put class SpeedRacer in file SpeedRacer.java—in Scala, you can name .scala files anything you want, no matter what Scala classes or code you put in them. In general in the case of non-scripts, however, it is recommended style to name files after the classes they contain as is done in Java, so that programmers can more easily locate classes by looking at file names. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1526-1531). Artima Press. Kindle Edition.
Yeah... so Java forces the convention and Scala does not. I think this will become important when we are working with a lot of case classes. I wish they added case classes to Java. I am sick of Java beans, which is almost reason enough to switch to Scala. Almost.

Application trait

This is probably why I could not easily google how to do a Scala main method. You don't, you use the Application trait.
To use the trait, you first write "extends Application" after the name of your singleton object. Then instead of writing a main method, you place the code you would have put in the main method directly between the curly braces of the singleton object. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1562-1565). Artima Press. Kindle Edition.
Ok.. I like this. Damn Odersky, quit making me like Scala. This defeats my previous narratives.
Ok.. Application trait is deprecated. So that was a quick elation with an even quicker deflation. Maybe I should not be using a book that is five years old. Maybe I should commit to going through a book before I buy it.
Here I am...
So it seems that I can use the App trait instead of the Application trait. Chapters 1-4 do not really cover traits yet, but since Scala people can never shut up about how great Scala is, I sort of know what a trait is already, and have in fact already used them. Although I wish I knew more.. Must read on...
Here is the code refactored to use App.
package foo

import scala.io.Source

object ReadFileShowWithLinesScala extends App {
  def widthOfLength(s: String) = s.length.toString.length


  if (args.nonEmpty) {
    val lines = Source.fromFile(args(0)).getLines().toList

    val longestLine: String = lines.reduceLeft(
      (a, b) => if (a.length > b.length) a else b
    )

    val maxWidth: Int = widthOfLength(longestLine)
    for (line <- lines) {
      val numSpaces: Int = maxWidth - widthOfLength(line)
      val padding = " " * numSpaces
      println(padding + line.length + " | " + line)
    }
  }

}
It is kicking Java ass just a little bit more now.
packagefoo;

importorg.boon.IO;
importjava.util.List;

import staticjava.lang.System.out;
import staticorg.boon.Str.lpad;
import staticorg.boon.Str.str;

publicclassReadFileShowWithLines {


publicstaticintwidthOfLength(finalStrings) {
returnInteger.toString(s.length()).length();
}

publicstaticvoidmain(String... args) {

if (args.length >0) {
finalList<String> lines =IO.readLines(args[0]);
finalString longestLine = lines.stream().reduce(
(a, b) -> a.length() > b.length() ? a : b).get();

finalint maxWidth = widthOfLength(longestLine);

for (String line : lines) {
finalint numSpaces = maxWidth - widthOfLength(line);
out.println(lpad(str(line.length()), numSpaces, '') +" | "+ line);
}
}

}

}
Not sure why he did not use foreach, so I did not use forEach, but imagine if I did.. About the same. Except, of course, Java is more consistent with camel case than Scala.
Inheriting from Application is shorter than writing an explicit main method, but it also has some shortcomings. First, you can't use this trait if you need to access command-line arguments, because the args array isn't available. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 1570-1573). Artima Press. Kindle Edition.
I write a lot of microservices and I almost always get arguments from a config file or from environment variables or etcd or consul or..... I can't remember the last time I used main args in production code. I don't see this as much of a limitation but if I was writing command line tools like I do in Python or Groovy from time to time, I would not like this.
I guess if you are writing scripts in Scala you use the Script form which has the args (covered in Chapter 2). Hmmm...
Chapter 4 is in the can. I feel like I learned a few things. Despite the dogma, which makes me angry at this point in my career, as I have come to love and hate things many times through wave after wave of dogma. I am more pragmatic. I try not to be dogmatic. I try to follow the precepts of the Bloch book, which I think Scala supports more or less better than Java.
I would like Scala a lot more with less dogma and fewer of the holier-than-thou typical Scala developers. Granted, they are not as bad as the early Ruby high-priests, but still. Caustic Scala developers are tools. I like the attitude of Python and Groovy developers: I use it because it is productive. I love it, but you don't have to. Scala still seems to carry more pontification.
So far to me, Scala is a better language than Java. This does not mean I agree with all of the decisions that were made in the language design, but I can get used to the ones that I don't agree with.

Scala book review, notes, random thoughts part 4

Read chapters 5, 6, and 7 pretty quickly. Now on chapter 8 of the Scala book. Chapter 8: Functions.
I want to write some head-to-head comparisons of Scala's first-class function support vs. Java 8 lambda expressions and Function. More for my own edification.
Function values are objects, so you can store them in variables if you like. They are functions, too, so you can invoke them using the usual parentheses function-call notation. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 2911-2912). Artima Press. Kindle Edition.
You could not do this with Java functions. The best you could do would be to pass it around and then call apply (https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html). It has a certain amount of awkwardness, which you would not find in Groovy, Scala, or Python. Changes in the Java syntax are more conservative due to its large number of users. Backwards compatibility can be a crutch.
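A tiny sketch of what the quote means (the Java comparison in the comments is my own, not from the book):

// A function value stored in a variable...
val increase = (x: Int) => x + 1

// ...and invoked with plain parentheses, no .apply() needed
println(increase(10))   // 11

// In Java 8 the closest you get is roughly:
//   Function<Integer, Integer> increase = x -> x + 1;
//   increase.apply(10);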
I am on chapter 8 and there seems to be 35 chapters. Perhaps I need a different book. This is a great book. But I have already started to use Scala at work and need a quicker introduction. Perhaps.
This is a really good book on Scala, but I am in a hurry so to speak. Hmmm... I will soldier on for now.
I will rethink this after this chapter.
To make a function literal even more concise, you can use underscores as placeholders for one or more parameters, so long as each parameter appears only one time within the function literal. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 2942-2943). Artima Press. Kindle Edition.
That is pretty short, but man, there are so many ways to do the same thing, and unlike Java, where I get to use one of those ways for five years until the new improved way comes along, with Scala they are all here at once. Seems daunting.
The function literal _ > 0, therefore, is equivalent to the slightly more verbose x => x > 0, --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 2960-2964). Artima Press. Kindle Edition.
Ok.. so it sounds like partially applied functions seem a lot like currying.
Now, although sum _ is indeed a partially applied function, it may not be obvious to you why it is called this. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3018-3019). Artima Press. Kindle Edition.
Why don't they just say that? Not sure. I get the concept. I used it before with Groovy and even wrote my own functional lib that allowed currying. The syntax, to me, seems rough, or should I say foreign to my experience.
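Here is my attempt to pin the two ideas down in code (the names are mine, not the book's):

def sum(a: Int, b: Int, c: Int) = a + b + c

// Partially applied with an underscore standing for the whole argument list
val sumAll = sum _
println(sumAll(1, 2, 3))   // 6

// Partially applied with only some arguments supplied
val plusSix = sum(1, _: Int, 5)
println(plusSix(2))        // 8

// Underscore placeholders in a function literal
val positives = List(-2, -1, 0, 1, 2).filter(_ > 0)
println(positives)         // List(1, 2)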
Closures seem the same (so far) in Java and Scala. One big difference is that Scala does not require captured variables to be final for closures.
This example brings up a question: what happens if more changes after the closure is created? In Scala, the answer is that the closure sees the change. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3093-3095). Artima Press. Kindle Edition.
POP!
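A quick check of that claim (my own snippet, not from the book):

var more = 1
val addMore = (x: Int) => x + more

println(addMore(10))   // 11

more = 100             // the closure captures the variable, not a snapshot of its value
println(addMore(10))   // 110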
Scala allows you to indicate that the last parameter to a function may be repeated. This allows clients to pass variable length argument lists to the function. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3127-3128). Artima Press. Kindle Edition.
We have String... args in Java and args: String* in Scala. Ok. Unlike Java, there is no auto conversion from an Array to varargs. We have to use _* (which is more like Python) to spread the arguments.
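A small sketch of the repeated-parameter syntax and the _* spread (my own example names):

def echo(args: String*) = args.foreach(println)

echo("one", "two")   // fine: repeated parameters

val arr = Array("a", "b", "c")
// echo(arr)         // does NOT compile: no auto conversion from Array to varargs
echo(arr: _*)        // spread the array into varargs explicitly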
Named arguments are most frequently used in combination with default parameter values. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3146-3147). Artima Press. Kindle Edition.
Named arguments and default arguments are among the things that made me love Python and later Groovy.
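For example (my own sketch of named and default parameters in Scala):

def connect(host: String = "localhost", port: Int = 8080, secure: Boolean = false) =
  s"$host:$port secure=$secure"

println(connect())                                       // all defaults
println(connect(port = 9000))                            // name only the one you care about
println(connect(secure = true, host = "example.com"))    // order does not matter when named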
Functions like approximate, which call themselves as their last action, are called tail recursive. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3178-3179). Artima Press. Kindle Edition.
Ok. I hear this term a lot.
Does Java support it? I am not sure. I hear several mentions of it, but the answers seem inconclusive.
From what I understand, the JIT might do it, if certain conditions are met, maybe. So in practice you can't rely on it. --http://programmers.stackexchange.com/questions/272061/why-doesnt-java-have-optimization-for-tail-recursion-at-all
If Scala supports it, and it generates byte-code, it seems like Java could support it as well. And there are vague references to Java 8 supporting it under the right conditions of which I do not know.
Java does not support TCO at the compiler level, but it is possible to implement it with Java 8 using lambda expressions. It is described by Venkat Subramaniam in "Functional Programming in Java". --http://stackoverflow.com/questions/22866491/does-java8-have-tail-call-optimization
I guess I have to buy Venkat Subramaniam's book to find out exactly how/when tail recursion is supported.
And according to this article, it is not yet supported. http://jvmgeek.com/2015/04/02/java-8-functional-programming/
Is it? Is it not? I still don't know but am leaning to not. If that is the case, then Scala has a really large advantage in functional programming compared to Java.
The use of tail recursion in Scala is fairly limited, because the JVM instruction set makes implementing more advanced forms of tail recursion very difficult. --Odersky, Martin; Spoon, Lex; Venners, Bill (2010-12-13). Programming in Scala: A Comprehensive Step-by-Step Guide (Kindle Locations 3200-3201). Artima Press. Kindle Edition.
Scala does the best it can with the byte code options that it is given.
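If you want the compiler to hold you to it, Scala has a @tailrec annotation that fails compilation when a call is not actually in tail position. A minimal sketch (mine, not from the book):

import scala.annotation.tailrec

@tailrec
def gcd(a: Int, b: Int): Int =
  if (b == 0) a else gcd(b, a % b)   // compiles: the recursive call is the last action

// @tailrec
// def factorial(n: Int): Int =
//   if (n <= 1) 1 else n * factorial(n - 1)   // would NOT compile with @tailrec:
//                                             // the multiplication happens after the call

println(gcd(48, 36))   // 12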

Example of combining QBit and Vertx (Running QBit inside of Vertx instead of using Vertx like a lib)



From QBit and Vertx Microservices best of both worlds

Example of combining QBit and Vertx

Let's say we have a service like this:

Sample QBit Service

    @RequestMapping("/hello")
    public static class MyRestService {

        @RequestMapping(value = "/world", method = RequestMethod.POST)
        public String hello(String body) {
            return body;
        }
    }
We want to use a lot of Vertx features, and we decide to embed QBit support inside of a verticle.
Our vertx MyVerticle might look like this:

Vertx Verticle

publicclassMyVerticleextendsAbstractVerticle {

privatefinalint port;

publicMyVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


/* Route one call to a vertx handler. */
finalRouter router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});

/* Route everything under /hello to QBit http server. */
finalRoute qbitRoute = router.route().path("/hello/*");


/* Vertx HTTP Server. */
finalio.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer();

/*
* Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
*/
finalHttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRoute(qbitRoute)
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();


/*
* Create a new service endpointServer and add MyRestService to it.
* ( You could add a lot more services than just one. )
*/
finalMyRestService myRestService =newMyRestService();
finalServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
.addService(myRestService).setHttpServer(httpServer).build();

endpointServer.startServer();



/*
* Associate the router as a request handler for the vertxHttpServer.
*/
vertxHttpServer.requestHandler(router::accept).listen(port);
}catch (Exception ex) {
ex.printStackTrace();
}
}

publicvoidstop() {
}

}
Read the comments to see what is going on. It should make sense.
Next we start up the vertx Verticle (perhaps in a main method).

Starting up the Vertx verticle

        myVerticle = new MyVerticle(port);
        vertx = Vertx.vertx();
        vertx.deployVerticle(myVerticle, res -> {
            if (res.succeeded()) {
                System.out.println("Deployment id is: " + res.result());
            } else {
                System.out.println("Deployment failed!");
                res.cause().printStackTrace();
            }
            latch.countDown();
        });

Now do some QBit curl commands :)

final HttpClient client = HttpClientBuilder.httpClientBuilder()
        .setHost("localhost").setPort(port).buildAndStart();

final HttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
assertEquals(202, response.code());
assertEquals("route1", response.body());


final HttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
assertEquals(200, response2.code());
assertEquals("\"hi\"", response2.body());
The full example is actually one of the integration tests that is part of QBit.

Full example

packageio.advantageous.qbit.vertx;

importio.advantageous.qbit.annotation.RequestMapping;
importio.advantageous.qbit.annotation.RequestMethod;
importio.advantageous.qbit.http.client.HttpClient;
importio.advantageous.qbit.http.client.HttpClientBuilder;
importio.advantageous.qbit.http.request.HttpTextResponse;
importio.advantageous.qbit.http.server.HttpServer;
importio.advantageous.qbit.server.ServiceEndpointServer;
importio.advantageous.qbit.util.PortUtils;
importio.advantageous.qbit.vertx.http.VertxHttpServerBuilder;
importio.vertx.core.AbstractVerticle;
importio.vertx.core.Vertx;
importio.vertx.core.VertxOptions;
importio.vertx.core.http.HttpServerResponse;
importio.vertx.ext.web.Route;
importio.vertx.ext.web.Router;
importorg.junit.After;
importorg.junit.Before;
importorg.junit.Test;

importjava.util.concurrent.CountDownLatch;
importjava.util.concurrent.TimeUnit;

import staticio.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;
import staticorg.junit.Assert.assertEquals;

/**
* Created by rick on 9/29/15.
*/
publicclassVertxRESTIntegrationTest {

privateVertx vertx;
privateTestVerticle testVerticle;
privateint port;

@RequestMapping("/hello")
publicstaticclassTestRestService {

@RequestMapping(value="/world", method=RequestMethod.POST)
publicStringhello(Stringbody) {
return body;
}
}

publicstaticclassTestVerticleextendsAbstractVerticle {

privatefinalint port;

publicTestVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


/* Route one call to a vertx handler. */
finalRouter router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});

/* Route everything under /hello to QBit http server. */
finalRoute qbitRoute = router.route().path("/hello/*");


/* Vertx HTTP Server. */
finalio.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer();

/*
* Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
*/
finalHttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRoute(qbitRoute)
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();


/*
* Create a new service endpointServer.
*/
finalServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
.addService(newTestRestService()).setHttpServer(httpServer).build();

endpointServer.startServer();



/*
* Associate the router as a request handler for the vertxHttpServer.
*/
vertxHttpServer.requestHandler(router::accept).listen(port);
}catch (Exception ex) {
ex.printStackTrace();
}
}

publicvoidstop() {
}

}

@Before
publicvoidsetup() throwsException{


finalCountDownLatch latch =newCountDownLatch(1);
port =PortUtils.findOpenPortStartAt(9000);
testVerticle =newTestVerticle(port);
vertx =Vertx.vertx(newVertxOptions().setWorkerPoolSize(5));
vertx.deployVerticle(testVerticle, res -> {
if (res.succeeded()) {
System.out.println("Deployment id is: "+ res.result());
} else {
System.out.println("Deployment failed!");
res.cause().printStackTrace();
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
}

@Test
publicvoidtest() {

finalHttpClient client =HttpClientBuilder.httpClientBuilder().setHost("localhost").setPort(port).buildAndStart();
finalHttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
assertEquals(202, response.code());
assertEquals("route1", response.body());


finalHttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
assertEquals(200, response2.code());
assertEquals("\"hi\"", response2.body());

}


@After
publicvoidtearDown() throwsException {

finalCountDownLatch latch =newCountDownLatch(1);
vertx.close(res -> {
if (res.succeeded()) {
System.out.println("Vertx is closed? "+ res.result());
} else {
System.out.println("Vertx failed closing");
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
vertx =null;
testVerticle =null;

}
}
You can bind directly to the vertxHttpServer, or you can use a router.

Bind qbit to a vertx router

publicstaticclassMyVerticleextendsAbstractVerticle {

privatefinalint port;

publicMyVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {

HttpServerOptions options =newHttpServerOptions().setMaxWebsocketFrameSize(1000000);
options.setPort(port);

Router router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});



io.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer(options);

HttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRouter(router)//BIND TO THE ROUTER!
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();
...

Bind qbit to a vertx httpServer

publicstaticclassMyVerticleextendsAbstractVerticle {

privatefinalint port;

publicMyVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


HttpServerOptions options =newHttpServerOptions().setMaxWebsocketFrameSize(1000000);
options.setPort(port);


io.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer(options);

HttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setVertx(getVertx())
.setHttpServer(vertxHttpServer) //BIND TO VERTX HTTP SERVER DIRECT
.build();

...


BACKGROUND

QBit and Vertx3 : Best of both worlds for Microservices


This feature allows you to create a service which can also serve up web pages and web resources for an app. Previously, QBit has been more focused on just being a REST microservices lib, i.e., routing HTTP calls and WebSocket messages to Java methods. Rather than reinvent the world, QBit now supports Vertx 3. There is a full example at the bottom of this page on how to combine QBit and Vertx.
The QBit support for Vertx 3 exceeds the support for Vertx 2.
QBit allows REST style support via annotations.

Example of QBit REST style support via annotations.

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback,
                       final @RequestParam("id") String id) {

        Todo remove = todoMap.remove(id);
        callback.accept(remove != null);

    }
QBit, the microservices lib, also provides integration with Consul, a typed event bus (which can be clustered), really simplifies complex reactive async callback coordination between services, and a lot more. Please read through the QBit overview.
History: QBit at first only ran inside of Vertx 2. Then we decided (a client-driven decision) to make it stand alone, and we lost the ability to run it embedded (we did not need it for any project on the road map).
Now you can use QBit features and Vertx 3 features via a mix and match model. You do this by setting up routers and/or a route in Vertx 3 to route to an HttpServer in QBit, and this takes about 1 line of code.
This means we can use Vertx 3's chunking, streaming, etc. for complex HTTP support, as well as use Vertx 3 as a normal HttpServer; but when we want to use REST-style async callbacks, we can use QBit for routing REST calls to Java methods. We can access all of the features of Vertx 3. (QBit was originally a Vertx 2 add-on lib, but then we had clients that wanted to run it standalone and clients who wanted to use it with Servlets / embedded Jetty. This is more coming back home versus a new thing.)
You can run QBit standalone and if you do, it uses Vertx 3 like a network lib, or you can run QBit inside of Vertx 3.
We moved this up for two reasons. We were going to start using Vertx support for DNS to read DNS entries for service discovery in a Heroku-like environment. It made no sense to invest a lot of time in the Vertx 2 API when we were switching to Vertx 3 in the short term. We also had some services that needed to deliver up an SPA (Single Page App), so we had to extend the support for Vertx anyway or add these features to QBit (which it sort of has, but it is not really its focus, so we would rather just delegate that to Vertx 3), and it made no sense to do that with Vertx 2.
Also, the Vertx 3 environment is a very vibrant one with many philosophies shared with QBit.
Let's cover where the Vertx3 integration and QBit come in.
We added a new class called VertxHttpServerBuilder (extends HttpServerBuilder), which allows one to build a QBit HTTP server from a vertx object, a vertx HttpServer, and optionally from a Vertx router or a Vertx route.
Note that you can pass a QBit HttpServerBuilder or a QBit HttpServer to a QBit EndpointServerBuilder to use that builder or HttpServer instead of the default. VertxHttpServerBuilder is a QBit HttpServerBuilder, so you construct it, associate it with vertx, and then inject it into EndpointServerBuilder. This is how we integrate with the QBit REST/WebSocket support. If you are using QBit REST with Vertx, that is one integration point.
Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default. If you want to use QBit REST and QBit Swagger support with Vertx, then you would want to use ManagedServiceBuilder with this class.
Here are some docs taken from our JavaDocs for QBit VertxHttpServerBuilder. VertxHttpServerBuilder also allows one to pass a shared Vertx object if running inside of the Vertx world. It also allows one to pass a shared vertx HttpServer if you want to use more than just QBit routing. If you are using Vertx routing or you want to limit this QBit HttpServer to one route, then you can pass a route.
Note: QBit's Vertx 2 support is EOL. We will be phasing it out shortly.
Here are some code examples on how to mix and match QBit and Vertx3.

Usage

Creating a QBit HttpServer that is tied to a single vertx route

HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer).setRoute(route).build();
httpServer.start();

Creating a QBit HttpServer server and passing a router so it can register itself as the default route

Router router = Router.router(vertx); //Vertx router
Route route1 = router.route("/some/path/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    // enable chunked responses because we will be adding data as
    // we execute over other handlers. This is only required once and
    // only if several handlers do output.
    response.setChunked(true);
    response.write("route1\n");

    // Call the next matching route after a 5 second delay
    routingContext.vertx().setTimer(5000, tid -> routingContext.next());
});

//Now install our QBit Server to handle REST calls.
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

HttpServer httpServer = vertxHttpServerBuilder.build();
httpServer.start();
Note that you can pass an HttpServerBuilder or an HttpServer to EndpointServerBuilder to use that builder or HttpServer instead of the default. If you are using QBit REST with Vertx, that is one integration point.

EndpointServerBuilder integration

//Like before
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

//Now inject the QBit HttpServer into the endpointServerBuilder before you call build
HttpServer httpServer = vertxHttpServerBuilder.build();
endpointServerBuilder.setHttpServer(httpServer);
Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default.
If you want to use QBit REST and QBit Swagger support with Vertx, then you would want to use ManagedServiceBuilder with this class.

ManagedServiceBuilder integration

//Like before
vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
        .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

//Now inject the QBit HttpServer into the managedServiceBuilder before you call build
HttpServer httpServer = vertxHttpServerBuilder.build();
managedServiceBuilder.setHttpServer(httpServer);
Read the Vertx guide on routing for more details: Vertx Http Ext Manual.

Example of combining QBit and Vertx

Let's say we have a service like this:

Sample QBit Service

    @RequestMapping("/hello")
publicstaticclassMyRestService {

@RequestMapping(value="/world", method=RequestMethod.POST)
publicStringhello(Stringbody) {
return body;
}
}
We want to use a lot of Vertx features, and we decide to embed QBit support inside of a verticle.
Our vertx MyVerticle might look like this:

Vertx Verticle

publicclassMyVerticleextendsAbstractVerticle {

privatefinalint port;

publicMyVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


/* Route one call to a vertx handler. */
finalRouter router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});

/* Route everything under /hello to QBit http server. */
finalRoute qbitRoute = router.route().path("/hello/*");


/* Vertx HTTP Server. */
finalio.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer();

/*
* Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
*/
finalHttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRoute(qbitRoute)
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();


/*
* Create a new service endpointServer and add MyRestService to it.
* ( You could add a lot more services than just one. )
*/
finalMyRestService myRestService =newMyRestService();
finalServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
.addService(myRestService).setHttpServer(httpServer).build();

endpointServer.startServer();



/*
* Associate the router as a request handler for the vertxHttpServer.
*/
vertxHttpServer.requestHandler(router::accept).listen(port);
}catch (Exception ex) {
ex.printStackTrace();
}
}

publicvoidstop() {
}

}
Read the comments to see what is going on. It should make sense.
Next we start up the vertx Verticle (perhaps in a main method).

Starting up the Vertx verticle

        myVerticle =newMyVerticle(port);
vertx =Vertx.vertx();
vertx.deployVerticle(myVerticle, res -> {
if (res.succeeded()) {
System.out.println("Deployment id is: "+ res.result());
} else {
System.out.println("Deployment failed!");
res.cause().printStackTrace();
}
latch.countDown();
});

Now do some QBit curl commands :)

final HttpClient client = HttpClientBuilder.httpClientBuilder()
        .setHost("localhost").setPort(port).buildAndStart();

final HttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
assertEquals(202, response.code());
assertEquals("route1", response.body());


final HttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
assertEquals(200, response2.code());
assertEquals("\"hi\"", response2.body());
The full example is actually one of the integration tests that is part of QBit.

Full example

packageio.advantageous.qbit.vertx;

importio.advantageous.qbit.annotation.RequestMapping;
importio.advantageous.qbit.annotation.RequestMethod;
importio.advantageous.qbit.http.client.HttpClient;
importio.advantageous.qbit.http.client.HttpClientBuilder;
importio.advantageous.qbit.http.request.HttpTextResponse;
importio.advantageous.qbit.http.server.HttpServer;
importio.advantageous.qbit.server.ServiceEndpointServer;
importio.advantageous.qbit.util.PortUtils;
importio.advantageous.qbit.vertx.http.VertxHttpServerBuilder;
importio.vertx.core.AbstractVerticle;
importio.vertx.core.Vertx;
importio.vertx.core.VertxOptions;
importio.vertx.core.http.HttpServerResponse;
importio.vertx.ext.web.Route;
importio.vertx.ext.web.Router;
importorg.junit.After;
importorg.junit.Before;
importorg.junit.Test;

importjava.util.concurrent.CountDownLatch;
importjava.util.concurrent.TimeUnit;

import staticio.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;
import staticorg.junit.Assert.assertEquals;

/**
* Created by rick on 9/29/15.
*/
publicclassVertxRESTIntegrationTest {

privateVertx vertx;
privateTestVerticle testVerticle;
privateint port;

@RequestMapping("/hello")
publicstaticclassTestRestService {

@RequestMapping(value="/world", method=RequestMethod.POST)
publicStringhello(Stringbody) {
return body;
}
}

publicstaticclassTestVerticleextendsAbstractVerticle {

privatefinalint port;

publicTestVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


/* Route one call to a vertx handler. */
finalRouter router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});

/* Route everything under /hello to QBit http server. */
finalRoute qbitRoute = router.route().path("/hello/*");


/* Vertx HTTP Server. */
finalio.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer();

/*
* Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
*/
finalHttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRoute(qbitRoute)
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();


/*
* Create a new service endpointServer.
*/
finalServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
.addService(newTestRestService()).setHttpServer(httpServer).build();

endpointServer.startServer();



/*
* Associate the router as a request handler for the vertxHttpServer.
*/
vertxHttpServer.requestHandler(router::accept).listen(port);
}catch (Exception ex) {
ex.printStackTrace();
}
}

publicvoidstop() {
}

}

@Before
publicvoidsetup() throwsException{


finalCountDownLatch latch =newCountDownLatch(1);
port =PortUtils.findOpenPortStartAt(9000);
testVerticle =newTestVerticle(port);
vertx =Vertx.vertx(newVertxOptions().setWorkerPoolSize(5));
vertx.deployVerticle(testVerticle, res -> {
if (res.succeeded()) {
System.out.println("Deployment id is: "+ res.result());
} else {
System.out.println("Deployment failed!");
res.cause().printStackTrace();
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
}

@Test
publicvoidtest() {

finalHttpClient client =HttpClientBuilder.httpClientBuilder().setHost("localhost").setPort(port).buildAndStart();
finalHttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
assertEquals(202, response.code());
assertEquals("route1", response.body());


finalHttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
assertEquals(200, response2.code());
assertEquals("\"hi\"", response2.body());

}


@After
publicvoidtearDown() throwsException {

finalCountDownLatch latch =newCountDownLatch(1);
vertx.close(res -> {
if (res.succeeded()) {
System.out.println("Vertx is closed? "+ res.result());
} else {
System.out.println("Vertx failed closing");
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
vertx =null;
testVerticle =null;

}
}
You can bind directly to the vertxHttpServer, or you can use a router.

Bind qbit to a vertx router

public static class MyVerticle extends AbstractVerticle {

    private final int port;

    public MyVerticle(int port) {
        this.port = port;
    }

    public void start() {

        try {

            HttpServerOptions options = new HttpServerOptions().setMaxWebsocketFrameSize(1000000);
            options.setPort(port);

            Router router = Router.router(vertx); //Vertx router
            router.route("/svr/rout1/").handler(routingContext -> {
                HttpServerResponse response = routingContext.response();
                response.setStatusCode(202);
                response.end("route1");
            });

            io.vertx.core.http.HttpServer vertxHttpServer =
                    this.getVertx().createHttpServer(options);

            HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setRouter(router) //BIND TO THE ROUTER!
                    .setHttpServer(vertxHttpServer)
                    .setVertx(getVertx())
                    .build();
            ...

Bind qbit to a vertx httpServer

public static class MyVerticle extends AbstractVerticle {

    private final int port;

    public MyVerticle(int port) {
        this.port = port;
    }

    public void start() {

        try {

            HttpServerOptions options = new HttpServerOptions().setMaxWebsocketFrameSize(1000000);
            options.setPort(port);

            io.vertx.core.http.HttpServer vertxHttpServer =
                    this.getVertx().createHttpServer(options);

            HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setVertx(getVertx())
                    .setHttpServer(vertxHttpServer) //BIND TO VERTX HTTP SERVER DIRECT
                    .build();

            ...

Where do we go from here

QBit has a health system and a microservices stats collection system; Vertx 3 provides similar support. QBit has an event bus, and Vertx has an event bus. There is no reason why QBit can't provide Vertx implementations of its event bus (this is how the QBit event bus started), or for that matter integrate with Vertx's health system or its stats collection system. QBit has its own service discovery system with implementations that talk to DNS, Consul, or just monitor JSON files for updates (for a Chef Push, Consul, or etcd pull model). There is no reason QBit could not provide an implementation of its Service Discovery that works with Vertx's clustering support. All of the major internal services that QBit provides are extensible with plugins via interfaces. There is plenty of opportunity for more integration of QBit and Vertx.
QBit and Vertx have both evolved to provide more and more support for microservices and there is a lot of synergy between the two libs.
QBit can also play well with Servlets, Spring MVC, Spring Boot, and other lightweight HTTP libs. QBit comes batteries included.

Vertx and QBit integration, the best of both worlds Microservices round 3



ManagedServiceBuilder vs. EndpointServerBuilder

ManagedServiceBuilder is in the QBit admin module; EndpointServerBuilder is in QBit core. ManagedServiceBuilder provides integration with StatsD, Consul, and Swagger. ManagedServiceBuilder is the glue that makes QBit work well in Heroku-like environments and adds Swagger support, StatsD support, local stats support, health endpoint support, health system support, admin endpoint support, etc. EndpointServerBuilder builds a single endpoint. ManagedServiceBuilder builds a standard microservice app with health checks, metrics, and more to provide a batteries-included microservices architecture. There is some overlap with Vertx, but the plan is to build bridges from the QBit health system over to the Vertx health system, and from QBit metrics, stats, and KPIs over to the Vertx stats system. (You just have to implement an interface and delegate some method calls to Vertx for both the QBit health system and the QBit stats system.)
Read all three rounds here if you missed the first two.
If you want to use QBit without statsD, consul, health checks, admin, managed shutdown and swagger support, then you just use EndpointServerBuilder. If you want statsD, consul, health checks, admin, or swagger, then you use QBit Spring support or the ManagedServiceBuilder. The Spring support is for another document page. ManagedServiceBuilder allows you to inject a custom health system, a custom service discovery, and a custom stats system. It is a nice integration point to delegate to Vertx services.
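To make the trade-off concrete, here is a minimal sketch (not the full applications shown in this article) of the same service wired up both ways. It assumes the MyRestService class from the examples in this article, lets each builder create its default HTTP server, and you would pick one of the two in a real main method.

/* 1) EndpointServerBuilder only: a plain REST/WebSocket endpoint, no extras. */
final ServiceEndpointServer plainServer = EndpointServerBuilder.endpointServerBuilder()
        .setUri("/")
        .addService(new MyRestService())
        .build();
plainServer.startServer();

/* 2) ManagedServiceBuilder: the same service plus health checks, stats,
 *    the admin endpoint, swagger support, etc. */
final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();
managedServiceBuilder.addEndpointService(new MyRestService());
managedServiceBuilder.getEndpointServerBuilder()
        .setUri("/")
        .build()
        .startServer();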
Let's cover this in an example that is like the REST example above but uses ManagedServiceBuilder instead of EndpointServerBuilder.

You have a service like before

@RequestMapping("/hello")
public class MyRestService {

    @RequestMapping(value = "/world", method = RequestMethod.POST)
    public String hello(String body) {
        return body;
    }
}
Note this is a simple example. QBit can do much more than this. To get a bit of an idea, please check out: QBit microservice tutorials. And be sure to check out QBit Reactive Programming.

The verticle now uses ManagedServiceBuilder instead of using EndpointServerBuilder directly

public class MyVerticle extends AbstractVerticle {

    private final int port;

    /** The systemManager can cleanly shut down anything started by the
     * QBit ManagedServiceBuilder.
     */
    private QBitSystemManager systemManager;

    public MyVerticle(int port) {
        this.port = port;
    }

    public void start() {

        try {

            /* Route one call to a vertx handler. */
            final Router router = Router.router(vertx); //Vertx router
            router.route("/svr/rout1/").handler(routingContext -> {
                HttpServerResponse response = routingContext.response();
                response.setStatusCode(202);
                response.end("route1");
            });

            /* Route everything under /hello to QBit http server. */
            final Route qbitRoute = router.route().path("/hello/*");

            /* Vertx HTTP Server. */
            final io.vertx.core.http.HttpServer vertxHttpServer =
                    this.getVertx().createHttpServer();

            /*
             * Use the VertxHttpServerBuilder which is a special builder for Vertx/QBit integration.
             */
            final HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setRoute(qbitRoute)
                    .setHttpServer(vertxHttpServer)
                    .setVertx(getVertx())
                    .build();

            /** Use a managed service builder. */
            final ManagedServiceBuilder managedServiceBuilder =
                    ManagedServiceBuilder.managedServiceBuilder();

            systemManager = managedServiceBuilder.getSystemManager();

            /*
             * Create a new service endpointServer.
             */
            final ServiceEndpointServer endpointServer = managedServiceBuilder
                    .getEndpointServerBuilder().setUri("/")
                    .addService(new MyRestService())
                    .setHttpServer(httpServer).build();

            endpointServer.startServer();

            /*
             * Associate the router as a request handler for the vertxHttpServer.
             */
            vertxHttpServer.requestHandler(router::accept).listen(port);
        } catch (Exception ex) {
            ex.printStackTrace();
            throw new IllegalStateException(ex);
        }
    }

    public void stop() {

        if (systemManager != null) {
            systemManager.shutDown();
        }
    }

}
The important bits to see are that we are now using the ManagedServiceBuilder in the verticle start method.

ManagedServiceBuilder

/** Use a managed service builder. */
final ManagedServiceBuilder managedServiceBuilder =
        ManagedServiceBuilder.managedServiceBuilder();

systemManager = managedServiceBuilder.getSystemManager();
And that we are now using the EndpointServerBuilder that is managed by the ManagedServiceBuilder (managedServiceBuilder.getEndpointServerBuilder).

ManagedServiceBuilder.getEndpointServerBuilder

/*
 * Create a new service endpointServer.
 */
final ServiceEndpointServer endpointServer = managedServiceBuilder
        .getEndpointServerBuilder().setUri("/")
        .addService(new MyRestService())
        .setHttpServer(httpServer).build();
Note that the QBit systemManager ensures that all services that QBit started will get shut down properly.

Proper shutdown

public void stop() {

    if (systemManager != null) {
        systemManager.shutDown();
    }
}
This example is one of the unit tests for the admin package.

Complete example showing how to use ManagedServiceBuilder with Vertx to build microservices

packageio.advantageous.qbit.vertx;

importio.advantageous.qbit.admin.ManagedServiceBuilder;
importio.advantageous.qbit.annotation.RequestMapping;
importio.advantageous.qbit.annotation.RequestMethod;
importio.advantageous.qbit.http.client.HttpClient;
importio.advantageous.qbit.http.client.HttpClientBuilder;
importio.advantageous.qbit.http.request.HttpTextResponse;
importio.advantageous.qbit.http.server.HttpServer;
importio.advantageous.qbit.server.ServiceEndpointServer;
importio.advantageous.qbit.system.QBitSystemManager;
importio.advantageous.qbit.util.PortUtils;
importio.advantageous.qbit.vertx.http.VertxHttpServerBuilder;
importio.vertx.core.AbstractVerticle;
importio.vertx.core.Vertx;
importio.vertx.core.VertxOptions;
importio.vertx.core.http.HttpServerResponse;
importio.vertx.ext.web.Route;
importio.vertx.ext.web.Router;
importorg.junit.After;
importorg.junit.Before;
importorg.junit.Test;

importjava.util.concurrent.CountDownLatch;
importjava.util.concurrent.TimeUnit;

import staticorg.junit.Assert.assertEquals;

publicclassVertxManagedServiceBuilderIntegrationTest {

privateVertx vertx;
privateTestVerticle testVerticle;
privateint port;

@RequestMapping("/hello")
publicstaticclassTestRestService {

@RequestMapping(value="/world", method=RequestMethod.POST)
publicStringhello(Stringbody) {
return body;
}
}

publicstaticclassTestVerticleextendsAbstractVerticle {

privatefinalint port;

privateQBitSystemManager systemManager;

publicTestVerticle(intport) {
this.port = port;
}

publicvoidstart() {

try {


/* Route one call to a vertx handler. */
finalRouter router =Router.router(vertx); //Vertx router
router.route("/svr/rout1/").handler(routingContext -> {
HttpServerResponse response = routingContext.response();
response.setStatusCode(202);
response.end("route1");
});

/* Route everything under /hello to QBit http server. */
finalRoute qbitRoute = router.route().path("/hello/*");


/* Vertx HTTP Server. */
finalio.vertx.core.http.HttpServer vertxHttpServer =
this.getVertx().createHttpServer();

/*
* Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
*/
finalHttpServer httpServer =VertxHttpServerBuilder.vertxHttpServerBuilder()
.setRoute(qbitRoute)
.setHttpServer(vertxHttpServer)
.setVertx(getVertx())
.build();


/** Use a managed service builder. */
finalManagedServiceBuilder managedServiceBuilder =ManagedServiceBuilder.managedServiceBuilder();

systemManager = managedServiceBuilder.getSystemManager();

/*
* Create a new service endpointServer.
*/
finalServiceEndpointServer endpointServer = managedServiceBuilder
.getEndpointServerBuilder().setUri("/")
.addService(newTestRestService())
.setHttpServer(httpServer).build();



endpointServer.startServer();



/*
* Associate the router as a request handler for the vertxHttpServer.
*/
vertxHttpServer.requestHandler(router::accept).listen(port);
}catch (Exception ex) {
ex.printStackTrace();
thrownewIllegalStateException(ex);
}
}

publicvoidstop() {

if (systemManager!=null) {
systemManager.shutDown();
}
}

}

@Before
publicvoidsetup() throwsException{


finalCountDownLatch latch =newCountDownLatch(1);
port =PortUtils.findOpenPortStartAt(9000);
testVerticle =newTestVerticle(port);
vertx =Vertx.vertx(newVertxOptions().setWorkerPoolSize(5));
vertx.deployVerticle(testVerticle, res -> {
if (res.succeeded()) {
System.out.println("Deployment id is: "+ res.result());
} else {
System.out.println("Deployment failed!");
res.cause().printStackTrace();
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
}

@Test
publicvoidtest() {

finalHttpClient client =HttpClientBuilder.httpClientBuilder().setHost("localhost").setPort(port).buildAndStart();
finalHttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
assertEquals(202, response.code());
assertEquals("route1", response.body());


finalHttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
assertEquals(200, response2.code());
assertEquals("\"hi\"", response2.body());

}


@After
publicvoidtearDown() throwsException {

finalCountDownLatch latch =newCountDownLatch(1);
vertx.close(res -> {
if (res.succeeded()) {
System.out.println("Vertx is closed? "+ res.result());
} else {
System.out.println("Vertx failed closing");
}
latch.countDown();
});


latch.await(5, TimeUnit.SECONDS);
vertx =null;
testVerticle =null;

}
}

QBit supports Metrics, KPI gathering for runtime stats and reactive stats for Microservices

For some background on why this is important for microservices see Reactive Microservices Monitoring.

QBit supports Metrics, KPI gathering

QBit supports collecting metrics for microservices. The QBit runtime statistics system can be queried, it can be clustered, and it can replicate to any statistics system. The core interfaces for the QBit runtime stats system live in io.advantageous.qbit.service.stats. The main interface for collecting stats is StatsCollector.

StatsCollector

package io.advantageous.qbit.service.stats;

import io.advantageous.qbit.client.ClientProxy;

/**
 * Collects stats.
 * This collects key performance indicators: timings, counts and levels/gauges.
 * Created by rick on 6/6/15.
 */
public interface StatsCollector extends ClientProxy {


    /** Increment a counter by 1.
     * This is a shortcut for recordCount(name, 1);
     * @param name name of the metric, KPI, stat.
     */
    default void increment(String name) {
    }

    /**
     * Record a count.
     * Used to record things like how many users used the site.
     *
     * @param name name of the metric, KPI, stat
     * @param count count to record.
     */
    default void recordCount(String name, long count) {
    }

    /**
     * This is used to record things like the count of current threads or
     * free system memory or free disk, etc.
     * Record Level. Some systems call this a gauge.
     * @param name name of the gauge or level
     * @param level level
     */
    default void recordLevel(String name, long level) {
    }

    /**
     * This is used to record timings.
     * This would be things like how long did it take this service to call
     * this remote service.
     * @param name name of the timing
     * @param duration duration
     */
    default void recordTiming(String name, long duration) {
    }

}

ServiceStatsListener

You will probably never use a StatsCollector directly but a StatsCollectorBuffer instead, as it buffers metric calls to reduce IO and reporting to the stats engine. Another important concept in this package is the ServiceStatsListener. The ServiceStatsListener gets registered on your behalf if you use the ManagedServiceBuilder.
The ServiceStatsListener is used to intercept queue calls for the ServiceQueue. All services and end-points end up using the ServiceQueue. This class is able to track stats for services.

Default Service Stat Keys

startBatchCountKey = serviceName + ".startBatchCount";
receiveCountKey = serviceName + ".receiveCount";
receiveTimeKey = serviceName + ".callTimeSample";
this.queueRequestSizeKey = serviceName + ".queueRequestSize";
this.queueResponseSizeKey = serviceName + ".queueResponseSize";
The ${serviceName}.startBatchCount tracks how many times a batch has been sent.
This can tell you how well your batching is setup.
The ${serviceName}.receiveCount is how many times the service has been called.
The ${serviceName}.callTimeSample is how long methods take for this service (if enabled, call times are sampled).
The ${serviceName}.queueRequestSize keeps track of how large the request queue is. If it is greater than 0, that is an indication that calls are not getting handled. If it continues to rise, then the service could be down. (Note there is a health check to see if a queue is blocked, and the service will be marked unhealthy.)
The ${serviceName}.queueResponseSize keeps track of how large the response queue is getting. This is an indication that responses are not getting drained.
All of the classes that we covered so far are in QBit core. This means that stats and KPI gathering are just part of the QBit system. Metrics are an integral part of microservices, so they are an integral part of QBit.

StatService and StatsD

The StatService is in the QBit admin package. The StatService interface allows you to both record stats, KPIs, and metrics for microservices and to query them. The StatService can replicate KPIs (key performance indicators) to replicators. It does this efficiently.
Let's look at the StatService interface and its comments.
packageio.advantageous.qbit.metrics;
importio.advantageous.qbit.reactive.Callback;
importio.advantageous.qbit.service.stats.Stats;
importio.advantageous.qbit.service.stats.StatsCollector;


/**
* The StatService collects stats, and allows stats to be queried.
* This collects key performance indicators: timings, counts and levels/gauges.
* It also allow internal or external clients to query this system.
*
* Created by rick on 6/6/15.
*/
publicinterfaceStatServiceextendsStatsCollector {


/**
* Get the last n Seconds of stats (up to two minutes of stats typically
* kept in memory).
*
* The `Stat` object has the mean, median, etc.
*
* ```java
*
* private final float mean;
* private final float stdDev;
* private final float variance;
* private final long sum;
* private final long max;
* private final long min;
* private final long median;
* ```
* @param callback callback to get Stat
* @param name name metric, KPI, etc.
* @param secondCount secondCount
*/
default voidstatsForLastSeconds(Callback<Stats>callback, Stringname,
intsecondCount) {
}

/**
* Gets the average last n Seconds of of a level.
*
* @param callback callback
* @param name name of metric, KPI, etc.
* @param secondCount secondCount
*/
default voidaverageLastLevel(Callback<Long>callback, Stringname,
intsecondCount) {
}

/**
* Gets count of the current minute
*
* @param callback callback
* @param name name of metric
*/
default voidcurrentMinuteCount(Callback<Long>callback, Stringname) {
}


/**
* Gets count of the current second.
*
* @param callback callback
* @param name name of metric
*/
default voidcurrentSecondCount(Callback<Long>callback, Stringname) {
}


/**
* Gets count of the last recorded full second.
*
* @param callback callback
* @param name name of metric
*/
default voidlastSecondCount(Callback<Long>callback, Stringname) {
}


/**
* Gets count of the last recorded ten full seconds.
*
* @param callback callback
* @param name name of metric
*/
default voidlastTenSecondCount(Callback<Long>callback, Stringname) {
}


/**
* Gets count of the last recorded five full seconds.
*
* @param callback callback
* @param name name of metric
*/
default voidlastFiveSecondCount(Callback<Long>callback, Stringname) {
}


/**
* Gets count of the last recorded N full seconds.
*
* @param callback callback
* @param name name of metric
*/
default voidlastNSecondsCount(Callback<Long>callback, Stringname,
intsecondCount) {
}


/**
* Gets count of the last recorded N full seconds.
* This is more exact if the count overlaps two minutes.
*
* @param callback callback
* @param name name of metric
*/
default voidlastNSecondsCountExact(Callback<Long>callback, Stringname,
intsecondCount) {
}


/**
* Gets count of the last recorded N full seconds.
* This is more exact if the count overlaps two minutes.
*
* @param callback callback
* @param name name of metric
*/
default voidlastTenSecondCountExact(Callback<Long>callback, Stringname) {
}

/**
* Gets count of the last recorded N full seconds.
* This is more exact if the count overlaps two minutes.
*
* @param callback callback
* @param name name of metric
*/
default voidlastFiveSecondCountExact(Callback<Long>callback, Stringname) {
}

/**
* Bulk record.
* @param name name of metric
* @param count count
* @param timestamp timestamp
*/
default voidrecordWithTime(Stringname, intcount, longtimestamp) {
}


/**
* Bulk record.
* @param names names of metric
* @param counts counts of metrics
* @param timestamp timestamp
*/
default voidrecordAll(longtimestamp, String[] names, long[] counts) {
}


/**
* Bulk record.
* @param names names of metric
* @param counts counts of metrics
* @param times times
*/
default voidrecordAllWithTimes(String[] names,
long[] counts, long[] times){
}
}
You can query the metrics system and provide reactive support. For example, you could query the current requests per second for a service and dynamically change the size of buffering to increase throughput.
QBit does not only monitor metrics, but it makes the metrics queryable so your microservices can be reactive.
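For instance, a minimal sketch of such a query might look like the following. It assumes you already have a StatService client proxy (called statServiceProxy here); the threshold and metric names are made up for illustration, and the method signatures are the ones shown in the interface above.

/* Minimal sketch, assuming statServiceProxy is a StatService client proxy
 * and the metric names below are the ones your service actually records. */
statServiceProxy.lastSecondCount(count -> {
    //React to the current load; the threshold here is purely illustrative.
    if (count > 10_000) {
        System.out.println("High load: myService.receiveCount = " + count);
    }
}, "myService.receiveCount");

//Aggregate stats (mean, max, min, etc.) over the last ten seconds of timings.
statServiceProxy.statsForLastSeconds(stats ->
        System.out.println("Last 10s call time stats: " + stats),
        "myService.callTimeSample", 10);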
The StatService system that comes with QBit can replicate changes to other systems via the StatReplicator.
/**
 * Stat Replicator.
 * This is used to replicate stats to another system.
 * created by rhightower on 1/28/15.
 */
public interface StatReplicator extends RemoteTCPClientProxy, ServiceFlushable, Stoppable {

    void replicateCount(String name, long count, long time);

    void replicateLevel(String name, long level, long time);

    void replicateTiming(String name, long timing, long time);
}
The QBit Admin package has two built-in collectors. The StatsDReplicator (notice the StatsD) implements StatReplicator and replicates via UDP to a StatsD server (e.g., Graphite, Statsite, and more). StatsD is a wire protocol over UDP for sending stats, and the StatsDReplicator implements this wire protocol to talk to a given host and port. The other built-in collector is the LocalStatsCollector, which serves stats over a REST endpoint (/__stats/instance) that delivers a JSON version of the stats (and resets them after the REST request); otherwise it keeps collecting them until some other system queries the /__stats/instance REST endpoint. Both the StatsDReplicator and the LocalStatsCollector have builders, but you typically get them for free by using the ManagedServiceBuilder. We use the LocalStatsCollector for Heroku-like environments.
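As a quick illustration, you can poll that local stats endpoint from any HTTP client. The sketch below uses the QBit HttpClient shown earlier in this article; it assumes a service running on localhost:8080 with the LocalStatsCollector enabled, and it assumes the client's simple get(...) helper is available in your QBit version.

/* Minimal sketch: pull the locally collected stats as JSON.
 * Assumes a QBit service on localhost:8080 using the LocalStatsCollector. */
final HttpClient statsClient = HttpClientBuilder.httpClientBuilder()
        .setHost("localhost").setPort(8080).buildAndStart();

final HttpTextResponse statsResponse = statsClient.get("/__stats/instance");
System.out.println(statsResponse.body()); //JSON snapshot of the collected stats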
You can configure StatsD via the ManagedServiceBuilder.

Configure StatsD with ManagedServiceBuilder

if (config.isStatsD()) {
    managedServiceBuilder.setEnableStatsD(true);
    managedServiceBuilder.getStatsDReplicatorBuilder()
            .setHost(config.getStatsDHost());

    if (config.getStatsDPort() != -1) {
        managedServiceBuilder.getStatsDReplicatorBuilder()
                .setPort(config.getStatsDPort());
    }
}
If you are using the JSON config file, you set up StatsD as follows:
{
  "statsD": true,
  "statsDHost": "lab99.myhost.com"
}
You can send your own stats and not just the ones that are sent via the default stats gathering.
Assuming you have a service called TodoService

Using stats from your own service

/** Create a stats collector. */
final StatsCollector statsCollector = managedServiceBuilder
        .getStatServiceBuilder().buildStatsCollector();

final TodoService todoService = new TodoService(statsCollector,
        ReactorBuilder.reactorBuilder().build(),
        taskRepo,
        Timer.timer());

/** Add the todo service to the managedServiceBuilder. */
managedServiceBuilder.addEndpointService(todoService);

Passing stats collector, reactor and timer.

public TodoService(final StatsCollector statsCollector,
                   final Reactor reactor,
                   final TaskRepo taskRepo,
                   final Timer timer) {
    this.statsCollector = statsCollector;
    this.timer = timer;
    this.taskRepo = taskRepo;
    this.reactor = reactor;

    this.reactor.addServiceToFlush(statsCollector);
Calling reactor.addServiceToFlush and passing the statsCollector ensures that when the service queue that is managing the TodoService is idle or full, any pending stats will be flushed. The statsCollector is the one that does the buffering mentioned earlier.
The reactor does not auto flush unless it is told to do so. For now, you always use the reactor with the following queue callback (no magic).

Calling reactor so that it can run jobs, coordinate calls and flush proxies

/** Process Reactor stuff. */
@QueueCallback({QueueCallbackType.LIMIT, QueueCallbackType.EMPTY})
public void process() {
    reactor.process();
    time = timer.time();
}
The reactor.process will flush all calls to the statsCollector, which will then send the stats to the actual StatService, where they will be replicated to all outstanding replicators.

Using recordCount

/**
 * Load TODOs from TodoRepo.
 */
@RequestMapping(value = "/todo", summary = "Load TODOs",
        ...)
public void loadTodo(final Callback<Boolean> callback) {

    final Set<TodoCategory> categories = new HashSet<>(this.categories);

    /* If there are no categories or if service is paused, then return right away. */
    if (categories.size() > 0 && !stop) {
        loadFromTodoRepoCache++;
        statsCollector.recordCount("Todo.repo.call.count", 1);
    } else {
        logger.warn("Service can't load categories count {} or stopped {}",
                components.size(), stop);
        return;
    }
    ...
Notice the use of statsCollector.recordCount("Todo.repo.call.count", 1); since this is just incrementing by one, we could also call statsCollector.increment("Todo.repo.call.count").

Using increment

/**
 * Load TODOs from TodoRepo.
 */
@RequestMapping(value = "/todo", summary = "Load TODOs",
        ...)
public void loadTodo(final Callback<Boolean> callback) {

    final Set<TodoCategory> categories = new HashSet<>(this.categories);

    /* If there are no categories or if service is paused, then return right away. */
    if (categories.size() > 0 && !stop) {
        loadFromTodoRepoCache++;
        statsCollector.increment("Todo.repo.call.count");
    } else {
        logger.warn("Service can't load categories count {} or stopped {}",
                components.size(), stop);
        return;
    }
    ...
Now let's show a timing.

statsCollector.recordTiming Timing how long a bunch of async calls took

/**
 * Load TODOs from TodoRepo.
 */
@RequestMapping(value = "/todo", summary = "Load TODOs",
        ...)
public void loadTodo(final Callback<Boolean> callback) {

    final Set<TodoCategory> categories = new HashSet<>(this.categories);

    /* If there are no categories or if service is paused, then return right away. */
    ...


    final long startTime = timer.time();

    /* For each TodoCategory call TodoRepo to load the todo items. */
    categories.forEach(category -> {

        final Callback<List<Todo>> todoCacheCallback =
                createLoadFromCacheCallback(count, errorCount, category);
        taskRepo.loadTodosFromCache(todoCacheCallback, category);

    });

    /* Coordinate all of the callbacks are done. */
    reactor.coordinatorBuilder()
            /* If the success count is equal to the
               component size, we are done. */
            .setCoordinator(() -> {
                        if (logger.isDebugEnabled()) {
                            logger.debug("COUNT " + count.get());
                        }
                        return count.get() == components.size();
                    }
            )
            /* Set the timeout to be seconds times two since
               we are calling two services. */
            .setTimeoutDuration(config.getTimeoutMakingRemoteCallInSeconds() * 2)
            .setTimeoutTimeUnit(TimeUnit.SECONDS)
            /* If there were no errors, then return success. */
            .setFinishedHandler(() -> {
                statsCollector
                        .recordTiming("Todo.loadCache.time",
                                timer.time() - startTime);
            })
            /* Set the timeout handler to return no
               success and log that there was a timeout. */
            .setTimeOutHandler(() -> {
                logger.error("Timeout while loading todo items");
                callback.returnThis(false);
            }).build();

    ...
This records a start time (startTime = timer.time()) and then makes a bunch of async calls. When all of the async calls return, we send a timing to record how long the whole process took using statsCollector.recordTiming. To really understand the complex call coordination with the QBit reactor, you first need to understand how QBit coordinates calls; you can learn more about this in the QBit Reactive Microservices Tutorial for handling async calls with the reactor.
Here is a simpler timing example timing a call to Cassandra.

Timing how long a single call to Cassandra took

public void executeAsyncCassandraCall(final Callback<ResultSet> callback,
                                      final Statement stmt) {
    final ResultSetFuture future = this.session.executeAsync(stmt);
    final long startTime = timer.time();

    Futures.addCallback(future, new FutureCallback<ResultSet>() {
        @Override
        public void onSuccess(ResultSet result) {
            statsCollector
                    .recordTiming("Cassandra.load.time",
                            timer.time() - startTime);
            callback.accept(result);

        }

        @Override
        public void onFailure(Throwable t) {
            statsCollector
                    .recordTiming("Cassandra.load.error.time",
                            timer.time() - startTime);
            callback.onError(t);
        }
    });

}
Notice that we use final long startTime = timer.time() and record one of two timings: how long the successful call took, or how long the error took.
To store a level, just use recordLevel.

Store a level

statsCollector.recordLevel("Todo.categories.size",
        categories.size());
Remember, a level is a gauge: how large is my cache, how many outstanding items are in my queue, etc.
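Since levels are absolute values, a natural place to re-record them is the same periodic queue callback shown earlier. Here is a minimal sketch that assumes the TodoService fields (statsCollector, reactor, timer, categories) from the examples above.

/** Minimal sketch: re-record gauges from the periodic queue callback.
 *  Assumes the TodoService fields from the examples above. */
@QueueCallback({QueueCallbackType.LIMIT, QueueCallbackType.EMPTY})
public void process() {
    reactor.process();
    time = timer.time();

    /* A level/gauge is an absolute value, so recording it on every pass is fine. */
    statsCollector.recordLevel("Todo.categories.size", categories.size());
}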

QBit has JMS, Kafka, Redis support, etc.


QBit has JMS support. Support for Kafka and other persistent queues will follow the same pattern. You can use JMS queues, local queues and Kafka queues with the same interface.
The JMS support works with ActiveMQ with little effort and can easily be made to work with any JMS implementation via the JmsServiceBuilder.
Two classes do the bulk of the integration with JMS as follows:
  • JmsService does the low level communication with JMS.
  • JmsServiceBuilder manages JNDI, creation of sessions, connections, destinations, etc.
Then there are classes that adapt JMS Queues and Topics to QBit Queues, along with helpers for working with them:
  • JmsTextQueue exposes a JMS queue or topic as QBit Queue
  • JmsTextReceiveQueue exposes a JMS queue or topic as QBit ReceiveQueue
  • JmsTextSenderQueue exposes a JMS queue or topic as QBit SendQueue
  • JsonQueue wraps a QBit Queue<String> and converts items into JSON and from JSON.
  • EventBusQueueAdapter moves items from a queue onto the QBit event bus. It can be used to channel events from Kafka, JMS, and/or Redis into the QBit world.

Receive messages from a queue

Receive messages from a queue

/** Create a new JMS Builder which can emit JmsService objects. */
final JmsServiceBuilder jmsBuilder = JmsServiceBuilder
        .newJmsServiceBuilder()
        .setHost("somehost").setPort(6355)
        .setDefaultDestination("foobarQueue");


/** Create a QBit Queue that talks to JMS. */
final Queue<String> textQueue = new JmsTextQueue(jmsBuilder);


/** Create a QBit ReceiveQueue that talks to JMS. */
final ReceiveQueue<String> receiveQueue = textQueue.receiveQueue();


/** Get a message from JMS. */
String message = receiveQueue.pollWait();
out.println(message);


/** Keep getting messages. */
while (message != null) {
    message = receiveQueue.poll();
    out.println(message);
}

out.println("DONE");

Send messages to a queue

Send messages to a queue

final JmsServiceBuilder jmsBuilder = JmsServiceBuilder.newJmsServiceBuilder()
        .setDefaultDestination("foobarQueue");
final Queue<String> textQueue = new JmsTextQueue(jmsBuilder);
final SendQueue<String> sendQueue = textQueue.sendQueue();

sendQueue.send("foo");
for (int i = 0; i < 10; i++) {
    sendQueue.send("foo" + i);
}

Starting up a QBit listener to listen to a queue

Starting up a listener to listen to a queue

final ArrayBlockingQueue<Person> personsABQ = new ArrayBlockingQueue<>(100);

personSendQueue.send(new Person("Geoff"));
personSendQueue.send(new Person("Rick"));
personSendQueue.flushSends();


personQueue.startListener(personsABQ::add); //Listen to JMS and put everything in an ArrayBlockingQueue

//This is just an example not a suggestion on API usage.

Strongly typed queue with JsonQueue.

If you do not want to work with String and would rather work with strongly typed objects, you can combine the JMS support with the JsonQueue.
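Before the complete examples below, here is a short sketch of that combination. It assumes a simple Person POJO (as used in the JMS test further down) and a JmsServiceBuilder configured as in the send/receive examples above.

/* Minimal sketch: wrap the String-based JMS queue so you send and receive Person
 * objects; JsonQueue handles the JSON serialization in both directions. */
final JmsServiceBuilder jmsBuilder = JmsServiceBuilder.newJmsServiceBuilder()
        .setDefaultDestination("foobarQueue");

final Queue<Person> personQueue = new JsonQueue<>(Person.class, new JmsTextQueue(jmsBuilder));

final SendQueue<Person> personSendQueue = personQueue.sendQueue();
final ReceiveQueue<Person> personReceiveQueue = personQueue.receiveQueue();

personSendQueue.send(new Person("Geoff"));
personSendQueue.flushSends();

final Person person = personReceiveQueue.pollWait();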

Complete example

importjava.util.concurrent.atomic.AtomicReference

importio.advantageous.qbit.admin.{MicroserviceConfig, ManagedServiceBuilder}
importio.advantageous.qbit.system.QBitSystemManager
importscala.util.Properties._
importorg.slf4j.LoggerFactory._


objectMainextendsApp{

varqBitSystemManagerAtomicReference:AtomicReference[QBitSystemManager] =null
System.getProperty(MicroserviceConfig.CONTEXT+"file", "/app/conf/service.json")



start(true)


defstart(wait:Boolean) {


qBitSystemManagerAtomicReference =newAtomicReference[QBitSystemManager]()
/**
* Logger to log messages.
*/
valmanagedServiceBuilder:ManagedServiceBuilder=ManagedServiceBuilder.managedServiceBuilder("queue-test-client")

vallogger= getLogger("queue.Main")

valbrokerURL= envOrElse("BROKER_URL", "http://localhost:8080")

logger.info("STARTING...")

valtestClient=newTestQueueClientService(
managedServiceBuilder.getStatServiceBuilder.buildStatsCollector,
brokerURL)


managedServiceBuilder.addEndpointService(testClient)


logger.info("PORT_WEB {}", envOrElse("PORT_WEB", "9090").toInt)
logger.info("BROKER_URL {}", brokerURL)

managedServiceBuilder.setPort(envOrElse("PORT_WEB", "9090").toInt)

managedServiceBuilder.getEndpointServerBuilder
.setUri("/api/v1")
.build
.startServer


configureAdmin(managedServiceBuilder)
logger.info("Queue Broker is open for eBusiness")
qBitSystemManagerAtomicReference.set(managedServiceBuilder.getSystemManager)
if (wait) {
managedServiceBuilder.getSystemManager.waitForShutdown()
}

}

/**
* Configure the admin (we will be able to get rid of this for the last release.
* @parammanagedServiceBuilder managedServiceBuilder
*/
privatedefconfigureAdmin(managedServiceBuilder: ManagedServiceBuilder) {
valportAdmin= envOrElse("PORT_ADMIN", "6666").toInt
managedServiceBuilder.getAdminBuilder.setPort(portAdmin)
managedServiceBuilder.getAdminBuilder.setMicroServiceName("calypso.queue.test")
managedServiceBuilder.getAdminBuilder.build.startServer
}



defshutdown() {
qBitSystemManagerAtomicReference.get().shutDown()
}

}

....

importjava.lang

importjava.util
importjava.util.Collections

importio.advantageous.qbit.annotation.RequestMethod._
importio.advantageous.boon.core.Str
importio.advantageous.qbit.annotation.{QueueCallbackType, QueueCallback, RequestMapping}
importio.advantageous.qbit.http.HTTP
importio.advantageous.qbit.jms.{JmsTextQueue, JmsServiceBuilder}
importio.advantageous.qbit.queue.{JsonQueue, SendQueue, Queue}
importio.advantageous.qbit.reactive.{ReactorBuilder, Reactor}
importio.advantageous.qbit.service.stats.StatsCollector
importio.advantageous.qbit.util.Timer
importorg.slf4j.{LoggerFactory, Logger}


@RequestMapping(Array("/queue-test-client"))
classTestQueueClientService(privatevalstatsCollector:StatsCollector,
privatevalbrokerURL:String,
privatevalreactor:Reactor=ReactorBuilder.reactorBuilder().build(),
privatevaltimer:Timer=Timer.timer()) {

privatevartime:Long=0L
privatevarlistenerURL=""
privatevallogger:Logger=LoggerFactory.getLogger(classOf[TestQueueClientService])
privatevarjmsQueue:Option[Queue[Record]] =None
privatevarjmsSendQueue:Option[SendQueue[Record]] =None

init()

definit():Unit= {
listenerURL =HTTP.get(brokerURL +"/api/v1/broker/ip/address-port")
vallistenerURLParts=Str.split(listenerURL.replace("\"", ""), ':')


valjmsServiceBuilder=JmsServiceBuilder.newJmsServiceBuilder()

jmsServiceBuilder.setHost("somehost")

if (listenerURLParts.length ==2) {
jmsServiceBuilder.setPort(listenerURLParts(1).toInt)
}

jmsQueue =Option(newJsonQueue[Record](classOf[Record], newJmsTextQueue(jmsServiceBuilder)))
jmsSendQueue =Option(jmsQueue.get.sendQueue())


logger.info(s"LISTENER URL $listenerURL")
}



/** Read description in annotation. */
@RequestMapping(value =Array("/send"), summary ="send", description ="send some records",
returnDescription =" just send some records", method =Array(PUT))
defsendRecords: lang.Boolean= {

if (jmsSendQueue.isDefined) {
for (i <-1 to 100) {
jmsSendQueue.get.send(newRecord("hi mom"+ i))
}
} else {
init()
}
true
}


/** Read description in annotation. */
@RequestMapping(value =Array("/receive"), summary ="receive records", description ="receive some records",
returnDescription =" just receive some records", method =Array(PUT))
defreceiveRecords: util.List[Record] = {

if (jmsQueue.isDefined) {
valreceiveQueue= jmsQueue.get.receiveQueue()

varrecord:Record= receiveQueue.poll()

vallist=newutil.ArrayList[Record]()

if (record !=null)
do {
list.add(record)
record = receiveQueue.poll()
} while (record !=null)
list
} else {
init()
Collections.emptyList()
}
}

/** Time service. */
@RequestMapping(value =Array("/time"), summary ="time", description ="Used to time service to see if it is up",
returnDescription ="just returns current time")
defgetTime= time

/** Build number increment this on every unique build to make sure the latest is in Orchard. */
@RequestMapping(value =Array("/build"), summary ="build number",
description ="Used to make sure we have the right version deployed", returnDescription ="returns the build number")
defbuildNumber= {
"1-(10-12-2015)"
}

/** Process Reactor stuff. */
@QueueCallback(Array(QueueCallbackType.LIMIT, QueueCallbackType.EMPTY))
defprocess () {
reactor.process()
time = timer.time
}


/** Ping service. */
@RequestMapping(value =Array("/ping"), summary ="ping", description ="Used to ping service to see if it is up",
returnDescription ="just returns true")
defping: lang.Boolean= {
true
}

}

Complete example 2

/*
* Copyright (c) 2015. Rick Hightower, Geoff Chandler
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* QBit - The Microservice lib for Java : JSON, WebSocket, REST. Be The Web!
*/

packageio.advantageous.qbit.jms.example.events;

importio.advantageous.boon.core.Sys;
importio.advantageous.qbit.QBit;
importio.advantageous.qbit.annotation.EventChannel;
importio.advantageous.qbit.annotation.OnEvent;
importio.advantageous.qbit.annotation.QueueCallback;
importio.advantageous.qbit.annotation.QueueCallbackType;
importio.advantageous.qbit.concurrent.PeriodicScheduler;
importio.advantageous.qbit.events.EventBusProxyCreator;
importio.advantageous.qbit.events.EventManager;
importio.advantageous.qbit.events.EventManagerBuilder;
importio.advantageous.qbit.events.spi.EventConnector;
importio.advantageous.qbit.events.spi.EventTransferObject;
importio.advantageous.qbit.jms.JmsServiceBuilder;
importio.advantageous.qbit.jms.JmsTextQueue;
importio.advantageous.qbit.queue.*;
importio.advantageous.qbit.service.ServiceQueue;
importio.advantageous.qbit.util.PortUtils;
importorg.apache.activemq.broker.BrokerService;

importjavax.jms.Session;
importjava.util.concurrent.TimeUnit;

import staticio.advantageous.qbit.service.ServiceBuilder.serviceBuilder;
import staticio.advantageous.qbit.service.ServiceProxyUtils.flushServiceProxy;

/**
* EmployeeEventExampleUsingChannelsToSendEvents
* created by rhightower on 2/11/15.
*/
@SuppressWarnings("ALL")
publicclassEmployeeEventExampleUsingChannelsToSendEventsWithJMS {


publicstaticfinalStringNEW_HIRE_CHANNEL="com.mycompnay.employee.new";




publicstaticvoidmain(String... args) throwsException {


finalBrokerService broker; //JMS Broker to make this a self contained example.
finalint port; //port to bind to JMS Broker to


/* ******************************************************************************/
/* START JMS BROKER. ************************************************************/
/* Start up JMS Broker. */
port =PortUtils.findOpenPortStartAt(4000);
broker=newBrokerService();
broker.addConnector("tcp://localhost:"+port);
broker.start();

Sys.sleep(5_000);


/* ******************************************************************************/
/* START JMS CLIENTS FOR SERVER A AND B *******************************************/
/* Create a JMS Builder to create JMS Queues. */
finalJmsServiceBuilder jmsBuilder =JmsServiceBuilder.newJmsServiceBuilder().setPort(port)
.setDefaultDestination(NEW_HIRE_CHANNEL).setAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);


/* JMS client for server A. */
finalJsonQueue<Employee> employeeJsonQueueServerA =
newJsonQueue<>(Employee.class, newJmsTextQueue(jmsBuilder));


/* JMS client for server B. */
finalJsonQueue<Employee> employeeJsonQueueServerB =
newJsonQueue<>(Employee.class, newJmsTextQueue(jmsBuilder));

/* Send Queue to send messages to JMS broker. */
finalSendQueue<Employee> sendQueueA = employeeJsonQueueServerA.sendQueue();
Sys.sleep(1_000);




/* ReceiveQueueB Queue B to receive messages from JMS broker. */
finalReceiveQueue<Employee> receiveQueueB = employeeJsonQueueServerB.receiveQueue();
Sys.sleep(1_000);




/* ******************************************************************************/
/* START EVENT BUS A ************************************************************/
/* Create you own private event bus for Server A. */
finalEventManager privateEventBusServerAInternal =EventManagerBuilder.eventManagerBuilder()
.setEventConnector(newEventConnector() {
@Override
publicvoidforwardEvent(EventTransferObject<Object>event) {

if (event.channel().equals(NEW_HIRE_CHANNEL)) {
System.out.println(event);
finalObject body = event.body();
finalObject[] bodyArray = ((Object[]) body);
finalEmployee employee = (Employee) bodyArray[0];
System.out.println(employee);
sendQueueA.sendAndFlush(employee);
}
}
})
.setName("serverAEventBus").build();

/* Create a service queue for this event bus. */
finalServiceQueue privateEventBusServiceQueueA = serviceBuilder()
.setServiceObject(privateEventBusServerAInternal)
.setInvokeDynamic(false).build();

finalEventManager privateEventBusServerA = privateEventBusServiceQueueA.createProxyWithAutoFlush(EventManager.class,
50, TimeUnit.MILLISECONDS);



/* Create you own private event bus for Server B. */
finalEventManager privateEventBusServerBInternal =EventManagerBuilder.eventManagerBuilder()
.setEventConnector(newEventConnector() {
@Override
publicvoidforwardEvent(EventTransferObject<Object>event) {
System.out.println(event);
finalObject body = event.body();
finalObject[] bodyArray = ((Object[]) body);
finalEmployee employee = (Employee) bodyArray[0];
System.out.println(employee);
sendQueueA.sendAndFlush(employee);
}
})
.setName("serverBEventBus").build();

/* Create a service queue for this event bus. */
finalServiceQueue privateEventBusServiceQueueB = serviceBuilder()
.setServiceObject(privateEventBusServerBInternal)
.setInvokeDynamic(false).build();


finalEventManager privateEventBusServerB = privateEventBusServiceQueueB.createProxyWithAutoFlush(EventManager.class,
50, TimeUnit.MILLISECONDS);






finalEventBusProxyCreator eventBusProxyCreator =
QBit.factory().eventBusProxyCreator();

finalEmployeeEventManager employeeEventManagerA =
eventBusProxyCreator.createProxy(privateEventBusServerA, EmployeeEventManager.class);

finalEmployeeEventManager employeeEventManagerB =
eventBusProxyCreator.createProxy(privateEventBusServerB, EmployeeEventManager.class);

/* ******************************************************************************/
/* LISTEN TO JMS CLIENT B and FORWARD to Event bus. **********************/
/* Listen to JMS client and push to B event bus ****************************/
employeeJsonQueueServerB.startListener(newReceiveQueueListener<Employee>(){
@Override
publicvoidreceive(finalEmployeeemployee) {
System.out.println("HERE "+ employee);
employeeEventManagerB.sendNewEmployee(employee);
System.out.println("LEFT "+ employee);

}
});


finalSalaryChangedChannel salaryChangedChannel = eventBusProxyCreator.createProxy(privateEventBusServerA, SalaryChangedChannel.class);

/*
Create your EmployeeHiringService but this time pass the private event bus.
Note you could easily use Spring or Guice for this wiring.
*/
finalEmployeeHiringService employeeHiring =newEmployeeHiringService(employeeEventManagerA,
salaryChangedChannel); //Runs on Server A



/* Now create your other service POJOs which have no compile time dependencies on QBit. */
finalPayrollService payroll =newPayrollService(); //Runs on Server A
finalBenefitsService benefits =newBenefitsService();//Runs on Server A

finalVolunteerService volunteering =newVolunteerService();//Runs on Server B


/** Employee hiring service. A. */
ServiceQueue employeeHiringServiceQueue = serviceBuilder()
.setServiceObject(employeeHiring)
.setInvokeDynamic(false).build();

/** Payroll service A. */
ServiceQueue payrollServiceQueue = serviceBuilder()
.setServiceObject(payroll)
.setInvokeDynamic(false).build();

/** Employee Benefits service. A. */
ServiceQueue employeeBenefitsServiceQueue = serviceBuilder()
.setServiceObject(benefits)
.setInvokeDynamic(false).build();

/** Community outreach program. B. */
ServiceQueue volunteeringServiceQueue = serviceBuilder()
.setServiceObject(volunteering)
.setInvokeDynamic(false).build();


/* Now wire in the event bus so it can fire events into the service queues.
* For ServerA. */
privateEventBusServerA.joinService(payrollServiceQueue);
privateEventBusServerA.joinService(employeeBenefitsServiceQueue);


/* Now wire in event B bus. */
privateEventBusServerB.joinService(volunteeringServiceQueue);


/* Start Server A bus. */
privateEventBusServiceQueueA.start();


/* Start Server B bus. */
privateEventBusServiceQueueB.start();


employeeHiringServiceQueue.start();
volunteeringServiceQueue.start();
payrollServiceQueue.start();
employeeBenefitsServiceQueue.start();


/** Now create the service proxy like before. */
EmployeeHiringServiceClient employeeHiringServiceClientProxy =
employeeHiringServiceQueue.createProxy(EmployeeHiringServiceClient.class);

/** Call the hireEmployee method which triggers the other events. */
employeeHiringServiceClientProxy.hireEmployee(newEmployee("Lucas", 1));

flushServiceProxy(employeeHiringServiceClientProxy);

Sys.sleep(5_000);

}

interfaceEmployeeHiringServiceClient {
voidhireEmployee(finalEmployeeemployee);

}


@EventChannel
interfaceSalaryChangedChannel {


voidsalaryChanged(Employeeemployee, intnewSalary);

}


interfaceEmployeeEventManager {

@EventChannel(NEW_HIRE_CHANNEL)
voidsendNewEmployee(Employeeemployee);


}

publicstaticclassEmployee {
finalString firstName;
finalint employeeId;

publicEmployee(StringfirstName, intemployeeId) {
this.firstName = firstName;
this.employeeId = employeeId;
}

publicStringgetFirstName() {
return firstName;
}

publicintgetEmployeeId() {
return employeeId;
}

@Override
publicStringtoString() {
return"Employee{"+
"firstName='"+ firstName +'\''+
", employeeId="+ employeeId +
'}';
}
}

publicstaticclassEmployeeHiringService {

finalEmployeeEventManager eventManager;
finalSalaryChangedChannel salaryChangedChannel;

publicEmployeeHiringService(finalEmployeeEventManageremployeeEventManager,
finalSalaryChangedChannelsalaryChangedChannel) {
this.eventManager = employeeEventManager;
this.salaryChangedChannel = salaryChangedChannel;
}


@QueueCallback(QueueCallbackType.EMPTY)
privatevoidnoMoreRequests() {


flushServiceProxy(salaryChangedChannel);
flushServiceProxy(eventManager);
}


@QueueCallback(QueueCallbackType.LIMIT)
privatevoidhitLimitOfRequests() {

flushServiceProxy(salaryChangedChannel);
flushServiceProxy(eventManager);
}


publicvoidhireEmployee(finalEmployeeemployee) {

int salary =100;
System.out.printf("Hired employee %s\n", employee);

//Does stuff to hire employee


eventManager.sendNewEmployee(employee);
salaryChangedChannel.salaryChanged(employee, salary);


}

}

publicstaticclassBenefitsService {

@OnEvent(NEW_HIRE_CHANNEL)
publicvoidenroll(finalEmployeeemployee) {

System.out.printf("Employee enrolled into benefits system employee %s %d\n",
employee.getFirstName(), employee.getEmployeeId());

}

}

publicstaticclassVolunteerService {

@OnEvent(NEW_HIRE_CHANNEL)
publicvoidinvite(finalEmployeeemployee) {

System.out.printf("Employee will be invited to the community outreach program %s %d\n",
employee.getFirstName(), employee.getEmployeeId());

}

}

publicstaticclassPayrollServiceimplementsSalaryChangedChannel {

@Override
publicvoidsalaryChanged(Employeeemployee, intnewSalary) {
System.out.printf("DIRECT FROM CHANNEL SalaryChangedChannel Employee added to payroll %s %d %d\n",
employee.getFirstName(), employee.getEmployeeId(), newSalary);

}
}
}

Complete Example 3

packageio.advantageous.qbit.jms;

importio.advantageous.boon.core.Lists;
importio.advantageous.boon.core.Sys;
importio.advantageous.qbit.QBit;
importio.advantageous.qbit.queue.*;
importio.advantageous.qbit.util.PortUtils;
importorg.apache.activemq.broker.BrokerService;
importorg.junit.After;
importorg.junit.Before;
importorg.junit.Test;

importjavax.jms.Connection;
importjavax.jms.ConnectionFactory;
importjavax.jms.JMSException;
importjavax.jms.Session;
importjava.util.Collections;
importjava.util.List;
importjava.util.concurrent.ArrayBlockingQueue;
importjava.util.concurrent.TimeUnit;

import staticorg.junit.Assert.assertEquals;
import staticorg.junit.Assert.assertTrue;

/**
*
* Created 10/8/15.
*/
publicclassJmsTest {

privateQueue<Person> personQueue;
privateSendQueue<Person> personSendQueue;
privateReceiveQueue<Person> personReceiveQueue;
privateBrokerService broker;
privateint port;

@Before
publicvoidsetUp() throwsException {

port =PortUtils.findOpenPortStartAt(4000);
broker=newBrokerService();
broker.addConnector("tcp://localhost:"+ port);
broker.start();

finalJmsServiceBuilder jmsBuilder =JmsServiceBuilder.newJmsServiceBuilder()
.setDefaultDestination("foobarQueue").setAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE).setPort(port);

finalQueue<String> textQueue =newJmsTextQueue(jmsBuilder);

personQueue =newJsonQueue<>(Person.class, textQueue);
personSendQueue = personQueue.sendQueue();
personReceiveQueue = personQueue.receiveQueue();



personSendQueue.shouldBatch();
personSendQueue.name();
personSendQueue.size();
personQueue.name();
personQueue.size();
}

@Test
publicvoidtestSendConsume() throwsException {


personSendQueue.send(newPerson("Geoff"));
personSendQueue.send(newPerson("Rick"));
personSendQueue.flushSends();

finalPerson geoff = personReceiveQueue.pollWait();
finalPerson rick = personReceiveQueue.pollWait();

assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));


assertEquals(true, personQueue.started());

}


@Test
publicvoidtestSendConsume2() throwsException {

personSendQueue.sendAndFlush(newPerson("Geoff"));
personSendQueue.sendAndFlush(newPerson("Rick"));

finalPerson geoff = personReceiveQueue.pollWait();
Sys.sleep(100);
finalPerson rick = personReceiveQueue.poll();


assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));
}


@Test
publicvoidtestSendConsume3() throwsException {

personSendQueue = personQueue.sendQueueWithAutoFlush(10, TimeUnit.MILLISECONDS);

personSendQueue.sendMany(newPerson("Geoff"), newPerson("Rick"));
finalPerson geoff = personReceiveQueue.take();
finalPerson rick = personReceiveQueue.take();

assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));

}


@Test
publicvoidtestSendConsume4() throwsException {
personSendQueue = personQueue.sendQueueWithAutoFlush(QBit.factory().periodicScheduler(),
10, TimeUnit.MILLISECONDS);

personSendQueue.sendBatch(Lists.list(newPerson("Geoff"), newPerson("Rick")));
finalPerson geoff = personReceiveQueue.take();
finalPerson rick = personReceiveQueue.take();

assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));

}

@Test
publicvoidtestSendConsume5() throwsException {
finalList<Person> list =Lists.list(newPerson("Geoff"), newPerson("Rick"));

Iterable<Person> persons = list::iterator;


personSendQueue.sendBatch(persons);
personSendQueue.flushSends();

finalPerson geoff = personReceiveQueue.pollWait();
finalPerson rick = personReceiveQueue.pollWait();


assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));

}

@Test
publicvoidtestSendConsume6() throwsException {

personSendQueue.send(newPerson("Geoff"));
personSendQueue.send(newPerson("Rick"));

personSendQueue.flushSends();

Sys.sleep(2000);
finalList<Person> personsBatch = (List<Person>) personReceiveQueue.readBatch();

}


@Test
publicvoidtestSendConsume7() throwsException {
finalList<Person> list =Lists.list(newPerson("Geoff"), newPerson("Rick"));
finalIterable<Person> persons = list::iterator;


personSendQueue.sendBatch(persons);
personSendQueue.flushSends();

finalList<Person> personsBatch = (List<Person>) personReceiveQueue.readBatch(5);




}


@Test
publicvoidtestSendConsume8() throwsException {

finalArrayBlockingQueue<Person> personsABQ =newArrayBlockingQueue<>(100);

personSendQueue.send(newPerson("Geoff"));
personSendQueue.send(newPerson("Rick"));
personSendQueue.flushSends();


personQueue.startListener(personsABQ::add);

Sys.sleep(1000);
int count =0;

while (personsABQ.size() <2) {
Sys.sleep(100);
count++;
if (count >100) break;
}


Sys.sleep(1000);
assertEquals(2, personsABQ.size());
finalPerson geoff = personsABQ.poll();
finalPerson rick = personsABQ.poll();


assertTrue(geoff.name.equals("Rick") || geoff.name.equals("Geoff"));
assertTrue(rick.name.equals("Rick") || rick.name.equals("Geoff"));

}


@Test
publicvoidbuilder() throwsException {
JmsServiceBuilder jmsServiceBuilder =JmsServiceBuilder.newJmsServiceBuilder();
jmsServiceBuilder.setJndiSettings(Collections.emptyMap());
jmsServiceBuilder.getJndiSettings();
jmsServiceBuilder.setConnectionFactory(newConnectionFactory() {
@Override
publicConnectioncreateConnection() throwsJMSException {
returnnull;
}

@Override
publicConnectioncreateConnection(StringuserName, Stringpassword) throwsJMSException {
returnnull;
}
});
jmsServiceBuilder.getConnectionFactory();

jmsServiceBuilder.setConnectionFactoryName("Foo");
jmsServiceBuilder.getConnectionFactoryName();
jmsServiceBuilder.setContext(null);
jmsServiceBuilder.getContext();
jmsServiceBuilder.setDefaultDestination("foo");
jmsServiceBuilder.getDefaultDestination();
jmsServiceBuilder.setAcknowledgeMode(5);
jmsServiceBuilder.getAcknowledgeMode();
jmsServiceBuilder.setDefaultTimeout(1);
jmsServiceBuilder.getDefaultTimeout();
jmsServiceBuilder.setHost("foo");
jmsServiceBuilder.getHost();
jmsServiceBuilder.setUserName("rick");
jmsServiceBuilder.getUserName();
jmsServiceBuilder.setPassword("foo");
jmsServiceBuilder.getPassword();
jmsServiceBuilder.setProviderURL("");
jmsServiceBuilder.getProviderURL();
jmsServiceBuilder.setProviderURLPattern("");
jmsServiceBuilder.getProviderURLPattern();
jmsServiceBuilder.setConnectionSupplier(null);
jmsServiceBuilder.getConnectionSupplier();
jmsServiceBuilder.setStartConnection(true);
jmsServiceBuilder.isStartConnection();
jmsServiceBuilder.setTransacted(true);
jmsServiceBuilder.isTransacted();
jmsServiceBuilder.setJndiSettings(null);
jmsServiceBuilder.addJndiSetting("foo","bar");



jmsServiceBuilder =JmsServiceBuilder.newJmsServiceBuilder().setPort(port);

jmsServiceBuilder.build().start();
}

@After
publicvoidtearDown() throwsException {

personQueue.stop();
personReceiveQueue.stop();
personSendQueue.stop();

try {
broker.stop();
} catch (Exception ex) {
ex.printStackTrace();
}

broker =null;
Sys.sleep(1000);
}


privatestaticclassPerson {
finalString name;

privatePerson(Stringname) {
this.name = name;
}
}
}


We have been busy over at QBit Java Microservices central


1) Support for non-JSON bodies from REST end-points

Added support for String and byte[] to be passed without JSON parsing.
Issue and docs added to the wiki main page.
    @RequestMapping(value = "/body/bytes", method = RequestMethod.POST)
    public boolean bodyPostBytes(byte[] body) {
        String string = new String(body, StandardCharsets.UTF_8);
        return string.equals("foo");
    }

    @RequestMapping(value = "/body/string", method = RequestMethod.POST)
    public boolean bodyPostString(String body) {
        return body.equals("foo");
    }
If the Content-Type of the request is null or is application/json then we will parse the body as JSON. If the Content-Type is set and is not application/json then we will pass the raw String or raw bytes. This allows you to handle non-JSON content from REST. It does not do any auto-conversion. You will get the raw bytes or raw UTF-8 string.
    @Test
    public void testNoJSONParseWithBytes() {

        final HttpTextResponse httpResponse = httpServerSimulator.sendRequest(
                httpRequestBuilder.setUri("/es/body/bytes")
                        .setMethodPost().setContentType("foo")
                        .setBody("foo")
                        .build()
        );
        assertEquals(200, httpResponse.code());
        assertEquals("true", httpResponse.body());
    }

    @Test
    public void testNoJSONParseWithString() {

        final HttpTextResponse httpResponse = httpServerSimulator.sendRequest(
                httpRequestBuilder.setUri("/es/body/string")
                        .setMethodPost().setContentType("foo")
                        .setBody("foo")
                        .build()
        );

        assertEquals(200, httpResponse.code());
        assertEquals("true", httpResponse.body());
    }

2) Added HttpProxy support

You can proxy backend services through a single endpoint. This also lets you run actions before a request is sent, and decline to forward a request based on a Predicate.
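As a rough conceptual sketch (plain Java, not the actual QBit proxy API), the idea is that the proxy consults a Predicate per request to decide whether to forward at all, and runs an action before forwarding:

import java.util.function.Consumer;
import java.util.function.Predicate;

// Conceptual sketch only; not the QBit HttpProxy API.
// A proxy checks a Predicate per request and runs an action before forwarding.
public class ProxyConceptSketch {

    static void handle(final String uri,
                       final Predicate<String> shouldForward,
                       final Consumer<String> beforeForward,
                       final Consumer<String> forward) {
        if (!shouldForward.test(uri)) {
            System.out.println("Not forwarding " + uri);
            return;
        }
        beforeForward.accept(uri);   // e.g., add headers, audit, rate limit
        forward.accept(uri);         // send the request on to the backend service
    }

    public static void main(final String... args) {
        final Predicate<String> notAdmin = uri -> !uri.startsWith("/admin");
        handle("/orders/42", notAdmin,
                uri -> System.out.println("before: " + uri),
                uri -> System.out.println("forwarded: " + uri));
        handle("/admin/metrics", notAdmin,
                uri -> System.out.println("before: " + uri),
                uri -> System.out.println("forwarded: " + uri));
    }
}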

3) Added ability to return different response codes for success

By default QBit sends a 200 (OK) for a non-void call (a call that has a return value or a Callback). If the REST operation has neither a return value nor a callback, then QBit sends a 202 (Accepted). There may be times when you want to send a 201 (Created) or some other code that is not an Exception. You can do that by setting code on @RequestMapping. By default the code is -1, which means use the default behavior.
Issue and docs added to the wiki main page.
    @RequestMapping(value = "/helloj7", code = 221)
    public void helloJSend7(Callback<JSendResponse<List<String>>> callback) {
        callback.returnThis(JSendResponseBuilder.jSendResponseBuilder(Lists.list(
                "hello " + System.currentTimeMillis())).build());
    }

4) Working with non-JSON responses

Issue and docs added to the wiki main page.
You do not have to return JSON from REST calls. You can return any binary or text content.

4.1) Returning non JSON from REST call

    @RequestMapping(method = RequestMethod.GET)
    public void ping2(Callback<HttpTextResponse> callback) {

        callback.returnThis(HttpResponseBuilder.httpResponseBuilder()
                .setBody("hello mom").setContentType("mom")
                .setCode(777)
                .buildTextResponse());
    }

4.2) Returning binary from REST call

    @RequestMapping(method = RequestMethod.GET)
    public void ping2(Callback<HttpBinaryResponse> callback) {

        callback.returnThis(HttpResponseBuilder.httpResponseBuilder()
                .setBody("hello mom").setContentType("mom")
                .setCode(777)
                .buildBinaryResponse());
    }

5) Create websocket service client that is ServiceDiscovery aware

ServiceDiscovery-aware websocket service client

final Client client = clientBuilder
        .setServiceDiscovery(serviceDiscovery, "echo")
        .setUri("/echo").setProtocolBatchSize(20).build()
        .startClient();

final EchoAsync echoClient = client.createProxy(EchoAsync.class, "echo");
Currently the clientBuilder will load all service endpoints registered under the service name and randomly pick one.
In the future we could round-robin or shard calls to the WebSocket service and/or provide automatic failover if the connection is closed. We already do this for the event bus that uses service discovery, but it is not yet baked into the WebSocket-based client stubs.

For comparison here is a non-ServiceDiscovery version.

final ClientBuilder clientBuilder = ClientBuilder.clientBuilder();
final Client client = clientBuilder.setHost("localhost")
        .setPort(8080).setUri("/echo")
        .build().startClient();

final EchoAsync echoClient = client.createProxy(EchoAsync.class, "echo");
Recall that ServiceDiscovery implementations include Consul-based discovery, watching JSON files on disk, and DNS SRV records. It is also easy to write your own service discovery and plug it into QBit, as the sketch below illustrates.
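To give a feel for what a custom discovery source involves (a standalone sketch, not the actual QBit ServiceDiscovery interface), a custom implementation essentially has to answer "which endpoints currently serve service X" from whatever backing store you have:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Standalone sketch of the idea behind a custom discovery source.
// This is NOT the QBit ServiceDiscovery interface; it only shows the shape of the problem:
// map a service name to a list of live endpoints, refreshed from some backing store.
public class StaticDiscoverySketch {

    private final Map<String, List<String>> endpointsByService = new ConcurrentHashMap<>();

    // A real implementation would watch a JSON file, query Consul, or resolve DNS SRV records.
    public void refresh() {
        endpointsByService.put("echo", Arrays.asList("10.0.0.5:8080", "10.0.0.6:8080"));
    }

    public List<String> lookup(final String serviceName) {
        final List<String> endpoints = endpointsByService.get(serviceName);
        return endpoints == null ? Collections.emptyList() : endpoints;
    }

    public static void main(final String... args) {
        final StaticDiscoverySketch discovery = new StaticDiscoverySketch();
        discovery.refresh();
        System.out.println(discovery.lookup("echo")); // [10.0.0.5:8080, 10.0.0.6:8080]
    }
}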

6) JSend support for serialization and swagger

We started to add JSend support. JSend is supported when marshaling JSendResponse objects and via our Swagger support.
@RequestMapping("/hw")
public class HelloWorldJSend {

    public static class Hello {
        final String hello;

        public Hello(String hello) {
            this.hello = hello;
        }
    }

    @RequestMapping("/hello")
    public String hello() {
        return "hello " + System.currentTimeMillis();
    }

    @RequestMapping("/helloj")
    public JSendResponse<String> helloJSend() {
        return JSendResponseBuilder.jSendResponseBuilder("hello " + System.currentTimeMillis()).build();
    }

    @RequestMapping("/helloj2")
    public JSendResponse<Hello> helloJSend2() {
        return JSendResponseBuilder.jSendResponseBuilder(new Hello("hello " + System.currentTimeMillis())).build();
    }

    @RequestMapping("/helloj3")
    public JSendResponse<List<String>> helloJSend3() {
        return JSendResponseBuilder.jSendResponseBuilder(Lists.list("hello " + System.currentTimeMillis())).build();
    }

    @RequestMapping("/helloj4")
    public JSendResponse<List<Hello>> helloJSend4() {
        return JSendResponseBuilder.jSendResponseBuilder(Lists.list(new Hello("hello " + System.currentTimeMillis()))).build();
    }

    @RequestMapping("/helloj5")
    public void helloJSend5(Callback<JSendResponse<List<Hello>>> callback) {
        callback.returnThis(JSendResponseBuilder.jSendResponseBuilder(Lists.list(new Hello("hello " + System.currentTimeMillis()))).build());
    }

    @RequestMapping("/helloj6")
    public void helloJSend6(Callback<JSendResponse<List<String>>> callback) {
        callback.returnThis(JSendResponseBuilder.jSendResponseBuilder(Lists.list(
                "hello " + System.currentTimeMillis())).build());
    }

    @RequestMapping(value = "/helloj7", code = 221)
    public void helloJSend7(Callback<JSendResponse<List<String>>> callback) {
        callback.returnThis(JSendResponseBuilder.jSendResponseBuilder(Lists.list(
                "hello " + System.currentTimeMillis())).build());
    }
Hitting the above
# String response
curl http://localhost:8080/hw/hello | jq .
"hello 1446088919561"

# JSend wrapping a string
$ curl http://localhost:8080/hw/helloj | jq .
{
"data": "hello 1446088988074",
"status": "success"
}

# JSend wrapping a domain object Hello
$ curl http://localhost:8080/hw/helloj2 | jq .
{
"data": {
"hello": "hello 1446089041902"
},
"status": "success"
}

# JSend wrapping a list of domain objects
$ curl http://localhost:8080/hw/helloj5 | jq .
{
"data": [
{
"hello": "hello 1446089152089"
}
],
"status": "success"
}

Use jq to pretty-print the JSON returned by QBit.
In this example we set up the admin interface as well so we can query the generated Swagger metadata.

Starting up admin

    public static void main(final String... args) {
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder().setRootURI("/");

        /* Start the service. */
        managedServiceBuilder.addEndpointService(new HelloWorldJSend())
                .getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health end-points and meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Servers started");

        managedServiceBuilder.getSystemManager().waitForShutdown();
    }

Showing swagger support for JSend

$ curl http://localhost:7777/__admin/meta/ | jq .

{
"swagger":"2.0",
"info": {
"title":"application title goes here",
"description":"Description not set",
"contact": {
"name":"ContactName not set",
"url":"Contact URL not set",
"email":"no.contact.email@set.me.please.com"
},
"version":"0.1-NOT-SET",
"license": {
"name":"licenseName not set",
"url":"http://www.license.url.com/not/set/"
}
},
"host":"localhost:8080",
"basePath":"/",
"schemes": [
"http",
"https",
"wss",
"ws"
],
"consumes": [
"application/json"
],
"definitions": {
"jsend-array-String": {
"properties": {
"data": {
"type":"string"
},
"status": {
"type":"string",
"description":"Status of return, this can be 'success', 'fail' or 'error'"
}
},
"description":"jsend standard response"
},
"Hello": {
"properties": {
"hello": {
"type":"string"
}
}
},
"jsend-Hello": {
"properties": {
"data": {
"$ref":"#/definitions/Hello"
},
"status": {
"type":"string",
"description":"Status of return, this can be 'success', 'fail' or 'error'"
}
},
"description":"jsend standard response"
},
"jsend-String": {
"properties": {
"data": {
"type":"string"
},
"status": {
"type":"string",
"description":"Status of return, this can be 'success', 'fail' or 'error'"
}
},
"description":"jsend standard response"
},
"jsend-array-Hello": {
"properties": {
"data": {
"type":"array",
"items": {
"$ref":"#/definitions/Hello"
}
},
"status": {
"type":"string",
"description":"Status of return, this can be 'success', 'fail' or 'error'"
}
},
"description":"jsend standard response"
}
},
"produces": [
"application/json"
],
"paths": {
"/hw/helloj7": {
"get": {
"operationId":"helloJSend7",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"221": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-array-String"
}
}
}
}
},
"/hw/helloj6": {
"get": {
"operationId":"helloJSend6",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-array-String"
}
}
}
}
},
"/hw/helloj5": {
"get": {
"operationId":"helloJSend5",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-array-Hello"
}
}
}
}
},
"/hw/helloj4": {
"get": {
"operationId":"helloJSend4",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-array-Hello"
}
}
}
}
},
"/hw/helloj3": {
"get": {
"operationId":"helloJSend3",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-array-String"
}
}
}
}
},
"/hw/helloj2": {
"get": {
"operationId":"helloJSend2",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-Hello"
}
}
}
}
},
"/hw/helloj": {
"get": {
"operationId":"helloJSend",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"$ref":"#/definitions/jsend-String"
}
}
}
}
},
"/hw/hello": {
"get": {
"operationId":"hello",
"summary":"no summary",
"description":"no description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description":"no return description",
"schema": {
"type":"string"
}
}
}
}
}
}
}
To learn more about Swagger, see the Swagger documentation.
More work is needed to support JSend errors and failures.

7) Added kv store / cache support as part of core

final KeyValueStoreService<Todo> todoKVStoreInternal = JsonKeyValueStoreServiceBuilder
        .jsonKeyValueStoreServiceBuilder()
        .setLowLevelKeyValueStoreService(keyValueStore)
        .buildKeyValueStore(Todo.class);

todoKVStore.putWithConfirmation(callback,
        "testPutWithConfirmationWrapped", new Todo(value));

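Reading a value back is asynchronous as well. As a sketch only (the exact get signature here is an assumption, not confirmed API), the stored object comes back through a callback:

// Sketch only: the get(...) signature is an assumption; check the KeyValueStoreService API.
// The stored Todo comes back asynchronously, typically as an Optional.
todoKVStore.get(todoOptional -> {
    todoOptional.ifPresent(todo -> System.out.println("Loaded todo from the KV store"));
}, "testPutWithConfirmationWrapped");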
8) Custom exception error codes

You can use HttpStatusCodeException to send custom HTTP error codes and to wrap exceptions.

Custom http error code exceptions

    @RequestMapping("/echo3")
    public String echoException() {
        throw new HttpStatusCodeException(700, "Ouch!");
    }

    @RequestMapping("/echo4")
    public void echoException2(final Callback<String> callback) {
        callback.onError(HttpStatusCodeException.httpError(900, "Ouch!!"));
    }

    @RequestMapping("/echo5")
    public void echoException3(final Callback<String> callback) {
        try {
            throw new IllegalStateException("Shoot!!");
        } catch (Exception ex) {
            callback.onError(HttpStatusCodeException.httpError(666, ex.getMessage(), ex));
        }
    }

10) Improved speed for HTTP REST services that have only a few clients

QBit was originally written for high-end, high-traffic, high-volume services. It needed to be tuned to work well with just a few clients as well as at the high end.

11) Bug fixes

There were quite a few potential issues that ended up not being issues, but we wrote better test cases to prove (if only to ourselves) that these were not problems.

12) Created JMS (and Kafka) support so JMS looks like a regular QBit queue

QBit, Microservices Lib, MDC - Logging Mapped Diagnostic Context, RequestContext and HttpContext

QBit, microservices lib, has implemented the Logging Mapped Diagnostic Context (MDC) to make debugging microservices easier. Prior to MDC integration it was difficult to track request information in logs. Now that we have added MDC integration, it is easy to track the request URI and more, even across the thread boundaries of internal services.
We added MDC support so the requests coming into a microservice you write can be logged with details like:
  • UserID
  • First name, last name
  • Request URI
  • Remote address
  • Browser / user-agent
For microservice-to-microservice calls (service to service) we can track the originating request URI and the source IP of the calling system. You should also be able to track (or set) any extra headers that make sense to log in the context of the application.
This allows you to customize the pattern used to log, and allows items to be properly pushed to Splunk, GreyLog, or LogStash. The MDC fields can then be used for real-time operational intelligence with tools like Splunk, LogStash and GreyLog: you can search and analyze the log streams from your microservices to learn how these services are used, which is critical for debugging. Distributed logging and distributed log analysis are more or less a given in a microservices architecture.
Modern logging systems are designed to audit and debug distributed applications. QBit, being a reactive microservice lib, allows the creation of in-proc services and remote services, i.e., distributed applications. The in-proc services run in one or more threads using an actor/service queue model. QBit MDC support crosses the Thread/Queue/Actor message boundary to facilitate debugging and provide a complete async call stack, which is essential for debugging a message-passing system like QBit that relies on ServiceQueues.
Since QBit, the microservices lib, focuses on creating distributed systems, you have to deal with multiple clients simultaneously when dealing with logging and auditing. The concept of MDC (Mapped Diagnostic Contexts) was covered in the chapter "Logging Diagnostic Messages" in Pattern Languages of Program Design 3, by Neil Harrison (Addison-Wesley, 1997). QBit uses the SLF4J API for Mapped Diagnostic Contexts (MDC).

Examples of using MDC

QBit provides the class ManagedServiceBuilder, which is a utility class for when you are running in a PaaS like Heroku or in a Docker container. It also lets you share stats, health and system manager setup. You can also do things like enable stats (statsD support is baked in) and install service discovery / distributed health (Consul, SkyDNS), etc.
To support MDC, ManagedServiceBuilder now provides the method enableLoggingMappedDiagnosticContext(). This installs the correct QBit interceptors to provide MDC logging and to create an async service call stack.
Let's demonstrate how this works with a simple example RestService.

RestService example to demonstrate MDC

@RequestMapping("rest")
public class RestService {

    private final Logger logger = LoggerFactory.getLogger(RestService.class);

    ...

    @RequestMapping("mdc")
    public void mdc(final Callback<Map<String, String>> callback) {
        logger.info("CALLED MDC");
        callback.returnThis(MDC.getCopyOfContextMap());
    }
    ...
To make the MDC fields show up in the log output, we add a logback.xml file to the Java resources.

/resources/logback.xml

<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n-
                %X{requestRemoteAddress} - %X{requestUri} - %X{requestHttpMethod}%n</pattern>
        </encoder>
    </appender>

    <logger name="io.advantageous.qbit" level="DEBUG"/>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
Notice the syntax %X{requestRemoteAddress}; this lets the pattern access the current HTTP request's remote address.
When constructing this service, you have to call enableLoggingMappedDiagnosticContext.

Turning on MDC support for ManagedServiceBuilder

final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();
managedServiceBuilder.setRootURI("/");
managedServiceBuilder.enableLoggingMappedDiagnosticContext();

final RestService restService = new RestService();

managedServiceBuilder.addEndpointService(restService);

managedServiceBuilder.getEndpointServerBuilder().build().startServer();
At this point you can access the service with curl as follows:

Accessing the service with curl to see mdc

 curl http://localhost:8080/rest/mdc | jq .
{
"requestRemoteAddress": "0:0:0:0:0:0:0:1:63772",
"requestUri": "/rest/mdc",
"requestHttpMethod": "GET"
}

Output of log

18:57:00.516 [QueueListener|Send Queue  rest] INFO  i.a.qbit.example.mdc.RestService - CALLED MDC
- 0:0:0:0:0:0:0:1:63797 - /rest/mdc - GET
Notice that requestRemoteAddress, requestUri, and requestHttpMethod were output to the log.
Note that enableLoggingMappedDiagnosticContext allows you to pass in any number of header names, which will become part of the logging MDC and, if you are using Splunk, GreyLog or LogStash, become custom fields that you can parse. Once you use LogStash, Splunk or GreyLog with custom fields for headers, requests, etc. for debugging and log analysis, you will wonder how you ever managed without it.
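For example, a sketch based on the description above (the varargs form is assumed):

// Sketch: pass the header names you want copied into the MDC (varargs form assumed).
managedServiceBuilder.enableLoggingMappedDiagnosticContext("X-Correlation-Id", "User-Agent");

Tracked headers then show up in the logback pattern under the requestHeader. prefix, e.g. %X{requestHeader.X-Correlation-Id} (see the SetupMdcForHttpRequestInterceptor source later in this post).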
This is important and nice. But what if you have downstream services, i.e., a ServiceQueue service or a ServiceBundle service? These services run on other threads. Will QBit handle this case? Yes. Yes it will.

Downstream services example

To show QBit's ability to cross the thread chasm into other actor service queues, let's create some services.
In addition, let's show off RequestContext, which QBit uses to carry this context, by building up the service queue call stack and printing it.

Service to show capturing service call stack and MDC working N-levels deep

publicinterfaceInternalService {

voidgetCallStack(Callback<List<String>>listCallback);
}

....


publicclassInternalServiceImpl {


privatefinalLogger logger =LoggerFactory.getLogger(InternalServiceImpl.class);
privatefinalRequestContext requestContext;

publicInternalServiceImpl(finalRequestContextrequestContext) {
this.requestContext = requestContext;
}

publicList<String>getCallStack() {

logger.info("GET CallStack called");

finalOptional<MethodCall<Object>> currentMethodCall = requestContext.getMethodCall();

if (!currentMethodCall.isPresent()) {
logger.info("Method not found");
returnArrays.asList("MethodCall Not Found");
}

finalList<String> callStack =newArrayList<>();
MethodCall<Object> methodCall = currentMethodCall.get();
callStack.add("Service Call("+ methodCall.objectName()
+"."+ methodCall.name() +")");

while (methodCall!=null) {

finalRequest<Object> request = methodCall.originatingRequest();
if (request ==null) {
methodCall =null;
} elseif (request instanceofMethodCall) {
methodCall = ((MethodCall<Object>) request);
callStack.add("Service Call("+ methodCall.objectName()
+"."+ methodCall.name() +")");
} elseif (request instanceofHttpRequest) {
finalHttpRequest httpRequest = ((HttpRequest) request);

callStack.add("REST Call("+ httpRequest.getRemoteAddress()
+"."+ httpRequest.getUri() +")");

methodCall =null;
} else {
methodCall =null;
}
}

return callStack;
}

}
Now let's wire in this service twice: once used from a ServiceQueue (fastest) and once from a ServiceBundle. Notice that the above calls logger.info("GET CallStack called"), and then uses requestContext.getMethodCall() to get the current MethodCall in QBit. Note that a MethodCall is a Request in QBit, and an HttpRequest is a Request as well. You can track the current MethodCall all the way back to the HttpRequest using the request.originatingRequest() method as shown above. We use originatingRequest to find the original HttpRequest and all of the MethodCalls in between.

Wiring in InternalServiceImpl to show service queue call stack and logging mdc

publicstaticvoid main(String... args) throws Exception {
finalManagedServiceBuilder managedServiceBuilder =ManagedServiceBuilder.managedServiceBuilder();


managedServiceBuilder.setRootURI("/");

managedServiceBuilder.enableLoggingMappedDiagnosticContext();

/** Create Service from Service Queue. */
finalInternalService internalServiceFromServiceQueue = getInternalServiceFromServiceQueue(managedServiceBuilder);


/** Create Service from Service Bundle. */
finalInternalService internalServiceFromServiceBundle = getInternalServiceFromServiceBundle(managedServiceBuilder);


finalStatsCollector statsCollectorForRest = managedServiceBuilder.getStatServiceBuilder().buildStatsCollector();

finalRestService restService =newRestService(internalServiceFromServiceBundle,
internalServiceFromServiceQueue,
ReactorBuilder.reactorBuilder().build(), Timer.timer(), statsCollectorForRest);

managedServiceBuilder.addEndpointService(restService);


managedServiceBuilder.getEndpointServerBuilder().build().startServer();


}

privatestaticInternalService getInternalServiceFromServiceQueue(ManagedServiceBuilder managedServiceBuilder) {
finalInternalServiceImpl internalServiceImpl =newInternalServiceImpl(newRequestContext());
finalServiceBuilder serviceBuilderForServiceObject = managedServiceBuilder.createServiceBuilderForServiceObject(internalServiceImpl);
finalServiceQueue serviceQueue = serviceBuilderForServiceObject.buildAndStartAll();
return serviceQueue.createProxy(InternalService.class);
}


privatestaticInternalService getInternalServiceFromServiceBundle(ManagedServiceBuilder managedServiceBuilder) {
finalInternalServiceImpl internalServiceImpl =newInternalServiceImpl(newRequestContext());
finalServiceBundle serviceBundle = managedServiceBuilder.createServiceBundleBuilder().build().startServiceBundle();
serviceBundle.addServiceObject("myService", internalServiceImpl);
return serviceBundle.createLocalProxy(InternalService.class, "myService");
}
Then we use these services from the RestService example that we created to show a call stack from a ServiceQueue and a call stack from a ServiceBundle.

Using services to show the call stack from a ServiceQueue and a ServiceBundle.

@RequestMapping ("rest")
publicclassRestServiceextendsBaseService {

privatefinalLogger logger =LoggerFactory.getLogger(RestService.class);
privatefinalInternalService internalServiceFromServiceQueue;
privatefinalInternalService internalServiceFromServiceBundle;

publicRestService(finalInternalServiceinternalServiceFromServiceBundle,
finalInternalServiceinternalServiceFromServiceQueue,
finalReactorreactor,
finalTimertimer,
finalStatsCollectorstatsCollector) {
super(reactor, timer, statsCollector);
this.internalServiceFromServiceBundle = internalServiceFromServiceBundle;
this.internalServiceFromServiceQueue = internalServiceFromServiceQueue;
reactor.addServiceToFlush(internalServiceFromServiceBundle);
reactor.addServiceToFlush(internalServiceFromServiceQueue);
}

@RequestMapping ("callstack/queue")
publicvoidcallStackFromQueue(finalCallback<List<String>>callback) {
logger.info("Logger {}", MDC.getCopyOfContextMap());
internalServiceFromServiceQueue.getCallStack(callback);
}

@RequestMapping ("callstack/bundle")
publicvoidcallStackFromBundle(finalCallback<List<String>>callback) {
logger.info("Logger {}", MDC.getCopyOfContextMap());
internalServiceFromServiceBundle.getCallStack(callback);
}
Now let's call this service with REST and see the results.

Calling REST service to see example service queue call stack

$ curl http://localhost:8080/rest/callstack/queue | jq .
[
"Service Call(.getCallStack)",
"Service Call(restservice.callStackFromQueue)",
"REST Call(0:0:0:0:0:0:0:1:63881./rest/callstack/queue)"
]

Calling REST service to see example service bundle call stack

$ curl http://localhost:8080/rest/callstack/bundle | jq .
[
"Service Call(myService.getCallStack)",
"Service Call(restservice.callStackFromBundle)",
"REST Call(0:0:0:0:0:0:0:1:63899./rest/callstack/bundle)"
]

Output

19:20:12.807 [QueueListener|Send Queue  rest] INFO  i.a.qbit.example.mdc.RestService - Logger {requestRemoteAddress=0:0:0:0:0:0:0:1:63909, requestUri=/rest/callstack/queue, requestHttpMethod=GET}
- 0:0:0:0:0:0:0:1:63909 - /rest/callstack/queue - GET

19:20:12.808 [QueueListener|Send Queue internalserviceimpl] INFO i.a.q.e.mdc.InternalServiceImpl - GET CallStack called
- 0:0:0:0:0:0:0:1:63909 - /rest/callstack/queue - GET

19:20:14.906 [QueueListener|Send Queue rest] INFO i.a.qbit.example.mdc.RestService - Logger {requestRemoteAddress=0:0:0:0:0:0:0:1:63910, requestUri=/rest/callstack/bundle, requestHttpMethod=GET}
- 0:0:0:0:0:0:0:1:63910 - /rest/callstack/bundle - GET

19:20:14.958 [QueueListener|Send Queue /services/myService] INFO i.a.q.e.mdc.InternalServiceImpl - GET CallStack called
- 0:0:0:0:0:0:0:1:63910 - /rest/callstack/bundle - GET
Think about this for a moment: our call context just crossed the thread boundary along with the call. Pretty cool?

Getting the current request context

You do not have to call ManagedServiceBuilder.enableLoggingMappedDiagnosticContext to get the request context. All you need to call is enableRequestChain, which enables the request chain. There is some slight overhead for this, but it allows REST and WebSocket services to pass the originating request, method call, etc. to downstream services, where they are available via the RequestContext. Remember that a MethodCall, an HttpRequest, a WebSocketMessage and an Event are all Request objects in QBit. Calling ManagedServiceBuilder.enableLoggingMappedDiagnosticContext also enables ManagedServiceBuilder.enableRequestChain.
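A sketch of that lighter-weight wiring:

// Request chain only, without the MDC logging interceptors.
final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder();
managedServiceBuilder.setRootURI("/");
managedServiceBuilder.enableRequestChain();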
The class RequestContext allows you to access the current request or current method.

QBit's RequestContext class

package io.advantageous.qbit.service;

import io.advantageous.qbit.message.MethodCall;
import io.advantageous.qbit.message.Request;

import java.util.Optional;

/**
 * Holds the current request for the method call.
 */
public class RequestContext {

    /** Grab the current request.
     *
     * @return Optional request.
     */
    public Optional<Request<Object>> getRequest() {
        ...
    }

    /** Grab the current method call.
     *
     * @return Optional method call.
     */
    public Optional<MethodCall<Object>> getMethodCall() {
        ...
    }

    ...

}
In addition QBit provides a way to access the current HttpRequest associated with a service call chain. This is done via the HttpContext which extends the RequestContext.

QBit's HttpContext class

/**
 * Holds current information about the HttpRequest.
 */
public class HttpContext extends RequestContext {

    /** Grab the current http request.
     *
     * @return Optional http request.
     */
    public Optional<HttpRequest> getHttpRequest() {
        ...
    }
    ...
We can extend our example to capture the HttpContext. Let's add this to the RestService.
    @RequestMapping("http-info")
    public String httpInfo() {

        final StringBuilder builder = new StringBuilder();
        final HttpContext httpContext = new HttpContext();
        final Optional<HttpRequest> httpRequest = httpContext.getHttpRequest();
        if (httpRequest.isPresent()) {
            builder.append("URI = ").append(httpRequest.get().getUri()).append("\n");
            builder.append("HTTP Method = ").append(httpRequest.get().getMethod()).append("\n");
            builder.append("USER AGENT = ").append(
                    httpRequest.get().getHeaders().getFirst(HttpHeaders.USER_AGENT)).append("\n");
        } else {
            builder.append("request not found");
        }

        final RequestContext requestContext = new RequestContext();

        if (requestContext.getMethodCall().isPresent()) {
            final MethodCall<Object> methodCall = requestContext.getMethodCall().get();
            builder.append("Object Name = ").append(methodCall.objectName()).append("\n");
            builder.append("Method Name = ").append(methodCall.name()).append("\n");
        }
        return builder.toString();
    }
The above shows how to use both HttpContext and another example of RequestContext.
$ curl http://localhost:8080/rest/http-info | jq .
"URI = /rest/http-info\nHTTP Method = GET\nUSER AGENT = curl/7.43.0\nObject Name = restservice\nMethod Name = httpInfo\n"

Complete code for the example

packageio.advantageous.qbit.example.mdc;

importio.advantageous.qbit.reactive.Callback;

importjava.util.List;

publicinterfaceInternalService {

voidgetCallStack(Callback<List<String>>listCallback);
}
...
packageio.advantageous.qbit.example.mdc;


importio.advantageous.qbit.http.request.HttpRequest;
importio.advantageous.qbit.message.MethodCall;
importio.advantageous.qbit.message.Request;
importio.advantageous.qbit.service.RequestContext;
importorg.slf4j.Logger;
importorg.slf4j.LoggerFactory;

importjava.util.ArrayList;
importjava.util.Arrays;
importjava.util.List;
importjava.util.Optional;

publicclassInternalServiceImpl {


privatefinalLogger logger =LoggerFactory.getLogger(InternalServiceImpl.class);
privatefinalRequestContext requestContext;

publicInternalServiceImpl(finalRequestContextrequestContext) {
this.requestContext = requestContext;
}

publicList<String>getCallStack() {

logger.info("GET CallStack called");

finalOptional<MethodCall<Object>> currentMethodCall = requestContext.getMethodCall();

if (!currentMethodCall.isPresent()) {
logger.info("Method not found");
returnArrays.asList("MethodCall Not Found");
}

finalList<String> callStack =newArrayList<>();
MethodCall<Object> methodCall = currentMethodCall.get();


callStack.add("Service Call("+ methodCall.objectName()
+"."+ methodCall.name() +")");

while (methodCall!=null) {

finalRequest<Object> request = methodCall.originatingRequest();
if (request ==null) {
methodCall =null;
} elseif (request instanceofMethodCall) {
methodCall = ((MethodCall<Object>) request);
callStack.add("Service Call("+ methodCall.objectName()
+"."+ methodCall.name() +")");
} elseif (request instanceofHttpRequest) {
finalHttpRequest httpRequest = ((HttpRequest) request);

callStack.add("REST Call("+ httpRequest.getRemoteAddress()
+"."+ httpRequest.getUri() +")");

methodCall =null;
} else {
methodCall =null;
}
}

return callStack;
}

}
...
packageio.advantageous.qbit.example.mdc;

importio.advantageous.qbit.admin.ManagedServiceBuilder;
importio.advantageous.qbit.annotation.RequestMapping;
importio.advantageous.qbit.http.HttpContext;
importio.advantageous.qbit.http.HttpHeaders;
importio.advantageous.qbit.http.request.HttpRequest;
importio.advantageous.qbit.message.MethodCall;
importio.advantageous.qbit.reactive.Callback;
importio.advantageous.qbit.reactive.Reactor;
importio.advantageous.qbit.reactive.ReactorBuilder;
importio.advantageous.qbit.service.*;
importio.advantageous.qbit.service.stats.StatsCollector;
importio.advantageous.qbit.util.Timer;
importorg.slf4j.MDC;

importorg.slf4j.LoggerFactory;

importorg.slf4j.Logger;
importjava.util.List;
importjava.util.Map;
importjava.util.Optional;


/**
* curl http://localhost:8080/rest/mdc
* curl http://localhost:8080/rest/callstack/queue
* curl http://localhost:8080/rest/callstack/queue
*
*/
@RequestMapping ("rest")
publicclassRestServiceextendsBaseService {

privatefinalLogger logger =LoggerFactory.getLogger(RestService.class);
privatefinalInternalService internalServiceFromServiceQueue;
privatefinalInternalService internalServiceFromServiceBundle;

publicRestService(finalInternalServiceinternalServiceFromServiceBundle,
finalInternalServiceinternalServiceFromServiceQueue,
finalReactorreactor,
finalTimertimer,
finalStatsCollectorstatsCollector) {
super(reactor, timer, statsCollector);
this.internalServiceFromServiceBundle = internalServiceFromServiceBundle;
this.internalServiceFromServiceQueue = internalServiceFromServiceQueue;
reactor.addServiceToFlush(internalServiceFromServiceBundle);
reactor.addServiceToFlush(internalServiceFromServiceQueue);
}

@RequestMapping ("callstack/queue")
publicvoidcallStackFromQueue(finalCallback<List<String>>callback) {
logger.info("Logger {}", MDC.getCopyOfContextMap());
internalServiceFromServiceQueue.getCallStack(callback);
}

@RequestMapping ("callstack/bundle")
publicvoidcallStackFromBundle(finalCallback<List<String>>callback) {
logger.info("Logger {}", MDC.getCopyOfContextMap());
internalServiceFromServiceBundle.getCallStack(callback);
}


@RequestMapping ("mdc")
publicvoidmdc(finalCallback<Map<String,String>>callback) {
logger.info("CALLED MDC");
callback.returnThis(MDC.getCopyOfContextMap());
}

@RequestMapping ("ping")
publicbooleanping() {
returntrue;
}


@RequestMapping ("http-info")
publicStringhttpInfo() {

finalStringBuilder builder =newStringBuilder();
finalHttpContext httpContext =newHttpContext();
finalOptional<HttpRequest> httpRequest = httpContext.getHttpRequest();
if (httpRequest.isPresent()) {
builder.append("URI = ").append(httpRequest.get().getUri()).append("\n");
builder.append("HTTP Method = ").append(httpRequest.get().getMethod()).append("\n");
builder.append("USER AGENT = ").append(
httpRequest.get().getHeaders().getFirst(HttpHeaders.USER_AGENT)).append("\n");
} else {
builder.append("request not found");
}


finalRequestContext requestContext =newRequestContext();

if (requestContext.getMethodCall().isPresent()) {
finalMethodCall<Object> methodCall = requestContext.getMethodCall().get();
builder.append("Object Name = ").append(methodCall.objectName()).append("\n");
builder.append("Method Name = ").append(methodCall.name()).append("\n");
}
return builder.toString();
}


publicstaticvoidmain(String... args) throwsException {
finalManagedServiceBuilder managedServiceBuilder =ManagedServiceBuilder.managedServiceBuilder();


managedServiceBuilder.setRootURI("/");

managedServiceBuilder.enableLoggingMappedDiagnosticContext();

/** Create Service from Service Queue. */
finalInternalService internalServiceFromServiceQueue = getInternalServiceFromServiceQueue(managedServiceBuilder);


/** Create Service from Service Bundle. */
finalInternalService internalServiceFromServiceBundle = getInternalServiceFromServiceBundle(managedServiceBuilder);


finalStatsCollector statsCollectorForRest = managedServiceBuilder.getStatServiceBuilder().buildStatsCollector();

finalRestService restService =newRestService(internalServiceFromServiceBundle,
internalServiceFromServiceQueue,
ReactorBuilder.reactorBuilder().build(), Timer.timer(), statsCollectorForRest);

managedServiceBuilder.addEndpointService(restService);


managedServiceBuilder.getEndpointServerBuilder().build().startServer();


}

privatestaticInternalServicegetInternalServiceFromServiceQueue(ManagedServiceBuildermanagedServiceBuilder) {
finalInternalServiceImpl internalServiceImpl =newInternalServiceImpl(newRequestContext());
finalServiceBuilder serviceBuilderForServiceObject = managedServiceBuilder.createServiceBuilderForServiceObject(internalServiceImpl);
finalServiceQueue serviceQueue = serviceBuilderForServiceObject.buildAndStartAll();
return serviceQueue.createProxy(InternalService.class);
}


privatestaticInternalServicegetInternalServiceFromServiceBundle(ManagedServiceBuildermanagedServiceBuilder) {
finalInternalServiceImpl internalServiceImpl =newInternalServiceImpl(newRequestContext());
finalServiceBundle serviceBundle = managedServiceBuilder.createServiceBundleBuilder().build().startServiceBundle();
serviceBundle.addServiceObject("myService", internalServiceImpl);
return serviceBundle.createLocalProxy(InternalService.class, "myService");
}

}
<configuration>

<appendername="STDOUT"class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n- %X{requestRemoteAddress} - %X{requestUri} - %X{requestHttpMethod}%n</pattern>
</encoder>
</appender>


<loggername="io.advantageous.qbit"level="INFO"/>

<rootlevel="INFO">
<appender-refref="STDOUT"/>
</root>
</configuration>

Internals of QBit MDC support.

To make this all happen we added some new guts to QBit. We added some AOP interceptors to intercept method calls and added a new type of interceptor to intercept when we are going to send a call.
QBit has three types of interceptors.

BeforeMethodSent (NEW added for MDC and RequestContext support!)

package io.advantageous.qbit.client;


import io.advantageous.qbit.message.MethodCallBuilder;

public interface BeforeMethodSent {

default void beforeMethodSent(final MethodCallBuilder methodBuilder) {}
}

BeforeMethodSent gets called just before a method is sent to a service queue or a service bundle.
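As a minimal illustration of the hook (using only the interface shown above; this is a sketch, not QBit source), an interceptor can observe every outgoing call. The ForwardCallMethodInterceptor shown later in this post uses the same shape to stamp the originating request.

import io.advantageous.qbit.client.BeforeMethodSent;
import io.advantageous.qbit.message.MethodCallBuilder;

import java.util.concurrent.atomic.AtomicLong;

/** Sketch: counts how many method calls are sent through a client proxy. */
public class CountingBeforeMethodSent implements BeforeMethodSent {

    private final AtomicLong sentCount = new AtomicLong();

    @Override
    public void beforeMethodSent(final MethodCallBuilder methodBuilder) {
        // Inspect or mutate the outgoing call here; this sketch just counts it.
        sentCount.incrementAndGet();
    }

    public long sentCount() {
        return sentCount.get();
    }
}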

BeforeMethodCall

package io.advantageous.qbit.service;

import io.advantageous.qbit.message.MethodCall;

/**
* Use this to register for before method calls for services.
* <p>
* created by Richard on 8/26/14.
*
* @author rhightower
*/
@SuppressWarnings("SameReturnValue")
public interface BeforeMethodCall {

/**
*
* @param call method call
* @return true if the method call should continue.
*/
boolean before(MethodCall call);
}

AfterMethodCall


/**
* Use this to register for after method calls for services.
* created by Richard on 8/26/14.
*
* @author rhightower
*/
@SuppressWarnings({"BooleanMethodIsAlwaysInverted", "SameReturnValue"})
public interface AfterMethodCall {


boolean after(MethodCall call, Response response);
}

We also provide interceptor chains to make it easier to wire these up.

Interceptor Chains

packageio.advantageous.qbit.client;

importio.advantageous.qbit.message.MethodCallBuilder;

importjava.util.Arrays;
importjava.util.Collections;
importjava.util.List;

publicclassBeforeMethodSentChainimplementsBeforeMethodSent {


privatefinalList<BeforeMethodSent> beforeMethodCallSentList;

publicstaticBeforeMethodSentChainbeforeMethodSentChain(BeforeMethodSent... beforeMethodSentCalls) {
returnnewBeforeMethodSentChain(Arrays.asList(beforeMethodSentCalls));
}

publicBeforeMethodSentChain(List<BeforeMethodSent>beforeMethodCallSentList) {
this.beforeMethodCallSentList =Collections.unmodifiableList(beforeMethodCallSentList);
}

@Override
publicvoidbeforeMethodSent(finalMethodCallBuildermethodBuilder) {

for (finalBeforeMethodSent beforeMethodCallSent : beforeMethodCallSentList) {
beforeMethodCallSent.beforeMethodSent(methodBuilder);
}
}

}

...

packageio.advantageous.qbit.service;

importio.advantageous.qbit.message.MethodCall;
importio.advantageous.qbit.message.Response;

importjava.util.Arrays;
importjava.util.Collections;
importjava.util.List;

publicclassAfterMethodCallChainimplementsAfterMethodCall{

publicstaticAfterMethodCallChainafterMethodCallChain(finalAfterMethodCall... calls) {
returnnewAfterMethodCallChain(Arrays.asList(calls));

}
privatefinalList<AfterMethodCall> afterMethodCallList;

publicAfterMethodCallChain(List<AfterMethodCall>afterMethodCallList) {
this.afterMethodCallList =Collections.unmodifiableList(afterMethodCallList);
}

@Override
publicbooleanafter(finalMethodCallcall, finalResponseresponse) {

for (finalAfterMethodCall afterMethodCall : afterMethodCallList) {
if (!afterMethodCall.after(call, response)) {
returnfalse;
}
}
returntrue;
}
}
...
packageio.advantageous.qbit.service;

importio.advantageous.qbit.message.MethodCall;

importjava.util.Arrays;
importjava.util.Collections;
importjava.util.List;

publicclassBeforeMethodCallChainimplementsBeforeMethodCall {


privatefinalList<BeforeMethodCall> beforeMethodCallList;

publicstaticBeforeMethodCallChainbeforeMethodCallChain(BeforeMethodCall... beforeMethodCalls) {
returnnewBeforeMethodCallChain(Arrays.asList(beforeMethodCalls));
}

publicBeforeMethodCallChain(List<BeforeMethodCall>beforeMethodCallList) {
this.beforeMethodCallList =Collections.unmodifiableList(beforeMethodCallList);
}


@Override
publicbooleanbefore(finalMethodCallcall) {

for (finalBeforeMethodCall beforeMethodCall : beforeMethodCallList) {
if (!beforeMethodCall.before(call)) {
returnfalse;
}
}
returntrue;
}
}
The chains are all new to make dealing with AOP a bit easier in QBit.
The builders for ServiceQueue, ServiceBundle and ServiceEndpointServer allow you to pass a BeforeMethodCall, an AfterMethodCall, and a BeforeMethodSent, which could be the aforementioned chains.
To set up MDC we created an interceptor called SetupMdcForHttpRequestInterceptor, which is both a BeforeMethodCall and an AfterMethodCall interceptor.
/**
* Provides MDC support for QBit REST services.
* Intercepts method calls to a service.
* Looks at the originatingRequest to see if HTTP request is the originating request.
* If an HTTP request is the originating request then we decorate the Log with
* MDC fields.
*
* [http://logback.qos.ch/manual/mdc.html](Mapped Diagnostic Context)
*
* You can specify the headers that you want extracted and placed inside
* the Mapped Diagnostic Context as well.
*
*/
publicclassSetupMdcForHttpRequestInterceptorimplementsBeforeMethodCall, AfterMethodCall {

publicstaticfinalStringREQUEST_URI="requestUri";
publicstaticfinalStringREQUEST_REMOTE_ADDRESS="requestRemoteAddress";
publicstaticfinalStringREQUEST_HTTP_METHOD="requestHttpMethod";
publicstaticfinalStringREQUEST_HEADER_PREFIX="requestHeader.";
/**
* Holds the headers that we want to extract from the request.
*/
privatefinalSet<String> headersToAddToLoggingMappingDiagnosticsContext;

/**
* Construct a SetupMdcForHttpRequestInterceptor
* @param headersToAddToLoggingMappingDiagnosticsContext headers to add to the Logging Mapping Diagnostics Context.
*/
publicSetupMdcForHttpRequestInterceptor(Set<String>headersToAddToLoggingMappingDiagnosticsContext) {
this.headersToAddToLoggingMappingDiagnosticsContext =
Collections.unmodifiableSet(headersToAddToLoggingMappingDiagnosticsContext);
}

/**
* Gets called before a method gets invoked on a service.
* This adds request URI, remote address and request headers of the HttpRequest if found.
* @param methodCall methodCall
* @return true to continue, always true.
*/
@Override
publicbooleanbefore(finalMethodCallmethodCall) {

finalOptional<HttpRequest> httpRequest = findHttpRequest(methodCall);
if (httpRequest.isPresent()) {
extractRequestInfoAndPutItIntoMappedDiagnosticContext(httpRequest.get());
}
returntrue;
}

privateOptional<HttpRequest>findHttpRequest(Request<Object>request) {

if (request.originatingRequest() instanceofHttpRequest) {
returnOptional.of(((HttpRequest) request.originatingRequest()));
} elseif (request.originatingRequest()!=null) {
return findHttpRequest(request.originatingRequest());
} else {
returnOptional.empty();
}
}


/**
* Gets called after a method completes invocation on a service.
* Used to clear the logging Mapped Diagnostic Context.
* @param call method call
* @param response response from method
* @return always true
*/
@Override
publicbooleanafter(finalMethodCallcall, finalResponseresponse) {
MDC.clear();
returntrue;
}

/**
* Extract request data and put it into the logging Mapped Diagnostic Context.
* @param httpRequest httpRequest
*/
privatevoidextractRequestInfoAndPutItIntoMappedDiagnosticContext(finalHttpRequesthttpRequest) {
MDC.put(REQUEST_URI, httpRequest.getUri());
MDC.put(REQUEST_REMOTE_ADDRESS, httpRequest.getRemoteAddress());
MDC.put(REQUEST_HTTP_METHOD, httpRequest.getMethod());

extractHeaders(httpRequest);

}

/**
* Extract headersToAddToLoggingMappingDiagnosticsContext data and put them into the logging mapping diagnostics context.
* @param httpRequest httpRequest
*/
privatevoidextractHeaders(finalHttpRequesthttpRequest) {
if (headersToAddToLoggingMappingDiagnosticsContext.size() >0) {
finalMultiMap<String, String> headers = httpRequest.getHeaders();
headersToAddToLoggingMappingDiagnosticsContext.forEach(header -> {
String value = headers.getFirst(header);
if (!Str.isEmpty(value)) {
MDC.put(REQUEST_HEADER_PREFIX+ header, value);
}
});
}
}

}
To send originating requests to downstream servers, we created the ForwardCallMethodInterceptor, which implements BeforeMethodSent.

Creates the call chain.

packageio.advantageous.qbit.http.interceptor;

importio.advantageous.qbit.client.BeforeMethodSent;
importio.advantageous.qbit.message.MethodCallBuilder;
importio.advantageous.qbit.message.Request;
importio.advantageous.qbit.service.RequestContext;

importjava.util.Optional;

/** This is used by proxies to find the parent request and forward it
* to the service that the parent calls.
*/
publicclassForwardCallMethodInterceptorimplementsBeforeMethodSent {

/**
* Holds the request context, which holds the active request.
*/
privatefinalRequestContext requestContext;

/**
*
* @param requestContext request context
*/
publicForwardCallMethodInterceptor(finalRequestContextrequestContext) {
this.requestContext = requestContext;
}

/**
* Intercept the call before it gets sent to the service queue.
* @param methodBuilder methodBuilder
*/
@Override
publicvoidbeforeMethodSent(finalMethodCallBuildermethodBuilder) {

if (methodBuilder.getOriginatingRequest() ==null) {
finalOptional<Request<Object>> request = requestContext.getRequest();
if (request.isPresent()) {
methodBuilder.setOriginatingRequest(request.get());
}
}
}
}
But as you may have noticed, before it can forward the call chain, the RequestContext has to be properly populated. This is done by the CaptureRequestInterceptor.

CaptureRequestInterceptor

packageio.advantageous.qbit.service;

importio.advantageous.qbit.message.MethodCall;
importio.advantageous.qbit.message.Response;

/**
* Captures the Request if any present and puts it in the RequestContext.
*/
publicclassCaptureRequestInterceptorimplementsBeforeMethodCall, AfterMethodCall {


/** Captures the current method call and if originating as an HttpRequest,
* then we pass the HttpRequest into the the RequestContext.
* @param methodCall methodCall
* @return always true which means continue.
*/
@Override
publicbooleanbefore(finalMethodCallmethodCall) {

RequestContext.setRequest(methodCall);
returntrue;
}


/**
* Clear the request out of the context
* @param methodCall methodCall
* @param response response
* @return always true
*/
@Override
publicbooleanafter(finalMethodCallmethodCall, finalResponseresponse) {
RequestContext.clear();
returntrue;
}

}
The rest of the RequestContext class, which we introduced earlier, is:

QBit's RequestContext

packageio.advantageous.qbit.service;

importio.advantageous.qbit.message.MethodCall;
importio.advantageous.qbit.message.Request;

importjava.util.Optional;

/**
* Holds the current request for the method call.
*/
publicclassRequestContext {

/** Current request. */
privatefinalstaticThreadLocal<Request<Object>> requestThreadLocal =newThreadLocal<>();


/** Grab the current request.
*
* @return Optional request.
*/
publicOptional<Request<Object>>getRequest() {
finalRequest request = requestThreadLocal.get();
returnOptional.ofNullable(request);

}

/** Grab the current request.
*
* @return Optional request.
*/
publicOptional<MethodCall<Object>>getMethodCall() {
finalRequest<Object> request = requestThreadLocal.get();
if (request instanceofMethodCall) {
returnOptional.of(((MethodCall<Object>) request));
}
returnOptional.empty();

}


/**
* Used from this package to populate request for this thread.
* @param request request
*/
staticvoidsetRequest(finalRequest<Object>request) {
requestThreadLocal.set(request);
}

/**
* Clear the request.
*/
staticvoidclear() {
requestThreadLocal.set(null);
}


}
You don't need ManagedServiceBuilder to use these features. It is just that it makes the wiring a lot easier. You can wire up these interceptors yourself as this example from QBit's unit tests shows:

Wiring up QBit MDC without ManagedServiceBuilder

    @Test
publicvoid testIntegrationWithServiceBundle() throws Exception{


mdcForHttpRequestInterceptor =newSetupMdcForHttpRequestInterceptor(Sets.set("foo"));

finalCaptureRequestInterceptor captureRequestInterceptor =newCaptureRequestInterceptor();
captureRequestInterceptor.before(
methodCallBuilder.setName("restMethod").setOriginatingRequest(httpRequest).build());


finalServiceBundle serviceBundle =ServiceBundleBuilder.serviceBundleBuilder()
.setBeforeMethodCallOnServiceQueue(
BeforeMethodCallChain.beforeMethodCallChain(captureRequestInterceptor,
mdcForHttpRequestInterceptor))
.setAfterMethodCallOnServiceQueue(
AfterMethodCallChain.afterMethodCallChain(captureRequestInterceptor,
mdcForHttpRequestInterceptor))
.setBeforeMethodSent(
newForwardCallMethodInterceptor(newRequestContext()))
.build().startServiceBundle();


serviceBundle.addServiceObject("my", newMyServiceImpl());


finalMyService localProxy = serviceBundle.createLocalProxy(MyService.class, "my");

finalAsyncFutureCallback<String> callback =AsyncFutureBuilder.asyncFutureBuilder().build(String.class);
localProxy.getRequestURI(callback);

localProxy.clientProxyFlush();

assertEquals("/foo", callback.get());



finalAsyncFutureCallback<Map<String, String>> callbackMap =AsyncFutureBuilder.asyncFutureBuilder()
.buildMap(String.class, String.class);

localProxy.getMDC(callbackMap);

localProxy.clientProxyFlush();

validate(callbackMap.get());

captureRequestInterceptor.after(null, null);

serviceBundle.stop();


}

That about covers it for the internals. The ManagedServiceBuilder just configures those interceptors for you.

ManagedServiceBuilder configuring AOP interceptors for QBit

/**
* Hold lists of interceptors.
*/
privatestaticclassInterceptors {
List<BeforeMethodCall> before =newArrayList<>();
List<AfterMethodCall> after =newArrayList<>();
List<BeforeMethodSent> beforeSent =newArrayList<>();
}

/**
* Configure a list of common interceptors.
* @return
*/
privateInterceptors configureInterceptors() {
Interceptors interceptors =newInterceptors();
SetupMdcForHttpRequestInterceptor setupMdcForHttpRequestInterceptor;
if (enableLoggingMappedDiagnosticContext) {
enableRequestChain =true;
if (requestHeadersToTrackForMappedDiagnosticContext!=null&&
requestHeadersToTrackForMappedDiagnosticContext.size()>0) {
setupMdcForHttpRequestInterceptor =
newSetupMdcForHttpRequestInterceptor(requestHeadersToTrackForMappedDiagnosticContext);
}else {
setupMdcForHttpRequestInterceptor =
newSetupMdcForHttpRequestInterceptor(Collections.emptySet());
}
interceptors.before.add(setupMdcForHttpRequestInterceptor);
interceptors.after.add(setupMdcForHttpRequestInterceptor);
}

if (enableRequestChain) {
finalCaptureRequestInterceptor captureRequestInterceptor =newCaptureRequestInterceptor();
interceptors.before.add(captureRequestInterceptor);
interceptors.after.add(captureRequestInterceptor);
interceptors.beforeSent.add(newForwardCallMethodInterceptor(newRequestContext()));
}
return interceptors;
}

...


privatevoid configureEndpointServerBuilderForInterceptors(finalEndpointServerBuilder endpointServerBuilder) {

finalInterceptors interceptors = configureInterceptors();
if (interceptors.before.size() >0) {
endpointServerBuilder.setBeforeMethodCallOnServiceQueue(newBeforeMethodCallChain(interceptors.before));
}
if (interceptors.after.size() >0) {
endpointServerBuilder.setAfterMethodCallOnServiceQueue(newAfterMethodCallChain(interceptors.after));
}
if (interceptors.beforeSent.size() >0) {
endpointServerBuilder.setBeforeMethodSent(newBeforeMethodSentChain(interceptors.beforeSent));
}
}


privatevoid configureServiceBundleBuilderForInterceptors(finalServiceBundleBuilder serviceBundleBuilder) {

finalInterceptors interceptors = configureInterceptors();
if (interceptors.before.size() >0) {
serviceBundleBuilder.setBeforeMethodCallOnServiceQueue(newBeforeMethodCallChain(interceptors.before));
}
if (interceptors.after.size() >0) {
serviceBundleBuilder.setAfterMethodCallOnServiceQueue(newAfterMethodCallChain(interceptors.after));
}
if (interceptors.beforeSent.size() >0) {
serviceBundleBuilder.setBeforeMethodSent(newBeforeMethodSentChain(interceptors.beforeSent));
}
}


privatevoid configureServiceBuilderForInterceptors(finalServiceBuilder serviceBuilder) {

finalInterceptors interceptors = configureInterceptors();
if (interceptors.before.size() >0) {
serviceBuilder.setBeforeMethodCall(newBeforeMethodCallChain(interceptors.before));
}
if (interceptors.after.size() >0) {
serviceBuilder.setAfterMethodCall(newAfterMethodCallChain(interceptors.after));
}
if (interceptors.beforeSent.size() >0) {
serviceBuilder.setBeforeMethodSent(newBeforeMethodSentChain(interceptors.beforeSent));
}
}


publicEndpointServerBuilder getEndpointServerBuilder() {
if (endpointServerBuilder==null) {
endpointServerBuilder =EndpointServerBuilder.endpointServerBuilder();
endpointServerBuilder.setPort(this.getPort());
....



configureEndpointServerBuilderForInterceptors(endpointServerBuilder);

...

//And so on

Conclusion

Distributed development can be harder to debug. Tools like MDC, LogStash and Splunk make it much easier to manage microservices. With QBit, MDC and distributed logging support are built in, so you can get things done when you are writing microservices.
Read more about QBit with these Wiki Pages or read the QBit Microservices tutorial.

QBit Microservices Lib: EventBus using Consul and QBit to wire together an event bus


EventBus using Consul and QBit to wire together an event bus



QBit, the microservices library for Java, has an event system. You have likely seen it if you have read through the QBit documents.
QBit allows the event bus to be connected to remote instances of QBit, forming a cluster.
It does this through the ServiceDiscovery that QBit provides.
By default the EventBusClusterBuilder will use the ConsulServiceDiscoveryBuilder if you do not provide it a ServiceDiscovery. You can read more about Consul and Service Discovery.
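If you want to supply the ServiceDiscovery explicitly rather than relying on the default, the wiring looks roughly like this (the builder and setter names here are assumptions, shown only as a sketch):

// Sketch: supply a ServiceDiscovery to the event bus cluster builder explicitly.
// consulServiceDiscoveryBuilder() and setServiceDiscovery(...) are assumed names.
final ServiceDiscovery serviceDiscovery =
        ConsulServiceDiscoveryBuilder.consulServiceDiscoveryBuilder().build();
serviceDiscovery.start();

final EventBusClusterBuilder eventBusClusterBuilder = EventBusClusterBuilder.eventBusClusterBuilder();
eventBusClusterBuilder.setServiceDiscovery(serviceDiscovery);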
To construct an event bus, you first start up Consul.

Using Consul for Microservice Service Discovery

consul agent -server -bootstrap-expect 1 -dc dc1 -data-dir /tmp/consulqbit -ui-dir ./support/ui/
Next you use the EventBusClusterBuilder to construct an event bus cluster as follows:

EventBusClusterBuilder

final EventBusClusterBuilder eventBusClusterBuilder = EventBusClusterBuilder.eventBusClusterBuilder();
eventBusClusterBuilder.setEventBusName("event-bus");
eventBusClusterBuilder.setReplicationPortLocal(replicatorPort);
final EventBusCluster eventBusCluster = eventBusClusterBuilder.build();
eventBusCluster.start();
Then you inject the eventBusCluster's event manager into the builders.

Inject the event manager from EventBusCluster into service builders

final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder
        .managedServiceBuilder().setRootURI("/")
        .setEventManager(eventBusCluster.eventManagerImpl())
        .setPort(webPort);
You inject client proxies of the event manager into other services.

Inject client proxies of the event manager into other services

final EventExampleService eventExampleService = new EventExampleService(
        eventBusCluster.createClientEventManager(),
        "event.",
        ReactorBuilder.reactorBuilder().build(),
        Timer.timer(),
        managedServiceBuilder.getStatServiceBuilder().buildStatsCollector());

managedServiceBuilder.addEndpointService(eventExampleService);
Here is a complete example:

Complete example

package io.advantageous.qbit.example.event.bus;


import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.annotation.Listen;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.http.DELETE;
import io.advantageous.qbit.annotation.http.GET;
import io.advantageous.qbit.annotation.http.POST;
import io.advantageous.qbit.eventbus.EventBusCluster;
import io.advantageous.qbit.eventbus.EventBusClusterBuilder;
import io.advantageous.qbit.events.EventManager;
import io.advantageous.qbit.events.EventManagerBuilder;
import io.advantageous.qbit.reactive.Reactor;
import io.advantageous.qbit.reactive.ReactorBuilder;
import io.advantageous.qbit.service.BaseService;
import io.advantageous.qbit.service.stats.StatsCollector;
import io.advantageous.qbit.util.Timer;

import java.util.ArrayList;
import java.util.List;

/**
 * curl http://localhost:8080/event/
 * curl -X POST -H "Content-Type: application/json" http://localhost:8080/event -d '{"id":"123", "message":"hello"}'
 */
@RequestMapping("/")
public class EventExampleService extends BaseService {

    private final EventManager eventManager;
    private final List<MyEvent> events = new ArrayList<>();

    public EventExampleService(final EventManager eventManager,
                               final String statKeyPrefix,
                               final Reactor reactor,
                               final Timer timer,
                               final StatsCollector statsCollector) {
        super(statKeyPrefix, reactor, timer, statsCollector);
        this.eventManager = eventManager;
        reactor.addServiceToFlush(eventManager);
    }

    @POST("/event")
    public boolean sendEvent(MyEvent event) {
        eventManager.sendArguments("myevent", event);
        return true;
    }


    @DELETE("/event/")
    public boolean clearEvents() {
        events.clear();
        return true;
    }

    @GET("/event/")
    public List<MyEvent> getEvents() {
        return events;
    }

    @Listen("myevent")
    public void listenEvent(final MyEvent event) {
        events.add(event);
    }

    public static void run(final int webPort, final int replicatorPort) {

        final EventBusClusterBuilder eventBusClusterBuilder = EventBusClusterBuilder.eventBusClusterBuilder();
        eventBusClusterBuilder.setEventBusName("event-bus");
        eventBusClusterBuilder.setReplicationPortLocal(replicatorPort);
        final EventBusCluster eventBusCluster = eventBusClusterBuilder.build();
        eventBusCluster.start();


        final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder
                .managedServiceBuilder().setRootURI("/")
                .setEventManager(eventBusCluster.eventManagerImpl())
                .setPort(webPort);

        final EventExampleService eventExampleService = new EventExampleService(
                eventBusCluster.createClientEventManager(),
                "event.",
                ReactorBuilder.reactorBuilder().build(),
                Timer.timer(),
                managedServiceBuilder.getStatServiceBuilder().buildStatsCollector());
        managedServiceBuilder.addEndpointService(eventExampleService);

        managedServiceBuilder.getEndpointServerBuilder().build().startServerAndWait();

    }

    public static void main(final String... args) {

        run(8080, 7070);

    }
}

....

package io.advantageous.qbit.example.event.bus;

public class MyEvent {

    private String id;
    private String message;


    public String getId() {
        return id;
    }

    public MyEvent setId(String id) {
        this.id = id;
        return this;
    }

    public String getMessage() {
        return message;
    }

    public MyEvent setMessage(String message) {
        this.message = message;
        return this;
    }
}
...
package io.advantageous.qbit.example.event.bus;

public class SecondEventExample {
    public static void main(final String... args) {

        EventExampleService.run(6060, 5050);

    }

}

....
package io.advantageous.qbit.example.event.bus;

public class ThirdEventExample {

    public static void main(final String... args) {
        EventExampleService.run(4040, 3030);
    }
}

To run this, start Consul, then run EventExampleService, SecondEventExample, and ThirdEventExample, and use the following curl commands.

Curl commands to exercise example.

## Send an event
$ curl -X POST -H "Content-Type: application/json" http://localhost:8080/event -d '{"id":"123", "message":"hello"}'
true

## See the event is on each of the nodes.
$ curl http://localhost:8080/event/
[{"id":"123","message":"hello"}]

$ curl http://localhost:6060/event/
[{"id":"123","message":"hello"}]

$ curl http://localhost:4040/event/
[{"id":"123","message":"hello"}]

CallbackBuilder and generics for Reactive Java Microservices

The CallbackBuilder is used to create callbacks. Callbacks have error handlers, timeout handlers and return handlers.

Setting up error handlers, timeout handlers and callback handlers with a callback builder.

callbackBuilder
        .setCallback(ResultSet.class, resultSet ->
                statusCallback.accept(resultSet != null))
        .setOnTimeout(() -> statusCallback.accept(false))
        .setOnError(error -> statusCallback.onError(error))
        .build(ResultSet.class);

this.addEventStorageRecordAsync(callbackBuilder.build(), storageRec);
The CallbackBuilder has many helper methods to help you deal with common Java types like Optional, Map, List, Collection, Set, String, primitive types and wrappers.
This allows you to quickly build callbacks without navigating the complexity of Generics. Let's cover a small example.
First let's define a basic service that uses lists, maps and optional.

Basic service to drive the example

package io.advantageous.qbit.example.callback;


import io.advantageous.boon.core.Lists;
import io.advantageous.boon.core.Maps;
import io.advantageous.qbit.reactive.Callback;

import java.util.List;
import java.util.Map;
import java.util.Optional;

public class EmployeeServiceImpl implements EmployeeService {

    @Override
    public void getEmployeesAsMap(final Callback<Map<String, Employee>> empMapCallback) {

        empMapCallback.returnThis(Maps.map("rick", new Employee("Rick")));
    }

    @Override
    public void getEmployeesAsList(final Callback<List<Employee>> empListCallback) {

        empListCallback.returnThis(Lists.list(new Employee("Rick")));
    }


    @Override
    public void findEmployeeByName(final Callback<Optional<Employee>> employeeCallback,
                                   final String name) {

        if (name.equals("Rick")) {
            employeeCallback.returnThis(Optional.of(new Employee("Rick")));
        } else {
            employeeCallback.returnThis(Optional.empty());
        }
    }

}
The interface for the above looks like this:

Basic interface to drive the example

package io.advantageous.qbit.example.callback;

import io.advantageous.qbit.reactive.Callback;

import java.util.List;
import java.util.Map;
import java.util.Optional;

public interface EmployeeService {
    void getEmployeesAsMap(Callback<Map<String, Employee>> empMapCallback);

    void getEmployeesAsList(Callback<List<Employee>> empListCallback);

    void findEmployeeByName(Callback<Optional<Employee>> employeeCallback,
                            String name);
}
If you are familiar with QBit, all of the above should already make sense. If not, I suggest going through the home page of the wiki and coming back here after you skim it.
To show how to use the CallbackBuilder, we will define a basic REST service called CompanyRestService.

CompanyRestService to demonstrate CallbackBuilder

/**
 * To access this service
 * curl http://localhost:8080/emap
 * {"rick":{"name":"Rick"}}
 */
@RequestMapping("/")
public class CompanyRestService {

    private final Logger logger = LoggerFactory.getLogger(CompanyRestService.class);
    private final EmployeeService employeeService;

    public CompanyRestService(EmployeeService employeeService) {
        this.employeeService = employeeService;
    }

    ...

    @QueueCallback({QueueCallbackType.EMPTY, QueueCallbackType.LIMIT})
    public void process() {
        ServiceProxyUtils.flushServiceProxy(employeeService);
    }


QBit uses micro-batching, which helps optimize message passing between service queues (service actors). The QueueCallback annotation allows us to capture when our request queue is empty or when it has hit its limit. The limit is usually the batch size, but it could be other things, like hitting an important message. Whenever we hit our limit or our request queue is empty, we go ahead and flush to the downstream service by calling ServiceProxyUtils.flushServiceProxy. This should be mostly review.
As you can see, the EmployeeService has a lot of methods that take generic types like Optional, List, and Map. When we want to call a downstream service that is going to return a map, list, or optional, we have helper methods to make the construction of the callback easier.

CompanyRestService Calling getEmployeesAsMap using CallbackBuilder.wrap

@RequestMapping("/")
publicclassCompanyRestService {

...
@RequestMapping("/emap")
publicvoidemployeeMap(finalCallback<Map<String, Employee>>empMapCallback) {

finalCallbackBuilder callbackBuilder =CallbackBuilder.newCallbackBuilder();
callbackBuilder.wrap(empMapCallback); //Forward to error handling, timeout, and callback defined in empMapCallback
employeeService.getEmployeesAsMap(callbackBuilder.build());

}
In this case, we use the wrap method. This will forward errors, timeouts and the callback return to the empMapCallback.
To run this, we need to start up the application.

Starting up the REST application.

public static void main(final String... args) throws Exception {

    /** Create a ManagedServiceBuilder which simplifies QBit wiring. */
    final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder().setRootURI("/");
    managedServiceBuilder.enableLoggingMappedDiagnosticContext();

    /** Create a service queue for the employee service. */
    final ServiceQueue employeeServiceQueue = managedServiceBuilder.createServiceBuilderForServiceObject(
            new EmployeeServiceImpl()).buildAndStartAll();

    /** Add a CompanyRestService passing it a client proxy to the employee service. */
    managedServiceBuilder.addEndpointService(
            new CompanyRestService(employeeServiceQueue.createProxy(EmployeeService.class)));

    /** Start the server. */
    managedServiceBuilder.startApplication();

}
If we wanted to copy and mutate the map before we serialized it, we could use the withMapCallback to capture the async return, i.e., Callback from the employeeService.

CompanyRestService using withMapCallback and delegate

@RequestMapping("/")
publicclassCompanyRestService {
...

@RequestMapping("/emap2")
publicvoidemployeeMap2(finalCallback<Map<String, Employee>>empMapCallback) {

finalCallbackBuilder callbackBuilder =CallbackBuilder.newCallbackBuilder();
callbackBuilder.delegate(empMapCallback); //Forward to error handling and timeout defined in empMapCallback

callbackBuilder.withMapCallback(String.class, Employee.class, employeeMap -> {
logger.info("GET MAP {}", employeeMap);
empMapCallback.returnThis(employeeMap);
});
employeeService.getEmployeesAsMap(callbackBuilder.build());

}
In this case we forward just the error handling and timeout handling to the callback that we are creating, and then we create a custom return handler using withMapCallback.

CompanyRestService using withMapCallback and delegateWithLogging

@RequestMapping("/")
publicclassCompanyRestService {
...

@RequestMapping("/emap3")
publicvoidemployeeMap3(finalCallback<Map<String, Employee>>empMapCallback) {

finalCallbackBuilder callbackBuilder =CallbackBuilder.newCallbackBuilder();
// Forward to error handling and timeout defined in empMapCallback, but install some additional logging for
// timeout and error handling that associates the error and timeout handling with this call.
callbackBuilder.delegateWithLogging(empMapCallback, logger, "employeeMap3");
callbackBuilder.withMapCallback(String.class, Employee.class, employeeMap -> {
logger.info("GET MAP {}", employeeMap);
empMapCallback.returnThis(employeeMap);
});
employeeService.getEmployeesAsMap(callbackBuilder.build());
}
If you want to handle error logging and timeout logging in the context of this service's log, you can simply use the delegateWithLogging method. This will set up some basic logging for error handling and timeouts.
We of course also have methods that work with List, Collection, Set, and so on.

Working with list by using withListCallback

@RequestMapping("/")
publicclassCompanyRestService {
...

@RequestMapping("/elist")
publicvoidemployeeList(finalCallback<List<Employee>>empListCallback) {

finalCallbackBuilder callbackBuilder =CallbackBuilder.newCallbackBuilder();
// Forward to error handling and timeout defined in empMapCallback, but install some additional logging for
// timeout and error handling that associates the error and timeout handling with this call.
callbackBuilder.delegateWithLogging(empListCallback, logger, "employeeList");
callbackBuilder.withListCallback(Employee.class, employeeList -> {
logger.info("GET List {}", employeeList);
empListCallback.returnThis(employeeList);
});
employeeService.getEmployeesAsList(callbackBuilder.build());
}
The above works as you would expect. Let's mix things up a bit. We will call findEmployeeByName, which may or may not return an employee.

Working with optional by using withOptionalCallback

@RequestMapping("/")
publicclassCompanyRestService {
...

@RequestMapping("/find")
publicvoidfindEmployee(finalCallback<Employee>employeeCallback,
@RequestParam("name") finalStringname) {

finalCallbackBuilder callbackBuilder =CallbackBuilder.newCallbackBuilder();
// Forward to error handling and timeout defined in empMapCallback,
// but install some additional logging for
// timeout and error handling that associates the error and timeout handling with this call.
callbackBuilder.delegateWithLogging(employeeCallback, logger, "employeeMap3");
callbackBuilder.withOptionalCallback(Employee.class, employeeOptional -> {


if (employeeOptional.isPresent()) {
employeeCallback.returnThis(employeeOptional.get());
} else {
employeeCallback.onError(newException("Employee not found"));
}
});
employeeService.findEmployeeByName(callbackBuilder.build(), name);
}
To work with Optionals we use withOptionalCallback. Here we return an error if the employee is not found, and we return the Employee object if one is found.
/**
 * You need this if you want to do error handling (Exception) from a callback.
 * Callback Builder
 * created by rhightower on 3/23/15.
 */
@SuppressWarnings("UnusedReturnValue")
public class CallbackBuilder {


    /**
     * Builder method to set callback handler that takes a list
     * @param componentClass componentClass
     * @param callback callback
     * @param <T> T
     * @return this
     */
    public <T> CallbackBuilder withListCallback(final Class<T> componentClass,
                                                final Callback<List<T>> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a set
     * @param componentClass componentClass
     * @param callback callback
     * @param <T> T
     * @return this
     */
    public <T> CallbackBuilder withSetCallback(final Class<T> componentClass,
                                               final Callback<Set<T>> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a collection
     * @param componentClass componentClass
     * @param callback callback
     * @param <T> T
     * @return this
     */
    public <T> CallbackBuilder withCollectionCallback(final Class<T> componentClass,
                                                      final Callback<Collection<T>> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a map
     * @param keyClass keyClass
     * @param valueClass valueClass
     * @param callback callback
     * @param <K> key type
     * @param <V> value type
     * @return this
     */
    public <K, V> CallbackBuilder withMapCallback(final Class<K> keyClass,
                                                  final Class<V> valueClass,
                                                  final Callback<Map<K, V>> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a boolean
     * @param callback callback
     * @return this
     */
    public CallbackBuilder withBooleanCallback(final Callback<Boolean> callback) {
        this.callback = callback;
        return this;
    }

    /**
     * Builder method to set callback handler that takes an integer
     * @param callback callback
     * @return this
     */
    public CallbackBuilder withIntCallback(final Callback<Integer> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a long
     * @param callback callback
     * @return this
     */
    public CallbackBuilder withLongCallback(final Callback<Long> callback) {
        this.callback = callback;
        return this;
    }


    /**
     * Builder method to set callback handler that takes a string
     * @param callback callback
     * @return this
     */
    public CallbackBuilder withStringCallback(final Callback<String> callback) {
        this.callback = callback;
        return this;
    }



    /**
     * Builder method to set callback handler that takes an optional string
     * @param callback callback
     * @return this
     */
    public CallbackBuilder withOptionalStringCallback(final Callback<Optional<String>> callback) {
        this.callback = callback;
        return this;
    }



    /**
     * Builder method to set callback handler that takes an optional
     * @param callback callback
     * @return this
     */
    public <T> CallbackBuilder withOptionalCallback(final Class<T> cls, final Callback<Optional<T>> callback) {
        this.callback = callback;
        return this;
    }

Read more about callback builders and how to handle errors, timeouts and downstream calls.

Reactor

Let's say that EmployeeService was really talking to some downstream remote services, or perhaps to Cassandra and/or Redis. Let's also say that you want to add a timeout for this downstream system of, say, 10 seconds.
Then our example will use the QBit Reactor, and the easiest way to do that is to subclass BaseService.
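Here is the key piece in isolation before we walk through the full class: building a Reactor with a 10-second default timeout, exactly as the main method in the listing below does.

Reactor with a 10 second default timeout

// Build a Reactor whose callbacks time out after 10 seconds by default
// (same call chain used in the main method of the full listing below).
final Reactor reactor = ReactorBuilder.reactorBuilder()
        .setDefaultTimeOut(10)
        .setTimeUnit(TimeUnit.SECONDS)
        .build();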

Using QBit Reactor from the BaseService

package io.advantageous.qbit.example.callback;

import io.advantageous.qbit.admin.ManagedServiceBuilder;

import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestParam;
import io.advantageous.qbit.reactive.Callback;
import io.advantageous.qbit.reactive.CallbackBuilder;
import io.advantageous.qbit.reactive.Reactor;
import io.advantageous.qbit.reactive.ReactorBuilder;
import io.advantageous.qbit.service.BaseService;
import io.advantageous.qbit.service.ServiceQueue;
import io.advantageous.qbit.service.stats.StatsCollector;
import io.advantageous.qbit.util.Timer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;


@RequestMapping("/")
public class CompanyRestServiceUsingReactor extends BaseService {


    private final Logger logger = LoggerFactory.getLogger(CompanyRestService.class);
    private final EmployeeService employeeService;

    public CompanyRestServiceUsingReactor(Reactor reactor,
                                          Timer timer,
                                          StatsCollector statsCollector,
                                          EmployeeService employeeService) {
        super(reactor, timer, statsCollector);
        this.employeeService = employeeService;
        reactor.addServiceToFlush(employeeService);
    }



    @RequestMapping("/emap")
    public void employeeMap(final Callback<Map<String, Employee>> empMapCallback) {

        final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();
        callbackBuilder.wrap(empMapCallback); //Forward to error handling, timeout, and callback defined in empMapCallback
        employeeService.getEmployeesAsMap(callbackBuilder.build());

    }


    @RequestMapping("/emap2")
    public void employeeMap2(final Callback<Map<String, Employee>> empMapCallback) {

        final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();
        callbackBuilder.delegate(empMapCallback); //Forward to error handling and timeout defined in empMapCallback

        callbackBuilder.withMapCallback(String.class, Employee.class, employeeMap -> {
            logger.info("GET MAP {}", employeeMap);
            empMapCallback.returnThis(employeeMap);
        });
        employeeService.getEmployeesAsMap(callbackBuilder.build());

    }


    @RequestMapping("/emap3")
    public void employeeMap3(final Callback<Map<String, Employee>> empMapCallback) {

        final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();
        // Forward to error handling and timeout defined in empMapCallback, but install some additional logging for
        // timeout and error handling that associates the error and timeout handling with this call.
        callbackBuilder.delegateWithLogging(empMapCallback, logger, "employeeMap3");
        callbackBuilder.withMapCallback(String.class, Employee.class, employeeMap -> {
            logger.info("GET MAP {}", employeeMap);
            empMapCallback.returnThis(employeeMap);
        });
        employeeService.getEmployeesAsMap(callbackBuilder.build());
    }


    @RequestMapping("/elist")
    public void employeeList(final Callback<List<Employee>> empListCallback) {

        final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();
        // Forward to error handling and timeout defined in empMapCallback, but install some additional logging for
        // timeout and error handling that associates the error and timeout handling with this call.
        callbackBuilder.delegateWithLogging(empListCallback, logger, "employeeList");
        callbackBuilder.withListCallback(Employee.class, employeeList -> {
            logger.info("GET List {}", employeeList);
            empListCallback.returnThis(employeeList);
        });
        employeeService.getEmployeesAsList(callbackBuilder.build());
    }


    @RequestMapping("/find")
    public void findEmployee(final Callback<Employee> employeeCallback,
                             @RequestParam("name") final String name) {

        final long startTime = super.time;

        final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();
        // Forward to error handling and timeout defined in empMapCallback, but install some additional logging for
        // timeout and error handling that associates the error and timeout handling with this call.
        callbackBuilder.delegateWithLogging(employeeCallback, logger, "employeeMap3");
        callbackBuilder.withOptionalCallback(Employee.class, employeeOptional -> {


            super.recordTiming("findEmployee", time - startTime);
            if (employeeOptional.isPresent()) {

                employeeCallback.returnThis(employeeOptional.get());
            } else {
                employeeCallback.onError(new Exception("Employee not found"));
            }
        });
        employeeService.findEmployeeByName(callbackBuilder.build(), name);
    }


    public static void main(final String... args) throws Exception {

        /** Create a ManagedServiceBuilder which simplifies QBit wiring. */
        final ManagedServiceBuilder managedServiceBuilder = ManagedServiceBuilder.managedServiceBuilder().setRootURI("/");
        managedServiceBuilder.enableLoggingMappedDiagnosticContext();

        /** Create a service queue for the employee service. */
        final ServiceQueue employeeServiceQueue = managedServiceBuilder.createServiceBuilderForServiceObject(
                new EmployeeServiceImpl()).buildAndStartAll();

        /** Add a CompanyRestService passing it a client proxy to the employee service. */
        managedServiceBuilder.addEndpointService(
                new CompanyRestServiceUsingReactor(
                        ReactorBuilder.reactorBuilder().setDefaultTimeOut(10).setTimeUnit(TimeUnit.SECONDS).build(),
                        Timer.timer(),
                        managedServiceBuilder.getStatServiceBuilder().buildStatsCollector(),
                        employeeServiceQueue.createProxy(EmployeeService.class)));

        /** Start the server. */
        managedServiceBuilder.startApplication();

    }
}
Notice that the callbackBuilder is now constructed from the reactor (final CallbackBuilder callbackBuilder = super.reactor.callbackBuilder();).
To learn more about the Reactor, please read Reactively handling async calls with QBit Reactive Microservices.

Stats

When you use the BaseService, you also have access to the stats system.

Stats from BaseService

    @RequestMapping("/find")
publicvoid findEmployee(finalCallback<Employee> employeeCallback,
@RequestParam("name") finalString name) {

finallong startTime =super.time;

finalCallbackBuilder callbackBuilder =super.reactor.callbackBuilder();
callbackBuilder.delegateWithLogging(employeeCallback, logger, "employeeMap3");
callbackBuilder.withOptionalCallback(Employee.class, employeeOptional -> {


/** Record timing. */
super.recordTiming("findEmployee", time - startTime);

if (employeeOptional.isPresent()) {
/* Increment count of employees found. */
super.incrementCount("employeeFound");
employeeCallback.returnThis(employeeOptional.get());
} else {
/* Increment count of employees not found. */
super.incrementCount("employeeNotFound");
employeeCallback.onError(newException("Employee not found"));
}
});
employeeService.findEmployeeByName(callbackBuilder.build(), name);
}

Micro-batching and QBit - tuning micro-batches


Intro

I have been working on QBit for two years or so, not alone but with others as well. QBit has improved by leaps and bounds; we have used it in anger on several projects, and it continues to improve. I'd like to tell you how it all started and then share some code talk and perf tests.

Early days

QBit started with a project where we had a CPU-intensive user preference engine running in memory. The engine could calculate around 30 thousand user requests a second. It looked at user actions, which were summarized and stored in memory, and then generated a list of recommendations with some rules-based mix-ins based on users' stated preferences versus their actual actions. I did not design the algorithm. I did rewrite most of the original with a focus on basic performance. There was a working prototype, but it would not handle the load we were expecting (the way it was written).
The application could get up to 100 million users. And the users would not trickle in: there were peak hours in a week when almost all of the users would be using the system, and then the system would slow down to just a trickle for most of the week. In production, I think the application never had more than 13 million users (not sure about this; at most it was 20 million), but we had to assume the worst (or best).
We tried several approaches. We ended up using Vertx and running 9 verticles (this is based on my memory of the project). We had an IO verticle and then 8 verticles running the rules engine. The first issue was that we could not get much throughput. Each front-end request would look at the user id, hash it, and then pick one of the 8 verticles running the rules preference engine. In a tight loop, the rules preference engine could easily do 30K requests per second (generating a recommendation list), but in this scenario the whole system with 8 verticles could only do 15K to 30K requests. We used wrk and some custom Lua scripts to pound out the worst-case scenario. (A verticle in Vertx is like a module that has its own event loop.)
I thought about working on making the preference engine faster. But it seemed to me that the real problem, or rather the problem that was burning us the most, was the ability to get more requests into the preference engine at a time. Also, since we were constantly modifying the engine as the requirements changed, I felt optimizing it would be a waste when we were having a hard time using what we had, because it was hard to bridge the gap of time spent in message passing. We had tried several designs and the project was on a tight timeframe; we needed some quick wins or we were dead in the water.
I had read through the Mechanical Sympathy blog and related papers prior to this project as best I could, and I had already employed those ideas and techniques in production to great effect. I also had a few mentors in this space. This was not my first time at bat with these ideas. I had some previous success, but this project was the primary driver for QBit, so I talk about it here.
My idea for this project was simple. The workload, in my opinion, was hard to adapt to a disruptor (at least hard given my skill set at the time, and I am still not sure it made sense for this use case, but I leave the idea open for future experimentation). My idea was pretty simple: what if I sent a list of requests instead of a single request to the rules engine (which was running in-memory; we were using the Vertx event bus to issue requests)? Under peak load each machine could get thousands of requests per second (not just to generate the list of recommendations but to update the stats that influenced the generation of the recommendations). The application had to be fast, but we needed it to be flexible as well, as the requirements were changing all of the time (even days before launch). Later there was also some logic for the rules engine to say, hey, I am pretty busy, make the request lists bigger. This seemed to work well. We were allowing the back pressure to control the batch size.
We came up with what I called at the time the poor man's disruptor. It was a confluence of ideas from reading Akka in Action and the LMAX papers, working with Vertx, and just my own empirical observations. Now, after hearing a few talks on Apache Spark, I think the more correct term is micro-batching, but I did not know that at the time. I used micro-batching on another project before this one, in a component called the disk batcher, which was able to write user events to disk at a rate of 720 MB per second, but this was the first time I used the technique for a CPU-intensive application.
The short of it is this: it worked. It was not perfect, but we were able to improve the per-server throughput from 30K TPS to 150K TPS and later 200K TPS. Later we would employ the same sort of technique to send event data to be stored in the backend (data backup) and to stream users into memory as the load increased. We got these numbers while employing a full array of servers getting hit by a cloud load testing company.
A similar application was in production using 2,000 servers. Another application was in production at a competitor using 150 servers. Our final product could have run on six servers or fewer in production; we ran it on 13 for reliability. It could handle 10x to 100x the load of the other solutions. It was truly a high-speed, in-memory microservice. The actual load never reached what it could handle. It seemed like whenever we had a problem, micro-batching was the way to solve it at scale.
By the way, I promised I could run the whole service on three servers in production, so I missed that mark, but striving for a higher goal kept me focused. If I had twice the time to work on the project, maybe we could have, but there was a time-to-market aspect as well. It was not just that the code had to run fast. We had to get the project done fast, so the real trick was finding the right techniques to meet the goals.
The idea behind micro-batching is simple, at least in my definition of it: attempt to increase throughput by reducing thread hand-offs, by sending many messages at one time. If a certain period of time passes, send what you have. Under heavy load, only send what you have once you have reached the batch size or the timeout has been reached. You can also send what you have if the processing side tells you it is not busy, or you can create larger batches if the processing side (or your response monitoring) tells you that the processor is busy. If possible, under larger load, detect this state and create larger batches. Micro-batching makes the application slower under light load, but much, much faster under high load. The trick is how to balance and tweak the batch size and the batch timeout (and set up back-pressure signals). I don't think we fully solved this problem, but we created something that works and that we have employed to do things and handle load that surprises a lot of people. A minimal sketch of the idea is shown below.
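To make that concrete, here is a minimal sketch of the idea in plain Java. This is illustrative only, not QBit's actual implementation: a sender buffers messages and flushes either when the batch size is reached or when the flush timeout expires, whichever comes first.

Minimal micro-batching sketch (illustrative, not QBit's implementation)

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Minimal micro-batching sender: flush on batch size or timeout, whichever comes first.
 * Like QBit's per-thread SendQueue, one instance per producer thread (not thread-safe).
 */
public class MicroBatcher<T> {

    private final int batchSize;
    private final long flushIntervalMillis;
    private final Consumer<List<T>> downstream;   // e.g., a queue or event-bus send
    private final List<T> buffer = new ArrayList<>();
    private long lastFlush = System.currentTimeMillis();

    public MicroBatcher(int batchSize, long flushIntervalMillis, Consumer<List<T>> downstream) {
        this.batchSize = batchSize;
        this.flushIntervalMillis = flushIntervalMillis;
        this.downstream = downstream;
    }

    /** Called by the producer; hands off a whole batch only when it is worth the thread hand-off. */
    public void send(T message) {
        buffer.add(message);
        boolean batchFull = buffer.size() >= batchSize;
        boolean timedOut = System.currentTimeMillis() - lastFlush >= flushIntervalMillis;
        if (batchFull || timedOut) {
            flush();
        }
    }

    /** Force a flush, e.g., when the producer knows no more messages are coming for a while. */
    public void flush() {
        if (!buffer.isEmpty()) {
            downstream.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
        lastFlush = System.currentTimeMillis();
    }
}

Under heavy load the batch-size condition dominates (fewer hand-offs per message); under light load the timeout keeps latency bounded. A back-pressure signal from the consumer could be used to grow the batch size, as described above.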
One of the issues we had was that a lot of the micro-batching was hand-coded (by me). I had the same sort of idea repeated in six or seven areas of the application with slight modifications, all hand tuned. Then we started to create a reusable library that I called QBit. It had nothing to do with REST per se. (Although I also wrote a lot of code to handle REST and JSON parsing for the various calls that this service ended up supporting.)
I threw most of the original code away and started QBit from a clean slate. Early QBit code exists in a few projects, but QBit is on GitHub.
Using QBit on a recent project after this one, we did some initial load testing and we were handling 10x more load than we actually get in production, so we did not have a chance to tune it. I honestly think we could have increased the performance by another 10x (the app, not QBit), but there was no need (which was a bit of a disappointment for me). QBit was good enough out of the box.
The first application was fairly cool (pre-QBit). We used Vertx at scale. We implemented in-memory, CPU-intensive applications at scale. We employed micro-batching. The ideas from that project are in QBit. QBit could not exist without that experience, and it would not be where it is today without working on several other applications with QBit since then.
One issue we had in making the approaches more widespread was that other developers were not familiar with async programming. And there was no easy way to do things like define a REST endpoint (pre-QBit) and coordinate calls to other async services (in-memory and remote). It was still easier to write applications using traditional Spring MVC, and if the application did not need the throughput, writing it using Vertx and the hodgepodge of libs we wrote (pre-QBit) did not make sense. For a high-load application, it was worth it. For a smaller-load application, it was not.
QBit was the framework I started in order to make applications like the ones we built easier to produce. I find that small-load services, over time, often start getting a lot more load. I could write a whole article on this point. But basically, if you write a useful service and others (in the same company or elsewhere) start integrating it in new ways with new load demands that you never expected, then it is better to have a lot of headroom. QBit tries to make async, microservice development easier so you can even write your smaller services with it.

Detour: Microservices, reactive programming and 12-factor applications

What does QBit have to do with Microservices? When we came up with the design for the preference engine, people said it was a Microservice. This was a while ago.
I ignore terms like Microservice until someone tells me what they mean. It saves time. Then I go and study it. There are so many ideas out there to study, and if you chased them all down, there would be no time to write code. I learn by doing: reading and then doing some more. To me, the ideas behind Microservices really clicked.
Apparently, I had been on many projects that employed Microservice ideas before I knew what the term meant. I thought I was doing RESTful SOA, simple SOA, or whatever. Once I found out what Microservices meant, I was sure that this is what I wanted to encourage with QBit.

12 Factor Microservices

More recently people have got me interested in reactive streaming, reactive programming, and 12-factor microservice development. Again, many of these things we have been doing in one shape or form or another, but giving the ideas a name gives them power, as they become easier to communicate. For a while now, after working on some of these high-speed, cloud-deployed services, I have been wanting to add (and have added, in ad hoc ways) many of the concepts from these ideas into QBit and the applications that we have written with QBit and Vertx. I see that Vertx 3 added a lot of the same things that I added to QBit for microservice monitoring. The time for 12-factor microservice development has come. Vertx 3 did such a good job that I decided to make it so you could embed QBit inside of Vertx (again, as this was how it was originally) as well as use Vertx as a network lib.

Back to micro-batching

QBit supports micro-batching. It is built into QBit's core. One of the first sets of experiments I did with QBit was to try to find a decently fast implementation of micro-batching (although at the time I did not know it was called micro-batching; I was calling it a batching queue or the poor man's disruptor).

Code walk through

Let's do some perf testing.
The very core of QBit is the queue. Use these JVM flags to configure the garbage collector when starting the test.

Setup garbage collector

-Xms4g -Xmx4g -XX:+UseG1GC

Trade class

public class Trade {
    final String name;
    final long amount;

    public Trade(String name, long amount) {
        this.name = name;
        this.amount = amount;
    }

    public String getName() {
        return name;
    }

    public long getAmount() {
        return amount;
    }
}
We have a simple trade class. We will then send this through the queue at a rate of 100 million a second. Here is how we construct the queue.

Create a queue

final QueueBuilder queueBuilder = QueueBuilder
        .queueBuilder()
        .setName("trades")
        .setBatchSize(batchSize)
        .setSize(size)
        .setPollWait(pollWait);


final Queue<Trade> queue = queueBuilder.build();
The size is the size of the underlying java.util queue (if appropriate). The batch size is how many messages we send at a time, unless flush is called first. The poll wait is how long to wait after a poll returns null. A configuration with the concrete values used in the perf test below is shown next.
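For reference, here is the same builder configured with the concrete values used in the perf test later in this article; the comments spell out what each parameter controls (the poll-wait unit is assumed here to be milliseconds).

Queue configured with the perf-test values

// Concrete values from the perf test below.
final QueueBuilder tunedBuilder = QueueBuilder
        .queueBuilder()
        .setName("trades")
        .setBatchSize(1_000)      // micro-batch size: messages per thread hand-off
        .setSize(1_000_000)       // capacity of the underlying java.util queue
        .setPollWait(1_000);      // how long to wait after a poll returns null (assumed ms)

final Queue<Trade> tunedQueue = tunedBuilder.build();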
To listen to trades coming into the queue, we will use a simple mechanism.

Increment an atomic counter

final AtomicLong tradeCounter = new AtomicLong();

queue.startListener(item -> {

    tradeCounter.incrementAndGet();
});
We could make this more efficient.
The micro-batching is on the client side of the equation.

SendQueue

final SendQueue<Trade> tradeSendQueue = queue.sendQueue();
for (int c = 0; c < tradeCount; c++) {
    tradeSendQueue.send(new Trade("ibm", 100L));
}
tradeSendQueue.flushSends();
We created a helper method to run messages through the queue (over and over).

Method to run our perf test

private static void run(int runs, int tradeCount,
                        int batchSize, int checkEvery,
                        int numThreads, int pollWait,
                        int size) {

    final QueueBuilder queueBuilder = QueueBuilder
            .queueBuilder()
            .setName("trades")
            .setBatchSize(batchSize)
            .setSize(size)
            .setPollWait(pollWait);
You can specify the numThreads (how many threads), runs (how many runs), and the parameters we talked about before.
With this setup:

Perf test

public static void main(final String... args) throws Exception {

    final int runs = 76;
    final int tradeCount = 220_000;
    final int batchSize = 1_000;
    final int checkEvery = 0;
    final int numThreads = 6;
    final int pollWait = 1_000;
    final int size = 1_000_000;

    for (int index = 0; index < 100; index++) {
        run(runs, tradeCount, batchSize, checkEvery, numThreads, pollWait, size);
    }
}
With this, we were able to get this:
DONE traded 100,320,000 in 1001 ms 
batchSize = 1,000, checkEvery = 0, threads= 6

DONE traded 100,320,000 in 999 ms
batchSize = 1,000, checkEvery = 0, threads= 6

DONE traded 100,320,000 in 987 ms
batchSize = 1,000, checkEvery = 0, threads= 6
It takes ten or so runs for the GC etc. to tune itself.
We consistently get over 100M messages per second. It took a while to tweak the runs, tradeCount, etc. to get 100M messages per second. If we drop to 50,000, we can be a lot more flexible with the number of threads, trade count per thread, etc.
Here is the complete code listing to get 100M TPS.

Complete code listing for 100 M TPS test

package io.advantageous.qbit.example.perf;

import io.advantageous.boon.core.Sys;
import io.advantageous.qbit.queue.Queue;
import io.advantageous.qbit.queue.QueueBuilder;
import io.advantageous.qbit.queue.SendQueue;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class QueuePerfMain {

    public static class Trade {
        final String name;
        final long amount;

        public Trade(String name, long amount) {
            this.name = name;
            this.amount = amount;
        }

        public String getName() {
            return name;
        }

        public long getAmount() {
            return amount;
        }
    }

    public static void main(final String... args) throws Exception {

        final int runs = 76;
        final int tradeCount = 220_000;
        final int batchSize = 1_000;
        final int checkEvery = 0;
        final int numThreads = 6;
        final int pollWait = 1_000;
        final int size = 1_000_000;


        for (int index = 0; index < 10; index++) {
            run(2, 1000, 10, 3, 2, 100, size);
        }

        for (int index = 0; index < 100; index++) {
            run(runs, tradeCount, batchSize, checkEvery, numThreads, pollWait, size);
        }
    }


    private static void run(int runs, int tradeCount, int batchSize, int checkEvery, int numThreads, int pollWait, int size) {

        final QueueBuilder queueBuilder = QueueBuilder
                .queueBuilder()
                .setName("trades")
                .setBatchSize(batchSize)
                .setSize(size)
                .setPollWait(pollWait);


        final int totalTrades = tradeCount * runs * numThreads;

        if (checkEvery > 0) {
            queueBuilder.setLinkTransferQueue();
            queueBuilder.setCheckEvery(checkEvery);
            queueBuilder.setBatchSize(batchSize);
        }

        final Queue<Trade> queue = queueBuilder.build();
        final AtomicLong tradeCounter = new AtomicLong();

        queue.startListener(item -> {
            item.getAmount();
            item.getName();
            tradeCounter.incrementAndGet();
        });


        final long startRun = System.currentTimeMillis();

        for (int r = 0; r < runs; r++) {
            runThreads(tradeCount, numThreads, queue, tradeCounter);
        }
        System.out.printf("DONE traded %,d in %d ms \nbatchSize = %,d, checkEvery = %,d, threads= %,d \n\n",
                totalTrades,
                System.currentTimeMillis() - startRun,
                batchSize,
                checkEvery,
                numThreads);
        queue.stop();

    }

    private static void runThreads(int tradeCount, int numThreads, Queue<Trade> queue, AtomicLong tradeCounter) {
        final List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < numThreads; t++) {

            final Thread thread = new Thread(() -> {
                sendMessages(queue, tradeCount);
            });
            thread.start();
            threads.add(thread);
        }
        for (int index = 0; index < 100000; index++) {
            Sys.sleep(10);
            if (tradeCounter.get() >= (tradeCount * numThreads)) {
                break;
            }
        }
    }

    private static void sendMessages(final Queue<Trade> queue, final int tradeCount) {
        final SendQueue<Trade> tradeSendQueue = queue.sendQueue();
        for (int c = 0; c < tradeCount; c++) {
            tradeSendQueue.send(new Trade("ibm", 100L));
        }
        tradeSendQueue.flushSends();
    }


}

There are ways to optimize the test. I have similar tests running up to 200M TPS but the code is a lot harder to follow. This is fairly decent speed and the code is easy to follow and explain.
To compare micro-batching to not using micro-batching, I reduced the messages to 48 million instead of 100 million. I tried running no batching through 100 million and it seemed to hang for a long time. I am sure it would have finished eventually, but I am not that patient.
With no batching, it takes about 3,000 milliseconds to process 48 million messages, fairly consistently but with a really wide standard deviation. A batch size of 1,000 yields about 550 milliseconds with a very tight standard deviation. This becomes more pronounced as the service becomes more CPU intensive.
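For the no-batching baseline, the comparison amounts to running the same queue with micro-batching effectively turned off, so every send is its own thread hand-off. The article does not show that configuration explicitly; the sketch below assumes a batch size of 1 is how the unbatched run would be set up.

Queue configuration with micro-batching effectively disabled (assumed)

// Baseline configuration: every message is handed off individually
// instead of in batches of 1,000 (assumed setup for the unbatched comparison).
final QueueBuilder noBatchBuilder = QueueBuilder
        .queueBuilder()
        .setName("trades-no-batching")
        .setBatchSize(1)
        .setSize(1_000_000)
        .setPollWait(1_000);

final Queue<Trade> noBatchQueue = noBatchBuilder.build();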
More to come.