
QBit Microservices Lib 1.5.0.RELEASE

QBit now supports Reakt invokable promises for local and remote client proxies. 
This gives a nice fluent API for async programming.

Invokable promise

employeeService.lookupEmployee("123")
        .then((employee) -> {...})
        .catchError(...)
        .invoke();
QBit callbacks are now also Reakt Callbacks without breaking the QBit contract for Callbacks.
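For example, a service implementation can keep taking QBit's Callback while resolving or rejecting it Reakt-style. This is a minimal sketch modeled on the ServiceDiscoveryImpl shown later in this post; the employee lookup and its backing map are hypothetical:

public void lookupEmployee(final io.advantageous.qbit.reactive.Callback<Employee> callback,
                           final String id) {
    final Employee employee = employeeMap.get(id);     // hypothetical backing map
    if (employee == null) {
        callback.reject("employee not found: " + id);  // Reakt-style rejection
    } else {
        callback.resolve(employee);                    // Reakt-style resolution
    }
}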
A full write-up on QBit invokable promises is pending, but the curious can see the ReaktInterfacesTest (Service Queue, ServiceBundle) for more details, and the remote WebSocket Reakt interfaces for remote access proxies.
  • 683 Use Metrik for metrics system
  • 682 Support Reakt with WebSocket RPC proxies
  • 680 Support Invokable promises on Service Queues
  • 679 Testing for Invokable promises on proxies
  • 678 Fix health check logging
  • 676 Remote proxies support Reakt Callbacks and promises
  • 675 Local proxies support Reakt Invokable promises
  • 674 Local proxies support Reakt callbacks
  • 673 Remote proxies support callbacks
  • 672 Get rid of boilerplate code for Reactor, StatsCollector and Health Check

QBit Microservices Lib adds support for Reakt Reactive Java's Promises and Callbacks

QBit supports Reakt Callbacks and Promises (not to mention Reakt's reactor).
Continuing on with our Restful microservice example, we show how you can mix Reakt's promises with the QBit microservices lib. Reakt is the Reactive Java Library.
If you have read through the QBit microservice documentation, you know that you can call QBit services remotely or locally by using client proxies. Now QBit supports Reakt Callbacks and Promises in those client proxies.
Here is an example using our Todo example from before.

Using Reakt Promise in QBit proxy

package com.mammatustech.todo;

import io.advantageous.reakt.promise.Promise;

import java.util.List;

public interface TodoManager {
    Promise<Boolean> add(Todo todo);
    Promise<Boolean> remove(String id);
    Promise<List<Todo>> list();
}
Let's implement our earlier service using a manager class.

TodoManager

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.QueueCallback;
import io.advantageous.qbit.annotation.RequestParam;
import io.advantageous.qbit.reactive.Callback;
import io.advantageous.qbit.service.stats.StatsCollector;
import io.advantageous.reakt.reactor.Reactor;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Map;
import java.util.TreeMap;

import static io.advantageous.qbit.annotation.QueueCallbackType.*;

public class TodoManagerImpl {

    private final Map<String, Todo> todoMap = new TreeMap<>();

    /**
     * Used to manage callbacks and such.
     */
    private final Reactor reactor;

    /**
     * Stats Collector for KPIs.
     */
    private final StatsCollector statsCollector;

    public TodoManagerImpl(Reactor reactor, StatsCollector statsCollector) {
        this.reactor = reactor;
        this.statsCollector = statsCollector;

        /** Send stat count i.am.alive every three seconds. */
        this.reactor.addRepeatingTask(Duration.ofSeconds(3),
                () -> statsCollector.increment("todoservice.i.am.alive"));

        this.reactor.addRepeatingTask(Duration.ofSeconds(1), statsCollector::clientProxyFlush);
    }

    public void add(final Callback<Boolean> callback, final Todo todo) {

        /** Send KPI add.called every time the add method gets called. */
        statsCollector.increment("todoservice.add.called");
        todoMap.put(todo.getId(), todo);
        callback.accept(true);
    }

    public void remove(final Callback<Boolean> callback, final @RequestParam("id") String id) {

        /** Send KPI remove.called every time the remove method gets called. */
        statsCollector.increment("todoservice.remove.called");
        Todo remove = todoMap.remove(id);
        callback.accept(remove != null);
    }

    public void list(final Callback<ArrayList<Todo>> callback) {

        /** Send KPI list.called every time the list method gets called. */
        statsCollector.increment("todoservice.list.called");
        callback.accept(new ArrayList<>(todoMap.values()));
    }

    @QueueCallback({EMPTY, IDLE, LIMIT})
    public void process() {
        reactor.process();
    }
}
All of the service code from before, sans the RequestMappings, is in this class. We also left in some stats gathering from the StatsD Microservice Monitoring example. Notice that the proxy interface and the service methods do not have to match. In the service it is typical to use callbacks, but in the client proxies you can use callbacks or promises. Promises give a nice, fluent programming flow.
Testing an async lib can be difficult, but we can use a Reakt blocking promise to help test this. But before we do that, let's run this service in a service bundle as follows:

Running the TodoManager service in a service bundle.

    @Test
    public void testManager() throws Exception {

        /** Create the service bundle. */
        final ServiceBundleBuilder serviceBundleBuilder = serviceBundleBuilder();
        serviceBundleBuilder.getRequestQueueBuilder().setBatchSize(1); // for testing
        final ServiceBundle serviceBundle = serviceBundleBuilder.build();

        /** Create the implementation of our manager. */
        final TodoManagerImpl todoManagerImpl = new TodoManagerImpl(Reactor.reactor(), new StatsCollector() {
        });

        /** Add the implementation to the service bundle. */
        serviceBundle.addServiceObject("todo", todoManagerImpl);
        final TodoManager todoManager = serviceBundle.createLocalProxy(TodoManager.class, "todo");
        serviceBundle.start();
        ...
Now let's test async adding a Todo to the TodoManager.

Async adding a Todo to the TodoManager test using a blocking promise

        /** Add a Todo. */
        final Promise<Boolean> addPromise = blockingPromise();
        todoManager.add(new Todo("Buy Tesla", "Buy new Tesla", System.currentTimeMillis()))
                .catchError(Throwable::printStackTrace).invokeWithPromise(addPromise);
        assertTrue(addPromise.get());
Notice we use a blocking promise in the test. In the app we would use a replay promise or a callback promise with then handlers and catchError handlers.
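For contrast, here is a minimal sketch of what the same call might look like in application code, using then and catchError handlers instead of blocking; the handler bodies are just placeholders:

todoManager.add(new Todo("Buy Tesla", "Buy new Tesla", System.currentTimeMillis()))
        .then(added -> System.out.println("Todo added: " + added))  // placeholder success handler
        .catchError(Throwable::printStackTrace)                     // placeholder error handler
        .invoke();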
Here is the full test example using blocking promises.

Example of using a Promise based client proxy and blocking promises

package com.mammatustech.todo;

import io.advantageous.qbit.service.ServiceBundle;
import io.advantageous.qbit.service.ServiceBundleBuilder;
import io.advantageous.qbit.service.stats.StatsCollector;
import io.advantageous.reakt.promise.Promise;
import io.advantageous.reakt.reactor.Reactor;
import org.junit.Test;

import java.util.List;

import static io.advantageous.qbit.service.ServiceBundleBuilder.*;
import static io.advantageous.reakt.promise.Promises.*;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class TodoManagerTest {

    @Test
    public void testManager() throws Exception {

        /** Create the service bundle. */
        final ServiceBundleBuilder serviceBundleBuilder = serviceBundleBuilder();
        serviceBundleBuilder.getRequestQueueBuilder().setBatchSize(1);
        final ServiceBundle serviceBundle = serviceBundleBuilder.build();

        /** Create the implementation. */
        final TodoManagerImpl todoManagerImpl = new TodoManagerImpl(Reactor.reactor(), new StatsCollector() {
        });

        /** Add the implementation to the service bundle. */
        serviceBundle.addServiceObject("todo", todoManagerImpl);
        final TodoManager todoManager = serviceBundle.createLocalProxy(TodoManager.class, "todo");
        serviceBundle.start();

        /** Add a Todo. */
        final Promise<Boolean> addPromise = blockingPromise();
        todoManager.add(new Todo("Buy Tesla", "Buy new Tesla", System.currentTimeMillis()))
                .catchError(Throwable::printStackTrace).invokeWithPromise(addPromise);
        assertTrue(addPromise.get());

        /** Call list to get a list of Todos. */
        final Promise<List<Todo>> listPromise = blockingPromise();
        todoManager.list().invokeWithPromise(listPromise);
        final List<Todo> todos = listPromise.get();
        assertEquals(1, todos.size());
        assertEquals("Buy Tesla", todos.get(0).getName());

        /** Get the id of the Todo so we can remove it. */
        final String id = todos.get(0).getId();

        /** Remove the todo with the todo id. */
        final Promise<Boolean> removePromise = blockingPromise();
        todoManager.remove(id).invokeWithPromise(removePromise);
        assertTrue(removePromise.get());

        /** See if the todo was removed. */
        final Promise<List<Todo>> listPromise2 = blockingPromise();
        todoManager.list().invokeWithPromise(listPromise2);
        final List<Todo> todos2 = listPromise2.get();
        assertEquals(0, todos2.size());
    }
}
QBit generates client stubs whose methods either take Callback arguments or return invokable Promises. Invokable promises allow you to write fluent, lambda-friendly code.

Example of fluent, lambda friendly invokable promise code

employeeService.lookupEmployee("123")
        .then((employee) -> {...})
        .catchError(...)
        .invoke();
We changed our TodoService to use a TodoManager so we could show using invokable promises in the wild.

TodoService uses TodoManager

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.*;
import io.advantageous.qbit.reactive.Callback;

import java.util.*;

import static io.advantageous.qbit.annotation.QueueCallbackType.EMPTY;
import static io.advantageous.qbit.annotation.QueueCallbackType.IDLE;
import static io.advantageous.qbit.annotation.QueueCallbackType.LIMIT;
import static io.advantageous.qbit.service.ServiceProxyUtils.flushServiceProxy;


/**
 * <code>
 * curl -X POST -H "Content-Type: application/json" \
 * http://localhost:8888/v1/todo-service/todo \
 * -d '{"id":"id1", "name":"buy tesla", "description":"daddy wants"}'
 * </code>
 *
 * <code>
 * curl http://localhost:8888/v1/todo-service/todo
 * </code>
 */
@RequestMapping("/todo-service")
public class TodoService {

    private final TodoManager todoManager;

    public TodoService(final TodoManager todoManager) {
        this.todoManager = todoManager;
    }

    @RequestMapping(value = "/todo", method = RequestMethod.POST)
    public void add(final Callback<Boolean> callback,
                    final Todo todo) {
        todoManager.add(todo)
                .catchError(callback::reject)
                .then(callback::resolve)
                .invoke();
    }

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback,
                       final @RequestParam("id") String id) {
        todoManager.remove(id)
                .catchError(callback::reject)
                .then(callback::resolve)
                .invoke();
    }

    @RequestMapping(value = "/todo", method = RequestMethod.GET)
    public void list(final Callback<List<Todo>> callback) {
        todoManager.list()
                .catchError(callback::reject)
                .then(callback::resolve)
                .invoke();
    }

    @QueueCallback({EMPTY, IDLE, LIMIT})
    public void process() {
        flushServiceProxy(todoManager);
    }
}
For completeness here is our main method which still has the StatsD code from the last example.

TodoServiceMain

package com.mammatustech.todo;


import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.service.ServiceBundle;
import io.advantageous.qbit.service.stats.StatsCollector;
import io.advantageous.reakt.reactor.Reactor;

import java.net.URI;
import java.util.Objects;

public class TodoServiceMain {

    public static void main(final String... args) throws Exception {

        // To test locally use https://hub.docker.com/r/samuelebistoletti/docker-statsd-influxdb-grafana/
        final URI statsdURI = URI.create("udp://192.168.99.100:8125");

        // For the repeating timer tasks.
        final Reactor reactor = Reactor.reactor();

        /* Create the ManagedServiceBuilder which manages a clean shutdown, health, stats, etc. */
        final ManagedServiceBuilder managedServiceBuilder =
                ManagedServiceBuilder.managedServiceBuilder()
                        .setRootURI("/v1")  // Defaults to services
                        .setPort(8888);     // Defaults to 8080 or environment variable PORT

        /** Enable statsD. */
        enableStatsD(managedServiceBuilder, statsdURI);
        final StatsCollector statsCollector = managedServiceBuilder.createStatsCollector();

        /** Create the todo impl. */
        final TodoManagerImpl impl = new TodoManagerImpl(reactor, statsCollector);

        /** Create a service bundle for the internal todo manager. */
        final ServiceBundle serviceBundle = managedServiceBuilder.createServiceBundleBuilder().build();
        serviceBundle.addServiceObject("todoManager", impl).startServiceBundle();

        /** Create the TodoManager proxy. */
        final TodoManager todoManager = serviceBundle.createLocalProxy(TodoManager.class, "todoManager");

        /** Start the REST/WebSocket service. */
        managedServiceBuilder.addEndpointService(new TodoService(todoManager)).getEndpointServerBuilder()
                .build().startServer();

        /* Start the admin builder which exposes health end-points and swagger meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Todo Server and Admin Server started");
    }

    /**
     * Enable StatsD.
     *
     * @param managedServiceBuilder managed service builder
     * @param host statsD host
     * @param port statsD port
     */
    public static void enableStatsD(ManagedServiceBuilder managedServiceBuilder, String host, int port) {
        if (port < 1) throw new IllegalStateException("StatsD port must be set");
        Objects.requireNonNull(host, "StatsD Host cannot be null");
        if (host.isEmpty()) throw new IllegalStateException("StatsD Host name must not be empty");
        managedServiceBuilder.getStatsDReplicatorBuilder().setHost(host).setPort(port);
        managedServiceBuilder.setEnableStatsD(true);
    }

    /**
     * Enable StatsD.
     *
     * @param managedServiceBuilder managed service builder
     * @param uri statsD URI
     */
    public static void enableStatsD(ManagedServiceBuilder managedServiceBuilder, URI uri) {
        if (!uri.getScheme().equals("udp")) throw new IllegalStateException("Scheme must be udp");
        enableStatsD(managedServiceBuilder, uri.getHost(), uri.getPort());
    }
}
The main method creates a TodoManagerImpl and a TodoService and wires them together.
In this example, we showed using Promises in the local client proxy with a service bundle, but you could also use them from a remote proxy exposed via WebSocket.
In the next example we implement this example interface.

Example interface

interface ServiceDiscovery {
    Promise<URI> lookupService(URI uri);
}
We implement the above example interface as a remote WebSocket RPC proxy, a local service bundle proxy, and local service queue proxies, using strongly typed and loosely typed endpoints.

Example showing Promises being used in local and remote proxies

package io.advantageous.qbit.vertx;


import io.advantageous.boon.core.Sys;
import io.advantageous.qbit.client.Client;
import io.advantageous.qbit.client.ClientBuilder;
import io.advantageous.qbit.server.EndpointServerBuilder;
import io.advantageous.qbit.server.ServiceEndpointServer;
import io.advantageous.qbit.service.ServiceBuilder;
import io.advantageous.qbit.service.ServiceBundle;
import io.advantageous.qbit.service.ServiceBundleBuilder;
import io.advantageous.qbit.service.ServiceQueue;
import io.advantageous.qbit.time.Duration;
import io.advantageous.qbit.util.PortUtils;
import io.advantageous.reakt.promise.Promise;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import static org.junit.Assert.*;

public class ReaktInterfaceTest {

    final URI successResult = URI.create("http://localhost:8080/employeeService/");

    ServiceDiscovery serviceDiscovery;
    ServiceDiscovery serviceDiscoveryStrongTyped;
    ServiceDiscovery serviceDiscoveryServiceBundle;
    ServiceDiscovery serviceDiscoveryWebSocket;

    ServiceDiscoveryImpl impl;
    URI empURI;
    CountDownLatch latch;
    AtomicReference<URI> returnValue;
    AtomicReference<Throwable> errorRef;

    int port;
    Client client;
    ServiceEndpointServer server;
    ServiceBundle serviceBundle;
    ServiceQueue serviceQueue;
    ServiceQueue serviceQueue2;

    @Before
    public void before() {

        port = PortUtils.findOpenPortStartAt(9000);

        latch = new CountDownLatch(1);
        returnValue = new AtomicReference<>();
        errorRef = new AtomicReference<>();
        impl = new ServiceDiscoveryImpl();
        empURI = URI.create("marathon://default/employeeService?env=staging");

        server = EndpointServerBuilder.endpointServerBuilder()
                .addService("/myservice", impl)
                .setPort(port).build().startServer();

        Sys.sleep(200);

        client = ClientBuilder.clientBuilder().setPort(port).build().startClient();

        serviceQueue = ServiceBuilder.serviceBuilder().setServiceObject(impl).buildAndStartAll();
        serviceBundle = ServiceBundleBuilder.serviceBundleBuilder().build();
        serviceBundle.addServiceObject("myservice", impl);
        serviceQueue2 = ServiceBuilder.serviceBuilder().setInvokeDynamic(false).setServiceObject(impl)
                .buildAndStartAll();

        serviceDiscoveryServiceBundle = serviceBundle.createLocalProxy(ServiceDiscovery.class, "myservice");
        serviceBundle.start();

        serviceDiscovery = serviceQueue.createProxyWithAutoFlush(ServiceDiscovery.class, Duration.TEN_MILLIS);
        serviceDiscoveryStrongTyped = serviceQueue2.createProxyWithAutoFlush(ServiceDiscovery.class,
                Duration.TEN_MILLIS);

        serviceDiscoveryWebSocket = client.createProxy(ServiceDiscovery.class, "/myservice");
    }

    @After
    public void after() {
        serviceQueue2.stop();
        serviceQueue.stop();
        serviceBundle.stop();
        server.stop();
        client.stop();
    }

    public void await() {
        try {
            latch.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    @Test
    public void testServiceWithReturnPromiseSuccess() {
        testSuccess(serviceDiscovery);
        testSuccess(serviceDiscoveryStrongTyped);
        testSuccess(serviceDiscoveryServiceBundle);
        testSuccess(serviceDiscoveryWebSocket);
    }

    private void testSuccess(ServiceDiscovery serviceDiscovery) {
        serviceDiscovery.lookupService(empURI).then(this::handleSuccess)
                .catchError(this::handleError).invoke();
        await();
        assertNotNull("We have a return", returnValue.get());
        assertNull("There were no errors", errorRef.get());
        assertEquals("The result is the expected result", successResult, returnValue.get());
    }

    @Test
    public void testServiceWithReturnPromiseFail() {
        testFail(serviceDiscovery);
        testFail(serviceDiscoveryStrongTyped);
        testFail(serviceDiscoveryServiceBundle);
        testFail(serviceDiscoveryWebSocket);
    }

    private void testFail(ServiceDiscovery serviceDiscovery) {
        serviceDiscovery.lookupService(null).then(this::handleSuccess)
                .catchError(this::handleError).invoke();

        await();
        assertNull("We do not have a return", returnValue.get());
        assertNotNull("There were errors", errorRef.get());
    }

    @Test(expected = IllegalStateException.class)
    public void testServiceWithReturnPromiseSuccessInvokeTwice() {
        final Promise<URI> promise = serviceDiscovery.lookupService(empURI).then(this::handleSuccess)
                .catchError(this::handleError);
        promise.invoke();
        promise.invoke();
    }

    @Test
    public void testIsInvokable() {
        final Promise<URI> promise = serviceDiscovery.lookupService(empURI).then(this::handleSuccess)
                .catchError(this::handleError);

        assertTrue("Is this an invokable promise", promise.isInvokable());
    }

    private void handleError(Throwable error) {
        errorRef.set(error);
        latch.countDown();
    }

    private void handleSuccess(URI uri) {
        returnValue.set(uri);
        latch.countDown();
    }

    interface ServiceDiscovery {
        Promise<URI> lookupService(URI uri);
    }

    public class ServiceDiscoveryImpl {
        public void lookupService(final io.advantageous.qbit.reactive.Callback<URI> callback, final URI uri) {
            if (uri == null) {
                callback.reject("uri can't be null");
            } else {
                callback.resolve(successResult);
            }
        }
    }
}

Konf - Typed Java Config System


Konf - Typed Java Config System

Java configuration library similar in concept to TypeSafe config, but uses full JavaScript, YAML or JSON for configuration.
(See the Konf website - Java typed config system for JSON, YAML and JavaScript based Java config - for more details.)
Uses JavaScript/JSON/YAML as config for Java.
You can use full JavaScript for configuration as long as you define a variable called config that results in a JavaScript object which equates to a Java map.

Using Konf on your project

Konf is in the public maven repo.

Using konf from maven

<dependency>
<groupId>io.advantageous.konf</groupId>
<artifactId>konf</artifactId>
<version>1.0.0.RELEASE</version>
</dependency>

Using konf from gradle

compile 'io.advantageous.konf:konf:1.0.0.RELEASE'

Using konf from scala sbt

libraryDependencies += "io.advantageous.konf" % "konf" % "1.0.0.RC1"

Using konf from clojure leiningen

[io.advantageous.konf/konf "1.0.0.RC1"]
Here is an example config for JavaScript.
Konf expects the config variable to be set to a JavaScript object with properties.

JavaScript based configuration for Java

var config = {

myUri:uri("http://host:9000/path?foo=bar"),

someKey: {
nestedKey:234,
other:"this text"
}

};
The interface for Konf is Config. You can get a sub Config from a Config (getConfig(path)). The path is always in dot notation (this.that.foo.bar). You can also use:
  • getInt(path)
  • getLong(path)
  • getDouble(path)
  • getString(path)
  • getStringList(path) gets a list of strings
  • getConfig(path) gets a sub-config.
  • getMap(path) gets a map which is a sub-config.
  • getConfigList(path) gets a list of configs at the location specified.
getMap works with JavaScript objects. getStringList and getConfigList work with a JavaScript array of strings and a JavaScript array of JavaScript objects, respectively.
Note that you get an exception if the path requested is not found. Use hasPath(path) if you think the config path might be missing.
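For example, a minimal sketch of guarding a lookup with hasPath, using the configInner.int2 key from the sample config below; the default value is made up:

int int2 = config.hasPath("configInner.int2")
        ? config.getInt("configInner.int2")
        : 42;  // hypothetical default when the path is missing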
Here is the full interface.

Config interface

public interface Config {

    /** Get string at location. */
    String getString(String path);

    /** Checks to see if config has the path specified. */
    boolean hasPath(String path);

    /** Get int at location. */
    int getInt(String path);

    /** Get float at location. */
    float getFloat(String path);

    /** Get double at location. */
    double getDouble(String path);

    /** Get long at location. */
    long getLong(String path);

    /** Get list of strings at location. */
    List<String> getStringList(String path);

    /** Get map at location. */
    Map<String, Object> getMap(String path);

    /** Get a sub-config at location. */
    Config getConfig(String path);

    /** Get list of sub-configs at location. */
    List<Config> getConfigList(String path);

    /** Get a single POJO out of config at path. */
    <T> T get(String path, Class<T> type);

    /** Get a list of POJOs. */
    <T> List<T> getList(String path, Class<T> componentType);
}
The getX methods work like you would expect. Given this config file.

Sample config for testing and showing how config works

var config = {

myUri:uri("http://host:9000/path?foo=bar"),

someKey: {
nestedKey:234,
other:"this text"
},

int1:1,
float1:1.0,
double1:1.0,
long1:1,
string1:"rick",
stringList: ['Foo', 'Bar'],
configInner: {
int2:2,
float2:2.0
},
uri:uri("http://localhost:8080/foo"),
myClass:"java.lang.Object",
myURI:"http://localhost:8080/foo",
employee: {"id":123, "name":"Geoff"},
employees: [
{id:123, "name":"Geoff"},
{id:456, "name":"Rick"},
{id:789, 'name':"Paul"}
]
};
We can do the following operations.
First we load the config.

Loading the config.

private Config config;

@Before
public void setUp() throws Exception {
    config = ConfigLoader.load("test-config.js");
}
Then we show reading basic types with the config object using getX.

Reading basic types from config

    @Test
    public void testSimple() throws Exception {

        // getInt
        assertEquals(1, config.getInt("int1"));

        // getStringList
        assertEquals(asList("Foo", "Bar"),
                config.getStringList("stringList"));

        // getString
        assertEquals("rick", config.getString("string1"));

        // getDouble
        assertEquals(1.0, config.getDouble("double1"), 0.001);

        // getLong
        assertEquals(1L, config.getLong("long1"));

        // getFloat
        assertEquals(1.0f, config.getFloat("float1"), 0.001);

        // Basic JDK value types are supported, like Class.
        assertEquals(Object.class, config.get("myClass", Class.class));

        // Basic JDK value types are supported, like URI.
        assertEquals(URI.create("http://localhost:8080/foo"),
                config.get("myURI", URI.class));

        assertEquals(URI.create("http://localhost:8080/foo"),
                config.get("uri", URI.class));
    }
You can work with nested properties as well.

Reading a nested config from the config

    @Test
    public void testGetConfig() throws Exception {
        // Read nested config.
        final Config configInner = config.getConfig("configInner");
        assertEquals(2, configInner.getInt("int2"));
        assertEquals(2.0f, configInner.getFloat("float2"), 0.001);
    }

    @Test
    public void testGetMap() throws Exception {
        // Read nested config as a Java map.
        final Map<String, Object> map = config.getMap("configInner");
        assertEquals(2, (int) map.get("int2"));
        assertEquals(2.0f, (double) map.get("float2"), 0.001);
    }
You can read deeply nested config items as well by specifying the property path using dot notation.

Reading nested properties with dot notation from config

    @Test
    public void testSimplePath() throws Exception {

        assertTrue(config.hasPath("configInner.int2"));
        assertFalse(config.hasPath("configInner.foo.bar"));
        assertEquals(2, config.getInt("configInner.int2"));
        assertEquals(2.0f, config.getFloat("configInner.float2"), 0.001);
    }
You can also read POJOs directly out of the config file.

Reading a pojo directly out of the config file

    @Test
    public void testReadClass() throws Exception {
        final Employee employee = config.get("employee", Employee.class);
        assertEquals("Geoff", employee.name);
        assertEquals("123", employee.id);
    }
You can read a list of POJOs at once.

Reading a pojo list directly out of the config file

    @Test
    public void testReadListOfClass() throws Exception {
        final List<Employee> employees = config.getList("employees", Employee.class);
        assertEquals("Geoff", employees.get(0).name);
        assertEquals("123", employees.get(0).id);
    }
You can also read a list of config objects out of the config as well.

Reading a config list directly out of the config file

    @Test
    public void testReadListOfConfig() throws Exception {
        final List<Config> employees = config.getConfigList("employees");
        assertEquals("Geoff", employees.get(0).getString("name"));
        assertEquals("123", employees.get(0).getString("id"));
    }
Using Config with YAML

First include a YAML to object parser like YAML Beans or a library like this.

Example YAML

name: Nathan Sweet
age: 28
address: 4011 16th Ave S
phone numbers:
    - name: Home
      number: 206-555-5138
    - name: Work
      number: 425-555-2306

Using Konf with YAML

// Use YamlReader to load the YAML file.
YamlReader reader = new YamlReader(new FileReader("contact.yml"));

// Convert the object read from YAML into a Konf config.
Config config = ConfigLoader.loadFromObject(reader.read());

// Now you have strongly typed access to fields.
String address = config.getString("address");
You can also read Pojos from anywhere in the YAML file as well as sub configs.
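For instance, here is a minimal sketch of reading the phone number entries from the YAML above as POJOs; the PhoneNumber class is hypothetical, and it assumes the top-level "phone numbers" key can be addressed directly:

// Hypothetical POJO whose fields match the YAML entries above.
public class PhoneNumber {
    public String name;
    public String number;
}

// Read the list of phone number POJOs from the YAML-backed config.
List<PhoneNumber> numbers = config.getList("phone numbers", PhoneNumber.class);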

You can also use Konf with JSON using Boon

See Boon JSON parser project, and Boon in five minutes

Using Konf with JSON

ObjectMapper mapper = JsonFactory.create();


/* Convert the object read from JSON into a Konf config.
   'src' can be a File, InputStream, Reader, or String. */
Config config = ConfigLoader.loadFromObject(mapper.fromJson(src));


// Now you have strongly typed access to fields.
String address = config.getString("address");
Boon supports LAX JSON (JSON with comments, and you do not need to quote field names).
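For example, a minimal sketch of feeding a LAX JSON string (a comment and unquoted field names) through Boon into Konf, assuming the default mapper accepts LAX input as stated above; the host and port fields are made up:

ObjectMapper mapper = JsonFactory.create();

// LAX JSON: a comment and unquoted field names.
String src = "{ /* server settings */ host: \"localhost\", port: 8080 }";

Config config = ConfigLoader.loadFromObject(mapper.fromJson(src));

int port = config.getInt("port");  // 8080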
If you like our configuration project, please try our Reactive Java project or our Actor based microservices lib.

Konf Java configuration with JSON, YAML and JavaScript version 1.1.0.RELEASE



Added getDuration(path), getIntList(path), and the rest of the number list getters (double, float, long).

Working with java.time.Duration

  • getDuration(path) get a duration
  • getDurationList(path) get a duration list
Konf supports "10 seconds" style config for durations, as well as built-in functions and ISO-8601 support. See the duration config documentation for more details.
Konf can read lists of numbers.
  • getIntList reads list of ints
  • getLongList reads list of longs
  • getDoubleList reads list of doubles
  • getFloatList reads list of floats
See the number list configuration documentation for more details.

Working with Durations



Konf, the type safe Java configuration library, has a way to configure java.time.Duration. You use the getDuration(path) and getDurationList(path) methods to read a duration from a location.

Different way to configure duration

var config = {
tenSeconds:seconds(10),
tenDays:days(10),
tenMinutes:minutes(10),
tenHours:hours(10),
tenMillis:millis(10),
tenMilliseconds:milliseconds(10),
fifteenMinutes:"PT15M",
tenSeconds2:"10 seconds",
tenMinutes2:"10m",
tenHours2:"10 h",
tenDays2:"10 day",
tenMillis2:"10ms"
};

TypeSafe Config style

Durations can be specified as a number plus a time unit (TypeSafe Config style).
You can postfix the value in a string with any of these units.

Typesafe config style postfixes

ns, nano, nanos, nanosecond, nanoseconds
us, micro, micros, microsecond, microseconds
ms, milli, millis, millisecond, milliseconds
s, second, seconds
m, minute, minutes
h, hour, hours
d, day, days

ISO-8601 Duration

Konf also supports the ISO-8601 duration format, e.g., PT15M.

Built-in functions for Duration

There are also built-in functions for durations: seconds(5), minutes(5), etc.

Example usage

First we load the config.

Load the config

private Config config;

@Before
public void setUp() throws Exception {
    config = ConfigLoader.load("test-config.js");
}
Then we use getDuration(path) to read the Duration values.
        assertEquals(Duration.ofMillis(10), 
config.getDuration("tenMillis"));

assertEquals(Duration.ofMillis(10),
config.getDuration("tenMilliseconds"));

assertEquals(Duration.ofSeconds(10), config.getDuration("tenSeconds"));
assertEquals(Duration.ofMinutes(10), config.getDuration("tenMinutes"));
assertEquals(Duration.ofHours(10), config.getDuration("tenHours"));
assertEquals(Duration.ofDays(10), config.getDuration("tenDays"));

assertEquals(Duration.ofMinutes(15),
config.getDuration("fifteenMinutes"));

assertEquals(Duration.ofSeconds(10), config.getDuration("tenSeconds2"));
assertEquals(Duration.ofMinutes(10), config.getDuration("tenMinutes2"));
assertEquals(Duration.ofHours(10), config.getDuration("tenHours2"));
assertEquals(Duration.ofDays(10), config.getDuration("tenDays2"));
assertEquals(Duration.ofMillis(10), config.getDuration("tenMillis2"));
assertEquals(Duration.ofMillis(10), config.getDuration("tenMilliseconds2"));

Working with lists of numbers


Konf reads lists of numbers.
  • getIntList reads list of ints
  • getLongList reads list of longs
  • getDoubleList reads list of doubles
  • getFloatList reads list of floats
Given this sample configuration file.
test-config.js
var config = {

floats: [1.0, 2.0, 3.0],
doubles: [1.0, 2.0, 3.0],
longs: [1.0, 2.0, 3.0],
ints: [1, 2, 3],

intsNull: [1, null, 3],
intsWrongType: [1, "2", 3]
}
First we load the config file (must be on the class path).

Load config file

private Config config;

@Before
public void setUp() throws Exception {
    config = ConfigLoader.load("test-config.js");
}
...
Now we can use getDoubleList et al to read values from the config as follows.

Read config items

    @Test
    public void testNumberList() throws Exception {
        assertEquals(asList(1.0, 2.0, 3.0),
                config.getDoubleList("doubles"));

        assertEquals(asList(1.0f, 2.0f, 3.0f),
                config.getFloatList("floats"));

        assertEquals(asList(1, 2, 3),
                config.getIntList("ints"));

        assertEquals(asList(1L, 2L, 3L),
                config.getLongList("longs"));
    }
The list of numbers must not contain nulls.

Nulls do not work

    @Test(expected = IllegalArgumentException.class)
    public void listHasNull() {
        assertEquals(asList(1, 2, 3), config.getIntList("intsNull"));
    }
The list of numbers must contain valid numbers.

Wrong types do not work

    @Test(expected = IllegalArgumentException.class)
    public void wrongTypeInList() {
        assertEquals(asList(1, 2, 3), config.getIntList("intsWrongType"));
    }




QBit Resourceful RESTful Microservices

You can build resourceful REST URIs using @RequestMapping from QBit microservices lib.
Typically you use HTTP GET to get a list of something, and if you are returning more than one item, then the URI path should end with / as follows:

@RequestMapping("/department/")

    @RequestMapping("/department/")
    public List<Department> getDepartments() {
To add to a list, you would use a PUT or a POST. PUT is generally used for updates, and POST is used to create. If you were editing an object at a given ID you would use a PUT, but if you were adding a new item to a list, you could use a PUT to update the list or a POST to create an item. Dealer's choice.

@RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)

    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)
    public boolean addDepartment(@PathVariable("departmentId") Integer departmentId,
                                 final Department department) {
You could easily get into a long, drawn-out argument about whether to use PUT or POST in the above scenario, because you are updating the list (adding an item to it), but you are creating a department. Just remember POST for create, and PUT for update.
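For example, here is a minimal sketch of what an update endpoint could look like with PUT; this method is not part of the example service, and it assumes the same departmentMap used by the full HRService shown further below:

    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.PUT)
    public boolean updateDepartment(@PathVariable("departmentId") Integer departmentId,
                                    final Department department) {
        // PUT for update: replace the department stored at an existing ID.
        departmentMap.put(departmentId, department);
        return true;
    }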
To get a single employee, you could use a path param. Some say that path params are nice for people and search engines; I say OK, dude.

Using a path param @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}", method = RequestMethod.GET)

    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}", method = RequestMethod.GET)
    public Employee getEmployee(@PathVariable("departmentId") Integer departmentId,
                                @PathVariable("employeeId") Long employeeId) {
HTTP GET is the default, but you can specify it as we did above.
There are other annotations that work the same way, except the HTTP method is in the name of the annotation.
  • @POST
  • @PUT
  • @GET
You could rewrite the last examples like this.

Using the @POST, @PUT, @GET

    @GET("/department/")
    public List<Department> getDepartments() {

    ...

    @POST(value = "/department/{departmentId}/")
    public boolean addDepartment(@PathVariable("departmentId") Integer departmentId,
                                 final Department department) {

    ...

    @GET(value = "/department/{departmentId}/employee/{employeeId}")
    public Employee getEmployee(@PathVariable("departmentId") Integer departmentId,
                                @PathVariable("employeeId") Long employeeId) {
Using the HTTP method name as the annotation name makes things a bit more compact and a bit easier to read.
Here are some more Resourceful REST examples in QBit to ponder.

Resource based RESTful API for Microservices

package com.mammatustech.hr;



import io.advantageous.qbit.annotation.*;

import java.util.*;

@RequestMapping("/hr")
public class HRService {

    final Map<Integer, Department> departmentMap = new HashMap<>();

    @RequestMapping("/department/")
    public List<Department> getDepartments() {
        return new ArrayList<>(departmentMap.values());
    }

    @RequestMapping(value = "/department/{departmentId}/", method = RequestMethod.POST)
    public boolean addDepartment(@PathVariable("departmentId") Integer departmentId,
                                 final Department department) {

        departmentMap.put(departmentId, department);
        return true;
    }

    @RequestMapping(value = "/department/{departmentId}/employee/", method = RequestMethod.POST)
    public boolean addEmployee(@PathVariable("departmentId") Integer departmentId,
                               final Employee employee) {

        final Department department = departmentMap.get(departmentId);

        if (department == null) {
            throw new IllegalArgumentException("Department " + departmentId + " does not exist");
        }

        department.addEmployee(employee);
        return true;
    }

    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}", method = RequestMethod.GET)
    public Employee getEmployee(@PathVariable("departmentId") Integer departmentId,
                                @PathVariable("employeeId") Long employeeId) {

        final Department department = departmentMap.get(departmentId);

        if (department == null) {
            throw new IllegalArgumentException("Department " + departmentId + " does not exist");
        }

        Optional<Employee> employee = department.getEmployeeList().stream().filter(
                employee1 -> employee1.getId() == employeeId).findFirst();

        if (employee.isPresent()) {
            return employee.get();
        } else {
            throw new IllegalArgumentException("Employee with id " + employeeId + " not found");
        }
    }

    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}/phoneNumber/",
            method = RequestMethod.POST)
    public boolean addPhoneNumber(@PathVariable("departmentId") Integer departmentId,
                                  @PathVariable("employeeId") Long employeeId,
                                  PhoneNumber phoneNumber) {

        Employee employee = getEmployee(departmentId, employeeId);
        employee.addPhoneNumber(phoneNumber);
        return true;
    }

    @RequestMapping(value = "/department/{departmentId}/employee/{employeeId}/phoneNumber/")
    public List<PhoneNumber> getPhoneNumbers(@PathVariable("departmentId") Integer departmentId,
                                             @PathVariable("employeeId") Long employeeId) {

        Employee employee = getEmployee(departmentId, employeeId);
        return employee.getPhoneNumbers();
    }

    @RequestMapping(value = "/kitchen/{departmentId}/employee/phoneNumber/kitchen/",
            method = RequestMethod.POST)
    public boolean addPhoneNumberKitchenSink(@PathVariable("departmentId") Integer departmentId,
                                             @RequestParam("employeeId") Long employeeId,
                                             @HeaderParam("X-PASS-CODE") String passCode,
                                             PhoneNumber phoneNumber) {

        if ("passcode".equals(passCode)) {
            Employee employee = getEmployee(departmentId, employeeId);
            employee.addPhoneNumber(phoneNumber);
            return true;
        } else {
            return false;
        }
    }
}

Creating your own custom DSL for config files with Konf (example looking up Mesos Ports or Docker Ports)

Konf is a Java configuration system. You can use it to easily create your own config DSLs.
At times it is helpful to add some configuration logic to a config file, as long as you keep that logic small and limited to configuration concerns; it should not be complicated.
The config logic forms a DSL for your configuration. If you do not need this, then use JSON or YAML for your config. We find it very useful when working with EC2, Jenkins, local dev boxes, Nomad, Heroku and Mesosphere.
Here is an example that figures out port numbers running in Mesosphere.

mycompany-config-utils.js example config javascript

var createLogger = Java.type("org.slf4j.LoggerFactory").getLogger;

var log = createLogger("config.log");

function mesosPortAt(index, defaultPort) {
    var fromMesos = env("PORT" + index);
    var portReturned = fromMesos ? parseInt(fromMesos) : defaultPort;
    log.info("Mesos Port At " + index + " was " + portReturned +
            " default was " + defaultPort);
    return portReturned;
}
Then you can use this config "DSL" from your config files. The example todo-service-development.js uses the config logic functions from the previous example, namely mesosPortAt and createLogger (development deploy).

todo-service-development.js

var config = {

platform: {

statsd:"udp://"+getDockerHost() +":8125",

servicePort:mesosPortAt(0, 8080),
adminPort:mesosPortAt(1, 9090),

vertxOptions: {
clustered:false
},

discovery: {
providers: ["docker:http://"+getDockerHost() +":2375"]
}
},

todoService: {

recordTodoTimeout:"30 seconds",
recordTodoListTimeout:"10 seconds",
circuitBreakerCheckInterval:'10s',
maxErrorsPerCircuitBreakerCheck:10,
checkCassandraConnectionInterval:"10s",

cassandra: {
uri:"discovery:docker:///cassandra?containerPort=9042",
replicationFactor:1
}

}

};

function getDockerHost() {
    return isMacOS() && !env("DOCKER_HOST")
            ? "192.168.99.100"
            : dockerHostOrDefault("localhost");
}

log.info("STAGING DEVELOPMENT");
log.info("impressions DEPLOYMENT_ENVIRONMENT {} ", env("DEPLOYMENT_ENVIRONMENT"));
log.info("impressions DOCKER_HOST {} ", env("DOCKER_HOST"));
log.info("impressions PORT_0 {} ", env("PORT0"));
log.info("impressions PORT_1 {} ", env("PORT1"));
Notice we added getDockerHost to our dev config file. These config files would exist in your deployment jar files.
Your production version can use just your "standard" config functions as follows.

todo-service-production.js

var config = {

platform: {

statsd:"udp://statsd1.ops.prod.somecompany.net:8086",

servicePort:mesosPortAt(0, 8080),
adminPort:mesosPortAt(1, 9090),


vertxOptions: {
clustered:false
},

discovery: {
providers: ["dns://ns-660.aawsdns-13.net:53",
"dns://ns-1399.aawsdns-36.org:53",
"dns://ns-511.aawsdns-63.com:53"]
}
},

todoService: {

recordTodoTimeout:"30 seconds",
recordTodoListTimeout:"10 seconds",
circuitBreakerCheckInterval:'10s',
maxErrorsPerCircuitBreakerCheck:10,
checkCassandraConnectionInterval:"10s",

cassandra: {
uri:"discovery:dns:A:///cassandra1.ds.prod.rbmhops.net?port=9042",
replicationFactor:1
}

}

};


log.info("STAGING PRODUCTION");
log.info("impressions DEPLOYMENT_ENVIRONMENT {} ", env("DEPLOYMENT_ENVIRONMENT"));
log.info("impressions DOCKER_HOST {} ", env("DOCKER_HOST"));
log.info("impressions PORT_0 {} ", env("PORT0"));
log.info("impressions PORT_1 {} ", env("PORT1"));
This way your production and QA configs are locked down, but your dev config is more flexible, accommodating different dev environments (Linux, macOS, and Windows).
To load your base config utils, create a Java utility jar that your apps depend on as follows.
package io.advantageous.platform.config;


import io.advantageous.config.Config;
import io.advantageous.config.ConfigLoader;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.atomic.AtomicReference;

/**
 * Convenient utility methods for handling env and config.
 *
 * @author Geoff Chandler
 * @author Rick Hightower
 */
public final class ConfigUtils {

    private static final String DEPLOYMENT_ENVIRONMENT;
    private static final Logger logger = LoggerFactory.getLogger(ConfigUtils.class);
    private static AtomicReference<Config> rootConfig = new AtomicReference<>();

    static {
        final String env = System.getenv("DEPLOYMENT_ENVIRONMENT");
        DEPLOYMENT_ENVIRONMENT = env == null ? "development" : env.toLowerCase();
    }

    private ConfigUtils() {
        throw new IllegalStateException("config utils is not to be instantiated.");
    }

    private static Config loadRootConfig() {
        // Load the right config for the right environment.
        final String resourceName = String.format("config-%s.js", DEPLOYMENT_ENVIRONMENT);
        // Load your config utils first so the config files can use the DSL functions.
        final Config load = ConfigLoader.load("mycompany-config-utils.js", resourceName);

        if (!rootConfig.compareAndSet(null, load)) {
            logger.warn("Config was already set, and you can't overwrite it. {}", resourceName);
        }
        return load;
    }

    public static Config getConfig(final String basePath) {
        if (rootConfig.get() == null) {
            loadRootConfig();
        }
        return rootConfig.get().getConfig(basePath);
    }
}
That is it. Now you can write your own config DSLs. Happy coding!
Konf has the following built-in config functions.
var env = Java.type("java.lang.System").getenv;
var uri = Java.type("java.net.URI").create;
var system = Java.type("java.lang.System");
var duration = Java.type("java.time.Duration");

/** To store private vars. */
var konf = {
    osNameInternal: system.getProperty("os.name").toLowerCase()
};

function seconds(unit) {
    return duration.ofSeconds(unit);
}

function minutes(unit) {
    return duration.ofMinutes(unit);
}

function hours(unit) {
    return duration.ofHours(unit);
}

function days(unit) {
    return duration.ofDays(unit);
}

function millis(unit) {
    return duration.ofMillis(unit);
}

function milliseconds(unit) {
    return duration.ofMillis(unit);
}

function sysProp(prop) {
    return system.getProperty(prop);
}

function sysPropOrDefault(prop, defaultValue) {
    return system.getProperty(prop, defaultValue.toString());
}

function isWindowsOS() {
    return (konf.osNameInternal.indexOf("win") >= 0);
}

function isMacOS() {
    return (konf.osNameInternal.indexOf("mac") >= 0);
}

function isUnix() {
    var OS = konf.osNameInternal;
    return (OS.indexOf("nix") >= 0 || OS.indexOf("nux") >= 0 || OS.indexOf("aix") > 0);
}

function isLinux() {
    return konf.osNameInternal.indexOf("linux") >= 0;
}

function isSolaris() {
    return (konf.osNameInternal.indexOf("sunos") >= 0);
}

Konf - Java config system that supports YAML, JSON, Java properties, Java pojos, List, Maps and JavaScript


Konf - Typed Java Config system

Java configuration library similar in concept to TypeSafe config, but uses full YAML or JSON or JavaScript for configuration (and more).
Konf allows you to easily create your own config DSLs.

Using Konf on your project

Konf is in the public maven repo.

Using konf from maven

<dependency>
<groupId>io.advantageous.konf</groupId>
<artifactId>konf</artifactId>
<version>1.3.0.RELEASE</version>
</dependency>

Using konf from gradle

compile 'io.advantageous.konf:konf:1.3.0.RELEASE'

Using konf from scala sbt

libraryDependencies += "io.advantageous.konf" % "konf" % "1.3.0.RELEASE"

Using konf from clojure leiningen

[io.advantageous.konf/konf "1.3.0.RELEASE"]
Here is an example config for JavaScript.
Konf expects the config variable to be set to a JavaScript object with properties.

JavaScript based configuration for Java

var config = {

myUri:uri("http://host:9000/path?foo=bar"),

someKey: {
nestedKey:234,
other:"this text"
}

};
You can use full JavaScript for configuration as long as you define a variable called config that results in a JavaScript object which equates to a Java map.

Defining your own DSL

You can define your own config DSL for your environment. We have a full example that shows you how to create a custom config DSL for your internal projects. The example uses Mesosphere and Docker PORT look-ups, and it is from a real project.

Defining your own config DSL

var config = {

platform: {

statsd:"udp://"+getDockerHost() +":8125",

servicePort:mesosPortAt(0, 8080),
adminPort:mesosPortAt(1, 9090),
...
See the real-world example that uses Konf to find ports under Mesosphere (running in staging or prod) or under Docker (running on a local developer's box).

Overview

  • implemented with the plain Java SDK and almost no dependencies (slf4j and reflekt, nothing else)
  • supports files in: YAML, JSON, JSON LAX, JavaScript, Java properties, or any tree of Map/List basic types and POJOs
  • allows you to easily create your own config DSL
  • merges multiple configs across all formats
  • can load configs from the classpath, http, a file, or just a Java object tree
  • great support for "nesting" (treat any subtree of the config the same as the whole config)
  • users can override the config with Java system properties, java -Dmyapp.foo.bar=10, and sysProp
  • users can override the config with OS environment variables
  • supports configuring an app, with its framework and libraries, all from a single file such as application.yaml
  • parses duration and size settings, "512k" or "10 seconds"
  • converts types, so if you ask for a boolean and the value is the string "yes", or you ask for a float and the value is an int, it will figure it out
  • API based on immutable Config instances, for thread safety and easy reasoning about config transformations
  • extensive test coverage
This library limits itself to config. If you want to load config from another source, e.g., a database, Redis or MongoDB, then you would need to write some custom code. The library has nice support for merging configurations (Configs with fallbacks), so if you build a custom Config from a custom source, it is easy to merge it in. Just implement Config and then use configs(config...) to chain your config with other configs. This is described at length below; see "Loading config files with fallbacks".

License

The license is Apache 2.0.

Release Notes

Please see Release Notes, and Release Notes In Progress for the latest releases.

Build

The build uses Gradle, the tests are written in Java, and the library itself is plain Java.

Using the Library

import io.advantageous.config.ConfigLoader;

Config conf = ConfigLoader.load("myconfig.js", "reference.js");
int bar1 = conf.getInt("foo.bar");
Config foo = conf.getConfig("foo");
int bar2 = foo.getInt("bar");

Longer Examples

You can see longer examples in tests along with sample config. You can run these examples by git cloning this project and running gradle test.
In brief, as shown in the examples:
You create a Config instance provided by your application. You use ConfigLoader.load(), and you can define your own config system. You could set up a default reference.yaml or reference.json, but you don't have to. You could just load a single level of config. Config is as complex or as simple as you need.
Config can be created with the parser methods in ConfigLoader.load or built up from any POJO object tree or a tree of Map/List/POJO basic values. It is very flexible. Examples are shown and linked to below that use JSON, YAML and allow you to define your own DSL-like config. It is very simple and easy to use.
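As a quick illustration, here is a minimal sketch of building a Config from a hand-made Java object tree with ConfigLoader.loadFromObject; the map contents are made up for the example:

import io.advantageous.config.Config;
import io.advantageous.config.ConfigLoader;

import java.util.HashMap;
import java.util.Map;

// Build a small object tree by hand (normally this comes from a parser).
Map<String, Object> server = new HashMap<>();
server.put("port", 8080);
server.put("enabled", "yes");   // "yes" converts to a boolean via getBoolean

Map<String, Object> root = new HashMap<>();
root.put("server", server);

Config config = ConfigLoader.loadFromObject(root);

int port = config.getInt("server.port");                // 8080
boolean enabled = config.getBoolean("server.enabled");  // true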

Immutability

Objects are immutable, so methods on Config which transform the configuration return a new Config. There is no complex tree of Config objects. Just Config. It is pretty simple to use and understand.

Java interface for Konf is Config.

The Java interface for Konf is Config. You can get a sub Config from Config (getConfig(path)). The path is always in dot notation (this.that.foo.bar). You can also use:
  • hasPath(path)
  • getInt(path)
  • getLong(path)
  • getDouble(path)
  • getBoolean(path) can be true, false, "yes", "no", "on", "off", yes, no, off, on
  • getString(path)
  • getStringList(path) gets a list of strings
  • getConfig(path) gets a sub-config.
  • getMap(path) gets a map which is a sub-config.
  • getConfigList(path) gets a list of configs at the location specified.
  • getIntList(path)
  • getLongList(path)
  • getDoubleList(path)
  • getBooleanList(path)
  • getDuration(path) gets java.time.Duration useful for timeouts
  • getDurationList(path) gets duration list
  • getUri(path) gets java.net.URI useful for connecting to downstream services
  • getUriList(path) useful for connecting to downstream services
The getMap works with JavaScript objects (or Java maps; see below for loading config from Java objects, YAML or JSON). The getStringList and getConfigList work with a JavaScript array of strings and a JavaScript array of JavaScript objects, respectively.
Note that you get an exception if the path requested is not found. Use hasPath(path) if you think the config path might be missing.
Here is a partial glimpse of the Config interface.

Config interface

public interface Config {

    /** Get string at location. */
    String getString(String path);

    /** Checks to see if config has the path specified. */
    boolean hasPath(String path);

    /** Get int at location. */
    int getInt(String path);

    /** Get float at location. */
    float getFloat(String path);

    /** Get double at location. */
    double getDouble(String path);

    /** Get long at location. */
    long getLong(String path);

    /** Get list of strings at location. */
    List<String> getStringList(String path);

    /** Get map at location. */
    Map<String, Object> getMap(String path);

    /** Get a sub-config at location. */
    Config getConfig(String path);

    /** Get list of sub-configs at location. */
    List<Config> getConfigList(String path);

    /** Get a single POJO out of config at path. */
    <T> T get(String path, Class<T> type);

    /** Get a list of POJOs. */
    <T> List<T> getList(String path, Class<T> componentType);

    /** Get duration. Good for timeouts. */
    Duration getDuration(String path);

    /** Get duration list. */
    List<Duration> getDurationList(String path);

    /** Get int list. */
    List<Integer> getIntegerList(String path);
    ...
}
The getX methods work like you would expect. Given this config file.

JavaScript functions for config

JavaScript functions that we support

  • sysProp(propName) to read a sysProp as in fooSize : sysProp("my.foo.size")
  • sysPropOrDefault(propName, defaultValue) to read a sysProp or a default
  • isWindowsOS(), isMacOS(), isUnix(), isLinux(), isSolaris()
  • env() as in fooSize : env('MY_FOO_SIZE') or even fooSize : sysPropOrDefault("my.foo.size", env('MY_FOO_SIZE'))
  • uri() which creates a java.net.URI as in fooURI : uri ("http://localhost:8080/")
  • java.time.Duration is imported as duration
  • java.lang.System is imported as system
  • seconds(units), minutes(units), hours(units), days(units), millis(units) and milliseconds(units) define a Duration which is useful for configuring timeouts and interval jobs
  • constants yes, no, on, off for boolean config

Sample config for testing and showing how config works

var config = {

myUri:uri("http://host:9000/path?foo=bar"),

someKey: {
nestedKey:234,
other:"this text"
},

int1:1,
float1:1.0,
double1:1.0,
long1:1,
string1:"rick",
stringList: ['Foo', 'Bar'],
configInner: {
int2:2,
float2:2.0
},
uri:uri("http://localhost:8080/foo"),
myClass:"java.lang.Object",
myURI:"http://localhost:8080/foo",
employee: {"id":123, "name":"Geoff"},
employees: [
{id:123, "name":"Geoff"},
{id:456, "name":"Rick"},
{id:789, 'name':"Paul"}
]
};
We can do the following operations.
First we load the config.

Loading the config.

private Config config;

@Before
public void setUp() throws Exception {
    config = ConfigLoader.load("test-config.js");
}
Note that ConfigLoader.load(resources...) takes a variable-length string array. By default a resource String can contain a valid URI, which can have the scheme classpath, file, or http. If you do not specify a scheme, then the path is assumed to be a classpath resource.

Using different resources

        config =ConfigLoader.load(
"/io/mycompany/foo-classpath.js",
"classpath:test-config.js",
"classpath://foo.js",
"classpath:/bar.js",
"file://opt/app/config.js",
"file:///opt/app/config2.js",
"file:/opt/app/config.js",
"http://my.internal.server:9090/foo.js"
);
Then we show reading basic types with the config object using getX.

Reading basic types from config

    @Test
    public void testSimple() throws Exception {

        // getInt
        assertEquals(1, config.getInt("int1"));

        // getStringList
        assertEquals(asList("Foo", "Bar"),
                config.getStringList("stringList"));

        // getString
        assertEquals("rick", config.getString("string1"));

        // getDouble
        assertEquals(1.0, config.getDouble("double1"), 0.001);

        // getLong
        assertEquals(1L, config.getLong("long1"));

        // getFloat
        assertEquals(1.0f, config.getFloat("float1"), 0.001);

        // Basic JDK value types are supported, like Class.
        assertEquals(Object.class, config.get("myClass", Class.class));

        // Basic JDK value types are supported, like URI.
        assertEquals(URI.create("http://localhost:8080/foo"),
                config.get("myURI", URI.class));

        assertEquals(URI.create("http://localhost:8080/foo"),
                config.get("uri", URI.class));
    }
You can work with nested properties as well.

Reading a nested config from the config

    @Test
    public void testGetConfig() throws Exception {
        // Read nested config.
        final Config configInner = config.getConfig("configInner");
        assertEquals(2, configInner.getInt("int2"));
        assertEquals(2.0f, configInner.getFloat("float2"), 0.001);
    }

    @Test
    public void testGetMap() throws Exception {
        // Read nested config as a Java map.
        final Map<String, Object> map = config.getMap("configInner");
        assertEquals(2, (int) map.get("int2"));
        assertEquals(2.0f, (double) map.get("float2"), 0.001);
    }
You can read deeply nested config items as well by specifying the property path using dot notation.

Reading nested properties with dot notation from config

    @Test
    public void testSimplePath() throws Exception {

        assertTrue(config.hasPath("configInner.int2"));
        assertFalse(config.hasPath("configInner.foo.bar"));
        assertEquals(2, config.getInt("configInner.int2"));
        assertEquals(2.0f, config.getFloat("configInner.float2"), 0.001);
    }
You can also read POJOs directly out of the config file.

Reading a pojo directly out of the config file

    @Test
    public void testReadClass() throws Exception {
        final Employee employee = config.get("employee", Employee.class);
        assertEquals("Geoff", employee.name);
        assertEquals("123", employee.id);
    }
You can read a list of POJOs at once.

Reading a pojo list directly out of the config file

    @Test
    public void testReadListOfClass() throws Exception {
        final List<Employee> employees = config.getList("employees", Employee.class);
        assertEquals("Geoff", employees.get(0).name);
        assertEquals("123", employees.get(0).id);
    }
You can also read a list of config objects out of the config as well.

Reading a config list directly out of the config file

final List<Config> employees = config.getConfigList("employees");
assertEquals("Geoff", employees.get(0).getString("name"));
assertEquals("123", employees.get(0).getString("id"));

Using Config with YAML

First include a YAML to object parser like YAML Beans or a library like this.

Example YAML

name: Nathan Sweet
age: 28
address: 4011 16th Ave S
phone numbers:
    - name: Home
      number: 206-555-5138
    - name: Work
      number: 425-555-2306

Using Konf with YAML

// Use YamlReader to load the YAML file.
YamlReader reader = new YamlReader(new FileReader("contact.yml"));

// Convert the object read from YAML into a Konf config.
Config config = ConfigLoader.loadFromObject(reader.read());

// Now you have strongly typed access to fields.
String address = config.getString("address");
You can also read Pojos from anywhere in the YAML file as well as sub configs.

You can also use Konf with JSON using Boon

See Boon JSON parser project, and Boon in five minutes

Using Konf with JSON

ObjectMapper mapper = JsonFactory.create();


/* Convert the object read from JSON into a Konf config.
   'src' can be a File, InputStream, Reader, or String. */
Config config = ConfigLoader.loadFromObject(mapper.fromJson(src));


// Now you have strongly typed access to fields.
String address = config.getString("address");
Boon supports LAX JSON (JSON with comments, and you do not need to quote field names).

Working with java.time.Duration

  • getDuration(path) get a duration
  • getDurationList(path) get a duration list
Konf supports "10 seconds" style config for durations, as well as built-in functions and ISO-8601 support. See the duration config documentation for more details.
Konf can read lists of numbers.
  • getIntList reads list of ints
  • getLongList reads list of longs
  • getDoubleList reads list of doubles
  • getFloatList reads list of floats
See the documentation on list-of-numbers configuration for more details.
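A minimal sketch, assuming a hypothetical ports property in the config:

    @Test
    public void testGetIntList() throws Exception {
        // Assumes a config entry such as:  ports: [8080, 8081, 8082]   (hypothetical key)
        final List<Integer> ports = config.getIntList("ports");
        assertEquals(3, ports.size());
        assertEquals(8080, (int) ports.get(0));
    }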

Konf can read memory sizes

  • getMemorySize(path)
  • getMemorySizeList(path)
This means we support config like:

Sizes supported.

  diskSpace : " 10 gigabytes",
  diskVolumes : [" 10 gigabytes", "10GB", "10 gigabytes", 10]
We support the following size Strings.

Supported size strings

public enum MemorySizeUnit {

    BYTES(1, "B", "b", "byte", "bytes"),
    KILO_BYTES(1_000, "kB", "kilobyte", "kilobytes"),
    MEGA_BYTES(1_000_000, "MB", "megabyte", "megabytes"),
    GIGA_BYTES(1_000_000_000, "GB", "gigabyte", "gigabytes"),
    TERA_BYTES(1_000_000_000, "TB", "terabyte", "terabytes"),
    PETA_BYTES(1_000_000_000_000L, "PB", "petabyte", "petabytes"),
    EXA_BYTES(1_000_000_000_000_000L, "EB", "exabyte", "exabytes"),
    ZETTA_BYTES(1_000_000_000_000_000_000L, "ZB", "zettabyte", "zettabytes");
You can also specify the sizes with built-in functions if you don't want to use strings.

Using built-in functions to create sizes.

  diskVolumes: [kilobytes(10), megabytes(10), bytes(10), gigabytes(10)]

Loading config files with fallbacks

import static io.advantageous.config.ConfigLoader.*;
...
private Config config;
...
config = configs(config("test-config.js"), config("reference.js"));
You can load config. The config method is an alias for load(resources...). The configs(config...) call creates a chain of configs that is searched from left to right. The first config that has the requested object (starting from the left, or index 0) wins.
Given the following two configs (from the above example).

test-config.js

var config = {
  abc : "abc"
}

reference.js

var config = {
  abc : "abcFallback",
  def : "def"
}
You could run this test.

Testing the reference.js is a fallback for test-config.js.

import static io.advantageous.config.ConfigLoader.*;
...

config = configs(config("test-config.js"), config("reference.js"));

final String value = config.getString("abc");
assertEquals("abc", value);

final String value1 = config.getString("def");
assertEquals("def", value1);
You can load your config any way you like. The String "abc" is found when looking up the key abc because it is in test-config.js, which is read before the value "abcFallback" in reference.js. The def key yields "def" because it is defined in reference.js but not in test-config.js. You can implement the same style of config reading and fallback as in Typesafe Config, but with your own DSL.

Thanks

If you like our configuration project, please try our Reactive Java project or our Actor based microservices lib.

Integrating TypeSafe Config and Konf.


Konf - Typed Java Config Integration

This allows you to combine TypeSafe config and Konf. You can have TypeSafe config be a fallback for Konf or the other way around.
You can load TypeSafe config as a Konf Config instance as follows:

Loading Typesafe config as a Konf Config object

Config config = TypeSafeConfig.typeSafeConfig();
final String abc = config.getString("abc");
assertEquals("abc", abc);
You can also chain TypeSafe config as a fallback for Konf, or Konf as a fallback for TypeSafe config, as follows:

Konf as a fallback for TypeSafe config.

import static io.advantageous.config.ConfigLoader.config;
import static io.advantageous.config.ConfigLoader.configs;
import static io.advantageous.config.ConfigLoader.load;

...

Config config;
...
config = configs(TypeSafeConfig.typeSafeConfig(), config("test-config.js"));
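The reverse direction, TypeSafe config as a fallback for Konf, is just a matter of swapping the argument order; a minimal sketch using the same files as above:

// Konf wins on conflicts; TypeSafe config is only consulted when Konf lacks the key.
config = configs(config("test-config.js"), TypeSafeConfig.typeSafeConfig());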

Reactive Java with Reakt


It might seem like Reakt is brand new, but it is not. Most of what is in Reakt existed in QBit for years. After working with Node.js and JavaScript promises, we realized we could write a much cleaner interface. Instead of QBit's CallbackBuilder (QBit's original promise library) and Reactor, we are moving towards Reakt promises, streams and the Reakt Reactor.
The trick for async and reactive programming is not the streams, it is the call coordination.

What is Reakt again?

If you are not familiar with Reakt, there was an informative interview about Reakt, and QBit has started to support Reakt as a first-class citizen. You should start with the Reakt website and the Reakt documentation. You can use Reakt with any async lib, including async NoSQL drivers.

Understanding managing callbacks

You want to async call serviceA, then serviceB, take the results of serviceA and serviceB, then call serviceC. Then, based on the results of call C, call D or E and return the results to the original caller. Calls to A, B, C, D and E are all async calls, and none should take longer than 10 seconds; if they do, then return a timeout to the original caller.
Let's say that the whole async call sequence should time out in 20 seconds if it does not complete, and should also check circuit breakers and provide back-pressure feedback so the system does not have cascading failures. QBit is really good at this, but creating the Reactor (QBit's callback and task manager is called Reactor) was trial by fire. It was created while using it at scale, because we needed something like it and could not find anything. That drove the quick evolution of QBit's Reactor, but its design and ease-of-use could improve (although these were goals). QBit's Reactor is quite good, but we could do better. QBit call coordination is good; Reakt's will be better, and it is already useful.
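Here is a rough sketch of that kind of call coordination with Reakt promises; serviceA, serviceB and serviceC are hypothetical client proxies, and the timeout and circuit-breaker handling described above is omitted for brevity.

// Kick off A and B in parallel; each proxy call returns an invokable Promise.
final Promise<String> promiseA = serviceA.callA();
final Promise<String> promiseB = serviceB.callB();

// The "all" promise triggers only after both A and B have replied.
Promises.all(promiseA, promiseB).then(done -> {
    serviceC.callC(promiseA.get(), promiseB.get())      // use A's and B's results
            .then(resultC -> { /* decide whether to call D or E, then reply to the caller */ })
            .catchError(error -> { /* reply to the original caller with the error */ })
            .invoke();
}).catchError(error -> { /* A or B failed */ });

promiseA.invoke();
promiseB.invoke();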

Continuous improvement

Reakt gives us a chance to rework QBit 2 call coordination and pull async call coordination out into a lib that can be used with projects that don't use QBit. This is the goal of Reakt. Reakt already has extension libraries for Guava/Cassandra and Vert.x. Reakt is also supported by QBit, a reactive microservice lib. The Reakt library can work with any JVM async framework; it is not tied to the QBit microservices lib.

Next steps with Reakt

Reakt IO, which is in progress, will sit on top of Vert.x or Conekt (a lightweight Netty IO lib that we are working on). Reakt IO provides a common interface for lightweight IO libs. Reakt provides promises and streams that are Java 8 lambda expression friendly (and Groovy closure, Kotlin closure, Scala closure, and Jython lambda friendly too).

When can I use Reakt

Now. QBit, which is a Java microservice lib, can handle call coordination like this at scale. It works. But this call coordination does not need to live in QBit; thus Reakt was born. Reakt is already very useful and can be used with Vert.x, Guava, Cassandra and more. Reakt is laser focused on async call coordination on the JVM, and on making that experience clean and enjoyable.

Reactive programming

Reactive programming is not a new paradigm. Reactive programming centers around data event flows and propagation of change. There are many examples of reactive programming, like spreadsheet updates, the Swing event loop, the Vert.x event bus, the Node.js event loop and the JavaScript browser event loop.
Reactive programming is often at the core of interactive user interfaces, Model-View-Controller, simulations and real-time animations, but it can also be used for reactive microservice development. It is a general technique with many applications.
You can even manage complex data flows and transformations using tools like Spark. You can do amazing near-real-time big data transformation using reactive streams and tools like Spark. Functional reactive programming has its place, and is increasingly used to process large streams of data into usable information in near real time. However, there is a much more common and mundane use of reactive programming.

Object-oriented reactive programming

Object-oriented reactive programming (OORP) combines object-oriented programming with reactive programming. This has become a very popular model with tools like Angular, React and jQuery. jQuery and other libs also manage call coordination with Promises. Promises are a very common way to turn streams of event data into actionable responses. Promises have become so popular in the JavaScript/Node.js world that they are part of ES6.

Promise vs streams

Events and streams are great for things that can happen multiple times: keyup, touchstart, or even a user action stream from Kafka, etc.
With those events you don't really care about what happened before when you attached the listener.
But often, when dealing with service calls and data repositories, you want to handle a response with a specific next action, and a different action if there was an error or timeout. You essentially want to call and handle a response asynchronously, and that is what promises allow.
At their most basic level, promises are like event listeners except:
A promise can only succeed or fail once. A promise cannot succeed or fail twice, neither can it switch from success to failure. Once it enters its completed state, then it is done.

Reakt Promises

Reakt promises are very similar in concept to ES6 promises, which have become standardized in the JavaScript/TypeScript/ES5/ES6/ES7 pantheon of languages.
A promise can be:
  • fulfilled The callback relating to the promise succeeded
  • rejected The callback/action relating to the promise failed
  • pending The callback has not been fulfilled or rejected yet
  • completed The callback/action has been fulfilled/resolved or rejected
Java is not single threaded, meaning that two bits of code can run at the same time, so the design of this promise and streaming library takes that into account. We came up with many tools that adapt the Promise constructs (a single-threaded event loop in JavaScript) to the Java world (multi-threading, race condition possibilities, thread visibility, etc.).
There are three types of promises:
  • Callback promises (async)
  • Blocking promises (for testing and legacy integration)
  • Replay promises (allow promises to be handled on the same thread as the caller, managed by a Reactor)
Unlike their JavaScript cousins, Reakt promises have to account for thread safety. Promises in Reakt can also be invokable.
Promises can be very fluent.

Passing a promise as a callback handler

        employeeService.lookupEmployee(33, result -> {
result.then(e -> saveEmployee(e))
.catchError(error -> {
logger.error("Unable to lookup", error);
});
});
Promises in Java become even more fluent when you use invokable promises.

Using an invokable promise

        employeeService.lookupEmployee("123")
.then((employee)-> {...}).catchError(...).invoke();
Replay promises are the most like their JS cousins. Replay promises are usually managed by the Reakt Reactor and support environments like Vert.x, Akka, Reactor, Netty, async NoSQL drivers and QBit.
It is common to make async calls to store data in a NoSQL store or to call a remote REST interface or deal with a distributed cache or queue. Also Java is strongly typed so the library that mimics JS promises is going to look a bit different. Reakt uses similar terminology to ES6 promises where it makes sense.
We have been on projects where we wrote libs in JS and Java that did very similar things and the promise code for ES6 and Java looks close to the point where we have to take a double look to decide which language we are working with.

Conclusion

QBit has had promises for a few years now, but they were called CallbackBuilders and were not as easy to work with. Reakt focuses on standard terminology and ease of use. With Reakt you can use the same terminology and modeling on projects that do not use QBit: a reactive microservices lib like Conekt, Vert.x, RxJava, Project Reactor, Lightbend, and reactive streams.

Reactive Java code examples using Reakt (with links to docs for further info)

This is a presentation of different Reakt features in the context of a real application. I have renamed the classnames and such, but this is from an in-progress microservice. It demonstrates where you would use the Reakt pieces to build a reactive Java application.
This covers usage of Reakt:
  • blocking promises
  • Promises.all
  • invokable promise
  • Expected values
  • Circuit breakers
  • Working with the Reactor
  • Working with streams
  • Using AsyncSuppliers to create downstream services
  • Reakt Guava integration
  • Using promise.thenMap

Blocking promise example

Let's say you have an async microservices application, and you want to write some unit and integration tests. You want the tests to wait until the system starts up; specifically, you want the system to notify the test when it is done loading.
NOTE: The examples below are written in Kotlin, a JVM language from JetBrains. I use Kotlin because the examples are easier to read and take up less space. If you know Java 8, you should be able to follow along with no problem; think of it like pseudo code. Kotlin works well with Java classes.

Loading a system for testing.

import io.advantageous.qbit.util.PortUtils
import io.advantageous.reakt.AsyncSupplier
import io.advantageous.reakt.promise.Promises

object DFTestUtils {

    val adminPort: AtomicInteger = AtomicInteger(9090)
    val eventBusPort: AtomicInteger = AtomicInteger(8080)

    fun loadSystem(): ServicePlatform {
        /** Load System. */
        val loadPromise = Promises.blockingPromiseNotify(Duration.ofSeconds(4))
        val servicePlatform = servicePlatform().withNamespace(Constants.TODO_SERVICE)
                .setWaitForStartup(true).setAdminPort(PortUtils.findOpenPortStartAt(adminPort.andIncrement))
                .setEventBusPort(PortUtils.findOpenPortStartAt(eventBusPort.andIncrement))
        Main.run(servicePlatform).invokeWithPromise(loadPromise)

        loadPromise.get() //Wait for the system to load before we start.
        Assert.assertFalse("system loaded", loadPromise.failure())
        return servicePlatform
    }
Notice that we create a promise using Promises.blockingPromiseNotify(Duration.ofSeconds(4)). We call loadPromise.get() to wait until the system loads. Blocking promises are good for testing.
Now we can use the loadSystem in our tests. Here is a test that does a health check against a running server.

Using loadSystem / blocking promise in our test

    @Test
@Throws(Exception::class)
fun mainHealthCheckTest() {

val servicePlatform = loadSystem()
val httpTextResponse =HttpClientBuilder.httpClientBuilder().setPort(servicePlatform.adminPort)
.buildAndStart().get("/__admin/ok")

assertNotNull(httpTextResponse)
assertEquals("true", httpTextResponse.body())
assertEquals(200, httpTextResponse.code().toLong())

shutdownSystem(servicePlatform)
}

Using invokable promises to notify when the entire system is done loading

The actual code that loads our system uses an invokable promise.

Load system calls Main.run which returns an invokable promise

import io.advantageous.reakt.AsyncSupplier
import io.advantageous.reakt.Callback
import io.advantageous.reakt.Stream
import io.advantageous.reakt.promise.Promise
import io.advantageous.reakt.promise.Promises.*
object Main {

...
var repoServiceQueue:ServiceQueue?=null
private val logger =LoggerFactory.getLogger(Main::class.java)
...
fun main(args:Array<String>) {
run(servicePlatform().withNamespace(TODO_SERVICE)).invoke()
}

...
fun run(servicePlatform:ServicePlatform):Promise<Void> {
return invokablePromise { donePromise ->

val loadCollectionServicePromise = promiseBoolean()
val loadTodoServicePromise = promiseBoolean()

val loadPromise = all(loadCollectionServicePromise,
loadTodoServicePromise)
.thenPromise(donePromise)

createCollectionService(servicePlatform, loadPromise)
createTodoServiceService(servicePlatform, loadPromise)
servicePlatform.start()
}
}

Working with Promises.all

There is a lot going on here. The run method uses Promises.invokablePromise. Then it uses Promises.all to chain loadCollectionServicePromise and loadTodoServicePromise together. Promises.all triggers the all promise only after all of its child promises have triggered. This way you are notified when both createCollectionService and createTodoServiceService have replied asynchronously. You don't want to start testing before the system is initialized.

Working with Reakt Streams

The project that I am working on uses leader election from Elekt. Elekt, a leadership election lib, uses Reakt streams. For testing, we just simulate the Elekt Consul support.
The LeaderElector looks like this:

LeaderElector uses Reakt streams support

public interface LeaderElector {

    /**
     * Attempt to elect this service as leader.
     * Returns true if successful, and false if not successful.
     * @param endpoint endpoint describes the host and port of the leader.
     * @param callback callback
     */
    void selfElect(final Endpoint endpoint, final Callback<Boolean> callback);

    /**
     * This will send leadership changes as they occur over the stream.
     *
     * @param callback callback returns new leader.
     */
    void leadershipChangeNotice(final Stream<Endpoint> callback);

    /**
     * This will come back quickly with a new leader.
     * If no Endpoint is returned in the callback then there is no leader.
     *
     * @param callback callback returns new leader.
     */
    void getLeader(final Callback<Endpoint> callback);


}
To simulate that for integration testing, we use this mock LeaderElector.

Using a test LeaderElector stream for integration testing

private fun createLeaderElector():Supplier<LeaderElector> {
returnSupplier {
object :LeaderElector {
override fun leadershipChangeNotice(stream:Stream<Endpoint>?) {
logger.info("Leader notice registered")
val idOfServer = Identity()
stream?.reply(Endpoint(idOfServer.host, idOfServer.servicePort))
Thread({
Thread.currentThread().isDaemon =true
while (true) {
Thread.sleep(1000*10)
stream?.reply(Endpoint(idOfServer.host, idOfServer.servicePort))
}
})
}

override fun selfElect(endpoint:Endpoint?, callback:Callback<Boolean>?) {
logger.info("Self elect was called")
callback?.resolve(true);
}

override fun getLeader(callback:Callback<Endpoint>?) {
logger.info("Self elect was called")
val idOfServer = Identity()
callback?.resolve(Endpoint(idOfServer.host, idOfServer.servicePort))
}
}
}
}
Notice the call to stream.reply to send a stream of leader elect notifications that this server has been elected the leader.

Reakt Expected and Circuit breakers

It is important to monitor the health of your system, and sometimes it is good not to beat a dead horse. If downstream services are broken, there is no point in using them until they are fixed. In Reakt we use Circuit Breakers and Expected values to handle the cases where some service is supposed to be there or some value is expected.
Let's demonstrate this with MetricsCollectionServiceImpl.

MetricsCollectionServiceImpl

import io.advantageous.reakt.Breaker
import io.advantageous.reakt.Breaker.*
import io.advantageous.reakt.Expected
import java.util.function.Function
import java.util.function.Supplier

/**
* Metrics Collection Service.
* Manages finding leader with LeaderElector.
*/
class MetricsCollectionServiceImpl
/**
* @param mgmt ServiceManagementBundle (from QBit)
* *
* @param leaderElectorSupplier leaderElectorSupplier for connecting to the leader elector.
* *
* @param todoListServiceSupplier todoListServiceSupplier for connecting
* * to the todo service.
* *
* @param metricRepository metric repo for storing metrics
*/
(
/**
* Service management bundle which includes stats collection, Reakt reactor, QBit health management, and more.
*/
private val mgmt: ServiceManagementBundle,
/**
* Supplies a leader elector interface. LeaderElector is from Elekt.
*/
private val leaderElectorSupplier: Supplier<LeaderElector>,
/**
* This is used to create a supplier.
*/
private val todoListServiceSupplier: Function<Endpoint, TodoServiceClient>,
/**
* Metric repository for saving repositories.
*/
private val metricRepository: MetricRepositoryClient)
: MetricsCollectionService {


/**
* The current leaderEndpoint which starts out empty.
*/
private var leaderEndpoint =Expected.empty<Endpoint>()
/**
* The actual todoService wrapped in a Reakt Circuit breaker.
*/
private var todoService =Breaker.opened<TodoServiceClient>()
/**
* The leader elector we are using, wrapped in Reakt Circuit breaker
*/
private var leaderElectorBreaker:Breaker<LeaderElector>=Breaker.opened()
/**
* Call count per second.
*/
private var callCount:Long=0

init {
/*
* Check circuit breaker health every 10 seconds.
*/
mgmt.reactor().addRepeatingTask(seconds(10)) {
healthCheck()
}
createLeaderElector()
mgmt.reactor().addRepeatingTask(seconds(1)) { throughPut() }
mgmt.reactor().addRepeatingTask(millis(100), { flushService(metricRepository) })
}
Notice that we are using Reakt circuit breakers, and that we use the Reakt Reactor's addRepeatingTask to periodically check the health of our repo. Reakt's Reactor is used to manage callbacks so they execute on this thread, to handle callback timeouts, and to run repeating tasks.
Let's look at the healthCheck that runs every 10 seconds to see how circuit breakers work.

healthCheck that runs every 10 seconds via a Reakt Reactor task

private fun healthCheck() {
if (mgmt.isFailing) {
logger.warn("CollectionService Health is suspect")
} else {
logger.debug("CollectionService is Healthy")
}

leaderElectorBreaker.ifBroken {
createLeaderElector()
}

/* If the TODO service is broken, i.e. the circuit is open then do... */
todoService.ifBroken {
/* Check to see if we have a leaderEndpoint. */
leaderEndpoint
.ifPresent { leaderEndpoint ->
this.handleNewLeader(leaderEndpoint)
}
/* If we don't have a leaderEndpoint, then look it up. */
.ifEmpty {
/* Look up the endpoint if the elector is not broken. */
leaderElectorBreaker
.ifOk { elector ->
this.lookupLeader(elector)
}
.ifBroken {
logger.warn("We have no leader and the leader elector is down")
}
}
}
}
The leaderEndpoint is an Expected value that might not exist. The methods ifOk and ifBroken come from the circuit breaker. ifOk means the fuse is not burned out; ifBroken means the fuse blew. As you can see, combining Expected values and services wrapped in Breakers simplifies reasoning about what to do when things go down.
When a fuse opens or breaks, then we can work around it. Here is how we mark a broken breaker.

Marking a service Breaker as broken (the fuse is open)

try {
leaderElectorBreaker =Breaker.operational(leaderElectorSupplier.get())
leaderElectorBreaker
.ifOk { this.lookupLeader(it) }
.ifBroken {
logger.error("Unable to connect to leader supplier")
}

if (leaderElectorBreaker.isOk)
mgmt.increment("leader.elector.create.success")
else
mgmt.increment("leader.elector.create.fail")

leaderElectorBreaker.ifOk {
this.handleElectionStream(it)
}
} catch (ex:Exception) {
mgmt.increment("leader.elector.create.fail.exception")
logger.error("Unable to connect to leader supplier", ex)
leaderElectorBreaker =Breaker.broken<LeaderElector>()
}
Notice the use of Breaker.operational to denote that we have a new service that should work. Then, if the service fails, we mark it as broken with Breaker.broken.

Working with Reakt Streams

Here is us handling the election stream that we showed a mock-up of earlier.

Working with Reakt Streams

private fun handleElectionStream(leaderElector:LeaderElector) {
leaderElector.leadershipChangeNotice { result ->
result
.catchError { error ->// Run on this service thread
mgmt.reactor()
.deferRun {
logger.error("Error handling election stream")
mgmt.increment("leader.stream.elect.error")
this.leaderElectorBreaker = broken<LeaderElector>()
createLeaderElector()
}
}
.then { endpoint ->// Run on this service thread
mgmt.reactor().deferRun {
mgmt.increment("leader.stream.elect.notify")
logger.info("New Leader Notify {} {}", endpoint.host, endpoint.port)
handleSuccessfulLeaderLookupOrStream(endpoint)
}
}
}
}
Notice that we use reactor.deferRun so we can handle this stream on this service's thread.
Now let's show another example of Promises.all. We have a Cassandra service that wants to write a heap of records to the DB. It wants to write the records in parallel.
/**
* Stores Metric data and results into Cassandra.
*/
internal class CassandraMetricRepository
/**
* @param sessionAsyncSupplier supplier to supply Cassandra session.
* @param serviceMgmt serviceMgmt to manage callbacks and repeating tasks.
* @param promise returns when cassandra initializes.
*
*/
(
/**
* Cassandra Session supplier.
*/
private val sessionAsyncSupplier: AsyncSupplier<Session>,
/**
* QBit serviceMgmt for repeating tasks, stats, time and callbacks that execute on the caller's thread.
*/
private val serviceMgmt: ServiceManagementBundle,
promise: Promise<Boolean>) : MetricRepositoryService {
/**
* generate the sequence for backup.
*/
private val sequenceGen = AtomicLong(2)
/**
* Reference to the cassandra session which get connected to async.
*/
private var sessionBreaker =Breaker.opened<Session>()
/**
* Error counts from Cassandra driver for the last time period.
*/
private val errorCount = AtomicLong()
...
Notice that we create our sessionBreaker, our reference to Cassandra, as an opened circuit. We define a sessionAsyncSupplier. An AsyncSupplier is also from Reakt; it is like a regular Supplier except it is async.
We use the reactor to define a repeating task to check the health of the Cassandra connection.

Using the reactor

    init {

/* Connect the Cassandra session. */
connectSession(promise)

/*
This makes sure we are connected.
It provides circuit breaker if sessionBreaker is down to auto reconnect.
*/
serviceMgmt.reactor().addRepeatingTask(Duration.ofSeconds(5)) 
                          { this.cassandraCircuitBreaker() }
}
There we check for the health of our Cassandra session and if it goes down, we try to reconnect just like before.
We use the circuit breaker to do alternative logic if our connection goes down.

using alternative Breaker logic

    override fun recordMetrics(callback:Callback<Boolean>, metrics:List<Metric>) {
sessionBreaker()
/* if we are not connected, fail fast. */
.ifBroken { callback.reject("Not connected to Cassandra") }
/* If we are connected then call cassandra. */
.ifOk { session -> doStoreMetrics(session, callback, metrics) }
}
Note the use of ifBroken and ifOk. This way we can control the reconnect.
The method doStoreMetrics stores many records to Cassandra asynchronously, and even though it saves records in parallel, it does not notify its caller via the callback until all of the records have been stored.

Using reactor.all to coordinate many async calls

/**
* Does the low level cassandra storage.
*/
private fun doStoreMetrics(session:Session,
callback:Callback<Boolean>,
metrics:List<Metric>) {

logger.debug("Storing metrics {}", metricss.size)
/* Make many calls to cassandra using its async lib to recordMetrics
each imprint. */
val promises = metrics.map({ metric -> doStoreMetric(session, metric) }).toList()
/*
* Create a parent promise to contain all of the promises we
* just created for each imprint.
*/
serviceMgmt.reactor().all(promises)
.then {
serviceMgmt.increment("bulk.store.success);
logger.info("metrics were stored {}", metrics.size)
callback.resolve(true)
}
.catchError { error ->
serviceMgmt.increment("bulk.store.error);
logger.error("Problem storing metrics ${metrics.size}", error)
callback.reject(error)
}
}
It does this call coordination by using reactor.all to create a promise that only replies when all of the other promises reply. The method doStoreMetric returns a single promise. We use Kotlin streams (just like Java streams but more concise) to map the list of metrics into a list of calls to doStoreMetric, producing a list of promises, which we then pass to reactor.all to combine into a single promise.
The doStoreMetric method uses the Reakt Guava/Cassandra integration to turn a ListenableFuture into a Reakt promise.

Working with Reakt Cassandra / Guava support, and using thenMap

import io.advantageous.reakt.guava.Guava.registerCallback

private fun doStoreMetric(session:Session,
metric :Metric):Promise<Boolean> {
val resultSetFuture = session.executeAsync(QueryBuilder.insertInto(METRICS_TABLE)
.value("employeeId", metric.employeeId)
.value("metricType", metric.metricType.name.toLowerCase())
.value("metricName", metric.metricName)
.value("provider", metric.provider)
.value("externalId", metric.externalId)
.value("value", metric.value)
.value("surrogateKey", metric.surrogateKey)
.value("created_at", metric.timestamp))
return createPromiseFromResultSetFutureForStore(resultSetFuture, "Storing Metric")
}


private fun createPromiseFromResultSetFutureForStore(resultSetFuture:ResultSetFuture,
message:String):Promise<Boolean> {

val resultSetPromise = serviceMgmt.reactor().promise<ResultSet>()

val promise = resultSetPromise.thenMap({ it.wasApplied() }).catchError { error ->
if (error is DriverException) {
callback.ifPresent { callback1 -> callback1.reject(error.message, error) }
logger.error("Error "+ message, error)
errorCount.incrementAndGet()
}
}
registerCallback<ResultSet>(resultSetFuture, resultSetPromise)
return promise
}

Using thenMap to convert a promise into another type of Promise

Notice that we use registerCallback from the Reakt Guava integration to convert the future into a promise. We also use promise.thenMap to convert a Promise of one type (ResultSet) into a Promise of another type (Boolean).

Using invokable Promises inside of an Actor or Managed event loop

Using invokeWithReactor

override fun collectTodo(callback:Callback<Boolean>,
todoList:List<Todo>) {
callCount++
todoRepo.recordTodoList(todoList)
.then { ok ->
todoService
.ifOk { todoService1 ->
doCollectWithCallback(callback, todoList, todoService1)
}
.ifBroken {
mgmt.increment("collect.call.df.service.broken")
logger.error("Connection to todoService is down.")
mgmt.increment("collect.broken")
}
}
.catchError { error ->
mgmt.setFailing()
logger.error("Connection to cassandra is down.", error)
callback.reject("Connection to cassandra is down. "+ error.message, error)
}
.invokeWithReactor(mgmt.reactor())
}
You can invoke invokable promises in the context of a Reactor by using invokeWithReactor(mgmt.reactor()). This allows the callback handlers from the promises to run in the same thread as the service actor or event loop.

Conclusion

I hope you enjoyed this article. It links back to areas of the Reakt documentation where you can find more details. If you are new to Reakt and want to understand why Reakt exists and what Reakt is, I suggest reading this. Also, this interview about Reakt might help.

Understanding the QBit microservices lib's serviceQueue

QBit is made up of queues. There are request queues, response queues and event queues.
A serviceQueue is a set of three queues, namely requests (methodCalls), responses and events. The serviceQueue turns an ordinary POJO (plain old Java object) into a Service Actor. A serviceQueue is a building block of QBit.
  • serviceQueue turns a POJO into a Service Actor
  • serviceBundle groups serviceQueues under different addresses, shares a response queue, allows for service pools, serviceSharding, etc.
  • serviceServer exposes a serviceBundle to REST and WebSocket RPC.
QBit allows you to adapt POJOs to become Service Actors. A Service Actor is a form of an active object. Method calls to a Service Actor are delivered asynchronously, and handled on one thread which can handle tens of millions or more method calls per second. Let's demonstrate by creating a simple POJO and turning it into a Service Actor.

Associating POJO with serviceQueue to make a service actor

ServiceQueue serviceQueue;
...

// Create a serviceQueue with a serviceBuilder.
final ServiceBuilder serviceBuilder = serviceBuilder();

//Start the serviceQueue.
serviceQueue = serviceBuilder
        .setServiceObject(new TodoManagerImpl())
        .buildAndStartAll();
The above code registers the POJO TodoManagerImpl with a serviceQueue by using the method serviceBuilder.setServiceObject. The serviceQueue is started by the buildAndStartAll method of ServiceBuilder.
ServiceQueue is an interface (io.advantageous.qbit.service.ServiceQueue). The ServiceQueue is created with a ServiceBuilder (io.advantageous.qbit.service.ServiceBuilder). You create a Service Actor by associating a POJO with a serviceQueue. You make this association between the serviceQueue and your service POJO with the ServiceBuilder.
Once started, the serviceQueue can handle method calls on behalf of TodoManagerImpl and receive events and deliver them to TodoManagerImpl, which sits behind the serviceQueue. If you only access the TodoManagerImpl POJO service from a serviceQueue, then it will only ever be accessed by one thread. TodoManagerImpl can handle tens of millions of calls per second, and all of those calls will be thread safe. Here is a simple example of a POJO that we will expose as a Service Actor.

Implementation

package com.mammatustech.todo;

import io.advantageous.qbit.reactive.Callback;

import java.util.ArrayList;
import java.util.Map;
import java.util.TreeMap;

public class TodoManagerImpl {

    private final Map<String, Todo> todoMap = new TreeMap<>();

    public TodoManagerImpl() {
    }

    public void add(final Callback<Boolean> callback, final Todo todo) {
        todoMap.put(todo.getId(), todo);
        callback.resolve(true);
    }

    public void remove(final Callback<Boolean> callback, final String id) {
        final Todo removed = todoMap.remove(id);
        callback.resolve(removed != null);
    }

    public void list(final Callback<ArrayList<Todo>> callback) {
        callback.resolve(new ArrayList<>(todoMap.values()));
    }
}
Notice that this example does not return values; instead it uses the callback to send a response back to the client. A call to callback.resolve(someValue) will send that value to the responseQueue. Method calls come in on the requestQueue. The responses go out on the responseQueue. Let's explore this concept.
The serviceQueue has the following interface.

Partial code listing of serviceQueue showing queues

/**
 * Manages a service that sits behind a queue.
 * created by Richard on 7/21/14.
 *
 * @author rhightower
 */
public interface ServiceQueue extends ... {
    ...

    Object service();
    SendQueue<MethodCall<Object>> requests();
    SendQueue<Event<Object>> events();
    ReceiveQueue<Response<Object>> responses();
    ...
These methods are not typically accessed. They are for integration and internal usage but they can help you understand QBit microservices a bit better.
You can access the POJO that the serviceQueue is wrapping with service(). You can send method calls directly to the serviceQueue by using the requests() method to get a sendQueue (SendQueue<MethodCall<Object>>). You can send events directly to the serviceQueue by using the events() method to get a sendQueue. Note that the sendQueue you receive will not be thread safe (sendQueues implement micro-batching), so each thread needs its own copy of an event or methodCall (request) sendQueue. A sendQueue is the client's view of the queue.
On the receiver side (service side), the events and methodCalls queues are handled by the same thread so that all events and methodCalls go to the POJO (e.g., TodoManagerImpl) on the same thread. This is what makes that POJO a Service Actor (active object).
Typically, to make calls to a Service Actor, you use a service client proxy, which is just an interface. The service client proxy can return Promises or take a Callback as the first or last argument of the method. A promise is a deferred result that you can handle asynchronously. The Promise interface is similar to ES6 promises.

Service Client Proxy interface

package com.mammatustech.todo;

import io.advantageous.reakt.promise.Promise;
import java.util.List;


public interface TodoManagerClient {
    Promise<Boolean> add(Todo todo);
    Promise<Boolean> remove(String id);
    Promise<List<Todo>> list();
}

Todo POJO to store a Todo item

package com.mammatustech.todo;

public class Todo {

    private final String name;
    private final String description;
    private final long createTime;
    private String id;

    public Todo(String name, String description, long createTime) {
        ...
    }

    //normal getters, equals, hashCode
}
To create and use a service client proxy, you use the serviceQueue.

Creating and using a service client proxy

TodoManagerClient client;
ServiceQueue serviceQueue;


//Create a client proxy to communicate with the service actor.
client = serviceQueue
        .createProxyWithAutoFlush(TodoManagerClient.class,
                Duration.milliseconds(5));


//Add an item
final Promise<Boolean> promise = Promises.blockingPromiseBoolean();

// Add the todo item.
client.add(new Todo("write", "Write tutorial", timer.time()))
        .invokeWithPromise(promise);


assertTrue("The call was successful", promise.success());
assertTrue("The return from the add call", promise.get());

//Get a list of items

final Promise<List<Todo>> promiseList = Promises.blockingPromiseList(Todo.class);

// Get a list of todo items.
client.list().invokeWithPromise(promiseList);

// See if the Todo item we created is in the listing.
final List<Todo> todoList =
        promiseList.get().stream()...


//Remove an item
// Remove promise
final Promise<Boolean> removePromise =
        Promises.blockingPromiseBoolean();
client.remove(todo.getId())
        .invokeWithPromise(removePromise);
Note: blocking promises are great for testing and integration, but not something you typically use in your reactive microservice (it sort of defeats the whole purpose).
Here is a simple unit test showing what we have done and talked about so far; after this, let's show a non-blocking example and some call coordination.

Unit test to show it is working

packagecom.mammatustech.todo;

importio.advantageous.qbit.service.ServiceBuilder;
importio.advantageous.qbit.service.ServiceQueue;
importio.advantageous.qbit.time.Duration;
importio.advantageous.qbit.util.Timer;
importio.advantageous.reakt.promise.Promise;
importio.advantageous.reakt.promise.Promises;
importorg.junit.After;
importorg.junit.Before;
importorg.junit.Test;


importjava.util.List;
importjava.util.stream.Collectors;

import staticio.advantageous.qbit.service.ServiceBuilder.serviceBuilder;
import staticorg.junit.Assert.assertEquals;
import staticorg.junit.Assert.assertTrue;

publicclassTodoManagerImplTest {

TodoManagerClient client;
ServiceQueue serviceQueue;
finalTimer timer =Timer.timer();

@Before
publicvoidsetup() {

// Create a serviceQueue with a serviceBuilder.
finalServiceBuilder serviceBuilder = serviceBuilder();

//Start the serviceQueue.
serviceQueue = serviceBuilder
.setServiceObject(newTodoManagerImpl())
.buildAndStartAll();

//Create a client proxy to communicate with the service actor.
client = serviceQueue.createProxyWithAutoFlush(TodoManagerClient.class, Duration.milliseconds(5));

}

@Test
publicvoidtest() throwsException {
finalPromise<Boolean> promise =Promises.blockingPromiseBoolean();

// Add the todo item.
client.add(newTodo("write", "Write tutorial", timer.time()))
.invokeWithPromise(promise);


assertTrue("The call was successful", promise.success());
assertTrue("The return from the add call", promise.get());

finalPromise<List<Todo>> promiseList =Promises.blockingPromiseList(Todo.class);

// Get a list of todo items.
client.list().invokeWithPromise(promiseList);

// See if the Todo item we created is in the listing.
finalList<Todo> todoList = promiseList.get().stream()
.filter(todo -> todo.getName().equals("write")
&& todo.getDescription().equals("Write tutorial")).collect(Collectors.toList());

// Make sure we found it.
assertEquals("Make sure there is one", 1, todoList.size());


// Remove promise
finalPromise<Boolean> removePromise =Promises.blockingPromiseBoolean();
client.remove(todoList.get(0).getId()).invokeWithPromise(removePromise);



finalPromise<List<Todo>> promiseList2 =Promises.blockingPromiseList(Todo.class);

// Make sure it is removed.
client.list().invokeWithPromise(promiseList2);

// See if the Todo item we created is removed.
finalList<Todo> todoList2 = promiseList2.get().stream()
.filter(todo -> todo.getName().equals("write")
&& todo.getDescription().equals("Write tutorial")).collect(Collectors.toList());

// Make sure we don't find it.
assertEquals("Make sure there is one", 0, todoList2.size());

}

@After
publicvoidtearDown() {
serviceQueue.stop();
}


}
You can find this source code at this github repo.
Here is a build file for the example so you can see the dependencies.

Build file build.gradle

group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin:'java'


apply plugin:'application'


mainClassName ="com.mammatustech.todo.TodoServiceMain"


compileJava {
sourceCompatibility =1.8
}

repositories {
mavenCentral()
mavenLocal()
}

dependencies {
testCompile group:'junit', name:'junit', version:'4.11'
compile 'io.advantageous.qbit:qbit-vertx:1.8.3'
compile 'io.advantageous.qbit:qbit-admin:1.8.3'
}

Executing a bunch of methods at once

We can execute a bunch of methods at once and use Promises.all to do the next thing when they all succeed, or Promises.any to do something when any of them succeeds.

Executing many methods on a service proxy at once

    @Test
publicvoid testUsingAll() throws Exception {

/* A list of promises for things we want to do all at once. */
finalList<Promise<Boolean>> promises =newArrayList<>(3);
finalCountDownLatch latch =newCountDownLatch(1);
finalAtomicBoolean success =newAtomicBoolean();


/** Add a todoItem to the client add method */
finalTodo todo =newTodo("write", "Write tutorial", timer.time());
finalPromise<Boolean> promise
= client.add(todo);
promises.add(promise);

/** Add two more. */
promises.add(client.add(newTodo("callMom", "Call Mom", timer.time())));
promises.add(client.add(newTodo("callSis", "Call Sister", timer.time())));

/** Now async wait for them all to come back. */
Promises.all(promises).then(done -> {
success.set(true);
latch.countDown();
}).catchError(e-> {
success.set(false);
latch.countDown();
});

/** Invoke the promises. */
promises.forEach(Promise::invoke);

/** They are all going to come back async. */
latch.await();
assertTrue(success.get());
}

Thread model

The serviceQueue can be started and stopped. There are several options to start a serviceQueue. You can start it with two threads: one thread for response handling and another thread for request/event handling (startAll()). You can start the serviceQueue with just the request/event handling thread (start()). You can also start it with one thread managing requests/events and responses. Exercise caution with the last option: if a callback or promise blocks, your serviceQueue will be blocked. Typically you use startAll, or you use a serviceBundle where one response queue is shared among many serviceQueues. The serviceQueue was meant to be composable, so you can access the queues and provide your own thread model if needed or desired.
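As a sketch of the two common start modes (buildAndStartAll appears in the earlier examples; building first and then starting explicitly is an assumption about the builder API on my part):

// One thread for requests/events plus one thread for responses.
serviceQueue = serviceBuilder
        .setServiceObject(new TodoManagerImpl())
        .buildAndStartAll();

// Alternatively, build first and start only the request/event handling thread.
ServiceQueue queue = serviceBuilder
        .setServiceObject(new TodoManagerImpl())
        .build();       // assumed builder method; the text above shows buildAndStartAll()
queue.start();          // responses are then drained elsewhere, e.g. by a shared response queue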

Exception Handling

Typically you handle an exception from a Service Actor by calling callback.reject(exception) to pass the exception downstream to the client, or you catch it and handle it in whatever way makes sense. If you do not catch an exception, then the thread for your Service Actor will terminate. However, QBit will log the exception that you did not handle and start a new thread to manage your Service Actor.
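For example, a Service Actor method can catch a failure and pass it downstream via the callback rather than letting it escape; a minimal sketch (the failing call is hypothetical):

public void add(final Callback<Boolean> callback, final Todo todo) {
    try {
        todoMap.put(todo.getId(), todo);     // any call here could throw
        callback.resolve(true);
    } catch (Exception ex) {
        // Pass the exception downstream instead of letting it kill the actor's thread.
        callback.reject(ex);
    }
}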

Handling calls to other Service Actors

In the QBit microservice lib it is common to call other async services, remote Service Actors, REST services, and async NoSQL database drivers. If your Service Actor is stateful (which is common with high-speed services), then you will want to use a Reactor. There is the Reactor that comes with QBit, which is EOL (we are replacing it with the one we wrote for Reakt), and there is the Reactor that comes with Reakt. The serviceQueue ensures events and method calls all come to the Service Actor on one thread. The reactor is a way to also run method call callbacks on the same thread; since the callbacks happen on the same thread as the Service Actor, access to the Service Actor's data (fields, collaborating objects, etc.) is also thread safe. You only need a Reactor if you want to handle callbacks on the same thread as the Service Actor, which is not always needed. You can also use the Reactor to handle streaming data on the same thread as the Service Actor. The Reactor can also be used to schedule async tasks, or simply to schedule a task to run on the Service Actor as soon as possible.

Getting notified when you start, stop, etc.

You can get notified of different Service Actor lifecycle events like started, stopped, when the micro-batch limit was met, when the request queue is empty, and more. These lifecycle events allow you to do things in batches and thus efficiently pass data from one service to another (both remote and local). The reactor, for example, has a process method that is usually called when the request queue has reached a limit or is empty. There are two ways to do this: you can use a QueueCallbackHandler with a ServiceBuilder (or ServiceBundle), or you can use the annotation @QueueCallback.

Admin package

The Admin package adds Consul discovery, and StatsD support to QBit microservices, and provides a simplified builder for creating a set of managed services which you can easily expose via REST or WebSocket RPC.
It is quite easy to build bridges into the QBit world, and we have done so via Kafka, the Vert.x event bus and even JMS. QBit was meant to be composable, so you can pick your messaging platform and plug QBit into it.
Two main classes of note in the QBit admin package are the ManagedServiceBuilder and the ServiceManagementBundle. The ManagedServiceBuilder lets you build a group of services and easily wire them to the same health monitor, discovery system and metrics/stats system, while the ServiceManagementBundle lets services interact with common QBit facilities like stats, health and discovery.
Let's show some simple examples using these, which we will continue in our discussion of the ServiceBundle and the ServiceEndpointServer.

Understanding the QBit microservices lib's serviceBundle


Understanding the serviceBundle

The serviceBundle is a collection of services sitting behind serviceQueues. You use a serviceBundle when you want to share a response queue and a response queue thread. The serviceBundle can also share the same thread for the request queue, but that is not the default. The ServiceEndpointServer, which is used to expose service actors as remote microservices via REST and WebSocket, uses the serviceBundle.
The serviceBundle is also used to add other forms of services, like service pools, and sharded services.
Let's walk through an example. We will use the Todo example that we used for serviceQueues. Since we are covering the ServiceBundle, we will add another service called Auditor and its implementation, AuditorImpl, and we will change TodoManagerImpl to use the Auditor.
Let's review our Todo example. The Todo example has a TodoManagerClient interface.

TodoManagerClient

package com.mammatustech.todo;

import io.advantageous.reakt.promise.Promise;
import java.util.List;

public interface TodoManagerClient {
    Promise<Boolean> add(Todo todo);
    Promise<Boolean> remove(String id);
    Promise<List<Todo>> list();
}
This is the interface we will use to invoke async methods.
To this we will add a new service called Auditor.

Auditor

package com.mammatustech.todo;

interface Auditor {
    void audit(final String operation, final String log);
}
We will keep the implementation simple so we can focus on QBit and the serviceBundle.

AuditorImpl

package com.mammatustech.todo;

public class AuditorImpl implements Auditor {

    public void audit(final String operation, final String log) {

        System.out.printf("operations %s, message %s log\n",
                operation, log);
    }
}
Now, to mix things up a bit, and since we are talking about a serviceBundle, we will pass an Auditor instance to the constructor of the TodoManagerImpl.

TodoManagerImpl using the Auditor

package com.mammatustech.todo;

import io.advantageous.qbit.annotation.QueueCallback;
import io.advantageous.qbit.annotation.QueueCallbackType;
import io.advantageous.qbit.reactive.Callback;

import java.util.ArrayList;
import java.util.Map;
import java.util.TreeMap;

import static io.advantageous.qbit.service.ServiceProxyUtils.flushServiceProxy;

public class TodoManagerImpl {

    private final Map<String, Todo> todoMap = new TreeMap<>();
    private final Auditor auditor;

    public TodoManagerImpl(final Auditor auditor) {
        this.auditor = auditor;
    }

    public void add(final Callback<Boolean> callback,
                    final Todo todo) {
        todoMap.put(todo.getId(), todo);
        auditor.audit("add", "added new todo");
        callback.resolve(true);
    }

    public void remove(final Callback<Boolean> callback,
                       final String id) {
        final Todo removed = todoMap.remove(id);

        auditor.audit("remove", "removed new todo");
        callback.resolve(removed != null);
    }

    public void list(final Callback<ArrayList<Todo>> callback) {
        auditor.audit("list", "list called");
        callback.accept(new ArrayList<>(todoMap.values()));
    }

    @QueueCallback({QueueCallbackType.LIMIT,
            QueueCallbackType.EMPTY,
            QueueCallbackType.IDLE})
    public void process() {
        flushServiceProxy(auditor);
    }
    ...
}
Note that the add, remove and list methods all use the auditor instance. Unlike the serviceQueue, there is no auto-flush feature. This is because serviceBundles contain many serviceQueues. If you wanted auto-flush for a serviceQueue in a bundle, you would add the serviceQueue to the bundle, or look up the serviceQueue from the bundle and use it to create the auto-flush client proxy. This is usually not needed, since manually flushing at the right time is better for thread hand-off performance and IO performance. QBit uses micro-batching to optimize sending operations to other local and remote service actors.

QueueCallbacks

Since the TodoManagerImpl is using another service actor, we will flush operations to that actor when the processing queue for the TodoManagerImpl is idle, empty or reached its limit.

TodoManager using QueueCallbacks

packagecom.mammatustech.todo;
...
importio.advantageous.qbit.annotation.QueueCallback;
importio.advantageous.qbit.annotation.QueueCallbackType;

import staticio.advantageous.qbit.service.ServiceProxyUtils.flushServiceProxy;

publicclassTodoManagerImpl {
...

@QueueCallback({QueueCallbackType.LIMIT,
QueueCallbackType.EMPTY,
QueueCallbackType.IDLE})
publicvoidprocess() {
flushServiceProxy(auditor);
}
...
You can do this with annotations. (You can also do it without annotations, which we will show later.) The above @QueueCallback annotation says: if the processing queue is empty (QueueCallbackType.EMPTY, no more requests or events in the queue), or if the request processing queue is idle (QueueCallbackType.IDLE, not busy at all), or if we have hit the queue limit (QueueCallbackType.LIMIT, which can only happen under heavy load or if you set the limit very low), then call process(). A queue limit of ten would have ten times less thread hand-off time than a queue limit of one (under heavy load). If the auditor were a remote service, having a batch size larger than one would save on the cost of IO operations.
You can turn off micro-batching by setting the processing queue batch size to 1.
Later, when we introduce the Reactor, you can set up a recurring job that fires every 10 ms or 100 ms to flush collaborating services like the auditor, as sketched below.
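As a sketch of that pattern, assuming the service holds a Reakt Reactor (passed in via the constructor, as in the earlier examples):

/* Flush the auditor client proxy every 100 ms via a repeating reactor task. */
reactor.addRepeatingTask(Duration.ofMillis(100),
        () -> flushServiceProxy(auditor));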
You can use QueueCallbacks with any serviceQueue and with any serviceBundle.
There are other QueueCallbacks to get notified when the service has shut down and when it has started.

QueueCallback for init and shutdown

public class TodoManagerImpl {
    ...

    @QueueCallback({QueueCallbackType.INIT})
    public void init() {
        auditor.audit("init", "init service");
    }

    @QueueCallback({QueueCallbackType.SHUTDOWN})
    public void shutdown() {
        System.out.println("operation shutdown, shutdown service");
        flushServiceProxy(auditor);
    }
The init method gets called once when the serviceQueue for the microservice actor starts up. The shutdown method gets called once when the microservice actor shuts down.
Let's create a serviceBundle and add the auditor and todoManager services to it, and run them.

Using the service bundle with the auditor and todoManager services

/** Object address to the todoManagerImpl service actor. */
privatefinalString todoAddress ="todoService";
/** Object address to the auditorService service actor. */
privatefinalString auditorAddress ="auditorService";
/** Service Bundle */
privateServiceBundle serviceBundle;
/** Client service proxy to the todoManager */
privateTodoManagerClient client;
/** Client service proxy to the auditor. */
privateAuditor auditor;

/* Create the serviceBundleBuilder. */
finalServiceBundleBuilder serviceBundleBuilder = serviceBundleBuilder();

/* Create the service bundle. */
serviceBundle = serviceBundleBuilder.build();

/* Add the AuditorImpl instance to the serviceBundle. */
serviceBundle.addServiceObject(auditorAddress, newAuditorImpl());

/* Create a service client proxy for the auditor. */
auditor = serviceBundle.createLocalProxy(Auditor.class, auditorAddress);

/* Create a todo manager and pass the
client proxy of the auditor to it. */
finalTodoManagerImpl todoManager =newTodoManagerImpl(auditor);

// Add the todoManager to the serviceBundle.
serviceBundle
.addServiceObject(todoAddress, todoManager);

/* Create a client proxy to communicate
with the service actor. */
client = serviceBundle
.createLocalProxy(TodoManagerClient.class,
todoAddress);

// Start the service bundle.
serviceBundle.start();
Above we create the serviceBundleBuilder, which can be used to set the response and request queue sizes, types, batch size, and more. Then we create the serviceBundle. Next we add the auditor microservice actor to the serviceBundle under the address specified by auditorAddress. Next we create a service client proxy for the auditor microservice actor that we can pass to the TodoManagerImpl. We then add the TodoManagerImpl, which forms the microservice actor for the TodoManager service. Next we create a client of the TodoManager service to test with. Then we start the serviceBundle.
To use the `todoManager` service proxy client, aka `client`, the code is much like it was before with the `serviceQueue` example, except now we will flush explicitly (since by default the queue batch size is greater than 1).

Using the todoManager microservice client proxy

finalPromise<Boolean> promise =Promises.blockingPromiseBoolean();

// Add the todo item.
client.add(newTodo("write", "Write tutorial", timer.time()))
.invokeWithPromise(promise);
flushServiceProxy(client);


assertTrue("The call was successful", promise.success());
assertTrue("The return from the add call", promise.get());

finalPromise<List<Todo>> promiseList =Promises.blockingPromiseList(Todo.class);

// Get a list of todo items.
client.list().invokeWithPromise(promiseList);

// Call flush since this is not an auto-flush. */
flushServiceProxy(client);


// See if the Todo item we created is in the listing.
finalList<Todo> todoList = promiseList.get().stream()
.filter(todo -> todo.getName().equals("write")
&& todo.getDescription().equals("Write tutorial")).collect(Collectors.toList());

// Make sure we found it.
assertEquals("Make sure there is one", 1, todoList.size());


// Remove promise
finalPromise<Boolean> removePromise =
Promises.blockingPromiseBoolean();
client.remove(todoList.get(0).getId())
.invokeWithPromise(removePromise);
flushServiceProxy(client);


finalPromise<List<Todo>> promiseList2 =
Promises.blockingPromiseList(Todo.class);

// Make sure it is removed.
client.list().invokeWithPromise(promiseList2);
flushServiceProxy(client);

// See if the Todo item we created is removed.
finalList<Todo> todoList2 = promiseList2.get().stream()
.filter(todo -> todo.getName().equals("write")
&& todo.getDescription()
.equals("Write tutorial"))
.collect(Collectors.toList());

// Make sure we don't find it.
assertEquals("Make sure there is one",
0, todoList2.size());

flushServiceProxy(client);
We can also repeat the async example where we executed more than one operation at a time.

Making async calls and coordinating with Promises

/* A list of promises for things we 
want to do all at once. */
finalList<Promise<Boolean>> promises =
newArrayList<>(3);
finalCountDownLatch latch =newCountDownLatch(1);
finalAtomicBoolean success =newAtomicBoolean();


/** Add a todoItem to the client add method */
finalTodo todo =newTodo("write", "Write tutorial",
timer.time());
finalPromise<Boolean> promise
= client.add(todo);
promises.add(promise);

/** Add two more. */
promises.add(client.add(newTodo("callMom",
"Call Mom", timer.time())));
promises.add(client.add(newTodo("callSis",
"Call Sister", timer.time())));

/** Now async wait for them all to come back. */
Promises.all(promises).then(done -> {
success.set(true);
latch.countDown();
}).catchError(e -> {
success.set(false);
latch.countDown();
});

/** Invoke the promises. */
promises.forEach(Promise::invoke);
flushServiceProxy(client);


/** They are all going to come back async. */
latch.await();
assertTrue(success.get());
Please note that you can explicitly flush a client microservice proxy; it will also flush if you go over the limit for the request queue, or you can set the batch size to 1.

Understanding the QBit microservices lib's serviceEndpointServer

The ServiceEndpointServer essentially exposes a ServiceBundle to WebSocket and REST remote calls. This document is using the Todo example from the discussion of ServiceQueue and the ServiceBundle.
In fact, you can use ServiceEndpointServer in much the same way that we used ServiceBundle.

Creating a serviceEndpointServer

import io.advantageous.qbit.server.EndpointServerBuilder;
import io.advantageous.qbit.server.ServiceEndpointServer;
import static io.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;
...

/* Create the endpointServerBuilder. */
final EndpointServerBuilder endpointServerBuilder =
        endpointServerBuilder();

endpointServerBuilder.addService(auditorAddress,
        new AuditorImpl());


/* Create the service endpoint server. */
serviceEndpointServer = endpointServerBuilder.build();
We use an EndpointServerBuilder to build a serviceEndpointServer. You can add services to the builder, or you can add them directly to the serviceEndpointServer.
Note that you can use EndpointServerBuilder, but most examples will use the ManagedServiceBuilder (sketched below), which has the benefit of wiring the services it creates into the microservice health check system and the microservice statistics/monitoring/distributed MDC logging systems that QBit provides.
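As a rough sketch only (based on the ManagedServiceBuilder main method shown later in this article; the service constructor argument here is illustrative and not the article's exact wiring), starting a service that way looks something like this:

import io.advantageous.qbit.admin.ManagedServiceBuilder;

import static io.advantageous.qbit.admin.ManagedServiceBuilder.managedServiceBuilder;

public class TodoMain {
    public static void main(final String... args) {
        /* ManagedServiceBuilder wires in health checks, stats and MDC logging for us. */
        final ManagedServiceBuilder managedServiceBuilder = managedServiceBuilder()
                .setRootURI("/v1")   // root URI for the REST endpoints
                .setPort(8888);      // defaults to 8080 or the PORT environment variable

        /* Register the service impl and start the endpoint server.
           Passing the auditor impl directly is just for illustration. */
        managedServiceBuilder
                .addEndpointService(new TodoManagerImpl(new AuditorImpl()))
                .startApplication();
    }
}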
The serviceEndpointServer has a serviceBundle.

Using serviceEndpointServer's serviceBundle

/* Create a service client proxy for the auditor. */
auditor = serviceEndpointServer.serviceBundle()
.createLocalProxy(Auditor.class, auditorAddress);

/* Create a todo manager and pass the
client proxy of the auditor to it. */
final TodoManagerImpl todoManager =
        new TodoManagerImpl(auditor);

// Add the todoManager to the serviceBundle.
serviceEndpointServer.serviceBundle()
.addServiceObject(todoAddress, todoManager);

/* Create a client proxy to communicate
with the service actor. */
client = serviceEndpointServer.serviceBundle()
.createLocalProxy(TodoManagerClient.class,
todoAddress);
Note that if we wanted to hide access to the auditor, we could put the auditor in another serviceQueue or serviceBundle that is not accessible to WebSocket or REST, as in the sketch below.
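Here is a minimal sketch of that idea, reusing the serviceBundleBuilder from the earlier ServiceBundle example (the builder import and bundle start-up calls are assumed and omitted): the auditor lives in a private bundle that the endpoint server never exposes, and only the Todo manager is added to the endpoint server's bundle.

/* Private bundle for internal-only services (never exposed over REST or WebSocket).
   Assumes the serviceBundleBuilder shown in the earlier ServiceBundle example. */
final ServiceBundle internalBundle = serviceBundleBuilder().build();
internalBundle.addServiceObject(auditorAddress, new AuditorImpl());
final Auditor auditor = internalBundle.createLocalProxy(Auditor.class, auditorAddress);

/* Only the TodoManager goes into the endpoint server's bundle,
   so only it is callable over WebSocket/REST. */
serviceEndpointServer.serviceBundle()
        .addServiceObject(todoAddress, new TodoManagerImpl(auditor));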
We can use the proxy client just like we did before. We can create a local microservice actor proxy client. The only real difference is that auto-flush is built into serviceEndpointServer and not serviceBundle.

Example of making local calls to the TodoService

/* A list of promises for things we want to do all at once. */
final List<Promise<Boolean>> promises = new ArrayList<>(3);
final CountDownLatch latch = new CountDownLatch(1);
final AtomicBoolean success = new AtomicBoolean();


/** Add a todoItem to the client add method */
final Todo todo = new Todo("write", "Write tutorial", timer.time());
final Promise<Boolean> promise
        = client.add(todo);
promises.add(promise);

/** Add two more. */
promises.add(client.add(new Todo("callMom", "Call Mom", timer.time())));
promises.add(client.add(new Todo("callSis", "Call Sister", timer.time())));

/** Now async wait for them all to come back. */
Promises.all(promises).then(done -> {
success.set(true);
latch.countDown();
}).catchError(e -> {
success.set(false);
latch.countDown();
});

/** Invoke the promises. */
promises.forEach(Promise::invoke);


/** They are all going to come back async. */
latch.await();
assertTrue(success.get());
Ok. Up until this point, nothing is really different than before. The TodoManagerImpl is now accessible via REST and WebSocket.

Using TodoManager service over WebSocket

import io.advantageous.qbit.client.Client;
import io.advantageous.qbit.client.ClientBuilder;

...
//REMOVE THIS Create a client proxy to communicate with the service actor.
//REMOVE client = serviceEndpointServer.serviceBundle()
//REMOVE .createLocalProxy(TodoManagerClient.class, todoAddress);

/* Start the service endpoint server
and wait until it starts. */
serviceEndpointServer.startServerAndWait();



/* Create the WebSocket Client Builder. */
final ClientBuilder clientBuilder = ClientBuilder.clientBuilder();

/** Build the webSocketClient. */
webSocketClient = clientBuilder.setHost("localhost")
.setPort(8080)
.build();

/* Create a REMOTE client proxy to communicate with the service actor. */
client = webSocketClient.createProxy(TodoManagerClient.class, todoAddress);

/* Start the remote client. */
webSocketClient.start();

...


@After
public void tearDown() throws Exception {
    Thread.sleep(100);
    serviceEndpointServer.stop(); //stop the server
    webSocketClient.stop(); //stop the client
}
The client, like the service endpoint server, also auto-flushes. You can use the remote client (remote microservice client proxy) just like before (when we showed the local microservice client proxy).

Remote client gets used just like the local client.

/* A list of promises for things we want to do all at once. */
final List<Promise<Boolean>> promises = new ArrayList<>(3);
final CountDownLatch latch = new CountDownLatch(1);
final AtomicBoolean success = new AtomicBoolean();


/** Add a todoItem to the client add method */
final Todo todo = new Todo("write", "Write tutorial", timer.time());
final Promise<Boolean> promise
        = client.add(todo);
promises.add(promise);

/** Add two more. */
promises.add(client.add(new Todo("callMom", "Call Mom", timer.time())));
promises.add(client.add(new Todo("callSis", "Call Sister", timer.time())));

/** Now async wait for them all to come back. */
Promises.all(promises).then(done -> {
success.set(true);
latch.countDown();
}).catchError(e -> {
success.set(false);
latch.countDown();
});

/** Invoke the promises. */
promises.forEach(Promise::invoke);


/** They are all going to come back async. */
latch.await();
assertTrue(success.get());
To expose the TodoManagerImpl to REST, we will define a main method to start the server. Then we will add @RequestMapping, @POST, @PUT, @DELETE, @RequestParam, and @GET.

Adding @RequestMapping, @POST, @PUT, @DELETE, @RequestParam, and @GET

package com.mammatustech.todo;
...
import io.advantageous.qbit.annotation.*;
import io.advantageous.qbit.annotation.http.DELETE;
import io.advantageous.qbit.annotation.http.GET;
import io.advantageous.qbit.annotation.http.PUT;
import io.advantageous.qbit.reactive.Callback;
...
@RequestMapping("/todo-service")
public class TodoManagerImpl {

    private final Map<String, Todo> todoMap = new TreeMap<>();
    private final Auditor auditor;

    public TodoManagerImpl(final Auditor auditor) {
        this.auditor = auditor;
    }


    @GET("/todo/count")
    public int size() {
        return todoMap.size();
    }



    @PUT("/todo/")
    public void add(final Callback<Boolean> callback, final Todo todo) {
        todoMap.put(todo.getId(), todo);
        auditor.audit("add", "added new todo");
        callback.resolve(true);
    }

    @DELETE("/todo/")
    public void remove(final Callback<Boolean> callback,
                       @RequestParam("id") final String id) {
        final Todo removed = todoMap.remove(id);

        auditor.audit("remove", "removed todo");
        callback.resolve(removed != null);
    }

    @GET("/todo/")
    public void list(final Callback<List<Todo>> callback) {
        auditor.audit("list", "listed todo items");
        callback.accept(new ArrayList<>(todoMap.values()));
    }
...
}
The main method just creates the microservices and starts the server.

Main method to start the service

package com.mammatustech.todo;

import io.advantageous.qbit.server.EndpointServerBuilder;
import io.advantageous.qbit.server.ServiceEndpointServer;

import static io.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;

public class TodoServiceMain {

    public static void main(final String... args) {


        /** Object address to the auditorService service actor. */
        final String auditorAddress = "auditorService";


        /* Create the endpointServerBuilder. */
        final EndpointServerBuilder endpointServerBuilder =
                endpointServerBuilder();

        endpointServerBuilder.setPort(8080).setUri("/");

        endpointServerBuilder.addService(auditorAddress,
                new AuditorImpl());


        /* Create the service endpoint server. */
        final ServiceEndpointServer serviceEndpointServer =
                endpointServerBuilder.build();


        /* Create a service client proxy for the auditor. */
        final Auditor auditor = serviceEndpointServer
                .serviceBundle()
                .createLocalProxy(Auditor.class, auditorAddress);

        /* Create a todo manager and pass
           the client proxy of the auditor to it. */
        final TodoManagerImpl todoManager =
                new TodoManagerImpl(auditor);

        // Add the todoManager to the serviceBundle.
        serviceEndpointServer.addService(todoManager);

        /* Start the service endpoint server
           and wait until it starts. */
        serviceEndpointServer.startServerAndWait();

        System.out.println("Started");
    }

}
No RESTful microservice is proven to be RESTful without some curl script.

curl accessing service

echo"Todo item list before "
curl http://localhost:8080/todo-service/todo/
echo

echo"Count of Todo items "
curl http://localhost:8080/todo-service/todo/count
echo

echo"PUT a TODO item"
curl -X PUT http://localhost:8080/todo-service/todo/ \
-H 'Content-Type: application/json' \
-d '{"name":"wash-car", "description":"Take the car to the car wash", "createTime":1463950095000}'
echo


echo"Todo item list after add "
curl http://localhost:8080/todo-service/todo/
echo

echo"Count of Todo items after add "
curl http://localhost:8080/todo-service/todo/count
echo

echo"Remove a TODO item"
curl -X DELETE http://localhost:8080/todo-service/todo/?id=wash-car::1463950095000
echo


echo"Todo item list after add "
curl http://localhost:8080/todo-service/todo/
echo

echo"Count of Todo items after add "
curl http://localhost:8080/todo-service/todo/count
echo

$ ./curl-test.sh
Todo item list before
[]
Count of Todo items
0
PUT a TODO item
true
Todo item list after add
[{"name":"wash-car","description":"Take the car to the car wash","createTime":1463950095000,"id":"wash-car::1463950095000"}]
Count of Todo items after add
1
Remove a TODO item
true
Todo item list after remove
[]
Count of Todo items after remove
0
For completeness, here is the build file.

Build file

group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin: 'java'


apply plugin: 'application'


mainClassName = "com.mammatustech.todo.TodoServiceMain"


compileJava {
    sourceCompatibility = 1.8
}

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile 'io.advantageous.qbit:qbit-vertx:1.9.1'
    compile 'io.advantageous.qbit:qbit-admin:1.9.1'
    compile 'io.advantageous.reakt:reakt:2.8.15'
}

Conclusion

ServiceEndpointServer exposes a ServiceBundle as a remotely accessible microservice whose methods can be invoked over WebSocket and HTTP/REST. Remote proxies can be created with the QBit Client/ClientBuilder. The ServiceEndpointServer and the Client are both auto-flushing (the flush interval is configurable from their respective builders).
To learn more about QBit and REST see Restful QBit tutorial and Resourceful RESTful Microservices tutorial.
To learn about the ManagedServiceBuilder please read QBit Batteries included which covers health, stats and microservice monitoring. The QBit batteries included also covers using QBit with Swagger. QBit can generate swagger JSON from all of its services which you can then use to generate clients for other platforms.

KPI Microservices Monitoring with QBit


KPI Microservices Monitoring

We recently revised this. Here is the old version. You can see how much QBit and Reakt have progressed.
There has been a lot written on the subject of Microservices Monitoring. Monitoring is a bit of an overloaded term. There is service health monitoring, which can be done with tools like Mesosphere/Marathon, Nomad, Consul, etc. There is also KPI monitoring, which is done with tools like Grafana, Graphite, InfluxDB, StatsD, etc. Then there is log monitoring and search with tools like the ELK stack (Elasticsearch, Logstash, and Kibana) and Splunk, where you can easily trace logs down to the request or client ID in a request header. And then there is system monitoring (JVM, slow query logs, network traffic, etc.) with tools like systemd, and more. You will want all of this when you are doing Microservices Development.
The more insight you have into your system, the easier it will be to support and debug. Microservices imply async distributed development. Doing async distributed development without monitoring is like running with scissors.
To summarize, microservices monitoring includes:
  • KPI Monitoring (e.g., StatsD, Grafana, Graphite, InfluxDB, etc.)
  • Health Monitoring (e.g., Consul, Nomad, Mesosphere/Marathon, Heroku, etc.)
  • Log monitoring (e.g., ELK stack, Splunk, etc.)
QBit has support for ELK/Splunk by providing support for MDC. QBit also supports health-monitoring systems like Mesosphere/Marathon, Heroku, Consul, Nomad, etc. through an internal health system that all QBit service actors check in with, which then gets rolled up to those external systems.
In this tutorial we are going to just cover KPI monitoring for microservices which is sometimes called Metrics Monitoring or Stats Monitoring. KPI stands for Key Performance Indicators. These are the things you really care about to see if your system is up and running, and how hard it is getting hit, and how it is performing.
At the heart of the QBit KPI system is the Metrics collector. QBit uses the Metrik interface for tracking Microservice KPIs.

Metrik Interface for tracking KPIs

public interface MetricsCollector {

    default void increment(final String name) {
        recordCount(name, 1);
    }

    default void recordCount(String name, long count) {
    }

    default void recordLevel(String name, long level) {
    }

    default void recordTiming(String name, long duration) {
    }

}
We are recording counts per time period, levels or gauges at an instant in time, and timings, which capture how long something took.
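To make the three flavors concrete, here is a small hypothetical usage sketch against the MetricsCollector interface above (the class name and metric names are made up for illustration):

public class ExampleService {

    private final MetricsCollector metrics;

    public ExampleService(final MetricsCollector metrics) {
        this.metrics = metrics;
    }

    public void handleRequest(final int queueDepth) {
        final long start = System.currentTimeMillis();

        metrics.increment("exampleservice.request.called");            // count per time period
        metrics.recordLevel("exampleservice.queue.depth", queueDepth); // level/gauge at this instant

        // ... do the actual work ...

        metrics.recordTiming("exampleservice.request.time",
                System.currentTimeMillis() - start);                   // how long it took
    }
}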

Demonstrating using QBit metrics

This guide assumes you have read through the main overview of QBit and have gone through the first tutorials. You should be able to follow along even if you have not, but you will be able to follow along better if you have at least skimmed the docs and gone through the first set of tutorials.
Let's show it. First we need to build. Use Gradle as follows:

build.gradle

group 'qbit-ex'
version '1.0-SNAPSHOT'

apply plugin: 'java'


apply plugin: 'application'


mainClassName = "com.mammatustech.todo.TodoServiceMain"


compileJava {
sourceCompatibility = 1.8
}

repositories {
mavenCentral()
mavenLocal()
}

dependencies {
testCompile group: 'junit', name: 'junit', version: '4.11'
compile 'io.advantageous.reakt:reakt:2.8.17'
compile 'io.advantageous.qbit:qbit-admin:1.10.0.RELEASE'
compile 'io.advantageous.qbit:qbit-vertx:1.10.0.RELEASE'
}


Read the comments in all of the code listings.
In this example we will use StatsD but QBit is not limited to StatsD for Microservice KPI monitoring. In fact QBit can do amazing things with its StatsService like clustered rate limiting based on OAuth header info, but that is beyond this tutorial.
StatsD is a protocol that you can send KPI messages to over UDP. Digital Ocean has a really nice description of StatsD.
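Under the covers, a StatsD message is just a small UDP datagram of the form name:value|type (for example, a counter increment is name:1|c). As a rough illustration only (this is not how QBit sends its metrics; the host and port here match the Docker setup used below), you could send one counter by hand like this:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsDByHand {
    public static void main(final String... args) throws Exception {
        // "todoservice.add.called:1|c" increments a StatsD counter;
        // gauges use |g and timings use |ms.
        final byte[] payload = "todoservice.add.called:1|c"
                .getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("192.168.99.100"), 8125));
        }
    }
}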
The easiest way to setup StatsD is to use Docker. Docker is a great tool for development, and using Docker with Nomad, CoreOS, Mesosphere/Marathon, etc. is a great way to deploy Docker containers, but at a minimum you should be using the Docker tools for development.
Set up the StatsD server stack by using this public docker container.
QBit ships with StatsD support in the qbit-admin lib (jar). It has done this for a long time.
We will connect to StatsD with this URI.

URI to connect to with StatsD

final URI statsdURI = URI.create("udp://192.168.99.100:8125");
Depending on how you have Docker set up, your URI might look a bit different. If you are running Docker tools on a Mac, then that should be your URI. (On Linux the above IP is likely to be localhost, not 192.168.99.100. Go through the Docker tool tutorials if you are lost at this point. It will be worth your time. I promise — promise.invoke().)
If you have not already followed the instructions at statsD, grafana, influxdb docker container docs, do so now.

Running Docker

docker run -d \
--name docker-statsd-influxdb-grafana \
-p 3003:9000 \
-p 3004:8083 \
-p 8086:8086 \
-p 22022:22 \
-p 8125:8125/udp \
samuelebistoletti/docker-statsd-influxdb-grafana
The above yields

Servers

Host Port   Container Port   Service

3003        9000             grafana (to see the results)
8086        8086             influxdb (to store the results)
3004        8083             influxdb-admin (to query the results)
8125        8125             statsd (server that listens for StatsD UDP messages)
22022       22               sshd
If you want to see the metrics and see if this is working, go through the influxDB tutorial and look around at the measurements with the influx-admin. Influx is a time series database. Grafana allows you to see pretty graphs and charts of the microservice KPIs that we are collecting. You will want to learn grafana as well.
We use the host and port of the URI to connect to the StatsD daemon that is running on the docker container.

Setting up StatsD by using QBit managedServiceBuilder

...
managedServiceBuilder.enableStatsD(URI.create("udp://192.168.99.100:8125"));
managedServiceBuilder.getContextMetaBuilder().setTitle("TodoMicroService");
We covered using and setting up the managedServiceBuilder in the first tutorials, and the complete code listing is below. You could use managedServiceBuilder to create a statsCollector as follows:

You could do this... managedServiceBuilder to create the StatsCollector/MetricsCollector

StatsCollector statsCollector = managedServiceBuilder.createStatsCollector();

/* Start the service. */
managedServiceBuilder.addEndpointService(new TodoService(reactor, statsCollector))
Since services typically deal with the health system, the reactor (callback management, task management, repeating tasks), and the stats collector, we created a ServiceManagementBundle, which is a facade over the health system, stats, and the reactor.

Better way to work with stats, health and the reactor

/** Create the management bundle for this service. */
final ServiceManagementBundle serviceManagementBundle =
serviceManagementBundleBuilder().setServiceName("TodoServiceImpl")
.setManagedServiceBuilder(managedServiceBuilder).build();
The QBit StatsCollector interface extends the Metrik MetricsCollector interface (from QBit 1.5 onwards). ServiceManagementBundle has a stats method that returns a StatsCollector, as well as common facade methods for the health system and the reactor.
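As a small sketch of the facade in use (mgmt.increment and mgmt.reactor() appear in the service below; the stats() accessor is the one just described, and the timing metric name is made up):

public void addTodo(final Todo todo) {
    final long start = System.currentTimeMillis();

    mgmt.increment("addTodo.called");      // KPI count via the facade
    todoMap.put(todo.getId(), todo);

    /* stats() exposes the underlying StatsCollector (a Metrik MetricsCollector). */
    mgmt.stats().recordTiming("addTodo.time",
            System.currentTimeMillis() - start);
}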

Using the StatsCollector.

Then we just need to use it.

Using the StatsCollector to collect KPIs about our service

For kicks, we track the KPI todoservice.i.am.alive every three seconds.

Tracking KPI i.am.alive

@RequestMapping("/todo-service")
publicclassTodoServiceImplimplementsTodoService {


privatefinalMap<String, Todo> todoMap =newTreeMap<>();

privatefinalServiceManagementBundle mgmt;

publicTodoServiceImpl(ServiceManagementBundlemgmt) {
this.mgmt = mgmt;
/** Send stat count i.am.alive every three seconds. */
mgmt.reactor().addRepeatingTask(Duration.ofSeconds(3),
() -> mgmt.increment("i.am.alive"));

}

Tracking calls to add method

    @Override
    @POST(value = "/todo")
    public Promise<Boolean> addTodo(final Todo todo) {
        return invokablePromise(promise -> {
            /** Send KPI addTodo.called every time the addTodo method gets called. */
            mgmt.increment("addTodo.called");
            todoMap.put(todo.getId(), todo);
            promise.accept(true);
        });
    }

Tracking calls to remove method

    @Override
    @DELETE(value = "/todo")
    public final Promise<Boolean> removeTodo(final @RequestParam("id") String id) {
        return invokablePromise(promise -> {
            /** Send KPI removeTodo.called every time the removeTodo method gets called. */
            mgmt.increment("removeTodo.called");
            todoMap.remove(id);
            promise.accept(true);
        });
    }

You can manage the reactor's callbacks and repeating tasks by registering a @QueueCallback method as follows:

Managing callbacks and repeating tasks

    @QueueCallback({EMPTY, IDLE, LIMIT})
    public void process() {
        reactor.process();
    }
But you do not need to if you use the serviceManagementBundle. Just specify it when you add the service to the managedServiceBuilder.

Adding service to managedServiceBuilder with a serviceManagementBundle

/* Start the service. */
managedServiceBuilder
//Register TodoServiceImpl
.addEndpointServiceWithServiceManagmentBundle(todoService, serviceManagementBundle)
//Build and start the server.
.startApplication();

Complete example

Todo.java

package com.mammatustech.todo;

public class Todo {

    private String id;

    private final String name;
    private final String description;
    private final long createTime;

    public Todo(String name, String description, long createTime) {
        this.name = name;
        this.description = description;
        this.createTime = createTime;

        this.id = name + "::" + createTime;
    }


    public String getId() {
        if (id == null) {
            this.id = name + "::" + createTime;
        }
        return id;
    }

    public String getName() {
        return name;
    }

    public String getDescription() {
        return description;
    }

    public long getCreateTime() {
        return createTime;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Todo todo = (Todo) o;

        if (createTime != todo.createTime) return false;
        return !(name != null ? !name.equals(todo.name) : todo.name != null);

    }

    @Override
    public int hashCode() {
        int result = name != null ? name.hashCode() : 0;
        result = 31 * result + (int) (createTime ^ (createTime >>> 32));
        return result;
    }
}

TodoServiceImpl.java to show tracking KPIs.

package com.mammatustech.todo;

import io.advantageous.qbit.admin.ServiceManagementBundle;
import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.annotation.RequestParam;
import io.advantageous.qbit.annotation.http.DELETE;
import io.advantageous.qbit.annotation.http.GET;
import io.advantageous.qbit.annotation.http.POST;
import io.advantageous.reakt.promise.Promise;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Map;
import java.util.TreeMap;

import static io.advantageous.reakt.promise.Promises.invokablePromise;


/**
* Default port for admin is 7777.
* Default port for main endpoint is 8888.
* <p>
* <pre>
* <code>
*
* Access the service:
*
* $ curl http://localhost:8888/v1/...
*
*
* To see swagger file for this service:
*
* $ curl http://localhost:7777/__admin/meta/
*
* To see health for this service:
*
* $ curl http://localhost:8888/__health -v
* Returns "ok" if all registered health systems are healthy.
*
* OR if same port endpoint health is disabled then:
*
* $ curl http://localhost:7777/__admin/ok -v
* Returns "true" if all registered health systems are healthy.
*
*
* A node is a service, service bundle, queue, or server endpoint that is being monitored.
*
* List all service nodes or endpoints
*
* $ curl http://localhost:7777/__admin/all-nodes/
*
*
* List healthy nodes by name:
*
* $ curl http://localhost:7777/__admin/healthy-nodes/
*
* List complete node information:
*
* $ curl http://localhost:7777/__admin/load-nodes/
*
*
* Show service stats and metrics
*
* $ curl http://localhost:8888/__stats/instance
* </code>
* </pre>
*/
@RequestMapping("/todo-service")
public class TodoServiceImpl implements TodoService {


    private final Map<String, Todo> todoMap = new TreeMap<>();

    private final ServiceManagementBundle mgmt;

    public TodoServiceImpl(ServiceManagementBundle mgmt) {
        this.mgmt = mgmt;
        /** Send stat count i.am.alive every three seconds. */
        mgmt.reactor().addRepeatingTask(Duration.ofSeconds(3),
                () -> mgmt.increment("i.am.alive"));

    }


    @Override
    @POST(value = "/todo")
    public Promise<Boolean> addTodo(final Todo todo) {
        return invokablePromise(promise -> {
            /** Send KPI addTodo.called every time the addTodo method gets called. */
            mgmt.increment("addTodo.called");
            todoMap.put(todo.getId(), todo);
            promise.accept(true);
        });
    }


    @Override
    @DELETE(value = "/todo")
    public final Promise<Boolean> removeTodo(final @RequestParam("id") String id) {
        return invokablePromise(promise -> {
            /** Send KPI removeTodo.called every time the removeTodo method gets called. */
            mgmt.increment("removeTodo.called");
            todoMap.remove(id);
            promise.accept(true);
        });
    }


    @Override
    @GET(value = "/todo", method = RequestMethod.GET)
    public final Promise<ArrayList<Todo>> listTodos() {
        return invokablePromise(promise -> {
            /** Send KPI listTodos.called every time the listTodos method gets called. */
            mgmt.increment("listTodos.called");
            promise.accept(new ArrayList<>(todoMap.values()));
        });
    }


}

TodoServiceMain.java showing how to configure StatsD QBit for MicroService KPI tracking

package com.mammatustech.todo;


import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.admin.ServiceManagementBundle;

import java.net.URI;

import static io.advantageous.qbit.admin.ManagedServiceBuilder.managedServiceBuilder;
import static io.advantageous.qbit.admin.ServiceManagementBundleBuilder.serviceManagementBundleBuilder;

public class TodoServiceMain {


    public static void main(final String... args) throws Exception {


        /* Create the ManagedServiceBuilder which manages a clean shutdown, health, stats, etc. */
        final ManagedServiceBuilder managedServiceBuilder = managedServiceBuilder()
                .setRootURI("/v1")   //Defaults to services
                .setPort(8888);      //Defaults to 8080 or environment variable PORT

        managedServiceBuilder.enableStatsD(URI.create("udp://192.168.99.100:8125"));
        managedServiceBuilder.getContextMetaBuilder().setTitle("TodoMicroService");

        /** Create the management bundle for this service. */
        final ServiceManagementBundle serviceManagementBundle =
                serviceManagementBundleBuilder().setServiceName("TodoServiceImpl")
                        .setManagedServiceBuilder(managedServiceBuilder).build();

        final TodoService todoService = new TodoServiceImpl(serviceManagementBundle);

        /* Start the service. */
        managedServiceBuilder
                //Register TodoServiceImpl
                .addEndpointServiceWithServiceManagmentBundle(todoService, serviceManagementBundle)
                //Build and start the server.
                .startApplication();

        /* Start the admin builder which exposes health end-points and swagger meta data. */
        managedServiceBuilder.getAdminBuilder().build().startServer();

        System.out.println("Todo Server and Admin Server started");

    }
}

TodoService interface

package com.mammatustech.todo;

import io.advantageous.reakt.promise.Promise;

import java.util.ArrayList;

public interface TodoService {
    Promise<Boolean> addTodo(Todo todo);

    Promise<Boolean> removeTodo(String id);

    Promise<ArrayList<Todo>> listTodos();
}

TodoServiceImplTest that shows how to unit test with Reakt

package com.mammatustech.todo;

import io.advantageous.qbit.admin.ManagedServiceBuilder;
import io.advantageous.qbit.admin.ServiceManagementBundle;
import io.advantageous.qbit.queue.QueueCallBackHandler;
import io.advantageous.qbit.service.ServiceBuilder;
import org.junit.Test;

import java.util.concurrent.TimeUnit;

import static io.advantageous.qbit.admin.ManagedServiceBuilder.managedServiceBuilder;
import static io.advantageous.qbit.admin.ServiceManagementBundleBuilder.serviceManagementBundleBuilder;
import static junit.framework.TestCase.assertFalse;
import static org.junit.Assert.assertTrue;

public class TodoServiceImplTest {
    @Test
    public void test() throws Exception {
        final TodoService todoService = createTodoService();

        final Todo rick = new Todo("foo", "rick", 1L);

//Add Rick
assertTrue(todoService
.addTodo(rick)
.invokeAsBlockingPromise().get());


//Add Diana
assertTrue(todoService
.addTodo(newTodo("bar", "diana", 1L))
.invokeAsBlockingPromise().get());

//Remove Rick
assertTrue(todoService.removeTodo(rick.getId())
.invokeAsBlockingPromise().get());

//Make sure Diana is in the listTodos
assertTrue(todoService.listTodos()
.invokeAsBlockingPromise()
.get()
.stream()
.filter(
todo -> todo.getDescription().equals("diana")

)
.findFirst()
.isPresent()
);


//Make sure Rick is not in the listTodos
assertFalse(todoService.listTodos()
.invokeAsBlockingPromise()
.get()
.stream()
.filter(
todo -> todo.getDescription().equals("rick")

)
.findFirst()
.isPresent()
);

}

    private TodoService createTodoService() {
        /* Create the ManagedServiceBuilder which manages a clean shutdown, health, stats, etc. */
        final ManagedServiceBuilder managedServiceBuilder = managedServiceBuilder(); //Defaults to 8080 or environment variable PORT


        /** Create the management bundle for this service. */
        final ServiceManagementBundle serviceManagementBundle =
                serviceManagementBundleBuilder().setServiceName("TodoService")
                        .setManagedServiceBuilder(managedServiceBuilder).build();

        final TodoService todoServiceImpl = new TodoServiceImpl(serviceManagementBundle);


        return ServiceBuilder.serviceBuilder().setServiceObject(todoServiceImpl).addQueueCallbackHandler(
                new QueueCallBackHandler() {
                    @Override
                    public void queueProcess() {
                        serviceManagementBundle.process();
                    }
                })
                .buildAndStartAll()
                .createProxyWithAutoFlush(TodoService.class, 50, TimeUnit.MILLISECONDS);

}
}

Using ansible to install Oracle Java on an ec2 box running in the cloud

I am writing this down so I don't forget. I started this task but had to stop a few times and then remember where I left off, and at some point others will need to know how to get started.

Install ansible

brew install ansible

Install Amazon EC2 Ansible integration tool

Go here and follow the instructions for the Amazon EC2 Ansible integration tool. You will run a Python script and set up a few environment variables. It is painless. This will create an ansible inventory file based on your EC2 environment.

Start ssh agent with your key

$ ssh-agent bash 
$ ssh-add ~/.ssh/YOUR_KEY.pem

Install ansible Oracle Java install plugin

$ ansible-galaxy install ansiblebit.oracle-java

Start up an EC2 box and tag it as elk=elk

However you like, start up an EC2 instance and tag it with the tag elk=elk. This is just the type of box. In this case, I am in the process of writing an ansible setup script for an ELK stack.

Test your ansible connection with the ping module

$ ansible tag_elk_elk -m ping -u ec2-user 
54.68.31.178 | SUCCESS => {
"changed": false,
"ping": "pong"
}

Create an ansible playbook

---
- hosts: tag_elk_elk
  user: ec2-user
  sudo: yes
  roles:
    - { role: ansiblebit.oracle-java,
        oracle_java_set_as_default: yes,
        oracle_java_version: 8,
        oracle_java_version_update: 102,
        oracle_java_version_build: 14 }

  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest

Run the ansible playbook

$ ansible-playbook elk_install_playbook.yml 
Ok. Now back to creating my ansible ELK install script. There are also tasks for creating an Amazon box with Ansible.

Packer EC2 support and Ansible for our Cassandra Database Clusters

Packer is used to generate machine and container images for multiple platforms from a single source configuration. We use Packer to create AWS EC2 AMIs (images) and Docker images. (We use Vagrant to set up dev images on VirtualBox.)

Packer for AWS Cassandra Database EC2/AMI

This code listing is our Packer provisioning script to produce an EC2 AMI, which we can later use to produce an EC2 instance with Cassandra installed. This script will install Cassandra on the EC2 image.

packer-ec2.json - Packer creation script for EC2 Cassandra Database instance

{
"variables": {
"aws_access_key": "",
"aws_secret_key": "",
"aws_region": "us-west-2",
"aws_ami_image": "ami-d2c924b2",
"aws_instance_type": "m4.large",
"image_version" : "0.2.2"
},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `aws_region`}}",
"source_ami": "{{user `aws_ami_image`}}",
"instance_type": "{{user `aws_instance_type`}}",
"ssh_username": "centos",
"ami_name": "cloudurable-cassandra-{{user `image_version`}}",
"tags": {
"Name": "cloudurable-cassandra-{{user `image_version`}}",
"OS_Version": "LinuxCentOs7",
"Release": "7",
"Description": "CentOS 7 image for Cloudurable Cassandra image"
},
"user_data_file": "config/user-data.sh"
}
],
"provisioners": [
{
"type": "file",
"source": "scripts",
"destination": "/home/centos/"
},
{
"type": "file",
"source": "resources",
"destination": "/home/centos/"
},
{
"type": "shell",
"scripts": [
"scripts/000-ec2-provision.sh"
]
},
{
"type": "ansible",
"playbook_file": "playbooks/ssh-addkey.yml"
}
]
}
Notice that we are using the packer amazon-ebs builder to build an AMI image based on the EC2 settings from our local dev box.
Also, see that we use a series of Packer provisioners. The packer file provisioner can copy files or directories to a machine image. The packer shell provisioner can run shell scripts. Lastly, the packer ansible provisioner can run ansible playbooks. We covered what playbooks/ssh-addkey.yml does in the previous article, but in short it sets up the keys so we can use ansible with our Cassandra Database cluster nodes.

Bash provisioning

Before we started applying ansible to do provisioning, we used bash scripts that get reused for packer/docker, packer/aws, and vagrant/virtual-box. The script 000-ec2-provision.sh invokes these provisioning scripts, which the first three articles covered to varying degrees (skim those articles or the source code if you are curious, but you don't need them per se to follow along). This way we can use the same provisioning scripts with AMIs, VirtualBox, and AWS EC2.

scripts/000-ec2-provision.sh

#!/bin/bash
set -e

sudo cp -r /home/centos/resources/ /root/
sudo mv /home/centos/scripts/ /root/

echo RUNNING PROVISION
sudo /root/scripts/000-provision.sh
echo Building host file
sudo /root/scripts/002-hosts.sh
echo RUNNING TUNE OS
sudo /root/scripts/010-tune-os.sh
echo RUNNING INSTALL the Cassandra Database
sudo /root/scripts/020-cassandra.sh
echo RUNNING INSTALL CASSANDRA CLOUD
sudo /root/scripts/030-cassandra-cloud.sh
echo RUNNING INSTALL CERTS
sudo /root/scripts/040-install-certs.sh
echo RUNNING SYSTEMD SETUP
sudo /root/scripts/050-systemd-setup.sh

sudo chown -R cassandra /opt/cassandra/
sudo chown -R cassandra /etc/cassandra/

We covered what each of those provisioning scripts does in the first three articles, but for those just joining us, they install packages and programs and configure the system.

Using Packer to build our ec2 AMI

To build the AWS AMI, we use packer build as follows.

Building the AWS AMI

$ packer build packer-ec2.json
After the packer build completes, it will print out the name of the AMI image it created, e.g., ami-6db33abc.

Cassandra Tutorial: Cassandra Cluster DevOps/DBA series

The first tutorial in this Cassandra tutorial series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content: DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) will cover applying the tools and techniques from the first three articles to produce an image (an EC2 AMI to be precise) that we can deploy to AWS/EC2. To do this, we will use Packer, Ansible, and the AWS Command Line tools. The AWS command line tools are essential for doing DevOps with AWS.

Check out more information about the Cassandra Database

MetricsD to send Linux OS metrics to Amazon CloudWatch


metricsd to send Linux OS metrics to AWS

We are using metricsd to read OS metrics and send data to AWS CloudWatch Metrics. Metricsd gathers OS KPIs for AWS CloudWatch Metrics. We install this as a systemd process which depends on cassandra. We also install the Cassandra Database as a systemd process.
We use systemd units quite a bit. We use systemd to start up Cassandra config scripts. We use systemd to start up Cassandra/Kafka, and to shut Cassandra/Kafka (this article does not cover Kafka at all) down nicely. Since systemd is pervasive in all new mainstream Linux distributions, you can see that systemd is an important concept for DevOps.
Metricsd gets installed as a systemd service by our provisioning scripts.

Installing metricsd systemd from our provisioning scripts

cp ~/resources/etc/systemd/system/metricsd.service /etc/systemd/system/metricsd.service
cp ~/resources/etc/metricsd.conf /etc/metricsd.conf
systemctl enable metricsd
systemctl start metricsd
We use systemctl enable so that metricsd starts up on system boot. We then use systemctl start to start metricsd.
We could write a whole article on metricsd and AWS CloudWatch metrics, and perhaps we will. For more information about metricsd please see the metricsd github project.
The metricsd system unit depends on the Cassandra service. The unit file is as follows.

/etc/systemd/system/metricsd.service

[Unit]
Description=MetricsD OS Metrics
Requires=cassandra.service
After=cassandra.service

[Service]
ExecStart=/opt/cloudurable/bin/metricsd

WorkingDirectory=/opt/cloudurable
Restart=always
RestartSec=60
TimeoutStopSec=60
TimeoutStartSec=60


[Install]
WantedBy=multi-user.target

Retrospective - Past Articles in this Cassandra Cluster DevOps/DBA series

The first article in this series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content: DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) will cover applying the tools and techniques from the first three articles to produce an image (an EC2 AMI to be precise) that we can deploy to AWS/EC2. To do this, we will use Packer, Ansible, and the AWS Command Line tools. The AWS command line tools are essential for doing DevOps with AWS.

Check out more information about the Cassandra Database

Check out the metricsd github project page. 


Metricsd

Reads OS metrics and sends data to AWS CloudWatch Metrics.
Metricsd gathers OS metrics for AWS CloudWatch. You can install it as a systemd process.
Configuration

/etc/metricsd.conf



# AWS Region string `hcl:"aws_region"`
# If not set, uses aws current region for this instance.
# Used for testing only.
# aws_region = "us-west-1"

# EC2InstanceId string `hcl:"ec2_instance_id"`
# If not set, uses aws instance id for this instance
# Used for testing only.
# ec2_instance_id = "i-my-fake-instanceid"

# Debug bool `hcl:"debug"`
# Used for testing and debugging
debug = false

# Local bool `hcl:"local"`
# Used to ignore local ec2 meta-data, used for development only.
# local = true

# TimePeriodSeconds time.Duration `hcl:"interval_seconds"`
# Defaults to 30 seconds, how often metrics are collected.
interval_seconds = 10

# Used to specify the environment: prod, dev, qa, staging, etc.
# This gets used as a dimension that is sent to cloudwatch.
env="dev"

# Used to specify the top level namespace in cloudwatch.
namespace="Cassandra Cluster"

# Used to specify the role of the AMI instance.
# Gets used as a dimension.
# e.g., dcos-master, consul-master, dcos-agent, cassandra-node, etc.
server_role="dcos-master"


Installing as a service

If you are using systemd you should install this as a service.

/etc/systemd/system/metricsd.service


[Unit]
Description=metricsd
Wants=basic.target
After=basic.target network.target

[Service]
User=centos
Group=centos
ExecStart=/usr/bin/metricsd
KillMode=process
Restart=on-failure
RestartSec=42s


[Install]
WantedBy=multi-user.target


Copy the binary to /usr/bin/metricsd. Copy the config to /etc/metricsd.conf. You can specify a different conf location by using /usr/bin/metricsd -conf /foo/bar/myconf.conf.

Installing

$ sudo cp metricsd_linux /usr/bin/metricsd 
$ sudo systemctl stop metricsd.service
$ sudo systemctl enable metricsd.service
$ sudo systemctl start metricsd.service
$ sudo systemctl status metricsd.service
● metricsd.service - metricsd
Loaded: loaded (/etc/systemd/system/metricsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2016-12-21 20:19:59 UTC; 8s ago
Main PID: 718 (metricsd)
CGroup: /system.slice/metricsd.service
└─718 /usr/bin/metricsd

Dec 21 20:19:59 ip-172-31-29-173 systemd[1]: Started metricsd.
Dec 21 20:19:59 ip-172-31-29-173 systemd[1]: Starting metricsd...
Dec 21 20:19:59 ip-172-31-29-173 metricsd[718]: INFO : [main] - 2016/12/21 20:19:59 config.go:29: Loading config /et....conf
Dec 21 20:19:59 ip-172-31-29-173 metricsd[718]: INFO : [main] - 2016/12/21 20:19:59 config.go:45: Loading log...
There are full example packer install scripts under bin/packer/packer_ec2.json. The best doc is a working example.

Metrics

CPU metrics

  • softIrqCnt - count of soft interrupts for the last period
  • intrCnt - count of interrupts for the last period
  • ctxtCnt - count of context switches for the last period
  • processesStrtCnt - count of processes started for the last period
  • GuestJif - jiffies spent in guest mode for last time period
  • UsrJif - jiffies spent in usr mode for last time period
  • IdleJif - jiffies spent idle for last time period
  • IowaitJif - jiffies spent handling IO for last time period
  • IrqJif - jiffies spent handling interrupts for last time period
  • GuestniceJif - guest nice mode
  • StealJif - time stolen by noisy neighbors for last time period
  • SysJif - jiffies spent doing OS stuff like system calls in last time period
  • SoftIrqJif - jiffies spent handling soft IRQs in the last time period
  • procsRunning - count of processes currently running
  • procsBlocked - count of processes currently blocked (could be for IO or just waiting to get CPU time)

Disk metrics

  • dUVol<VOLUME_NAME>AvailPer - percentage of disk space left (per volume)

Mem metrics

  • mFreeLvl - free memory in kilobytes
  • mUsedLvl - used memory in kilobytes
  • mSharedLvl - shared memory in kilobytes
  • mBufLvl - memory used by IO buffers in kilobytes
  • mAvailableLvl - memory available in kilobytes
  • mFreePer - percentage of memory free
  • mUsedPer - percentage of memory used
If swapping is enabled (which is unlikely), then you will get the above with mSwpX instead of mX.

systemd-cloud-watch to send Linux logs (systemd journald) to AWS CloudWatch


systemd-cloud-watch to send OS logs to AWS log aggregation

We use systemd-cloud-watch to read OS logs from systemd/journald and send data to AWS CloudWatch Logs. The systemd-cloud-watch daemon reads journald logs and aggregates them to AWS CloudWatch Logging. Just like metricsd, we install systemd-cloud-watch as a systemd process which depends on cassandra. Remember that we also install Cassandra as a systemd process, which we will cover in a moment.
The systemd-cloud-watch daemon gets installed as a systemd service by our provisioning scripts.

Installing systemd-cloud-watch systemd service from our provisioning scripts

cp ~/resources/etc/systemd/system/systemd-cloud-watch.service /etc/systemd/system/systemd-cloud-watch.service
cp ~/resources/etc/systemd-cloud-watch.conf /etc/systemd-cloud-watch.conf
systemctl enable systemd-cloud-watch
systemctl start systemd-cloud-watch
We use systemctl enable so that systemd-cloud-watch starts up when the system boots. We then use systemctl start to start systemd-cloud-watch.
The systemd-cloud-watch system unit depends on the Cassandra service. The unit file is as follows:

/etc/systemd/system/systemd-cloud-watch.service

[Unit]
Description=SystemD Cloud Watch Sends Journald logs to CloudWatch
Requires=cassandra.service
After=cassandra.service

[Service]
ExecStart=/opt/cloudurable/bin/systemd-cloud-watch /etc/systemd-cloud-watch.conf

WorkingDirectory=/opt/cloudurable
Restart=always
RestartSec=60
TimeoutStopSec=60
TimeoutStartSec=60


[Install]
WantedBy=multi-user.target
Note to use metricsd and systemd-cloud-watch we have to set up the right AWS IAM roles, and then associate that IAM instance role with our instances when we start them up.
The systemd-cloud-watch.conf is set up to use the AWS log group cassandra as follows:

systemd-cloud-watch.conf

log_priority=7
debug=true
log_group="cassandra"
batchSize=5

For this to work, we will have to create a log group called cassandra.

Creating an AWS CloudWatch log group

$ aws logs create-log-group  --log-group-name cassandra
To learn more about systemd-cloud-watch, please see the systemd-cloud-watch GitHub project.

Retrospective - Past Articles in this Cassandra Tutorial: Cassandra Cluster DevOps/DBA series

The first article in this Cassandra tutorial series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content: DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) will cover applying the tools and techniques from the first three articles to produce an image (an EC2 AMI to be precise) that we can deploy to AWS/EC2. To do this, we will use Packer, Ansible, and the AWS Command Line tools. The AWS command line tools are essential for doing DevOps with AWS.

Check out more information about the Cassandra Database

Find more information on the systemd-cloud-watch daemon at its github project page:


Yet another fork of Systemd Journal CloudWatch Writer

Why this fork?
Rick Hightower and Geoff Chandler forked a project that repeated journald logs to CloudWatch but was a bit broken. They fixed it up and made it usable and performant.
If you are using a systemd-based operating system (all modern Linux server distributions use systemd and journald), then you want critical messages (errors, warnings, etc.) from the OS and core daemons to be sent to AWS CloudWatch.
Once they are in AWS CloudWatch logs, you can write triggers and filters that raise alerts, which can trigger Lambda functions, etc. They become actionable.
However, as great as journald is, you still have legacy apps that use syslog, and you have Java applications that support MDC. Thus, instead of just getting logs from journald and batch sending them to CloudWatch, we want to repeat logs from syslog (over UDP) and repeat logs from Java applications (like Cassandra) over the JSON logstash format using logback.
This is mainly for our Cassandra AMI from Cloudurable, but it can be used other places.
  • Use QBit to pass / batch log messages efficiently
  • Listen on UDP syslog
  • Listen to UDP JSON messages (logstash-logback-encoder).
The rest of the readme is largely adapted from advantageous.

Systemd Journal CloudWatch Writer

This utility reads from the systemd journal, and sends the data in batches to Cloudwatch.
This is an alternative process to the AWS-provided logs agent. The AWS logs agent copies data from on-disk text log files into Cloudwatch. This utility systemd-cloud-watch reads the systemd journal and writes that data in batches to CloudWatch.
There are other ways to do this using various techniques. But depending on the size of the log messages and the size of the core parts, these other methods are fragile because AWS CloudWatch limits the size of the messages. This utility allows you to cap the log field size, include only the fields that you want, or exclude the fields you don't want. We find that this is not only useful but essential.

Log format

The journal event data is written to CloudWatch Logs in JSON format, making it amenable to filtering using the JSON filter syntax. Log records are translated to CloudWatch JSON events using a structure like the following:

Sample log

{
"instanceId" : "i-xxxxxxxx",
"pid" : 12354,
"uid" : 0,
"gid" : 0,
"cmdName" : "cron",
"exe" : "/usr/sbin/cron",
"cmdLine" : "/usr/sbin/CRON -f",
"systemdUnit" : "cron.service",
"bootId" : "fa58079c7a6d12345678b6ebf1234567",
"hostname" : "ip-10-1-0-15",
"transport" : "syslog",
"priority" : "INFO",
"message" : "pam_unix(cron:session): session opened for user root by (uid=0)",
"syslogFacility" : 10,
"syslogIdent" : "CRON"
}
The JSON-formatted log events could also be exported into an AWS Elasticsearch instance using the CloudWatch sync mechanism. Once in Elasticsearch, you can use an ELK stack to obtain more elaborate filtering and query capabilities.

Installation

If you have a binary distribution, you just need to drop the executable file somewhere.
This tool assumes that it is running on an EC2 instance.
This tool uses libsystemd to access the journal. systemd-based distributions generally ship with this already installed, but if yours doesn't you must manually install the library somehow before this tool will work.
There are instructions on how to install the Linux requirements for development below; see Setting up a Linux env for testing/developing (CentOS7).
We also have two excellent examples of setting up a dev environment using packer for both AWS EC2 and Docker. We set up CentOS 7. The EC2 instance packer build uses the aws command line to create and connect to a running image. These should be instructive for how to set up this utility in your environment to run with systemd, as we provide all of the systemd scripts in the packer provision scripts for EC2. An example is good. A running example is better.

Configuration

This tool uses a small configuration file to set some values that are required for its operation. Most of the configuration values are optional and have default settings, but a couple are required.
The configuration file uses a syntax like this:
log_group = "my-awesome-app"
The following configuration settings are supported:
  • aws_region: (Optional) The AWS region whose CloudWatch Logs API will be written to. If not provided, this defaults to the region where the host EC2 instance is running.
  • ec2_instance_id: (Optional) The id of the EC2 instance on which the tool is running. There is very little reason to set this, since it will be automatically set to the id of the host EC2 instance.
  • journal_dir: (Optional) Override the directory where the systemd journal can be found. This is useful in conjunction with remote log aggregation, to work with journals synced from other systems. The default is to use the local system's journal.
  • log_group: (Required) The name of the cloudwatch log group to write logs into. This log group must be created before running the program.
  • log_priority: (Optional) The highest priority of the log messages to read (on a 0-7 scale). This defaults to DEBUG (all messages). This has a behaviour similar to journalctl -p <priority>. At the moment, only a single value can be specified, not a range. Possible values are: 0,1,2,3,4,5,6,7 or one of the corresponding "emerg", "alert", "crit", "err", "warning", "notice", "info", "debug". When a single log level is specified, all messages with this log level or a lower (hence more important) log level are read and pushed to CloudWatch. For more information about priority levels, look at https://www.freedesktop.org/software/systemd/man/journalctl.html
  • log_stream: (Optional) The name of the cloudwatch log stream to write logs into. This defaults to the EC2 instance id. Each running instance of this application (along with any other applications writing logs into the same log group) must have a unique log_stream value. If the given log stream doesn't exist then it will be created before writing the first set of journal events.
  • buffer_size: (Optional) The size of the event buffer to send to CloudWatch Logs API. The default is 50. This means that cloud watch will send 50 logs at a time.
  • fields: (Optional) Specifies which fields should be included in the JSON map that is sent to CloudWatch.
  • omit_fields: (Optional) Specifies which fields should NOT be included in the JSON map that is sent to CloudWatch.
  • field_length: (Optional) Specifies how long string fields can be in the JSON map that is sent to CloudWatch. The default is 255 characters.
  • queue_batch_size: (Optional) Internal. Defaults to 10,000 entries; this is the size of a chunk of log entries that can be sent to the CloudWatch repeater.
  • queue_channel_size: (Optional) Internal. Defaults to 3 entries; this is how many queue_batch_size chunks can be outstanding before the journald reader waits for the CloudWatch repeater.
  • queue_poll_duration_ms: (Optional) Internal. Defaults to 10 ms; how long the queue manager will wait, when there are no log entries to send, before checking again to see if there are log entries to send.
  • queue_flush_log_ms: (Optional) If queue_batch_size has not been met because there are no more journald entries to read, how long to wait before flushing the buffer to the CloudWatch receiver. Defaults to 100 ms.
  • debug: (Optional) Turns on debug logging.
  • local: (Optional) Used for unit testing. Will not try to create an AWS meta-data client to read region and AWS credentials.
  • tail: (Optional) Start from the tail of log. Only send new log entries. This is good for reboot so you don't send all of the logs in the system, which is the default behavior.
  • rewind: (Optional) Used to rewind X number of entries from the tail of the log. Must be used in conjunction with the tail setting.
  • mock-cloud-watch : (Optional) Used to send logs to a Journal Repeater that just spits out message and priority to the console. This is used for development only.
If your average log message was 500 bytes and you used the default settings, then, assuming the server was generating journald messages rapidly, you could use a heap of up to queue_channel_size (3) * queue_batch_size (10,000) * 500 bytes = 15,000,000 bytes (about 15 MB). If you have a very resource-constrained environment, reduce the queue_batch_size and/or the queue_channel_size.

AWS API access

This program requires access to call some of the Cloudwatch API functions. The recommended way to achieve this is to create an IAM Instance Profile that grants your EC2 instance a role that has Cloudwatch API access. The program will automatically discover and make use of instance profile credentials.
The following IAM policy grants the required access across all log groups in all regions:

IAM file

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:log-group:*",
"arn:aws:logs:*:*:log-group:*:log-stream:*"
]
}
]
}
In more complex environments you may want to restrict further which regions, groups and streams the instance can write to. You can do this by adjusting the two ARN strings in the "Resource" section:
  • The first * in each string can be replaced with an AWS region name like us-east-1 to grant access only within the given region.
  • The * after log-group in each string can be replaced with a Cloudwatch Logs log group name to grant access only to the named group.
  • The * after log-stream in the second string can be replaced with a Cloudwatch Logs log stream name to grant access only to the named stream.
Other combinations are possible too. For more information, see the reference on ARNs and namespaces.

Coexisting with the official Cloudwatch Logs agent

This application can run on the same host as the official Cloudwatch Logs agent but care must be taken to ensure that they each use a different log stream name. Only one process may write into each log stream.

Running on System Boot

This program is best used as a persistent service that starts on boot and keeps running until the system is shut down. If you're using journald then you're presumably using systemd; you can create a systemd unit for this service. For example:
[Unit]
Description=journald-cloudwatch-logs
Wants=basic.target
After=basic.target network.target

[Service]
User=nobody
Group=nobody
ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf
KillMode=process
Restart=on-failure
RestartSec=42s
This program is designed under the assumption that it will run constantly from some point during system boot until the system shuts down.
If the service is stopped while the system is running and then later started again, it will "lose" any journal entries that were written while it wasn't running. However, on the initial run after each boot it will clear the backlog of logs created during the boot process, so it is not necessary to run the program particularly early in the boot process unless you wish to promptly capture startup messages.

Building

Test cloud-watch package

go test -v  github.com/advantageous/systemd-cloud-watch/cloud-watch

Build and Test on Linux (Centos7)

 ./run_build_linux.sh
The above starts up a docker container, runs go get, go build, and go test, and then copies the binary to systemd-cloud-watch_linux.

Debug process running Linux

 ./run_test_container.sh
The above starts up a docker container that you can develop with that has all the prerequisites needed to compile and test this project.

Sample debug session

$ ./run_test_container.sh
latest: Pulling from advantageous/golang-cloud-watch
Digest: sha256:eaf5c0a387aee8cc2d690e1c5e18763e12beb7940ca0960ce1b9742229413e71
Status: Image is up to date for advantageous/golang-cloud-watch:latest
[root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/
.git/ README.md cloud-watch/ bin.packer/ sample.conf
.gitignore build_linux.sh main.go run_build_linux.sh systemd-cloud-watch.iml
.idea/ cgroup/ output.json run_test_container.sh systemd-cloud-watch_linux

[root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/

[root@6e0d1f984c03 systemd-cloud-watch]# ls
README.md build_linux.sh cgroup cloud-watch main.go output.json bin.packer run_build_linux.sh
run_test_container.sh sample.conf systemd-cloud-watch.iml systemd-cloud-watch_linux

[root@6e0d1f984c03 systemd-cloud-watch]# source ~/.bash_profile

[root@6e0d1f984c03 systemd-cloud-watch]# export GOPATH=/gopath

[root@6e0d1f984c03 systemd-cloud-watch]# /usr/lib/systemd/systemd-journald &
[1] 24

[root@6e0d1f984c03 systemd-cloud-watch]# systemd-cat echo "RUNNING JAVA BATCH JOB - ADF BATCH from `pwd`"

[root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go clean"
Running go clean

[root@6e0d1f984c03 systemd-cloud-watch]# go clean

[root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go get"
Running go get

[root@6e0d1f984c03 systemd-cloud-watch]# go get

[root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go build"
Running go build
[root@6e0d1f984c03 systemd-cloud-watch]# go build

[root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go test"
Running go test

[root@6e0d1f984c03 systemd-cloud-watch]# go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch
=== RUN TestRepeater
config DEBUG: 2016/11/30 08:53:34 config.go:66: Loading log...
aws INFO: 2016/11/30 08:53:34 aws.go:42: Config set to local
aws INFO: 2016/11/30 08:53:34 aws.go:72: Client missing credentials not looked up
aws INFO: 2016/11/30 08:53:34 aws.go:50: Client missing using config to set region
aws INFO: 2016/11/30 08:53:34 aws.go:52: AWSRegion missing using default region us-west-2
repeater ERROR: 2016/11/30 08:53:44 cloudwatch_journal_repeater.go:141: Error from putEvents NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
--- SKIP: TestRepeater (10.01s)
cloudwatch_journal_repeater_test.go:43: Skipping WriteBatch, you need to setup AWS credentials for this to work
=== RUN TestConfig
test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
test INFO: 2016/11/30 08:53:44 config_test.go:33: [Foo Bar]
--- PASS: TestConfig (0.00s)
=== RUN TestLogOmitField
test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
--- PASS: TestLogOmitField (0.00s)
=== RUN TestNewJournal
--- PASS: TestNewJournal (0.00s)
=== RUN TestSdJournal_Operations
--- PASS: TestSdJournal_Operations (0.00s)
journal_linux_test.go:41: Read value=Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available → current limit 4.0G).
=== RUN TestNewRecord
test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
--- PASS: TestNewRecord (0.00s)
=== RUN TestLimitFields
test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
--- PASS: TestLimitFields (0.00s)
=== RUN TestOmitFields
test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
--- PASS: TestOmitFields (0.00s)
PASS
ok github.com/advantageous/systemd-cloud-watch/cloud-watch 10.017s

Building the docker image used as the Linux build environment for this project

# from project root
cd bin.packer
packer build packer_docker.json

To run docker dev image

# from project root
cd bin.packer
./run.sh

Building the EC2 image with Packer to use as the Linux build environment for this project

# from project root
cd bin.packer
packer build packer_ec2.json
We use the docker support for Packer. ("Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.")
Use ec2_env.sh_example to create an ec2_env.sh with the AMI id that was just created.

ec2_env.sh_example

#!/usr/bin/env bash
export ami=ami-YOURAMI
export subnet=subnet-YOURSUBNET
export security_group=sg-YOURSG
export iam_profile=YOUR_IAM_ROLE
export key_name=MY_PEM_FILE_KEY_NAME

Using EC2 image (assumes you have ~/.ssh config setup)
# from project root
cd bin.packer

# Run and log into dev env running in EC2
./runEc2Dev.sh

# Log into running server
./loginIntoEc2Dev.sh

Setting up a Linux env for testing/developing (CentOS7).

yum -y install wget
yum install -y git
yum install -y gcc
yum install -y systemd-devel


echo"installing go"
cd /tmp
wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz
tar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz
rm go1.7.3.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile

Setting up Java to write to systemd journal

build.gradle dependency

compile 'org.gnieh:logback-journal:0.2.0'

logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="journal" class="org.gnieh.logback.SystemdJournalAppender" />

    <root level="INFO">
        <appender-ref ref="journal" />
        <customFields>{"serviceName":"adfCalcBatch","serviceHost":"${HOST}"}</customFields>
    </root>

    <logger name="com.mycompany" level="INFO"/>

</configuration>

Commands for controlling systemd service EC2 dev env

# Get status
sudo systemctl status journald-cloudwatch
# Stop Service
sudo systemctl stop journald-cloudwatch
# Find the service
ps -ef | grep cloud
# Run service manually
/usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf

Derived

This project was derived from the journald-cloudwatch-logs project.

Status

Done and released.

Using as a lib.

You can use this project as a lib and you can pass in your own JournalRepeater and your own Journal.

Interface for JournalRepeater

package cloud_watch


type Record struct {...} // see source code

type JournalRepeater interface {
// Close closes a journal opened with NewJournal.
Close() error;
WriteBatch(records []Record) error;
}

Interface for Journal

type Journal interface {
// Close closes a journal opened with NewJournal.
Close() error;

// Next advances the read pointer into the journal by one entry.
Next() (uint64, error);

// NextSkip advances the read pointer by multiple entries at once,
// as specified by the skip parameter.
NextSkip(skip uint64) (uint64, error);

// Previous sets the read pointer into the journal back by one entry.
Previous() (uint64, error);

// PreviousSkip sets back the read pointer by multiple entries at once,
// as specified by the skip parameter.
PreviousSkip(skip uint64) (uint64, error);

// GetDataValue gets the data object associated with a specific field from the
// current journal entry, returning only the value of the object.
GetDataValue(field string) (string, error);


// GetRealtimeUsec gets the realtime (wallclock) timestamp of the current
// journal entry.
GetRealtimeUsec() (uint64, error);

AddLogFilters(config *Config)

// GetMonotonicUsec gets the monotonic timestamp of the current journal entry.
GetMonotonicUsec() (uint64, error);

// GetCursor gets the cursor of the current journal entry.
GetCursor() (string, error);


// SeekHead seeks to the beginning of the journal, i.e. the oldest available
// entry.
SeekHead() error;

// SeekTail may be used to seek to the end of the journal, i.e. the most recent
// available entry.
SeekTail() error;

// SeekCursor seeks to a concrete journal cursor.
SeekCursor(cursor string) error;

// Wait will synchronously wait until the journal gets changed. The maximum time
// this call sleeps may be controlled with the timeout parameter. If
// sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will
// wait indefinitely for a journal change.
Wait(timeout time.Duration) int;
}

Using as a lib

package main

import (
    jcw "github.com/advantageous/systemd-cloud-watch/cloud-watch"
    "flag"
    "os"
)

var help = flag.Bool("help", false, "set to true to show this help")

func main() {

    logger := jcw.NewSimpleLogger("main", nil)

    flag.Parse()

    if *help {
        usage(logger)
        os.Exit(0)
    }

    configFilename := flag.Arg(0)
    if configFilename == "" {
        usage(logger)
        panic("config file name must be set!")
    }

    config := jcw.CreateConfig(configFilename, logger)
    logger = jcw.NewSimpleLogger("main", config)
    journal := jcw.CreateJournal(config, logger)   // Instead of this, load your own journal
    repeater := jcw.CreateRepeater(config, logger) // Instead of this, load your own repeater

    jcw.RunWorkers(journal, repeater, logger, config)
}

func usage(logger *jcw.Logger) {
    logger.Error.Println("Usage: systemd-cloud-watch <config-file>")
    flag.PrintDefaults()
}
You could for example create a JournalRepeater that writes to InfluxDB instead of CloudWatch.
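As a minimal sketch of that idea, here is a hypothetical JournalRepeater that simply logs each batch instead of sending it to CloudWatch. The Record fields are elided in the source above, so the sketch prints whole records with %+v; the type name and logging choice are illustrative and not part of the library. Swapping the log call for an InfluxDB client write would give you the InfluxDB repeater described above.

package main

import (
    "log"

    jcw "github.com/advantageous/systemd-cloud-watch/cloud-watch"
)

// LogJournalRepeater is a hypothetical JournalRepeater that writes each
// batch of journal records to the process log instead of CloudWatch.
type LogJournalRepeater struct{}

// Close satisfies the JournalRepeater interface; there is nothing to release.
func (r *LogJournalRepeater) Close() error { return nil }

// WriteBatch satisfies the JournalRepeater interface by printing every record.
func (r *LogJournalRepeater) WriteBatch(records []jcw.Record) error {
    for _, record := range records {
        log.Printf("journal record: %+v", record)
    }
    return nil
}

func main() {
    logger := jcw.NewSimpleLogger("main", nil)
    config := jcw.CreateConfig("/etc/journald-cloudwatch.conf", logger) // example config path from this article
    journal := jcw.CreateJournal(config, logger)                        // reuse the library's journal
    repeater := &LogJournalRepeater{}                                   // plug in the custom repeater

    jcw.RunWorkers(journal, repeater, logger, config)
}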
Improvements:
  • Added unit tests (there were none).
  • Heavily reduced locking by using qbit instead of original implementation.
  • Added cross compile so I can develop/test on my laptop (MacOS).
  • Made logging stateless. No more need for a state file.
  • No more getting out of sync with CloudWatch.
  • Detects being out of sync and recovers.
  • Fixed error with log messages being too big.
  • Added ability to include or omit logging fields.
  • Created docker image and scripts to test on Linux (CentOS7).
  • Created EC2 image and scripts to test on Linux running in AWS EC2 (CentOS7).
  • Code organization (we use a package).
  • Added comprehensive logging which includes debug logging by config.
  • Uses actual timestamp from journal log record instead of just current time
  • Auto-creates CloudWatch log group if it does not exist
  • Allow this to be used as a library by providing interfaces for Journal and JournalRepeater.


Running the Cassandra Database as a systemd service

If the Cassandra Database stops for whatever reason, systemd can attempt to restart it. The systemd unit file below ensures that our Cassandra service stays running, and the systemd-cloud-watch utility logs every restart to AWS CloudWatch.
Here is the systemd unit file for Cassandra.

/etc/systemd/system/cassandra.service

[Unit]
Description=Cassandra Service

[Service]
Type=forking
PIDFile=/opt/cassandra/PID

ExecStartPre=-/sbin/swapoff -a
ExecStartPre=-/bin/chown -R cassandra /opt/cassandra
ExecStart=/opt/cassandra/bin/cassandra -p /opt/cassandra/PID

WorkingDirectory=/opt/cassandra
Restart=always
RestartSec=60
TimeoutStopSec=60
TimeoutStartSec=60
User=cassandra

[Install]
WantedBy=multi-user.target
The above tells systemd to restart the Cassandra Database one minute after it goes down. Since we aggregate OS logs to AWS CloudWatch, every time Cassandra goes down or is restarted by systemd we get log messages, and from those we can create CloudWatch alerts that trigger AWS Lambdas to work with the rest of the AWS ecosystem. Critical bugs in queries, UDFs, or UDAs could cause Cassandra to go down; such failures can be sporadic and hard to track down, and log aggregation helps. One way to turn these log messages into a metric you can alarm on is sketched below.
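For example, a hedged sketch of creating a CloudWatch metric filter that counts these restarts; the log group name, filter name, and metric names are placeholders for illustration, and the pattern simply matches the "Started Cassandra Service" journal message shown later in this article:

aws logs put-metric-filter \
  --log-group-name my-cassandra-logs \
  --filter-name cassandra-restarts \
  --filter-pattern "Started Cassandra Service" \
  --metric-transformations metricName=CassandraRestarts,metricNamespace=Cassandra,metricValue=1

You could then attach a CloudWatch alarm to the CassandraRestarts metric to notify you or trigger a Lambda.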

Redux - Past Articles in this Cassandra Tutorial: Cassandra Cluster DevOps/DBA series

The first article in this Cassandra tutorial series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) covers applying the tools and techniques from the first three articles to produce an image (an EC2 AMI, to be precise) that we can deploy to AWS/EC2. For this explanation, we will use Packer, Ansible, and the AWS command line tools. The AWS command line tools are essential for doing DevOps with AWS.

Check out more information about the Cassandra Database

Using AWS CLI to create our Cassandra EC2 instance from our custom AMI (Amazon image)


Packer installs Cassandra on the AMI, and we then use that AMI to launch Amazon Cassandra EC2 instances. Below we build the AMI with Packer and then create an instance from it with the AWS CLI.

Packer building Amazon Cassandra AMI

We built the Amazon Cassandra image using packer build as follows.

Building the AWS AMI

$ packer build packer-ec2.json
After the packer build completes, it prints the id of the AMI it created, e.g., ami-6db33abc. Now it is time to use the Amazon CLI (aws cli) to create the EC2 instance.

Using AWS CLI to create our Cassandra EC2 instance

The AWS Command Line Interface is the ultimate utility for managing your AWS services from a DevOps perspective.
“With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.” –AWS CLI Docs
The AWS command line tool does it all. You can create VPCs. You can run CloudFormations. You can even use it to back up the Amazon Cassandra Database snapshot files to S3. If you are working with AWS and doing DevOps, you must master the AWS CLI.
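As one example, a hedged sketch of backing up a table's snapshot files to S3 with the AWS CLI; the snapshot path assumes a tarball install under /opt/cassandra, and the keyspace, table, snapshot, and bucket names are placeholders:

# Hypothetical: sync one snapshot directory to an S3 bucket
aws s3 sync /opt/cassandra/data/mykeyspace/mytable/snapshots/backup1 \
  s3://my-cassandra-backups/node1/mykeyspace/mytable/backup1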

Automating Amazon Cassandra EC2 instance creation

Starting up an EC2 instance with the right AMI id and IAM instance role, in the correct subnet, using the appropriate security groups, and with the right AWS key-pair name can be tedious. We must automate it, as using the AWS console (GUI) is error prone and requires too much human intervention.
Instead of using the AWS console, we use the aws command line. We create four scripts to automate creating and connecting to EC2 instances:
  • bin/ec2-env.sh - setups common AWS references to subnets, security groups, key pairs
  • bin/create-ec2-instance.sh - uses aws command line to create an ec2 instance
  • bin/login-ec2-cassandra.sh Uses ssh to log into Cassandra node we are testing
  • bin/get-IP-cassandra.sh Uses aws command line to get the public IP address of the cassandra instance
Note that to parse the JSON coming back from the aws command line we use jq, a lightweight command-line JSON processor. To download and install jq, see the jq download documentation.

bin/create-ec2-instance.sh Create an EC2 instance based on our new AMI from Packer

#!/bin/bash
set -e

source bin/ec2-env.sh

instance_id=$(aws ec2 run-instances --image-id "$AMI_CASSANDRA" --subnet-id "$SUBNET_CLUSTER" \
--instance-type m4.large --iam-instance-profile "Name=$IAM_PROFILE_CASSANDRA" \
--associate-public-ip-address --security-group-ids "$VPC_SECURITY_GROUP" \
--key-name "$KEY_NAME_CASSANDRA" | jq --raw-output .Instances[].InstanceId)

echo "${instance_id} is being created"

aws ec2 wait instance-exists --instance-ids "$instance_id"

aws ec2 create-tags --resources "${instance_id}" --tags Key=Name,Value="${EC2_INSTANCE_NAME}"

echo "${instance_id} was tagged waiting to login"

aws ec2 wait instance-status-ok --instance-ids "$instance_id"

bin/login-ec2-cassandra.sh

Notice we use the aws ec2 wait to ensure the instance is ready before we tag it and before we log into it.
All of the ids for the AWS resources we need to refer to are in bin/ec2-env.sh. Notice that all of our AWS/EC2 shell scripts load this env file with source bin/ec2-env.sh, as follows:

bin/ec2-env.sh common AWS resources exposed as ENV Vars

#!/bin/bash
set -e

export AMI_CASSANDRA=ami-6db33abc
export VPC_SECURITY_GROUP=sg-a8653123

export SUBNET_CLUSTER=subnet-dc0f2123
export KEY_NAME_CASSANDRA=cloudurable-us-west-2
export PEM_FILE="${HOME}/.ssh/${KEY_NAME_CASSANDRA}.pem"
export IAM_PROFILE_CASSANDRA=IAM_PROFILE_CASSANDRA
export EC2_INSTANCE_NAME=cassandra-node
Earlier we created an AWS key pair called  cloudurable-us-west-2. You will need to create a VPC security group with ssh access. You should lock it down to only accept ssh connections from your IP. At this stage, you can use a default VPC, and for now use a public subnet. Replace the ids above with your subnet (SUBNET_CLUSTER), your key pair (KEY_NAME_CASSANDRA), your AMI (AMI_CASSANDRA), and your IAM instance role (IAM_PROFILE_CASSANDRA). The IAM instance role should have access to create logs and metrics for AWS CloudWatch.
The login script (login-ec2-cassandra.sh) uses ssh to log into the instance, but to know what IP to use, it uses  get-IP-cassandra.sh

bin/login-ec2-cassandra.sh Log into new EC2 Cassandra Database instance using ssh

#!/bin/bash
set -e

source bin/ec2-env.sh

if [ ! -f "$PEM_FILE" ]; then
echo "Put your key file $PEM_FILE in your .ssh directory."
exit 1
fi
ssh -i "$PEM_FILE" centos@`bin/get-IP-cassandra.sh`

Ensure you create a key pair in AWS. Copy it to ~/.ssh and then run chmod 400 on the pem file. Note the above script uses bin/get-IP-cassandra.sh to get the IP address of the server as follows:

bin/get-IP-cassandra.sh Get public IP address of new EC2 instance using aws cmdline

#!/bin/bash
set -e

source bin/ec2-env.sh

aws ec2 describe-instances --filters "Name=tag:Name,Values=${EC2_INSTANCE_NAME}" \
| jq --raw-output .Reservations[].Instances[].PublicIpAddress

Running bin/create-ec2-instance.sh

To run bin/create-ec2-instance.sh

Running bin/create-ec2-instance.sh

$ bin/create-ec2-instance.sh
Let's check that everything is up and running.

Interactive session showing everything running

$ pwd
~/github/cassandra-image
$ bin/create-ec2-instance.sh
i-013daca3d11137a8c is being created
i-013daca3d11137a8c was tagged waiting to login
The authenticity of host '54.202.110.114 (54.202.110.114)' can't be established.
ECDSA key fingerprint is SHA256:asdfasdfasdfasdfasdf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.202.110.114' (ECDSA) to the list of known hosts.

[centos@ip-172-31-5-57 ~]$ systemctl status cassandra
● cassandra.service - Cassandra Service
Loaded: loaded (/etc/systemd/system/cassandra.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-01 02:15:10 UTC; 14min ago
Process: 456 ExecStart=/opt/cassandra/bin/cassandra -p /opt/cassandra/PID (code=exited, status=0/SUCCESS)
Main PID: 5240 (java)
CGroup: /system.slice/cassandra.service
└─5240 java -Xloggc:/opt/cassandra/bin/../logs/gc.log -ea -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=1000003 -XX:+AlwaysPreTouch -XX:-UseBiasedLocking -XX:+U...

Mar 01 02:14:13 ip-172-31-22-103.us-west-2.compute.internal systemd[1]: Starting Cassandra Service...
Mar 01 02:15:10 ip-172-31-5-57 systemd[1]: Started Cassandra Service.

[centos@ip-172-31-5-57 ~]$ systemctl status metricds
Unit metricds.service could not be found.
[centos@ip-172-31-5-57 ~]$ systemctl status metricsd
● metricsd.service - MetricsD OS Metrics
Loaded: loaded (/etc/systemd/system/metricsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-01 02:15:10 UTC; 14min ago
Main PID: 5243 (metricsd)
CGroup: /system.slice/metricsd.service
└─5243 /opt/cloudurable/bin/metricsd

Mar 01 02:25:15 ip-172-31-5-57 metricsd[5243]: INFO : [worker] - 2017/03/01 02:25:15 config.go:30: Loading config /etc/metricsd.conf
Mar 01 02:25:15 ip-172-31-5-57 metricsd[5243]: INFO : [worker] - 2017/03/01 02:25:15 config.go:46: Loading log...


[centos@ip-172-31-5-57 ~]$ systemctl status systemd-cloud-watch
● systemd-cloud-watch.service - SystemD Cloud Watch Sends Journald logs to CloudWatch
Loaded: loaded (/etc/systemd/system/systemd-cloud-watch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-01 02:15:10 UTC; 15min ago
Main PID: 5241 (systemd-cloud-w)
CGroup: /system.slice/systemd-cloud-watch.service
└─5241 /opt/cloudurable/bin/systemd-cloud-watch /etc/systemd-cloud-watch.conf

Mar 01 02:30:44 ip-172-31-5-57 systemd-cloud-watch[5241]: main INFO: 2017/03/01 02:30:44 workers.go:138: Read record &{i-013daca3d11137a8c 1488335194775 5241 0 0 systemd-cloud-w /opt/cloudurable/bin/systemd-cloud-watch /opt/cloudurable/bin...
...
Mar 01 02:30:44 ip-172-31-5-57 systemd-cloud-watch[5241]: main INFO: 2017/03/01 02:30:44 workers.go:138: Read record &{i-013daca3d11137a8c 1488335194776 5241 0 0 systemd-cloud-w /opt/cloudurable/bin/systemd-cloud-watch /opt...7f10a2c35de4098
Mar 01 02:30:44 ip-172-31-5-57 systemd-cloud-watch[5241]: repeater INFO: 2017/03/01 02:30:44 cloudwatch_journal_repeater.go:209: SENT SUCCESSFULLY
Mar 01 02:30:44 ip-172-31-5-57 systemd-cloud-watch[5241]: repeater
We used systemctl status systemd-cloud-watch, systemctl status cassandra, and systemctl status metricsd to ensure it is all working.

Cassandra Tutorial: Cassandra Cluster DevOps/DBA series

The first tutorial in this Cassandra tutorial series focused on setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) covers applying the tools and techniques from the first three articles to produce an image (an EC2 AMI, to be precise) that we can deploy to AWS/EC2. For this explanation, we will use Packer, Ansible, and the AWS command line tools. The AWS command line tools are essential for doing DevOps with AWS.

Check out more information about the Cassandra Database
