Discovering Ethereum nodes with Apache Tuweni

As anticipated in my previous post, this time I tried to discover Hyperledger Besu nodes. I chose this Ethereum implementation because I feel comfortable in the JVM environment but, of course, any other implementation should work the same.

Also, Besu’s documentation is great and, like RSKj, it’s fully configurable via CLI options (but in a much, much nicer way).

Create a mini isolated Ethereum network

As before, I want two nodes that know each other, isolated from the rest of the world, with peer discovery enabled on localhost so they can be reached by Tuweni.

Besu can be downloaded from the PegaSys homepage (by the way, it’s not nice to have the checksum column empty) and, once decompressed, the executable is at bin/besu.

As in RSK (well, ok, they took it from Ethereum), the enode id is the secp256k1 derivation of a known private key and, for the sake of simplicity, I’ll use the same keys as before for each node.
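
If you want to double-check a key/nodeId pair, the derivation is a one-liner with Tuweni itself. Here is a minimal sketch in plain Java (against the same Tuweni 0.10.0 used in these posts; tuweni-devp2p already pulls in the bytes and crypto modules, and the EnodeId class name is mine), using Node 1’s known private key from the previous post:

import java.security.Security;

import org.apache.tuweni.bytes.Bytes32;
import org.apache.tuweni.crypto.SECP256K1;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class EnodeId {
    public static void main(String[] args) {
        Security.addProvider(new BouncyCastleProvider());
        // Node 1's known private key (the same one used in the RSK experiment)
        SECP256K1.SecretKey secret = SECP256K1.SecretKey.fromBytes(Bytes32.fromHexString(
                "e081f18ae05516238e6669a855d48b5115b652588eba7a99f48bf4f1acf1f7c0"));
        // the nodeId is the uncompressed secp256k1 public key derived from it
        System.out.println(SECP256K1.KeyPair.fromSecretKey(secret).publicKey().toHexString());
        // -> 9ab56235e4f577... (possibly 0x-prefixed, depending on the Tuweni version)
    }
}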

So, these are the command line arguments I passed to Besu v1.3.8.

Node 1

  • --network=dev development network
  • --node-private-key-file=/Users/diegoll/dev/ethereum/besu/9ab5623.key file containing the key to derive 0x9ab5623… nodeId
  • --data-path=/Users/diegoll/dev/ethereum/besu/data-9ab5623 directory to store data (I kept it separate from the key file just in case I wanted to reset everything)
  • --p2p-interface=127.0.0.1 I want to bind it only to local calls
  • --p2p-port=50501 a free port
  • --bootnodes=enode://33c8feaebe6964bae8eaf1186913d3f38986ac4a01b93384a46879ca4291dd095f79bdddc8544ad6e15fdee3c30dfc396511109b602d654d41a2135834bccfc4@127.0.0.1:50502 the sibling node with known nodeId based on its known private key
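
Putting those flags together, Node 1’s full invocation looks something like this (all values are straight from the list above; Node 2 is launched the same way with the values swapped):

bin/besu --network=dev \
  --node-private-key-file=/Users/diegoll/dev/ethereum/besu/9ab5623.key \
  --data-path=/Users/diegoll/dev/ethereum/besu/data-9ab5623 \
  --p2p-interface=127.0.0.1 \
  --p2p-port=50501 \
  --bootnodes=enode://33c8feaebe6964bae8eaf1186913d3f38986ac4a01b93384a46879ca4291dd095f79bdddc8544ad6e15fdee3c30dfc396511109b602d654d41a2135834bccfc4@127.0.0.1:50502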

Node 2

  • --network=dev development network
  • --node-private-key-file=/Users/diegoll/dev/ethereum/besu/33c8fea.key file containing the key to derive 0x33c8fea… nodeId
  • --data-path=/Users/diegoll/dev/ethereum/besu/data-33c8fea directory to store data
  • --p2p-interface=127.0.0.1 I want to bind it only to local calls
  • --p2p-port=50502 another free port
  • --bootnodes=enode://9ab56235e4f5771814515dfaf2193b2bdbf9815e1411cb0ff7dcc0dfa4caec2c4d72cdf6bb9a1ced2e8287fb746428a47529e5325001d17fc3ba10dc9036dc99@127.0.0.1:50501 the sibling node with known nodeId based on its known private key

Create a simple Apache Tuweni Discovery Service

This time the code actually prints the looked-up nodes; I slightly modified the previous example, because this time it really works:

package io.blockmate.eth

import org.apache.tuweni.crypto.SECP256K1
import org.apache.tuweni.devp2p.DiscoveryService
import org.bouncycastle.jce.provider.BouncyCastleProvider
import java.security.Security

import kotlinx.coroutines.*
import java.net.InetAddress
import java.net.URI


fun main() = runBlocking {
    Security.addProvider(BouncyCastleProvider())
    // Node 1's well-known public key, i.e. its nodeId
    val key9ab5623 = SECP256K1.PublicKey.fromHexString("0x9ab56235e4f5771814515dfaf2193b2bdbf9815e1411cb0ff7dcc0dfa4caec2c4d72cdf6bb9a1ced2e8287fb746428a47529e5325001d17fc3ba10dc9036dc99")
    val service = DiscoveryService.open(
        keyPair = SECP256K1.KeyPair.random(), // ephemeral identity for the discovery client
        port = 50500,
        host = "localhost",
        bootstrapURIs = listOf(
            "enode://${key9ab5623.toHexString()}@127.0.0.1:50501"
        ).map { URI.create(it) },
        advertiseAddress = InetAddress.getLoopbackAddress()
    )
    service.awaitBootstrap() // wait until the bootstrap peer has answered
    val peers = service.lookup(key9ab5623) // recursive FIND_NODE lookup targeting Node 1
    peers.forEach { println(it.endpoint) }
}

(the build.gradle.kts for building this can be exactly the same as before)

The results

As expected, I can now see this nice output in the console:

Endpoint(address=/127.0.0.1, udpPort=50501, tcpPort=50501)
Endpoint(address=/127.0.0.1, udpPort=50500, tcpPort=50500)

What’s next

I’m not quite sure; I may use this as a starting point for accessing the whole Ethereum mainnet and maybe gathering some information about its topology.

[not] Discovering RSK nodes with Apache Tuweni

Apache Tuweni is an effort to provide a set of modules for building Ethereum-compatible nodes. I took on the personal challenge of learning a little more about it and increasing my Kotlin fu along the way.

As I worked on RSKj for more than two years, I thought I could use it as my test environment for discovering its nodes through plain ÐΞVp2p UDP connections. It turned out I couldn’t, basically because RSK doesn’t implement the mentioned discovery protocol, but I think the process is worth documenting.

Create a mini isolated RSK network

At this point I wanted two nodes that know each other, isolated from the rest of the world, with peer discovery enabled on localhost so they can be reached by Tuweni.

For these nodes to know each other from the beginning, they need to know the nodeIds exposed to the protocol. Accordingly, you must be in control of the private key used when launching each node and derive from it the corresponding secp256k1 public key. As a playground tool, NOT for production usage, RSKj’s GenNodeKeyId main should do the trick.

So, these are the command line arguments I passed to the rskj executable jar (WASABI-1.1.0):

Node 1

  • -Dbind_address=localhost I want to only respond to local calls
  • -Dpeer.port=50501 a free port
  • -Dpeer.privateKey=e081f18ae05516238e6669a855d48b5115b652588eba7a99f48bf4f1acf1f7c0 a known private key which generates a 0x9ab5623… nodeId
  • -Dpeer.active.0.nodeId=33c8feaebe6964bae8eaf1186913d3f38986ac4a01b93384a46879ca4291dd095f79bdddc8544ad6e15fdee3c30dfc396511109b602d654d41a2135834bccfc4 Node 2’s nodeId
  • -Dpeer.active.0.ip=localhost Node 2’s host
  • -Dpeer.active.0.port=50502 Node 2’s port
  • -Ddatabase.dir=/Users/diegoll/.rsk/regtest-9ab562 current instance database (you must set a different one for each instance)
  • -Drpc.providers.web.http.enabled=false I disabled JSON-RPC so we don’t have port collisions on localhost
  • -Dpeer.discovery.enabled=true I enable peer discovery so I can start receiving messages from Apache Tuweni

Node 2

  • -Dminer.client.enabled=false I want just a single node generating blocks
  • -Dbind_address=localhost I want to only respond to local calls
  • -Dpeer.port=50502 a free port
  • -Dpeer.privateKey=772754048b169a8de6da3db7d97ba464718bd3da712c958937b0d4aa3434f110 private key which derives the 0x33c8fea… peer.active.0.nodeId value in Node 1
  • -Dpeer.active.0.nodeId=9ab56235e4f5771814515dfaf2193b2bdbf9815e1411cb0ff7dcc0dfa4caec2c4d72cdf6bb9a1ced2e8287fb746428a47529e5325001d17fc3ba10dc9036dc99 the nodeId derived from the 0xe081f1… peer.privateKey of Node 1
  • -Dpeer.active.0.ip=localhost Node 1’s host
  • -Dpeer.active.0.port=50501 Node 1’s port
  • -Ddatabase.dir=/Users/diegoll/.rsk/regtest-33c8fe

These -D arguments are strictly Java CLI options, so they must be placed before the jar invocation, e.g. java -D... -D... -jar xxx.jar [xxx app options].

Both nodes must be started with the --regtest option, and an optional --reset for cleaning everything up before each run.
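
Putting it all together, Node 1’s launch line ends up looking something like this (the jar file name is illustrative; use whatever your WASABI-1.1.0 build produced):

java -Dbind_address=localhost \
     -Dpeer.port=50501 \
     -Dpeer.privateKey=e081f18ae05516238e6669a855d48b5115b652588eba7a99f48bf4f1acf1f7c0 \
     -Dpeer.active.0.nodeId=33c8feaebe6964bae8eaf1186913d3f38986ac4a01b93384a46879ca4291dd095f79bdddc8544ad6e15fdee3c30dfc396511109b602d654d41a2135834bccfc4 \
     -Dpeer.active.0.ip=localhost \
     -Dpeer.active.0.port=50502 \
     -Ddatabase.dir=/Users/diegoll/.rsk/regtest-9ab562 \
     -Drpc.providers.web.http.enabled=false \
     -Dpeer.discovery.enabled=true \
     -jar rskj-core.jar --regtest --reset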

Create a simple Apache Tuweni Discovery Service

Tuweni is written in a mix of Java and Kotlin; in particular, the DiscoveryService I wanted to try is written in Kotlin, so I went all in on that language.

This is the code I used to try to reach my mini RSK network:

package io.blockmate.eth

import org.apache.tuweni.crypto.SECP256K1
import org.apache.tuweni.devp2p.DiscoveryService
import org.bouncycastle.jce.provider.BouncyCastleProvider
import org.logl.slf4j.Slf4jLoggerProvider
import java.security.Security

import kotlinx.coroutines.*
import java.net.InetAddress
import java.net.URI


fun main() = runBlocking {
    Security.addProvider(BouncyCastleProvider())
    val service = DiscoveryService.open(
        keyPair = SECP256K1.KeyPair.random(),
        port = 50500,
        host = "localhost",
        bootstrapURIs = listOf(
            "enode://9ab56235e4f5771814515dfaf2193b2bdbf9815e1411cb0ff7dcc0dfa4caec2c4d72cdf6bb9a1ced2e8287fb746428a47529e5325001d17fc3ba10dc9036dc99@127.0.0.1:50501"
        ).map { URI.create(it) },
        advertiseAddress = InetAddress.getLoopbackAddress(),
        loggerProvider = Slf4jLoggerProvider()
    )
    service.awaitBootstrap() // the Ping sent during bootstrap is what blows up on the RSK side
    Unit // runBlocking here expects a Unit result
}

With this build.gradle.kts supporting it:

import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    kotlin("jvm") version "1.3.61"
}

group = "io.blockmate"
version = "1.0-SNAPSHOT"

repositories {
    jcenter()
}

dependencies {
    implementation("org.apache.tuweni:tuweni-devp2p:0.10.0")
    implementation("org.bouncycastle:bcprov-jdk15on:1.64")
    implementation("org.logl:logl-slf4j:0.3.1")
    runtimeOnly("ch.qos.logback:logback-classic:1.2.3")
}

val compileKotlin: KotlinCompile by tasks
val compileTestKotlin: KotlinCompile by tasks

compileKotlin.kotlinOptions.jvmTarget = "11"
compileTestKotlin.kotlinOptions.jvmTarget = "11"

The results

As spoiled at the beginning, it didn’t work as expected, but at least I was able to see the Ping packet received on the RSK side and the resulting exception, which made the protocol differences evident.

I wasn’t able to find any specification of the RSK discovery protocol, but the stacktrace gives a good hint about where to start looking:

2020-01-03-11:03:37.592 ERROR [c.r.n.d.PacketDecoder]  Exception processing inbound message from null : 7911bdaf109a2c5d6345cb655807dd4d2692ef366c9c9ac78e24937792d63be4c04f591dad1facaf2cef72542013d265fd5f2995772c7350a93d17092e99da506c7df7227c8d0bfbf793a9f5413d6cad1288f752d40a3fcf59afb89e086372410101e304c9847f00000182c54480cb847f00000182c54582c545845e0f49ce86016f6bb7ebc1
java.lang.ClassCastException: org.ethereum.util.RLPList cannot be cast to org.ethereum.util.RLPItem
	at co.rsk.net.discovery.message.PingPeerMessage.parse(PingPeerMessage.java:93)
	at co.rsk.net.discovery.message.PingPeerMessage.<init>(PingPeerMessage.java:49)
	at co.rsk.net.discovery.message.PeerDiscoveryMessageFactory.createMessage(PeerDiscoveryMessageFactory.java:31)
	at co.rsk.net.discovery.message.MessageDecoder.decode(MessageDecoder.java:60)
	at co.rsk.net.discovery.PacketDecoder.decodeMessage(PacketDecoder.java:48)
	at co.rsk.net.discovery.PacketDecoder.decode(PacketDecoder.java:43)
	at co.rsk.net.discovery.PacketDecoder.decode(PacketDecoder.java:35)

What’s next

I’ll try the same isolated topology on top of a real Ethereum implementation like Hyperledger Besu.

UPDATE: link to the Hyperledger Besu post

Starting with Vert.x 3 – Creating a new project

These are my first impressions of what has changed in Vert.x 3, from a regular Java developer’s perspective.

Vert.x 3 has ripped out the old modules concept in favor of a more straightforward approach. You no longer need to create a .zip file with a predefined structure to deploy your application; instead, you run your application inside Vert.x however you want. This is a great power, and at first I found myself a little lost.

You can still download a Vert.x distribution but, in the current state of the platform, I see it as an easy way to start experimenting (even in the many languages Vert.x supports); it’s no longer necessary for production environments.

In Vert.x 3 you create your own main function, invoke a Vert.x factory, and voilà, you have the whole platform there to serve you. Indeed, you can look at the io.vertx.core.Starter class, which is invoked by the bin/vertx executable in the distribution, and see that it does nothing but parse CLI args and build configurations for the factory I mentioned before.

From the v2 era I liked having an entry point written in JavaScript to deploy all my Java verticles. I thought of it as my init script, so it made sense to have it in a scripting language, and I didn’t want to lose that. Now it’s my responsibility to create this structure. A few lines of code are worth a thousand words:
(Deprecated: see Update section below)

    import io.vertx.core.Vertx;
    import io.vertx.core.logging.Logger;
    import io.vertx.core.logging.LoggerFactory;

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class Launcher {
        public static void main(String[] args) {
            Logger vertxLogger = LoggerFactory.getLogger(Launcher.class.getName());
            Vertx vertx = Vertx.vertx();
            vertx.deployVerticle("main.js", event -> {
                if (event.succeeded()) {
                    vertxLogger.info("Your Vert.x application is started!");
                } else {
                    vertxLogger.error("Unable to start your application", event.cause());
                }
            });

            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    final CountDownLatch latch = new CountDownLatch(1);
                    vertx.close(ar -> {
                        if (ar.succeeded()) {
                            vertxLogger.info("Your Vert.x application is stopped!");
                        } else {
                            vertxLogger.error("Failure in stopping Vert.x", ar.cause());
                        }
                        latch.countDown();
                    });
                    try {
                        if (!latch.await(2, TimeUnit.MINUTES)) {
                            vertxLogger.error("Timed out waiting to undeploy all");
                        }
                    } catch (InterruptedException e) {
                        throw new IllegalStateException(e);
                    }
                }
            });
        }
    }

In the deployVerticle call I deploy my main verticle.
The shutdown hook is stolen from io.vertx.core.Starter to properly shut down the application.

For this to run you just need a few dependencies: io.vertx:vertx-core:jar:3.0.0-milestone4:compile and io.vertx:vertx-lang-js:jar:3.0.0-milestone4:runtime

Packing your application

Now it’s just a matter of packaging the application to make it runnable. There is a movement that loves the fat-jar concept; I personally dislike it for various reasons, one of them being that I find it completely unnecessary: we are advanced Java developers, so man up, set up your classpath and invoke the main class as the good lord intended (well, that was too much).

In Maven this is extra easy: you just need to properly set up the maven-jar-plugin to write the MANIFEST and the maven-assembly-plugin to load an assembly descriptor. Then you can have a simple run.sh script for running it without even having to remember the name of your main jar. I use something like this:

    #!/usr/bin/env bash
    DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
    java -jar "$DIR/libs/${project.build.finalName}.${project.packaging}" "$@"

Here is a minimal pom.xml briefing all the above ideas:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

      <modelVersion>4.0.0</modelVersion>
      <groupId>com.locademiaz</groupId>
      <artifactId>initial-vertx-3-project</artifactId>
      <version>0.1-SNAPSHOT</version>
      <packaging>jar</packaging>
    
      <dependencies>
        <dependency>
          <groupId>io.vertx</groupId>
          <artifactId>vertx-core</artifactId>
          <version>${vertx.version}</version>
        </dependency>
        <dependency>
          <groupId>io.vertx</groupId>
          <artifactId>vertx-lang-js</artifactId>
          <version>${vertx.version}</version>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    
      <build>
        <resources>
          <resource>
            <directory>src/main/js</directory>
          </resource>
        </resources>
        <plugins>
          <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.3</version>
            <configuration>
              <source>1.8</source>
              <target>1.8</target>
            </configuration>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <version>2.6</version>
            <configuration>
              <archive>
                <manifest>
                  <mainClass>com.locademiaz.Launcher</mainClass>
                  <addClasspath>true</addClasspath>
                </manifest>
              </archive>
            </configuration>
          </plugin>
          <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>2.5.3</version>
            <configuration>
              <descriptor>src/main/assembly/dist.xml</descriptor>
            </configuration>
            <executions>
              <execution>
                <id>make-assembly</id>
                <phase>package</phase>
                <goals>
                  <goal>single</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>

      <properties>
        <vertx.version>3.0.0-milestone4</vertx.version>
      </properties>
    </project>

And also the dist.xml:

    <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
      <id>dist</id>
      <formats>
        <format>tar.gz</format>
      </formats>
      <baseDirectory>${project.artifactId}</baseDirectory>
      <dependencySets>
        <dependencySet>
          <outputDirectory>libs/</outputDirectory>
          <scope>runtime</scope>
        </dependencySet>
      </dependencySets>
      <files>
        <file>
          <source>${project.build.directory}/${project.build.finalName}.${project.packaging}</source>
          <outputDirectory>libs/</outputDirectory>
          <fileMode>0664</fileMode>
        </file>
        <file>
          <source>src/main/sh/run.sh</source>
          <fileMode>0744</fileMode>
          <filtered>true</filtered>
        </file>
      </files>
    </assembly>

With this initial configuration you run mvn package and you’ll get a tarball in the target/ folder; untar it, and you’ll have a nice run script for your whole application.

Bonus track

I prefer to use Logback as my logging system, and in the old days we had to modify the Vert.x distribution and the executable to get this support.
Nowadays you just add System.setProperty(LoggerFactory.LOGGER_DELEGATE_FACTORY_CLASS_NAME, SLF4JLogDelegateFactory.class.getName()); as the first line of your main, add the ch.qos.logback:logback-classic:jar:1.1.3:runtime dependency, and you are there.
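
In the Launcher from the beginning of the post, that first line looks like this (class locations as of 3.0.0-milestone4; SLF4JLogDelegateFactory moved out of the impl package in later releases):

    import io.vertx.core.logging.LoggerFactory;
    import io.vertx.core.logging.impl.SLF4JLogDelegateFactory;

    public class Launcher {
        public static void main(String[] args) {
            // must run before the first LoggerFactory.getLogger(...) call
            System.setProperty(LoggerFactory.LOGGER_DELEGATE_FACTORY_CLASS_NAME,
                    SLF4JLogDelegateFactory.class.getName());
            // ... the rest of the Launcher shown earlier ...
        }
    }
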
For the extra lazy, here is a simple logback.xml:

    <configuration>
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender" level="INFO">
        <encoder>
          <pattern>[%-5level] %logger{15} - %msg%n</pattern>
        </encoder>
      </appender>
    
      <logger name="io.vertx" level="INFO">
        <appender-ref ref="STDOUT" />
      </logger>
      <logger name="com.hazelcast" level="ERROR">
        <appender-ref ref="STDOUT" />
      </logger>
      <logger name="io.netty.util.internal.PlatformDependent" level="ERROR">
        <appender-ref ref="STDOUT" />
      </logger>
      <root level="INFO">
      </root>
    </configuration>

I also like to have a user-accessible logging configuration; for this, you must add the following to the dist.xml file

        <file>
          <source>src/main/resources/logback.xml</source>
          <fileMode>0644</fileMode>
        </file>

and modify the java invocation line in run.sh to include -Dlogback.configurationFile=file:$DIR/logback.xml

I hope you find this useful and enjoy the great things Vert.x 3 has to offer.

UPDATE:

I’ve been thinking, and I now consider it unnecessary to have my own main class which, in the end, just imitates the provided io.vertx.core.Starter. So I’ve simplified my initial deployment, removing all references to my custom mainClass, and using this run.sh:

    #!/usr/bin/env bash
    DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
    java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.impl.SLF4JLogDelegateFactory \
         -Dlogback.configurationFile=file:$DIR/logback.xml \
         -classpath "$DIR/libs/*" io.vertx.core.Starter run main.js \
         -conf $DIR/conf.json

Also, if you want to run your application using Maven (useful for debugging), you just need to configure the org.codehaus.mojo:exec-maven-plugin like this and invoke mvn exec:java:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.4.0</version>
      <configuration>
        <mainClass>io.vertx.core.Starter</mainClass>
        <systemProperties>
          <systemProperty>
            <key>vertx.logger-delegate-factory-class-name</key>
            <value>io.vertx.core.logging.impl.SLF4JLogDelegateFactory</value>
          </systemProperty>
        </systemProperties>
        <arguments>
          <argument>run</argument>
          <argument>main.js</argument>
          <argument>-conf</argument>
          <argument>${project.build.outputDirectory}/conf.json</argument>
        </arguments>
      </configuration>
    </plugin>

Handling multiple verticle deployment

Every Vert.x module needs a single starting point.
Once your application has many verticles interacting, you’ll need one verticle to start them all, one verticle to find them, one verticle to bring them all and in the darkness bind them (I’m sorry).
For this particular task I prefer to use JavaScript. I tend to think of this verticle as the start script of my application, so I like it to be easy to write, and I find the JavaScript syntax a good fit for this.

My deployment verticle

I usually name it app-deployer.js and I place it in src/main/javascript (I’ll explain below how to add it to Maven).

var container = require("vertx/container")
var console = require("vertx/console")
var config = container.config;

var skipDeploy;
if (config.skipDeploy) {
  skipDeploy = config.skipDeploy;
} else {
  skipDeploy = [];
}

const verticles = [
  {
    name : "con.locademiaz.vertx.FirstVerticle",
    instances : 1,
    config : {
      foo_param : config.bar_param
    }
  }
];

deployVerticles(verticles, 0);

function deployVerticles(verticles, verticleIndexToDeploy) {
  if (verticleIndexToDeploy < verticles.length) {
    const verticle = verticles[verticleIndexToDeploy];
    const verticleName = verticle.name;
    if (skipDeploy.indexOf(verticleName) == -1 ) {
      container.deployVerticle(
        verticleName,
        verticle.instances,
        verticle.config,
        function(err, deployID) {
          if (!err) {
            console.log("[Ok] " + verticleName);
            deployVerticles(verticles, verticleIndexToDeploy + 1);
          } else {
            console.log("[Fail] " + verticleName + " -> " + err.getMessage());
          }
        }
      )
    } else {
      console.log("[Skip] " + verticleName);
      deployVerticles(verticles, verticleIndexToDeploy + 1);
    }
  }
}

In this file you can do all kinds of tricks.
One of my preferred ones is to add a list of blacklisted verticles. This is useful for testing, because you can deploy this file, getting your whole system deployed, and mock some behavior inside your test.
Also, as I like to keep my external configuration lean, this is where I manipulate what is passed on the command line and adapt it to each verticle’s needs.
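
For example, a conf.json like this (the values are hypothetical) deploys the whole system except FirstVerticle, which a test can then replace with a mock:

{
  "bar_param": "some-value",
  "skipDeploy": ["com.locademiaz.vertx.FirstVerticle"]
}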

Extending your Maven configuration

Following what is explained in my previous post, this is how you add your verticle deployer to your Maven configuration.

pom.xml

Add the resource folder

<project>
  ...
  <build>
    ...
    <resources>
      ...
      <resource><directory>src/main/javascript</directory></resource>
      ...
    </resources>
    ...
  </build>
  ...
</project>

And now, because I’m a control freak and I don’t like unnecessary files ending up in my project’s jar, I exclude them from the package as follows:

<project>
  ...
  <build>
    ...
    <plugins>
      ...
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <excludes>
            <exclude>app-deployer.js</exclude>
          </excludes>
        </configuration>
      </plugin>
      ...
    </plugins>
    ...
  </build>
  ...
</project>

And that’s it, now you have a nice entry point for your module.
Enjoy.

Vert.x Java project – Quick start

After having 41 successful-n-productive™ Vert.x systems running, I have distilled a few practices worth sharing.

Building environment

I’ve decided to go with Maven for managing dependencies and the build cycle. Vert.x has support for Gradle, but I dropped that option because I just wanted a simple descriptive language for my project instead of a full imperative language for this purpose. Nothing against Gradle, though.

Vert.x modules

Vert.x recommends organizing your application as modules. Besides all the technical benefits of this practice (classloader isolation, dependency management, etc.), I would add that it is a great way to define different areas of development across your team. This, along with a clear document describing the services each module provides and exposes through its interfaces, will help you isolate different application concerns as the building blocks of the whole system.

Initial setup

Vert.x has a Maven archetype for getting started but, IMHO, it doesn’t do well with the Maven files, and it also generates a single project containing examples for every supported language. BTW, I’ve sent a pull request for this which was never discussed.
This post assumes you have some knowledge of Maven, its structure and its usage.

pom.xml

You can read an in-depth explanation of this file’s structure here (not yet ready).

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.locademiaz.vertx</groupId>
  <artifactId>quick-start</artifactId>
  <version>0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-platform</artifactId>
      <version>2.1M2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>testtools</artifactId>
      <version>2.0.2-final</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
      <resource>
        <directory>src/main/js</directory>
      </resource>
    </resources>
    <testResources>
      <testResource>
        <directory>src/test/resources</directory>
        <filtering>true</filtering>
      </testResource>
    </testResources>

    <plugins>
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-compiler-plugin</artifactId>
         <version>2.5.1</version>
         <configuration>
           <source>1.7</source>
           <target>1.7</target>
         </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <outputDirectory>${project.build.directory}/mods</outputDirectory>
          <finalName>${module.name}</finalName>
          <appendAssemblyId>false</appendAssemblyId>
          <descriptors>
            <descriptor>src/main/assembly/mod.xml</descriptor>
          </descriptors>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-maven-plugin</artifactId>
        <version>2.0.1-final</version>
      </plugin>
    </plugins>
  </build>
  <properties>
    <module.name>${project.groupId}~${project.artifactId}~${project.version}</module.name>
  </properties>
</project>

mod.xml

Vert.x defines a structure for its modules, so below is a Maven assembly definition for building it. In accordance with the pom.xml defined above, this file must be placed in the src/main/assembly directory.

<?xml version="1.0" encoding="UTF-8"?>
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">

  <id>mod</id>

  <formats>
    <format>zip</format>
    <format>dir</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <useProjectArtifact>false</useProjectArtifact>
      <outputDirectory>lib/</outputDirectory>
      <scope>runtime</scope>
      <fileMode>664</fileMode>
    </dependencySet>
  </dependencySets>

  <includeBaseDirectory>false</includeBaseDirectory>

  <fileSets>
    <fileSet>
      <directory>${project.build.outputDirectory}</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>**</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>

Your first verticle

Now we have everything set up to start coding. Create a FirstVerticle.java file inside the src/main/java/com/locademiaz/vertx directory with the following content:

package com.locademiaz.vertx;

import org.vertx.java.platform.Verticle;

public class FirstVerticle extends Verticle {

    @Override
    public void start() {
        container.logger().info("Your first verticle has been started!");
    }
}

mod.json

The final part of a Vert.x module is its descriptor. You can find a comprehensive list of its fields here. Place this file in the src/main/resources directory

{
    "main":"com.locademiaz.vertx.FirstVerticle",
    "deploys":"${module.name}"
}

Running your module

Development phase

As you can see in the pom.xml above, the vertx-maven-plugin is configured in the plugins section. Having it configured allows you to run Vert.x, with your module in place, directly from your source code. To do so, just run:

$ mvn clean package vertx:runMod

et voilà, you should see the message logged to the system console.

Productive phase

As presented here, the .zip generated in target/mods by the command

$ mvn clean package

is ready to be deployed. You can drop it into your Nexus repository, or into your local Maven repository, or you can also use the handy runzip option:

$ vertx runzip target/mods/com.locademiaz.vertx~quick-start~0.1-SNAPSHOT.zip

In upcoming entries you can expect more insights about handling deployment, configuration, Vert.x packaging, testing, continuous integration and any other shenanigans I encounter on this pleasant trip. Stay tuned.

Turn your Java apps Gnome-Shell friendly

The Problem

When you try to add a Java application as a favorite in the GNOME Shell’s lateral dock and run it, you’ll end up with duplicated icons: one for the launcher and one for the running app. This happens because the shell uses an application-based system for grouping tasks; the idea is that if you add an application as a favorite launcher and start it, you’ll end up with that launcher icon highlighted. Internally, the shell matches the running process with the Exec clause of the .desktop file.
This works well except for applications running inside a VM or being interpreted, because those share the same running process. In that situation the shell inspects the WM_CLASS X Window property [1] and matches it against the full name of the desktop file. E.g. if your application has WM_CLASS set to “mySwingApp”, for it to be successfully matched in the dock with its launcher, that launcher must be called mySwingApp.desktop and be located according to XDG.

Note: to inspect that value on any window, just run xprop WM_CLASS and click on the target window.

Why is this happening?

Even if you are creating a Swing application from scratch, there is no easy way to tweak that X Window property using plain and portable APIs. Taking a look into the OpenJDK sources, this is how it is managed:

String mainClassName = null;

StackTraceElement trace[] = (new Throwable()).getStackTrace();
int bottom = trace.length - 1;
if (bottom >= 0) {
    mainClassName = trace[bottom].getClassName();
}
if (mainClassName == null || mainClassName.equals("")) {
    mainClassName = "AWT";
}
awtAppClassName = getCorrectXIDString(mainClassName);

As you may note, the value used is the name of the class running the Swing main loop.

Da solution

Digging around the web, I found a Java agent and a pretty similar explanation of what’s going on.
So I forked that agent and improved it a little bit: I moved it into a Maven structure and removed its fat-jar packaging (I’m radically against fat-jars, as you can see in my comments here).

The forked project is located on my GitHub here:
https://github.com/diega/window-matching-agent
Please take a look at the README there.

A practical example

My first motivation for doing this was IntelliJ IDEA, so I’ll paste my environment here.

  • Download the agent-1.0.jar and put it wherever you want (I put it into IntelliJ’s bin/ folder)
  • Edit the file bin/idea.vmoptions adding this line

    -javaagent:agent-1.0.jar=intellij-ultimate

  • Create the file ~/.local/share/applications/intellij-ultimate.desktop with the following content

    [Desktop Entry]
    Version=10.5.1
    Name=IntelliJ IDEA Ultimate Edition
    Comment=The Most Intelligent Java IDE
    Categories=Applications;Development;
    Encoding=UTF-8
    Exec=env IDEA_CLASSPATH\=../lib/asm.jar /home/diego/bin/ideaIU-10.5/bin/idea.sh
    GenericName=IntelliJ
    Icon=/home/diego/bin/ideaIU-10.5/bin/idea128.png
    MimeType=text/x-java
    Terminal=false
    Type=Application
    URL=http://www.jetbrains.com/idea
    

Latest notes

If you download the agent-1.0.jar into another location (or with another name) you must adjust the -javaagent: parameter.
Of course, change the path in the Exec entry to point to your own executable.

Hope this helps somebody, it took me a while to figure out the-right-things-to-do™ :)


[1]: Application Based GNOME 3

Drools persistence on top of HashMap

Introduction

This post shows a reference implementation of Drools persistence on top of a non-transactional HashMap. It should serve as inspiration for more complex scenarios (like the Berkeley DB one). This work was also developed under Intalio’s sponsorship.

As the new abstraction is heavily inspired by JPA, some mechanisms must be emulated in order to get the same behavior in both implementations.

Involved classes

Starting from the abstraction described in my previous post, this is the new hierarchy for persisting the Drools runtime into a HashMap.

Drools abstract storage persistence diagram

I’ll try to explain the most relevant objects in the diagram, relating them to JPA components.

MapBasedPersistenceContext
behaves like the EntityManager: it stores all objects which are not yet committed. Finding objects is also resolved through this interface; to support this behavior it has access to the KnowledgeSessionStorage.
KnowledgeSessionStorage
represents the real persistent storage. This is the extension point for supporting any other non-JPA implementation. It provides saveOrUpdate and find methods and is responsible for assigning ids to the entities (see the sketch below).
ManualTransactionManager
as we cannot rely on JTA to manage our session, Drools hooks in explicit calls whenever things should be serialized. This component must access the NonTransactionalPersistenceSession to get the entities waiting to be persisted into the KnowledgeSessionStorage.
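
To make the extension point concrete, this is roughly the shape of the storage contract as described above (an illustrative sketch, not the literal interface from the codebase; SessionInfo and WorkItemInfo are the existing Drools entity classes):

public interface KnowledgeSessionStorage {
    // finding objects, delegated from the MapBasedPersistenceContext
    SessionInfo findSessionInfo(Integer id);
    WorkItemInfo findWorkItemInfo(Long id);
    // persisting, driven by the ManualTransactionManager
    void saveOrUpdate(SessionInfo sessionInfo);
    void saveOrUpdate(WorkItemInfo workItemInfo);
    // the storage is also responsible for assigning ids to the entities
    long getNextWorkItemId();
    int getNextStatefulKnowledgeSessionId();
}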

JBPM5

The same concepts have been applied to the jBPM5 codebase, where we have proper extensions for managing process semantics.

You’ll find there:

  • MapBasedProcessPersistenceContext
  • ProcessStorage
  • ManualProcessTransactionManager
  • etc

Usage

You must set up the environment you are going to pass to the JPAKnowledgeService (the name is kept for backward compatibility), setting the TRANSACTION_MANAGER and PERSISTENCE_CONTEXT_MANAGER keys. For this I have just created simple factories, but you can build them any way you want.

I suggest using KnowledgeSessionStorageEnvironmentBuilder or ProcessStorageEnvironmentBuilder. There is no default implementation of this storage (the one on top of HashMap is only used for testing purposes), so you have to pass your own implementation of the desired storage to the builder, as sketched below.
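
Wiring it up then looks roughly like this (a sketch: the builder’s exact API may differ, and MyBerkeleyDbStorage is a hypothetical implementation of yours; imports for the storage and builder are omitted since their packages depend on the module layout):

import org.drools.KnowledgeBase;
import org.drools.persistence.jpa.JPAKnowledgeService;
import org.drools.runtime.Environment;
import org.drools.runtime.StatefulKnowledgeSession;

public class MapPersistenceSetup {
    public static StatefulKnowledgeSession openSession(KnowledgeBase kbase) {
        // your own storage implementation (hypothetical class name)
        KnowledgeSessionStorage storage = new MyBerkeleyDbStorage();
        // the builder fills the TRANSACTION_MANAGER and PERSISTENCE_CONTEXT_MANAGER keys
        Environment env = new KnowledgeSessionStorageEnvironmentBuilder(storage).getEnvironment();
        // same factory as in the JPA case; the name is kept for backward compatibility
        return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
    }
}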

Testing further implementations

To test these two implementations together (and any new one that wants to respect the Drools persistence semantics), I have created a simple set of abstract tests (*).
For Drools Expert:
https://github.com/droolsjbpm/droolsjbpm/[…]/MapPersistenceTest.java
For JBPM5:
https://github.com/krisv/jbpm/[…]/MapPersistenceTest.java
In the future it would be great to also abstract the current JPA tests so they can run independently of the underlying persistence layer.

What’s next

Clean up these interfaces a bit more and start working on a Berkeley DB implementation.

(*) test names are not declarative enough; I’m holding a commit for this until the coming repository split is done

Drools Abstract Persistence Layer

Introduction

Over the past two months, under Intalio’s sponsorship, I’ve been working on adding a new persistence layer to Drools. The main goal is to support Berkeley DB as a persistent backend, and the added abstractions go in that direction.
The approach for this task was to work on top of the current drools-persistence-jpa module. This module is assumed to be tested enough (through JUnit or merely day-to-day usage), and it is the one that defines the semantics to which persistent Drools applications should adhere.

Little background on managing persistence

When you use the engine in a regular way, you obtain the ksession through the kbase, and it doesn’t know anything about how to persist its state. To provide the ksession with persistence capabilities, Drools makes use of the command pattern. That way, instead of creating it directly from the kbase, you go through a factory which returns a decorator that handles how and where the state is persisted. Since it is a decorator, it is totally transparent to the user whether or not the state is persisted.
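
In code, the difference is only in where the session comes from. A minimal sketch of the two paths (Drools 5 API; building the environment is elided):

import org.drools.KnowledgeBase;
import org.drools.persistence.jpa.JPAKnowledgeService;
import org.drools.runtime.Environment;
import org.drools.runtime.StatefulKnowledgeSession;

public class Sessions {
    // plain session, straight from the kbase: knows nothing about persistence
    public static StatefulKnowledgeSession plain(KnowledgeBase kbase) {
        return kbase.newStatefulKnowledgeSession();
    }

    // persistent session: the factory returns a command-based decorator that
    // handles how and where the state is persisted, transparently to the caller
    public static StatefulKnowledgeSession persistent(KnowledgeBase kbase, Environment env) {
        return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
    }
}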

Abstracting persistence

As the name indicates, the drools-persistence-jpa module is heavily oriented toward JPA usage. So what we did here was clean up the use of the JPA interfaces and abstract them away within this module.

The JPA interface used the most is the EntityManager; it was abstracted behind the PersistenceContext interface, which now has specific methods for persisting SessionInfos and WorkItemInfos.
Internally, Drools uses different scopes for dealing with persistence contexts: one for the whole application and one for each command. This behaviour has also been abstracted, into the PersistenceContextManager.

Persistence layer before refactor

Persistence layer after refactor

Backward compatibility

Another important aspect of this refactor is that it maintains backward compatibility. That means that, for the moment, you shouldn’t notice any difference if you already have your JPA application running. The current way to configure the ksession is still the same, but we’ll add new ones in the future which, I hope, will end up being more polished and abstract.

JBPM5

This persistence refactor also applies to jBPM5, which now has a ProcessPersistenceContext and a ProcessPersistenceContextManager.

What’s next

In a coming post I’ll show a reference implementation on top of a regular HashMap.

Simple Drools Rule Flow to BPMN2 migration tool

To ease the transition from Drools Flow files to the new BPMN2 standard, Drools gives you a nice but hidden feature: the XmlBPMNProcessDumper.
This tiny piece takes a RuleFlowProcess instance and spits out nice, well-formed BPMN2 files.
So, to make your life just a little better, I’ve created a project on GitHub exposing this internal piece behind a nice interface. Take a look at the RuleFlow2BPMN2Migrator class in

https://github.com/diega/rf-bpmn2-migrator
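
For reference, the migrator does little more than this (a sketch; the package names are from the jBPM5/Drools 5 codebase of the time and moved around during the repository split, so adjust accordingly):

import org.jbpm.bpmn2.xml.XmlBPMNProcessDumper;
import org.jbpm.ruleflow.core.RuleFlowProcess;

public class Rf2Bpmn2 {
    // given a RuleFlowProcess parsed from a legacy .rf file,
    // return the equivalent well-formed BPMN2 XML
    public static String toBpmn2(RuleFlowProcess process) {
        return XmlBPMNProcessDumper.INSTANCE.dump(process);
    }
}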

Enjoy.

Disclaimer: calling a 7-line class a tool isn’t quite fair, but you know…