Handling multiple verticle deployments

April 4, 2014 - One Response

Every Vert.x module needs a single starting point.
Once your application has many verticles interacting, you’ll need one verticle to start them all, one verticle to find them, one verticle to bring them all and in the darkness bind them (I’m sorry).
For this particular task I prefer to use JavaScript. I tend to think of this verticle as the start script of my application, so I like it to be easy to write, and I find the JavaScript syntax to be a good fit for that.

My deployment verticle

I usually name it app-deployer.js and I place it in src/main/javascript (I’ll explain below how to add it to Maven).

var container = require("vertx/container");
var console = require("vertx/console");
var config = container.config;

// verticle names listed in the config under "skipDeploy" will not be deployed
var skipDeploy = config.skipDeploy || [];

// the full list of verticles to deploy, in order, with per-verticle config
const verticles = [
  {
    name : "com.locademiaz.vertx.FirstVerticle",
    instances : 1,
    config : {
      foo_param : config.bar_param
    }
  }
];

deployVerticles(verticles, 0);

// deploy the verticles one at a time, in order, stopping at the first failure
function deployVerticles(verticles, verticleIndexToDeploy) {
  if (verticleIndexToDeploy < verticles.length) {
    const verticle = verticles[verticleIndexToDeploy];
    const verticleName = verticle.name;
    if (skipDeploy.indexOf(verticleName) == -1 ) {
      container.deployVerticle(
        verticleName,
        verticle.instances,
        verticle.config,
        function(err, deployID) {
          if (!err) {
            console.log("[Ok] " + verticleName);
            deployVerticles(verticles, verticleIndexToDeploy + 1);
          } else {
            console.log("[Fail] " + verticleName + " -> " + err.getMessage());
          }
        }
      )
    } else {
      console.log("[Skip] " + verticleName);
      deployVerticles(verticles, verticleIndexToDeploy + 1);
    }
  }
}

In this file you can do all kinds of tricks.
One of my favourites is to add a list of blacklisted verticles. This is useful for testing because you can deploy this file, get your whole system deployed, and mock some behavior inside your test.
Also, as I like to keep my external configuration lean, this is where I manipulate what is passed on the command line and adapt it to each verticle’s needs.
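
For example, a hypothetical conf.json (the parameter names simply mirror the snippet above, and the module name is the one from the quick-start post; adapt both to your own project):

{
  "bar_param" : "whatever FirstVerticle expects as foo_param",
  "skipDeploy" : ["com.locademiaz.vertx.FirstVerticle"]
}

Passing it with the -conf option, e.g. vertx runmod com.locademiaz.vertx~quick-start~0.1-SNAPSHOT -conf conf.json, makes that JSON available as container.config inside app-deployer.js.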

Extending your Maven configuration

Following what is explained in my previous post, this is how you add your verticle deployer to your Maven configuration:

pom.xml

Add the resource folder

<project>
  ...
  <build>
    ...
    <resources>
      ...
      <resource><directory>src/main/javascript</directory></resource>
      ...
    </resources>
    ...
  </build>
  ...
</project>

And now, because I’m a control freak and I don’t like unnecessary files ending up in my project’s jar, I exclude it from the package as follows:

<project>
  ...
  <build>
    ...
    <plugins>
      ...
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <excludes>
            <exclude>app-deployer.js</exclude>
          </excludes>
        </configuration>
      </plugin>
      ...
    </plugins>
    ...
  </build>
  ...
</project>

And that’s it, now you have a nice entry point for your module.
Enjoy.

Vert.x Java project – Quick start

April 3, 2014 - One Response

After having 41 successful-n-productive™ Vert.x systems running, I have distilled a few practices worth sharing.

Building environment

I’ve decided to go with Maven for managing dependencies and the build cycle. Vert.x also supports Gradle, but I dropped that option because I just wanted a simple descriptive language for my project rather than a full imperative language. Nothing against Gradle though.

Vert.x modules

Vert.x recommends organizing your application as modules. Besides all the technical benefits of this practice (classloader isolation, dependency management, etc.), I would add that it is a great way to define different areas of development across your team. This, along with a clear document describing the services each module provides and exposes through its interfaces, will help you isolate the different application concerns as the building blocks of the whole system.

Initial setup

Vert.x has a Maven archetype for getting started, but IMHO it doesn’t do a good job with the Maven files, and it also generates a single project containing examples for every supported language. BTW, I’ve sent a pull request about this, but it has never been discussed.
This post assumes you have some knowledge of Maven, its structure and usage.

pom.xml

You can read an in-depth explanation of the structure of this file here (not yet ready).

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.locademiaz.vertx</groupId>
  <artifactId>quick-start</artifactId>
  <version>0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-platform</artifactId>
      <version>2.1M2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>testtools</artifactId>
      <version>2.0.2-final</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
      <resource>
        <directory>src/main/js</directory>
      </resource>
    </resources>
    <testResources>
      <testResource>
        <directory>src/test/resources</directory>
        <filtering>true</filtering>
      </testResource>
    </testResources>

    <plugins>
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-compiler-plugin</artifactId>
         <version>2.5.1</version>
         <configuration>
           <source>1.7</source>
           <target>1.7</target>
         </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <outputDirectory>${project.build.directory}/mods</outputDirectory>
          <finalName>${module.name}</finalName>
          <appendAssemblyId>false</appendAssemblyId>
          <descriptors>
            <descriptor>src/main/assembly/mod.xml</descriptor>
          </descriptors>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-maven-plugin</artifactId>
        <version>2.0.1-final</version>
      </plugin>
    </plugins>
  </build>
  <properties>
    <module.name>${project.groupId}~${project.artifactId}~${project.version}</module.name>
  </properties>
</project>

mod.xml

Vert.x defines a structure for its modules, so below is a Maven assembly definition for building it. Consistent with the pom.xml defined above, this file must be placed in the src/main/assembly directory.

<?xml version="1.0" encoding="UTF-8"?>
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">

  <id>mod</id>

  <formats>
    <format>zip</format>
    <format>dir</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <useProjectArtifact>false</useProjectArtifact>
      <outputDirectory>lib/</outputDirectory>
      <scope>runtime</scope>
      <fileMode>664</fileMode>
    </dependencySet>
  </dependencySets>

  <includeBaseDirectory>false</includeBaseDirectory>

  <fileSets>
    <fileSet>
      <directory>${project.build.outputDirectory}</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>**</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>

Your first verticle

Now we have everything set up to start coding. Create a FirstVerticle.java file inside the src/main/java/com/locademiaz/vertx directory with the following content:

package com.locademiaz.vertx;

import org.vertx.java.platform.Verticle;

public class FirstVerticle extends Verticle {

    @Override
    public void start() {
        container.logger().info("Your first verticle has been started!");
    }
}
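
The pom above already declares io.vertx:testtools as a test dependency, so you can also exercise the verticle from a test. A minimal sketch (the class name and its location under src/test/java are my own choice, not something Vert.x mandates):

package com.locademiaz.vertx;

import org.junit.Test;
import org.vertx.java.core.AsyncResult;
import org.vertx.java.core.Handler;
import org.vertx.testtools.TestVerticle;
import static org.vertx.testtools.VertxAssert.assertTrue;
import static org.vertx.testtools.VertxAssert.testComplete;

public class FirstVerticleTest extends TestVerticle {

    @Test
    public void firstVerticleDeploys() {
        // deploy the verticle inside the test container and finish the test
        // once the deployment callback reports success
        container.deployVerticle("com.locademiaz.vertx.FirstVerticle",
                new Handler<AsyncResult<String>>() {
                    @Override
                    public void handle(AsyncResult<String> result) {
                        assertTrue(result.succeeded());
                        testComplete();
                    }
                });
    }
}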

mod.json

The final part of a Vert.x module is its descriptor. You can find a comprehensive list of its fields here. Place this file in the src/main/resources directory

{
    "main": "com.locademiaz.vertx.FirstVerticle",
    "deploys": "${module.name}"
}

Running your module

Development phase

As you can see in the pom.xml above, the vertx-maven-plugin is configured at the end of the plugins section. Having it configured allows you to run Vert.x with your module in place, directly from your source code. To do so, just run:

$ mvn clean package vertx:runMod

et voilà: you should see the message logged to the console.

Production phase

As presented here, the .zip generated in target/mods by the command

$ mvn clean package

is ready to be deployed. You can drop it into your Nexus repository or your local Maven repository, or you can also use the handy runzip option:

$ vertx runzip target/mods/com.locademiaz.vertx~quick-start~0.1-SNAPSHOT.zip

In upcoming entries you can expect more insights about handling deployment, configuration, Vert.x packaging, testing, continuous integration and any other shenanigans I encounter along this pleasant trip. Stay tuned.

Turn your Java apps Gnome-Shell friendly

August 30, 2011 - 5 Responses

The Problem

When you add a Java application as a favorite in the Gnome Shell‘s lateral dock and run it, you’ll end up with duplicated icons: one for the launcher and one for the running app. This happens because the shell uses an application-based system for grouping tasks; the idea is that if you add an application as a favorite launcher and start it, that launcher icon gets highlighted. Internally, the shell matches the running process with the Exec clause of the .desktop file.
This works well except for applications running inside a VM or being interpreted, because those all share the same running executable. In that situation the shell inspects the WM_CLASS X Window property [1] and matches it against the full name of the desktop file. E.g. if your application has its WM_CLASS set to “mySwingApp”, then for it to be successfully matched with its launcher in the dock, that launcher must be called mySwingApp.desktop and be located according to the XDG spec.

Note: to inspect that value on any window, just run xprop WM_CLASS and click on the target window.

Why is this happening?

Even if you are creating a Swing application from scratch, there is no easy way to tweak that X Window property using plain and portable APIs. Taking a look into the OpenJDK sources, this is how it is managed:

String mainClassName = null;

StackTraceElement trace[] = (new Throwable()).getStackTrace();
int bottom = trace.length - 1;
if (bottom >= 0) {
    mainClassName = trace[bottom].getClassName();
}
if (mainClassName == null || mainClassName.equals("")) {
    mainClassName = "AWT";
}
awtAppClassName = getCorrectXIDString(mainClassName);

As you can see, the value used is the name of the class that started the AWT/Swing main loop.
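
If you control the application’s source there is a commonly mentioned (unsupported, X11-only) workaround: overwrite that field via reflection before any window is created. A minimal sketch, assuming a Sun/OpenJDK X11 toolkit; the agent described below is the cleaner option when you can’t touch the code:

import java.awt.Toolkit;
import java.lang.reflect.Field;

public class WmClassHack {

    // call this as the very first thing in main(), before any AWT/Swing window exists
    public static void fixWmClass(String appName) {
        try {
            Toolkit toolkit = Toolkit.getDefaultToolkit();
            // sun.awt.X11.XToolkit keeps the value in a private static field
            Field awtAppClassName = toolkit.getClass().getDeclaredField("awtAppClassName");
            awtAppClassName.setAccessible(true);
            awtAppClassName.set(toolkit, appName);
        } catch (Exception e) {
            // not on X11, or a different toolkit implementation: keep the default
            e.printStackTrace();
        }
    }
}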

Da solution

Digging around the web I found a Java agent and a pretty similar explanation of what’s going on.
So I forked that agent and improved it a little bit: I moved it into a Maven structure and removed its packaging as a fat-jar (I’m radically against fat-jars, as you can see in my comments here).

The forked project is located on my GitHub here:
https://github.com/diega/window-matching-agent
Please take a look at the README there.

A practical example

My original motivation for doing this was IntelliJ IDEA, so I’ll paste my environment here.

  • Download the agent-1.0.jar and put it wherever you want (I put it into IntelliJ’s bin/ folder)
  • Edit the file bin/idea.vmoptions adding this line

    -javaagent:agent-1.0.jar=intellij-ultimate

  • Create the file ~/.local/share/applications/intellij-ultimate.desktop with the following content

    [Desktop Entry]
    Version=10.5.1
    Name=IntelliJ IDEA Ultimate Edition
    Comment=The Most Intelligent Java IDE
    Categories=Applications;Development;
    Encoding=UTF-8
    Exec=env IDEA_CLASSPATH\=../lib/asm.jar /home/diego/bin/ideaIU-10.5/bin/idea.sh
    GenericName=IntelliJ
    Icon=/home/diego/bin/ideaIU-10.5/bin/idea128.png
    MimeType=text/x-java
    Terminal=false
    Type=Application
    URL=http://www.jetbrains.com/idea
    

Latest notes

If you downloaded agent-1.0.jar to another location (or with another name) you must adjust the -javaagent: parameter accordingly.
And of course, change the path in the Exec entry to point to your own executable.

Hope this helps somebody, it took me a while to figure out the-right-things-to-do™ :)


[1]: Application Based GNOME 3

Drools persistence on top of HashMap

February 7, 2011 - One Response

Introduction

This post shows a reference implementation of Drools persistence on top of a non-transactional HashMap. It should serve as inspiration for more complex scenarios (like the Berkeley DB one). This work was also developed under Intalio’s sponsorship.

As the new abstraction is heavily inspired by JPA, there are some mechanisms which have to be emulated in order to get the same behavior in both implementations.

Involved classes

Starting from the abstraction described in my previous post, this is the new hierarchy for persisting the Drools runtime into a HashMap.

Drools abstract storage persistence diagram

I’ll try to explain the most relevant objects in the diagram relating them to JPA components.

  • MapBasedPersistenceContext: behaves like the EntityManager; it stores all objects which are not yet committed. Finding objects is also resolved through this interface, and to support that it needs access to the KnowledgeSessionStorage.
  • KnowledgeSessionStorage: represents the real persistent storage. This is the extension point for supporting any other non-JPA implementation. It provides saveOrUpdate and find methods and is responsible for assigning ids to the entities.
  • ManualTransactionManager: as we cannot rely on JTA to manage our session, Drools hooks in explicit calls whenever things should be serialized. This component must access the NonTransactionalPersistenceSession to get the entities waiting to be persisted into the KnowledgeSessionStorage.

JBPM5

The same concepts have been applied to the jBPM5 codebase, which has its own extensions for managing process semantics.

You’ll find there:

  • MapBasedProcessPersistenceContext
  • ProcessStorage
  • ManualProcessTransactionManager
  • etc

Usage

You must set up the environment you are going to pass to the JPAKnowledgeService (still keeping that name for backward compatibility), setting the TRANSACTION_MANAGER and PERSISTENCE_CONTEXT_MANAGER keys. For this I have created simple factories, but you can build them any way you want.

I suggest using KnowledgeSessionStorageEnvironmentBuilder or ProcessStorageEnvironmentBuilder. There is no default implementation of this storage (the one on top of HashMap is only used for testing purposes), so you have to pass your own implementation of the desired storage to the builder.
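
Roughly, the wiring ends up looking like this (a sketch: myTransactionManager, myPersistenceContextManager and kbase are placeholders you’d obtain from the builder and your knowledge base; the Environment keys and the JPAKnowledgeService call are the actual Drools API):

Environment env = KnowledgeBaseFactory.newEnvironment();
env.set(EnvironmentName.TRANSACTION_MANAGER, myTransactionManager);
env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, myPersistenceContextManager);

StatefulKnowledgeSession ksession =
        JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);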

Testing further implementations

To test these two implementations together (and any new one which wants to respect the Drools persistence semantics), I have created a simple set of abstract tests(*).
For Drools Expert:
https://github.com/droolsjbpm/droolsjbpm/[...]/MapPersistenceTest.java
For jBPM5:
https://github.com/krisv/jbpm/[...]/MapPersistenceTest.java
In the future it would be great to abstract the current JPA tests as well, so they can run independently of the underlying persistence layer.

What’s next

Clean up these interfaces a bit more and start working on a Berkeley DB implementation.

(*) the test names are not descriptive enough; I’m holding a commit for this until the coming repository split is done

Drools Abstract Persistence Layer

January 28, 2011 - 2 Responses

Introduction

Over the past two months, under Intalio’s sponsorship, I’ve been working on adding a new persistence layer to Drools. The main goal is to support Berkeley DB as a persistent backend, and the added abstractions go in that direction.
The approach for this task was to work on top of the current drools-persistence-jpa module. It is assumed that this module is tested well enough (through JUnit or merely day-to-day usage) and that it is the one defining the semantics which persistent Drools applications should adhere to.

Little background on managing persistence

When you use the engine in the regular way, you obtain the ksession through the kbase, and it doesn’t know anything about how to persist its state. To give the ksession persistence capabilities, Drools makes use of the command pattern: instead of creating it directly from the kbase, you go through a factory which returns a decorator that handles how and where the state is persisted. Being a decorator, it is totally transparent to the user whether or not the state is persisted.
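
In code, the difference is just where the session comes from (shown here for the JPA case; kbase and env are assumed to be already built):

// plain, in-memory session: nothing is persisted
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

// persistent session: the factory returns a command-based decorator that
// takes care of persisting the session state
StatefulKnowledgeSession persistentKsession =
        JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);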

Abstracting persistence

As the name suggests, the drools-persistence-jpa module is heavily oriented toward JPA usage. So what we did here was to clean up the use of the JPA interfaces and move them behind the new abstractions.

The JPA interface used most is the EntityManager; it was abstracted behind the PersistenceContext interface, which now has specific methods for persisting SessionInfo and WorkItemInfo instances.
Internally, Drools uses different scopes for dealing with persistence contexts: one for the whole application and one for each command. This behaviour has also been abstracted into PersistenceContextManager.
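
To give an idea of the shape of these abstractions (illustrative only: the method names and parameters below are approximations, not the exact signatures in the codebase):

// illustrative only
public interface PersistenceContext {
    void persist(SessionInfo sessionInfo);
    SessionInfo findSessionInfo(Integer sessionId);
    void persist(WorkItemInfo workItemInfo);
    WorkItemInfo findWorkItemInfo(Long workItemId);
}

// illustrative only: one context for the whole application, one per command
public interface PersistenceContextManager {
    PersistenceContext getApplicationScopedPersistenceContext();
    PersistenceContext getCommandScopedPersistenceContext();
    void beginCommandScopedEntityManager();
    void endCommandScopedEntityManager();
    void dispose();
}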

Persistence layer before refactor


Persistence layer after refactor

Backward compatibility

Another important aspect of this refactor is that it maintains backward compatibility. That means that, for the moment, you shouldn’t notice any difference if you already have a JPA-based application running. The current way of configuring the ksession is still the same, but we’ll add new ones in the future which, I hope, will end up being more polished and abstract.

JBPM5

This persistence refactor also applies to jBPM5, which now has a ProcessPersistenceContext and a ProcessPersistenceContextManager.

What’s next

In a coming post I’ll show a reference implementation on top of a regular HashMap.

Simple Drools Rule Flow to BPMN2 migration tool

December 28, 2010 - One Response

To ease the transition from Drools Flow files to the new BPMN2 standard, Drools gives you a nice but hidden feature: an XmlBPMNProcessDumper.
This tiny piece takes a RuleFlowProcess instance and spits out nice, well-formed BPMN2 files.
So, to make your life just a little better, I’ve created a project on GitHub exposing this internal piece behind a nice interface. Take a look at the class RuleFlow2BPMN2Migrator in

https://github.com/diega/rf-bpmn2-migrator
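
Under the hood it boils down to little more than a one-liner. A sketch (the dumper lives in the org.jbpm.bpmn2.xml package in current code, but the package may differ depending on your Drools/jBPM version):

import org.jbpm.bpmn2.xml.XmlBPMNProcessDumper;
import org.jbpm.ruleflow.core.RuleFlowProcess;

public class RuleFlowDump {

    // given an already-loaded RuleFlowProcess, produce its BPMN2 XML
    public static String toBpmn2(RuleFlowProcess process) {
        return XmlBPMNProcessDumper.INSTANCE.dump(process);
    }
}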

Enjoy.

Disclaimer: calling a 7-line class a tool isn’t really fair, but you know…

Are you ready for Drools 5.2 + jbpm5?

December 16, 2010 - Leave a Response

Great things are happening in the core of Drools, and they will impact the whole community.
This post is not about the great features we’re going to have soon, but about how to make the transition painless.
From the developer’s point of view, things are getting better and better.

  • Git migration
  • Maven 3 compliance
  • Repository cleanup (no more .classpath and .project files)
  • Guvnor builds becoming less and less painful
  • Huge internal refactors and optimizations

Most of these tasks were led by ge0ffrey, who has made our lives a lot better :)

As a regular user of the framework, when you finally decide to move from 5.1.1 to 5.2 you will find one big difference: Drools Flow is gone… but it has been reborn as jBPM5!
The concepts remain the same, but you’ll need to adjust your dependencies and reorganize your imports. This is the first step in breaking backward compatibility, preparing us for the big crash arriving in 6.0 :)
Basically you’re going to need to add

<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-[flow | persistence-jpa | bpmn2 | ...]</artifactId>
  <version>${jbpm.version}</version>
</dependency>

After doing so, you’ll need to change a lot of your org.drools packages to org.jbpm.
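
For example, a before/after I’d expect for flow-related classes (the exact list of moved packages depends on which modules you use):

// Drools 5.1.1
import org.drools.ruleflow.core.RuleFlowProcess;

// Drools 5.2 + jBPM5
import org.jbpm.ruleflow.core.RuleFlowProcess;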

w00t

New Guvnor Feature – Rule Templates

May 28, 2010 - Leave a Response

Here is a screencast showing Guvnor’s rule template features.

As you can see here, the Template Editor is an extension of the famous Rule Editor which now has a Template Data tab. On this tab you’ll find a grid with the template placeholders (a.k.a. “template keys”) as columns. So in your knowledge base you’ll end up adding the rules generated by the drools-template module, using the data in the grid and the parametrized rule that you created in the editor.

Drools Flow :: Work Items

April 19, 2010 - Leave a Response

Introduction

Continuing the work on improving our knowledge of the Drools platform and giving something back to the community, this time we’ll discuss a language extension called Work Items. The official documentation says: “[Drools Flow] offers constructs that are closely related to the problem the user is trying to solve”. In other words, creating your own Work Items lets you extend the business modeling language in a domain-oriented way.
Creating a language for your process gives you enormous flexibility in what you can express. The funny thing is that with this great power doesn’t come great responsibility, meaning there is no more responsibility than the one needed to write any boring and inflexible process language.

Read the rest of this entry »

Jetty + DataSource + JTA

April 18, 2010 - 2 Responses

Working on my next post about Drools Flow Work Items, I wanted to have a one-click example. The example involves a web application, so I started with Jetty, which fits perfectly for a mvn jetty:run example. In the standalone example we run our own JNDI directory provided by Bitronix; Jetty has an embedded JNDI directory, so we have to register the datasource and the transaction manager there. Doing this as a total Jetty noob can be a little difficult and time consuming. So, hoping nobody else has to spend too much time on this, I’ll try to summarize the steps I followed.
Read the rest of this entry »
