Category: java

Robotic Xylophone – The Java Side

About a year ago a good friend called and asked if I would be interested in a little hobby project with him.  He wanted to create a robotic xylophone using an old bell kit he found in the local classifieds, and he wanted to control it using an Arduino Mega board.  His proposition was that he could do all the hardware work, but he didn’t know how to do the software.  That’s where I came in.

He fabricated a mount for the bell kit.  It would hold the bell kit above solenoids mounted on a board, one solenoid for each note in the bell kit.  He would then put neodymium magnets on top of the solenoids, reversed from the polarity of the solenoid, so that when current was applied to the solenoid the pin would shoot up and strike the bottom of the bell.  I thought it was pretty ingenious.

This was my first (and only) Arduino project, so I had a lot to learn.  We went through several iterations of ideas on how to get the music into the Arduino.  Maybe a web interface that would let the user “write” the music?  Sounded like a pain.

I have a child that I believe is talented musically (yeah, every dad will say that about their child) and I thought it would be fun for her to play a duet with herself.  The robotic bell kit replaying something she had already played, and then her playing the other part on her own bell kit.  We have a digital piano that allows us to record a track as it’s played and write it to USB in midi format.  So I decided that would be the way to get the music into the Arduino.

That meant I had to figure out how to parse the midi file on the Arduino.  It’s been a VERY long time since I have written any C code, and I didn’t find any suitably easy libraries I could use to do it.  Finally, I didn’t want to learn the ins and outs of the midi format, so I looked to see if there was a midi parsing library for Java, which I’m very comfortable in.  Sure enough there was.  Even better, it was super easy to use and didn’t even require any other libraries; it’s part of core Java (maybe not once 9 comes out, huh?)

So I decided I would write a Java program that would translate a midi file into a custom format that I could more easily read on the Arduino.  I would then write that file to an SD card, which I would read from on the Arduino.  It took me longer to come to that design than it did to write all the code.

Speaking of code, here’s the Java side of things. I added a bunch of comments, so I won’t be doing any further explanation of it.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.MidiMessage;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Sequence;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

/**
 * Class that reads a midi file and outputs a custom format that is then read by my arduino code.
 * The custom format consists of an event stream, where each event has 3 data members constituting
 *   a total of 4 bytes per event.
 * An event is [outPin, hi/lo, duration(ms)]
 * outPin : if -1 then the command is a delay, otherwise it contains the pin to change the hi/lo value for
 * hi/lo : if not a delay then set the out pin to hi on 1, lo on 0. Otherwise ignored.
 * duration : if a delay, then this is the duration, otherwise ignored.
 * So an event of [8,1,0] says to turn pin 8 high.
 *    An event of [-1,1,180] says to sleep for 180ms before handling the next event in the stream.
 */
public class MidiToArduino {

    public static void main( String... args ) {
        if ( args.length <= 0 ) {
            System.out.println( "usage: java MidiToArduino <file1> <file2> ..." );
            return;
        }
        for ( String filename : args ) {
            MidiToArduino instance = new MidiToArduino( filename );
            System.out.println( (instance.convert() ? "Converted: " : "Failed: " ) + filename );
        }
    }
    // arduino is expecting short, short, int (1 byte, 1 byte, 2 bytes)
    // pin, value, duration.

    private static Map<Short, Short> pinMap = new HashMap<>();

    private static final short LOWEST = 55;
    private static final short HIGHEST = 84;
    private static final short OCTAVE = 13;

    // we need to map notes from the midi stream to out pins on the arduino mega.
    // We only map notes 55 through 84 because those are all the notes that were available on
    // the bell kit we were using.  We map them to pins 2-13 and 22-39 because those
    // are the pins we are going to be using to output signals to the solenoids.
    static {
        // 53 = F
        // 84 = C
        // middle C = 60
        pinMap.put((short)55, (short)2);
        pinMap.put((short)56, (short)3);
        pinMap.put((short)57, (short)4);
        pinMap.put((short)58, (short)5);
        pinMap.put((short)59, (short)6);
        pinMap.put((short)60, (short)7);
        pinMap.put((short)61, (short)8);
        pinMap.put((short)62, (short)9);
        pinMap.put((short)63, (short)10);
        pinMap.put((short)64, (short)11);
        pinMap.put((short)65, (short)12);
        pinMap.put((short)66, (short)13);
        pinMap.put((short)67, (short)22);
        pinMap.put((short)68, (short)23);
        pinMap.put((short)69, (short)24);
        pinMap.put((short)70, (short)25);
        pinMap.put((short)71, (short)26);
        pinMap.put((short)72, (short)27);
        pinMap.put((short)73, (short)28);
        pinMap.put((short)74, (short)29);
        pinMap.put((short)75, (short)30);
        pinMap.put((short)76, (short)31);
        pinMap.put((short)77, (short)32);
        pinMap.put((short)78, (short)33);
        pinMap.put((short)79, (short)34);
        pinMap.put((short)80, (short)35);
        pinMap.put((short)81, (short)36);
        pinMap.put((short)82, (short)37);
        pinMap.put((short)83, (short)38);
        pinMap.put((short)84, (short)39);
    }

    // These are the commands within the midi stream that we are interested in.
    private static final int NOTE_ON = 0x90;
    private static final int NOTE_OFF = 0x80;
    // This is how long we want to allow the out pin on the arduino to remain high in order to play a note.
    private static final int DELAY_MS = 10;
    // This is the key for a DELAY event in the feed to the arduino
    private static final short DELAY = -1;

    private final String inputFileName;
    private final String outputFileName;

    public MidiToArduino( String filename ) {
        inputFileName = filename;
        outputFileName = getOutputFilename( inputFileName );
    }
    /**
     * This is where the work happens.  Not thread safe.  For each conversion you must create a new instance of MidiToArduino.
     */
    public boolean convert() {
        // when a note is played we will then add an event to this queue so that we stop applying a high
        // signal to that pin after the appropriate amount of time, as specified in DELAY_MS.
        Queue<Event> liftEvents = new LinkedList<>();
        // This list keeps the final event stream that will be sent to the arduino.  It will be a merging of the
        // midi events and our newly created lift events.
        List<Event> allEvents = new ArrayList<>();
        try {
            // most of this is boilerplate to read the midi file.
            Sequence sequence = MidiSystem.getSequence(new File(inputFileName));

            // We need to map from the 'tick' in midi to milliseconds, since that's what our delay is on arduino.
            long microseconds = sequence.getMicrosecondLength();
            long tickLength = sequence.getTickLength();

            // note: this is integer division, so it assumes at least 1ms per tick.
            long msPerTick = (microseconds/(tickLength*1000));

            System.out.println( "msPerTick = " + msPerTick );

            // choose the longest track of any multitrack midi file, assuming it is the one with the music notes.
            Track track = getLongestTrack( sequence );

            int lastTick = 0;
            for ( int i = 0; i < track.size(); i++ ) {
                // iterate through and output each event, but not key offs.
                MidiEvent event = track.get(i);
                MidiMessage message = event.getMessage();
                // all this message stuff is from the midi parsing library.
                if ( message instanceof ShortMessage ) {
                    long tick = event.getTick()*msPerTick;
                    // before we process the next event from the midi stream, we have to see if there are any
                    // notes that we are currently playing that need to be lifted.
                    while ( liftEvents.peek() != null && liftEvents.peek().tick < tick ) {
                        lastTick = addEvent( allEvents, liftEvents.poll(), lastTick );
                    }
                    ShortMessage sm = (ShortMessage)message;
                    int command = sm.getCommand();
                    int velocity = sm.getData2();
                    if ( command == NOTE_ON && velocity > 0 ) {
                        // we only want to handle this event if it's where the player played a note.
                        int key = sm.getData1();
                        Event e = new Event( (int)tick, (short)key, (short)1 );
                        lastTick = addEvent( allEvents, e, lastTick );
                        // make sure we insert a new event to lift this note at the appropriate time.
                        liftEvents.add(new Event((int)tick + DELAY_MS, (short)key, (short)0));
                    }
                }
            }
        } catch (InvalidMidiDataException | IOException e) {
            // bail
            return false;
        }

        writeToFile( allEvents );

        return true;
    }

    private void writeToFile( List<Event> events ) {
        try (FileOutputStream out = new FileOutputStream( outputFileName )) {
            // we write just the bytes because it was easy to read on the arduino
            out.write( getAllBytes( events ) );
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private byte[] getAllBytes(List<Event> allEvents) {
        // we know that each event will take 4 bytes, so we can easily create an array of the appropriate size.
        byte[] allBytes = new byte[allEvents.size()*4];
        int pos = 0;
        for ( Event e : allEvents ) {
            System.out.println( e + "," );
            for ( byte b : e.getBytes()) {
                allBytes[pos++] = b;
            }
        }
        return allBytes;
    }

    /**
     * Because I don't know which track contains the actual music, I just pick the track with the most events and assume.
     */
    private Track getLongestTrack( Sequence sequence ) {
        Track result = null;
        Track[] tracks = sequence.getTracks();
        for ( int i = 0; i < tracks.length; i++ ) {
            if ( result == null ) {
                result = tracks[i];
            }
            else if ( tracks[i].size() > result.size() ) {
                result = tracks[i];
            }
        }
        return result;
    }

    /**
     * Add an event and return the time of the last event.
     * @param events the full event stream being built
     * @param newEvent the event to add
     * @param lastTick the tick of the previously added event
     * @return the tick of the last event added
     */
    private int addEvent( List<Event> events, Event newEvent, int lastTick ) {
        int duration = newEvent.tick - lastTick;
        if ( duration > 0 ) {
            events.add( new Event(duration, DELAY, (short)1 ));
        }
        events.add(new Event( 0, getPin( newEvent.pin ), newEvent.value));
        return lastTick + duration;
    }

    /**
     * Get the output pin that will be used to play a note.  If the note from the midi stream is too high
     * or too low, it will be adjusted by an octave in the right direction until it is within the range
     * that can be played by our bell kit.
     */
    private short getPin( short note ) {
        while ( note < LOWEST ) {
            System.out.println( "raising " + note );
            note += OCTAVE;
        }
        while ( note > HIGHEST ) {
            System.out.println( "lowering " + note );
            note -= OCTAVE;
        }
        return pinMap.get(note);
    }

    private String getOutputFilename( String input ) {
        String base = input;
        int index = input.lastIndexOf(".");
        if ( index > 0 ) {
            base = input.substring(0, index + 1 );
        }
        return base + "jwf";
    }

    /**
     * Simple representation of an event within our event stream.
     */
    private class Event {
        public final short pin;
        public final short value;
        public final int tick;

        public Event( int tick, short pin, short value) {
            this.pin = pin;
            this.value = value;
            this.tick = tick;
        }

        /**
         * Return the bytes as they should be written to the file.
         * [ pin (1 byte), value (1 byte), duration (2 bytes) ]
         */
        public byte[] getBytes() {
            return new byte[] { (byte)pin, (byte)value, (byte)(tick >> 8), (byte)tick };
        }

        /**
         * Output the event for debugging in a format that is easy to copy and paste into an array in the arduino code
         * for testing a static event stream.
         */
        public String toString() {
            return "{ " + (pin != DELAY ? pin : "DELAY") + ", " + value + ", " + tick + "}";
        }
    }
}

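To sanity-check the event format, here’s a standalone round trip of the 4-byte encoding. The `decodeDuration` helper is hypothetical (the real decoding happens in C on the Arduino side), but it mirrors what that side has to do with the two big-endian duration bytes:

```java
// Sanity check of the 4-byte event encoding:
// [pin (1 byte), value (1 byte), duration (2 bytes, big-endian)].
public class EventEncodingDemo {
    static byte[] encode(short pin, short value, int duration) {
        // same packing as Event.getBytes() above
        return new byte[] { (byte) pin, (byte) value,
                (byte) (duration >> 8), (byte) duration };
    }

    static int decodeDuration(byte[] e) {
        // reassemble the two duration bytes, masking to avoid sign extension
        return ((e[2] & 0xFF) << 8) | (e[3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] noteOn = encode((short) 8, (short) 1, 0);    // "turn pin 8 high"
        byte[] delay  = encode((short) -1, (short) 1, 180); // "sleep 180ms"
        System.out.println(noteOn[0] + " / " + decodeDuration(delay)); // prints 8 / 180
    }
}
```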

Iterative Code vs. Reactive Code

At work we have an internal app that downloads a large JSON file and uses Angular to create tables, with filtering and whatnot.  The user interface is SUPER slow.  Basically unusable, and it didn’t give me the data in the format I needed.  So I decided to take the giant JSON dump, parse it with a little Java program, and output the information I need.  The JSON data looks like this:

apps – an ordered list of strings.

gavs – an ordered list of strings.

hosts – an ordered list of strings.

uses – a list of 3-item integer arrays where the first item in the array is the index into the app collection, the second is the index into the gavs collection, and the third is an index into the hosts collection.  Something like [45, 67, 189] would mean that it references the app at the 45th index, the gav at the 67th, and the host at the 189th.
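To illustrate that indexing scheme, here’s a tiny sketch with made-up miniature data (the names and lists are hypothetical, not from the real dump):

```java
import java.util.Arrays;
import java.util.List;

public class UseResolver {
    // resolve one [appIndex, gavIndex, hostIndex] triple to a readable string
    static String resolve(List<String> apps, List<String> gavs, List<String> hosts,
                          List<Integer> use) {
        return apps.get(use.get(0)) + " uses " + gavs.get(use.get(1))
                + " on " + hosts.get(use.get(2));
    }

    public static void main(String[] args) {
        // hypothetical miniature versions of the three ordered lists
        List<String> apps  = Arrays.asList("billing", "search", "auth");
        List<String> gavs  = Arrays.asList("com.acme:core:1.0", "com.acme:web:2.1");
        List<String> hosts = Arrays.asList("host-a", "host-b");
        // [1, 0, 1] means apps[1], gavs[0], hosts[1]
        System.out.println(resolve(apps, gavs, hosts, Arrays.asList(1, 0, 1)));
        // prints: search uses com.acme:core:1.0 on host-b
    }
}
```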

I needed to get a list of all apps that are associated with certain gavs.  So my first pass was an iterative solution. It looked something like this:

public static void main( String... args ) {
    for ( blah : blah ) { // stuff that gets the gavIndex I'm interested in...
        for ( List<Integer> use : uses ) {
            // if the index in the gav collection matches the gav that I'm testing now
            if ( gavIndex == use.get(1)) {
                // get the app string from the apps collection so I can show a user friendly
                // app name instead of just an index.
                String app = apps.get(use.get(0));
                // references is a multimap that is storing my apps per gav.
                if ( references.get(app) == null || !references.get(app).contains(module)) {
                    // so put the reference in the multimap if it's not already there.
                    references.put(app, module );
                }
            }
        }
    }
    // code to output my references.
}

Now the reactive code:

public static void main( String... args ) {
    PublishSubject<List<Integer>> subject = PublishSubject.create();
    for ( blah : blah ) { // stuff that gets the gavIndex I'm interested in...
        subject
            .filter(use -> shouldInsert( apps, use, gavIndex, module, references))
            .subscribe(use -> references.put(apps.get(use.get(0)), module));
    }
    // the subject subscribes to an observable of the uses, and everything kicks off.
    Observable.from(uses).subscribe(subject);
    // code to output my references.
}

// basically the condition that was inline in the iterative solution.
private static boolean shouldInsert( List<String> apps, List<Integer> use, int gavIndex, String module, Multimap<String, String> references ) {
    Collection<String> currentUses = references.get(apps.get(use.get(0)));
    boolean result = (gavIndex == use.get(1)) && (currentUses == null || !currentUses.contains( module ));
    return result;
}

I get the same result from both processes.  I find the reactive version interesting because I just set everything up, and then when the subject subscribes to the observable it all just goes.  I also think the succinctness of the reactive code makes it easier to read.

Are there fewer lines? Not really, and that shouldInsert method sure takes a lot of parameters. Currently it’s not thread safe, but I could get there easily, and then I would expect the reactive version to be faster than the iterative one. Finally, the question is, what does rx buy me over Java 8 streams in this case? Not really anything, but I don’t have a ton of experience with Java 8 streams yet, and sadly I don’t have time to mess with them currently. Anyway, it was interesting to me, and I thought it might be interesting to others.
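For comparison, here’s roughly what a Java 8 streams version might look like, sketched with made-up miniature data and a plain Map<String, Set<String>> standing in for the Guava Multimap:

```java
import java.util.*;

public class StreamVersion {
    // collect distinct app names whose "use" row references the given gav index
    static Map<String, Set<String>> collectReferences(List<String> apps,
            List<List<Integer>> uses, int gavIndex, String module) {
        Map<String, Set<String>> references = new HashMap<>();
        uses.stream()
            .filter(use -> use.get(1) == gavIndex)          // same gav check as before
            .map(use -> apps.get(use.get(0)))               // index -> friendly app name
            .forEach(app -> references
                    .computeIfAbsent(app, k -> new HashSet<>())
                    .add(module));                          // Set gives the "not already there" check for free
        return references;
    }

    public static void main(String[] args) {
        // hypothetical data: two uses reference gav index 7, one references 3
        List<String> apps = Arrays.asList("billing", "search", "auth");
        List<List<Integer>> uses = Arrays.asList(
                Arrays.asList(0, 7, 0), Arrays.asList(2, 7, 1), Arrays.asList(1, 3, 0));
        System.out.println(collectReferences(apps, uses, 7, "core").keySet());
    }
}
```

The shape is nearly identical to the rx pipeline: filter, map, then a terminal side effect instead of a subscription.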

RxJava concurrency demo application


Back when I was in a reading group and had 2 weeks to learn Erlang I wrote a little air traffic control application to highlight the concurrency capabilities of Erlang.  Here is my blog post regarding that.

A few months ago I volunteered to do a presentation on rxJava at my employer’s internal technical conference.  What I didn’t tell them is that I didn’t know anything about it other than that it was a current buzzword.  I thought it would be interesting, and giving a presentation on it would be a great reason to learn it.  Fortunately for me, a brilliant co-worker also proposed to present on it, so we were paired up together.  I’m not going to lie, he did nearly all the PowerPoint work, including the flow of the presentation.  I got to talk about operators (combining, filtering, subscribing) and testing/debugging/error handling.  All in all I thought it went quite well.

Since I had no experience, I decided I needed some application to get up to speed on rxJava.  I wanted to be able to answer questions and have more experience than just having read the documentation before the people attending the session.

So, I decided to write the air traffic control application in java.  Sadly, I don’t know that I could even really read the Erlang anymore, but the ideas are pretty easy to understand.

Enough jabbering about that, let’s get to the code.  I will do some explaining as we go, but you should have some familiarity with reactive concepts.  If you need a primer, spend some time reading up on the basics first.

The Setup

Here’s the premise.  There are airplanes that need to land.  There is a flight tower that directs them where to go.  If two planes enter the same place then they collide.  If an airplane enters the space occupied by the tower then it has landed.  Airspace is represented by a square grid and the tower is in the middle of the grid.  The air traffic controllers are not very smart (but smarter than the Erlang version!)

Here’s how I broke things out.

  • The Radio – Responsible for transmitting messages from the tower to the plane.
  • The Radar – Responsible for broadcasting the position of the planes.
  • The Planes
    • Each plane receives messages from the radar so it can determine if another plane has entered its space (in which case they collide).
    • Each plane also receives messages from the radio.  These tell the plane where to move to next.
    • Each plane sends a blip on the radar to broadcast its current location.
  • The Tower
    • Receives blips on the radar with the location of each plane.
    • Sends messages on the radio that tell the plane where to go next.
  • Radar Screen
    • Receives blips from the radar
    • Displays a graph showing where each plane is on the grid.

The Code

The code can be found in its fullness on GitHub.  The following does contain most of it, but if you want to run things, you’ll want to fetch it from GitHub.


PublishSubject<RadioMessage> radio = PublishSubject.create();

Ok, this one is simple.  The radio is a PublishSubject for RadioMessages.  A RadioMessage simply contains the target flight number and the location that the target flight should fly to next.  Because it’s a PublishSubject it can subscribe to Observables that emit RadioMessages, and will also pass those through to any subscribers that are observing the radio.
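If the subject idea is new to you, here’s a toy stand-in written in core Java (this Relay class is my own illustration, not the rx implementation): it can be handed events like an Observer, and it relays each one to everything subscribed to it, like an Observable.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// a toy stand-in for PublishSubject: it consumes events on one side
// and relays each one to all of its subscribers on the other.
class Relay<T> implements Consumer<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    @Override
    public void accept(T event) {
        // pass the event through to every subscriber
        subscribers.forEach(s -> s.accept(event));
    }
}

public class RelayDemo {
    public static void main(String[] args) {
        Relay<String> radio = new Relay<>();
        List<String> received = new ArrayList<>();
        radio.subscribe(received::add);        // a plane listening to the radio
        radio.accept("flight 7: go to (3,4)"); // the tower transmitting
        System.out.println(received);
    }
}
```

The real PublishSubject adds scheduling, error/completion signals, and unsubscription on top of this pass-through behavior.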


PublishSubject<Blip> radar = PublishSubject.create();

Another nice, simple one.  The radar is a PublishSubject for Blips.  A Blip contains the id of the blip source (in this case a flight number), and the location of the blip source.  Finally, it contains a blip type, like MOVE, LAND, or CRASH.  It being a PublishSubject here has the same benefits as the Radio.


Now we start getting into some of the guts and putting rxJava to use.  First, here is the code that creates the Tower.

// create the tower, which is where the planes try to get to land.
Tower tower = new Tower(TOWER_LOCATION, radar);
// and allow the tower to emit on the radio
Observable.create( tower ).subscribe(radio);

So first we create the tower with a nice, simple constructor.  I went back and forth a few times on how to handle the tower and the radar.  The subscription isn’t a simple one like the radio subscription above, so I decided to encapsulate it within the Tower class.

Here is the bulk of the interesting code within the Tower class:

public class Tower implements Observable.OnSubscribe<RadioMessage> {

  private Pair location;

  Subscriber<? super RadioMessage> radio;
  Observable<Blip> radar;

  public Tower( Pair location, Observable<Blip> radar ) {
    this.location = location;
    this.radar = radar;
  }

  /**
   * Implementation of the OnSubscribe interface
   */
  public void call(Subscriber<? super RadioMessage> t) {
    radio = t;
    connectRadar();
  }

  private void connectRadar() {
    // on a MOVE blip from a plane we will send them information on where to go next.
    radar.filter(b -> b.type == MOVE && !b.location.equals(location))
        .observeOn(Schedulers.computation())
        .subscribe(b -> {
          if ( !radio.isUnsubscribed()) {
            radio.onNext(new RadioMessage(b.id, getNewCoordinates(b.location)));
          }
        });
  }
}

The interesting part here is in the connectRadar method.  The rxJava APIs are very fluent, but here’s a plain English explanation.

  1. First we only want to see the blips on the radar that are of the MOVE type and that are not within the tower’s location.
  2. We don’t want to handle all these blips on the main thread, so we observe on the computation scheduler.
  3. In onNext we use the current coordinates of the blip and then emit on the radio a message that tells the plane where to go next.

That’s really it for the tower.


Here is the code that creates the planes:

    long totalSleep = 0;
    // create all the planes.
    for ( int i = 0; i < PLANES; i++ ) {
      Plane plane = new Plane(i, getNewSpeed(), getStartingPair(GRID_SIZE), TOWER_LOCATION, radar);
      // subscribe the plane to the radio
      // if we subscribe on a different thread, then we may not get our first message
      radio.filter(msg -> msg.flightNumber == plane.flightNumber )
          .subscribe( plane );
      // tell the radar to listen to blips from the plane.
      Observable.create( plane ).subscribe( radar );
      // rather than start them all at once, they will enter the grid when this Observable calls onNext.
      Observable.timer(totalSleep, TimeUnit.MILLISECONDS)
          .subscribe( n -> plane.takeoff());
      totalSleep += getNextSleep();
    }
I ran into one gotcha on this one.  As you can see by the comment, if I subscribe on a different scheduler then I would occasionally see the case where the plane would send a blip, the tower would receive the blip and send a radio message, all before the plane subscribed to the radio.  Ah the joys of concurrency.  While rxJava makes this a lot easier, there are still all the same concerns when it comes to threaded code.

Another tidbit you’ll see here.  I wanted to create all the planes at once and then have them come onto the grid using a staggered schedule.  To accomplish this I used an Observable.timer that when it fires would tell the plane to take off.  I accumulate the timeout so that each plane takes off some random time after the previous one.
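The same staggered-start idea can be sketched in core Java with a ScheduledExecutorService. In this sketch, getNextSleep() is replaced by a fixed stagger, and the "planes" just record their takeoff order:

```java
import java.util.*;
import java.util.concurrent.*;

public class StaggeredStart {
    // schedule each "takeoff" some fixed time after the previous one and record the order
    static List<Integer> launch(int planes, long staggerMs) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        List<Integer> order = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(planes);
        long totalSleep = 0;
        for (int i = 0; i < planes; i++) {
            final int flight = i;
            scheduler.schedule(() -> {
                order.add(flight); // the plane "enters the grid"
                done.countDown();
            }, totalSleep, TimeUnit.MILLISECONDS);
            totalSleep += staggerMs; // accumulate, just like totalSleep in the post
        }
        try {
            done.await(); // wait until every plane has taken off
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdown();
        return order;
    }

    public static void main(String[] args) {
        System.out.println(launch(3, 20));
    }
}
```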

Here are the details of the Plane class:

public class Plane implements Observer<RadioMessage>, Observable.OnSubscribe<Blip> {

  public final int flightNumber;
  private Pair location;
  private Pair towerLocation;
  private int speed;
  private AtomicBoolean flying = new AtomicBoolean(false);

  Observable<Blip> radar;
  Subscription radarSubscription;
  Subscriber<? super Blip> blipSubscriber;

  public Plane( int flightNumber, int speed, Pair startingLocation,
        Pair towerLocation, Observable<Blip> radar ) {
    this.flightNumber = flightNumber;
    this.speed = speed;
    this.radar = radar;
    this.location = startingLocation;
    // when we get to the tower location we have landed.
    this.towerLocation = towerLocation;
  }

  /**
   * Implementation of onNext for the Observer interface.  The way we subscribe
   * means that we will only get RadioMessages that are directed at our flightNumber.
   */
  public void onNext(RadioMessage m) {
    Observable.timer( speed, TimeUnit.MILLISECONDS)
    .subscribe( n -> {
        // when the timer goes off, it calls this onNext message
        if (flying.get()) {
          // if we haven't crashed while traveling to our new location then set our current to the new.
          this.location = m.location;
          if ( this.location.equals(towerLocation) ) {
            land();
          }
          else {
            // if we haven't landed, then send a blip to tell the tower our new location.
            move();
          }
        }
    });
  }

  /**
   * Implementation of the OnSubscribe interface
   */
  public void call( Subscriber<? super Blip> t ) {
    blipSubscriber = t;
    connectRadar();
  }

  private void sendBlip( Blip blip ) {
    if ( !blipSubscriber.isUnsubscribed()) {
      blipSubscriber.onNext( blip );
    }
  }

  public void takeoff() {
    flying.set(true);
    sendBlip( new Blip( flightNumber, location, MOVE ));
  }

  private void land() {
    flying.set(false);
    sendBlip(new Blip(flightNumber, location, LAND));
  }

  private void move() {
    sendBlip(new Blip(flightNumber, location, MOVE));
  }

  /**
   * Subscribe to the radar
   */
  private void connectRadar() {
    // get blips that are in our airspace and are not us.
    // if we get a blip it must be another airplane that will cause us to crash.
    radarSubscription = radar.filter(b -> b.id != this.flightNumber)
      .filter(b -> b.location.equals(this.location))
      .filter(b -> b.type == MOVE || b.type == CRASH )
      .subscribe(blip -> {
          // any blip on the radar in our space will cause us to crash if we are still flying.
          if ( flying.get()) {
            sendBlip( getRadarResponse( blip ));
          }
      });
  }

  /**
   * We have received a blip on the radar.  That means someone else has entered our space.
   * Because of that we must crash.
   */
  private Blip getRadarResponse( Blip blip ) {
    // whether landing or crashing we are done with this flight.
    flying.set(false);
    // we will unsubscribe from the radar because once crashed we don't need any more blips.
    radarSubscription.unsubscribe();
    // send a crash blip on the radar.  This will notify the plane that hit us that
    // we were already in this space and cause them to crash also.
    return new Blip( flightNumber, location, Blip.BlipType.CRASH);
  }
}

The plane class has comments that hopefully explain the bulk of the code, but here are a few notes on it anyway.

First, for brevity I left off the onCompleted and onError methods that would normally be required of an implementation of the Observer interface.

Second, I would have liked this class to implement Observer<RadioMessage> AND Observer<Blip>, but because of type erasure that’s illegal in Java.  This is why I had to pass the radar in to the constructor, so the plane could subscribe to it.  I could have at that point also just had the radar subject subscribe back to the plane, but that made it more tightly coupled.  This way, having the Blip Observable and the Blip Observer separate, the plane doesn’t need to know that it’s implemented as a subject.

One way I may have been able to get around not being able to implement Observer twice would be to go to composition.  The Plane could have had an accessor method that would return the Observer<RadioMessage> and another that would return the Observer<Blip>.  Then the Plane itself would only be implementing the OnSubscribe interface.
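Here’s a minimal sketch of that composition idea, using java.util.function.Consumer in place of the rx Observer interface just to show the erasure workaround (declaring `implements Consumer<String>, Consumer<Integer>` on one class would not compile):

```java
import java.util.function.Consumer;

// Erasure forbids implementing the same generic interface twice,
// so expose the two listener roles through accessor methods instead.
public class ComposedPlane {
    String lastRadio; // last radio message received, for demonstration
    int lastBlip;     // last radar blip received, for demonstration

    public Consumer<String> asRadioListener() {
        return msg -> lastRadio = msg;
    }

    public Consumer<Integer> asRadarListener() {
        return blip -> lastBlip = blip;
    }

    public static void main(String[] args) {
        ComposedPlane p = new ComposedPlane();
        p.asRadioListener().accept("go to (3,4)"); // subscribe this to the radio
        p.asRadarListener().accept(42);            // subscribe this to the radar
        System.out.println(p.lastRadio + " / " + p.lastBlip); // prints go to (3,4) / 42
    }
}
```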

I should probably also point out that the Tower and the Planes each expect to have only one subscriber.  Each should probably keep a thread-safe collection of subscribers so that more than one Observer can subscribe.  The PublishSubject masks this limitation in my case, but I’m not sure my way is the best way here.

Ancillary Code

Since my application creates all the planes at once on different schedulers, if I didn’t have a mechanism to make the main thread wait for all the planes to complete, the application would start and finish nearly instantly.  To get around this I use a CountDownLatch initialized with the number of planes we create.  Then we simply subscribe to the radar, look for crash and land events, and count down the latch on those.  Finally, we just wait for the latch to complete and then the program can terminate.  Here’s the code:

// Create a latch so we don't end the program prematurely.
CountDownLatch latch = new CountDownLatch(PLANES);

// subscribe to the radar so we count down whenever a plane lands or crashes.
radar.filter(b -> b.id != -1 && (b.type == LAND || b.type == CRASH))
    .subscribe(b -> latch.countDown());

// plane creation code here

// complete when all planes have landed or crashed.
latch.await();

Next, we have the radar screen.  This is purely for output.  The Erlang version didn’t have this nicety, but it sure was great for debugging… and entertainment.

public class RadarScreen implements Observer<Blip> {

  private BiMap<Integer, Pair> flightMap = Maps.synchronizedBiMap(HashBiMap.create());
  private Map<Pair, Integer> crashes = Maps.newConcurrentMap();

  private int gridSize;
  private Pair location;

  public RadarScreen( int gridSize, Pair location ) {
    this.gridSize = gridSize;
    this.location = location; // location is the location of the tower.
  }

  public void onNext(Blip t) {
    if ( t.type == LAND ) {
      // if they are landing just remove by id.
      flightMap.remove(t.id);
    }
    else {
      // remove by location
      flightMap.inverse().remove(t.location);
      if ( t.type == MOVE ) {
        // if they are moving then put them back on the map
        flightMap.put(t.id, t.location);
      }
      else if ( t.type == CRASH ) {
        // if they are crashing do not put them back on the map, but add it to the crashes collection.
        crashes.put(t.location, 10);
      }
    }
  }
}

It is a very simple implementation that simply rewrites the entire grid to System.out when a radar event is received.  On my Mac I can watch it in the STS console and it looks fairly animated.  And since anything can subscribe to the radar, you could just as easily put together a Swing UI to show the planes.

I left out the implementation that prints the graph (since it’s just creating a StringBuffer and then spitting it to System.out) and I also left out the onCompleted and onError methods.  I also left out some extra code that is used to print asterisks in crash locations.
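For completeness, here’s a hedged sketch of what that grid-printing code might look like. The coordinate representation and symbols here are my own assumptions (a `[x, y]` list instead of the Pair class, `'T'` for the tower, a flight digit for planes), not the actual code from the repo:

```java
import java.util.*;

public class GridPrinter {
    // render the grid: flight number digit for planes, 'T' for the tower, '.' for empty.
    static String render(int gridSize, int[] tower, Map<List<Integer>, Integer> flights) {
        StringBuilder sb = new StringBuilder();
        for (int y = 0; y < gridSize; y++) {
            for (int x = 0; x < gridSize; x++) {
                Integer flight = flights.get(Arrays.asList(x, y));
                if (flight != null) {
                    sb.append(flight % 10);            // a plane occupies this cell
                } else if (x == tower[0] && y == tower[1]) {
                    sb.append('T');                    // the tower in the middle
                } else {
                    sb.append('.');                    // empty airspace
                }
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<List<Integer>, Integer> flights = new HashMap<>();
        flights.put(Arrays.asList(0, 0), 7); // flight 7 at the top-left corner
        System.out.print(render(3, new int[]{1, 1}, flights));
        // prints:
        // 7..
        // .T.
        // ...
    }
}
```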

Lessons Learned

This was quite a fun experiment, and was a great way to learn about reactive programming in general, and rxJava specifically.  My part of the presentation had to do with the operators, debugging, testing, and error handling.  I didn’t build much error handling into this example, but I did get to play with the creation, filtering, and mapping operators, and spent plenty of time testing and debugging.

My first implementation was kind of similar to this implementation, but had a lot more garbage that I just didn’t need.  I had extra layers of Observables and Observers and it was just a mess.  My second implementation was almost purely functional.  It was all contained in one class with a lot of chained calls to map, flatMap, filter, etc.  I kind of liked it, but these both had a fatal flaw.  Both implementations depended on some global state.  This is something I didn’t have in the Erlang implementation because the state was always passed around from function to function.

My final implementation is the one you see here.  It’s not perfect, but I managed to get the global state hidden in the RadarScreen class.  It feels more properly reactive than the first sample, but still has some of the strengths of object oriented programming using the encapsulation of the plane, tower, and radar screen objects.  I’ve learned plenty of times that whatever you first implement with a new technology is just not going to be right.  I’m sure there are holes in this project, things that could have been done differently, and perhaps things that a reactive pro will simply look at and say, “huh?!” but in any case, it gave me a good start.  Here are a few things I learned:

  1. Reactive code is still concurrent code.  Treat it as such.
  2. System.out and log.debug are your friends.
  3. Being able to subscribe to an Observable simply for debug purposes is really awesome.  I did this a couple times with the subjects I created just so I could validate the messages going through.  For instance when the message would come through the radio but would not be received by the plane because the plane hadn’t subscribed yet.
  4. If you can’t subscribe with a purely debug Observer then doOnNext is also really awesome.  See #3.
  5. Reactive programming is mind bending for someone with 15 years of experience in core Java.  You have to start thinking a little bit more like JavaScript and less like Java.
  6. Java 8 lambdas are really awesome.
  7. There are a lot of ways to do the same thing with rxJava.
  8. I want to do more reactive programming.
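Lesson #4's doOnNext trick doesn't need rxJava to demonstrate. Here is a library-free sketch of the same idea: wrap a pipeline step so every element is handed to a side-effecting debug consumer as it flows past, without changing the element itself. The names here are my own, not rxJava API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class DoOnNextDemo {

    // Returns a pass-through mapper that performs the side effect first,
    // then hands the element along unchanged -- the essence of doOnNext.
    static <T> Function<T, T> doOnNext(Consumer<T> sideEffect) {
        return item -> {
            sideEffect.accept(item);
            return item;
        };
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        List.of("plane-1", "plane-2").stream()
            .map(doOnNext(p -> seen.add("DEBUG saw " + p)))
            .forEach(p -> { });
        System.out.println(seen);
    }
}
```

Dropping one of these into the middle of a chain lets you validate the messages going through, exactly as described in lesson #3, and then delete it when you're done.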

Adding Security to a Partially Exposed Web Service

In my previous post I talked about adding some conditional security to a web service by only exposing certain methods and model representations using the new Conditional annotation and a HandlerInterceptor in a Spring 4 based Spring Boot app. Tonight I decided to add some real Spring Security magic to it.

First, add the Spring Security dependency. I took this right from the Spring Security guide on the website.

  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
  </dependency>
Then I added the following new Configuration class to my existing Application.

  @Configuration
  static class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
      auth.inMemoryAuthentication().withUser("user").password("password").roles("USER");
    }
  }

Next, I modified my HandlerInterceptor and it ended up as follows:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {

  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal();
    if ( handler instanceof HandlerMethod ) {
      HandlerMethod method = (HandlerMethod)handler;
      if ( method.getMethodAnnotation(Public.class) != null
          && hasAnyRole( (User)principal, method.getMethodAnnotation(Public.class).forRoles())) {
        return true;
      }
    }
    return false;
  }

  private boolean hasAnyRole( User principal, String[] rolesStrings ) {
    if ( rolesStrings == null || rolesStrings.length == 0 ) {
      return true;
    }
    Set<String> roles = Sets.newHashSet(rolesStrings);
    for ( GrantedAuthority auth : principal.getAuthorities() ) {
      if ( roles.contains(auth.getAuthority())) {
        return true;
      }
    }
    return false;
  }
}

In the previous iteration I was simply looking for the Public annotation. Now I am looking for a parameter on that annotation. The parameter defaults to empty, which behaves just like the previous iteration of the project, but now you can specify that it should only be public for certain roles. Obviously, this necessitated a change to the Public annotation, as follows:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Public {
  String[] forRoles() default {};
}

Finally, I modified the usage of the annotation to test out the new functionality:

  @Public(forRoles = "ROLE_ADMIN")
  @RequestMapping(method=RequestMethod.GET, produces=MediaType.APPLICATION_JSON)
  public @ResponseBody Iterable getCollection() {
    // lookup a collection of MyObjects and return them
  }
You should also be able to pass an array of role names to the forRoles parameter of the annotation.
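The forRoles parameter is just an annotation attribute read reflectively at request time. Here is a self-contained sketch of that mechanism using stand-in types (my own names, not the real project's), the way the HandlerInterceptor reads it for each handler method:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ForRolesDemo {

    // Stand-in for the @Public annotation; RUNTIME retention is what makes
    // it visible to reflection in an interceptor.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Public {
        String[] forRoles() default {};
    }

    @Public(forRoles = {"ROLE_USER", "ROLE_ADMIN"})
    public void restricted() { }

    @Public
    public void open() { }

    // Returns the declared roles, or null when the method isn't annotated.
    static String[] rolesFor(Method m) {
        Public p = m.getAnnotation(Public.class);
        return p == null ? null : p.forRoles();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(rolesFor(ForRolesDemo.class.getMethod("restricted")).length);
        System.out.println(rolesFor(ForRolesDemo.class.getMethod("open")).length);
    }
}
```

An empty array from the default means "public to everyone", which is why the interceptor treats a missing or empty forRoles as unconditionally allowed.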

The Spring Security filters are executed before my HandlerInterceptor, so the user has already gone through all the authentication checks by that point. Then, in my HandlerInterceptor, rather than letting the user know there is an endpoint at the given location that they just have to try harder to hack, they get a 404 if they are not allowed to access it.

I suspect you could probably get the same by using a custom handler for an AuthorizationException (or whatever spring-security throws) and using a spring security method annotation to check the role and return a 404 from the handler instead of the standard response but I wanted to build on the previous example and keep the conditional behavior.

Exposing a read-only view of a Spring-MVC web service

Alright, so this is actually more flexible than just a read-only view, but that was the case that prompted me to play around with things so that’s where I’m starting. I was partially inspired by a co-worker’s blog entry regarding creating resource filters with Jersey and JAX-RS 2.0.

So down to the scenario. I have a simple CRUD webservice that I’ve implemented in Spring-MVC. For my demonstration I used Spring Boot, but you can do it any way you want. One key is that this solution depends on a new feature found in Spring Framework version 4.0.

In my webservice I have a @Controller that has @RequestMappings for GET, PUT, POST, and DELETE, following the normal REST semantics for each method. Now, I have this webservice securely deployed in my production environment and all of my internal services can hit it and everything is awesome.

Now let’s pretend I want to expose some of the resources on the big, bad internet. I want to expose all the GET resources so my front end developers can read the information and put it in a web page, or so my mobile apps can get at it, but I don’t really want to expose the ability for them to create, update, or delete information. Now I’ve got a couple of options.

Option 1

I create a new webservice.  It shares the dependencies of the original so it has access to all the same services, but the controller doesn’t contain any RequestMappings other than the GET resources I want to expose.  This is very secure because I have total control over what is available.  If the original service was designed appropriately so the Controllers don’t contain any business logic, then you can easily reuse all of the logic in the previous webservice.  If not, then it’s a good opportunity to get that done I guess.  On the downside, you now have two artifacts to maintain and deploy.

Option 2

I create a webservice that will proxy requests from the big, bad internet and send them to my internal webservice.  The proxy returns a 404 for any resource/method that should not be exposed, and forwards other requests on to the internal webservice.  Again, my service is secure and I can manage which of the resources are exposed.  Also, again, I have two deployables, and this time they aren’t nearly as related as they were before.  The proxy can be very thin, possibly something as simple as nginx or apache with appropriate rules.

Option 3

This is the option I will explore.  With this option, I modify my webservice so that it can be deployed internally AND externally and lock down the resources that shouldn’t be exposed to the public without having to create a separate deployable artifact.  We will simply annotate those request handlers that should be exposed to the public, basically forming a white-list, and all those that are not explicitly exposed will be restricted from view when certain conditions are met.

In addition, this solution will automatically apply a Jackson JsonView to restrict which properties of the data are exposed, not just which request mappings are exposed.  This will allow us to give a restricted view of the response for the general public on the big bad internet, and the full data for those hitting our internal deployment of the webservice.  We would still be deploying to two environments, one for the public and one for internal, but it would be the same artifact in both places.

First, we are going to use the new @Conditional annotation that was introduced with Spring 4.0.  It allows you to conditionally create a Spring bean.  We will use conditionally defined beans to modify the behavior of the application at runtime.

To The Code

First, the Condition that allows us to change the behavior of the application without having to change any code. My condition is based on the IP address assigned to the server. You could modify the condition to whatever fits your needs. Maybe it checks an environment variable or something. It’s important to note that this condition is evaluated when the bean is created, so if it’s a singleton bean it’ll only be evaluated once. If you are looking to have the condition depend on something from the client then it would probably have to be a request scoped bean, but I haven’t checked to see if that actually works or not. It seems like it should.

/**
 * Condition to check if we are in production or not.
 */
public class ProductionCondition implements Condition {

  @Override
  public boolean matches(ConditionContext context, AnnotatedTypeMetadata meta) {
    Enumeration<NetworkInterface> ifaces;
    try {
      ifaces = NetworkInterface.getNetworkInterfaces();
      while ( ifaces.hasMoreElements()) {
        NetworkInterface iface = ifaces.nextElement();
        Enumeration<InetAddress> addresses = iface.getInetAddresses();
        while ( addresses.hasMoreElements()) {
          InetAddress address = addresses.nextElement();
          // Set whatever your public, production IP Address space is here!
          if ( address.getHostAddress().startsWith("192.168" )) {
            // If we match, then return true so the bean annotated with this conditional will be created.
            return true;
          }
        }
      }
    }
    catch (SocketException e) {
      // fall through and return false
    }
    return false;
  }
}
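As mentioned above, the condition could just as easily check an environment variable instead of the server's IP address. Here is a Spring-free sketch of that alternative; the DEPLOY_ENV variable name and "production" value are assumptions for illustration, and the decision logic is isolated into a pure function so it can be tested without a container:

```java
import java.util.Map;

public class EnvConditionDemo {

    // Pure decision function mirroring what Condition.matches would return;
    // pass System.getenv() to it from the real matches() implementation.
    static boolean isProduction(Map<String, String> env) {
        return "production".equalsIgnoreCase(env.getOrDefault("DEPLOY_ENV", ""));
    }

    public static void main(String[] args) {
        System.out.println(isProduction(Map.of("DEPLOY_ENV", "production")));
        System.out.println(isProduction(Map.of()));
    }
}
```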

Now we can use the above Condition to conditionally create Spring beans.

Here’s my Spring Boot application.  It also defines other beans for my spring-data-jpa repositories, but those aren’t relevant to what we are doing so I’ve left them out.

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class Application {

  public static void main (String[] args ) {
    SpringApplication.run( Application.class, args );
  }

  @Conditional(ProductionCondition.class)
  @Configuration
  static class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
      MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
      ObjectMapper mapper = new ObjectMapper() {
        private static final long serialVersionUID = 1L;
        @Override
        protected DefaultSerializerProvider _serializerProvider(SerializationConfig config) {
          return super._serializerProvider(config.withView(Views.Public.class));
        }
      };
      mapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
      converter.setObjectMapper(mapper);
      converters.add(converter);
    }
  }

  /**
   * Only create this bean if we are in "production" mode.
   */
  @Conditional(ProductionCondition.class)
  @Bean
  public MappedInterceptor publicHandlerInterceptor() {
    return new MappedInterceptor(null, new PublicHandlerInterceptor());
  }

  // Other beans here for JPA configuration
}

Notice that in the application I have two @Conditional beans. One is a new HandlerInterceptor that I’ll show in a second. The other is a full @Configuration. Because the publicHandlerInterceptor @Bean definition returns a MappedInterceptor it will automatically be configured within the Spring MVC application. If it returned a HandlerInterceptor then more work would have to be done to register it with the Spring MVC application.

Secondly, notice that the conditional Configuration class extends WebMvcConfigurerAdapter, allowing me to easily configure Spring MVC-type functionality. Sadly, configuring a custom Jackson ObjectMapper in Spring is much more painful (IMO) than it ought to be, so I’m going to get off on a bit of a tangent. Skip to the next section if you are confident in your ObjectMapper abilities.

ObjectMapper Tangent

It would be fantastic if I could configure the ObjectMapper used for a @ResponseBody by simply defining a @Bean named objectMapper and be good to go. Sadly, that’s not the case. I had to add the MessageConverter in the configuration, and set the ObjectMapper for that MessageConverter. Now, here’s the rub. I kept trying to make my configuration changes to the ObjectMapper by calling getSerializationConfig().blah(). Jackson’s SerializationConfig is immutable. Calling getSerializationConfig() and then any of the handy .with(MapperFeature) methods just doesn’t work, because each call simply returns a new instance of SerializationConfig and doesn’t modify the one that is in the ObjectMapper. You can see my learning process for this at StackOverflow

Back to the Show

So, the reason I needed to modify the ObjectMapper configuration was so that I could make it always use a given Jackson JsonView for every @ResponseBody encountered. The custom implementation of the ObjectMapper I pasted was the first way I found to configure it to always use the JsonView I specified, otherwise I had to call writeWithView on the writer, and I wasn’t sure where to do that. This configuration gives us the white-list of data properties that should be serialized in each response.

To use it, simply annotate the object returned as your @ResponseBody with the @JsonView annotation from Jackson, something like:

  @JsonView(Views.Public.class)
  public String getName() {
    return name;
  }

Securing the RequestMappings

The Application configuration has a conditional bean for a HandlerInterceptor, which looks like this:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {

  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    HandlerMethod method = (HandlerMethod)handler;
    if ( method.getMethodAnnotation(Public.class) != null ) {
      return true;
    }
    return false;
  }
}

This HandlerInterceptor will be evaluated for every RequestMapping. Here, we look at the actual method that is being called to handle the request. If it is annotated with our custom @Public annotation, then we allow the request to proceed by returning true from the HandlerInterceptor. If it isn’t, then we return false and send a 404 to the client.

Finally, here’s the Public annotation definition

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Public {}

And its usage:

  @Public
  @RequestMapping(method=RequestMethod.GET, produces=MediaType.APPLICATION_JSON)
  public @ResponseBody Iterable getCollection(
    @RequestParam(value="ids", required=false) List ids,
    @RequestParam(value="limit", required=false, defaultValue="100") int limit ) {
      // lookup a collection of MyObjects and return them
  }

  @RequestMapping( value="/{id}", method=RequestMethod.PUT, consumes=MediaType.APPLICATION_JSON, produces=MediaType.APPLICATION_JSON)
  public @ResponseBody MyObject putValue(@PathVariable Long id, @RequestBody MyObject d ) {
    // do some things to update an object and return the representation of the updated object
  }

With this in place, I’m able to deploy my webservice (with spring-boot it’s just a jar that contains embedded tomcat!) and run it without any further alterations. The getCollection method would be available in both deployment locations. The putValue handler would only be available in those deployment locations that do NOT match the condition I have specified, so only those that are visible internally. The representation of MyObject is appropriate for the deployment location without any further changes to the webservice either. I merely select the properties of MyObject that I want exposed publicly and annotate them with the appropriate JsonView.

A white-list approach ensures that nothing slips through the cracks to the big, bad internet just because a developer forgot to restrict it. Instead, they must evaluate each request handler and data property and explicitly expose it in the public view.
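The white-list check itself is plain reflection. Here is a stdlib-only sketch with stand-in types (not the Spring classes): a handler method is allowed only if it carries the @Public marker, so anything a developer forgets to annotate stays hidden by default.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class WhitelistDemo {

    // Stand-in for the @Public marker annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Public { }

    @Public
    public void getCollection() { }

    public void putValue() { }

    // The interceptor's decision, distilled: annotated means exposed.
    static boolean allowed(Method handlerMethod) {
        return handlerMethod.getAnnotation(Public.class) != null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(allowed(WhitelistDemo.class.getMethod("getCollection")));
        System.out.println(allowed(WhitelistDemo.class.getMethod("putValue")));
    }
}
```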

I could have had my proof of concept developed and tested in under 2 hours had I not run into my difficulties configuring the ObjectMapper. That’s a lesson I won’t soon forget though. I tested all this by making the condition match my IP address when I was connected to my work VPN. When I started the application up and I was connected it would restrict the request handlers and the serialized properties. If I was not connected I could execute any method and would see all of the data properties.

It’s probably not a perfect solution. Does such a thing exist? The one question I’ve thought of is: what happens if my code is already using JsonViews? I’m not sure how they would play together. Nevertheless, it is an interesting exploration of the capabilities of the @Conditional annotation and HandlerInterceptors.

Eclipselink static weaving

I’m playing with a new project and decided to get eclipselink static weaving working in this one. I started on the official eclipselink project documentation on the subject. That’s nice and all, but it doesn’t say anything about getting the weaving to work with maven or eclipse. I really wanted both. Here’s what I did.

First, in persistence.xml you should add the following property:

      <property name="eclipselink.weaving" value="static" />

My maven project has several modules. domain-api contains the entity definitions. domain-impl contains the code for interacting with the database. That means that my persistence.xml is contained in domain-impl and the @Entity classes are in domain-api. That’s alright. To get this all working I decided to use the command line option rather than use the ant task.

The weaving needs to take place on the entities, so the weaving step is placed in the domain-api pom.xml. Within build/plugins I added this plugin.

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.2.1</version>
        <executions>
          <execution>
            <id>weave-classes</id>
            <phase>process-classes</phase>
            <goals>
              <goal>java</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <mainClass>org.eclipse.persistence.tools.weaving.jpa.StaticWeave</mainClass>
          <commandlineArgs>-classpath %classpath -loglevel FINE -persistenceinfo ${basedir}/../domain-impl/src/main/resources ${basedir}/target/classes ${basedir}/target/classes</commandlineArgs>
        </configuration>
      </plugin>

That’ll run the java program to weave the class files in place. Note that my -persistenceinfo argument points to ../domain-impl/src/main/resources. That’s because the StaticWeave class will look for META-INF/persistence.xml within that directory, and my persistence.xml is contained in the domain-impl module, not within domain-api. If you are using maven resource filtering on your persistence.xml this will cause a problem since domain-impl builds after domain-api. That’s not a problem in my case so I’m going to be lazy and not address it.

Once you have that in your pom you should be able to do a mvn clean install and see output something like:

[INFO] --- exec-maven-plugin:1.2.1:java (weave-classes) @ domain-api ---
[EL Config]: metadata: 2012-11-23 23:15:42.798--ServerSession(1166215941)--Thread(Thread[,5,])--
The access type for the persistent class [class com.xyg.model.JoelTest] is set to [FIELD].

I added line breaks for formatting. Anyway, that kind of output means things are working.

Sadly, Eclipse’s maven integration isn’t smart enough to figure this part of the pom out. If we tell Eclipse to ignore this part of the build then the classes won’t get weaved and it’s likely that if you try to run the project from within eclipse it’s not going to work properly. Fortunately, we can work around this.

Within the same pom (in my case, in domain-api) you can add another plugin. This plugin is NOT within the plugins element, but is within a pluginManagement element. You can get eclipse to generate the entry for you by telling it to ignore the execution when it tells you it has an error because it has no lifecycle mapping for the given execution. When you do that it’ll generate an xml block in your pom.xml that will look like this:

        <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->
        <plugin>
          <groupId>org.eclipse.m2e</groupId>
          <artifactId>lifecycle-mapping</artifactId>
          <version>1.0.0</version>
          <configuration>
            <lifecycleMappingMetadata>
              <pluginExecutions>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>exec-maven-plugin</artifactId>
                    <versionRange>[1.2.1,)</versionRange>
                    <goals>
                      <goal>java</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <ignore></ignore>
                  </action>
                </pluginExecution>
              </pluginExecutions>
            </lifecycleMappingMetadata>
          </configuration>
        </plugin>

The version eclipse generates will say something like <ignore></ignore> in the action section of this configuration. If you simply change that to <execute/> then eclipse will execute it. It will execute it every time, even on incremental builds, so hopefully it isn’t too intrusive. I’m just getting started on this project so I don’t have many entity classes yet, but if there’s an issue I’ll get back and update this post.

With this configuration I’m able to execute my integration tests from the command line and I’m also able to run the tests and launch the webapp from within eclipse, and I don’t incur the runtime penalty of dynamic weaving.

Jackson Mixins

You are working with a third party library. You need to serialize an object from that library to JSON. Or, in my case, I needed to serialize an implementation of an interface defined in the third party library. In any case you can’t modify the class you need to serialize, but you also need to change the way the class is mapped. Jackson provides a great mechanism to work around this using what they call MixIns.

Let’s say you are given this:

public interface ThirdPartyInterface {
  long getItemID();
  float getValue();
}

For some reason beyond your control, you need to use id instead of itemID in the JSON as the key for getItemID.  To make things even more awesome, you need to have quotes around the itemID and the value, but you don’t want quotes around EVERY numeric field, just those two.  With Jackson, I create my mixin interface and add to it the Jackson annotations that I would like applied to instances of the ThirdPartyInterface.

public interface MyMixin {
  @JsonProperty("id")
  @JsonSerialize(using=ToStringSerializer.class)
  long getItemID();

  @JsonSerialize(using=ToStringSerializer.class)
  float getValue();
}

Now you can see that I have the JsonProperty annotation in there to change the name of the key, and the JsonSerialize annotations, using the Jackson builtin ToStringSerializer to convert the long and float values into Strings, which will ensure they are quoted in the JSON output.

In order to use my Mixin, I configure the ObjectMapper to do so as follows.

  ObjectMapper mapper = new ObjectMapper();
  mapper.getSerializationConfig().addMixInAnnotations(ThirdPartyInterface.class, MyMixin.class);

In my case, I have several implementations of the ThirdPartyInterface and configuring the mixin as above applies to all implementations. Pretty slick.

Spring ConverterFactory Implementation

In my Spring MVC 3 based application I had recently implemented a few Converters for some of my JPA based data objects. It started with one, then another, and so on. By the time I got around to adding my fourth converter to the spring configuration file I knew it was time to pull it out and abstract it a bit. Thankfully, Spring allows you to implement a ConverterFactory that is responsible for creating the converters for some types.

Each of my entities extend an abstract base class that looks basically like this

public abstract class DataObjectAbstract<K extends Serializable>
        implements DataObject<K>
{
    protected transient String[] excludedEqualsFields = new String[] { "key", "version" };

    protected int version;

    @Override
    public boolean equals( Object that )
    {
        return EqualsBuilder.reflectionEquals( this, that, excludedEqualsFields );
    }

    @Override
    public int hashCode()
    {
        return HashCodeBuilder.reflectionHashCode( this, excludedEqualsFields );
    }

    @Override
    public String toString()
    {
        return ToStringBuilder.reflectionToString( this, ToStringStyle.MULTI_LINE_STYLE );
    }
}

The DataObject interface simply declares a getKey and setKey method.

So, in my Spring MVC Controller methods I was originally accepting a String or Long, then using my own data access objects to lookup the entities I needed. The next iteration in my implementation was to implement the Converters as I mentioned above. That was very simple and worked well, but having many data objects I didn’t want to copy that implementation over and over again. This is where the ConverterFactory comes in. Here’s my implementation:

@Component
public class DataObjectConverterFactory
        implements ConverterFactory<String, DataObject<Long>>
{
    @PersistenceContext
    EntityManager em;

    @Override
    public <T extends DataObject<Long>> Converter<String, T> getConverter( Class<T> type )
    {
        return new GenericLongKeyedDataObjectConverter<T>( type, em );
    }
}

The ConverterFactory interface is basically as simple as the Converter interface. The Class<T> type parameter to the getConverter method tells us what type we are going to convert to.  One option from here is to have a big nasty if/else statement with a bunch of instanceof checks that create a new Converter.  I thought about doing this and passing in the appropriate data access object and performing the lookup.  That would be only two classes and then I could convert all of my DataObjects, but I didn’t like the idea of a bajillion instanceof statements.  So you can see I implemented a GenericLongKeyedDataObjectConverter, which takes the target type and the EntityManager as parameters.  Here’s the implementation of the generic converter class:

/**
 * A generic converter used for converting from a string representation of an entity key to the DataObject itself.
 * @param <T> The type that is to be converted to.
 */
public class GenericLongKeyedDataObjectConverter<T extends DataObject<Long>>
        implements Converter<String, T>
{
    private Class<T> type;
    private EntityManager em;

    /**
     * @param type An instance of Class for the type being converted to
     * @param em EntityManager used to perform the lookup.
     */
    public GenericLongKeyedDataObjectConverter( Class<T> type, EntityManager em )
    {
        this.type = type;
        this.em = em;
    }

    @Override
    public T convert( String stringKey )
    {
        Long key = Long.parseLong( stringKey );
        return em.find( type, key );
    }
}

An extremely simple parameterized class implementation of the Converter interface. Here, with no use of instanceof, I’m creating the appropriate converter implementation for all of my persisted classes.  If you have a group of objects that you want converted and they all inherit from a base class, a ConverterFactory may be a better solution than implementing a bunch of converters manually.
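To see why no instanceof is needed, here is a plain-Java sketch of the same factory idea with Spring and JPA stripped away: one parameterized converter serves every entity type, and a Map stands in for the EntityManager lookup. All names here are illustrative, not the real project's.

```java
import java.util.Map;

public class ConverterFactoryDemo {

    // Minimal stand-in for Spring's Converter interface.
    interface Converter<S, T> {
        T convert(S source);
    }

    // The "factory": builds a converter for any target type generically,
    // with no per-type branching.
    static <T> Converter<String, T> converterFor(Map<Long, T> store) {
        return stringKey -> store.get(Long.parseLong(stringKey));
    }

    public static void main(String[] args) {
        Map<Long, String> fakeEntities = Map.of(1L, "first", 2L, "second");
        Converter<String, String> converter = converterFor(fakeEntities);
        System.out.println(converter.convert("2"));
    }
}
```

The type parameter does all the work: the lookup code is written once and the compiler specializes it for each target type at the call site.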

Finally, here’s the bean xml configuration:

<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
    <property name="converters">
        <set>
            <ref bean="dataObjectConverterFactory" />
        </set>
    </property>
</bean>

Notice that we reference the dataObjectConverterFactory bean, but I never defined it in my xml config.  That’s because I used the @Component annotation on my implementation class.

Get the “Next” value in a Java Enum

Java Enums. An awesome addition to Java 1.5 so we could avoid using public static ints for that purpose. I’ve been using them for some time with success and never noticed one deficiency until now. You can get the ordinal of an enum value with the ordinal() method. That is basically the index in the order the values were declared. So, if your enum looks something like this:

public enum Planet {
  MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE;
}
In this case, MERCURY would have an ordinal of 0, then VENUS 1, and so forth. Now, what if you want to iterate through them? You can get all the values of an enumerated type as an array using the values() method. Cool, right? Well, what if I don’t want to iterate through them, but I want to simply progress from one to the next. It would be cool if math operators (like + and -) would let you go from one to the next, but that’s not the case. I had a need to go from one to the next, so I changed my enum by adding the following method.

    public Planet getNext() {
      return this.ordinal() < Planet.values().length - 1
          ? Planet.values()[this.ordinal() + 1]
          : null;
    }

Now, if I do Planet.MERCURY.getNext() I would get VENUS. This takes advantage of the ordinal of each entry (which you cannot assign in any other way than the order in which you declare the enum values) and the values method, indexing into the values array to get the next value. If you attempt to go off the end, it’ll return null. It would be simple to make it wrap instead if that makes sense for your case. It would also be trivial to take this and implement a “getPrevious” if you have a need to go in reverse.
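The wrap-around variant mentioned above can be sketched with modulo arithmetic over values(), so NEPTUNE cycles back to MERCURY instead of returning null. The planet list here is an assumption about the original enum.

```java
public class PlanetDemo {

    enum Planet {
        MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE;

        // Like getNext(), but the modulo wraps the last ordinal back to 0.
        Planet getNextWrapped() {
            Planet[] values = Planet.values();
            return values[(this.ordinal() + 1) % values.length];
        }
    }

    public static void main(String[] args) {
        System.out.println(Planet.MERCURY.getNextWrapped()); // VENUS
        System.out.println(Planet.NEPTUNE.getNextWrapped()); // wraps to MERCURY
    }
}
```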

How to create a custom taglib containing an EL function for JSP

At some point in your use of JSP, there’s something you’re going to need to do for which you can’t find a spring or jstl tag. In that case, you can create a custom function in your custom tag library. It sounds more difficult than it is. All you will need is a tag library descriptor, and your class that implements the function. That’s about it. Here’s my TLD file.

<taglib xmlns="http://java.sun.com/xml/ns/javaee"
        version="2.1">

    <tlib-version>1.0</tlib-version>
    <short-name>my</short-name>

    <function>
        <name>doMyStuff</name>
        <function-class>com.mydomain.util.ElFunctions</function-class>
        <function-signature>java.lang.String doMyStuff( java.util.Collection )</function-signature>
    </function>
</taglib>

This file should be placed in your WEB-INF directory. In the function-signature, be sure to use fully qualified names.

Next is the class that implements the function.

package com.mydomain.util;

import java.util.Collection;

/**
 * Functions for use in expression language in the jsp views.
 */
public class ElFunctions
{
    /**
     * This is the function that is called by the Expression Language processor.  It must be static.
     * @param myparam
     * @return
     */
    public static String doMyStuff( Collection<SomeType> myparam )
    {
        // do stuff here and return the results
    }
}

Finally, just reference the function in my jsp file.

<%-- where you declare your taglibs, include this one, which references the tld we created in the first step. --%>

<%@ taglib prefix="my" uri="/WEB-INF/my.tld" %>

<!-- more html and whatever, in my case I'm using spring:message to output the results of my method call -->

<spring:message text="${my:doMyStuff(bean.collection)}" />

The call to ${my:doMyStuff(bean.collection)} causes the EL processor to call my function when it evaluates that snippet. In this case, ‘bean’ would be some java bean available to the view, and ‘collection’ would be a property on the bean that returns the collection expected as input to doMyStuff.
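To make the wiring concrete, here is one hypothetical body for doMyStuff: joining the collection's items into a comma-separated string. Only the "public static" requirement comes from the EL function contract; the behavior shown is my own invention for illustration.

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

public class ElFunctionsDemo {

    // A possible implementation: render each element with toString()
    // and join with ", " -- handy for displaying a list in a page.
    public static String doMyStuff(Collection<?> myparam) {
        return myparam.stream()
                .map(Object::toString)
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(doMyStuff(List.of("a", "b", "c")));
    }
}
```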