Author: digitaljoel

Those Pesky 12 Words

I’m working with a company that is developing a wallet for cryptocurrencies.  There is a lot of consternation about the friction introduced by asking the user to back up their private key using 12 words, as is common with BIP39.  The concern is that most of the users targeted by this wallet will not be familiar with the concept, and that it will feel burdensome to them to have to keep 12 words safe somewhere.  I’ve been asked multiple times to “fix the 12-words problem” but frankly, 12 words IS the fix.

What is BIP39?

BIP39 is a bitcoin improvement proposal for a mnemonic mechanism for backing up a private key.  Bitcoin and other cryptocurrencies use asymmetric cryptography to sign transactions which prove ownership, and therefore the ability to spend, bitcoin.  Therefore, securing your private key is paramount to the security of your bitcoin, as anyone with the private key can create a valid signature and spend your funds.

A private key is simply a 256-bit random number, which means it is between 0 and about 1.158 * 10^77.  There are various mechanisms for creating that random number, which I won’t go into.  But that number can be represented by 78 decimal digits, which is terribly long, or in hexadecimal it is 64 characters long, like this:


You can also display the private key in WIF-compressed format (Base58 + checksum), which would be something like this:


Each of these is even worse than the computer-generated WiFi password at my mom’s house, and she struggles enough with that.  Imagine me telling her to write down either of the above, and if she doesn’t do it right, or loses it, then her retirement is gone.  The chance of a transcription error in attempting to keep the above is large because neither representation has any sort of meaning to the human mind.
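To make those formats concrete without printing a real key, here is a short Python sketch that generates a throwaway 256-bit key and renders it in both representations.  The Base58 alphabet and the WIF version/flag bytes are the standard Bitcoin ones; everything else (and the key itself) is purely illustrative:

```python
import os
import hashlib

# Base58 alphabet used by Bitcoin (no 0, O, I, or l, to avoid confusion)
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    # checksum is the first 4 bytes of a double SHA-256 of the payload
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(payload + checksum, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # each leading zero byte is encoded as a leading '1'
    for b in payload:
        if b != 0:
            break
        out = "1" + out
    return out

# a throwaway 256-bit private key: 32 bytes from the OS CSPRNG
key = os.urandom(32)

print(len(str(int.from_bytes(key, "big"))))  # up to 78 decimal digits
print(key.hex())                             # 64 hex characters
# WIF-compressed: 0x80 version byte + key + 0x01 compressed flag, Base58Check encoded
wif = base58check(b"\x80" + key + b"\x01")
print(wif)                                   # 52 characters, starting with K or L
```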

Keeping it simple, BIP39 is a protocol that takes the private key, and instead of printing it in one of the above formats, maps it to 12 words.  There’s a little more to it than that, but that’s the gist.  Now, instead of either of the above formats, you get something more like this:

army van defense carry jealous true garbage claim echo media make crunch

Which of these is easiest to transcribe?  What about memorize?  Which could you tell someone over the phone (don’t do that with your private key!)?  Which would be easier to type when restoring your wallet?
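The “little more to it” is mostly a checksum.  For a 12-word phrase, BIP39 takes 128 bits of entropy (24 words use 256), appends a checksum of 1 bit per 32 bits of entropy taken from its SHA-256 hash, and slices the resulting 132 bits into twelve 11-bit indices into the 2048-word list.  A rough Python sketch of just that slicing, printing word indices rather than looking up the real wordlist:

```python
import os
import hashlib

def entropy_to_indices(entropy: bytes) -> list:
    """Map entropy to BIP39 word indices: entropy + checksum, cut into 11-bit chunks."""
    assert len(entropy) in (16, 32)  # 128 bits -> 12 words, 256 bits -> 24 words
    checksum_bits = len(entropy) * 8 // 32           # 4 bits for 128-bit entropy
    checksum = hashlib.sha256(entropy).digest()[0] >> (8 - checksum_bits)
    # append the checksum to the entropy as one big integer
    n = (int.from_bytes(entropy, "big") << checksum_bits) | checksum
    total_bits = len(entropy) * 8 + checksum_bits    # 132 bits for 12 words
    # slice into 11-bit indices, most significant chunk first
    return [(n >> shift) & 0x7FF for shift in range(total_bits - 11, -1, -11)]

indices = entropy_to_indices(os.urandom(16))
print(len(indices))                     # 12
print(all(i < 2048 for i in indices))   # True: each index fits the 2048-word list
```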

The 12-words problem

The “12 words problem” then becomes a problem in mindset around private keys.  Users are used to being bailed out by the service provider.  The internet industry has trained people not to keep their secrets.  Every “forgot password” button has taught us that it’s actually not important to keep that one bit of information that authenticates us to our banking provider safe or secret.  I can forget it and they will send me a new one.  I can authenticate by simply reading back a number sent via text to my phone, or to the email address I use.  I now rely on the security of my phone or email address for the security of every other service I attempt to authenticate to.

This mindset has opened vectors of attack to all sorts of services through social engineering.  From stealing coveted Twitter accounts to crypto wallets secured via 2-factor authentication (again… tied to your phone number).  Phone port attacks are becoming more common in the world of cryptocurrency because they give access to the 2FA mechanism, which then gives access to a user’s exchange account.  Yes, I’m simplifying, but all of that has happened.

Things Worse Than 12 Words

So, if I want to fix the 12-words-problem, what are some options?  If the 12 words are meant to allow a human to store this computer generated random data, what other representations could be used?

The strengths of 12 words include:

  • Maps to known concepts for humans
  • Easier to transcribe
  • Easier to communicate
  • Easier to input


Music

What if the bits were encoded into music?  Perhaps the bits could represent timing and notes.  BIP39 has a small dictionary of commonly used words.  The music BIP could include notes that harmonize well so it’s always pleasant.  The bits could represent some combination of notes and timing, creating a ditty that is your private key.  It maps well to humans.  It’s difficult to transcribe, difficult to communicate, especially for the tone deaf, and difficult to input.  While you may be able to recognize your key if it was played back to you, you would likely have difficulty recreating it exactly.


Pictures

What if the bits were encoded into a collection of images?  There could be a small dictionary of simple clip-art style pictures.  Your private key would create a sequential comic panel.  Again, it maps well to humans, is difficult to transcribe or communicate, and would be very difficult to input correctly.  Imagine if in order to restore your wallet you had to select 12 images out of a collection of 2048 options.

Alternately, it could be a single picture with multiple characters in it.  Maybe the bits represent the characters and their extremity positions.  For instance, my private key would be a bear, standing on his left leg, with his right arm in the air and left arm by his side.  Next to that is a pig on all fours, with its tail out straight.  This is probably even worse than the clip-art example.

Pin or Unlock Pattern

What if the user could drag their finger across dots like a mobile phone unlock pattern, or enter a PIN?  Since the private key is already 77 digits long, you would expect that the PIN-type entry would simply be those 77 digits.  Trying to remember that is the problem we are trying to solve originally.  If we translated that to a pattern, you would be dragging across 77 points… again, burdensome and error prone.

Secret Sharing

What if I could give part of my key to my friends and require them to come together to restore my key in case I lose it?  With Shamir’s secret sharing, it is possible to split a key into parts, and then require any subset of the parts to recreate the key.  I could give part of the key to my four siblings and ask them to keep it safe.  When I lose my key, I could require that three of them divulge their part and then I could reconstruct the key.

Chances are three of my family members would not collude to steal my funds because I think they’re good people.  But this still requires the trust of multiple third parties.  I’m also now assuming they won’t lose the key, like I did.  One nice aspect of this kind of key-splitting is that having a single slice of the key does not reveal anything about the key itself, so it does not actually help any brute force attack.
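Shamir’s scheme is worth a sketch, because the “a single slice reveals nothing” property falls out of the math: the secret is the constant term of a random polynomial of degree k-1 over a finite field, each person gets one point on the curve, and any k points recover the polynomial by Lagrange interpolation, while k-1 points are consistent with every possible secret.  A toy Python version (over a 127-bit prime field, so a real 256-bit key would need a bigger field; don’t use this for actual funds):

```python
import random

P = 2**127 - 1  # a Mersenne prime; real schemes use a field larger than the secret

def split(secret: int, shares: int, threshold: int) -> list:
    """Split secret into `shares` points; any `threshold` of them can recover it."""
    # random polynomial with the secret as its constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover(points: list) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, shares=4, threshold=3)
print(recover(shares[:3]) == 123456789)   # any 3 of the 4 shares work
print(recover(shares[1:]) == 123456789)
```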

Making 12 Words Easier

If we decide that the 12 words mechanism is actually the best representation, then what could we do to ease that pain?

Online Backup

What if the 12 words were stored “in the cloud” because you can store everything securely in the cloud?!  Some people will take a picture of their twelve words.  Taken with their phone, it may have been uploaded to Google Photos without them even knowing.  And then what if it accidentally slips into a public folder?  That’s a quick way to lose all of your funds.

How about asking a trusted third party to keep it safe for me?  Maybe my wallet has a “backup in the cloud” button.  I click it, set up my password, and it heads to the cloud.  Maybe it’s even encrypted with my password so the third party can’t even read it.  Now that third party is a wonderful target for hackers.  Humans are terrible at choosing passwords.  Somehow, I still know people who use the same password on all of the sites they hit.  Your favorite password is in a rainbow table somewhere, and will be used to recover your private key when the hackers get that database of encrypted private keys, and you know they will get that database (Experian, Target, Home Depot, everyone…)

Some will say that there are two types of companies: those that know they’ve been hacked, and those that don’t.  Storing your private key online is a terrible idea if you have anything more than play money involved.

Partial Online Backup

What if I don’t store 12 words online?  What if I store 6 here and 6 there?  Some are already arguing that 12 words is not enough and are using 24 words instead.  Now, if you want to cut it to 6 you are further weakening the security of your private key.

For a brute force attack, you basically have a 1 in 10^77 chance of picking correctly.  You may assume that getting 6 of the 12 words would cut that in half.  No, dividing 10^77 by two yields 5 * 10^76.  Cutting in half would hardly reduce the security.  Revealing half of the private key (via 6/12 words) cuts the difficulty down to 1 in 10^38 because you have gone from having to guess 256 bits, to only having to guess 128 bits.
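The difference between halving the number of guesses and halving the number of bits is easy to check directly with a couple of lines of Python:

```python
import math

keyspace = 2**256                 # full private key space
print(len(str(keyspace)))         # 78 digits, i.e. about 1.16 * 10^77

# halving the *number* of guesses barely dents the exponent
print(math.log10(keyspace // 2))  # ~76.8, still roughly 10^77

# revealing half the *bits* takes the square root of the search space
print(math.log10(2**128))         # ~38.5, i.e. roughly 10^38
```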

Secret Sharing Online Backup

What if I used secret sharing and stored the Shamir slices in different online locations?  With Shamir secret sharing, the user could select multiple online locations to store Shamir secret sharing slices of the private key.  When they need to restore their wallet, they could retrieve the required number of slices from Amazon S3, dropbox, and google drive.  This is similar to the 12 words partial online backup, except the slices don’t help an attacker narrow down a brute force attack.

Once again, users are not so good at selecting passwords, and too often share them amongst services.  You are now tying the security of your private key, which for all I know is storing your entire life savings in bitcoin, to your password selection for your Amazon, Dropbox, and Google accounts.

Security Vs. Convenience

I’ve had the luxury of working with a wonderful security department for the last 8 years of my career.  They understand the security concerns, but also the tradeoffs of implementing some forms of control.  In many aspects they will educate the business owner of the risk, and then defer the decision to that owner.  They will also exercise their right to stop a deploy or feature if the security risk is simply too great.  They exercise that right sparingly, so when they do, business knows it’s important.

Some providers, like the Edge wallet, do allow for backing up your key in the cloud.  They have put precautions in place to attempt to mitigate many of the security concerns of storing keys in the cloud.  It’s a decision they have made as a business, and it will work for some people.

The currency of security is convenience.  High security costs a lot of convenience.  BIP39 IS the convenience trade-off to the security of users storing their own private keys.  By converting a 256-bit random number to a stream of known words, the user at least has a chance of keeping, and being able to restore, their private key.

Robotic Xylophone – The Arduino Side

As I said in my previous post, I was working on a project with a neighbor to create a bell kit that was controlled by an Arduino Mega and used solenoids and magnets to strike the correct bells with the correct timing to make music.

The Java program in the previous post creates an event stream file in a binary format.  It’s not the most efficient, but it’s easier than parsing midi on the Arduino.  I’m sure I could have done it in 3 bytes per event instead of 4, but it is what it is.
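Each event in that binary format packs into 4 bytes: a pin byte (0xFF, i.e. -1 written unsigned, meaning delay), a value byte, and a big-endian 16-bit duration.  A quick Python sketch of the round trip (the function names here are mine, just mirroring the struct used in the Arduino code):

```python
import struct

DELAY = 0xFF  # pin value that marks a delay event (-1 written as an unsigned byte)

def pack_event(pin: int, value: int, duration_ms: int) -> bytes:
    # >BBH = big-endian: unsigned byte, unsigned byte, unsigned 16-bit int
    return struct.pack(">BBH", pin & 0xFF, value, duration_ms)

def unpack_event(raw: bytes):
    return struct.unpack(">BBH", raw)

print(pack_event(8, 1, 0).hex())                # '08010000' -- set pin 8 high
print(pack_event(-1, 1, 180).hex())             # 'ff0100b4' -- delay 180 ms
print(unpack_event(bytes.fromhex("ff0100b4")))  # (255, 1, 180)
```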

Most of the code in the Arduino sample has to do with reading the event stream from an SD card.  Once it’s read, it simply inspects the event and performs one of three actions.

  1. Set a pin to HIGH
  2. Set a pin to LOW
  3. delay N milliseconds.

All the hard work of making the song was already done in the Java application.

The one problem I have had is that if there are too many solenoids fired at the same time the Arduino simply resets.  My guess is that the solenoids are drawing too much current from the Arduino power supply.  If we were to give the solenoids a separate power supply then I believe it would work without issue.  This didn’t manifest itself until my daughter played a nice, jazzy version of Silver Bells.

Following is the documented code.  Sorry for whitespace problems, WordPress doesn’t make it easy…

/*
  robotic xylophone player

  created Nov 2015
  by Joel Weight

  SD card reading code based on ListFiles sample
  with following credits:

  created   Nov 2010
  by David A. Mellis
  modified 9 Apr 2012
  by Tom Igoe
  modified 2 Feb 2014
  by Scott Fitzgerald
*/

#include <SPI.h>
#include <SD.h>

File root;

// SD chip-select pin (53 is the hardware SS pin on the Mega)
int SDPIN = 53;

// delay identifier: an event with this pin value is a delay, not a pin write
int DELAY = 255;

// struct representing the Event that is written by the java program
struct Event {
  byte pin;
  byte value;
  int duration;
};

Event* playEvents = 0;
int playEventsSize = 0;

// called by arduino to set up everything.
void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for Leonardo only
  }

  Serial.print("Initializing SD card...");
  // On the Ethernet Shield, CS is pin 4. It's set as an output by default.
  // Note that even if it's not used as the CS pin, the hardware SS pin
  // (10 on most Arduino boards, 53 on the Mega) must be left as an output
  // or the SD library functions will not work.
  pinMode(SDPIN, OUTPUT);

  // initialize our output pins, 2-13 and 22-41, and make sure they are set low.
  for ( int i = 2; i < 14; i++ ) {
    pinMode(i, OUTPUT);
    digitalWrite( i, LOW );
  }
  for ( int i = 22; i < 42; i++ ) {
    pinMode( i, OUTPUT);
    digitalWrite( i, LOW );
  }

  // more checking for reading the file from the SD card reader.
  if (!SD.begin(SDPIN)) {
    Serial.println("initialization failed!");
    return;
  }
  Serial.println("initialization done.");

  root ="/");

  // debug code to print out the files in the root directory
  printDirectory(root, 0);

  Serial.println("done initializing!");
}

// Given an Event array and the number of Events, iterate through them
// processing each event sequentially.
void playSong( struct Event events[], int len ) {
  Serial.print( "song length: " );
  Serial.println( len );
  for ( int i = 0; i < len; i++ ) {
    Event e = events[i];
    // if it's a DELAY event, then simply delay for that amount of time.
    if ( == DELAY ) {
      Serial.println( "delay" );
      delay( e.duration );
    }
    // otherwise write the specified value to the pin.
    else {
      printEvent( e );
      digitalWrite(, e.value );
    }
  }
}

// debug method to print an event.
void printEvent( Event e ) {
  Serial.print( );
  Serial.print( " " );
  Serial.print( e.value );
  Serial.print( " " );
  Serial.println( e.duration );
}

// method called over and over by arduino, which means we will loop through the song
// until the power is cut.
void loop() {
  // load the song from the file.
  // hard coded file name at this point.  Could put a UI on this or use
  // hardware buttons to cycle from one song to the next, but that was more
  // work than I was ready to do at this point.
  loadSong( "/Joel.jwf" );
  // once we have loaded the song, give a 5 second countdown.
  for ( int i = 5; i > 0; i-- ) {
    Serial.println( i );
    delay( 1000 );
  }
  // and finally play the song.
  playSong( playEvents, playEventsSize );
}

// Helper to load the custom binary file that was written to the SD card by the java app.
void loadSong( const char* fileName ) {
  File file =;
  Serial.print( "Loading song " );
  Serial.println( fileName );
  if ( file ) {
    if ( playEvents != 0 ) {
      delete [] playEvents;
    }
    // no real handling for a malformed file. Don't write a malformed file.
    playEventsSize = file.size()/sizeof(Event);
    playEvents = new Event[playEventsSize];
    Serial.print( "File contains " );
    Serial.print( playEventsSize );
    Serial.println( " events." );
    int i = 0;
    while ( file.available()) {
      playEvents[i++] = readEvent(file);
    }
    file.close();
    Serial.print( "Done loading file with " );
    Serial.print( playEventsSize );
    Serial.println( " events in it." );
  }
  else {
    Serial.println( "Unable to load file." );
  }
}

// Helper to read a single event from an opened file.  Handles parsing the format
// of byte, byte, int.
Event readEvent( File file ) {
  Event result;
  byte bytes[4];
  file.readBytes( (char*)bytes, sizeof(Event)); = bytes[0];
  result.value = bytes[1];
  // duration is written big-endian by the java app.
  int duration = ((int)(bytes[2]) << 8 ) | (int)bytes[3];
  result.duration = duration;
  Serial.print( "Event read: " );
  printEvent( result );
  return result;
}

// Debug method to print the contents of a directory.
void printDirectory(File dir, int numTabs) {
  while (true) {
    File entry = dir.openNextFile();
    if (! entry) {
      // no more files
      break;
    }
    for (uint8_t i = 0; i < numTabs; i++) {
      Serial.print('\t');
    }
    if (entry.isDirectory()) {
      Serial.println("/");
      printDirectory(entry, numTabs + 1);
    } else {
      // files have sizes, directories do not
      Serial.print("\t\t");
      Serial.println(entry.size(), DEC);
    }
    entry.close();
  }
}


Robotic Xylophone – The Java Side

About a year ago a good friend called and asked if I would be interested in a little hobby project with him.  He wanted to create a robotic xylophone using an old bell kit he found in the local classifieds.  He wanted to control it using an Arduino Mega board.  His proposition was that he could do all the hardware work, but he didn’t know how to do the software.  That’s where I came in.

He fabricated a mount for the bell kit.  It would hold the bell kit above solenoids mounted on a board.  There was one solenoid for each note in the bell kit.  He would then put neodymium magnets on top of the solenoids.  The magnets would be reversed from the polarity of the solenoid so that when current was applied to the solenoid it would shoot the pin up to strike the bottom of the bell.  I thought it was pretty ingenious.

This was my first (and only) Arduino project, so I had a lot to learn.  We went through several iterations of ideas on how to get the music into the Arduino.  Maybe a web interface that would allow the user to “write” the music.  Sounded like a pain.

I have a child that I believe is talented musically (yeah, every dad will say that about their child) and I thought it would be fun for her to play a duet with herself.  The robotic bell kit replaying something she had already played, and then her playing the other part on her own bell kit.  We have a digital piano that allows us to record a track as it’s played and write it to USB in midi format.  So I decided that would be the way to get the music into the Arduino.

That meant I had to figure out how to parse the midi file on the Arduino.  It’s been a VERY long time since I have written any C code, and I didn’t find any suitably easy libraries I could use to do it.  Finally, I didn’t want to learn the ins and outs of the midi format, so I looked to see if there was a midi parsing library for Java, which I’m very comfortable in.  Sure enough there was.  And even better, it was super easy to use and didn’t even require any other libraries; it’s part of core Java (maybe not once Java 9 comes out, huh?)

So I decided what I would do is write a java program that would translate a midi file into a custom format that I could more easily read on the Arduino.  I would then write that file to an SD card which I would read from on the Arduino.  It took me longer to come to that design than it did to write all the code.

Speaking of code, here’s the Java side of things. I added a bunch of comments, so I won’t be doing any further explanation of it.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.MidiMessage;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Sequence;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

/**
 * Class that reads a midi file and outputs a custom format that is then read in my arduino code.
 * The custom format consists of an event stream, where each event is 3 data members constituting
 *   a total of 4 bytes per event.
 * An event is [outPin, hi/lo, duration(ms)]
 * outPin : if -1 then command is to delay, otherwise it contains the pin to change hi/low value for
 * hi/lo : if not delay then set out pin to hi on 1, low on 0. Otherwise ignore.
 * duration: if delay, then this is duration, otherwise ignore
 * So an event of [8,1,0] says to turn pin 8 high.
 *    an event of [-1,1,180] says to sleep for 180ms before handling the next event in the stream.
 */
public class MidiToArduino {

    public static void main( String... args ) {
        if ( args.length <= 0 ) {
            System.out.println( "usage: java MidiToArduino <file1> <file2> ..." );
            return;
        }
        for ( String filename : args ) {
            MidiToArduino instance = new MidiToArduino( filename );
            System.out.println( (instance.convert() ? "Converted: " : "Failed: " ) + filename );
        }
    }

    // arduino is expecting short, short, int (1 byte, 1 byte, 2 bytes)
    // pin, value, duration.

    private static Map<Short, Short> pinMap = new HashMap<>();

    private static final short LOWEST = 55;
    private static final short HIGHEST = 84;
    private static final short OCTAVE = 12; // 12 semitones per octave

    // we need to map notes from the midi stream to out pins on the arduino mega.
    // We only map notes 55 through 84 because that's all the notes that were available on
    // the bell kit we were using.  We map them to pins 2-13 and 22-39 because those
    // are the pins we are going to be using to output signals to the solenoids.
    static {
        // 55 = G
        // 84 = C
        // middle C = 60
        pinMap.put((short)55, (short)2);
        pinMap.put((short)56, (short)3);
        pinMap.put((short)57, (short)4);
        pinMap.put((short)58, (short)5);
        pinMap.put((short)59, (short)6);
        pinMap.put((short)60, (short)7);
        pinMap.put((short)61, (short)8);
        pinMap.put((short)62, (short)9);
        pinMap.put((short)63, (short)10);
        pinMap.put((short)64, (short)11);
        pinMap.put((short)65, (short)12);
        pinMap.put((short)66, (short)13);
        pinMap.put((short)67, (short)22);
        pinMap.put((short)68, (short)23);
        pinMap.put((short)69, (short)24);
        pinMap.put((short)70, (short)25);
        pinMap.put((short)71, (short)26);
        pinMap.put((short)72, (short)27);
        pinMap.put((short)73, (short)28);
        pinMap.put((short)74, (short)29);
        pinMap.put((short)75, (short)30);
        pinMap.put((short)76, (short)31);
        pinMap.put((short)77, (short)32);
        pinMap.put((short)78, (short)33);
        pinMap.put((short)79, (short)34);
        pinMap.put((short)80, (short)35);
        pinMap.put((short)81, (short)36);
        pinMap.put((short)82, (short)37);
        pinMap.put((short)83, (short)38);
        pinMap.put((short)84, (short)39);
    }

    // These are the commands within the midi stream that we are interested in.
    private static final int NOTE_ON = 0x90;
    private static final int NOTE_OFF = 0x80;
    // This is how long we want to allow the out pin on the arduino to remain high in order to play a note.
    private static final int DELAY_MS = 10;
    // This is the key for a DELAY event in the feed to the arduino
    private static final short DELAY = -1;

    private final String inputFileName;
    private final String outputFileName;

    public MidiToArduino( String filename ) {
        inputFileName = filename;
        outputFileName = getOutputFilename( inputFileName );
    }

    /**
     * This is where the work happens.  Not thread safe. For each conversion you must create a new instance of MidiToArduino.
     */
    public boolean convert() {
        // when a note is played we will then add an event to this queue so that we stop applying a high
        // signal to that pin after the appropriate amount of time, as specified in DELAY_MS.
        Queue<Event> liftEvents = new LinkedList<>();
        // This list keeps the final event stream that will be sent to the arduino.  It will be a merging of the
        // midi events and our newly created lift events.
        List<Event> allEvents = new ArrayList<>();
        try {
            // most of this is boilerplate to read the midi file.
            Sequence sequence = MidiSystem.getSequence(new File(inputFileName));

            // We need to map from the 'tick' in midi, to milliseconds, since that's what our delay is on arduino.
            long microseconds = sequence.getMicrosecondLength();
            long tickLength = sequence.getTickLength();

            long msPerTick = (microseconds/(tickLength*1000));

            System.out.println( "msPerTick = " + msPerTick );

            // choose the longest track of any multitrack midi file, assuming it is the one with the music notes.
            Track track = getLongestTrack( sequence );

            int lastTick = 0;
            for ( int i = 0; i < track.size(); i++ ) {
                // iterate through and output each event, but not key offs.
                MidiEvent event = track.get(i);
                MidiMessage message = event.getMessage();
                // all this message stuff is from the midi parsing library.
                if ( message instanceof ShortMessage ) {
                    long tick = event.getTick()*msPerTick;
                    // before we process the next event from the midi stream, we have to see if there are any
                    // notes that we are currently playing that need to be lifted.
                    while ( liftEvents.peek() != null && liftEvents.peek().tick < tick ) {
                        lastTick = addEvent( allEvents, liftEvents.poll(), lastTick );
                    }
                    ShortMessage sm = (ShortMessage)message;
                    int command = sm.getCommand();
                    int velocity = sm.getData2();
                    if ( command == NOTE_ON && velocity > 0 ) {
                        // we only want to handle this event if it's where the player played a note.
                        int key = sm.getData1();
                        Event e = new Event( (int)tick, (short)key, (short)1 );
                        lastTick = addEvent( allEvents, e, lastTick );
                        // make sure we insert a new event to lift this note at the appropriate time.
                        liftEvents.add(new Event((int)tick + DELAY_MS, (short)key, (short)0));
                    }
                }
            }
        } catch (InvalidMidiDataException | IOException e) {
            // bail
            return false;
        }

        writeToFile( allEvents );

        return true;
    }

    private void writeToFile( List<Event> events ) {
        try (FileOutputStream out = new FileOutputStream( outputFileName )) {
            // we write just the bytes because it was easy to read on the arduino
            out.write( getAllBytes( events ));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private byte[] getAllBytes(List<Event> allEvents) {
        // we know that each event will take 4 bytes, so we can easily create an array of the appropriate size.
        byte[] allBytes = new byte[allEvents.size()*4];
        int pos = 0;
        for ( Event e : allEvents ) {
            System.out.println( e + "," );
            for ( byte b : e.getBytes()) {
                allBytes[pos++] = b;
            }
        }
        return allBytes;
    }

    /**
     * Because I don't know which track contains the actual music, I just pick the track with the most events and assume.
     */
    private Track getLongestTrack( Sequence sequence ) {
        Track result = null;
        Track[] tracks = sequence.getTracks();
        for ( int i = 0; i < tracks.length; i++ ) {
            if ( result == null ) {
                result = tracks[i];
            }
            else if ( tracks[i].size() > result.size() ) {
                result = tracks[i];
            }
        }
        return result;
    }

    /**
     * Add an event and return the time of the last event.
     * @param events the event stream being built
     * @param newEvent the event to append
     * @param lastTick the time of the previous event
     * @return the time of the last event after this one is added
     */
    private int addEvent( List<Event> events, Event newEvent, int lastTick ) {
        int duration = newEvent.tick - lastTick;
        if ( duration > 0 ) {
            events.add( new Event(duration, DELAY, (short)1 ));
        }
        events.add(new Event( 0, getPin( ), newEvent.value));
        return lastTick + duration;
    }

    /**
     * Get the output pin that will be used to play a note.  If the note from the midi stream is too high
     * or too low, it will be adjusted by an octave in the right direction until it is within the range
     * that can be played by our bell kit.
     */
    private short getPin( short note ) {
        while ( note < LOWEST ) {
            System.out.println( "raising " + note );
            note += OCTAVE;
        }
        while ( note > HIGHEST ) {
            System.out.println( "lowering " + note );
            note -= OCTAVE;
        }
        return pinMap.get(note);
    }

    private String getOutputFilename( String input ) {
        String base = input;
        int index = input.lastIndexOf(".");
        if ( index > 0 ) {
            base = input.substring(0, index + 1 );
        }
        return base + "jwf";
    }

    /**
     * Simple representation of an event within our event stream.
     */
    private class Event {
        public final short pin;
        public final short value;
        public final int tick;

        public Event( int tick, short pin, short value) {
   = pin;
            this.value = value;
            this.tick = tick;
        }

        /**
         * Return the bytes as they should be written to the file.
         * [ pin (1 byte), value (1 byte), duration (2 bytes) ]
         */
        public byte[] getBytes() {
            return new byte[] { (byte)pin, (byte)value, (byte)(tick >> 8), (byte)tick };
        }

        /**
         * Output the note for debugging in a format that is easy to copy and paste into an array in the arduino code
         * for testing a static event stream.
         */
        public String toString() {
            return "{ " + (pin != DELAY ? pin : "DELAY") + ", " + value + ", " + tick + "}";
        }
    }
}


Iterative Code vs. Reactive Code

At work we have an internal app that downloads a large JSON file and uses Angular to create tables and allows for filtering and whatnot.  The user interface is SUPER slow.  Basically unusable, and it didn’t give me the data in the format I needed.  So, I decided to take the giant JSON dump and parse it with a little java program and output the information I need.  The JSON data looks like this:

apps – an ordered list of strings.

gavs – an ordered list of strings.

hosts – an ordered list of strings.

uses – a list of 3-item integer arrays where the first item in the array is the index into the app collection, the second is the index into the gavs collection, and the third is an index into the hosts collection.  Something like [45, 67, 189] would mean that it references the app at the 45th index, the gav at the 67th, and the host at the 189th.

I needed to get a list of all apps that are associated with certain gavs.  So my first pass was an iterative solution. It looked something like this:

public static void main( String... args ) {
    for ( blah : blah ) { // stuff that gets the gavIndex I'm interested in...
        for ( List<Integer> use : uses ) {
            // if the index in the gav collection matches the gav that I'm testing now
            if ( gavIndex == use.get(1)) {
                // get the app string from the apps collection so I can show a user friendly
                // app name instead of just an index.
                String app = apps.get(use.get(0));
                // references is a multimap that is storing my apps per gav.
                if ( references.get(app) == null || !references.get(app).contains(module)) {
                    // so put the reference in the multimap if it's not already there.
                    references.put(app, module );
                }
            }
        }
    }
    // code to output my references.
}

Now the reactive code:

public static void main( String... args ) {
    PublishSubject<List<Integer>> subject = PublishSubject.create();
    for ( blah : blah ) { // stuff that gets the gavIndex I'm interested in...
        subject.filter(use -> shouldInsert( apps, use, gavIndex, module, references))
            .subscribe(use -> references.put(apps.get(use.get(0)), module));
    }
    // now push each use through the subject; every subscription above sees it.
    uses.forEach(subject::onNext);
    // code to output my references.
}
// basically the condition that was inline in the iterative solution.
private static boolean shouldInsert( List<String> apps, List<Integer> use, int gavIndex, String module, Multimap<String, String> references ) {
    Collection<String> currentUses = references.get(apps.get(use.get(0)));
    boolean result = (gavIndex == use.get(1)) && (currentUses == null || !currentUses.contains( module ));
    return result;
}

I get the same result from both processes. I find the reactive version interesting because I just set everything up, and then when the subject subscribes to the observable it all just goes. I also think the succinctness of the reactive code makes it easier to read.

Are there fewer lines? Not really, and that shouldInsert method sure takes a lot of parameters. Currently it’s not thread safe, but I could get there easily, and then I would expect the reactive version to be faster than the iterative one. Finally, the question is: what does Rx buy me over Java 8 streams in this case? Not really anything, but I don’t have a ton of experience with Java 8 streams yet, and sadly I don’t have time to mess with them at the moment. Anyway, it was interesting to me, and I thought it might be interesting to others.
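Since the question came up, here is a minimal sketch of what the same filtering might look like with Java 8 streams. The sample data and the appsForGav method are invented for the demo; they are not code from the actual program.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

public class StreamFilterDemo {
  // Hypothetical sample data in the same shape as the JSON dump:
  // apps is an ordered list; each "use" is [appIndex, gavIndex, hostIndex].
  static final List<String> apps = Arrays.asList("app-a", "app-b", "app-c");
  static final List<List<Integer>> uses = Arrays.asList(
      Arrays.asList(0, 1, 0),
      Arrays.asList(2, 1, 0),
      Arrays.asList(1, 0, 0));

  // Collect the distinct app names that reference the given gav index.
  static Set<String> appsForGav(int gavIndex) {
    return uses.stream()
        .filter(use -> use.get(1) == gavIndex)   // keep uses pointing at our gav
        .map(use -> apps.get(use.get(0)))        // resolve the app index to its name
        .collect(Collectors.toCollection(TreeSet::new)); // TreeSet deduplicates like the multimap check
  }

  public static void main(String[] args) {
    System.out.println(appsForGav(1)); // [app-a, app-c]
  }
}
```

The stream pipeline plays the same role as the filter/subscribe chain, with the TreeSet standing in for the "don't insert twice" check.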

RxJava concurrency demo application


Back when I was in a reading group and had 2 weeks to learn Erlang I wrote a little air traffic control application to highlight the concurrency capabilities of Erlang.  Here is my blog post regarding that.

A few months ago I volunteered to do a presentation on rxJava at my employer’s internal technical conference.  What I didn’t tell them is that I didn’t know anything about it other than it was a current buzz word.  I thought it would be interesting and that giving a presentation on it would be a great reason to learn it.  Fortunately for me, a brilliant co-worker also proposed to present on it so we were paired up together.  I’m not going to lie, he did nearly all the powerpoint work, including the flow of the presentation.  I got to talk about operators (combining, filtering, subscribing) and testing/debugging/error handling.  All in all I thought it went quite well.

Since I had no experience, I decided I needed some application to get up to speed on rxJava.  I wanted to be able to answer questions and have more experience than just having read the documentation before the people attending the session.

So, I decided to write the air traffic control application in java.  Sadly, I don’t know that I could even really read the Erlang anymore, but the ideas are pretty easy to understand.

Enough jabbering about that, let’s get to the code.  I will do some explaining as we go, but you should have some familiarity with reactive concepts.  If you need a primer, spend some time reading an introduction to ReactiveX first.

The Setup

Here’s the premise.  There are airplanes that need to land.  There is a flight tower that directs them where to go.  If two planes enter the same place then they collide.  If an airplane enters the space occupied by the tower then it has landed.  Airspace is represented by a square grid and the tower is in the middle of the grid.  The air traffic controllers are not very smart (but smarter than the Erlang version!).

Here’s how I broke things out.

  • The Radio – Responsible for transmitting messages from the tower to the plane.
  • The Radar – Responsible for broadcasting the position of the planes.
  • The Planes
    • Each plane receives messages from the radar so they can determine if another plane has entered their space (in which case they collide).
    • Each plane also receives messages from the radio.  These tell the plane where to move to next.
    • Each plane sends a blip on the radar to broadcast its current location.
  • The Tower
    • Receives blips on the radar with the location of each plane.
    • Sends messages on the radio that tell the plane where to go next.
  • Radar Screen
    • Receives blips from the radar
    • Displays a graph showing where each plane is on the grid.

The Code

The code can be found in its fullness on GitHub.  The following actually contains most of it, but if you want to run things, you’ll want to fetch the full project.


PublishSubject<RadioMessage> radio = PublishSubject.create();

Ok, this one is simple.  The radio is a PublishSubject for RadioMessages.  A RadioMessage simply contains the target flight number and the location that the target flight should fly to next.  Because it’s a PublishSubject it can subscribe to Observables that emit RadioMessages, and will also pass those through to any subscribers that are observing the radio.


PublishSubject<Blip> radar = PublishSubject.create();

Another nice, simple one.  The radar is a PublishSubject for Blips.  A Blip contains the id of the blip source (in this case a flight number), and the location of the blip source.  Finally, it contains a blip type, like MOVE, LAND, or CRASH.  It being a PublishSubject here has the same benefits as the Radio.


Now we start getting into some of the guts and putting rxJava to use.  First, here is the code that creates the Tower.

 // create the tower, which is where the planes try to get to land.
 Tower tower = new Tower(TOWER_LOCATION, radar);
 // and allow the tower to emit on the radio
 Observable.create( tower ).subscribe(radio);

So first we create the tower with a nice, simple constructor.  I went back and forth a few times on how to handle the tower and the radar.  The subscription isn’t a simple one like the radio subscription above, so I decided to encapsulate it within the Tower class.

Here is the bulk of the interesting code within the Tower class:

public class Tower implements Observable.OnSubscribe<RadioMessage> {

  private Pair location;

  Subscriber<? super RadioMessage> radio;
  Observable<Blip> radar;

  public Tower( Pair location, Observable<Blip> radar ) {
    this.location = location;
    this.radar = radar;
  }

  /**
   * Implementation of the OnSubscribe interface
   */
  public void call(Subscriber<? super RadioMessage> t) {
    radio = t;
    connectRadar();
  }

  private void connectRadar() {
    // on a MOVE blip from a plane we will send them information on where to go next.
    radar.filter(b -> b.type == MOVE && !b.location.equals(location))
        .observeOn(Schedulers.computation())
        .subscribe(b -> {
          if ( !radio.isUnsubscribed()) {
            radio.onNext(new RadioMessage(b.id, getNewCoordinates(b.location)));
          }
        });
  }
}

The interesting part here is in the connectRadar method.  The rxJava APIs are very fluent, but here’s a plain-English explanation.

  1. First we only want to see the blips on the radar that are of the MOVE type and that are not within the tower’s location.
  2. We don’t want to handle all these blips on the main thread, so we observe on the computation scheduler.
  3. In onNext we use the current coordinates of the blip and then emit on the radio a message that tells the plane where to go next.

That’s really it for the tower.


Here is the code that creates the planes:

    long totalSleep = 0;
    // create all the planes.
    for ( int i = 0; i < PLANES; i++ ) {
      Plane plane = new Plane(i, getNewSpeed(), getStartingPair(GRID_SIZE), TOWER_LOCATION, radar);
      // subscribe the plane to the radio
      // if we subscribe on a different thread, then we may not get our first message
      radio.filter(msg -> msg.flightNumber == plane.flightNumber )
          .subscribe( plane );
      // tell the radar to listen to blips from the plane.
      Observable.create( plane ).subscribe( radar );
      // rather than start them all at once, they will enter the grid when this Observable calls onNext.
      Observable.timer(totalSleep, TimeUnit.MILLISECONDS)
          .subscribe( n -> plane.takeoff());
      totalSleep += getNextSleep();
    }

I ran into one gotcha on this one.  As you can see by the comment, if I subscribe on a different scheduler then I would occasionally see the case where the plane would send a blip, the tower would receive the blip and send a radio message, all before the plane subscribed to the radio.  Ah the joys of concurrency.  While rxJava makes this a lot easier, there are still all the same concerns when it comes to threaded code.

Another tidbit you’ll see here.  I wanted to create all the planes at once and then have them come onto the grid using a staggered schedule.  To accomplish this I used an Observable.timer that when it fires would tell the plane to take off.  I accumulate the timeout so that each plane takes off some random time after the previous one.
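Stripped of the Rx machinery, the accumulation itself is just a running sum. A stdlib-only sketch (the method and class names here are illustrative, not from the project):

```java
import java.util.ArrayList;
import java.util.List;

public class StaggerDemo {
  // Turn a list of random gaps into absolute takeoff delays by accumulating them,
  // so each plane takes off some time after the previous one.
  static List<Long> staggeredDelays(List<Long> gaps) {
    List<Long> delays = new ArrayList<>();
    long totalSleep = 0;
    for (long gap : gaps) {
      delays.add(totalSleep);  // this plane's Observable.timer delay
      totalSleep += gap;       // the next plane waits this much longer
    }
    return delays;
  }

  public static void main(String[] args) {
    System.out.println(staggeredDelays(List.of(100L, 250L, 50L))); // [0, 100, 350]
  }
}
```

Each entry in the result is what would be handed to Observable.timer for the corresponding plane.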

Here are the details of the Plane class:

public class Plane implements Observer<RadioMessage>, Observable.OnSubscribe<Blip> {

  public final int flightNumber;
  private Pair location;
  private Pair towerLocation;
  private int speed;
  private AtomicBoolean flying = new AtomicBoolean(false);

  Observable<Blip> radar;
  Subscription radarSubscription;
  Subscriber<? super Blip> blipSubscriber;

  public Plane( int flightNumber, int speed, Pair startingLocation,
        Pair towerLocation, Observable<Blip> radar ) {
    this.flightNumber = flightNumber;
    this.speed = speed;
    this.radar = radar;
    this.location = startingLocation;
    // when we get to the tower location we have landed.
    this.towerLocation = towerLocation;
  }

  /**
   * Implementation of onNext for the Observer interface.  The way we subscribe
   * means that we will only get RadioMessages that are directed at our flightNumber
   */
  public void onNext(RadioMessage m) {
    Observable.timer( speed, TimeUnit.MILLISECONDS)
    .subscribe( n -> {
        // when the timer goes off, it calls this onNext message
        if (flying.get()) {
          // if we haven't crashed while traveling to our new location then set our current to the new.
          this.location = m.location;
          if ( this.location.equals(towerLocation) ) {
            land();
          }
          else {
            // if we haven't landed, then send a blip to tell the tower our new location.
            move();
          }
        }
    });
  }

  /**
   * Implementation of the OnSubscribe interface
   */
  public void call( Subscriber<? super Blip> t ) {
    blipSubscriber = t;
  }

  private void sendBlip( Blip blip ) {
    if ( !blipSubscriber.isUnsubscribed()) {
      blipSubscriber.onNext( blip );
    }
  }

  public void takeoff() {
    flying.set(true);
    connectRadar();
    sendBlip( new Blip( flightNumber, location, MOVE ));
  }

  private void land() {
    // whether landing or crashing we are done with this flight.
    flying.set(false);
    radarSubscription.unsubscribe();
    sendBlip(new Blip(flightNumber, location, LAND));
  }

  private void move() {
    sendBlip(new Blip(flightNumber, location, MOVE));
  }

  /**
   * Subscribe to the radar
   */
  private void connectRadar() {
    // get blips that are in our airspace that is not us.
    // if we get a blip it must be another airplane that will cause us to crash.
    radarSubscription = radar.filter(b -> b.id != this.flightNumber)
      .filter(b -> b.location.equals(this.location))
      .filter(b -> b.type == MOVE || b.type == CRASH )
      .subscribe(blip -> {
          // on any blip on the radar in our space, it will cause us to crash if we are still flying.
          if ( flying.get()) {
            flying.set(false);
            sendBlip(getRadarResponse(blip));
          }
      });
  }

  /**
   * We have received a blip on the radar.  That means someone else has entered our space.
   * Because of that we must crash.
   */
  private Blip getRadarResponse( Blip blip ) {
    // we will unsubscribe from the radar because once crashed we don't need any more blips
    radarSubscription.unsubscribe();
    // send a crash blip on the radar.  This will notify the plane that hit us that
    // we were already in this space and cause them to crash also.
    return new Blip( flightNumber, location, Blip.BlipType.CRASH);
  }
}

The plane class has comments that hopefully explain the bulk of the code, but here are a few notes on it anyway.

First, for brevity I left off the onCompleted and onError methods that would normally be required of an implementation of the Observer interface.

Second, I would have liked this class to implement Observer<RadioMessage> AND Observer<Blip>, but that’s illegal in Java because both interfaces erase to the same raw type.  This is why I had to pass the radar in to the constructor, so the plane could subscribe to it.  I could have at that point also just had the radar subject subscribe back to the plane, but that made it more tightly coupled.  This way, having the Blip Observable and the Blip Observer separate, the plane doesn’t need to know that it’s implemented as a subject.

One way I may have been able to get around not being able to implement Observer twice would be to use composition.  The Plane could have had an accessor method that returns the Observer<RadioMessage> and another that returns the Observer<Blip>.  Then it would only be implementing the OnSubscribe interface.

I should probably also point out that the Tower and the Planes each expect to have only one subscriber.  Each should probably keep a thread-safe collection of subscribers so that more than one Observer can subscribe.  The PublishSubject helps obviate this in my case, but I’m not sure my way is the best way here.

Ancillary Code

Since my application creates all the planes at once on different schedulers, if I don’t have a mechanism to make the main thread wait for all the planes to complete, then the application would start and finish nearly instantly.  To get around this I use a CountDownLatch initialized with the number of planes that we create.  Then we simply subscribe to the radar, look for crash and land events, and decrement the latch on those.  Finally, we just wait for the latch to complete and then the program can terminate.  Here’s the code:

// Create a latch so we don't end the program prematurely.
CountDownLatch latch = new CountDownLatch(PLANES);

// subscribe to the radar so we countdown whenever a plane lands or crashes.
radar.filter(b -> b.id != -1 && (b.type == LAND || b.type == CRASH))
    .subscribe(b -> latch.countDown());

// plane creation code here

// complete when all planes have landed or crashed.
latch.await();

Next, we have the radar screen.  This is purely for output.  The Erlang version didn’t have this nicety, but it sure was great for debugging… and entertainment.

public class RadarScreen implements Observer<Blip> {

  private BiMap<Integer, Pair> flightMap = Maps.synchronizedBiMap(HashBiMap.create());
  private Map<Pair, Integer> crashes = Maps.newConcurrentMap();

  private int gridSize;
  private Pair location;

  public RadarScreen( int gridSize, Pair location ) {
    this.gridSize = gridSize;
    this.location = location; // location is the location of the tower.
  }

  public void onNext(Blip t) {
    if ( t.type == LAND ) {
      // if they are landing just remove by id.
      flightMap.remove(t.id);
    }
    else {
      // remove by location
      flightMap.inverse().remove(t.location);
      if ( t.type == MOVE ) {
        // if they are moving then put them back on the map
        flightMap.put(t.id, t.location);
      }
      else if ( t.type == CRASH ) {
        // if they are crashing do not put them back on the map, but add it to the crashes collection.
        crashes.put(t.location, 10);
      }
    }
  }
}
It is a very simple implementation that simply rewrites the entire graph to System.out when a radar event is received.  On my Mac I can watch it in the STS console and it looks fairly animated.  And since anything can subscribe to the radar, you could just as easily put together a Swing UI to show the planes.

I left out the implementation that prints the graph (since it’s just creating a StringBuffer and then spitting it to System.out) and I also left out the onCompleted and onError methods.  I also left out some extra code that is used to print asterisks in crash locations.

Lessons Learned

This was quite a fun experiment, and was a great way to learn about reactive programming in general, and rxJava specifically.  My part of the presentation had to do with the operators, debugging, testing, and error handling.  I didn’t build much error handling into this example, but I did get to play with the creation, filtering, and mapping operators, and spent plenty of time testing and debugging.

My first implementation was kind of similar to this implementation, but had a lot more garbage that I just didn’t need.  I had extra layers of Observables and Observers and it was just a mess.  My second implementation was almost purely functional.  It was all contained in one class with a lot of chained calls to map, flatMap, filter, etc.  I kind of liked it, but these both had a fatal flaw.  Both implementations depended on some global state.  This is something I didn’t have in the Erlang implementation because the state was always passed around from function to function.

My final implementation is the one you see here.  It’s not perfect, but I managed to get the global state hidden in the RadarScreen class.  It feels more properly reactive than the first sample, but still has some of the strengths of object oriented programming using the encapsulation of the plane, tower, and radar screen objects.  I’ve learned plenty of times that whatever you first implement with a new technology is just not going to be right.  I’m sure there are holes in this project, things that could have been done differently, and perhaps things that a reactive pro will simply look at and say, “huh?!” but in any case, it gave me a good start.  Here are a few things I learned:

  1. Reactive code is still concurrent code.  Treat it as such.
  2. System.out and log.debug are your friends.
  3. Being able to subscribe to an Observable simply for debug purposes is really awesome.  I did this a couple times with the subjects I created just so I could validate the messages going through.  For instance when the message would come through the radio but would not be received by the plane because the plane hadn’t subscribed yet.
  4. If you can’t subscribe with a purely debug Observer then doOnNext is also really awesome.  See #3.
  5. Reactive programming is mind bending for someone with 15 years of experience in core Java.  You have to start thinking a little bit more like JavaScript and less like Java.
  6. Java 8 lambdas are really awesome.
  7. There are a lot of ways to do the same thing with rxJava.
  8. I want to do more reactive programming.

How to make Jackson serialize null strings differently than null Objects

We’ve got an API.  It uses a certain technology to write JSON data, and this certain technology writes null strings as empty string (“”), but writes null objects as ‘null’.  It’s in production, and therefore changes to it need to be backward compatible.  Jackson is awesome. We want to replace “certain technology” with Jackson.

My first thought was that I would write a custom StringSerializer and associate it with the String type and then whenever a String is null I would output empty string instead of null and poof, done.  Sadly, in Jackson, if the value is null it will always call the NullValueSerializer instead of the Serializer that is configured for the type of field that contains the null value.

So, how to overcome this conundrum?  It can still be approached with a custom serializer implementation, but you also need to touch the SerializerProvider.  Here’s a quick solution we threw together that appears to be working.

// We need to customize the DefaultSerializerProvider so that when it is looking for a NullSerializer it
// will use one that is class sensitive, writing strings as "" and everything else using the default value.
public static class CustomNullStringSerializerProvider extends DefaultSerializerProvider {

  // A couple of constructors and factory methods to keep the compiler happy
  public CustomNullStringSerializerProvider() { super(); }

  public CustomNullStringSerializerProvider(CustomNullStringSerializerProvider provider, SerializationConfig config,
      SerializerFactory jsf) {
    super(provider, config, jsf);
  }

  public CustomNullStringSerializerProvider createInstance(SerializationConfig config, SerializerFactory jsf) {
    return new CustomNullStringSerializerProvider(this, config, jsf);
  }

  // This is the interesting part.  When the property has a null value it will call this method to get the
  // serializer for that null value.  At this point, we have the BeanProperty, which contains information about
  // the field that we are trying to serialize (including the type!)  So we can discriminate on the type to determine
  // which serializer is used to output the null value.
  public JsonSerializer<Object> findNullValueSerializer(BeanProperty property) throws JsonMappingException {
    if (property.getType().getRawClass().equals(String.class)) {
      return EmptyStringSerializer.INSTANCE;
    } else {
      return super.findNullValueSerializer(property);
    }
  }
}
// This is our fancy serializer that takes care of writing the value desired in the case of a null string.  We could
// write whatever we want in here, but in order to maintain backward compatibility we choose the empty string
// instead of something like "joel is awesome."
public static class EmptyStringSerializer extends JsonSerializer<Object> {

  public static final JsonSerializer<Object> INSTANCE = new EmptyStringSerializer();

  private EmptyStringSerializer() {}

  // Since we know we only get to this serializer in the case where the value is null and the type is String, we can
  // do our handling without any additional logic and write that empty string we are so desperately wanting.
  public void serialize(Object o, JsonGenerator jsonGenerator, SerializerProvider serializerProvider)
      throws IOException, JsonProcessingException {
    jsonGenerator.writeString("");
  }
}

Now, when we are configuring the ObjectMapper we can simply call setSerializerProvider and give it our custom provider and voila.
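To tie it together, here is a condensed, self-contained sketch of the whole wiring (assuming Jackson 2.x on the classpath).  The Bean class and its field names are invented for the demo, and the provider is abbreviated from the code above:

```java
import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.BeanProperty;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationConfig;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.DefaultSerializerProvider;
import com.fasterxml.jackson.databind.ser.SerializerFactory;

public class NullStringDemo {

  // Writes "" instead of null; only ever invoked for null String properties.
  static final JsonSerializer<Object> EMPTY_STRING = new JsonSerializer<Object>() {
    @Override
    public void serialize(Object o, JsonGenerator gen, SerializerProvider sp) throws IOException {
      gen.writeString("");
    }
  };

  // Condensed version of the provider from the post.
  static class CustomProvider extends DefaultSerializerProvider {
    CustomProvider() { super(); }
    CustomProvider(CustomProvider src, SerializationConfig config, SerializerFactory f) {
      super(src, config, f);
    }
    @Override
    public CustomProvider createInstance(SerializationConfig config, SerializerFactory f) {
      return new CustomProvider(this, config, f);
    }
    @Override
    public JsonSerializer<Object> findNullValueSerializer(BeanProperty property) throws JsonMappingException {
      if (property.getType().getRawClass().equals(String.class)) {
        return EMPTY_STRING;
      }
      return super.findNullValueSerializer(property);
    }
  }

  // Hypothetical bean for the demo: a null String next to a null Object.
  public static class Bean {
    public String name;   // null -> ""
    public Bean child;    // null -> null
  }

  public static String write() {
    try {
      ObjectMapper mapper = new ObjectMapper();
      mapper.setSerializerProvider(new CustomProvider());
      return mapper.writeValueAsString(new Bean());
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(write()); // the null String comes out as "", the null Bean as null
  }
}
```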

Adding Security to a Partially Exposed Web Service

In my previous post I talked about adding some conditional security to a web service by only exposing certain methods and model representations using the new Conditional annotation and a HandlerInterceptor in a Spring 4 based Spring Boot app. Tonight I decided to add some real Spring Security magic to it.

First, add the Spring Security dependency. I took this right from the Spring Security guide on the website.


Then I added the following new Configuration class to my existing Application.

  @Configuration
  static class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
      // the user/role setup straight from the Spring Security guide goes here
    }
  }

Next, I modified my HandlerInterceptor and it ended up as follows:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {

  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal();
    if ( handler instanceof HandlerMethod ) {
      HandlerMethod method = (HandlerMethod)handler;
      if ( method.getMethodAnnotation(Public.class) != null
          && hasAnyRole( (User)principal, method.getMethodAnnotation(Public.class).forRoles())) {
        return true;
      }
    }
    // hide the endpoint from callers that aren't allowed to see it.
    response.sendError(HttpServletResponse.SC_NOT_FOUND);
    return false;
  }

  private boolean hasAnyRole( User principal, String[] rolesStrings ) {
    if ( rolesStrings == null || rolesStrings.length == 0 ) {
      return true;
    }
    Set<String> roles = Sets.newHashSet(rolesStrings);
    for ( GrantedAuthority auth : principal.getAuthorities() ) {
      if ( roles.contains(auth.getAuthority())) {
        return true;
      }
    }
    return false;
  }
}

In the previous iteration I was simply looking for the Public annotation. Now I am looking for a parameter on that annotation. The parameter defaults to empty, which behaves just like the previous iteration of the project, but now you can specify that it should only be public for certain roles. Obviously, this necessitated a change to the Public annotation, as follows:

// runtime retention is required so the HandlerInterceptor can see the annotation via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Public {
  String[] forRoles() default {};
}

Finally, I modified the usage of the annotation to test out the new functionality:


You should also be able to pass an array of role names to the forRoles parameter of the annotation.

The Spring Security filters are executed before my HandlerInterceptor, so the user will have gone through all the authentication checks by that point. Then, in my HandlerInterceptor, rather than letting the user know that there is an endpoint at the given location that they just have to try harder to hack, they get a 404 if they are not allowed to access it.

I suspect you could get the same effect by using a custom handler for an AuthorizationException (or whatever spring-security throws) and a spring-security method annotation to check the role, returning a 404 from the handler instead of the standard response, but I wanted to build on the previous example and keep the conditional behavior.

Exposing a read-only view of a Spring-MVC web service

Alright, so this is actually more flexible than just a read-only view, but that was the case that prompted me to play around with things so that’s where I’m starting. I was partially inspired by a co-worker’s blog entry regarding creating resource filters with jersey and jaxrs 2.0.

So down to the scenario. I have a simple CRUD webservice that I’ve implemented in Spring-MVC. For my demonstration I used Spring Boot, but you can do it any way you want. One key is that this solution depends on a new feature found in Spring Framework version 4.0.

In my webservice I have a @Controller that has @RequestMappings for GET, PUT, POST, and DELETE, following the normal REST semantics for each method. Now, I have this webservice securely deployed in my production environment and all of my internal services can hit it and everything is awesome.

Now let’s pretend I want to expose some of the resources on the big, bad internet. I want to expose all the GET resources so my front end developers can read the information and put it in a web page, or so my mobile apps can get at it, but I don’t really want to expose the ability for them to create, update, or delete information. Now I’ve got a couple of options.

Option 1

I create a new webservice.  It shares the dependencies of the original so it has access to all the same services, but the controller doesn’t contain any RequestMappings other than the GET resources I want to expose.  This is very secure because I have total control over what is available.  If the original service was designed appropriately so the Controllers don’t contain any business logic, then you can easily reuse all of the logic in the previous webservice.  If not, then it’s a good opportunity to get that done, I guess.  On the downside, you now have two artifacts to maintain and deploy.

Option 2

I create a webservice that will proxy requests from the big, bad internet and send them to my internal webservice.  The proxy returns a 404 for any resource/method that should not be exposed, and forwards other requests on to the internal webservice.  Again, my service is secure and I can manage which of the resources are exposed.  Also, again, I have two deployables, and this time they aren’t nearly as related as they were before.  The proxy can be very thin, possibly something as simple as nginx or apache with appropriate rules.

Option 3

This is the option I will explore.  With this option, I modify my webservice so that it can be deployed internally AND externally and lock down the resources that shouldn’t be exposed to the public without having to create a separate deployable artifact.  We will simply annotate those request handlers that should be exposed to the public, basically forming a white-list, and all those that are not explicitly exposed will be restricted from view when certain conditions are met.

In addition, this solution will automatically apply a Jackson JsonView to restrict which properties of the data are exposed, not just which request mappings are exposed.  This will allow us to give a restricted view of the response for the general public on the big bad internet, and the full data for those hitting our internal deployment of the webservice.  We would still be deploying to two environments, one for the public and one for internal, but it would be the same artifact in both places.

First, we are going to use the new @Conditional annotation that was introduced with Spring 4.0.  It allows you to conditionally create a Spring bean.  We will use conditionally defined beans to modify the behavior of the application at runtime.

To The Code

First, the Condition that allows us to change the behavior of the application without having to change any code. My condition is based on the IP address assigned to the server. You could modify the condition to whatever fits your needs. Maybe it checks an environment variable or something. It’s important to note that this condition is evaluated when the bean is created, so if it’s a singleton bean it’ll only be evaluated once. If you are looking to have the condition depend on something from the client then it would probably have to be a request scoped bean, but I haven’t checked to see if that actually works or not. It seems like it should.

/**
 * Condition to check if we are in production or not.
 */
public class ProductionCondition implements Condition {

  public boolean matches(ConditionContext context, AnnotatedTypeMetadata meta) {
    Enumeration<NetworkInterface> ifaces;
    try {
      ifaces = NetworkInterface.getNetworkInterfaces();
      while ( ifaces.hasMoreElements()) {
        NetworkInterface iface = ifaces.nextElement();
        Enumeration<InetAddress> addresses = iface.getInetAddresses();
        while ( addresses.hasMoreElements()) {
          InetAddress address = addresses.nextElement();
          // Set whatever your public, production IP Address space is here!
          if ( address.getHostAddress().startsWith("192.168" )) {
            // If we match, then return true so the bean annotated with this conditional will be created.
            return true;
          }
        }
      }
    }
    catch (SocketException e) {
      // if we can't enumerate the interfaces, assume we are not in production.
    }
    return false;
  }
}

Now we can use the above Condition to conditionally create Spring beans.

Here’s my Spring Boot application.  It also defines other beans for my spring-data-jpa repositories, but those aren’t relevant to what we are doing so I’ve left them out.

public class Application {

  public static void main (String[] args ) {
    SpringApplication.run( Application.class, args );
  }

  @Configuration
  @Conditional(ProductionCondition.class)
  static class WebConfig extends WebMvcConfigurerAdapter {
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
      MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
      ObjectMapper mapper = new ObjectMapper() {
        private static final long serialVersionUID = 1L;
        protected DefaultSerializerProvider _serializerProvider(SerializationConfig config) {
          return super._serializerProvider(config.withView(Views.Public.class));
        }
      };
      mapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
      converter.setObjectMapper(mapper);
      converters.add(converter);
    }
  }

  /**
   * Only create this bean if we are in "production" mode.
   */
  @Bean
  @Conditional(ProductionCondition.class)
  public MappedInterceptor publicHandlerInterceptor() {
    return new MappedInterceptor(null, new PublicHandlerInterceptor());
  }

  // Other beans here for JPA configuration
}

Notice that in the application I have two @Conditional beans. One is a new HandlerInterceptor that I’ll show in a second. The other is a full @Configuration. Because the publicHandlerInterceptor @Bean definition returns a MappedInterceptor it will automatically be configured within the Spring MVC application. If it returned a HandlerInterceptor then more work would have to be done to register it with the Spring MVC application.

Secondly, notice that the Conditional Configuration class extends the WebMvcConfigurerAdapter for allowing me to easily configure Spring MVC-type functionality. Sadly, configuring a custom Jackson ObjectMapper in Spring is much more painful (IMO) than it ought to be, so I’m going to get off on a bit of a tangent. Skip to the next section if you are confident in your ObjectMapper abilities.

ObjectMapper Tangent

It would be fantastic if I could configure the ObjectMapper used for a @ResponseBody by simply defining a @Bean named objectMapper and be good to go. Sadly, that’s not the case. I had to add the MessageConverter in the configuration and set the ObjectMapper for that MessageConverter. Now, here’s the rub: I kept trying to make my configuration changes to the ObjectMapper by calling getSerializationConfig().blah(). Jackson’s SerializationConfig is immutable. Calling getSerializationConfig() and then any of the handy .with(MapperFeature) methods just doesn’t work, because each returns a new instance of SerializationConfig without modifying the one held by the ObjectMapper. You can see my learning process for this at StackOverflow.
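A quick way to see the immutability bite you (a minimal sketch, assuming Jackson 2.x on the classpath):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationConfig;
import com.fasterxml.jackson.databind.SerializationFeature;

public class ImmutableConfigDemo {

  public static boolean indentEnabledAfterWith() {
    ObjectMapper mapper = new ObjectMapper();
    // with(...) returns a brand new SerializationConfig...
    SerializationConfig modified =
        mapper.getSerializationConfig().with(SerializationFeature.INDENT_OUTPUT);
    // ...so the config held by the mapper is completely unchanged.
    return mapper.getSerializationConfig().isEnabled(SerializationFeature.INDENT_OUTPUT);
  }

  public static void main(String[] args) {
    System.out.println(indentEnabledAfterWith()); // false: the mapper never saw the change
  }
}
```

The returned config has the feature enabled, but the mapper keeps serializing with its original config, which is exactly the trap described above.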

Back to the Show

So, the reason I needed to modify the ObjectMapper configuration was so that I could make it always use a given Jackson JsonView for every @ResponseBody encountered. The custom ObjectMapper implementation I pasted above was the first way I found to make it always use the JsonView I specified; otherwise I had to go through writerWithView, and I wasn't sure where to do that. This configuration gives us the white-list of data properties that should be serialized in each response.

To use it, simply annotate the object returned as your @ResponseBody with the @JsonView annotation from Jackson, something like:

  @JsonView(Views.Public.class)
  public String getName() {
    return name;
  }
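To see the whole effect end to end, here's a small self-contained sketch (the Views marker class and field names are illustrative): with DEFAULT_VIEW_INCLUSION disabled, only properties annotated with the view survive serialization.

```java
import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.MapperFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ViewDemo {

  // Marker class identifying the public view.
  public static class Views { public static class Public {} }

  public static class MyObject {
    @JsonView(Views.Public.class)
    public String name = "widget";
    public String internalNotes = "do not expose"; // no view annotation
  }

  public static String publicJson() {
    try {
      ObjectMapper mapper = new ObjectMapper();
      // Without this, properties lacking any @JsonView are included in every view.
      mapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
      return mapper.writerWithView(Views.Public.class).writeValueAsString(new MyObject());
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```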

Securing the RequestMappings

The Application configuration has a conditional bean for a HandlerInterceptor, which looks like this:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {
  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    HandlerMethod method = (HandlerMethod) handler;
    if (method.getMethodAnnotation(Public.class) != null) {
      return true;
    }
    response.sendError(HttpServletResponse.SC_NOT_FOUND);
    return false;
  }
}

This HandlerInterceptor will be evaluated for every RequestMapping. Here, we look at the actual method that is being called to handle the request. If it is annotated with our custom @Public annotation, then we allow the request to proceed by returning true from the HandlerInterceptor. If it isn’t, then we return false and send a 404 to the client.

Finally, here's the Public annotation definition:

@Retention(RetentionPolicy.RUNTIME) // runtime retention is required for getMethodAnnotation to see it
@Target(ElementType.METHOD)
public @interface Public {}

And its usage:

  @Public
  @RequestMapping(method=RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
  public @ResponseBody Iterable<MyObject> getCollection(
      @RequestParam(value="ids", required=false) List<Long> ids,
      @RequestParam(value="limit", required=false, defaultValue="100") int limit ) {
    // look up a collection of MyObjects and return them
  }

  @RequestMapping(value="/{id}", method=RequestMethod.PUT, consumes=MediaType.APPLICATION_JSON_VALUE, produces=MediaType.APPLICATION_JSON_VALUE)
  public @ResponseBody MyObject putValue(@PathVariable Long id, @RequestBody MyObject d) {
    // do some things to update an object and return the representation of the updated object
  }

With this in place, I’m able to deploy my webservice (with spring-boot it’s just a jar that contains embedded tomcat!) and run it without any further alterations. The getCollection method would be available in both deployment locations. The putValue handler would only be available in those deployment locations that do NOT match the condition I have specified, so only those that are visible internally. The representation of MyObject is appropriate for the deployment location without any further changes to the webservice either. I merely select the properties of MyObject that I want exposed publicly and annotate them with the appropriate JsonView.

A white-list approach ensures that nothing slips through the cracks to the big, bad internet just because a developer forgot to restrict it. Instead, they must evaluate each request handler and data property and explicitly expose it in the public view.

I could have had my proof of concept developed and tested in under 2 hours had I not run into my difficulties configuring the ObjectMapper. That's a lesson I won't soon forget, though. I tested all of this by making the condition match my IP address when I was connected to my work VPN. When I started the application while connected, it restricted the request handlers and the serialized properties. When I was not connected, I could execute any method and see all of the data properties.

It's probably not a perfect solution. Does such a thing exist? The one question I've thought of is what happens if my code is already using JsonViews; I'm not sure how the two would play together. Nevertheless, it is an interesting exploration of the capabilities of the @Conditional annotation and HandlerInterceptors.

Eclipselink static weaving

I'm playing with a new project and decided to get eclipselink static weaving working in this one. I started with the official eclipselink documentation on the subject. That's nice and all, but it doesn't say anything about getting the weaving to work with maven or eclipse. I really wanted both. Here's what I did.

First, in persistence.xml you should add the following property:

      <property name="eclipselink.weaving" value="static" />

My maven project has several modules. domain-api contains the entity definitions. domain-impl contains the code for interacting with the database. That means that my persistence.xml is contained in domain-impl and the @Entity classes are in domain-api. That’s alright. To get this all working I decided to use the command line option rather than use the ant task.

The weaving needs to take place on the entities, so the weaving step is placed in the domain-api pom.xml. Within build/plugins I added this plugin.

          <commandlineArgs>-classpath %classpath -loglevel FINE -persistenceinfo ${basedir}/../domain-impl/src/main/resources ${basedir}/target/classes ${basedir}/target/classes</commandlineArgs>

That'll run the java program to weave the class files in place. Note that my -persistenceinfo argument points to ../domain-impl/src/main/resources. That's because the StaticWeave class will look for META-INF/persistence.xml within that directory, and my persistence.xml is contained in the domain-impl module, not in domain-api. If you are using maven resource filtering on your persistence.xml, this will cause a problem, since domain-impl builds after domain-api. That's not a problem in my case, so I'm going to be lazy and not address it.
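Only the commandlineArgs line is shown above, so here's a hedged sketch of what the surrounding plugin declaration might look like. The exec-maven-plugin coordinates and the weave-classes execution id come from the build output below, but the phase and mainClass are my assumptions, so check them against your own setup:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.2.1</version>
  <executions>
    <execution>
      <id>weave-classes</id>
      <phase>process-classes</phase>
      <goals>
        <goal>java</goal>
      </goals>
      <configuration>
        <mainClass>org.eclipse.persistence.tools.weaving.jpa.StaticWeave</mainClass>
        <commandlineArgs>-classpath %classpath -loglevel FINE -persistenceinfo ${basedir}/../domain-impl/src/main/resources ${basedir}/target/classes ${basedir}/target/classes</commandlineArgs>
      </configuration>
    </execution>
  </executions>
</plugin>
```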

Once you have that in your pom you should be able to do a mvn clean install and see output something like:

[INFO] --- exec-maven-plugin:1.2.1:java (weave-classes) @ domain-api ---
[EL Config]: metadata: 2012-11-23 23:15:42.798--ServerSession(1166215941)--Thread(Thread[,5,])--
The access type for the persistent class [class com.xyg.model.JoelTest] is set to [FIELD].

I added line breaks for formatting. Anyway, that kind of output means things are working.

Sadly, Eclipse's maven integration isn't smart enough to figure out this part of the pom. If we tell Eclipse to ignore this part of the build, the classes won't get woven, and the project likely won't run properly from within eclipse. Fortunately, we can work around this.

Within the same pom (in my case, in domain-api) you can add another plugin. This plugin is NOT within the plugins element, but is within a pluginManagement element. You can get eclipse to generate the entry for you by telling it to ignore the execution when it tells you it has an error because it has no lifecycle mapping for the given execution. When you do that it’ll generate an xml block in your pom.xml that will look like this:

        <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->

The version eclipse generates will say something like <ignore></ignore> in the action section of this configuration. If you simply change that to <execute/>, eclipse will run the execution every time, even on incremental builds, so hopefully it isn't too intrusive. I'm just getting started on this project so I don't have many entity classes yet, but if there's an issue I'll come back and update this post.
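The generated block, with the action switched to execute, looks roughly like this; it's a sketch assuming the exec-maven-plugin execution from the build output above, and the generated pluginExecutionFilter will match whatever plugin eclipse complained about:

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.eclipse.m2e</groupId>
      <artifactId>lifecycle-mapping</artifactId>
      <version>1.0.0</version>
      <configuration>
        <lifecycleMappingMetadata>
          <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <versionRange>[1.2.1,)</versionRange>
                <goals>
                  <goal>java</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <execute/>
              </action>
            </pluginExecution>
          </pluginExecutions>
        </lifecycleMappingMetadata>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>
```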

With this configuration I’m able to execute my integration tests from the command line and I’m also able to run the tests and launch the webapp from within eclipse, and I don’t incur the runtime penalty of dynamic weaving.

Jackson Mixins

You are working with a third party library. You need to serialize an object from that library to JSON. Or, in my case, I needed to serialize an implementation of an interface defined in the third party library. In any case you can’t modify the class you need to serialize, but you also need to change the way the class is mapped. Jackson provides a great mechanism to work around this using what they call MixIns.

Let’s say you are given this:

public interface ThirdPartyInterface {
  long getItemID();
  float getValue();
}
For some reason beyond your control, you need to use id instead of itemID as the JSON key for getItemID. To make things even more awesome, you need quotes around the itemID and the value, but you don't want quotes around EVERY numeric field, just those two. With Jackson, I create my own mixin interface and put on it the Jackson annotations that I would like applied to instances of ThirdPartyInterface.

public interface MyMixin {
  @JsonProperty("id")
  @JsonSerialize(using = ToStringSerializer.class)
  long getItemID();

  @JsonSerialize(using = ToStringSerializer.class)
  float getValue();
}
Now you can see that I have the JsonProperty annotations in there to change the name of the key, and I have the JsonSerialize annotation in there, using the Jackson builtin ToStringSerializer to convert the long and float values into Strings, which will ensure they are quoted in the JSON output.

In order to use my Mixin, I configure the ObjectMapper to do so as follows.

  ObjectMapper mapper = new ObjectMapper();
  // Register the mixin on the mapper itself (addMixIn, Jackson 2.5+);
  // remember from the tangent above that SerializationConfig is immutable.
  mapper.addMixIn(ThirdPartyInterface.class, MyMixin.class);

In my case, I have several implementations of the ThirdPartyInterface and configuring the mixin as above applies to all implementations. Pretty slick.
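Putting it all together, here's a small self-contained sketch; the ThirdPartyImpl class and its return values are mine, invented purely to demonstrate the mixin:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;

public class MixinDemo {

  public interface ThirdPartyInterface {
    long getItemID();
    float getValue();
  }

  // Stand-in for one of the implementations we cannot modify.
  public static class ThirdPartyImpl implements ThirdPartyInterface {
    public long getItemID() { return 42L; }
    public float getValue() { return 1.5f; }
  }

  // Renames itemID to id and quotes both numeric values as strings.
  public interface MyMixin {
    @JsonProperty("id")
    @JsonSerialize(using = ToStringSerializer.class)
    long getItemID();

    @JsonSerialize(using = ToStringSerializer.class)
    float getValue();
  }

  public static String serialize() {
    try {
      ObjectMapper mapper = new ObjectMapper();
      // The mixin registered against the interface applies to every implementation.
      mapper.addMixIn(ThirdPartyInterface.class, MyMixin.class);
      return mapper.writeValueAsString(new ThirdPartyImpl());
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```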