DigitalJoel

2014/01/27

Adding Security to a Partially Exposed Web Service

Filed under: development, java, spring, spring framework, spring-mvc — digitaljoel @ 10:42 pm

In my previous post I talked about adding some conditional security to a web service by only exposing certain methods and model representations, using the new @Conditional annotation and a HandlerInterceptor in a Spring 4 based Spring Boot app. Tonight I decided to add some real Spring Security magic to it.

First, add the Spring Security dependency. I took this right from the Spring Security guide on the Spring.io website.

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
            <version>0.5.0.M6</version>
        </dependency>

Then I added the following new Configuration class to my existing Application.

  @Configuration
  @EnableWebSecurity
  static class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication().withUser("user").password("password").roles("USER");
        auth.inMemoryAuthentication().withUser("admin").password("password").roles("SUPER");
    }
  }

Next, I modified my HandlerInterceptor and it ended up as follows:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {

  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    // The security filter chain runs before this interceptor, so a principal should be
    // present, but guard against a missing or anonymous authentication anyway.
    Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
    Object principal = authentication == null ? null : authentication.getPrincipal();
    if ( handler instanceof HandlerMethod ) {
      HandlerMethod method = (HandlerMethod)handler;
      if ( method.getMethodAnnotation(Public.class) != null
          && principal instanceof User
          && hasAnyRole( (User)principal, method.getMethodAnnotation(Public.class).forRoles())) {
        return true;
      }
      response.setStatus(404);
    }
    return false;
  }
  
  private boolean hasAnyRole( User principal, String[] rolesStrings ) {
    if ( rolesStrings == null || rolesStrings.length == 0 ) {
      return true;
    }
    Set<String> roles = Sets.newHashSet(rolesStrings);
    for ( GrantedAuthority auth : principal.getAuthorities() ) {
      if ( roles.contains(auth.getAuthority())) {
        return true;
      }
    }
    return false;
  }
}

In the previous iteration I was simply looking for the Public annotation. Now I am looking for a parameter on that annotation. The parameter defaults to empty, which behaves just like the previous iteration of the project, but now you can specify that it should only be public for certain roles. Obviously, this necessitated a change to the Public annotation, as follows:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Public {
  String[] forRoles() default {};
}

Finally, I modified the usage of the annotation to test out the new functionality:

  @Public(forRoles="ROLE_USER")

You should also be able to pass an array of role names to the forRoles parameter of the annotation.
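For example, to make a handler public to both of the roles defined in the configuration above (remember that roles("USER") becomes the authority ROLE_USER), something like this should work:

  @Public(forRoles={"ROLE_USER", "ROLE_SUPER"})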

The Spring Security filters are executed before my HandlerInterceptor, so the user will have gone through all the authentication checks by that point. Then, in my HandlerInterceptor, rather than letting users know there is an endpoint at the given location that they just have to try harder to hack, they get a 404 if they are not allowed to access it.

I suspect you could get the same result with a custom handler for an AuthorizationException (or whatever spring-security throws), using a spring-security method annotation to check the role and returning a 404 from the handler instead of the standard response, but I wanted to build on the previous example and keep the conditional behavior.

2014/01/10

Exposing a read-only view of a Spring-MVC web service

Alright, so this is actually more flexible than just a read-only view, but that was the case that prompted me to play around with things, so that’s where I’m starting. I was partially inspired by a co-worker’s blog entry regarding creating resource filters with Jersey and JAX-RS 2.0.

So down to the scenario. I have a simple CRUD webservice that I’ve implemented in Spring-MVC. For my demonstration I used Spring Boot, but you can do it any way you want. One key is that this solution depends on a new feature found in Spring Framework version 4.0.

In my webservice I have a @Controller that has @RequestMappings for GET, PUT, POST, and DELETE, following the normal REST semantics for each method. Now, I have this webservice securely deployed in my production environment and all of my internal services can hit it and everything is awesome.

Now let’s pretend I want to expose some of the resources on the big, bad internet. I want to expose all the GET resources so my front end developers can read the information and put it in a web page, or so my mobile apps can get at it, but I don’t really want to expose the ability for them to create, update, or delete information. Now I’ve got a couple of options.

Option 1

I create a new webservice.  It shares the dependencies of the original so it has access to all the same services, but the controller doesn’t contain any RequestMappings other than the GET resources I want to expose.  This is very secure because I have total control over what is available.  If the original service was designed appropriately so the Controllers don’t contain any business logic, then you can easily reuse all of the logic in the previous webservice.  If not, then it’s a good opportunity to get that done, I guess.  On the downside, you now have two artifacts to maintain and deploy.

Option 2

I create a webservice that will proxy requests from the big, bad internet and send them to my internal webservice.  The proxy returns a 404 for any resource/method that should not be exposed, and forwards other requests on to the internal webservice.  Again, my service is secure and I can manage which of the resources are exposed.  Also, again, I have two deployables, and this time they aren’t nearly as related as they were before.  The proxy can be very thin, possibly something as simple as nginx or apache with appropriate rules.

Option 3

This is the option I will explore.  With this option, I modify my webservice so that it can be deployed internally AND externally and lock down the resources that shouldn’t be exposed to the public without having to create a separate deployable artifact.  We will simply annotate those request handlers that should be exposed to the public, basically forming a white-list, and all those that are not explicitly exposed will be restricted from view when certain conditions are met.

In addition, this solution will automatically apply a Jackson JsonView to restrict which properties of the data are exposed, not just which request mappings are exposed.  This will allow us to give a restricted view of the response for the general public on the big bad internet, and the full data for those hitting our internal deployment of the webservice.  We would still be deploying to two environments, one for the public and one for internal, but it would be the same artifact in both places.

First, we are going to use the new @Conditional annotation that was introduced with Spring 4.0.  It allows you to conditionally create a Spring bean.  We will use conditionally defined beans to modify the behavior of the application at runtime.

To The Code

First, the Condition that allows us to change the behavior of the application without having to change any code. My condition is based on the IP address assigned to the server. You could modify the condition to whatever fits your needs. Maybe it checks an environment variable or something. It’s important to note that this condition is evaluated when the bean is created, so if it’s a singleton bean it’ll only be evaluated once. If you are looking to have the condition depend on something from the client then it would probably have to be a request scoped bean, but I haven’t checked to see if that actually works or not. It seems like it should.

/**
 * Condition to check if we are in production or not.
 */
public class ProductionCondition implements Condition {

  @Override
  public boolean matches(ConditionContext context, AnnotatedTypeMetadata meta) {
    Enumeration<NetworkInterface> ifaces;
    try {
      ifaces = NetworkInterface.getNetworkInterfaces();
      while ( ifaces.hasMoreElements()) {
        NetworkInterface iface = ifaces.nextElement();
        Enumeration<InetAddress> addresses = iface.getInetAddresses();
        while ( addresses.hasMoreElements()) {
          InetAddress address = addresses.nextElement();
          // Set whatever your public, production IP Address space is here!
          if ( address.getHostAddress().startsWith("192.168" )) {
            // If we match, then return true so the bean annotated with this conditional will be created.
            return true;
          }
        }
      }
    }
    catch (SocketException e) {
      // couldn't enumerate network interfaces; fall through and treat as non-production
    }
    return false;
  }
}

Now we can use the above Condition to conditionally create Spring beans.

Here’s my Spring Boot application.  It also defines other beans for my spring-data-jpa repositories, but those aren’t relevant to what we are doing so I’ve left them out.

@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableJpaRepositories
public class Application {

  public static void main (String[] args ) {
    SpringApplication.run(Application.class, args );
  }

  @Configuration
  @Conditional(ProductionCondition.class)
  static class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
      MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
      ObjectMapper mapper = new ObjectMapper() {
        private static final long serialVersionUID = 1L;
        @Override
        protected DefaultSerializerProvider _serializerProvider(SerializationConfig config) {
          return super._serializerProvider(config.withView(Views.Public.class));
        }
      };
      mapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
      converter.setObjectMapper(mapper);
      converters.add(converter);
    }
  }

  /**
   * Only create this bean if we are in "production" mode.
   * @return
   */
  @Bean
  @Conditional(ProductionCondition.class)
  public MappedInterceptor publicHandlerInterceptor() {
    return new MappedInterceptor(null, new PublicHandlerInterceptor());
  }

  // Other beans here for JPA configuration
}

Notice that in the application I have two @Conditional beans. One is a new HandlerInterceptor that I’ll show in a second. The other is a full @Configuration. Because the publicHandlerInterceptor @Bean definition returns a MappedInterceptor it will automatically be configured within the Spring MVC application. If it returned a HandlerInterceptor then more work would have to be done to register it with the Spring MVC application.
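For comparison, if the bean did return a plain HandlerInterceptor, a sketch of that extra registration step (not from the original project, just the standard Spring MVC mechanism) might look like this:

  // Hypothetical alternative: registering a plain HandlerInterceptor manually.
  @Configuration
  @Conditional(ProductionCondition.class)
  static class InterceptorConfig extends WebMvcConfigurerAdapter {
    @Override
    public void addInterceptors( InterceptorRegistry registry ) {
      registry.addInterceptor( new PublicHandlerInterceptor() );
    }
  }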

Secondly, notice that the conditional Configuration class extends WebMvcConfigurerAdapter, which lets me easily configure Spring MVC-type functionality. Sadly, configuring a custom Jackson ObjectMapper in Spring is much more painful (IMO) than it ought to be, so I’m going to go off on a bit of a tangent. Skip to the next section if you are confident in your ObjectMapper abilities.

ObjectMapper Tangent

It would be fantastic if I could configure the ObjectMapper used for a @ResponseBody by simply defining a @Bean named objectMapper and be good to go. Sadly, that’s not the case. I had to add the MessageConverter in the configuration and set the ObjectMapper for that MessageConverter. Now, here’s the rub: I kept trying to make my configuration changes to the ObjectMapper by calling getSerializationConfig().blah(). Jackson’s SerializationConfig is immutable. Calling getSerializationConfig() and then all of the handy .with(MapperFeature) methods just doesn’t work, because each call simply returns a new instance of SerializationConfig and doesn’t modify the one that is in the ObjectMapper. You can see my learning process for this at StackOverflow.
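To make the trap concrete, here’s a minimal sketch (mine, not from the project) of the broken call versus one that sticks, assuming a Jackson 2.1+ ObjectMapper:

  ObjectMapper mapper = new ObjectMapper();

  // Does nothing: withView() returns a NEW SerializationConfig and the result is discarded.
  mapper.getSerializationConfig().withView(Views.Public.class);

  // Works: hand the rebuilt config back to the mapper (setConfig exists since Jackson 2.1).
  mapper.setConfig(mapper.getSerializationConfig().withView(Views.Public.class));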

Back to the Show

So, the reason I needed to modify the ObjectMapper configuration was so that I could make it always use a given Jackson JsonView for every @ResponseBody encountered. The custom implementation of the ObjectMapper I pasted was the first way I found to configure it to always use the JsonView I specified; otherwise I had to call writeWithView on the writer, and I wasn’t sure where to do that. This configuration gives us the white-list of data properties that should be serialized in each response.

To use it, simply annotate the object returned as your @ResponseBody with the @JsonView annotation from Jackson, something like:

  @JsonView(value={Views.Public.class})
  public String getName() {
    return name;
  }
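The Views class itself is just a set of empty marker types. The post doesn’t show it, but a typical definition looks like this:

  public class Views {
    public static class Public {}
    public static class Internal extends Public {}
  }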

Securing the RequestMappings

The Application configuration has a conditional bean for a HandlerInterceptor, which looks like this:

public class PublicHandlerInterceptor extends HandlerInterceptorAdapter {
  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    // handler may not be a HandlerMethod (a static resource request, for example), so check before casting
    if ( handler instanceof HandlerMethod ) {
      HandlerMethod method = (HandlerMethod)handler;
      if ( method.getMethodAnnotation(Public.class) != null ) {
        return true;
      }
    }
    response.setStatus(404);
    return false;
  }
}

This HandlerInterceptor will be evaluated for every RequestMapping. Here, we look at the actual method that is being called to handle the request. If it is annotated with our custom @Public annotation, then we allow the request to proceed by returning true from the HandlerInterceptor. If it isn’t, then we return false and send a 404 to the client.

Finally, here’s the Public annotation definition

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Public {}

And its usage:

  @Public
  @RequestMapping(method=RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
  public @ResponseBody Iterable<MyObject> getCollection(
    @RequestParam(value="ids", required=false) List<Long> ids,
    @RequestParam(value="limit", required=false, defaultValue="100") int limit ) {
      // look up a collection of MyObjects and return them
  }

  @RequestMapping( value="/{id}", method=RequestMethod.PUT, consumes=MediaType.APPLICATION_JSON_VALUE, produces=MediaType.APPLICATION_JSON_VALUE)
  public @ResponseBody MyObject putValue(@PathVariable Long id, @RequestBody MyObject d ) {
    // do some things to update an object and return the representation of the updated object
  }

With this in place, I’m able to deploy my webservice (with spring-boot it’s just a jar that contains embedded tomcat!) and run it without any further alterations. The getCollection method would be available in both deployment locations. The putValue handler would only be available in those deployment locations that do NOT match the condition I have specified, so only those that are visible internally. The representation of MyObject is appropriate for the deployment location without any further changes to the webservice either. I merely select the properties of MyObject that I want exposed publicly and annotate them with the appropriate JsonView.

A white-list approach ensures that nothing slips through the cracks to the big, bad internet just because a developer forgot to restrict it. Instead, they must evaluate each request handler and data property and explicitly expose it in the public view.

I could have had my proof of concept developed and tested in under 2 hours had I not run into my difficulties configuring the ObjectMapper. That’s a lesson I won’t soon forget though. I tested all this by making the condition match my IP address when I was connected to my work VPN. When I started the application up and I was connected it would restrict the request handlers and the serialized properties. If I was not connected I could execute any method and would see all of the data properties.

It’s probably not a perfect solution. Does such a thing exist? The one question I’ve thought of is what happens if my code is already using JsonViews? I’m not sure how it would play together. Nevertheless, it is an interesting exploration of the capabilities of the @Conditional annotation and HandlerInterceptors.

2012/11/23

Eclipselink static weaving

Filed under: development, eclipse, java — digitaljoel @ 11:26 pm

I’m playing with a new project and decided to get eclipselink static weaving working in this one. I started on the official eclipselink project documentation on the subject. That’s nice and all, but it doesn’t say anything about getting the weaving to work with maven or eclipse. I really wanted both. Here’s what I did.

First, in persistence.xml you should add the following property:

    <properties>
      <property name="eclipselink.weaving" value="static" />
    </properties>

My maven project has several modules. domain-api contains the entity definitions. domain-impl contains the code for interacting with the database. That means that my persistence.xml is contained in domain-impl and the @Entity classes are in domain-api. That’s alright. To get this all working I decided to use the command line option rather than use the ant task.

The weaving needs to take place on the entities, so the weaving step is placed in the domain-api pom.xml. Within build/plugins I added this plugin.

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.2.1</version>
        <executions>
          <execution>
            <id>weave-classes</id>
            <phase>process-classes</phase>
            <goals>
              <goal>java</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <mainClass>org.eclipse.persistence.tools.weaving.jpa.StaticWeave</mainClass>
          <commandlineArgs>-classpath %classpath -loglevel FINE -persistenceinfo ${basedir}/../domain-impl/src/main/resources ${basedir}/target/classes ${basedir}/target/classes</commandlineArgs>
        </configuration>
      </plugin>

That’ll run the java program to weave the class files in place. Note that my -persistenceinfo argument points to ../domain-impl/src/main/resources. That’s because the StaticWeave class will look for META-INF/persistence.xml within that directory, and my persistence.xml is contained in the domain-impl module, not within domain-api. If you are using maven resource filtering on your persistence.xml this will cause a problem since domain-impl builds after domain-api. That’s not a problem in my case so I’m going to be lazy and not address it.

Once you have that in your pom you should be able to do a mvn clean install and see output something like:

[INFO] --- exec-maven-plugin:1.2.1:java (weave-classes) @ domain-api ---
[EL Config]: metadata: 2012-11-23 23:15:42.798--ServerSession(1166215941)--Thread(Thread[org.eclipse.persistence.tools.weaving.jpa.StaticWeave.main(),5,
org.eclipse.persistence.tools.weaving.jpa.StaticWeave])--
The access type for the persistent class [class com.xyg.model.JoelTest] is set to [FIELD].

I added line breaks for formatting. Anyway, that kind of output means things are working.

Sadly, Eclipse’s maven integration isn’t smart enough to figure this part of the pom out. If we tell Eclipse to ignore this part of the build then the classes won’t get weaved and it’s likely that if you try to run the project from within eclipse it’s not going to work properly. Fortunately, we can work around this.

Within the same pom (in my case, in domain-api) you can add another plugin. This plugin is NOT within the plugins element, but is within a pluginManagement element. You can get eclipse to generate the entry for you by telling it to ignore the execution when it tells you it has an error because it has no lifecycle mapping for the given execution. When you do that it’ll generate an xml block in your pom.xml that will look like this:

    <pluginManagement>
      <plugins>
        <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->
        <plugin>
          <groupId>org.eclipse.m2e</groupId>
          <artifactId>lifecycle-mapping</artifactId>
          <version>1.0.0</version>
          <configuration>
            <lifecycleMappingMetadata>
              <pluginExecutions>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>exec-maven-plugin</artifactId>
                    <versionRange>[1.2.1,)</versionRange>
                    <goals>
                      <goal>java</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <execute/>
                  </action>
                </pluginExecution>
              </pluginExecutions>
            </lifecycleMappingMetadata>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>

The version eclipse generates will say something like <ignore></ignore> in the action section of this configuration. If you simply change that to <execute/> then eclipse will execute it. It will execute it every time, even on incremental builds, so hopefully it isn’t too intrusive. I’m just getting started on this project so I don’t have many entity classes yet, but if there’s an issue I’ll get back and update this post.

With this configuration I’m able to execute my integration tests from the command line and I’m also able to run the tests and launch the webapp from within eclipse, and I don’t incur the runtime penalty of dynamic weaving.

2012/10/08

Jackson Mixins

Filed under: development, java — digitaljoel @ 8:21 pm

You are working with a third party library. You need to serialize an object from that library to JSON. Or, in my case, I needed to serialize an implementation of an interface defined in the third party library. In any case you can’t modify the class you need to serialize, but you also need to change the way the class is mapped. Jackson provides a great mechanism to work around this using what they call MixIns.

Let’s say you are given this:

public interface ThirdPartyInterface {
  long getItemID();
  float getValue();
}

For some reason beyond your control, you need to use id instead of itemID in the JSON as the key for getItemID. To make things even more awesome, you need to have quotes around the itemID and the value, but you don’t want quotes around EVERY numeric field, just those two. With Jackson, I create my mixin interface and put on it the Jackson annotations that I would like applied to instances of the ThirdPartyInterface.

public interface MyMixin {
  @JsonProperty("id")
  @JsonSerialize(using=ToStringSerializer.class)
  long getItemID();

  @JsonProperty("value")
  @JsonSerialize(using=ToStringSerializer.class)
  float getValue();
}

Now you can see that I have the JsonProperty annotations in there to change the name of the key, and I have the JsonSerialize annotation in there, using the Jackson builtin ToStringSerializer to convert the long and float values into Strings, which will ensure they are quoted in the JSON output.

In order to use my Mixin, I configure the ObjectMapper to do so as follows.

  ObjectMapper mapper = new ObjectMapper();
  mapper.getSerializationConfig().addMixInAnnotations(ThirdPartyInterface.class, MyMixin.class);

In my case, I have several implementations of the ThirdPartyInterface and configuring the mixin as above applies to all implementations. Pretty slick.
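As an aside, the configuration above is Jackson 1.x, where SerializationConfig was still mutable. If you’re on Jackson 2.x, the equivalent moves onto the ObjectMapper itself; something like this should work (addMixIn is the 2.5+ name; earlier 2.x versions call it addMixInAnnotations):

  ObjectMapper mapper = new ObjectMapper();
  mapper.addMixIn(ThirdPartyInterface.class, MyMixin.class);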

2012/05/05

Posting data from multiple forms

Filed under: javascript, jquery — digitaljoel @ 6:17 pm

For some reason you want to submit the data from multiple forms in a single click. For this example let’s assume the following:

  • You have 2 forms, form1, and form2.
  • When form1 is submitted it should submit only the data in input controls in form1.
  • When form2 is submitted it should submit all of the data in form1 AND all of the data in form2.

Option 1. form1 submits as normal.  form2 contains hidden inputs that mirror form1 and javascript is used in an onchange event for the inputs in form1 to keep the hidden inputs in form2 in sync with form1.  ugh.

Option 2. form1 submits as normal. form2 submits via jquery.  Something like this

$.post( 'where/i/want/to/post/to',
  $('#form1').serialize() + "&" + $('#form2').serialize(),
  function( response ) {
    // here do something with the response from the form post.
    alert( "response is " + response );
  },
  "json" );

Because of the serialize calls to each form, we get the data from each. I concatenate them with an ampersand and they come through as one big form. Now I don’t have any messy javascript trying to keep hidden fields in sync etc.

Of course, there are going to be some drawbacks. First, now I am submitting the form via ajax, so if you are stuck on old-fashioned html form submits then this may not be the solution for you. Second, if either form is entirely empty then the data you are posting may not be well formed, so you should put some error checking around each of those serialize calls and determine whether you should use an ampersand to join them. Third, if you aren’t using jQuery, you will be after this… not sure that’s a drawback, since jQuery makes javascript usable for someone like me who is used to plain Java programming.
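For that second drawback, a small guard (my sketch, not from the original) keeps the payload well formed when one form is empty:

var d1 = $('#form1').serialize();
var d2 = $('#form2').serialize();
// only join with '&' when both forms actually produced data
var data = (d1 && d2) ? d1 + "&" + d2 : d1 + d2;
$.post( 'where/i/want/to/post/to', data, function( response ) { /* ... */ }, "json" );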

2011/08/10

Backing up your database to Amazon S3

Filed under: development — digitaljoel @ 10:17 pm

So now that you have your application running on an AWS EC2 instance, you need to back up the data somehow.  In my case, it’s a postgres database and I wanted to back it up into Amazon S3 within my same AWS account.  I wanted to have a backup for every day of the week, which would roll.  What I mean is that I would have a backup for Monday, and every Monday it would overwrite the previous Monday’s backup.  That way I would have a rolling 7-day backup, but not have a bazillion copies of the database that I have to manually get rid of.  Anyway, on to the code.

I wrote a little bash script that I then put into a cron job.  First, there’s a touch of setup to be done.  Wherever you are going to be running the job, you will need to install and configure s3cmd.  It’s a great little utility for hitting s3 from the command line.  The very simple instructions for configuring s3cmd are on that first page and shouldn’t take you more than 5 minutes.  I’ve run it on OSX Lion and also on my AWS instance and had no issues.

Next, is the bash script.  Here it is.

#!/bin/bash

PGDUMP=pg_dump
EXPORTFILE=`date +%A`.sql
COMPRESSEDFILE=${EXPORTFILE}.tgz
BUCKET=<your bucket name>
S3CMD=~/bin/s3cmd-1.0.1/s3cmd

$PGDUMP -f ./$EXPORTFILE -cb -F p --inserts -U <your user> <your database>
tar -czf ${COMPRESSEDFILE} ${EXPORTFILE}

$S3CMD put ./${COMPRESSEDFILE} s3://${BUCKET}

You’ll need to set PGDUMP to point to your pg_dump script if it isn’t in your path. Also set S3CMD to point to wherever you installed s3cmd.  If you prefer other options for pg_dump, or if you are using some other database, you can modify the $PGDUMP line to do whatever you need.

On Monday the script will create a file named Monday.sql and a compressed archive named Monday.sql.tgz.  It’ll then upload Monday.sql.tgz to your s3 account.  You could easily add another line at the end of this script to remove the exported file and the archive using


rm $EXPORTFILE $COMPRESSEDFILE

Finally, you’ll need to schedule this to be run once per day.  This can be done by running crontab -e and then using the following line in the crontab file:


0 2 * * * ~/backupdb.sh

That will run the script every morning at 2.  You can change the hour for whatever fits your needs.

The final task for me is going to be creating a similar script that will run every week and keep the last 4 weeks of backup.  I’m planning to do that using %W on the date command to get the week of year and do some math using the week number in the file name to create the new file and remove the old ones.  I guess I’ll leave that as an exercise for the reader.
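As a starting point for that exercise, the file-naming math might look something like this (an untested sketch, assuming four rolling weekly backups named Week0 through Week3):

# %W is the week of the year; 10# forces base-10 so a leading zero isn't read as octal.
WEEK=$(( 10#$(date +%W) % 4 ))
EXPORTFILE=Week${WEEK}.sql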

2011/08/04

Amazon Web Services Alarm for HTTP Server

Filed under: development — digitaljoel @ 10:53 pm

So you’ve written an app and you’re hosting it on an AWS EC2 instance.  For whatever reason you have only the one server up with no load balancer in front of it.  You want to set an alarm in AWS so that if the server goes down you’ll know right away, but how can you do it?

I wrote a simple bash script to ping a special URL in my web application.  The response from the URL is simply the text “healthcheck ok” with a 200 response code.  The script checks for that text.  If it exists in the response, then it sends a 1 up to AWS as a custom metric.  If it doesn’t, then it sends a 0.

#!/bin/bash
while :
do
  stat=0
  healthcheck=`curl --connect-timeout 5 --max-time 7 --fail --insecure --silent https://localhost/healthcheck`
  if [ "healthcheck ok" = "$healthcheck" ]
  then
    stat=1
  fi
  mon-put-data --metric-name HttpHealthCheck --namespace YourNamespace --dimensions "server=prod" --value $stat
  sleep 60
done

In order for the script to run, you’ll need to have done all the authentication setup for the AWS scripts and ensure you have a version of them that includes the mon-put-data script.  For testing, you can run the curl command on the command line.  You can do the same with mon-put-data.

In my experience, it took a few minutes for the custom metric to show up the first time I sent it.  Once it settles in you should be able to select it from the metrics in CloudWatch.  The final step is to setup the alarm.

You should be able to set an alarm to go off when the value of the metric is <= 0.  I tested it by shutting down my web server and I got the alarm notification within about a minute.

If your health check isn’t started (which you can do with nohup ./healthcheck.sh &) then you won’t get samples, and in my test no alarm was sounded.  So, I set another alarm.  For any metric, you can set an alarm based on the value, or based on the samples.  Just choose the “samples” statistic from the drop down.  Set the alarm to go off if samples <= 0.  Also add another action and set it to go off on INSUFFICIENT_DATA, meaning that there are not enough samples, which likely means your script wasn’t started, or has failed.

Once your app is super popular, you can look at the load balancer, which I believe allows for setting alarms based on HTTP response times etc. but I think this’ll do until I get there.

2011/06/15

Spring ConverterFactory Implementation

Filed under: java, spring — digitaljoel @ 10:12 pm

In my Spring MVC 3 based application I had recently implemented a few Converters for some of my JPA based data objects. It started with one, then another, and so on. By the time I got around to adding my fourth converter to the spring configuration file I knew it was time to pull it out and abstract it a bit. Thankfully, Spring allows you to implement a ConverterFactory that is responsible for creating the converters for some types.

Each of my entities extends an abstract base class that looks basically like this:

@MappedSuperclass
public abstract class DataObjectAbstract<K extends Serializable>
        implements DataObject<K>
{
    protected transient String[] excludedEqualsFields = new String[] { "key", "version" };

    @Version
    protected int version;

    @Override
    public boolean equals( Object that )
    {
        return EqualsBuilder.reflectionEquals( this, that, excludedEqualsFields );
    }

    @Override
    public int hashCode()
    {
        return HashCodeBuilder.reflectionHashCode( this, excludedEqualsFields );
    }

    @Override
    public String toString()
    {
        return ToStringBuilder.reflectionToString( this, ToStringStyle.MULTI_LINE_STYLE );
    }
}

The DataObject interface simply declares a getKey and setKey method.
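That interface isn’t shown in the post; a minimal version consistent with the code here would be:

public interface DataObject<K extends Serializable>
{
    K getKey();
    void setKey( K key );
}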

So, in my Spring MVC Controller methods I was originally accepting a String or Long, then using my own data access objects to look up the entities I needed. The next iteration in my implementation was to implement the Converters as I mentioned above. That was very simple and worked well, but having many data objects I didn’t want to copy that implementation over and over again. This is where the ConverterFactory comes in. Here’s my implementation:

@Component
public class DataObjectConverterFactory
        implements ConverterFactory<String, DataObject<Long>>
{
    @PersistenceContext
    EntityManager em;

    @Override
    public <T extends DataObject<Long>> Converter<String, T> getConverter( Class<T> type )
    {
        return new GenericLongKeyedDataObjectConverter<T>( type, em );
    }
}

The ConverterFactory interface is basically as simple as the Converter interface. The Class<T> type parameter to the getConverter method tells us what type we are going to convert to.  One option from here is to have a big nasty if/else statement with a bunch of instanceof checks that create a new Converter.  I thought about doing this and passing in the appropriate data access object and performing the lookup.  That would be only two classes and then I could convert all of my DataObjects, but I didn’t like the idea of a bajillion instanceof statements.  So you can see I implemented a GenericLongKeyedDataObjectConverter, which takes the target type and the EntityManager as parameters.  Here’s the implementation of the generic converter class:

/**
 * A generic converter used for converting from a string representation of an entity key to the DataObject itself.
 *
 * @param <T> The type that is to be converted to.
 */
public class GenericLongKeyedDataObjectConverter<T extends DataObject<Long>>
        implements Converter<String, T>
{
    private Class<T> type;
    private EntityManager em;

    /**
     *
     * @param type An instance of Class for the type being converted to
     * @param em EntityManager used to perform the lookup.
     */
    public GenericLongKeyedDataObjectConverter( Class<T> type, EntityManager em )
    {
        this.type = type;
        this.em = em;
    }

    @Override
    public T convert( String stringKey )
    {
        Long key = Long.parseLong( stringKey );
        return em.find( type, key );
    }
}

An extremely simple parameterized class implementation of the Converter interface. Here, with no use of instanceof, I’m creating the appropriate converter implementation for all of my persisted classes.  If you have a group of objects that you want converted and they all inherit from a base class, a ConverterFactory may be a better solution than implementing a bunch of converters manually.
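With the conversionService below wired into Spring MVC, a controller method can accept the entity directly and Spring applies the converter to the incoming path variable. A sketch, where MyEntity stands in for any hypothetical DataObject<Long>:

@RequestMapping( value = "/entities/{entity}", method = RequestMethod.GET )
public @ResponseBody MyEntity show( @PathVariable( "entity" ) MyEntity entity )
{
    // "entity" arrived as a String key and was loaded via GenericLongKeyedDataObjectConverter
    return entity;
}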

Finally, here’s the bean xml configuration:

<bean id="conversionService" class="org.springframework.format.support.FormattingConversionServiceFactoryBean">
    <property name="converters">
        <list>
            <ref bean="dataObjectConverterFactory" />
        </list>
    </property>
</bean>

Notice that we reference the dataObjectConverterFactory bean, but I never defined it in my xml config.  That’s because I used the @Component annotation on my implementation class.

2011/04/20

Removing A Dragged And Dropped List Item

Filed under: development, jquery — digitaljoel @ 11:11 pm

In a previous post I mentioned how to take a table row and drag it onto a sortable list. The problem with that is that there was no way to remove the item once it was dropped on the list. So, I modified the original code so that the dropped item now has a button that allows for removal of the item. Here is the new version in its entirety:

        var qTable;
        var newSurvey;
        // create the fancy datatable
        $(function() {
            // setup the datatable
            qTable = $('#questionTable').dataTable( {
                    "aoColumns": [
                                  { "asSorting": [ "desc", "asc" ] },
                                  { "asSorting": [ "desc", "asc", "asc" ] },
                              ]
                    , "bJQueryUI": true
                }
            );
            
            $(qTable.fnGetNodes()).draggable({
                opacity: 0.7,
                helper: function() {
                    var text = this.children[0].innerText;
                    var result = "<li id='"+this.id+"'>"+text+"</li>";
                    return result;
                },
                connectToSortable: '#newSurvey'
            });

            newSurvey = $('#newSurvey');
            newSurvey.sortable({
                beforeStop: function( event, ui ) {
                    var id = ui.helper.attr( "id" );
                    if ( id.indexOf( 'li' ) == -1 ) {
                        id = 'li' + id;
                    }
                    var text = ui.helper.text();
                    var li = "<li id='"+id+"'><span class='ui-icon ui-icon-circle-close' onclick='remove(\""
                            +id+"\")'></span>"+text+"</li>";
                    $(ui.item).replaceWith( li );
                }
            }).disableSelection();
        });
        
        function remove(id)
        {
            var li = $('#'+id);
            li.fadeOut('fast', function() { li.remove();});
        }

So, the biggest differences between this and the previous version are in the “beforeStop” function. The first is this block:

                    if ( id.indexOf( 'li' ) == -1 ) {
                        id = 'li' + id;
                    }

The problem I had was that when dropping from the table row, everything was awesome, but if I re-ordered within the list, then I kept pre-pending another ‘li’ to the front of the id. So I would end up with a row with an id of ‘lilili123’ or something like that. Undesirable at best. So now, I check to ensure it only has one li prefix.

The second difference, and the main one for this post, is the addition of the remove function and the button to remove it when dropped, contained here:

                    var li = "<li id='"+id+"'><span class='ui-icon ui-icon-circle-close' onclick='remove(\""
                            +id+"\")'></span>"+text+"</li>";
                    $(ui.item).replaceWith( li );

and here:

        function remove(id)
        {
            var li = $('#'+id);
            li.fadeOut('fast', function() { li.remove();});
        }

The first section is the new code to replace the helper from the table row draggable with the list item, including the button for removal. If that sentence didn’t make sense, then go back and read the post linked above to get the details. Since this is all based on jquery, I used a jquery icon for the button. It is nice because then it will mesh with whatever jquery theme you are using.

The remove function uses a jquery animation to quickly fade the list item out and then remove it from the list. You must call .remove() to get it out of the list altogether.

The last wrinkle I have in this is that a user can drag the same item from the table onto the list, resulting in multiple copies of the list item, but that’s a problem for another day.

2011/04/05

Get the “Next” value in a Java Enum

Filed under: development, java — digitaljoel @ 10:32 pm

Java Enums. An awesome addition to Java 1.5 so we could avoid using public static ints for that purpose. I’ve been using them for some time with success and never noticed one deficiency until now. You can get the ordinal of an enum value with the ordinal() method. That is basically the index in the order the values were declared. So, if your enum looks something like this:

 public enum Planet { MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE }

In this case, MERCURY would have an ordinal of 0, then VENUS 1, and so forth. Now, what if you want to iterate through them? You can get all the values of an enumerated type as an array using the values() method. Cool, right? Well, what if I don’t want to iterate through them, but I want to simply progress from one to the next. It would be cool if math operators (like + and -) would let you go from one to the next, but that’s not the case. I had a need to go from one to the next, so I changed my enum by adding the following method.

 public enum Planet { MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE;
    public Planet getNext() {
      return this.ordinal() < Planet.values().length - 1
          ? Planet.values()[this.ordinal() + 1]
          : null;
    }
  }

Now, if I do Planet.MERCURY.getNext() I would get VENUS. This takes advantage of the ordinal of each entry (which you cannot assign in any other way than the order in which you declare the enum values) and the values method, indexing into the values array to get the next value. If you attempt to go off the end, it’ll return null. It would be simple to make it wrap instead if that makes sense for your case. It would also be trivial to take this and implement a “getPrevious” if you have a need to go in reverse.
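For completeness, that getPrevious would just mirror getNext; a quick sketch:

  public Planet getPrevious() {
    return this.ordinal() > 0
        ? Planet.values()[this.ordinal() - 1]
        : null;
  }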
