Wednesday, July 20, 2011

Scala Named parameters

Named parameters are a Scala feature that lets you refer to a method's or constructor's arguments by name at the call site. Combined with default argument values, this means you can define a class whose constructor supplies defaults that callers can later selectively override.

For example:

class Person(name: String = "Bill", age: Int = 25) {
  override def toString = name + ":" + age
}

First, this allows you to create an instance of this class without passing any arguments:
println(new Person())
and the output will be:

Bill:25

This is because the instance is created with the default values specified in the class definition.

You can override any of these values by referring to the argument by name and specifying its new value.

For example:
println(new Person(age=35))
will result in:

Bill:35

println(new Person(age=55, name="John"))
will result in:

John:55

And of course you can still do this:
println(new Person("Chris", 23))
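Named and default arguments are not limited to constructors; they work for ordinary methods as well. Here is a small sketch of the same idea (the object and method names are mine, purely for illustration):

```scala
object Greeter {
  // both parameters have defaults, so callers may override either one by name
  def greet(greeting: String = "Hello", name: String = "Bill"): String =
    greeting + ", " + name

  def main(args: Array[String]): Unit = {
    println(greet())                                 // Hello, Bill
    println(greet(name = "John"))                    // Hello, John
    println(greet(name = "John", greeting = "Hi"))   // Hi, John
  }
}
```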

Wednesday, July 6, 2011

Implementing Spring's FactoryBean in Scala

Spring Framework provides many different extension points. One of them is FactoryBean.
The details about FactoryBeans are described here. All I'll show is how to implement one in Scala, building on the previous post Scala Function as Spring Bean (Spring and Scala).
Let's say creating an instance of the Scala function requires some logic that could not, or should not, be handled by a simple constructor invocation. This would be a perfect case for a FactoryBean, and we can easily implement one in Scala.

So, to create our PrintFunction via a FactoryBean, we can implement one in Java or in Scala. Below is a Scala implementation of Spring's FactoryBean which creates an instance of PrintFunction while also printing a message stating that it is doing so.
class ScalaFactoryBean extends FactoryBean[PrintFunction] {
  val myFunction = new PrintFunction()

  def getObject(): PrintFunction = {
    println("Creating 'PrintFunction'")
    myFunction
  }

  def getObjectType(): Class[PrintFunction] = classOf[PrintFunction]

  def isSingleton(): Boolean = true
}
. . . and its configuration:

<bean id="prinitScalaFunctionFB" class="olegz.scala.spring.ScalaFactoryBean"/>
<bean id="functionViaFactoryBean" class="olegz.scala.spring.SimpleSpringBean">
     <property name="function" ref="prinitScalaFunctionFB"/>
</bean>

From this point on it is pure Spring:
public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("scala-config.xml", SpringDemo.class);
    SimpleSpringBean functionViaFactoryBean = context.getBean("functionViaFactoryBean", SimpleSpringBean.class);
    functionViaFactoryBean.printMessage("Hello Spring-Scala");
}

For more info check out the source code here:

Scala Function as Spring Bean (Spring and Scala)

With Scala gaining popularity a lot of developers are now wondering how Scala can integrate with existing and popular JVM-based frameworks.
One of the questions I've been asked recently is whether Scala Functions could be used as Spring Beans in the typical Dependency Injection model provided by Spring Framework. In other words, can I inject a Scala function into a Spring-configured Java bean? The answer is not only 'Yes', it is actually very simple, since Scala itself is very nicely integrated with Java and a Scala Function is just a class.

Let's say we have a Scala function called PrintFunction
class PrintFunction extends Function1[String, Unit] {
  def apply(in: String) = println("From Scala function: " + in)
}
It is worth pointing out that Scala defines a group of traits from scala.Function0 through scala.Function22, allowing you to define functions with 0 to 22 parameters, so you can easily integrate them with other classes. In our case we are using Function1 since we are only passing one parameter. However, we can clearly see that there is a second type argument in our definition of PrintFunction (e.g., Function1[String, Unit]). That is the return type, and in our case it is Unit, which is the equivalent of Java's 'void'.
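Incidentally, a class like PrintFunction can also be written as a function literal, and the compiler picks the appropriate FunctionN trait for you. A small sketch (names are mine, not from the project above):

```scala
object FunctionLiterals {
  // a function literal of type String => Unit, i.e. Function1[String, Unit]
  val print: String => Unit = in => println("From Scala function: " + in)

  // two parameters, so the underlying trait is Function2[Int, Int, Int]
  val add: (Int, Int) => Int = (a, b) => a + b

  def main(args: Array[String]): Unit = {
    print.apply("Hello")   // identical to print("Hello")
    println(add(2, 3))     // 5
  }
}
```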

So, to bootstrap this function as a Spring Bean, all we need to do is define it as one:

<bean id="prinitScalaFunction" class="olegz.scala.spring.PrintFunction"/>

Now it is just another Spring Bean which can be injected into any other bean that has a PrintFunction property (see below).
public class SimpleSpringBean {

    private PrintFunction function;

    public void setFunction(PrintFunction function) {
        this.function = function;
    }

    public void printMessage(String message) {
        this.function.apply(message);
    }
}
. . . and its configuration:

<bean class="olegz.scala.spring.SimpleSpringBean">
    <property name="function" ref="prinitScalaFunction"/>
</bean>

Now all we need to do is start the Spring Application Context and call the printMessage(String) method of SimpleSpringBean. This method will invoke the Function's apply(String) method; alternatively, we could get the function itself from the context and call its apply(String) method directly.

public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("scala-config.xml", SpringDemo.class);
    SimpleSpringBean bean = context.getBean(SimpleSpringBean.class);
    bean.printMessage("Hello Spring-Scala");
}
You should see the following output:
From Scala function: Hello Spring-Scala
For more details check out the sources here

Monday, July 4, 2011

Spring Integration and Channels

Spring Integration is a POJO-based, lightweight, embeddable messaging framework with a loosely coupled programming model. It aims to simplify the integration of heterogeneous systems based on Enterprise Integration Patterns (EIP), without requiring a separate ESB-like engine or a proprietary development and deployment environment.

One of the fundamental differences between Spring Integration and other EIP-like frameworks and products is that we treat EIP as a specification and follow it to the letter.
At the very core of EIP sits one fundamental, and probably the most important, pattern of all: Pipes-and-Filters. It describes a communication model between Message producers and Message consumers where, instead of exchanging Messages directly, producers and consumers communicate through a pipe (i.e., a channel).

The Pipe is a core component of EIP for two main reasons:

1. Pipes give you logical and physical decoupling. Because of that, components on either side of the pipe are completely unaware of one another. You can change the producing or consuming component without affecting the flow (e.g., the consumer could be a mock in the early development stages and then be switched to a real component later on).

2. Pipes, in collaboration with the Message Dispatcher and the Polling Consumer, are responsible for handling the Message exchange protocol (e.g., point-to-point vs. pub-sub, sync or async, etc.).
Let's look at an example:
producerA -> channelA -> consumerA
In the above diagram producerA sends a Message to consumerA via channelA.
As you can see, neither the producer nor the consumer is aware of the other. They are physically and logically decoupled. We can change the producer without affecting the consumer, and vice versa.

So how is the Message exchange protocol maintained by the channel? Is it point-to-point? Pub-sub? Sync or async? The diagram above does not show it, and in Spring Integration the default channel implementation is a synchronous point-to-point channel. However, by simply changing the type of the channel, one can completely change the Message exchange protocol without affecting the producer or the consumer.
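As a rough sketch of what that looks like with Spring Integration's XML namespace (the ids and bean references here are made up for illustration), switching from point-to-point to pub-sub is just a matter of swapping the channel element; the consumer configuration does not change:

```xml
<!-- synchronous point-to-point channel (the default) -->
<int:channel id="channelA"/>

<!-- consumerA receives whatever is sent to channelA -->
<int:service-activator id="consumerA" input-channel="channelA"
                       ref="someConsumer" method="consume"/>

<!-- to switch the exchange protocol to pub-sub, only the channel changes:
<int:publish-subscribe-channel id="channelA"/>
-->
```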

Monday, December 8, 2008

JMX Connectivity through Firewall

Recently I’ve been asked to help out a customer who was having issues with JMX connectivity to SpringSource dmServer through a firewall. One thing I want to point out right up front is that the issue is rather generic and has nothing to do with dmServer; it is really about understanding JMX, RMI, and proper configuration. But I will use dmServer and its configuration as an example.
Here are the sample JMX configuration options provided in the dmServer startup script (the standard JVM flags for remote JMX):

JMX_OPTS="${jmxPort} \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.password.file=${jmxUsersPath} \
-Djavax.net.ssl.keyStore=${keystorePath} \
-Djavax.net.ssl.keyStorePassword=${keystorePassword} \

This will enable a JMX agent (MBean Server) when you start dmServer. Once started, you can monitor the process via a JMX-compliant tool such as jconsole. Connectivity could be local or remote.
The above configuration seems to provide everything we need to access this process through the firewall, since the port set via is obviously the port we need to open. However, there is a caveat.
Once connected to the JMX RMIRegistry running on the port specified by that property, the actual objects are served by an RMIServer which runs on a different port. Unfortunately, this port is chosen randomly by the default JMX Agent, and there is no -D option to specify it. Going through the firewall would therefore require opening up two ports, and with one of them random this presents a delicate issue.
Fortunately, it is easily solvable by writing a custom Java Agent in which you can programmatically specify each port and externalize them through custom properties (I am attaching sample code).
More info here:
In a nutshell, the custom agent will take the registry port value provided by a property and will create the second (RMIServer) port by incrementing it by 1 (in our case the port specified is 44444, which makes the RMIServer port 44445).
Once such an agent is in place (as a JAR) and the appropriate ports are open in the firewall, all you need to do is modify the startup script to include the -javaagent option pointing at that JAR.
. . . . .
$JAVA_HOME/bin/java \
. . . . .
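The agent described above can be sketched as follows. This is a minimal illustration, not the attached sample code: the object name and the property name are my own, and the JAR's manifest would need a matching Premain-Class entry.

```scala
import java.lang.instrument.Instrumentation
import java.lang.management.ManagementFactory
import java.rmi.registry.LocateRegistry
import{JMXConnectorServerFactory, JMXServiceURL}

object FixedPortJmxAgent {

  // the RMIServer port is derived from the registry port by incrementing it by 1
  def rmiServerPort(registryPort: Int): Int = registryPort + 1

  // invoked by the JVM before main() when started with -javaagent:<jar>
  def premain(args: String, inst: Instrumentation): Unit = {
    // hypothetical property name; 44444 matches the example in the post
    val registryPort = Integer.getInteger("our.jmx.registry.port", 44444).intValue
    val serverPort = rmiServerPort(registryPort) // 44445 in our case

    // start the RMI registry on the well-known port
    LocateRegistry.createRegistry(registryPort)

    // bind the RMIServer on a fixed port instead of a random one
    val url = new JMXServiceURL(
      "service:jmx:rmi://localhost:" + serverPort +
        "/jndi/rmi://localhost:" + registryPort + "/jmxrmi")
    val connectorServer = JMXConnectorServerFactory.newJMXConnectorServer(
      url, null, ManagementFactory.getPlatformMBeanServer)
    connectorServer.start()
  }
}
```

With both ports fixed, the firewall only needs those two ports opened.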
Well, that really only solved one half of the problem, since by default the RMI stubs sent to the client contain the server's private address instead of the public one.

Just look at this tcpdump fragment taken while monitoring the client's access (jconsole running on the local network):
. . . . . . .
09:41:23.778663 IP > . ack 71 win 65535
09:41:23.779958 IP > P 20:251(231) ack 71 win 65535
09:41:23.780456 IP > P 20:251(231) ack 71 win 65535
09:41:23.796075 IP > S 1334070579:1334070579(0) win 5840
09:41:23.796328 IP > S 1760846938:1760846938(0) ack 1334070580 win 65535
. . . . . . .
You can clearly see that the client (i.e., jconsole) is attempting to connect directly to the server's private IP instead of its public IP, although the JMX URL points at the public address:

If I were behind the firewall I would obviously have had problems connecting to that private address.
Fortunately, this one is easy to fix. All you need is to provide an additional option on the server side (java.rmi.server.hostname) and add it to the startup script. This option represents the host name string that should be associated with remote stubs for locally created remote objects, in order to allow clients to invoke methods on the remote object:
. . . . . . .
$JMX_OPTS \
-Dcom.sun.management.jmxremote.port=${jmxPort} \
-Djava.rmi.server.hostname= \
. . . . . . . 
That is all.
Start jconsole: ./ service:jmx:rmi://:/jndi/rmi://:/jmxrmi
Once you modify the script and start the dmServer you should see output similar to this:
. . . . . .
oleg-2:bin olegzhurakousky$ ./
Getting the platform’s MBean Server
Local Connection URL: service:jmx:rmi://oleg-2.local:44445/jndi/rmi://oleg-2.local:44444/jmxrmi
Public Connection URL: service:jmx:rmi://
Creating RMI connector server
[2009-02-26 18:53:34.031] main Server starting.
[2009-02-26 18:53:35.943] main OSGi telnet console available on port 2401.
[2009-02-26 18:53:41.558] main Boot subsystems installed.