Sunday, March 31, 2013

The Three Layers of Management

We’ve always made project management an important and inseparable part of our development and design services. When Atomic Object was younger, we would tend to focus on the technical aspects of project management — story estimation and prioritization, velocity, burn charts, etc. This made us very good at the predictive or quantitative aspects of project management. Over time we’ve learned that while delivering a product on time and on budget is important, it doesn’t guarantee the success of the product. We also care deeply (or “give a shit”) about the actual market success of the products we build. Because of that, we have expanded our responsibilities (while still using self-managed teams) to include other aspects of management that we think are crucial to making a product successful.
 
 
I’ve broken them down into the following three categories:

  • Product Management
  • High-Level Project Management
  • Technical Project Management

Product Management

Product management tends to set up the other categories of management and is especially important at the beginning of a project.

Upfront Responsibilities

  • Understanding the business ecosystem.
  • Discovering and validating the value proposition of the product.
    • Market Competitors
    • Personas (Who’s going to use it?)
  • Creating a product backbone.

Ongoing Responsibilities

  • Elevate technical or budget constraints to a product level.
    • Does the constraint negatively affect the product’s core value proposition?
    • Can we remove a feature to make room for a more important one?
    • Who will be affected?
  • Conduct usability testing.
  • Empathize with or participate in the marketing, sales, and distribution aspects of the product.
  • Be cognizant of changes in the market or new products that may impact the product you are creating.
  • Know who needs what and cares about what.

High-Level Project Management

High-level project management is crucially important in larger products that cross-cut different teams and vendors, and the success or failure of each directly impacts the product. Larger products tend to have a multitude of interdependent projects where understanding the overall timeline and reaching across teams is absolutely necessary.

These responsibilities may include:

  • Creating a high-level timeline for the product.
  • Reaching across teams and understanding their projects, constraints, timelines, and roadblocks.
  • Managing external vendors.
  • Working through roadblocks that aren’t directly related to your project.
  • Thinking about the legal implications of using an external product (licensing, terms of service, etc.).
  • Empathizing with the higher-level budgetary constraints.
  • Being responsive and not becoming the bottleneck.

Technical Project Management

Excellent technical project management helps to ensure that features needed to make a successful product are delivered on-time and on-budget. Additionally, it should provide transparency and flexibility so that everyone knows where your team is in their schedule and can make changes to the product as necessary.

Responsibilities usually include:

  • Tracking and reporting budget to scope completion.
  • Holding weekly iteration meetings.
  • Creating burn charts.
  • Performing feature breakdown and story estimation.
  • Day-to-day team management.
  • Task management.

Conclusion

It is vital, but extremely difficult, to be vigilant in all areas of management. Use tools, practices, rituals, and process to stay organized and effective. Doing so will help prevent burnout and keep you from losing sight of your management responsibilities. Lastly, feel empowered to share the worry and ownership of management responsibilities with your peers. Sharing responsibilities builds empathy and can also provide another layer of scrutiny on your project.
 

Reference: The Three Layers of Management from our JCG partner Justin DeWind at the Atomic Spin blog.



Saturday, March 30, 2013

Testing Spring Data Neo4j Applications with NoSQLUnit

Spring Data Neo4j is the project within the Spring Data umbrella that extends the Spring programming model to applications that use Neo4j as a graph database. To write tests with NoSQLUnit for Spring Data Neo4j applications you need nothing special, apart from taking into account that Spring Data Neo4j stores a special __type__ property on graph nodes and relationships, which holds the fully qualified classname of the entity.

Apart from the __type__ property at node/relationship level, we also need to create one index for nodes and one for relationships. For nodes the index must be named __types__, while for relationships it must be named __rel_types__. In both cases the key must be set to className and the value to the fully qualified classname.

Type mapping

IndexingNodeTypeRepresentationStrategy and IndexingRelationshipTypeRepresentationStrategy are used as the default type mapping implementations, but you can also use SubReferenceNodeTypeRepresentationStrategy, which stores entity types in a tree in the graph representing the type and interface hierarchy, or you can customize even further by implementing the NodeTypeRepresentationStrategy interface.

Hands on Work

Application

Starfleet has asked us to develop an application for storing all Starfleet members, their relationships with other Starfleet members, and the ship where they serve. The best way to implement this requirement is to use a Neo4j database as the backend, with Spring Data Neo4j at the persistence layer. The application is modeled in two Java classes, one for members and another one for starships. Note that for this example there are no properties on relationships, so only nodes are modeled.

Member class

package com.lordofthejars.nosqlunit.springdata.neo4j;

import java.util.Set;

import org.neo4j.graphdb.Direction;
import org.springframework.data.neo4j.annotation.Fetch;
import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.annotation.RelatedTo;

@NodeEntity
public class Member {

    private static final String COMMANDS = "COMMANDS";

    @GraphId Long nodeId;

    private String name;

    private Starship assignedStarship;

    public Member() {
        super();
    }

    public Member(String name) {
        this.name = name;
    }

    @Fetch @RelatedTo(type = COMMANDS, direction = Direction.OUTGOING)
    private Set<Member> commands;

    public void command(Member member) {
        this.commands.add(member);
    }

    public Set<Member> commands() {
        return this.commands;
    }

    public Starship getAssignedStarship() {
        return assignedStarship;
    }

    public String getName() {
        return name;
    }

    public void assignedIn(Starship starship) {
        this.assignedStarship = starship;
    }

    //Equals and Hash methods
}

Starship class

package com.lordofthejars.nosqlunit.springdata.neo4j;

import java.util.Set;

import org.neo4j.graphdb.Direction;
import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.annotation.RelatedTo;

@NodeEntity
public class Starship {

    private static final String ASSIGNED = "assignedStarship";

    @GraphId Long nodeId;

    private String starship;

    public Starship() {
        super();
    }

    public Starship(String starship) {
        this.starship = starship;
    }

    @RelatedTo(type = ASSIGNED, direction = Direction.INCOMING)
    private Set<Member> crew;

    public String getStarship() {
        return starship;
    }

    public void setStarship(String starship) {
        this.starship = starship;
    }

    //Equals and Hash methods
}

Apart from the model classes, we also need two repositories implementing the CRUD operations, and a Spring context XML file. Spring Data Neo4j builds on the Spring Data Commons infrastructure, allowing us to create interface-based compositions of repositories with default implementations for certain operations.

MemberRepository class

public interface MemberRepository extends GraphRepository<Member>,
        RelationshipOperationsRepository<Member> {

    Member findByName(String name);
}

Note that apart from the operations provided by the GraphRepository interface, like save, findAll, findById, and so on, we define one query method called findByName. Spring Data Neo4j repositories (like most Spring Data projects) provide a mechanism to define queries using the well-known Ruby on Rails approach to finder queries; a couple of additional illustrative finder names are sketched below.
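Purely as an illustration of that convention (these extra finders are hypothetical and not part of the sample project; exact keyword support depends on the Spring Data Neo4j version), the method name itself encodes the query:

import java.util.List;

public interface MemberRepository extends GraphRepository<Member> {

    // Parsed as: match Members whose 'name' property equals the argument
    Member findByName(String name);

    // Hypothetical: the 'Like' keyword would apply a pattern match on 'name'
    List<Member> findByNameLike(String name);
}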

StarshipRepository class

public interface StarshipRepository extends GraphRepository<Starship>,
        RelationshipOperationsRepository<Starship> {
}

application-context file

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:neo4j="http://www.springframework.org/schema/data/neo4j"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
                           http://www.springframework.org/schema/data/neo4j http://www.springframework.org/schema/data/neo4j/spring-neo4j.xsd">

    <context:component-scan base-package="com.lordofthejars.nosqlunit.springdata.neo4j"/>
    <context:annotation-config/>
    <neo4j:repositories base-package="com.lordofthejars.nosqlunit.springdata.repository"/>

</beans>

Testing

Unit Testing

As mentioned previously, when writing datasets for Spring Data Neo4j we don’t have to do anything special beyond setting node and relationship properties correctly and defining the required indexes. Let’s see the dataset used to seed the Neo4j database for testing the findByName method.

star-trek-TNG-dataset.xml file

<?xml version="1.0" ?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
    <key id="name" for="node" attr.name="name" attr.type="string"></key>
    <key id="__type__" for="node" attr.name="__type__" attr.type="string"></key>
    <key id="starship" for="node" attr.name="starship" attr.type="string"></key>
    <graph id="G" edgedefault="directed">
        <node id="3">
            <data key="__type__">com.lordofthejars.nosqlunit.springdata.neo4j.Member</data>
            <data key="name">Jean-Luc Picard</data>
            <index name="__types__" key="className">com.lordofthejars.nosqlunit.springdata.neo4j.Member</index>
        </node>
        <node id="1">
            <data key="__type__">com.lordofthejars.nosqlunit.springdata.neo4j.Member</data>
            <data key="name">William Riker</data>
            <index name="__types__" key="className">com.lordofthejars.nosqlunit.springdata.neo4j.Member</index>
        </node>
        <node id="4">
            <data key="__type__">com.lordofthejars.nosqlunit.springdata.neo4j.Starship</data>
            <data key="starship">NCC-1701-E</data>
            <index name="__types__" key="className">com.lordofthejars.nosqlunit.springdata.neo4j.Starship</index>
        </node>
        <edge id="11" source="3" target="4" label="assignedStarship"></edge>
        <edge id="12" source="1" target="4" label="assignedStarship"></edge>
        <edge id="13" source="3" target="1" label="COMMANDS"></edge>
    </graph>
</graphml>

See that each node has a __type__ property with the fully qualified classname, and an index entry named __types__ with key className and the fully qualified classname as value. The next step is configuring the application context for unit tests: application-context-embedded-neo4j.xml

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:neo4j="http://www.springframework.org/schema/data/neo4j"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
                           http://www.springframework.org/schema/data/neo4j http://www.springframework.org/schema/data/neo4j/spring-neo4j.xsd">

    <import resource="classpath:com/lordofthejars/nosqlunit/springdata/neo4j/application-context.xml"/>

    <neo4j:config storeDirectory="target/config-test"/>

</beans>

Notice that we are using Neo4j namespace for instantiating an embedded Neo4j database. And now we can write the JUnit test case: WhenInformationAboutAMemberIsRequired

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("application-context-embedded-neo4j.xml")
public class WhenInformationAboutAMemberIsRequired {

    @Autowired
    private MemberRepository memberRepository;

    @Autowired
    private StarshipRepository starshipRepository;

    @Autowired
    private ApplicationContext applicationContext;

    @Rule
    public Neo4jRule neo4jRule = newNeo4jRule()
            .defaultSpringGraphDatabaseServiceNeo4j();

    @Test
    @UsingDataSet(locations = "star-trek-TNG-dataset.xml", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void information_about_starship_where_serves_and_members_under_his_service_should_be_retrieved() {

        Member jeanLuc = memberRepository.findByName("Jean-Luc Picard");
        assertThat(jeanLuc, is(createMember("Jean-Luc Picard")));
        assertThat(jeanLuc.commands(), containsInAnyOrder(createMember("William Riker")));

        Starship starship = starshipRepository.findOne(jeanLuc.getAssignedStarship().nodeId);
        assertThat(starship, is(createStarship("NCC-1701-E")));
    }

    private Object createStarship(String starship) {
        return new Starship(starship);
    }

    private static Member createMember(String memberName) {
        return new Member(memberName);
    }
}

There are some important points in the previous class to take under consideration:

  1. Recall that we need the Spring ApplicationContext object to retrieve the embedded Neo4j instance defined in the Spring application context.
  2. Since the lifecycle of the database is managed by the Spring container, there is no need to define any NoSQLUnit lifecycle manager.

Integration Test

Unit tests are usually run against embedded in-memory instances, but in a production environment you may need to access external Neo4j servers over a REST connection, which in the case of Spring Data Neo4j means instantiating the SpringRestGraphDatabase class. You need tests that validate your application still works when your code is integrated with a remote server; these tests are typically known as integration tests. Writing integration tests for our application is as easy as defining an application context with SpringRestGraphDatabase and letting NoSQLUnit control the lifecycle of the Neo4j database.

application-context-managed-neo4j.xml

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:neo4j="http://www.springframework.org/schema/data/neo4j"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
                           http://www.springframework.org/schema/data/neo4j http://www.springframework.org/schema/data/neo4j/spring-neo4j.xsd">

    <import resource="classpath:com/lordofthejars/nosqlunit/springdata/neo4j/application-context.xml"/>

    <bean id="graphDatabaseService" class="org.springframework.data.neo4j.rest.SpringRestGraphDatabase">
        <constructor-arg index="0" value="http://localhost:7474/db/data"/>
    </bean>

    <neo4j:config graphDatabaseService="graphDatabaseService"/>

</beans>

Note how, instead of registering an embedded instance, we configure the SpringRestGraphDatabase class to connect to a localhost server. Now let’s implement an integration test which verifies that all starships can be retrieved from the Neo4j server.

WhenInformationAboutStarshipsAreRequired

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("application-context-managed-neo4j.xml")
public class WhenInformationAboutStarshipsAreRequired {

    @ClassRule
    public static ManagedNeoServer managedNeoServer = newManagedNeo4jServerRule()
            .neo4jPath("/Users/alexsotobueno/Applications/neo4j-community-1.7.2")
            .build();

    @Autowired
    private StarshipRepository starshipRepository;

    @Autowired
    private ApplicationContext applicationContext;

    @Rule
    public Neo4jRule neo4jRule = newNeo4jRule()
            .defaultSpringGraphDatabaseServiceNeo4j();

    @Test
    @UsingDataSet(locations = "star-trek-TNG-dataset.xml", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void information_about_starship_where_serves_and_members_under_his_service_should_be_retrieved() {

        EndResult<Starship> allStarship = starshipRepository.findAll();
        assertThat(allStarship, containsInAnyOrder(createStarship("NCC-1701-E")));
    }

    private Object createStarship(String starship) {
        return new Starship(starship);
    }
}

Because the defaultSpringGraphDatabaseServiceNeo4j method returns the GraphDatabaseService instance defined in the application context, in our case it will return the SpringRestGraphDatabase instance defined above.

Conclusions

There is not much difference between writing tests for applications that use Spring Data Neo4j and those that don’t. Just keep in mind to define the __type__ property correctly and create the required indexes. Also note that from the point of view of NoSQLUnit there is no difference between writing unit and integration tests, apart from the lifecycle management of the database engine. Download Code
 

Reference: Testing Spring Data Neo4j Applications with NoSQLUnit from our JCG partner Alex Soto at the One Jar To Rule Them All blog.



Friday, March 29, 2013

Tomcat in Eclipse: 6 popular “how to” questions

Learning a new technology is always a hard process, and it becomes even more difficult when you are trying to learn two technologies which interact with each other. Tomcat and Eclipse are among the most popular prerequisites in Java EE development. Therefore, to be a professional developer you need to know how to perform the most common actions with this pair and how to make some configurations.

1. How to add Tomcat in Eclipse?

The simplest way is to download Eclipse IDE for Java EE Developers. There, if you create a new Dynamic Web Project, Eclipse will download and install Tomcat automatically. But what if you want to do this by yourself?

  1. Firstly, download the latest version of Tomcat and install it on your computer. Don’t forget the directory of the newly installed Tomcat.
  2. Then open Eclipse and on the main menu choose Window -> Preferences.
  3. In the “Preferences” window select Server -> Runtime Environments from the left panel.
  4. Click on the “Add...” button, then select the version of Tomcat you have already installed.
  5. Click on the “Next” button, specify the path to the installed Tomcat and press the “Finish” button.

That’s it. Now you can find the server in the “Server” view in your Eclipse.

2. Where is Eclipse-Tomcat working directory?

It seems ridiculous to work with Tomcat in Eclipse without knowing where your server actually is. In order to find out, you need to take several simple steps:

  1. Open the “Server” view in Eclipse.
  2. Select the server whose location you want to know.
  3. Double click on the server.
  4. In the opened window select the “Server location” section.
  5. Take notice of the “Server path” field.

Usually this path looks like this: %eclipse_work_space%\.metadata\.plugins\org.eclipse.wst.server.core\tmp0
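If you’d rather confirm the location from code, any servlet or JSP running in the instance can print it; catalina.base and catalina.home are the standard system properties Tomcat sets:

// Inside any servlet or JSP deployed to this Tomcat instance:
String base = System.getProperty("catalina.base"); // working directory of this instance
String home = System.getProperty("catalina.home"); // Tomcat installation directory
System.out.println("catalina.base = " + base + ", catalina.home = " + home);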

3. Where is a log file in Tomcat?

The location of the logs directory in Tomcat (integrated with Eclipse): %eclipse_work_space%\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\logs

4. How to change Tomcat port in Eclipse?

The first way:

To change the HTTP port, repeat the first three steps from question #2, then open the “Ports” section and set the value of the “HTTP/1.1” field (8080 by default) to the port you want to use.

The second way:

Navigate to %eclipse_work_space%\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\conf
Open server.xml and find the following string:

<Connector connectionTimeout="20000" port="8080" protocol="HTTP/1.1" redirectPort="8443"></Connector>

Modify the value of the “port” attribute in order to specify the port you need to use.

5. Where to put WAR file in Tomcat?

  • If you integrate Tomcat with Eclipse: %eclipse_work_space%\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps
  • If you want to deploy WAR file in standalone Tomcat: %server_location%\webapps

6. Tomcat debug mode in Eclipse

In order to launch Tomcat in debug mode from Eclipse, click on the button with the bug icon in the “Server” view (see the image below for more details).

[Image: Eclipse Tomcat debug mode]

Reference: Tomcat in Eclipse: 6 popular “how to” questions from our JCG partner Alex Fruzenshtein at the Fruzenshtein’s notes blog.



Using JGroups Directly From JBoss AS 7 Component

JGroups is Bela Ban‘s software for reliable message exchange. It is highly configurable and can use either TCP or UDP as the transport protocol. Basically, you run JGroups on a number of clients, they form a cluster, and they can send and receive messages within the cluster.

JGroups is used internally by JBoss Infinispan. Infinispan, however, unlike JGroups, adds distributed cache semantics (replicated/distributed modes, entry invalidation, transactional behavior, a Map access API, etc.). It even allows you to use the cluster as a compute grid.
 
 
 
Infinispan in turn is used to provide the JBoss AS 7 clustering functionality. This means that the underlying JGroups subsystem can be, and is, configured using a standard JBoss AS 7 standalone*.xml file. You can access an Infinispan cache from your Java EE component (e.g. an EJB) without any problems, as described here.

However, there are cases when you’d like to use just the underlying JGroups messaging instead of all the cache semantics Infinispan gives you. And here’s where things become more complicated. You can always use JGroups directly and store its configuration as application-local resources. It is arguable whether this violates the Java EE spec, which says that an application should not manage low-level connections, spawn threads, open sockets, and so on. This is something that is better left to the application server; it also lets us keep one configuration file instead of spreading the configuration across multiple places. So, the question is: how do we access the JGroups subsystem from our EJB application? The solution involves a few steps, described below. If you want to check the whole working project, take a look at my JGroups AS7 GitHub project.

1. Write Custom JBoss AS 7 Service Activator

This activator (JGroupsChannelServiceActivator.java) will do two things:

  • create the actual JGroups channel using JBoss protocol configuration,
  • bind the newly created JGroups channel to the JNDI.

The first part is done in JGroupsChannelServiceActivator#createChannel(-). I don’t know the ServiceActivator nor the other internals of the JBoss AS 7 modules, but from what you can read:

InjectedValue<ChannelFactory> channelFactory = new InjectedValue<>();
ServiceName serviceName = ChannelFactoryService.getServiceName(STACK_NAME);
ChannelService channelService = new ChannelService(CHANNEL_NAME, channelFactory);
target.addService(channelServiceName, channelService)
      .addDependency(serviceName, ChannelFactory.class, channelFactory)
      .install();

it seems that it creates a new service (ChannelService) and lets JBoss MSC automatically inject its ChannelFactory dependency during installation. The ChannelFactory will use the UDP protocol stack. The second part is done in JGroupsChannelServiceActivator#bindChannelToJNDI(-); it binds the newly created Channel instance to JNDI under a user-defined location, in our case java:jboss/channel/myChannel.

2. Register the Activator

We now need to tell JBoss AS 7 to invoke our custom activator. This is done using the standardized JDK ServiceLoader API. In a nutshell, we need to provide a META-INF/services/org.jboss.msc.service.ServiceActivator file containing the fully qualified name of our activator class, as sketched below. Take a look at this example.
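For reference, the registration file contains nothing but the fully qualified class name; the package used below is an assumption for illustration, so substitute the package of your own activator class:

# src/main/resources/META-INF/services/org.jboss.msc.service.ServiceActivator
# (the '#' lines are comments, which the ServiceLoader file format allows)
com.example.jgroups.JGroupsChannelServiceActivator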

3. Add Required Modules to our Application

OK, so we have an activator that should do the magic. If we tried to deploy it as-is, we’d get a bunch of ClassNotFoundExceptions. This is because of JBoss Modules: our application is not packed with all those JBoss artifacts like JGroups, the ServiceActivator API and the JNDI-related classes. We don’t want to clutter our app with those libraries; we just want to declare dependencies on modules provided by JBoss AS 7 itself. We do this in META-INF/jboss-deployment-structure.xml. Note that we could also do it in the MANIFEST.MF Dependencies: section, but IntelliJ IDEA doesn’t seem to work with a Maven-generated MANIFEST.MF:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <module name="org.jgroups"/>
            <module name="org.jboss.as.naming"/>
            <module name="org.jboss.as.clustering.jgroups"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>

JGroups modules are required for accessing the JChannel, ChannelService, etc. The naming module is required for the JNDI binding code.

4. Develop the EJB Using JGroups Channel

JGroupsSampleDataProducer is a singleton EJB that shows how to access the JGroups channel. It’s rather simple thanks to the JNDI binding. We can just use:

@Resource(lookup = "java:jboss/channel/myChannel")
private JChannel channel;

and there it is. This EJB registers a timer that fires every 2 seconds and sends some random String message; a sketch of such a bean follows.
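A minimal sketch of what such a bean could look like; the schedule expression and payload are assumptions on my part, the real class lives in the linked GitHub project:

import java.util.UUID;

import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;

import org.jgroups.JChannel;
import org.jgroups.Message;

@Singleton
public class JGroupsSampleDataProducer {

    // Channel bound to JNDI by the activator from step 1
    @Resource(lookup = "java:jboss/channel/myChannel")
    private JChannel channel;

    // Fires every 2 seconds and pushes a random String into the cluster
    @Schedule(hour = "*", minute = "*", second = "*/2", persistent = false)
    public void sendMessage() throws Exception {
        // dest == null means "send to every member of the cluster"
        channel.send(new Message(null, null, "sample-" + UUID.randomUUID()));
    }
}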

Note we didn’t have to explicitly start the JChannel. We just inject it and use it straightaway. Take a look at the ChannelService used in our activator from step 1. Its start method looks like:

@Override
protected void start() throws Exception {
    (...)
    if (this.channel.getProtocolStack().findProtocol(STATE_TRANSFER.class, STATE.class, STATE_SOCK.class) != null) {
        this.channel.connect(this.id, null, STATE_TRANSFER_TIMEOUT);
    } else {
        this.channel.connect(this.id);
    }
}

So this service will connect our channel automatically. Instead of using ChannelService we could develop our own service responsible for starting and stopping the channel, or we could even move this responsibility to the actual user of the channel.

5. Deploy the EJB-JAR

We are now ready to deploy our application to JBoss AS 7 server. The most important part here is to make sure our server will be running with the appropriate configuration, which means one with JGroups protocol stack defined. It is done using <subsystem xmlns='urn:jboss:domain:jgroups:1.1'>. I am using JBoss AS 7.1.1 and standalone-full-ha.xml config.

Note: Because IntelliJ IDEA doesn’t allow you to easily change the configuration file for your JBoss AS the way Eclipse does, we need to specify it using VM options: -Djboss.server.default.config=standalone-full-ha.xml.

We also need to make sure JGroups uses IPv4 (it sometimes chooses IPv6, which can lead to some weird and hard-to-solve problems). To do so, add the -Djava.net.preferIPv4Stack=true option to the server configuration.

6. Run the Client Application

You can find a rather simple client here; a bare-bones sketch of it follows. It just connects to the JGroups cluster using the specified configuration file. Mind that the multicast port number and address must be set to the same values for server and client. Also remember to add the -Djava.net.preferIPv4Stack=true VM option while running your client. I hope you’ll find this tutorial helpful and that it’ll save you some configuration time. Great thanks to Bela Ban for a lot of important advice and to Paul Ferraro for pointing me to relevant forum topics (like this one or this one) regarding similar problems.
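For orientation, this is roughly what such a client boils down to; the stack file name and cluster name below are assumptions, the real values live in the linked repository:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class Client {

    public static void main(String[] args) throws Exception {
        // The protocol stack (multicast address/port) must match the server side
        JChannel channel = new JChannel("udp.xml"); // assumed config file name
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("Received: " + msg.getObject());
            }
        });
        channel.connect("myChannel"); // assumed cluster name
        System.in.read(); // keep the client alive until Enter is pressed
        channel.close();
    }
}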
 

Reference: Using JGroups Directly From JBoss AS 7 Component from our JCG partner Piotr Nowicki at the Piotr Nowicki’s Homepage blog.



Java StringBuilder myth debunked

The myth

Concatenating two Strings with the plus operator is the source of all evil

– Anonymous Java dev

NOTE: The source code for the tests discussed here can be found on Github

It was back at university that I learned to regard String concatenation in Java using the ‘+’ operator as a deadly performance sin. Recently there was an internal review at Backbase R&D where this recurring mantra was dismissed as a myth, since javac uses StringBuilder under the hood any time you use the plus operator to join Strings. I set out to prove the point and verify the reality under different environments.
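To make the claim concrete, the compiler transformation in question turns plain concatenation into a builder chain; the two statements below compile to roughly equivalent bytecode:

// What you write:
String greeting = "Hello, " + name + "!";

// What javac effectively emits:
String greeting2 = new StringBuilder().append("Hello, ").append(name).append("!").toString();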

The test

Relying on your compiler to optimize your String concatenation means that things might change heavily depending on the JDK vendor you adopt. As far as platform support goes for my daily job, three main vendors should be considered:

  • Oracle JDK
  • IBM JDK
  • ECJ — for developers only

Moreover, while we officially support Java 5 through 6, we are also looking into supporting Java 7 for our products, adding another three-fold level of indirection on top of the three vendors. For the sake of simplicity, the ecj-compiled bytecode will be run with a single JDK, namely Oracle JDK7. I prepared a VirtualBox VM with all the above JDKs installed, then I developed some classes to exercise three different concatenation methods, amounting to three to four concatenations per method invocation, depending on the specific test case. The test classes are run a thousand times per test round, with a total of 100 rounds for each test case. The same VM is used to run all the rounds of the same test case, and it’s restarted across different test cases, all to let the Java runtime perform all the optimizations it can without affecting the other test cases in any way. The default options were used to start all JVMs. More details can be found in the benchmark runner script.

The code

Full code for both test cases and the test suite is available on Github. The following different test cases were produced to measure performance differences of the String concatenation with plus against the direct use of a StringBuilder:

// String concat with plus
String result = "const1" + base;
result = result + "const2";

// String concat with a StringBuilder
new StringBuilder()
        .append("const1")
        .append(base)
        .append("const2")
        .append(append)
        .toString();

// String concat with an initialized StringBuilder
new StringBuilder("const1")
        .append(base)
        .append("const2")
        .append(append)
        .toString();

The general idea is to provide concatenation both at the head and at the tail of constant Strings around a variable. The difference between the last two cases, both making explicit use of StringBuilder, is that the latter uses the 1-arg constructor, which initializes the builder with the initial part of the result.

The results

Enough talking. Below you can have a look at the generated graphs, where each data point corresponds to a single test round (i.e. 1000 executions of the same test class). The discussion of the results and some more juicy details follow.

[Chart: CatPlus (concatenation with the plus operator)]

[Chart: CatSB (explicit StringBuilder)]

[Chart: CatSB2 (StringBuilder with 1-arg constructor)]

The discussion

Oracle JDK5 is the clear loser here, appearing to be in a B league compared to the others. But that’s not really the scope of this exercise, so we’ll gloss over it for the time being. That said, there are two other interesting bits I observe in the above graphs. The first is that there is indeed generally quite a difference between using the plus operator and an explicit StringBuilder, especially on Oracle Java 5, which performs three times worse than the rest of the crew.

The second observation is that while it generally holds for most of the JDKs that an explicit StringBuilder offers up to twice the speed of the regular plus operator, IBM JDK6 seems not to suffer from any performance loss, always averaging 25ms to complete the task in all test cases. A closer look at the generated bytecode reveals some interesting details.

The bytecode

NOTE: the decompiled classes are also available on Github. Across all JDKs, StringBuilders are always used to implement String concatenation, even in the presence of a plus sign. Moreover, across all vendors and versions there is almost no difference at all for the same test case. The only one that stands a bit apart is ecj, which is the only compiler that cleverly optimizes the CatPlus test case to invoke the 1-arg constructor of the StringBuilder instead of the 0-arg version.

Comparing the resulting bytecode exposes what could affect performance in the different scenarios:

  • when concatenating with plus, a new StringBuilder instance is created every time a concatenation happens. This can easily result in performance degradation due to useless constructor invocations, plus more stress on the garbage collector caused by the throw-away instances
  • compilers will take you literally and only initialize the StringBuilder with its 1-arg constructor if and only if you wrote it that way in the original code. This results in four and three invocations of StringBuilder.append for CatSB and CatSB2 respectively.

The conclusion

Bytecode analysis offers the final answer to the original question. Do you need to explicitly use a StringBuilder to improve performance? Yes. The above graphs clearly show that, unless you’re on the IBM JDK6 runtime, you will lose about 50% performance when using the plus operator, even though IBM JDK6 is also the one that performs slightly worse across the candidates when StringBuilders are made explicit. Also, it’s quite interesting to see how JIT optimizations impact the overall performance: for instance, even with different bytecode between the two explicit StringBuilder test cases, the end result is absolutely the same in the long run.

[Image: myth confirmed]
 

Reference: Java StringBuilder myth debunked from our JCG partner Carlo Sciolla at the Skuro blog.



Java EE 7 and EJB 3.2 support in JBoss AS 8

Some of you might be aware that the Public Final Draft of the Java EE 7 spec has been released. Among various other new things, this version of Java EE brings in version 3.2 of the EJB specification, which has some new features compared to the EJB 3.1 spec. I’m quoting here the text in the EJB 3.2 spec summarizing what’s new:

The Enterprise JavaBeans 3.2 architecture extends Enterprise JavaBeans to include the following new functionality and simplifications to the earlier EJB APIs:

  • Support for the following features has been made optional in this release and their description is moved to a separate EJB Optional Features document:
    • EJB 2.1 and earlier Entity Bean Component Contract for Container-Managed Persistence
    • EJB 2.1 and earlier Entity Bean Component Contract for Bean-Managed Persistence
    • Client View of an EJB 2.1 and earlier Entity Bean
    • EJB QL: Query Language for Container-Managed Persistence Query Methods
    • JAX-RPC Based Web Service Endpoints
    • JAX-RPC Web Service Client View
  • Added support for local asynchronous session bean invocations and non-persistent EJB Timer Service to EJB 3.2 Lite.
  • Removed restriction on obtaining the current class loader; replaced “must not” with “should exercise caution” when using the Java I/O package.
  • Added an option for the lifecycle callback interceptor methods of stateful session beans to be executed in a transaction context determined by the lifecycle callback method’s transaction attribute.
  • Added an option to disable passivation of stateful session beans.
  • Extended the TimerService API to query all active timers in the same EJB module.
  • Removed restrictions on javax.ejb.Timer and javax.ejb.TimerHandle references to be used only inside a bean.
  • Relaxed default rules for designating implemented interfaces for a session bean as local or as remote business interfaces.
  • Enhanced the list of standard activation properties.
  • Enhanced embeddable EJBContainer by implementing the AutoCloseable interface.

As can be seen, some of the changes proposed are minor. But there are some which are useful major changes. We’ll have a look at a couple of such changes in this article.

1) New API TimerService.getAllTimers()

EJB 3.2 introduces a new method on the javax.ejb.TimerService interface, named getAllTimers. Previously the TimerService interface had (and still has) a getTimers method, which returns the active timers applicable to the bean on whose TimerService the method is invoked (remember: there’s one TimerService per EJB).

In EJB 3.2, the newly added getAllTimers() method returns all active timers applicable to *all beans within the same EJB module*. Typically an EJB module corresponds to an EJB jar, but it could also be a .war deployment if the EJBs are packaged within the .war. This new getAllTimers() method is a convenience API for applications that need to find all the active timers within the EJB module to which a bean belongs; a usage sketch follows.
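As a hedged illustration (the bean and its logging are invented for this post, not taken from the spec), the new method can be used like this:

import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Singleton
public class TimerReporter {

    @Resource
    private TimerService timerService;

    public void report() {
        // getTimers() only returns timers of *this* bean;
        // getAllTimers() (new in EJB 3.2) spans every bean in the module
        for (Timer timer : timerService.getAllTimers()) {
            System.out.println(timer.getInfo() + " next timeout: " + timer.getNextTimeout());
        }
    }
}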

2) Ability to disable passivation of stateful beans

Those familiar with EJBs will know that the EJB container provides passivation (storing the state of a stateful bean to some secondary store) and activation (loading the saved state of a stateful bean) for stateful beans. However, previous EJB versions had no portable way of disabling passivation of stateful beans if the application desired to do so. EJB 3.2 introduces a way for the application to decide whether a stateful bean can be passivated or not.

By default, a stateful bean is considered “passivation capable” (as in older versions of EJB). However, if you want to disable passivation support for a certain stateful bean, you can do so either via an annotation or via the ejb-jar.xml deployment descriptor. The annotation way is as simple as setting the passivationCapable attribute of the @javax.ejb.Stateful annotation to false. Something like:

// the passivationCapable attribute takes a boolean value
@javax.ejb.Stateful(passivationCapable = false)
public class MyStatefulBean {
    ....
}

Doing it in the ejb-jar.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/ejb-jar_3_2.xsd"
         version="3.2">
    <enterprise-beans>
        <session>
            <ejb-name>foo-bar-bean</ejb-name>
            <ejb-class>org.myapp.FooBarStatefulBean</ejb-class>
            <session-type>Stateful</session-type>
            <!-- passivation-capable element takes either a true or a false value -->
            <passivation-capable>false</passivation-capable>
        </session>
        ...
    </enterprise-beans>
</ejb-jar>

Two important things to note in that ejb-jar.xml are the version="3.2" attribute (along with the http://xmlns.jcp.org/xml/ns/javaee/ejb-jar_3_2.xsd schema location) on the ejb-jar root element, and the passivation-capable element under the session element. Using either of these approaches allows you to disable passivation of stateful beans if you want to do so.

Java EE 7 and EJB 3.2 support in JBoss AS8:

JBoss AS 8 has been adding support for Java EE 7 since the Public Final Draft of the spec was announced. Support for EJB 3.2 has already been added and made available, and some other Java EE 7 changes have also made it into the latest JBoss AS 8 builds. To keep track of the Java EE 7 changes in JBoss AS 8, keep an eye on this JIRA: https://issues.jboss.org/browse/AS7-6553.

To use the already implemented features of Java EE 7 in general or EJB 3.2 in particular, you can download the latest nightly build/binary of JBoss AS from here. Give it a try and let us know how it goes. For any feedback, questions or if you run into any kind of issues, feel free to open a discussion thread in our user forum here.
 

Reference: Java EE 7 and EJB 3.2 support in JBoss AS 8 from our JCG partner Jaikiran Pai at the Jaikiran My Wiki blog.



Thursday, March 28, 2013

JArchitect became free for Java open source contributors

JArchitect is a static analysis tool for Java codebases that provides interactive GUIs and HTML reports for finding overly complex or problematic areas of your code, performing analysis for refactoring, and comparing changes over time. In version 3 a LINQ-like query language was added that makes the tool an extremely powerful reporting engine and can be used to enforce coding-standards rules on your build systems. Here are some useful JArchitect features:

CQLinq

The coolest and most powerful feature of JArchitect is its support for Code Query over LINQ (CQLinq). CQLinq lets developers query Java code using LINQ queries; for example, CQLinq can answer requests such as the following:

- Which methods create objects of a particular class?

from m in Methods where m.CreateA("MyPackage.MyClass") select m

- Which methods assign a particular field?

from m in Methods where m.AssignField("MyNamespace.MyClass.m_Field") select m

- Which complex methods are not sufficiently commented?

from m in Application.Methods
where m.CyclomaticComplexity > 15 && m.PercentageComment < 10
select new { m, m.CyclomaticComplexity, m.PercentageComment }

You can also be warned automatically when a CQLinq query returns a certain result. For example, suppose I don’t want my user interface layer to depend directly on the database layer:

warnif count > 0
from p in Packages where p.IsUsing("DataLayer") && (p.Name == @"UILayer") select p

JArchitect provides more than 80 metrics related to your code organization, code quality and code structure. These metrics can be used in CQLinq to create your own custom coding rules, and JArchitect can be integrated into your build system to enforce the quality of your codebase.

Dependency graph

The dependency graph is very useful for exploring an existing codebase; we can go inside any project, package or class to discover dependencies between code elements.

[Screenshot: dependency graph]

Dependency Matrix

The DSM (Dependency Structure Matrix) is a compact way to represent and navigate across dependencies between components.

[Screenshot: dependency structure matrix]

Why use two different ways, graph and DSM, to represent the same information? Because there is a trade-off:

  1. A graph is more intuitive, but can become totally incomprehensible as the numbers of nodes and edges grow (a few dozen boxes can be enough to produce a graph that is too complex).
  2. A DSM is less intuitive, but can be very efficient at representing a large and complex graph. We say that the DSM scales compared to the graph.

Metric view

In the Metric View, the code base is represented through a treemap. Treemapping is a method for displaying tree-structured data using nested rectangles. The tree structure used in the JArchitect treemap is the usual code hierarchy:

  • Java projects contain packages
  • Packages contain types
  • Types contain methods and fields

In the treemap, rectangles represent code elements. The level option determines the kind of code element represented by unit rectangles; it can take 5 values: project, package, type, method and field. The two screenshots below show the same code base represented at type level on the left, and at package level on the right.

[Screenshots: treemap at type level (left) and at package level (right)]

If a CQLinq query is currently being edited, the set of code elements matched by the query is shown on the treemap as a set of blue rectangles. This is very helpful for visually spotting the code elements concerned by a specific CQLinq query.

Compare build

In software development, products are constantly evolving, so developers and architects must pay attention to modifications in code bases. Modern source code repositories handle incremental development and can enumerate differences between 2 versions of source code files. JArchitect can tell you what has changed between 2 builds, but it does more than simple text comparison. It can distinguish between comment changes and code changes, between what has been added/removed and what has merely been modified. With JArchitect you can see how code metrics are evolving and whether coupling between components is increasing or not. JArchitect can also continuously check modifications and warn you as soon as a change breaks compatibility.

Generate custom reports

JArchitect can analyze source code and Java projects through JArchitect.Console.exe. Each time it analyzes a code base, JArchitect yields a report that informs you about the status of your development. You can customize the sections shown in the report, and you can even provide your own XSL sheet for full customization. You can also build your own set of CQLinq constraints to be checked at each analysis; the report will warn you each time a constraint is violated. This feature makes automatic design and quality regression testing a reality.

JArchitect provides a pro license to all open source Java contributors; it could be useful for analyzing their code bases. So, if you would like to give it a try, check here for more details. Happy coding!



Enterprise Benefits on Service Oriented Architecture – SOA

The current market push is towards SOA, Service-Oriented Architecture. SOA is an impressive term, but we need to understand what benefits we can achieve using it. Before turning to the benefits, it is necessary to establish a common understanding. In brief, a service-oriented architecture is a paradigm which includes services as a layer, where a service is an individual piece of functionality shared across applications. The primary goal of Service-Oriented Architecture is to align business users with information technologies (IT). SOA enables increased business agility, improved business workflows, extensible architecture, enhanced reuse, and a longer life span of applications. Adopting Service-Oriented Architecture realizes many benefits.
 

Loose coupling

An underlying premise in the application of SOA to information technology is the principle of loose coupling, i.e. avoiding, or at least encapsulating, temporal, technology and organizational constraints in the information system design. Loosely coupled systems support late or dynamic binding to other components while running, and can mediate differences in a component’s structure, security model, protocols, and semantics, thus abstracting volatility. Loose coupling in SOA means that services are implemented without impacting other services or applications; the only interaction between an application and the services is through the published interfaces. The application is not concerned with how the services are implemented.

[Figure: loose coupling between applications and services]

Location transparency

Location transparency means that the consumer of the service doesn’t care where the implementation of the service resides. It could be the same server or another server across the internet; consumer calls are agnostic to the service location.

Reusability

SOA is compliant with web services, hence applications running on either platform can consume services running on the other as web services, which facilitates reuse. A properly designed and implemented SOA application provides an infrastructure that makes reuse possible in heterogeneous environments such as C, C++, Java, .NET, etc.

[Figure: service reuse across heterogeneous platforms]

Managed environments can also wrap COBOL legacy systems and present them as services. This has extended the useful life of many core legacy systems indefinitely, no matter what language they originally used.

Rich Testability

Since SOA imposes a layered architecture, it breaks testing into definable testing areas such as services, security, governance, etc. These areas can be tested separately using the best tools and approaches available. For reference, JUnit or NUnit allows the creation of a test suite consisting of a number of procedures, each of which is designed to test a service or component. In an SOA environment, automated testing of frequently changing enterprise services is very common and improves regression-testing efficiency. The other aspect of SOA testing is that each independent, reusable service can be tested on its own, so testers need not test the overall application until all services have passed; a small illustration follows this paragraph. More and better testing usually means fewer defects and a higher overall level of quality.
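As a sketch of that granularity (the service contract here is hypothetical and defined inline just to keep the example self-contained), a unit test can exercise one service in isolation:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreetingServiceTest {

    // Hypothetical service contract, standing in for a real enterprise service
    interface GreetingService {
        String greet(String name);
    }

    private final GreetingService service = new GreetingService() {
        public String greet(String name) {
            return "Hello " + name;
        }
    };

    @Test
    public void greets_by_name() {
        // Passes or fails on its own, without touching the rest of the application
        assertEquals("Hello Ada", service.greet("Ada"));
    }
}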

Parallel Development

Service-Oriented Architecture encourages more parallelism in the development environment because it is based on a layered architecture: SOA consists of an inventory of contract-based independent services which can be developed in parallel.

[Figure: parallel development of independent services]

The figure above shows that developers can build independent services in parallel, with the services completing on the same schedule. Business processes then access the independent services, orchestrate them and provide the concrete business functionality.

Higher Availability & Better Scalability

Each layer of SOA’s multi-layered architecture can be individually clustered with appropriate load balancing to scale up the system. As we know, redundancy is the key to high availability; SOA achieves redundancy by introducing redundant elements via clustering. SOA uses the layered architecture to facilitate the logical decoupling that allows designing a very resilient system at each layer of the stack, from dual communication links, to redundant routers and switches, to clustered servers and redundant databases.

[Figure: clustering and load balancing across SOA layers]

A re-routing load balancer such as F5, combined with a server-side reverse proxy and a software load balancer, further increases availability and scalability in an SOA environment.

Resources

Published at Java Code Geeks with permission of our JCG partner Nitin Kumar.



MOXy’s Object Graphs – Input/Output Partial Models to XML & JSON

Suppose you have a domain model that you want to expose as a RESTful service. The problem is you only want to input/output part of your data. Previously you would have created a separate model representing the subset and then have code to move data between the models. In EclipseLink 2.5.0 we have a new feature called Object Graphs that enables you to easily define partial views on your model.

You can try this out today by downloading an EclipseLink 2.5.0 nightly download starting on March 24, 2013 from:

Java Model

Below is the Java model that we will use for this example. The model represents customer data. We will use an object graph to output just enough information so that someone could contact the customer by phone.

Customer

The @XmlNamedObjectGraph extension is used to specify subsets of the model we wish to marshal/unmarshal. This is done by specifying one or more @XmlNamedAttributeNode annotations. If you want an object graph applied to a property, you can specify a subgraph for it. The subgraph can either be defined as an @XmlNamedSubgraph or as an @XmlNamedObjectGraph on the target class.

package blog.objectgraphs.metadata;

import java.util.List;
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.*;

@XmlNamedObjectGraph(
    name="contact info",
    attributeNodes={
        @XmlNamedAttributeNode("name"),
        @XmlNamedAttributeNode(value="billingAddress", subgraph="location"),
        @XmlNamedAttributeNode(value="phoneNumbers", subgraph="simple")
    },
    subgraphs={
        @XmlNamedSubgraph(
            name="location",
            attributeNodes = {
                @XmlNamedAttributeNode("city"),
                @XmlNamedAttributeNode("province")
            }
        )
    }
)
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {

    @XmlAttribute
    private int id;

    private String name;

    private Address billingAddress;

    private Address shippingAddress;

    @XmlElementWrapper
    @XmlElement(name="phoneNumber")
    private List<PhoneNumber> phoneNumbers;

}

Address

Because we defined the object graph for the Address class as a subgraph on the Customer class there is nothing we need to do here.

package blog.objectgraphs.metadata;

import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
public class Address {

    private String street;

    private String city;

    private String province;

    private String postalCode;

}

PhoneNumber

For the phoneNumbers property on the Customer class we specified that an object graph called simple should be used to scope the data. We will define this object graph on the PhoneNumber class. An advantage of this approach is that it makes the object graph easier to reuse.

package blog.objectgraphs.metadata;

import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.*;

@XmlNamedObjectGraph(
    name="simple",
    attributeNodes={
        @XmlNamedAttributeNode("value")
    }
)
@XmlAccessorType(XmlAccessType.FIELD)
public class PhoneNumber {

    @XmlAttribute
    private String type;

    @XmlValue
    private String value;

}

Demo Code

Demo

In the demo code below we read in an XML document to fully populate our Java model. After marshalling it out to prove that everything was fully mapped, we set an object graph on the marshaller via the MarshallerProperties.OBJECT_GRAPH property, and output a subset to both XML and JSON.

package blog.objectgraphs.metadata;

import java.io.File;
import javax.xml.bind.*;
import org.eclipse.persistence.jaxb.MarshallerProperties;

public class Demo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Customer.class);

        Unmarshaller unmarshaller = jc.createUnmarshaller();
        File xml = new File("src/blog/objectgraphs/metadata/input.xml");
        Customer customer = (Customer) unmarshaller.unmarshal(xml);

        // Output XML
        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(customer, System.out);

        // Output XML - Based on Object Graph
        marshaller.setProperty(MarshallerProperties.OBJECT_GRAPH, "contact info");
        marshaller.marshal(customer, System.out);

        // Output JSON - Based on Object Graph
        marshaller.setProperty(MarshallerProperties.MEDIA_TYPE, "application/json");
        marshaller.setProperty(MarshallerProperties.JSON_INCLUDE_ROOT, false);
        marshaller.setProperty(MarshallerProperties.JSON_WRAPPER_AS_ARRAY_NAME, true);
        marshaller.marshal(customer, System.out);
    }

}

input.xml/Output

We will use the following document to populate our domain model. We will also marshal it back out to demonstrate that all of the content is actually mapped.

<?xml version="1.0" encoding="UTF-8"?>
<customer>
   <name>Jane Doe</name>
   <billingAddress>
      <street>1 A Street</street>
      <city>Any Town</city>
      <province>Ontario</province>
      <postalCode>A1B 2C3</postalCode>
   </billingAddress>
   <shippingAddress>
      <street>2 B Road</street>
      <city>Another Place</city>
      <province>Quebec</province>
      <postalCode>X7Y 8Z9</postalCode>
   </shippingAddress>
   <phoneNumbers>
      <phoneNumber type="work">555-1111</phoneNumber>
      <phoneNumber type="home">555-2222</phoneNumber>
   </phoneNumbers>
</customer>

XML Output Based on Object Graph

The XML below was produced by the exact same model as the previous XML document. The difference is that we leveraged a named object graph to select a subset of the mapped content.

<?xml version="1.0" encoding="UTF-8"?>
<customer>
   <name>Jane Doe</name>
   <billingAddress>
      <city>Any Town</city>
      <province>Ontario</province>
   </billingAddress>
   <phoneNumbers>
      <phoneNumber>555-1111</phoneNumber>
      <phoneNumber>555-2222</phoneNumber>
   </phoneNumbers>
</customer>

JSON Output Based on Object Graph

Below is the same subset as the previous XML document, represented as JSON. We have used the new JSON_WRAPPER_AS_ARRAY_NAME property (see Binding to JSON & XML – Handling Collections) to improve the representation of collection values.

{
   "name" : "Jane Doe",
   "billingAddress" : {
      "city" : "Any Town",
      "province" : "Ontario"
   },
   "phoneNumbers" : [ "555-1111", "555-2222" ]
}
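
For comparison, here is a sketch of what the wrapped collection would typically look like without the JSON_WRAPPER_AS_ARRAY_NAME property. This rendering is an assumption based on MOXy's default handling of wrapped collections, not output captured from this example:

{
   "name" : "Jane Doe",
   "billingAddress" : {
      "city" : "Any Town",
      "province" : "Ontario"
   },
   "phoneNumbers" : {
      "phoneNumber" : [ "555-1111", "555-2222" ]
   }
}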

External Metadata

MOXy also offers an external binding document which allows you to provide metadata for third party objects or apply alternate mappings for your model (see: Mapping Object to Multiple XML Schemas – Weather Example). Below is the mapping document for this example.

<?xml version="1.0"?>
<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm"
    package-name="blog.objectgraphs.metadata"
    xml-accessor-type="FIELD">
    <java-types>
        <java-type name="Customer">
            <xml-named-object-graphs>
                <xml-named-object-graph name="contact info">
                    <xml-named-attribute-node name="name"/>
                    <xml-named-attribute-node name="billingAddress" subgraph="location"/>
                    <xml-named-attribute-node name="phoneNumbers" subgraph="simple"/>
                    <xml-named-subgraph name="location">
                        <xml-named-attribute-node name="city"/>
                        <xml-named-attribute-node name="province"/>
                    </xml-named-subgraph>
                </xml-named-object-graph>
            </xml-named-object-graphs>
            <xml-root-element/>
            <java-attributes>
                <xml-attribute java-attribute="id"/>
                <xml-element java-attribute="phoneNumbers" name="phoneNumber">
                    <xml-element-wrapper/>
                </xml-element>
            </java-attributes>
        </java-type>
        <java-type name="PhoneNumber">
            <xml-named-object-graphs>
                <xml-named-object-graph name="simple">
                    <xml-named-attribute-node name="value"/>
                </xml-named-object-graph>
            </xml-named-object-graphs>
            <java-attributes>
                <xml-attribute java-attribute="type"/>
                <xml-value java-attribute="value"/>
            </java-attributes>
        </java-type>
    </java-types>
</xml-bindings>
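
For completeness, here is a minimal sketch of how such a binding document is typically supplied when creating the JAXBContext. The file location (oxm.xml) and the ExternalMetadataDemo class are assumptions for illustration, not part of the original example:

package blog.objectgraphs.metadata;

import java.util.HashMap;
import java.util.Map;
import javax.xml.bind.JAXBContext;
import org.eclipse.persistence.jaxb.JAXBContextFactory;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

public class ExternalMetadataDemo {

    public static void main(String[] args) throws Exception {
        // Point MOXy at the external binding document (assumed to be on the
        // classpath as oxm.xml) instead of relying on annotations alone.
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(JAXBContextProperties.OXM_METADATA_SOURCE,
                "blog/objectgraphs/metadata/oxm.xml");

        JAXBContext jc = JAXBContextFactory.createContext(
                new Class[] {Customer.class}, properties);

        // From here on, unmarshalling and marshalling work exactly as in the
        // Demo class above.
    }

}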

Reference: MOXy’s Object Graphs – Input/Output Partial Models to XML & JSON from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.



One way messaging with WSO2 ESB

As I posted before I am currently working with the WSO2 ESB. To get a good understanding of this ESB I have been walking through the samples (I haven’t finished all of them yet). Example 12 is about one-way messaging with the ESB and makes use of the TCP Monitor to make it visible. I have described before how to set up a similar tool called ‘TcpTunnelGUI’, but actually I prefer the TCP Monitor. To use the tool, see the manual here or here. By the way, the tool comes with the WSO2 ESB installation so you don’t have to download and install it. Simply go to the ‘$CARBON_HOME/bin’ directory and give the command:

./tcpmon.sh
To see example 12 in action with the TCP Monitor, do the following:

    • Start the WSO2 ESB

This example uses an ESB setup similar to the one for example 1, so start the ESB by navigating to the ‘$CARBON_HOME/bin’ directory in a terminal and entering the following command:
./wso2esb-samples.sh -sn 1

    • Start the Apache Axis server

The next step is to start the Axis2 server where the SimpleStockQuoteService is deployed. To do this, open a new terminal and navigate to the ‘$CARBON_HOME/samples/axis2Server’ directory. Enter the command ./axis2server.sh.

    • Start the TcpMonitor

If you haven’t already done so, start the TCP Monitor. Do this by opening a new terminal, browsing to ‘$CARBON_HOME/bin’ and entering the command ./tcpmon.sh
This should start the Tcp Monitor tool:
[Screenshot: the TCP Monitor tool after startup]

    • Configure the TcpMonitor

We are going to listen on port 8281 and forward the incoming traffic to port 8280 (which is where our ESB is running its proxy service).
Here is how to set this up in the Tcp Monitor:
[Screenshot: configuring the TCP Monitor with listen port 8281 and target port 8280]
After clicking the ‘Add’ button you see the TcpMonitor waiting for a connection:
[Screenshot: the TCP Monitor waiting for a connection]
So let’s send a message through it.

    • Run the Axis client

I made a small change to the statement shown on the example page. Open a new terminal and run the following command from the directory ‘$CARBON_HOME/samples/axis2Client’:

ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dprxurl=http://localhost:8281/ -Dmode=placeorder

    • Check the results

In the TCP Monitor we see that a new line has been added, and in the lower part we see the incoming request and the response:
[Screenshot: the request and response shown in the TCP Monitor]

Here is the request sent by the Axis client:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:To>http://localhost:9000/services/SimpleStockQuoteService</wsa:To>
      <wsa:ReplyTo>
         <wsa:Address>http://www.w3.org/2005/08/addressing/none</wsa:Address>
      </wsa:ReplyTo>
      <wsa:MessageID>urn:uuid:44ba7c6b-1836-4a62-8e40-814813a64022</wsa:MessageID>
      <wsa:Action>urn:placeOrder</wsa:Action>
   </soapenv:Header>
   <soapenv:Body>
      <m0:placeOrder xmlns:m0="http://services.samples">
         <m0:order>
            <m0:price>154.76332953114107</m0:price>
            <m0:quantity>8769</m0:quantity>
            <m0:symbol>IBM</m0:symbol>
         </m0:order>
      </m0:placeOrder>
   </soapenv:Body>
</soapenv:Envelope>

The important thing to notice in the request is the following element in the header:

<wsa:ReplyTo>
   <wsa:Address>http://www.w3.org/2005/08/addressing/none</wsa:Address>
</wsa:ReplyTo>

With this element in the header we tell the web service that we don’t expect a response. So all we get back is the 202 response code, as we can see in the TCP Monitor:

HTTP/1.1 202 Accepted
Content-Type: text/xml; charset=UTF-8
Server: Synapse-HttpComponents-NIO
Date: Thu, 14 Mar 2013 20:30:19 GMT
Transfer-Encoding: chunked

0
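
As an aside, here is a minimal sketch of how an Axis2 client could set this ReplyTo header itself and fire a one-way call. This illustrates the pattern only; it is not the code behind the stockquote ant task, and the class name is made up. The endpoint values come from the command above:

import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;

public class OneWayClientSketch {

    public static void main(String[] args) throws Exception {
        ServiceClient client = new ServiceClient();
        // Needed so the WS-Addressing headers are actually added to the message.
        client.engageModule("addressing");

        Options options = new Options();
        // Send via the TCP Monitor port so the traffic stays visible.
        options.setTo(new EndpointReference("http://localhost:8281/"));
        options.setAction("urn:placeOrder");
        // The "none" address signals that no response is expected.
        options.setReplyTo(new EndpointReference(
                "http://www.w3.org/2005/08/addressing/none"));
        client.setOptions(options);

        // fireAndForget sends the message one-way; the payload would be an
        // OMElement holding the placeOrder body (omitted here).
        // client.fireAndForget(payload);
    }

}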

That completes this example, only a few more to go!

Reference: One way messaging with WSO2 ESB from our JCG partner Pascal Alma at The Pragmatic Integrator blog.



Exception Handling with the Spring 3.2 @ControllerAdvice Annotation

A short time ago, I wrote a blog outlining how I upgraded my Spring sample code to version 3.2 and demonstrating a few of the little ‘gotchas’ that arose. Since then I’ve been perusing Spring 3.2’s new feature list, and whilst it doesn’t contain any revolutionary new changes, which I suspect the guys at Spring are saving for version 4, it does contain a few neat upgrades. The first one that grabbed my attention was the new @ControllerAdvice annotation, which seems to neatly plug a gap in Spring 3 functionality. Let me explain…

If you take a look at my blog on Spring 3 MVC Exception Handlers, you’ll see that the sample code contains a flaky controller with a request handler method that throws an IOException. The IOException is then handled by another method in the same controller that’s annotated with @ExceptionHandler(IOException.class). The problem is that a method annotated with @ExceptionHandler(IOException.class) can only handle IOExceptions thrown by its containing controller. If you want to create a global exception handler that handles exceptions thrown by all controllers, then you have to revert to something like Spring 2’s SimpleMappingExceptionResolver and some XML configuration. Now things are different. To demonstrate the use of @ControllerAdvice I’ve created a simple Spring 3.2 MVC application that you can find on github. The application’s home page ostensibly allows the user to display either their address or credit card details, except that when the user attempts to do this, the associated controllers throw an IOException and the application displays an error page instead.

The controllers that generate the exceptions are fairly straightforward and listed below:

@Controller
public class UserCreditCardController {

  private static final Logger logger = LoggerFactory.getLogger(UserCreditCardController.class);

  /**
   * Whoops, throw an IOException
   */
  @RequestMapping(value = "userdetails", method = RequestMethod.GET)
  public String getCardDetails(Model model) throws IOException {

    logger.info("This will throw an IOException");

    boolean throwException = true;
    if (throwException) {
      throw new IOException("This is my IOException");
    }

    return "home";
  }
}
@Controller
public class UserAddressController {

  private static final Logger logger = LoggerFactory.getLogger(UserAddressController.class);

  /**
   * Whoops, throw an IOException
   */
  @RequestMapping(value = "useraddress", method = RequestMethod.GET)
  public String getUserAddress(Model model) throws IOException {

    logger.info("This will throw an IOException");

    boolean throwException = true;
    if (throwException) {
      throw new IOException("This is my IOException");
    }

    return "home";
  }
}

As you can see, all this code does is map userdetails and useraddress to the getCardDetails(...) and getUserAddress(...) methods respectively. When either of these methods throws an IOException, the exception is caught by the following class:

@ControllerAdvice
public class MyControllerAdviceDemo {

  private static final Logger logger = LoggerFactory.getLogger(MyControllerAdviceDemo.class);

  @Autowired
  private UserDao userDao;

  /**
   * Catch IOException and redirect to a 'personal' page.
   */
  @ExceptionHandler(IOException.class)
  public ModelAndView handleIOException(IOException ex) {

    logger.info("handleIOException - Catching: " + ex.getClass().getSimpleName());
    return errorModelAndView(ex);
  }

  /**
   * Get the user's details for the 'personal' page
   */
  private ModelAndView errorModelAndView(Exception ex) {

    ModelAndView modelAndView = new ModelAndView();
    modelAndView.setViewName("error");
    modelAndView.addObject("name", ex.getClass().getSimpleName());
    modelAndView.addObject("user", userDao.readUserName());

    return modelAndView;
  }
}

The class above is annotated with the new @ControllerAdvice annotation and contains a single public exception-handling method, handleIOException(...). This method catches all IOExceptions thrown by the controllers above, generates a model containing some relevant user information, and then displays an error page. The nice thing about this is that, no matter how many controllers your application contains, when any of them throws an IOException it’ll be handled by the MyControllerAdviceDemo exception handler.

@ModelAttribute and @InitBinder

One final thing to remember is that although the @ControllerAdvice annotation is useful for handling exceptions, it can also be used to globally handle the @ModelAttribute and @InitBinder annotations. The combination of @ControllerAdvice and @ModelAttribute gives you the facility to set up model objects for all controllers in one place, and likewise the combination of @ControllerAdvice and @InitBinder allows you to attach the same custom validator to all your controllers, again, in one place.
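
To illustrate that last point, here is a minimal sketch of a @ControllerAdvice class that combines @ModelAttribute and @InitBinder. The attribute name and the MyCustomValidator class are hypothetical, not part of the sample application:

import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.InitBinder;
import org.springframework.web.bind.annotation.ModelAttribute;

@ControllerAdvice
public class GlobalBindingAdvice {

  // Adds the "appName" attribute to the model used by every controller.
  @ModelAttribute("appName")
  public String appName() {
    return "my-spring-app"; // hypothetical value
  }

  // Called for every request's data binding, attaching one shared validator.
  // (MyCustomValidator is a hypothetical class standing in for your own.)
  @InitBinder
  public void initBinder(WebDataBinder binder) {
    binder.setValidator(new MyCustomValidator());
  }
}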

Reference: Exception Handling with the Spring 3.2 @ControllerAdvice Annotation from our JCG partner Roger Hughes at the Captain Debug’s Blog blog.



Wednesday, March 27, 2013

How to replace a build module with Veripacks

Compare the two trees below. In both cases the goal is to have an application with two independent modules (frontend and reporting), and one shared/common module (domain). The code in frontend shouldn’t be able to access code in reporting, and vice versa. Both modules can use the domain code. Ideally, we would like to check these access rules at build-time.

[Diagram: a multi-module Maven layout on the left vs. a single Veripacks module with top-level packages and package-info.java files on the right]

On the left, there’s a traditional solution using Maven build modules. Each build module has a pretty elaborate pom.xml, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>parent</artifactId>
        <groupId>org.veripacks.battle</groupId>
        <version>1.0.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <name>Veripacks vs Build Modules: Frontend</name>
    <artifactId>frontend</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.veripacks.battle</groupId>
            <artifactId>domain</artifactId>
            <version>1.0.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>

On the right, on the other hand, we have a much simpler structure with only one build module. Each application module now corresponds to one top-level project package (see also this blog on package naming conventions).

Notice the package-info.java files. There, using Veripacks, we can specify which packages are visible where. First of all, we specify that the code in the top-level packages (frontend, reporting and domain) should only be accessible if explicitly imported, using @RequiresImport. Secondly, we specify that we want to access the domain package in frontend and reporting using @Import; e.g.:

@RequiresImport
@Import("org.veripacks.battle.domain")
package org.veripacks.battle.frontend;

import org.veripacks.Import;
import org.veripacks.RequiresImport;
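
The reporting module’s package-info.java would follow the same pattern; a sketch by analogy with the frontend one above:

@RequiresImport
@Import("org.veripacks.battle.domain")
package org.veripacks.battle.reporting;

import org.veripacks.Import;
import org.veripacks.RequiresImport;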

Now, isn’t the Veripacks approach simpler? We still get build-time checking, done by running a simple test (see the README for details). Plus, you can also use other Veripacks features, like the @Export annotation, which is a generalized version of package-private scope that takes package hierarchies into account. There are also other benefits, like trivial sharing of test code (which is kind of hard with Maven), or much easier refactoring (introducing a new application module is a matter of adding a top-level package).

The immediate question that arises is – what about 3rd party libraries? Most probably, we’d like frontend-specific libraries to be accessible only in the frontend module, and reporting-specific ones in the reporting module. Well, not supported yet, but good news – that will be the scope of the next Veripacks release. You can view the example projects on GitHub.

Reference: How to replace a build module with Veripacks from our JCG partner Adam Warski at the Blog of Adam Warski blog.



Drools Planner renames to OptaPlanner: Announcing www.optaplanner.org

We’re proud to announce the renaming of Drools Planner to OptaPlanner, starting with version 6.0.0.Beta1. We’re also happy to unveil its new website: www.optaplanner.org. OptaPlanner optimizes business resource usage. Every organization faces planning problems: providing products or services with a limited set of constrained resources (employees, assets, time and money). OptaPlanner optimizes such planning so you can do more business with fewer resources. Typical use cases include vehicle routing, employee rostering and equipment scheduling.

OptaPlanner is a lightweight, embeddable planning engine written in Java™. It helps normal Java™ programmers solve constraint satisfaction problems efficiently. Under the hood, it combines optimization heuristics and metaheuristics with very efficient score calculation. OptaPlanner is open source software, released under the Apache Software License. It is 100% pure Java™, runs on the JVM and is available in the Maven Central Repository too. For more information, visit the new website: http://www.optaplanner.org

Why change the name?

OptaPlanner is the new name for Drools Planner. OptaPlanner is now standalone, but can still be optionally combined with the Drools rule engine for a powerful declarative approach to planning optimization.

  • OptaPlanner has graduated from the Drools project to become a top-level JBoss Community project.
    • OptaPlanner is not a fork of Drools Planner. We simply renamed it.
    • OptaPlanner (the planning engine) joins its siblings Drools (the rule engine) and jBPM (the workflow engine) in the KIE group.
  • Our commitment to Drools hasn’t changed.
    • The efficient Drools rule engine is still the recommended way to do score calculation.
    • Alternative score calculation options, such as pure Java calculation (no Drools), also remain fully supported.

How will this affect your business?

From a business point of view, there’s little or no change:

  • The mission remains unchanged:
    • We’re still 100% committed to putting business resource optimization in the hands of normal Java developers.
  • The license remains unchanged (Apache Software License 2.0). It’s still the same open source license.
  • The release lifecycle remains unchanged: OptaPlanner is still released at the same time as Drools and jBPM.
  • Red Hat is considering support subscription offerings for OptaPlanner as part of its BRMS and BPM platforms.
    • A Tech Preview of OptaPlanner is targeted for BRMS 6.0.

What has changed?

  • The website has changed to http://www.optaplanner.org
  • The distribution artifacts have changed names:
    • Jar name changes:
      • drools-planner-core-*.jar is now optaplanner-core-*.jar
      • drools-planner-benchmark-*.jar is now optaplanner-benchmark-*.jar
    • Maven groupId and artifactId changes (a sample dependency snippet follows this list):
      • groupId org.drools.planner is now org.optaplanner
      • artifactId drools-planner-core is now optaplanner-core
      • artifactId drools-planner-benchmark is now optaplanner-benchmark
    • As usual, for more information see the Upgrade Recipe in the download zip.
  • The API’s namespace has changed. As usual, see the upgrade recipe on how to deal with this efficiently.
    • Starting from 6.1.0.Final, OptaPlanner will have a 100% backwards compatible API.
  • OptaPlanner gets its own IRC channels on Freenode: #optaplanner and #optaplanner-dev
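
For illustration, a Maven dependency using the new coordinates would look like this (groupId and artifactId from the list above; the version is the announced 6.0.0.Beta1):

<dependency>
    <groupId>org.optaplanner</groupId>
    <artifactId>optaplanner-core</artifactId>
    <version>6.0.0.Beta1</version>
</dependency>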

Reference: Drools Planner renames to OptaPlanner: Announcing www.optaplanner.org from our JCG partner Geoffrey De-Smet at the Drools & jBPM blog.

