Functional Transaction Management – old dog, new tricks!

This blog post is all about the new Transaction Control service, which is proposed to be part of the next release of the OSGi Enterprise Specification. Paremus has been leading this specification work, which arose from a collaboration with one of our customers, McCarthys. The current state of the RFC is available on GitHub, and there’s even an implementation available in Apache Aries.

Before we delve into the cool features of the Transaction Control service, it’s probably worth remembering why we need yet another transaction abstraction…

A Short History of Transaction Management

Software-controlled transactions have existed for a long time — commercial products that are still available now can trace their origins back to the 1960s. Since that time a lot has changed: first we saw the rise of C, then of Object Oriented programming, then of the Web, and now of Microservices.

Over the same time period there was just one significant change to the way that transactions were managed. Originally, transaction boundaries had to be explicitly declared:

    transactionManager.begin();
    try {
        // Work goes in here
    } finally {
        // Note: this commits even when the work has thrown an exception!
        transactionManager.commit();
    }

Unfortunately it turns out that properly handling these transactions is complex. For example, the snippet above is not sufficient to roll back if the work throws an exception!
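
For comparison, correct handling looks more like the following sketch (assuming a JTA-style UserTransaction, and glossing over the fact that the rollback itself can fail):

    transactionManager.begin();
    try {
        // Work goes in here
        transactionManager.commit();
    } catch (Exception e) {
        // Undo the partial work, then let the caller see the failure
        transactionManager.rollback();
        throw e;
    }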

In addition to the complexity of managing transactions correctly, imperative transaction management also adds a lot of noise to the application code. A three-line business method can rapidly grow to fill your screen if you have to suspend an existing transaction, start a new transaction, complete that transaction, and then resume the original transaction, as the sketch below shows.
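
Here, roughly, is the ceremony needed to get REQUIRES_NEW semantics from an imperative JTA TransactionManager (error handling for the rollback and resume calls is again omitted):

    // Suspend the caller's transaction before starting our own
    Transaction original = transactionManager.suspend();
    try {
        transactionManager.begin();
        try {
            // The three lines of business logic go in here
            transactionManager.commit();
        } catch (Exception e) {
            transactionManager.rollback();
            throw e;
        }
    } finally {
        // Restore the caller's transaction, whatever happened
        transactionManager.resume(original);
    }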


Declarative Transaction Management

Recognising that imperative transaction management was a nightmare for developers, the creators of Application Servers (and later the Java EE specifications) designed declarative ways to manage transactions. This avoided the need for users to write any transaction-handling code, dramatically simplifying the model.

    @TransactionAttribute(REQUIRES_NEW)
    public void doWork() {
        // Work goes in here
    }

For many years this model was seen as the gold standard for transaction management. It eliminated a whole class of mistakes, and it simplified code. There were, however, some problems…

Container Dependencies

The original EJB transaction management required specific interfaces to be implemented, specific classes to be extended, and pre-deployment steps before code could run. Each of these things adds its own layer of complexity to the code, and ties your application to a container.

The rise of the Spring framework was a reaction to this complexity, and the heavy-touch management of Java EE. Instead Spring focussed on “pure POJO” programming, designed to make your code easily portable, runnable and testable inside or outside the container.

While Spring did a much better job of hiding the dependencies, the fundamental problem with a pure declarative approach is that there must be a container somewhere. Without a container there is no code to start or end the transaction. Spring’s promise of simpler, container-independent components was only possible because you were still implicitly tied to the Spring container, even when running tests. With Spring’s transaction management we’ve just swapped one container for another.

Fun with Proxies

Another idiosyncrasy of declarative transaction management is that it must proxy your POJO in order to work, and that means that not everything will behave as you expect. Firstly, some proxying is limited to interfaces only, meaning that you can only have transaction management on interface methods. In other cases the container may be able to use subclassing, which allows transaction management on public and protected methods, but still not on private or final methods!

You may be able to live with these limitations; however, there is a much more insidious problem with declarative qualities of service added through proxying.

The proxy must be invoked to trigger the quality of service

What does this actually mean? To put it simply: if you call my object’s doWork() method from outside the object, the call goes through the proxy and a transaction is started. If, as in the example below, doOtherWork() calls doWork() internally, the call never touches the proxy, and no transaction is started!

    @TransactionAttribute(REQUIRES_NEW)
    public void doWork() {
        // Work goes in here
    }

    @TransactionAttribute(SUPPORTS)
    public void doOtherWork() {
        // A bit of non-transactional work goes in here

        // This internal call does not hit the proxy,
        // so no new transaction is started
        doWork();
    }

Enlisting Resources in the Transaction

The third big problem that people encounter with declarative transaction management is that it becomes incredibly hard to work out whether the resource that you’re using is actually participating in the transaction or not. If I get a connection from a DataSource then it is the container, not my application, that makes sure the connection is linked to the transaction. While it is good to avoid adding this code explicitly, it does mean that my application is reliant on yet more “container magic” behind the scenes. If I use a non-JTA datasource, or I forget to attach my datasource to the Spring PlatformTransactionManager, then my resource access will not be transactional!
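
To make the problem concrete, nothing in the following hypothetical snippet tells you whether the insert joins the surrounding transaction; that depends entirely on how the dataSource was wired up:

    // Enlisted in the transaction only if the container has wrapped
    // this DataSource; the application code looks identical either way
    Connection conn = dataSource.getConnection();
    PreparedStatement ps = conn.prepareStatement(
            "Insert into MESSAGES values ( ?, ? )");
    ps.setString(1, user);
    ps.setString(2, message);
    ps.executeUpdate();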

Fixing the Problems with Declarative Transaction Management

The big problem with declarative transaction management was that it tried to take away too much from the application code, replacing it with “container magic”. The problem with relying on magic is that the resulting system ends up being more complex, not less. We therefore should be aiming to simplify and minimise transaction management code, not eliminate it entirely.


Explicit Transaction Management Using Scoped Work

With the advent of Java 8 lambda expressions and functional interfaces, we now have a new, better way to define transactional work. Rather than a messy try/catch/finally block, we can now simply pass a closure to be run in a transaction!

    txControl.required(() -> {
            // Work goes in here
            return null; // the scoped work is a Callable, so it must return a value
        });

As the work we pass is actually a closure (not just a function) it can capture variables and state from its context. For example, this closure captures the parameters passed to the saveMessage method:

    public void saveMessage(String user, String message) {
        txControl.required(() -> {
                PreparedStatement ps = connection.prepareStatement(
                        "Insert into MESSAGES values ( ?, ? )");
                ps.setString(1, user);
                ps.setString(2, message);
                return ps.executeUpdate();
            });
    }
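
Exception handling is also far less error-prone than in the imperative model. Assuming the behaviour described in the RFC, if the scoped work throws an exception then the transaction is rolled back and the exception re-emerges wrapped in a ScopedWorkException. A minimal sketch:

    try {
        txControl.required(() -> {
                // Work goes in here
                throw new SQLException("the database is unhappy");
            });
    } catch (ScopedWorkException swe) {
        // The transaction has already been rolled back; the original
        // exception is available as the cause
        Throwable cause = swe.getCause();
    }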


Explicit Resource Enlistment

While explicitly enlisting resources with every transaction is cumbersome, there is value in explicitly linking a resource with the transaction runtime. This link means that the resource can be relied upon without any magic in the background, whether running in a unit test, or having been migrated to a pure Java SE microservice. The Transaction Control Service creates this link using a ResourceProvider. A ResourceProvider is a generic interface that is usually specialised to return a particular resource type. For example, the JDBCConnectionProvider provides a java.sql.Connection resource object when connected to a TransactionControl service.

The important thing about the resource objects returned by the provider is that they are thread-safe, and automatically integrate with the scope of the current thread. In particular, you don’t need to worry about tidying up the resource. It can be retrieved once and then cached and used in any piece of scoped work. For example the following Declarative Services component provides a transactional service using JDBC and the Transaction Control service.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.transaction.control.TransactionControl;
    import org.osgi.service.transaction.control.jdbc.JDBCConnectionProvider;

    @Component
    public class MyDaoImpl implements MyDao {

        @Reference
        TransactionControl control;

        Connection dbConn;

        @Reference
        void setResource(JDBCConnectionProvider provider) {
            // The Connection is thread-safe and scope-aware, so it can
            // be retrieved once, cached, and used in any scoped work
            dbConn = provider.getResource(control);
        }

        @Override
        public void saveMessage(String user, String message) {
            control.required(() -> {
                    PreparedStatement ps = dbConn.prepareStatement(
                            "Insert into MESSAGES values ( ?, ? )");

                    ps.setString(1, user);
                    ps.setString(2, message);

                    return ps.executeUpdate();
                });
        }

        @Override
        public List<String> getMessagesForUser(String user) {

            return control.supports(() -> {
                    PreparedStatement ps = dbConn.prepareStatement(
                            "Select MESSAGE FROM MESSAGES WHERE USER = ?");

                    ps.setString(1, user);

                    List<String> result = new ArrayList<>();

                    ResultSet rs = ps.executeQuery();

                    while(rs.next()) {
                        result.add(rs.getString(1));
                    }

                    return result;
                });
        }
    }
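
Note the contrast with the proxy problem from earlier: because scopes are explicit, it makes no difference whether a method is reached via a proxy, a service, or a plain internal call. For example (a hypothetical caller; the myDao and txControl references below are assumed), several DAO calls can be composed into a single transaction by wrapping them in an outer piece of scoped work, because required joins an already-active transaction rather than starting a new one:

    // A hypothetical caller, using the same TransactionControl
    // service that MyDaoImpl uses
    txControl.required(() -> {
            // Both updates join this one outer transaction
            myDao.saveMessage("alice", "hello");
            myDao.saveMessage("bob", "world");
            return null;
        });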


Simpler Resource Lifecycles

You may have noticed that the previous example doesn’t close any of the JDBC resources. Normally this would cause all sorts of unpleasant leaks; in this case, however, we can rely on the Transaction Control service. Because we have explicitly defined a scope using a closure, it is clear when that scope ends. At the end of the scope all of the resources used in that scope are notified, and can tidy themselves up. In many ways this is like a try-with-resources block:

    try(Connection c = getConnection()) {
        // Work goes in here
    }

The main advantage of scoped work over try-with-resources is that we don’t need to know which resources are going to be used in the scope. This is particularly useful when calling out to external services.

In Summary

The Transaction Control Service is a new way of thinking about transaction management. By applying functional techniques we can enable simple, reliable transaction management without the need for any containers or proxying. This shift enables microservices to be written more maintainably, and with fewer dependencies.

For more information please do look at the Apache Aries site, and at the OSGi RFC. Real implementations for transactional JDBC and JPA are available now — feel free to try them out!



The Paremus Service Fabric has no Opinions…

The emergence of the ‘Opinionated’ Microservices Platform continues apace.

We are told that business systems should be REST/Microservice based. We are told that containers should be used everywhere and that container images are the deployment artefact du jour: let’s face it, Virtual Machine Images are so last year! And of course, Reactive is the new cool.

A depressing state of affairs. These messages are at best half-truths. At worst – the IT industry’s ongoing equivalent of Snake Oil.

So (more…)



Modularity, Microservices and Containers

My colleague, Derek Baum, had an article published in Jaxenter last week called “Modularity, Microservices and Containers”.

The article discusses how Microservices and Containers are examples of a general industry drive towards modularity. It goes on to demonstrate how OSGi’s Service-centric approach, its Requirements & Capabilities model, and the OSGi Remote Services specification provide an excellent solution for a containerised microservices solution.

Yes, that’s right: these concepts/technologies/trends aren’t competing with each other, as many would have you believe. In fact they can all be complementary when used with Paremus Packager.

Paremus Packager integrates the lifecycle of external (non-Java) applications with OSGi and provides a consistent means to start, stop, configure and inter-connect services. We will be making an early access release of the new Docker-based Paremus Packager available to a restricted audience in Q1 2016. You can sign up for this online with just your email address.

The article is a follow-up to the presentation of the same name given at the OSGi Community Event in 2015 by Neil Bartlett [ Slides / Video ], and these are a good source of info if you would like to learn more. Of course you can also add comments below if you have any questions.



Asynchronous Event Streams @ MadridJUG

Some of the Paremus team were in Madrid last week (Jan 11 to 14, 2016) for the OSGi Expert Group meetings and also an OSGi Alliance Board Meeting.

Thanks to Jorge Ferrer (@jorgeferrer) for the picture from Twitter.

While we were in town, our CTO, Tim Ward, was invited to speak at the MadridJUG on the work he has been leading within the OSGi Enterprise Expert Group on Asynchronous Event Streams. This relatively new OSGi Alliance specification is highly relevant to the use of OSGi in IoT as well as in Enterprise.

The subject proved to be an interesting topic to the MadridJUG members with good attendance and lots of questions.

Thanks to Liferay Spain (@liferay_es) for hosting the Meetup and MadridJUG (@MadridJUG) for inviting Tim to present.


Tim also presented this talk at the OSGi Community Event last year and you can find a video of this here and the slides here.


It looks like the OSGi Alliance will be having face-to-face meetings in Chicago, Ghent and Ludwigsburg in the coming months. If you are interested in getting someone from Paremus to come and present on anything OSGi, or on our products, while we are in your neighbourhood then please let us know.




2016 and the OSGi Alliance?

A decade ago Paremus fused Java’s dynamic service framework (Jini) with OSGi to create a modular, distributed Microservices platform known as Infiniflow, thereby creating the ancestor of the current Paremus Service Fabric. Seeing the importance of OSGi (strong modularity and isolation, dynamic dependency resolution, semantic versioning, and a nascent but potentially powerful service architecture, all of which are defined by open industry standards), Paremus joined the OSGi Alliance as an ‘Adopter Associate’ – a small step, a minimal commitment.

As Service Fabric concepts evolved, (more…)

