What’s New in Declarative Services 1.3?

I’d like to tell you about two really cool new features in OSGi Declarative
Services. They’re so cool that I feel kind of guilty for blogging about them,
because some of the kudos may accrue to me even though I have done absolutely
none of the work to either specify or implement them. So bear in mind that I’m
only your humble messenger.

Configuration Property Types

As a bit of background, in Declarative Services (DS) it has always been possible to write configurable
components by passing a Map into the activate method:

public class ServerComponent {

  public void activate(Map<String, Object> configProps) {
    String bindHost = (String) configProps.get("host");
    int port = (Integer) configProps.get("port");
    ServerSocket sock = new ServerSocket();
    sock.bind(new InetSocketAddress(bindHost, port));
    // ...
  }
}

There are some great things about the way DS handles configuration. First, the
component itself has no idea about the origin of the config data… it could
come from a properties file, or an XML file, or a database record, or
transmitted over the air from a cellular provider, etc. Because the component
is not coupled to the physical mechanism, we can change that mechanism easily
without updating the code of all our components.

Second, the configuration is passed all-at-once and can be dynamically changed.
Some other DI frameworks inject configuration data one field at a time
(setHost, setPort…). Where they support dynamic reconfiguration at all, they
tend to need methods to let the component know that a series of config changes
is starting and ending, so that it knows when it’s safe to reconfigure. Otherwise
the component could end up using a mixture of old and new config.

There is a problem, however. Did you notice that my code sample above had no
error handling at all? What if the host property was missing from the map?
What if the port was given as a String rather than an integer? What if it
contained invalid numeric characters? Some fields may be optional, where do we
specify the defaults? It turns out that most components have to do quite a bit
of work to convert, parse and validate their configuration. There are libraries
that encapsulate this – such as the Configurable API from bnd – but
they add a runtime dependency, which is inconvenient (though manageable in OSGi
with static linking).
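To make that boilerplate concrete, here is a sketch in plain Java (the class and helper names are illustrative, not from any library) of the manual conversion, defaulting and validation a component would otherwise write by hand:

```java
import java.util.Map;

// Illustrative sketch of the hand-written parsing that DS 1.3 makes unnecessary:
// every property must be looked up, type-checked, converted and defaulted manually.
class ServerConfigParser {
    static final int DEFAULT_PORT = 8080;

    static String host(Map<String, Object> props) {
        Object value = props.get("host");
        if (value == null)
            throw new IllegalArgumentException("missing required property: host");
        return value.toString();
    }

    static int port(Map<String, Object> props) {
        Object value = props.get("port");
        if (value == null)
            return DEFAULT_PORT;              // optional field: apply the default
        if (value instanceof Integer)
            return (Integer) value;
        try {
            // configuration sources often deliver numbers as Strings
            return Integer.parseInt(value.toString());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("invalid port: " + value, e);
        }
    }
}
```

Multiply this by every property of every component and the attraction of letting the DS runtime do it becomes obvious.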

Declarative Services 1.3 – part of the OSGi Release 6 compendium spec, which
was released to the public last week – improves this enormously. It’s now
possible to define a custom type for the configuration, rather than a generic
Map. For example:

@interface ServerConfig {
  String host() default "";
  int port() default 8080;
  boolean enableSSL() default false;
}

public class ServerComponent {

  public void activate(ServerConfig cfg) {
    ServerSocket sock = new ServerSocket();
    sock.bind(new InetSocketAddress(cfg.host(), cfg.port()));
    // ...
  }
}

Now the DS runtime is doing all the work of pulling out fields, converting them
to the correct type and so on. If there are any conversion errors, or missing
fields that don’t have defaults, then DS will refuse to load our component and
will write an appropriate message into the OSGi Log.

This feature is known as “Configuration Property Types”. You may wonder why
an annotation is used rather than an interface, given that we don’t actually
use it like an annotation. There are a few reasons: in
annotations it’s trivial to provide defaults; the methods cannot have
parameters; and the return types allowed in an annotation are restricted to
types that mesh well with configuration data.

Of course this version of DS interoperates with the OSGi Metatype specification,
so that the administrators of our application can know what fields and types are
actually expected. The metadata is generated at build time if we just add a couple
more annotations:

@ObjectClassDefinition(name = "Server Configuration")
@interface ServerConfig {
  String host() default "";
  int port() default 8080;
  boolean enableSSL() default false;
}

@Designate(ocd = ServerConfig.class)
public class ServerComponent {
  // ...
}

… and, from this definition, tools like the Apache Felix Web Console can automatically
generate an admin GUI for our component.

Prototype Scope Services

The second really cool feature is support for prototype scope in services.
The prototype scope has been available in the OSGi Core R6 specification for
about a year now, but there was a significant gap between the release of the
Core and Compendium documents. So until now we had to drop down to the
low-level OSGi API in order to use prototype scope. In DS 1.3 we can use
annotations and a more convenient runtime API instead.

Prototype scope is where a service has the ability to create multiple instances
on demand. Traditionally, most OSGi services were conceptually singletons: each
registry entry corresponded to a single back-end service instance that was
shared by all consumers. (Aside: there were additionally bundle-scoped
services, where each consumer bundle got its own instance of the service
object, but if you used a service multiple times from within the same bundle
you would still share the same instance).

This model had to be enhanced, since sometimes each consumer really needs to
have its own instance of the service. Also some consumers need to have
programmatic control over the creation and destruction of instances, in order
to fit in with an external lifecycle. An example of this would be in web
requests – some web standards (for example JAX-RS) require services to be
instantiated and then destroyed in every request. Hence the prototype scope was
added. This is an opt-in feature: both the provider and the consumer of the
service need to be aware that it is being used.

In DS 1.3, a provider can opt in to the prototype scope very easily with an
attribute on the @Component annotation:

@Component(scope = ServiceScope.PROTOTYPE)
public class MyComponent {
  // ...
}

As a consumer there are a few more choices. When we consume a service with the
@Reference annotation, we can use an attribute to request a new instance per
component:

@Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
public void setFoo(Foo foo) {
  // ...
}

Using this annotation, each component that uses the Foo service will get its
own instance. However within each component, the same Foo instance will be used
repeatedly. Alternatively, the consumer can create and destroy instances
programmatically whenever it wants – this requires the consumer to use a bit
of API from DS as follows:

private ComponentServiceObjects<Foo> fooFactory;

@Reference(scope = ReferenceScope.PROTOTYPE_REQUIRED)
public void setFooFactory(ComponentServiceObjects<Foo> fooFactory) {
  this.fooFactory = fooFactory;
}

public void doSomething() {
  Foo myFoo = fooFactory.getService();
  try {
    // do something interesting with myFoo ...
  } finally {
    fooFactory.ungetService(myFoo);
  }
}
The use of the ComponentServiceObjects interface means we are no longer
building strict POJOs, but the interface is still easy to mock in tests.
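To illustrate that testability, the following sketch hand-rolls a stub. Note that it declares a minimal local stand-in for the ComponentServiceObjects interface (the real one lives in the org.osgi.service.component package) purely so the example compiles without the OSGi API on the classpath; the Foo class is likewise invented for illustration:

```java
// Minimal local stand-in for org.osgi.service.component.ComponentServiceObjects,
// declared here only so this sketch compiles without the OSGi API present.
interface ServiceObjectsStandIn<S> {
    S getService();
    void ungetService(S service);
}

class Foo {
    String greet() { return "hello"; }
}

// A test double that always returns one instance and records releases,
// letting a unit test verify the getService/ungetService pairing.
class StubFooFactory implements ServiceObjectsStandIn<Foo> {
    final Foo instance = new Foo();
    int releaseCount = 0;

    public Foo getService() { return instance; }
    public void ungetService(Foo service) { releaseCount++; }
}
```

A test simply hands the stub to the component under test, then asserts that every getService call was balanced by an ungetService call.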

Getting It

The specification for DS 1.3 is available now from http://www.osgi.org/Download/Release6. Felix SCR 2.0 implements the specification and can be downloaded from http://felix.apache.org/downloads.cgi. For full metatype support you will need Felix Metatype 1.1 from the same page.

To build components using the annotations you will need a development build of bnd/Bndtools 3.0. To install this, go to http://bndtools.org/installation.html and follow the instructions for installing a developer snapshot build.

Announcing the bnd Maven Plugin

I am pleased to announce the availability of a new Maven plugin for building
OSGi bundles, based on bnd. The plugin is called bnd-maven-plugin and it will
be developed as part of the wider bnd and Bndtools projects.

The plugin has been kindly donated by one of Paremus’ customers – a very well
known company whose identity we expect to be able to confirm in due course –
under the terms of the Apache License version 2.

You might very well wonder: why did we build a new Maven plugin for OSGi
development? Most OSGi developers who use Maven today are using the
maven-bundle-plugin from Apache Felix. That plugin is mature and stable but
it does have a number of issues that our customer wanted resolved.

The first major issue with maven-bundle-plugin is that it replaces Maven’s
default plugin for JAR generation (the maven-jar-plugin). This means that it
must use a different packaging type – “bundle” instead of “jar” – and must
use the extensions=true setting to override Maven’s default lifecycle. As a
result, the output artifacts are harder to consume in ordinary non-OSGi
projects. Therefore, many Maven developers either don’t bother to release OSGi
bundles at all because of the hassle, or they release both a “plain JAR” and
a separate OSGi bundle. This is deeply frustrating because OSGi bundles already
are plain JARs! They can be used unaltered in non-OSGi applications.

The new bnd-maven-plugin instead hooks into Maven’s process-classes phase. It
prepares all of the generated resources, such as MANIFEST.MF, Declarative
Services descriptors, Metatype XML files and so on, under the target/classes
directory, where they will be automatically included by the standard
maven-jar-plugin (with one caveat discussed below).

The second issue is that maven-bundle-plugin has some questionable features and
design choices. For example it automatically exports all packages that do not
contain the substrings “impl” or “internal”. The motivation was
understandable at the time – it is closer to normal Java with all packages
being implicitly exported – but this is really just wrong from a modularity point
of view, and goes against OSGi best practices. The bnd-maven-plugin instead
includes all packages from the project in the bundle but exports nothing by
default. The developer must choose explicitly which packages are exported.
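For example, a minimal bnd.bnd might contain just one instruction (the package name is illustrative):

```
Export-Package: org.example.api
```

All of the project’s packages are then packed into the bundle, but only org.example.api appears in the manifest’s Export-Package header.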

The third issue with maven-bundle-plugin is that it is difficult to use in an
incremental build environment. In Bndtools we have had literally years of
discussions about how to best accommodate Maven users while still supporting
the rapid development cycle that Bndtools users love. The new plugin will make
this easier: for example after the process-classes phase has completed, the
content of the target/classes directory is already a valid OSGi bundle,
albeit in folder form rather than a JAR file. Also the new plugin takes its bnd
instructions from the bnd.bnd file in each project, rather than from a funky
XML encoding of those instructions deep inside the POM.

Finally, we believe that by delivering a Maven plugin directly from the bnd
project, we can more quickly take advantage of new features in bnd, and develop
features in bnd that directly benefit Maven users. Note however that the Gradle
and Ant integrations of bnd will still be supported: we want everybody to use
bnd to build their OSGi bundles, irrespective of IDE and build system.

So much for motivation, let’s look at some of the technical details.

The plugin has been released as version 2.4.1 because there was a previous
experimental plugin with the same name. That experiment was not a success, and
we abandoned it some time ago – the new plugin is a complete rewrite. The
version of the plugin will track the version of bnd itself, so version 2.4.1 of
the plugin builds against bnd 2.4.1, which is released and available from Maven
Central. From now on we will work on version 3.0, which will track version 3.0
of bnd.

The plugin generates the JAR manifest into
target/classes/META-INF/MANIFEST.MF, but by default the maven-jar-plugin
ignores this file and replaces it with an empty, non-OSGi manifest. In order
to pick up the generated manifest it is necessary to set the following
configuration, which can be done just once in your parent POM:
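A sketch of that configuration: point the maven-jar-plugin at the manifest that bnd-maven-plugin generates, as described above:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <!-- pick up the manifest generated into target/classes -->
      <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
    </archive>
  </configuration>
</plugin>
```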


The plugin is available now from Maven Central or from jCenter.
It is not yet thoroughly documented, although bnd itself does have very good
documentation. All options from bnd automatically
work in bnd-maven-plugin, with the exception of “sub bundles” (-sub) since
Maven projects are meant to produce exactly one output artifact per project.

To add the plugin to your project, insert the following into your POM:
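A sketch of the plugin declaration, using the coordinates under which the 2.4.1 release is published (the goal name is my best understanding; check the bnd documentation if it fails to bind):

```xml
<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-maven-plugin</artifactId>
  <version>2.4.1</version>
  <executions>
    <execution>
      <goals>
        <!-- runs during process-classes and applies the project's bnd.bnd -->
        <goal>bnd-process</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```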



We have example projects to help you get started. Questions, problems or offers of help can be directed to the bnd/Bndtools mailing list.

Microservices, Platforms & OSGi?

The concept of a ‘Service’ is hardly new. In the late 1990’s Service Oriented Architecture enabled large monolithic business systems to be decomposed into a number of smaller loosely coupled business components.

Modularity all the way Down

The purpose of any ‘Service’ strategy should be to break large monolithic entities into groups of smaller interacting components; i.e. modular systems. The interaction between the components in a modular system is defined by some form of ‘Contract’, ‘Service Agreement’ or ‘Promise’: the nature of which dictates the interaction model between the components; i.e. the ‘architecture’.

Relative to their monolithic counterparts, well designed modular systems are by their nature significantly simpler to change and maintain. Benefits include:

  • Increased Agility – A subset of the components used in the composite system may be rapidly changed in-order to meet new, previously unforeseen business requirements or opportunities.
  • Reduced Maintenance Costs – As long as the contracts between components remain unchanged, the internal implementation of each component can be independently refactored and maintained. The ability to cost effectively maintain the composite system avoids the accrual of technical debt.

Microservices simply continue this modularity trend: i.e. the process of decomposition by breaking business components into a number of finer grained functional components.

The justification is again the same:

  1. To build more scalable, robust and maintainable systems.
  2. To simplify development by assembling composite systems from a number of small single function software components: these being simpler to develop in-house or, where appropriate, to source from third parties.

However the logic that argues that business services should be composed of business components, and that business components should be composed of simpler single functional microservices, also applies to the internal implementation of EACH microservice.

If a microservice is to be maintainable, the internal implementation must be modular.

Microservices: It’s not an Architecture

Modularity concepts are fundamental and will underpin any successful IT strategy. Yet modularity is frequently misunderstood.

One common mistake is to confuse general modularity principles with architectural approaches (currently fashionable or otherwise), issues encountered with vendor implementations, or ill-conceived industry standards. As explained by Kirk Knoernschild, structural modularity and architectural patterns are actually orthogonal concerns.

In the late 1990’s Service Oriented Architecture enabled large monolithic business systems to be decomposed into a number of smaller, but still coarse grained, loosely coupled business components. However, outside the area of Business to Business (B2B), the original implementations – i.e. WS-* protocols and UDDI Directories – are now widely seen as a mistake: rather REST and messaging protocols (either directly or indirectly via a message broker) are the current popular approaches.

Yet looking behind these architectural differences, one can see that modularity principles have been successfully adopted by each approach. Indeed, more so than the advent and influence of the virtual machine, the application of modularity through generic SOA principles is directly responsible for the increasing dominance of today’s commercial Web and Cloud based Services.

No Free Lunch

Being built from a number of simple functional units, a microservices based business application is, in principle, simpler to create, maintain and change.

Yet, as noted by Senior Gartner Analyst Gary Olliffe (http://blogs.gartner.com/gary-olliffe/2015/01/30/microservices-guts-on-the-outside/), microservices are not a zero cost option. Gary describes microservices from two perspectives:

  1. Internal Structure: Usually a single function service (hence the term microservice) which – in principle – is simple to develop. Also, as the communication mechanism is usually embedded, a microservice is easy to unit test because heavyweight application servers are not required.
  2. External Structure: This refers to the new platform capabilities that are now needed to help manage the interdependencies, life-cycle and configurations between the myriad of microservices. Whereas the unit test was simple, the integration testing of the complete solution requires the deployment and configuration of all these inter-related components.

To conclude, the ‘observable’ composite system is now significantly more complex than the monolithic application it replaced.

The ideal microservices platform?

The purpose of a microservices platform is to shield this runtime complexity from Operations: to automate the discovery, configuration, life-cycle management and governance of a possibly changing set of interdependent runtime entities.

What are the fundamental attributes of an ideal microservices platform? Unfortunately there is no one simple answer, as it depends on context.

All businesses will value a platform’s ability to abstract and shield hosted microservices from the underlying compute resource used. However, whereas a business providing vanilla hosted websites will have very simple application requirements, a business comprised of many business units, each potentially involved in different markets, may have extremely diverse requirements of a common platform.

For the latter group the platform solution must allow for Architectural Agility – meaning it must not constrain either:

  1. The internal structure of the functional components.
  2. The external structure: The type of interactions allowed between these components.

To promote interoperability and component re-use, and to prevent direct or indirect (via ‘OSS’) vendor lock-in, the platform solution should also be based upon relevant industry standards.

Finally, given that the composite application cannot now function without the microservices platform, the platform itself must be engineered to new levels of robustness and agility, and must be evolvable: the platform itself must be extremely modular.

Current Industry Fashions

It is my opinion that the current generation of popular ‘microservice platform’ offerings fall well short of these objectives. The reason why is easy to understand.

Mainstream vendors pursue ‘low hanging fruit’ by focusing on enabling developers to quickly and easily assemble simple ‘microservice’ based applications. For example, the deployment of simple three Tier Web based applications built upon popular RESTful architectural patterns.


  • Opaque software artifacts are deployed via a light weight container.
  • The container provides isolation in multi-tenancy environments.
  • The services are usually simple REST based services.
  • The platform, which itself is not dissimilar from the previous generation of Grid Compute solutions, provides some level of deployment, discovery and configuration of these simple services.

Yet while providing instant gratification, these same platforms fail to provide sufficient flexibility for more complex applications or diverse business needs:

  1. The platform solution may or may not be transparent to the deployed applications: some platforms enforce rigid restrictions on inter-container communication.
  2. The platform may fail to adequately address the scoping and versioning of (i.e. the interaction between) these hosted microservices.
  3. The platform may only support a subset of interaction patterns or middleware options.

If one finds oneself ‘force fitting’ a broad set of business applications to a limited set of architectural patterns provided by the microservices platform – then the platform is most probably an inappropriate choice for your organization. More importantly, the platform will continue to remain an inappropriate choice – a point of constriction, reducing business agility without delivering the long term cost saving benefits provided via a modularity first strategy.

Microservices and OSGi

OSGi began in the late 1990’s as the open industry standard for enforcing structural modularity for Java code running within a JVM. OSGi bundles enforce strong isolation:

  • The internal implementation is private to each bundle.
  • The behaviour exposed by the bundle is described by its stated ‘Capabilities’.
  • The dependencies a bundle has on its local environment are described by its stated ‘Requirements’.
  • Finally, semantic versioning is used: a bundle’s Capabilities are versioned (major.minor.micro), while its Requirements specify the acceptable version ranges within which the Capabilities of third parties must fall.
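In manifest terms, a versioned Capability and a matching Requirement look like this (the package name and version numbers are illustrative):

```
Export-Package: org.example.api;version="1.2.0"
Import-Package: org.example.api;version="[1.2,2.0)"
```

The importing bundle accepts any provider of org.example.api from version 1.2 up to, but excluding, 2.0.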

Due to this strong isolation, OSGi bundles may be dynamically loaded into or unloaded from a running JVM. As the bundles are self-describing, a process known as ‘resolution’ may be used to ensure that all inter-related bundles are automatically loaded into the runtime and wired together.

These aspects of OSGi all relate to structural modularity and the concepts are quite generic. Self-describing semantically versioned artifacts are important concepts at all layers of the structural hierarchy.

In an orthogonal decision OSGi also decouples the interaction between bundles via a local Service Registry. In so doing the OSGi Alliance created an extremely powerful microservices architecture for Java. Due to OSGi’s modularity first mindset, OSGi’s service architecture is extremely powerful and evolvable with advertised Service Contracts representing:

  • Synchronous or asynchronous remote procedure calls – with a choice of language specific or agnostic serialization mechanisms.
  • Event based interactions.
  • Message based interaction.
  • Actor style interactions.
  • Or, RESTful based interactions.

Where appropriate, pluggable discovery and serialisation mechanisms are supported.

But OSGi is difficult?

Not anymore.

Well designed modular systems do require some thought. However the OSGi Alliance is actively making OSGi simpler to adopt via ongoing investment in tooling (see http://bndtools.org) and in tutorials for typical application patterns. For example, the OSGi enRoute project demonstrates the creation of a simple OSGi based modular application; the enRoute tutorial shows that, for such requirements, OSGi can be as easy to use as Spring Boot or Dropwizard. Additional OSGi enRoute tutorials are planned which will address other common architectural patterns used in IoT and the Enterprise.

The implementation of sophisticated business systems in a modular manner does require an enhanced level of engineering and architecture skills, and here too the OSGi Alliance provides support, in terms of OSGi Alliance Member training (e.g. Paremus OSGi training) and the new OSGi Developer Certification programme.

To conclude, OSGi provides the basis for a compelling microservices strategy. However unlike the alternatives, this is only part of a larger coherent strategy. OSGi provides the necessary open industry standards upon which the next generation of modular, and so highly maintainable, software systems will be built.

It has been a while…

It has been a while since my last post. In my defence, Paremus have been incredibly busy on a number of fronts.

Adoption of OSGi through 2013 / 2014 has been significant and continues to accelerate. While interest in ‘MicroServices’ and Container Technologies like Docker is undeniable, Paremus are increasingly finding that mature organisations realise that complexity, technical debt and maintenance costs can only be addressed if in-house Java applications are either mothballed or re-engineered. Assuming the business functionality is still required, the former simply ignores the problem. In contrast the latter, to avoid repeating the same mistakes, requires structural modularity; for which the only industry standard is OSGi.

For organizations that appreciate this, and aspire to their own internal distributed Cloud runtime, Bndtools and the Paremus Service Fabric are no longer a curiosity but an increasingly compelling Build / Release / Run proposition.

My intent over the next few posts will be to revisit Service Fabric concepts and capabilities and compare and contrast these against current IT industry fashions.

Something along the lines of…

  • An introduction to Service Fabric 1.12 and our ‘Entire’ management framework. A really cool demonstration of the use of OSGi RSA and DTO specifications – crafted by some of the Grand Masters of the Art :)
  • A look at the Service Fabric with respect to the Microservices trend: (well the name fits!)
  • Also, what about the Service Fabric and non OSGi artefacts? What about Docker? How is the Service Fabric different from solutions like Mesos and Kubernetes?

And then we’ll get onto the interesting stuff…

Static Linking in OSGi

When building native executables in a language like C or C++, we have a choice about how to deal with the libraries our code depends on. We can choose to link them statically, meaning that the library contents are physically copied into the executable file at build time, or we can link dynamically which means the library contents must be found externally to the executable by a special runtime component called the linker.

Static linking has the great advantage that the executable can rely on always having the library it needs, and it’s always the correct version. Installation is a breeze because there’s just a single file to copy. The clear downside of static linking is that commonly used libraries will be duplicated in lots of executables, and therefore take more disk space and perhaps memory.

In standard Java, a JAR is a kind of executable that has dependencies on libraries. However static linking is unheard of in this world because it is terribly unsafe. All JARs sit together in a global classpath – if one JAR contains a copy of a library and another JAR contains a copy of the same library then the first one on the classpath will always “win”. If the two copies are of different versions then chaos ensues.

Therefore the state of the art in standard Java is a kind of dynamic linking, except without any metadata in the JARs to declare exactly what we should be linked with… and no linker! Instead, developers often have to find dependencies through a process of trial and error: keep adding JARs to the classpath until all the NoClassDefFoundErrors go away.

OSGi applications predominantly use dynamic linking as well. It’s a lot more manageable because we have precise metadata defining the dependencies, and the OSGi Framework itself can be seen as a kind of linker. However, static linking is also perfectly possible and safe because of the isolation provided by OSGi, and it has many of the same advantages as statically linked native executables. Bundles with fewer dependencies are simply easier to manage. To put it another way: OSGi is the best system I have ever seen for managing dependencies, but the easiest artefacts to manage are still those with no dependencies at all.

Static linking in OSGi is sometimes done by means of the Bundle-ClassPath, which allows us to embed an entire library JAR inside a bundle. This works, but it can inflate the size of our bundle quickly because we pull in the entire JAR rather than just the parts we need. An alternative solution is to use Private-Package to include individual packages out of the dependency JARs. However once a package is added this way it becomes a permanent part of our bundle until we explicitly remove it from the Private-Package list. In the future our core code might change such that some or all of these packages will not be needed. Will we remember to review Private-Package regularly to ensure everything there is actually still used? Not likely.
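In bnd instruction form, the two approaches sketched above look roughly like this (the library and package names are illustrative):

```
# Embed the whole library JAR and add it to the bundle's internal classpath:
Bundle-ClassPath: ., lib/util.jar
-includeresource: lib/util.jar=util-1.0.jar

# Or copy selected packages out of the dependency directly into the bundle:
Private-Package: org.example.core, org.example.util.io
```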

A Better Way

And so we come to a little-known bnd instruction called Conditional-Package. This instruction includes a package or set of packages if (and only if) they are used from the core packages of our bundle – where core packages are defined as those listed in Private-Package and/or Export-Package. We can even use wildcards, and bnd will calculate the minimum set of packages that need to be included. For example, many of my bnd files include a line like this:

Conditional-Package: org.example.util.*

where the org.example.util prefix is for a whole set of packages designed as generic utilities.

Service vs Library

Static linking with Conditional-Package can help to resolve a common head-scratcher when designing OSGi bundles.

Sometimes we want to write a piece of utility code that will be used across a number of other bundles, but that doesn’t seem to warrant a service interface. Services are great for application components that can be implemented in more than one way, but sometimes there is only one conceivable implementation. A classic example is encoding, e.g. Base64. When a client wants to convert to/from Base64 representation, it doesn’t want to have an alternative encoding swapped in at runtime! In this case the client is quite happy to be tightly coupled to a specific implementation of the function it is calling.

It feels wrong to export a package that contains implementation code in OSGi. Ideally, exports are for service interfaces, i.e. contracts. Implementation code is subject to rapid change and is hard to version correctly (tools like bnd’s baselining cannot detect semantic changes in executable code, only changes to method signatures). Nevertheless we still want to reuse the Base64 encoder across all the bundles that need it. So we use Conditional-Package to include that functionality directly. No versioning headaches, because each bundle always gets the exact version it was built with.
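Concretely, the consuming bundle’s bnd file might contain something like this (the package names are illustrative):

```
Private-Package: org.example.server
Conditional-Package: org.example.codec.base64
```

The encoder package is copied into the bundle only for as long as code in org.example.server actually refers to it.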

Incidentally this practice doesn’t violate the DRY (Don’t Repeat Yourself) principle because the source code is still defined in just one place; it’s the compiled class files that are copied.


A few words of warning. Like all power tools, Conditional-Package is dangerous if misused. When we pull a package into our bundle we inherit all of its dependencies, so it is best to do this with small, coherent packages that do not have a large tree of transitive dependencies. Try to design your packages so that they do only one thing each… don’t make a big generic util package, instead make subpackages such as util.xml, util.io, util.encoding and so on. Note that Conditional-Package will pull in any transitive package dependencies that match the pattern we have specified – all others are treated as external dependencies and end up in Import-Package.

Using a qualified or prefixed wildcard such as org.example.util.* is okay. Using a bare wildcard is not okay. I have seen this done accidentally: as a result, the bundle contained large chunks of the JRE as well as OSGi core packages.

Another danger is if we pull in packages that are used by an exported interface. Take the following example:

package b;
import a.A;
public interface B {
     void doSomeThing(A a);
}

This should be familiar to OSGi developers as a uses constraint, because B communicates type A from package a. It would be very bad to include a private copy of package a in our bundle, because it would not be compatible with other bundles’ idea of what that package looks like. Fortunately, bnd reports a build warning when it sees an exported package with a uses constraint on a private package, so you can easily avoid this situation.


Static linking in OSGi can improve your bundles, making them safer and easier to deploy. As always there is a trade-off and the technique can lead to larger bundles overall, but this can be mitigated by keeping packages small and coherent.

Paremus at the OSGi Community Event

Some of the Paremus team will be at the OSGi Community Event next week from Tues 29 to Thurs 31 Oct. The conference is being co-located with EclipseCon Europe at the Forum am Schlosspark in Ludwigsburg.




There’s lots of other OSGi goodness with all 3 days packed with talks and tutorials and an OSGi BOF on Weds evening from 19.00.

I will be there too, chairing a track on Weds and moderating the Lightning Talks on Tues.

As with all great conferences there is plenty of social time too, with organised activities including a drinks reception, Stammtisch and even a Circus!  And if memory serves me well, they all include the consumption of lots of yummy German beer….

And if you are heading to the Community Event you might also be interested in the Code Camp that the OSGi Users’ Forum Germany is running on Mon 28 Oct in Ludwigsburg.  Neil Bartlett will be working with Peter Kriens to guide attendees through building an OSGi-based HTML5 web application.

Hope to see you in Ludwigsburg next week.  Feel free to grab any of us if you have any questions or would like to chat.  If you want to arrange a slot to meet up (the schedule is gruelling I know) then feel free to drop us a mail.


Maven Support in Bndtools — Future Directions

Enhancing support for Maven is a perennial topic amongst Bndtools developers and users, and during the Bndtools Hackathon last week we discussed the topic in-depth.
As a result, we have determined that there will be two broad approaches towards Maven support in Bndtools. The choice between these approaches will be largely a matter of taste, and the user’s attitude and motivation towards the tools.

“Maven First”

One set of developers comes from the Maven world and is generally happy with the “Maven Way”; they do not want it to change significantly. We feel that these users are already well served by the Felix Bundle Plugin, m2eclipse and even other IDEs such as NetBeans.

Bndtools cannot and should not compete with those tools as a generic Maven-centric IDE; yet we can still add value. For example even when using m2eclipse, we can offer our wizards for creating Declarative Services or Blueprint components, and we can provide a great way to launch, test and debug OSGi applications from within the IDE.

Also our support for the OSGi R5 Repositories and Resolver specifications means that you can work with bundles built by Maven and deployed to a repository – e.g. Nexus, Artifactory, or even just the bundles installed in your “local” repository – resolve your top-level bundles and generate an application descriptor. This descriptor can then be fed back into the Maven build chain to create a fully assembled and deployed application. We feel we can do a much better job of this than other tools because we take full advantage of OSGi’s rich Capabilities and Requirements dependency model.
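As a rough illustration of that workflow, the application descriptor is a .bndrun file; the sketch below is hypothetical, with the framework choice and bundle symbolic names invented for the example:

```properties
# app.bndrun – illustrative only; bundle names/versions are invented
-runfw: org.apache.felix.framework
# The top-level requirement we ask the resolver to satisfy:
-runrequires: osgi.identity;filter:='(osgi.identity=com.acme.app)'
# Populated by the resolver from the configured repositories:
-runbundles: com.acme.app;version='[1.2.0,1.2.1)',\
	com.acme.api;version='[1.0.0,1.0.1)'
```

The resolved -runbundles list is the part that can be fed back into the Maven build chain to assemble the final application.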

We call the above a “Maven first” approach. To make it work, certain parts of Bndtools will have to be decoupled from the internal build system so that, for example, we can resolve and run without having to be inside a bnd-style project. This decoupling is anyway good for Bndtools’ internal modularity, and we expect to complete it in time for the 2.2 release.

“Bnd First”

The second approach is for developers who prefer a more OSGi-centric development process, and who want to use the advanced build-time features of bnd/Bndtools while still using a fully Maven-based offline build.

For example one of the most exciting features we are working on in 2.2 is baselining, which (briefly) means the tool helps to ensure your packages and bundles are correctly versioned according to the OSGi Semantic Versioning guidelines, by breaking the build when versions are incorrect, and offering quick-fixes to get it back into shape. Ferry Huberts has written an overview on his blog.

Another feature that Bndtools has always offered is very fast incremental builds, which are integrated with the launching subsystem so that your code is compiled, built and already running in your application as soon as you save it. Incremental building is always a problem with Maven, and even m2eclipse doesn’t do it very well (though it seems to be improving gradually).

These features are hard-to-impossible to implement given the control (or rather, lack of control) afforded to us by the existing Felix Bundle Plugin. That plugin basically only creates the JAR file during the packaging phase, whereas in Bndtools bnd maintains the build model so that it can be used in Eclipse, Ant, Maven, Gradle and others.

Therefore we are working on a new plugin for Maven that allows bnd to take more control. We call this the “bnd first” approach. Actually the plugin was started by Toni Menzel, and Peter Kriens has begun enhancing it. So far it looks extremely promising, but unfortunately it is still quite experimental and will probably not be release-quality in time for 2.2. If you are interested in trying it before then, please engage on the Bndtools mailing list.

Other Tools

As ever, enhancing Maven support does not mean that we are reducing our support for existing Ant builds. We will continue to offer a modular Ant-based build system with our project templates, and in fact some of the work done at the hackathon by PK Søreide helped to improve the performance of our Ant tasks by a factor of three to four.

Also, Paul Bakker worked on a new Gradle build template, which turned out to be up to ten times faster than the old Ant build! We hope to have this ready in 2.2.

So our focus continues to be this: enabling you to be as productive as possible as an OSGi developer, irrespective of your choice of build tool.

Agility and Structural Modularity – part III

Click Here for Part 1 in the series

Click Here for Part 2 in the series

The first post in this series explored the fundamental relationship between Structural Modularity and Agility. In the second post we learnt how highly agile, and so highly maintainable, software systems are achievable through the use of OSGi.

This third post is based upon a presentation entitled ‘Workflow for Development, Release and Versioning with OSGi / Bndtools: Real World Challenges‘ (http://www.osgi.org/CommunityEvent2012/Schedule), in which Siemens AG’s Research & Development engineers discussed the business drivers for, and subsequent approach taken to realise, a highly agile OSGi based Continuous Integration environment.

The Requirement

Siemens Corporate Technology Research has a diverse engineering team with skills spanning computer science, mathematics, physics, mechanical engineering and electrical engineering. The group provides solutions to Siemens business units based on neural network technologies and other machine learning algorithms. As Siemens’ business units require working examples rather than paper concepts, Siemens Corporate Technology Research engineers are required to rapidly prototype potential solutions for their business units.


Figure 1: Siemens’ Product Repository.

To achieve rapid prototyping the ideal solution would be repository-centric, allowing the Siemens research team to rapidly release new capabilities, and also allowing Siemens Business units to rapidly compose new product offerings.

To achieve this a solution must meet the following high level objectives:

  1. Build Repeatability: The solution must ensure that old versions of products can always be rebuilt from exactly the same set of sources and dependencies, even many years in the future. This would allow Siemens to continue supporting multiple versions of released software that have gone out to different customers.
  2. Reliable Versioning: Siemens need to be able to quickly and reliably assemble a set of components (their own software, third party and open source) and have a high degree of confidence that they will all work together.
  3. Full Traceability: the software artifacts that are released are always exactly the same artifacts that were tested by QA, and can be traced back to their original sources and dependencies. There is no necessity to rebuild in order to advance from the testing state into the released state.

Finally, the individual software artifacts, and the resultant composite products, must have a consistent approach to application launching, life-cycle and configuration.

The Approach

OSGi was chosen as the enabling modularity framework. This decision was based upon the maturity of OSGi technology, the open industry specifications which underpin OSGi implementations, and the technology governance provided by the OSGi Alliance. The envisaged Continuous Integration solution was based upon the use of Development and Release/Production OSGi Bundle Repositories (OBR). As OSGi artifacts are fully self-describing (Requirements and Capabilities metadata), specific business functionality could be dynamically determined via automated dependency resolution and subsequent loading of the required OSGi bundles from the relevant repositories.

The Siemens AG team also wanted to apply WYTIWYR best practice (What You Test Is What You Release): software artifacts should not be rebuilt after testing to generate the release artifacts, since the build environment may have changed between the start and end of the test cycle. Many organisations do rebuild software artifacts as part of the release process (e.g. 1.0.0.BETA –> 1.0.0.RELEASE); this unfortunate but common practice is caused by dependency management based on artifact name.

Finally from a technical perspective the solution needed to have the following attributes:

  • Work with standard developer tooling i.e. Java with Eclipse.
  • Have strong support for OSGi.
  • Support the concept of multiple repositories.
  • Support automated Semantic Versioning (i.e. automatic calculation of Import Ranges and incrementing of Export Versions) – as this is too hard for human beings!

For these reasons Bndtools was selected.

The Solution

The following sequence of diagrams explains the key attributes of the Siemens AG solution.


 Figure 2:  Repository centric, rapid iteration and version re-use within development.

Bndtools is a repository-centric tool allowing developers to consume OSGi bundles from one or more OSGi Bundle Repositories (OBR). In addition to the local read-write DEV OSGi bundle repository, developers may also consume OSGi bundles from other managed read-only repositories; for example, any combination of corporate Open Source repositories, corporate proprietary code repositories and approved 3rd-party repositories. A developer simply selects the desired repository from the list of authorised repositories, then the desired artifact within it, and drags this into the Bndtools workspace.

Developers check code from their local workspaces into their SVN repository. The SVN repository only contains work in progress (WIP). The Jenkins Continuous Integration server builds, tests and pushes the resultant OSGi artifacts to a shared read-only Development OBR. These artifacts are then immediately accessible by all Developers via Bndtools.

As developers rapidly evolve software artifacts, running many builds each day, it would be unmanageable – indeed meaningless – to increment versions for every development build. For this reason, version re-use is permitted in the Development environment.


Figure 3:  Release.

When ready, a software artifact may be released by the development team to a read-only QA Repository.


Figure 4:  Locked.

Once an artifact has been released to QA it is read-only in the development repository. Any attempt to modify and re-build the artifact will fail. To proceed, the Developer must now increment the version of the released artifact.


Figure 5:  Increment.

Bndtools’ automatic semantic versioning can now be used by the developer to ensure that the correct version increment is applied to express the nature of the difference between the current WIP version and its released predecessor. Following the Semantic Versioning rules discussed in previous posts:

  • 1.0.0 => 1.0.1 … “bug fix”
  • 1.0.0 => 1.1.0 … “new feature”
  • 1.0.0 => 2.0.0 … “breaking change”

we can see that the new version (1.0.1) of the artifact is a “bug fix”.
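The increment type matters because consumers import packages using ranges derived from these rules: a bug fix stays inside an existing consumer’s range, while a major (breaking) change falls outside it. The following plain-Java sketch – not bnd’s actual implementation, which also distinguishes consumer from provider roles – computes the default consumer import range for an exported package version:

```java
public class ImportRange {

    // Default consumer import range under OSGi semantic versioning:
    // accept the exporter's major.minor and anything up to, but not
    // including, the next major version.
    static String consumerRange(String exportVersion) {
        String[] p = exportVersion.split("\\.");
        int major = Integer.parseInt(p[0]);
        int minor = p.length > 1 ? Integer.parseInt(p[1]) : 0;
        return "[" + major + "." + minor + "," + (major + 1) + ")";
    }

    public static void main(String[] args) {
        // A bug fix (1.0.0 -> 1.0.1) still satisfies the same range...
        System.out.println(consumerRange("1.0.1")); // [1.0,2)
        // ...whereas a breaking change (2.0.0) falls outside [1.0,2).
        System.out.println(consumerRange("2.0.0")); // [2.0,3)
    }
}
```

So a consumer built against 1.0.0 imports [1.0,2); the bug-fix release 1.0.1 still satisfies that range, whereas 2.0.0 does not.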

The Agility Maturity Model – Briefly Revisited

In the previous post we introduced the concept of the Agility Maturity Model. Assessing Siemens’ solution against this model verifies that all the necessary characteristics required of a highly Agile environment have been achieved.

  • Devolution: Enabled via Bndtools’ flexible approach to the use of OSGi repositories.
  • Modularity & Services: Integral to the solution. Part and parcel of the decision to adopt an OSGi centric approach.

As discussed by Kirk Knoernschild in his Devoxx 2012 presentation ‘Architecture All the Way Down‘, while the Agile Movement has focused extensively on the Social and Process aspects of achieving Agile development, the fundamental enabler – ‘Structural Modularity’ – has received little attention. Those of you who have attempted to realise ‘Agile’ with a monolithic code base will be all too aware of the challenges. Siemens’ decision to pursue Agility through structural modularity, via OSGi, provides the bedrock upon which Siemens’ Agile aspirations, including the Social and Process aspects of Agile development, can be fully realised.

Bndtools was a key enabler for Siemens’ Agile aspirations. In return, Siemens’ business requirements helped accelerate and shape key Bndtools capabilities. At this point I would like to take the opportunity to thank Siemens AG for allowing their work to be referenced by Paremus and the OSGi Alliance.

More about Bndtools

Built upon Peter Kriens‘ bnd project, the industry’s de facto tool for creating OSGi bundles, the Bndtools GitHub project was created by Neil Bartlett in early 2009. Bndtools’ roots include tooling that Neil developed to assist students attending his OSGi training courses, and the Paremus SIGIL project.

Bndtools’ objectives have been stated by Neil Bartlett on numerous occasions: the goal, quite simply, is to make it easier to develop Agile, modular Java applications than not to. As demonstrated by the Siemens project, Bndtools is rapidly achieving this fundamental objective. Bndtools is backed by an increasingly vibrant open source community and growing support from a number of software vendors, including a long-term commitment from Paremus. Current Bndtools community activities include support for OSGi Blueprint, stronger integration with Maven, and the ability to simply load runtime release adaptors for OSGi Cloud environments such as the Paremus Service Fabric.

Further detail on the rationale for building Java Continuous Integration build / release chains on OSGi / Bndtools can be found in the following presentation given by Neil Bartlett to the Japan OSGi User Forum in May 2013: NeilBartlett-OSGiUserForumJapan-20130529. For those interested in pursuing a Java / OSGi Agile strategy, Paremus provide in-depth engineering consultancy services to help you realise this objective. Paremus can also provide in-depth on-site OSGi training for your in-house engineering teams. If you are interested in either consulting or training, please contact us.

The Final Episode

In the final post in this Agility and Structural Modularity series I will discuss Agility and Runtime Platforms. Agile runtime platforms are the area in which Paremus has specialised since the earliest versions of our Service Fabric product in 2004 (then referred to as Infiniflow). The pursuit of runtime Agility prompted our adoption of OSGi in 2005, and our membership of the OSGi Alliance in 2009.

However, as will be discussed, not all OSGi runtime environments are alike. While OSGi is a fundamental enabler for Agile runtimes, the use of OSGi is not in itself sufficient to guarantee runtime Agility; it is quite possible to build ‘brittle’ systems using OSGi. ‘Next generation’ modular dynamic platforms like the Paremus Service Fabric must not only leverage OSGi, but must also leverage the same fundamental design principles upon which OSGi is itself based.

Click Here for Part 1 in the series

Click Here for Part 2 in the series

Japan Bound

A few of the Paremus team will be over in Japan this week for a number of meetings and activities.

The first to note is a Japan JUG meeting on Monday 27 May where Neil Bartlett will be presenting “OSGi Pure and Simple”, an introduction to OSGi for Java developers new to it.

If you are in Japan we would be pleased to see you there.  The Japan JUG starts at 19.00hrs and is being held at Tokyo-to Shinjuku-ku shinjuku 5-17-17 Watabishi building. [Map]

You can find full details and how to register at http://kokucheese.com/event/index/90755/.


Wednesday 28 sees the OSGi Users’ Forum Japan meeting.  This is a full-day meeting, and a number of the OSGi Alliance Board will be in attendance and presenting.  The presentations from Paremus members include:

  • Richard Nicholson presenting the OSGi Alliance strategy
  • Neil Bartlett presenting on “Continuous Delivery with OSGi Semantic Versioning” and
  • Mike Francis providing the Marketing update in Susan Schwarze’s absence.

The day’s events will be followed by a cocktail party and networking event in the evening.  Full details and how to register for the forum meeting can be found here.

Thursday and Friday is the OSGi Alliance Board Meeting and we also have a couple of other meetings around this, so all in all a hectic, but no doubt extremely enjoyable week ahead.


(I hope Google translate hasn’t let me down – apologies if it has!)

Agility and Structural Modularity – part II

In this second Agility and Structural Modularity post we explore the importance of OSGi™: the central role that OSGi plays in realising Java™ structural modularity, and the natural synergy between OSGi and the aims of popular Agile methodologies.

But we are already Modular!

Most developers appreciate that applications should be modular. However, whereas the need for logical modularity was rapidly embraced in the early years of Object Oriented programming (see http://en.wikipedia.org/wiki/Design_Patterns), it has taken significantly longer for the software industry to appreciate the importance of structural modularity; especially its fundamental role in increasing application maintainability and controlling or reducing environmental complexity.

Just a Bunch of JARs

In Java Application Architecture, Kirk Knoernschild explores structural modularity and develops a set of best-practice structural design patterns. As Knoernschild explains, no modularity framework is required to develop in a modular fashion; for Java, the JAR is sufficient.

Indeed, it is not uncommon for ‘Agile’ development teams to break an application into a number of smaller JARs as the code-base grows. As JAR artifacts increase in size, they are broken down into collections of smaller JARs. From a code perspective, especially if Knoernschild’s structural design patterns have been followed, one would correctly conclude that – at one structural layer – the application is modular.

But is it ‘Agile’ ?

From the perspective of the team that created the application, and who are subsequently responsible for its on-going maintenance, the application is more Agile. The team understand the dependencies and the impact of change. However, this knowledge is not explicitly associated with the components. Should team members leave the company, the application and the business are immediately compromised. Also, for a third party (e.g. a different team within the same organisation), the application may as well have remained a monolithic code-base.

While the application has one layer of structural modularity – it is not self-describing. The metadata that describes the inter-relationship between the components is absent; the resultant business system is intrinsically fragile.

What about Maven?

Maven artifacts (described by a Project Object Model – POM) also express dependencies between components. These dependencies are expressed in terms of component names.

A Maven based modular application can be simply assembled by any third party. However, as we already know from the first post in this series, the value of name based dependencies is severely limited. As the dependencies between the components are not expressed in terms of Requirements and Capabilities,  third parties are unable to deduce why the dependencies exist and what might be substitutable.

It is debatable whether Maven makes any additional tangible contribution to our goal of application Agility.

The need for OSGi

As Knoernschild demonstrates in his book Java Application Architecture, once structural modularity is achieved, it is trivially easy to move to OSGi – the modularity standard for Java. 

Not only does OSGi help us enforce structural modularity, it provides the necessary metadata to ensure that the Modular Structures we create are also Agile structures.

OSGi expresses dependencies in terms of Requirements and Capabilities. It is therefore immediately apparent to a third party which components may be interchanged. As OSGi also uses semantic versioning, it is immediately apparent to a third party whether a change to a component is potentially a breaking change.
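As a hypothetical sketch of how such metadata appears in a bundle descriptor (bnd-style; package names and versions are invented):

```properties
# Capability: this bundle offers the parser API at version 1.2.0
Export-Package: com.acme.parser;version=1.2.0
# Requirement: satisfied by any bundle exporting a semantically
# compatible version of the logging package (1.0 up to, not
# including, 2.0)
Import-Package: com.acme.logging;version="[1.0,2.0)"
```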

OSGi also has a key part to play with respect to structural hierarchy.

At one end of the modularity spectrum we have Service Oriented Architectures; at the other end of the spectrum we have Java Packages and Classes. However, as explained by Knoernschild, essential layers are missing between these two extremes.


Figure 1: Structural Hierarchy: The Missing Middle (Kirk Knoernschild – 2012).

The problem, this missing middle, is directly addressed by OSGi.


Figure 2: Structural Hierarchy: OSGi Services and Bundles

As explained by Knoernschild the modularity layers provided by OSGi address a number of critical considerations:

  • Code Re-Use: Via the concept of the OSGi Bundle, OSGi enables code re-use.
  • Unit of Intra / Inter Process Re-Use: OSGi Services are light-weight services that are able to dynamically find and bind to each other. OSGi Services may be collocated within the same JVM or, via an implementation of OSGi’s remote service specification, distributed across JVMs separated by a network. Coarse-grained business applications may be composed from a number of finer-grained OSGi Services.
  • Unit of Deployment: OSGi bundles provide the basis for a natural unit of deployment, update & patch.
  • Unit of Composition: OSGi bundles and Services are essential elements in the composition hierarchy.

Hence OSGi bundles and services, backed by the OSGi Alliance’s open specifications, provide Java with essential – and previously missing – layers of structural modularity. In principle, OSGi technologies enable Java based business systems to be ‘Agile – All the Way Down!’.

As we will now see, the OSGi structures (bundles and services) map well to, and help enable, popular Agile Methodologies.

Embracing Agile

The Agile Movement focuses on the ‘Processes’ required to achieve Agile product development and delivery. While a spectrum of Lean & Agile methodologies exists, each tends to be a variant of, a blend of, or an extension to the two best known methodologies, namely Scrum and Kanban (http://en.wikipedia.org/wiki/Lean_software_development).

To be effective each of these approaches requires some degree of structural modularity.


Customers change their minds. Scrum acknowledges the existence of ‘requirement churn’ and adopts an empirical (http://en.wikipedia.org/wiki/Empirical) approach to software delivery, accepting that the problem cannot be fully understood or defined up front. Scrum’s focus is instead on maximising the team’s ability to deliver quickly and respond to emerging requirements.

Scrum is an iterative and incremental process, with the ‘Sprint’ being the basic unit of development. Each Sprint is a “time-boxed” (http://en.wikipedia.org/wiki/Timeboxing) effort, i.e. it is restricted to a specific duration. The duration is fixed in advance for each Sprint and is normally between one week and one month. A Sprint is preceded by a planning meeting, where the tasks for the Sprint are identified and an estimated commitment for the Sprint goal is made. This is followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next Sprint are identified.

During each Sprint, the team creates finished portions of a product. The set of features that go into a Sprint come from the product backlog, which is an ordered list of requirements (http://en.wikipedia.org/wiki/Requirement).

Scrum attempts to encourage the creation of self-organizing teams, typically by co-location of all team members, and verbal communication between all team members.


‘Kanban’ originates from the Japanese word for “signboard” and traces back to Toyota, the Japanese automobile manufacturer, in the late 1940s (see http://en.wikipedia.org/wiki/Kanban). Kanban encourages teams to have a shared understanding of work, workflow, process, and risk; so enabling the team to build a shared comprehension of problems and suggest improvements which can be agreed by consensus.

From the perspective of structural modularity, Kanban’s focus on work-in-progress (WIP), limited pull and feedback are probably the most interesting aspects of the methodology:

  1. Work-In-Progress (WIP) should be limited at each step of a multi-stage workflow. Work items are “pulled” to the next stage only when there is sufficient capacity within the local WIP limit.
  2. The flow of work through each workflow stage is monitored, measured and reported. By actively managing ‘flow’, the positive or negative impact of continuous, incremental and evolutionary changes to a System can be evaluated.

Hence Kanban encourages small continuous, incremental and evolutionary changes. As the degree of structural modularity increases, pull based flow rates also increase while each smaller artifact spends correspondingly less time in a WIP state.


An Agile Maturity Model

Both Scrum and Kanban’s objectives become easier to realize as the level of structural modularity increases. Fashioned after the Capability Maturity Model (see http://en.wikipedia.org/wiki/Capability_Maturity_Model – which allows organisations or projects to measure improvements in a software development process), the Modularity Maturity Model is an attempt to describe how far along the modularity path an organisation or project might be; it was proposed by Dr Graham Charters at the OSGi Community Event 2011. We now extend this concept further, mapping an organisation’s level of Modularity Maturity to its Agility.

Keeping in step with the Modularity Maturity Model we refer to the following six levels.

Ad Hoc – No formal modularity exists. Dependencies are unknown. Java applications have no, or limited, structure. In such environments it is likely that Agile Management Processes will fail to realise business objectives.

Modules – Instead of classes (or JARs of classes), named modules are used with explicit versioning. Dependencies are expressed in terms of module identity (including version). Maven, Ivy and RPM are examples of modularity solutions where dependencies are managed by versioned identities. Organizations will usually have some form of artifact repository; however the value is compromised by the fact that the artifacts are not self-describing in terms of their Capabilities and Requirements.

This level of modularity is perhaps typical for many of today’s in-house development teams. Agile processes such as Scrum are possible, and do deliver some business benefit. However, the effectiveness and scalability of the Scrum management processes ultimately remain limited by deficiencies in structural modularity; for example, the Requirements and Capabilities of the Modules are usually communicated verbally. The ability to realize Continuous Integration (CI) is likewise limited by ill-defined structural dependencies.

Modularity – Module identity is not the same as true modularity. As we’ve seen, module dependencies should be expressed via contracts (i.e. Capabilities and Requirements), not via artifact names. At this point, dependency resolution of Capabilities and Requirements becomes the basis of a dynamic software construction mechanism. At this level of structural modularity, dependencies will also be semantically versioned.

With the adoption of a modularity framework like OSGi the scalability issues associated with the Scrum process are addressed. By enforcing encapsulation and defining dependencies in terms of Capabilities and Requirements, OSGi enables many small development teams to work efficiently, independently and in parallel. The efficiency of Scrum management processes correspondingly increases. Sprints can be clearly associated with one or more well defined structural entities, i.e. the development or refactoring of OSGi bundles. Meanwhile, semantic versioning enables the impact of refactoring to be efficiently communicated across team boundaries. As the OSGi bundle provides strong modularity and isolation, parallel teams can safely Sprint on different structural areas of the same application.

Services – Services-based collaboration hides the construction details of services from the users of those services, allowing clients to be decoupled from the implementations of the providers. Hence, Services encourage loose coupling. OSGi Services’ dynamic find-and-bind behaviours directly support loose coupling, enabling the dynamic formation, or assembly, of composite applications. Perhaps of greater import, Services are the basis upon which runtime Agility may be realised; including rapid enhancements to business functionality, or automatic adaptation to environmental changes.
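The find-and-bind pattern can be illustrated with a deliberately simplified toy registry. To be clear, this is not the OSGi service API; it is a self-contained sketch of the idea that providers register an implementation under a contract name and consumers are (re)bound whenever the registration changes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Toy registry (NOT the OSGi API) sketching dynamic find-and-bind.
public class TinyRegistry {
    private final Map<String, Object> services = new ConcurrentHashMap<>();
    private final Map<String, Consumer<Object>> listeners = new ConcurrentHashMap<>();

    // Provider side: publish an implementation under a contract name.
    public void register(String contract, Object impl) {
        services.put(contract, impl);
        Consumer<Object> l = listeners.get(contract);
        if (l != null) l.accept(impl);               // dynamic (re)bind
    }

    // Consumer side: one-shot lookup ("find").
    public Object find(String contract) {
        return services.get(contract);
    }

    // Consumer side: be notified now and on every future rebind.
    public void onBind(String contract, Consumer<Object> listener) {
        listeners.put(contract, listener);
        Object existing = services.get(contract);
        if (existing != null) listener.accept(existing);
    }
}
```

In real OSGi the service registry plays this role, contracts are Java interfaces rather than strings, and Declarative Services or a ServiceTracker handles the binding and unbinding callbacks.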

Having achieved this level of structural modularity an organization may simply and naturally apply Kanban principles and achieve the objective of Continuous Integration.

Devolution – Artifact ownership is devolved to modularity-aware repositories which encourage collaboration and enable governance. Assets may be selected on their stated Capabilities. Advantages include:

  • Greater awareness of existing modules
  • Reduced duplication and increased quality
  • Collaboration and empowerment
  • Quality and operational control

As software artifacts are described in terms of a coherent set of Requirements and Capabilities, developers can communicate changes (breaking and non-breaking) to third parties through the use of semantic versioning. Devolution allows development teams to rapidly find third-party artifacts that meet their Requirements. Hence Devolution enables significant flexibility with respect to how artifacts are created, allowing distributed parties to interact in a more effective and efficient manner. Artifacts may be produced by other teams within the same organization, or consumed from external third parties. The Devolution stage promotes code re-use and efficient, low-risk out-sourcing, crowd-sourcing and in-sourcing of the artifact creation process.

Dynamism – This level builds upon Modularity, Services & Devolution and is the culmination of our Agile journey.

  • Business applications are rapidly assembled from modular components.
  • As strong structural modularity is enforced (isolation by the OSGi bundle boundary), components may be efficiently and effectively created and maintained by a number of small on-shore, near-shore or off-shore development teams.
  • As each application is self-describing, even the most sophisticated of business systems is simple to understand, to maintain, to enhance.
  • As semantic versioning is used, the impact of change is efficiently communicated to all interested parties, including Governance & Change Control processes.
  • Software fixes may be hot-deployed into production – without the need to restart the business system.
  • Application capabilities may be rapidly extended, again without needing to restart the business system.

Finally, as the dynamic assembly process is aware of the Capabilities of the hosting runtime environment, application structure and behaviour may automatically adapt to location, allowing transparent deployment and optimization for public Cloud or traditional private datacentre environments.


Figure 3: Modularity Maturity Model

An organization’s Modularisation Migration strategy will be defined by the approach taken to traversing these Modularity levels. Most organizations will have already moved from the initial Ad-Hoc phase to Modules. Meanwhile, organizations that value a high degree of Agility will wish to reach the endpoint, i.e. Dynamism. Each organization may traverse from Modules to Dynamism via several paths, adapting migration strategy as necessary.

  • To achieve maximum benefit as soon as possible, an organization may choose to move directly to Modularity by refactoring the existing code base into OSGi bundles. The benefits of Devolution and Services naturally follow. This is also the obvious strategy for new greenfield applications.
  • For legacy applications an alternative may be to pursue a Services-first approach: first expressing coarse-grained software components as OSGi Services, then driving code-level modularity (i.e. OSGi bundles) on a Service-by-Service basis. This approach may be easier to initiate within large organizations with extensive legacy environments.
  • Finally, one might move first to limited Devolution by adopting OSGi metadata for existing artifacts. Adoption of Requirements and Capabilities, and the use of semantic versioning, will clarify the existing structure and the impact of change to third parties. While structural modularity has not increased, the move to Devolution positions the organization for subsequent migration to the Modularity and Services levels.

This diverse set of choices, and the ability to pursue them as appropriate, is exactly what one would hope for, and expect from, an increasingly Agile environment!
