
Tuesday, August 28, 2012

Dynamic Provisioning OSGi Cloud Ecosystems

In my recent posts I looked at building a cloud ecosystem using OSGi and adding extra control to remote service invocations in such ecosystems.

In this post I'm starting to look at the provisioning of such ecosystems. OSGi frameworks are dynamic: they can be assigned a function at runtime and, when the need arises, be repurposed. This repurposing is particularly interesting because it saves you from sending the cloud VM image over the network again. And those images can be big. You can reuse the image that you started with, changing its function and role over time.

In my previous post I was using pre-configured VM images, each representing a different deployment in the ecosystem. In this post I will start out with a number of identical images. In addition, I will deploy one other special image: the provisioner. However, since that one is also running in an OSGi framework, even that one can be repurposed or moved if the need arises.

You'll see that, as in the previous posts, none of the OSGi bundles are statically configured to know where the remote services they use are located. They are all part of the dynamic OSGi Cloud Ecosystem which binds all services together via the OSGi Remote Services compliant Discovery mechanism.
I wrote a very simple demo provisioner which keeps an eye on what frameworks are up, provisions them with OSGi bundles, and if one goes down it redeploys services to an alternative.

Altogether, the setup is as follows:
Provisioning an ecosystem of initially identical OSGi frameworks
The ecosystem contains 5 OSGi frameworks, 4 of which are identical: osgi1, osgi2, osgi3 and web. The only difference with the last one is that it has a well-known hostname, because it hosts the web component that will be accessed from the outside world. The 5th framework has the same baseline as the first 4, but it acts as the provisioner/management agent, so it has some extra bundles deployed that can provision OSGi bundles in the other frameworks in the ecosystem.

To be able to work with such a dynamic ecosystem of OSGi frameworks, the provisioner needs to monitor the frameworks available in it. Since these frameworks are represented as OSGi services, this monitoring can simply be done through OSGi service mechanics. In my case I'm using a simple ServiceTracker, but you could do the same with Blueprint, Declarative Services or any other OSGi services-based framework.

Filter filter = bundleContext.createFilter(
  "(&(objectClass=org.coderthoughts...api.OSGiFramework)(service.imported=*))");
frameworkTracker = new ServiceTracker(bundleContext, filter, null) {
  @Override
  public Object addingService(ServiceReference reference) {
    handleTopologyChange();
    return super.addingService(reference);
  }

  @Override
  public void removedService(ServiceReference reference, Object service) {
    handleTopologyChange();
    super.removedService(reference, service);
  }
};
My ServiceTracker listens for services with OSGiFramework as objectClass which have the service.imported property set. This means that it listens to remote frameworks only. Whenever a framework appears or disappears the handleTopologyChange() will be called.

For this posting I wrote a really simple demo provisioner that has the following rules that are evaluated when the topology changes (frameworks arrive and disappear in the ecosystem):
  • Deploy the web bundles to the framework whose host name starts with 'web-'.
  • Deploy one instance of the service provider bundles to the framework with the largest amount of memory available.
  • Ensure that if the vm hosting the service provider bundles disappears for some reason, they get redeployed elsewhere. Note that by virtue of using the OSGi Remote Services Discovery functionality the client will automatically find the moved service.
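The rule evaluation behind handleTopologyChange() can be sketched in plain Java, with maps standing in for the OSGiFramework service properties. Note that the property keys, helper names and class below are made up for illustration; the real demo reads the framework metadata published in the ecosystem:

```java
import java.util.*;

public class ProvisionRulesSketch {
    // Rule 1: the web bundles go to the framework whose host name starts with "web-".
    static String pickWebFramework(List<Map<String, String>> frameworks) {
        for (Map<String, String> fw : frameworks) {
            String host = fw.get("hostname"); // illustrative property key
            if (host != null && host.startsWith("web-"))
                return host;
        }
        return null;
    }

    // Rule 2: the service provider bundles go to the framework
    // with the largest amount of memory available.
    static String pickProviderFramework(List<Map<String, String>> frameworks) {
        String best = null;
        long bestMem = -1;
        for (Map<String, String> fw : frameworks) {
            long mem = Long.parseLong(fw.get("memory")); // illustrative property key
            if (mem > bestMem) {
                bestMem = mem;
                best = fw.get("hostname");
            }
        }
        return best;
    }
}
```

Rule 3 (redeployment) then amounts to running the provider selection again whenever the framework currently hosting the provider bundles disappears from the topology.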
Obviously these aren't real production rules, but they give you an idea what you can do in a management/provisioning agent:
  • Decide what to deploy based on image characteristics. I used the image name for the web component, but you can also use other metadata such as image type (Red Hat OpenShift/Amazon/Azure/etc), location (e.g. the country where the VM is hosted) or other custom metadata.
  • Dynamically select a target image based on a runtime characteristic; in my case I'm using the amount of memory available.
  • Keep an eye on the topology. When images disappear or new ones arrive the provisioner gets notified - simply through OSGi Service dynamics - and changes can be made accordingly. For example a redeploy action can be taken when a framework disappears.
  • No hardcoded references to where the target services live. The service consumers use ordinary OSGi service APIs to look up and invoke the services in the cloud ecosystem. I'm not using a component system like Blueprint or DS here, but you can also use those, of course.
The demo provisioner code is here: DemoProvisioner.java

As before, I'm using Red Hat's free OpenShift cloud instances, so anyone can play with this stuff, just create a free account and go.

My basic provisioner knows the sets of bundles that need to be deployed in order to get a resolvable system. In the future I'll write some more about the OSGi Repository and Resolver services which can make this process more declarative and highly flexible.

However, the first problem I had was: what will I use to do the remote deployment to my OSGi frameworks? There are a number of OSGi specs that deal with remote management, most notably JMX. But remote JMX deployment doesn't really work that well in my cloud environment as it requires me to open an extra network port, which I don't have. Probably the best solution would be based on the REST API that is being worked on in the EEG (RFC 182) and hook that in with the OSGi HTTP service. For the moment, I decided to create a simple deployment service API that works well with OSGi Remote Services:

public interface RemoteDeployer {
    long getBundleID(String location);
    String getSymbolicName(long id);
    long installBundle(String location, byte [] base64Data);
    long [] listBundleIDs();
    void startBundle(long id);
    void stopBundle(long id);
    void uninstallBundle(long id);
}

This gives me very basic management control over remote frameworks and allows me to find out what they have installed. It's a simple API that's designed to work well with Remote Services. Note that the installBundle() method takes the actual bundle content as base-64 encoded bytes. This saves me from having to host the bundles as downloadable URLs from my provisioner framework, and makes it really easy to send over the bytes.
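For instance, the provisioner's side of an installBundle() call could read the bundle jar from disk and encode it along these lines. This is a sketch under my own naming (the BundleEncoder class is made up), using java.util.Base64 as the codec:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class BundleEncoder {
    // Read a bundle jar and return its content base-64 encoded, ready to be
    // passed to RemoteDeployer.installBundle(location, base64Data).
    static String encodeBundle(Path bundleFile) throws Exception {
        byte[] bytes = Files.readAllBytes(bundleFile);
        return Base64.getEncoder().encodeToString(bytes);
    }
}
```

On the receiving side the RemoteDeployer implementation simply decodes the string again and installs the resulting bytes via the local BundleContext.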

I'm registering my RemoteDeployer service in each framework in the ecosystem using the Remote Services based cloud ecosystem service distribution I described in my previous post. This way the provisioner can access the service in the remote frameworks and deploy bundles to each.

That's pretty much it! I've simplified the web servlet a little from the previous post to only show a single TestService invocation, let's try it out...

Try it out!
First thing I'm doing is start the Discovery Server in the ecosystem. This is done exactly as in my earlier post: Cloud Ecosystems with OSGi.

Having the discovery server running, I begin by launching my provisioner framework.  After that I'll be adding the empty OSGi frameworks one-by-one to see how it all behaves.
Finally I'll kill a VM to see redeployment at work.

The common OSGi Framework bits
The provisioner framework is created by first adding the baseline infrastructure to a new OpenShift DIY clone:
$ git clone ssh://[email protected]/~/git/provisioner.git/
$ cd provisioner
$ git fetch https://github.com/bosschaert/osgi-cloud-infra.git
$ git merge -Xtheirs FETCH_HEAD
Don't forget to set up the Discovery SSH tunnel, which is also described in Cloud Ecosystems with OSGi.

The specific Provisioner bits
Specific to the provisioner vm are the provisioner bundles. The demo provisioner I wrote can be built and deployed from the osgi-cloud-prov-src github project:
$ git clone https://github.com/bosschaert/osgi-cloud-prov-src.git
$ cd osgi-cloud-prov-src
$ mvn install 
$ ./copy-to-provisioner.sh .../provisioner
Finally, add the changed configuration and provisioner bundle to git in the provisioner clone, then commit and push:
$ cd .../provisioner
$ git add osgi
$ git commit -m "Provisioner VM"
$ git push

The provisioner image will be uploaded to the cloud and if you have a console visible (which you'll get by default by doing a git push on OpenShift) you will eventually see the OSGi Framework coming up.

Adding the identical OSGi Frameworks
I'm launching 3 identical cloud images all containing an unprovisioned OSGi framework. As they become active my demo provisioner will exercise the RemoteDeployer API to provision them with bundles.

First add one called web-something. Mine is called web-coderthoughts. The demo provisioner looks for one with the 'web-' prefix and will deploy the servlet there; this allows us to use a fixed URL in the browser to see what's going on. The setup is exactly the same as described above in the common OSGi Framework bits:
$ git clone ssh://[email protected]/~/git/web.git/
$ cd web
$ git fetch https://github.com/bosschaert/osgi-cloud-infra.git
$ git merge -Xtheirs FETCH_HEAD
And don't forget to set up the SSH tunnel.

Once the web framework is started, the provisioner starts reacting. If you have a console you'll see messages appear such as:
remote: *** Found web framework
remote: *** Web framework bundles deployed

Great! Let's look at the servlet page:
So far so good. We are seeing 2 frameworks in the ecosystem: the web framework and the provisioner. No invocations were made on the TestService because it isn't deployed yet.

So let's deploy our two other frameworks. The steps are the same as for the web framework, except that the VM names are osgi1 and osgi2. Then reload the web page:
The two new frameworks are now visible in the list. Our TestService, which is invoked by the servlet/web vm, is apparently deployed to osgi1. You can see that by comparing the UUIDs.

Reacting to failure
Let's kill the framework hosting the TestService and see what happens :)
OpenShift has the following command for this; if you're using a different cloud provider I'm sure it provides a means to stop a cloud VM image.
$ rhc app stop -a osgi1

Give the system a few moments to react, then refresh the web page:
The osgi1 framework has disappeared from the list and the TestService invocation reports that it's now running in the osgi2 framework. The provisioner has reacted to the disappearance of osgi1 and reprovisioned the TestService bundles to osgi2. The service client (the servlet) was automatically reconfigured to use osgi2 through the Remote Services Discovery mechanism. No static configuration or binding file, simply OSGi service dynamics at work across VMs in the cloud.

Repurposing
Obviously the provisioner in this posting was very simplistic and only really an example of how you could write one. Additionally, things will become even more interesting when you start using an OSGi Repository Service to provide the content to the provisioner. That will bring a highly declarative model to the provisioner and allows you to create a much more generic management agent. I'll write about that in a future post.

Another element not worked out in this post is the re-purposing of OSGi frameworks. However, it's easy to see that with the primitives available this can be done. Initially the frameworks in the cloud ecosystem are identical, and although the provisioner fills them with different content, that content can be removed (the bundles can be uninstalled) and the framework can be given a different role by the provisioner when the need arises.

Note that all the code in the various github projects used in this post has been tagged 0.3.

Thursday, July 12, 2012

Controlling OSGi Services in Cloud Ecosystems

In my previous post I looked at the basics of setting up an OSGi Cloud Ecosystem. With that as a basis I'm going to look at an issue that was brought up during the second OSGi Cloud Workshop, held this year at EclipseCon in Washington, where this list of ideas was produced. One of the topics on this list is about new ways in which services can fail in a cloud scenario. For example, a service invocation might fail because the service consumer's credit card has expired. Or the number of invocations allocated to a certain client has been used up, etc.

In this post I'll be looking at how to address this in the OSGi Cloud Ecosystem architecture that I've been writing about.

First, let's look at the scenario where a cloud service grants a maximum number of invocations to a certain consumer.

An invocation policy for a remote OSGi service


From the consumer side, I'm simply invoking my remote demo TestService as usual:
  TestService ts = ... from the Service Registry ...
  String result = ts.doit();

On the Service Provider side we need to add additional control. As this control (normally) only applies to remote consumers from another framework in the cloud, I've extended the OSGi Remote Services implementation to provide extra control points. I did this on a branch of the Apache CXF-DOSGi project, which is the Remote Services implementation I'm using. I came up with the idea of a RemoteServiceFactory, conceptually a bit similar to a normal OSGi ServiceFactory. Where the normal OSGi ServiceFactory can provide a new service instance for each client bundle, a RemoteServiceFactory can provide a new service (and exert other control) for each remote client. I'm currently distinguishing each client by IP address. I'm not completely sure whether this covers all the cases; maybe some sort of security context would also make sense here.
My initial version of the RemoteServiceFactory interface looks like this:
  public interface RemoteServiceFactory {
    public Object getService(String clientIP, ServiceReference reference);
    public void ungetService(String clientIP, ServiceReference reference, Object service);
  }

Now we can put the additional controls in place. I can limit service invocations to 3 per IP address:
public class TestServiceRSF implements RemoteServiceFactory, ... {
  ConcurrentMap<String, AtomicInteger> invocationCount = new ConcurrentHashMap<>();

  @Override
  public Object getService(String clientIP, ServiceReference reference) {
    AtomicInteger count = getCount(clientIP);
    int amount = count.incrementAndGet();
    if (amount > 3)
      throw new InvocationsExhaustedException("Maximum invocations reached for: " + clientIP);

    return new TestServiceImpl(); // or reuse an existing one
  }
 

  private AtomicInteger getCount(String ipAddr) {
    AtomicInteger newCnt = new AtomicInteger();
    AtomicInteger oldCnt = invocationCount.putIfAbsent(ipAddr, newCnt);
    return oldCnt == null ? newCnt : oldCnt;
  }

A RemoteServiceFactory allows me to add other policies as well, for example it can prevent concurrent invocations from a single consumer, see here for an example, select provided functionality based on the client or even charge the customer per invocation.
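The concurrency-limiting policy mentioned above could be sketched roughly as follows; the class and exception choice here are mine, not the actual branch code. A RemoteServiceFactory would call enter() from getService() and exit() from ungetService():

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Semaphore;

public class SingleInvocationPolicy {
    private final ConcurrentMap<String, Semaphore> inFlight = new ConcurrentHashMap<>();

    // Reject the call if this client already has an invocation in progress.
    public void enter(String clientIP) {
        Semaphore s = inFlight.get(clientIP);
        if (s == null) {
            Semaphore newS = new Semaphore(1);
            Semaphore old = inFlight.putIfAbsent(clientIP, newS);
            s = old == null ? newS : old;
        }
        if (!s.tryAcquire())
            throw new IllegalStateException("Concurrent invocation rejected for: " + clientIP);
    }

    // Release the permit once the invocation has completed.
    public void exit(String clientIP) {
        Semaphore s = inFlight.get(clientIP);
        if (s != null)
            s.release();
    }
}
```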

To register the RemoteServiceFactory in the system I'm currently adding it as a service registration property:
public class Activator implements BundleActivator {
  // ...
  public void start(BundleContext context) throws Exception {
    TestService ts = new TestServiceImpl();
    Dictionary<String, Object> tsProps = new Hashtable<>();
    tsProps.put("service.exported.interfaces", "*");
    tsProps.put("service.exported.configs", "org.coderthoughts.configtype.cloud");
    RemoteServiceFactory tsControl = new TestServiceRSF(context);
    tsProps.put("org.coderthoughts.remote.service.factory", tsControl);
    tsReg = context.registerService(TestService.class.getName(), ts, tsProps);
   ...

More control to the client

Being able to control the service as described above is nice, but seeing an exception in the client when trying to invoke the service isn't great. It would be good if the client could prevent such a situation by asking the framework whether a service it's hosting will accept invocations. For this I added service variables to the OSGiFramework interface. You can ask frameworks in the ecosystem for metadata regarding the services it provides:
  public interface OSGiFramework {
    String getServiceVariable(long serviceID, String name);
  }

I implemented this idea using the OSGi Monitor Admin service (chapter 119 of the Compendium Specification).

Service Variables are accessed via OSGi Monitor Admin

From my client servlet I'm checking the service status before calling the service:
  fw.getServiceVariable(serviceID, OSGiFramework.SV_STATUS);
(Note: given a service reference, you can find out whether it's remote by checking for the service.imported property. You can find the hosting framework instance by matching their endpoint.framework.uuid properties, and you can get the service ID of the service in that framework by looking up the endpoint.service.id of the remote service; see here for an example.)
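That lookup chain can be illustrated with a small sketch, using plain maps in place of OSGi service properties. The property keys are the standard Remote Services ones, but the class and the data are made up:

```java
import java.util.*;

public class ServiceLookupSketch {
    // Given the properties of a service reference and of all known
    // OSGiFramework services, find the framework hosting the service.
    static Map<String, Object> findHostingFramework(
            Map<String, Object> svcProps, List<Map<String, Object>> frameworks) {
        if (svcProps.get("service.imported") == null)
            return null; // not a remote service, so it's hosted locally

        Object uuid = svcProps.get("endpoint.framework.uuid");
        for (Map<String, Object> fw : frameworks) {
            if (uuid != null && uuid.equals(fw.get("endpoint.framework.uuid")))
                return fw;
        }
        return null;
    }

    // The ID to pass to getServiceVariable() is the service's ID
    // in the hosting framework.
    static long remoteServiceID(Map<String, Object> svcProps) {
        return (Long) svcProps.get("endpoint.service.id");
    }
}
```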
So I can ask the OSGiFramework whether I can invoke the service, and it can respond with various return codes. As a starting point for possible return values I took the following list, largely inspired by the HTTP response codes:
  SERVICE_STATUS_OK // HTTP 200
  SERVICE_STATUS_UNAUTHORIZED // HTTP 401
  SERVICE_STATUS_PAYMENT_NEEDED // HTTP 402
  SERVICE_STATUS_FORBIDDEN // HTTP 403
  SERVICE_STATUS_NOT_FOUND // HTTP 404
  SERVICE_STATUS_QUOTA_EXCEEDED // HTTP 413
  SERVICE_STATUS_SERVER_ERROR // HTTP 500
  SERVICE_STATUS_TEMPORARY_UNAVAILABLE // HTTP 503

It might also be worth adding some response codes saying that things are still OK, but not for long. I was thinking of these responses:
  SERVICE_STATUS_OK_QUOTA_ALMOST_EXCEEDED
  SERVICE_STATUS_OK_PAYMENT_INFO_NEEDED_SOON
These need more thought but I think they can provide an interesting mechanism in preventing outages.
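Put together, a client-side pre-check could look like the following sketch. The OSGiFramework service is stood in for by a minimal interface, and the status variable name and string values are illustrative (the real constants live on the OSGiFramework interface):

```java
public class StatusCheckSketch {
    // Minimal stand-in for the OSGiFramework ecosystem service.
    interface Framework {
        String getServiceVariable(long serviceID, String name);
    }

    static final String SV_STATUS = "service.status"; // illustrative variable name

    // Only invoke the remote service when its hosting framework reports
    // it is OK to call; otherwise the client can pick another instance.
    static String invokeIfOk(Framework fw, long serviceID) {
        String status = fw.getServiceVariable(serviceID, SV_STATUS);
        if (!"SERVICE_STATUS_OK".equals(status))
            return "skipped: " + status;
        return "invoked";
    }
}
```

Combined with standard OSGi service dynamics, a consumer can simply move on to another service instance when the check fails.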

Under the hood I'm using the OSGi Monitor Admin Specification. I took an implementation from KnowHowLab (thanks guys!). It gives a nice Whiteboard pattern-based approach to providing metadata via a Monitorable service. As the RemoteServiceFactory is the place where I'm implementing the policies for my TestService, it provides me a natural place to publish the metadata too.

When the client calls OSGiFramework.getServiceVariable(id, SV_STATUS) the OSGiFramework service implementation in turn finds the matching Monitorable, which provides the status information. The Monitorable for my TestService is implemented by its RemoteServiceFactory:
public class TestServiceRSF implements ..., Monitorable {
  ConcurrentMap<String, AtomicInteger> invocationCount = new ConcurrentHashMap<>(); 

  // ...

  @Override
  public StatusVariable getStatusVariable(String var) throws IllegalArgumentException {
    String ip = getIPAddress(var);
    AtomicInteger count = invocationCount.get(ip);

    String status;
    if (count == null) {
      status = OSGiFramework.SERVICE_STATUS_NOT_FOUND;
    } else {
      if (count.get() < 3)
        status = OSGiFramework.SERVICE_STATUS_OK;
      else
        status = OSGiFramework.SERVICE_STATUS_QUOTA_EXCEEDED;
    }
    return new StatusVariable(var, StatusVariable.CM_SI, status);
  }

  private String getIPAddress(String var) {
    // The client IP is the suffix of the variable name
    if (!var.startsWith(OSGiFramework.SERVICE_STATUS_PREFIX))
      throw new IllegalArgumentException("Not a valid status variable: " + var);


    String ip = var.substring(OSGiFramework.SERVICE_STATUS_PREFIX.length());
    return ip;
  }

Using MonitorAdmin through the OSGi ServiceRegistry gives me a nice, loosely coupled mechanism to provide remote service metadata. It fits nicely with the RemoteServiceFactory approach but can also be implemented in other ways.

When I use my web servlet again I can see it all in action. It invokes the test service 5 times:



In the Web UI you can see the 2 OSGi Frameworks in this cloud ecosystem. The local one (that also hosts the Web UI) and another one in a different cloud VM.
The Servlet hosting the webpage invokes the TestService 5 times. In this case there is only a remote instance available. After 3 invocations it reports that the invocation quota has been used up.

The Web UI servlet also invokes another remote service (the LongRunningService) twice concurrently. You can see the policy that prevents concurrent invocation in action: only one invocation succeeds (it waits for a while, then returns 42) and the other reports an error and does not invoke the actual service.

The demo simply displays the service status and the return value from the remote service, but given this information I can do some interesting things.
  • I can make the OSGi service consumer aware and avoid services that are not OK to invoke. Standard OSGi service mechanics allow me to switch to another service without much ado. 
  • I can go even further and add a mechanism in the OSGi framework that automatically hides services if they are not OK to invoke. I wrote a blog post a while ago on how that can be done: Altering OSGi Service Lookups with Service Registry Hooks - the standard OSGi Service Registry Hooks allow you to do things like this.

Do it yourself!

I have updated osgi-cloud-infra (and its source) with the changes to the OSGiFramework service. The bundles in this project are also updated to contain the Monitor Admin service implementation and the changes to CXF-DOSGi that I made on my branch to support the RemoteServiceFactory.

Additionally I updated the demo bundles in osgi-cloud-disco-demo to implement the RemoteServiceFactory, add Service Variables and update the webui as above.

There are no changes to the discovery server component.

Instructions to run it all are identical to what was described in the previous blog post - just follow the steps from 'try it out' in that post and you'll see it in action.
Note that it's possible that this code will be updated in the future. I've tagged this version as 0.2 in git.

Thursday, June 28, 2012

Cloud ecosystems with OSGi

One of the areas where I think that the dynamic services architecture of OSGi can really shine is in the context of cloud. What I have in mind here is a cloud ecosystem comprised of multiple nodes in a cloud, or possibly across clouds, where each node in this ecosystem potentially has a different role from the others. In such a system the various nodes need to be able to work together to perform some function, and hooking the pieces together is really where the fun starts: how do you know from inside one cloud VM where the other ones are? Various people are working on solutions for this, ranging from elastic IP addresses to plugging in variables when launching a VM, and various others. While I agree that these solutions provide value, I think that they should not necessarily bleed into the space of the developer or even the deployer. The deployer should simply be able to select a cloud, create a few instances and associate them with each other. At that point they should simply work together.


This is where OSGi Services come in. OSGi Services implement a Java interface (we might see OSGi services in other languages too in the not too distant future) and are registered in the OSGi Service Registry. Consumers of these services are not tied to the provider as they select the service on its interface or other properties. The provider could be any other bundle in the OSGi Framework, or when using OSGi Remote Services they could be in a different framework. The OSGi Remote Services specs also describe a mechanism for discovery which makes it possible to find remote OSGi services using the standard OSGi Service Registry mechanisms (or component frameworks such as Blueprint, DS, etc).


So I started prototyping such a cloud ecosystem using Red Hat's OpenShift cloud combined with OSGi Remote Services. However you'll see that my bundles are pure OSGi bundles that don't depend on any type of cloud - they simply use the OSGi Service Registry as normal...



In the diagram each OSGi Framework is potentially running in its own Cloud VM, although multiple frameworks could also share VMs (this would be a deployment choice and doesn't affect the architecture).

Before diving into the details, my setup allows me to:
  • register OSGi Services to be shared with other OSGi frameworks within the ecosystem.
  • see what other frameworks are running in this ecosystem. This would be useful information for a provisioning agent.
What's so special about this? Isn't this just OSGi Remote Services? Yes, I'm using those, but the interesting bit is the discovery component which binds the cloud ecosystem together. The user of the Remote Services doesn't need to know where they physically are. Similarly, the provider of the remoted service doesn't need to know how it's distributed.

As with any of my blog articles, I'm sharing the details of all this below, so do try this at home :) Most of the work here relates to setting up the infrastructure. Hopefully we can see cloud vendors provide something like this in the not too distant future which would give you a nice and clean deployment infrastructure for creating dynamic OSGi-based cloud ecosystems (i.e. an OSGi PAAS).

The view from inside an OSGi bundle

Very simple. The provider of a service marks it as shared for use in the cloud ecosystem by adding 2 extra service registration properties. I'm using the standard OSGi Remote Service property service.exported.interfaces for this, however with a special config type: org.coderthoughts.configtype.cloud. This config type is picked up by the infrastructure to mean that it needs to be shared in the current cloud ecosystem.

I wrote a set of demo bundles to show the OSGi cloud ecosystem in action. One of the demo bundles registers a TestService, using the standard BundleContext API and adds these properties:

public class Activator implements BundleActivator {
  public void start(BundleContext context) throws Exception {
    TestService dr = new TestServiceImpl();
    Dictionary<String, Object> props = new Hashtable<>();
    props.put("service.exported.interfaces", "*");
    props.put("service.exported.configs",
              "org.coderthoughts.configtype.cloud");
    context.registerService(TestService.class.getName(), dr, props);
  }
  ....
}
You can see the full provider class here: Activator.java

Consuming the service is completely non-intrusive. My demo also contains a Servlet that provides a simple Web UI to test the service and makes invocations on it. It doesn't need to specify anything special to use an OSGi service that might be in another framework instance. It uses a standard OSGi ServiceTracker to look up the TestService:

ServiceTracker testServiceTracker = new ServiceTracker(context, TestService.class.getName(), null) {
  public Object addingService(ServiceReference reference) {
    testServicesRefs.add(reference);
    return super.addingService(reference);
  }

  public void removedService(ServiceReference reference, 
                             Object service) {
    testServicesRefs.remove(reference);
    super.removedService(reference, service);
  }
};
testServiceTracker.open();
For the whole thing, see here: MyServlet.java

I used plain OSGi Service APIs here, but you can also use Blueprint, DS or whatever OSGi Component technology to work with the services...

The main point is that we are doing purely Service Oriented Programming. As long as the services are available somewhere in the ecosystem their consumers will find them. If a cloud VM dies or another is added, the dynamic capabilities of OSGi Services will rebind the consumers to the changed service locations. The code that deals with the services doesn't deal with the physical cloud topology at all.


Try it out!

As always on this blog I'm providing detailed steps to try this out yourself. Note that I'm using Red Hat's OpenShift which gives you 3 cloud instances for development purposes for free. The rest is all open source software so you can get going straight away.

Also note that you can set this up using other clouds too, or even across different clouds; the OSGi bundles aren't affected by this at all. So if you prefer another cloud the only thing you need to do there is set up the Discovery system for that cloud; the same OSGi bundles will work.

Cloud instances

For this example I'm using 3 cloud VMs to create my ecosystem, all based on the OpenShift 'DIY' cartridge as I explained in my previous posting. They have the following names:
  • discoserver - provides the Discovery functionality
  • osgi1 and osgi2 - two logically identical OSGi frameworks

Discovery

The Discovery functionality is based on Apache ZooKeeper and actually runs in its own cloud vm. Everything you need is available from the github project osgi-cloud-discovery.

Here's how I get it into my cloud image (same as described in my previous post):
$ git clone ssh://[email protected]/~/git/discoserver.git/ (this is the URL given to you when you created the OpenShift vm)
$ cd discoserver
$ git fetch https://github.com/bosschaert/osgi-cloud-discovery.git
$ git merge -Xtheirs FETCH_HEAD 

then launch the VM:
$ git push
... after a while you'll see:
Starting zookeeper ... STARTED
Done - I've got my discovery system started in the cloud.

I didn't replicate discovery (for fault tolerance) here for simplicity; that can be added later.


The OSGi Frameworks

For the OSGi Frameworks I'm starting off with 2 identical frameworks which contain the baseline infrastructure. I put this infrastructure in the osgi-cloud-infra github project. To get this into your VM clone as provided by the OpenShift 'DIY' cartridge do similar to the above:
$ git clone ssh://[email protected]/~/git/osgi1.git/
$ cd osgi1
$ git fetch https://github.com/bosschaert/osgi-cloud-infra.git
$ git merge -Xtheirs FETCH_HEAD 

At this point it gets a little tricky as I'm setting up an SSH tunnel to the discovery instance to make this vm part of the discovery domain. To do this, I create an SSH key which I'm adding to my OpenShift account, then each instance that's part of my ecosystem uses that key to set up the SSH tunnel.

Create the key and upload it to OpenShift:
$ cd disco-tunnel
$ ssh-keygen -t rsa -f disco_id_rsa
$ rhc sshkey add -i disco -k disco_id_rsa.pub 

Create a script that sets up the tunnel. For this we also need to know the SSH URL of the discovery VM. This is the [email protected] identifier (or whatever OpenShift provided to you). In the disco-tunnel directory there is a template for this script. Copy it and add the identifier to the DISCOVERY_VM variable in the script:
$ cp create-tunnel-template.sh create-tunnel.sh
$ vi create-tunnel.sh
... set the DISCOVERY_VM variable ...

Finally, add the new files in here to git:
$ git add create-tunnel.sh disco_id_rsa*

For any further OSGi framework instances, you can simply copy the files added to git here (create-tunnel.sh and disco_id_rsa*) and add to the git repo. 

As you can see, this bit is quite OpenShift specific. It's a once-off thing that needs setting up and it's not really ideal, I hope that cloud vendors will make something like this easier in the future :)


Add Demo bundles

At this point I have my cloud instances set up as far as the infrastructure goes. However, they don't do much yet given that I don't have any application bundles. I want to deploy my TestService as described above and I'm also going to deploy the little Servlet-based Web UI that invokes it so that we can see it happening. The demo bundles are hosted in a source project: osgi-cloud-disco-demo.

To deploy, clone and build the demo bundles:
$ git clone git://github.com/bosschaert/osgi-cloud-disco-demo.git
$ cd osgi-cloud-disco-demo
$ mvn install

The next thing we need to do is deploy the bundles. For now I'm using static deployment, but I'm planning to expand to dynamic deployment in the future.

I'm deploying the Servlet-based Web UI bundle first. The osgi-cloud-disco-demo source tree contains a script that copies the bundles and updates the configuration to deploy them in the framework:
$ ./copy-to-webui-framework.sh ~/clones/osgi1

In the osgi1 clone I can now see that the bundles have been added and the configuration to deploy them updated:
$ git status
#    modified:   osgi/equinox/config/config.ini
# Untracked files:
#    osgi/equinox/bundles/cloud-disco-demo-api-1.0.0-SNAPSHOT.jar
#    osgi/equinox/bundles/cloud-disco-demo-web-ui-1.0.0-SNAPSHOT.jar

Add them all to git, then commit and push the git repo:
$ git add osgi/equinox
$ git commit -m "An OSGi Framework Image"
$ git push

The cloud VM is started as part of the 'git push'.

Let's try the demo web UI: go to the /webui context of the domain that OpenShift provided to you and it will display the OSGi frameworks known to the system and all the TestService instances:
There is 1 framework known (the one running the webui) and no TestService instances. So far so good.

Next we'll make the TestService available in another Cloud vm.
Create another cloud VM (e.g. osgi2) identical to osgi1, but without the demo bundles.

Then deploy the demo service provider bundles:
$ ./copy-to-provider-framework.sh ~/clones/osgi2

In the osgi2 clone I can now see that the bundles have been added and the configuration to deploy them updated:
$ git status
# On branch master
#    modified:   osgi/equinox/config/config.ini
#
# Untracked files:
#    osgi/equinox/bundles/cloud-disco-demo-api-1.0.0-SNAPSHOT.jar
#    osgi/equinox/bundles/cloud-disco-demo-provider-1.0.0-SNAPSHOT.jar

Add them all to git and commit and push the git repo:
$ git add osgi/equinox 
$ git commit -m "An OSGi Framework Image"
$ git push

Give the system a minute or so, then refresh the web UI:

You can now see that there are 2 OSGi frameworks available in the ecosystem. The web UI (running in osgi1) invokes the test service (running in osgi2) which, as a return value, reports its UUID to show that it's running in the other instance.

Programming model

The nice thing here is that I stayed within the OSGi programming model. My bundles simply use an OSGi ServiceTracker to look up the framework instances (which are represented as services) and the TestService. I don't have any configuration code to wire up the remote services. This all goes through the OSGi Remote Services-based discovery mechanism.
Also, the TestService is invoked as a normal OSGi Service. The only 'special' thing I did here was to mark the TestService as exported in the cloud with some service properties.
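To illustrate the pattern (not the actual demo code), here's a toy, in-memory stand-in for the service registry in plain Java. The TestService interface and its doit() method are made up for this sketch, and in the real ecosystem the Discovery mechanism injects a proxy to the remote service rather than a locally registered implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy, in-memory stand-in for the OSGi service registry. The point: the
// consumer looks the service up by interface name and never sees a hostname.
public class ToyRegistry {
    // Hypothetical service interface; the real demo's TestService may differ.
    interface TestService {
        String doit();
    }

    private final Map<String, Object> services = new HashMap<>();

    public void register(String interfaceName, Object service) {
        services.put(interfaceName, service);
    }

    @SuppressWarnings("unchecked")
    public <T> T lookup(String interfaceName) {
        return (T) services.get(interfaceName);
    }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        // In the ecosystem, Discovery registers a proxy to the remote service;
        // here we register a local implementation to keep the sketch runnable.
        registry.register(TestService.class.getName(),
                (TestService) () -> "invoked in framework with UUID abc-123");
        TestService svc = registry.lookup(TestService.class.getName());
        System.out.println(svc.doit());
    }
}
```

Note that the consumer code contains no endpoint configuration at all; that's the property the real OSGi Remote Services discovery preserves.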

Conclusion

This is just a start... I think it opens up some very interesting possibilities and I intend to write more posts in the near future about dynamic provisioning in this context, service monitoring and other cloud topics. The example I've been running here is on my employer's (Red Hat) OpenShift cloud - but it can work on any cloud or even across clouds and the bundles providing the functionality generally don't need to know at all what cloud they're in...

Some additional notes

Cloud instances typically have a limited set of ports they can open to the outside world. In the case of OpenShift you currently get only one: port 8080 which is mapped to external port 80. If you're running multiple applications in a single cloud VM this can sometimes be a problem as they each may want to have their own port. OSGi actually helps here too. It provides the OSGi HTTP Service where bundles can register any number of resources, servlets etc on different contexts of the same port. So in my example I'm running my Servlet on /webui but I'm also using Apache CXF-DOSGi as the Remote Services implementation which exposes the OSGi Services on the same port but different contexts. As many web-related technologies in OSGi are built on the HTTP Service they can all happily coexist on a single port.
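As a sketch of the idea, here's a toy dispatcher in plain Java that routes request paths to components by longest matching context, the way a single-port HTTP Service lets /webui and the remote-service contexts coexist. The context and component names are just examples, and this is not how any HTTP Service implementation is actually written:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy dispatcher: several components share one port by claiming different
// contexts, conceptually like bundles registering with the OSGi HTTP Service.
public class ContextDispatcher {
    private final NavigableMap<String, String> contexts = new TreeMap<>();

    public void register(String context, String component) {
        contexts.put(context, component);
    }

    // Longest matching context wins; iterating in descending key order
    // visits "/webui" before "/web" for the same request path.
    public String dispatch(String path) {
        for (String ctx : contexts.descendingKeySet()) {
            if (path.equals(ctx) || path.startsWith(ctx + "/")) {
                return contexts.get(ctx);
            }
        }
        return "404";
    }

    public static void main(String[] args) {
        ContextDispatcher d = new ContextDispatcher();
        d.register("/webui", "demo web UI servlet");
        d.register("/auction", "remote service endpoint");
        System.out.println(d.dispatch("/webui/index.html"));   // demo web UI servlet
        System.out.println(d.dispatch("/auction/AuctionService")); // remote service endpoint
    }
}
```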

Friday, May 8, 2009

Questions from the RFC 119 webinar

There were quite a number of questions on the webinar and I didn't get to answer all of them, so I'm providing some answers here. For those of you who missed the webinar, you can still view the recorded version of the Distributed OSGi webinar here.
The source code of the GreeterDemo that I used during the demo session can be found in the Apache CXF demos SVN here: http://svn.apache.org/repos/asf/cxf/dosgi/trunk/samples/greeter. For details on how to run the demo, see the demo walkthrough page on the CXF Distributed OSGi website.

Below are the questions asked during the webinar, and an attempt to answer them. Feel free to send comments if I missed anything…

Q: Does it use RMI?
A: The wire-protocol and data-binding used are not specified by Distributed OSGi. They are up to the implementation. The Apache CXF implementation used in the demo uses SOAP/HTTP but other implementations could be written to use other wire protocols, such as RMI.

Q: Why don't you use JaxWS?
A: It's all about choice for the developer. JaxWS is a specific programming model for Web Services in Java. Distributed OSGi is for those who like the OSGi Services programming model, and would like to stay within that programming model to distribute OSGi services. It's not incompatible with other programming models, such as JaxWS, JaxRS, CORBA etc and can be used alongside them.

Q: Can I use Distributed OSGi services from Visual Basic?
A: Yes you can! Visual Basic programs can invoke Web Services that are defined by WSDL. If you use the Apache CXF Distributed OSGi implementation it will generate a WSDL for your OSGi service which you can use from your Visual Basic client.

Q: Is there an implementation of Distributed OSGi available in Open Source?
A: Yes, the Apache CXF implementation, Eclipse ECF implementation and Apache Tuscany implementation are all open source and freely available to everyone.

Q: Can I use Distributed OSGi services from an Ajax browser client?
A: Yes, this is possible. I wrote an example of this on this blog a little while ago, see here: http://coderthoughts.blogspot.com/2009/02/distributed-osgi-powered-ajax-webapp.html.

Q: Can I use CXF with Equinox, specifically pre-3.5?
A: The short answer is no, but you can use the current betas of Equinox 3.5 (builds since December). Any implementation of Distributed OSGi will need the Service Registry Hooks. Since this is a new API, it is only available in Equinox 3.5 and Apache Felix 1.4.1 or newer.

Q: Why is it based on webservices?
A: The design of Distributed OSGi is not based on Web Services and you may find implementations that don't use web services at all. The Apache CXF implementation is based on Web Services. Web services provide a standards-based approach for distributing software and allow integration with other (non-OSGi) systems. Although not required per se by an implementation of RFC 119, ‘Legacy integration' is one of the use cases that drove the design of Distributed OSGi, and by using Web Services this use-case can be addressed.

Q: You are using CXF for cross-jvm/osgi communication and mentioned that the rfc does not define the communication protocol. Does that mean that it is possible to implement a custom communication protocol?
A: Absolutely!

Q: How can I specify which protocol(s) to be used for some remote service?
A: Use the intents mechanism for that. The intents mechanism allows you to declaratively specify constraints on the remote service. In short, you can put a property on the service that you want to remote which declares over what binding/protocol the service should be exposed. So you could do something like: osgi.remote.requires.intents=JMS, which means that the service should only be exposed over JMS. If the Distribution Software (DSW) can't satisfy the intent, it will not be able to expose the service. Although some values (for SOAP and JMS) have been defined in the RFC for this property, the actual possible values depend on the Distribution Software implementation used.
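The matching rule a DSW applies can be sketched in a few lines of plain Java. This is an illustration of the rule only, not CXF's actual implementation, and the space-separated intent list is an assumption:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// A DSW may expose a service only when it can satisfy every intent the
// service requires (e.g. osgi.remote.requires.intents=JMS).
public class IntentCheck {
    public static boolean canExpose(Set<String> dswIntents, String requiredIntents) {
        for (String intent : requiredIntents.trim().split("\\s+")) {
            if (!dswIntents.contains(intent)) {
                return false; // one unsatisfiable intent blocks exposure
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> soapDsw = new HashSet<>(Arrays.asList("SOAP", "HTTP"));
        System.out.println(canExpose(soapDsw, "SOAP")); // true
        System.out.println(canExpose(soapDsw, "JMS"));  // false
    }
}
```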

Q: How do I set the discovery service dynamically without the remote-services.xml?
A: Use an RFC 119-compliant Discovery implementation; these don't use the remote-services.xml files. We're currently working on one in the Apache CXF codebase that is based on Apache ZooKeeper, but other implementations, such as the one based on SLP at Eclipse ECF, are also available.

Q: Can I specify more than one protocol for one service? For example Web Services, CORBA, RMI, etc.
A: If you specify the protocol to use via the osgi.remote.requires.intents mechanism as described above, you can only specify one. E.g. specifying that the service has to satisfy both the RMI and CORBA intents at the same time is a contradiction. However, you can register the same POJO twice with the OSGi Service registry, once for each protocol.

Q: Is Discovery service a separate OSGi container?
A: The RFC 119 Discovery architecture typically uses a centralized Discovery Server (potentially a replicated one, like we have with the CXF/ZooKeeper implementation). The Discovery Service in the OSGi container provides a standardized interface to such a Discovery Server. The Distribution Software (DSW) interacts with the Discovery Service in the local OSGi service registry, which then communicates with the centralized Discovery Server.

Q: Service Registry Hooks limit the visibility of services. This is similar to access control on the service. What are your views on this? Thanks
A: I forwarded this question to BJ Hargrave from IBM, the main author of the Service Registry Hooks RFC, he provided the following answer: "It is more about controlling visibility to services (but I guess some people may call that access [see http://blog.bjhargrave.com/2009/03/i-am-visible-but-am-i-accessible.html]). But the primary purpose is not to secure the system since without a Security Manager installed, there is no actual security. Its primary purpose is to control service visibility to influence the services a bundle sees so that it selects a desired service rather than another service. For example, the proxy for X rather than X. "

Q: So if they [OSGi Services] don't implement any signature, how are they identified within the registry?
A: OSGi Services don't need to implement a particular OSGi signature or interface, but they would typically implement an interface that relates to their purpose. In the presentation I mentioned a Purchase Service, which implements a foo.bar.PurchaseService interface. Normally services are looked up via their interface, but you can also look up services in the OSGi Service Registry purely on other properties that they might have.

Q: When you say 'Dynamically come and go', can you elaborate on that what does that really mean?
A: Example: A running OSGi system. A consumer in that system might be using a service that satisfies its lookup criteria. While the system is running another service that also satisfies these criteria could arrive (either because a new bundle that creates this service is installed or because an existing bundle registers a new service). Depending on the consumer's requirements, it could now potentially be using both services, the old and the new one. At a later time one of the services might go away at which point the consumer will only be using the one that's left. All (properly developed) OSGi service consumers are capable of dealing with dynamics like this.
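A toy plain-Java sketch of such a dynamics-tolerant consumer follows. Real OSGi consumers would typically use a ServiceTracker for this; the service names here are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Toy consumer that tolerates services arriving and going away, the way a
// properly developed OSGi service consumer does.
public class DynamicConsumer {
    private final List<String> matching = new ArrayList<>();

    public void serviceRegistered(String id)   { matching.add(id); }
    public void serviceUnregistered(String id) { matching.remove(id); }

    // Ask every time instead of caching a service reference forever.
    public String currentService() {
        return matching.isEmpty() ? null : matching.get(0);
    }

    public static void main(String[] args) {
        DynamicConsumer c = new DynamicConsumer();
        c.serviceRegistered("serviceA");
        c.serviceRegistered("serviceB");   // a second matching service arrives
        c.serviceUnregistered("serviceA"); // the first one goes away
        System.out.println(c.currentService()); // serviceB
    }
}
```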

Q: Is the WSDL automatically generated by some tool?
A: Yes, the Apache CXF DOSGi implementation does this dynamically at runtime, based on the Java interface of the service. You don't have to run a tool for this with CXF, although other implementations might require running a tool to achieve something like this.

Q: What does pojo mean?
A: Plain Old Java Object.

Q: I suppose that the parameters of the exposed service are of 'primitive' types like int, string, double, DateTime, but cannot be of a type like, for example, SessionFactory?
A: It depends on the DOSGi implementation used. If you want your code to be suitable for a number of DOSGi implementations it's best to restrict the use of data types to primitives, collections and arrays. The CXF DOSGi implementation also supports classes and interfaces as part of the signature as long as they in turn are defined in terms of the data types mentioned.

Q: Can you highlight the security capabilities that are provided by distributed OSGi?
and Q: Can we secure the communication between remote OSGI servers (and authenticate, authorise access to the services)?
A: Security is a Quality of Service that DOSGi implementations could provide. The intents mechanism can be used to only select remote services that are secure, e.g. by requesting the ‘confidentiality' intent, or by requesting a custom intent (such as AcmeSecurity) that your deployer defines with the appropriate security configuration. For more information see chapter 5.5.3 of RFC 119 (in http://www.osgi.org/download/osgi-4.2-early-draft3.pdf).

Q: How do local discovery services know about other discovery services?
A: The Discovery Service registered in the OSGi container communicates with a centralized Discovery Server. This Discovery Service needs to be configured with the connection details of the central discovery server.

Q: What implementations of distributed OSGi will be available in Equinox 3.5? Anything faster than webservices?
A: Equinox 3.5 does not have any Distributed OSGi implementations, but Eclipse 3.5 provides the ECF implementation. I guess I'm wondering what performance difference you are really looking for. Have you measured the performance of CXF webservices versus something else? Could you provide us with some target figures? I know that CXF performs pretty well…

Q: Can you specify multiple instance endpoints for the same service?
A: On the server side one remote service typically maps to one remote endpoint (although exceptions are possible; especially if the service implements multiple interfaces, some implementations will give you one endpoint per interface). If you want more endpoints you can always register more OSGi services for the same POJO (although I'm not sure what the use-case for that would be).
On the consumer side, the consumer can be informed of multiple remote endpoints satisfying the OSGi Service Lookup criteria, in which case the consumer could get multiple client-side proxies to the remote services.

Q: Can you talk more about discovery with ZooKeeper and how that would work?
A: In the CXF project I started implementing the RFC 119 Discovery implementation using Apache ZooKeeper as the underlying Discovery mechanism. ZooKeeper is based on a replicated central server that holds a virtual file system. Consumers connecting to the ZooKeeper server will all view the same filesystem which holds the discovery information. The way this works in practice is that you have your cluster (1 or more) of zookeeper servers running. The CXF Discovery implementation bundle is installed in your OSGi containers. It will communicate with the ZooKeeper server for the discovery information. The net effect is that the remote service consumers will see the remote services without needing any extra configuration.

Q: Is there another org.osgi.remote.type foreseen (e.g. POJO)? What is the particularity of the type 'POJO' except its name?
A: The POJO configuration type is specific to CXF. There is one configuration type specified in RFC 119: SCA, which is optional. Using SCA as a way to configure Distributed OSGi will give you portability on the configuration level. CXF does not implement the SCA configuration type yet…

Q: Will it be possible using 'blueprint or spring DM' to integrate remote OSGi with a Spring XML file?
A: Yes - there is a demo that comes with the CXF DOSGi implementation that uses Spring-DM to create a remote OSGi service and on the consumer side it uses Spring-DM to create the consumer. Distributed OSGi is orthogonal to any OSGi component model and should therefore work fine with any of them. See here for the demo code: http://svn.apache.org/repos/asf/cxf/dosgi/trunk/samples/spring_dm

Q: is it possible to load balance across distributed OSGi bundles?
A: Yes – if multiple remote instances exist the consumer could be informed about all of them: the consumer will get a service proxy for each remote instance. The consumer can use these to implement some load-balancing algorithm.
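A minimal sketch of such a consumer-side load-balancing algorithm, assuming the consumer has collected its client-side proxies into a list (plain Java, not part of any DOSGi implementation):

```java
import java.util.Arrays;
import java.util.List;

// Simple round-robin over the client-side proxies the consumer received,
// one proxy per remote instance of the service.
public class RoundRobin<T> {
    private final List<T> proxies;
    private int index = 0;

    public RoundRobin(List<T> proxies) {
        this.proxies = proxies;
    }

    // Hand out the proxies in rotation, spreading invocations evenly.
    public synchronized T next() {
        T proxy = proxies.get(index);
        index = (index + 1) % proxies.size();
        return proxy;
    }

    public static void main(String[] args) {
        RoundRobin<String> rr = new RoundRobin<>(Arrays.asList("proxy1", "proxy2"));
        System.out.println(rr.next()); // proxy1
        System.out.println(rr.next()); // proxy2
        System.out.println(rr.next()); // proxy1 again
    }
}
```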

Q: Why does a copy of the interface and class have to live on the client?
A: Only the interface needs to be available to the service consumer. The implementation class does not. I think this is a very natural thing to do. If I write a consumer to a service, like a PurchaseService, I need to know the interface to that PurchaseService; how else am I going to use it in a meaningful way? The interface is the contract between the consumer and the service provider. This is also the case for local OSGi Services: consumers can't use an OSGi service if they don't have access to the interface.

Q: It seems like you are cheating if you get a copy of the interface and class in your client code prior to starting the demo.
A: No, I'm not cheating; only the interface is present in the consumer. When I run the consumer I can see that the value I enter in the consumer running in Felix is echoed in the Equinox server window. The implementation resides in Equinox.

Q: Does distributed OSGi support multiple instances of the same service, and thus software-based failover of services across remote services?
A: Yes, see above.

Q: For properties, is there a namespace-type construct to prevent collisions? Or is it inherent based on the scope of a service?
A: No, there isn't really a namespace (although the Distributed OSGi properties use the osgi. prefix), but it is good practice to qualify your own properties with your own organization, just like a Java package, e.g. com.acme.myproperty, to avoid clashes.

Q: Are there standard properties of services, like names, that every OSGI service should define?
A: No. It's absolutely fine to register a service with no properties. OSGi will typically add a few properties, like service.id and objectClass, but as a service provider you don't need to worry about these.

Q: How does a Discovery service choose if the service is deployed both locally and remotely?
A: If a good match is available locally, DOSGi will not look for remote services. If you want to influence the match, forcing either a local or a remote one, you can refine your lookup by adding extra criteria to the LDAP filter.

Q: Can the contract be dynamically found and transmitted remotely?
A: You could write a system to do this, as bundles can be installed at any time in a running OSGi system, but the Distributed OSGi standard doesn't define this.

Q: With multiple instances of the same service, would they have the same endpoint, or could they have different endpoints? How would a discovery pick which one?
A: They will have different endpoints. Everything being equal, Discovery will simply inform you about all of them. If your consumer only needs one of them, which one it gets is undefined. If the consumer has a preference for a particular one, it needs to refine its filter to match only that one. You can also have the consumer receive proxies to all remote instances.

Q: Can you turn existing code (POJOs) into distributed OSGI services just using configuration (i.e. no java re-coding)?
A: Yes, OSGi Blueprint/Spring-DM provide a lot of support here. You configure them with an XML file that will instantiate your POJO and register it as an OSGi service.

Q: Are you familiar with the Paremus fabric, distributed OSGi runtime that uses SCA as the "wire-up" model? If so, does it use SCA in the way you described?
A: I'm not really familiar with this product, but I have seen demos of it. The SCA part of RFC 119 is really about defining qualities of service for the distributed services. It's not used for wiring services; the OSGi Service registry is used for that.

Q: Will both local and remote services be returned from a search?
A: There is no difference in the API to use to obtain local services or remote services. However, if appropriate local matches are available, the system won't look for remote services. If you want all local and remote matches you can do lookups with and without the osgi.remote property. The osgi.remote property is only set on proxies for remote services.

Q: Are there any drawbacks, using OSGI distributed services that you may know of?
A: None :) Seriously, yes, remote computing is not the same as local computing. There are additional failure scenarios, additional latency, additional security impacts and so on. So if your system works perfectly with all your services locally, keep it that way. However, if you want to distribute your OSGi services, then RFC 119 is a good way to do it!

Tuesday, April 21, 2009

Webinar and demo of Distributed OSGi (RFC 119) on Tue April 28

Just to let y'all know... I'll be doing a webinar and demo of Distributed OSGi Services on Tuesday April 28.
Distributed OSGi is a major new feature in the OSGi 4.2 specification. For more info, see http://fusesource.com/resources/video-archived-webinars

The demo will be done using the Apache CXF Reference Implementation.
Registration and attendance is free :)

Monday, February 9, 2009

A Distributed OSGi Powered AJAX WebApp

A pretty cool new way of using Distributed OSGi services is from a non-OSGi environment. In the previous post I used an OSGi AuctionService from an OSGi AuctionConsumer in a different VM.
This posting is about using that OSGi service from a non-OSGi consumer. From an AJAX-powered web application that's running in the browser!

I'm writing the webapp using the Google Web Toolkit (GWT). Both the webapp and my OSGi service are hosted in the same OSGi container. With AJAX you can create a rich client application running inside the browser. So no Servlet/JSP-style server-side execution; the webapp runs on the client machine. GWT makes creating the AJAX app easy: you write it in plain old Java. No need to sidestep into Javascript, as GWT turns the Java into a Javascript AJAX app, and all (well, almost all) of the browser-specific madness is taken care of by GWT too.

As with any website, end-users simply open it in their browser. No client install, no browser plugins. Any modern browser supports this stuff natively. Remote communication, such as obtaining the list of available items, listing a new item, or placing a bid, is done via SOAP/HTTP calls directly on my OSGi AuctionService (the one from the previous posting), which has been remoted using the Apache CXF-based Distributed OSGi Reference Implementation.

To do all this I'm adding a component called Pax Web Extender to my OSGi container, on top of the Distributed OSGi bundles. Pax Web Extender turns the OSGi container into a web container which means that I can deploy a .WAR file into it. It internally leverages the OSGi HTTP Service. (Spring DM Server can do this too BTW).

My OSGi container is configured like this:

Besides the CXF Distributed OSGi and Pax-Web-Extender Bundles I have my AuctionService and AuctionInterface bundles deployed (see previous posting for the details), plus the .WAR file that holds my AJAX webapp.

The reason I can invoke my OSGi Service is because it was made available remotely over a standards-based mechanism. Distributed OSGi itself doesn't prescribe how you do your remoting, it only says that you can use standards-based wire protocols and bindings if you wish and that this can make OSGi Services available to non-OSGi consumers.

The CXF implementation of Distributed OSGi turns an OSGi Service into a Web Service that is accessible via SOAP/HTTP. GWT can natively make remote HTTP invocations, so I didn't even bother looking for a SOAP library for GWT. SOAP is just XML, and GWT has some pretty good XML parsing functionality.
The last thing I need to worry about (besides security, but that's a separate topic) is the browser's same-origin policy. But hey: I've deployed my webapp in the same runtime as my OSGi Service, so as long as I make them available over the same port there's no problem! I achieved this by having both the Distributed OSGi runtime and the Pax Web Extender share the same OSGi HTTP Service. Thanks to the OSGi Services architecture!
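As a taste of how little parsing a SOAP response really needs, here's the equivalent in plain Java using the standard DOM API (GWT offers a similar DOM-style XML API in the browser). The response shape and the 42 value are invented for the example; a real CXF-generated response follows the service's WSDL:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class SoapParse {
    // Extract the text of the first element with the given local name,
    // ignoring namespace prefixes; enough for simple SOAP responses.
    public static String firstValue(String soap, String localName) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(soap.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagNameNS("*", localName).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // Invented response shape for illustration only.
        String resp = "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<soap:Body>"
                + "<ns1:getItemIDsResponse xmlns:ns1=\"http://auction.coderthoughts.org/\">"
                + "<ns1:itemID>42</ns1:itemID>"
                + "</ns1:getItemIDsResponse></soap:Body></soap:Envelope>";
        System.out.println(firstValue(resp, "itemID")); // 42
    }
}
```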

So I'm taking the AuctionService of the previous posting, but first of all I want to expose it on a slightly simpler URL. The default URL of http://localhost:9000/org/coderthoughts/auction/AuctionService is just a bit too long, let's use /auction as the context instead. I also want to make it use the OSGi HTTP Service (which CXF doesn't do by default). I can do both in one shot by setting the osgi.remote.configuration.pojo.httpservice.context service property to the value /auction. The HTTP Service that comes with CXF runs by default on port 8080 (you can change this by setting the org.osgi.service.http.port system property) so my Web Service now runs on http://localhost:8080/auction. Change the activator that registers the service to the following:
public class Activator implements BundleActivator {
  private ServiceRegistration sr;

  public void start(BundleContext context) throws Exception {
    Dictionary props = new Hashtable();
    props.put("osgi.remote.interfaces", "*");
    props.put("osgi.remote.configuration.type", "pojo");
    props.put("osgi.remote.configuration.pojo.httpservice.context",
      "/auction");
    sr = context.registerService(AuctionService.class.getName(),
      new AuctionServiceImpl(), props);
  }

  public void stop(BundleContext context) throws Exception {
    sr.unregister();
  }
}

To double check that the service is operational on the new address, simply open http://localhost:8080/auction?wsdl and see if the WSDL appears again (just like in the previous posting).

I won't go into all the detail of writing the GWT app here. Look at the code in the SVN project for all of that.
I constructed my GWT application the usual way by creating a starting point with the projectCreator and applicationCreator scripts that come with GWT. This creates a nice Eclipse project skeleton for GWT from which I can start hacking my AJAX site in Plain Old Java :)

All you need to do in the GWT app to invoke on the Distributed OSGi service is:
  • figure out where it's running. Since the Web Service runs on the same host and port as the AJAX app, on the /auction context, I can construct the Web Service URL as follows:
    webServiceURL = "http://" + Window.Location.getHost() + "/auction/";
  • create a SOAP message, just a bit of XML really.
  • send it via an HTTP POST request to the Web Service using a com.google.gwt.http.client.RequestBuilder that comes with GWT.
  • register a callback object to process the SOAP (XML) response.
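The request side of the steps above can be sketched in plain Java. This hypothetical helper only shows the shape of the message; the real app builds it in GWT code and sends it with a RequestBuilder POST:

```java
// A SOAP request really is just XML: an Envelope, a Body, and one element
// named after the operation in the service's namespace.
public class SoapEnvelope {
    public static String envelope(String operation, String namespace, String body) {
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>"
             + "<ns1:" + operation + " xmlns:ns1=\"" + namespace + "\">" + body
             + "</ns1:" + operation + ">"
             + "</soap:Body></soap:Envelope>";
    }

    public static void main(String[] args) {
        // Operation name taken from the AuctionService; body left empty here.
        System.out.println(envelope("getItemIDs",
                "http://auction.coderthoughts.org/", ""));
    }
}
```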

All of this happens in the AuctionSite class which is the webapp's main class. There are a few more classes such as data classes and a few dialogs. Check out the project from SVN to see them all...

To prepare my webapp (.WAR file) for running with the Pax Web Extender I'm turning it into an OSGi bundle by adding the following META-INF/MANIFEST.MF file:
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: AuctionSite
Bundle-Version: 1.0.0
Webapp-Context: auctionsite

The Webapp-Context is important as this tells the Pax Web Extender where to register the webapp.

I also have to add a WEB-INF/web.xml file to the .WAR as this is the signature file that the Pax-Web Extender needs in order to spot a bundle as a webapp. However, I don't really need to put anything in it. My webapp consists only of static files (no servlets etc.) as far as the webserver is concerned, so my web.xml is simply:
<web-app>
  <display-name>AuctionSite</display-name>
</web-app>


GWT comes with scripts that you run to turn your Java into AJAX.
I wrote a tiny little ant script that calls the GWT script and then creates a .WAR file from my GWT project in Eclipse.

Now let's boot it all up! I'm using Equinox 3.5M5 with the Multi Bundle Distribution of CXF-DOSGi.

To install the Multi Bundle Distribution of CXF-DOSGi, download it from here. Then unzip it in your Felix/Equinox installation dir.
Take the contents of the felix.config.properties.append or equinox.config.ini.append and append it to the configuration file of your OSGi container.
This will automatically load all the DOSGi bundles when starting up the OSGi container.


  • Start Equinox with the DOSGi Multi Bundle configuration to make it load all the DOSGi bundles at startup.
  • Then install and start the PAX Web Extender from here and your AuctionIntf and AuctionImpl bundles.
  • Finally install and start the AuctionSite.war file, just like any other bundle in OSGi.


You should now have the following bundles running in Equinox:
osgi> ss
Framework is launched.
id State Bundle
0 ACTIVE org.eclipse.osgi_3.5.0.v20090127-1630
1 ACTIVE org.eclipse.osgi.services_3.2.0.v20081205-1800
2 ACTIVE org.apache.geronimo.specs.geronimo-annotation_1.0_spec_1.1.1
3 ACTIVE org.apache.geronimo.specs.geronimo-activation_1.1_spec_1.0.2
4 ACTIVE org.apache.geronimo.specs.geronimo-javamail_1.4_spec_1.2.0
5 ACTIVE org.apache.geronimo.specs.geronimo-ws-metadata_2.0_spec_1.1.2
6 ACTIVE com.springsource.org.apache.commons.logging_1.1.1
7 ACTIVE com.springsource.org.jdom_1.0.0
8 ACTIVE org.springframework.bundle.spring.core_2.5.5
9 ACTIVE org.springframework.bundle.spring.beans_2.5.5
10 ACTIVE org.springframework.bundle.spring.context_2.5.5
11 ACTIVE com.springsource.org.aopalliance_1.0.0
12 ACTIVE org.springframework.bundle.spring.aop_2.5.5
13 ACTIVE org.springframework.bundle.osgi.io_1.1.2
14 ACTIVE org.springframework.bundle.osgi.core_1.1.2
15 ACTIVE org.springframework.bundle.osgi.extender_1.1.2
16 ACTIVE org.ops4j.pax.web.service_0.5.1
17 ACTIVE org.apache.servicemix.specs.locator-1.1.1
18 ACTIVE org.apache.servicemix.bundles.jaxb-impl_2.1.6.1
19 ACTIVE org.apache.servicemix.bundles.wsdl4j_1.6.1.1
20 ACTIVE org.apache.servicemix.bundles.xmlsec_1.3.0.1
21 ACTIVE org.apache.servicemix.bundles.wss4j_1.5.4.1
22 ACTIVE org.apache.servicemix.bundles.xmlschema_1.4.2.1
23 ACTIVE org.apache.servicemix.bundles.asm_2.2.3.1
24 ACTIVE org.apache.servicemix.bundles.xmlresolver_1.2.0.1
25 ACTIVE org.apache.servicemix.bundles.neethi_2.0.4.1
26 ACTIVE org.apache.servicemix.bundles.woodstox_3.2.7.1
27 ACTIVE org.apache.cxf.cxf-bundle-minimal_2.2.0.SNAPSHOT
28 ACTIVE org.apache.servicemix.specs.saaj-api-1.3_1.1.1
29 ACTIVE org.apache.servicemix.specs.stax-api-1.0_1.1.1
30 ACTIVE org.apache.servicemix.specs.jaxb-api-2.1_1.1.1
31 ACTIVE org.apache.servicemix.specs.jaxws-api-2.1_1.1.1
32 ACTIVE cxf-dosgi-ri-discovery-local_1.0.0.SNAPSHOT
33 ACTIVE cxf-dosgi-ri-dsw-cxf_1.0.0.SNAPSHOT
34 ACTIVE org.ops4j.pax.web.extender.war_0.4.0
35 ACTIVE AuctionIntf_1.0.0
36 ACTIVE AuctionImpl_1.0.0
37 ACTIVE AuctionSite_1.0.0



Open the webapp at http://localhost:8080/auctionsite/AuctionSite.html. This kicks off the AJAX app in your browser. When you hit the reload button, it makes SOAP/HTTP invocations on the server to list all the items and get their details. The SOAP messages it sends map to the AuctionService.getItemIDs() and AuctionService.getItems() APIs. (At the bottom of the screen you can see the SOAP message as it comes back from the server.)
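To make that mapping concrete, here is a hypothetical sketch of what the AuctionService interface and a trivial in-memory implementation might look like. The method names follow the post, but the parameter names, the Item class, and all implementation details are assumptions:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class Main {
    // Sketch of the remoted service interface; getItemIDs()/getItems() back
    // the reload button, listNewItem() backs the "list a new item" popup.
    interface AuctionService {
        int[] getItemIDs();
        Item[] getItems(int... ids);
        void listNewItem(String description, double price, Date auctionEnds);
    }

    // Hypothetical value object for an auction item
    static class Item {
        final int id;
        final String description;
        final double price;
        final Date auctionEnds;
        Item(int id, String description, double price, Date auctionEnds) {
            this.id = id;
            this.description = description;
            this.price = price;
            this.auctionEnds = auctionEnds;
        }
    }

    // Trivial in-memory implementation, for illustration only
    static class AuctionImpl implements AuctionService {
        private final List<Item> items = new ArrayList<>();

        public int[] getItemIDs() {
            int[] ids = new int[items.size()];
            for (int i = 0; i < ids.length; i++)
                ids[i] = items.get(i).id;
            return ids;
        }

        public Item[] getItems(int... ids) {
            List<Item> result = new ArrayList<>();
            for (Item it : items)
                for (int id : ids)
                    if (it.id == id)
                        result.add(it);
            return result.toArray(new Item[0]);
        }

        public void listNewItem(String description, double price, Date end) {
            items.add(new Item(items.size() + 1, description, price, end));
        }
    }

    public static void main(String[] args) {
        AuctionService svc = new AuctionImpl();
        svc.listNewItem("Joke", 10.00, new Date());
        System.out.println("items: " + svc.getItemIDs().length);
        System.out.println("first: " + svc.getItems(1)[0].description);
    }
}
```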


Let's list a new item...


After filling in the JavaScript popup, hit OK.
The browser then sends the following SOAP message to the server (the arg0/arg1/arg2 element names are the JAX-WS defaults, used when no explicit parameter names are specified):
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
 <soap:Body>
  <ns1:listNewItem xmlns:ns1="http://auction.coderthoughts.org/">
   <ns1:arg0>Joke</ns1:arg0>
   <ns1:arg1>1000</ns1:arg1>
   <ns1:arg2>2009-02-09T15:48:22-00:00</ns1:arg2>
  </ns1:listNewItem>
 </soap:Body>
</soap:Envelope>


Once the remote invocation has completed, the list of items is updated again, which is the same as hitting the reload button.


I can still access the web service from the remote OSGi client that I developed in the previous post; the same remote OSGi service can obviously be shared among many consumers, and when I run the AuctionClient bundle I also see my newly listed item.
Before I can do that, though, I need to update the remote-services.xml file in the consumer bundle with the new location of the Web Service (when using Discovery this is not needed, BTW):
<service-descriptions xmlns="http://www.osgi.org/xmlns/sd/v1.0.0">
 <service-description>
  <provide interface="org.coderthoughts.auction.AuctionService"/>
  <property name="osgi.remote.interfaces">*</property>
  <property name="osgi.remote.configuration.type">pojo</property>
  <property name="osgi.remote.configuration.pojo.address">
   http://localhost:9000/auction
  </property>
 </service-description>
</service-descriptions>
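Just to illustrate the layout of this descriptor, here's a standalone snippet that parses such a document and prints its property values. This is purely a demonstration of the XML structure, not how CXF-DOSGi reads it internally:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class Main {
    // Namespace of the remote-services.xml descriptor
    static final String NS = "http://www.osgi.org/xmlns/sd/v1.0.0";

    public static void main(String[] args) throws Exception {
        String xml =
            "<service-descriptions xmlns='" + NS + "'>" +
            " <service-description>" +
            "  <provide interface='org.coderthoughts.auction.AuctionService'/>" +
            "  <property name='osgi.remote.interfaces'>*</property>" +
            "  <property name='osgi.remote.configuration.type'>pojo</property>" +
            "  <property name='osgi.remote.configuration.pojo.address'>" +
            "   http://localhost:9000/auction" +
            "  </property>" +
            " </service-description>" +
            "</service-descriptions>";

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        // Print every <property> as "name = value"
        NodeList props = doc.getElementsByTagNameNS(NS, "property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            System.out.println(p.getAttribute("name") + " = "
                + p.getTextContent().trim());
        }
    }
}
```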


Now run the AuctionClient bundle:
Items available for auction:
 1:                 Doll $  7.99 Sat Dec 05 08:00:00 GMT 2009
10:                 Joke $ 10.00 Mon Feb 09 15:48:12 GMT 2009
 2: Empty sheet of paper $ 12.00 Tue May 11 21:10:00 IST 2010
 3:            Bike shed $126.75 Tue Sep 15 08:12:00 IST 2009
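The column alignment in that output can be produced with a simple format string; here's a sketch for one row (the widths and exact pattern are assumptions about the client's code, not taken from it):

```java
import java.util.Locale;

public class Main {
    public static void main(String[] args) {
        // Assumed pattern: id right-aligned in 2 columns, description in 20,
        // price as %6.2f; Locale.ROOT keeps the decimal point portable.
        System.out.println(String.format(Locale.ROOT, "%2d: %20s $%6.2f %s",
            10, "Joke", 10.00, "Mon Feb 09 15:48:12 GMT 2009"));
    }
}
```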

You can get all the code in this posting under the Apache license from the Google Code project. It's all in SVN at http://coderthoughts.googlecode.com/svn/trunk/gwt_auctionsite and http://coderthoughts.googlecode.com/svn/trunk/osgi_auctionsite .
Since it's all Eclipse projects, the easiest way to get it is to check the projects out of SVN directly in Eclipse. That will give you a workspace that looks like this: