Exchanging digital signatures with Python and Java


I’ve been testing my Content-Signature framework discussed earlier and wanted to see if I could exchange digital signatures generated and verified by both Python and Java code.  After a bit of research, here’s what I’ve found so far on how to do this.

Generate keys with openssl

The first step is to generate a private key and a certificate using the openssl program.  This is a common utility; if it isn’t already on your computer, a quick search will turn up installation instructions for most platforms.  It came preinstalled on my MacBook Pro (I believe with the Darwin tools).  You’ll have to generate the keys in both .pem format (for Python) and .der format (for Java).

# generate pems
$ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert-private.pem -out mycert.pem

# create private key .der file
$ openssl pkcs8 -topk8 -nocrypt -in mycert-private.pem -out mycert-private.der -outform der

# create certificate .der file
$ openssl x509 -in mycert.pem -out mycert.der -outform der

From this you should have two pairs of files: mycert-private.pem and mycert-private.der (the private key), and mycert.pem and mycert.der (the certificate).

Import private key and sign in Java

Here’s a nice tool for loading the generated .der files into a Java KeyStore.  I’ve extracted some of the code so that you can see the whole manual, programmatic process of importing a private key and signing a message.

import org.jboss.resteasy.util.Hex;
import org.junit.Test;

import java.io.*;
import java.security.*;
import java.security.cert.*;
import java.security.spec.PKCS8EncodedKeySpec;

public class ExampleSignTest
{
   @Test
   public void testDerFile() throws Exception
   {
      // import private key
      InputStream is = Thread.currentThread().getContextClassLoader().getResourceAsStream("mycert-private.der");
      DataInputStream dis = new DataInputStream(is);
      byte[] derFile = new byte[dis.available()]; // available() is fine here: the resource is a small local file
      dis.readFully(derFile);
      KeyFactory kf = KeyFactory.getInstance("RSA");
      PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(derFile);
      PrivateKey privateKey = kf.generatePrivate(spec);

      Signature instance = Signature.getInstance("SHA256withRSA");
      instance.initSign(privateKey);
      instance.update("from-java".getBytes());
      byte[] signatureBytes = instance.sign();
      System.out.println("Signature: ");
      System.out.println(Hex.encodeHex(signatureBytes));
   }
}

The code prints out the signature in hex using a simple routine from Resteasy.

Import certificate and verify in Java

Here’s an example of verifying:

@Test
public void testVerifyDerFile() throws Exception
{
   // import certificate and extract the public key
   CertificateFactory cf = CertificateFactory.getInstance("X.509");
   InputStream is = Thread.currentThread().getContextClassLoader().getResourceAsStream("mycert.der");
   Certificate cert = cf.generateCertificate(is);
   PublicKey publicKey = cert.getPublicKey();

   // signature produced by the Python code below, signing "from-python"
   String pythonHexSignature = "4e3014a3a0ff296c07927e846221ee68f70e0b06ed54a1fe974944ea17b836b92279635a7e0bb6b8923df94f4023de95ef07fa76506888897a88ac440eb185b6b117f4c906cba989ffb4e1f81c6677db12e7dc22d51d9369df92165709817792dc3e647dae6b70a0d84c386b0228c2442c9a6a0107381aac8e4cb4c367435d52";

   Signature verify = Signature.getInstance("SHA256withRSA");
   verify.initVerify(publicKey);
   verify.update("from-python".getBytes());
   Assert.assertTrue(verify.verify(Hex.decodeHex(pythonHexSignature)));
}

The code hardcodes a signature that was produced by signing the “from-python” string.

Import private key and sign in Python

The Python code requires the M2Crypto library.  I tried PyCrypto, but I couldn’t get it to work.  My code was tested on a MacBook Pro with Python 2.6.1 and M2Crypto version 0.21.1.  Also notice that the .pem files are used instead of .der; I couldn’t figure out whether M2Crypto fully supports .der, so I just used the .pems.

from M2Crypto import EVP, RSA, X509
import binascii

key = EVP.load_key("mycert-private.pem")
key.reset_context(md='sha256')
key.sign_init()
key.sign_update("from-python")
signature = key.sign_final()
print "Signature:"
print binascii.b2a_hex(signature)

Importing certificate and verifying in Python

Here’s the verification:

from M2Crypto import EVP, RSA, X509
import binascii

# signature produced by the Java code above, signing "from-java"
hexSignature = ("0a11ab4ebcd2b0803d6e280a1d45b5b5d5d53688949f5a4f2d6436f15df3b10633c79760b9fe3b64eb9d84371c35e8b7d946052dfdd99ebb5cf7f3092762e1a91b261117e6675f2d28afe2ec4"
                "d90abfe3559a1259d2c66f3dc42ca3bfce7498705833445170bd8c293d60448b6c599abfe2d06882d3fff9ef887379eb7da3fe0")
java_sig = binascii.a2b_hex(hexSignature)

cert = X509.load_cert("mycert.pem")
pubkey = cert.get_pubkey()
pubkey.reset_context(md="sha256")
pubkey.verify_init()
pubkey.verify_update("from-java")
assert pubkey.verify_final(java_sig) == 1

Hope you enjoy.  If you know a better way to set up the certs and key files, let me know.  Using openssl was the best way I could find.

Adding objects that are @Context injectable


One thing I’ve forgotten to document thoroughly is how to add objects that are injectable via the @javax.ws.rs.core.Context annotation.  Usually you’ll want to use CDI or Spring to inject your dependencies or configuration into a provider or a service, but you may have situations where you cannot depend on these facilities being available to you.

import org.jboss.resteasy.core.Dispatcher;

import javax.ws.rs.core.Application;
import javax.ws.rs.core.Context;

public class MyApplication extends Application
{
   public MyApplication(@Context Dispatcher dispatcher)
   {
      MyClass myInstance = new MyClass();
      dispatcher.getDefaultContextObjects().put(MyClass.class, myInstance);
   }

}

The myInstance variable is now available for injection via the @Context annotation.

Multiple uses for Content-Signature


After describing Content-Signature in my last blog, it was picked up by InfoQ.  I also had a great private email exchange with Jean-Jacques Dubray in which we discussed various use cases for signature protocols.  Firstly, before I dive in, a disclaimer: I am not a security expert and don’t pretend to be one.  While I have used various authentication and authorization protocols over the years, I have not been a designer or implementer of them.  So, here are some use cases for Content-Signature:

The NULL Use Case

I think one of the most important aspects of something like Content-Signature is that this information can be ignored by any party in the request/response chain.  The signature becomes just another thing that describes the entity being passed around.  Why is this important?  I’ll give a simple example first, then later in the blog a more complex one.

Consider a simple blog.  Let’s say I posted some really stupid comment on somebody’s blog.  It’s actually very easy to impersonate somebody in the comments section of anyone’s blog.  So, if a reader read my stupid comment and thought “Did Bill Burke really say that?!?”, how would they know whether I really posted it or not?  While not that practical in reality, what I could do is sign each comment I made to a blog.  That way, a reader could verify my signature if they so desired.

What’s interesting about this use case is that the blog itself doesn’t care about the signature.  Nor do most comment readers care about the signature.  Only a specific party cares about the signature.  With a header-based approach like Content-Signature, renderers can completely ignore the signature applied to the comment if they do not care about it or don’t understand how to process it.  This is why something like Content-Signature is better than multipart/signed, IMO.  Another interesting thing is that if the blog moved, let’s say from Blogspot to WordPress, the import could take the comment signature along with it.  Even though the comment is served under a different URL, the signature is still valid.

Authentication, Authorization, and Message Integrity All In One

Another use for Content-Signature is that it could provide authentication, authorization, and message integrity, all at the same time.  When a server receives a request signed with Content-Signature, it can look into the metadata of Content-Signature to determine the signer (this assumes an asymmetric key-pair solution), look up the public key of the signer in a private registry, and verify the signature with this public key.  If verification succeeds, the server knows a) that it was the signer who sent the message, and b) that the integrity of the message is intact.  Now that the identity of the signer is known and valid, the server can determine internally whether the signer is authorized to make the request.  Because Content-Signature is flexible and allows you to add as much metadata as you wish to the signature, additional information like the request URL, a timestamp, a nonce, whatever, could be added to create a more secure process.
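
The flow above can be sketched as follows.  This is a minimal illustration, not the actual protocol: HMAC-SHA256 stands in for the asymmetric signature algorithm, and the key registry and authorization table are hypothetical names of mine.

```python
import hashlib
import hmac

# Hypothetical registries; in the asymmetric case KEY_REGISTRY would map
# signer identities to public keys instead of shared secrets.
KEY_REGISTRY = {'bill': b'bills-key'}
AUTHORIZED = {'bill': {'POST /orders'}}

def authenticate_and_authorize(signer, hex_signature, body, action):
    key = KEY_REGISTRY.get(signer)  # look up the signer's key
    if key is None:
        return False
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, hex_signature):
        return False  # identity or message integrity check failed
    # identity is now established; authorization is a local policy decision
    return action in AUTHORIZED.get(signer, set())
```

A single successful verification thus establishes both who sent the message and that it was not tampered with; whether the signer may perform the action is then decided internally.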

Approval Process

Consider a vacation request application.  An employee creates a vacation request form, signs it by adding a Content-Signature header, and posts it to his manager.  The manager reads the request form, signs it by appending his own signature to the Content-Signature header, and forwards the document with the new Content-Signature header to HR.  HR knows both parties approved the document and processes the vacation.

Workflow

Consider a simple order entry workflow where each phase of order fulfillment needs to happen in a specific order.  Each phase also needs to know that the previous phases really happened, i.e. don’t ship the product until it has been paid for.  It could work like this:

  1. The customer posts an order to the order-entry system, signing it with his information.
  2. Order entry verifies the signature.  It also adds an additional signature, “order-entry”, calculated over customer-sig + message body.
  3. Billing gets the order next.  It verifies the customer signature and the “order-entry” signature.  Because “order-entry” was created from the customer-sig and message body, the billing system knows that the order is valid and that this exact order was seen by the order-entry system.  The billing system then adds its own signature over customer-sig + message body.
  4. Shipping gets the order next.  It verifies the customer and billing signatures and ships the product.
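
The chain above can be sketched like this.  HMAC-SHA256 stands in for each party’s real signature, and the keys are hypothetical; the point is only that each downstream signature covers customer-sig + message body, so later phases can prove the exact order passed through the earlier ones.

```python
import hashlib
import hmac

def sign(key, data):
    # stand-in for a real signature over the given bytes
    return hmac.new(key, data, hashlib.sha256).hexdigest()

body = b'order: 1 widget'

# step 1: the customer signs the order
customer_sig = sign(b'customer-key', body)

# step 2: order entry signs customer-sig + body, vouching for the exact order
order_entry_sig = sign(b'order-entry-key', customer_sig.encode() + body)

# step 3: billing verifies, then signs customer-sig + body as well
billing_sig = sign(b'billing-key', customer_sig.encode() + body)

# step 4: shipping recomputes and checks each signature before shipping
assert order_entry_sig == sign(b'order-entry-key', customer_sig.encode() + body)
assert billing_sig == sign(b'billing-key', customer_sig.encode() + body)
```

(With real asymmetric signatures, shipping would verify against each party’s public key rather than recomputing with shared secrets.)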

Ignorant Gateways and Authorization of Actions

Another use case that JJ talked to me about is the ignorant gateway scenario.  Imagine an application that would listen to your Twitter messages and forward them, via SMS, to your friends’ mobile phones.  You, rather than the application forwarding the tweets, would automatically be billed.  In this case, Twitter is the ignorant, pass-through gateway.  It knows nothing about the whole authorization process.  In an imaginary world, this is how it could work:

  1. You post a Twitter message.  You sign (“AT&T auth code” + timestamp + message-id + message body) and attach the signature to the message.
  2. The app is listening to Twitter.  It forwards the message as an SMS and sends along the signature too.
  3. AT&T gets the SMS and looks at the signature, verifying it came from the user.  Because the “AT&T auth code” is part of the signature, AT&T knows that “Bill Burke” sent the SMS.  Since the timestamp and message-id are part of the signature, AT&T can check whether the SMS is a duplicate.  If all of these checks pass, then AT&T can bill “Bill Burke” instead of the app for the SMS.
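
The carrier-side check in step 3 could be sketched like this.  Again, HMAC-SHA256 stands in for the subscriber’s real signature and the auth codes and key are hypothetical; the duplicate check works because the message-id is covered by the signature and so cannot be swapped out.

```python
import hashlib
import hmac

def sms_signature(key, auth_code, timestamp, message_id, body):
    # concatenate the covered pieces in a fixed order, then sign
    base = auth_code + timestamp + message_id + body
    return hmac.new(key, base.encode(), hashlib.sha256).hexdigest()

seen_message_ids = set()

def accept_sms(key, auth_code, timestamp, message_id, body, signature):
    expected = sms_signature(key, auth_code, timestamp, message_id, body)
    if not hmac.compare_digest(expected, signature):
        return False  # not signed by the subscriber: don't bill him
    if message_id in seen_message_ids:
        return False  # duplicate SMS: don't bill twice
    seen_message_ids.add(message_id)
    return True
```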

This is also an example of authorizing a specific action via a signature.  I don’t think you need separate signatures for each action you want to authorize; it can just be a matter of concatenating multiple auth codes within the same signature.  The hole in this approach is that hostile apps could trick users into adding an authorization to their signatures, i.e. “pay-me-$20-from-your-bank-account”.  This is why it is important for providers to be involved in authorization code creation.

Complex Workflow

A complex workflow could combine some or all of these use cases together with the coordination of many different applications.

Conclusion

What it boils down to is that, IMO, something like Content-Signature gives you a lot of flexibility when defining a distributed interface.  It allows you to combine metadata about a representation with the signing of that representation.  Because it is a header, it can be ignored if desired.  Since it is a set of simple name-value pairs, it is very easy to create and parse.  (Well, depending on your platform, actually signing the message might be difficult, but, hey…)  Personally, I’m very interested in applying signatures to the RESTful interface we’re creating for our workflow engine.  Signatures just seem like a simpler way to manage multi-tier authentication and authorization.  Who knows, maybe I’m wrong here…

Proposed HTTP digital signature protocol and API


4/5/11: After a lot of feedback from the IETF HTTP WG, I found some work is already being done in this area in the DOSETA specification.  I’ll be retiring Content-Signature for the time being.

3/23/11: I’ve been encouraged to bring this to the IETF and have submitted an Internet-Draft on the subject.  Please go there to see further iterations on this specification.


Recently a RESTEasy user asked for the ability to digitally sign requests and responses.  They were pushing HTTP requests through one or more intermediaries and wanted to make sure that the integrity of the message was maintained as it hopped around the network.  They needed digital signatures.

There’s always been multipart/signed, but I never really liked the data format.  One, what if some clients support the format and some don’t?  Two, signature data really seems to belong in the HTTP header rather than enclosed within an envelope.  I found a nice blog that shared these concerns and added a bunch more to the conversation.  So, not finding a match via a Google search, I decided to define our own protocol.  (FYI, OAuth does have signatures as part of its protocol, but I wanted something that could be orthogonal to authentication, as the client and server may not be using OAuth for authentication.)

Protocol Goals

The protocol goals and features we wanted to have were:

  • Metadata defining exactly how the message was signed
  • Ability to specify application metadata about the signature and have that metadata be a part of the signature
  • Simplicity of headers.  Have all signature information be stored within HTTP request or response headers.  This makes it easier for frameworks and client and server code in general to handle signature verification.
  • Expiration.  We wanted the option to expire signatures.
  • Signer information.  We wanted the ability to know who signed the message.  This would allow receivers to look up verification keys within internal registries.
  • Ability to ignore the signature if you don’t care about that information or if the client or server doesn’t know how to process it.
  • Ability to forward representation/message to multiple endpoints/receivers
  • Allow multiple different URLs to publish the same signed message
  • Although it could be used as an authorization mechanism, it is not meant to replace existing OAuth or Digest protocols that ensure message integrity

The Content-Signature Header

The Content-Signature header contains all signature information.  It is an entity header that is transmitted along with a request or response.  It is a semicolon ‘;’ delimited list of name value pairs.  Values must be enclosed within quotes if they use any delimiting character within their name or value.  These attributes are metadata describing the signature as well as the signature itself.  Also, the Content-Signature may have more than one value, in other words, more than one signature may be included with the Content-Signature header.  Multiple signatures are delimited by the ‘,’ character.

These are the core attributes of the Content-Signature header:

signature – (required) This is the hex encoded signature of the message.  Hex encoding was chosen over Base64 because Base64 inserts cr/lf characters after 76 bytes which screws up HTTP header parsing.

values – (optional) This is a colon “:” delimited list of attributes included within the Content-Signature header that are used to calculate the signature.  The order of these listed attributes defines how they are combined to calculate the signature.  The message body is always last when calculating the signature.  If this attribute is omitted, then no Content-Signature attribute is used within the calculation of the signature.

headers – (optional) A colon “:” delimited list of HTTP request or response headers that were included within the signature calculation.  The order of these listed headers defines how they are combined to calculate the signature.

algorithm – (optional) The algorithm used to sign the message.  The allowable values here are the same as those allowed by java.security.Signature.getInstance().  If there is a W3C RFC registry of signing algorithms we could use those instead.

signer – (optional) This is the identity of the signer of the message.  It allows the receiver to look up verification keys within an internal registry.  It also allows applications to know who sent the message.

id – (optional) This is the identity of the signature.  It could be used to describe the purpose of a particular signature included with the Content-Signature header.

timestamp – (optional) The time and date the message was signed.  This gives the receiver the option to refuse old signed messages.  The format of this timestamp is the Date format described in RFC 2616.

expiration – (optional) The time and date the message should be expired.  This gives the sender the option to set an expiration date on the message.  The format of this attribute is the Date format described in RFC 2616.

signature-refs – This is a ‘:’ delimited list referencing other signatures by their id attribute within the Content-Signature header.  This means that these referenced signature values will be included within the calculation of the current signature.  The hex-encoded value of each referenced signature will be used.

Other attributes may be added later depending on user requirements and interest.   URI and query parameters were specifically left out of the protocol as integrity between two parties should be handled by HTTPS/SSL, the Digest authentication scheme discussed in RFC 2617, or OAuth.  Remember, the point of writing this protocol is so that representations can be signed and exchanged between multiple parties on multiple machines and URLs.
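
One of the design goals is that this format is trivial to parse.  As a sketch (the function name is mine, not part of the proposal), here is a small Python parser that splits signatures on ‘,’ and attributes on ‘;’ while honoring quoted values:

```python
import re

def parse_content_signature(header):
    """Parse a Content-Signature header value into a list of attribute dicts,
    one dict per signature."""
    signatures = []
    # split on ',' / ';' only when an even number of quotes lies ahead,
    # i.e. when the delimiter is outside a quoted value
    outside_quotes = r'(?=(?:[^"]*"[^"]*")*[^"]*$)'
    for sig_text in re.split(',' + outside_quotes, header):
        attrs = {}
        for pair in re.split(';' + outside_quotes, sig_text):
            pair = pair.strip()
            if not pair:
                continue
            name, _, value = pair.partition('=')
            attrs[name.strip()] = value.strip().strip('"')
        signatures.append(attrs)
    return signatures
```

For example, `parse_content_signature('id=husband;signature=0001,id=wife;signature=0002')` yields two dicts, and a quoted expiration date containing commas stays in one piece.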

Signing and Verifying a message

The signer of a message decides which Content-Signature attributes and HTTP headers it wants to include within the full signature.  The signature is calculated by signing the concatenation of

attribute-values + header-values + signature-refs + message-body

Attribute-values pertain to the list of attribute names defined within the ‘values’ attribute of the Content-Signature element.  Header-values pertain to the list of header names defined within the ‘headers’ attribute of the Content-Signature element.  Signature-refs pertains to referenced signatures that also appear in the Content-Signature header.  Attributes must always precede headers.  Headers must precede signature refs.  The message-body always comes last.  For example, if the signer decides to include the signer and expiration attributes and the Content-Type and Date headers with a text/plain message of “hello world”, the base for the signature would look like this:

billSunday, 06-Nov-11 08:49:37 GMTtext/plainFriday, 11-Feb-11 07:49:37 GMThello world

The Content-Signature header transmitted would look like:

Content-Signature: values=signer:expiration;
                   headers=Content-Type:Date;
                   signer=bill;
                   expiration="Sunday, 06-Nov-11 08:49:37 GMT";
                   signature=0f341265ffa32211333f6ab2d1

To verify a signature, the verifier would recreate the signature string by concatenating the attributes specified in the “values” attribute, HTTP headers defined in “headers” attribute, and finally the message body. Then apply the verification algorithm.
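
Here’s the base-string construction from the example above as a Python sketch (function and variable names are mine, not part of the proposal):

```python
def signature_base(attributes, values, header_names, http_headers, body):
    """Concatenate attribute values (in 'values' order), then header values
    (in 'headers' order), then the message body."""
    parts = [attributes[name] for name in values]
    parts += [http_headers[name] for name in header_names]
    parts.append(body)
    return ''.join(parts)

attributes = {'signer': 'bill',
              'expiration': 'Sunday, 06-Nov-11 08:49:37 GMT'}
http_headers = {'Content-Type': 'text/plain',
                'Date': 'Friday, 11-Feb-11 07:49:37 GMT'}
base = signature_base(attributes, ['signer', 'expiration'],
                      ['Content-Type', 'Date'], http_headers, 'hello world')
# base == 'billSunday, 06-Nov-11 08:49:37 GMTtext/plainFriday, 11-Feb-11 07:49:37 GMThello world'
```

The verifier builds the identical string from the received header and message, then runs the verification algorithm over it.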

If there is an attribute declared within the “values” attribute that isn’t specified in the Content-Signature header, it is assumed to be a secret held between the signer and verifier.  How the value of this attribute is determined is left undefined by the protocol.

If there is a header declared within the “headers” attribute that doesn’t exist, the server may choose to abort if it cannot figure out how to reproduce this value.

Here’s an example of multiple signatures.  Let’s say the Content-Signature header is initially set up like this with a message body of “hello”:

Content-Signature: id=husband;
                   signature=0001,
                   id=wife;
                   signature=0002

Here, we have two initial signatures signed by two different entities, husband and wife (found by their id attribute).  We want to define a third signature, marriage, that includes those signatures.

Content-Signature: id=husband;
                   signature=0001,
                   id=wife;
                   signature=0002,
                   id=marriage;
                   signature-refs=husband:wife;
                   signature=0003

The marriage signature would be calculated by signing this string:

00010002hello

Which is:

husband’s signature + wife’s signature + message body

If there is a signature reference declared within the signature-refs attribute that doesn’t exist, the server may choose to abort if it cannot figure out how to reproduce this value.
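
The marriage calculation can be sketched the same way (the helper name is mine):

```python
def refs_base(signatures, refs, body):
    """Concatenate the hex values of the referenced signatures, in the order
    listed in signature-refs, followed by the message body."""
    by_id = {sig['id']: sig for sig in signatures}
    return ''.join(by_id[ref]['signature'] for ref in refs) + body

sigs = [{'id': 'husband', 'signature': '0001'},
        {'id': 'wife', 'signature': '0002'}]
base = refs_base(sigs, ['husband', 'wife'], 'hello')
# base == '00010002hello'
```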

Other similar protocols out there?

I only spent about an hour looking to see if there were similar protocols out there.  If somebody knows, let me know.  It would be cool to get feedback on this proposal as well.

Edited:

People in the comments section of this entry keep mentioning two-legged OAuth, but from what I can tell it describes everything within the Authorization header.  This is something we don’t want, as we want to be able to use traditional authentication mechanisms so that signing can be supported on servers or clients that don’t understand OAuth (or don’t want to use it).

Should REST be Organic?


Since our kids were born years ago, my wife has generally been pretty Organic crazy.  The Organic food movement was created so that we don’t “poison” our bodies with food grown and treated with harmful pesticides and chemicals, so that we don’t introduce dangerous genetically modified seeds or growth hormones, and, finally, to contribute to society as a whole by promoting sustainable farming that doesn’t deplete or damage the environment.

From Movement to Certification

Organic food and farming started out as a movement, but quickly turned into a branding and certification effort.  Many farmers and companies found that following strict organic principles was expensive and added to the cost of the goods they produced.  While many customers were willing to pay the extra cost involved to avoid “poisoning” their bodies, still many others are more interested in saving money now and worrying about the long term consequences to their bodies and environment later down the road.  So, some companies would avoid pesticides, but still use genetically modified seeds, and call their products organic.  Or milk companies wouldn’t use growth hormones, but still use non-organic feeds for their livestock, and still call their products organic.

To fight this pollution and misrepresentation of organic principles a branding and certification effort was introduced so that each product on the market could be officially approved as organic or not.  Organic food customers would know what to expect from a product by seeing this brand on their packaging.  If you wanted to sell organic products, you’d have to be officially certified and inspected by a third party.

Is REST following Organic’s path?

Roy Fielding, the father of REST, has the same expectations that organic food consumers have.  When something is deemed RESTful, he has certain expectations that have to be 100% fulfilled.  It’s very understandable: Roy deals with Web standards; they have to scale; they’re going to be used by millions.  The stability and usability of the Web are too important to propagate flawed protocols.  Roy says in his PhD thesis:

While mismatches [to REST] cannot be avoided in general, it is possible to identify them before they become standardized.

So, maybe Roy should trademark REST, brand REST, and promote the creation of an official organization that can bless an application or API as RESTful much like we have an Organically Certified label.  Let’s do a tiny thought exercise on this…

One of the consequences of heading down this route is that REST evangelists will lose the Web as their prime example of REST in action.  While the HTML and HTTP standards and specifications will remain a great example of REST in action, most of the applications on the Web don’t follow the strict criteria of REST (a static web page is not an application).  As Roy states, it’s hard to avoid mismatches in the RESTful style, even if you try.  This is especially true if you’re building a distributed interface for a legacy system.  The rest of us would lose a lot of language and vocabulary to describe our apps if REST were branded/certified.

Well… you may be laughing (or fuming) about the idea of branding and certifying REST.  The thing is, though, often when you interact with the REST community you get condescending comments like:

Then do not call it REST. It is that simple. And saying that some academic has not written a line of code detracts from what you are saying. REST has a clear enough definition. XXXX  (insert whatever here) is not RESTful, so don’t call it so!!

You get this sort of interaction when a thought, API, or interface breaks or bends a RESTful principle AND you still want to use REST to name and/or describe your thing.  Reactions like this are very akin to treating REST as a certified brand rather than a style.

Expectations are just not the same

The thing is, though, the expectations of IANA, the W3C, and people like Roy Fielding are very different from those of the average application developer.  There are really two main reasons why a developer is initially attracted to REST:

  • Simplicity.  You can use a RESTful interface without having to define a WSDL document, generate stubs, and download (or pay for) some complex middleware stack or library just to implement a “Hello World” example.
  • “Lightweight” interoperability.  Because HTTP is everywhere, if you write a RESTful interface, it should be usable in most languages and platforms without much effort (i.e. iPhone, Android, Python, Ruby, Java, Perl, etc.)

Sure, there are a lot of other goodies you get when you learn more about REST architectural guidelines and start applying them, but these two reasons, IMO, are why app developers initially look into REST.  The W3C, IANA, etc. have different expectations, though, as they are defining standards that the Web will be built upon.  A screwup there can have much more far-reaching effects than messing up an application that can usually be refactored over time.  The domain REST is being applied to should have a direct correlation with how critical we should be of the interface.

Focus on benefits and consequences instead of definition

While strictly following REST works for Web standards, it’s just not always feasible to follow it religiously in everyday applications.  After all, REST is a style and a set of guidelines, not a set of laws or, gasp, a branded, certified trademark.  People should be allowed to use REST to name and describe their work without harassment.  Instead, critics should focus on the consequences of a specific app or API not following a specific RESTful constraint, and the benefits if they did modify the interface to follow the guideline.

I think we’ve all had negative experiences with trying to follow an architectural principle (REST, OOP, or whatever) religiously.  I think most of us realize that focusing on delivery to market through multiple iterations and user requirements and feedback is much more important than whether we religiously follow a specific guideline.  The easy answer is “Then don’t call it REST!”, but we’d have a very limited vocabulary in software engineering if this mantra was followed with every new architectural style that was created.

Freetards vs. Apple


Recently I’ve complained a bit about the righteousness of Apache and, at times, its negative effect on open source and Java.  Well, recently the GNU Freetards got into the act against Apple.  It seems a piece of GPL-licensed software, VLC, was published on iTunes, and one of the original developers of VLC, Denis-Courmont, sued Apple to remove it, stating that the GPL conflicted with the DRM licensing required by anything distributed through the App Store.  It seems that the DRM licensing forbids reverse engineering of downloaded apps, while the GPL allows (and encourages) it.

It’s the spirit, not the fine print, that matters

The spirit of the GPL, IMO, is that once something is open source, it stays open source, derivative works and all.  If you want to link it with your software, then your software must also become open source.  Denis-Courmont’s beef with Apple is just semantics.  You don’t need to be able to reverse engineer VLC binaries, because the source is available.  You can fork VLC into your own app and post it on the Apple App-Store with no problem.  I’ve also heard that the DRM forbids more than 5 copies of your downloaded binary to be distributed on various devices, which also breaks the GPL.  So what?  You can always download, for free, another duplicate binary.

There are consequences to fundamentalism

The first and obvious consequence of these actions is that iPhone users can’t get VLC anymore, unless of course they have the technical know-how to jailbreak their phones (which the vast majority don’t).  The second is that all the hard work of the developers who created the iPhone app is thrown out the door.  I don’t know about you, but I’d be pretty pissed.  Third, and most important: does this screw the rest of us developers who want to distribute GPL/LGPL software for iPhone/iPad?  This pisses me off because I prefer the LGPL, and more importantly the spirit of that license, not the fine print.

I think a better approach would have been to be less confrontational and more cooperative.  For instance, I bet Red Hat and other large companies, if you asked, would be more than willing to officially ask Apple to change their policies to be more GPL friendly.  There are a lot of different ways to pressure Apple, while at the same time, not screwing the rest of us who want to distribute GPL/LGPL based software on iTunes.

This general approach to life that ideals and principle should always trump compromise gets us things like crappy health care bills, deadlocked legislatures, poor union contracts across various industries, and probably a lockout for the NFL season next year.  Hopefully this unwavering/uncompromising idealism that seems to permeate our society as of late is just a fad.  One can only dream…

Hopefully Apple does the right thing

Finally, hopefully Apple revises its terms of use to be GPL-, LGPL-, and open-source-license friendly.  But, IMO, iTunes and the App Store are their baby.  It’s a privilege, not a right, to use it.  (Java falls into the same boat, unfortunately.)  Sadly, we can’t expect this change to happen.  In the meantime, we probably have to use a different license to distribute open source software on iTunes.  Thanks, Denis-Courmont…

Remember why we don’t have Java 7

22 Comments

This is a bit of a reiteration of my previous blog, but, I wanted to be a bit more clear:

Ask yourself this question…Why do we not have a Java 7 release?  Mainly it is because of Apache (not the developers, but the bureaucrats) filibustering the Java 7 vote in the JCP Executive Committee, all because they didn't want a Field Of Use restriction for Harmony.   They felt entitled to the Java brand just because they are Apache.  For those of you who don't know, the Field Of Use restriction (IIRC) meant that Harmony couldn't be used within a mobile environment.  IMO, I'd much rather have had a Java 7 release than lift the FOU restriction just to make one Apache open source project happy.  I'm upset with my company for supporting this fiasco.

Another side point:

The “I’m leaving the JCP because it isn’t working” play that seems to be popular at the moment is, IMO, a big slap in the face to those of us who have put a lot of time, effort, engineering, and dollars into improving the Java platform, specifically on the EE side of things.  Take the Apache CXF project, which has created a top-notch SOAP implementation, and of course the Tomcat effort.  At Red Hat, we’ve put a huge amount of engineering time into EJB, JPA, JSF, CDI, JAX-RS, and Validation.  There are many other companies, individuals, and open source projects that have made similar contributions.  Those of us who cared enough about the platform (and Sun and Oracle are both in this camp) have improved and evolved Java EE so that it is a viable platform into the next decade, despite the best efforts of the “Party of NO” coalition of non-contributors on the EC and on the Java EE JSR.

IMO, if you are unwilling to give up something to obtain the Java brand, if you're creating competing technologies that you have no intention of bringing back to the JCP to be standardized, if you or your company are not consumers or implementors of JCP specifications, then you probably should leave the JCP.  In fact, I encourage it, so that the rest of us have fewer obstacles in moving the platform forward.

And we care why?

15 Comments

So, Apache leaves the JCP.  Surprise surprise.  Their biggest contribution in the past few years has been to filibuster the Java 7 JSR, and they are the primary reason why there is no final version of Java 7 (or 8) today.  I'm all for non-contributing members leaving the JCP.  Less noise, and more people who actually care about the language and the EE platform working to improve it.

JCP is Salvageable

14 Comments

In the wake of Doug Lea leaving the JCP, I just want to say that I think the JCP is salvageable.  This idea that the JCP is an unworkable entity is plain and utter myth.  A myth propagated by those that want to see it fail (i.e. SpringSource), or those that want to create their own, controlled, specification efforts (IBM), or those that are more interested in doing their own thing than collaborating with others (i.e. Google and SpringSource).  Don't believe me?  Well, let's discuss it a little more.

First case in point is JPA.  In J2EE 1.4 and earlier we had the crap that was CMP and its unloved and unwanted step-sister JDO.  The persistence story and message within EE was divided, unclear, and (in CMP's case) inferior and fundamentally flawed.  This allowed Hibernate to flourish and practically become a de facto standard.  If the JCP were broken and unworkable, we would never have been able to get JPA into the EE 5 specification.  JPA was such an important direction for the EE platform.  Firstly, it brought the innovation of using annotations, but more importantly it unified the platform: bringing de facto proprietary implementations like Hibernate and Toplink under the EE umbrella, retiring CMP and bringing the EJB community into the fold, and finally, forcing a shotgun marriage with the JDO crowd.  The platform needed this, needed this badly, to remain relevant.

Second case in point is CDI.  Spring’s rise has always been more about EJB’s incompetence rather than any real technology.  Java EE had three huge holes to fill: injection/IoC, true integration across various specifications, and most importantly an SPI to be able to extend the platform and foster innovation outside the specification process within the Java community.  CDI filled all these holes.  The fact that little Red Hat who is dwarfed in market cap and marketing muscle by the likes of Google, IBM, and Oracle, could push such a game-changing specification through with all the political opposition in place, is a testament that the JCP does work and can work if the participants are willing to focus on technology.

Third, the JCP was already changing pre-Oracle acquisition.  It was already becoming much more open.  Specifications like CDI, JSF, Validation, and JAX-RS all were defined within a completely open process.  Many had open source RIs and TCKs.  Things were improving.

As far as Doug goes, it's a big loss.  I know a few of my JBoss colleagues enjoyed working with him.  Personally, I think his departure was premature.   I don't think the dust has settled yet after the Sun acquisition.  All major JCP participants knew that the JCP had to change.  We wanted change.  We still have to give Oracle the benefit of the doubt and, more importantly, time.  Time to get organized.  Acquisitions take time to settle (believe me, I know).  From what I've seen, Oracle has in the past always been a supporter of innovation in the EE specification process.  I don't see why this couldn't continue.  They have some good people over there that recognize innovation and want to move the Java EE platform forward.  Whether or not they will have a say over the Oracle business people remains to be seen, IMO.

HornetQ REST Interface Beta 2 Released

1 Comment

A user requested Selector support. Follow the links and doco from:

http://jboss.org/hornetq/rest

To download, etc…
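For context, the Selector support mentioned above refers to JMS message selectors: SQL-92-style filter expressions evaluated against message properties, so a consumer only receives messages that match.  The details of how the HornetQ REST interface accepts a selector are in the linked docs; as a quick, hedged illustration (the helper method below is hypothetical, not HornetQ API), this is what composing a selector string looks like:

```java
// Sketch of JMS message selector syntax (SQL-92-like filters over message
// properties).  The propertyEquals() helper is a made-up convenience; the
// resulting string is the kind of expression you would pass to
// Session.createConsumer(destination, selector) in plain JMS.
public class SelectorExample
{
   // Build a simple equality filter over a string message property.
   static String propertyEquals(String property, String value)
   {
      return property + " = '" + value + "'";
   }

   public static void main(String[] args)
   {
      // Combine filters with AND, OR, comparison operators, etc.
      String selector = propertyEquals("color", "red") + " AND price > 10";
      System.out.println(selector);  // color = 'red' AND price > 10
   }
}
```

A consumer created with that selector would only be delivered messages whose `color` property is `red` and whose `price` property exceeds 10; messages that don't match stay on the queue for other consumers.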
