Is anybody doing HTTP message signing and encryption?


Over the past six months, off and on, I've been researching and prototyping various security-related features for Resteasy. One thing I've wondered is: is anybody really doing anything with HTTP message signing and encryption? Email seems pretty well covered in this area with specifications like DOSETA/DKIM and S/MIME. You could theoretically apply these specifications to HTTP, and I have, but I could find no examples of people doing so on the Web. Maybe it's just that my Google searching skillz are poor.

Another thing I've noticed is that the crypto libraries (Bouncycastle and Python's M2Crypto) pretty much center around email as the protocol, and you have to dive into the codebase a bit to figure out ways to transmit things over HTTP. Bouncycastle relies on the javax.mail multipart implementation, which is a bit limited and not very lenient in parsing (it didn't like Python's S/MIME output).

Anyways, I hope to do a Resteasy 2.3 beta soon with S/MIME support. With it I'll have examples of Python clients posting S/MIME-formatted requests to Resteasy services. I'll post a few blogs on the subject so you can see how to transmit S/MIME between M2Crypto and Bouncycastle (Python and Java).

In the meantime, does anybody have any experience in this area?

Resteasy 2.2.2 Released


This is just a maintenance release to fix a few minor and critical bugs found by the community.  You can download 2.2.2 here.  Release notes are here.

Hopefully we can now focus on getting a 2.3 beta out the door.  Currently I’m working on S/MIME integration as well as a decentralized auth protocol discussed in previous blogs.

Resteasy 2.2.1 Released


This is just a maintenance release to fix a few minor and critical bugs found by the community.  You can download 2.2.1 here.  Release notes are here.

Decentralized Auth Ideas


Distributed workflow has to be the most complex use case to secure. It can involve multiple participants being coordinated both synchronously and asynchronously, all forwarding and distributing information and data between each other, and all needing to trust one another. If you could define a relatively scalable and simple solution for workflow, you'd have something that would work in less complex scenarios.

The hub-and-spoke model that seems to be popular involves a central identity management provider (IDP) that participants ping to authenticate requests and to receive security information. The biggest problem I foresee with this approach is that the IDP becomes a central point of failure. The IDP needs to be available for applications to work. It needs to be on the same network. There are a lot of extra distributed requests that need to be made.

All these problems bring me back to the stateless principle of REST. RESTful services can have state, but not session state. The idea is that session state travels with the request. Could we do something similar with security information? Sure, why not! How could you trust the integrity of such information? Digital signatures. I'm sure there are protocols out there that have thought of similar ideas, but it's cool to think things out for yourself. If your ideas match a particular existing protocol or specification, you know you're on the right track. The idea I have works as follows.

Let's pretend we have a user named Bill who wants to interact with a Travel Agent service that will buy a ticket for him on an airline, reserve an airport taxi, and reserve a hotel room. So, Bill is interacting with the Travel Agent directly. The Travel Agent is acting on behalf of Bill when it interacts with the airline, taxi, and hotel services. The airline, taxi, and hotel have to trust both the Travel Agent and Bill.

Step 1: Bill authenticates with an IDP saying he wants to interact with the Travel Agent.  The IDP returns metadata that specifies both Bill’s and the Travel Agent’s permissions for all the interactions that must take place.  It also returns the public keys for Bill and the Agent.  The IDP digitally signs all this information using its private key.

Step 2:  Bill sends a reservation request to the Travel Agent service.  Bill signs the request including the signed permissions and keys provided by the IDP.  Here’s what the request might look like:

POST /travel
Host: travelagent.com
Content-Type: application/reservation+xml
Authorization: doseta-auth user=bill;h=Visa:Permissions:Public-Keys:Host;verb=POST;path=/travel;bh=...;b=...
Visa: initiator=bill;h=Permissions:Public-Keys;d=idp.com;b=...
Permissions: bill="agent hotel airline taxi"; agent="reserve-hotel reserve-taxi reserve-flight"
Public-Keys: bill=23412341234;agent=3423412341234

<reservation>...</reservation>

Step 3: The Travel Agent authenticates and authorizes Bill's request. The Authorization header contains metadata that is signed by Bill. The metadata signed by Bill is the HTTP verb and path of the request (POST and /travel), the hash of the XML posted by the request, and the Visa, Permissions, and Public-Keys headers included within the request. The Travel Agent verifies this signed metadata by finding and using Bill's public key in the transmitted Public-Keys header. If the signature passes, then the Travel Agent knows that Bill sent the request. But… it does not yet know whether Bill is a trusted identity.
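The verify step can be sketched with plain `java.security` — a hypothetical sketch, not the actual Doseta wire format; the signing-base layout and class names here are my own assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

class RequestSigner {

    // The signing base covers the verb, path, a hash of the body, and the
    // headers listed in the "h=" parameter (Visa, Permissions, Public-Keys).
    static String signingBase(String verb, String path, byte[] body,
                              String visa, String permissions, String publicKeys)
            throws Exception {
        String bodyHash = Base64.getEncoder().encodeToString(
                MessageDigest.getInstance("SHA-256").digest(body));
        return verb + "\n" + path + "\n" + bodyHash + "\n"
                + visa + "\n" + permissions + "\n" + publicKeys;
    }

    // Bill signs the base with his private key (conceptually the "b=" parameter).
    static String sign(PrivateKey key, String base) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(base.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig.sign());
    }

    // The Travel Agent rebuilds the base from the incoming request
    // and verifies it with Bill's public key.
    static boolean verify(PublicKey key, String base, String encodedSig) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(base.getBytes(StandardCharsets.UTF_8));
        return sig.verify(Base64.getDecoder().decode(encodedSig));
    }
}
```

Any change to the verb, path, body, or covered headers breaks the signature, which is exactly the integrity property Step 3 relies on.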

Step 4: How does the Travel Agent know Bill is a valid person? How does it know that Bill is allowed to make a reservation? To answer these questions, the Travel Agent first looks at the transmitted Visa header. What it boils down to is that the Travel Agent only trusts the IDP. The Visa header was generated by the IDP and is a digital signature of the Permissions and Public-Keys headers. Through the Visa header, the IDP tells the Agent the permissions involved with the request and who will participate in the overall interaction. The Agent only needs to know the IDP's public key prior to the request being initiated. So, the Agent verifies the digitally signed Visa header using the stored public key of the IDP. A successful verification also means that the Agent can trust that Bill initiated the request. It can then look at the Permissions header to determine whether or not Bill is allowed to perform the action.

Step 5:  Next the Travel Agent needs to interact with the Airline, Hotel and Taxi services on behalf of Bill.  Here’s what a request to the Airline might look like.

POST /flights/tickets
Host: airline.com
Content-Type: application/ticket-purchase+xml
Authorization: doseta-auth user=agent;h=Visa:Permissions:Public-Keys:Host;verb=POST;path=/flights/tickets;bh=...;b=...
Visa: initiator=bill;h=Permissions:Public-Keys;d=idp.com;b=...
Permissions: bill="agent hotel airline taxi"; agent="reserve-hotel reserve-taxi reserve-flight"
Public-Keys: bill=23412341234;agent=3423412341234

<purchase>...</purchase>

You'll notice that the Visa, Permissions, and Public-Keys headers have the same values as in the original request made by Bill. The Authorization header is different, as the Travel Agent is making the request. The airline service does authentication and authorization of the Agent's request the exact same way the Agent did for Bill's request. Again, the key part of this is that only the IDP is trusted, and only the IDP's public key needs to be known ahead of time.

Vulnerabilities

Disclaimer: I'm new to security, so thinking about and dealing with attacks is new to me. Generally, a lot of attacks can be prevented by specifying a timestamp and expiration with each signed piece of data. Services can refuse to honor old requests. Nonces could also be included within the signature metadata to avoid replays.
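The timestamp-plus-nonce defense might look roughly like this (an illustrative sketch; the window size and method names are made up, and a real service would also need to expire old nonces from the set):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ReplayGuard {

    private final long maxAgeMillis;
    private final Set<String> seenNonces = ConcurrentHashMap.newKeySet();

    ReplayGuard(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // Accept a request only if its signed timestamp is fresh
    // and its signed nonce has never been seen before.
    boolean accept(long signedTimestamp, String nonce, long now) {
        if (now - signedTimestamp > maxAgeMillis) {
            return false;             // too old: refuse to honor the request
        }
        return seenNonces.add(nonce); // false if the nonce was already used (replay)
    }
}
```

Since both the timestamp and the nonce sit inside the signed metadata, an attacker can't refresh them without breaking the signature.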

User’s Private Key is compromised

The user's authentication with the IDP doesn't have to be key based. It could be TOTP based, where the user has to log in through his browser, providing a password along with a device-generated time-based key. The IDP could then return a temporary private key the client uses to sign requests.

IDP’s Private Key is compromised

This is a scary one. Maybe it could be prevented by requiring and acquiring Visas from multiple IDPs? A service would verify signatures from two or more IDPs. The probability of more than one IDP's private key being compromised becomes less and less the more IDPs you have involved with the interaction.

Summary

So here’s a summary of this brainstormed protocol:

  • The Public-Keys header's purpose is twofold. First, it's a list of public keys. More importantly, it is a list of the principals involved with the interaction.
  • The Permissions header is a list of the permissions of each principal for each service they will interact with.
  • The Visa header is a digital signature of the Public-Keys and Permissions headers. It will probably also carry a timestamp and an expiration (all digitally signed, of course).
  • The Authorization header exists to verify the integrity of the HTTP request and the identity of the entity sending it. It is a digital signature of the HTTP verb, path, host, message body, and the Visa, Permissions, and Public-Keys headers.
  • The IDP is the only trusted entity in the whole multi-tier distributed interaction.
  • Each service must have the IDP's public key stored at deployment time, prior to servicing any requests.
  • There is no communication to the IDP by any service. Even the initiating client's first interaction with the IDP to obtain a Visa could be done ahead of time and re-used for multiple interactions.

This is just a rough outline, and there are probably other things that could be added, like nonces for instance. It's just a matter of implementing it and getting people to use it. The real question is: is there an existing protocol already out there that does this sort of thing?

Brainstorming REST Security Part I


If you went to my presentations at JUDCon/JBossWorld/RHS 2011 or read my recent blog posting you’ve probably noticed that I’m starting to focus on REST+Security.  This will be the start of a series of blogs that attempts to solidify a common vision around Security+REST and spec out what we’re going to do for RESTEasy and JBoss.

Internet Security is A Ghetto

One thing I've noticed is what a ghetto Internet security is, or even security in general. There are old and new specifications, various industry collaboration efforts that sort of succeed (OpenID), start to succeed then have mutinies (OAuth), WS-* specs trying to bleed into the Web space (SAML), and promising specs that have had success in the email world (DKIM). And that's just a short list of examples. It's a freakin' mess! One common thread seems to be that most of them focus on providing security for the Internet (Internet with a capital 'I'), and most have their roots in providing security for browser-based apps. Enterprise apps, while they can build off of security specs defined for the Internet, can often have very different requirements. Web services can have different requirements as well, since a human (browser) may not be involved in the client/server interactions. In summary, I guess what I'm saying is that there are too many specs, no clear winners, too much browser focus, and very little enterprise focus.

What I'm trying to do with this and subsequent blogs is to brainstorm: what high-level requirements should security for enterprise apps have, how can we make deployment of a security solution easier, what existing specs are applicable, what existing specs are open to input, what new specs have to be implemented, how can we make the protocols as easy as possible to implement in multiple languages, and finally, how can we design security services to make them as easy as possible to deploy to our enterprise applications?

If I had to deploy a security solution…

A security solution I'd like to have would take the enterprise, as well as the differences between browser and non-browser clients, into account. It's gotta balance strong security with ease of deployment, ease of use, and ease of implementation. Many of these points will be obvious, but I want to write them down.

  • For browser-based clients I'd like to authenticate using a user password and a one-time password (OTP) generated by a soft or hard token generator. Plain passwords are just not viable. I myself have had both my GMail and World of Warcraft accounts hacked. A combo of password + random key allows users to have simple-to-remember passwords yet be secure enough not to get hacked. With smartphones like the iPhone and Android, it's easy to acquire a soft key generator (or implement one) without paying RSA an arm and a leg.
  • After authentication, the browser client should obtain an expirable cookie that it forwards with each request that contains authentication information the server will use to authenticate subsequent requests.
  • For non-browser clients,  I like the idea of digitally signed requests.  Verification of a digitally signed request would be the authentication mechanism.  What’s good about this (like the OTP of browser-based clients) is that credentials are different per request in that they are part of the attached signature.  A nonce and/or an expiration can be included within the digital signature to avoid replay attacks.
  • I foresee the need for non-browser clients to make requests on behalf of other services to other services.  Attaching multiple signatures to a request might be the way here.
  • It would be really cool to have a decentralized way to both authenticate and authorize. The hub-and-spoke approach that Picketlink STS uses creates a bit of a single point of failure and can require extra network round trips. This decentralized mechanism should be able to work in an environment where services are making requests to other services on behalf of one or more identities.
  • A user had a really interesting case where they wanted to provide access to content through signed URLs.  The idea is that they would generate a signed URL and email it to a user to click on.  Very interesting.
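A signed URL along those lines might be built with a simple HMAC over the path and an expiration (an illustrative sketch; the query parameter names and layout are assumptions, not any particular product's scheme):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class SignedUrl {

    // Append an expiration time and an HMAC of path+expiration to the URL.
    static String sign(String baseUrl, String path, long expires, byte[] secret)
            throws Exception {
        return baseUrl + path + "?expires=" + expires + "&sig=" + token(path, expires, secret);
    }

    // Honor the link only if it hasn't expired and the HMAC matches.
    static boolean verify(String path, long expires, String token, byte[] secret, long now)
            throws Exception {
        if (now > expires) return false;
        return token(path, expires, secret).equals(token); // use a constant-time compare in real code
    }

    private static String token(String path, long expires, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] sig = mac.doFinal((path + "\n" + expires).getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }
}
```

The nice property for the email use case: the link is self-contained, so the content server needs no session and no lookup, just the shared secret.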

Applicable Specs

Here are some specs I thought of off the top of my head that could be useful. If anybody has ideas of others, let me know.

  • Time-based One-Time Password Algorithm (TOTP). Anil already did some work in Picketlink to implement this protocol. We still need to integrate it as an Authenticator Valve in JBossWeb. There's also a nice iPhone app that supports TOTP. I actually forked and re-implemented a lot of it on my own when I was learning Objective-C a few months ago. We're looking at creating an Apple App Store account to distribute this forked implementation so we can brand it Red Hat.
  • SAML.  This may be what we need to do decentralized authorization.  I’m not fully versed in the spec, but I have read up on their HTTP bindings.  I’m not sure if there is any way to tunnel assertions through an HTTP header. (We don’t want to send SOAP requests).  If we can use SAML, we can piggyback off of a lot of the efforts already done in the Picketlink project.
  • Doseta.  I’ve already blogged about this protocol.  Using DNS to distribute keys is a little weird, but cool.  I’m asking that working group for this spec to break out Doseta into a few different specifications so that we can re-use the signature calculation algorithm in a standard way and to also make DNS public key publication optional and maybe also to provide an HTTP way to distribute keys.
  • Amazon REST Authentication.  Specs out how to sign URLs.  Maybe this could be standardized at IETF.
  • OpenID.  OpenID seems interesting for decentralized authentication, but I’m not sure if it can be used as a mechanism to do decentralized authorization.  OpenID is also more of a browser-based technology.
  • OAuth. OAuth has both browser and non-browser bindings. OAuth 2.0 pretty much leaves out what a token looks like. I also don't really want a token-based system for non-browser clients.
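For reference, the TOTP mechanism mentioned in the first bullet boils down to HOTP (RFC 4226) computed over a 30-second time counter (RFC 6238). A minimal sketch:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

class Totp {

    // RFC 6238: 6-digit HOTP over a 30-second time-step counter.
    static int code(byte[] secret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        // HMAC-SHA1 over the 8-byte big-endian counter
        byte[] h = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        // Dynamic truncation (RFC 4226): low nibble of the last byte picks the offset
        int offset = h[h.length - 1] & 0x0f;
        int bin = ((h[offset] & 0x7f) << 24) | ((h[offset + 1] & 0xff) << 16)
                | ((h[offset + 2] & 0xff) << 8) | (h[offset + 3] & 0xff);
        return bin % 1_000_000;
    }
}
```

The server accepts the code if it matches the current time step (usually allowing one step of clock drift either way), which is what makes hacked-password lists useless on their own.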

Possible Middleware Services

Here’s some ideas for services/integration we would implement.

  • HTTP Identity Proxy. While implementing just an HTTP proxy cache is boring, what might make it feasible is applying identity to the mix. This proxy would delegate authentication and even authorization to an outside service. Requests would be authenticated/authorized through the proxy, digitally signed, then forwarded to the target service. The target service then only needs to verify the signed request using the public key of the proxy. While there are obvious performance drawbacks, what's interesting about this is that the application doesn't have to think much about security, and it could possibly be added even after the service is deployed.
  • TOTP Authenticator Valve. Nuff said… I think Anil already has this.
  • Better auth integration with JBossWeb and the JBoss Security Domain abstraction. Right now there are just too many steps to enable things.
  • Various auth plugins for JBossWeb to realize our vision here.

Resteasy 2.2 Released


After baking in the oven for the last few months, Resteasy 2.2 has been released to the world and is available for download. You can view our documentation here. We fixed a lot of bugs since the 2.1 release, which you can view in the release notes of the previous beta and RC releases.

Feature-wise, we're starting to focus on security solutions for RESTful web services. In this release we focused on a digital signature framework based on DOSETA and DKIM. I wrote a blog a few months ago about some possible use cases for digital signatures. It will be interesting to see how people use our digital signature framework, but more importantly how, and whether, they want to use the DOSETA and DKIM protocols for digital signature propagation. We are extremely interested in feedback and suggestions for improving the protocol and how it might solve (or not solve) any security use cases you might have.

Beyond that, writing the digital signature framework also helped to flesh out the Resteasy interceptor API. For instance, we found that it was very useful to hold off marshalling header objects into string format until the stream is written to. This allowed us to pass information through header objects to the interceptors that perform signing and verification. Writing down these requirements will be very applicable to the JAX-RS 2.0 JSR, as we're currently focusing on interceptors there.

What’s Next?

Further 2.x releases will focus mainly on adding security features. We're also going to be developing Resteasy 3.0 in parallel. Here are some points:

  • Message body encryption, using both multipart/encrypted and a new Content-Encoding we'll develop. This will also help us flesh out interceptors more, I think.
  • SAML/Picketlink. I think we may be able to integrate with SAML, specifically Picketlink to provide some hub/spoke authentication/authorization.
  • Clean up our OAuth support.
  • JAX-RS 2.0 has started, and we will implement it in Resteasy 3.0. The client API is shaping up, and I might deliver a prototype of it when the next revision is submitted by the JAX-RS spec leads.

Interceptors in JAX-RS 2.0


If you don't know already, the JAX-RS 2.0 JSR has started. Right now things are focused on the client API and the interceptor model. The initial proposal for the client API and its corresponding interceptor model is based on Jersey.

I’ve submitted a counter proposal that tries to simplify the class hierarchy and model interceptors based more on what Resteasy has to offer.

Santiago Pericas-Geertsen, one of the spec leads, recently blogged about another proposed interceptor model. He does a great job of setting some precedent by looking at the EJB and CDI interception models. I think there are some requirements he has overlooked with his initial proposal, though, that I'd like to address in this blog (and that are addressed in the Red Hat proposal linked above).

Interceptor Use Cases

Resteasy's interceptor model was driven by use cases. There were a bunch of features that I, and others, wanted Resteasy to have, and an interceptor model provided the abstractions needed to implement these features. Specifically:

  • Server-side custom security
  • Client response caching
  • Server response caching
  • content encoding: GZIP
  • Header decoration: i.e. annotations that add Cache-Control header
  • Digital Signature generation: the DKIM stuff I’ve been working on lately

All these features have been implemented using our interceptor model. Another feature I want to add, which I also think might affect the requirements of an interceptor API, is:

  • Message Body encryption and the ability to transparently handle it for the client or server.

Interceptor Requirements

Interceptor APIs aren't a new thing. They have been implemented in many different frameworks over the years. One thing that I think throws a wrench into JAX-RS is asynchronous invocations (on both the client and server side). Asynchronous HTTP has become pretty popular on both the client and server side. In this case, different threads may post a request and process the response.

An interceptor model must take asynchronous invocations into account.

The Red Hat proposal has four different types of interceptors: Pre, Post, Reader, and Writer. They are invoked in the following way on the client (pseudo-code):

public ClientResponse execute(...) {
  // Pre-processors run first; a non-null response short-circuits the HTTP invocation
  ClientResponse response = invokePreProcessors();

  if (response == null) {
     invokeWriterInterceptors();
     response = invokeHttpInvocation();
  }

  response = invokePostProcessors();
  return response;
}

// application code
ClientResponse response = execute(...);
Something something = response.getEntity(Something.class); // application acquires entity
// getEntity() invokes ReaderInterceptors
The server-side pseudo-code would be very similar. Why the need for four interfaces? Four interception points? What is the purpose of each interception point? Let's look at our original list of use cases to see:

  • First and foremost, we need to be able to support an asynchronous model. On the client, different threads may be sending requests and processing responses. This is the reason for the pre and post split.
  • Notice that if a pre-processor returns a response object, no HTTP invocation is done. The client cache use case needs this because it may have the requested entity cached. In that scenario, the HTTP invocation will want to be circumvented.
  • On the server, with custom security, a pre-processor needs to be able to abort an incoming invocation before it reaches the JAX-RS method if the request is not authenticated.
  • A pre-processor may also want to decorate request headers. The client cache implementation will want to set the If-None-Match and If-Modified-Since headers if it believes a cached entry is stale (to perform a conditional GET).
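The pre-processor short-circuit described above might look like this — the `Request`/`Response` types here are hypothetical stand-ins for illustration, not the real Resteasy or JAX-RS API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical types, just to show the flow.
interface Request {
    String uri();
}

class Response {
    final int status;
    final byte[] body;
    Response(int status, byte[] body) { this.status = status; this.body = body; }
}

// A client-cache pre-processor: returning a non-null Response
// tells the framework to skip the HTTP invocation entirely.
class ClientCachePreProcessor {

    private final Map<String, Response> cache = new ConcurrentHashMap<>();

    Response preProcess(Request request) {
        return cache.get(request.uri()); // cache hit short-circuits the wire call
    }

    void store(String uri, Response response) {
        cache.put(uri, response);
    }
}
```

The framework contract (null means "proceed to HTTP", non-null means "use this response") is what lets caching and security share the same interception point.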

So, that's everything that might be done by a pre-processor.

What is a WriterInterceptor for? Why is a specific WriterInterceptor needed instead of just piggybacking off of the pre-processor (on the client) or the post-processor (on the server)?

  • There are two separate use cases for WriterInterceptors: GZIP encoding and digital signatures. A GZIP WriterInterceptor needs to compress the outgoing response, so it wraps the OutputStream in a GzipOutputStream. For digital signatures (in the DKIM case), a hash of the body needs to be calculated and added to the DKIM-Signature request (client-side) or response (server-side) header. This means the outgoing body needs to be buffered as well as hashed so that the header can be set before the body is written.
  • Why a separate interface from the pre-processor (client) and post-processor (server)? The most compelling reason to have a separate WriterInterceptor is reusability on the client and server. Writer interception happens in different places on the client and server. On the client it happens during request pre-processing. On the server it happens during response post-processing.
  • Another reason for a separate interface is that a WriterInterceptor has a clear order and interception point. A client cache interceptor wants to avoid streaming an entity body altogether, while a content-encoding interceptor wants to intercept stream output.
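The two WriterInterceptor use cases can be sketched like so (illustrative only; a real interceptor API would wire these operations into the marshalling pipeline rather than expose them as static helpers):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

class WriterInterceptors {

    // GZIP use case: wrap the entity stream so the body is
    // compressed transparently as the marshaller writes it.
    static OutputStream gzip(OutputStream entityStream) throws IOException {
        return new GZIPOutputStream(entityStream);
    }

    // DKIM-style use case: the body must be buffered and hashed so the
    // body-hash header can be set *before* the body hits the wire.
    static String bodyHashHeader(byte[] bufferedBody) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(bufferedBody);
        return "bh=" + Base64.getEncoder().encodeToString(digest);
    }
}
```

Note how the two cases pull in opposite directions — one streams, one buffers — which is part of why writer interception needs its own ordered interception point.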

What are post-processors for?  Why the separation/distinction of a ReaderInterceptor compared to a PostProcessor?

  • On the client side, a cache interceptor will want to cache the raw bits of a response entity *BEFORE* it is unmarshalled. Also, based on the status code (i.e. NOT MODIFIED), it may want to pull an entry from the cache itself, set the input stream, and override some response headers. A post-processor would be used for this.
  • One of the problems on the client is that application code basically decides when unmarshalling happens. Application code may make decisions based on a status code and/or a response header before it decides how an entity body is unmarshalled, or even whether it is unmarshalled. Because a cross-cutting concern (like caching) may need to modify a response code or header, you need this distinction between post-processing of a response and reader interception.
  • One last use case for a post-processor is header decoration on the server side. Think of a @CacheControl annotation that builds and sets a Cache-Control response header.
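The NOT MODIFIED case above might look roughly like this (again with hypothetical method signatures, not the real API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A client-cache post-processor: it sees the response before any
// unmarshalling, so it can cache raw bytes on a 200 and substitute
// the cached bytes on a 304 Not Modified.
class ClientCachePostProcessor {

    private final Map<String, byte[]> rawCache = new ConcurrentHashMap<>();

    byte[] postProcess(String uri, int status, byte[] rawBody) {
        if (status == 304) {
            return rawCache.get(uri); // serve the previously cached raw bits
        }
        rawCache.put(uri, rawBody);   // cache raw bytes before unmarshalling
        return rawBody;
    }
}
```

Because this runs before `getEntity()` (and thus before any ReaderInterceptor), the application still gets a normal unmarshalled entity either way.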

What are ReaderInterceptors used for?

  • Decoding GZIP-encoded streams and verifying digital signatures.
  • Like WriterInterceptors, it is nice to have the concept of a ReaderInterceptor, as it can be used on both the client and server side.

Review of Requirements

Here's a shorter list of requirements, without the explanations. A few others are added here without detailed explanation:

  • Need to support both synchronous and asynchronous invocation styles seamlessly, without a lot of redundant code.
  • ability to add/modify/remove headers from an outgoing or incoming request
  • ability to add/modify/remove headers from an outgoing or incoming response
  • Ability to abort/interrupt/bypass request processing and return a custom response
  • Ability to intercept before unmarshalling to add/modify/remove headers or change the status code.
  • You need to be able to pass information between interceptors. The Servlet API has request attributes; something similar is needed in a JAX-RS interceptor model.
  • Interceptors need to be able to obtain metadata about the things they are intercepting. They need to be able to introspect annotations on the server side (and on the client side too, if we standardize Resteasy's proxy framework).

Hopefully I didn’t miss anything here.

Interceptor ordering

Another thing to talk about is how interceptors should be ordered. While interceptor developers should try to make their implementations as order-independent as possible, this isn't always possible. If you are writing a library of interceptors that you want to be usable by a wide variety of applications (like the ones we have in Resteasy), you don't want to require any extra configuration by the user to specify interceptor ordering. You want users to be able to pick up interceptors just as they would automatically have their services scanned for and deployed.

To help mitigate this problem, Resteasy has the concept of logical ordering, or "named" precedence. Resteasy defines a default set of precedence categories: SECURITY, HEADER_DECORATOR, DECODER, ENCODER. If an application interceptor falls into one of these categories, the developer just annotates the interceptor with the desired precedence category. New categories can be created and defined as coming before or after a preconfigured precedence category.
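The named-precedence idea reduces to sorting interceptors by their category's position in a well-known order. A simplified sketch (Resteasy's real mechanism is annotation-driven; the class and field names here are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class PrecedenceSorter {

    // The default precedence categories, in invocation order.
    static final List<String> ORDER =
            Arrays.asList("SECURITY", "HEADER_DECORATOR", "DECODER", "ENCODER");

    static class Interceptor {
        final String name;
        final String precedence;
        Interceptor(String name, String precedence) {
            this.name = name;
            this.precedence = precedence;
        }
    }

    // Stable sort by category index: security runs before header
    // decoration, which runs before decoding, then encoding.
    static List<Interceptor> sort(List<Interceptor> interceptors) {
        List<Interceptor> sorted = new ArrayList<>(interceptors);
        sorted.sort(Comparator.comparingInt(
                (Interceptor i) -> ORDER.indexOf(i.precedence)));
        return sorted;
    }
}
```

The advantage over raw numbers is that a library interceptor only has to name its category; it never has to coordinate a magic integer with every application that deploys it.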

It probably doesn't need to be that complicated. In Santiago's blog he suggested a numeric ordering. What an application could do is define constants that represent a category. Much easier to plug things in this way than the Resteasy model. 🙂

Anyways, this blog is getting quite long. Hopefully I've articulated the use cases and requirements of interceptors well enough that you can see the Red Hat proposal is a sound one, based on extensive experience using the model. I also want to say that the JAX-RS 2.0 process seems to be moving along pretty smoothly. With Paul and Roberto leaving so abruptly I was a little worried at first, but I think Santiago and Marek have things in hand.

Investigating DOSETA(DKIM) For Signatures


Recently I blogged about my proposed Content-Signature header for transmitting digital signatures. I created an Internet Draft and submitted it to the IETF. After a bunch of discussions with some helpful folks on the IETF HTTP WG list, I found that email already has such a system: DomainKeys Identified Mail (DKIM). It's designed specifically for email messages, but some work is being done by David Crocker and friends to make it applicable to other protocols via the DOSETA specification.

One particularly interesting feature is how public keys are discovered. Basically, DNS names are used for identity, and acquiring public keys for verification is just a matter of getting a TXT record from a particular domain. It sounds exciting, because even in an IT organization you could have distributed, non-centralized authentication and authorization. DNS gives you a structure whereby you could authorize a whole domain of users or one user at a time. It would be interesting to see how this structure could be mapped onto a URI too.
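A DKIM-style key record is just a semicolon-separated tag=value list published as a DNS TXT record (e.g. `v=DKIM1; k=rsa; p=<base64 key>` under `<selector>._domainkey.<domain>`). Once you've fetched the record, parsing it is straightforward (sketch; the record contents below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class DkimRecordParser {

    // Split "v=DKIM1; k=rsa; p=..." into a tag -> value map.
    // Only the first '=' in each part delimits the tag, so base64
    // padding ('=') inside the key value is preserved.
    static Map<String, String> parse(String txtRecord) {
        Map<String, String> tags = new HashMap<>();
        for (String part : txtRecord.split(";")) {
            int eq = part.indexOf('=');
            if (eq > 0) {
                tags.put(part.substring(0, eq).trim(), part.substring(eq + 1).trim());
            }
        }
        return tags;
    }
}
```

The `p` tag carries the base64-encoded public key, which a verifier would feed into its crypto library to check the signature.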

So, my short-lived support for Content-Signature in Resteasy 2.2-beta-1 will be retired, and I'm going to look into using DOSETA instead for 2.2.Final.

HornetQ 2.2.2 Released (Has latest REST interface)


HornetQ 2.2.2 has been released.  The HornetQ REST interface is now distributed and bundled with it.  The source code has also moved to the HornetQ SVN.  Visit hornetq.org for more details.

Resteasy 2.2-beta-1 released with new digital signature framework


We fixed a lot of bugs; check out JIRA. There are also some notable new features, specifically:

– Our new digital signature framework, inspired by Greg Totsline. This is the implementation and JAX-RS integration I was talking about in the last few blogs.
– Improved interceptors a little bit by allowing attribute passing.

Hopefully an RC release in April (about a month out; I'm traveling a little bit over the next month), followed by a quick GA release very soon after. As always, go to our main Resteasy page for download and documentation links.
