Distributed workflow has to be the most complex use case to secure. You could have multiple participants being coordinated both synchronously and asynchronously, all forwarding and distributing information and data between each other, and all needing to trust one another. If you could define a relatively scalable and simple solution for workflow, you’d have something that would work in less complex scenarios.
The hub-and-spoke model that seems to be popular involves a central identity management provider (IDP) that participants ping to authenticate requests and to receive security information. The biggest problem I foresee with this approach is that the IDP becomes a central point of failure. The IDP needs to be available for applications to work. It needs to be on the same network. There are a lot of extra distributed requests that need to be made.
All these problems bring me to thinking about the stateless principle of REST. RESTful services can have state, but not session state. The idea is that session state travels with the request. Could we do something similar with security information? Sure, why not! How could you trust the integrity of such information? Digital signatures. I’m sure there are protocols out there that have thought of similar ideas, but it’s cool to think things out for yourself. If your ideas match a particular existing protocol or specification, you know you’re on the right track. The idea I have works as follows.
Let’s pretend we have a user named Bill who wants to interact with a Travel Agent service that will buy a ticket for him on an airline, reserve an airport taxi, and reserve a hotel room. So, Bill is interacting with the Travel Agent directly. The Travel Agent is acting on behalf of Bill when it interacts with the airline, taxi, and hotel services. The airline, taxi, and hotel have to trust both the Travel Agent and Bill.
Step 1: Bill authenticates with an IDP saying he wants to interact with the Travel Agent. The IDP returns metadata that specifies both Bill’s and the Travel Agent’s permissions for all the interactions that must take place. It also returns the public keys for Bill and the Agent. The IDP digitally signs all this information using its private key.
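To make Step 1 concrete, here’s a minimal sketch of what the IDP side might look like, assuming RSA keys and a particular layout for the signed content; the class name, the header layout, and the exact signature algorithm are my own illustrative choices, not part of the idea as described above.

```java
// Hypothetical sketch of the IDP issuing a Visa: it signs the Permissions and
// Public-Keys headers so any service holding the IDP's public key can verify
// them later without calling back to the IDP.
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class IdpVisaIssuer {

    public static String issueVisa(PrivateKey idpPrivateKey,
                                   String permissionsHeader,
                                   String publicKeysHeader) throws Exception {
        // The exact canonical layout is an assumption; it just has to be
        // reproducible by every verifier.
        String signedContent = "Permissions:" + permissionsHeader + "\n"
                             + "Public-Keys:" + publicKeysHeader;

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(idpPrivateKey);
        signer.update(signedContent.getBytes(StandardCharsets.UTF_8));
        String b = Base64.getEncoder().encodeToString(signer.sign());

        // b=... is the signature blob carried in the Visa header
        return "initiator=bill;h=Permissions:Public-Keys;d=idp.com;b=" + b;
    }
}
```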
Step 2: Bill sends a reservation request to the Travel Agent service. Bill signs the request including the signed permissions and keys provided by the IDP. Here’s what the request might look like:
POST /travel
Host: travelagent.com
Content-Type: application/reservation+xml
Authorization: doseta-auth user=bill;h=Visa:Permissions:Public-Keys:Host;verb=POST;path=/travel;bh=...;b=...
Visa: initiator=bill;h=Permissions:Public-Keys;d=idp.com;b=...
Permissions: bill="agent hotel airline taxi"; agent="reserve-hotel reserve-taxi reserve-flight"
Public-Keys: bill=23412341234;agent=3423412341234

<reservation>...</reservation>
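Here’s a sketch of how Bill’s client could compute the bh= and b= values in that Authorization header. The canonical string layout is an assumption (the post only says the verb, path, listed headers, and a body hash get signed), and RequestSigner is a hypothetical helper name.

```java
// Hypothetical client-side signer for Step 2: hash the body, then sign the
// verb, path, host, Visa/Permissions/Public-Keys headers, and the body hash
// with the user's private key.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class RequestSigner {

    public static String sign(PrivateKey userKey, String user, String verb,
                              String path, String host, String visa,
                              String permissions, String publicKeys,
                              byte[] body) throws Exception {
        // bh=... : hash of the entity body so the receiver can detect tampering
        String bodyHash = Base64.getEncoder().encodeToString(
                MessageDigest.getInstance("SHA-256").digest(body));

        // Canonical string covering everything the Authorization header claims to protect
        String canonical = verb + "\n" + path + "\n" + host + "\n"
                + "Visa:" + visa + "\n"
                + "Permissions:" + permissions + "\n"
                + "Public-Keys:" + publicKeys + "\n"
                + bodyHash;

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(userKey);
        signer.update(canonical.getBytes(StandardCharsets.UTF_8));
        String b = Base64.getEncoder().encodeToString(signer.sign());

        return "doseta-auth user=" + user
                + ";h=Visa:Permissions:Public-Keys:Host"
                + ";verb=" + verb + ";path=" + path
                + ";bh=" + bodyHash + ";b=" + b;
    }
}
```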
Step 3: The Travel Agent authenticates and authorizes Bill’s request. The Authorization header contains metadata that is signed by Bill. The metadata signed by Bill is the HTTP verb and path of the request (POST and /travel), the hash of the XML posted in the request, and the Visa, Permissions, and Public-Keys headers included with the request. The Travel Agent verifies this signed metadata by finding and using Bill’s public key in the transmitted Public-Keys header. If the signature passes, then the Travel Agent knows that Bill sent the request. But it does not yet know whether Bill is a trusted identity.
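The verification half of that exchange might look like the sketch below, assuming the same canonical string layout as the signing sketch and assuming the caller has already parsed Bill’s public key out of the transmitted Public-Keys header (parsing is omitted).

```java
// Hypothetical Step 3 check: the Travel Agent rebuilds the canonical string
// from the request it received and verifies the b=... signature against the
// sender's public key.
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

public class RequestVerifier {

    public static boolean verify(PublicKey senderKey, String canonicalString,
                                 String signatureB64) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(senderKey);
        verifier.update(canonicalString.getBytes(StandardCharsets.UTF_8));

        // If this passes, the verb, path, body hash, and the Visa/Permissions/
        // Public-Keys headers were produced by whoever holds the matching private key.
        return verifier.verify(Base64.getDecoder().decode(signatureB64));
    }
}
```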
Step 4: How does the Travel Agent know Bill is a valid person? How does it know that Bill is allowed to make a reservation? To answer these questions, the Travel Agent first looks at the transmitted Visa header. What it boils down to is that the Travel Agent only trusts the IDP. The Visa header was generated by the IDP and is a digital signature of the Permissions and Public-Keys headers. Through the Visa header, the IDP tells the Agent the permissions involved with the request and who will participate in the overall interaction. The Agent only needs to know the IDP’s public key prior to the request being initiated. So, the Agent verifies the digitally signed Visa header using the stored public key of the IDP. A successful verification also means that the Agent can trust that Bill initiated the request. It can then look at the Permissions header to determine whether or not Bill is allowed to perform the action.
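A sketch of that Step 4 check follows. It assumes the Visa was signed over the same “Permissions + Public-Keys” layout as the issuing sketch earlier, and the permission lookup is a deliberately naive string check purely for illustration.

```java
// Hypothetical Step 4: verify the Visa with the IDP public key stored at
// deployment time, then consult the IDP-signed Permissions header.
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

public class VisaVerifier {

    public static boolean trustRequest(PublicKey idpPublicKey,
                                       String permissionsHeader,
                                       String publicKeysHeader,
                                       String visaSignatureB64,
                                       String principal,
                                       String requiredPermission) throws Exception {
        // Rebuild exactly what the IDP signed (same layout as the issuing sketch)
        String signedContent = "Permissions:" + permissionsHeader + "\n"
                             + "Public-Keys:" + publicKeysHeader;

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(idpPublicKey);
        verifier.update(signedContent.getBytes(StandardCharsets.UTF_8));
        if (!verifier.verify(Base64.getDecoder().decode(visaSignatureB64))) {
            return false; // the headers were not vouched for by the IDP
        }

        // Naive lookup: does the signed Permissions header grant this principal
        // the action the request is trying to perform, e.g. bill -> "airline"?
        int start = permissionsHeader.indexOf(principal + "=\"");
        if (start < 0) {
            return false;
        }
        String grants = permissionsHeader.substring(start).split("\"")[1];
        return grants.contains(requiredPermission);
    }
}
```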
Step 5: Next the Travel Agent needs to interact with the Airline, Hotel and Taxi services on behalf of Bill. Here’s what a request to the Airline might look like.
POST /flights/tickets
Host: airline.com
Content-Type: application/ticket-purchase+xml
Authorization: doseta-auth user=agent;h=Visa:Permissions:Public-Keys:Host;verb=POST;path=/flights/tickets;bh=...;b=...
Visa: initiator=bill;h=Permissions:Public-Keys;d=idp.com;b=...
Permissions: bill="agent hotel airline taxi"; agent="reserve-hotel reserve-taxi reserve-flight"
Public-Keys: bill=23412341234;agent=3423412341234

<purchase>...</purchase>
You’ll notice that the Visa, Permissions, and Public-Keys headers have the same values as in the original request made by Bill. The Authorization header is different, as the Travel Agent is making the request. The airline service does authentication and authorization of the Agent’s request in exactly the same way the Agent did for Bill’s request. Again, the key point is that only the IDP is trusted, and only the IDP’s public key needs to be known ahead of time.
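In code, the delegation step could be as small as the sketch below: the Agent copies the IDP-issued headers verbatim and only produces a fresh Authorization header with its own private key. RequestSigner is the hypothetical helper from the Step 2 sketch.

```java
// Hypothetical Step 5: the Travel Agent forwards Bill's Visa, Permissions, and
// Public-Keys headers unchanged and re-signs only the Authorization header.
import java.security.PrivateKey;

public class DelegatedCall {

    public static String[] buildAirlineHeaders(PrivateKey agentKey, String visa,
                                               String permissions, String publicKeys,
                                               byte[] purchaseXml) throws Exception {
        String authorization = RequestSigner.sign(
                agentKey, "agent", "POST", "/flights/tickets", "airline.com",
                visa, permissions, publicKeys, purchaseXml);

        return new String[] {
                "Authorization: " + authorization,  // signed by the agent
                "Visa: " + visa,                    // copied from Bill's request, signed by the IDP
                "Permissions: " + permissions,      // copied, covered by the Visa signature
                "Public-Keys: " + publicKeys        // copied, covered by the Visa signature
        };
    }
}
```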
Vulnerabilities
Disclaimer: I’m new to security, so thinking about attacks and how to deal with them is new to me. Generally, a lot of attacks can be prevented by specifying a timestamp and expiration with each signed piece of data. Services can refuse to honor old requests. Nonces could also be included within the signature metadata to avoid replays.
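As a rough sketch of those replay defenses, a service could do something like the following, assuming the timestamp, expiration, and nonce are carried inside the signed metadata; the in-memory nonce set is purely illustrative (a real deployment would need shared, expiring storage).

```java
// Hypothetical replay guard: reject requests whose signed metadata is expired
// or whose nonce has already been seen.
import java.time.Instant;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ReplayGuard {

    private final Set<String> seenNonces = ConcurrentHashMap.newKeySet();

    public boolean accept(Instant timestamp, Instant expiration, String nonce) {
        Instant now = Instant.now();
        if (now.isBefore(timestamp) || now.isAfter(expiration)) {
            return false; // stale (or not-yet-valid) request
        }
        return seenNonces.add(nonce); // false if the nonce was replayed
    }
}
```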
User’s Private Key is compromised
The user’s authentication with the IDP doesn’t have to be key based. It could be TOTP based, where the user has to log in through his browser, providing a password along with a device-generated time-based key. The IDP could then return a temporary private key the client uses to sign requests.
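A sketch of that idea, with heavy assumptions: verifyPassword, verifyTotp, and register are hypothetical placeholders, and the one-hour lifetime is just an example. The point is only that the signing key handed to the client is ephemeral and tied to an expiration the IDP records.

```java
// Hypothetical IDP-side issuance of a temporary signing key after a
// password + TOTP check.
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.time.Duration;
import java.time.Instant;

public class TemporaryKeyIssuer {

    public static KeyPair issue(String user, String password, String totpCode)
            throws Exception {
        if (!verifyPassword(user, password) || !verifyTotp(user, totpCode)) {
            throw new SecurityException("authentication failed");
        }

        // Temporary signing key; its public half and an expiration would be
        // embedded in the IDP-signed Visa so services can check both.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair temporary = gen.generateKeyPair();
        Instant expires = Instant.now().plus(Duration.ofHours(1)); // illustrative lifetime
        register(user, temporary.getPublic(), expires);            // hypothetical bookkeeping
        return temporary;
    }

    // Placeholders standing in for real credential and TOTP verification.
    private static boolean verifyPassword(String user, String password) { return true; }
    private static boolean verifyTotp(String user, String code) { return true; }
    private static void register(String user, PublicKey key, Instant expires) {}
}
```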
IDP’s Private Key is compromised
This is a scary one. Maybe it could be prevented by requiring and acquiring Visas from multiple IDPs? A service would verify signatures from two or more IDPs. The probability of more than one IDP’s private key being compromised becomes less and less the more IDPs you have involved in the interaction.
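A minimal sketch of that multi-IDP check, assuming each IDP signed the same canonical Visa content and the service is configured with a required threshold of valid signatures:

```java
// Hypothetical multi-IDP verification: a Visa is only honored when at least
// `required` of the configured IDPs have signed the same content, so a single
// stolen IDP key is no longer enough to forge a usable Visa.
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;
import java.util.List;

public class MultiIdpVerifier {

    public static boolean verify(List<PublicKey> idpKeys, List<String> visaSignatures,
                                 String signedContent, int required) throws Exception {
        int valid = 0;
        for (int i = 0; i < idpKeys.size() && i < visaSignatures.size(); i++) {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(idpKeys.get(i));
            verifier.update(signedContent.getBytes(StandardCharsets.UTF_8));
            if (verifier.verify(Base64.getDecoder().decode(visaSignatures.get(i)))) {
                valid++;
            }
        }
        return valid >= required;
    }
}
```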
Summary
So here’s a summary of this brainstormed protocol:
- The Public-Keys header’s purpose is two-fold. First, it’s a list of public keys. More importantly, it is a list of the principals that are involved with the interaction.
- The Permissions header lists the permissions of each principal involved, for each service they will interact with.
- The Visa header is a digital signature of the Public-Keys and Permissions headers. It will probably have a timestamp and an expiration as well (all digitally signed, of course).
- The Authorization header exists to verify the integrity of the HTTP request of the entity sending the request. It is a digital signature of the HTTP verb, path, host, message body, Visa, Permissions, and Public-Keys headers.
- The IDP is the only trusted entity in the whole multi-tier distributed interaction.
- Each service must have the IDP’s public key stored at deployment time, prior to servicing any requests.
- There is no communication to the IDP by any service. Even the initiating client’s first interaction with the IDP to obtain a Visa could be done ahead of time and re-used for multiple interactions.
This is just a rough outline, but there are probably other things that could be added, like nonces for instance. It’s just a matter of implementing it and getting people to use it. The real question is: is there an existing protocol already out there that does this sort of thing?
Jun 19, 2011 @ 08:24:58
You seem to have come up with something similar to what I was working on a few years ago and described here: http://www.jillesvangurp.com/static/pervasivemag.pdf.
In short, OAuth is a really nice protocol, but it has the weakness that a lot of overhead is involved in establishing whether the resource owner has given permission to access the resource. Basically, OAuth 1.0 did this via redirects plus unspecified UI work, which doesn’t scale.
In the paper I linked above, we replaced this with a notion of group membership that is vouched for by a group owner who expresses memberships by giving out signed, cacheable tokens. Token authenticity is verified simply by verifying the signatures and requires no network traffic. This is very similar to what you describe above. Think of groups as services rather than static lists and you can come up with use cases that involve many different services all vouching for different things that would be very hard to verify by a single service. We built a modified OpenID IDP server that, in addition to authentication, also did the job of collecting membership tokens.
You can vouch for things like presence, employment, conference registrations, subscriptions, country citizenship, etc. If you start combining claims about things like that, you can build some very sophisticated authorization rules that you can verify simply by verifying a series of signatures.
If you combine this idea with communication over HTTPS, you basically gain defense against man-in-the-middle attacks, plus certificates that are signed by trusted parties. This allows you to establish that tokens are handed out by whoever is supposed to hand them out.
Jun 20, 2011 @ 11:24:30
Awesome, awesome, awesome! Thanks for the feedback. I also haven’t been satisfied with OAuth (or OpenID). I’ll take a look at the paper. It’s probably a bit more thought out than what I dreamed up drinking a beer by the pool Saturday. FYI, I always thought security was the most boring thing in the world. It’s getting more interesting by the day… (I can see Anil rolling his eyes.)
Jun 20, 2011 @ 18:22:16
Check out the Grid Security Infrastructure: http://www.globus.org/security/overview.html
It is based on X.509 certificates plus PKI and is widely used in grid computing infrastructures. The unique GSI feature that might be of interest to you is Proxy Certificates, which are used to dynamically enable one entity to “act on behalf” of another entity:
“Proxy Certificates allow an entity holding a standard X.509 public key certificate to delegate some or all of its privileges to another entity which may not hold X.509 credentials at the time of delegation. This delegation can be performed dynamically, without the assistance of a third party, and can be limited to arbitrary subsets of the delegating entity’s privileges. Once acquired, a Proxy Certificate is used by its bearer to authenticate and establish secured connections with other parties in the same manner as a normal X.509 end-entity certificate.”
See this paper: http://www.globus.org/alliance/publications/papers/pki04-welch-proxy-cert-final.pdf
Though not a RESTful one, that’s the closest security protocol I’m aware of for the problem you described.
Jun 21, 2011 @ 13:28:19
Seems to work in the same high level way as OAuth, except instead of exchanging a token, you’re exchanging private keys.
Jun 22, 2011 @ 19:36:52
Not really. Here are some differences:
1. In GSI the Resource Provider is not involved in delegation (the generation and signing of the proxy certificate). A service that wants to acquire a proxy certificate generates an asymmetric key pair and sends its public key to the user client (the private key should never leave the host, so “exchanging private keys” is a bad idea). The user client receives the public key, creates a proxy certificate signed with the user’s private key (the same way any CA signs certificates), and sends the proxy certificate to the service.
2. The acquired proxy certificate can be used to access an arbitrary Resource Provider on behalf of the user, as long as the provider trusts the CAs that issued the certificates for the user and the service. The actions that may be performed on behalf of the user can be limited by specifying them inside the proxy certificate. Also, a proxy certificate has a limited lifetime (in grids it is usually around 12-24 hours).
3. The acquired proxy certificate can be used by a service to perform delegation to another service to act on behalf of the user, without involvement of the user. The scenario is the same as the original proxy generation, except that the new proxy is signed with the existing proxy’s private key. This enables the dynamic creation of “chains of delegation”. A typical use case from the grid world is the following. A user sends a computing job to a central grid scheduler that dynamically dispatches the job to the most suitable computing resource. The grid scheduler uses the proxy certificate to authenticate on the computing resource on behalf of the user. The job is started, but it needs to access input data that is stored on another storage resource. In order to authenticate on the storage resource on behalf of the user, the job can acquire a proxy certificate from the scheduler. In the general case a user needn’t know in advance where, when, and for how long his job will run, or which providers will be involved, so delegation should work automatically without user involvement.
The big downside of GSI is that each user must have an X.509 certificate issued by a trusted CA. This can be partially overcome by creating OpenID providers that issue short-lived certificates. I’m very interested to see a lightweight RESTful protocol that can do the same thing GSI does. OAuth simply doesn’t scale to grids and other dynamic multiparty environments (e.g. the smart spaces mentioned by Jilles), because it requires “one-to-one” relationships and interactions between each user and each resource provider. By the way, the notion of group membership in Jilles’s paper is very similar to the VOMS service, which adds a group membership attribute inside the proxy certificate:
http://en.wikipedia.org/wiki/Voms
Jun 22, 2011 @ 20:41:38
Oleg, I think this is basically what I’m proposing in this blog. But instead of the user authorizing somebody to act on its behalf, it is the IDP that authorizes everybody. This way, you can manage the storage of permissions in one place, the IDP, yet every participant in an interaction need only know the public key of the IDP.
What I’m proposing is “lightweight” from a configuration and deployment perspective, but requests would be quite lengthy given that I want to transmit one or more public keys with each request.