json-home format: resource discovery

Thank you, Mark Little, for turning me on to the JSON-Home format Internet Draft.  “application/json-home” is a format that describes the resources available from a particular site, as well as possible hints on how to interact with them.

   GET / HTTP/1.1
   Host: example.org
   Accept: application/json-home

   HTTP/1.1 200 OK
   Content-Type: application/json-home
   Cache-Control: max-age=3600
   Connection: close

   {
     "resources": {
       "http://example.org/rel/widgets": {
         "href": "/widgets/"
       },
       "http://example.org/rel/widget": {
         "href-template": "/widgets/{widget_id}",
         "href-vars": {
           "widget_id": "http://example.org/param/widget"
         },
         "hints": {
           "allow": ["GET", "PUT", "DELETE", "PATCH"],
           "representations": ["application/json"],
           "accept-patch": ["application/json-patch"],
           "accept-post": ["application/xml"],
           "accept-ranges": ["bytes"]
         }
       }
     }
   }

While I like the format, my personal opinion is that the hints are not needed.  Most (99%?) non-browser clients already know how to interact with the resources.  What they are really looking for is the actual URL of the resource.  IMO, a separate format (or formats) should be defined for resource description, and the link relation URL can offer up that representation if it wants to.
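
Just to illustrate that point, here’s a rough Java sketch of what a non-browser client actually does with a home document: fetch it, pull out the href-template for the relation it already understands, and build the URL.  The Jackson library, the relation name from the example above, and the naive template expansion are all just assumptions for the sketch.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonHomeClient {
    public static void main(String[] args) throws Exception {
        // Ask the site root for its home document.
        HttpURLConnection conn = (HttpURLConnection) new URL("http://example.org/").openConnection();
        conn.setRequestProperty("Accept", "application/json-home");
        String body;
        try (InputStream in = conn.getInputStream(); Scanner scanner = new Scanner(in, "UTF-8")) {
            body = scanner.useDelimiter("\\A").next();
        }

        // The client already knows the relation it cares about; it just needs the URL.
        JsonNode resources = new ObjectMapper().readTree(body).path("resources");
        String template = resources.path("http://example.org/rel/widget")
                                   .path("href-template").asText();

        // Naive expansion; a real client would use a URI template library.
        String widgetUrl = template.replace("{widget_id}", "42");
        System.out.println(widgetUrl);   // prints /widgets/42
    }
}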

Another beef I have with this (and with the Atom link XML format) is that the value of the relationship, rel, can be a URL.  I’d much rather define a logical name and have a separate attribute that specifies a URL describing the relationship.  For applications, especially intranet-based ones, URLs can change more frequently than their Internet counterparts.  A logical name attribute could remain fixed while the description URL stays free to change.

BTW, I’m glad that the powers-that-be at IETF are showing some love to non-browser clients.  JSON-Home is similar to something I’ve done for a few of the RESTful services I’ve written.

Decentralized Auth with Cookies

Way back in June I was brainstorming about ideas for decentralized authentication.  Here’s a summary of the requirements I wanted:

  • Completely stateless servers.  Servers that host browser applications and RESTful services would not have to store usernames, passwords, or permission metadata (allowed roles).
  • Servers would not have to handshake with an Identity Provider (IDP).  An HTTP request should contain all the information a server needs to authenticate and authorize a client.
  • A single web request can spawn complex authenticated and authorized interactions between underlying distributed web services.  This single web request would have all the metadata needed to invoke these complex underlying interactions between distributed services.

Unifying Interactions With Cookies

The problem with the protocol discussed in my previous blog was that it relied on new headers being transmitted between the client and server.  This sort of mechanism just wouldn’t work with browser-based applications.  Why?  Well, a browser isn’t going to know how to transmit and process new headers.  The only way to get a browser to store and forward metadata is via a cookie.  Most browser-based apps already use a session cookie to authenticate users (after a log-in of course).  There’s no reason we couldn’t re-use the digital signature techniques discussed in my previous blog with cookies.  Here’s how it could work:

  1. Browser points to example.com
  2. example.com redirects browser to idp.com (the identity provider)
  3. User logs into the IDP
  4. IDP redirects back to example.com.  The forward URL has all the security metadata needed for the request, digitally signed (a query parameter would hold the signature).  The Amazon URL-signing technique could be used.
  5. Example.com would authenticate and authorize based on the query parameters of the forward URL and also verify the signature.
  6. Example.com would send back a set of cookies containing all the security metadata expressed as cookie name/value pairs.  A special digital signature cookie would be used to sign them all so that on subsequent requests the server could verify all the information stored in these cookies (a rough sketch follows below).
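
Here’s a rough sketch of what step 6 might look like on the server side.  To keep it short I’m using an HMAC with a secret held by example.com rather than a full digital signature, and the class and cookie names are made up for illustration; the idea is simply that every piece of security metadata becomes a cookie and one extra cookie signs them all.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SignedCookies {
    // Builds the cookie values for step 6: one cookie per piece of security
    // metadata, a timestamp, and a signature cookie covering all of them.
    public static Map<String, String> sign(Map<String, String> metadata, byte[] serverSecret)
            throws Exception {
        Map<String, String> cookies = new TreeMap<>(metadata);
        cookies.put("timestamp", Long.toString(System.currentTimeMillis()));

        // Canonicalize the name/value pairs in sorted order, then HMAC the result.
        StringBuilder canonical = new StringBuilder();
        for (Map.Entry<String, String> entry : cookies.entrySet()) {
            canonical.append(entry.getKey()).append('=').append(entry.getValue()).append('\n');
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(serverSecret, "HmacSHA256"));
        byte[] sig = mac.doFinal(canonical.toString().getBytes(StandardCharsets.UTF_8));
        cookies.put("signature", Base64.getEncoder().encodeToString(sig));

        // On subsequent requests the server recomputes the HMAC over the incoming
        // cookies and compares it to the signature cookie; no server-side state is needed.
        return cookies;
    }
}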

Step #4 might be problematic as the URLs could get quite large.  Who knows whether a browser barfs on absurdly long URLs.  In this case we could do a double form-post: the IDP could respond to a successful login with an HTML form whose target is example.com.  This form would contain hidden fields holding the security metadata, and one particular form parameter would carry a digital signature (I think the SAML HTTP bindings work like this).

One vulnerability here is cross-site scripting.  Most websites already have this vulnerability, I believe, so using existing techniques would be best.  I’m not sure how websites solve this particular problem, but the HttpOnly flag could be used with each session cookie.  JavaScript apps could have their JavaScript dynamically generated by the server, including the code needed to manually apply and send the appropriate cookies.  Another mitigation is to include a timestamp with the cookies: the application server would check for stale timestamps and, with each request, reset the digitally signed cookies with a fresh timestamp.

Non-Browser Clients Use Cookies Too

Non-browser clients could use a simpler RESTful protocol to obtain a signed URL or the set of signed form parameters.  There’s also no reason they couldn’t get a set of signed cookies instead of either of these approaches.

Web Sockets, a disaster in waiting?

Mark posted a really nice article to InfoQ: WebSockets vs. REST?

From what I understand of WebSockets, it’s basically a way to set up a two-way socket connection rather than an application protocol. What worries me the most is that you’ve basically rolled back 20 years of protocol consolidation, and we’re now back to a free-for-all of everybody’s pet protocol. Not so bad if your client and server are a tightly coupled, unreusable UI application. Really bad if you’re writing a web service that is supposed to be reusable by unknown heterogeneous clients. With WebSockets, web services are not only going to have to negotiate the media type, but also the application protocol. Seems like a huge step backward to me in terms of integration.  Did we forget all the problems we had with Oracle Forms, PowerBuilder, Visual Basic, and all the UI/framework-specific protocols those developer frameworks introduced?  Do we really want to go back to those days?

What about security issues?  With an anything-goes socket protocol, isn’t this a security nightmare for our operations folks?

Disclaimer:  You could say that I’m both biased and threatened by the concept of Web Sockets given my involvement in REST frameworks and APIs.  But in all honesty, I’d be very happy to embrace a new protocol that is both ubiquitous and easily supportable and interoperable across many different languages and platforms.  There’s much to be said for the simple request/response, text-based approach of HTTP (and REST over HTTP).  While it may not be uber-efficient, it’s just so easy to hack and support.

Resteasy 2.3.1 Released

This is a maintenance release of the 2.3.x series.

As always, to download and see documentation follow the links from our website.  Take a look at our Jira release notes.  You might also want to check out the Migration guide to view what has broken as far as backward compatibility if you’re upgrading from an earlier version.

World of RESTCraft

An online buddy of mine drew my attention to Blizzard’s new Community API for World of Warcraft.  For those of you who aren’t familiar with World of Warcraft, it is a massively multiplayer online role-playing game.  They have millions of players.  The game is so successful and generates so much cash that Blizzard pays out a dividend to stockholders.  Not only do they have millions of players, there’s also a very large community around WoW.  The game itself has its own scripting language which you can use to write add-ons.  This add-on community is huge, with thousands upon thousands of apps written.

There’s also a large variety of third-party sites that provide character and guild management, quest information, gear info, damage simulators, and gear optimization.  These tools need to access Blizzard’s databases, and this is where Blizzard’s new REST-based Community API comes in.  Originally, a lot of these sites did screen scraping on WoW’s main website to grab information and access character management.  Since April, Blizzard has been developing and publishing a full read and write RESTful interface for these applications.  It seems they picked REST because of the ease of integration across many languages.

Things to note

In browsing the API documentation, here are a few things that jumped out at me.

Document by example

The first thing to note is that the API is documented by example.  Here’s the URL pattern you use.  This is what the HTTP request looks like.  This is the JSON data you should send, and this is what the JSON data looks like.  IMO, this is what REST API documentation should look like.  No WADL.  No schema.  Just plain: here’s what you can send, here’s what the request looks like.  This is the approach I’ve taken with my own API documentation.  You gotta remember, the people who are going to be integrating with these APIs don’t come from SOAP-land, WS-*-land, CORBA-land, or enterprise-programming land, but all of them will understand HTTP and JSON pretty easily.  This is what I love about REST: “lightweight” interoperability with a very low barrier to entry.

Signature-based Authentication

Hackers are ruthless when it comes to World of Warcraft.  I myself was hacked once and had to get my account restored.  Blizzard is very careful about this as it creates a lot of support headaches for them.  You can use a soft token via your smartphone, or order a physical RSA-like token generator to use when you log into the game.  As for the REST API, you need to acquire a public and private key.  Authentication is done by hashing your private key along with the current time, URL, and HTTP method:

UrlPath = <HTTP-Request-URI, from the port to the query string>
StringToSign = HTTP-Verb + "\n" +
    Date + "\n" +
    UrlPath + "\n";

Signature = Base64( HMAC-SHA1( UTF-8-Encoding-Of( PrivateKey, StringToSign ) ) );
Header = "Authorization: BNET" + " " + PublicKey + ":" + Signature;
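
For the curious, here’s roughly what that computation looks like in Java.  This is only a sketch based on the pseudocode above (the class name and example values are mine, not Blizzard’s):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class BnetSigner {
    // Builds the Authorization header value from the pieces described above.
    public static String authorizationHeader(String publicKey, String privateKey,
                                             String httpVerb, String date, String urlPath)
            throws Exception {
        String stringToSign = httpVerb + "\n" + date + "\n" + urlPath + "\n";

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(privateKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] hash = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));

        String signature = Base64.getEncoder().encodeToString(hash);
        return "BNET " + publicKey + ":" + signature;
    }

    public static void main(String[] args) throws Exception {
        // The date value here is just an example placeholder.
        System.out.println(authorizationHeader("myPublicKey", "myPrivateKey",
                "GET", "Fri, 10 Jun 2011 20:59:24 GMT", "/api/wow/realms"));
    }
}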

Amazon does something very similar for many of its public REST APIs.  While not a true digital signature (sigs are encrypted hashes and don’t include the private key), it’s very close, and a lot simpler for users to understand and use.

Not very link driven

Can you imagine this API being explained via a set of published links rather than a set of URI patterns?  I’ve taken advantage of HATEOAS, especially within the HornetQ REST API, but in many cases just publishing the URI scheme can be very useful.  Maybe it’s data publishing vs. interaction?  With a data-publishing app (WoW) it makes more sense to publish a URI scheme for your REST interface.  With an interactive application (i.e. HornetQ REST), HATEOAS and link-driven interfaces make a lot more sense and give you a lot more flexibility.

Versioning?

In one of the forum posts, the developer talked about how he/she planned to version the API in the future.  It seems that they will version using URIs.  The latest and greatest will always use the same top-level URI schemes.  If you want to tie yourself to an older version of the API, the URI scheme will include a version identifier:

New API:
/api/wow/realms

Old API:
/api/wow/v1/realm/status

All in all, it will be great to see this API evolve over time.  This will be a great public display of a REST API, and it will be very interesting to see how Blizzard tackles various issues.  There’s a lot we can learn here.

They are guidelines, not laws

I’m catching up on some blog reading.  A great blog on REST, if you don’t read it already, is Subbu Allamaraju’s (it’s in my blog links too).  I like to call him Dr. REST.  Back in May he wrote about Richardson’s Maturity Model and how measuring your APIs against the model is the wrong thing to do (I think he’s followed it up with a presentation).  I can’t agree more.  What I like about this model (and other articles like it) is comparing it to my own history of growing my understanding of REST.  IMO, what you should do with these models and guidelines is read them, examine them, and see if they spark any ideas for improving your application.  They just might improve your understanding of REST and why certain constraints are good.  Don’t try to fit your API to REST; let REST help you write a better API.  Don’t apply REST for the sake of REST.  This is primarily why I unplugged myself from the rest-discuss mailing list: if you treated REST as a set of guidelines instead of a set of laws, you were castigated for it.  Wrong approach.

Anyways, as usual, great blog, Subbu.  BTW, you should check out his book too.

Is anybody doing HTTP message signing and encryption?

Over the past six months, off and on, I’ve been researching and prototyping various security-related features for Resteasy.  One thing I’ve wondered is: is anybody really doing anything with HTTP message signing and encryption?  Email seems pretty well covered in this area with specifications like DOSETA/DKIM and SMIME.  You could theoretically apply these specifications to HTTP, and I have, but I could find no examples of people doing so on the Web.  Maybe it’s just that my Google searching skillz are poor.

Another thing I’ve noticed is that the crypto libraries (Bouncycastle and Python’s M2Crypto) pretty much center around email as the protocol, and you have to dive into the codebase a bit to figure out ways to transmit things over HTTP.  Bouncycastle relies on the javax.mail multipart implementation, which is a bit limited and not very lenient at parsing (it didn’t like Python’s SMIME output).
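
For what it’s worth, here’s roughly the kind of code involved on the Java side, sketched with Bouncycastle’s S/MIME generator.  Treat it as a sketch: it assumes the BC provider is registered and that you already have a private key and certificate, and the resulting multipart would still need to be marshalled into an HTTP request body.

import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import java.util.Collections;

import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMultipart;

import org.bouncycastle.cert.jcajce.JcaCertStore;
import org.bouncycastle.cms.jcajce.JcaSimpleSignerInfoGeneratorBuilder;
import org.bouncycastle.mail.smime.SMIMESignedGenerator;

public class SmimeSigningSketch {
    // Wraps a JSON payload in a multipart/signed SMIME entity.
    public static MimeMultipart sign(String json, PrivateKey key, X509Certificate cert)
            throws Exception {
        MimeBodyPart content = new MimeBodyPart();
        content.setText(json, "UTF-8");
        content.setHeader("Content-Type", "application/json");

        SMIMESignedGenerator gen = new SMIMESignedGenerator();
        gen.addSignerInfoGenerator(new JcaSimpleSignerInfoGeneratorBuilder()
                .setProvider("BC")
                .build("SHA1withRSA", key, cert));
        gen.addCertificates(new JcaCertStore(Collections.singletonList(cert)));

        return gen.generate(content);
    }
}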

Anyways, I hope to do a Resteasy 2.3 beta soon with SMIME support.  With it I’ll have examples of Python clients posting SMIME-formatted requests to Resteasy services.  I’ll post a few blogs on the subject so you can see how to transmit SMIME between M2Crypto and Bouncycastle (Python and Java).

In the meantime, does anybody have any experience in this area?
