Proposed HTTP digital signature protocol and API


4/5/11: After a lot of feedback from the IETF HTTP WG, I found that work is already being done in this area in the DOSETA specification. I’ll be retiring Content-Signature for the time being.

3/23/11: I’ve been encouraged to bring this to the IETF and have submitted an Internet-Draft on the subject.  Please go there to see further iterations on this specification.


Recently a RESTEasy user asked for the ability to digitally sign requests and responses.  They were pushing HTTP requests through one or more intermediaries and wanted to make sure that the integrity of the message was maintained as it hopped around the network.  They needed digital signatures.

There’s always been multipart/signed, but I never really liked the data format. One, what if some clients support the format and some don’t? Two, signature data really seems to belong in the HTTP headers rather than enclosed within an envelope. I found a nice blog post that shared similar thoughts and added a bunch more to the conversation. So, having failed to find a match through a Google search, I decided to define our own protocol. (FYI, OAuth does have signatures as part of its protocol, but I wanted something orthogonal to authentication, as the client and server may not be using OAuth for authentication.)

Protocol Goals

The protocol goals and features we wanted to have were:

  • Metadata defining exactly how the message was signed
  • Ability to specify application metadata about the signature and have that metadata be a part of the signature
  • Simplicity of headers.  Have all signature information be stored within HTTP request or response headers.  This makes it easier for frameworks and client and server code in general to handle signature verification.
  • Expiration.  We wanted the option to expire signatures.
  • Signer information.  We wanted the ability to know who signed the message.  This would allow receivers to look up verification keys within internal registries.
  • Ability to ignore the signature if you don’t care about that information or if the client or server doesn’t know how to process it.
  • Ability to forward representation/message to multiple endpoints/receivers
  • Allow multiple different URLs to publish the same signed message
  • Although it could be used as an authorization mechanism, it is not meant to replace existing OAuth or Digest protocols that ensure message integrity

The Content-Signature Header

The Content-Signature header contains all signature information. It is an entity header that is transmitted along with a request or response. It is a semicolon ‘;’ delimited list of name/value pairs. Values must be enclosed within quotes if the name or value contains any delimiting character. These attributes are metadata describing the signature, as well as the signature itself. The Content-Signature header may also have more than one value; in other words, more than one signature may be included within the Content-Signature header. Multiple signatures are delimited by the ‘,’ character.

These are the core attributes of the Content-Signature header:

signature – (required) This is the hex-encoded signature of the message. Hex encoding was chosen over Base64 because Base64 inserts cr/lf characters after 76 bytes, which screws up HTTP header parsing.

values – (optional) This is a colon “:” delimited list of attributes that are included within the Content-Signature header and that are used to calculate the signature. The order of these listed attributes defines how they are combined to calculate the signature. The message body is always last when calculating the signature. If this attribute is omitted, then no Content-Signature attribute is used within the calculation of the signature.

headers – (optional) This is a colon “:” delimited list of HTTP request or response headers that are included within the signature calculation. The order of these listed headers defines how they are combined to calculate the signature.

algorithm – (optional) The algorithm used to sign the message. The allowable values here are the same as those allowed by java.security.Signature.getInstance(). If there is an IETF or W3C registry of signing algorithms, we could use those values instead.

signer – (optional) This is the identity of the signer of the message.  It allows the receiver to look up verification keys within an internal registry.  It also allows applications to know who sent the message.

id – (optional) This is the identity of the signature.  It could be used to describe the purpose of a particular signature included with the Content-Signature header.

timestamp – (optional) The time and date the message was signed.  This gives the receiver the option to refuse old signed messages.  The format of this timestamp is the Date format described in RFC 2616.

expiration – (optional) The time and date the message should be expired.  This gives the sender the option to set an expiration date on the message.  The format of this attribute is the Date format described in RFC 2616.

signature-refs – This is a ‘:’ delimited list referencing other signatures by their id attribute within the Content-Signature header. The referenced signature values will be included within the calculation of the current signature. The hex-encoded value of each referenced signature is used.

Other attributes may be added later depending on user requirements and interest. URI and query parameters were specifically left out of the protocol, as integrity between two parties should be handled by HTTPS/SSL, the Digest authentication scheme discussed in RFC 2617, or OAuth. Remember, the point of writing this protocol is so that representations can be signed and exchanged between multiple parties on multiple machines and URLs.
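
Since everything lives in one header, parsing is mechanical. Here’s a rough Java sketch (my own illustration, not part of the protocol) that splits a header value into individual signatures, each represented as a map of attribute name to value; it assumes values containing delimiters are double-quoted as described above:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ContentSignatureParser {

    // Splits a Content-Signature header value into individual signatures,
    // each a map of attribute name -> value. ';' separates attributes,
    // ',' separates signatures, and double quotes protect values that
    // contain either delimiter.
    public static List<Map<String, String>> parse(String header) {
        List<Map<String, String>> signatures = new ArrayList<>();
        Map<String, String> current = new LinkedHashMap<>();
        StringBuilder token = new StringBuilder();
        String name = null;
        boolean quoted = false;
        for (char c : (header + ",").toCharArray()) { // trailing ',' flushes the last signature
            if (quoted) {
                if (c == '"') quoted = false; else token.append(c);
            } else if (c == '"') {
                quoted = true;
            } else if (c == '=' && name == null) {
                name = token.toString().trim();
                token.setLength(0);
            } else if (c == ';' || c == ',') {
                if (name != null) current.put(name, token.toString().trim());
                token.setLength(0);
                name = null;
                if (c == ',' && !current.isEmpty()) {
                    signatures.add(current);
                    current = new LinkedHashMap<>();
                }
            } else {
                token.append(c);
            }
        }
        return signatures;
    }
}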

Signing and Verifying a message

The signer of a message decides which Content-Signature attributes and HTTP headers it wants to include within the full signature.  The signature is calculated by signing the concatenation of

attribute-values + header-values + signature-refs + message-body

Attribute-values pertain to the list of attribute names defined within the ‘values’ attribute of the Content-Signature element. Header-values pertain to the list of header names defined within the ‘headers’ attribute of the Content-Signature element. Signature-refs pertains to the referenced signatures that also appear within the Content-Signature header. Attributes must always precede headers. Headers must precede signature refs. The message-body always comes last. For example, if the signer decides to include the signer and expiration attributes and the Content-Type and Date headers with a text/plain message of “hello world”, the base for the signature would look like this:

billSunday, 06-Nov-11 08:49:37 GMTtext/plainFriday, 11-Feb-11 07:49:37 GMThello world

The Content-Signature header transmitted would look like:

Content-Signature: values=signer:expiration;
                   headers=Content-Type:Date;
                   signer=bill;
                   expiration="Sunday, 06-Nov-11 08:49:37 GMT";
                   signature=0f341265ffa32211333f6ab2d1

To verify a signature, the verifier would recreate the signature string by concatenating the attributes specified in the “values” attribute, the HTTP headers defined in the “headers” attribute, any referenced signatures, and finally the message body, then apply the verification algorithm.
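
To make that concrete, here’s a minimal Java sketch of building the base string and signing/verifying it with java.security.Signature, as the algorithm attribute suggests. This is my own illustration: it assumes UTF-8 for the base string (the protocol doesn’t pin down an encoding) and an example algorithm of “SHA256withRSA”; the class and method names are made up:

import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.List;

public class ContentSignatureCrypto {

    // Concatenate attribute values, header values, referenced signatures,
    // and the message body, in that order, per the protocol.
    public static String base(List<String> attributeValues, List<String> headerValues,
                              List<String> referencedSignatures, String messageBody) {
        StringBuilder base = new StringBuilder();
        attributeValues.forEach(base::append);
        headerValues.forEach(base::append);
        referencedSignatures.forEach(base::append);
        return base.append(messageBody).toString();
    }

    // Sign the base string and hex encode the result, since the signature
    // attribute is hex encoded rather than Base64.
    public static String sign(String base, String algorithm, PrivateKey key) throws Exception {
        Signature sig = Signature.getInstance(algorithm); // e.g. "SHA256withRSA" (assumed)
        sig.initSign(key);
        sig.update(base.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : sig.sign()) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Verification recreates the same base string and checks the decoded signature.
    public static boolean verify(String base, String algorithm, PublicKey key,
                                 String hexSignature) throws Exception {
        byte[] raw = new byte[hexSignature.length() / 2];
        for (int i = 0; i < raw.length; i++)
            raw[i] = (byte) Integer.parseInt(hexSignature.substring(2 * i, 2 * i + 2), 16);
        Signature sig = Signature.getInstance(algorithm);
        sig.initVerify(key);
        sig.update(base.getBytes(StandardCharsets.UTF_8));
        return sig.verify(raw);
    }
}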

If there is an attribute declared within the “values” attribute that isn’t specified in the Content-Signature header, it is assumed to be a secret held between the signer and verifier. How the verifier obtains the value of such an attribute is deliberately left undefined by this protocol.

If there is a header declared within the “headers” attribute that doesn’t exist, the server may choose to abort if it cannot figure out how to reproduce this value.

Here’s an example of multiple signatures.  Let’s say the Content-Signature header is initially set up like this with a message body of “hello”:

Content-Signature: id=husband;
                   signature=0001,
                   id=wife;
                   signature=0002

Here, we have two initial signatures signed by two different entities, husband and wife (found by their id attribute).  We want to define a third signature, marriage, that includes those signatures.

Content-Signature: id=husband;
                   signature=0001,
                   id=wife;
                   signature=0002,
                   id=marriage;
                   signature-refs=husband:wife;
                   signature=0003

The marriage signature would be calculated by signing this string:

00010002hello

Which is:

husband’s signature + wife’s signature + message body

If there is a signature reference declared within the signature-refs attribute that doesn’t exist, the server may choose to abort if it cannot figure out how to reproduce this value.

Other similar protocols out there?

I only spent about an hour looking to see if there were similar protocols out there.  If somebody knows, let me know.  It would be cool to get feedback on this proposal as well.

Edited:

People in the comments section of this entry keep mentioning two-legged OAuth, but everything I’ve seen describes carrying signatures within the Authorization header. This is something we don’t want, as we want to be able to use traditional authentication mechanisms so that signing can be supported on servers or clients that don’t understand OAuth (or don’t want to use it).

Should REST be Organic?


Since our kids were born years ago, my wife has generally been pretty Organic crazy. The Organic food movement was created so that we don’t “poison” our bodies with food grown and treated with harmful pesticides and chemicals, don’t introduce dangerous genetically modified seeds or growth hormones, and, finally, so that we contribute to society as a whole by promoting sustainable farming that doesn’t deplete or damage the environment.

From Movement to Certification

Organic food and farming started out as a movement, but quickly turned into a branding and certification effort. Many farmers and companies found that following strict organic principles was expensive and added to the cost of the goods they produced. While many customers were willing to pay the extra cost involved to avoid “poisoning” their bodies, many others were more interested in saving money now and worrying about the long-term consequences to their bodies and environment later down the road. So, some companies would avoid pesticides, but still use genetically modified seeds, and call their products organic. Or milk companies wouldn’t use growth hormones, but would still use non-organic feed for their livestock, and still call their products organic.

To fight this pollution and misrepresentation of organic principles a branding and certification effort was introduced so that each product on the market could be officially approved as organic or not.  Organic food customers would know what to expect from a product by seeing this brand on their packaging.  If you wanted to sell organic products, you’d have to be officially certified and inspected by a third party.

Is REST following Organic’s path?

Roy Fielding, the father of REST, has the same expectations that organic food consumers have. When something is deemed RESTful, he has certain expectations that have to be 100% fulfilled. It’s very understandable: Roy deals with Web standards; they have to scale; they’re going to be used by millions. The stability and usability of the Web are too important to propagate flawed protocols. Roy says in his PhD thesis:

While mismatches [to REST] cannot be avoided in general, it is possible to identify them before they become standardized.

So, maybe Roy should trademark REST, brand REST, and promote the creation of an official organization that can bless an application or API as RESTful much like we have an Organically Certified label.  Let’s do a tiny thought exercise on this…

One of the consequences of heading down this route is that REST evangelists will lose the Web as their prime example of REST in action. While the HTML and HTTP standards and specifications will remain a great example of REST in action, most of the applications on the Web don’t follow the strict criteria of REST (a static web page is not an application). As Roy states, it’s hard to avoid mismatches in the RESTful style, even if you try. This is especially true if you’re building a distributed interface for a legacy system. The rest of us would lose a lot of language and vocabulary to describe our apps if REST were branded/certified.

Well… you may be laughing (or fuming) about the idea of branding and certifying REST. The thing is, though, when you interact with the REST community you often get condescending comments like:

Then do not call it REST. It is that simple. And saying that some academic has not written a line of code detracts from what you are saying. REST has a clear enough definition. XXXX  (insert whatever here) is not RESTful, so don’t call it so!!

You get this sort of interaction when a thought, API, or interface breaks or bends a RESTful principle AND you still want to use REST to name and/or describe your thing. Reactions like this are very akin to treating REST as a certified brand rather than a style.

Expectations are just not the same

The thing is, though, the expectations of IANA, the W3C, and people like Roy Fielding are very different from those of the average application developer. There are really two main reasons why a developer is initially attracted to REST:

  • Simplicity.  You can use a RESTful interface without having to define a WSDL document, generate stubs, and download (or pay for) some complex middleware stack or library just to implement a “Hello World” example.
  • “Lightweight” interoperability. Because HTTP is everywhere, if you write a RESTful interface, it should be usable from most languages and platforms without much effort (i.e. iPhone, Android, Python, Ruby, Java, Perl, etc.)

Sure, there are a lot of other goodies you get when you learn more about REST architectural guidelines and start applying them, but these two reasons, IMO, are why app developers initially look into REST. The W3C, IANA, etc. have different expectations, though, as they are defining standards that the Web will be built upon. A screwup there can have much more far-reaching effects than messing up an application that can usually be refactored over time. The domain REST is being applied to should have a direct bearing on how critical we are of the interface.

Focus on benefits and consequences instead of definition

While strictly following REST works for Web standards, it’s just not always feasible to follow it religiously in everyday applications. After all, REST is a style and a set of guidelines, not a set of laws or, gasp, a branded, certified trademark. People should be allowed to use REST to name and describe their work without harassment, and critics should instead focus on the consequences of a specific app or API not following a specific RESTful constraint, and the benefits if they did modify the interface to follow the guideline.

I think we’ve all had negative experiences with trying to follow an architectural principle (REST, OOP, or whatever) religiously. I think most of us realize that focusing on delivery to market through multiple iterations, user requirements, and feedback is much more important than whether we religiously follow a specific guideline. The easy answer is “Then don’t call it REST!”, but we’d have a very limited vocabulary in software engineering if this mantra were followed for every new architectural style that was created.

HornetQ REST Interface Beta 2 Released


A user requested Selector support. To download, browse the docs, etc., follow the links from:

http://jboss.org/hornetq/rest

Is Google Protocol Buffers RESTful?


Recently, a RESTEasy user had a need to transfer a lot of data as efficiently as possible and found Google Protocol Buffers. He came across these blogs questioning the viability of Google Protocol Buffers within a RESTful architecture: Ryan’s, Vinoski’s, and Subbu’s. Although I am very late to the discussion, I thought I’d blog my opinion on the subject to give my RESTEasy user some form of answer.

Who cares if it’s RESTful or not?

Firstly, who cares if it is RESTful or not? Does PB fill a need? If so, don’t worry about the rants of some academic or some architect that hasn’t written a line of code in years (just to be clear, neither Steve, Subbu, nor Ryan falls into this academic/non-coding-architect category!!!). REST is described as an architectural style, a set of guidelines, a set of attributes on what makes the Web unique. The key words are style and guideline, NOT laws! Whether you’re borrowing from object-oriented, aspect-oriented, or RESTful principles and guidelines, there are always going to be tradeoffs you have to make. It is always much more important to get a working, maintainable, on-time, on-budget system than to satisfy some academic checklist.

Treat Protocol Buffers as a Representation Format

In Steve’s blog, he rants that PB is just a starter drug that puts you on the path to RPC crack-cocaine. Steve says:

In fact, if Google had stopped there [as a definition of a message format], I think Protocol Buffers could be a superb little package.

I agree with this statement.  Treat Protocol Buffers as a representation format only.  Follow REST principles when designing your interface.  Don’t use the RPC junk available in the PB specification.

Not Self-Describing

Ryan makes a good point that PB is not self-describing. IMO, this is a weak argument. Unless your clients are going to be rendering agents, self-description is pretty much a pedantic exercise. Code-based clients generally can’t make many dynamic decisions, so self-description information is pretty much useless to them. They have to understand interactions and formats beforehand, or they just can’t work.

Subbu, in the comments section of his blog, and Ryan suggest that custom media types are going to have to be defined to satisfy self-description. Because PB is very brittle (I’ll get into this later), you’ll need to define custom (and versioned) media types to support both older and newer clients. Something like:

application/customer+x-protobuf

and/or even embed a URL pointing to a .proto file:

application/x-protobuf;format=http://.../customer.proto

Not Hypermedia Driven?

Ryan states that Protocol Buffers is not hypermedia driven because:

Protocol Buffers do not have a means to describe links to external messages.

This is untrue. If you’re exchanging PB representations over HTTP, there’s no reason you can’t embed a URL within a PB message body, e.g.:

message BookOrder {
   ...
   repeated Link links = ...;
   message Link {
      required string url = 1;
      optional string type = 2;
   }
}

You have to declare these same kinds of “types” within JSON and XML as well, so I don’t see an issue here.
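
For comparison, here’s what the same link could look like in a hypothetical XML representation; the layout and URL are just illustrative:

<book-order>
   ...
   <link url="http://example.com/orders/111" type="application/xml"/>
</book-order>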

Stubs mean it’s unRESTful?

I have to disagree with this point as well. Stubs are just a means to productively interact with the data format. You have the same issue with XML and strongly typed languages. Is using XML Schema generated JAXB classes in Java any different here? IMO, no.

Protocol Buffers is a Very Brittle Format

IMO, perhaps the most compelling reason not to use Protocol Buffers is that it is a very, very brittle format. You need to have access to .proto file metadata to parse a PB message body. Because PB defines a very strict message definition, you’re going to have a hard time having older and newer clients co-exist as you add more information to each different message format. XML and JSON are much more forgiving formats, as you can generally ignore extra information. I’m not sure this is the case with PB.

Edited 10/26:  I was wrong, PB is not so brittle.  Read Bjorg’s comments below.  Apologies for only scanning the PB documentation.  While the stub requirement does make it a little brittle, it does seem you can design your messages to be backward compatible.

As Ryan states in his blog, this brittleness may violate RESTful principles. IMO, though, it should always be a measure of less RESTful vs. more RESTful, rather than the black-and-white approach of is RESTful vs. is not RESTful. This is because, again, there are always going to be tradeoffs you have to make when implementing your applications. If you’re following RESTful constraints when applying Protocol Buffers in your implementation, it should be fairly easy to move to less-brittle types like JSON or XML if you no longer find the need for an ultra-efficient message format like Protocol Buffers.

Conclusion

Protocol Buffers can be used RESTfully if you treat it solely as a representation format. You can still embed URLs within message definitions to make your representations hypermedia driven. PB is a more brittle format than what we’re used to, and you may have versioning issues as your interfaces evolve. Unless PB radically improves the performance of your system, you should probably stick to formats like XML or JSON, as it’s probably going to be easier to support them across the variety of languages now used within the industry.

New HornetQ REST Interface


After being distracted a lot by RESTEasy releases over the past few months, I finally have something usable (and, more importantly, documented) for the HornetQ REST Interface I’ve been working on. The interface allows you to leverage the reliability and scalability features of HornetQ over a simple REST/HTTP interface. Messages are produced and consumed by sending and receiving simple HTTP messages containing the XML or JSON (really, any media type) document you want to exchange.
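
For example, producing a message could be as simple as the hypothetical exchange below. The URL and response are made up for illustration; the real resource layout and link names are defined in the HornetQ REST docs:

POST /queues/jms.queue.orders HTTP/1.1
Content-Type: application/xml

<order>...</order>

HTTP/1.1 201 Created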

Other than being buzzword compliant, here are some of the reasons you might want to use the HornetQ REST Interface:

  • Usable by any programming language that has an HTTP client library.
  • Zero client footprint. We want HornetQ to be usable by any client/programming language that has an adequate HTTP client library. You shouldn’t have to download, install, and configure a special library to interact with HornetQ.
  • No envelope (i.e. SOAP) or feed (i.e. Atom) format requirements. You shouldn’t have to learn, use, or parse a specific XML document format in order to send and receive messages through HornetQ’s REST interface.
  • Lightweight interoperability. Since interactions are RESTful, the HTTP uniform interface provides all the interoperability you need to communicate between different languages, platforms, and even messaging implementations that choose to implement the same RESTful interface as HornetQ (i.e. the REST-* effort).
  • Leverage the reliability, scalability, and clustering features of HornetQ on the back end without sacrificing the simplicity of a REST interface.

HornetQ REST Features

  • Duplicate detection when producing messages
  • Pull and Push consumers
  • Acknowledgement and Auto-acknowledgement protocols
  • Create new queues and topics
  • Mix and match JMS and REST producers and consumers
  • Simple transformations

Visit the HornetQ REST Interface web page to find links for downloading and browsing docs, source code, and examples.

RESTEasy 2.0.0 Released!


Our first major release of 2010. After a bunch of betas, I’m pleased to announce that RESTEasy 2.0.0.GA has been released. A lot of work has been done within the RESTEasy community to improve on our last successful GA. Follow the links from the RESTEasy website to download the new release. Special thanks to Jozef Hartinger for the CDI integration, Eoghan Glynn for fixing a bunch of bugs, Stef Epardaud for the new Javascript client, and many others for their continuing support. Some highlights:

  • CDI Support
  • Spring 3.0 Support
  • TCK 1.1 Compliance
  • Async Servlet 3.0 Support
  • A new Javascript API. A Javascript servlet scans JAX-RS deployments and generates downloadable Javascript code that can be used as stubs.
  • Relicensed under ASL 2.0.  We switched to be compatible with HornetQ and Drools as we’re developing REST interfaces for these guys.
  • Tons of bugfixes and performance improvements reported by the community over the past 8 months.

Browse our release notes for the last few betas to look at all the bugs and features implemented.

The upcoming JBoss AS 6-Milestone 4 release will also have deeper integration with RESTEasy so that you can do automatic scanning, EJB injection, CDI injection, etc.  All the stuff you’d expect from a JAX-RS integrated EE 6 solution.

REST core values


REST attempts to answer the questions: What properties of the Web have made it so successful? How can I apply these properties to my applications? While REST is simple, it takes a while to figure out how to follow this architectural style effectively when designing distributed interfaces. REST promises greater decoupling, change resistance, and scalability if its principles are followed. The thing is, though, for anybody not fresh out of school, we’ve heard these types of promises again and again over the years from various industry efforts: DCE, CORBA, WS-*, and insert-your-favorite-distributed-framework. While the success of the Web leads me to believe that REST principles are strong, there has to be something more fundamental that it promotes or we’re never going to break the cycle of complex, bloated infrastructure runtimes. That’s why I’ve adopted a set of core values for myself when writing RESTful interfaces.

Simplicity

Perhaps the most important reason why developers are initially attracted to REST is that, at its core, REST is very simple HTTP interactions. A good RESTful service can be understood simply by looking at the messages that are exchanged. If you were around in the early days of the Web, did you look at HTML source to learn how a specific website wrote its pages? The same applies to RESTful web services.

Zero Footprint

To write a RESTful web service, all you need is a platform that can service web requests. You can write RESTful services using Apache+CGI+(python/php), Servlets, JAX-RS, Restlet, Rails, Grails, whatever. From a client perspective, all you need is an HTTP client library. While frameworks can make you more productive, just using an HTTP client library is practically productive enough.
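
As an illustration, here’s a complete client in plain Java with nothing but the JDK’s HttpURLConnection; the endpoint URL is made up:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PlainHttpClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource URL; no stubs, no generated code, no framework.
        URL url = new URL("http://example.com/customers/333");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Accept", "application/xml");
        try (InputStream body = connection.getInputStream()) {
            System.out.println(new String(body.readAllBytes(), "UTF-8"));
        }
    }
}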

Lightweight, true, interoperability

Have you ever had trouble getting two different CORBA or SOAP vendors to interoperate?  What about getting two different versions of the same vendor’s CORBA or WS-* stack to interoperate?  These are common problems that happen every day with traditional stacks.  Because REST focuses on exchanging representations through ubiquitous protocols like HTTP, these interoperability problems, on the protocol level, pretty much don’t exist.  HTTP is everywhere and well supported.  HTTP content negotiation allows clients and servers to decide, at runtime, what formats they want to exchange.  REST+HTTP removes the vendor interoperability problem and allows developers to focus on application integration.

Frameworks for productivity, not for coupling

REST frameworks should help with developer productivity and not create lock-in between the client and server that requires the framework’s runtime to be installed on both sides of the pipe. I feel that kind of coupling leads to interoperability problems and complexity.

I’ve been yelled at before for stating my core values on why REST is important to me. I think I was misunderstood. There’s a distinction between what you want to accomplish and how you accomplish it. Right now REST is the how, and my core values are the what. These core values are also why I’ve been openly skeptical about things like WADL, Atom, RDF, and PATCH, and why I’m nervous about how transactions are going to turn out in our REST-* effort. After all these years, it’s so important to get things right.

Possible iteration on avoiding duplicates


Draft 5 of REST-* messaging talked about iterating on the reliable posting protocol.  Recently, I was arguing with Jan Algermissen on a completely unrelated subject.  As a result of this conversation, I ended up re-reading Roy’s post on hypermedia for the 4th time in 2 years. Re-reading Roy’s blog got me thinking a bit about improving the message posting protocol of REST-* Messaging so that it is driven more by in-band information, rather than out-of-band information.

Firstly, I want to remove the post-message-once and post-message link relationships. Instead, a destination resource would only publish the create-next link. When a client wants to post a message to a queue or topic, it will use this link to create a new message resource on the server. The type of this link would be “*/*”, meaning it accepts any media type.

The key change to the protocol is that the client must be aware that responses to creating messages through a create-next link may contain a new create-next link. The client is encouraged to use these new, on-the-fly create-next links to post additional messages to the topic or queue. An important point of this change is that the server is not required to send a create-next link with its responses. How, and if, the server protects itself from duplicate message posting is up to the server.

So how could the server protect itself from duplicate message posting? One implementation could be for the server to return a 307, “Temporary Redirect”, response code for the initial POST to the static top-level create-next link published by the destination resource. The 307 requires the client to re-POST the request to a URL contained in the response’s Location header, as defined by the HTTP 1.1 specification. The Location header would point to a one-off URL (like the previous protocol defined in Draft 5). If a network failure happens, then the client re-POSTs to this URL. If the message was previously successfully processed by the server, the server would respond with a 405, Method Not Allowed. If no network failure happens on the re-POST from the 307 redirection, then the server would just return a success code. In either case, the server would also return a new create-next link as a Link header within the response. The client would use this new create-next link to post new messages. Subsequent posts to these new links would not have to go through the redirect protocol because they would be newly generated one-off URLs.
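
Here’s what that flow might look like on the wire; the URLs below are hypothetical:

POST /queues/orders/create-next HTTP/1.1
Content-Type: application/xml

<order>...</order>

HTTP/1.1 307 Temporary Redirect
Location: /queues/orders/messages/101

POST /queues/orders/messages/101 HTTP/1.1
Content-Type: application/xml

<order>...</order>

HTTP/1.1 201 Created
Link: </queues/orders/messages/102>; rel="create-next"

A retry of the second POST after a network failure would get a 405, Method Not Allowed, if the first attempt had actually been processed, again with a fresh create-next Link header in the response.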

I have been reading that some weird or undesirable behavior may be experienced with some user agents when using 307 with POST/PUT. So, I think that if the REST-* Messaging specification leaves undefined how a server implementation handles the initial response of a duplicate-message protection protocol, we can let it evolve on its own. The key here is that the client should be encouraged to look within the response for new create-next links, even in error responses. For example, if instead of a 307 the initial POST returned a 412, Precondition Failed, and that error response contained a create-next link header, the client should use that link to re-post the request. NOTE! I think 307 is probably the best implementation, but maybe it’s best to give flexibility to implementors.

Keep the *-batch links

I still want to have separate links for submitting batches of messages. Specifically, rename post-batch to create-next-batch (and remove post-batch-once). I want the distinction so that the server knows that it is receiving a collection of messages, versus just forwarding a message to consumers that happens to be of a collection media type.

Links instead of PATCH


Here are some random thoughts I’ve had about REST while interacting with commenters on my previous blog. Please, please, they are just thoughts, not rules of thumb I’m taking on. Some things to think about and ponder…

Use Links instead of PATCH

The idea of using the new PATCH HTTP operation makes me squeamish. I’m not sure why, but intuitively I don’t like it. Maybe it’s because so many semantics can be hidden behind it? Maybe it’s because a user is going to have to read a lot of documentation to understand how to interact with it? I think I’d rather use links instead. Let me elaborate. Consider this customer XML document:

<customer>
   <first-name>Bill</first-name>
   <last-name>Burke</last-name>
   <address>
       <link rel="edit" href="/customers/333/address" type="application/xml"/>
       <street>...</street>
       ...
   </address>
   <billing-address>
       <link rel="edit" href="/customers/333/billing-address" type="application/xml"/>
       ...
   </billing-address>
...
</customer>

If you were using links instead of PATCH, doing a GET on a specific customer resource returns a document that pretty much describes how to interact with it. The “edit” link elements under address and billing-address let you know that you can partially edit the document without having to refer to any user documentation (or, yuck, a WADL document) to find out whether PATCH is supported or how to use PATCH on the resource.
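
For example, a client that wants to change just the billing address can follow its edit link and PUT a new representation of that one piece. The URL comes from the sample document above; the payload contents are made up:

PUT /customers/333/billing-address HTTP/1.1
Content-Type: application/xml

<billing-address>
   <street>100 Main Street</street>
   ...
</billing-address>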

RESTEasy 2.0-beta-2 released


I don’t usually blog about beta or RC releases, but people have had a few problems with Apache Client 4.0 integration with RESTEasy, specifically a bunch of connection cleanup bugs. I have fixed the bugs reported for this. Also, this release ran successfully against the JAX-RS 1.1 TCK; I had to make a bunch of encoding fixes there.

You can download from the usual places.  Go to our home page for more info.

