HornetQ REST Interface Beta 2 Released


A user requested Selector support. Follow the links and doco from:


To download, etc…

New HornetQ REST Interface


After being distracted by RESTEasy releases over the past few months, I finally have something usable (and, more importantly, documented) for the HornetQ REST Interface I’ve been working on.  The interface allows you to leverage the reliability and scalability features of HornetQ over a simple REST/HTTP interface. Messages are produced and consumed by sending and receiving simple HTTP messages containing the XML or JSON (really, any media type) document you want to exchange.

Other than being buzzword compliant, here are some of the reasons you might want to use the HornetQ REST Interface:

  • Usable by any programming language that has an HTTP client library.
  • Zero client footprint. We want HornetQ to be usable by any client/programming language that has an adequate HTTP client library. You shouldn’t have to download, install, and configure a special library to interact with HornetQ.
  • No envelope (i.e. SOAP) or feed (i.e. Atom) format requirements. You shouldn’t have to learn, use, or parse a specific XML document format in order to send and receive messages through HornetQ’s REST interface.
  • Lightweight interoperability. Since interactions are RESTful, the HTTP uniform interface provides all the interoperability you need to communicate between different languages, platforms, and even messaging implementations that choose to implement the same RESTful interface as HornetQ (i.e. the REST-* effort).
  • Leverage the reliability, scalability, and clustering features of HornetQ on the back end without sacrificing the simplicity of a REST interface.

HornetQ REST Features

  • Duplicate detection when producing messages
  • Pull and Push consumers
  • Acknowledgement and Auto-acknowledgement protocols
  • Create new queues and topics
  • Mix and match JMS and REST producers and consumers
  • Simple transformations

Visit the HornetQ REST Interface web page to find links for downloading and browsing docs, source code, and examples.

Possible iteration on avoiding duplicates


Draft 5 of REST-* messaging talked about iterating on the reliable posting protocol.  Recently, I was arguing with Jan Algermissen on a completely unrelated subject.  As a result of this conversation, I ended up re-reading Roy’s post on hypermedia for the 4th time in 2 years. Re-reading Roy’s blog got me thinking a bit about improving the message posting protocol of REST-* Messaging so that it is driven more by in-band information, rather than out-of-band information.

Firstly, I want to remove the post-message-once and post-message link relationships.  Instead, a destination resource would only publish the create-next link.  When a client wants to post a message to a queue or topic, it will use this link to create a new message resource on the server.  The type of this link would be “*/*”, meaning it accepts any media type.

The key change to the protocol would be for the client to be aware that responses to creating messages through a create-next link may contain a new create-next link.  The client is encouraged to use these new, on-the-fly create-next links to post additional messages to the topic or queue.  An important point of this change is that the server is not required to send a create-next link with its responses.  How, and if, the server protects itself from duplicate message posting is up to the server.

So how could the server protect itself from duplicate message posting?  One implementation could be for the server to return a 307, “Temporary Redirect” response code for the initial POST to the static top-level create-next link published by the destination resource.  A 307 requires the client to re-POST the request to the URL contained in the response’s Location header, as defined by the HTTP 1.1 specification.  The Location header would point to a one-off URL (like the previous protocol defined in Draft 5).  If a network failure happens, the client re-POSTs to this URL.  If the message was previously successfully processed by the server, the server would respond with a 405, Method Not Allowed.  If no network failure happens on the re-POST from the 307 redirection, then the server would just return a success code.  In either response, the server would also return a new create-next link as a Link header within the response.  The client would use this new create-next link to post new messages.  Subsequent posts to these new links would not have to go through the redirect protocol because they would be newly generated one-off URLs.

I have been reading that some weird or undesirable behavior may be experienced with some user agents when using 307 with POST/PUT.  So, I think that if the REST-* Messaging specification leaves it undefined how a server implementation handles the initial response of a duplicate-message protection protocol, we can let it evolve on its own.  The key here is that the client should be encouraged to look within the response for new create-next links, even in error responses.  For example, if instead of a 307 the initial POST returned a 412, Precondition Failed, and that error response contained a create-next Link header, the client should use that link to re-post the request.  NOTE!  I think 307 is probably the best implementation, but maybe it’s best to give flexibility to implementors.
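A client following this protocol might look like the sketch below.  The transport object, URLs, and simplified Link-header parsing are all assumptions for illustration, not part of any spec.  The key behaviors are: re-POST on a 307, treat a 405 as an already-processed duplicate, and harvest a fresh create-next link from every response, including error responses:

```python
import re

def parse_create_next(link_header):
    """Pull a create-next URL out of a Link header, if one is present."""
    m = re.search(r'<([^>]*)>\s*;\s*rel="?create-next"?', link_header or "")
    return m.group(1) if m else None

def post_message(http, create_next_url, body):
    """POST one message through a create-next link.

    `http.post(url, body)` -> (status, headers) is a stand-in transport.
    Returns (duplicate, next_url): duplicate is True when the server
    answered 405 (the one-off URL already consumed this message), and
    next_url is the fresh create-next link to use for the next message.
    """
    status, headers = http.post(create_next_url, body)
    if status == 307:
        # Temporary Redirect: re-POST to the one-off URL in Location.
        status, headers = http.post(headers["Location"], body)
    duplicate = (status == 405)
    # Look for a new create-next link even on error responses.
    next_url = parse_create_next(headers.get("Link"))
    return duplicate, next_url
```

Whether the first hop is a 307 or some other response is the server’s choice; the client logic above only depends on the Location header and the Link header being present.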

Keep the *-batch links

I still want to have separate links for submitting batches of messages.  Specifically, rename post-batch to create-next-batch (and remove post-batch-once).  I want the distinction so that the server knows it is receiving a collection of messages, versus simply forwarding along a message that happens to have a collection media type.

Webinar on REST, RESTEasy, JBoss – March 23rd


I’m doing a webinar tomorrow on REST, JAX-RS, RESTEasy, and REST-*.  I only have 40 minutes, so it will be a brief overview of all those subjects and how they fit into our EAP product.  I’ll be giving it twice:

9am – EST

2pm – EST

For more information, click here

REST-* Messaging Draft 5: new post and subscribe patterns


I’ve made some small changes to REST-* Messaging Draft 5.  First is to the reliable posting of messages to a message destination.  The second is to the push model’s default subscription creation method.

New post-message-once protocol

Previously, the post-message-once link used the POE pattern to avoid duplicate message posting.  I asked around and it seems that the POE pattern isn’t used a lot in practice.  I’m glad because it kinda breaks the uniform interface (unsafe GET) and isn’t really consistent with the other protocols I defined.  It is also very inefficient as you have to make two round trips to post each message.  Nathan Winder, on the reststar-messaging list suggested using a one-off link generated with each message post.  Here’s how it looks:

The URL provided by the post-message-once link is not used to actually create a message, but rather to obtain a new, one-off URL. The client executes an empty POST on the post-message-once link. The response provides a new “create-next” link which the client can then post its message to. This link is a “one-off” URL: if the client re-posts the message to the create-next URL after the message has already been successfully posted there, it will receive a 405 error response. Whether the client receives a successful response or a 405 response, a Link header should be returned containing a new “create-next” link that the client can post new messages to. Continuously providing a “create-next” link allows the client to avoid making two round-trip requests each and every time it wants to post a message to the destination resource. It is up to the server whether the create-next URL is a permanent URL for the created message. If it is not permanent, the server should return a Content-Location header pointing to the message.

post-message-once example

  1. Query the destination root resource for links. Request:
    HEAD /topics/mytopic HTTP/1.1
    Host: example.com


    Response:

    HTTP/1.1 200 OK
    Link: <...>; rel="post-message",
          <...>; rel="post-batch",
          <http://example.com/topics/mytopic/messages>; rel="post-message-once",
          <...>; rel="message-factory"
  2. Client performs a POST request to the post-message-once link to obtain a create-next link. Request:
    POST /topics/mytopic/messages
    Host: example.com


    Response:

    HTTP/1.1 200 OK
    Link: <http://example.com/topics/mytopic/messages/111>; rel="create-next"
  3. Client POSTs the message to the create-next link. Request:
    POST /topics/mytopic/messages/111
    Host: example.com
    Content-Type: application/json
    {"something" : "arbitrary"}


    Response:

    HTTP/1.1 200 OK
    Link: <http://example.com/topics/mytopic/messages/112>; rel="create-next"
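The three steps above can be sketched as a client loop.  The transport object and simplified Link parsing are assumptions for illustration; the point is that in steady state only one round trip is needed per message, because each response hands back the next create-next link:

```python
import json
import re

def create_next_link(link_header):
    """Pull the create-next URL out of a Link response header."""
    m = re.search(r'<([^>]*)>\s*;\s*rel="?create-next"?', link_header or "")
    return m.group(1) if m else None

def publish(http, post_once_url, messages):
    """Publish messages using the post-message-once protocol.

    One empty POST obtains the first one-off create-next URL (step 2);
    each message POST returns a fresh create-next link (step 3).
    `http.post(url, body)` -> (status, link_header) is a stand-in transport.
    """
    status, link = http.post(post_once_url, None)       # step 2: empty POST
    url = create_next_link(link)
    for msg in messages:
        status, link = http.post(url, json.dumps(msg))  # step 3: post message
        # A 405 here would mean this one-off URL already accepted the
        # message (a duplicate after a network failure), which is safe.
        url = create_next_link(link)                    # use the new link
```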

Change to push model subscription

I also made a minor change to the push model’s subscriber registration protocol.  In the previous version of the spec, the client would post form parameters to a subscribers URL on the server.  The form parameters would define a URL to forward messages to and whether or not to use the POE protocol to post those messages.  I changed this to simply require the client to post an Atom link.  Since links define protocol semantics, the server can look at the registered link relationship to know how to interact with the subscriber when forwarding messages.  So, if the client registers a post-message-once link when it creates its subscription, the server knows how to interact with that link.  This gives the client and server a lot of flexibility in describing how messages should be forwarded.  For example:

This example shows the creation of a subscription and the receiving of a message by the subscriber.

  1. Client consumer queries topic resource for subscribers link.

    HEAD /mytopic
    Host: example.com


    HTTP/1.1 200 OK
    Link: <http://example.com/mytopic/subscribers>; rel="subscribers"; type="application/atom+xml"
  2. Client does POST-Created-Location pattern to create subscriber

    POST /mytopic/subscribers
    Host: example.com
    Content-Type: application/atom+xml
    <atom:link rel="post-message-once" href="http://foo.com/messages"/>


    HTTP/1.1 201 Created
    Location: /mytopic/subscribers/333
  3. A message comes in, the message service does a POST to this subscriber based on the interaction pattern described for post-message-once

    POST /messages
    Host: foo.com


    HTTP/1.1 200 OK
    Link: <http://foo.com/messages/624>; rel=create-next


    POST /messages/624
    Host: foo.com
    Link: <http://example.com/mytopic/messages/111>; rel=self,
          <http://example.com/mytopic>; rel=generator
    Content-Type: whatever
    body whatever
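On the server side, picking the forwarding protocol from the registered link relationship could be sketched like this.  This is a simplified, regex-based parse of the atom:link body from step 2 above; the attribute order and error handling are assumptions:

```python
import re

def parse_subscriber_link(body):
    """Extract (rel, href) from the atom:link a subscriber POSTs.

    The rel value is what tells the server which interaction pattern
    (e.g. post-message-once) to use when forwarding messages to href.
    """
    m = re.search(r'<atom:link\s+rel="([^"]+)"\s+href="([^"]+)"\s*/>', body)
    if m is None:
        raise ValueError("subscription body must contain an atom:link")
    return m.group(1), m.group(2)
```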

Speaking at new Boston JBoss User Group 2/9


Jesper Pederson has created a Boston JBoss User Group.  Our first meeting is next Tuesday, February 9th.  I’m the first speaker and will be giving an intro to REST, JAX-RS, and, if I have time, some of the stuff that we’re doing at REST-* (rest-star.org).  Please click here for more details.

Overview of REST-* Messaging Draft 4


Just finished draft 4 of REST-* Messaging.  Please check out our discussion group if you want to talk more about it.  Here’s a list of resources and their corresponding link relationships for a high-level overview.  See the spec for more details.  It relies heavily on Link headers.  The current draft shows a lot of example HTTP requests/responses to give you a good idea of what is going on.


Destination

A queue or a topic resource.


  • post-message – link to POST/create a message to a queue or topic.
  • post-message-once – link to POST/create a message to a queue or topic using the POST-Once-Exactly (POE) pattern
  • post-batch – POST/create a batch of messages using Atom or Multipart
  • post-batch-once – POST/create a batch using the POE pattern.


Message

Every message posted creates a message resource that can be viewed for administration, auditing, monitoring, or usage.

Link Relationships:

  • reply-to.  Destination that you can reply to
  • reply-to-once.  Destination that you can reply to once using the POST-Once-Exactly pattern
  • reply-to-batch. Destination that you can reply with a batch of messages
  • reply-to-batch-once.  Batch using POE.
  • next.  If pulling from a topic, messages will have this link.  This is the next message after this one.
  • acknowledgement.  If pulling from a queue, this link allows you to change the state of the message to acknowledged or not.
  • generator. Destination that the message came from


Topic

Has the same links as Destination, with these added:

Link Relationships:

  • next.  GET the next incoming message.  Clients should use the Accept-Wait header to set a blocking timeout (see later).
  • last.  Last posted message to the topic
  • first.  First message posted to topic, or, the first message that is still around.
  • subscribers. URL that allows you to view (GET) or create/register (POST) a list of URLs to forward messages to.


Queue

Same as Destination, but with these additional link relationships:

Link Relationships:

  • poller.  Alias to the next available message in the queue.  Returns a message representation along with a Content-Location header pointing to the real one.  A POST is required to access this because the state of the queue is changed.
  • subscribers.  Same as topic push model.

Etag embedded within a Link


I wanted to add acknowledgement to the queue consumer pull model in REST-* Messaging.  The way it would work is that consumers do a POST on the queue’s URL.  They receive the message as well as a Link header pointing to an acknowledgement resource.  When the client consumer successfully processes the message, it posts a form parameter, acknowledge=true to the acknowledgement link.

There is a problem with this though.  The design is connectionless to honor the stateless REST principle.  So there is no specific session resource that the client consumer is interacting with.  The consumer may never acknowledge the message, so I need the server to re-enqueue the message and deliver it to a new consumer.  The problem is, what if the old consumer tries to acknowledge after the message is re-enqueued or even after it is redelivered to a different consumer?

I first thought of letting the first consumer to acknowledge win and doing something like POST-Once-Exactly (POE).  The problem with this is, what if there’s a network failure and the consumer doesn’t know if the acknowledgement happened or not?  It would re-POST the acknowledgement and get back a Method Not Allowed error code.  With this code, the consumer doesn’t know if somebody else acknowledged the message or if its own older request just went through.  So, I went with a conditional POST.  The acknowledgement link, when performing a GET on it, would return an ETag header that the consumer must transmit with the acknowledgement POST.  If the message was re-enqueued, then the underlying ETag would change, and the conditional POST would fail for the older consumer.

Still, this solution is suboptimal because an additional GET request needs to be executed.  It is also subject to a race condition: what if the message is re-enqueued before the consumer does a GET on the acknowledgement resource?  So, what I decided to do was embed the etag value within the acknowledgement link.  For example:

1. Consume a message

POST /myqueue/consumer


HTTP/1.1 200 OK
Link: </myqueue/messages/111/acknowledgement>; rel=acknowledgement; etag=1
Content-Type: ...

...  body ...

2. Acknowledge the message

POST /myqueue/messages/111/acknowledgement
If-Match: "1"
Content-Type: application/x-www-form-urlencoded

acknowledge=true

Success Response:

HTTP/1.1 204 No Content

Response when it was updated by somebody else.

HTTP/1.1 412 Precondition Failed

POE Redelivery Response.  It was already successfully updated by the consumer.

HTTP/1.1 405 Method Not Allowed
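Putting the pieces together, a consumer could parse the etag out of the Link header and map the three response codes above as in this sketch.  The transport object and the simplified Link parsing are assumptions for illustration:

```python
import re

def parse_ack_link(link_header):
    """Extract (url, etag) from an acknowledgement Link header such as
    </myqueue/messages/111/acknowledgement>; rel=acknowledgement; etag=1
    """
    m = re.search(r'<([^>]*)>\s*;\s*rel="?acknowledgement"?\s*;\s*etag=(\S+)',
                  link_header or "")
    return (m.group(1), m.group(2)) if m else (None, None)

def acknowledge(http, link_header):
    """Conditional POST of acknowledge=true to the acknowledgement link.

    204 -> acknowledged; 412 -> the message was re-enqueued (the etag
    changed); 405 -> our own earlier acknowledgement already succeeded.
    `http.post(url, body, headers)` -> status is a stand-in transport.
    """
    url, etag = parse_ack_link(link_header)
    status = http.post(url, "acknowledge=true", {"If-Match": etag})
    return {204: "acknowledged",
            412: "requeued",
            405: "already-acknowledged"}.get(status, "error")
```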

No Wait in HTTP


One thing the HTTP specification does not have is a “Server Timeout” response code.  The 408 and 504 response codes are the only things that come close.  The idea of a “Server Timeout” code is that the server received the request, but timed out internally trying to process it.  Another thing I think is missing from HTTP is a way for the client to tell the server how long it is willing to wait for a request to be processed.

I’ve run into both of these scenarios with the REST-* Messaging specification when I have pulling client consumers.  For the “Server Timeout” I decided upon 202, Accepted.  It seems to fit as I can tell the client to try again at a specific URL.  As for the client requesting a wait time?  I invented a new request header:  “X-RS-Wait”.  Its value is the time in seconds that the client wants to wait for a request to be processed.  Maybe there is a standard or drafted protocol I missed in my Google search?
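As a sketch of how a pulling consumer would use these two conventions together (the invented X-RS-Wait request header and 202 as the server-timeout signal; the transport object is a stand-in):

```python
def pull_next(http, poller_url, wait_seconds=30, max_tries=3):
    """Pull the next message, asking the server to block up to wait_seconds.

    A 202 Accepted means the server timed out internally, so the client
    simply tries again.  Returns the message body, or None if every try
    timed out.  `http.post(url, headers=...)` -> (status, body).
    """
    for _ in range(max_tries):
        status, body = http.post(poller_url,
                                 headers={"X-RS-Wait": str(wait_seconds)})
        if status == 200:
            return body
        if status != 202:
            raise RuntimeError("unexpected status %d" % status)
    return None
```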

REST-* Messaging Draft 3


After prototyping, I’m back to writing another draft. This draft is a little more formal. I created an OSS project at:


A draft of the PDF is at:


(You’ll have to click through an additional link that is *very* long.) This draft only talks about the Message Publishing Protocol. I still have to write up the topic and queue push and pull models.  To discuss the draft, please post at the REST-* Messaging Group.  I’m also looking for people to help prototype the specification.  Post to the RESTEasy developers list if you’re interested.

