Software plumbing using middleware wrenches
October 25, 2010
hornetq, REST, REST-star 1 Comment
A user requested Selector support. Follow the links and doco from:
To download, etc…
August 9, 2010
hornetq, JAX-RS, REST, REST-star, RESTEasy 6 Comments
After being distracted a lot with RESTEasy releases over the past few months, I finally have something usable (and more importantly, documented) for the HornetQ REST Interface I’ve been working on. The interface allows you to leverage the reliability and scalability features of HornetQ over a simple REST/HTTP interface. Messages are produced and consumed by sending and receiving simple HTTP messages containing the XML or JSON (really any media type) document you want to exchange.
Other than being buzzword compliant, here are some of the reasons you might want to use the HornetQ REST Interface:
Visit the HornetQ REST Interface web page to find links for downloading and browsing docs, source code, and examples.
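As a quick sketch of what this looks like on the wire, here is a hypothetical exchange for producing a message to a queue. The paths and link relation names below are placeholders for illustration only; in the real interface the client discovers the actual URLs from the response to a HEAD request on the queue or topic resource, so see the documentation linked above for the authoritative details.

Request:
HEAD /queues/orders HTTP/1.1
Host: example.com

Response:
HTTP/1.1 200 OK
Link: <http://example.com/queues/orders/create>; rel="create"

Request:
POST /queues/orders/create HTTP/1.1
Host: example.com
Content-Type: application/xml

<order><id>42</id></order>

Response:
HTTP/1.1 201 Created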
April 2, 2010
This post iterates on the reliable posting protocol defined in Draft 5 of REST-* Messaging. Recently, I was arguing with Jan Algermissen on a completely unrelated subject. As a result of that conversation, I ended up re-reading Roy’s post on hypermedia for the 4th time in 2 years. Re-reading Roy’s blog got me thinking a bit about improving the message posting protocol of REST-* Messaging so that it is driven more by in-band information rather than out-of-band information.
Firstly, I want to remove the post-message-once and post-message link relationships. Instead, a destination resource would only publish the create-next link. When a client wants to post a message to a queue or topic, it will use this link to create a new message resource on the server. The type of this link would be “*/*” meaning it accepts any media type.
The key change to the protocol would be for the client to be aware that responses to creating messages through a create-next link may contain a new create-next link. The client is encouraged to use these new, on-the-fly create-next links to post additional messages to the topic or queue. An important point of this change is that the server is not required to send a create-next link with its responses. How, and if, the server protects itself from duplicate message posting is up to the server.
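Here is a sketch of the basic flow (the URLs are made up for illustration):

Request:
HEAD /topics/mytopic HTTP/1.1
Host: example.com

Response:
HTTP/1.1 200 OK
Link: <http://example.com/topics/mytopic/create-next>; rel="create-next"; type="*/*"

Request:
POST /topics/mytopic/create-next HTTP/1.1
Host: example.com
Content-Type: application/xml

<order>...</order>

Response:
HTTP/1.1 201 Created
Link: <http://example.com/topics/mytopic/create-next/111>; rel="create-next"

The client would post its next message to /topics/mytopic/create-next/111 rather than going back to the static link advertised by the topic.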
So how could the server protect itself from duplicate message posting? One implementation could be that the server returns a 307 response code, “Temporary Redirect”, for the initial POST to the static top-level create-next link published by the destination resource. This 307 requires the client to re-POST the request to a URL contained in a response Location header, as defined by the HTTP 1.1 specification. The Location header would point to a one-off URL (like the previous protocol defined in Draft 5). If a network failure happens, the client re-POSTs to this URL. If the message was previously successfully processed by the server, the server would respond with a 405, Method Not Allowed. If no network failure happens on the re-POST from the 307 redirection, the server would just return a success code. In either response, the server would also return a new create-next link as a Link header within the response. The client would use this new create-next link to post new messages. Subsequent posts to these new links would not have to go through the redirect protocol because they would be newly generated one-off URLs.
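On the wire, that implementation might look something like this (URLs are again illustrative):

Request:
POST /topics/mytopic/create-next HTTP/1.1
Host: example.com
Content-Type: application/xml

<order>...</order>

Response:
HTTP/1.1 307 Temporary Redirect
Location: http://example.com/topics/mytopic/create-next/111

Request (re-POST to the one-off URL):
POST /topics/mytopic/create-next/111 HTTP/1.1
Host: example.com
Content-Type: application/xml

<order>...</order>

Response:
HTTP/1.1 201 Created
Link: <http://example.com/topics/mytopic/create-next/112>; rel="create-next"

If a network failure forces the client to re-POST to /topics/mytopic/create-next/111 after the message was already accepted, the server would answer with a 405, Method Not Allowed, ideally still carrying a fresh create-next Link header.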
I have been reading a bit that some weird or undesirable behavior may be experienced with some user agents when using 307 and POST/PUT. So, I think that if the REST-* Messaging specification leaves it undefined how a server implementation handles the initial response of a duplicate-message protection protocol, we can let it evolve on its own. The key here is that the client should be encouraged to look within the response for new create-next links, even in error responses. For example, if instead of a 307 the initial POST returned a 412, Precondition Failed, and that error response contained a create-next link header, the client should use that link to re-post the request. NOTE! I think 307 is probably the best implementation, but maybe it’s best to give flexibility to implementors.
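That alternative would look roughly like this (again, the URLs are hypothetical and the choice of 412 is just one possibility):

Request:
POST /topics/mytopic/create-next HTTP/1.1
Host: example.com
Content-Type: application/xml

<order>...</order>

Response:
HTTP/1.1 412 Precondition Failed
Link: <http://example.com/topics/mytopic/create-next/113>; rel="create-next"

The client would then re-POST the same message body to /topics/mytopic/create-next/113.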
Keep the *-batch links
I still want to have separate links for submitting batches of messages. Specifically, rename post-batch to create-next-batch (and remove post-batch-once). I want the distinction so that the server knows it is receiving a collection of messages, as opposed to forwarding a single message to message consumers that merely happens to be of a collection media type.
March 23, 2010
JAX-RS, jboss, REST, REST-star, RESTEasy Leave a comment
I’m doing a webinar tomorrow on REST, JAX-RS, RESTEasy, and REST-*. I only have 40 minutes, so it will be a brief overview of all those subjects and how they fit into our EAP product. I’ll be giving it twice:
9am – EST
2pm – EST
For more information, click here
March 4, 2010
I’ve made some small changes to REST-* Messaging Draft 5. The first is to the reliable posting of messages to a message destination. The second is to the push model’s default subscription creation method.
New post-message-once protocol
Previously, the post-message-once link used the POE pattern to avoid duplicate message posting. I asked around and it seems that the POE pattern isn’t used a lot in practice. I’m glad, because it kinda breaks the uniform interface (unsafe GET) and isn’t really consistent with the other protocols I defined. It is also very inefficient, as you have to make two round trips to post each message. Nathan Winder, on the reststar-messaging list, suggested using a one-off link generated with each message post. Here’s how it looks:
The URL provided by the post-message-once link is not used to actually create a message, but rather to obtain a new, one-off URL. An empty POST should be executed on the post-message-once link. The response provides a new “create-next” link which the client can then post their message to. The link is a “one-off” URL. What that means is that if the client re-posts the message to the create-next URL, it will receive a 405 error response if the message has already successfully been posted to that URL. If the client receives a successful response or a 405 response, there should be a Link header returned containing a new “create-next” link that the client can post new messages to. Continuously providing a “create-next” link allows the client to avoid making two round-trip requests each and every time it wants to post a message to the destination resource. It is up to the server whether the create-next URL is a permanent URL for the created message. If it is not permanent, the server should return a Content-Location header pointing to the message.
post-message-once example
HEAD /topics/mytopic HTTP/1.1
Host: example.com
Response:
HTTP/1.1 200 OK
Link: <...>; rel="post-message",
      <...>; rel="post-batch",
      <http://example.com/topics/mytopic/messages>; rel="post-message-once",
      <...>; rel="message-factory"
POST /topics/mytopic/messages
Host: example.com
Response:
HTTP/1.1 200 OK
Link: <http://example.com/topics/mytopic/messages/111>; rel="create-next"
POST /topics/mytopic/messages/111
Host: example.com
Content-Type: application/json

{"something" : "arbitrary"}
Response:
HTTP/1.1 200 OK
Link: <http://example.com/topics/mytopic/messages/112>; rel="create-next"
Change to push model subscription
I also added a minor change to the push model’s subscriber registration protocol. In the previous version of the spec, the client would post form parameters to a subscribers URL on the server. The form parameters would define a URL to forward messages to and whether or not to use the POE protocol to post those messages. I changed this to simply require the client to post an Atom link. Since links define protocol semantics, the server can look at the registered link relationship to know how to interact with the subscriber when forwarding messages. So, if the client registers a post-message-once link when it creates its subscription, the server knows how to interact with that link. This gives the client and server a lot of flexibility, in a simple way, for describing how messages should be forwarded. For example:
This example shows the creation of a subscription and the receiving of a message by the subscriber.
HEAD /mytopic
Host: example.com
Response:
HTTP/1.1 200 OK
Link: <http://example.com/mytopic/subscribers>; rel=subscribers; type=application/atom+xml ...
POST /mytopic/subscribers
Host: example.com
Content-Type: application/atom+xml

<atom:link rel="post-message-once" href="http://foo.com/messages"/>
Response:
HTTP/1.1 201 Created
Location: /mytopic/subscribers/333
POST /messages
Host: foo.com
Response:
HTTP/1.1 200 OK
Link: <http://foo.com/messages/624>; rel=create-next
Request:
POST /messages/624
Host: foo.com
Link: <http://example.com/mytopic/messages/111>; rel=self,
      <http://example.com/mytopic>; rel=generator
Content-Type: whatever

body whatever
February 2, 2010
JAX-RS, jboss, REST, REST-star Leave a comment
Jesper Pedersen has created a Boston JBoss User Group. Our first meeting is next Tuesday, February 9th. I’m the first speaker and will be giving an intro to REST, JAX-RS, and, if I have time, some of the stuff that we’re doing at REST-* (rest-star.org). Please click here for more details.
November 19, 2009
Just finished draft 4 of REST-* Messaging. Please check out our discussion group if you want to talk more about it. Here’s a list of resources and their corresponding relationships for a high-level overview. See the spec for more details. It relies heavily on Link headers. The current draft shows a lot of example HTTP requests/responses to give you a good idea of what is going on.
Destination
A queue or a topic resource.
Link Relationships:
Message
Every message posted creates a message resource that can be viewed for administration, auditing, monitoring, or usage.
Link Relationships:
Topic
Has the same links as Destination with these added:
Link Relationships:
Queue
Same as Destination, but with these additional link relationships:
Link Relationships:
November 12, 2009
I wanted to add acknowledgement to the queue consumer pull model in REST-* Messaging. The way it would work is that consumers do a POST on the queue’s URL. They receive the message as well as a Link header pointing to an acknowledgement resource. When the client consumer successfully processes the message, it posts a form parameter, acknowledge=true, to the acknowledgement link.
There is a problem with this though. The design is connectionless to honor the stateless REST principle. So there is no specific session resource that the client consumer is interacting with. The consumer may never acknowledge the message, so I need the server to re-enqueue the message and deliver it to a new consumer. The problem is, what if the old consumer tries to acknowledge after the message is re-enqueued or even after it is redelivered to a different consumer?
I first thought of letting the first consumer to acknowledge win and do something like POST-Once-Exactly (POE). The problem with this is, what if there’s a network failure and the consumer doesn’t know whether the acknowledgement happened or not? It would re-send the acknowledgement and get back a Method Not Allowed error response code. With this code, the consumer doesn’t know if somebody else acknowledged the message or if its older request just went through. So, I went with a conditional POST. The acknowledgement link, when performing a GET on it, would return an ETag header that the consumer must transmit with the acknowledgement POST. If the message was re-enqueued, then the underlying ETag would change, and the conditional POST would fail for the older consumer.
Still, this solution is suboptimal because an additional GET request needs to be executed. It is also subject to a race condition: what if the message is re-enqueued before the consumer does a GET on the acknowledgement resource? So, what I decided to do was embed the ETag value within the acknowledgement link. For example:
1. Consume a message
Request:
POST /myqueue/consumer
Response:
HTTP/1.1 200 OK
Link: </myqueue/messages/111/acknowledgement>; rel=acknowledgement; etag=1
Content-Type: ...

... body ...
2. Acknowledge the message
Request:
POST /myqueue/messages/111/acknowledgement
If-Match: 1
Content-Type: application/x-www-form-urlencoded

acknowledge=true
Success Response:
HTTP/1.1 204 No Content
Response when it was updated by somebody else.
HTTP/1.1 412 Precondition Failed
POE-style re-POST response, when the acknowledgement was already successfully processed by this consumer:
HTTP/1.1 405 Method Not Allowed
November 10, 2009
One thing the HTTP specification does not have is a “Server Timeout” response code. The 408 and 504 response codes are the only things that come close. The idea of a “Server Timeout” code is that the server received the request but timed out internally trying to process it. Another thing I think is missing from HTTP is a way for the client to tell the server how long it is willing to wait for a request to be processed.
I’ve run into both of these scenarios in the REST-* Messaging specification when I have pull-model client consumers. For the “Server Timeout” I decided upon 202, Accepted. It seems to fit, as I can tell the client to try again at a specific URL. As for the client requesting a wait time? I invented a new request header: “X-RS-Wait”. Its value is the time in seconds that the client is willing to wait for a request to be processed. Maybe there is a standard or drafted protocol I missed in my Google search?
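To illustrate, here is a hypothetical pull-consumer exchange. The URL is made up, X-RS-Wait is the header proposed above (not a standard), and returning the retry URL in a Location header on the 202 is just one way it could be conveyed:

Request:
POST /myqueue/consumer
Host: example.com
X-RS-Wait: 30

Response (no message became available within 30 seconds):
HTTP/1.1 202 Accepted
Location: http://example.com/myqueue/consumer/retry-789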
November 6, 2009
REST, REST-star Leave a comment
After prototyping, I’m back to writing another draft. This is a slightly more formal draft. I created an OSS project at:
http://sf.net/projects/rest-star
A draft of the PDF is at:
http://reststar-messaging.googlegroups.com/web/REST-Star-Messaging-draft-3.pdf
(You’ll have to click through an additional link that is *very* long). This draft only talks about the Message Publishing Protocol. I still have to write up the topic and queue push and pull model. To discuss the draft please post at the REST-* Messaging Group. I’m also looking for people to help prototype the specification. Post to the RESTEasy developers list if you’re interested.