A RESTEasy user (and JBoss customer) asked about 6 weeks ago if we were planning on supporting Resteasy with Asynchronous HTTP. For those of you not familiar with Asynchronous HTTP, take a look at Servlet 3.0, Tomcat Comet APIs, or Jetty’s continuations. I decided to add some limited support after doing some research.
The primary (and IMO, the only) usecase for Asynchronous HTTP is in the case where the client is polling the server for a delayed response. The usual example is an AJAX chat client where you want to push/pull from both the client and the server. These scenarios have the client blocking a long time on the server’s socket waiting for a new message. What happens in synchronous HTTP is that you end up having a thread consumed per client connection. This eats up memory and valuable thread resources. Not such a big deal in 90% of applications, but when you start getting a lot of concurrent clients that are blocking like this, there are a lot of wasted resources.
As for Resteasy and asynchronous HTTP, I don’t want to create a COMET server. Maybe I’m totally wrong, but it seems that the idea of COMET is to use HTTP solely for the purpose of initiating a dedicated socket connection to your server. After the initial connection is made, the client and server just tunnel their own message protocol across the socket for any number of requests.
Now, this isn’t necessarily a bad idea. It’s actually a good one for performance reasons. As JBossian David Lloyd explained to me in a private email, what this API allows you to do is create a direct connection to your service so that you don’t have to redispatch with every message that is exchanged between the client and server. Sure, HTTP Keep-Alive allows you to avoid re-establishing the socket connection, but with a dedicated tunnel you also avoid all the other path-routing logic needed to get each message to the Java object that services the request.
Still, this is something I don’t want or need to provide through RESTEasy. It’s really using HTTP to tunnel a new protocol, and that is very unRESTful to me. Basically it requires that the client knows how to speak the COMET protocol, which, IMO, makes ubiquity very hard. Besides, Tomcat, JBoss Web, and Jetty will probably do a fine job here. There are also already a number of COMET servers available on the web. No, I will focus on giving asynchronous HTTP features to a pure HTTP and JAX-RS request.
What I will initially provide through RESTEasy is a very simple callback API.
@Path("/myresource") @GET public void getMyResource(@Suspend AsynchronousResponse response) { ... hand of response to another thread ... } public interface AsynchronousResponse { void setResponse(javax.ws.rs.core.Response response); }
The @Suspend annotation tells Resteasy that the HTTP request/response should be detached from the currently executing thread and that the current thread should not try to automatically process the response. The AsynchronousResponse is the callback object. It is injected into the method by Resteasy. Application code hands off the AsynchronousResponse to a different thread for processing. The act of calling setResponse() will cause a response to be sent back to the client and will also terminate the HTTP request.
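To make the hand-off concrete, here is a minimal sketch of what a resource class might look like. It assumes the prototype’s @Suspend annotation and AsynchronousResponse interface from above; the single-thread executor and the waitForNextMessage() helper are made up for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// @Suspend and AsynchronousResponse are the prototype types described above.
@Path("/myresource")
public class MyResource {

   // a single worker thread can service any number of suspended requests
   private final ExecutorService worker = Executors.newSingleThreadExecutor();

   @GET
   public void getMyResource(@Suspend final AsynchronousResponse response) {
      // hand the callback object off and return immediately; the container
      // thread goes back to its pool while the client keeps waiting
      worker.submit(new Runnable() {
         public void run() {
            String message = waitForNextMessage(); // hypothetical wait for an event
            // sends the response back to the client and ends the HTTP request
            response.setResponse(Response.ok(message).build());
         }
      });
   }

   private String waitForNextMessage() {
      // placeholder for whatever event source the application polls
      return "hello";
   }
}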
Servlet 3.0 has asynchronous APIs. When you suspend a request, the request may be redelivered when a timeout occurs. I don’t want Resteasy to be a replacement for these APIs. What I want is something that complements them and makes a specific usecase easier to write. The use case I want to solve is detaching the processing of an HTTP response from the current thread of execution so that another thread can do the work when ready. That’s it. So, there will be no redelivery of a request.
Initially I plan to work with the asynchronous APIs within the Tomcat 6.0 NIO transport, as this is what is distributed with JBoss 4.2.3. Next I plan to work with the JBoss Web asynchronous APIs, then Jetty 6, and finally with a Servlet 3.0 implementation.
If you have any ideas on this subject, please let me know. I can work them into the prototype I’m developing.
Oct 10, 2008 @ 14:45:40
I wish they would stop calling it “Asynchronous HTTP” 🙂
Oct 11, 2008 @ 16:01:07
It’s not asynchronous but delayed or event-driven. I agree with you that this feature has a very limited use case (like waiting for a new message in an inbox and getting it). But who triggers the completion of the request? I guess, in most cases, it’s just another blocked thread unless everything is purely non-blocking, which is unlikely to happen in the real world. So I’d rather introduce an annotation which gives a hint about how a thread should be acquired for the request, instead of introducing a new type:
@Path("/myresource")
@GET
@Execution("detached") // @Execution("attached") is assumed implicitly
public Object getMyResource() {
   // ... the thread that called this method is detached from its thread pool automatically ...
   // ... once the execution of this method completes, the thread is returned to the thread pool
   //     or destroyed (if the pool has already reached its maximum size) ...
}
Oct 11, 2008 @ 16:03:01
about how a thread should be acquired -> about how a thread should be managed
Oct 11, 2008 @ 20:36:07
Trustin, your approach wouldn’t do/solve anything. The point of the polling usecase is to limit the number of threads. In your case, you’re just removing a thread from the pool, no? So, the idea would be that one thread makes multiple responses, hence the callback object.
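A minimal sketch of what “one thread makes multiple responses” could look like, again assuming the prototype’s @Suspend/AsynchronousResponse; the chat resource and its message format are invented for illustration.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Hypothetical chat resource: many clients poll for the next message,
// and the single posting thread completes all of their suspended responses.
@Path("/chat")
public class ChatResource {

   // suspended callbacks from every waiting client; no thread is blocked per client
   private final Queue<AsynchronousResponse> waiting =
         new ConcurrentLinkedQueue<AsynchronousResponse>();

   @GET
   public void poll(@Suspend AsynchronousResponse response) {
      // park the callback and return; the container thread is freed
      waiting.add(response);
   }

   @POST
   public void post(String message) {
      // the poster's thread pushes the new message to every waiting client
      AsynchronousResponse r;
      while ((r = waiting.poll()) != null) {
         r.setResponse(Response.ok(message).build());
      }
   }
}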
Oct 12, 2008 @ 01:22:52
You are right. It makes sense when one thread makes more than one response. My point was that such a case will be pretty rare in real-world apps. A user will end up with a separate thread pool to wait for an event in many cases. For example, most data access APIs still live in a blocking paradigm.
Anyway, your idea will solve the problem when a user has some cool non-blocking data access layer or uses only in-memory data structures, so I don’t think it’s a bad idea. I was just wondering what the real use case would be.
Oct 13, 2008 @ 05:00:40
Trustin, I agree, and I don’t think the usecase falls under the 80/20 rule, but it’s not that uncommon. Think of any AJAX application that needs updates pushed from the server. Any monitoring application comes to mind.
Oct 16, 2008 @ 22:43:37
Hi Bill,
I think there are more use cases for “async http” than just the comet technique for ajax. For example, consider the case where a servlet wants to make a number of webservice calls to mash up some data. If each webservice call is executed synchronously, then on a moderately busy site this can rapidly eat up servlet container threads. Allowing the calls to be asynchronous and to proceed in parallel (where possible, of course) can free up precious servlet container threads and thereby make the site more available. We implemented a webapp to show this. The servlet makes calls to eBay using Jetty’s async HTTP client and suspends the servlet until the results of all the REST calls are available. The numbers clearly show the benefit of this type of approach.
best regards
Jan
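A rough sketch of the pattern Jan describes, using the prototype’s @Suspend/AsynchronousResponse from the post and plain java.util.concurrent futures in place of Jetty’s async client; the resource name and the fetchA()/fetchB() helpers are made up.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Hypothetical mashup resource: the servlet container thread is released
// immediately, the remote calls run in parallel on a small worker pool,
// and the combined result completes the suspended response.
@Path("/mashup")
public class MashupResource {

   private final ExecutorService pool = Executors.newFixedThreadPool(4);

   @GET
   public void getMashup(@Suspend final AsynchronousResponse response) {
      pool.submit(new Runnable() {
         public void run() {
            try {
               // fetchA()/fetchB() stand in for real remote webservice calls
               Future<String> a = pool.submit(new Callable<String>() {
                  public String call() { return fetchA(); }
               });
               Future<String> b = pool.submit(new Callable<String>() {
                  public String call() { return fetchB(); }
               });
               String combined = a.get() + " " + b.get();
               response.setResponse(Response.ok(combined).build());
            } catch (Exception e) {
               response.setResponse(Response.serverError().build());
            }
         }
      });
   }

   private String fetchA() { return "resultA"; } // placeholder remote call
   private String fetchB() { return "resultB"; } // placeholder remote call
}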
Oct 16, 2008 @ 23:00:12
Jan,
Thanks. Interesting use case.
RESTEasy Beta 9 Released « Bill the Plumber
Dec 01, 2008 @ 22:25:06
Mar 03, 2010 @ 09:43:21
I find the application of Comet or “delayed response” HTTP services very exciting, and I do not agree that the use case is limited.
The COMET (Async HTTP/Delayed Response/Ajax Push…) technique can be applied in any situation where “server push” is desired over the HTTP protocol (which has obvious routing, security/TLS, and GZIP compression advantages), whether this is server/server communication, Server/RichClient or Server/Thin Client (Ajax).
The COMET approach provides a combination of low latency and low bandwidth, and as an added bonus you even get heartbeat/keepalive for free: the typical 30-second timeout “null” message returned to the client ensures that the client libraries and other network components do not disconnect the HTTP/socket connection.
So you could argue that the COMET approach is ideally suited to any application where a client wishes to subscribe to data from a channel/web service. I am currently involved in a large infrastructure project, where Rich Clients connect to web services to subscribe to data channels. The transport format is ATOM (over HTTP) and to get low latency, COMET is applied. We are currently looking into moving to RESTEasy with ATOM/Asynchronous HTTP since this implementation seems very elegant.
Background COMET article: http://en.wikipedia.org/wiki/Comet_(programming)