Azure Functions + Quarkus HTTP Native/Graal


You can take an existing Quarkus HTTP application and deploy it as a native executable (Graal build) to Azure Functions using the Custom Handler runtime provided with Azure Functions.

Prerequisites: you'll need the Azure CLI (az), the Azure Functions Core Tools (func), and a container runtime such as Docker or Podman for the container-based native build shown below.

First, configure a root path in application.properties:

quarkus.http.root-path=/api/{functionName}

Replace {functionName} with whatever you want to call the function you’re creating for the Quarkus HTTP application.
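
For example, since the function created later in this post is named my-func, the entry would be:

quarkus.http.root-path=/api/my-func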

Next, rebuild your application to create a native executable targeting the 64-bit x86 Linux runtime.

$ mvn package -Pnative -DskipTests=true \
         -Dquarkus.native.container-build=true \
         -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel-builder-image:jdk-17


After that, create a directory in the root of your project and generate the function deployment descriptors. You can specify anything you want for the name of the function:

$ mkdir my-app
$ cd my-app
$ func new --language Custom --template HttpTrigger  \
                   --name my-func --authlevel anonymous

The func command generates a custom handler host.json file and an HTTP trigger function.json for the function named my-func.

Next, copy your application's native executable into this directory, naming it app so that it matches the defaultExecutablePath configured in host.json below:

$ cp ../target/code-with-quarkus-1.0.0-SNAPSHOT-runner app

Next you'll need to edit my-func/function.json. Add a route that matches all paths.

{
  "bindings": [
    {
      "authLevel": "Anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ],
      "route": "{*path}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Without this wildcard route, requests will not reach the REST endpoints defined in your application.

Next you need to edit your host.json file to specify some configuration. You'll need to enable HTTP forwarding (enableForwardingHttpRequest), specify the executable name of your application (defaultExecutablePath), and set the HTTP port for Quarkus to bind to by passing a system property to the application as an argument (arguments). Here's what it should look like in the end:

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  },
  "customHandler": {
    "enableForwardingHttpRequest": true,
    "description": {
      "defaultExecutablePath": "app",
      "workingDirectory": "",
      "arguments": [
        "-Dquarkus.http.port=${FUNCTIONS_CUSTOMHANDLER_PORT:8080}"
      ]
    }
  }
}

Test Locally

To test locally, use the func start command. Make sure you are in the my-app directory you created earlier!

$ func start


# Azure Functions Core Tools
# Core Tools Version:       4.0.5198 Commit hash: N/A  (64-bit)
# Function Runtime Version: 4.21.1.20667
#
#
# Functions:
#
#	my-func: [GET,POST] http://localhost:7071/api/my-func/{*path}
#
# For detailed output, run func with --verbose flag.
# [2023-08-10T19:08:59.190Z] __  ____  __  _____   ___  __ ____  ______ 
# [2023-08-10T19:08:59.191Z]  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
# [2023-08-10T19:08:59.191Z]  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
# [2023-08-10T19:08:59.191Z] --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
# [2023-08-10T19:08:59.191Z] 2023-08-10 15:08:59,187 INFO  [io.quarkus] (main) code-with-quarkus-1.0.0-SNAPSHOT native (powered by Quarkus 999-SNAPSHOT) started in 0.043s. Listening on: http://0.0.0.0:33917
# [2023-08-10T19:08:59.191Z] 2023-08-10 15:08:59,188 INFO  [io.quarkus] (main) Profile prod activated. 
# [2023-08-10T19:08:59.191Z] 2023-08-10 15:08:59,188 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy, smallrye-context-propagation, vertx]
# [2023-08-10T19:08:59.209Z] Worker process started and initialized.
# [2023-08-10T19:09:04.023Z] Host lock lease acquired by instance ID '000000000000000000000000747C0C10'.

To test, go to http://localhost:7071/api/my-func/hello. The hello part of the path can be replaced with whatever paths your REST API exposes.
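
For example, with curl (assuming your application exposes a GET /hello endpoint, as the default Quarkus starter project does):

$ curl http://localhost:7071/api/my-func/hello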

Deployment

To deploy, you'll need to create an Azure resource group, a storage account, and a Function App.

# login
$ az login

# list subscriptions
$ az account list -o table

# set active subscription.  You do not have to do this if you only have one subscription
$ az account set --subscription <SUBSCRIPTION_ID>

# create an Azure Resource Group
$ az group create -n rg-quarkus-functions \
  -l eastus

# create an Azure Storage Account (required for Azure Functions App)
$ az storage account create -n sargquarkusfunctions2023 \
  -g rg-quarkus-functions \
  -l eastus

# create an Azure Functions App
$ az functionapp create -n my-app-quarkus \
  -g rg-quarkus-functions \
  --consumption-plan-location eastus \
  --os-type Linux \
  --runtime custom \
  --functions-version 4 \
  --storage-account sargquarkusfunctions2023

Make sure that the os-type is Linux! Also note that your Function App name must be globally unique, so the name used here may collide with an existing one.

Now you can deploy your application. Again, make sure you are in the my-app directory!

$ func azure functionapp publish my-app-quarkus

# Getting site publishing info...
# Uploading package...
# Uploading 15.32 MB 
# Upload completed successfully.
# Deployment completed successfully.
# Syncing triggers...
# Functions in my-app-quarkus:
#    my-func - [httpTrigger]
#         Invoke url: https://my-app-quarkus.azurewebsites.net/api/{*path}

A successful deployment will tell you how to access the app. Just remember the root context will always be /api/{functionName}.
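
For example, assuming the same /hello endpoint as before, the function deployed in this post would be reachable at:

$ curl https://my-app-quarkus.azurewebsites.net/api/my-func/hello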

Quarkus Live Coding and Continuous Testing with Lambda


Last week I merged Quarkus Live Coding and Continuous Testing support for Lambda, and it will be available in Quarkus 2.3. I'm excited to see how it's received by the Quarkus community. The PR allows you to run in Quarkus Dev or Test mode entirely locally, within a mock Lambda environment automatically set up by the Quarkus build and runtime. This works when writing raw lambdas, with Quarkus's Lambda + HTTP support, as well as with Funqy and Lambda. For all these integration points with Lambda, Quarkus provides an HTTP server that you can invoke manually with curl, or via REST Assured or another HTTP client if you're writing tests.

To completely understand what is going on here and how it works, I need to explain a little bit about how Lambdas work in general. Lambdas (at least Java Lambdas) do not receive or process requests directly. Instead, an AWS Lambda event server is polled by your Lambda deployment for new requests that need to be processed. The AWS Lambda Java runtime has this poll loop built in. For native binary deployments with GraalVM, Quarkus re-implemented this poll loop itself.

The new Quarkus Live Coding and Continuous Testing support I merged adds a new Mock Event Server that simulates the AWS Lambda event server. When running in Dev or Test mode, Quarkus boots up this Mock Event Server and runs your Lambdas within the Quarkus poll loop. The Mock Event Server starts an HTTP server on port 8080 in Dev mode and port 8081 in Test mode. You can send requests to the event server with any HTTP client at http://localhost:8080 and http://localhost:8081 respectively. The payload of the request is sent as a Lambda event, picked up by the poll loop, and delivered to your lambda code.
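
For example, for a raw lambda whose input type is a plain String (a hypothetical lambda, just for illustration), you could invoke it in Dev mode with:

$ curl -H "Content-Type: application/json" -d '"Bill"' http://localhost:8080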

For Quarkus's Lambda + HTTP support, the same Mock Event Server is started as well. In this case the Mock Event Server looks like a normal HTTP server to the client, but under the covers it automatically converts HTTP requests to the JSON HTTP event payload that the AWS API Gateway uses. If you want to send raw API Gateway HTTP events, you can do so by posting to http://localhost:8080/_lambda_ or http://localhost:8081/_lambda_ depending on whether you are in Dev or Test mode. Sending raw API Gateway HTTP events is a great way to test AWS security locally.
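
For the HTTP case, a test can treat the Mock Event Server like any other HTTP server. Here's a minimal sketch, assuming the application exposes a GET /hello endpoint (the class and endpoint names are illustrative, not from the PR):

import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.Test;
import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class MockEventServerTest {

    @Test
    public void testHelloEndpoint() {
        // In Test mode the Mock Event Server listens on port 8081, which is also
        // Quarkus's default test port, so a plain REST Assured call reaches it.
        // The request is converted to an API Gateway event behind the scenes.
        given()
          .when().get("/hello")
          .then().statusCode(200);
    }
}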

Anyways, I hope you all like this support. I’m very curious how what I’ve done compares to running locally with SAM CLI. Is it better? Worse? Needs a few more tweaks? Hope to hear from you soon.

Anybody else hate PR reviews?


I'll start by introducing some flame bait:

PR REVIEWS ARE WORTHLESS!

How often are you waiting days or even weeks for a PR review? And when it is about to be approved you get a merge conflict and have to start all over again? How often are PR reviews subjective rather than objective? How often are you subjected to anal-retentive suggestions that have nothing to do with the efficiency or effectiveness of your code and all to do with somebody’s pet peeve?

Nobody ever reviews my tests

I've been paid to write code for 31 years now. It's never good to deal in absolutes like always or never or everybody or nobody, but I can honestly say that in the last 3 decades my tests have NEVER EVER been reviewed by ANYBODY. The only time I ever get a comment on testing is if I submit a PR that doesn't have ANY test code. This is the biggest reason why I think PR reviews are completely worthless. Your testsuite is the most important part of your project. Without a good testsuite, that CI green checkmark you waited so long to get has a lot less meaning. Without a good testsuite, developers become superstitious and fearful about changing any code. If PR reviewers aren't reviewing the tests, what the fuck is the point of the review?

PR Reviews slow down development for little gain

PR reviews almost always slow down development and can even bring it to a standstill. I've experienced this at multiple companies, so it's not just a symptom of open source development at Red Hat. At the last company I worked at, you'd dread any inter-team PR, as you would wait weeks, even months, for a PR review to get on somebody's Sprint board. On the Quarkus team, given the nature of open source development, the sheer volume of PRs is overwhelming for those that can actually review and approve PRs.

The results of the PR review backlog are merge conflicts, unintegrated code, and often paralyzed development. Every day you wait for an approval increases the probability of a merge conflict causing additional work you didn't plan for. As PRs pile up you end up with a bunch of unintegrated code that has not been tested together, with a CI green checkmark that could be days or weeks old. If you need to continuously work on the same part of the code base, your PR backlog can prevent you from working on bugs or improvements as you are waiting on your code to become part of master.

As a result of this, you find yourself looking for people that are rubber stamp reviewers. “Hey buddy, can you just approve my PR? The sprint ends tomorrow.” We are all guilty of it at some point or another. Obviously, this defeats the purpose of the PR review.

PR reviewers look for low hanging fruit to deny your PR

The PR review process is a chore for developers. A good review takes time. Time most developers don’t have. This is especially true for cross-team PRs where the target team often does not have your PR on their sprint board. Because of this time squeeze, reviewers look for low hanging fruit to deny your PR so they don’t have to approve it. This is especially common for PRs that come from external non-team members as trust hasn’t been established. Because of the lack of time for a comprehensive review, they look for a simple subjective suggestion to bump you back to the CI/approval queue. i.e. “I don’t like the name of this method, class, or field.” Sometimes I think reviewers feel compelled to find something wrong. Like they aren’t doing their job if they don’t (especially younger devs).

PR reviews discourage iterative development

I'm old. I spent a significant number of years without the PR review process in place. What I've found personally is that I rarely do much iterative development anymore because the CI/PR review process takes so long. In the past I'd be fixing a small bug, find a piece of code or an API that I thought was poorly written, and just rewrite it. Refactorings often create bigger PRs. The thing is, developers hate reviewing large PRs as it takes a lot of time, so any large PR might spend a lot longer in the PR review queue. Because of this, I don't do refactorings much anymore, as a long review wait queue makes merge conflicts inevitable.

How to fix the problem?

How can we fix this PR review bottleneck?

Approve and open up a new issue instead

If the PR passes CI and is tested correctly, but you believe the code could be better, then instead of demanding a change, open a new ticket and schedule the code improvement for later. Better yet, assign the new ticket to yourself.

Automatic approval after fixed time period

If a PR has the CI green light but sits unreviewed, approve it blindly after a fixed time period.

(Stan Silvert suggested this in the comments.)

Suggest, not demand subjective changes

Identify when you are asking for a subjective change. Be honest: aren't the majority of PR reviews just subjective suggestions? Instead of demanding the change, just leave your suggestion as a comment, or use GitHub's suggested-changes feature to propose the edit on the PR itself (GitHub allows the author to commit it directly).

Introduce an “approve if CI passes” option

I think GitHub supports this: add an “approve if CI passes” option to pull requests, and automate it if possible.

Review tests

Read what my PR is trying to accomplish, then review my test code to see if it is comprehensive enough. If you don't have time to do that, then you are better off just approving the PR and removing yourself as a bottleneck.

Trust your test suite and CI

To speed up PR approval you have to trust your test suite and CI builds. If your test suite sucks, you are going to be more superstitious and fearful of changes to your codebase and will look for any excuse to deny a PR. If you are a project lead, spend a Sprint or two every once in a while focusing on improving your testsuite and CI builds. Introduce backward compatibility tests. Performance regression tests. Review code coverage reports. If your testsuite is strong, it's OK to just gloss over PRs, or even blindly approve PRs from trusted individuals.

Encourage asynchronous, pre-PR feedback, but without Slack

As a PR submitter, you will ease a lot of pain if you talk about your design before you begin coding. Talk about API changes you want to make before getting to work. Get some input. You’ll save a lot of CI/PR review time if you do some of this upfront.

You'll be even happier if you fish for asynchronous feedback. Instead of Zoom, try sending out an email about what your ideas are. This allows people to give you feedback on their own time.

Why not just drop the PR review?

I'll conclude this blog with some controversy: why not just drop PR reviews altogether? The trouble they cause outweighs their value. PR submitters look for rubber stampers. Reviewers don't review tests: the most important part of your PR. Let your testsuite and CI become your automated, implicit PR reviewer.

If you’re too scared to make this leap, maybe develop a circle of trust where you have a subset of developers that can be trusted to self-approve and merge their PRs. Before git, PR reviews, and CI, back in the old JBoss days of the 2000s, our open source developers earned commit rights. If you earned the trust of the project leaders, you earned the right to commit and merge whatever you wanted. Granted, in those days we also didn’t have Continuous Development, but maybe there is some middle ground we can reach.

/votetokick pr-reviews

Finally, please note that this blog is tagged as “flame bait”. I've received some internal emails from people who are genuinely shocked I could even question the value of the PR review. Maybe I'm just trying to show people that they need to improve/streamline their review process, or just get out of my way and trust me? 🙂

QSON: New Java JSON parser for Quarkus


Quarkus has a new JSON parser and object mapper called QSON. It does bytecode generation for the Java classes you want to map to and from JSON, built around a small core library. I'm not going to get into details on how to use it here; just visit the GitHub page for more information.

I started this project because I noticed a huge startup time for Jackson relative to the other components within Quarkus applications. IIRC it was taking about 20% of the boot time for a simple JAX-RS microservice. So the initial prototype was to see how much I could improve boot time, and I was pleasantly surprised that the parser I implemented was a bit better than Jackson at runtime too!

The end result was that boot time improved about 20% for a simple Quarkus JAX-RS microservice. The runtime performance is also better in most instances. Here are the numbers from a JMH benchmark I did:

Benchmark                           Mode  Cnt       Score   Error  Units
MyBenchmark.testParserAfterburner  thrpt    2  223630.276          ops/s
MyBenchmark.testParserJackson      thrpt    2  218748.065          ops/s
MyBenchmark.testParserQson         thrpt    2  251086.874          ops/s
MyBenchmark.testWriterAfterburner  thrpt    2  189243.175          ops/s
MyBenchmark.testWriterJackson      thrpt    2  168637.541          ops/s
MyBenchmark.testWriterQson         thrpt    2  177855.879          ops/s

These are runtime throughput numbers, so higher is better. Qson is better than regular Jackson and Jackson+Afterburner for JSON-to-object mapping (reading/parsing). For output, Qson is better than regular Jackson, but is a little behind Afterburner.

There's still some work to do for Qson. One of the big things I need is Maven and Gradle plugins to handle bytecode generation so that Qson can be used outside of Quarkus. We'll also be adding more features to Qson, like custom mappings. One thing to note though is that I won't add features that hurt performance, increase memory footprint, or hurt boot time.

Over time, we'll be integrating Qson as an option for any Quarkus extension that needs JSON object mapping. So far, I've done the integration with JAX-RS (Resteasy). Funqy is a prime candidate next.

Quarkus Funqy: Portable Function API


Quarkus Funqy is a new FaaS API that is portable across cloud runtimes like AWS Lambda, Azure Functions, Knative Events, and Google Cloud Functions. Or, if you’re deploying to a traditional environment, Funqy functions can work standalone as well.

public class MyClass {
   @Funq
   public Greeting greet(String name) {
     ...
   }
}

The idea of Funqy is simple. You write one Java method that has one optional input parameter and that returns optional output. Either primitives or POJOs are supported as input and output types. Under the covers, Funqy integrates with whatever plumbing is needed depending on your deployment environment. Funqy classes are Quarkus components and can accept injections using Spring DI or CDI annotations (through Arc).
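
Since Funqy classes are Quarkus components, dependency injection works as you'd expect. Here's a minimal sketch using CDI; GreetingService and Greeting are illustrative types, not part of the Funqy API:

import javax.inject.Inject;
import io.quarkus.funqy.Funq;

public class GreetingFunction {

    // Any CDI (or Spring DI) bean can be injected into a Funqy class
    @Inject
    GreetingService service;

    // One optional input parameter, one optional return value
    @Funq
    public Greeting greet(String name) {
        return service.greet(name);
    }
}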

Funqy Bindings

Funqy can be deployed in a number of environments: AWS Lambda, Azure Functions, Knative Events, Google Cloud Functions, or standalone in a traditional environment.

Motivations for Funqy

Why did we create Funqy? Part of our Quarkus Serverless Strategy was to make popular REST frameworks available for use in environments like AWS Lambda. When the Quarkus team was asked to integrate with Cloud Events, we felt like traditional REST frameworks didn’t quite fit even though Cloud Events has an HTTP binding. Funqy gave us an opportunity to not only unify under one API for FaaS development, but to greatly simplify the development API and to create a framework that was written for and optimized for the Quarkus platform.

REST vs Funqy

The author of this blog loves JAX-RS, was the founder of Resteasy, and even wrote a book on JAX-RS. REST is still the preferred architecture, and REST over HTTP is still a ubiquitous way of writing service APIs. The thing is, though, if you go out into the wild you'll find that many application developers don't follow REST principles. HTTP and REST frameworks are pretty much used as an RPC mechanism. Cool features in HTTP like cache-control and content negotiation are rarely used, and JSON is the de facto representation exchanged between client and server.

If all this is true, you don't need 80% of the features that a spec like JAX-RS provides, nor do you want the overhead of supporting those unused features in your runtime. Since Funqy is a small, tightly constrained API, all the overhead of supporting unused REST features is ripped out. If you look at the implementation, it's a very thin integration layer over the ridiculously fast Vert.x Web runtime. Each Funqy binding is a handful of classes. Funqy's overhead is purely marshalling.

Who knows what the future holds for Funqy. It’s part of a quest to reduce the complexity and overhead of Java development as well as provide something that is portable to many environments so that you aren’t locked into a specific cloud vendor API. Enjoy everybody!

Quarkus Serverless Strategy


What is Serverless?

Serverless architectures allow us to scale our services from zero instances to many based on request and event traffic.  The advantages are clear.  If our services are idle most of the day, why should we have to pay a cloud provider for a full day or waste scarce resources in our company’s private cloud?  Why should we have to plan for peak load when our architecture can scale up for this peak load automatically based on volume of incoming traffic?  Serverless solves these types of problems.

Function as a Service (FaaS) is also part of the Serverless paradigm and focuses on exposing one remote function per deployment.  It is a more fine-grained approach than microservices, the idea being that you can be more agile at getting functionality to market with even smaller deployments.  AWS Lambda and Azure Functions are examples of FaaS implementations.  FaaS frameworks like AWS Lambda and Azure Functions not only bring autoscaling to your services, but they've also started to make it much easier to deploy your code to the cloud.  In a Lambda or Azure environment, developers don't worry about the container anymore and can just focus on pushing their code.  FaaS environments have started to take the “Ops” out of “DevOps”.

Java’s Disadvantages

Unless you're focusing solely on batch processing, one of the disadvantages of a Serverless architecture is instance spin-up time.  In other words, the cold-start latency.  If you need milliseconds to respond to a client request, and your service spin-up is measured in seconds, then you have a problem.

Java frameworks like Spring, Hibernate, MicroProfile, Java EE and other technologies traditionally have been slow to boot, and even microservices written with these technologies take seconds to start up.  This is because most of these frameworks do all their configuration and metadata processing at boot time.  Spring and Hibernate scan classes for annotations.  Hibernate additionally builds SQL queries.  They do the same exact pre-processing every single time they are spun up.

Java also has a huge memory problem.  If FaaS is the way to go and you're running many more fine-grained deployments, then Java-based deployments are going to take up a huge amount of memory.  Some cloud environments also charge based on the memory used, compounding the issue.

Quarkus Perfect Match for Serverless

Quarkus's core values are to drastically reduce memory footprint and boot time for Java-based applications and services.  These are two of the biggest concerns when dealing with Serverless architectures.  Quarkus has moved most of the pre-processing that frameworks like Spring and Hibernate do from boot time to build time.  This approach has drastically reduced service spin-up time and memory footprint.  Quarkus has also smoothed out the rough edges with Graal so that you can compile your Java microservices into native executables, which provide even faster boot time and a smaller memory footprint than running on the JVM.

Quarkus Serverless Strategy

The Quarkus team is tackling Serverless in a variety of ways:

  • Enhance existing Serverless Java stacks out of the box
  • Bring the Java ecosystem to existing Serverless Java stacks
  • Provide portability between Serverless stacks through traditional, mature, existing Java APIs
  • Provide a new Java function API (Funqy) that is portable across Serverless providers

Quarkus Enhances Lambda

By modifying your pom and adding a few Quarkus AWS Lambda integration dependencies, along with the Quarkus Maven plugin, you can compile your AWS Lambda Java projects into a native binary that the AWS Lambda Custom Runtime can consume.  Watch your cold-start latency and memory footprint drop dramatically.  Try it out yourself.

The idea here is to bring Graal support to AWS Lambda through Quarkus in a seamless way.  We have smoothed out the rough edges Graal introduces for a variety of AWS SDKs.

Pull in Java Ecosystem

Another part of the Quarkus Serverless strategy is to pull the Java ecosystem into existing Serverless stacks.  Through Quarkus, your AWS Lambda classes can inject service components via Spring DI or CDI.  You're not stuck with whatever the AWS SDK provides and can use the mature Java frameworks you've been using for years.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.springframework.beans.factory.annotation.Autowired;

public class GreetingLambda implements RequestHandler<String, String> {

    // GreetingService is an ordinary Spring bean defined elsewhere in the application
    @Autowired
    GreetingService service;

    @Override
    public String handleRequest(String name, Context context) {
        return service.greeting(name);
    }
}

Avoid Vendor Lock-in

Let's face it.  If you use AWS, Azure, or any other cloud provider's SDKs, then you are locked into that platform.  If AWS jacks up their prices down the road, you're going to have a tough time moving off their platform.  Quarkus helps alleviate this issue by integrating REST and HTTP frameworks like JAX-RS, Spring MVC, Servlet, and Vert.x Web with AWS Lambda and Azure Functions.  Let REST and HTTP be your portable architecture between cloud providers and avoid vendor lock-in by using the REST frameworks that you've been using for years.  Try it out with AWS Lambda or Azure Functions.

One great thing about using our JAX-RS or Spring MVC support with AWS Lambda or Azure Functions is that you’re not stuck with one REST endpoint per deployment.  You can deploy existing microservices as one Lambda deployment if you desire.  This alleviates some of the management issues that an explosion of function deployments might create down the road as you can aggregate as many endpoints as you want into one Lambda deployment.
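
For instance, an ordinary JAX-RS resource like the sketch below (the class and path are illustrative) needs no Lambda-specific code at all; with the Quarkus Lambda HTTP integration it simply becomes one of many endpoints inside a single Lambda deployment:

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/hello")
public class GreetingResource {

    // A plain JAX-RS endpoint; nothing Lambda-specific here
    @GET
    public String hello() {
        return "hello";
    }
}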

Funqy Cross Platform Functions

The final piece of our Quarkus Serverless Strategy is a new cross-platform function API called Funqy.  Quarkus Funqy is a simple API that allows you to write functions that are usable in a variety of FaaS environments:  AWS Lambda, Azure Functions, Knative Events, and more.

public class MyClass {
   @Funq
   public MyOutput myFunction(MyInput input) {
     ...
   }
}

Funqy is still in development.  We’ll have a follow up blog as soon as it is ready to release.

More to come

Quarkus will continue to revise and expand our Serverless Strategy.  Come try out our integrations and new APIs.

Quarkus unifies reactive and imperative for REST


The latest release of Quarkus has unified Vert.x Web, Servlet (Undertow), and JAX-RS (Resteasy) under one I/O abstraction.  Specifically, Servlet and JAX-RS were written on top of Vert.x.

What this means for you is that if you are using Vert.x, Servlet, and/or JAX-RS in one application, they will all share the same I/O and worker thread pools.  Scarce resources are now reused.  Because everything is unified under Vert.x, there are a lot of future optimizations and features that we can bring to Resteasy and the JAX-RS coding model.  More info on this coming soon!

 

Deflate this!


Got to see Bill DeCoste this week at DevNation and we relived some of our Superbowl memories. Here’s some pix

Ring ceremony was a while ago, but I just love this picture.  The guy has 6 bleeping rings!  BTW, somebody asked me if I was at this ring ceremony…LOL!  NO!  I just love this picture I found of Vince and Bill!

HTTP Quote of the day


“[T]hose who do not understand HTTP are doomed to re-implement it on top of itself.”

I wonder how many of us have done this?  I know I did a few times when I first started writing REST-based services.

Keycloak 1.1.0.Beta2 Released: Adapters, Proxy, Wildfly Subsystem


A lot of new features in this release:

  • Tomcat 6, 7, and 8 adapters
  • Jetty 8.1, 9.1, and 9.2 adapters
  • HTTP Security Proxy for platforms that don’t have an adapter based on Undertow.
  • WildFly subsystem for the auth server.  Allows you to run Keycloak in domain mode to make it easier to run in a cluster.

Hope to do 1.1.0.Final sometime at the end of January.  See http://keycloak.org for more details.
