JSON RPC (like other RPC mechanisms) is synchronous by nature: the caller invokes a remote procedure, waits for completion and receives the result. These semantics are enforced further by some of the transports, which have an explicit FSM in the message layer. For example, the ZMQ transport itself insists that every request has a matching reply.

These semantics introduce a significant performance penalty when interfaced to ODL or any other system built around asynchronous invocation. While the actual semantics (Java Futures, etc.) may differ from language to language, they all react badly to forced synchronization. An example of this is the netconf project in ODL, which performs a forced synchronization on every operation: instead of supplying a Future as expected by the caller and setting that Future at a later time, it invokes the netconf RPC synchronously and sets the result in the Future immediately upon its creation. Rather unsurprisingly, the performance is quite dire. For the same reasons, the performance of the initial version of JSON RPC was not as high as we would have wished.

We intend to work around this limitation in three phases:

Asynchronous Invocation and Full Support for Futures

We cannot use Netty or the other mechanisms used elsewhere in ODL to make I/O asynchronous, as they do not support all the transports we are interested in. We also do not need all of Netty's bells and whistles, as our transports have their own pipelines and message framing. In fact, as various experiments with integrating Netty into ZMQ and other transports have shown, Netty just gets in the way.

All we need is a simple blocking queue. While we could size the queue dynamically and apply various optimizations, we believe this should be done after the asynchronous support is feature-complete, not from the beginning. So, for the time being, a simple ArrayBlockingQueue will suffice.

We separate the actual invocation of the RPC from the invocation of the interface presented by the jsonrpc plugin. Invoking the interface now enqueues the request and returns a simple Future; the Future is completed immediately only if the enqueue fails for some reason. This allows the caller to proceed and do other, more interesting things until it needs the actual result. An attempt to obtain the result via get(), checkedGet() or their timeout-enabled versions will block until the actual RPC invocation sets the Future.

The enqueue is now separate from the actual RPC invocation: the RPC is invoked later and its result is used to set the Future. Any callers currently blocked on get() will unblock. Any exceptions are tunneled to the caller and presented when the consumer of the Future attempts a get().
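The enqueue/worker split described above can be sketched roughly as follows. This is a minimal illustration, not the actual plugin code; the class and type names (AsyncRpcQueue, Pending) and the fixed queue size are assumptions made for the example, and CompletableFuture stands in for whichever Future flavour the plugin actually hands out:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.RejectedExecutionException;
import java.util.function.Function;

// Illustrative sketch: callers enqueue requests and get a Future back;
// a worker thread performs the actual (blocking) RPC and sets the Future.
public class AsyncRpcQueue {
    // A queued item pairs the request with the Future handed to the caller.
    record Pending(String request, CompletableFuture<String> future) {}

    private final BlockingQueue<Pending> queue = new ArrayBlockingQueue<>(64);

    public AsyncRpcQueue(Function<String, String> invoker) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Pending p = queue.take();   // blocks until work arrives
                    try {
                        // Invoke the RPC and set the Future, unblocking any get().
                        p.future().complete(invoker.apply(p.request()));
                    } catch (Exception e) {
                        // Tunnel exceptions to the consumer of the Future.
                        p.future().completeExceptionally(e);
                    }
                }
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Invocation only enqueues; the Future is completed immediately
    // only if the enqueue itself fails.
    public CompletableFuture<String> invoke(String request) {
        CompletableFuture<String> f = new CompletableFuture<>();
        if (!queue.offer(new Pending(request, f))) {
            f.completeExceptionally(new RejectedExecutionException("queue full"));
        }
        return f;
    }
}
```

The caller can go off and do other work after invoke() returns; only a get() (or join()) on the returned Future blocks until the worker has set it.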

This support is now merged.

Conveying Asynchronous Invocation Beyond ODL

We need a JSON RPC 2.0 extension to convey that a call is asynchronous, and it should be as transparent as possible to any legacy callers. We will avoid defining the handshake semantics until all the possible implications of taking asynchronous calls beyond ODL have been revealed.

These are our initial assumptions:

In order to support asynchronous invocation we introduce an additional property in the JSON RPC 2.0 payload object, called "metadata". It is optional and, when present, must be a JSON Object. Asynchronous invocation and other extensions developed for ODL and YANG-modelled JSON RPC will be conveyed by properties in the metadata object. At this time we define only one property, async, whose value is a uuid4 identifying the asynchronous invocation.
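Under those assumptions, a request carrying the extension might look like the following. The method name, params and UUID value are purely illustrative; only the "metadata" object and its "async" property are part of the proposed extension:

```json
{
    "jsonrpc": "2.0",
    "method": "get-config",
    "params": {},
    "id": 17,
    "metadata": {
        "async": "1f4c8f7a-9a2b-4c3d-8e5f-6a7b8c9d0e1f"
    }
}
```

A legacy callee can ignore the unknown top-level property, which keeps the extension transparent to implementations that do not support it.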

Call flow:

This support is a work in progress; initial patches are in the ODL Gerrit.

Support for Batching

Quite clearly, the introduction of queuing maps very well onto JSON RPC batching semantics (section 6 of the JSON RPC 2.0 Specification). While we could introduce batching support before the async extension, we believe that doing so would make adding async support more difficult at a later date, so we have deliberately delayed batching until the async extension is complete.
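For reference, the batching semantics in question (per section 6 of the JSON RPC 2.0 Specification) simply wrap several request objects in a JSON array; the method names below are illustrative:

```json
[
    {"jsonrpc": "2.0", "method": "get-config", "id": "1"},
    {"jsonrpc": "2.0", "method": "commit", "id": "2"},
    {"jsonrpc": "2.0", "method": "notify-change", "params": [7]}
]
```

Per the specification, the server may process the batched calls in any order, the response is an array of response objects matched to requests by id, and notifications (requests without an id, like the last entry above) receive no response at all. This maps naturally onto draining several queued requests at once.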