
Some queuing theory: throughput, latency and bandwidth

· 12 min read
Matthew Sackman

You have a queue in Rabbit. You have some clients consuming from that queue. If you don't set a QoS setting at all (basic.qos), then Rabbit will push all the queue's messages to the clients as fast as the network and the clients will allow. The consumers will balloon in memory as they buffer all the messages in their own RAM. The queue may appear empty if you ask Rabbit, but there may be millions of unacknowledged messages sitting in the clients, ready for processing by the client application. If you add a new consumer, there are no messages left in the queue to send to it: the messages are all buffered in the existing clients, and may stay there for a long time, even if other consumers become free and could process them sooner. This is rather suboptimal.

So, the default QoS prefetch setting gives clients an unlimited buffer, and that can result in poor behaviour and performance. But what should you set the QoS prefetch buffer size to? The goal is to keep the consumers saturated with work, but to minimise the client's buffer size so that more messages stay in Rabbit's queue and are thus available for new consumers or to just be sent out to consumers as they become free.
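To make that concrete, here is a rough sketch (not from the post itself) of capping the per-consumer buffer with basic.qos, using the Node.js amqplib client. The URI, the queue name and the prefetch value of 20 are illustrative placeholders, not recommendations from the post.

```typescript
import * as amqp from 'amqplib';

async function consumeWithPrefetch(): Promise<void> {
  // The URI and queue name below are placeholders for this sketch.
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('work-queue', { durable: true });

  // basic.qos: push at most 20 unacknowledged messages to this consumer,
  // so everything else stays in Rabbit's queue, available to other consumers.
  await channel.prefetch(20);

  await channel.consume('work-queue', (msg) => {
    if (msg !== null) {
      // ...process the message here...
      channel.ack(msg); // acking frees a prefetch slot for the next message
    }
  });
}

consumeWithPrefetch().catch(console.error);
```

With an unlimited prefetch, the broker would instead stream the whole backlog into this one consumer's RAM; the explicit limit is what keeps the remaining messages visible in the queue.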

RabbitMQ Performance Measurements, part 2

· 7 min read
Simon MacMullen

Welcome back! Last time we talked about flow control and latency; today let's talk about how different features affect the performance we see. Here are some simple scenarios. As before, they're all variations on the theme of one publisher and one consumer publishing as fast as they can.

London Realtime hackweekend

· 2 min read
Michael Bridgen

Over the weekend, RabbitMQ co-sponsored London Realtime, two nights and two days of unadulterated hackery. It was all put on by the apparently indefatigable crew at GoSquared, a very impressive debut effort.

As a co-sponsor we had one of the iPad prizes to award. We decided to allow hacks that used one or more of RabbitMQ, SockJS, or Cloud Foundry. This meant that about half of the twenty-seven hacks were eligible when it came to judging, making the choice rather difficult.

RabbitMQ Performance Measurements, part 1

· 6 min read
Simon MacMullen

So today I would like to talk about some aspects of RabbitMQ's performance. There are a huge number of variables that feed into the overall level of performance you can get from a RabbitMQ server, and today we're going to try tweaking some of them and seeing what we can see.

How to compose apps using WebSockets

· 4 min read
Marek Majkowski

Or: How to properly do multiplexing on WebSockets or on SockJS

As you may know, WebSockets are a cool new HTML5 technology which allows you to asynchronously send and receive messages. Our compatibility layer - SockJS - emulates it and will work even on old browsers or behind proxies. WebSockets conceptually are very simple. The API is basically: connect, send and receive. But what if your web-app has many modules and every one wants to be able to send and receive data?
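By way of illustration, here is one rough way (a sketch, not the approach described in the post) to share a single connection between several modules: tag each outgoing message with a topic name and dispatch incoming ones on that tag. The wire format, class and topic names below are made up for the example.

```typescript
type Handler = (payload: unknown) => void;

class MultiplexedSocket {
  private handlers = new Map<string, Handler>();

  constructor(private ws: WebSocket) {
    // Every frame carries { topic, payload }; route it to the module
    // that registered a handler for that topic.
    ws.onmessage = (event: MessageEvent) => {
      const { topic, payload } = JSON.parse(event.data as string);
      this.handlers.get(topic)?.(payload);
    };
  }

  // Each module subscribes to its own topic and never sees the others' traffic.
  subscribe(topic: string, handler: Handler): void {
    this.handlers.set(topic, handler);
  }

  send(topic: string, payload: unknown): void {
    this.ws.send(JSON.stringify({ topic, payload }));
  }
}

// Usage: two modules sharing one connection.
const mux = new MultiplexedSocket(new WebSocket('ws://example.com/socket'));
mux.subscribe('chat', (msg) => console.log('chat:', msg));
mux.subscribe('presence', (msg) => console.log('presence:', msg));
mux.send('chat', { text: 'hello' });
```

Since SockJS mimics the WebSocket send/onmessage surface, a wrapper along these lines should work over a SockJS connection as well.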

AtomizeJS: Distributed Software Transactional Memory

· 3 min read
Matthew Sackman

AtomizeJS is a JavaScript library for writing distributed programs that run in the browser, without having to write any application-specific logic on the server.

Here at RabbitMQ HQ we spend quite a lot of time arguing. Occasionally, it's about important things, like what messaging really means, and the range of different APIs that can be used to achieve messaging. RabbitMQ and AMQP present a very explicit interface to messaging: you very much have the verbs send and receive, and you need to think about what your messaging patterns are. There's a lot of (often quite clever) stuff going on under the bonnet, but nevertheless the interface is quite low-level and explicit, which gives a good degree of flexibility. Sometimes, though, that style of API is not the most natural fit for the problem you're trying to solve: do you really reach an impasse and think "What I need here is an AMQP message broker", or do you, from pre-existing knowledge, realise that you could choose to use an AMQP message broker to solve your current problem?

RabbitMQ 2.7.0 and 2.7.1 are released

· 9 min read
Steve Powell

The previous release of RabbitMQ (2.7.0) brought with it a better way of managing plugins, one-stop URI connecting by clients, thread-safe consumers in the Java client, and a number of performance improvements and bug fixes. The latest release (2.7.1) is essentially a bug-fix release; though it also makes RabbitMQ compatible with Erlang R15B and enhances parts of the management interface. The previous release didn't get a blog post, so I've combined both releases in this one. (These are my own personal remarks and are NOT binding; errors of commission or omission are entirely my own -- Steve Powell.)
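As a small illustration of what URI-style connecting looks like, here is a sketch using the Node.js amqplib client (not one of the clients covered in the post); the credentials, host and vhost are placeholders, with %2F being the URL-encoded default vhost "/".

```typescript
import * as amqp from 'amqplib';

async function connectByUri(): Promise<void> {
  // One URI carries user, password, host, port and vhost in a single string:
  // amqp://user:password@host:port/vhost
  const connection = await amqp.connect('amqp://guest:guest@localhost:5672/%2F');
  const channel = await connection.createChannel();
  // ...declare queues, publish, consume...
  await connection.close();
}

connectByUri().catch(console.error);
```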

Ponies, Dragons and Socks

· 2 min read
Marek Majkowski

We were wondering how to present SockJS and its possibilities to a wider audience. Having a working demo is worth much more than explaining dry theory, but what can you present if you are just a boring technologist, with no design skills whatsoever?

With questions like that it's always good to open a history book and review the previous generation of computer geeks with no artistic skills. What were they doing? On consoles with green letters they were playing geeky computer games; MUDs (Multi User Dungeons) were especially popular.

Hey, we can do that!

Performance of Queues: when less is more

· 9 min read
Matthew Sackman

Since the new persister arrived in RabbitMQ 2.0.0 (yes, it's not so new anymore), Rabbit has had a relatively good story to tell about coping with queues that grow and grow and grow and reach sizes that prevent them from being held in RAM. Rabbit starts writing messages out to disk fairly early on, and continues to do so at a gentle rate so that, by the time RAM gets really tight, we've done most of the hard work already and thus avoid sudden bursts of writes. Provided your message rates aren't too high or too bursty, this should all happen without any real impact on any connected clients.

Some recent discussion with a client made us return to what we'd thought was a fairly solved problem and has prompted us to make some changes.