Open Source Messaging Queue for Lightning-Fast Client Server Comms


Recently, I’ve been working on a big IoT project that has taught me an invaluable lesson about transferring data from client to server. And how to do it fast. Extremely fast. I thought I’d share what I’ve learned with you good people and maybe you can benefit, too.

I will give you some context first without revealing any trade secrets. All you need to know is that we were working with a device (Client) that gathered a lot of data. The device wasn’t stationary, so access to the internet couldn’t be taken for granted. An important requirement was that we needed the data to be transferred to the Server as quickly as possible. Ideally, it would have been instant, but we don’t live in an ideal world and we had to settle for the second-best thing. Obviously, the transfer was to be performed in a safe and secure manner.

Our first approach was to use a REST API. It’s very quick to implement and pretty much every software engineer is familiar with it.

So we developed a demo product. It worked great in our customer simulation environment and we decided to storm ahead.

The initial response from our actual customer was also positive and the product worked as intended. We thought we were golden. We focused on implementing other parts of the software and considered the data upload engine a closed chapter.

However, a new cohort of customers adopted our system, and this exposed a weakness in our RESTful solution.

Those new customers were different from the first batch. They rarely used the safe and reliable source of internet which is Wi-Fi. Instead, they almost always used mobile data. To make things worse they worked in remote areas where the connection was poor and at times non-existent.

It was also quite common for them to get access to the internet for a few seconds before going into darkness again.

That meant our data wasn’t getting to the server and the devices were clogged up with valuable information we needed to somehow get to our server.

REST simply didn’t cut it. Its failing was the amount of overhead data that needed to be exchanged between the server and client before it could even begin sending the stuff we cared about most. Oftentimes, just as we had successfully established a connection with the server, completed all the required handshaking, and were about to start uploading our data, the connection would be lost. This cycle liked to repeat itself many times, making our Real-Time system more like a Delayed-Time system.
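To make that overhead concrete, here’s a rough back-of-the-envelope sketch. The endpoint, host, and token are made up; the point is that even before counting the TCP and TLS handshake round trips, every REST request carries header bytes that can dwarf a small payload.

```java
public class RestOverheadDemo {
    public static void main(String[] args) {
        // A small JSON event, roughly the size of the ones we were sending
        String payload = "{\"id\":1,\"msg\":\"SecretMessage\"}";

        // A typical set of headers for an authenticated JSON POST.
        // The path, host, and token here are hypothetical.
        String headers =
                "POST /events HTTP/1.1\r\n" +
                "Host: api.example.com\r\n" +
                "Content-Type: application/json\r\n" +
                "Authorization: Bearer <token>\r\n" +
                "Content-Length: " + payload.length() + "\r\n" +
                "\r\n";

        // The headers alone outweigh the payload -- and that's before
        // the TCP three-way handshake and the TLS handshake.
        System.out.println("payload bytes: " + payload.length());
        System.out.println("header bytes:  " + headers.length());
    }
}
```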

So we went looking for answers. Ideally, Open Source answers.

To keep a long story short, that’s when we stumbled upon ZeroMQ. Here’s the official introduction, which grabbed our team’s attention:

“ZeroMQ (also known as ØMQ, 0MQ, or zmq) looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fan-out, pub-sub, task distribution, and request-reply. It’s fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems. ZeroMQ is from iMatix and is LGPLv3 open source.”

We liked the sound of that. On the premise of being able to cut out any overhead and speed up our comms with the server, we started to experiment with it.

The learning curve was a bit steeper than with REST so it took a while to bake it into our system even though ZMQ is pretty easy once you understand it.

Here’s what a simple client and server look like. Written in Java, naturally.

Server.java

import org.apache.commons.lang3.StringUtils;
import org.zeromq.ZMQ;

public class Server {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);

        // Socket to talk to clients
        ZMQ.Socket responder = context.socket(ZMQ.REP);
        responder.bind("tcp://*:5556");

        while (!Thread.currentThread().isInterrupted()) {
            // Wait for the next request from a client
            String msg = responder.recvStr();

            if (StringUtils.isNotEmpty(msg)) {
                // Do something with your data
            }

            // A REP socket must reply before it can receive again,
            // so always send the ack, even for an empty message
            responder.send("Acked".getBytes(), 0);
        }
        responder.close();
        context.term();
    }
}

Client.java

import com.google.gson.Gson;
import org.zeromq.ZMQ;

public class Client {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);

        // Socket to talk to the server
        System.out.println("Connecting to server…");
        ZMQ.Socket requester = context.socket(ZMQ.REQ);
        requester.connect("tcp://localhost:5556");

        Event event = new Event(1L, "SecretMessage");
        String message = new Gson().toJson(event);
        System.out.println(message);

        for (int requestNbr = 0; requestNbr < 10; requestNbr++) {
            requester.send(message.getBytes(), 0);
            // A REQ socket blocks here until the server's ack arrives
            byte[] reply = requester.recv(0);
        }
        requester.close();
        context.term();
    }
}
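One note on the client: the Event class isn’t part of ZMQ and its source isn’t shown here; it’s just a small POJO that Gson serializes. A minimal stand-in (the field names are my assumption) could look like this:

```java
// Minimal stand-in for the Event the client serializes.
// The real class isn't shown in the post; these field names are assumptions.
public class Event {
    private final long id;
    private final String message;

    public Event(long id, String message) {
        this.id = id;
        this.message = message;
    }

    public long getId() { return id; }

    public String getMessage() { return message; }
}
```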

In the end, we went with ZMQ as we found the performance to be out of the ordinary. But don’t take my word for it.

We ran some tests to compare the speeds of our two solutions (REST and ZMQ). Below is a table comparing the times it took for the Server side to receive a number of messages of the same size (31 bytes).

Events sent (31 bytes each) | REST           | ZMQ             | Performance Gain
1,000                       | 6s             | 353ms           | 16x
10,000                      | 38s            | 2,202ms (2s)    | 17x
100,000                     | 388s (6m 28s)  | 11,445ms (11s)  | 34x
1,000,000                   | didn’t bother… | 62s (1m 2s)     | > 9000x

Drops the mic and leaves the room.
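For the curious, the measurement itself needs nothing fancy. Here’s a sketch of the kind of timing harness such a benchmark could use (not our actual benchmark code; the round-trip body stands in for whichever transport you’re measuring):

```java
public class Benchmark {

    // Time n send/ack round-trips in milliseconds. roundTrip is the
    // transport under test: an HTTP POST for REST, or send() followed
    // by recv() on a ZMQ REQ socket.
    public static long timeBatchMs(int n, Runnable roundTrip) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            roundTrip.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // With ZMQ the lambda would be something like:
        //   () -> { requester.send(msg, 0); requester.recv(0); }
        long ms = timeBatchMs(1_000, () -> { /* send + await ack */ });
        System.out.println("1,000 round-trips took " + ms + "ms");
    }
}
```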

To conclude, our tests showed that ZMQ is indeed lightning fast and reduces the bandwidth footprint of your application.

It is also a bit trickier to implement and adds another library your dev team must support.

As always, it depends on the project you are working on whether it is worth the hassle or not but if you care about network performance, it’s really difficult to ignore ZMQ.

P.S. If you’d like to see the code I used for the benchmarks above, drop me a message or comment below and I’ll happily upload it to GitHub.
