nanomsg-next-generation -- light-weight brokerless messaging

nng - nanomsg-next-gen

ℹ️ If you are looking for the legacy version of nanomsg, please see the nanomsg repository.

This project is a rewrite of the Scalability Protocols library known as libnanomsg, and adds significant new capabilities, while retaining compatibility with the original.

It may help to think of this as "nanomsg-next-generation".

NNG: Lightweight Messaging Library

NNG, like its predecessors nanomsg (and to some extent ZeroMQ), is a lightweight, broker-less library, offering a simple API to solve common recurring messaging problems, such as publish/subscribe, RPC-style request/reply, or service discovery. The API frees the programmer from worrying about details like connection management, retries, and other common considerations, so that they can focus on the application instead of the plumbing.
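
To make that concrete, here is a minimal request/reply sketch in C. This is illustrative only, not a demo from the distribution: error handling is elided, and the URL is a placeholder.

  #include <nng/nng.h>
  #include <nng/protocol/reqrep0/req.h>

  int
  main(void)
  {
      nng_socket sock;
      char *     buf = NULL;
      size_t     sz;

      // Open a REQ socket and dial a (placeholder) service address.
      nng_req0_open(&sock);
      nng_dial(sock, "tcp://127.0.0.1:5555", NULL, 0);

      // One call sends a request; NNG handles connection and framing.
      nng_send(sock, "ping", 5, 0);

      // Receive the reply into an NNG-allocated buffer, then release it.
      nng_recv(sock, &buf, &sz, NNG_FLAG_ALLOC);
      nng_free(buf, sz);
      nng_close(sock);
      return (0);
  }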

NNG is implemented in C, requiring only C99 and CMake to build. It can be built as a shared or a static library, and is readily embeddable. It is also designed to be easy to port to new platforms if your platform is not already supported.

License

NNG is licensed under the liberal, commercially friendly MIT license. The goal of the license is to minimize friction in adoption, use, and contribution.

Enhancements (Relative to nanomsg)

Here are areas where this project improves on "nanomsg":

Reliability

NNG is designed for production use from the beginning. Every error case is considered, and it is designed to avoid crashing except in cases of gross developer error. (Hopefully we don’t have any of these in our own code.)

Scalability

NNG scales out to engage multiple cores using a bespoke asynchronous I/O framework, using thread pools to spread load without exceeding typical system limits.

Maintainability

NNG’s architecture is designed to be modular and easily grasped by developers unfamiliar with the code base. The code is also well documented.

Extensibility

Because it avoids ties to file descriptors, and avoids confusing interlocking state machines, it is easier to add new protocols and transports to NNG. This was demonstrated by the addition of the TLS and ZeroTier transports.

Security

NNG provides TLS 1.2 and ZeroTier transports, offering support for robust and industry standard authentication and encryption. In addition, it is hardened to be resilient against malicious attackers, with special consideration given to use in a hostile Internet.

Usability

NNG eschews slavish adherence to parts of the more complex and less well understood POSIX APIs, while adopting the semantics that are familiar and useful. New APIs are intuitive, and the optional support for separating protocol context and state from sockets makes creating concurrent applications vastly simpler than previously possible.
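
As a rough sketch of that context feature (error handling elided; the callback here is a placeholder, not part of any shipped demo): each in-flight exchange on a shared REP socket gets its own nng_ctx, so one socket can service many requests concurrently.

  #include <nng/nng.h>
  #include <nng/protocol/reqrep0/rep.h>

  // Placeholder completion callback; a real server would process the
  // request here and eventually call nng_ctx_send() on the same context.
  static void my_callback(void *arg) { (void) arg; }

  void
  serve_concurrently(void)
  {
      nng_socket sock;
      nng_ctx    ctx;
      nng_aio *  aio;

      nng_rep0_open(&sock);
      nng_listen(sock, "tcp://127.0.0.1:5555", NULL, 0);

      // Each context carries independent request/reply state.
      nng_ctx_open(&ctx, sock);
      nng_aio_alloc(&aio, my_callback, NULL);
      nng_ctx_recv(ctx, aio); // receive on this context only
  }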

Compatibility

This project offers both wire compatibility and API compatibility, so most nanomsg users can begin using NNG right away.

Existing nanomsg and mangos applications can inter-operate with NNG applications automatically.

That said, there are some areas where legacy nanomsg still offers capabilities NNG lacks: specifically, enhanced observability with statistics and tunable prioritization of different destinations are missing, but will be added in a future release.

Additionally, some API capabilities that are useful for foreign language bindings are not implemented yet.

Some simple single threaded, synchronous applications may perform better under legacy nanomsg than under NNG. (We believe that these applications are the least commonly deployed, and least interesting from a performance perspective. NNG’s internal design is slightly less efficient in such scenarios, but it greatly benefits when concurrency or when multiple sockets or network peers are involved.)

Supported Platforms

NNG supports Linux, macOS, Windows (Vista or better), illumos, Solaris, FreeBSD, Android, and iOS. Most other POSIX platforms should work out of the box but have not been tested. Very old versions of otherwise supported platforms might not work.

Requirements

To build this project, you will need a C99 compatible compiler and CMake version 3.13 or newer.

We recommend using the Ninja build system (pass "-G Ninja" to CMake) when you can. (And not just because Ninja sounds like "NNG" — it’s also blindingly fast and has made our lives as developers measurably better.)

If you want to build with TLS support you will also need mbedTLS. See docs/BUILD_TLS.adoc for details.
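
For example, the TLS-enabled configure step might look like this (assuming the NNG_ENABLE_TLS CMake option described in that document):

  $ cmake -G Ninja -DNNG_ENABLE_TLS=ON ..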

Quick Start

With a Linux or UNIX environment:

  $ mkdir build
  $ cd build
  $ cmake -G Ninja ..
  $ ninja
  $ ninja test
  $ ninja install

API Documentation

The API documentation is provided in Asciidoc format in the docs/man subdirectory, and also online. The libnng(3) page is a good starting point.

You can also purchase a copy of the NNG Reference Manual. (It is published in both electronic and printed formats.) Purchases of the book help fund continued development of NNG.

Example Programs

Some demonstration programs have been created to help serve as examples. These are located in the demo directory.

Legacy Compatibility

A legacy libnanomsg compatible API is available, and while it offers less capability than the modern NNG API, it may serve as a transition aid. Please see nng_compat(3) for details.
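
As a rough sketch of what a ported call site looks like (assuming the compat headers installed under nng/compat; consult nng_compat(3) for the authoritative details):

  #include <nng/compat/nanomsg/nn.h>
  #include <nng/compat/nanomsg/reqrep.h>

  void
  legacy_client(void)
  {
      // Legacy nanomsg-style code, compiled against NNG's compat layer.
      int s = nn_socket(AF_SP, NN_REQ);
      nn_connect(s, "tcp://127.0.0.1:5555");
      nn_send(s, "ping", 5, 0);
      nn_close(s);
  }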

Commercial Support

Commercial support for NNG is available.

Please contact Staysail Systems to inquire further.

Commercial Sponsors

The development of NNG has been made possible through the generous sponsorship of Capitar IT Group BV and Staysail Systems, Inc.

Comments
  • Reproducing IPC transport unit tests causing INFINITE lock wait

    I understand that the API isn't quite ready yet; this may be one of those areas, I don't know.

    Whether I follow the C-style calls or the C++ wrapper does not seem to matter. Something in the way the testing is initialized "under the hood" is causing the tests to hang INFINITE-ly.

    When there is a valid listen/dial pair in effect, the dial operation hangs INFINITE-ly. Debugging gets as far as nni_ep_dial(...) and the subsequent nni_aio_wait(...), which then waits with INFINITE options on the call to SleepConditionVariableSRW.

    // Gets as far as here...
    int nni_ep_dial(nni_ep *ep, int flags)

    // Then blocks indefinitely waiting for the (synchronous?) AIO to return.
    nni_aio_wait(aio);

    // Ultimately waiting on a condition variable that is never signaled, apparently,
    // and with an "INFINITE" timeout, not governed by the socket's timeout options?
    void
    nni_plat_cv_wait(nni_plat_cv *cv)
    {
      (void) SleepConditionVariableSRW(&cv->cv, cv->srl, INFINITE, 0);
    }
    

    TCP works great. So does INPROC. IPC, on the other hand, is problematic under Windows (i.e. Windows 7 x64).

    The only difference I can determine at this point is in how the NNG IPC transport tests are initialized.

    void
    trantest_init(trantest *tt, const char *addr)
    {
      trantest_next_address(tt->addr, addr);
      So(nng_req_open(&tt->reqsock) == 0); // These look innocent enough.
      So(nng_rep_open(&tt->repsock) == 0);
    
      tt->tran = nni_tran_find(addr); // This however I find suspicious.
      So(tt->tran != NULL); // And/or perhaps something in the way the project is built, flags, etc.
    }
    
  • SIGSEGV in RepReq's rep0 recv - use after free

    As briefly discussed in #1240, there appears to be a bug in the current implementation that results in a SIGSEGV when a socket is closed after the resend timeout has fired at least once. Or at least that is my current theory.

    SIGSEGV from C implementation:

    https://gist.github.com/AlexKornitzer/371eef55e558e89d10850d53986dfb35
    
    ./reqrep server ipc:///tmp/abcd 
    ./reqrep client ipc:///tmp/abcd
    
    panic: pthread_mutex_lock: Invalid argument
    This message is indicative of a BUG.
    Report this at https://github.com/nanomsg/nng/issues
    /home/developer/Developer/scratch/nng/reqrep(+0x15bb9) [0x555555569bb9]
    /home/developer/Developer/scratch/nng/reqrep(+0x1f7af) [0x5555555737af]
    /home/developer/Developer/scratch/nng/reqrep(+0x1f958) [0x555555573958]
    /home/developer/Developer/scratch/nng/reqrep(+0x1c826) [0x555555570826]
    /home/developer/Developer/scratch/nng/reqrep(+0x1c562) [0x555555570562]
    /home/developer/Developer/scratch/nng/reqrep(+0xf21f) [0x55555556321f]
    /home/developer/Developer/scratch/nng/reqrep(+0x24617) [0x555555578617]
    /home/developer/Developer/scratch/nng/reqrep(+0x15f4e) [0x555555569f4e]
    /home/developer/Developer/scratch/nng/reqrep(+0xe60d) [0x55555556260d]
    /home/developer/Developer/scratch/nng/reqrep(+0xec35) [0x555555562c35]
    /home/developer/Developer/scratch/nng/reqrep(+0x18a00) [0x55555556ca00]
    /home/developer/Developer/scratch/nng/reqrep(+0x739f) [0x55555555b39f]
    /home/developer/Developer/scratch/nng/reqrep(+0x710d) [0x55555555b10d]
    /home/developer/Developer/scratch/nng/reqrep(+0x6f29) [0x55555555af29]
    /home/developer/Developer/scratch/nng/reqrep(+0x690f) [0x55555555a90f]
    /home/developer/Developer/scratch/nng/reqrep(+0x6dac) [0x55555555adac]
    /usr/lib/libc.so.6(__libc_start_main+0xf3) [0x7ffff7ded023]
    /home/developer/Developer/scratch/nng/reqrep(+0x672e) [0x55555555a72e]
    
    (gdb) bt
    #0  0x00007ffff7e01ce5 in raise () from /usr/lib/libc.so.6
    #1  0x00007ffff7deb857 in abort () from /usr/lib/libc.so.6
    #2  0x00005555555733bf in nni_plat_abort ()
    #3  0x0000555555569bbe in nni_panic ()
    #4  0x00005555555737af in nni_pthread_mutex_lock ()
    #5  0x0000555555573958 in nni_plat_mtx_lock ()
    #6  0x0000555555570826 in nni_mtx_lock ()
    #7  0x0000555555570562 in nni_task_dispatch ()
    #8  0x000055555556321f in nni_aio_begin ()
    #9  0x0000555555578617 in ipctran_pipe_recv ()
    #10 0x0000555555569f4e in nni_pipe_recv ()
    #11 0x000055555556260d in rep0_ctx_recv ()
    #12 0x0000555555562c35 in rep0_sock_recv ()
    #13 0x000055555556ca00 in nni_sock_recv ()
    #14 0x000055555555b39f in nng_recv_aio ()
    #15 0x000055555555b10d in nng_recvmsg ()
    #16 0x000055555555af29 in nng_recv ()
    #17 0x000055555555a90f in server ()
    #18 0x000055555555adac in main ()
    

    SIGSEGV from Rust version:

    Thread 1 "scratch" received signal SIGSEGV, Segmentation fault.
    0x0000555555577ce4 in nni_pipe_recv (p=0x0, aio=0x7fffe8001430)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/core/pipe.c:141
    141             p->p_tran_ops.p_recv(p->p_tran_data, aio);
    (gdb) bt
    #0  0x0000555555577ce4 in nni_pipe_recv (p=0x0, aio=0x7fffe8001430)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/core/pipe.c:141
    #1  0x000055555556dddf in rep0_ctx_recv (arg=0x5555555ead20, aio=0x5555555edcb0)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/protocol/reqrep0/rep.c:504
    #2  0x000055555556e361 in rep0_sock_recv (arg=0x5555555eac10, aio=0x5555555edcb0)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/protocol/reqrep0/rep.c:670
    #3  0x000055555557ade3 in nni_sock_recv (sock=0x5555555ea220, aio=0x5555555edcb0)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/core/socket.c:847
    #4  0x00005555555666e6 in nng_recv_aio (s=..., aio=0x5555555edcb0)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/nng.c:214
    #5  0x000055555556651d in nng_recvmsg (s=..., msgp=0x7fffffffcf50, flags=0)
        at /home/developer/.cargo/registry/src/github.com-1ecc6299db9ec823/nng-sys-1.2.4-rc.1/nng/src/nng.c:136
    #6  0x0000555555564a84 in nng::socket::Socket::recv (self=0x7fffffffd018)
        at /home/developer/.cargo/git/checkouts/nng-rs-585a5382ec72d3dc/fc301c2/src/socket.rs:245
    #7  0x00005555555600f1 in scratch::server () at src/main.rs:76
    #8  0x0000555555560339 in scratch::main () at src/main.rs:87
    
  • nng_recv() sometimes acts on null `msg` pointer

    NNG & Platform details.

    NNG commit 6401d100e8b616c65014ae7fd62ac9a575466bef, Windows 10 (RS3, 10.0.16299), used from C++.

    Expected Behavior

    No segfault in nng_recv on calling nng_msg_len.

    Actual Behavior

    Segfault, nng_recvmsg rv = 0, msg = NULL.

    Steps to Reproduce

    Haven't found a minimal repro yet, however some details on our usage:

    • 2 threads, let's name them 'main' and 'net'
    • each thread owns a client REQ socket (our understanding is this allows resending?) dialed via inproc:// to the other thread's REP socket (inproc://net and inproc://main)
    • 4-byte messages get sent to signal wakeup, no other data is sent at this time
    • recv call:
    while (nng_recv(netSocket, &msgBuffer, &msgLen, NNG_FLAG_NONBLOCK | NNG_FLAG_ALLOC) == 0)
    {
        nng_free(msgBuffer, msgLen);

        int ok = 0;
        nng_send(netSocket, &ok, 4, NNG_FLAG_NONBLOCK);

        m_netThreadCallbacks.Run();
    }
    
    • send call:
    nng_send(sockets[i], &idxList[0], idxList.size() * sizeof(int), NNG_FLAG_NONBLOCK);
    

    We're also having some other issues with aio lifetime on both Linux and Windows - possibly some kind of use-after-free or double frees, but no actual coredump at hand for this. 😕

  • IPC error during Bus tests

    2017.11.04 17:55:10.830   ERROR Process C:\Users\Michael\AppData\Local\JetBrains\Installations\ReSharperPlatformVs14\JetBrains.ReSharper.TaskRunner.CLR45.x64.exe:14588 exited with code '3':
    panic: G:\Source\Spikes\nanomsg\csnng\prototype\repos\nng\src\platform\windows\win_ipc.c: 65: assert err: aio->a_iov[0].iov_len
    This message is indicative of a BUG.
    Report this at http://github.com/nanomsg/nanomsg
    

    Test log:

    Running protocol tests for address family 'InterProcess'.
    Testing using address 'ipc://pipe/325656c3-4e60-4c89-81d4-097e4a20dfa2'.
    Given three Nanomsg2.Sharp.Protocols.Bus.LatestBusSocket instances.
      And messages can be delivered.
        And receive times out.
          And Bus1 delivers message to both Bus2 and Bus3.
    

    Based on IPC specialized Bus tests:

    namespace Nanomsg2.Sharp.Protocols.Bus
    {
        using Xunit.Abstractions;
    
        public class InterProcessBusTests : BusTests
        {
            protected override SocketAddressFamily Family { get; } = SocketAddressFamily.InterProcess;
    
            public InterProcessBusTests(ITestOutputHelper @out)
                : base(@out)
            {
            }
        }
    }
    

    Running under Visual Studio 2015, C# 6.0, xUnit, Windows 7 Professional (x64).

  • [Question] req/rep inverted async demo

    Hello,

    I'm filing this issue to get an answer to a question: I have a very similar need to the one described in the async demo. The only difference is that my server is actually a REQ and my clients are REPs.

    I first prototyped this in my project and realized that the library was behaving oddly, so I figured I'd try to make something smaller. I modified the async demo to do just that: make the server a REQ instead of a REP; instead of receiving then sending, it sends, then receives. I also modified the client to match.

    To give a few more details on what I'm seeing (I haven't really debugged it nor tried to find where/why this is happening yet): if I have X aios waiting on a recv (to be able to handle X simultaneous clients) and a client connects, the server's response gets sent more than once: once with the expected nng_ctx, but I can also see the same data being sent with other nng_ctxs as well (I simply display the context identifier).

    Before I write a reproducer, is the above expected to work, or is there some sort of limitation or something I misunderstood along the way regarding the APIs/REQ/REP? FWIW, I am doing my tests with the latest release, which is v1.3.2, and I've verified that the same thing happens on both Linux and Windows with the tcp transport.
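
    For clarity, the per-context flow I'm attempting looks roughly like this (placeholder names; make_request() is a hypothetical stand-in for building the message):

    // One outstanding exchange per nng_ctx on the REQ server.
    nng_ctx_open(&work->ctx, reqsock);
    nng_aio_set_msg(work->aio, make_request()); // hypothetical helper
    nng_ctx_send(work->ctx, work->aio);         // send first...
    // ...then, from the send-completion callback:
    nng_ctx_recv(work->ctx, work->aio);         // ...receive the matching reply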

    Cheers

  • kqueue: add kqueue-based pollq implementation

    Initial cut at kqueue support for #35

    This passes all tests on my machine, will see how it does on CI. Happy to rename/reorg as needed.

    A couple thoughts:

    • since some of the poller API (add, remove, enable, disable) may involve syscalls, it would be nice to return errors from those routines and handle them accordingly, possibly in a future PR
    • I wasn't totally clear on the expected handling of event flags. The existing poll implementation treats event flag manipulation as a cheap operation (it just manipulates userspace memory), but since epoll and kqueue implementations might prefer to use the provided syscall interfaces to enable/disable events, as distinct from adding/removing nodes, it could be nice to take that into consideration and minimize changes to enable/disable state (see the sketch below).
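
    To illustrate the second point, a sketch of toggling an already-registered kqueue event with EV_ENABLE/EV_DISABLE rather than re-adding it (kq, fd, and udata are placeholders):

    #include <sys/event.h>

    struct kevent ev;
    // Disable read events for fd without removing it from the kqueue;
    // re-enabling later uses EV_ENABLE the same way. One syscall each.
    EV_SET(&ev, fd, EVFILT_READ, EV_DISABLE, 0, 0, udata);
    kevent(kq, &ev, 1, NULL, 0, NULL);
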
  • PUB/SUB Multi-Message Handling

    When porting a script from zmq to pynng (a much nicer Python coding experience) to broadcast data (bursts of 4 consecutive messages per loop, approx. 60 loops/s, approx. 1 KB/s TCP traffic) from one PUB to two SUBs, I experienced unmistakable message loss that I had not seen with zmq.

    In followup investigations on this topic I found some issues in the nanomsg repository describing this problem (nanomsg/nanomsg#283, nanomsg/nanomsg#390).

    I was wondering if this "behavior" is still expected with nng. Probably yes, judging by #762 and the Conceptual Overview section of the nng manual:

    nng sockets are message oriented, so that messages are either delivered wholly, or not at all. Partial delivery is not possible. Furthermore, nng does not provide any other delivery or ordering guarantees; messages may be dropped or reordered. (Some protocols, such as nng_req(7) may offer stronger guarantees by performing their own retry and validation schemes.)

    NNG & Platform details.

    NNG: using pynng (codypiersall/[email protected]) and the nng version shipped with it. OS: Windows 10 10.0.17134 (64-bit). PL: Python 3.7.0.

    Expected Behavior

    Not drop 0.5 % (or even 75 %) of messages.

    Actual Behavior

    Drops 0.5 % (or even 75 %) of messages.

    Steps to Reproduce

    The following python code compares the percentage of dropped messages between nng and zmq

    Single Message Example

    import pynng  # From commit 61c9f11
    import zmq
    import time
    
    pub = pynng.Pub0(listen='tcp://*:5555')
    
    sub = pynng.Sub0(dial='tcp://localhost:5555')
    sub.subscribe(b'0')
    
    time.sleep(1)
    
    i_send = 0
    i_recv = 0
    try:
        while True:
            i_send += 1
            
            pub.send(b'0')
            # time.sleep(0.000001)  # Cures the loss
            
            try:
                msg = sub.recv(block=False)
                i_recv += 1
            except pynng.exceptions.TryAgain:
                pass
            
            print(f'pynng: Lost {i_send - i_recv} of {i_send} msgs ' + '({0:3.3f} % loss)'.format((i_send - i_recv) / i_send * 100) + '  [Exit with Ctrl+C]', end='\r')
            if i_send >= 10**6:
                break
    except KeyboardInterrupt:
        pass
    finally:
        print()
        sub.close()
        pub.close()
        time.sleep(1)
    
    
    ctx = zmq.Context()
    
    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://*:5555')
    
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://localhost:5555')
    sub.setsockopt(zmq.SUBSCRIBE, b'0')
    
    time.sleep(1)
    
    i_send = 0
    i_recv = 0
    try:
        while True:
            i_send += 1
            
            pub.send(b'0')
            # time.sleep(0.000001)  # Cures the loss
    
            try:
                msg = sub.recv(zmq.NOBLOCK)
                i_recv += 1
            except zmq.error.Again:
                pass
            
            print(f'zmq:   Lost {i_send - i_recv} of {i_send} msgs ' + '({0:3.3f} % loss)'.format((i_send - i_recv) / i_send * 100) + '  [Exit with Ctrl+C]', end='\r')
            if i_send >= 10**6:
                break
    except KeyboardInterrupt:
        pass
    finally:
        sub.close()
        pub.close()
        time.sleep(1)
    
    exit()
    

    Yielding quite reproducibly the following values:

     pynng: Lost 5258 of 1000000 messages (0.526 % loss)  [Exit with Ctrl+C]
     zmq:   Lost 45 of 1000000 messages (0.005 % loss)  [Exit with Ctrl+C]
    

    Quite interestingly, zmq handles this fast PUB/SUB two orders of magnitude better than nng. A short time.sleep() after each send cures the loss in both cases, but dramatically slows down the script.

    Message Burst Example

    The following script simulates the beforehand mentioned 4 message data bursts:

    import pynng  # From commit 61c9f11
    import zmq
    import time
    
    burstsize = 4
    
    pub = pynng.Pub0(listen='tcp://*:5555')
    
    sub = pynng.Sub0(dial='tcp://localhost:5555')
    sub.subscribe(b'')
    
    time.sleep(1)
    
    i_send = 0
    i_recv = 0
    try:
        while True:
            msgs = []
            for i in range(burstsize):
                i_send += 1
                pub.send(bytes(str(i), encoding='utf-8'))
                # time.sleep(0.000001)  #Leads to 75 % loss - only 1 msg is received!
            
            while True:
                try:
                    msgs += [sub.recv(block=False)]
                    i_recv += 1
                    # time.sleep(0.000001)  # Has no effect
                except pynng.exceptions.TryAgain:
                    break
            
            print(f'pynng: Lost {i_send - i_recv} of {i_send} msgs ' + '({0:3.3f} % loss)  '.format((i_send - i_recv) / i_send * 100) + f'Burst: Recvd {len(msgs)} of {burstsize} msgs ' + '({0:3.3f} % loss)  '.format((burstsize - len(msgs)) / burstsize * 100) + '[Exit with Ctrl+C]', end='\r')
            if i_send >= 10**6:
                break
    except KeyboardInterrupt:
        pass
    finally:
        sub.close()
        pub.close()
        time.sleep(1)
        print()
    
    ctx = zmq.Context()
    
    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://*:5555')
    
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://localhost:5555')
    sub.setsockopt(zmq.SUBSCRIBE, b'')
    
    time.sleep(1)
    
    i_send = 0
    i_recv = 0
    try:
        while True:
            msgs = []
            for i in range(burstsize):
                i_send += 1
                pub.send(bytes(str(i), encoding='utf-8'))
                # time.sleep(0.000001)  # Leads to 75 % loss - only 1 msg is received!
            
            while True:
                try:
                    msgs += [sub.recv(zmq.NOBLOCK)]
                    i_recv += 1
                    # time.sleep(0.000001)  # Has no effect
                except zmq.error.Again:
                    break
            
            print(f'zmq  : Lost {i_send - i_recv} of {i_send} msgs ' + '({0:3.3f} % loss)  '.format((i_send - i_recv) / i_send * 100) + f'Burst: Recvd {len(msgs)} of {burstsize} msgs ' + '({0:3.3f} % loss)  '.format((burstsize - len(msgs)) / burstsize * 100) + '[Exit with Ctrl+C]', end='\r')
            if i_send >= 10**6:
                break
    except KeyboardInterrupt:
        pass
    finally:
        sub.close()
        pub.close()
        time.sleep(1)
    
    exit()
    

    Delivering the following results:

    pynng: Lost 620351 of 1000000 messages (62.035 % loss)  [Exit with Ctrl+C]
    zmq:   Lost 4 of 1000000 messages (0.000 % loss)  [Exit with Ctrl+C]
    

    Again, zmq is the definite winner. In the code example it seems quite random which messages are lost (each of the four different messages is received over time). However, as stated in the code, putting time.sleep() after send leads to 75 % data loss for nng, and the sole message received is the 1st of the 4 (1 out of 4 = 75 % loss).

    Am I missing some setting here to define a buffer size? Is there any option to make the PUB wait/block until the message was really sent (just sent, not received)?
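
    At the C level, the knobs I believe the pynng properties above map onto are NNG_OPT_SENDBUF and NNG_OPT_RECVBUF (a hedged sketch; depths are counted in messages, and PUB/SUB remains best-effort regardless):

    // Deepen the socket queues; delivery is still best effort.
    nng_setopt_int(pub, NNG_OPT_SENDBUF, 128);
    nng_setopt_int(sub, NNG_OPT_RECVBUF, 128);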

    As stated by @gdamore in https://github.com/nanomsg/nanomsg/issues/390#issuecomment-98404170

    I'm closing this as the problem is not with nanomsg, but rather with your usage and understanding the limitations of the design.

    a PUB/SUB socket was not the right choice for multicasting "a lot of" data in nanomsg, which seemingly is still the case for nng.

    What I thus wonder, and clearly do not understand: If nng's PUB/SUB is not the right choice, what else is?

    As I have to send this data fast and losslessly, this issue is the only reason I have to stick with zmq for my application. (Ok, except for this https://github.com/codypiersall/pynng/issues/9#issuecomment-437969587 - even with some C code from @codypiersall)

    (Sorry for this extremely long issue :) )


    Edit: the code example was the same twice; changed the second one to the real burst example.

  • nng_sockaddr_in6 should support 32-bit and 16-bit unionized fields

    If I understand my reading of the blogs, the spec, and the responses correctly, that's the layout, and it does appear to care about network byte order. So while a 16-byte array is interesting, it is virtually useless to work with.

    See this SO for more detail.

    So, this:

    struct nng_sockaddr_in6 {
        uint16_t sa_family;
        uint16_t sa_port;
        uint8_t  sa_addr[16];
    };
    

    Should probably be something more like this:

    struct nng_sockaddr_in6 {
        uint16_t sa_family;
        uint16_t sa_port;
        union {
            uint8_t sa_addr8[16];
            uint16_t sa_addr16[8];
            uint32_t sa_addr32[4];
        } sa_addr;
    };
    

    Actually, the 16-bit representation is probably the most accurate, according to the statement: "An IPv6 address is represented as eight groups of four hexadecimal digits, each group representing 16 bits".

    Oh, for fun with C/C++ unions.

    Could be wrong.
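
    A hedged sketch of why the unionized view would help, using the standard byte-order helpers (sa here is a pointer to the proposed struct):

    #include <arpa/inet.h> // ntohs

    // Read the first 16-bit group directly and convert from network
    // byte order, instead of assembling it from two bytes by hand.
    uint16_t group0 = ntohs(sa->sa_addr.sa_addr16[0]);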

  • always see 1% message drop on listener

    NNG Version : 1.1.0

    I did some experiments on how many msg/sec I can pump through a Pair socket with one listener and one dialer. What I found is that, irrespective of message size and the number of messages per iteration, I always see a message drop rate of around 1%:

    | iteration | total messages received | total time | msg/sec | drop rate |
    | -- | -- | -- | -- | -- |
    | 0 | 14927 | 1105 | 13508.60 | 0.49 |
    | 1 | 14866 | 1072 | 13867.50 | 0.89 |
    | 2 | 14931 | 1255 | 11897.20 | 0.46 |
    | 3 | 14965 | 1252 | 11952.90 | 0.23 |
    | 4 | 14983 | 1101 | 13608.50 | 0.11 |
    | 5 | 14913 | 1082 | 13782.80 | 0.58 |
    | 6 | 14919 | 1197 | 12463.70 | 0.54 |
    | 7 | 14861 | 1067 | 13927.80 | 0.93 |
    | 8 | 14931 | 1093 | 13660.60 | 0.46 |
    | 9 | 14900 | 1130 | 13185.80 | 0.67 |

    These stats are collected on the listener and sent back to the dialer upon request. Each iteration attempts to send 15k messages. If I send 4k messages per iteration, the receive side still comes up roughly 1% short; the same holds for any number of messages I send. i thought m

  • Memory leak (I am not sure if it was caused by nng)

    Pattern: pair v1, poly mode. Transport: tcp. Server: pair v1 listen, built with VS2013 x64 (Windows Server 2016), count: 1. Clients: pair v1 dial (Windows 7), count: about 15K+. Clients keep going online and offline all the time, and the memory keeps increasing...

    Extra information: every client keeps sending a msg (length 1) to the server every 10 minutes (a heartbeat). I wonder if this causes the memory leak on the server.

    Extra information 2: the server pseudo-code is:

    for (;;) {
        nng_recvmsg(...);
        nng_pipe pi = nng_msg_get_pipe(msg);
        nng_pipe_get_addr(...); // (opt remote addr)
    }

    You see, I didn't close the pi. I wonder if this leaks. I will keep reading the nng docs.

    I have found one of the leaking code paths with windbg:

    static int
    tcptran_pipe_alloc(tcptran_pipe **pipep)
    {
        tcptran_pipe *p;
        int           rv;

        if ((p = NNI_ALLOC_STRUCT(p)) == NULL) {
            return (NNG_ENOMEM);
        }
        nni_mtx_init(&p->mtx);
        if (((rv = nni_aio_alloc(&p->txaio, tcptran_pipe_send_cb, p)) != 0) ||
            ((rv = nni_aio_alloc(&p->rxaio, tcptran_pipe_recv_cb, p)) != 0) ||
            ((rv = nni_aio_alloc(&p->negoaio, tcptran_pipe_nego_cb, p)) != 0)) {
            tcptran_pipe_fini(p);
            return (rv);
        }

    The p->rxaio is never freed; I don't know why.

    nng x64 leak callstack 1 (leak block size: 1d8): the "tcptran_pipe_alloc" leak

    00000285784a0cb0 0020 0000 [00] 00000285784a0ce0 001d8 - (busy)
    7ffce6c20543 ntdll!RtlpCallInterceptRoutine+0x000000000000003f
    7ffce6bbeee2 ntdll!RtlpAllocateHeapInternal+0x0000000000001142
    *** WARNING: Unable to verify checksum for C:\test_Svr.exe
    7ff7b83e40d5 test_Svr!_calloc_impl+0x000000000000005d
    7ff7b840cca9 test_Svr!calloc+0x0000000000000015
    7ff7b83ec777 test_Svr!nni_aio_init+0x0000000000000027
    7ff7b83fc3d0 test_Svr!nng_tcp_register+0x0000000000000460 (in fact, it's nni_aio_alloc+xx)
    7ff7b83fbff3 test_Svr!nng_tcp_register+0x0000000000000083 (in fact, it's tcptran_pipe_alloc+xx)
    7ff7b83f69a9 test_Svr!nni_taskq_sys_init+0x00000000000000a9
    7ff7b83f2ef4 test_Svr!nni_thr_run+0x0000000000000094
    7ff7b83ebfc8 test_Svr!nni_plat_thr_is_self+0x0000000000000038
    7ff7b840caeb test_Svr!_callthreadstartex+0x0000000000000017
    7ff7b840cc92 test_Svr!_threadstartex+0x0000000000000102
    7ffce67f84d4 KERNEL32!BaseThreadInitThunk+0x0000000000000014
    7ffce6bfe851 ntdll!RtlUserThreadStart+0x0000000000000021

    nng x64 leak callstack 2 (leak block size: 4d8; this callstack is correct, no analysis mistake):

    0:031> !heap -p -a 000002856b618970
    address 000002856b618970 found in _HEAP @ 28556500000
    HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
    000002856b618940 0051 0000 [00] 000002856b618970 004d8 - (busy)
    7ffce6c20543 ntdll!RtlpCallInterceptRoutine+0x000000000000003f
    7ffce6bbeee2 ntdll!RtlpAllocateHeapInternal+0x0000000000001142
    7ff7b83e40d5 test_Svr!_calloc_impl+0x000000000000005d
    7ff7b840cca9 test_Svr!calloc+0x0000000000000015
    7ff7b83efd5e test_Svr!nni_pipe_sys_init+0x000000000000008e
    7ff7b83efaad test_Svr!nni_pipe_create_listener+0x000000000000003d
    7ff7b83f05be test_Svr!nni_listener_add_pipe+0x000000000000004e
    7ff7b83eeef4 test_Svr!nni_dialer_sys_init+0x00000000000000b4
    7ff7b83f69a9 test_Svr!nni_taskq_sys_init+0x00000000000000a9
    7ff7b83f2ef4 test_Svr!nni_thr_run+0x0000000000000094
    7ff7b83ebfc8 test_Svr!nni_plat_thr_is_self+0x0000000000000038
    7ff7b840caeb test_Svr!_callthreadstartex+0x0000000000000017
    7ff7b840cc92 test_Svr!_threadstartex+0x0000000000000102
    7ffce67f84d4 KERNEL32!BaseThreadInitThunk+0x0000000000000014
    7ffce6bfe851 ntdll!RtlUserThreadStart+0x0000000000000021

    leak size 470:

    0:031> !heap -p -a 0000028565028690
    address 0000028565028690 found in _HEAP @ 28556500000
    HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
    0000028565028660 004d 0000 [00] 0000028565028690 00470 - (busy)
    ? test_Svr!nni_win_tcp_init+220
    7ffce6c20543 ntdll!RtlpCallInterceptRoutine+0x000000000000003f
    7ffce6bbeee2 ntdll!RtlpAllocateHeapInternal+0x0000000000001142
    7ff7b83e40d5 test_Svr!_calloc_impl+0x000000000000005d
    7ff7b840cca9 test_Svr!calloc+0x0000000000000015
    7ff7b840b589 test_Svr!nni_win_tcp_init+0x0000000000000029
    7ff7b840a342 test_Svr!nni_tcp_listener_accept+0x00000000000000e2
    7ff7b83fc02d test_Svr!nng_tcp_register+0x00000000000000bd
    7ff7b83f69a9 test_Svr!nni_taskq_sys_init+0x00000000000000a9
    7ff7b83f2ef4 test_Svr!nni_thr_run+0x0000000000000094
    7ff7b83ebfc8 test_Svr!nni_plat_thr_is_self+0x0000000000000038
    7ff7b840caeb test_Svr!_callthreadstartex+0x0000000000000017
    7ff7b840cc92 test_Svr!_threadstartex+0x0000000000000102
    7ffce67f84d4 KERNEL32!BaseThreadInitThunk+0x0000000000000014
    7ffce6bfe851 ntdll!RtlUserThreadStart+0x0000000000000021

    Okay, in fact, the top 3 block sizes account for most of my leaked memory. The windbg heap stats are below:

    0:031> !heap -stat -h 000001a363440000
    heap @ 000001a363440000
    group-by: TOTSIZE max-display: 20
        size     #blocks   total    ( %) (percent of total busy bytes)
        1d8      36aeab - 64d20b48  (43.40)
        4d8      7cfce  - 25d68dd0  (16.29)
        470      7cfd4  - 22aa3cc0  (14.92)
        58       36aec5 - 12cc13b8  (8.09)
        138      7cfd8  - 9854f40   (4.10)
        100      7cfd9  - 7cfd900   (3.36)
        78       7cfcf  - 3a96908   (1.58)
        1800000  2      - 3000000   (1.29)
        ed4100   3      - 2c7c300   (1.20)
        48       7cfc3  - 2326ed8   (0.95)
        219dbe3  1      - 219dbe3   (0.90)
        68       5106f  - 20ead18   (0.89)
        80010    34     - 1a00340   (0.70)
        30       7cff2  - 176fd60   (0.63)
        40       59075  - 1641d40   (0.60)
        46       5106c  - 1627d88   (0.60)
        20       7cfd5  - f9faa0    (0.42)
        3ff70    c      - 2ff940    (0.08)
        4059f    1      - 4059f     (0.01)
        40000    1      - 40000     (0.01)

  • Running multiple Bus tests causes error code 3

    Taken individually, the InProc, IPv4, and IPv6 tests all run just fine. But running them in parallel causes problems.

    2017.11.04 18:36:17.392   ERROR Process C:\Users\Michael\AppData\Local\JetBrains\Installations\ReSharperPlatformVs14\JetBrains.ReSharper.TaskRunner.CLR45.x64.exe:13608 exited with code '3'.
    

    However, by contrast, the pipeline tests run just fine, individually or all together in parallel (same set as above).

    Generally, I am following the same approach as the C code, closer to the C++ code, probably even a bit "simpler" due to no smart-pointer gymnastics going on. I am also not bothering to vet any of the protocol/peer stuff, just focusing on messaging, the socket protocols, etc.

    I'll have my repo committed this weekend so you can take a look.

  • demo projects don't build

    Ubuntu 16. The problem during the cmake stage is:

    CMake Warning (dev) at CMakeLists.txt:18 (add_executable):
      Policy CMP0028 is not set: Double colon in target name means ALIAS or
      IMPORTED target. Run "cmake --help-policy CMP0028" for policy details.
      Use the cmake_policy command to set the policy and suppress this warning.

    Target "raw" links to target "Threads::Threads" but the target was not found.
    Perhaps a find_package() call is missing for an IMPORTED target, or an ALIAS
    target is missing? This warning is for project developers. Use -Wno-dev to
    suppress it.

    And building will fail.

    The solution is to add to CMakeLists.txt:

    set(THREADS_PREFER_PTHREAD_FLAG ON)
    find_package(Threads REQUIRED)

    For more: https://stackoverflow.com/questions/1620918/cmake-and-libpthread

  • examples demo nng under windows

    Dears, I'm following this tutorial to learn nng: https://nanomsg.org/gettingstarted/nng/index.html. The problem is that all the commands are for Linux; for example,

    Execution
    ./pipeline node0 ipc:///tmp/pipeline.ipc & node0=$! && sleep 1
    ./pipeline node1 ipc:///tmp/pipeline.ipc "Hello, World!"
    ./pipeline node1 ipc:///tmp/pipeline.ipc "Goodbye."
    kill $node0
    

    How can I use these examples under Windows? What is the Windows equivalent of the following?

    ipc:///tmp/pipeline.ipc & node0=$!
    
  • httpclient test failed

    Describe the bug: possibly a reproduction of #787.

    Expected behavior: the test should pass (the others do).

    Actual behavior and reproduction: ninja test failed; following the script, I get this:

    [email protected] /home/levi/.cache/paru/clone/nng/src/build/tests
    ⚡ ./httpclient -v -p TEST_PORT=13040
    === RUN: HTTP Client
    
      Given a TCP connection to example.com ✔✔✔✔
        We can initiate a message ✔✔✔✔✔✔
          The message contents are correct ✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔✔
      Given a client ✔✔✔
        One off exchange works ✔✔✔✔✔✔✔✔
        Connection reuse works ✔✔✔✔✔✔✔✔✔✔✔✔✔
      Client times out ⚠
      Given a client (chunked) ✔✔✔
        One off exchange works ✔✔✘✔✔✔
    
    
    Failures:
    
      * Assertion Failed (One off exchange works)
      File: /home/levi/.cache/paru/clone/nng/src/nng-1.5.2/tests/httpclient.c
      Line: 37
      Test: nng_aio_result(aio) == 0
    

    Environment Details

  • nng creates a massive amount of threads on high core count machines

    Describe the bug: I'm running a 32-core/64-thread Threadripper machine, and it looks like nng creates 151 threads in total.

    While this could be desirable for some apps, for the application that I'm writing it is far too many, and it clutters up the profiling tools quite a bit.

    | Thread name | Thread count |
    | --- | --- |
    | nng:iocp | 64 |
    | nng:ipc:dial | 1 |
    | nng:resolver | 4 |
    | nng:task | 16 |
    | nng:reap2 | 1 |
    | nng:timer | 1 |
    | nng:aio:expire | 64 |

    Expected behavior: ideally, thread creation is left to the application instead of being done by the nng framework; that way I can control the scheduling of work and fit it into my async execution runtime, instead of having nng take up resources (and, arguably more important, screen real estate in the profiler).

    In our application we're trying to severely limit our number of threads and schedule work on our own.

    Actual behavior: see above.

    To reproduce: simply run nng on a high core-count machine.

    Environment Details

    • NNG version: we're using the nng-rs crate; I think it's a bit behind the latest nng
    • Operating system and version: Windows
    • Compiler and language used: rustc
    • Shared or static library: unknown (and probably irrelevant)

    Additional context: I found an existing issue on limiting the number of threads being spun up, #769, but that seems to require compile-time toggles? Ideally this is something that can be set at runtime: we have a few different use-cases all sharing the same codebase, one where we run the service inproc and want a low thread count, and one where we run the service as a daemon and don't really care about thread count.
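
    For reference, the compile-time toggles mentioned look roughly like this; the exact option names here are an assumption based on #769 and may vary by release:

    $ cmake -G Ninja -DNNG_NUM_TASKQ_THREADS=4 ..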

  • Lost messages when using the bus protocol.

    I'm trying to benchmark nng against zmq and redis, but I found that nng loses messages under the bus protocol; even setting send_buffer_size and recv_buffer_size does not help.

    This problem does not occur with pubsub, and does not occur with zmq or redis either (because they only have pubsub). I'm curious what is causing this and how to fix it.

    Here is the log and code:

    Log

    Subs: 1 Msgs: 100
    Count: 99 Time: 0.02 s
    Subs: 2 Msgs: 100
    Count: 98 Time: 0.01 s
    Count: 95 Time: 0.01 s
    Subs: 4 Msgs: 100
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Subs: 6 Msgs: 100
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 100 Time: 0.02 s
    Count: 0 Time: 0.00 s
    Subs: 1 Msgs: 1000
    Count: 551 Time: 0.04 s
    Subs: 2 Msgs: 1000
    Count: 682 Time: 0.05 s
    Count: 687 Time: 0.05 s
    Subs: 4 Msgs: 1000
    Count: 915 Time: 0.09 s
    Count: 911 Time: 0.09 s
    Count: 900 Time: 0.09 s
    Count: 0 Time: 0.00 s
    

    Code:

    from time import sleep
    import pynng
    import time
    import redis
    import zmq
    from mpire.pool import WorkerPool
    from pynng import Bus0, Pub0, Sub0
    
    from random import random
    from string import ascii_lowercase
    
    bal = [c.encode('ascii') for c in ascii_lowercase]
    msg_100000 = b''.join([bal[int(random() * 26)] for _ in range(int(1e5))])
    
    
    def main_nng_pub_sub(i, n_msgs):
        def publish():
            with Pub0() as sock:
                sock.send_buffer_size = 1000
                sock.listen("tcp://127.0.0.1:12345")
                time.sleep(0.7)
    
                start_time = time.time()
                for i in range(n_msgs):
                    sock.send(b"msg::" + msg_100000)
                for i in range(50):
                    sock.send(b"end::")
                    time.sleep(0.01)
    
        def subscribe():
            with Sub0() as sock:
                port = 12345 + i
                sock.recv_buffer_size = 1000
                # sock.listen("tcp://127.0.0.1:{}".format(port))
                sock.dial("tcp://127.0.0.1:12345")
                sock.subscribe("")
    
                start_time = None
                count = 0
    
                while True:
                    msg = sock.recv(block=True)
                    start_time = start_time or time.time(
                    )  # Start when first message receive
                    event, _ = msg.decode().split("::", maxsplit=1)
                    if event == "end":
                        print("Count: {}".format(count),
                              "Time: {:.2f} s".format(time.time() - start_time))
                        break
                    else:
                        count += 1
    
        if i == 0:
            publish()
        else:
            subscribe()
    
    
    def main_nng(i, n_msgs):
        def publish():
            with Bus0() as sock:
                sock.send_buffer_size = 1000
                sock.listen("tcp://127.0.0.1:12345")
                time.sleep(0.7)
    
                start_time = time.time()
                for i in range(n_msgs):
                    sock.send(b"msg::" + msg_100000)
                for i in range(500):
                    sock.send(b"end::")
                    time.sleep(0.01)
    
        def subscribe():
            with Bus0() as sock:
                port = 12345 + i
                sock.recv_buffer_size = 1000
                # sock.listen("tcp://127.0.0.1:{}".format(port))
                sock.dial("tcp://127.0.0.1:12345")
    
                start_time = None
                count = 0
    
                while True:
                    msg = sock.recv(block=True)
                    start_time = start_time or time.time(
                    )  # Start when first message receive
                    event, _ = msg.decode().split("::", maxsplit=1)
                    if event == "end":
                        print("Count: {}".format(count),
                              "Time: {:.2f} s".format(time.time() - start_time))
                        break
                    else:
                        count += 1
    
        if i == 0:
            publish()
        else:
            subscribe()
    
    
    def main_redis(i, n_msgs):
        r = redis.Redis(host='127.0.0.1', port=6379, db=0)
    
        def publish():
            sleep(0.7)
            for i in range(n_msgs):
                r.publish("topic", b"msg::" + msg_100000)
            r.publish("topic", b"end::")
            time.sleep(1)
    
        def subscribe():
            p: redis.client.PubSub = r.pubsub()
            p.subscribe("topic")
            count = 0
            start_time = None
    
            while True:
                msg = p.get_message(ignore_subscribe_messages=True)
                if msg == None:
                    sleep(0.001)
                    continue
                start_time = start_time or time.time(
                )  # Start when first message receive
                event, _ = msg["data"].decode().split("::", maxsplit=1)
                if event == "end":
                    print("Count: {}".format(count),
                          "Time: {:.2f} s".format(time.time() - start_time))
                    break
                else:
                    count += 1
    
        if i == 0:
            publish()
        else:
            subscribe()
    
    
    def main_zmq(i, n_msgs):
        def publish():
            context = zmq.Context()
            socket = context.socket(zmq.PUB)
            socket.setsockopt(zmq.SNDHWM, 0)  # Maximize Queue Length
            socket.bind("tcp://*:%s" % 23456)
    
            sleep(0.7)
    
            start_time = time.time()
            for i in range(n_msgs):
                socket.send(b"msg::" + msg_100000)
            for i in range(50):
                socket.send(b"end::")
                time.sleep(0.01)
    
        def subscribe():
            context = zmq.Context()
            socket = context.socket(zmq.SUB)
            socket.setsockopt(zmq.SUBSCRIBE, b"")
            socket.connect("tcp://127.0.0.1:23456")
    
            count = 0
            start_time = None
    
            while True:
                msg = socket.recv()
                start_time = start_time or time.time(
                )  # Start when first message receive
                event, _ = msg.decode().split("::", maxsplit=1)
                if event == "end":
                    print("Count: {}".format(count),
                          "Time: {:.2f} s".format(time.time() - start_time))
                    break
                else:
                    count += 1
    
        if i == 0:
            publish()
        else:
            subscribe()
    
    
    if __name__ == "__main__":
        for n_msgs in [100, 1000, 10000]:
            for n_subs in [1, 2, 4, 6]:
                print("Subs: {}".format(n_subs), "Msgs: {}".format(n_msgs))
                n_jobs = n_subs + 1
                params = [[i, n_msgs] for i in range(n_jobs)]
    
                with WorkerPool(n_jobs=n_jobs, start_method="spawn",
                                daemon=False) as pool:
                    pool.map(main_nng, params)
                time.sleep(1)
    