Seastar

High performance server-side application framework

Introduction

Seastar is an event-driven framework that allows you to write non-blocking, asynchronous code in a relatively straightforward manner (once understood). It is based on futures.

Building Seastar

For more details and alternative work-flows, read HACKING.md.

Assuming that you would like to use system packages (RPMs or DEBs) for Seastar's dependencies, first install them:

$ sudo ./install-dependencies.sh

then configure (in "release" mode):

$ ./configure.py --mode=release

then compile:

$ ninja -C build/release

If you're missing a dependency of Seastar, then it is possible to have the configuration process fetch a version of the dependency locally for development.

For example, to fetch fmt locally, configure Seastar like this:

$ ./configure.py --mode=dev --cook fmt

The --cook option can be repeated to select multiple dependencies.

Build modes

The configure.py script is a wrapper around CMake. The --mode argument maps to CMAKE_BUILD_TYPE and supports the following modes:

Mode       CMake mode          Debug info   Optimizations   Sanitizers    Allocator   Checks    Use for
debug      Debug               Yes          -O0             ASAN, UBSAN   System      All       gdb
release    RelWithDebInfo      Yes          -O3             None          Seastar     Asserts   production
dev        Dev (custom)        No           -O1             None          Seastar     Asserts   build and test cycle
sanitize   Sanitize (custom)   Yes          -Os             ASAN, UBSAN   System      All       second level of tests, track down bugs

Note that Seastar is more sensitive to allocators and optimizations than usual code. As a quick rule of thumb for relative performance: release is about 2 times faster than dev, 150 times faster than sanitize, and 300 times faster than debug.

Using Seastar from its build directory (without installation)

It's possible to consume Seastar directly from its build directory with CMake or pkg-config.

We'll assume that the Seastar repository is located in a directory at $seastar_dir.

Via pkg-config:

$ g++ my_app.cc $(pkg-config --libs --cflags --static $seastar_dir/build/release/seastar.pc) -o my_app

and with CMake using the Seastar package:

CMakeLists.txt for my_app:

find_package (Seastar REQUIRED)

add_executable (my_app
  my_app.cc)

target_link_libraries (my_app
  Seastar::seastar)

Configure and build my_app out of tree:

$ mkdir $my_app_dir/build
$ cd $my_app_dir/build
$ cmake -DCMAKE_PREFIX_PATH="$seastar_dir/build/release;$seastar_dir/build/release/_cooking/installed" -DCMAKE_MODULE_PATH=$seastar_dir/cmake $my_app_dir

The CMAKE_PREFIX_PATH values ensure that CMake can locate Seastar and its compiled submodules. The CMAKE_MODULE_PATH value ensures that CMake can use Seastar's CMake scripts for locating its dependencies.

Using an installed Seastar

You can also consume Seastar after it has been installed to the file-system.

Important:

  • Seastar works with a customized version of DPDK, so by default it builds and installs the DPDK submodule to $build_dir/_cooking/installed

First, configure the installation path:

$ ./configure.py --mode=release --prefix=/usr/local

then run the install target:

$ ninja -C build/release install

then consume it from pkg-config:

$ g++ my_app.cc $(pkg-config --libs --cflags --static seastar) -o my_app

or consume it with the same CMakeLists.txt as before but with a simpler CMake invocation:

$ cmake ..

(If Seastar has not been installed to a "standard" location like /usr or /usr/local, then you can invoke CMake with -DCMAKE_PREFIX_PATH=$my_install_root.)

There are also instructions for building on any host that supports Docker.

Use of the DPDK is optional.

Seastar's C++ dialect: C++17 or C++20

Seastar supports both C++17 and C++20. The build defaults to the latest dialect supported by your compiler, but a dialect can be selected explicitly with the --c++-dialect configure option (e.g., --c++-dialect=gnu++17) or, if using CMake directly, by setting the Seastar_CXX_DIALECT CMake variable.
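A quick sketch of selecting the dialect both ways. The flag and variable names come from the text above; the rest of each command line (mode, source directory) is illustrative:

```shell
# Via the configure.py wrapper:
./configure.py --mode=release --c++-dialect=gnu++20

# Or, when invoking CMake directly:
cmake -DSeastar_CXX_DIALECT=gnu++20 ..
```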

See the compatibility statement for more information.

Getting started

There is a mini tutorial and a more comprehensive one.

The documentation is available on the web.

Resources

Ask questions and post patches on the development mailing list. Subscription information and archives are available here, or just send an email to [email protected].

Information can be found on the main project website.

File bug reports on the project issue tracker.

The Native TCP/IP Stack

Seastar comes with its own userspace TCP/IP stack for better performance.

Recommended hardware configuration for Seastar

  • CPUs - as many as you need; Seastar is highly friendly to multi-core and NUMA.
  • NICs - as fast as possible; we recommend 10G or 40G cards. 1G cards can also be used, but you may be limited by their capacity. In addition, the more hardware queues per CPU, the better for Seastar; otherwise the queues have to be emulated in software.
  • Disks - fast SSDs with a high number of IOPS.
  • Client machines - usually a single client machine cannot load our servers. Neither memaslap (for memcached) nor wrk (for httpd) can overload its matching server counterpart. We recommend running the clients on machines other than the servers, and using several of them.

Projects using Seastar

  • cpv-cql-driver: C++ driver for Cassandra/Scylla based on seastar framework
  • cpv-framework: A web framework written in c++ based on seastar framework
  • redpanda: A Kafka replacement for mission critical systems
  • Scylla: A fast and reliable NoSQL data store compatible with Cassandra and DynamoDB
  • smf: The fastest RPC in the West
Comments
  • seastar.pc: Neither "--static" nor without "--static" does the right thing

    When I try to build a Seastar application with

    $ c++ getting-started.cc `pkg-config --cflags --libs --static ./build/release/seastar.pc`
    

    The linking fails when trying to find /usr/lib64/libunistring.so, -ltspi and -lidn2. I do have all three libraries installed on my machine, but only the runtime packages, not the development packages (libunistring-devel et al.). For example, the runtime package includes /usr/lib64/libunistring.so.2, but NOT a symbolic link /usr/lib64/libunistring.so.

    That should be enough - Seastar doesn't use any of these packages directly; it just uses GNU TLS, and cmake decided that GNU TLS needs libunistring.so, -ltspi and -lidn2. But since GNU TLS is a shared library, it brings its own dependencies with it and knows their full paths (e.g. /usr/lib64/libunistring.so.2) - see ldd /usr/lib64/libgnutls.so - so it does NOT need the symlinks from the development packages to be installed.

    Note that building without "--static" doesn't work either; it misses, for example, "-lrt".

  • flush hangs if called many times in parallel

    I have recently run into an issue in ScyllaDB where triggering many SSTable writes in parallel would hang the system.

    I have managed to come up with a reproducer that does not use any ScyllaDB infrastructure at all, meaning the bug is either in Seastar or in the kernel.

    I have put the reproducer in my seastar tree, branch issue-flushes, 6c30c64355e5e

    After more than 5 minutes of waiting, this test does not complete. I monitor its activities with iostat during the whole duration of the test, and at some point, all disk activity just ceases.

    Replacing the parallel_for_each call with do_for_each, the test finishes in slightly over a minute - with mild disk activity always present.

    That indicates that the kernel is probably not the culprit, since from the kernel's PoV both cases are almost the same (the thread pool serializes the flushes). Also, I have run this test on both XFS and ext4, with similar results. On top of that, @penberg ran it with BTRFS and also saw similar hangs.

    However, for completeness of information, here is what I am running:

    $ uname -a
    Linux trueserverstrongandfree 4.4.5-300.vanilla.knurd.1.fc23.x86_64 #1 SMP Thu Mar 10 06:32:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    

    Reducing the number of iterations from 10000 to something lower makes it work. But if, aside from the flush, I also write something to the file, then I can reproduce this with as few as 500 iterations. The code for that is in the branch issue-flushes-writes, c8eec3dcd71d34f7d, of my seastar tree.

    I suspected that the syscall mechanism itself might be at fault. To test that hypothesis, I implemented a version of sleep that calls into posix_sleep, and issued as many as 10000 calls that would each sleep for 10ms. I did not manage to trigger the issue that way.

    Other things that I have already ruled out:

    • seastar sleep mode: running with --poll-mode, I can still reproduce the issue
    • I/O scheduler: running with --max-io-requests 100000 I can still reproduce the issue. Also, the fact that I have seen this without any writes, just flushes, means that much of the I/O queue, if not all of it, is bypassed.
  • coroutine: exception: retain exception_ptr type

    Currently, make_exception/return_exception call std::make_exception_ptr unconditionally. When called over a std::exception_ptr, std::make_exception_ptr loses the arg's exception type and creates an untyped exception_ptr.

    Instead, use a templated constructor for struct exception that just moves the given exception_ptr into its member to retain its type. It's also a bit more efficient, since it saves the construction of a new exception_ptr and one move of it (into the constructor's parameter).

    Added a respective unit test.

    Signed-off-by: Benny Halevy [email protected]

  • util: log-impl: rework log_buf::inserter_iterator

    inserter_iterator is bugged, because postincrement can push _buf->_current over _buf->_end, resulting in a buffer overflow during the following call to fill_buffer(). Fix that.

    Since increment and dereference operators for an inserter iterator are confusing, this commit makes them a no-op, and moves all the work to operator=, which is exactly what std::back_insert_iterator does.

    Fixes #868

  • io_queues: add max_band and max_iops configuration per io-queue

    HEAD: 6973080cd1045675af890f155b0fcd7308b4277c

    Description: Our io-queue idea heavily resembles the systemd.resource-control interface: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#. However, it clearly has only a few of systemd's many configuration pivots.

    We probably don't need to implement everything at once but adding some when we need them (and we do!) definitely makes sense.

    Our shares are parallel to IOWeight, and I suggest adding pivots parallel to IOXxxBandwidthMax and IOXxxIOPSMax.

    This would allow limiting the corresponding io-queue to specific max bandwidth/IOPS values, thereby ensuring the required QoS for other classes.

  • Running getting-started.cc reports a segmentation fault

    Hello world Segmentation fault on shard 7. Backtrace: 0x00000000005c24fc 0x00000000005c2620 0x00000000005c2695 /lib/x86_64-linux-gnu/libpthread.so.0+0x000000000001138f 0x00000000006135f3 0x0000000000618ae7 0x0000000000675bdd /lib/x86_64-linux-gnu/libpthread.so.0+0x00000000000076b9 /lib/x86_64-linux-gnu/libc.so.6+0x000000000010741c Segmentation fault (core dumped)

  • Add simple http client

    There's an HTTP server implementation in seastar, but client code is usually written by sending raw strings into a connected_socket's stream and reading raw strings back from another socket stream. This set provides the foundation for a more convenient HTTP client API. The proposed API summary is in the patch titled "http: Basic connection implementation".

    fixes: #761

  • aio_write() blocks reactor thread for 3.3 sec on xfs_ilock with kworkers

    Probably related to concurrent fdatasync (file::flush()).

    Refs https://github.com/scylladb/scylla/issues/2127

    Setup details here

    perf_fast_forwa  1632 [000]   396.404031: sched:sched_switch: perf_fast_forwa:1632 [120] D ==> timer-0:1634 [120]
                7fff9287472f __schedule ([kernel.kallsyms])
                7fff92874d16 schedule ([kernel.kallsyms])
                7fff92878491 rwsem_down_write_failed ([kernel.kallsyms])
                7fff9242cd27 call_rwsem_down_write_failed ([kernel.kallsyms])
                7fff92877a3d down_write ([kernel.kallsyms])
                7fffc05d2fe3 xfs_ilock ([kernel.kallsyms])
                7fffc05d01db xfs_vn_update_time ([kernel.kallsyms])
                7fff92285ade file_update_time ([kernel.kallsyms])
                7fffc05c3dff xfs_file_aio_write_checks ([kernel.kallsyms])
                7fffc05c405c xfs_file_dio_aio_write ([kernel.kallsyms])
                7fffc05c456d xfs_file_write_iter ([kernel.kallsyms])
                7fff922bc5d3 aio_write ([kernel.kallsyms])
                7fff922bcec1 do_io_submit ([kernel.kallsyms])
                7fff922bdd40 sys_io_submit ([kernel.kallsyms])
                7fff9287a6b7 entry_SYSCALL_64_fastpath ([kernel.kallsyms])
                         687 io_submit (/usr/lib64/libaio.so.1.0.1)
                      112373 seastar::reactor::flush_pending_aio (/home/tgrabiec/src/scylla/build/release/tests/perf/perf_fast_forward_g)
    

    Woken up:

    kworker/0:1   137 [000]   399.725111: sched:sched_wakeup: perf_fast_forwa:1632 [120] success=1 CPU:000
                7fff920d11b9 ttwu_do_wakeup ([kernel.kallsyms])
                7fff920d12d7 ttwu_do_activate ([kernel.kallsyms])
                7fff920d1f00 try_to_wake_up ([kernel.kallsyms])
                7fff920d21f4 wake_up_q ([kernel.kallsyms])
                7fff920f624e rwsem_wake ([kernel.kallsyms])
                7fff9242cd8b call_rwsem_wake ([kernel.kallsyms])
                7fff920f4a15 up_write ([kernel.kallsyms])
                7fffc05d3314 xfs_iunlock ([kernel.kallsyms])
                7fffc05cfc6d xfs_iomap_write_unwritten ([kernel.kallsyms])
                7fffc05c31ef xfs_dio_write_end_io ([kernel.kallsyms])
                7fff922d3cd0 iomap_dio_complete ([kernel.kallsyms])
                7fff922d3d65 iomap_dio_complete_work ([kernel.kallsyms])
                7fff920c0593 process_one_work ([kernel.kallsyms])
                7fff920c080a worker_thread ([kernel.kallsyms])
                7fff920c6e55 kthread ([kernel.kallsyms])
                7fff9287a905 ret_from_fork ([kernel.kallsyms])
    

    There are multiple kworkers active, blocking each other on xfs_ilock and buf lock. Example kworker wake point:

    kworker/0:1H   631 [000]   399.724993: sched:sched_wakeup: kworker/0:0:3 [120] success=1 CPU:000
                7fff920d11b9 ttwu_do_wakeup ([kernel.kallsyms])
                7fff920d12d7 ttwu_do_activate ([kernel.kallsyms])
                7fff920d1f00 try_to_wake_up ([kernel.kallsyms])
                7fff920d21a5 wake_up_process ([kernel.kallsyms])
                7fff92877678 __up.isra.0 ([kernel.kallsyms])
                7fff920f4856 up ([kernel.kallsyms])
                7fffc05bea0a xfs_buf_unlock ([kernel.kallsyms])
                7fffc05ea06c xfs_buf_item_unpin ([kernel.kallsyms])
                7fffc05e2313 xfs_trans_committed_bulk ([kernel.kallsyms])
                7fffc05e7527 xlog_cil_committed ([kernel.kallsyms])
                7fffc05e4199 xlog_state_do_callback ([kernel.kallsyms])
                7fffc05e435d xlog_state_done_syncing ([kernel.kallsyms])
                7fffc05e4400 xlog_iodone ([kernel.kallsyms])
                7fffc05beadb xfs_buf_ioend ([kernel.kallsyms])
                7fffc05bec25 xfs_buf_ioend_work ([kernel.kallsyms])
                7fff920c0593 process_one_work ([kernel.kallsyms])
                7fff920c080a worker_thread ([kernel.kallsyms])
                7fff920c6e55 kthread ([kernel.kallsyms])
                7fff9287a905 ret_from_fork ([kernel.kallsyms])
    
    
  • Log messages are inhibited due to logger::silencer on a concurrent shard

    Loggers are typically global, so the logger::silencer in https://github.com/scylladb/seastar/blob/d27bf8b5a14e5b9e9c9df18fd1306489b651aa42/src/util/log.cc#L273 causes log calls on other shards to be temporarily silenced as well.

    Also, reversing the order of destruction of concurrent silencers on the same global logger instance might leave the level at -1.

    level = info
    silencer 0: level := -1, _level=info
    silencer 1: level := -1, _level=-1
    ...
    ~silencer 0: level := info
    ~silencer 1: level := -1
    
  • unit.tls sometimes fails on short read

    .../unit/tls_test.cc(89): error: in "test_x509_client_with_builder_system_trust_multiple": check buf.size() > 8 has failed
    

    Failing assertion:

                                auto f = in.read();
                                return f.then([](temporary_buffer<char> buf) {
                                    // std::cout << buf.get() << std::endl;
    
                                    // Avoid passing a nullptr as an argument of strncmp().
                                    // If the temporary_buffer is empty (e.g. due to the underlying TCP connection
                                    // being reset) passing the buf.get() (which would be a nullptr) to strncmp()
                                    // causes a runtime error which masks the actual issue.
                                    if (buf) {
                                        BOOST_CHECK(strncmp(buf.get(), "HTTP/", 5) == 0);
                                    }
    This one >>>                    BOOST_CHECK(buf.size() > 8);
                                });
    

    examples: https://app.circleci.com/pipelines/github/scylladb/seastar/1306/workflows/34276448-c252-4b70-979c-e200e290a024/jobs/2577 https://app.circleci.com/pipelines/github/scylladb/seastar/1309/workflows/9be37ca0-183b-4225-bbdf-a0095be22261/jobs/2619

  • reactor: support more network ops in io_uring backend

    • Reorder required_ops in try_create_uring to match the exact order of io_uring_op in linux/include/uapi/linux/io_uring.h. The io_uring_op enum is defined in the chronological order in which the ops were added. Also add comments noting the exact Linux version in which each op was added.

    • Move speculation out of reactor_backend_uring::poll(), so we can be more explicit when calling readable() or writeable(): if take_speculation() is called, we want speculation explicitly; if poll() is called, the outcome should be a readable or writeable fd.

    • Document the take_speculation() and speculate_epoll() methods.

    • Rewrite reactor_backend_uring::read_some() and reactor_backend_uring::write_some() with io_uring ops. Please note this quote from the commit message of Linux source commit d7718a9d:

      io_uring tries any request in a non-blocking manner, if it can, and then retries from a worker thread if we get -EAGAIN.

      So io_uring never returns EAGAIN, and after Linux v5.7 it even polls for the events before performing I/O if -EAGAIN is returned. We therefore don't have to poll for the event, or open the file/socket without SOCK_NONBLOCK or O_NONBLOCK, in order to prevent io_uring from returning -EAGAIN.

    Tested on an 8-core x86 machine with httpd --smp 1 (release build) and wrk -c 128 -t 4. The aio backend showed:

    Running 10s test @ http://localhost:10000/
      4 threads and 128 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.49ms  109.56us   3.81ms   87.41%
        Req/Sec    21.51k   783.10    28.03k    87.84%
      862627 requests in 10.10s, 112.71MB read
    Requests/sec:  85407.13
    Transfer/sec:     11.16MB
    

    while the io_uring backend gave us:

    Running 10s test @ http://localhost:10000/
      4 threads and 128 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.35ms  400.14us  37.64ms   98.36%
        Req/Sec    23.94k     2.36k   26.30k    70.25%
      952788 requests in 10.00s, 124.48MB read
    Requests/sec:  95266.43
    Transfer/sec:     12.45MB
    

    Signed-off-by: Kefu Chai [email protected]

  • Added safe pointer utility

    A safe pointer is a mechanism for interacting with a possibly destructed object: Sometimes some object A needs to interact with another object B, e.g., A calls one of B's methods. If B happens to be already destructed (and freed), this results in a use-after-free bug. One way to avoid this problem is for A to manage the lifetime of B via a unique_ptr or shared_ptr. Sometimes, however, it is undesirable to have such ownership relation, and instead A would like to just skip the interaction in case B has already died (in the above example, A would simply not call the method on B). This class provides a solution. See example below.

    @reviewers:

    • Do you think this is a useful utility for seastar? (If so, I will add a unit test.)
    • Currently, I throw an exception when dereferencing a pointer to a dead object. We can also make it configurable, like the action in seastar::checked_ptr.

    Example: B is given a safe_ptr_factory as class member:

    class B {
      public:
        safe_ptr_factory<B> sp_factory;
        B() : sp_factory(this) {}
    };
    
    safe_ptr<B> sp;
    {
      B b;
      sp = b.sp_factory.get_safe_ptr();
    } // b destructs here
    
    *sp; // throws exception
    

    (As a TODO, I need to add a const overload for get_safe_ptr() and fix const-correctness.)

  • reactor: allow registering handler multiple times for a signal.

    std::unordered_map::emplace() inserts a new element, constructed in-place with the given args, only if there is no element with that key in the container. This means we could register a handler for a specific signal only once, which is not reasonable.

    We often use stop_signal (which should have been a standard component of seastar, but isn't yet) to manage SIGINT/SIGTERM for gracefully stopping an application. It waits until receiving SIGINT or SIGTERM, and then on destruction registers a no-op handler to unregister the handler registered on construction. This could not work properly before this patch, however, since a handler could be registered for a signal only once.

    Signed-off-by: Jianyong Chen [email protected]

  • build: add FindBenchmark.cmake

    The goal of this PR is to request comments on adding benchmark tests for utilities used on critical paths.

    With the build facilities set up here, we will be able to run tests to understand the performance of utilities with minimal effort.

  • httpd: optimize header field assignment

    We look up the field name twice, even in the simple case: once to check if it already exists, and a second time to insert it.

    Optimize by using try_emplace to merge the check and insertion into a single lookup.

  • Use an integral type for uniform_int_distribution

    char is not a supported type, and in particular is rejected by libc++-15. Change it to int and explicitly set the bounds where necessary.

    This is a re-submission of https://github.com/scylladb/seastar/pull/1317 to see if this fixes the Circle CI issue.

    cc @BenPope

  • reactor: use aio to implement reactor_backend_uring::read()

    This change reverts https://github.com/tchaikov/seastar/commit/64480988a3dd531603e46010693f9a87baa9437c. Per our tests on Linux 5.11.0, with the io_uring backend the read op returns -EAGAIN even after poll(EPOLLIN) returns, when the fd was created using file_desc::inotify_init(). This practically fails all tests in fsnotifier_test.cc, and also some tests in tls_test.cc exercising reloadable_credentials, as reloadable_credentials uses inotify to watch for changes to the credential files.

    But on a newer kernel like Linux 6.0, the io_uring backend works fine with inotify. Since inotify is not used on critical paths, we can tolerate this hybrid implementation for the moment; we can use the io_uring backend once CircleCI offers an image with a newer Linux kernel, at which point we will need to bump the required kernel version for using some io_uring features.

    Fixes #1386 Fixes #1387 Signed-off-by: Kefu Chai [email protected]
