A high-performance REST Toolkit written in C++




Pistache is a modern and elegant HTTP and REST framework for C++. It is written entirely in pure C++14 and provides a clear and pleasant API.


We are still looking for a volunteer to fully document the API. In the meantime, partial documentation is available at http://pistache.io. If you are interested in helping with this, please open an issue ticket.


Pistache has the following third-party dependencies:


Pistache is released under the Apache License 2.0. Contributors are welcome!

Pistache was originally created by Mathieu Stefani, but he is no longer actively maintaining it. A team of volunteers has taken over. To reach the original maintainer, drop a private message to @octal in the cpplang Slack channel.

For those that prefer IRC over Slack, the rag-tag crew of maintainers idle in #pistache on Freenode. Please come and join us!

The Launchpad Team administers the daily and stable Ubuntu pre-compiled packages.

Release Versioning

Please update version.txt accordingly with each unstable or stable release.

Interface Versioning

The version of the library's public interface (ABI) is not the same as the release version. The interface version tracks the external interface of the library, and different platforms (such as AIX, GNU/Linux, and Solaris) handle interface versioning differently.

GNU Libtool abstracts each platform's idiosyncrasies away because it is more portable than using ar(1) or ranlib(1) directly. However, it is a pain to integrate with CMake so we made do without it by setting the SONAME directly.

When Pistache is installed it will normally ship:

  • libpistache-<release>.so.X.Y: This is the actual shared-library binary file. The X and Y values are the major and minor interface versions respectively.

  • libpistache-<release>.so.X: This is the soname soft link that points to the binary file. It is what other programs and other libraries reference internally. You should never need to directly reference this file in your build environment.

  • libpistache-<release>.so: This is the linker name entry. This is also a soft link that refers to the soname with the highest major interface version. This linker name is what is referred to on the linker command line. This file is created by the installation process.

  • libpistache-<release>.a: This is the static archive form of the library. Since when using a static library all of its symbols are normally resolved before runtime, an interface version in the filename is unnecessary.

If your contribution has modified the interface, you may need to update the major or minor interface versions. Otherwise user applications and build environments will eventually break. This is because they will attempt to link against an incorrect version of the library -- or worse, link correctly but with undefined runtime behaviour.

The major version should be incremented every time a non-backward-compatible change occurs in the ABI. The minor version should be incremented every time a backward-compatible change occurs. This can be done by modifying version.txt accordingly.

Precompiled Packages

If you have no need to modify the Pistache source, we strongly recommend using the precompiled packages for your distribution. This will save you time.

Debian and Ubuntu

We have submitted a Request for Packaging downstream to Debian. Once we have an official Debian package maintainer intimately familiar with the Debian Policy Manual, we can expect to eventually see Pistache become available in Debian and all derivatives (e.g. Ubuntu and many others).

Until then, Pistache has a partially compliant upstream Debianization. Our long-term goal is to have our source package properly Debianized downstream by a Debian Policy Manual SME. In the meantime, consider using our PPAs to avoid having to build from source.

Supported Architectures

Currently Pistache is built and tested on a number of architectures. Some of these are suitable for desktop or server use and others for embedded environments. As of this writing we do not have any MIPS-related packages that have been either built or tested. The ppc64el builds are occasionally tested on POWER9 hardware, courtesy of IBM.

  • amd64
  • arm64
  • armhf
  • i386
  • ppc64el
  • s390x

Ubuntu PPA (Unstable)

The project builds daily unstable snapshots in a separate unstable PPA. To use it, run the following:

$ sudo add-apt-repository ppa:pistache+team/unstable
$ sudo apt update
$ sudo apt install libpistache-dev

Ubuntu PPA (Stable)

Currently there are no stable releases of Pistache published to the stable PPA. However, when that time comes, run the following to install a stable package:

$ sudo add-apt-repository ppa:pistache+team/stable
$ sudo apt update
$ sudo apt install libpistache-dev

Other Distributions

Package maintainers, please insert instructions for users to install pre-compiled packages from your respective repositories here.

Use via pkg-config

If you would like your project's build environment to automatically use the appropriate compiler and linker build flags, pkg-config can greatly simplify things. It is the de facto portable standard for determining build flags. The development packages include a pkg-config manifest.

GNU Autotools

To use with the GNU Autotools, as an example, include the following snippet in your project's configure.ac:

    # Pistache...
    PKG_CHECK_MODULES(
        [libpistache], [libpistache >= 0.0.2], [],
        [AC_MSG_ERROR([libpistache >= 0.0.2 missing...])])


CMake

To use with a CMake build environment, use the find_package command. Here is an example:

    cmake_minimum_required(VERSION 3.4 FATAL_ERROR)
    project(pistache_example)

    # Find the library.
    find_package(Pistache 0.0.2 REQUIRED)

    add_executable(${PROJECT_NAME} main.cpp)
    target_link_libraries(${PROJECT_NAME} pistache_shared)


To use within a vanilla makefile, you can call pkg-config directly to supply compiler and linker flags using GNU make's shell function.

    CFLAGS = -g3 -Wall -Wextra -Werror ...
    LDFLAGS = -lfoo ...
    CFLAGS += $(shell pkg-config --cflags libpistache)
    LDFLAGS += $(shell pkg-config --libs libpistache)

Building from source

To download the latest available release, clone the repository from GitHub.

    $ git clone https://github.com/oktal/pistache.git

Then, init the submodules:

    $ git submodule update --init

Now, compile the sources:

    $ cd pistache
    $ mkdir -p {build,prefix}
    $ cd build
    $ cmake -G "Unix Makefiles" \
        -DCMAKE_BUILD_TYPE=Release \
        -DPISTACHE_BUILD_DOCS=false \
        -DPISTACHE_USE_SSL=true \
        -DCMAKE_INSTALL_PREFIX=$PWD/../prefix \
        ../
    $ make -j
    $ make install

If you chose to build the examples, then run the following from the build directory:

    $ cd examples
    $ make -j

Optionally, you can also build and run the tests (tests require the examples):

    $ cmake -G "Unix Makefiles" -DPISTACHE_BUILD_EXAMPLES=true -DPISTACHE_BUILD_TESTS=true ..
    $ make test test_memcheck

Be patient, as async_test can take some time to complete. And that's it: you can now start playing with your newly installed Pistache framework.

Some other CMake defines:

Option                         Default  Description
PISTACHE_BUILD_EXAMPLES        False    Build all of the example apps
PISTACHE_BUILD_TESTS           False    Build all of the unit tests
PISTACHE_ENABLE_NETWORK_TESTS  True     Run unit tests requiring remote network access
PISTACHE_USE_SSL               False    Build server with SSL support

Continuous Integration Testing

It is important that all patches pass unit testing. Unfortunately, developers make all kinds of changes to their local development environment that can have unintended consequences. This can mean that tests sometimes pass on the developer's computer when they should not, and at other times fail when they should not.

To properly validate that things are working, continuous integration (CI) is required. This means compiling, performing local in-tree unit tests, installing through the system package manager, and finally testing the actually installed build artifacts to ensure they do what the user expects them to do.

The key thing to remember is that, in order to do this properly, it all needs to be done within a realistic end-user system that hasn't been unintentionally modified by a developer. This might mean a chroot or container, or a virtual machine with the help of QEMU and KVM, to verify that everything is working as expected. The hermetically sealed test environment validates that the developer's expected steps for compilation, linking, unit testing, and post-installation testing are actually replicable.

There are different ways of performing CI on different distros. One of the most common is the DEP-8 standard, used by Debian and its many derivatives.


On Debian-based distributions, autopkgtest implements the DEP-8 standard. To create and use a build image environment for Ubuntu, follow these steps. First, install the autopkgtest(1) tools:

$ sudo apt install autopkgtest

Next, create the test image, substituting groovy and amd64 with other releases or architectures as needed:

$ autopkgtest-buildvm-ubuntu-cloud -r groovy -a amd64

Generate a Pistache source package in the parent directory of pistache_source:

$ cd pistache_source
$ sudo apt build-dep pistache
$ ./debian/rules get-orig-source
$ debuild -S -sa -uc

Test the source package on the host architecture in QEMU with KVM support and 8GB of RAM and four CPUs:

$ autopkgtest --shell-fail --apt-upgrade ../pistache_(...).dsc -- \
      qemu --ram-size=8192 --cpus=4 --show-boot path_to_build_image.img


Hello World (server)

#include <pistache/endpoint.h>

using namespace Pistache;

struct HelloHandler : public Http::Handler {
  HTTP_PROTOTYPE(HelloHandler)

  void onRequest(const Http::Request&, Http::ResponseWriter writer) override {
    writer.send(Http::Code::Ok, "Hello, World!");
  }
};

int main() {
  Http::listenAndServe<HelloHandler>(Pistache::Address("*:9080"));
}
  • Add Meson support

    Add Meson support

    This PR addresses a few things: Meson support, a fix to parse_RFC_850(), re-added support for code coverage in CMake, and an updated Travis config. All the details about these four things are in their respective commit descriptions :)

    If you don't know what Meson is, I highly recommend checking it out. Meson build files are far more readable and maintainable than CMake ones, and the build system is also faster, which is never a bad thing.

  • Can someone describe the process of installing Pistache on Centos 6.5 etc

    Can someone describe the process of installing Pistache on Centos 6.5 etc

    I have actually finished my program and it does exactly what I want, but I have been doing it in Code::Blocks on an Ubuntu system. Now I want to run my program on a CentOS headless server. I have made a Makefile and it works on Ubuntu without errors, but there Pistache is already installed.

    So I was wondering if someone could talk me through the process of installing Pistache on CentOS. Is there a yum command to install Pistache, or do I just clone Pistache from git, use the Makefile to build it, and then copy the pistache folder to my project?

    Bear in mind I am new to Linux and I usually do everything in Windows, so if my question sounds stupid, that is why.

  • fix #1030

    fix #1030

    1. With keep-alive connections, the server endpoint sends a 408 message after the idle timeout, which is not expected.
    2. I added a keep-alive timeout option. Before starting the HTTP service, you can set the keep-alive time for long-lived connections. If the idle time exceeds this value, the long-lived connection can be closed.
    3. I used tcpdump to capture packets, confirming that the 408 is not sent and the connection is directly closed.
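
    The decision described above can be sketched independently of Pistache; shouldCloseIdleConnection and both parameter names are illustrative, not the actual option added by this PR:

    ```cpp
    #include <chrono>

    // Hedged sketch of the idea in this PR: close an idle keep-alive
    // connection once the configured timeout has elapsed, instead of
    // emitting a 408. Names are illustrative, not Pistache API.
    bool shouldCloseIdleConnection(std::chrono::steady_clock::time_point lastActivity,
                                   std::chrono::milliseconds keepaliveTimeout) {
        return std::chrono::steady_clock::now() - lastActivity >= keepaliveTimeout;
    }
    ```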
  • Possible solution for BUG#1007

    Possible solution for BUG#1007

    I'm also currently facing the issue described in #1007. I added the MSG_NOSIGNAL flag as proposed by @amang8662, but there was still the problem of how to find out whether the socket is still alive. So I checked the documentation (https://linux.die.net/man/3/getsockopt). With the function getsockopt() you are able to retrieve the last error on the socket. In case the pipe breaks, the error is returned through this function, so we can check whether the stream is still alive by testing the returned value.

    I added the function isOpen() which returns true if there isn't an error on the socket and false otherwise.

    This is my first PR in any open-source project. So I hope for leniency.
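
    The check described above could look roughly like the following POSIX sketch (isOpen here is a standalone illustration, not the exact Pistache code):

    ```cpp
    #include <sys/socket.h>

    // Query the socket's pending error with getsockopt() and treat any
    // error, or a failed query, as "not open".
    bool isOpen(int fd) {
        int error = 0;
        socklen_t len = sizeof(error);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &error, &len) < 0)
            return false;  // e.g. EBADF: fd is not a valid socket
        return error == 0; // no pending error means the socket looks healthy
    }
    ```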

  • Dynamic arraybuf

    Dynamic arraybuf

    • Changed ingestion buffer to std::vector instead of a stack-based array. There will be a performance penalty, but it allows for file uploads
    • Added Const::MaxPayload to avoid killing hosts due to massive payloads.
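
    The two ideas can be sketched like this; kMaxPayload and ingest are illustrative stand-ins for the buffer logic and the Const::MaxPayload constant mentioned above:

    ```cpp
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Sketch: a growable std::vector ingestion buffer instead of a fixed
    // stack array, capped at a maximum payload size.
    constexpr std::size_t kMaxPayload = 8 * 1024 * 1024; // illustrative limit

    void ingest(std::vector<char>& buffer, const char* data, std::size_t len) {
        if (buffer.size() + len > kMaxPayload)
            throw std::runtime_error("payload exceeds maximum allowed size");
        buffer.insert(buffer.end(), data, data + len); // vector grows as needed
    }
    ```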
  • [ENDPOINT] NEW: Now can use SSL encryption on HTTP endpoint

    [ENDPOINT] NEW: Now can use SSL encryption on HTTP endpoint


    This patch introduces full SSL support for the Pistache endpoint. The purpose of this patch is to be able to use SSL encryption without an SSL proxy, which is pretty useful for embedded development.

    It has been tested locally with simple SSL support and certificate connection, my example file is below. I did try to match the coding style of pistache in order to ease the merge, apologies if I missed anything.


    First of all, in order to enable SSL support in the library, one has to compile with the PISTACHE_USE_SSL option ON. One can then configure the endpoint to accept/use SSL connections only:

    Http::Endpoint server(addr);
    server.useSSL("./server.crt", "./cert/server.key");

    With that done, all the connections to the server shall now be in HTTPS.

    One can also enable certificate authentication against the server:

    server.useSSLAuth("./cert/rootCA.crt");

    The server will now only accept client connections with a certificate signed by the Certificate Authority passed in the call above.


    Full test file:

    #include <pistache/endpoint.h>

    using namespace Pistache;

    struct HelloHandler : public Http::Handler {
        HTTP_PROTOTYPE(HelloHandler)

        void onRequest(const Http::Request& request, Http::ResponseWriter writer) override {
            writer.send(Http::Code::Ok, "Hello, World!");
        }
    };

    int main(void) {
        auto opts = Http::Endpoint::options().threads(1);
        Pistache::Address addr(Pistache::Ipv4::any(), Pistache::Port(9080));
        Http::Endpoint server(addr);
        server.init(opts);
        server.useSSL("./cert/server.crt", "./cert/server.key");
        server.setHandler(Http::make_handler<HelloHandler>());
        server.serve();
        return 0;
    }

    Compiled with:

    g++ main.cpp -L../pistache/build/src/ -lpistache -I../pistache/include/ -o server -lpthread -lssl -lcrypto

    In order to generate a Certificate Authority and server / client certificate and key, please refer to this gist


    Simple CURL:

    $> curl --cacert CA/rootCA.crt -k                        
    Hello, World!% 

    Here CA/rootCA.crt is my root Certificate Authority, used to generate the server certificate. The -k is there to ignore the curl error:

    curl: (51) SSL: certificate subject name 'Common Name Test Server' does not match target host name ''

    Since I did not generate a server certificate for that host name.

    SSL authentication

    With the following line added in my test file:

    server.useSSLAuth("./cert/rootCA.crt");

    Simple CURL:

    $> curl --cacert CA/rootCA.crt -k
    curl: (35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure

    With proper client cert & key:

    $> curl --cacert CA/rootCA.crt --cert ./client_cert/client.crt --key client_cert/client.key -k
    Hello, World!%

    Same usage of the -k option as above.

    Verbose output

    $> curl --cacert CA/rootCA.crt --cert ./client_cert/client.crt --key client_cert/client.key -k -v
    *   Trying
    * TCP_NODELAY set
    * Connected to ( port 9080 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * successfully set certificate verify locations:
    *   CAfile: CA/rootCA.crt
      CApath: none
    * TLSv1.2 (OUT), TLS handshake, Client hello (1):
    * TLSv1.2 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
    * TLSv1.2 (IN), TLS handshake, Request CERT (13):
    * TLSv1.2 (IN), TLS handshake, Server finished (14):
    * TLSv1.2 (OUT), TLS handshake, Certificate (11):
    * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
    * TLSv1.2 (OUT), TLS handshake, CERT verify (15):
    * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
    * TLSv1.2 (OUT), TLS handshake, Finished (20):
    * TLSv1.2 (IN), TLS handshake, Finished (20):
    * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
    * ALPN, server did not agree to a protocol
    * Server certificate:
    *  subject: C=FR; ST=Paris; L=Paris; O=XXX; CN=Common Name Test Server
    *  start date: May 29 15:59:03 2018 GMT
    *  expire date: Oct 11 15:59:03 2019 GMT
    *  issuer: C=FR; ST=Paris; L=Paris; O=XXX; CN=Common Name Root Authority; [email protected]
    *  SSL certificate verify ok.
    > GET /test HTTP/1.1
    > Host:
    > User-Agent: curl/7.60.0
    > Accept: */*
    < HTTP/1.1 200 OK
    < Connection: Keep-Alive
    < Content-Length: 13
    * Connection #0 to host left intact
    Hello, World!% 
  • single async response multithreads

    single async response multithreads

    I am trying to write a c++ pistache server that, on a specific endpoint, has to contact another pistache server. This is the scenario:

    client -> server1 -> server2

    client <- server1 <- server2

    I am having problems waiting for the response in server1 and sending it back to the client asynchronously.

    I share my server1 code to show you how I did implement it.

    void doSmth(const Rest::Request& request, Http::ResponseWriter httpResponse) {
        auto resp_srv2 = client.post(addr).body(json).send();
        resp_srv2.then(
            [&](Http::Response response) { httpResponse.send(response.code(), response.body()); },
            [&](std::exception_ptr exc) {
                PrintException excPrinter;
                excPrinter(exc);
            });
        Async::Barrier<Http::Response> barrier(resp_srv2);
        barrier.wait();
    }

    In particular, this code works only if I set a single thread for server1. If I use more threads, it crashes on the first request. And of course the performance is not that good.

    If I try to follow your example, I am able to use more threads. But this has a problem (overflow?) with the request count. On my server, I can reach around 28k requests sent from the client to server1, and after that it crashes.

    Also, if I do like:

    		    [&](Http::Response response) {
    		    [&](std::exception_ptr exc) {
    		        PrintException excPrinter;

    it does not work well, giving back a segmentation fault when the method is called.

    Am I doing something wrong? Please, help me to fix this issue.
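
    For contrast, here is a Pistache-independent sketch of the blocking pattern being attempted, with each worker waiting on its own future; callServer2 and handleFromServer1 are hypothetical stand-ins, not Pistache API:

    ```cpp
    #include <future>
    #include <string>

    // Stand-in for the real HTTP request from server1 to server2.
    std::string callServer2(const std::string& body) {
        return "echo:" + body; // pretend server2 echoes the payload
    }

    // Each worker blocks on its own future for the downstream call instead
    // of sharing one response object across threads.
    std::string handleFromServer1(const std::string& clientBody) {
        auto reply = std::async(std::launch::async, callServer2, clientBody);
        return reply.get(); // block this worker until server2 answers
    }
    ```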

  • Upgrade to C++17 and use std::optional

    Upgrade to C++17 and use std::optional

    Fixes #858.

    As per the recommendation by @kiplingw in https://github.com/pistacheio/pistache/issues/858#issuecomment-758279752 this PR upgrades from C++14 to C++17 to take advantage of std::optional.
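
    As a generic illustration (not Pistache code) of what std::optional buys after the upgrade:

    ```cpp
    #include <optional>
    #include <string>

    // Absence is expressed in the type itself: no sentinel value or
    // hand-rolled Optional type is needed.
    std::optional<std::string> lookupHeader(bool present) {
        if (present)
            return std::string("application/json");
        return std::nullopt; // header not set
    }
    ```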

  • C++17: `std::string_view`, `[[maybe_unused]]`, nested namespaces

    C++17: `std::string_view`, `[[maybe_unused]]`, nested namespaces

    This PR complements #859, and should be merged only after merging that one.

    I recommend reviewing this by adding ?w=1 at the end of the URL to ignore whitespace changes.

    Fixes #826, fixes #835.

  • Address array and vector storage through .data()

    Address array and vector storage through .data()

    When used like &vec[0], there would be an assert in the GNU standard library:

    /usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = char; _Alloc = std::allocator; std::vector<_Tp, _Alloc>::reference = char&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.

    On the other hand, using .data() is allowed even on empty, uninitialized vectors.

    array<> access is modified for consistency.
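
    A minimal sketch of the distinction:

    ```cpp
    #include <vector>

    // data() is well-defined even on an empty vector (it may return
    // nullptr), whereas &v[0] on an empty vector is undefined behaviour
    // and asserts under libstdc++'s debug checks.
    const char* bufferPointer(const std::vector<char>& v) {
        return v.data();
    }
    ```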

  • Provide a way to build .RPMs

    Provide a way to build .RPMs

    Suggested usage scenario: run cmake, then make rpm.

    Generated packages: libpistache{,-devel,-static}-0.0.1.-1.el8..rpm libpistache-0.0.1.-1.el8.src.rpm

    Todo: provide consistent version naming (introduce tags?). Note: as of 20.01.2020, a debuginfo package cannot be created (see https://bugzilla.redhat.com/show_bug.cgi?id=1206312); the main rpm contains libpistache.so with debuginfo.

  • Install instructions don't work on Ubuntu focal

    Install instructions don't work on Ubuntu focal

    From what I can see (and maybe I just did something weird) the install instructions do not work as specified on Ubuntu Focal

    When I ran apt install libpistache-dev I got:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package libpistache-dev is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source
    E: Package 'libpistache-dev' has no installation candidate

    I then specified a version apt install libpistache-dev=0.0.003+git20220805.4c54e8f~ubuntu20.04.1 and got a slightly different error:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     libpistache-dev : Depends: libpistache0 (= 0.0.003+git20220805.4c54e8f~ubuntu20.04.1) but it is not installable
    E: Unable to correct problems, you have held broken packages.

    Finally I specified versions for both and it installed without issue:

    root@<machine>:/src# apt install libpistache-dev=0.0.003+git20220805.4c54e8f~ubuntu20.04.1 libpistache0=0.0.003+git20220805.4c54e8f~ubuntu20.04.1
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      libpistache-dev libpistache0
    0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.
    Need to get 701 kB of archives.
    After this operation, 4463 kB of additional disk space will be used.
    Get:1 http://ppa.launchpad.net/pistache+team/unstable/ubuntu focal/main arm64 libpistache0 arm64 0.0.003+git20220805.4c54e8f~ubuntu20.04.1 [290 kB]
    Get:2 http://ppa.launchpad.net/pistache+team/unstable/ubuntu focal/main arm64 libpistache-dev arm64 0.0.003+git20220805.4c54e8f~ubuntu20.04.1 [411 kB]

    Seems like installs might be broken on Focal?

  • Add set request attribute #1080

    Add set request attribute #1080

    Issue #1080 has been open for some time and I didn't see any pull requests. I think this is very useful, so I wrote some code that does it.

    I created two commits:

    • the code
    • the clang-format changes that the git commit script asks for

    Suggestions for more tests? Or names? Thanks!

  • [Question][Suggestion] how to change the peer->getData/Http::Handler onInput at runtime for a WebSockets implementation

    [Question][Suggestion] how to change the peer->getData/Http::Handler onInput at runtime for a WebSockets implementation

    Hi. I was testing how to implement WebSockets using Pistache. The flow is: perform the handshake over HTTP/1.1, then switch the raw socket input handling over to a WebSockets implementation. I tried some things, like inheriting from Http::Handler or Rest::Private::RouterHandler, but the methods onConnection and onInput being private blocks the child class from calling the parent's implementation.

    Using a proxy method that receives an existing object created by Router::handler() and casts it to the base Tcp::Handler can work, but one needs to account for the state that Http::Handler holds, because the Transport class writes to the proxy class before calling onInput; something like this, maybe: *original = *this; treating the original as an Http::Handler. But this method is not thread safe, and adding a mutex may impact performance.

    I was thinking about changing the parser, or changing onInput to behave polymorphically, but I can't change the Parser of an already connected Tcp::Peer.

    I think I cannot really implement WebSockets outside Pistache's codebase without changing some methods' visibility in Pistache's code.

    Below is one of my tests using inheritance and calling the parent, but I had to change private to protected on Http::Handler for this to work.

    Handler set


    Route handshake: minimal code to study the handler question:

    void CWebSocketController::ws_route(const Pistache::Rest::Request &request,
                                        Pistache::Http::ResponseWriter response) {
        response.headers().addRaw(Header::Raw{"Upgrade", "websocket"});
        response.headers().addRaw(Header::Raw{"Connection", "Upgrade"});
        auto peer = response.getPeer();
        auto wsHandler = std::make_shared<WebSocketHandler>();
        wsHandler->peer = response.getPeer();
        // websocket frame received callback
        wsHandler->onMessage = [](const WebSocketHandler::frame &frame) {
            std::cout << frame.flags << std::endl;
            std::cout << frame.size << std::endl;
            std::cout << frame.payload << std::endl;
        };
        peer->putData("__WEBSOCKETHANDLER", wsHandler);
        threads.push(asyncws, wsHandler); // a thread to send data to the websocket
    }
    #pragma once
    #include "stdafx.hpp"
    #include <memory>
    #include <pistache/router.h>

    class RouterHandler : public Pistache::Rest::Private::RouterHandler {
    public:
        /*void onRequest(const Pistache::Http::Request &request,
                       Pistache::Http::ResponseWriter response) override;*/
        void onConnection(const std::shared_ptr<Pistache::Tcp::Peer> &peer) override;
        void onDisconnection(const std::shared_ptr<Pistache::Tcp::Peer> &peer) override;
        void onInput(const char *buffer, size_t len,
                     const std::shared_ptr<Pistache::Tcp::Peer> &peer) override;
        RouterHandler(Pistache::Rest::Router &router)
            : Pistache::Rest::Private::RouterHandler(router) {}
        std::shared_ptr<Pistache::Tcp::Handler> clone() const override {
            return std::make_shared<RouterHandler>(*this);
        }
        RouterHandler(const RouterHandler &) = default;
        RouterHandler(RouterHandler &&) = default;
        RouterHandler &operator=(const RouterHandler &) = default;
        RouterHandler &operator=(RouterHandler &&) = default;
        ~RouterHandler() override;
    };
    #include "RouterHandler.hpp"
    #include "WebSocketHandler.hpp"
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <memory>
    #include <pistache/peer.h>

    void RouterHandler::onConnection(
        const std::shared_ptr<Pistache::Tcp::Peer> &peer) {
        std::cout << __func__ << ": " << peer->fd() << std::endl;
    }

    void RouterHandler::onDisconnection(
        const std::shared_ptr<Pistache::Tcp::Peer> &peer) {
        if (auto WSHandler = peer->tryGetData("__WEBSOCKETHANDLER")) {
            std::cout << "Websocket disconnected\n";
        }
    }

    void RouterHandler::onInput(
        const char *buffer, size_t len,
        const std::shared_ptr<Pistache::Tcp::Peer> &peer) {
        if (auto WSHandler = peer->tryGetData("__WEBSOCKETHANDLER")) {
            auto handler = std::static_pointer_cast<WebSocketHandler>(WSHandler);
            handler->onInput(buffer, len, peer);
        } else {
            Pistache::Rest::Private::RouterHandler::onInput(buffer, len, peer);
        }
    }

    RouterHandler::~RouterHandler() = default;

    Incomplete WebSocket protocol reading:

    #pragma once
    #include "stdafx.hpp"
    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <memory>
    #include <pistache/peer.h>
    #include <string>
    #include <utility>

    class WebSocketHandler {
    public:
        struct frame {
            int32_t flags{}, lenByte{};
            uint64_t size{};
            std::array<int8_t, 4> mask{};
            int32_t readingState{};
            std::string payload{};
            size_t bufferPos{};
            std::array<char, 16> bufferTmp{};
            const std::shared_ptr<Pistache::Tcp::Peer> *peer{};
            bool useMask{};
            std::pair<size_t, bool> receiveData(const char *buffer, size_t lenraw);
        };
        std::atomic<bool> disconnected{false};
        frame frameInst;
        std::function<void(const frame&)> onMessage;
        std::weak_ptr<Pistache::Tcp::Peer> peer;
        void onInput(const char *buffer, size_t len,
                     const std::shared_ptr<Pistache::Tcp::Peer> &peer);
    };
    #include "WebSocketHandler.hpp"
    #include <algorithm>
    #include <cstddef>
    #include <endian.h>
    #include <pistache/peer.h>
    #include <tuple>

    std::pair<size_t, bool> WebSocketHandler::frame::receiveData(const char *buffer,
                                                                 size_t lenraw) {
        const uint8_t *bytesRaw = reinterpret_cast<const uint8_t *>(buffer);
        size_t inBufferOffset = 0;
        bool done = false;
        while (inBufferOffset < lenraw && !done) {
            switch (readingState) {
            case 0: // first byte: FIN/RSV/opcode flags
                flags = bytesRaw[inBufferOffset++];
                readingState = 1;
                break;
            case 1: // second byte: mask bit and 7-bit length
                lenByte = bytesRaw[inBufferOffset++];
                useMask = (lenByte & 0x80) != 0;
                size = lenByte & 0x7F;
                bufferPos = 0;
                readingState = 2;
                break;
            case 2: // extended 16- or 64-bit payload length
                if (size == 126 || size == 127) {
                    bufferTmp[bufferPos++] = buffer[inBufferOffset++];
                    if (bufferPos == 8 && size == 127) {
                        size = be64toh(
                            *reinterpret_cast<uint64_t *>(bufferTmp.data()));
                        bufferPos = 0;
                        readingState = 3;
                    } else if (bufferPos == 2 && size == 126) {
                        size = be16toh(
                            *reinterpret_cast<uint16_t *>(bufferTmp.data()));
                        bufferPos = 0;
                        readingState = 3;
                    }
                } else {
                    bufferPos = 0;
                    readingState = 3;
                }
                break;
            case 3: // 4-byte masking key, if present
                if (useMask) {
                    bufferTmp[bufferPos++] = buffer[inBufferOffset++];
                    if (bufferPos == 4) {
                        std::copy_n(bufferTmp.begin(), 4, mask.begin());
                        bufferPos = 0;
                        payload.resize(size);
                        readingState = 4;
                    }
                } else {
                    bufferPos = 0;
                    payload.resize(size);
                    readingState = 4;
                }
                break;
            case 4: // payload bytes
                payload[bufferPos++] = buffer[inBufferOffset++];
                if (bufferPos == size) {
                    done = true;
                }
                break;
            }
        }
        if (!done) {
            return {inBufferOffset, false};
        }
        readingState = 0;
        bufferPos = 0;
        for (size_t i = 0; i < payload.size(); i++) {
            payload[i] ^= mask[i % 4];
        }
        return {inBufferOffset, true};
    }
    void WebSocketHandler::onInput(
        const char *buffer, size_t lenraw,
        const std::shared_ptr<Pistache::Tcp::Peer> &inpeer) {
        size_t inBufferOffset = 0;
        do {
            bool done = false;
            const char *bufferit = buffer + inBufferOffset;
            size_t currentLen = lenraw - inBufferOffset;
            size_t consumed = 0;
            std::tie(consumed, done) =
                frameInst.receiveData(bufferit, currentLen);
            inBufferOffset += consumed; // accumulate across frames
            if (!done) {
                break; // partial frame: wait for more input
            }
            frameInst.peer = &inpeer;
            if (onMessage) {
                onMessage(frameInst);
            }
            frameInst.flags = 0;
            frameInst.lenByte = 0;
        } while (inBufferOffset < lenraw);
    }
  • Setting a request attribute

    Setting a request attribute

    In the past I have used Java Servlets (now Jakarta Servlets). With Servlets, I could set a "request attribute". To better explain what a request attribute is, it is easier if I provide a use case.

    bool auth_middleware(Request &request, ResponseWriter &response) {
        const auto cookies = request.cookies();
        const auto userId = getUserId(cookies);
        // use a service to retrieve the User entity corresponding to userId
        const std::optional<User> user = UserService::getUser(userId);
        if (user.has_value()) {
            request.setAttribute("user", *user);
            return true; // proceed along the chain
        } else {
            return false; // reject the request
        }
    }

    When the execution flow reaches the actual handler, one could do as follows:

    void some_handler(const Rest::Request &request, Http::ResponseWriter response) {
        auto user = request.getAttribute<User>("user");
        // use "user" for whatever purpose
    }
    The advantage of this is that any handler that needs to do something with the User entity does not need to repeat the logic of checking whether the user exists and is authenticated (separation of concerns).

    Is it possible to do something like this in Pistache? Thank you! :smile:

  • ci: build on multiple Linux distributions

    This new CI job builds and tests Pistache on multiple Linux distributions, with different compilers and under different sanitizers, and re-adds Codecov coverage integration.

    Currently, Pistache is tested on Debian Stable, Debian Testing and Red Hat Enterprise Linux 8, and the code is built with GCC and Clang. Sanitizers are only used on Debian, because as far as I'm aware RHEL doesn't ship the various sanitizers in its base images.

    ThreadSanitizer is also disabled for now, as it reports a lot of data races in some tests. Some of the bugs are in the unit tests themselves, but I haven't yet looked into solving them. @dennisjenkins75 I seem to remember that you opened a meta-issue about race conditions in Pistache but I haven't found it.

    ~I'll add RHEL 9 as soon as Red Hat publishes the image on Docker Hub~ added using Red Hat's own registry

    It would be nice to add something like Gentoo too; @dennisjenkins75 you use/used Gentoo, right? Would you be able to help?

    ~I've also fixed a bug in the streaming_test test that caused lock-ups on old distros (@kiplingw this means that PPA builds for Ubuntu 20.10 and older should be fixed as well)~ pushed as 8ce07256dc220cbf96b3fd6f5991a3c68fd54aba

    Lastly, I ~removed~ reduced the scope of the autopkgtest job; you can find my rationale in the commit message.
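The distro/compiler/sanitizer combinations described above could be modelled as, for example, a GitHub Actions matrix with per-image excludes. This is only an illustrative sketch; the image names and layout are hypothetical, not the actual workflow:

```yaml
jobs:
  linux:
    strategy:
      matrix:
        container:
          - debian:stable
          - debian:testing
          - registry.access.redhat.com/ubi8/ubi
        compiler: [gcc, clang]
        sanitizer: [none, address, undefined]
        exclude:
          # RHEL base images do not ship the sanitizer runtimes.
          - container: registry.access.redhat.com/ubi8/ubi
            sanitizer: address
          - container: registry.access.redhat.com/ubi8/ubi
            sanitizer: undefined
    runs-on: ubuntu-latest
    container: ${{ matrix.container }}
```

Excluding combinations at the matrix level keeps the job list honest about what is actually tested, rather than skipping steps inside a job that nominally ran.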

  • Timeout test from example not working

    Compile Pistache with the examples (https://github.com/pistacheio/pistache/blob/master/examples/http_server.cc), run the server with examples/run_http_server, then request curl http://localhost:9080/timeout. The request hangs forever. Does timeoutAfter in Http::ResponseWriter really work?
