Asynchronous gRPC with Boost.Asio executors

asio-grpc


This library provides an implementation of boost::asio::execution_context that dispatches work to a grpc::CompletionQueue, making it possible to write asynchronous gRPC servers and clients using C++20 coroutines, Boost.Coroutine, Boost.Asio's stackless coroutines, std::future and callbacks. It also enables other Boost.Asio non-blocking I/O operations like HTTP requests, all on the same CompletionQueue.

Example

Server side:

grpc::ServerBuilder builder;
std::unique_ptr<grpc::Server> server;
helloworld::Greeter::AsyncService service;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
builder.RegisterService(&service);
server = builder.BuildAndStart();

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ServerContext server_context;
        helloworld::HelloRequest request;
        grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
        bool request_ok = co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service,
                                                  server_context, request, writer);
        helloworld::HelloReply response;
        std::string prefix("Hello ");
        response.set_message(prefix + request.name());
        bool finish_ok = co_await agrpc::finish(writer, response, grpc::Status::OK);
    },
    boost::asio::detached);

grpc_context.run();
server->Shutdown();


Client side:

auto stub =
    helloworld::Greeter::NewStub(grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials()));
agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ClientContext client_context;
        helloworld::HelloRequest request;
        request.set_name("world");
        std::unique_ptr<grpc::ClientAsyncResponseReader<helloworld::HelloReply>> reader =
            stub->AsyncSayHello(&client_context, request, agrpc::get_completion_queue(grpc_context));
        helloworld::HelloReply response;
        grpc::Status status;
        bool ok = co_await agrpc::finish(*reader, response, status);
    },
    boost::asio::detached);

grpc_context.run();


Requirements

Tested:

  • gRPC 1.37
  • Boost 1.74
  • MSVC VS 2019 16.11
  • GCC 10.3
  • C++17 or C++20

For MSVC compilers the following compile definitions might need to be set:

BOOST_ASIO_HAS_DEDUCED_REQUIRE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EXECUTE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EQUALITY_COMPARABLE_TRAIT
BOOST_ASIO_HAS_DEDUCED_QUERY_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_PREFER_MEMBER_TRAIT

Usage

The library can be added to a CMake project using either add_subdirectory or find_package. Once set up, include the following header:

#include <agrpc/asioGrpc.hpp>

As a subdirectory

Clone the repository into a subdirectory of your CMake project. Then add it and link it to your target.

add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)

As a CMake package

Clone the repository and install it.

mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/desired/installation/directory ..
cmake --build . --target install

Locate it and link it to your target.

# Make sure to set CMAKE_PREFIX_PATH to /desired/installation/directory
find_package(asio-grpc)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)

Performance

asio-grpc is part of grpc_bench. Head over there to compare its performance against other libraries and languages.

Results from the helloworld unary RPC. Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, Linux, Boost 1.74, gRPC 1.30.2, asio-grpc v1.0.0

1 CPU server

name req/s avg. latency 90 % in 95 % in 99 % in avg. cpu avg. memory
rust_tonic_mt 44639 22.27 ms 9.63 ms 10.55 ms 572.53 ms 101.12% 16.06 MiB
rust_grpcio 39826 24.95 ms 26.31 ms 27.19 ms 28.45 ms 101.5% 30.46 MiB
rust_thruster_mt 38038 26.17 ms 11.39 ms 12.33 ms 673.02 ms 100.16% 13.17 MiB
cpp_grpc_mt 34954 28.53 ms 31.28 ms 31.75 ms 33.55 ms 101.93% 8.36 MiB
cpp_asio_grpc 34015 29.32 ms 32.05 ms 32.56 ms 34.41 ms 101.35% 7.72 MiB
go_grpc 6772 141.75 ms 287.57 ms 330.45 ms 499.47 ms 97.8% 28.07 MiB

2 CPU server

name req/s avg. latency 90 % in 95 % in 99 % in avg. cpu avg. memory
rust_tonic_mt 66253 14.33 ms 39.24 ms 59.11 ms 91.03 ms 201.2% 16.09 MiB
rust_grpcio 62678 15.38 ms 22.38 ms 24.81 ms 29.00 ms 201.38% 45.07 MiB
cpp_grpc_mt 62488 14.78 ms 31.76 ms 40.60 ms 60.79 ms 199.84% 24.9 MiB
cpp_asio_grpc 62040 14.91 ms 30.17 ms 37.77 ms 60.10 ms 199.6% 26.65 MiB
rust_thruster_mt 59204 16.22 ms 43.04 ms 71.87 ms 110.07 ms 199.31% 13.87 MiB
go_grpc 13978 63.48 ms 110.86 ms 160.62 ms 205.85 ms 198.23% 29.48 MiB

Documentation

The main workhorses of this library are the agrpc::GrpcContext and its executor_type - agrpc::GrpcExecutor.

The agrpc::GrpcContext implements boost::asio::execution_context and can be used as an argument to Boost.Asio functions that expect an ExecutionContext like boost::asio::spawn.

Likewise, the agrpc::GrpcExecutor models the Executor and Networking TS requirements and can therefore be used in places where Boost.Asio expects an Executor.

This library's API for RPCs is modeled closely after the asynchronous, tag-based API of gRPC. As an example, the equivalent of grpc::ClientAsyncReader<helloworld::HelloReply>::Read(helloworld::HelloReply*, void*) would be agrpc::read(grpc::ClientAsyncReader<helloworld::HelloReply>&, helloworld::HelloReply&, CompletionToken). It can therefore be helpful to refer to async_unary_call.h and async_stream.h while working with this library.

Instead of the void* tag in the gRPC API, the functions in this library expect a CompletionToken. Boost.Asio comes with several CompletionTokens out of the box: C++20 coroutines (boost::asio::use_awaitable), std::future (boost::asio::use_future), stackless coroutines, callbacks and Boost.Coroutine (boost::asio::yield_context).
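To make this concrete, here is a sketch of the same one-second wait driven by three different CompletionTokens (agrpc::wait is introduced in the Alarm section below; the surrounding grpc_context is assumed, and the three calls are alternatives, not meant to run simultaneously on one alarm):

```cpp
// Sketch: one asynchronous operation, three different CompletionTokens.
// Assumes a running agrpc::GrpcContext named grpc_context.
grpc::Alarm alarm;
const auto deadline = std::chrono::system_clock::now() + std::chrono::seconds(1);

// 1. Callback: invoked with the wait's ok-value once the alarm expires.
agrpc::wait(alarm, deadline,
            boost::asio::bind_executor(grpc_context, [](bool /*ok*/) { /* continue here */ }));

// 2. std::future: block or poll on the result from another thread.
std::future<bool> ok_future = agrpc::wait(alarm, deadline, boost::asio::use_future);

// 3. C++20 coroutine (inside a boost::asio::awaitable<void> function):
// bool ok = co_await agrpc::wait(alarm, deadline, boost::asio::use_awaitable);
```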

Getting started

Start by creating an agrpc::GrpcContext.

For servers and clients:

grpc::ServerBuilder builder;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};


For clients only:

agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};


Add some work to the grpc_context (shown further below) and run it. Make sure to shut down the server before destructing the grpc_context, and to destruct the grpc_context before destructing the server. A grpc_context can only be run on one thread at a time.

grpc_context.run();
server->Shutdown();
}  // grpc_context is destructed here before the server


It might also be helpful to create a work guard before running the agrpc::GrpcContext to prevent grpc_context.run() from returning early.

auto guard = boost::asio::make_work_guard(grpc_context);


Alarm

gRPC provides a grpc::Alarm which is similar to a boost::asio::steady_timer. Simply construct it and pass it, together with the desired deadline, to agrpc::wait to wait for the specified amount of time without blocking the event loop.

grpc::Alarm alarm;
bool wait_ok = agrpc::wait(alarm, std::chrono::system_clock::now() + std::chrono::seconds(1), yield);


wait_ok is true if the Alarm expired, false if it was canceled. (source)
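Building on that, a simple periodic task can be written as a loop around agrpc::wait. This is an illustrative sketch, using boost::asio::use_awaitable as the CompletionToken; do_periodic_work is a hypothetical function:

```cpp
// Sketch: repeat some work every second until the alarm wait fails (i.e. it was canceled).
boost::asio::awaitable<void> periodic(grpc::Alarm& alarm)
{
    while (true)
    {
        const bool expired = co_await agrpc::wait(
            alarm, std::chrono::system_clock::now() + std::chrono::seconds(1), boost::asio::use_awaitable);
        if (!expired)
        {
            co_return;  // The alarm was canceled, e.g. during shutdown.
        }
        do_periodic_work();  // hypothetical
    }
}
```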

Unary RPC Server-Side

Start by requesting an RPC. In this example, yield is a boost::asio::yield_context; other CompletionTokens are supported as well, e.g. boost::asio::use_awaitable. The example namespace has been generated from example.proto.

grpc::ServerContext server_context;
example::v1::Request request;
grpc::ServerAsyncResponseWriter<example::v1::Response> writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestUnary, service, server_context,
                                 request, writer, yield);


If request_ok is true then the RPC has indeed been started; otherwise the server has been shut down before this particular request got matched to an incoming RPC. For a full list of ok-values returned by gRPC see CompletionQueue::Next.
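A typical server therefore requests the same RPC in a loop and exits once request_ok becomes false. A sketch, under the same assumptions as the snippet above; handle_unary is a hypothetical function that drives one accepted RPC:

```cpp
// Sketch: accept Unary RPCs until the server is shut down.
while (true)
{
    grpc::ServerContext server_context;
    example::v1::Request request;
    grpc::ServerAsyncResponseWriter<example::v1::Response> writer{&server_context};
    if (!agrpc::request(&example::v1::Example::AsyncService::RequestUnary, service, server_context, request,
                        writer, yield))
    {
        break;  // Server shutdown: this request was never matched to an incoming RPC.
    }
    handle_unary(server_context, request, writer, yield);  // hypothetical handler
}
```

To serve requests concurrently, one would typically spawn a new coroutine per accepted RPC instead of handling it inline.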

The grpc::ServerAsyncResponseWriter is used to drive the RPC. The following actions can be performed.

bool send_ok = agrpc::send_initial_metadata(writer, yield);

example::v1::Response response;
bool finish_ok = agrpc::finish(writer, response, grpc::Status::OK, yield);

bool finish_with_error_ok = agrpc::finish_with_error(writer, grpc::Status::CANCELLED, yield);


Unary RPC Client-Side

On the client side, an RPC is initiated by calling the desired AsyncXXX function of the Stub.

grpc::ClientContext client_context;
example::v1::Request request;
std::unique_ptr<grpc::ClientAsyncResponseReader<example::v1::Response>> reader =
    stub.AsyncUnary(&client_context, request, agrpc::get_completion_queue(grpc_context));


The grpc::ClientAsyncResponseReader is used to drive the RPC.

bool read_ok = agrpc::read_initial_metadata(*reader, yield);

example::v1::Response response;
grpc::Status status;
bool finish_ok = agrpc::finish(*reader, response, status, yield);


For the meaning of read_ok and finish_ok see CompletionQueue::Next.

Client-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
grpc::ServerAsyncReader<example::v1::Response, example::v1::Request> reader{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestClientStreaming, service,
                                 server_context, reader, yield);


Drive the RPC with the following functions.

bool send_ok = agrpc::send_initial_metadata(reader, yield);

example::v1::Request request;
bool read_ok = agrpc::read(reader, request, yield);

example::v1::Response response;
bool finish_ok = agrpc::finish(reader, response, grpc::Status::OK, yield);


Client-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
example::v1::Response response;
std::unique_ptr<grpc::ClientAsyncWriter<example::v1::Request>> writer;

bool request_ok = agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context, writer,
                                 response, yield);


There is also a convenience overload that returns the grpc::ClientAsyncWriter at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [writer, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context, response, yield);


With the grpc::ClientAsyncWriter the following actions can be performed to drive the RPC.

bool read_ok = agrpc::read_initial_metadata(*writer, yield);

example::v1::Request request;
bool write_ok = agrpc::write(*writer, request, yield);

bool writes_done_ok = agrpc::writes_done(*writer, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*writer, status, yield);


For the meaning of read_ok, write_ok, writes_done_ok and finish_ok see CompletionQueue::Next.

Server-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
example::v1::Request request;
grpc::ServerAsyncWriter<example::v1::Response> writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestServerStreaming, service,
                                 server_context, request, writer, yield);


With the grpc::ServerAsyncWriter the following actions can be performed to drive the RPC.

bool send_ok = agrpc::send_initial_metadata(writer, yield);

example::v1::Response response;
bool write_ok = agrpc::write(writer, response, yield);

bool write_and_finish_ok = agrpc::write_and_finish(writer, response, grpc::WriteOptions{}, grpc::Status::OK, yield);

bool finish_ok = agrpc::finish(writer, grpc::Status::OK, yield);


For the meaning of send_ok, write_ok, write_and_finish_ok and finish_ok see CompletionQueue::Next.
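For example, a handler might stream several messages and finish once a write fails or all data has been sent. A sketch, where the number of messages and their content are illustrative:

```cpp
// Sketch: write three responses, then finish the server-streaming RPC.
bool write_ok = true;
for (int i = 0; write_ok && i < 3; ++i)
{
    example::v1::Response response;
    write_ok = agrpc::write(writer, response, yield);  // false e.g. if the client disconnected
}
agrpc::finish(writer, grpc::Status::OK, yield);
```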

Server-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
example::v1::Request request;
std::unique_ptr<grpc::ClientAsyncReader<example::v1::Response>> reader;

bool request_ok =
    agrpc::request(&example::v1::Example::Stub::AsyncServerStreaming, stub, client_context, request, reader, yield);


There is also a convenience overload that returns the grpc::ClientAsyncReader at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [reader, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncServerStreaming, stub, client_context, request, yield);


With the grpc::ClientAsyncReader the following actions can be performed to drive the RPC.

bool read_metadata_ok = agrpc::read_initial_metadata(*reader, yield);

example::v1::Response response;
bool read_ok = agrpc::read(*reader, response, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*reader, status, yield);


For the meaning of read_metadata_ok, read_ok and finish_ok see CompletionQueue::Next.

Bidirectional-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming, service,
                                 server_context, reader_writer, yield);


With the grpc::ServerAsyncReaderWriter the following actions can be performed to drive the RPC.

bool send_ok = agrpc::send_initial_metadata(reader_writer, yield);

example::v1::Request request;
bool read_ok = agrpc::read(reader_writer, request, yield);

example::v1::Response response;
bool write_and_finish_ok =
    agrpc::write_and_finish(reader_writer, response, grpc::WriteOptions{}, grpc::Status::OK, yield);

bool write_ok = agrpc::write(reader_writer, response, yield);

bool finish_ok = agrpc::finish(reader_writer, grpc::Status::OK, yield);


For the meaning of send_ok, read_ok, write_and_finish_ok, write_ok and finish_ok see CompletionQueue::Next.

Bidirectional-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
std::unique_ptr<grpc::ClientAsyncReaderWriter<example::v1::Request, example::v1::Response>> reader_writer;

bool request_ok = agrpc::request(&example::v1::Example::Stub::AsyncBidirectionalStreaming, stub, client_context,
                                 reader_writer, yield);


There is also a convenience overload that returns the grpc::ClientAsyncReaderWriter at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [reader_writer, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncBidirectionalStreaming, stub, client_context, yield);


With the grpc::ClientAsyncReaderWriter the following actions can be performed to drive the RPC.

bool read_metadata_ok = agrpc::read_initial_metadata(*reader_writer, yield);

example::v1::Request request;
bool write_ok = agrpc::write(*reader_writer, request, yield);

bool writes_done_ok = agrpc::writes_done(*reader_writer, yield);

example::v1::Response response;
bool read_ok = agrpc::read(*reader_writer, response, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*reader_writer, status, yield);


For the meaning of read_metadata_ok, write_ok, writes_done_ok, read_ok and finish_ok see CompletionQueue::Next.
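Putting the client-side pieces together, a bidirectional-streaming conversation might alternate writes and reads and then shut the stream down. A sketch using the convenience request overload shown above; the loop count is illustrative:

```cpp
// Sketch: write a request, read the matching response, three times, then finish.
auto [reader_writer, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncBidirectionalStreaming, stub, client_context, yield);
if (request_ok)
{
    for (int i = 0; i < 3; ++i)
    {
        example::v1::Request request;
        if (!agrpc::write(*reader_writer, request, yield))
        {
            break;  // Write failed, e.g. the server closed the stream.
        }
        example::v1::Response response;
        if (!agrpc::read(*reader_writer, response, yield))
        {
            break;  // No more messages from the server.
        }
    }
    agrpc::writes_done(*reader_writer, yield);
    grpc::Status status;
    agrpc::finish(*reader_writer, status, yield);
}
```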

Comments
  • c++20 coroutine based version is not as fast as the one using boost fiber?


Recently I ran grpc_bench to compare the performance of different settings. I found that the coroutine-based one is slower than both the Boost.Fiber version and the gRPC multi-threaded version. Do you have any insight about this?

  • Which compile definitions are recommended for clang?


    Which compile definitions are recommended for clang build on Linux?

    I already figured out that I need to set on my cmake line:

    -DASIO_GRPC_USE_BOOST_CONTAINER=1

    But I wonder if there are any others?

    I'm using a slightly older version of asio-grpc, it would take me a little effort to update so don't want to have to do that. I'm using commit a17b559b101d10836c0fc226101f857728f3428f. Don't know if this is the reason?

    The reason that I ask is that when I use clang 10.0.1 I get a clang crash when trying to build hello-world-server-cpp20:

    /gitworkspace/distributions/clang/10.0.1/bin/clang++ -DBOOST_ALL_NO_LIB -DCARES_STATICLIB -I/gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/native.build/src/generated -isystem /gitworkspace/rbresali/mdt/mdt_stage/native.stage/usr/local/include -stdlib=libc++ -fPIC -g -Wall -Werror -DBOOST_THREAD_VERSION=4 -save-temps=obj -std=gnu++2a -MD -MT src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -MF src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o.d -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -c /gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/src/hello-world-server-cpp20.cpp
    Stack dump:
    0.	Program arguments: /vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10 -cc1 -triple x86_64-unknown-linux-gnu -S -save-temps=obj -disable-free -disable-llvm-verifier -discard-value-names -main-file-name hello-world-server-cpp20.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mframe-pointer=all -fmath-errno -fno-rounding-math -masm-verbose -mconstructor-aliases -munwind-tables -target-cpu x86-64 -dwarf-column-info -fno-split-dwarf-inlining -debug-info-kind=limited -dwarf-version=4 -debugger-tuning=gdb -resource-dir /vol/dwdmgit_distributions/clang/10.0.1/lib64/clang/10.0.1 -Wall -Werror -std=gnu++2a -fdebug-compilation-dir /gitworkspace/rbresali-mdt_211028.110446/mdt_example/asio_grpc_example/native.build -ferror-limit 19 -fmessage-length 0 -fgnuc-version=4.2.1 -fobjc-runtime=gcc -fdiagnostics-show-option -faddrsig -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.s -x ir src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc 
    1.	Code generation
    2.	Running pass 'Function Pass Manager' on module 'src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc'.
    3.	Running pass 'X86 DAG->DAG Instruction Selection' on function '@"_ZN5boost4asio6detail20co_spawn_entry_pointINS0_15any_io_executorEZ4mainE3$_0NS1_16detached_handlerEEENS0_9awaitableINS1_28awaitable_thread_entry_pointET_EEPNS6_IvS8_EES8_T0_T1_"'
     #0 0x00000000016b6e24 PrintStackTraceSignalHandler(void*) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b6e24)
     #1 0x00000000016b4b8e llvm::sys::RunSignalHandlers() (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b4b8e)
     #2 0x00000000016b7225 SignalHandler(int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b7225)
     #3 0x00007f5f529b1630 __restore_rt (/lib64/libpthread.so.0+0xf630)
     #4 0x00000000021de359 llvm::DAGTypeLegalizer::getTableId(llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de359)
     #5 0x00000000021de216 llvm::DAGTypeLegalizer::RemapValue(llvm::SDValue&) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de216)
     #6 0x00000000021dd97f llvm::DAGTypeLegalizer::ReplaceValueWith(llvm::SDValue, llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21dd97f)
     #7 0x00000000021e01ac llvm::DAGTypeLegalizer::DisintegrateMERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21e01ac)
     #8 0x0000000002236e99 llvm::DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x2236e99)
     #9 0x00007ffe5377a490 
    clang-10: error: unable to execute command: Segmentation fault (core dumped)
    clang-10: error: clang frontend command failed due to signal (use -v to see invocation)
    clang version 10.0.1 
    Target: x86_64-unknown-linux-gnu
    Thread model: posix
    InstalledDir: /gitworkspace/distributions/clang/10.0.1/bin
    clang-10: note: diagnostic msg: PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace, preprocessed source, and associated run script.
    clang-10: note: diagnostic msg: 
    ********************
    
    PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
    Preprocessed source(s) and associated run script(s) are located at:
    clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.cpp
    clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.sh
    clang-10: note: diagnostic msg: 
    
    ********************
    
    
  • How to get notified when the client closes


    Suppose I have a server streaming rpc:

    rpc ServerStream(Req) returns (stream Resp);
    

    When a client calls ServerStream, the server does some bookkeeping; when the client disconnects, the bookkeeping needs to be removed. Is there an API, let's say on_recv_client_close(release_function), that can register a callback for a client closed event?

    Thank you.

    P.S.

    • I know I can use the failed state of agrpc::write to detect that the client has closed. But I want to get notified even when the server doesn't send anything.
    • grpc core has a GRPC_OP_RECV_CLOSE_ON_SERVER op that I don't know whether it helps.
    • Here is a similar question I found but for grpc-go: https://stackoverflow.com/questions/39825671/grpc-go-how-to-know-in-server-side-when-client-closes-the-connection
  • Compiler error trying to use asio::experimental::use_promise as completion token


    I'm trying to use a promise as the completion token for agrpc methods, and I am getting a compiler error with GCC 11. Here is a minimal example, based on your streaming-server.cpp example:

    // additional includes required:
    #include <asio/experimental/promise.hpp>
    #include <asio/this_coro.hpp>
    
    asio::awaitable<void> handle_bidirectional_streaming_request(example::v1::Example::AsyncService& service)
    {
        grpc::ServerContext server_context;
        grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};
        bool request_ok = co_await agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming,
                                                  service, server_context, reader_writer);
        if (!request_ok)
        {
            // Server is shutting down.
            co_return;
        }
        example::v1::Request request;
    
        // none of the below work to put as COMPLETIONTOKEN - the following line fails to compile:
        // asio::experimental::use_promise
        // asio::experimental::use_promise_t<agrpc::GrpcContext>{}
        // asio::experimental::use_promise_t<agrpc::GrpcContext::executor_type>{}
        // asio::experimental::use_promise_t<agrpc::s::BasicGrpcExecutor<>>{}
        // asio::experimental::use_promise_t<asio::this_coro::executor_t>{}
        auto&& read_promise = agrpc::read(reader_writer, COMPLETIONTOKEN);
    
        co_await read_promise.async_wait(asio::use_awaitable);
    }
    

    The use case is that later in the function I would simultaneously await any of 3 conditions: New request from client, finished writing response to client, or new response ready from data processing thread pool:

    auto&& write_promise = agrpc::write(rw, response, COMPLETIONTOKEN);
    auto&& data_ready_promise = // asynchronously dispatch work to data processing thread pool
    auto rwd_promise = asio::experimental::promise<>::all(
        std::forward<decltype(read_promise)>(read_promise),
        std::forward<decltype(write_promise)>(write_promise),
        std::forward<decltype(data_ready_promise)>(data_ready_promise)
    );
    std::tie(read_ok, write_ok, data_ready_ok) = co_await rwd_promise.async_wait(asio::use_awaitable);
    
  • Multi-threaded server and health check


    Hi,

    I tried the example/multi-threaded-server with DefaultHealthCheckService enabled and found that after I set grpc::EnableDefaultHealthCheckService(true) before starting the server, all handlers ran in only one thread. What could be causing this and how can I fix it?

  • Using asio::io_context in single threaded applications


    Hi, thank you for your awesome library. It is a very convenient way to use Asio for writing single-threaded (but concurrent) applications without worrying about the problems of multi-threaded applications. If I'm not mistaken, right now the only way to use agrpc is to instantiate a GrpcContext and run it on its own thread, which means we need to run the asio::io_context on a separate thread and deal with concurrency problems between them. Is there any plan to make it possible to reuse an asio::io_context for agrpc services?

  • Threads and asio-grpc


    Thank you for implementing this excellent project, which provides a consolidated way of executing async gRPC commands and sending/receiving TCP packets asynchronously with the Boost.Asio library. I just began using Boost.Asio recently and have a couple of questions about this library.

    According to this link: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html, multiple threads may call io_context::run() to set up a thread pool, and the io_context may distribute work across them. Does asio-grpc's execution_context also guarantee thread safety if a thread pool is enabled on it? I am using C++20 coroutines and assuming that each co_spawn will take a thread from the thread pool and run the composed asynchronous operations. Correct me if my understanding is wrong. What if the composed asynchronous operations contain a blocking operation? It may block the running thread, so how can I prevent other co_spawn calls from using the blocked thread for execution? In addition, co_spawn can spawn from both an execution_context and an executor. I am guessing that spawning from an execution_context will take a new thread to run on, while spawning from an executor will just run on the thread that the executor is running on. Is my guess correct?

    Meanwhile, #8 mentions that if you co_spawn a non-gRPC async operation like a steady_timer from the grpc_context, it will automatically spawn a second io_context thread. So it seems that asio-grpc internally maintains two threads, one for the gRPC execution_context and one for the io_context, to run async gRPC operations and other async non-gRPC operations. The last comment there says version 1.4 would also support an asio io_context for an agrpc::GrpcContext. Considering that my application would serve many clients and, for each client's request, issue one single composed asynchronous operation containing one async gRPC call and several async TCP reads and writes to the server plus a response back to the client: will asio-grpc guarantee there won't be interleaving between the gRPC operation and the TCP operations when that single composed asynchronous operation is co_spawned from either grpc_context or io_context, given that they are two contexts on two threads? Also, does asio-grpc support having a thread pool for the io_context and a single thread for the grpc_context, or both with thread pools enabled?

                          one single composed asynchronous operations
                         /                                            \
    client1 --> { co_wait async grpc operation, co_wait async tcp operations } --> server 
    client2 --> { co_wait async grpc operation, co_wait async tcp operations } --> server
    clientN ...
    

    Hope to get some guidence from you. Thanks.

  • Question: is it possible to implement a server to client request, using a bidirectional-streaming channel and exposed as a standard C++ class/interface?


    Hi,

    Thank you for writing this library.

    I'm currently trying to use asio-grpc to implement a service that, as part of a request, can call back to a connected client to get additional data, dependency-injection style. This dependency-injection channel is a long-lived bidirectional streaming gRPC call. My problem is that the server logic calls into a normal pure virtual class (interface) to request these values. AFAIK this rules out using co_await/co_return, since that would imply my interface should return a coroutine. So I'm trying to figure out whether I can implement such an interface using co_yield, where the consumer of the values does not need to be a coroutine.

    The server-logic is being triggered by another async grpc call, but the server-logic itself is not async.

    I hope someone is able to help me to figure out if and how this is possible. Let me know if my description is not clear enough.

    Best regards

  • compiling problem with versions installed by vcpkg


    Hi, thanks for creating a wonderful framework. It has made my life much easier.

    I have used your framework for a few months and now I need to set up our project on a new machine. The installation is successful with the following command

    ./vcpkg install asio-grpc[boost-container]:x64-linux
    

    However, when I compile my project, I get the following errors (screenshots attached).

    Do you have any idea why this error happens?

    I look forward to hearing from you soon.

    Thanks

  • assertion GRPC_CALL_ERROR_TOO_MANY_OPERATIONS in the example server code


    Hi, I am trying to run the example-server.cpp and example-client.cpp examples from version 1.1.2 (installed using vcpkg with the boost-container feature).

    I got an assertion error GRPC_CALL_ERROR_TOO_MANY_OPERATIONS at the following line in the server code (screenshot attached).

    Here is the stack trace when the assertion occurs (screenshot attached).

    Here is the detailed information about the assertion (screenshot attached).

    Do you have any idea why the bug happens?

  • Example: generic server that handles bidistream requests


    Hi, any ideas about how to handle bidirectional-streaming requests in a generic server? (I notice that there is only a unary handler in the example/generic-server.cpp file.)

    Many thanks

  • High-level server API

    • I/O object for server-side requests: unary and streaming. Similar to the high-level client API.
    • Figure out what API to provide for attaching request handlers. E.g. introspect a user-provided ServiceHandler class and register repeatedly_request for all of its methods automatically, or let the user register a handler per endpoint themselves, as with repeatedly_request today.
    • Nicely integrate with tracing/metrics/logging/load balancing, e.g. OpenTelemetry, OpenCensus, ORCA, xDS.
    • Consider owning the grpc::CompletionQueue and grpc::Server to provide clean shutdown and multi-threading
    • Allow pre-request ServerContext configuration, e.g. to enable compression
    • Support AsyncNotifyWhenDone