RPC framework based on C++ Workflow. Supports the Baidu bRPC, Tencent tRPC, and thrift protocols.

Chinese version

SRPC


Introduction

SRPC is an RPC system developed by Sogou. Its main features include:

  • Based on Sogou C++ Workflow, with the following features:
    • High performance
    • Low development and access cost
    • Compatible with SeriesWork and ParallelWork in Workflow
    • One-click migration for existing projects with protobuf/thrift
  • Support several IDL formats, including:
    • Protobuf
    • Thrift
  • Support several data formats, including:
    • Protobuf serialize
    • Thrift Binary serialize
    • JSON serialize
  • Support several compression formats transparently, including:
    • gzip
    • zlib
    • snappy
    • lz4
  • Support several communication protocols transparently, including:
    • tcp
    • http
    • sctp
    • ssl
    • https
  • With HTTP+JSON, you can use any language:
    • As a server, you can accept POST requests with an HTTP server developed in any language and parse the HTTP headers.
    • As a client, you can send POST requests with an HTTP client developed in any language and add the required HTTP headers.
  • Built-in client/server which can seamlessly communicate with a server/client in other RPC frameworks, including:
    • BRPC
    • TRPC (the only open-source implementation of the TRPC protocol so far)
    • GRPC
    • Thrift Framed Binary
    • Thrift Http Binary
  • How to use it together with Workflow:
    • You can use the interface to create an RPC task
    • You can put the RPC task into SeriesWork or ParallelWork, and you can also get the current SeriesWork in the callback.
    • You can also use other features supported by Workflow, including upstream, calculation scheduling, asynchronous file IO, etc.
  • More features and layers
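As a sketch of the Workflow integration above (illustrative only: it assumes the Example service from the Quick Start below, and that the generated create_Echo_task() and serialize_input() behave as in the srpc tutorials):

```cpp
#include <stdio.h>
#include "example.srpc.h"
#include "workflow/Workflow.h"

using namespace srpc;

int main()
{
    Example::SRPCClient client("127.0.0.1", 1412);

    // Create an RPC task instead of calling the method directly.
    auto *rpc_task = client.create_Echo_task(
        [](EchoResponse *response, RPCContext *ctx) {
            if (ctx->success())
                printf("%s\n", response->DebugString().c_str());
        });

    EchoRequest req;
    req.set_message("Hello, srpc!");
    req.set_name("workflow");
    rpc_task->serialize_input(&req);

    // The RPC task is an ordinary Workflow task, so it can be chained
    // into a SeriesWork together with any other Workflow tasks.
    SeriesWork *series = Workflow::create_series_work(rpc_task,
        [](const SeriesWork *) { printf("series done\n"); });
    series->start();

    getchar(); // wait for the asynchronous series to finish
    return 0;
}
```

The same task can equally be pushed into an existing series with push_back(), and inside a callback the current series is available via the RPC context.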

Installation

  • srpc is a static library, libsrpc.a. You only need libsrpc as a dependency in the development environment; it is not required by the compiled binary release.
  • srpc depends on Workflow and protobuf3.
    • For protobuf, you must install protobuf v3.0.0 or above yourself.
    • For Workflow, it's added automatically as a dependency via git submodule.
    • For snappy and lz4, the source code is also included as third_party via git submodule.
git clone --recursive https://github.com/sogou/srpc.git
cd srpc
make
sudo make install

Tutorial

The tutorial is easy to compile with these commands:

cd tutorial
make

Quick Start

1. example.proto

syntax = "proto3"; // You can use either proto2 or proto3. Both are supported by srpc

message EchoRequest {
    string message = 1;
    string name = 2;
};

message EchoResponse {
    string message = 1;
};

service Example {
    rpc Echo(EchoRequest) returns (EchoResponse);
};

2. generate code

protoc example.proto --cpp_out=./ --proto_path=./
srpc_generator protobuf ./example.proto ./

3. server.cc

#include <stdio.h>
#include <signal.h>
#include "example.srpc.h"

using namespace srpc;

class ExampleServiceImpl : public Example::Service
{
public:
    void Echo(EchoRequest *request, EchoResponse *response, RPCContext *ctx) override
    {
        response->set_message("Hi, " + request->name());

        // gzip/zlib/snappy/lz4/none
        // ctx->set_compress_type(RPCCompressGzip);

        // protobuf/json
        // ctx->set_data_type(RPCDataJson);

        printf("get_req:\n%s\nset_resp:\n%s\n",
                request->DebugString().c_str(), response->DebugString().c_str());
    }
};

void sig_handler(int signo) { }

int main()
{
    signal(SIGINT, sig_handler);
    signal(SIGTERM, sig_handler);

    SRPCServer server_tcp;
    SRPCHttpServer server_http;

    ExampleServiceImpl impl;
    server_tcp.add_service(&impl);
    server_http.add_service(&impl);

    server_tcp.start(1412);
    server_http.start(8811);
    getchar(); // press "Enter" to end.
    server_http.stop();
    server_tcp.stop();

    return 0;
}

4. client.cc

#include <stdio.h>
#include "example.srpc.h"

using namespace srpc;

int main()
{
    Example::SRPCClient client("127.0.0.1", 1412);
    EchoRequest req;
    req.set_message("Hello, srpc!");
    req.set_name("workflow");

    client.Echo(&req, [](EchoResponse *response, RPCContext *ctx) {
        if (ctx->success())
            printf("%s\n", response->DebugString().c_str());
        else
            printf("status[%d] error[%d] errmsg:%s\n",
                    ctx->get_status_code(), ctx->get_error(), ctx->get_errmsg());
    });

    getchar(); // press "Enter" to end.
    return 0;
}

5. make

These compile commands are for Linux only. On other systems, using the complete cmake files in the tutorial is recommended.

g++ -o server server.cc example.pb.cc -std=c++11 -lsrpc
g++ -o client client.cc example.pb.cc -std=c++11 -lsrpc

6. run

Terminal 1

./server

Terminal 2

./client
curl 127.0.0.1:8811/Example/Echo -H 'Content-Type: application/json' -d '{message:"from curl",name:"CURL"}'

Output of Terminal 1

get_req:
message: "Hello, srpc!"
name: "workflow"

set_resp:
message: "Hi, workflow"

get_req:
message: "from curl"
name: "CURL"

set_resp:
message: "Hi, CURL"

Output of Terminal 2

message: "Hi, workflow"
{"message":"Hi, CURL"}

Benchmark

  • CPU: 2 chips / 8 cores / 32 processors, Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
  • Memory: 128G in total
  • 10-Gigabit Ethernet
  • BAIDU brpc client in pooled (connection pool) mode

QPS at cross-machine single client → single server under different concurrency

Client = 1
ClientThread = 64, 128, 256, 512, 1024
RequestSize = 32
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at cross-machine multi-client → single server under different client processes

Client = 1, 2, 4, 8, 16
ClientThread = 32
RequestSize = 32
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at same-machine single client → single server under different concurrency

Client = 1
ClientThread = 1, 2, 4, 8, 16, 32, 64, 128, 256
RequestSize = 1024
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at same-machine single client → single server under different request sizes

Client = 1
ClientThread = 100
RequestSize = 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

Latency CDF for fixed QPS at same-machine single client → single server

Client = 1
ClientThread = 50
ClientQPS = 10000
RequestSize = 1024
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16
Outlier = 1%

IMG

Latency CDF for fixed QPS at cross-machine multi-client → single server

Client = 32
ClientThread = 16
ClientQPS = 2500
RequestSize = 512
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16
Outlier = 1%

IMG

Contact

  • Email - [email protected] - main author
  • Issue - You are very welcome to post questions to the issues list.
  • QQ - group number: 618773193
Owner
C++ Workflow Project and Ecosystem
Comments
  • Memory growth issue on Windows

    #include <QtConcurrent/QtConcurrent>
    #include <QCoreApplication>
    #include <QDebug>

    #include "echo_pb.srpc.h"
    #include "srpc/rpc_types.h"
    #include "workflow/WFFacilities.h"

    class ExampleServiceImpl : public Example::Service
    {
    public:
        void Echo(EchoRequest *req, EchoResponse *resp, srpc::RPCContext *ctx) override
        {
            // ctx->set_compress_type(srpc::RPCCompressLz4);
            resp->set_message("%d", req->GetCachedSize());

            // printf("Server Echo()\nget_req:\n%s\nset_resp:\n%s\n",
            //        req->DebugString().c_str(), resp->DebugString().c_str());
        }
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication a(argc, argv);

        QtConcurrent::run([](){
            srpc::SRPCServer server;
            ExampleServiceImpl impl;

            server.add_service(&impl);

            if (server.start(1412) == 0) {
                qDebug() << "start success!";
            } else {
                qDebug() << "start failed!";
            }
            WFFacilities::WaitGroup group(1);
            group.wait();
        });

        Example::SRPCClient client("127.0.0.1", 1412);
        srpc::RPCTaskParams *par = (srpc::RPCTaskParams *)client.get_task_params();

        // par->compress_type = srpc::RPCCompressLz4;
        int counter = 0;
        while (true)
        {
            EchoRequest req;
            req.set_name("123123");
            std::string data(8192, '\0');
            *(req.mutable_message()) = data;

            // client.Echo(&req, [](EchoResponse *resp, srpc::RPCContext *ctx){
            //     if (ctx->success())
            //         qDebug() << "success " << resp->message().c_str();
            //     else
            //         qDebug() << "failure " << ctx->get_errmsg();
            // });

            auto future = client.async_Echo(&req);
            auto res = future.get();
            qDebug() << "res:" << res.first.message().c_str() << ++counter; // << res.second.success();

            // EchoResponse sync_resp;
            // srpc::RPCSyncContext sync_ctx;
            // client.Echo(&req, &sync_resp, &sync_ctx);
            // if (sync_ctx.success)
            //     printf("success %s\n", sync_resp.DebugString().c_str());
            // else
            //     printf("failure: %s\n", sync_ctx.errmsg.c_str());
        }

        return a.exec();
    }

  • How to use srpc's file compression feature

    Hello author: I want to use workflow to implement compressed file transfer on the client side; at the moment I am starting with batch file download. The main idea is to use ParallelWork and SeriesWork: the first http task in a series gets the file size from the server, and if the file exceeds a threshold, further http tasks are appended to the series to download it in chunks; the SeriesWork callback then verifies file integrity and merges the chunks. However, workflow itself does not support file compression, while srpc does. I don't know much about RPC, so is there a small example that could resolve my confusion? Also, as a beginner in network services, I would appreciate your opinion on whether this approach has any problems. Hoping for your help!

  • ld error when installing srpc on macOS

    On macOS 10.15.7, the make command fails:

    (screenshot: 截屏2022-04-05 下午6.09.47.png)

    openssl was installed with homebrew (version LibreSSL 2.8.3), and workflow is installed correctly. It looks like the architecture setting is wrong.

    These environment variables are already set:

    OPENSSL_ROOT_DIR=/usr/local/opt/openssl
    OPENSSL_LIBRARIES=/usr/local/opt/openssl/lib

    The cmake variables are set as well:

    cmake -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl -DOPENSSL_LIBRARIES=/usr/local/opt/openssl/lib
  • Support for empty RPC parameters

    ProtocolBuf itself provides Empty-parameter support, but when it is imported in a proto file, srpc code generation fails with "google/protobuf/empty.proto not found". So how can SRPC support empty parameters? Modifying the generated RPC functions by hand should work, but many places would have to be changed. By the way, the generated language is C++.
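    One common workaround, sketched here only as an illustration (EmptyMessage is a hypothetical name; this simply avoids the google/protobuf/empty.proto import rather than adding real google.protobuf.Empty support):

    ```proto
    syntax = "proto3";

    // Hypothetical stand-in for google.protobuf.Empty: a message with no
    // fields, defined locally so that srpc_generator needs no extra import.
    message EmptyMessage {
    };

    service Example {
        rpc Ping(EmptyMessage) returns (EmptyMessage);
    };
    ```

    Since the message has no fields, its wire payload stays empty, which gives the same effect as an empty parameter.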

  • Thrift stub code generated by srpc_generator differs from what is expected

    The thrift files used here are the ones provided by apache thrift (GitHub's editor does not accept uploads with the .thrift suffix, so they were renamed to .txt): tutorial.txt shared.txt

    After generating with srpc_generator, some parts are generated incorrectly, as shown in the red boxes of the screenshot (企业微信截图_52ebe79a-890e-42d2-9312-1548ffdc6e6b).

    Summary: my guess is that the error occurs because num1 in struct Work in tutorial.thrift is assigned an initial value of 0, which the code generator fails to recognize. (wecom-temp-55d9af5a6d4522116b7655249faa16a1)

  • fix check for VCPKG_TOOLCHAIN in CMake files

    Make the checks on VCPKG stricter. Also add target_link for snappy and lz4 to the tutorials and benchmarks.

    For those who just use VCPKG to install some other dependencies rather than SRPC itself, srpc will not be found in ${CMAKE_INSTALL_PREFIX}/tools/srpc. Refer to https://github.com/sogou/srpc/issues/216

  • Build problems on Windows

    (image) I have spent a whole day on this and I am exhausted. Yesterday I didn't read the issues and didn't use vcpkg; I built each dependency with cmake myself, and when I finally got to compiling the tutorial it failed with a runtime-library mismatch, and after fixing that it failed again with redefinitions. In the end I gave up and installed the dependencies with vcpkg as described in the issues, then built workflow and srpc step by step, and now there is a new problem at the tutorial again. Honestly, I don't want to use it any more: the dependencies don't look numerous, yet building produces one problem after another.

  • make error on aarch64: error: cannot use typeid with -fno-rtti

    Environment:

    EulerOS, aarch64, protobuf-3.5.0, gcc-7.3.0

    Steps to reproduce:

    cd srpc
    make -j128
    

    Error:

    [ 62%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress.cc.o
    [ 65%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o
    In file included from /usr/include/google/protobuf/message.h:118:0,
                     from /home/x/code/srpc/src/rpc_basic.h:22,
                     from /home/x/code/srpc/src/compress/rpc_compress_snappy.h:20,
                     from /home/x/code/srpc/src/compress/rpc_compress_snappy.cc:19:
    /usr/include/google/protobuf/arena.h: In member function ‘void* google::protobuf::Arena::AllocateInternal(bool)’:
    /usr/include/google/protobuf/arena.h:654:15: error: cannot use typeid with -fno-rtti
         AllocHook(RTTI_TYPE_ID(T), n);
                   ^
    /usr/include/google/protobuf/arena.h: In member function ‘T* google::protobuf::Arena::CreateInternalRawArray(size_t)’:
    /usr/include/google/protobuf/arena.h:693:15: error: cannot use typeid with -fno-rtti
         AllocHook(RTTI_TYPE_ID(T), n);
                   ^
    make[3]: *** [src/compress/CMakeFiles/compress.dir/build.make:76: src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o] Error 1
    make[3]: Leaving directory '/home/x/code/srpc/build.cmake'
    make[2]: *** [CMakeFiles/Makefile2:405: src/compress/CMakeFiles/compress.dir/all] Error 2
    make[2]: Leaving directory '/home/x/code/srpc/build.cmake'
    make[1]: *** [Makefile:152: all] Error 2
    make[1]: Leaving directory '/home/x/code/srpc/build.cmake'
    make: *** [GNUmakefile:13: all] Error 2
    
  • Update srpc_generator

    Add the two updates mentioned in this pull request: https://github.com/holmes1412/srpc/pull/26

    (1) some redundancy in specifying the input IDL file type; (2) a crash if the output directory doesn't exist.

    Also make it compatible with the original usage and add a version check.

    Correct usage:

    srpc_generator protobuf echo_pb.proto ./
    
    srpc_generator echo_thrift.thrift ./
    

    Incorrect usage and error messages:

    [email protected]_QZLIU-MB0 srpc_1412_new % srpc_generator .xxx ./                             
    ERROR: Invalid IDL file .xxx
    Usage:
    	srpc_generator <idl_file> <output_dir>
    
    [email protected]_QZLIU-MB0 srpc_1412_new % srpc_generator xxx tutorial/echo_pb.proto ./
    ERROR: Invalid IDL type xxx
    Usage:
    	srpc_generator <idl_file> <output_dir>
    
    [email protected]_QZLIU-MB0 srpc_1412_new % srpc_generator --version                    
    srpc_generator version 0.9.5
    
    [email protected]_QZLIU-MB0 srpc_1412_new % srpc_generator          
    Usage:
    	srpc_generator <idl_file> <output_dir>
    
  • Integrating srpc into a project: it compiles, but fails at runtime with undefined symbol: _ZTVN8protocol11HttpMessageE

    Demangling with c++filt gives

    vtable for protocol::HttpMessage

    The system is CentOS 7, ProtoBuf version 3.13.0. The project's CMakeLists.txt is as follows; after srpc is built, the headers and static library are placed into the project:

    set(SRPC_LIB srpc)
    list(APPEND SRPC_INCLUDE_DIR
        ${ClickHouse_SOURCE_DIR}/contrib/srpc/_include
        ${ClickHouse_SOURCE_DIR}/contrib/srpc/workflow/_include
    )
    dbms_target_link_libraries(PRIVATE ${SRPC_LIB})
    dbms_target_include_directories(PRIVATE ${SRPC_INCLUDE_DIR})

    Where might the problem be?

  • Compilation error

    /usr/local/include/srpc/rpc_task.inl:169:58: error: no type named 'Series' in 'WFServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>'
    class RPCSeries : public WFServerTask<RPCREQ, RPCRESP>::Series
                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
    /usr/local/include/srpc/rpc_task.inl:449:20: note: in instantiation of member class 'srpc::RPCServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCSeries' requested here
        SERIES *series = dynamic_cast<SERIES *>(series_of(this));
                         ^
    /usr/local/include/srpc/rpc_task.inl:132:2: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::message_out' requested here
        RPCClientTask(const std::string& service_name,
        ^
    /usr/local/include/srpc/rpc_client.h:57:20: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCClientTask' requested here
        auto *task = new TASK(this->service_name,
                     ^
    ./example.srpc.h:222:21: note: in instantiation of function template specialization 'srpc::RPCClient<srpc::RPCTYPEBRPC>::create_rpc_client_task' requested here
        auto *task = this->create_rpc_client_task("Echo", std::move(done));
                     ^
    1 warning and 3 errors generated.

  • Add vcpkg installation instructions

    srpc is available as a port in vcpkg, a C++ library manager that simplifies installation for srpc and other project dependencies. Documenting the install process here will help users get started by providing a single set of commands to build srpc, ready to be included in their projects.

    We also test whether our library ports build in various configurations (dynamic, static) on various platforms (OSX, Linux, Windows: x86, x64) to keep a wide coverage for users.

    I'm a maintainer for vcpkg, and here is what the port script looks like. We try to keep the library maintained as close as possible to the original library. 😊

  • A reference build procedure for srpc on Windows

    0. Preparation before building:

    Download the CMake binaries from the CMake website and install them; CMake >= 3.6 is recommended.

    We assume your current path is E:/GitHubProjects.

    For several reasons, I recommend using vcpkg to install the dependencies.

    Open Powershell/cmd/bash, pull vcpkg, and install the dependencies:

    git clone https://github.com/microsoft/vcpkg.git
    cd vcpkg
    .\bootstrap-vcpkg.bat
    
    # Install the dependencies
    
    # win32
    .\vcpkg.exe install zlib:x86-windows protobuf:x86-windows openssl:x86-windows snappy:x86-windows lz4:x86-windows
    
    # amd64
    .\vcpkg.exe install zlib:x64-windows protobuf:x64-windows openssl:x64-windows snappy:x64-windows lz4:x64-windows
    
    # The reason for specifying the architecture and installing both variants is a strange vcpkg bug that can prevent cmake from finding the packages.
    
    # Note! Global vcpkg integration is not recommended, because vcpkg packages would then pollute your projects. If you want to integrate vcpkg packages into one particular project, use a local nuget repository.
    

    1. Build the source code:

    Pull the code from the official repository and build it:

    # Go back to the parent directory
    cd ..
    git clone --recursive https://github.com/sogou/srpc.git
    cd srpc
    cd workflow
    
    # Switch workflow to the windows branch
    git checkout windows
    
    # Generate the VS solution with cmake; my environment is cmake 3.23.0 and Visual Studio 2022
    
    # Build the 32-bit version
    cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
    
    cmake --build build32 --config Debug
    cmake --build build32 --config Release
    
    # Build the 64-bit version
    cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
    
    cmake --build build64 --config Debug
    cmake --build build64 --config Release
    

    Next, build srpc:

    # Go back to the parent directory
    cd ..
    
    # Build the 32-bit version
    cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
    
    cmake --build build32 --config Debug
    cmake --build build32 --config Release
    
    # Build the 64-bit version
    cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
    
    cmake --build build64 --config Debug
    cmake --build build64 --config Release
    

    2. Build the examples:

    # Build directly in the srpc directory
    
    # Build the 32-bit version
    cmake -B buildt32 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
    
    cmake --build buildt32 --config Debug
    cmake --build buildt32 --config Release
    
    # Build the 64-bit version
    cmake -B buildt64 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
    
    cmake --build buildt64 --config Debug
    cmake --build buildt64 --config Release
    

    This completes the whole build process for srpc on Windows; you can try running the examples to verify the result.

  • SRPC supports reporting traces to OpenTelemetry

    1. Introduction

    SRPC supports generating and reporting tracing and spans, which can be reported in multiple ways, including exporting data locally or to OpenTelemetry.

    Since SRPC follows the data specification of OpenTelemetry and the specification of w3c trace context, now we can use RPCSpanOpenTelemetry as the reporting plugin.

    The report conforms to the Workflow style, which is pure asynchronous task and therefore has no performance impact on the RPC requests and services.

    2. Usage

    After the plugin RPCSpanOpenTelemetry is constructed, we can use add_filter() to add it into server or client.

    For tutorial/tutorial-02-srpc_pb_client.cc, add two lines like the following:

    int main()                                                                   
    {                                                                        
        Example::SRPCClient client("127.0.0.1", 1412); 
    
        RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358"); // jaeger http collector ip:port   
        client.add_filter(&span_otel);
        ...
    

    For tutorial/tutorial-01-srpc_pb_server.cc, add the same two lines. We also add the local plugin to print the reported data on the screen:

    int main()
    {
        SRPCServer server;  
    
        RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358");                            
        server.add_filter(&span_otel);                                                 
    
        RPCSpanDefault span_log; // this plugin will print the tracing info on the screen                                                  
        server.add_filter(&span_log);                                              
        ...
    

    Make the tutorial and run both server and client; we can see some tracing information on the screen.

    image

    We can find that the span_id: 04d070f537f17d00 on the client becomes parent_span_id: 04d070f537f17d00 on the server:

    image

    3. Traces on Jaeger

    Open the Jaeger UI, where we can find our service name Example and method name Echo. There are two span nodes, reported by the server and the client respectively.

    image

    As we saw on the screen, the client reported span_id: 04d070f537f17d00 and the server reported span_id: 00202cf737f17d00; these spans and the correlated tracing information can be found on Jaeger, too.

    image

    4. About Parameters

    Parameters such as how long a trace is collected and how many report retries are made can be specified through the constructor of RPCSpanOpenTelemetry. Code reference: src/module/rpc_span_policies.h

    The default is to collect up to 1000 traces per second. Features such as transparently passing tracing information through the srpc framework have also been implemented, and they conform to the specifications as well.

    5. Attributes

    We can also use add_attributes() to add other information as OTEL_RESOURCE_ATTRIBUTES.

    Please note that our service name "Example" is also set through these attributes, under the key service.name. If service.name is additionally provided in OTEL_RESOURCE_ATTRIBUTES by the user, the srpc service name takes precedence. Refer to: OpenTelemetry#resource

    6. Log and Baggage

    SRPC provides log() and baggage() to carry some user data through span.

    API :

    void log(const RPCLogVector& fields);
    void baggage(const std::string& key, const std::string& value);
    

    As a server, we can use RPCContext to add log annotation:

    class ExampleServiceImpl : public Example::Service                                 
    {
    public: 
        void Echo(EchoRequest *req, EchoResponse *resp, RPCContext *ctx) override
        {
            resp->set_message("Hi back");
            ctx->log({{"event", "info"}, {"message", "rpc server echo() end."}});
        }
    };
    

    As a client, we can use RPCClientTask to add log on span:

    srpc::SRPCClientTask *task = client.create_Echo_task(...);
    task->log({{"event", "info"}, {"message", "log by rpc client echo()."}});
    
  • How to use srpc seamlessly with workflow

    I want to use srpc as a data IO server. When using workflow alone there are ready-made examples to follow (see http_file_server), but with srpc I don't know how to connect the workflow code and srpc seamlessly. Concretely: in the server's Echo method I create a file read task with a callback, roughly like this:

    void Echo(WWIORequest *request, WWIOResponse *response, srpc::RPCContext *ctx) override {
        WFFileIOTask *pread_task;
        pread_task = WFTaskFactory::create_pread_task(fd, buf, size, 0,
                                                              pread_callback);
        pread_task->user_data = response; 
    }
    

    Following the code of http_file_server, the following four lines (or similar) would have to be integrated into the Echo function:

    pread_task->user_data = resp;   /* pass resp pointer to pread task. */
    server_task->user_data = buf;   /* to free() in callback() */
    server_task->set_callback([](WFHttpTask *t){ free(t->user_data); });
    series_of(server_task)->push_back(pread_task);
    

    But server_task does not exist in the srpc environment, which raises a few questions: 1) Without a server_task-like object, how is the data in buf passed to pread_task's callback? 2) Without a server_task-like object, how should buf be freed? If ctx->get_series()->set_callback() is the way, how should the lambda inside it be written? 3) Is series_of(server_task)->push_back(pread_task) equivalent, in the srpc context, to workflow's ctx->get_series()->push_back(pread_task)?

    Thanks a lot for answering.
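    One possible arrangement of the pieces in the question can be sketched roughly as follows (illustrative only: set_data is a hypothetical response field, fd and size come from the question's context, and it assumes ctx->get_series() returns the series of the current server task, as srpc's Workflow integration notes describe):

    ```cpp
    void Echo(WWIORequest *request, WWIOResponse *response,
              srpc::RPCContext *ctx) override
    {
        void *buf = malloc(size);
        WFFileIOTask *pread_task = WFTaskFactory::create_pread_task(
            fd, buf, size, 0,
            [response](WFFileIOTask *task) {
                // 1) pass the data to the response via the captured pointer
                if (task->get_state() == WFT_STATE_SUCCESS)
                    response->set_data(task->get_args()->buf,  // hypothetical field
                                       task->get_retval());
            });

        // 3) the srpc equivalent of series_of(server_task)->push_back(...)
        ctx->get_series()->push_back(pread_task);

        // 2) free buf once the whole series (reply included) has finished
        ctx->get_series()->set_callback([buf](const SeriesWork *) {
            free(buf);
        });
    }
    ```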
