Unified Executors

Overview

The 'libunifex' project is a prototype implementation of the C++ sender/receiver async programming model that is currently being considered for standardisation.

This project contains implementations of the following:

  • Schedulers
  • Timers
  • Asynchronous I/O (Linux w/ io_uring)
  • Algorithms that encapsulate certain concurrency patterns
  • Async streams
  • Cancellation
  • Coroutine integration

Status

This project is still evolving and should be considered experimental in nature. No guarantee is made for API or ABI stability.

Build status

  • GitHub Actions: see the repository's Actions page for the current build status.

Documentation

Design and API documentation can be found in the doc/ directory of the repository.

Requirements

A recent compiler that supports C++17 or later. Libunifex is known to work with the following compilers:

  • GCC, 9.x and later
  • Clang, 10.x and later
  • MSVC 2019.6 and later

This library also supports C++20 coroutines. You will need to compile with coroutine support enabled if you want to use the coroutine integrations. This generally means adding -std=c++2a or -fcoroutines-ts on Clang (see "Configuring" below).

Linux

The io_uring support on Linux requires a recent kernel version (5.6 or later).

See http://git.kernel.dk/cgit/linux-block/log/?h=for-5.5/io_uring

The io_uring support depends on liburing: https://github.com/axboe/liburing/

Windows

The windows_thread_pool executor requires Windows Vista or later.

Building

This project can be built using CMake.

The examples below assume the Ninja build system, but other build systems supported by CMake can be used.

Configuring

First generate the build files under the ./build subdirectory.

From the libunifex project root:

cmake -G Ninja -H. -Bbuild \
      -DCMAKE_CXX_COMPILER:PATH=/path/to/compiler

By default, this builds libunifex in C++17 without coroutines. If you want to turn on coroutines with Clang, add:

      -DCMAKE_CXX_FLAGS:STRING=-fcoroutines-ts

To use libc++ with clang, which has coroutine support, you should also add:

      -DCMAKE_CXX_FLAGS:STRING=-stdlib=libc++ \
      -DCMAKE_EXE_LINKER_FLAGS:STRING="-L/path/to/libc++/lib"

If you want to build libunifex as C++20, add:

      -DCMAKE_CXX_STANDARD:STRING=20
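
For example, a complete configure command for Clang using libc++ with coroutines enabled (compiler and library paths are placeholders) combines the flags above into single definitions:

cmake -G Ninja -H. -Bbuild \
      -DCMAKE_CXX_COMPILER:PATH=/path/to/clang++ \
      -DCMAKE_CXX_FLAGS:STRING="-stdlib=libc++ -fcoroutines-ts" \
      -DCMAKE_EXE_LINKER_FLAGS:STRING="-L/path/to/libc++/lib"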

Building Library + Running Tests

To build the library and tests, run the following from the ./build subdirectory:

ninja

Once the tests have been built you can run them.

From the ./build subdirectory run:

ninja test

License

This project is made available under the Apache License, version 2.0, with LLVM Exceptions.

See LICENSE.txt for details.

References

C++ standardisation papers:

  • P2300 std::execution (https://wg21.link/p2300), referenced throughout the discussions below

Owner

Facebook Experimental: Facebook projects that are not necessarily used in production but are being developed in the open nevertheless.
Issues and pull requests
  • adding upon_* async algorithms

    P2300 proposes two algorithms similar to unifex::then: upon_done and upon_error. This PR intends to add those algorithms to libunifex under the names unifex::upon_done and unifex::upon_error. I am trying to follow the coding conventions of the existing libunifex modules and would be happy to make changes wherever something doesn't match the project's conventions. (A usage sketch follows the progress list below.)

    Current Progress:

    • upon_done implementation and tests added; tests are passing and are kept similar to the unifex::then tests.
    • upon_error implementation and tests added; tests are passing.
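
    A minimal usage sketch of the proposed algorithms (an assumption based on this PR's description; upon_error is not merged API, while just_error and sync_wait are existing libunifex facilities):

    // Sketch only: map the error channel into a value, then wait for the result.
    auto s = unifex::upon_error(
        unifex::just_error(std::make_exception_ptr(std::runtime_error{"boom"})),
        [](std::exception_ptr) noexcept { return 42; });
    auto result = unifex::sync_wait(std::move(s));  // std::optional<int> containing 42
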
  • Make libunifex play nicer with the wider C++ ecosystem, plus add in LLFIO

    Changes:

    • Port third-party dependency management over to the Hunter CMake package manager, and fetch gtest from there from now on.
    • Use CMake toolchain files for alternative compilers and STL implementations.
    • Add in LLFIO and liburing as external projects.
    • Make libunifex installable and a good citizen for third-party CMake inclusion (see the sketch after this list).
    • Add Ninja build support for Windows (untested, but should work).
    • Coroutine detection no longer forces the STL to libc++ on Linux, and now supports future GCC coroutine flags.
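
    A sketch of what third-party consumption could look like once the install support lands (the exported unifex::unifex target name is an assumption):

    # Hypothetical consumer CMakeLists.txt fragment
    find_package(unifex CONFIG REQUIRED)
    target_link_libraries(my_app PRIVATE unifex::unifex)
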
  • Example to define a receiver?

    I am confused about when and where I should define a receiver. In the libunifex examples I only find definitions of senders, not receivers. I want to know in what situations we should define a receiver and how we should implement one.
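
    A minimal sketch of a receiver, assuming the member-function receiver form used throughout libunifex (header names are an assumption):

    #include <unifex/just.hpp>
    #include <unifex/sender_concepts.hpp>  // unifex::connect / unifex::start (assumed location)
    #include <cstdio>
    #include <exception>

    // A receiver provides the three completion channels a sender may invoke.
    struct print_receiver {
      void set_value(int v) && { std::printf("got %d\n", v); }
      void set_error(std::exception_ptr) && noexcept { std::terminate(); }
      void set_done() && noexcept {}
    };

    int main() {
      // Receivers are usually only written at the "leaf" of an async program;
      // algorithms such as sync_wait supply one for you internally.
      auto op = unifex::connect(unifex::just(42), print_receiver{});
      unifex::start(op);
    }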

  • Add any_object type-erasing wrapper

    Adds a move-only unifex::any_object type-erasing wrapper that supports the small-object optimisation.

    Also refactors some of the type-erasing helper classes from any_unique to make them more reusable.

    Some questions on semantics:

    • If I pass an allocator with std::allocator_arg to the constructor but the type to be constructed fits in the small-object buffer, should it use the allocator, or should it still use the small-object buffer and ignore the allocator? (It currently does the latter.)
    • Should any_object allow non-movable objects that would be heap-allocated anyway because they are large (the move ctor is not called on the concrete type in this case)?
    • If so, should any_object also store small objects that are not movable in a heap allocation?
    • Or should we just always require that wrapped types are move-constructible? (This is the current behaviour.)
    • Are we OK with requiring that destructors are noexcept?
    • any_object currently utilises ADL-isolation techniques, but this means we can't deduce template args for this type. Is this going to break any use cases that write customisations of algorithms for any_object?
    • Are there better ways of structuring the template args to make them more ergonomic? Currently we have any_object<InlineSize, InlineAlignment, RequireNoExceptMove, DefaultAllocator, CPOs...> (a hypothetical instantiation is sketched below, after the TODO list).
    • Should the move constructor leave the source object in a valid but moved-from state when moving a small object? Or should it destroy the source object after a successful move?

    TODO

    • [ ] Add more tests for any untested use-cases
    • [ ] Write documentation for type-erasing wrappers (any_unique as well as any_object)
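
    For reference, a hypothetical instantiation following the template-argument order listed above (every concrete value and the do_work_cpo name are invented for illustration):

    // Hypothetical only:
    // any_object<InlineSize, InlineAlignment, RequireNoExceptMove, DefaultAllocator, CPOs...>
    using any_worker = unifex::any_object<
        64,                          // InlineSize: bytes of inline storage
        alignof(std::max_align_t),   // InlineAlignment
        true,                        // RequireNoExceptMove
        std::allocator<std::byte>,   // DefaultAllocator
        do_work_cpo>;                // CPOs... the wrapper type-erases
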
  • Use gtest iff BUILD_TESTING=ON

    This PR disables gtest if one wants to build the examples but not the tests. In my case, I have trouble compiling gtest with GCC 11.0.1 on Fedora 34, and this change still lets me "test" the examples.
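
    A sketch of the kind of guard this implies (an assumption, not the PR's exact diff):

    # Only pull in gtest when the tests are actually being built.
    if(BUILD_TESTING)
      enable_testing()
      add_subdirectory(test)   # hypothetical layout: the tests depend on gtest
    endif()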

  • fix non-virtual destructor warnings

    When compiling the coroutine_stream_consumer example with -Werror, I observed errors such as:

        ... has accessible non-virtual destructor [-Werror=non-virtual-dtor]
        220 |         struct concrete_receiver final : next_receiver_base {
    

    Fix these by adding defaulted virtual destructors to all base structs that declare virtual functions, as sketched below.

    Signed-off-by: Patrick Williams [email protected]
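
    The shape of the fix (illustrative; next_receiver_base is the struct named in the warning above, but this is not the exact diff):

    // Deleting a derived object through a base pointer is only well-defined when
    // the base has a virtual destructor; a defaulted one silences the warning.
    struct next_receiver_base {
      virtual void some_virtual_fn() = 0;       // hypothetical virtual member
      virtual ~next_receiver_base() = default;
    };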

  • Add bulk_via

    bulk_via returns a ManySender that produces the results from its predecessor on the execution context of the specified scheduler, with every result scheduled individually.

    The example use case I am trying to fulfil is: we are parsing some data, which must be done sequentially, but once we have parsed an independent chunk of data we want to send it for processing immediately (processing can be done in parallel):

                unifex::bulk_transform(
                    unifex::bulk_via(
                        thread_pool_scheduler,
                        unifex::bulk_transform(
                            unifex::bulk_schedule(thread_pool_scheduler, count),
                            [](std::size_t index) noexcept { /* Parse data sequentially*/ return handleToParsedObject; },
                            unifex::seq
                        )
                    ),
                    [&](std::size_t handleToParsedObject) noexcept { /* Do post-processing in parallel*/ },
                    unifex::par_unseq
                )
    
  • Fix compile issue related to io_uring

    Fix a compilation problem when using liburing support, caused by a layout change in an io_uring struct (removed padding). Adapted the usage accordingly.

  • simple usage of transform & sync_wait with compile error

    I tried writing a shortest unifex example possible and came up with this

    [screenshot of the example code, not preserved]

    I'm curious why sync_wait is not compiling. The error is a bit too obfuscated for someone not yet very familiar with unifex. If I return void from the second function, it works fine. After inspecting transform's definition, it seems to allow returning custom values. (A minimal sketch of the pattern appears at the end of this post.)

    I'm using vcpkg's version of unifex (from when "then" was still "transform"), with Visual Studio 2019.

    Thanks.
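
    A minimal sketch of the pattern described above (an assumption, since the poster's exact code was only in the screenshot):

    // transform returns a sender whose value is the callable's return value;
    // sync_wait drives it and yields a std::optional of that value.
    auto result = unifex::sync_wait(
        unifex::transform(unifex::just(21), [](int v) { return v * 2; }));
    // result is std::optional<int>{42} on success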

  • add unifex::create for trivially wrapping a C-style async API, fixes #294

    create<ValueTypes...>(callable)

    Synopsis: A utility for building a (lazy) sender-based async API out of an eager C-style async API that accepts a void* context and a callback.

    Example:

    // A void-returning C-style async API that accepts a context and a continuation:
    using callback_t = void(void* /*context*/, int /*result*/);
    void old_c_style_api(int a, int b, void* context, callback_t* callback_fn);
    
    // A sender-based async API implemented in terms of the C-style API (using C++20):
    unifex::typed_sender auto new_sender_api(int a, int b) {
      return unifex::create<int>([=](auto& rec) {
        static_assert(unifex::receiver_of<decltype(rec), int>);
        old_c_style_api(a, b, &rec, [](void* context, int result) {
          unifex::void_cast<decltype(rec)>(context).set_value(result);
        });
      });
    }
    

    ValueTypes... is a pack representing the value types of the resulting sender. It should be the list of value type(s) accepted by the callback (with the exception of the void* context). In the above example, since callback_t accepts an int as the result of the async computation, we pass int as the template argument to create.

    The first argument to create is a void-returning callable that accepts an lvalue reference to an object whose type satisfies the unifex::receiver_of<ValueTypes...> concept. This function should dispatch to the C-style callback (see example).

    The second argument is an optional extra bit of data to be bundled with the receiver passed to the callable. E.g., if the first argument to create is a lambda that accepts (auto& rec) and the second argument is 42, then from within the body of the lambda, the value of the expression rec.context() is 42.

    create returns a typed sender that, when connected and started, dispatches to the wrapped C-style API with the callback of your choosing. The receiver passed to the callable wraps the receiver passed to connect. The callback should "complete" the receiver passed to the callable, which will complete the receiver passed to connect in turn.
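
    A hypothetical call site for the wrapper defined above:

    // Drive the lazy sender to completion; the result is a std::optional<int>.
    std::optional<int> result = unifex::sync_wait(new_sender_api(1, 2));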

  • Does io_uring_context support multiple IO threads?

    It seems that only the remote queue is atomic, while the local queue and the pending-I/O queue are not. How can I use multiple threads with io_uring_context?
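
    One common workaround (an assumption, not an official answer) is to run one io_uring_context per thread and shard I/O across their schedulers:

    #include <unifex/linux/io_uring_context.hpp>
    #include <unifex/inplace_stop_token.hpp>
    #include <thread>

    int main() {
      // Each context is driven by a single thread; give each its own.
      unifex::linuxos::io_uring_context ctx_a;
      unifex::linuxos::io_uring_context ctx_b;
      unifex::inplace_stop_source stop;

      std::thread ta{[&] { ctx_a.run(stop.get_token()); }};
      std::thread tb{[&] { ctx_b.run(stop.get_token()); }};

      // ... submit I/O via ctx_a.get_scheduler() / ctx_b.get_scheduler() ...

      stop.request_stop();
      ta.join();
      tb.join();
    }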

  • How to parallelize producer and consumer in current stream design (maybe with buffer)?

    I noticed that libunifex has an additional stream design, which is currently missing from P2300. However, I have no idea how to parallelize the producer and the consumer with the current design.

    What I want:

    new_thread_context thread_ctx;  // used for producer for long running computational task
    io_context io_ctx;  // used for consumer to do some IO on values produced by producer
    
    async_buffer buffer;  // something like System.Threading.Tasks.Dataflow.BufferBlock in C#
    
    Task<> producer() {
      co_await schedule(thread_ctx.get_scheduler());
      for (auto&& value: long_running_work()) {
         buffer.post(value);
      }
      buffer.complete();
    }
    
    Task<> consumer() {
      co_await schedule(io_ctx.get_scheduler());
      while (auto value = co_await buffer.async_read()) {
        // async IO operation need to be scheduled by io_context
        co_await async_io(value);
      }
    }
    
    sync_wait(when_all(producer(), consumer()));
    

    With the current stream design, values are produced lazily and on demand, only when the consumer asks for the next value. In this case, I must co_await the completion of the async I/O operation before I can ask the producer to compute the next value, which wastes time. Since the computation and the I/O operations are scheduled on different contexts, they could actually run in parallel.

    auto s = some_stream();
    while (auto value = co_await done_as_optional(next(s))) {
      // async io operation need to be scheduled by io_context
      co_await async_io(value);
    }
    

    By the way, here I cannot just put async_io into an additional async_scope to avoid co_awaiting it, since some I/O operations, like AsyncWrite in gRPC, do not support being called multiple times before being scheduled by the context again.

    Maybe I can put next(s) into an async_scope? I don't know whether that's right. But if the mentioned async_buffer existed, the solution would be handy.

    Here I used coroutines to express my thoughts since they are more intuitive. Another question is how to express this without coroutines?

    Thanks

  • tag_invoke related compiler error with gcc >= 11

    I was playing with the following example, and noticed it passes on GCC 10, clang 12,13,14, but fails to compile on GCC 11, 12.

        let_done(
            let_error(
                    []() -> task<int> { co_return 5; }(),
                [](auto&&) noexcept { // lambda 1
                    return just();
                }
            ) ,[]() noexcept { // lambda 2
                return just();
            }
        );
    

    https://godbolt.org/z/fG6z49bM5. Note that in this example I'm only constructing a sender, not actually consuming it (e.g. no sync_wait). I believe the error comes from one of the is_tag_invocable evaluations, replicated below:

        auto l1 = [](auto&&) noexcept {return just(); }; // lambda 1
        auto l2 = []() noexcept { return just(); }; // lambda 2
    
        static_assert(
            !is_tag_invocable_v<
                _let_d::_cpo::_fn,
                _let_e::_sndr<_task::_task<int>::type, decltype(l1)>::type,
                decltype(l2)
            >);
    

    I think the compiler sees the let_error sender's tag_invoke, eventually checking whether Receiver = unifex::_let_e::_rcvr<unifex::_task::_task<int>::type&&, main()::<lambda(auto:16&&)>, main()::<lambda()> >::type is a receiver, which it is not, since main()::<lambda()> (the second lambda in my example) is not itself a receiver.

    The static_assert above passes compilation on GCC 10/clang, but fails on GCC 11/12 in the same way as the original example.

    Narrowing down to a smaller example

    I simplified the overall code, focusing on the tag_invoke portion and agnostic of unifex: https://godbolt.org/z/Ee35h3eYq. This similarly passes compilation with GCC 10 and clang, but not GCC 11/12. GCC 10 seems to be less eager and never attempts to instantiate receiver_type<object>, but GCC 11/12 evaluate the template more eagerly and fail compilation while doing so. I can't tell which behavior is correct (compiler bug?).

  • [WIP] get_completion_scheduler query for senders as of project goals

    The get_completion_scheduler query is demonstrated in P2300R4 and is also one of the project goals. We also need it for the unifex::bulk PR (#354), so I am starting to implement it.

    I am not an expert on this, so I will need continuous pointers on how we want to progress.

    I guess there are two phases to this feature:

    • Implementing the base get_completion_scheduler query, which would just tag_invoke the call on the given sender.
    • Implementing get_completion_scheduler for each and every sender.

    The latter seems like a lot of work, so based on the maintainers' decision we can do one of the following:

    • This PR covers both phases.
    • This PR covers just the first phase, with many subsequent PRs implementing get_completion_scheduler for the different senders.

    I may raise very simple questions on this PR as I am a newbie at this (and possibly at C++ too :sweat_smile:)
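
    A sketch of the intended usage, following P2300R4 (the spelling below is an assumption; this PR's final API may differ):

    // For a sender produced by schedule(sched), the value-channel completion
    // scheduler should compare equal to sched (sched is any scheduler here).
    auto sndr = unifex::schedule(sched);
    auto completion_sched =
        unifex::get_completion_scheduler<unifex::tag_t<unifex::set_value>>(sndr);
    assert(completion_sched == sched);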

  • Investigate changing customisations of algorithms to be based on connect instead of invocation of algorithm itself

    The current implementation of scheduler affinity for the coroutine task type relies on special behaviour of being able to identify when the coroutine awaits a sender returned by schedule(some_scheduler) so that it can treat that as a change of the current scheduler to some other scheduler.

    The current solution complicates the implementation of schedule() a lot and makes it difficult to extend to other kinds of expressions as well.

    One option could be to turn sender algorithms into generic algorithms that just curry their arguments into a sender<CPO, Args...> type, which then customises connect() to pass the arguments through to connect(receiver, CPO, args...).

    As all senders would then effectively be just an argument-currying mechanism, we could customise await_transform() when the awaited object is of type sender<tag_t<schedule>, SomeScheduler> to handle this.

    This would also allow simpler customisation of entire sender expressions when, e.g., the expression is passed to on(sched, some_sender_expression). The sender expression can just be an expression template of a well-known structure/type that the scheduler can use to customise on(). See below for an example of how this might work.

    Implementation Sketch

    It might also help if you think of connect() instead as async_invoke().

    For example: We can define a generic sender type, parameterised on a CPO which is just responsible for currying arguments.

    template<typename CPO, typename... Args>
    struct sender {
      constexpr sender(const sender& other) = default;
      constexpr sender(sender&& other) = default;
    
      template<typename... Args2>
        requires (sizeof...(Args) == sizeof...(Args2)) && (std::constructible_from<Args, Args2> && ...)
      constexpr sender(Args2&&... args2) : args(static_cast<Args2&&>(args2)...) {}
    
      // Allow further currying args to produce another sender.
      template<typename... ExtraArgs>
        requires (std::copy_constructible<Args> && ...) && (std::constructible_from<remove_cvref_t<ExtraArgs>, ExtraArgs> && ...)
      sender<CPO, Args..., remove_cvref_t<ExtraArgs>...> operator()(ExtraArgs&&... extraArgs) const & {
        return std::apply([&](const Args&... args) {
          return sender<CPO, Args..., remove_cvref_t<ExtraArgs>...>{args..., static_cast<ExtraArgs&&>(extraArgs)...};
        }, this->args);
      }
      
      // Allow pipe operator to prepend an argument.
      template<typename Arg>
      friend sender<CPO, remove_cvref_t<Arg>, Args...> operator|(Arg&& arg, const sender& s) {
        return std::apply([&](const Args&... args) {
          return sender<CPO, remove_cvref_t<Arg>, Args...>{static_cast<Arg&&>(arg), args...};
        }, s.args);
      }
      
      // etc.. for other value categories
    
      // Customise async_invoke() to forward to the CPO
      template<typename Receiver, typename... ExtraArgs>
        requires std::invocable<tag_t<async_invoke>, Receiver, CPO, const Args&..., ExtraArgs...>
      friend decltype(auto) tag_invoke(tag_t<async_invoke>, Receiver r, const sender& s, ExtraArgs&&... extraArgs) {
        return std::apply([&](const Args&... args) {
          return async_invoke(move(r), CPO{}, args..., static_cast<ExtraArgs&&>(extraArgs)...);
        }, s.args);
      }
    
      std::tuple<Args...> args;
    };
    

    Then we can define CPOs with a default implementation of async_invoke(). We can use a helper here so that invoking the CPO returns the currying sender.

    template<typename CPO>
    struct sender_cpo_base {
      template<typename... Args>
      sender<CPO, remove_cvref_t<Args>...> operator()(Args&&... args) const {
        return sender<CPO, remove_cvref_t<Args>...>{static_cast<Args&&>(args)...};
      }
    };
    
    struct just_t : sender_cpo_base<just_t> {
      template<typename Receiver, typename... Args>
      struct default_op_state {
        Receiver r;
        std::tuple<remove_cvref_t<Args>...> args;
        
        friend void tag_invoke(tag_t<start>, default_op_state& op) noexcept {
          std::apply([&](Args&&... args) noexcept {
            set_value(move(op.r), move(args)...);
          }, move(op.args));
        }
      };
    
      template<typename Receiver, typename... Args>
        requires /* ... */
      friend default_op_state<Receiver, remove_cvref_t<Args>...> tag_invoke(tag_t<async_invoke>, Receiver r, just_t, Args&&... args) {
        return {move(r), {static_cast<Args&&>(args)...}};
      }
    };
    
    struct transform_t : sender_cpo_base<transform_t> {
      template<typename Receiver, typename Sender, typename... Funcs>
      struct default_op_state {
        struct receiver {
          default_op_state& op;
          
          template<typename... Values>
          friend void tag_invoke(tag_t<set_value>, receiver&& r, Values&&... values) {
            try {
              std::apply([&](Funcs&&... funcs) {
                if constexpr (sizeof...(Funcs) == 1) {
                  set_value(move(r.op.r), move(funcs)(static_cast<Values&&>(values)...)...);
                } else {
                  set_value(move(r.op.r), move(funcs)(values...)...);
                }
              }, move(r.op.funcs));
            } catch (...) {
              set_error(move(r.op.r), std::current_exception());
            }
          }
        };
        
        friend void tag_invoke(tag_t<start>, default_op_state& op) noexcept {
          start(op.childOp);
        }
        
        template<typename... Fs2>
        default_op_state(Receiver r, Sender&& s, Fs2&&... fs)
        : r(move(r))
        , funcs(static_cast<Fs2&&>(fs)...)
        , childOp(async_invoke(receiver{*this}, static_cast<Sender&&>(s)))
        {}
        
        Receiver r;
        std::tuple<Funcs...> funcs;
        async_invoke_result_t<receiver, Sender> childOp;
      };
    };
    

    Then a customisation of the algorithm could be implemented as follows:

    struct my_scheduler {
      template<typename Receiver>
      struct schedule_op { ... };
    
      template<typename Receiver>
      friend auto tag_invoke(tag_t<async_invoke>, Receiver r, schedule_t, my_scheduler s) {
        return schedule_op<Receiver>{ ... };
      }
    };
    

    This doesn't yet handle things like sender-queries or sender-traits which also need to be considered, however.

    Sketch of Scheduler Customisation

    struct my_scheduler {
      // ...
    };
    
    // customise: on(my_scheduler{}, bulk(src, count, f))
    template<typename Receiver, typename Src, typename Count, typename Func>
    auto tag_invoke(tag_t<async_invoke>, Receiver r, tag_t<on>, my_scheduler s, sender<tag_t<bulk>, Src, Count, Func> bulkOp) {
      // customisation goes here...
    }
    