dqlite

dqlite is a C library that implements an embeddable and replicated SQL database engine with high-availability and automatic failover.

The name "dqlite" stands for "distributed SQLite", meaning that dqlite extends SQLite with a network protocol that can connect together various instances of your application and have them act as a highly-available cluster, with no dependency on external databases.

Design highlights

  • Asynchronous single-threaded implementation using libuv as event loop.
  • Custom wire protocol optimized for SQLite primitives and data types.
  • Data replication based on the Raft algorithm and its efficient C-raft implementation.

License

The dqlite library is released under a slightly modified version of LGPLv3, which includes a copyright exception allowing users to statically link the library code into their project and release the final work under their own terms. See the full license text.

Try it

The simplest way to see dqlite in action is to use the demo program that comes with the Go dqlite bindings. Please see the relevant documentation in that project.
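
One way to get the demo running, assuming a recent Go toolchain, an installed libdqlite, and the current go-dqlite module layout (check that project's README for the authoritative steps):

```shell
# Hypothetical sketch: install and start a demo node.
# Module path and flags are taken from the go-dqlite project.
go install github.com/canonical/go-dqlite/cmd/dqlite-demo@latest
dqlite-demo --api 127.0.0.1:8001 --db 127.0.0.1:9001
```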

Media

A talk about dqlite was given at FOSDEM 2020; you can watch it here.

Wire protocol

If you wish to write a client, please refer to the wire protocol documentation.

Install

If you are on a Debian-based system, you can get the latest development release from dqlite's dev PPA:

sudo add-apt-repository ppa:dqlite/dev
sudo apt-get update
sudo apt-get install libdqlite-dev

Build

To build libdqlite from source you'll need:

  • A reasonably recent version of libuv (v1.8.0 or later).
  • A reasonably recent version of sqlite3-dev.
  • A build of the C-raft Raft library.

Your distribution should already provide you with a pre-built libuv shared library and libsqlite3-dev.
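
On Debian-based systems, for example, the development packages can typically be installed as follows (the package names below are the usual Debian ones; adjust for your distribution):

```shell
# Debian/Ubuntu package names; liblz4-dev is needed if your raft
# build has LZ4 compression enabled.
sudo apt-get install libuv1-dev libsqlite3-dev liblz4-dev
```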

To build the raft library:

git clone https://github.com/canonical/raft.git
cd raft
autoreconf -i
./configure
make
sudo make install
cd ..

Once all the required libraries are installed, you can build the dqlite shared library itself by running:

autoreconf -i
./configure
make
sudo make install
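
On Linux you may also need to refresh the dynamic linker cache after installing, since the autotools default prefix is /usr/local:

```shell
# Refresh the linker cache so the freshly installed libdqlite is found.
sudo ldconfig
```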

Usage Notes

Detailed tracing will be enabled when the environment variable LIBDQLITE_TRACE is set before startup.
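
For example, a minimal sketch (the binary name below is hypothetical):

```shell
# Enable detailed libdqlite tracing for programs started from this shell.
export LIBDQLITE_TRACE=1
# ./my-dqlite-app   # hypothetical dqlite-linked binary
echo "LIBDQLITE_TRACE=$LIBDQLITE_TRACE"
```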

Comments
  • Lost node due to raft issue

    I've been testing failure scenarios and I have a node that can't start because it gets the following error on startup:

    raft_start(): io: load closed segment 0000000010000470-0000000010000655: 
    

    All I've been doing is randomly killing the server. ~Let me know if you want the data files.~ I emailed the data files for this node.

  • database is locked errors

    I have a Go app running on sqlite today using the connection string "./db/state.db?_journal=WAL&cache=shared". This is essentially a multithreaded app and it runs with no apparent issues. I've switched it to dqlite and it immediately gets "database is locked" errors. Is there something I can do to allow concurrency without getting "database is locked" errors?

  • packaging dqlite in linux distribution / static linking to sqlite3 fork

    The current way of building dqlite is highly problematic because it needs a fork of sqlite3.

    Installing/packaging the sqlite3 fork is a problem for distributions because it conflicts with regular sqlite3. Packaging the fork would mean patching it to rename the libraries to some other name (libsqlite3dqlite.so, maybe).

    Other solutions are: a) merge the changes into upstream sqlite3 (I assume this wasn't done because upstream doesn't want them); b) change the configure system in dqlite to be able to use a bundled sqlite3 fork and statically link it (and only it, leaving the rest shared). Static linking is not a great solution (security issues in sqlite, etc.) but well...

    Any other ideas on how to make dqlite "packageable" for any Linux distro?

  • Memory spike with concurrent operations

    When hitting the dqlite-demo with multiple concurrent requests I can pretty much reliably reproduce a memory spike of GBs.

    To reproduce, first add an extra HTTP verb handler to the dqlite-demo:

    diff --git a/cmd/dqlite-demo/dqlite-demo.go b/cmd/dqlite-demo/dqlite-demo.go
    index 49f8197..deb4a91 100644
    --- a/cmd/dqlite-demo/dqlite-demo.go
    +++ b/cmd/dqlite-demo/dqlite-demo.go
    @@ -77,6 +77,15 @@ Complete documentation is available at https://github.com/canonical/go-dqlite`,
                                            if _, err := db.Exec(update, key, value); err != nil {
                                                    result = fmt.Sprintf("Error: %s", err.Error())
                                            }
    +                               case "POST":
    +                                       result = "yes!"
    +                                       for i := 0; i < 10000; i++ {
    +                                               value := fmt.Sprintf("This is some data %d", i)
    +                                               if _, err := db.Exec(update, key, value); err != nil {
    +                                                       result = fmt.Sprintf("Error: %s %d", err.Error(), i)
    +                                                       break
    +                                               }
    +                                       }
                                    default:
                                            result = fmt.Sprintf("Error: unsupported method %q", r.Method)
    

    Setup a three node cluster as described in the go-dqlite readme [1].

    In two terminals, start triggering the new operation:

    while true ; do curl -X POST -d "foo=bar" http://localhost:8001/mykey1 ; done
    

    And

    while true ; do curl -X POST -d "foo=bar" http://localhost:8001/mykey2 ; done
    

    Let it run for a few minutes; about half the time, the memory usage of one of the dqlite-demo processes will spike, e.g. [2].

    FYI, @sevangelatos

    [1] https://github.com/canonical/go-dqlite#demo [2] https://pasteboard.co/JET1frM.jpg

  • row size limit? getting "Error: disk I/O error" from dqlite-demo

    Is there a row byte size limit?

    libuv=1.44.2, libraft=0.15.0, sqlite3=3.39.4, dqlite=1.11.1

    running the dqlite-demo with the following:

    server

    CGO_LDFLAGS_ALLOW="-Wl,-z,now" CGO_ENABLED=1 go run  dqlite-demo.go --api 127.0.0.1:8001 --db 127.0.0.1:9001 -v
    

    client

    # base64 /dev/urandom | head -c 20000000 > file.txt
    #  curl -X PUT -T "file.txt" http://127.0.0.1:8001/my-key-1
    Error: disk I/O error
    
  • panic in sqlite

    I keep getting this panic. In this situation the code path does not go directly through dqlite: in k3s I'm supporting both sqlite and dqlite, so when I run the old sqlite code path but with the patched sqlite library, it randomly fails in this same place:

    goroutine 77265 [syscall]:
    runtime.cgocall(0x3416b20, 0xc00d447388, 0x0)
            /usr/local/go/src/runtime/cgocall.go:128 +0x5b fp=0xc00d447358 sp=0xc00d447320 pc=0x40472b
    github.com/rancher/k3s/vendor/github.com/mattn/go-sqlite3._Cfunc_sqlite3_close_v2(0x9144e00, 0x0)
            _cgo_gotypes.go:607 +0x49 fp=0xc00d447388 sp=0xc00d447358 pc=0xf282b9
    github.com/rancher/k3s/vendor/github.com/mattn/go-sqlite3.(*SQLiteConn).Close.func1(0xc01717a5a0, 0x0)
            /go/src/github.com/rancher/k3s/vendor/github.com/mattn/go-sqlite3/sqlite3.go:1646 +0x5f fp=0xc00d4473c8 sp=0xc00d447388 pc=0xf3edff
    github.com/rancher/k3s/vendor/github.com/mattn/go-sqlite3.(*SQLiteConn).Close(0xc01717a5a0, 0x8, 0xc0137dd6c0)
            /go/src/github.com/rancher/k3s/vendor/github.com/mattn/go-sqlite3/sqlite3.go:1646 +0x2f fp=0xc00d4473f8 sp=0xc00d4473c8 pc=0xf37f9f
    database/sql.(*driverConn).finalClose.func2()
            /usr/local/go/src/database/sql/sql.go:521 +0x49 fp=0xc00d447430 sp=0xc00d4473f8 pc=0xf194b9
    database/sql.withLock(0x486c400, 0xc016a67a80, 0xc00d4474c8)
            /usr/local/go/src/database/sql/sql.go:3097 +0x63 fp=0xc00d447458 sp=0xc00d447430 pc=0xf19133
    database/sql.(*driverConn).finalClose(0xc016a67a80, 0x3e18580, 0x7fb7fc6d4908)
            /usr/local/go/src/database/sql/sql.go:519 +0x130 fp=0xc00d4474f0 sp=0xc00d447458 pc=0xf0c9d0
    database/sql.finalCloser.finalClose-fm(0xc0007d8560, 0x4823b60)
            /usr/local/go/src/database/sql/sql.go:565 +0x2f fp=0xc00d447518 sp=0xc00d4474f0 pc=0xf1bb9f
    database/sql.(*driverConn).Close(0xc016a67a80, 0xc016a67a80, 0x0)
            /usr/local/go/src/database/sql/sql.go:500 +0x138 fp=0xc00d447568 sp=0xc00d447518 pc=0xf0c878
    database/sql.(*DB).putConn(0xc0007d8540, 0xc016a67a80, 0x0, 0x0, 0xc0000a0200)
            /usr/local/go/src/database/sql/sql.go:1277 +0x1c8 fp=0xc00d4475d8 sp=0xc00d447568 pc=0xf101c8
    database/sql.(*driverConn).releaseConn(...)
            /usr/local/go/src/database/sql/sql.go:421
    database/sql.(*driverConn).releaseConn-fm(0x0, 0x0)
            /usr/local/go/src/database/sql/sql.go:420 +0x4c fp=0xc00d447610 sp=0xc00d4475d8 pc=0xf1bc1c
    database/sql.(*Rows).close(0xc01464ca80, 0x0, 0x0, 0x0, 0x0)
            /usr/local/go/src/database/sql/sql.go:3001 +0x15a fp=0xc00d447660 sp=0xc00d447610 pc=0xf18c0a
    database/sql.(*Rows).Close(...)
            /usr/local/go/src/database/sql/sql.go:2972
    database/sql.(*Rows).Next(0xc01464ca80, 0x0)
            /usr/local/go/src/database/sql/sql.go:2661 +0xb9 fp=0xc00d4476c0 sp=0xc00d447660 pc=0xf17389
    github.com/rancher/k3s/vendor/github.com/rancher/kine/pkg/logstructured/sqllog.RowsToEvents(0xc01464ca80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
            /go/src/github.com/rancher/k3s/vendor/github.com/rancher/kine/pkg/logstructured/sqllog/sql.go:221 +0xd9 fp=0xc00d447740 sp=0xc00d4476c0 pc=0x10143b9
    
  • Is dqlite really a single-writer DB?

    From the FAQ "How does dqlite behave during conflict situations?", it seems that all write operations have to be performed on the master node, and therefore the non-master nodes can only issue read queries. This would require that the "user code" on non-master nodes somehow record all desired changes and use some other networking stack to send those changes to the master to be executed. This seems very complicated and impractical. Is this correct? Could this be explained more precisely in the FAQ?

  • Case-insensitive column type lookups, DATE support

    The column types are returned as defined by the user, so if a type was defined in lowercase it won't match -- this change fixes that.

    Also added a DQLITE_ISODATE type to mark the column as being a date only (not a timestamp).

  • Initial disk mode support

    ~WIP - don't review~

    • Have tried to run all existing tests with the new disk vfs. Most failures come from the fsm snapshot functionality not being in place for the on-disk case.
    • Not yet 100% sure about the abstraction. I feel like a user should be able to choose the VFS per database, but currently I just put the whole of dqlite in disk mode, i.e. every database will be stored on disk. If the snapshotting behaviour is different for in-memory and on-disk databases, different VFSes for different databases could lead to complex snapshotting behaviour in raft, where we would have to mix different snapshot methods. I would like to avoid that.
    • SYNCs are still turned off; the real transaction is the raft log being stored to disk, so I think the database writes to disk from SQLite don't have to be synced.
    • Still needs a bunch of cleanup.
  • Support compile with clang on MacOS

    A number of changes to make the code compile on MacOS with clang.

    The other patches in series:

    • https://github.com/canonical/go-dqlite/pull/132
    • https://github.com/canonical/raft/pull/173
  • Error while building the raft library in dqlite.

    Following the project's README, I tried to build the raft library so I could build dqlite.

    Steps I took:

    1. Clone dqlite.
    2. Clone raft in dqlite and run commands as stated in the README:
       git clone https://github.com/canonical/raft.git
       cd raft
       autoreconf -i
       ./configure
       make
       sudo make install
       cd ..
      
    3. When I run make, I get a fatal error: 'lz4frame.h' file not found. I dug into the codebase and found this block in compress.c:
    #ifdef LZ4_AVAILABLE
    #include <lz4frame.h>
    #endif
    

    I am assuming LZ4_AVAILABLE exists but lz4frame.h does not. How do I resolve this issue?
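
    A quick way to check whether the header is actually installed (a sketch; on Debian/Ubuntu, lz4frame.h ships in the liblz4-dev package):

```shell
# Look for lz4frame.h in the usual include directories.
found=no
for d in /usr/include /usr/local/include /usr/include/*-linux-gnu; do
    [ -f "$d/lz4frame.h" ] && found=yes
done
echo "lz4frame.h present: $found"
```

    If it is missing, installing the distribution's lz4 development package should provide it.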

  • Test failure: integration test membership/transferPendingTransaction

    see https://github.com/canonical/dqlite/actions/runs/3640210436/jobs/6144579153

    membership/transferPendingTransaction                       
      disk_mode=0                                               [ ERROR ]
    LIBDQLITE 1670426213715849606 VfsInit:2024 vfs init
    LIBDQLITE 1670426213715977709 raftProxyInit:242 raft proxy init
    LIBDQLITE 1670426213716077211 fsm__init:711 fsm init
    LIBDQLITE 1670426213748460188 impl_init:45 impl init
    LIBDQLITE 1670426213748602191 dqlite_node_start:716 dqlite node start
    LIBDQLITE 1670426213751115451 impl_listen:55 impl listen
    LIBDQLITE 1670426213751208754 clientInit:17 init client fd 15
    LIBDQLITE 1670426213751281755 VfsInit:2024 vfs init
    LIBDQLITE 1670426213751341857 conn__start:290 conn start
    LIBDQLITE 1670426213751377558 gateway__init:18 gateway init
    LIBDQLITE 1670426213751425459 raftProxyInit:242 raft proxy init
    LIBDQLITE 1670426213751445959 fsm__init:711 fsm init
    LIBDQLITE 1670426213788379145 impl_init:45 impl init
    LIBDQLITE 1670426213788441846 dqlite_node_start:716 dqlite node start
    LIBDQLITE 1670426213788733553 impl_listen:55 impl listen
    LIBDQLITE 1670426213788789055 clientInit:17 init client fd 23
    LIBDQLITE 1670426213788818155 VfsInit:2024 vfs init
    LIBDQLITE 1670426213788847156 raftProxyInit:242 raft proxy init
    LIBDQLITE 1670426213788870656 conn__start:290 conn start
    LIBDQLITE 1670426213788883957 gateway__init:18 gateway init
    LIBDQLITE 1670426213788902757 fsm__init:711 fsm init
    LIBDQLITE 1670426213820434913 impl_init:45 impl init
    LIBDQLITE 1670426213820500515 dqlite_node_start:716 dqlite node start
    LIBDQLITE 1670426213820850423 impl_listen:55 impl listen
    LIBDQLITE 1670426213820908925 clientInit:17 init client fd 31
    LIBDQLITE 1670426213820958026 clientSendHandshake:52 client send handshake fd 15
    LIBDQLITE 1670426213820988226 conn__start:290 conn start
    LIBDQLITE 1670426213821002327 gateway__init:18 gateway init
    LIBDQLITE 1670426213821037828 clientSendAdd:340 client send add fd 15 id 2 address @2
    LIBDQLITE 1670426213821089729 gateway__handle:1280 gateway handle
    LIBDQLITE 1670426213821108529 handle_add:883 handle add
    LIBDQLITE 1670426213821672243 clientRecvEmpty:379 client recv empty fd 15
    LIBDQLITE 1670426213823241380 raftChangeCb:866 raft change cb status 0
    LIBDQLITE 1670426213823324282 clientSendAssign:350 client send assign fd 15 id 2 role 1
    LIBDQLITE 1670426213823504987 clientRecvEmpty:379 client recv empty fd 15
    LIBDQLITE 1670426213823355283 gateway__resume:1327 gateway resume - finished
    LIBDQLITE 1670426213823533787 gateway__handle:1280 gateway handle
    LIBDQLITE 1670426213823544288 handle_assign:916 handle assign
    LIBDQLITE 1670426213823623790 impl_connect:169 impl connect id:2 address:@2
    LIBDQLITE 1670426213823754693 connect_work_cb:63 connect work cb
    LIBDQLITE 1670426213823810694 conn__start:290 conn start
    LIBDQLITE 1670426213823817794 gateway__init:18 gateway init
    LIBDQLITE 1670426213823881696 raft_connect:111 raft_connect
    LIBDQLITE 1670426213823889396 raftProxyAccept:271 raft proxy accept
    LIBDQLITE 1670426213823955798 connect_after_work_cb:139 connect after work cb status 0
    LIBDQLITE 1670426213824569712 impl_connect:169 impl connect id:1 address:@1
    LIBDQLITE 1670426213824606513 connect_work_cb:63 connect work cb
    LIBDQLITE 1670426213824656114 conn__start:290 conn start
    LIBDQLITE 1670426213849924420 leader__barrier:446 leader barrier
    LIBDQLITE 1670426213849927320 leader__barrier:449 not needed
    LIBDQLITE 1670426213849930120 prepareBarrierCb:279 prepare barrier cb status:0
    LIBDQLITE 1670426213849963521 gateway__resume:1327 gateway resume - finished
    LIBDQLITE 1670426213849970621 clientRecvStmt:187 client recv stmt fd 15 stmt_id 3
    LIBDQLITE 1670426213849976421 clientSendQuery:225 client send query fd 15 stmt_id 3
    LIBDQLITE 1670426213849991922 clientRecvRows:245 client recv rows fd 15
    LIBDQLITE 1670426213850001022 gateway__handle:1280 gateway handle
    LIBDQLITE 1670426213850008122 handle_query:516 handle query
    LIBDQLITE 1670426213850014622 leader__barrier:446 leader barrier
    LIBDQLITE 1670426213850017522 leader__barrier:449 not needed
    LIBDQLITE 1670426213850020822 query_barrier_cb:491 query barrier cb status:0
    LIBDQLITE 1670426213850050623 gateway__resume:1327 gateway resume - finished
    LIBDQLITE 1670426213850086124 clientInit:17 init client fd 45
    LIBDQLITE 1670426213850109525 conn__start:290 conn start
    LIBDQLITE 1670426213850117625 gateway__init:18 gateway init
    LIBDQLITE 1670426213850119325 clientSendHandshake:52 client send handshake fd 45
    LIBDQLITE 1670426213850132925 clientSendTransfer:370 client send transfer fd 45 id 2
    LIBDQLITE 1670426213850144225 clientRecvEmpty:379 client recv empty fd 45
    LIBDQLITE 1670426213850162126 gateway__handle:1280 gateway handle
    LIBDQLITE 1670426213850170226 handle_transfer:1216 handle transfer
    LIBDQLITE 1670426214007846406 raftTransferCb:1207 transfer failed
    LIBDQLITE 1670426214007908608 gateway__resume:1327 gateway resume - finished
    LIBDQLITE 1670426214007982709 clientRecvEmpty:381 read decode failed rv 0)
    Error: test/integration/test_membership.c:210: assertion failed: rv_ == 0 (1 == 0)
    

    Looks like the leadership transfer fails sometimes.

  • Support building sqlite3 from amalgamation

    This adds a --enable-build-sqlite option to our configure.ac. In this mode, the build will look for sqlite3.c in the build root and just add that to libdqlite_la_SOURCES, instead of linking with -lsqlite3.

    The build system doesn't take care of fetching the SQLite amalgamation -- the idea is to obtain an SQLite source archive/checkout and do make sqlite3.c, or download the amalgamation from sqlite.org. It's important to use a version of the amalgamation that matches the version of sqlite.h that's on your include path.
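
    For example, the steps might look like this (the version and download URL are illustrative; pick the amalgamation that matches your sqlite3.h):

```shell
# Hypothetical sketch: drop a matching sqlite3.c into the build root,
# then configure with the new option.
wget https://sqlite.org/2022/sqlite-amalgamation-3390400.zip
unzip sqlite-amalgamation-3390400.zip
cp sqlite-amalgamation-3390400/sqlite3.c .
./configure --enable-build-sqlite
make
```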

    Our build turns on a bunch of warnings that fire on the SQLite source code; I had to modify the amalgamation by adding

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wsign-conversion"
    #pragma GCC diagnostic ignored "-Wconversion"
    #pragma GCC diagnostic ignored "-Wfloat-equal"
    #pragma GCC diagnostic ignored "-Wfloat-conversion"
    #pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
    

    at the top and

    #pragma GCC diagnostic pop
    

    at the bottom.

    The point is to be able to easily add printfs in SQLite code, turn on SQLITE_DEBUG, etc., to support debugging issues like #432 that arise from the interaction between SQLite and our VFS.

    Signed-off-by: Cole Miller [email protected]

  • Support attaching additional databases to a dqlite connection

    Right now, it seems a lot of dqlite code (down to the VFS layer) assumes that operations on a single leader connection only affect the database that was OPENed on that connection. But SQLite has an in-band mechanism for "attaching" additional database files to an open connection. We've disabled this for now (#440), but we might want to provide proper support for it, particularly since ATTACH is used internally by SQLite in the implementation of VACUUM (see #435) and possibly other things. I'm opening this issue so we have a place to discuss what that support might look like and how big of a project it might be (we can close this if we decide that ATTACH isn't something we want to support).

  • Introduce RESET request

    This PR implements a new RESET request that allows a dqlite client to restore the database it has opened to a pristine state. Instead of deleting the in-memory database "file", we use the SQLITE_DBCONFIG_RESET_DATABASE method described here, which goes through the normal VFS channels. This still requires a separate request because of the out-of-band sqlite3_db_config calls. The implementation of the new request mimics what would happen if we did EXEC_SQL("VACUUM"), with some simplifications since we don't need to run a barrier or bind parameters.

    Closes #422

    Signed-off-by: Cole Miller [email protected]

  • Fail outstanding requests when closing the gateway or when node loses leadership

    Make sure all outstanding requests on the gateway are properly replied to when we close the gateway or when the node that the gateway is part of loses leadership, cf. #425.

    It's possible this is already the case, but we should at least check it as I feel we are missing some cases.
