🥑 ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high-performance applications using a convenient SQL-like query language or JavaScript extensions.


ArangoDB


ArangoDB is a scalable open-source multi-model database natively supporting graph, document, and search. All supported data models and access patterns can be combined in queries, allowing for maximal flexibility. ArangoDB runs on-premises, in the cloud, anywhere.

ArangoDB Cloud Service

Oasis is the simplest way to run ArangoDB. Users can deploy on all major cloud providers in many regions. Test ArangoDB's cloud service Oasis for free.

Getting Started

For the impatient:

  • Start the ArangoDB Docker container

    docker run -e ARANGO_ROOT_PASSWORD=test123 -p 8529:8529 -d arangodb/arangodb
    
  • Alternatively, download and install ArangoDB. Start the server arangod if the installer did not do it for you.

  • Point your browser to http://127.0.0.1:8529/
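To check that the server is up, you can also query its HTTP API from the command line. This sketch assumes the root password from the Docker command above:

```shell
# Ask the server for its version over the HTTP API (port 8529 as mapped above).
# Credentials match the ARANGO_ROOT_PASSWORD used in the docker run command.
curl -u root:test123 http://127.0.0.1:8529/_api/version
```

A running server answers with a small JSON document containing the server name and version.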

Key Features in ArangoDB

  • Multi-Model: Documents, graphs and key-value pairs — model your data as you see fit for your application.
  • Joins: Conveniently join what belongs together for flexible ad-hoc querying and less data redundancy.
  • Transactions: Easy application development keeping your data consistent and safe. No hassle in your client.

An AQL query can make use of all of these features in a single statement.
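As an illustrative sketch (the collection names "users" and "friendships" below are hypothetical, not part of any ArangoDB distribution), an AQL query combining documents, a graph-style join, and filtering might look like this:

```aql
// Hypothetical collections: "users" (documents) and "friendships" (edges).
// For every active user, follow outgoing friendship edges and return
// the user document merged with the friend's document.
FOR user IN users
  FILTER user.active == true
  FOR friendship IN friendships
    FILTER friendship._from == user._id
    RETURN MERGE(user, { friend: DOCUMENT(friendship._to) })
```

MERGE and DOCUMENT are built-in AQL functions; the rest of the attribute names are made up for the example.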

Joins and transactions are key features for flexible, secure data designs, widely used in relational databases but lacking in many NoSQL products. However, there is no need to forgo them in ArangoDB. You decide how and when to use joins and strong consistency guarantees, without sacrificing performance and scalability.

Furthermore, ArangoDB offers a JavaScript framework called Foxx that is executed in the database server with direct access to the data. Build your own data-centric microservices with a few lines of code. By extending the HTTP API with user code written in JavaScript, ArangoDB can be turned into a strict schema-enforcing persistence engine.
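As a sketch of what a Foxx service looks like (the route path and response body are invented for illustration; createRouter and module.context.use are the actual Foxx APIs), the following code runs inside the ArangoDB server, not in Node.js:

```js
// main.js of a Foxx service -- executed inside the database server.
'use strict';
const createRouter = require('@arangodb/foxx/router');
const router = createRouter();
module.context.use(router);

// GET /hello answers with a small JSON document.
router.get('/hello', function (req, res) {
  res.json({ hello: 'world' });
})
.summary('Hello route')
.description('Minimal example of a data-centric microservice endpoint.');
```

Mounting such a service extends the server's HTTP API with your own endpoint, which is also where schema validation logic can be enforced.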

Other features of ArangoDB include:

  • Use a data-centric microservices approach with ArangoDB Foxx and fuse your application-logic and database together for maximal throughput
  • JavaScript for all: no language zoo, you can use one language from your browser to your back-end
  • Flexible data modeling: model your data as a combination of key-value pairs, documents, or graphs, which is perfect for social relations
  • Different storage engines: ArangoDB provides a storage engine for mostly in-memory operations and an alternative storage engine based on RocksDB that handles datasets much bigger than RAM.
  • Powerful query language (AQL) to retrieve and modify data
  • Transactions: run queries on multiple documents or collections with optional transactional consistency and isolation
  • Replication and Sharding: set up the database in a master-slave configuration or spread bigger datasets across multiple servers
  • Configurable durability: let the application decide if it needs more durability or more performance
  • Schema-free schemata let you combine the space efficiency of MySQL with the performance power of NoSQL
  • Index support: exploit the right index type for each use case, including document queries, graph queries, geo-location queries, and fulltext searches
  • ArangoDB is multi-threaded - exploit the power of all your cores
  • Easy to use web interface and command-line client tools for interaction with the server
  • Open source (Apache License 2.0)
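For instance, the transaction support listed above can be used from the arangosh shell. This is a sketch with a hypothetical "accounts" collection and document keys; db._executeTransaction is the actual arangosh API:

```js
// Transfer funds between two accounts atomically (run in arangosh).
// The 'accounts' collection and the keys 'alice'/'bob' are hypothetical.
db._executeTransaction({
  collections: { write: ['accounts'] },
  action: function () {
    const db = require('@arangodb').db;
    const from = db.accounts.document('alice');
    const to = db.accounts.document('bob');
    if (from.balance < 100) {
      throw 'insufficient funds'; // throwing aborts and rolls back all writes
    }
    db.accounts.update('alice', { balance: from.balance - 100 });
    db.accounts.update('bob', { balance: to.balance + 100 });
  }
});
```

Either both updates are applied or, if the action throws, neither is.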

For more in-depth information, read the design goals of ArangoDB.

Latest Release

Packages for all supported platforms can be downloaded from https://www.arangodb.com/download.

Please also check what's new in ArangoDB.

More Information

See our documentation for detailed installation & compilation instructions.

There is an introductory chapter showing the basic operation of ArangoDB. To learn ArangoDB's query language check out the AQL tutorial.

Stay in Contact

We really appreciate feature requests and bug reports. Please use our GitHub issue tracker to report them:

https://github.com/arangodb/arangodb/issues

You can use our Google group for improvements, feature requests, and comments:

https://www.arangodb.com/community

Stack Overflow is great for questions about AQL, usage scenarios, etc.:

https://stackoverflow.com/questions/tagged/arangodb

To chat with the community and the developers we offer a Slack chat:

https://slack.arangodb.com/

Comments
  • front-end application howto

    I can't find any documentation on how to load an application into ArangoDB. A better explanation of the arango.conf format would also be great, in particular which options are required to set up an application properly. For example, I would like the default URL http://127.0.0.1:8529 to point to my app, which is mapped to an arbitrary URL/path.

    How is this done?

    I'm currently on version 1.0.alpha2

  • intermittent issues with 1.0.0

    I'm suddenly experiencing failures when creating cursors using the Node.js arango-client. ArangoDB becomes unresponsive, and Ctrl-C doesn't terminate the process. This message is shown several times after issuing Ctrl-C:

    2012-09-04T10:17:33Z [9149] WARNING forcefully shutting down thread 'dispatcher'
    

    To replicate, install the arango-client with devRequirements and run the tests.

    git clone git://github.com/kaerus/arango-client.git
    cd arango-client
    npm install -d
    npm test
    

    By the way, the ArangoDB 1.0.0 install procedure (sudo make install) didn't create a log directory, so the process failed to launch.

  • Feature Request (from Google Groups): DATE handling

    I took the liberty to create a feature request out of this discussion: https://groups.google.com/forum/?fromgroups=#!topic/arangodb/3Wt4c44mc0I

    Excerpt: 2) ArangoDB Server provides date handling functionality.

    a) As date is a special type, I believe it could be handled that way. This can be provided by defining attributes in a collection as of special type 'date'. This could look like this:

    db.myCollection.defineAttributeType('my.attribute', 'date');  
    // This would for example set the nested attribute 'attribute' as a date field.
    

    ... for a nested document attribute inside a list, it could be:

    db.myCollection.defineAttributeType('my[*].attribute', 'date');
    // This would for example set the nested attribute 'attribute' of each list element as a date field.
    

    ... or for a list that would contain dates:

    db.myCollection.defineAttributeType('my[*]', 'date');
      // This would for example set the contents of a list as a date field
    

    ... mixing both could also be possible. So one could have a list of direct values and nested ones:

     {"my":[{"attribute":"2012-11-30T22:24:32.43Z"}, "2012-11-30T22:24:32.43Z"]}
    

    b) The user would pass datetime strings like { "my":{"attribute":"2012-11-30T22:24:32.43Z"}} or maybe other formats like { "my":{"attribute":"2012-11-30"}}... etc

      Internally, ArangoDB would convert and store those as timestamps.
      Knowing that it's a date field, it would automatically convert the datetime strings to timestamps for storing.
    
      That way, no special JSON handling is required by either the server or the client.
      Error handling: if the datetime string contains invalid data, the operation should be rejected.
    

    c) With those would of course come some datetime functions for querying (AQL). Simple queries would probably not need anything special, as they are limited anyway. AQL queries:

    DATE_MATH("{math command, e.g. add || subtract}", {date field or value to operate on}, {the modifying value, or a string like 'INTERVAL 1 WEEK'});
    // Function to do math on a field or value

    DATE_CURRENT();
    // Gets the current date on the server.
    

    ...maybe more, that I haven't thought of just yet. But you get the idea :)

    What do you think?

  • ArangoDB on Windows hanging...

    Besides my main Linux installations of ArangoDB, I also occasionally use a local installation on my Windows 7 (x64). Mainly for running ArangoDB-PHP tests also on Windows.

    So, I upgraded the Windows installation a few days ago, from 1.3.1(I think it was) to 1.4.0.

    Today I was testing the ArangoDB-PHP driver and the tests were suddenly failing (only locally, on the Windows machine). I closed the batch window, reopened it, and the tests ran successfully. I'd like to note that I hadn't turned off my machine since yesterday, so that batch window had been open for at least 12 hours, if I remember correctly.

    I also noticed that the database files are not deleted from the data directory (The ones that are created and deleted through the ArangoDB-PHP test suite.)

    I couldn't attach a zip file here, so I am sending the zipped data directory via mail to @jsteemann (Please also check Spam folder :smile: ) for analysis.

  • Error starting up Coordinator

    Hi, I am having an issue starting up the coordinator using version 3.1.8. I am not sure if this is a bug or user error.

    This is the command I am using to launch the coordinator:

    sudo /usr/sbin/arangod --server.authentication=false --server.endpoint tcp://$HOSTNAME:28530 --cluster.my-address tcp://$HOSTNAME:28530 --cluster.my-local-info osdi-03-coordinator --cluster.my-role COORDINATOR --cluster.agency-endpoint tcp://$HOSTNAME:25001 /data/data/arango/cordinator

    And here's the error message I am getting. Please let me know how I can resolve this:

    2017-01-22T20:22:06Z [15360] INFO file-descriptors (nofiles) hard limit is 999999, soft limit is 999999
    2017-01-22T20:22:06Z [15360] INFO created database directory '/data/data/arango/cordinator'.
    2017-01-22T20:22:06Z [15360] INFO WAL directory '/data/data/arango/cordinator/journals' does not exist. creating it...
    2017-01-22T20:22:06Z [15360] INFO JavaScript using startup '//usr/share/arangodb3/js', application '/var/lib/arangodb3-apps'
    2017-01-22T20:22:09Z [15360] INFO Cluster feature is turned on. Agency version: {"server":"arango","version":"3.1.8","license":"community"}, Agency endpoints: http+tcp://osdi-03:25001, server id: 'Coordinator001', internal address: tcp://osdi-03:28530, role: COORDINATOR
    2017-01-22T20:22:09Z [15360] INFO using heartbeat interval value '1000 ms' from agency
    2017-01-22T20:22:09Z [15360] INFO In database '_system': No version information file found in database directory.
    2017-01-22T20:22:09Z [15360] INFO In database '_system': Database is up-to-date (30108/cluster-local/init)
    2017-01-22T20:22:09Z [15360] INFO using endpoint 'http+tcp://osdi-03:28530' for non-encrypted requests
    2017-01-22T20:22:09Z [15360] INFO In database '_system': Found 14 defined task(s), 12 task(s) to run
    2017-01-22T20:22:09Z [15360] INFO In database '_system': state coordinator-global/init, tasks setupGraphs, setupUsers, createUsersIndex, addDefaultUserSystem, createModules, createRouting, insertRedirectionsAll, setupAqlFunctions, createStatistics, createFrontend, setupQueues, setupJobs
    2017-01-22T20:22:09Z [15360] WARNING {cluster} createCollectionCoordinator: replicationFactor is too large for the number of DBservers
    2017-01-22T20:22:15Z [15360] WARNING {cluster} createCollectionCoordinator: replicationFactor is too large for the number of DBservers
    2017-01-22T20:22:25Z [15360] ERROR In database '_system': Executing task #4 (addDefaultUserSystem: add default root user for system database) failed with exception: ArangoError 502: could not determine number of documents in collection (while optimizing plan) ArangoError: could not determine number of documents in collection (while optimizing plan)
    2017-01-22T20:22:25Z [15360] ERROR at Error (native)
    2017-01-22T20:22:25Z [15360] ERROR at ArangoStatement.execute (/usr/share/arangodb3/js/server/modules/@arangodb/arango-statement.js:81:16)
    2017-01-22T20:22:25Z [15360] ERROR at ArangoDatabase._query (/usr/share/arangodb3/js/server/modules/@arangodb/arango-database.js:79:45)
    2017-01-22T20:22:25Z [15360] ERROR at SimpleQueryByExample.execute (/usr/share/arangodb3/js/server/modules/@arangodb/simple-query.js:137:42)
    2017-01-22T20:22:25Z [15360] ERROR at SimpleQueryByExample.SimpleQuery.toArray (/usr/share/arangodb3/js/common/modules/@arangodb/simple-query-common.js:340:8)
    2017-01-22T20:22:25Z [15360] ERROR at ArangoCollection.firstExample (/usr/share/arangodb3/js/server/modules/@arangodb/arango-collection.js:287:71)
    2017-01-22T20:22:25Z [15360] ERROR at Object.exports.save (/usr/share/arangodb3/js/server/modules/@arangodb/users.js:135:22)
    2017-01-22T20:22:25Z [15360] ERROR at Object.addTask.task (/usr/share/arangodb3/js/server/upgrade-database.js:510:21)
    2017-01-22T20:22:25Z [15360] ERROR at runTasks (/usr/share/arangodb3/js/server/upgrade-database.js:274:27)
    2017-01-22T20:22:25Z [15360] ERROR at upgradeDatabase (/usr/share/arangodb3/js/server/upgrade-database.js:346:16)
    2017-01-22T20:22:25Z [15360] ERROR In database '_system': Executing task #4 (addDefaultUserSystem: add default root user for system database) failed. Aborting init procedure.
    2017-01-22T20:22:25Z [15360] ERROR In database '_system': Please fix the problem and try starting the server again.
    2017-01-22T20:22:25Z [15360] ERROR upgrade-database.js for cluster script failed!
    2017-01-22T20:22:25Z [15360] WARNING {cluster} createCollectionCoordinator: replicationFactor is too large for the number of DBservers
    2017-01-22T20:22:35Z [15360] ERROR ArangoError: could not determine number of documents in collection (while optimizing plan)
    2017-01-22T20:22:35Z [15360] ERROR at Error (native)
    2017-01-22T20:22:35Z [15360] ERROR at ArangoStatement.execute (/usr/share/arangodb3/js/server/modules/@arangodb/arango-statement.js:81:16)
    2017-01-22T20:22:35Z [15360] ERROR at ArangoDatabase._query (/usr/share/arangodb3/js/server/modules/@arangodb/arango-database.js:79:45)
    2017-01-22T20:22:35Z [15360] ERROR at SimpleQueryAll.execute (/usr/share/arangodb3/js/server/modules/@arangodb/simple-query.js:96:42)
    2017-01-22T20:22:35Z [15360] ERROR at SimpleQueryAll.SimpleQuery.hasNext (/usr/share/arangodb3/js/common/modules/@arangodb/simple-query-common.js:388:8)
    2017-01-22T20:22:35Z [15360] ERROR at refillCaches (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/manager.js:265:17)
    2017-01-22T20:22:35Z [15360] ERROR at Object.initializeFoxx (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/manager.js:1483:3)
    2017-01-22T20:22:35Z [15360] ERROR at Object.foxxes (/usr/share/arangodb3/js/server/bootstrap/foxxes.js:64:47)
    2017-01-22T20:22:35Z [15360] ERROR at server/bootstrap/cluster-bootstrap.js:51:54
    2017-01-22T20:22:35Z [15360] ERROR at server/bootstrap/cluster-bootstrap.js:55:2
    2017-01-22T20:22:35Z [15360] ERROR JavaScript exception in file '/usr/share/arangodb3/js/server/modules/@arangodb/foxx/queues/index.js' at 108,7: TypeError: Cannot read property 'save' of undefined
    2017-01-22T20:22:35Z [15360] ERROR ! throw err;
    2017-01-22T20:22:35Z [15360] ERROR ! ^
    2017-01-22T20:22:35Z [15360] FATAL error during execution of JavaScript file 'server/bootstrap/coordinator.js'

  • Regular "arangod.exe stopped working" crashes on Windows

    Hi there!

    We have regular "arangod.exe stopped working" crashes on Windows. This happens with ArangoDB 2.5.2, 2.5.3, and 2.6. We could not isolate what causes it, but it happens quite often on developer machines, not necessarily when the DB is used heavily; it often happens just randomly, sometimes more than once a day.

    Attached you can find several instances of the crash happening: screenshots from the Windows Event Log with the details of the crashes, and a log file from ArangoDB itself.

    The common log entries for the crashes are :

    • In the ArangoDB log: ERROR Unhandled exception: -1073741819 - will crash now.
    • In the Windows event log: Unhandled exception: -1073741819 - will crash now. ....\arangod\RestServer\WindowsServiceUtils.cpp unhandledExceptionHandler 527

    Full logs can be found here: https://drive.google.com/file/d/0B-Rhoo8GyarIUDFVb0hMUnZNVTA/view and here https://drive.google.com/file/d/0B-Rhoo8GyarId29Rc3lTVDQtZW8/view


  • Possible memory leak / regression in 3.3+

    my environment running ArangoDB

    I'm using ArangoDB version:

    • [x] 3.3.8 (subsequently upgraded to 3.3.10)

    Mode:

    • [x] Cluster

    Storage-Engine:

    • [x] rocksdb

    On this operating system:

    • [x] Linux
      • [x] other: self-built Docker container using the latest Ubuntu package

    this is an installation-related issue:

    Hello,

    Since upgrading to 3.3 we've noticed a rather drastic increase in memory usage of server nodes over time. I've had a look at some of the existing open tickets and this looks quite similar to #5414 - I've decided to open a separate issue and let you decide though.

    We're seeing this problem only in server instances - both the agency and coordinator nodes have very stable memory usage. Our standard deployment model for Arango is 3x agency, 3x coordinator and 3x server. We're using the following config options (I've skipped ones that seem irrelevant to me like directory locations, ip addresses etc - let me know if you'd like the full list):

    --server.authentication=false
    --cluster.my-role PRIMARY
    --log.level info
    --javascript.v8-contexts 16
    --javascript.v8-max-heap 3072
    --server.storage-engine rocksdb
    

    Here's a list of sysctls we're setting:

    - { name: "vm.max_map_count", value: "262144" }
    - { name: "vm.overcommit_memory", value: "0" } # ArangoDB recommends setting this to 2, but that causes a lot of issues bringing other containers up
    - { name: "vm.zone_reclaim_mode", value: "0" }
    - { name: "vm.swappiness", value: "1" }
    - { name: "net.core.somaxconn", value: "65535" }
    

    net.core.somaxconn is also set (same value) on the Docker container.

    We're setting the transparent_hugepage defrag and enabled properties to never.

    We've upgraded from 3.2.9 to 3.3.0 and have since used 3.3.3, 3.3.4, and are on 3.3.8 now. Here's the memory usage (RSS) for the server nodes in one of our environments, which was upgraded to 3.3.x around the 25th of April (note that this environment gets shut down every evening, hence the large gaps in the graph):

    [screenshot: server node memory usage (RSS) graph]

    This is the above graph zoomed in to the last 5 working day period:

    [screenshot: memory usage graph, last 5 working days]

    This is another environment, the upgrade to 3.3.x happened on 3rd of April. The change in the memory usage pattern on the 27th April has been caused by applying a docker memory limit of 2g.

    [screenshot: memory usage graph]

    The above environments have extremely light usage of Arango.

    Here's one environment which gets used a bit more: [screenshot: memory usage graph]

    As a reference point of sorts, here's what it looks like when we run a load test against our application: [screenshot: memory usage graph under load]. The server nodes eventually tail off at 6.4GB and memory usage remains perfectly stable afterwards.

    All of the above graphs were taken using the same settings I've mentioned earlier.

    Let me know what other information would be useful to provide. I guess disabling statistics and/or Foxx queues would be something you might want us to try? If so, shall we disable both or try them one by one (and if the latter, in what order)?

    Thanks, Simon

    edit: as part of the investigation I have upgraded from 3.3.8 to 3.3.10

  • Ubuntu upgrade fails

    my environment running ArangoDB

    I'm using the latest ArangoDB of the respective release series:

    • [ ] 2.8
    • [ ] 3.0
    • [ ] 3.1
    • [x] 3.2
    • [ ] self-compiled devel branch

    Mode:

    • [ ] Cluster
    • [x] Single-Server

    On this operating system:

    • [ ] DCOS on
      • [ ] AWS
      • [ ] Azure
      • [ ] own infrastructure
    • [ ] Linux
      • [ ] Debian .deb
      • [x] Ubuntu .deb

    this is an installation-related issue:

    I got the following error while trying to upgrade ArangoDB 3.2.4 to 3.2.6 with apt on my Ubuntu server:

    Paketaktualisierung (Upgrade) wird berechnet... Fertig
    Die folgenden Pakete wurden automatisch installiert und werden nicht mehr benötigt:
      linux-headers-4.4.0-96 linux-headers-4.4.0-96-generic linux-image-4.4.0-96-generic linux-image-extra-4.4.0-96-generic
    Verwenden Sie »sudo apt autoremove«, um sie zu entfernen.
    Die folgenden Pakete werden aktualisiert (Upgrade):
      arangodb3 distro-info-data grub-common grub-pc grub-pc-bin grub2-common initramfs-tools initramfs-tools-bin initramfs-tools-core iproute2 libgnutls-openssl27 libgnutls30 libpam-systemd libsystemd0 libudev1 lshw mdadm
      python3-distupgrade python3-update-manager resolvconf snap-confine snapd systemd systemd-sysv ubuntu-core-launcher ubuntu-release-upgrader-core udev update-manager-core
    28 aktualisiert, 0 neu installiert, 0 zu entfernen und 0 nicht aktualisiert.
    Es müssen 56,5 MB an Archiven heruntergeladen werden.
    Nach dieser Operation werden 8.393 kB Plattenplatz zusätzlich benutzt.
    Holen:1 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-pc amd64 2.02~beta2-36ubuntu3.14 [197 kB]
    Holen:2 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub2-common amd64 2.02~beta2-36ubuntu3.14 [511 kB]
    Holen:3 https://www.arangodb.com/repositories/arangodb32/xUbuntu_16.04  arangodb3 3.2.6 [33,6 MB]
    Holen:4 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-pc-bin amd64 2.02~beta2-36ubuntu3.14 [888 kB]
    Holen:5 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-common amd64 2.02~beta2-36ubuntu3.14 [1.705 kB]
    Holen:6 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsystemd0 amd64 229-4ubuntu21 [205 kB]
    Holen:7 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-systemd amd64 229-4ubuntu21 [115 kB]
    Holen:8 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd amd64 229-4ubuntu21 [3.628 kB]
    Holen:9 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 udev amd64 229-4ubuntu21 [992 kB]
    Holen:10 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 mdadm amd64 3.3-2ubuntu7.5 [393 kB]
    Holen:11 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libudev1 amd64 229-4ubuntu21 [54,6 kB]
    Holen:12 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.9 [8.616 B]
    Holen:13 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-core all 0.122ubuntu8.9 [42,7 kB]
    Holen:14 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-bin amd64 0.122ubuntu8.9 [9.632 B]
    Holen:15 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd-sysv amd64 229-4ubuntu21 [12,1 kB]
    Holen:16 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-core-launcher amd64 2.28.5 [1.568 B]
    Holen:17 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 snapd amd64 2.28.5 [12,6 MB]
    Holen:18 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 snap-confine amd64 2.28.5 [1.726 B]
    Holen:19 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 distro-info-data all 0.28ubuntu0.5 [4.224 B]
    Holen:20 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 iproute2 amd64 4.3.0-1ubuntu3.16.04.2 [522 kB]
    Holen:21 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgnutls-openssl27 amd64 3.4.10-4ubuntu1.4 [22,0 kB]
    Holen:22 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgnutls30 amd64 3.4.10-4ubuntu1.4 [548 kB]
    Holen:23 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 resolvconf all 1.78ubuntu5 [52,0 kB]
    Holen:24 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 lshw amd64 02.17-1.1ubuntu3.3 [215 kB]
    Holen:25 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-release-upgrader-core all 1:16.04.23 [29,4 kB]
    Holen:26 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-distupgrade all 1:16.04.23 [104 kB]
    Holen:27 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-update-manager all 1:16.04.10 [31,9 kB]
    Holen:28 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-manager-core all 1:16.04.10 [5.320 B]
    Es wurden 56,5 MB in 4 min 33 s geholt (207 kB/s).
    Vorkonfiguration der Pakete ...
    (Lese Datenbank ... 132347 Dateien und Verzeichnisse sind derzeit installiert.)
    Vorbereitung zum Entpacken von .../grub-pc_2.02~beta2-36ubuntu3.14_amd64.deb ...
    Entpacken von grub-pc (2.02~beta2-36ubuntu3.14) über (2.02~beta2-36ubuntu3.12) ...
    Vorbereitung zum Entpacken von .../grub2-common_2.02~beta2-36ubuntu3.14_amd64.deb ...
    Entpacken von grub2-common (2.02~beta2-36ubuntu3.14) über (2.02~beta2-36ubuntu3.12) ...
    Vorbereitung zum Entpacken von .../grub-pc-bin_2.02~beta2-36ubuntu3.14_amd64.deb ...
    Entpacken von grub-pc-bin (2.02~beta2-36ubuntu3.14) über (2.02~beta2-36ubuntu3.12) ...
    Vorbereitung zum Entpacken von .../grub-common_2.02~beta2-36ubuntu3.14_amd64.deb ...
    Entpacken von grub-common (2.02~beta2-36ubuntu3.14) über (2.02~beta2-36ubuntu3.12) ...
    Vorbereitung zum Entpacken von .../libsystemd0_229-4ubuntu21_amd64.deb ...
    Entpacken von libsystemd0:amd64 (229-4ubuntu21) über (229-4ubuntu19) ...
    Trigger für man-db (2.7.5-1) werden verarbeitet ...
    Trigger für install-info (6.1.0.dfsg.1-5) werden verarbeitet ...
    Trigger für ureadahead (0.100.0-19) werden verarbeitet ...
    Trigger für libc-bin (2.23-0ubuntu9) werden verarbeitet ...
    libsystemd0:amd64 (229-4ubuntu21) wird eingerichtet ...
    Trigger für libc-bin (2.23-0ubuntu9) werden verarbeitet ...
    (Lese Datenbank ... 132347 Dateien und Verzeichnisse sind derzeit installiert.)
    Vorbereitung zum Entpacken von .../libpam-systemd_229-4ubuntu21_amd64.deb ...
    Entpacken von libpam-systemd:amd64 (229-4ubuntu21) über (229-4ubuntu19) ...
    Vorbereitung zum Entpacken von .../systemd_229-4ubuntu21_amd64.deb ...
    Entpacken von systemd (229-4ubuntu21) über (229-4ubuntu19) ...
    Trigger für man-db (2.7.5-1) werden verarbeitet ...
    Trigger für dbus (1.10.6-1ubuntu3.3) werden verarbeitet ...
    Trigger für ureadahead (0.100.0-19) werden verarbeitet ...
    systemd (229-4ubuntu21) wird eingerichtet ...
    addgroup: Die Gruppe »systemd-journal« existiert bereits als Systemgruppe. Programmende.
    (Lese Datenbank ... 132347 Dateien und Verzeichnisse sind derzeit installiert.)
    Vorbereitung zum Entpacken von .../udev_229-4ubuntu21_amd64.deb ...
    Entpacken von udev (229-4ubuntu21) über (229-4ubuntu19) ...
    Vorbereitung zum Entpacken von .../mdadm_3.3-2ubuntu7.5_amd64.deb ...
    Entpacken von mdadm (3.3-2ubuntu7.5) über (3.3-2ubuntu7.4) ...
    Vorbereitung zum Entpacken von .../libudev1_229-4ubuntu21_amd64.deb ...
    Entpacken von libudev1:amd64 (229-4ubuntu21) über (229-4ubuntu19) ...
    Trigger für man-db (2.7.5-1) werden verarbeitet ...
    Trigger für ureadahead (0.100.0-19) werden verarbeitet ...
    Trigger für systemd (229-4ubuntu21) werden verarbeitet ...
    Trigger für libc-bin (2.23-0ubuntu9) werden verarbeitet ...
    libudev1:amd64 (229-4ubuntu21) wird eingerichtet ...
    Trigger für libc-bin (2.23-0ubuntu9) werden verarbeitet ...
    (Lese Datenbank ... 132349 Dateien und Verzeichnisse sind derzeit installiert.)
    Vorbereitung zum Entpacken von .../initramfs-tools_0.122ubuntu8.9_all.deb ...
    Entpacken von initramfs-tools (0.122ubuntu8.9) über (0.122ubuntu8.8) ...
    Vorbereitung zum Entpacken von .../initramfs-tools-core_0.122ubuntu8.9_all.deb ...
    Entpacken von initramfs-tools-core (0.122ubuntu8.9) über (0.122ubuntu8.8) ...
    Vorbereitung zum Entpacken von .../initramfs-tools-bin_0.122ubuntu8.9_amd64.deb ...
    Entpacken von initramfs-tools-bin (0.122ubuntu8.9) über (0.122ubuntu8.8) ...
    Vorbereitung zum Entpacken von .../systemd-sysv_229-4ubuntu21_amd64.deb ...
    Entpacken von systemd-sysv (229-4ubuntu21) über (229-4ubuntu19) ...
    Trigger für man-db (2.7.5-1) werden verarbeitet ...
    systemd-sysv (229-4ubuntu21) wird eingerichtet ...
    (Lese Datenbank ... 132349 Dateien und Verzeichnisse sind derzeit installiert.)
    Vorbereitung zum Entpacken von .../ubuntu-core-launcher_2.28.5_amd64.deb ...
    Entpacken von ubuntu-core-launcher (2.28.5) über (2.27.5) ...
    Vorbereitung zum Entpacken von .../snapd_2.28.5_amd64.deb ...
    Warning: Stopping snapd.service, but it can still be activated by:
      snapd.socket
    Entpacken von snapd (2.28.5) über (2.27.5) ...
    Vorbereitung zum Entpacken von .../snap-confine_2.28.5_amd64.deb ...
    Entpacken von snap-confine (2.28.5) über (2.27.5) ...
    Vorbereitung zum Entpacken von .../distro-info-data_0.28ubuntu0.5_all.deb ...
    Entpacken von distro-info-data (0.28ubuntu0.5) über (0.28ubuntu0.3) ...
    Vorbereitung zum Entpacken von .../iproute2_4.3.0-1ubuntu3.16.04.2_amd64.deb ...
    Entpacken von iproute2 (4.3.0-1ubuntu3.16.04.2) über (4.3.0-1ubuntu3.16.04.1) ...
    Vorbereitung zum Entpacken von .../libgnutls-openssl27_3.4.10-4ubuntu1.4_amd64.deb ...
    Entpacken von libgnutls-openssl27:amd64 (3.4.10-4ubuntu1.4) über (3.4.10-4ubuntu1.3) ...
    Vorbereitung zum Entpacken von .../libgnutls30_3.4.10-4ubuntu1.4_amd64.deb ...
    Unpacking libgnutls30:amd64 (3.4.10-4ubuntu1.4) over (3.4.10-4ubuntu1.3) ...
    Preparing to unpack .../resolvconf_1.78ubuntu5_all.deb ...
    Unpacking resolvconf (1.78ubuntu5) over (1.78ubuntu4) ...
    Preparing to unpack .../lshw_02.17-1.1ubuntu3.3_amd64.deb ...
    Unpacking lshw (02.17-1.1ubuntu3.3) over (02.17-1.1ubuntu3.2) ...
    Preparing to unpack .../ubuntu-release-upgrader-core_1%3a16.04.23_all.deb ...
    Unpacking ubuntu-release-upgrader-core (1:16.04.23) over (1:16.04.22) ...
    Preparing to unpack .../python3-distupgrade_1%3a16.04.23_all.deb ...
    Unpacking python3-distupgrade (1:16.04.23) over (1:16.04.22) ...
    Preparing to unpack .../python3-update-manager_1%3a16.04.10_all.deb ...
    Unpacking python3-update-manager (1:16.04.10) over (1:16.04.9) ...
    Preparing to unpack .../update-manager-core_1%3a16.04.10_all.deb ...
    Unpacking update-manager-core (1:16.04.10) over (1:16.04.9) ...
    Preparing to unpack .../arangodb3_3.2.6_amd64.deb ...
    Unpacking arangodb3 (3.2.6) over (3.2.4) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for libc-bin (2.23-0ubuntu9) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    Processing triggers for systemd (229-4ubuntu21) ...
    Setting up grub-common (2.02~beta2-36ubuntu3.14) ...
    update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
    Setting up grub2-common (2.02~beta2-36ubuntu3.14) ...
    Setting up grub-pc-bin (2.02~beta2-36ubuntu3.14) ...
    Setting up grub-pc (2.02~beta2-36ubuntu3.14) ...
    Installing for i386-pc platform.
    Installation finished. No error reported.
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-4.4.0-98-generic
    Found initrd image: /boot/initrd.img-4.4.0-98-generic
    Found linux image: /boot/vmlinuz-4.4.0-97-generic
    Found initrd image: /boot/initrd.img-4.4.0-97-generic
    Found linux image: /boot/vmlinuz-4.4.0-96-generic
    Found initrd image: /boot/initrd.img-4.4.0-96-generic
    done
    Setting up libpam-systemd:amd64 (229-4ubuntu21) ...
    Setting up udev (229-4ubuntu21) ...
    addgroup: The group `input' already exists as a system group. Exiting.
    update-initramfs: deferring update (trigger activated)
    Setting up mdadm (3.3-2ubuntu7.5) ...
    Installing new version of config file /etc/init.d/mdadm ...
    update-initramfs: deferring update (trigger activated)
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-4.4.0-98-generic
    Found initrd image: /boot/initrd.img-4.4.0-98-generic
    Found linux image: /boot/vmlinuz-4.4.0-97-generic
    Found initrd image: /boot/initrd.img-4.4.0-97-generic
    Found linux image: /boot/vmlinuz-4.4.0-96-generic
    Found initrd image: /boot/initrd.img-4.4.0-96-generic
    File descriptor 3 (pipe:[14045643]) leaked on lvs invocation. Parent PID 2676: /bin/sh
    done
    update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
    Setting up initramfs-tools-bin (0.122ubuntu8.9) ...
    Setting up initramfs-tools-core (0.122ubuntu8.9) ...
    Setting up initramfs-tools (0.122ubuntu8.9) ...
    update-initramfs: deferring update (trigger activated)
    Setting up snapd (2.28.5) ...
    Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ...
    Installing new version of config file /etc/profile.d/apps-bin-path.sh ...
    Setting up ubuntu-core-launcher (2.28.5) ...
    Setting up snap-confine (2.28.5) ...
    Setting up distro-info-data (0.28ubuntu0.5) ...
    Setting up iproute2 (4.3.0-1ubuntu3.16.04.2) ...
    Setting up libgnutls30:amd64 (3.4.10-4ubuntu1.4) ...
    Setting up libgnutls-openssl27:amd64 (3.4.10-4ubuntu1.4) ...
    Setting up resolvconf (1.78ubuntu5) ...
    Setting up lshw (02.17-1.1ubuntu3.3) ...
    Setting up arangodb3 (3.2.6) ...
    FATAL ERROR: EXIT_FAILED - "exit with error"
    dpkg: error processing package arangodb3 (--configure):
     subprocess installed post-installation script returned error exit status 1
    Setting up python3-distupgrade (1:16.04.23) ...
    Setting up python3-update-manager (1:16.04.10) ...
    Setting up ubuntu-release-upgrader-core (1:16.04.23) ...
    Setting up update-manager-core (1:16.04.10) ...
    Processing triggers for systemd (229-4ubuntu21) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    Processing triggers for initramfs-tools (0.122ubuntu8.9) ...
    update-initramfs: Generating /boot/initrd.img-4.4.0-98-generic
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    Processing triggers for libc-bin (2.23-0ubuntu9) ...
    Processing triggers for resolvconf (1.78ubuntu5) ...
    Errors were encountered while processing:
     arangodb3
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    I tried uninstalling and then reinstalling arangodb completely, with no success:

    [email protected]:~# apt-get install arangodb3=3.2.6
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      arangodb3
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/33.6 MB of archives.
    After this operation, 277 MB of additional disk space will be used.
    Preconfiguring packages ...
    Selecting previously unselected package arangodb3.
    (Reading database ... 94794 files and directories currently installed.)
    Preparing to unpack .../arangodb3_3.2.6_amd64.deb ...
    Unpacking arangodb3 (3.2.6) ...
    Processing triggers for systemd (229-4ubuntu21) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up arangodb3 (3.2.6) ...
    FATAL ERROR: EXIT_FAILED - "exit with error"
    dpkg: error processing package arangodb3 (--configure):
     subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
     arangodb3
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    Re-installing 3.2.4 also failed with the same error. Before each install command I forced apt to clean its package cache and update its sources:

    apt-get clean
    apt-get autoclean
    apt-get update
    
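    When the post-installation script fails like this, a useful next step is to re-run it by hand with shell tracing to see which command actually fails. A debugging sketch (it assumes only the standard dpkg location for maintainer scripts; `debug_postinst` is a helper made up for illustration):

```shell
#!/bin/sh
# dpkg keeps each package's maintainer scripts under /var/lib/dpkg/info.
postinst_path() {
    echo "/var/lib/dpkg/info/$1.postinst"
}

# Re-run the configure step with tracing (sh -x) so the command behind
# "FATAL ERROR: EXIT_FAILED" becomes visible in the output.
debug_postinst() {
    script=$(postinst_path "$1")
    if [ -f "$script" ]; then
        sh -x "$script" configure
    else
        echo "no postinst found for $1"
    fi
}

debug_postinst arangodb3
```

    Once the failing command is known and fixed, `dpkg --configure arangodb3` (or `apt-get -f install`) finishes the interrupted configuration.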

    Thanks for any help.

    • Michl
  • Add ArangoDB to travis-ci so that ci tests can be written for APIs

    Add ArangoDB to travis-ci so that ci tests can be written for APIs

    As the title says, to be able to write and run CI tests for the APIs, it would be nice to add ArangoDB to travis-ci, as other databases have been added too: http://about.travis-ci.org/docs/user/database-setup/

    Also, is it possible to have different versions of ArangoDB to test against? Say 1.0, 1.1, 1.2, 2.0, etc.

  • Gaps in continuous replication (documents missing on slaves)

    Gaps in continuous replication (documents missing on slaves)

    My Environment

    • ArangoDB Version: e.g. 3.3.19 (all 3 instances)
    • Storage Engine: RocksDB
    • Deployment Mode: Master/Slave
    • Infrastructure: own
    • Operating System: Debian 9.5 (master and slave 1) Ubuntu 16.04 (slave 2)
    • Used Package: Debian or Ubuntu .deb

    Component, Query & Data

    Affected feature: Continuous replication.

    Dataset: 14 collections, the 3 biggest having 50M, 10M, 3M docs. Those 3 big collections are continuously undergoing many writes, with peaks of 10 writes per second.

    Notable operations: Every night a CRON job performs a very large number of writes on the master, across 4 collections, over a range of about 3 hours (roughly between 12 AM and 3 AM). The total number of writes each night is around 9M! Yes, that's huge.

    Those writes are document modifications, sometimes (very rarely) document removals, never document creations.

    Steps to reproduce

    1. On each slave, set up continuous replication from the same master using arangosh

    2. Wait for initial sync to finish and continuous replication to start. Everything works fine

    3. Wait for a nightly CRON job to set off

    4. The next morning, count the documents and look for the dates of the most recently synced documents

    5. Wait for a few hours; new documents created in real time on the master get replicated again, but documents created in the meantime are missing on the slaves
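    While waiting, the applier's progress on each slave can be inspected through ArangoDB's HTTP replication API; comparing the last applied tick it reports with the master's current tick shows the lag. A sketch (host, database, and credentials are placeholders):

```shell
# GET /_api/replication/applier-state on a slave reports, among other
# things, the last tick the applier has applied from the master's log.
SLAVE="http://127.0.0.1:8529"

applier_state_url() {
    echo "$1/_db/_system/_api/replication/applier-state"
}

# Example call (commented out: needs a running slave and real credentials):
# curl -s -u root:password "$(applier_state_url "$SLAVE")"
applier_state_url "$SLAVE"
```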

    Problem: Continuous replication works very well until the CRON job sets off, then stalls (too many writes, I guess). Continuous replication does not crash, and eventually resumes syncing new docs. I found no relevant log entries, either on the master or the slaves.

    But on the slaves the log usually shows a very large number of lines like INFO {replication} fetching master log from tick…, and on both slaves those entries become less and less frequent from around 2 AM (almost the end of the CRON job). Then they become very numerous again at around 5 PM. The master tick mentioned in those entries is always different and increasing, as expected.

    Note 1: a collection that constantly undergoes many writes, but is not affected by the nightly CRON job, still gets stuck during the night. The collections affected by the CRON job seem synced because the number of documents is the same on the master and the slaves, but it's hard to tell because only modifications happened on those collections (no creations / removals)

    Note 2: on the slaves, the dates of the latest synced docs in the different collections are close to each other, but not the same (usually within a range of 30 minutes)

    Note 3: for a given collection, the date of the latest synced doc on the 2 different slaves is always exactly the same (the latest synced doc is the same)

    Note 4: the date/time of the first docs that get synced again each day varies, roughly from 9 AM to 11 AM. This is always way sooner than the time we see the new replication log entries appear on the slaves (around 5 PM).

    Expected result: The documentation says that replication is single-threaded and might not keep up with a huge number of writes. Although I understand this limitation, I would expect replication to be delayed, not docs to be lost. Or maybe an option could allow dedicating multiple threads to replication (I guess that's not easy to implement).

    Thank you, Mathias

  • content-length mismatch side effects

    content-length mismatch side effects

    I get this error when trying to deploy files to the key-value index.

    2012-10-11T06:51:07Z [736] TRACE [[email protected]/VocBase/datafile.c:825] cannot write marker, not enough space
    

    It is preceded by this strange output

    2012-10-11T06:51:06Z [736] TRACE [[email protected]/HttpServer/HttpCommTask.cpp:98] HTTP READ FOR 0x100d41220:\nonger existing\n);\nPUT /_api/key/test/js/pages.js?create=1 HTTP/1.1\r\ncontent-type: application/json\r\ncontent-length: 8619\r\nx-voc-expires: 2012-10-20T00:00:00.000Z\r\nx-voc-extended: {"contentType":"application/javascript"}\r\nHost: 127.0.0.1:8529\r\nConnection: keep-alive\r\n\r\n
    2012-10-11T06:51:06Z [736] TRACE [[email protected]/HttpServer/HttpCommTask.cpp:112] server port = 8529, client port = 44741
    2012-10-11T06:51:06Z [736] WARNING got corrupted HTTP request 'onger '
    

    any ideas what's going on?

    The file that triggers the error contains this string at the end... no illegal characters (gremlins) were found.

    js/element.details.js:  'open' in document.createElement('details')// Chrome 10 will fail this detection, but Chrome 10 is no longer existing
    

    Will investigate the arango-client next... to see if that is the reason for the corrupted request.
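    For reference, one frequent cause of content-length mismatches is a client that computes the header from the character count of the body instead of its byte count; the server then leaves the tail of the body unread and parses it as the start of the next request, which would match a corrupted fragment like 'onger ' above. A quick sketch of the difference (illustrative only, not tied to arango-client):

```shell
# Content-Length must be the number of BYTES in the request body.
# With multi-byte UTF-8 characters the character count is smaller:
body='Chrome 10 is no longer existing ü'   # 'ü' takes 2 bytes in UTF-8
printf %s "$body" | wc -c   # byte count: what Content-Length must carry
```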

  • Update mac_tool.py (#3.8)

    Update mac_tool.py (#3.8)

    Scope & Purpose

    Backport of https://github.com/arangodb/arangodb/pull/16434.

    • [x] :hankey: Bugfix
    • [ ] :pizza: New feature
    • [ ] :fire: Performance improvement
    • [ ] :hammer: Refactoring/simplification

    Checklist

    • [ ] Tests
      • [ ] Regression tests
      • [ ] C++ Unit tests
      • [ ] integration tests
      • [ ] resilience tests
    • [ ] :book: CHANGELOG entry made
    • [ ] :books: documentation written (release notes, API changes, ...)
    • [ ] Backports

    Related Information

    (Please reference tickets / specification / other PRs etc)

    • [ ] Docs PR:
    • [ ] Enterprise PR:
    • [ ] GitHub issue / Jira ticket:
    • [ ] Design document:
  • prevent data races when throttle pokes in RocksDB's internals

    prevent data races when throttle pokes in RocksDB's internals

    Scope & Purpose

    Backport of https://github.com/arangodb/arangodb/pull/16438

    Prevent data races when the throttle pokes into RocksDB's internals. The performance and correctness impact of this change is still untested and needs to be proven.

    • [x] :hankey: Bugfix
    • [ ] :pizza: New feature
    • [ ] :fire: Performance improvement
    • [ ] :hammer: Refactoring/simplification

    Checklist

    • [ ] Tests
      • [ ] Regression tests
      • [ ] C++ Unit tests
      • [ ] integration tests
      • [ ] resilience tests
    • [ ] :book: CHANGELOG entry made
    • [ ] :books: documentation written (release notes, API changes, ...)
    • [ ] Backports
      • [ ] Backport for 3.9: (Please link PR)
      • [ ] Backport for 3.8: (Please link PR)
      • [ ] Backport for 3.7: (Please link PR)

    Related Information

    • [ ] Docs PR:
    • [ ] Enterprise PR:
    • [ ] GitHub issue / Jira ticket:
    • [ ] Design document:
  • prevent data races when throttle pokes in RocksDB's internals

    prevent data races when throttle pokes in RocksDB's internals

    Scope & Purpose

    Prevent data races when the throttle pokes into RocksDB's internals. The performance and correctness impact of this change is still untested and needs to be proven.

    • [x] :hankey: Bugfix
    • [ ] :pizza: New feature
    • [ ] :fire: Performance improvement
    • [ ] :hammer: Refactoring/simplification

    Checklist

    • [ ] Tests
      • [ ] Regression tests
      • [ ] C++ Unit tests
      • [ ] integration tests
      • [ ] resilience tests
    • [ ] :book: CHANGELOG entry made
    • [ ] :books: documentation written (release notes, API changes, ...)
    • [x] Backports
      • [x] Backport for 3.9: https://github.com/arangodb/arangodb/pull/16440
      • [ ] Backport for 3.8: (Please link PR)
      • [ ] Backport for 3.7: (Please link PR)

    Related Information

    • [ ] Docs PR:
    • [ ] Enterprise PR:
    • [ ] GitHub issue / Jira ticket:
    • [ ] Design document:
  • Exception in rocksdb background thread

    Exception in rocksdb background thread

    My Environment

    • ArangoDB Version: 3.8.0
    • Deployment Mode: Single Server
    • Deployment Strategy: ArangoDB Starter
    • Configuration:
    • Infrastructure: AWS
    • Operating System: Ubuntu 20.04
    • Total RAM in your machine: 32Gb
    • Disks in use: HDD

    Problem: In my arangodb.log file I keep getting this error. I am not completely sure why or how it occurs, but it is filling up the log file quickly. The following is an excerpt from the log file:

    2022-06-21T15:43:56Z [1156] ERROR [fdfa7] {engines} unable to apply revision tree updates for DB/Collection: Tried to remove key that is not present. 2022-06-21T15:43:56Z [1156] WARNING [8236f] {engines} caught exception in rocksdb background thread: Tried to remove key that is not present. (exception location: /work/ArangoDB/arangod/RocksDBEngine/RocksDBMetaCollection.cpp:1128). Please report this error to arangodb.com

    Collection size: 19.28 GB

    I can provide more data regarding this, but I am just not sure what data to provide.

  • Suppress cppcheck warnings

    Suppress cppcheck warnings

    Scope & Purpose

    Suppress false-positive cppcheck warnings and fix the real ones.

    • [X] :hankey: Bugfix
    • [ ] :pizza: New feature
    • [ ] :fire: Performance improvement
    • [ ] :hammer: Refactoring/simplification

    Checklist

    • [ ] Tests
      • [ ] Regression tests
      • [ ] C++ Unit tests
      • [ ] integration tests
      • [ ] resilience tests
    • [ ] :book: CHANGELOG entry made
    • [ ] :books: documentation written (release notes, API changes, ...)
    • [ ] Backports
      • [ ] Backport for 3.9: (Please link PR)
      • [ ] Backport for 3.8: (Please link PR)
      • [ ] Backport for 3.7: (Please link PR)

    Related Information

    (Please reference tickets / specification / other PRs etc)

    • [ ] Docs PR:
    • [ ] Enterprise PR:
    • [ ] GitHub issue / Jira ticket:
    • [ ] Design document: