The MongoDB Database

MongoDB README

Welcome to MongoDB!

COMPONENTS

  mongod - The database server.
  mongos - Sharding router.
  mongo  - The database shell (uses interactive JavaScript).

UTILITIES

  install_compass   - Installs MongoDB Compass for your platform.

BUILDING

  See docs/building.md.

RUNNING

  For command line options invoke:

    $ ./mongod --help

  To run a single server database:

    $ sudo mkdir -p /data/db
    $ ./mongod
    $
    $ # The mongo JavaScript shell connects to localhost and the test database by default:
    $ ./mongo
    > help

INSTALLING COMPASS

  You can install Compass using the install_compass script packaged with MongoDB:

    $ ./install_compass

  This will download the appropriate MongoDB Compass package for your platform
  and install it.

DRIVERS

  Client drivers for most programming languages are available at
  https://docs.mongodb.com/manual/applications/drivers/. Use the shell
  ("mongo") for administrative tasks.

BUG REPORTS

  See https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports.

PACKAGING

  Packages are created dynamically by the package.py script located in the
  buildscripts directory. This will generate RPM and Debian packages.

DOCUMENTATION

  https://docs.mongodb.com/manual/

CLOUD HOSTED MONGODB

  https://www.mongodb.com/cloud/atlas

FORUMS

  https://community.mongodb.com

    A forum for technical questions about using MongoDB.

  https://community.mongodb.com/c/server-dev

    A forum for technical questions about building and developing MongoDB.

LEARN MONGODB

  https://university.mongodb.com/

LICENSE

  MongoDB is free and the source is available. Versions released prior to
  October 16, 2018 are published under the AGPL. All versions released after
  October 16, 2018, including patch fixes for prior versions, are published
  under the Server Side Public License (SSPL) v1. See individual files for
  details.

COMMENTS
  • SERVER-4785 maintain slowms at database level

    See the issues at https://jira.mongodb.org/browse/SERVER-4785 and https://jira.mongodb.org/browse/SERVER-18946.

    Here is the test result.

    user@host:~/github/mongo$ ./mongo

    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 100, "sampleRate" : 1 }
    > db.setProfilingLevel(0, 1)
    { "was" : 0, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }
    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 1, "sampleRate" : 1 }
    > for(var i =0; i<10000; i++) {db.test.save({'i': i})}
    WriteResult({ "nInserted" : 1 })
    > use test2
    switched to db test2
    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 100, "sampleRate" : 1 }    -- test2: still 100
    > for(var i =0; i<10000; i++) {db.test.save({'i': i})}
    WriteResult({ "nInserted" : 1 })

    As a result, we capture the slow queries from the test database but not from the test2 database.

    2017-12-26T13:38:32.678+0800 I COMMAND [conn1] command test.test appName: "MongoDB Shell" command: insert { insert: "test", ordered: true, $db: "test" } ninserted:1 keysInserted:1 numYields:0 reslen:29 locks:{ Global: { acquireCount: { r: 3, w: 1 } }, Database: { acquireCount: { w: 1, R: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_msg 3ms

  • SERVER-12064 Portability enhancements

    The first two commits are of particularly low risk: they should not regress anything, since the new gcc atomic builtins are only used on new architectures not previously supported. In the IA-32 and x86_64 cases, the inline assembly continues to be used.

    The benefit of these first two commits is that a build and smoke test on ARM (specifically Ubuntu armhf) now succeeds, provided that you use --use-system-v8 as the in-tree v8 does not support ARM.

    The third commit switches IA-32 and x86_64 to also use the gcc atomic builtins instead of the previous inline assembly, as requested by Andy Schwerin in SERVER-12064. The inline assembly remains and covers the case of a build with an older gcc that does not support atomic builtins. This commit is higher risk, since I change the code path on IA-32 and x86_64, and I think warrants a close review to make sure that my understanding of the required semantics is correct.

    For my purposes, I'm fine if you take just the first two commits, since I'll have enabled ARM and hopefully AArch64, too.

    However, taking the third commit also is probably a good long term plan for the project.

  • SERVER-2459 add --excludeCollection to mongodump

    I wanted to resolve SERVER-2459, so I patched /src/mongo/tools/dump.cpp to add a new command line option, --excludeCollection. I also updated the man file debian/mongodump.1 to reflect this. I think I have made these changes following the style of the existing code. This is my first contribution to Mongo and if you see issues with my code, I would appreciate any feedback. Thanks for considering my patch.

  • SERVER-9306 support Invisible index in MongoDB

    Hi: This pull request is about https://jira.mongodb.org/browse/SERVER-9306 and https://jira.mongodb.org/browse/SERVER-26589

    The user story of this feature is something like this:

    1. The DBA wants to drop an index.
    2. The DBA is not confident that the index is truly unneeded.
    3. The collection is huge, so recreating the index after dropping it would take a long time.

    In this situation, we can make the index invisible for the following few days: MongoDB will still maintain the index for subsequent data changes, but the optimizer will no longer use it. If we find the index is still needed, we can roll back the change quickly by making it visible again. If the index turns out not to be needed for the following N days, we can drop it safely.

    The reason the "indexStats" command in MongoDB 3.2 is not a perfect fit for this use case is that the optimizer may choose a wrong index, so indexStats will still show the index as in use even though it can and should be dropped.

    Here is an overview for this feature:

    > for(var i = 0; i<10000; i++) { db.i.save({a: i})}
    > db.i.createIndex({a: 1})
    {
                   "createdCollectionAutomatically" : false,
                   "numIndexesBefore" : 1,
                   "numIndexesAfter" : 2,
                   "ok" : 1
    }
    
    > db.i.getIndices()
    [
                   {
                                   "v" : 2,
                                   "key" : {
                                                   "_id" : 1
                                   },
                                   "name" : "_id_",
                                   "ns" : "zhifan.i",
                                   "invisible" : false
                   },
                   {
                                   "v" : 2,
                                   "key" : {
                                                   "a" : 1
                                   },
                                   "name" : "a_1",
                                   "ns" : "zhifan.i",
                                   "invisible" : false
                   }
    ]
    
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "FETCH",
                                                   "inputStage" : {
                                                                   "stage" : "IXSCAN",
                                                                   "keyPattern" : {
                                                                                   "a" : 1
                                                                   },
                                                                   "indexName" : "a_1",
                                                                   "isMultiKey" : false,
                                                                   "multiKeyPaths" : {
                                                                                   "a" : [ ]
                                                                   },
                                                                   "isUnique" : false,
                                                                   "isSparse" : false,
                                                                   "isPartial" : false,
                                                                   "indexVersion" : 2,
                                                                   "direction" : "forward",
                                                                   "indexBounds" : {
                                                                                   "a" : [
                                                                                                   "[100.0, 100.0]"
                                                                                   ]
                                                                   }
                                                   }
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    
    > db.runCommand({"collMod": "i", "index": {"name": "a_1", "invisible": true}})
    { "invisible_old" : false, "invisible_new" : true, "ok" : 1 }
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "COLLSCAN",
                                                   "filter" : {
                                                                   "a" : {
                                                                                   "$eq" : 100
                                                                   }
                                                   },
                                                   "direction" : "forward"
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    > db.runCommand({"collMod": "i", "index": {"name": "a_1", "invisible": false}})
    { "invisible_old" : true, "invisible_new" : false, "ok" : 1 }
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "FETCH",
                                                   "inputStage" : {
                                                                   "stage" : "IXSCAN",
                                                                   "keyPattern" : {
                                                                                   "a" : 1
                                                                   },
                                                                   "indexName" : "a_1",
                                                                   "isMultiKey" : false,
                                                                   "multiKeyPaths" : {
                                                                                   "a" : [ ]
                                                                   },
                                                                   "isUnique" : false,
                                                                   "isSparse" : false,
                                                                   "isPartial" : false,
                                                                   "indexVersion" : 2,
                                                                   "direction" : "forward",
                                                                   "indexBounds" : {
                                                                                   "a" : [
                                                                                                   "[100.0, 100.0]"
                                                                                   ]
                                                                   }
                                                   }
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    

    All the existing test cases pass with python buildscripts/resmoke.py jstests/core/*.js, and the code is formatted with python buildscripts/clang_format.py format.

  • SERVER-991: Added support for $trim

    The $trim operator will limit the length of the arrays to which it is applied. A positive number indicates how many items to keep starting counting from the beginning of the array whereas a negative number keeps items from the end of the array.

    array: [1 2 3 4 5 6 7 8]
    trim(5)  -> keep 1 2 3 4 5, drop 6 7 8 -> [1, 2, 3, 4, 5]
    trim(-5) -> drop 1 2 3, keep 4 5 6 7 8 -> [4, 5, 6, 7, 8]

    $trim is special in that it is allowed (and required) to reference the same field as another update modifier. It cannot be used by itself, but the same effect can be achieved by doing a $pushAll: [] (pushing an empty list) at the same time as using $trim.

    $trim is always performed after updating the field with any other modifier.
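    The keep-from-front / keep-from-back semantics described above can be sketched with plain JavaScript arrays. This is only an illustration of the proposed behavior, not the server implementation:

```javascript
// Sketch of the proposed $trim semantics on a plain JavaScript array.
// A positive n keeps the first n items; a negative n keeps the last |n|.
function trim(array, n) {
  if (n >= 0) {
    return array.slice(0, n);   // keep from the beginning
  }
  return array.slice(n);        // keep from the end
}

const a = [1, 2, 3, 4, 5, 6, 7, 8];
console.log(trim(a, 5));   // [ 1, 2, 3, 4, 5 ]
console.log(trim(a, -5));  // [ 4, 5, 6, 7, 8 ]
```

    Note that when n exceeds the array length, slice simply returns the whole array, so a generous trim is a no-op.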

    Test cases have been added.

  • SERVER-6233: Support Connection String URI Format for mongo shell

    Support Connection String URI Format for mongo auth.

    Example usage: mongo mongodb://username:password@host:port/database

    Jira Reference: https://jira.mongodb.org/browse/SERVER-6233
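    As an illustration of what such URI handling involves, here is a minimal sketch that splits the simple user:pass@host:port/database shape into its pieces. It is not the shell's actual parser, which also handles options, replica-set seed lists, and more:

```javascript
// Minimal, illustrative parser for the simple connection-string shape
// mongodb://user:pass@host:port/database. Hypothetical helper, not the
// real shell parser.
function parseMongoUri(uri) {
  const m = uri.match(/^mongodb:\/\/(?:([^:@]+):([^@]+)@)?([^:\/]+)(?::(\d+))?\/?([^?]*)/);
  if (!m) throw new Error("not a mongodb:// URI");
  return {
    user: m[1],                        // undefined when no credentials given
    pass: m[2],
    host: m[3],
    port: m[4] ? Number(m[4]) : 27017, // MongoDB's default port
    db:   m[5] || "test",              // the shell's default database
  };
}

console.log(parseMongoUri("mongodb://alice:secret@localhost:27017/admin"));
```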

  • SERVER-14802 Fixed problem with sleepmillis() on Windows due to default timer resolution being typically higher than 15ms

    Any sleepmillis() call with a value smaller than the timer resolution will sleep AT LEAST the timer resolution, making Mongo very slow in certain use cases, such as when implementing consumer/producer patterns that use capped collections to stream objects back and forth.
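    A hypothetical model of the effect (15.625 ms is Windows' typical default timer resolution at 64 Hz; the function below is purely illustrative, not the fix itself):

```javascript
// Illustrative model: a sleep request below the OS timer resolution is
// effectively rounded up to the resolution, so many short sleeps in a
// tight loop cost far more wall-clock time than requested.
function effectiveSleepMs(requestedMs, resolutionMs = 15.625) {
  return Math.max(requestedMs, resolutionMs);
}

// 1000 one-millisecond sleeps: ~1 s requested, ~15.6 s actually slept.
const requested = 1000 * 1;
const actual = 1000 * effectiveSleepMs(1);
console.log(requested, actual); // 1000 15625
```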

  • SERVER-5399 Essentially aliasing quit, exit, and function call variations. (+ misc cleanup)

    Previously, exit was implemented directly in the DB shell, while quit was implemented as a shell function. This meant that the two behaved differently and were invoked differently, i.e., exit could be run as-is, whereas quit() had to be called as a function.

  • initial scons 3.0.1 and python3 build support

    This PR allows building mongo with the latest (3.0.1) SCons, which now uses Python 3 by default. There are still a lot of rough edges, but in the end it allows building the mongod binary.

    python version:

    user@host:~/rpmbuild/BUILD/mongo-patched> python3 --version
    Python 3.6.3
    user@host:~/rpmbuild/BUILD/mongo-patched>
    

    scons version:

    user@host:~/rpmbuild/BUILD/mongo-patched> scons -v
    SCons by Steven Knight et al.:
            script: v3.0.1.74b2c53bc42290e911b334a6b44f187da698a668, 2017/11/14 13:16:53, by bdbaddog on hpmicrodog
            engine: v3.0.1.74b2c53bc42290e911b334a6b44f187da698a668, 2017/11/14 13:16:53, by bdbaddog on hpmicrodog
            engine path: ['/usr/lib/scons-3.0.1/SCons']
    Copyright (c) 2001 - 2017 The SCons Foundation
    user@host:~/rpmbuild/BUILD/mongo-patched>
    

    and finally running scons on my machine

    user@host:~/rpmbuild/BUILD/mongo-patched> scons -j 8 MONGO_VERSION=3.6.0  --disable-warnings-as-errors  --ssl
    scons: Reading SConscript files ...
    Mkdir("build/scons")
    scons version: 3.0.1
    python version: 3 6 3 final 0
    Checking whether the C compiler works... yes
    Checking whether the C++ compiler works... yes
    ...
    ...
    ...
    ranlib build/blah/mongo/db/query/libquery_common.a
    Generating library build/blah/mongo/db/libmongod_options.a
    ranlib build/blah/mongo/db/libmongod_options.a
    Linking build/blah/mongo/mongod
    Install file: "build/blah/mongo/mongod" as "mongod"
    scons: done building targets.
    user@host:~/rpmbuild/BUILD/mongo-patched>
    

    and of course running built binary

    user@host:~/rpmbuild/BUILD/mongo-patched> ./mongod --dbpath /tmp/ --bind_ip 127.0.0.1
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] MongoDB starting : pid=2937 port=27017 dbpath=/tmp/ 64-bit host=pc
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] db version v3.6.0
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] git version: 9038d0a67ee578aa68ef8482b1fc98750d1007a6
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0g-fips  2 Nov 2017
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] allocator: tcmalloc
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] modules: none
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] build environment:
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten]     distarch: x86_64
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten]     target_arch: x86_64
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1" }, storage: { dbPath: "/tmp/" } }
    2017-12-22T18:52:02.566+0000 I STORAGE  [initandlisten] Detected data files in /tmp/ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
    2017-12-22T18:52:02.566+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=15542M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
    2017-12-22T18:52:02.637+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:637511][2937:0x7f87663729c0], txn-recover: Main recovery loop: starting at 6/4736
    2017-12-22T18:52:02.679+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:679111][2937:0x7f87663729c0], txn-recover: Recovering log 6 through 7
    2017-12-22T18:52:02.706+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:706386][2937:0x7f87663729c0], txn-recover: Recovering log 7 through 7
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.736+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/tmp/diagnostic.data'
    2017-12-22T18:52:02.737+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
    
  • SERVER-12007 add support for new ObjectId(Date|Number)

    Please see https://github.com/marcello3d/node-buffalo/pull/11, then check the following shell script:

    var timespan = new ObjectId(Date.now() - 10*60*60*1000);
    db.myColl.find({_id: {'$gt': timespan}}).count();
    

    This returns the records inserted within the last 10 hours, based on their insertion date. It's useful.
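    The idea relies on ObjectId's layout: the first 4 bytes are a big-endian seconds-since-epoch timestamp, so an ObjectId whose remaining bytes are zero sorts before every id generated at or after that time. A hypothetical pure-JavaScript helper (objectIdHexFromTime is not a real shell function) shows how such a boundary value can be built:

```javascript
// Build a 24-hex-character ObjectId boundary from a millisecond
// timestamp: 8 hex chars of big-endian seconds, then 16 zero chars.
// Hypothetical helper for illustration only.
function objectIdHexFromTime(millis) {
  const seconds = Math.floor(millis / 1000);
  return seconds.toString(16).padStart(8, "0") + "0000000000000000";
}

const tenHoursAgo = Date.now() - 10 * 60 * 60 * 1000;
const boundary = objectIdHexFromTime(tenHoursAgo);
// In the shell: db.myColl.find({_id: {$gt: ObjectId(boundary)}}).count()
console.log(boundary.length); // 24, as ObjectId requires
```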

  • SERVER-9751 Force S2 geometry library to use Mongo's LOG instead of stderr

    Right now S2 library just writes any debugging output and errors to std::cerr. This patch changes S2's logger to use Mongo's LOG macros.

    That way it is much easier to find errors in geometry primitives.

  • [Snyk] Security upgrade http-server from 0.12.3 to 0.13.0

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • buildscripts/libdeps/graph_visualizer_web_stack/package.json

    Vulnerabilities that will be fixed

    With an upgrade:

    Severity | Priority Score (*) | Issue | Breaking Change | Exploit Maturity
    -------- | ------------------ | ----- | --------------- | ----------------
    High | 696/1000 (Why? Proof of Concept exploit, has a fix available, CVSS 7.5) | Denial of Service (DoS), SNYK-JS-ECSTATIC-540354 | No | Proof of Concept

  • change IDL gen, remove security token from opCtx

    In this patch:

    • Added a concatenate_with_tenant_id_and_db namespace type for IDL defined commands
    • Removed securityToken from opCtx and put on the request object

    Not in this patch:

    • Changes to logging
    • Changes to audit

    Example of command parse method (I realized it might be nice to see the generated code):

    CreateCommand CreateCommand::parse(const IDLParserErrorContext& ctxt, const OpMsgRequest& request) {
        NamespaceString localNS;
        CreateCommand object(localNS);
        object.parseProtected(ctxt, request);
        return object;
    }
    void CreateCommand::parseProtected(const IDLParserErrorContext& ctxt, const OpMsgRequest& request) {
        std::bitset<23> usedFields;
        const size_t kCappedBit = 0;
        const size_t kAutoIndexIdBit = 1;
        const size_t kIdIndexBit = 2;
        const size_t kSizeBit = 3;
        const size_t kMaxBit = 4;
        const size_t kStorageEngineBit = 5;
        const size_t kValidatorBit = 6;
        const size_t kValidationLevelBit = 7;
        const size_t kValidationActionBit = 8;
        const size_t kIndexOptionDefaultsBit = 9;
        const size_t kViewOnBit = 10;
        const size_t kPipelineBit = 11;
        const size_t kCollationBit = 12;
        const size_t kRecordPreImagesBit = 13;
        const size_t kChangeStreamPreAndPostImagesBit = 14;
        const size_t kTimeseriesBit = 15;
        const size_t kClusteredIndexBit = 16;
        const size_t kExpireAfterSecondsBit = 17;
        const size_t kEncryptedFieldsBit = 18;
        const size_t kTempBit = 19;
        const size_t kFlagsBit = 20;
        const size_t kDbNameBit = 21;
        const size_t kDollarTenantIdBit = 22;
        BSONElement commandElement;
        bool firstFieldFound = false;
    
        for (const auto& element :request.body) {
            const auto fieldName = element.fieldNameStringData();
    
            if (firstFieldFound == false) {
                commandElement = element;
                firstFieldFound = true;
                continue;
            }
    
            if (fieldName == kCappedFieldName) {
                if (MONGO_likely(ctxt.checkAndAssertTypes(element, {Bool, NumberLong, NumberInt, NumberDecimal, NumberDouble}))) {
                    if (MONGO_unlikely(usedFields[kCappedBit])) {
                        ctxt.throwDuplicateField(element);
                    }
    
                    usedFields.set(kCappedBit);
    
                    _capped = element.trueValue();
                }
            }
            // ... continue setting all other fields (elided here to be concise) ...
            else if (fieldName == kDbNameFieldName) {
                if (MONGO_likely(ctxt.checkAndAssertType(element, String))) {
                    if (MONGO_unlikely(usedFields[kDbNameBit])) {
                        ctxt.throwDuplicateField(element);
                    }
    
                    usedFields.set(kDbNameBit);
    
                    _hasDbName = true;
                    _dbName = element.str();
                }
            }
            else if (fieldName == kDollarTenantIdFieldName) {
                if (MONGO_unlikely(usedFields[kDollarTenantIdBit])) {
                    ctxt.throwDuplicateField(element);
                }
    
                usedFields.set(kDollarTenantIdBit);
    
                _dollarTenantId = TenantId::parseFromBSON(element);
            }
            else {
                if (!mongo::isGenericArgument(fieldName)) {
                    ctxt.throwUnknownField(fieldName);
                }
            }
        }
    
    
        if (MONGO_unlikely(!usedFields.all())) {
            if (!usedFields[kDbNameBit]) {
                ctxt.throwMissingField(kDbNameFieldName);
            }
        }
    
        boost::optional<TenantId> tenantId;
    
        if (request.securityToken.nFields() > 0) {
            _securityToken.emplace(auth::SecurityToken::parse({"Security Token"}, request.securityToken));
            uassert(ErrorCodes::BadValue, "Security token authenticated user requires a valid Tenant ID", _securityToken->getAuthenticatedUser().getTenant());
            tenantId.emplace(*_securityToken->getAuthenticatedUser().getTenant());
        }
    
        if (!tenantId && _dollarTenantId) {
            tenantId.emplace(*_dollarTenantId);
        }
        invariant(_nss.isEmpty());
    
        _nss = ctxt.parseNSCollectionRequired(_dbName, commandElement, false, tenantId);
    }
    
  • WIP Change the network chunk version format to the new format

    This is a WIP test that changes the chunk version format from an array format:

    [<major|minor>, <epoch>, <timestamp>]

    To an object like format:

    {v: <major|minor>, e: <epoch>, t: <timestamp>}

    There are several tests that need to be fixed, and some extra code is necessary to fully complete the transition; one of the major problems seems to be split chunk. The full evergreen patch can be found here.
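    A sketch of the mapping between the two wire formats (plain JavaScript; the field names v, e, t are taken from the description above, and the converter itself is hypothetical):

```javascript
// Convert the legacy array chunk-version format
//   [<major|minor>, <epoch>, <timestamp>]
// to the proposed object format
//   {v: <major|minor>, e: <epoch>, t: <timestamp>}
function chunkVersionToObject([combinedVersion, epoch, timestamp]) {
  return { v: combinedVersion, e: epoch, t: timestamp };
}

// Inverse mapping, useful while both formats are still on the wire.
function chunkVersionToArray({ v, e, t }) {
  return [v, e, t];
}

console.log(chunkVersionToObject([7, "oid-epoch", "cluster-ts"]));
```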

  • SERVER-45617 - Support for Serbian language in FTS

    JIRA Ticket: https://jira.mongodb.org/browse/SERVER-45617

    Notes:

    1. codepoints_diacritic_map.cpp was changed to add removal of diacritics for Đ and đ. It had to be updated manually because it does not appear to be regenerated by the SConscript located in the fts/unicode folder.

    2. The Serbian stemmer from the latest version of Snowball was adjusted for, and compiled with, the version of Snowball currently used in MongoDB.
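    As an illustration of why point 1 needs an explicit table entry: Đ (U+0110) and đ (U+0111) have no Unicode canonical decomposition, so generic NFD-based diacritic stripping leaves them untouched, and a manual mapping to D/d is required. A sketch in plain JavaScript (illustrative, not the generated C++ table):

```javascript
// Đ/đ carry no combining mark in Unicode (no canonical decomposition),
// so they must be folded to D/d by an explicit table entry rather than
// by NFD normalization plus mark removal. Illustrative fold:
function foldSerbianD(s) {
  return s.replace(/\u0110/g, "D").replace(/\u0111/g, "d");
}

// NFD + mark stripping alone does not change Đ:
console.log("\u0110".normalize("NFD").replace(/\p{M}/gu, "")); // "Đ"
console.log(foldSerbianD("Đorđe")); // "Dorde"
```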

Nebula Graph is a distributed, fast open-source graph database featuring horizontal scalability and high availability
Nebula Graph is a distributed, fast open-source graph database featuring horizontal scalability and high availability

Nebula Graph is an open-source graph database capable of hosting super large-scale graphs with billions of vertices (nodes) and trillions of edges, with milliseconds of latency. It delivers enterprise-grade high performance to simplify the most complex data sets imaginable into meaningful and useful information.

Jun 24, 2022
🥑 ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.
🥑 ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.

?? ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.

Jun 24, 2022
FEDB is a NewSQL database optimised for realtime inference and decisioning application
FEDB is a NewSQL database optimised for realtime inference and decisioning application

FEDB is a NewSQL database optimised for realtime inference and decisioning applications. These applications put real-time features extracted from multiple time windows through a pre-trained model to evaluate new data to support decision making. Existing in-memory databases cost hundreds or even thousands of milliseconds so they cannot meet the requirements of inference and decisioning applications.

Jun 16, 2022
Kvrocks is a key-value NoSQL database based on RocksDB and compatible with Redis protocol.
Kvrocks is a key-value NoSQL database based on RocksDB and compatible with Redis protocol.

Kvrocks is a key-value NoSQL database based on RocksDB and compatible with Redis protocol.

Jun 23, 2022
Kvrocks is a distributed key-value NoSQL database based on RocksDB and compatible with the Redis protocol.

Jun 24, 2022
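"Compatible with the Redis protocol" means a server like Kvrocks speaks RESP (REdis Serialization Protocol) on the wire, so existing Redis clients work unchanged. A minimal sketch of how a command is framed as a RESP array of bulk strings (illustrative, not a full client):

```python
def encode_resp(args):
    """Encode a command as a RESP array of bulk strings, the wire
    format accepted by Redis and Redis-compatible servers.

    Framing: '*<count>\r\n' then, per argument,
    '$<byte-length>\r\n<bytes>\r\n'.
    """
    out = b"*%d\r\n" % len(args)
    for a in args:
        data = a.encode() if isinstance(a, str) else a
        out += b"$%d\r\n%s\r\n" % (len(data), data)
    return out

print(encode_resp(["SET", "greeting", "hello"]))
# -> b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```

Sending those bytes over a TCP socket to a Redis-compatible server is all a client library fundamentally does.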
The MongoDB Database

Jun 18, 2022
ARCHIVED - libbson has moved to https://github.com/mongodb/mongo-c-driver/tree/master/src/libbson

libbson ARCHIVED - libbson is now maintained in a subdirectory of the libmongoc project: https://github.com/mongodb/mongo-c-driver/tree/master/src/libbson

May 7, 2022
C++ Driver for MongoDB

MongoDB C++ Driver. Welcome to the MongoDB C++ Driver! Branches - releases/stable versus master: the default checkout branch of this repository is releases/stable.

Jun 14, 2022
A high-performance MongoDB driver for C

mongo-c-driver is a project that includes two libraries: libmongoc, a client library written in C for MongoDB, and libbson, a library for building, parsing, and iterating BSON documents.

Jun 20, 2022
MySQL Server, the world's most popular open source database, and MySQL Cluster, a real-time, open source transactional database.

Copyright (c) 2000, 2021, Oracle and/or its affiliates. This is a release of MySQL, an SQL database server. License information can be found in the LICENSE file.

Jun 24, 2022
A mini database for learning how databases work

Nov 3, 2021
Bear is a tool that generates a compilation database for clang tooling.

ʕ·ᴥ·ʔ Build EAR. Bear is a tool that generates a compilation database for clang tooling. The JSON compilation database is used in the clang project to provide information on how a single compilation unit is processed.

Jun 23, 2022
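The compilation database Bear produces is a compile_commands.json file: a JSON array with one record per compilation unit, which clang tools read to learn the exact flags each file was built with. A minimal example of the format (the paths and flags here are illustrative):

```json
[
  {
    "directory": "/home/user/project",
    "command": "cc -Iinclude -c src/main.c -o build/main.o",
    "file": "src/main.c"
  }
]
```

Tools such as clang-tidy and clangd pick this file up automatically from the build directory.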
ESE is an embedded, ISAM-based database engine that provides rudimentary table and indexed access.

Extensible-Storage-Engine: A Non-SQL Database Engine. The Extensible Storage Engine (ESE) is one of those rare codebases having proven to have a more than 25 year serviceable lifetime.

Jun 13, 2022
Identify I2C devices from a database of the most popular I2C sensors and other devices

I2C Detective Identify I2C devices from a database of the most popular I2C sensors and other devices. For more information see http://www.technoblogy.

Jun 11, 2022
Scylla is the real-time big data database that is API-compatible with Apache Cassandra and Amazon DynamoDB

Scylla is the real-time big data database that is API-compatible with Apache Cassandra and Amazon DynamoDB. Scylla embraces a shared-nothing approach that increases throughput and storage capacity to realize order-of-magnitude performance improvements and reduce hardware costs.

Jun 24, 2022
Nebula Graph is a distributed, fast open-source graph database featuring horizontal scalability and high availability

Nebula Graph is an open-source graph database capable of hosting super large-scale graphs with billions of vertices (nodes) and trillions of edges, with millisecond latency. It delivers enterprise-grade high performance to simplify the most complex data sets imaginable into meaningful and useful information.

Jun 24, 2022