Draco is a library for compressing and decompressing 3D geometric meshes and point clouds.

News

Version 1.4.1 release

Version 1.4.0 release

  • WASM and JavaScript decoders are hosted from a static URL.
  • Changed npm modules to use WASM, which increased performance by ~200%.
  • Updated Emscripten to 2.0.
    • This causes the Draco codec modules to return a promise instead of the module directly.
    • Please see the example code for how to handle the promise.
  • Changed NORMAL quantization default to 8.
  • Added new array API to decoder and deprecated DecoderBuffer.
  • Changed WASM/JavaScript behavior of catching exceptions.
  • Code cleanup.
  • Emscripten builds now disable NODEJS_CATCH_EXIT and NODEJS_CATCH_REJECTION.
    • Authors of a CLI tool might want to add their own error handlers.
  • Added Maya plugin builds.
  • Unity plugin builds updated.
  • Bug fixes.

Version 1.3.6 release

  • WASM and JavaScript decoders are now hosted from a static URL
  • Changed web examples to pull Draco decoders from static URL
  • Added new API to Draco WASM decoder, which increased performance by ~15%
  • Decreased Draco WASM decoder size by ~20%
  • Added support for generic and multiple attributes to Draco Unity plug-ins
  • Added new API to Draco Unity, which increased decoder performance by ~15%
  • Changed quantization defaults:
    • POSITION: 11
    • NORMAL: 7
    • TEX_COORD: 10
    • COLOR: 8
    • GENERIC: 8
  • Code cleanup
  • Bug fixes

Version 1.3.5 release

  • Added option to build Draco for Universal Scene Description
  • Code cleanup
  • Bug fixes

Version 1.3.4 release

  • Released Draco Animation code
  • Fixes for Unity
  • Various file location and name changes

Version 1.3.3 release

  • Added ExpertEncoder to the Javascript API
    • Allows developers to set quantization options per attribute id
  • Bug fixes

Version 1.3.2 release

  • Bug fixes

Version 1.3.1 release

  • Fix issue with multiple attributes when skipping an attribute transform

Version 1.3.0 release

  • Improved kD-tree based point cloud encoding
    • Now applicable to point clouds with any number of attributes
    • Support for all integer attribute types and quantized floating point types
  • Improved mesh compression by up to 10% (on average ~2%)
    • For meshes, the 1.3.0 bitstream is fully compatible with 1.2.x decoders
  • Improved Javascript API
    • Added support for all signed and unsigned integer types
    • Added support for point clouds to our Javascript encoder API
  • Added support for integer properties to the PLY decoder
  • Bug fixes

Previous releases

https://github.com/google/draco/releases

Description

Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.

Draco was designed and built for compression efficiency and speed. The code supports compressing points, connectivity information, texture coordinates, color information, normals, and any other generic attributes associated with geometry. With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users, this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth and rendered quickly.

Draco is released as C++ source code that can be used to compress 3D graphics as well as C++ and Javascript decoders for the encoded data.

Building

See BUILDING for building instructions.

Usage

Unity

For the best information about using Unity with Draco, please visit https://github.com/atteneder/DracoUnity

For a simple example of using Unity with Draco, see the README in the unity folder.

WASM and JavaScript Decoders

It is recommended to always pull your Draco WASM and JavaScript decoders from:

https://www.gstatic.com/draco/v1/decoders/

Users will benefit from having the Draco decoder in cache as more sites start using the static URL.
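
For orientation, the sketch below shows one way a page might consume the hosted files after including draco_wasm_wrapper.js via a script tag. The wasmBinary option is the standard Emscripten module setting rather than anything documented here, so treat the exact wiring as an assumption and adapt it to your loader; note that since the 1.4.0 release the module factory returns a promise.

// Assumes <script src="https://www.gstatic.com/draco/v1/decoders/draco_wasm_wrapper.js">
// has already been loaded on the page, which defines DracoDecoderModule().
fetch('https://www.gstatic.com/draco/v1/decoders/draco_decoder.wasm')
  .then((response) => response.arrayBuffer())
  .then((wasmBinary) => DracoDecoderModule({wasmBinary: wasmBinary}))
  .then((decoderModule) => {
    // The decoder module is ready; create Decoder/DecoderBuffer objects here
    // as shown in the Javascript Decoder API section below.
  });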

Command Line Applications

The default target created from the build files will be the draco_encoder and draco_decoder command line applications. Running either application without any arguments, or with -h, will print its usage and options.

Encoding Tool

draco_encoder will read OBJ or PLY files as input, and output Draco-encoded files. We have included Stanford's Bunny mesh for testing. The basic command line looks like this:

./draco_encoder -i testdata/bun_zipper.ply -o out.drc

A value of 0 for the quantization parameter will not perform any quantization on the specified attribute. Any value other than 0 will quantize the input values for the specified attribute to that number of bits. For example:

./draco_encoder -i testdata/bun_zipper.ply -o out.drc -qp 14

will quantize the positions to 14 bits (default is 11 for the position coordinates).

In general, the more you quantize your attributes, the better the compression rate you will get. It is up to your project to decide how much deviation it can tolerate. Most projects can set quantization values of about 11 without any noticeable difference in quality.

The compression level (-cl) parameter turns on/off different compression features.

./draco_encoder -i testdata/bun_zipper.ply -o out.drc -cl 8

In general, the highest setting, 10, will have the most compression but the worst decompression speed. 0 will have the least compression, but the best decompression speed. The default setting is 7.

Encoding Point Clouds

You can encode point cloud data with draco_encoder by specifying the -point_cloud parameter. If you specify the -point_cloud parameter with a mesh input file, draco_encoder will ignore the connectivity data and encode the positions from the mesh file.

./draco_encoder -point_cloud -i testdata/bun_zipper.ply -o out.drc

This command line will encode the mesh input as a point cloud, even though the input might not produce compression that is representative of other point clouds. Specifically, one can expect much better compression rates for larger and denser point clouds.

Decoding Tool

draco_decoder will read Draco files as input, and output OBJ or PLY files. The basic command line looks like this:

./draco_decoder -i in.drc -o out.obj

C++ Decoder API

If you'd like to add decoding to your applications, you will need to include the draco_dec library. In order to use the Draco decoder, you need to initialize a DecoderBuffer with the compressed data. Then call DecodeMeshFromBuffer() to return a decoded mesh object, or call DecodePointCloudFromBuffer() to return a decoded PointCloud object. For example:

draco::DecoderBuffer buffer;
buffer.Init(data.data(), data.size());

const draco::EncodedGeometryType geom_type =
    draco::GetEncodedGeometryType(&buffer);
if (geom_type == draco::TRIANGULAR_MESH) {
  std::unique_ptr<draco::Mesh> mesh = draco::DecodeMeshFromBuffer(&buffer);
} else if (geom_type == draco::POINT_CLOUD) {
  std::unique_ptr<draco::PointCloud> pc =
      draco::DecodePointCloudFromBuffer(&buffer);
}

Please see src/draco/mesh/mesh.h for the full Mesh class interface and src/draco/point_cloud/point_cloud.h for the full PointCloud class interface.

Javascript Encoder API

The Javascript encoder is located in javascript/draco_encoder.js. The encoder API can be used to compress meshes and point clouds. In order to use the encoder, you need to first create an instance of DracoEncoderModule. Then use this instance to create MeshBuilder and Encoder objects. MeshBuilder is used to construct a mesh from geometry data that can later be compressed by Encoder. First create a mesh object using new encoderModule.Mesh(). Then, use AddFacesToMesh() to add indices to the mesh and use AddFloatAttributeToMesh() to add attribute data to the mesh, e.g. position, normal, color and texture coordinates. After a mesh is constructed, you can then use EncodeMeshToDracoBuffer() to compress the mesh. For example:

const mesh = {
  indices : new Uint32Array(indices),
  vertices : new Float32Array(vertices),
  normals : new Float32Array(normals)
};

const encoderModule = DracoEncoderModule();
const encoder = new encoderModule.Encoder();
const meshBuilder = new encoderModule.MeshBuilder();
const dracoMesh = new encoderModule.Mesh();

const numFaces = mesh.indices.length / 3;
// The vertices array is flat (x, y, z per point), so divide by 3.
const numPoints = mesh.vertices.length / 3;
meshBuilder.AddFacesToMesh(dracoMesh, numFaces, mesh.indices);

meshBuilder.AddFloatAttributeToMesh(dracoMesh, encoderModule.POSITION,
  numPoints, 3, mesh.vertices);
if (mesh.hasOwnProperty('normals')) {
  meshBuilder.AddFloatAttributeToMesh(
    dracoMesh, encoderModule.NORMAL, numPoints, 3, mesh.normals);
}
if (mesh.hasOwnProperty('colors')) {
  meshBuilder.AddFloatAttributeToMesh(
    dracoMesh, encoderModule.COLOR, numPoints, 3, mesh.colors);
}
if (mesh.hasOwnProperty('texcoords')) {
  meshBuilder.AddFloatAttributeToMesh(
    dracoMesh, encoderModule.TEX_COORD, numPoints, 3, mesh.texcoords);
}

if (method === "edgebreaker") {
  encoder.SetEncodingMethod(encoderModule.MESH_EDGEBREAKER_ENCODING);
} else if (method === "sequential") {
  encoder.SetEncodingMethod(encoderModule.MESH_SEQUENTIAL_ENCODING);
}

const encodedData = new encoderModule.DracoInt8Array();
// Use default encoding setting.
const encodedLen = encoder.EncodeMeshToDracoBuffer(dracoMesh,
                                                   encodedData);
encoderModule.destroy(dracoMesh);
encoderModule.destroy(encoder);
encoderModule.destroy(meshBuilder);
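
EncodeMeshToDracoBuffer() returns the number of encoded bytes written into the DracoInt8Array. To get those bytes into a plain typed array (for saving or uploading), you can copy them out one value at a time. This is only a sketch; it assumes DracoInt8Array exposes GetValue(index), so verify against the encoder IDL referenced below.

// Sketch: copy the encoded bytes out of the DracoInt8Array into a Uint8Array.
// Assumes DracoInt8Array provides GetValue(index); see the encoder IDL below.
// An encodedLen of 0 indicates that encoding failed.
const outputBuffer = new Uint8Array(encodedLen);
for (let i = 0; i < encodedLen; ++i) {
  outputBuffer[i] = encodedData.GetValue(i);
}
encoderModule.destroy(encodedData);
// outputBuffer now holds the compressed data, ready to be saved or sent.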

Please see src/draco/javascript/emscripten/draco_web_encoder.idl for the full API.

Javascript Decoder API

The Javascript decoder is located in javascript/draco_decoder.js. The Javascript decoder can decode meshes and point clouds. In order to use the decoder, you must first create an instance of DracoDecoderModule. The instance is then used to create DecoderBuffer and Decoder objects. Set the encoded data in the DecoderBuffer. Then call GetEncodedGeometryType() to identify the type of geometry, e.g. mesh or point cloud. Then call either DecodeBufferToMesh() or DecodeBufferToPointCloud(), which will return a Mesh object or a PointCloud object. For example:

// Create the Draco decoder.
const decoderModule = DracoDecoderModule();
const buffer = new decoderModule.DecoderBuffer();
buffer.Init(byteArray, byteArray.length);

// Create the decoder and identify the geometry type.
const decoder = new decoderModule.Decoder();
const geometryType = decoder.GetEncodedGeometryType(buffer);

// Decode the encoded geometry.
let outputGeometry;
let status;
if (geometryType == decoderModule.TRIANGULAR_MESH) {
  outputGeometry = new decoderModule.Mesh();
  status = decoder.DecodeBufferToMesh(buffer, outputGeometry);
} else {
  outputGeometry = new decoderModule.PointCloud();
  status = decoder.DecodeBufferToPointCloud(buffer, outputGeometry);
}
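
// (Sketch) Read the decoded position data back into Javascript before the
// objects are destroyed. The calls below (GetAttributeId, GetAttribute,
// DracoFloat32Array, GetAttributeFloatForAllPoints) are taken from the
// decoder IDL; verify them against draco_web_decoder.idl for your version.
if (status.ok()) {
  const posAttId = decoder.GetAttributeId(outputGeometry,
                                          decoderModule.POSITION);
  const posAttribute = decoder.GetAttribute(outputGeometry, posAttId);
  const posArray = new decoderModule.DracoFloat32Array();
  decoder.GetAttributeFloatForAllPoints(outputGeometry, posAttribute, posArray);
  // posArray.GetValue(i) returns the i-th float of the position data;
  // posArray.size() is the number of points times 3.
  decoderModule.destroy(posArray);
}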

// You must explicitly delete objects created from the DracoDecoderModule
// or Decoder.
decoderModule.destroy(outputGeometry);
decoderModule.destroy(decoder);
decoderModule.destroy(buffer);

Please see src/draco/javascript/emscripten/draco_web_decoder.idl for the full API.

Javascript Decoder Performance

The Javascript decoder is built with dynamic memory. This will let the decoder work with all of the compressed data. But this option is not the fastest. Pre-allocating the memory sees about a 2x decoder speed improvement. If you know all of your project's memory requirements, you can turn on static memory by changing CMakeLists.txt accordingly.

Metadata API

Starting from v1.0, Draco provides metadata functionality for encoding data other than geometry. It can be used to encode any custom data along with the geometry, for example the names of attributes, the names of sub-objects, and other custom information. Each mesh or point cloud can have one top-level geometry metadata class. The top-level metadata can in turn contain hierarchical metadata, and it can also hold metadata for each attribute, called attribute metadata. Attribute metadata should be initialized with the corresponding attribute id within the mesh. The metadata API is provided in both C++ and Javascript. For example, to add metadata in C++:

draco::PointCloud pc;
// Add metadata for the geometry.
std::unique_ptr<draco::GeometryMetadata> metadata =
  std::unique_ptr<draco::GeometryMetadata>(new draco::GeometryMetadata());
metadata->AddEntryString("description", "This is an example.");
pc.AddMetadata(std::move(metadata));

// Add metadata for attributes.
draco::GeometryAttribute pos_att;
pos_att.Init(draco::GeometryAttribute::POSITION, nullptr, 3,
             draco::DT_FLOAT32, false, 12, 0);
const uint32_t pos_att_id = pc.AddAttribute(pos_att, false, 0);

std::unique_ptr<draco::AttributeMetadata> pos_metadata =
    std::unique_ptr<draco::AttributeMetadata>(
        new draco::AttributeMetadata(pos_att_id));
pos_metadata->AddEntryString("name", "position");

// Directly add attribute metadata to geometry.
// You can do this without explicitly adding |GeometryMetadata| to the mesh.
pc.AddAttributeMetadata(pos_att_id, std::move(pos_metadata));

To read metadata from a geometry in C++:

// Get metadata for the geometry.
const draco::GeometryMetadata *pc_metadata = pc.GetMetadata();

// Request metadata for a specific attribute.
const draco::AttributeMetadata *requested_pos_metadata =
  pc.GetAttributeMetadataByStringEntry("name", "position");

Please see src/draco/metadata and src/draco/point_cloud for the full API.
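
The metadata can also be read back with the Javascript decoder. The snippet below is only a rough sketch, continuing from the Javascript decoder example above; the method names (GetMetadata(), GetAttributeMetadata(), MetadataQuerier, GetStringEntry()) are assumed from the decoder IDL, so verify them in src/draco/javascript/emscripten/draco_web_decoder.idl for your version.

// Rough sketch of reading metadata in Javascript; continues from the decoder
// example above (decoderModule, decoder, outputGeometry). Method names are
// assumptions; check draco_web_decoder.idl for the exact API in your build.
const metadataQuerier = new decoderModule.MetadataQuerier();

// Geometry-level metadata.
const geometryMetadata = decoder.GetMetadata(outputGeometry);
const description = metadataQuerier.GetStringEntry(geometryMetadata,
                                                   "description");

// Attribute-level metadata, looked up by attribute id (here, id 0).
const attributeMetadata = decoder.GetAttributeMetadata(outputGeometry, 0);
const attributeName = metadataQuerier.GetStringEntry(attributeMetadata, "name");

decoderModule.destroy(metadataQuerier);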

NPM Package

The Draco NPM NodeJS package is located in javascript/npm/draco3d. Please see the doc in that folder for detailed usage.
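
As a quick orientation only (the package doc remains the authoritative reference), a Node script might create the encoder and decoder modules as sketched below; that createEncoderModule() and createDecoderModule() return promises is assumed from recent package behavior.

// Sketch of loading the draco3d npm package from Node. The create functions
// are assumed to return promises, as in recent releases of the package.
const draco3d = require('draco3d');

Promise.all([draco3d.createEncoderModule({}), draco3d.createDecoderModule({})])
  .then(([encoderModule, decoderModule]) => {
    // Both modules are ready; use them as in the API examples above.
    const encoder = new encoderModule.Encoder();
    const decoder = new decoderModule.Decoder();
    // ... encode / decode ...
    encoderModule.destroy(encoder);
    decoderModule.destroy(decoder);
  });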

three.js Renderer Example

Here's an example of geometry compressed with Draco, loaded via a Javascript decoder, and rendered using the three.js renderer.

Please see the javascript/example/README.md file for more information.

Support

For questions/comments please email [email protected]

If you have found an error in this library, please file an issue at https://github.com/google/draco/issues

Patches are encouraged, and may be submitted by forking this project and submitting a pull request through GitHub. See CONTRIBUTING for more detail.

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

References

Bunny model from Stanford's graphics department https://graphics.stanford.edu/data/3Dscanrep/

Comments
  • 1.4 release throws RuntimeError in the browser console

    Hi,

    Seems like the new release broke my 3D viewer that displays Draco-compressed files. Is anyone else having this issue?

    I'm using https://www.gstatic.com/draco/v1/decoders/* to load *.wasm files.

  • Metadata Example: Javascript decoder.

    I have an .obj file with multiple sub-objects and an associated .mtl file with texture information. I want to encode it into the Draco format and then decode it again using the Javascript decoder (preserving the textures on each component). I assume that with the metadata support in version 1.0.0 this is possible. Can someone provide me an example of how to do this? Or are there some pointers for making my own DracoLoader to achieve this?

  • bugfix: attribute NORMAL decoding error in gcc and clang compiler

    bugfix: Attribute NORMAL decoding error with the gcc and clang compilers: the standard C library function int abs(int) can implicitly convert a float to int and return an integer, which is not the behavior expected relative to the MSVC compiler.

  • How to decode generic attributes in Javascript?

    I couldn't find any way to do it. Is there any way to search for the custom_id in the Javascript decoder? I can get the number of vertex attributes but I can't iterate over the attributes.

  • Unity3D bunny.bytes on Resources folder doesn't load

    Hey there, I imported the unity folder into a new project, placed the bunny.drc file into the Resources folder under Assets, and renamed bunny to bunny.bytes.

    And I'm getting the error:

    Didn't load file!
    UnityEngine.Debug:Log(Object)
    DracoMeshLoader:LoadMeshFromAsset(String, List`1&) (at Assets/unity/DracoMeshLoader.cs:119)
    DracoDecodingObject:Start() (at Assets/unity/DracoDecodingObject.cs:35)
    
  • Feature: Support for encoding Quads

    Below is a simple obj file that represents a single cube.

    mtllib twocubes.mtl
    o Cube2_Cube.001
    v 3.052341 -1.477031 -5.058577
    v 3.052341 -1.477031 -3.058577
    v 1.052340 -1.477031 -3.058577
    v 1.052341 -1.477031 -5.058577
    v 3.052341 0.522969 -5.058576
    v 3.052340 0.522969 -3.058576
    v 1.052340 0.522969 -3.058577
    v 1.052341 0.522969 -5.058577
    vn 0.0000 -1.0000 0.0000
    vn 0.0000 1.0000 0.0000
    vn 1.0000 0.0000 0.0000
    vn -0.0000 -0.0000 1.0000
    vn -1.0000 -0.0000 -0.0000
    vn 0.0000 0.0000 -1.0000
    usemtl Material
    s off
    f 1//1 2//1 3//1 4//1
    f 5//2 8//2 7//2 6//2
    f 1//3 5//3 6//3 2//3
    f 2//4 6//4 7//4 3//4
    f 3//5 7//5 8//5 4//5
    f 5//6 1//6 4//6 8//6
    

    I encoded it into a drc file (with default encoding settings) and am decoding it using the Javascript decoder. Some of the numbers I am getting don't seem to make sense.

    numFaces = dracoGeometry.num_faces(); //  this returns 6
    numPoints = dracoGeometry.num_points(); // this returns 18
    

    And the resultant geometry ends up looking like the image attached to the issue.

    It seems impossible to me that encoding would go wrong on a simple model like this. What exactly is the problem then? :-/

  • Comparisons to glTF binary, OpenCTM, etc?

    Nice work. I was on the Brotli guys about the unsuitability of their stuff for 3D meshes a while back: https://github.com/google/brotli/issues/165 This seems like a good solution.

    I am interested in the comparison to OpenCTM (quantization + LZMA), glTF's binary format (https://github.com/KhronosGroup/glTF/wiki/Open-3D-Graphics-Compression) and the old Sun Java3D stuff from 1995 (http://web.cse.ohio-state.edu/~hwshen/Su01_888/deering.pdf -- which is now just off patent I believe :). I notice you are using the techniques from http://www.cc.gatech.edu/~jarek/papers/EdgeBreaker.pdf

  • Possible memory leak in DRACOLoader.js

    I'm using DRACOLoader.js to load draco files (the files hold meshes encoded from .obj files):

    const dracoLoader = new THREE.DRACOLoader('<url-to-gcs-bucket-with warm decoder>');
    dracoLoader.setVerbosity(1);
    dracoLoader.setCrossOrigin('anonymous');
    
    ...
    let mesh = null;
    let scene = null;
    
    // initialize three.js scene
    ...
    
    function loadMesh() {
        dracoLoader.load('<url to drc file>', function(bufferGeometry) {
            const material = new THREE.MeshPhongMaterial({
                        color: 0x996633,
                        specular: 0x050505,
                        shininess: 100
              });
            
              mesh = new THREE.Mesh(bufferGeometry, material);
              scene.add(mesh)
        })
    }
    
    function closeViewer() {
        scene.remove(mesh);
        mesh.geometry.dispose();
    }
    
    

    After loading the files and closing my viewer, I never seem to be able to release the memory used to load them.

    I've tried NOT adding the decoded geometry to my THREE.js scene, just to rule out that the problem is with THREE.js. Memory is still not released. I tried to dispose the geometry and destroy all the draco objects in DRACOLoader.js' convertDracoGeometryTo3JS function:

            convertDracoGeometryTo3JS: function(dracoDecoder, decoder, geometryType,
                                                buffer) {
                if (this.getAttributeOptions('position').skipDequantization === true) {
                   decoder.SkipAttributeTransform(dracoDecoder.POSITION);
                }
                .....
                
                // try to destroy the objects
                dracoDecoder.destroy(posTransform);
                dracoDecoder.destroy(decoder);
                dracoDecoder.destroy(dracoGeometry);
                geometry.dispose();
                return null;
    

    so my thought is that with the above code, the memory should've been released. But from taking a memory snapshot with Chrome, I see that more than 1GB of memory is still allocated (see the attached memory snapshot).

    The draco file is approx. 6MB, with 3295950 verts and 3024128 faces. I am using three.js v0.88.0.

  • v1/decoders started throwing RuntimeError

    Hello, an existing project started breaking yesterday with the following error:

    failed to asynchronously prepare wasm: TypeError: WebAssembly.instantiate(): Import #0 module="env" error: module is not 
    an object or function
    
    RuntimeError: abort(TypeError: import object field 'env' is not an Object). Build with -s ASSERTIONS=1 for more info.
    

    Website: https://www.pluto.app/. No code changes have been made in the last 3 months.

    Experienced in:

    • Chrome version 94.0.4606.81 (Official Build) (x86_64)
    • Firefox version 92.0 (64-bit)
    • macOS Catalina version 10.15.7

    Files affected: https://www.gstatic.com/draco/v1/decoders/draco_wasm_wrapper.js https://www.gstatic.com/draco/v1/decoders/draco_decoder.wasm

  • documentation confusion for Draco javascript decoder build

    Hi,

    I've been trying to load a 50+ million point (200MB+) point cloud with DracoLoader for three.js, but it gives me an error on loading: Aborted(). Build with -s ASSERTIONS=1 for more info.

    So I tried to build the decoder by cloning the emscripten repo. But in your documentation you have this step: export EMSCRIPTEN=../../emscripten/tools/parent. The problem is that I do not have this folder in the emscripten repo (I tried multiple versions of it but couldn't find that folder), and when I run make -s ASSERTIONS=1, I can't see any .js files in the build_dir repo.

    So maybe I'm doing something wrong. Forgive me if that's not the right place to ask.

    edit: added some more info

  • Lossy compression issue?

    Hello Draco team and thank you for this great work.

    I'm noticing that the compression sometimes outputs unexpected results: the attached image shows the result of a conversion with many holes and shifted poly indices.

    Is this an identified problem? Is it always to be expected? Is there a way to avoid it or to predict it? Thanks.

  • Vertex Colors information gets eliminated in compression

    I have PLY files that will be compressed with Draco. The original PLY files have all the vertex colors in them. When I compress those files and display them in the browser via three.js, the colors are not showing up at all. Does Draco remove all the color information during compression?

    I also tried finding the sub-objects so that I could color them myself, but all the objects get merged into one single mesh, which at least could have been used to apply colors and/or map textures.

    How do I retain vertex color info with Draco?

  • PointCloud Color compression

    I have used PointCloudBuilder to encode positions and colors using:

    draco::PointCloudBuilder pcBuilder;
    pcBuilder.Start(pointCount);
    int posId = pcBuilder.AddAttribute(draco::GeometryAttribute::POSITION, 3, draco::DT_FLOAT32);
    pcBuilder.SetAttributeValuesForAllPoints(posId, positions.data(), sizeOfPosition);
    int colId = pcBuilder.AddAttribute(draco::GeometryAttribute::COLOR, 3, draco::DT_UINT8);
    pcBuilder.SetAttributeValuesForAllPoints(colId, colors.data(), sizeOfColor);
    std::unique_ptr<draco::PointCloud> pc = pcBuilder.Finalize(false);
    
    draco::Encoder encoder;
    encoder.SetSpeedOptions(7, 7);
    encoder.SetAttributeQuantization(draco::GeometryAttribute::POSITION, 16);
    encoder.SetAttributeQuantization(draco::GeometryAttribute::COLOR, 6);
    
    draco::EncoderBuffer encBuff;
    encoder.EncodePointCloudToBuffer(*pc, &encBuff);
    

    Unfortunately, it appears that there is no compression happening on colors... Is it something that will be added?! To improve that, I decided to encode colors with RGB565 manually. Thus, I modified only the following line:

    int colId = pcBuilder.AddAttribute(draco::GeometryAttribute::COLOR, 1, draco::DT_UINT16);
    

    Unfortunately, colors are inconsistent after decompression using RGB565 (I tested it without Draco compression and it works). Am I missing something? Is there any issue there?!

    Thank you for the great work, position compression is awesome!

  • `ERR_INVALID_URL` running Javascript decoder in Node v18

    Running

    const draco3d = require("draco3d");
    Promise.resolve(draco3d.createDecoderModule({}));
    

    throws an ERR_INVALID_URL error:

     ~ node /Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/lib/debugg.js
    { decoderModulePromise: Promise { <pending> } }
    (node:81439) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
    node:internal/deps/undici/undici:4816
                throw new TypeError("Failed to parse URL from " + input, { cause: err });
                      ^
    
    TypeError: Failed to parse URL from /Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder.wasm
        at new Request (node:internal/deps/undici/undici:4816:19)
        at Agent2.fetch2 (node:internal/deps/undici/undici:5544:29)
        ... 4 lines matching cause stack trace ...
        at Object.createDecoderModule (/Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder_nodejs.js:39:247)
        at Object.<anonymous> (/Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/lib/debugg.js:4:52)
        at Module._compile (node:internal/modules/cjs/loader:1105:14)
        at Module._extensions..js (node:internal/modules/cjs/loader:1159:10) {
      [cause]: TypeError [ERR_INVALID_URL]: Invalid URL
          at new NodeError (node:internal/errors:377:5)
          at URL.onParseError (node:internal/url:563:9)
          at new URL (node:internal/url:643:5)
          at new Request (node:internal/deps/undici/undici:4814:25)
          at Agent2.fetch2 (node:internal/deps/undici/undici:5544:29)
          at Object.fetch (node:internal/deps/undici/undici:6372:20)
          at fetch (node:internal/bootstrap/pre_execution:199:25)
          at /Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder_nodejs.js:39:1
          at /Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder_nodejs.js:39:224
          at Object.createDecoderModule (/Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder_nodejs.js:39:247) {
        input: '/Users/danvas/.nvm/versions/node/v18.1.0/lib/node_modules/gltf-pipeline/node_modules/draco3d/draco_decoder.wasm',
        code: 'ERR_INVALID_URL'
      }
    }
    
    Node.js v18.1.0
    

    Works fine in Node v17.

  • JavaScript DracoLoader failed to load uv2

    In order to use the lightmap mesh, uv2 is required, but DRACOLoader cannot load uv2 normally. I modified the default loading configuration, but the loaded uv2 is still not correct.

    this.defaultAttributeIDs = {
      position: 'POSITION',
      normal: 'NORMAL',
      color: 'COLOR',
      uv: 'TEX_COORD',
      uv2: 'TEX_COORD1'
    };
    this.defaultAttributeTypes = {
      position: 'Float32Array',
      normal: 'Float32Array',
      color: 'Float32Array',
      uv: 'Float32Array',
      uv2: 'Float32Array'
    };
    

    Cube.drc: https://discourse.threejs.org/uploads/short-url/ApS4qHryAao38Vdui4QYuuFWLdG.drc

  • bug? uint16 data might be encoded to 24bits

    When I disable prediction, the function EncodeValues in SequentialIntegerAttributeEncoder goes into ConvertSignedIntsToSymbols. ConvertSignedIntsToSymbols then turns a uint16 value like 63990 into 127980, so the uint16 attribute ends up being encoded in 24 bits.
