Entwine - point cloud organization for massive datasets

Entwine is a data organization library for massive point clouds, designed to conquer datasets of hundreds of billions of points as well as desktop-scale point clouds. Entwine can index anything that is PDAL-readable, and can read/write to a variety of sources like S3 or Dropbox. Builds are completely lossless, so no points will be discarded even for terabyte-scale datasets.

Check out the client demos, showcasing Entwine output with Potree, Plas.io, and Cesium clients.

Usage

Getting started with Entwine is easy with Docker. First, we can index some public data:

mkdir ~/entwine
docker run -it -v ~/entwine:/entwine connormanning/entwine build \
    -i https://data.entwine.io/red-rocks.laz \
    -o /entwine/red-rocks

Our output is now at ~/entwine/red-rocks. We could also have passed a directory like -i ~/county-data/ to index multiple files. Next, we can statically serve ~/entwine with a simple HTTP server:

docker run -it -v ~/entwine:/var/www -p 8080:8080 connormanning/http-server

And view the data with Potree and Plasio.

To view the data in Cesium, see the EPT Tools project.

Going further

For detailed information about how to configure your builds, check out the configuration documentation. Here, you can find information about reprojecting your data, using configuration files and templates, enabling S3 capabilities, producing Cesium 3D Tiles output, and all sorts of other settings.
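
For example, a minimal configuration file might look like the following. This is an illustrative sketch only: the input, output, and reprojection keys mirror the configurations shown in the comments further down, and the exact set of supported keys depends on your Entwine version.

{
    "input": "~/county-data/",
    "output": "~/entwine/county-data",
    "reprojection": { "in": "EPSG:2950", "out": "EPSG:3857" }
}

Such a file could then be passed to a build with something like entwine build -c config.json (the -c flag is shown with the scan command in the comments below; see the configuration documentation for the exact invocation).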

To learn about the Entwine Point Tile file format produced by Entwine, see the file format documentation.
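
As a rough orientation (a hedged sketch, not the authoritative specification): an EPT dataset is a directory containing an ept.json metadata file alongside ept-data/, ept-hierarchy/, and ept-sources/ subdirectories (the ept-data/ and ept-sources/ paths also appear in the S3 logs in the comments below). A trimmed ept.json might look roughly like this, with values borrowed loosely from the autzen example later in this document:

{
    "version": "1.0.0",
    "points": 10653336,
    "bounds": [635577, 848882, 406, 639004, 853538, 616],
    "boundsConforming": [635577, 848882, 406, 639004, 853538, 616],
    "dataType": "laszip",
    "hierarchyType": "json",
    "span": 256,
    "schema": [
        { "name": "X", "type": "signed", "size": 4, "scale": 0.01 },
        { "name": "Y", "type": "signed", "size": 4, "scale": 0.01 },
        { "name": "Z", "type": "signed", "size": 4, "scale": 0.01 },
        { "name": "Intensity", "type": "unsigned", "size": 2 }
    ]
}

In a real build, bounds is the cubic bounds and differs from boundsConforming, and additional keys such as srs are present; see the file format documentation for the authoritative schema.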

Comments
  • running out of memory during dataset inference

    running out of memory during dataset inference

    I have a bunch of 500 MB to 1 GB laz files in Google Cloud Storage that I'm treating like HTTP sources, but when I run Entwine on more than 3 or 4 of them at once, it just dies at the "Performing dataset inference" step. Drastically increasing the memory allowed to Docker will make it work.

    So a couple related questions:

    • is there a way to pre-run the inference step separately? (see the sketch below)
    • am I misreading the documentation, and should I just be running it on each file one at a time and merging the results into one big pyramid?
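
    A hedged sketch of what that pre-run step might look like, using the scan command that appears in other comments below (the bucket URL and paths here are placeholders, and this has not been verified against large GCS inputs):

    docker run -it -v ~/entwine:/entwine connormanning/entwine scan \
        -i https://storage.googleapis.com/my-bucket/tiles/ \
        -o /entwine/scan-output

    The idea is to separate the inference pass from the build itself; the scan output can then be used for the build, as described in the "Unable to process pointcloud" comment further down.
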
  • Reading .npy causes segfault

    Reading .npy causes segfault

    It appears that an error is thrown while reading a .npy file. I am unsure whether this sits at the PDAL level or the Entwine level, but I have attached all relevant information. Demo .npy file

    I built a Dockerfile to properly install numpy until #114 is fixed, which is available here:

    FROM connormanning/entwine:latest
    # Update the package index before installing (apt-get install alone may fail)
    RUN apt-get update && apt-get install -y \
        python-numpy \
        python-pip
    RUN pip install numpy
    ENTRYPOINT ["entwine"]
    

    Stacktrace:

    #0  0x00007fffdb951eba in PyErr_Occurred () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0
    #1  0x00007fffaccaf5a2 in ?? () from /usr/lib/python2.7/dist-packages/numpy/core/multiarray.x86_64-linux-gnu.so
    #2  0x00007fffacc8870e in ?? () from /usr/lib/python2.7/dist-packages/numpy/core/multiarray.x86_64-linux-gnu.so
    #3  0x00007ffff75568bf in pdal::Streamable::execute(pdal::StreamPointTable&) () from /usr/lib/libpdal_base.so.6
    #4  0x00007ffff7af3955 in entwine::Executor::run<entwine::PooledPointTable> (
        this=0x7ffff7dd3d40 <entwine::Executor::get()::e>, table=..., 
        path="/home/batman/one/xx/projects/map3d/scripts/tmp.qp560DsqXw/lanes/lanes10003.npy", reprojection=0x0, 
        transform=0x0, preserve=std::vector of length 0, capacity 0) at /var/entwine/entwine/util/executor.hpp:215
    #5  0x00007ffff7b2d2e5 in entwine::Executor::preview (this=0x7ffff7dd3d40 <entwine::Executor::get()::e>, 
        path="/home/batman/one/xx/projects/map3d/scripts/tmp.qp560DsqXw/lanes/lanes10003.npy", reprojection=0x0)
        at /var/entwine/entwine/util/executor.cpp:169
    #6  0x00007ffff7b12c99 in entwine::Scan::add (this=0x7fffffffdee0, f=..., 
        localPath="/home/batman/one/xx/projects/map3d/scripts/tmp.qp560DsqXw/lanes/lanes10003.npy")
        at /var/entwine/entwine/builder/scan.cpp:141
    #7  0x00007ffff7b12873 in entwine::Scan::<lambda()>::operator()(void) const (__closure=0x7fffe1731320)
        at /var/entwine/entwine/builder/scan.cpp:133
    #8  0x00007ffff7b15036 in std::_Function_handler<void(), entwine::Scan::add(entwine::FileInfo&)::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...) at /usr/include/c++/7/bits/std_function.h:316
    #9  0x00007ffff7b35216 in std::function<void ()>::operator()() const (this=0x7fffe1731320)
        at /usr/include/c++/7/bits/std_function.h:706
    #10 0x00007ffff7b3413b in entwine::Pool::work (this=0x555555865b40) at /var/entwine/entwine/util/pool.cpp:107
    #11 0x00007ffff7b33b6d in entwine::Pool::<lambda()>::operator()(void) const (__closure=0x5555558216e8)
        at /var/entwine/entwine/util/pool.cpp:43
    #12 0x00007ffff7b34a27 in std::__invoke_impl<void, entwine::Pool::go()::<lambda()> >(std::__invoke_other, entwine::Pool::<lambda()> &&) (__f=...) at /usr/include/c++/7/bits/invoke.h:60
    #13 0x00007ffff7b34832 in std::__invoke<entwine::Pool::go()::<lambda()> >(entwine::Pool::<lambda()> &&) (
        __fn=...) at /usr/include/c++/7/bits/invoke.h:95
    #14 0x00007ffff7b34c94 in std::thread::_Invoker<std::tuple<entwine::Pool::go()::<lambda()> > >::_M_invoke<0>(std::_Index_tuple<0>) (this=0x5555558216e8) at /usr/include/c++/7/thread:234
    #15 0x00007ffff7b34c50 in std::thread::_Invoker<std::tuple<entwine::Pool::go()::<lambda()> > >::operator()(void) (this=0x5555558216e8) at /usr/include/c++/7/thread:243
    #16 0x00007ffff7b34c20 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<entwine::Pool::go()::<lambda()> > > >::_M_run(void) (this=0x5555558216e0) at /usr/include/c++/7/thread:186
    #17 0x00007ffff6be5733 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    #18 0x00007ffff567e6db in start_thread (arg=0x7fffe1732700) at pthread_create.c:463
    #19 0x00007ffff62a188f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
    
    
  • Support of additional point dimensions (e.g. classification) in output Cesium model.

    Support of additional point dimensions (e.g. classification) in output Cesium model.

    Hello again,

    Sorry if this is addressed somewhere, but it's not clear to me if the current version of Entwine supports the preservation of non-color, non-position point dimensions (e.g. classification) when converting to Cesium models.

    I assume this information would live in the batch table of the output Cesium 3D Tiles.

    It's hinted at in the discussion of the following issues & PRs, but a clear answer is not provided, nor could I find it in the documentation:

    • https://github.com/connormanning/entwine/issues/59
    • https://github.com/connormanning/entwine/pull/85
    • https://github.com/connormanning/entwine/issues/81

    Thanks for your help.

  • no driver for s3

    no driver for s3

    Looking at the docs, I should be able to run

    docker run -it  -v `pwd`:`pwd` -w `pwd` --rm connormanning/entwine build -i s3://iowa-lidar/iowa/ -o ./some/directory
    

    but doing so gives the error Encountered an error: No driver for s3://iowa-lidar/iowa/*
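
    For reference, a hedged sketch of supplying S3 credentials to the container via the environment variables that also appear in the Minio comment below (this assumes missing credentials are the cause of the error; placeholder values shown as xxx):

    docker run -it -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=xxx \
        -v `pwd`:`pwd` -w `pwd` --rm connormanning/entwine build \
        -i s3://iowa-lidar/iowa/ -o ./some/directory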

  • Minio S3 support

    Minio S3 support

    How can I store Entwine output on Minio (which claims to be fully S3 compatible)?

    I'm trying to pass the arbiter credentials via environment variables as shown below. As you can see, I needed a little trick: since I can't set the (complete) AWS endpoint URL via the environment, I add the resulting URL as a Docker host alias pointing back to localhost (credentials omitted):

    docker run --add-host="entwine.s3.amazonaws.com:127.0.0.1" -e "CURL_VERBOSE=1" -e "AWS_ACCESS_KEY_ID=xxx" -e "AWS_SECRET_ACCESS_KEY=xxx" --net=host --rm -it connormanning/entwine build -i https://entwine.io/sample-data/red-rocks.laz -o s3://entwine/red-rocks

    I confirmed that the credentials are correct by using mc and s3cmd to upload files to the correct bucket on my local Minio server. The problem is that I'm still getting 403 errors, so I'm wondering if there is a guide on how to properly configure Entwine to work with Minio.

  • Loss of detail in cesium output

    Loss of detail in cesium output

    Hi

    I'm trying to convert a point cloud to Cesium format and seem to be losing points in the output at some step.

    When I convert the input data using Entwine to a normal fileset and view it in Potree, the output looks like this:

    potree

    Command line and output:

    output-potree.log

    But when I make a cesium fileset the output is missing most of its points:

    cesium

    Command line and output:

    output-cesium.log

    cesium-intensity.json is cesium.json with coloring: "intensity". There is no change in the point count with the normal cesium.json

    I have tested the following things but nothing seems to fix this:

    • Changing tree depth settings in entwine
    • Reprojecting the input data with pdal to EPSG:4326 before giving it to entwine
    • Setting absolute: true

    The Entwine output seems to indicate that it has added all the points, but is it somehow guessing the type conversions wrong, so that the points are quantized to the same locations? The values for offset and bounds differ between the outputs.

    Lasinfo output from the input file:

    lasinfo.txt

  • Creating big 3d-tiles

    Creating big 3d-tiles

    Hi,

    Is it possible to continue creating a 3D Tiles index for Cesium if the previous command was interrupted (say, due to a dropped SSH connection)? If the source data is pretty big (hundreds of GB), generating the tiles takes a long time, so the ability to continue a previous job would be very useful.
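
    A hedged observation rather than a confirmed answer: the log in the "Problem to create entwine index" comment below shows "Continuing previous index..." when build is re-run against an existing output, which suggests that re-running the same command with the same -o may resume an interrupted build (not verified for the Cesium workflow; paths here are placeholders):

    docker run -it -v ~/entwine:/entwine connormanning/entwine build \
        -i /entwine/source/ \
        -o /entwine/cesium-tiles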

    thanks,

    Alex.

  • Problem to create entwine index

    Problem to create entwine index

    I am trying to load 15 point cloud tiles (source file: http://dl.mapgears.com/mg-laz.tar) into an Entwine index. Entwine failed to load 3 of those files (278-5048_rgb.laz, 278-5047_rgb.laz and 276-5049_rgb.laz).

    my script looks like this:

    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/275-5047_rgb.laz -o /data/greyhound/RDP_RMI -b "[269000, 5034000, -100,308000, 5066000, 150]"
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/275-5048_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/275-5049_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/276-5047_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/276-5048_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/276-5049_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/277-5047_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/277-5048_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/277-5049_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/278-5047_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/278-5048_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/278-5049_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/279-5047_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/279-5048_rgb.laz -o /data/greyhound/RDP_RMI
    docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/279-5049_rgb.laz -o /data/greyhound/RDP_RMI
    

    On my third try I finally received a real error message about a bad memory allocation. I don't know why, but the first time I retried loading this file, Entwine crashed without any error message!

    Here's my log

    # docker run -it -v /opt/data/:/data connormanning/entwine entwine build -i /data/LAS/278-5048_rgb.laz -o /data/greyhound/RDP_RMI
    
    Continuing previous index...
    
    Input:
            Building from 13 source files
            Trust file headers? yes
            Work threads: 3
            Clip threads: 6
    Output:
            Output path: file:///data/greyhound/RDP_RMI/
            Temporary path: tmp/
            Compressed output? yes
    Tree structure:
            Null depth: 6
            Base depth: 10
            Cold depth: lossless
            Mapped depth: 13
            Sparse depth: 13
            Chunk size: 262144 points
            Dynamic chunks? yes
            Prefix IDs? no
            Build type: hybrid
            Point count hint: 13740813 points
    Geometry:
            Conforming bounds: [(269000.00000, 5034000.00000, -100.00000), (308000.00000, 5066000.00000, 150.00000)]
            Cubic bounds: [(268990.00000, 5030490.00000, -19485.00000), (308010.00000, 5069510.00000, 19535.00000)]
            Reprojection: (none)
            Storing dimensions: [X, Y, Z, Intensity, ReturnNumber, NumberOfReturns, ScanDirectionFlag, EdgeOfFlightLine, Classification, ScanAngleRank, UserData, PointSourceId, GpsTime, Red, Green, Blue, Origin]
    
    Adding 12 - /data/LAS/278-5048_rgb.laz
     A: 1048576 C: 1 H: 38
            Pushes complete - joining...
    Unknown error during /data/LAS/278-5048_rgb.laz
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    Got error 11
    entwine[0x41b00b]
    /lib/x86_64-linux-gnu/libc.so.6(+0x352f0)[0x7f6c650ec2f0]
    /lib/x86_64-linux-gnu/libc.so.6(abort+0x2d6)[0x7f6c650ee036]
    /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x16d)[0x7f6c65a0006d]
    /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5eee6)[0x7f6c659fdee6]
    /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5ef31)[0x7f6c659fdf31]
    /usr/lib/libentwine.so(+0x59beb)[0x7f6c66812beb]
    

    Here's my memory log just before failure:

    [email protected]:/opt/data/MNT# free -m
                 total       used       free     shared    buffers     cached
    Mem:         16491      16343        147          0          0         14
    -/+ buffers/cache:      16328        162
    Swap:         4095       4004         91
    

    If I create a new Entwine index with only this point cloud file, it works well. I hope you will be able to reproduce this problem.

  • Unable to process pointcloud

    Unable to process pointcloud "No type found for undefined dimension"

    I am attempting to re-index point clouds that were previously indexed with (Docker-based) Entwine 1.x. I am using the same source LAS files for the indexing; however, with the newer version of Entwine, each LAS file seems to generate the following:

    (readers.las Error) Global encoding WKT flag not set for point format 6 - 10. Exception in pool task: No type found for undefined dimension. SRS could not be determined

    The scan ends stating that the SRS could not be determined, with very bare output. If I attempt a build using the scan output, then in addition to the warnings for each LAS file, I get an error at the end stating that no points were found.

    I have run the exact same LAS file through 1.3; it still complains about point formats but completes:

    (readers.las Error) Invalid SRS specification. GeoTiff not allowed with point formats 6 - 10. Writing details to /tmp/out.entwine-inference...

    I am invoking with: docker run -it -v `pwd`:/tmp connormanning/entwine:1.3 infer -i /tmp/Pontshill_2_000015.las -o /tmp/out

    and docker run -it -v `pwd`:/tmp connormanning/entwine:latest scan -i /tmp/Pontshill_2_000015.las -o /tmp/out2

    I have tried forcing an SRS via a config file: { "reprojection": { "in": "EPSG:27700", "out": "EPSG:27700", "hammer": true }, "input": "/tmp/Pontshill_2_000015.las", "output": "/tmp/scanout" }

    which gives the No points found! error:

    docker run -it -v `pwd`:/tmp connormanning/entwine:latest scan -c /tmp/scan.config
    Scanning:
            Input: /tmp/Pontshill_2_000015.las
            Threads: 8
            Reprojection: EPSG:27700 (OVERRIDING file headers) -> EPSG:27700
            Trust file headers? yes

    1 / 1: /tmp/Pontshill_2_000015.las
    (readers.las Error) Global encoding WKT flag not set for point format 6 - 10.
    (readers.las Error) Global encoding WKT flag not set for point format 6 - 10.
    Exception in pool task: No type found for undefined dimension.
    Encountered an error: No points found!
    Exiting.

    Is there something I am missing?

  • entwine 1.1.0 build fails - jsoncpp issues

    entwine 1.1.0 build fails - jsoncpp issues

    Entwine 1.1.0 is failing to build on:

    • centos 7
    • gcc 4.8.5 (and 6.2.1)

    with dependencies:

    • PDAL (master) in /usr/local, built from source
    • jsoncpp 1.8.0 in /usr/local, built from source

    First error:

    /local/build/entwine/entwine/third/arbiter/arbiter.cpp:2609:47: error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]
             if (BIO* bio = BIO_new_mem_buf(s.data(), -1))
    

    ...and then, using -fpermissive:

    [ 98%] Linking CXX executable entwine
    /usr/bin/ld: warning: libjsoncpp.so.11, needed by /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../libpdal_base.so, may conflict with libjsoncpp.so.0
    /usr/bin/ld: CMakeFiles/kernel.dir/build.cpp.o: undefined reference to symbol '_ZN4Json5ValueC1Em'
    /usr/local/lib64/libjsoncpp.so.11: error adding symbols: DSO missing from command line
    collect2: error: ld returned 1 exit status
    make[2]: *** [kernel/entwine] Error 1
    make[1]: *** [kernel/CMakeFiles/kernel.dir/all] Error 2
    make[1]: *** Waiting for unfinished jobs....
    

    Not sure what this means. I can't remove the CentOS default jsoncpp; I figured placing a newer version in /usr/local would be the fix, and PDAL is linked against the newer jsoncpp just fine.

    Advice appreciated!
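
    One generic thing to try (a hedged sketch, not a confirmed fix for this particular conflict): point CMake explicitly at the newer jsoncpp install when configuring Entwine, so the copy in /usr/local is found before the system one:

    # From the Entwine build directory; assumes jsoncpp 1.8.0 is installed
    # under /usr/local as described above.
    cmake -DCMAKE_PREFIX_PATH=/usr/local ..
    make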

  • Merge command line

    Merge command line

    Hi

    I am trying the entwine merge command line on my Linux server, but I get an error. Is there an option I should add to the build commands for the merge to succeed?

    # docker run -it -v /opt/data/greyhound/:/data connormanning/entwine entwine build -r EPSG:2950 EPSG:3857 -s 1 4 -i /data/270_5035.las -o /data/270
    # docker run -it -v /opt/data/greyhound/:/data connormanning/entwine entwine build -r EPSG:2950 EPSG:3857 -s 2 4 -i /data/270_5036.las -o /data/270
    # docker run -it -v /opt/data/greyhound/:/data connormanning/entwine entwine build -r EPSG:2950 EPSG:3857 -s 3 4 -i /data/270_5037.las -o /data/270
    # docker run -it -v /opt/data/greyhound/:/data connormanning/entwine entwine build -r EPSG:2950 EPSG:3857 -s 4 4 -i /data/270_5038.las -o /data/270
    # docker run -it -v /opt/data/greyhound/:/data connormanning/entwine entwine merge /data/270
    Waking up base
    Merging /data/270...
        1 / 4 done.
        2 / 4Waking up base
     merging...Encountered an error: Invalid manifest paths
    Exiting.
    

    Thank you

  • Support EPT point cloud as input?

    Support EPT point cloud as input?

    Hi,

    Can EPT point clouds be used as input? The documentation mentions that all PDAL-readable formats should be supported, but looking at the code it seems that files with a json extension take a different parsing path.

    I've tried using a previously generated EPT dataset as input for a new one without success (see below). Is there any way to do this using the current version of Entwine?

    #
    # Generate an EPT point cloud
    #
    $ entwine build -i data/autzen.laz -o autzen-ept
    1/1: data/autzen.laz
    Dimensions: [
            X:int32, Y:int32, Z:int32, Intensity:uint16, ReturnNumber:uint8,
            NumberOfReturns:uint8, ScanDirectionFlag:uint8, EdgeOfFlightLine:uint8,
            Classification:uint8, ScanAngleRank:float32, UserData:uint8,
            PointSourceId:uint16, GpsTime:float64, Red:uint16, Green:uint16, Blue:uint16
    ]
    Points: 10,653,336
    Bounds: [(635577, 848882, 406), (639004, 853538, 616)]
    Scale: 0.01
    SRS: PROJCS["NAD_1983_HARN_Lambert_Conformal_Conic",GEOGCS["NAD83(HARN)",DATUM["NA...
    
    Adding 0 - data/autzen.laz
    Joining
    00:10 - 68% - 7,266,304 - 2,615 (2,615) M/h - 9W - 0R - 163A
    	Done 0
    Saving
    Wrote 10,653,336 points.
    
    #
    # Generate another EPT point cloud using the first as input fails...
    #
    $ entwine build -i autzen-ept/ept.json -o autzen-ept-2
    1/1: autzen-ept/ept.json
    Encountered an error: [json.exception.out_of_range.403] key 'bounds' not found
    Exiting.
    
    # Info operation also doesn't work...
    $ entwine info -i autzen-ept/ept.json
    Analyzing:
    	Input: autzen-ept/ept.json
    	Reprojection: none
    	Type: shallow
    	Threads: 8
    
    1/1: autzen-ept/ept.json
    	Done.
    
    Errors:
    	- autzen-ept/ept.json: Failed to fetch info: [json.exception.out_of_range.403] key 'path' not found
    Encountered an error: No points found!
    Exiting.
    

    Thank you!
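
    One possible workaround (a hedged sketch; it assumes a PDAL build with readers.ept available and is not a documented Entwine feature): read the EPT back with PDAL, write it out as LAZ, and feed that file to a new build. A hypothetical pipeline.json:

    {
        "pipeline": [
            { "type": "readers.ept", "filename": "autzen-ept/ept.json" },
            { "type": "writers.las", "filename": "autzen-roundtrip.laz", "compression": "laszip" }
        ]
    }

    Run it with pdal pipeline pipeline.json, then entwine build -i autzen-roundtrip.laz -o autzen-ept-2.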

  • Docker images are not versioned

    Docker images are not versioned

    Docker images with version tags are not actually built against that release.

    Line 4 in version 2.2.0's Dockerfile refers to the master branch instead of the version 2.2.0 release: https://github.com/connormanning/entwine/blob/49ad52f985536cb8987d079402377cac50360cf3/scripts/docker/Dockerfile#L4

    This can also be seen in the build commands on DockerHub: https://hub.docker.com/layers/connormanning/entwine/2.2.0/images/sha256-669bd97560c92e6f7ff13b1b575831220639fc6c2638f6c8c1be66454776e49d?context=explore

    I have fixed this in my fork: https://github.com/dhardestylewis/entwine/blob/e23a8c1056958e68eff2477c09cf2e3a3a51caa0/scripts/docker/Dockerfile

    and my DockerHub image of version 2.2.0: https://hub.docker.com/layers/dhardestylewis/entwine/2.2.0/images/sha256-93d84fc5266e5c7269cb30c4eb3dc95d59265845ad5c418944fcd115a7f345d8?context=explore
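
    For illustration only (a hypothetical sketch; the actual Dockerfiles are at the links above): the essence of the fix is to build from the tagged release rather than from master, along the lines of:

    # Hypothetical Dockerfile fragment: clone the tagged release instead of master.
    ARG ENTWINE_VERSION=2.2.0
    RUN git clone --branch ${ENTWINE_VERSION} --depth 1 \
        https://github.com/connormanning/entwine.git /entwine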

  • Errors and Failure with S3 storage

    Errors and Failure with S3 storage

    Hello,

    I have S3 storage as the output of my EPT config file. The input is about 2920 LAZ files, which are added 100 at a time (thanks to the limit parameter), so I run the process 30 times and the log is saved in 30 different log files.

    I did all of that twice: once with classic S3 storage (processing n1), and once with a performant one (processing n2).

    The EPT is created and looks fine; however, I have some issues in the logs, reported as "failures" or "errors":

    Examples of failures (during process n2):

    logs/log_pq_16.log-01:04:30 - 50% - 39,524,412,339 - 36,766 (0) M/h - 0W - 0R - 0A
    logs/log_pq_16.log:Failure #1: Failed to put Semis_2021_0884_6265_LA93_IGN69.json
    logs/log_pq_16.log:Failure #1: Failed to put Semis_2021_0884_6268_LA93_IGN69.json
    logs/log_pq_16.log-01:04:40 - 50% - 39,524,412,339 - 36,672 (0) M/h - 0W - 0R - 0A
    
    logs/log_pq_17.log-Adding 1622 - ready/Semis_2021_0916_6275_LA93_IGN69.laz
    logs/log_pq_17.log:Failure #1: Failed to put 15-26672-8714-16342.laz
    logs/log_pq_17.log-13:10 - 50% - 40,076,353,288 - 182,626 (3,148) M/h - 290W - 165R - 1420A
    

    Example of errors (before process n2):

    logs/log_pq_30.log-SRS: EPSG:2154
    logs/log_pq_30.log:Errors:
    logs/log_pq_30.log-     - ready/Semis_2021_0893_6252_LA93_IGN69.laz: Could not read from pocfluxhd/FXX/ept-data/15-26191-8384-16338.laz
    logs/log_pq_30.log-     - ready/Semis_2021_0896_6267_LA93_IGN69.laz: Could not read from pocfluxhd/FXX/ept-data/14-13129-4349-8170.laz
    logs/log_pq_30.log-     - ready/Semis_2021_0913_6274_LA93_IGN69.laz: Could not read from pocfluxhd/FXX/ept-data/15-26610-8839-16347.laz
    logs/log_pq_30.log-     - ready/Semis_2021_0917_6251_LA93_IGN69.laz: Could not read from pocfluxhd/FXX/ept-data/15-26694-8353-16354.laz
    logs/log_pq_30.log-
    logs/log_pq_30.log-Adding 2900 - ready/Semis_2021_0937_6272_LA93_IGN69.laz
    

    What is important is that if a file appears in this list in log file n, it will always appear in all the following ones.

    Finally, all of the unreadable files and the files that failed to be put do exist on the S3 storage (n2):

    2022/08/30 22:43:24             11.7K FXX/ept-sources/Semis_2021_0884_6268_LA93_IGN69.json
    2022/08/30 22:43:24             11.6K FXX/ept-sources/Semis_2021_0884_6265_LA93_IGN69.json
    2022/08/30 07:15:23            325.4K FXX/ept-data/15-26672-8714-16342.laz
    
    2022/08/29 19:55:44            461.5K FXX/ept-data/14-13129-4349-8170.laz
    2022/08/29 18:01:48               557 FXX/ept-data/15-26191-8384-16338.laz
    2022/08/30 05:30:13               557 FXX/ept-data/15-26610-8839-16347.laz
    2022/08/30 07:39:26               557 FXX/ept-data/15-26694-8353-16354.laz
    

    (61,573 points for the first one, 0 for the others...)

    I have far fewer issues with the performant one (failures: 38 vs. 129; errors: 4 vs. 54; p2 vs. p1 respectively).

    The files with errors and the files with failures are not the same.

    I think the "failure" issue is not a big deal, as the file is transferred (and the files are the same in both processings). Maybe Entwine retries several times and this is just a kind of warning?

    However, the "errors" issue is more problematic: the files are not the same in p1 and p2 (the files with issues are smaller or empty). Is there a way to complete the invalid files?

    Is there any possibility of validating the generated EPT data?
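
    A hedged idea for a rough validation (it assumes a PDAL build with readers.ept and is not an Entwine feature; the path is a placeholder): read the finished EPT back through PDAL and compare the reported point count against the "points" value in ept.json and the sum of the inputs.

    # Summarize the EPT by reading it back through PDAL; the point count
    # should match the "points" entry in ept.json.
    pdal info --summary /path/to/FXX/ept.json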

  • discussion: nlohmann's json library is heavily outdated

    discussion: nlohmann's json library is heavily outdated

    Entwine still uses a 3.6.1 copy (dating from about 3 years ago) of the original author's library in https://github.com/connormanning/entwine/blob/master/entwine/third/mjson/json.hpp, while the current upstream version at https://github.com/nlohmann/json#cmake has reached 3.11 or so.

    With the evolution of C++ in recent years, this 3.6.1 version seems to have become deprecated or unsupported, depending on the compiler/build environment, and some compilers no longer accept that version's code.

    Please consider today's https://github.com/nlohmann/json#cmake for Entwine. I hope this also helps make the current builds and merge requests more stable.

  • fix #284: make gtest an external dependency AND upgrade gtest 1.10.0 -> 1.12.1

    fix #284: make gtest an external dependency AND upgrade gtest 1.10.0 -> 1.12.1

    Regarding issue #284 (and maybe also older issues like #233):

    Please consider this pull request a suggestion and try for yourself whether it works. Caution:

    • this deletes a whole folder from the project (I believe this was copy-paste content)
    • this also upgrades from GoogleTest 1.10.0 to 1.12.x
    • and some test is failing (for me) and may need additional fixing (which I cannot provide, since I am not an Entwine developer, only a user)