Axis video analytics example applications

Mission

Our mission is to provide an excellent development experience by enabling developers to build new AI/ML applications for a smarter and safer world.

Video analytics ensures that video surveillance systems become smarter, more accurate, more cost-effective and easier to manage. The most scalable and flexible video analytics architecture is based on ‘intelligence at the edge’, that is, processing as much of the video as possible in the network cameras or video encoders themselves.

This not only uses the least amount of bandwidth but also significantly reduces the cost and complexity of the network. Open application development platforms such as Axis Camera Application Platform (ACAP) facilitate the integration of compatible third-party solutions, resulting in a quickly growing variety of applications – general as well as specialized for different industries. The growing number of video analytics applications creates new end-user benefits and opens new business possibilities.

Getting started

This repository contains a set of application examples which aim to enrich the developer's analytics experience. All examples use the Docker framework, and each has a README file in its directory with an overview, the example directory structure and step-by-step instructions on how to run the application on the camera.

Requirements

To ensure compatibility with the examples, the following requirements must be met (a quick verification sketch follows the list):

  • Camera: ARTPEC-7 DLPU devices (e.g., Q1615 MkIII)
  • docker-compose version 1.29 or higher
  • Docker version 20.10.8 or higher
  • Firmware: 10.7
  • docker-acap installed on the camera
  • docker-acap set to use external memory card
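
A minimal host-side check, assuming a Linux shell on the development machine; the expected versions mirror the requirements above:

    # Check the host tooling against the versions listed above
    docker --version            # should report 20.10.8 or higher
    docker-compose --version    # should report 1.29 or higher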

Supported architectures

The examples support the following architectures.

  • armv7hf

Example applications for video analytics

Below is a list of examples available in the repository; a rough sketch of the build-and-run workflow they share follows the list:

  • object-detector-python
    • A Python example which implements object detection on a video stream from the camera.
  • object-detector-cpp
    • A C++ example which runs object detection on the camera.
  • opencv-image-capture-cpp
    • A C++ example which captures camera frames and properties such as time stamps, zoom, focus etc., through OpenCV.
  • opencv-qr-decoder-python
    • A Python example which detects and decodes QR codes in the video stream using OpenCV.
  • web-server
    • A C++ example which runs a Monkey web server on the camera.
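
Each example's README documents the exact build arguments and environment file to use. The following is only a sketch of the shared workflow, assuming the placeholder variables DEVICE_IP, DOCKER_PORT, ARCH and CHIP are set for your camera and example, and using a hypothetical image tag acap-example:

    # Build the application image on the development machine
    docker build . -t acap-example --build-arg ARCH=armv7hf

    # Start the containers on the camera by pointing docker-compose at its Docker daemon
    docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT \
        --env-file ./config/env.$ARCH.$CHIP up

The --env-file argument selects the model path and inference-server settings for the chosen architecture and chip, as the config/env.* snippets quoted in the comments further down illustrate.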

Docker Hub images

The examples are based on the ACAP Computer Vision SDK. This SDK is an image which contains APIs and tooling to build computer vision applications that run on the camera, with support for C/C++ and Python. Additionally, there is the ACAP Native SDK, which is geared more towards building ACAPs that use AXIS-developed APIs directly, primarily in C/C++.
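
Both SDK image variants are pulled like any other Docker image. A minimal sketch, assuming the axisecp/acap-computer-vision-sdk repository and the 1.4-armv7hf tag used in one of the Dockerfiles quoted further down (verify the current tags on Docker Hub):

    # Runtime variant of the ACAP Computer Vision SDK (tag taken from a Dockerfile below; confirm on Docker Hub)
    docker pull axisecp/acap-computer-vision-sdk:1.4-armv7hf
    # Devel variant with compilers and headers, used only at build time
    docker pull axisecp/acap-computer-vision-sdk:1.4-armv7hf-devel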

The examples also use the following images:

How to work with the GitHub repository

You can help make this repository better by using the following workflow:

  1. Fork the repository
  2. Create your feature branch: git checkout -b <branch-name>
  3. Commit your changes: git commit -a
  4. Push to the branch: git push origin <branch-name>
  5. Submit a pull request

License

Apache 2.0

Comments
  • Models does not detect objects CHIP_ID=12  SDK=1.2.1

    Describe the bug

    I have a detection model built using TF models (TF 1.15.5, to have per-tensor quantization). The model worked fine on 10.8.x firmware with SDK=1.2. After upgrading to 10.11.x and SDK=1.2.1 I see the following: the model works as expected on the CPU (CHIP_ID=2) but does not detect anything (low scores, random boxes) on ARTPEC-8 (CHIP_ID=12), whereas the object-detection-python example works fine in my environment on both the CPU and ARTPEC-8. When I simply change CHIP_ID=12 to CHIP_ID=2, my application starts working. There are no crashes or errors in the logs with CHIP_ID=12; everything looks the same as with CHIP_ID=2.

    Environment

    • Axis device model: Q3538-LVE
    • Axis device firmware version: 10.11.x
    • OS and version: Ubuntu 20.04 LTS
    • Version: SDK=1.2.1, axisecp/acap-runtime:aarch64-containerized

    Additional context

    Could you advise what I should check to find the cause of the issue? Is there any test I can run to get more information?

  • Changing default detection model to run on Axis camera (Urgent)

    Hi everyone,

    I was developing a computer vision pipeline on an Axis camera, model Q1656-LE Box. I installed the Axis ACAP and the Axis Computer Vision SDK using Docker, and everything is functional when I use the default detection model, which is SSD MobileNet V2 (screenshot below).

    This detection model is configured in the following files, as far as I have learned.

    • app/object-detector-python/Dockerfile.model
    `ARG ARCH=aarch64
    
    FROM arm64v8/alpine as model-image-aarch64
    
    FROM model-image-${ARCH}
    
    # Get SSD Mobilenet V2
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite models/
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess.tflite models/
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/coco_labels.txt models/
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/COPYRIGHT models/
    
    CMD /bin/ls /models`
    
    • config/env.aarch64.artpec8
    `MODEL_PATH=/models/ssd_mobilenet_v2_coco_quant_postprocess.tflite
    INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:0.6-aarch64
    INFERENCE_SERVER_COMMAND=/usr/bin/acap_runtime -p 8501 -j 12 `
    

    What I need to do is replace this default detection model with another open-source model, like EfficientDet Lite0 (from the Coral AI model zoo here) or any other newly created model. I tried to change the two files above to accommodate these changes, but the Axis device keeps loading the default SSD MobileNet V2 model and returns an error.

    • app/object-detector-python/Dockerfile.model (Modified)
    `ARG ARCH=aarch64
    
    FROM arm64v8/alpine as model-image-aarch64
    
    FROM model-image-${ARCH}
    
    # Get EfficientDet 0 Model
    ADD https://raw.githubusercontent.com/google-coral/test_data/master/efficientdet_lite0_320_ptq_edgetpu.tflite models/
    ADD https://raw.githubusercontent.com/google-coral/test_data/master/efficientdet_lite0_320_ptq.tflite models/
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/coco_labels.txt models/
    ADD https://github.com/google-coral/edgetpu/raw/master/test_data/COPYRIGHT models/
    
    CMD /bin/ls /models`
    
    • config/env.aarch64.artpec8
    `MODEL_PATH=/models/efficientdet_lite0_320_ptq_edgetpu.tflite
    INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:0.6-aarch64
    INFERENCE_SERVER_COMMAND=/usr/bin/acap_runtime -p 8501 -j 12 `
    

    Then, when I rebuild the object detector according to the instructions and run it, I receive the following error (screenshot below).

    Thank you for your help in advance.

  • Unable to run custom model in inference server

    Hi!

    I recently tried to run custom models with the inference client, but one gave me the following error:

    larod: Session 55: Could not run job: Could only read 196608 out of 786432 bytes from file descriptor

    I used the Python API, with the minimal-inference-server as a reference code base. Inference on the model through TensorFlow Lite on my PC works fine. Any tips on how to solve this?

  • How to load custom model in object-detector-python

    Hello,

    I'm not able to load a custom model in a modified example of object-detector-python. From the Dockerfile.model I see that the current model is compiled to run on Google Coral, so I transformed my model accordingly (based on yolov4-tiny). My custom model is fully quantized to TFLite int8 and then post-processed with edgetpu_compiler to be compatible with Google Coral, with success. I also tried, without success, to load the quantized int8 model without compiling it for Google Coral. But after loading the Docker image onto the camera (P3265), it shows the following error:

    ERROR in Inference: Failed to load model model.tflite (Could not load model: Could not build an interpreter of the model)
    <_InactiveRpcError of RPC that terminated with:
    object-detector-python-object-detector-python-1 | status = StatusCode.CANCELLED
    object-detector-python-object-detector-python-1 | details = ""
    object-detector-python-object-detector-python-1 | debug_error_string = "{"created":"@1657181878.926597840","description":"Error received from peer ipv4:172.29.0.2:8501","file":"src/core/lib/surface/call.cc","file_line":1063,"grpc_message":"","grpc_status":1}"

    I don't know if I'm missing a step or if the model needs to be converted in a different way.

    Environment

    • Axis device model: P3265-LVE
    • Axis device firmware version: 10.10.73
    • SDK VERSION: 1.2
    • docker daemon: 1.2.3

    EDIT: I upgraded the camera firmware to 10.11.76 (and reinstalled docker daemon 1.2.3). Now the error is different:

    /usr/bin/acap_runtime: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /host/lib/liblarod.so.1)

    Thank you very much

  • Start docker containers on camera bootup

    Hi!

    I wonder if there is an easy way to start the Docker containers automatically on the camera when it boots up, so that no remote machine would be needed to start them manually. I'm looking for a feature similar to the one in the Native API, but would like to use the C++/Python APIs instead.

    Could https://github.com/AxisCommunications/acap-native-sdk-examples/tree/master/container-example be utilized with computer vision sdk?

  • Fix Dockerfile syntax to resolve bug when build

    Describe your changes

    I changed the Dockerfile code to use valid Docker instruction syntax.

    Issue ticket number and link

    1. RUN instruction before any bash commands
    2. change EOF to RUN <<EOF
    • Ref #(docker instructions) https://docs.docker.com/engine/reference/builder/

    Checklist before requesting a review

    • [x] I have performed a self-review of my own code
    • [x] I have verified that the code builds perfectly fine on my local system
    • [x] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [x] I have verified that my code follows the style already available in the repository
    • [ ] I have made corresponding changes to the documentation
  • web-server example is not working

    Describe the bug

    web-server example not working

    To reproduce

    Follow the steps on Readme

    Environment

    • Axis device model: [e.g. Q1615 Mk III]
    • Axis device firmware version: [10.11.65]
    • Stack trace or logs: [e.g. Axis device system logs]

      Monkey HTTP Server v1.5.6
      Built : May 12 2022 12:11:47 (gcc 9.4.0)
      Home : http://monkey-project.com
      [+] Process ID is 7
      [+] Server socket listening on Port 2001
      [+] 2 threads, 134217724 client connections per thread, total 268435448
      [+] Transport layer by liana in http mode
      [+] Linux Features: TCP_FASTOPEN SO_REUSEPORT
      [2022/05/13 11:56:47] [ Error] Segmentation fault (11), code=1, addr=0x4
      Aborted (core dumped)
    • OS and version: [ Ubuntu 20.04 LTS]
  • Error with docker-compose

    Running the command:

      docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT --env-file ./config/env.$ARCH.$CHIP up

    gives me the following error:

    [+] Running 0/1

    • inference-server Error 15.8s Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

    Camera: AXIS Q1615 MK III

  • object-detector-python fails on tag v1.2

    Describe the bug

    object-detector-python (tag v1.2) example fails with error:

    acap_dl-models_1          | COPYRIGHT
    acap_dl-models_1          | coco_labels.txt
    acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess.tflite
    acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
    object-detector-python_acap_dl-models_1 exited with code 0
    inference-server_1        | /usr/bin/acap_runtime: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /host/lib/liblarod.so.1)
    object-detector-python_inference-server_1 exited with code 1
    object-detector-python_1  | object-detector-python connect to: inference-server:8501
    object-detector-python_1  | <_InactiveRpcError of RPC that terminated with:
    object-detector-python_1  | 	status = StatusCode.UNAVAILABLE
    object-detector-python_1  | 	details = "DNS resolution failed for service: inference-server:8501"
    object-detector-python_1  | 	debug_error_string = "{"created":"@1660143486.605863880","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":1324,"referenced_errors":[{"created":"@1660143486.605859080","description":"DNS resolution failed for service: inference-server:8501","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":359,"grpc_status":14,"referenced_errors":[{"created":"@1660143486.605787880","description":"C-ares status is not ARES_SUCCESS qtype=A name=inference-server is_balancer=0: Domain name not found","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":698}]}]}"
    

    To reproduce

    Install and run object-detector-python example according to the steps.

    Environment

    • Axis device model: Q3538-LVE, P3265-LV
    • Axis device firmware version: 10.11.76, 10.11.65, 10.12.73
    • OS and version: Ubuntu 20.04 LTS

    Additional context

    Looks like axisecp/acap-runtime:0.6-aarch64 is outdated.

  • Communication between webpage and c++ App

    I was using net_http.h in SDK 3 to communicate from a web page to C++ via a .cgi file, and I found out it is no longer supported in SDK 4. I have also seen that there is a new way of communicating via the Monkey server, which would require a major change to my application architecture. Is there a way to keep using the same .cgi approach in SDK 4?

  • How can I interpret this python exercise, please help.

    Entry – Sampling – Cycles – Decisions – Strings

    1. Quantity: 5 times
    2. The data to work with are: file, name and surname, age, sex code, category code, category, salary, retirement.
       a) File: 1, 2, 3, ... (i+1 is automatically generated)
       b) Surname: all capital letters.
       c) Name: first letter capitalized, rest lowercase.
       d) Age: values between 21 and 60. Validate.
       e) Sex code: validate F/M.
       f) Job category code: validate A/B/C/D.
       g) Enter the number of overtime hours at 50%. Enter the number of overtime hours at 100%. VALIDATE income, values between 0 and 15.
       h) If the sex code is F, the sex is Female, otherwise it is Male.
       i) If the job category code is A, the category is SALESMAN and the salary is 72000; if B, the category is CASHIER and the salary is 75000; if C, the category is ADMINISTRATIVE and the salary is 82000; otherwise the category is MAESTRANZA and the salary is 52000.
       j) Show the surname and first name of the oldest person.
       k) Show the surname and name of the person with the least overtime at 50%.
       l) Count the number of women and the number of men.
       m) Number of people in each category.
       n) Accumulate salaries by category.
       o) Calculate retirement: 11% of salary. Social work: 3% of salary.
       p) Calculate the value to be charged according to overtime.
       q) Show the data and results.
  • Build external library and integrate with ACAP computer vision

    Issue description

    I'm trying to build external libraries and integrate them with the object-detector-cpp example in the ACAP Computer Vision SDK, but I get an error (screenshot integration_1_error attached).

    External Libraries

    • oatpp
    • libconfig

    Dockerfile

    
    # syntax=docker/dockerfile:1
    
    ARG ARCH=armv7hf
    ARG REPO=axisecp
    ARG SDK_VERSION=1.4
    ARG UBUNTU_VERSION=22.04
    
    FROM arm32v7/ubuntu:${UBUNTU_VERSION} as runtime-image-armv7hf
    FROM arm64v8/ubuntu:${UBUNTU_VERSION} as runtime-image-aarch64
    
    FROM ${REPO}/acap-computer-vision-sdk:${SDK_VERSION}-${ARCH} as cv-sdk-runtime
    FROM ${REPO}/acap-computer-vision-sdk:${SDK_VERSION}-${ARCH}-devel as cv-sdk-devel
    
    # Setup proxy configuration
    ARG HTTP_PROXY
    ENV http_proxy=${HTTP_PROXY}
    ENV https_proxy=${HTTP_PROXY}
    
    ENV DEBIAN_FRONTEND=noninteractive
    RUN apt-get update && apt-get install -y -f libglib2.0-dev libsystemd0 && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    
    RUN mkdir -p /tmp/devel /tmp/runtime /build-root /target-root
    
    # Download the target libs/headers required for compilation
    RUN apt-get update && apt-get install --reinstall --download-only -o=dir::cache=/tmp/devel -y -f libglib2.0-dev:$UBUNTU_ARCH \
        libsystemd0:$UBUNTU_ARCH \
        libgrpc++-dev:$UBUNTU_ARCH \
        libprotobuf-dev:$UBUNTU_ARCH \
        libc-ares-dev:$UBUNTU_ARCH \
        libgoogle-perftools-dev:$UBUNTU_ARCH \
        libssl-dev:$UBUNTU_ARCH \
        libcrypto++-dev:$UBUNTU_ARCH \
        libgcrypt20:$UBUNTU_ARCH
    
    RUN for f in /tmp/devel/archives/*.deb; do dpkg -x $f /build-root; done
    RUN cp -r /build-root/lib/* /build-root/usr/lib/ && rm -rf /build-root/lib
    
    # Separate the target libs required during runtime
    RUN apt-get update && \ 
        apt-get install --reinstall --download-only -o=dir::cache=/tmp/runtime -y -f libgrpc++:$UBUNTU_ARCH \
        '^libprotobuf([0-9]{1,2})$':${UBUNTU_ARCH} \
        libc-ares2:$UBUNTU_ARCH
    
    RUN for f in /tmp/runtime/archives/*.deb; do dpkg -x $f /target-root; done
    RUN cp -r /target-root/lib/* /target-root/usr/lib/ && rm -rf /target-root/lib
    
    WORKDIR /build-root
    RUN git clone https://github.com/oatpp/oatpp.git && \
        cd oatpp && \
        mkdir build && cd build && \
        cmake -D CMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ \
        -D OATPP_DISABLE_ENV_OBJECT_COUNTERS=ON \
        -D OATPP_BUILD_TESTS=OFF .. && \
        make install
    
    ARG BUILDDIR=/build-root/oatpp/build
    RUN cp -r ${BUILDDIR}/src/liboatpp* /target-root/usr/lib
    
    WORKDIR /build-root
    RUN git clone https://github.com/hyperrealm/libconfig.git
    WORKDIR /build-root/libconfig
    RUN autoreconf -i && ./configure &&\
        gCFLAGS=' -O2 -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 -fomit-frame-pointer' \
        CC=arm-linux-gnueabihf-gcc cmake -G"Unix Makefiles" -D CMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ . && \
        make -j;
    
    ARG BUILDDIR=/build-root/libconfig/out
    RUN cp -r ${BUILDDIR}/libconfig.so* /target-root/usr/lib &&\
        cp -r ${BUILDDIR}/libconfig++.so* /target-root/usr/lib
    
    COPY app/Makefile /build/
    COPY app/src/ /build/src/
    WORKDIR /build
    RUN make
    
    FROM runtime-image-${ARCH}
    # Copy the libraries needed for our runtime
    COPY --from=cv-sdk-devel /target-root /
    
    # Copy the compiled object detection application
    COPY --from=cv-sdk-devel /build/objdetector /usr/bin/
    
    # Copy the precompiled opencv libs
    COPY --from=cv-sdk-runtime /axis/opencv /
    
    # Copy the precompiled openblas libs
    COPY --from=cv-sdk-runtime /axis/openblas /
    
    CMD ["/usr/bin/objdetector"]
    
    
    

    Makefile

    
    # Application Name
    TARGET := objdetector
    
    # Function to recursively find files in directory tree
    rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
    
    # Find all .o files compiled from protobuf files
    PROTO_O := $(call rwildcard, /axis/tfproto, *.o)
    
    # Determine the base path
    BASE := $(abspath $(patsubst %/,%,$(dir $(firstword $(MAKEFILE_LIST)))))
    
    # Find cpp files
    OBJECTS := $(patsubst %.cpp, %.o, $(wildcard $(BASE)/src/*.cpp))
    OTHEROBJS = $(BASE)/src/*/*.cpp
    
    CXX = $(TARGET_TOOLCHAIN)-g++
    CXXFLAGS += -I/usr/include -I/usr/include/grpcpp/security -I/axis/tfproto -I/axis/openblas/usr/include -I/axis/opencv/usr/include -I/build-root/usr/include
    CXXFLAGS += -I/build-root/oatpp/src
    CXXFLAGS += -I/build-root/libconfig/lib
    
    CPPFLAGS = --sysroot=/build-root $(ARCH_CFLAGS) -Os -pipe -std=c++17
    STRIP=$(TARGET_TOOLCHAIN)-strip
    
    SHLIB_DIR = /target-root/usr/lib
    LDFLAGS = -L$(SHLIB_DIR) -Wl,--no-as-needed,-rpath,'$$ORIGIN/lib'
    LDLIBS += -L $(BASE)/lib \
     -L /axis/opencv/usr/lib \
     -L /axis/openblas/usr/lib
    LDLIBS += -lm -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_videoio -lopenblas -lgfortran
    LDLIBS += -lprotobuf -lz -lgrpc -lgpr -lgrpc++ -lssl -lcrypto -lcares -lprofiler -lrt
    LDLIBS += -lvdostream -lfido -lcapaxis -lstatuscache 
    SHLIBS += -loatpp -lconfig++
    
    .PHONY: all clean
    
    all: $(TARGET)
    
    $(TARGET): $(OBJECTS)
    	$(CXX) $< $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) $(LDLIBS) $(SHLIBS) $(OTHEROBJS) $(PROTO_O) -o $@ && $(STRIP) --strip-unneeded $@
    
    clean:
    	$(RM) *.o $(TARGET)
    
    
  • Containers crashing after extended period of time due to RPC error

    Describe the bug

    I built the object-detector-python example and pushed it to a P3255-LVE dome camera. I have noticed that the containers crash after a few hours and I have to restart them using the docker-compose up command. The issue looks to be a problem with sending a frame to the inference server and the request timing out.

    I expect it to keep running until the camera is shut down or the docker-compose down command is issued.

    To reproduce

    Build the container:

      docker build . -t obj-app --build-arg axisecp --build-arg armv7hf --build-arg arm32v7/ubuntu:20.04

    Save the container and load it on the camera:

      docker save obj-app -o opencv.tar
      docker -H tcp://192.168.0.2:2375 load -i opencv.tar

    Run the containers:

      docker-compose -H tcp://192.168.0.2:2375 --env-file ./config/env.armv7hf.tpu up

    Wait a day or two and the containers will crash and exit.

    I looked at the logs of the obj-app container and it showed an RPC error, which I pasted in the Stack trace or logs section.

    Environment

    • Axis device model: P3255-LVE Dome camera
    • Axis device firmware version: 10.10.69
    • Stack trace or logs: exception.txt
    • OS and version: Windows 10
    • Version: axisecp/acap-runtime:0.6-armv7hf

  • Build fails to install extra libraries with pip

    I'm using the object-detector-python project as a base. I've made changes to detector.py that use some additional Python packages, shapely and pandas. I've edited the Dockerfile to include this:

    `ARG ARCH=armv7hf
    ARG SDK_VERSION=1.2
    ARG REPO=axisecp
    ARG RUNTIME_IMAGE=arm32v7/ubuntu:20.04

    FROM $REPO/acap-computer-vision-sdk:$SDK_VERSION-$ARCH AS cv-sdk
    FROM ${RUNTIME_IMAGE}
    COPY --from=cv-sdk /axis/python /
    COPY --from=cv-sdk /axis/python-numpy /
    COPY --from=cv-sdk /axis/python-tfserving /
    COPY --from=cv-sdk /axis/opencv /
    COPY --from=cv-sdk /axis/openblas /

    WORKDIR /app
    RUN pip install requests
    RUN pip install jsonpickle
    RUN pip install shapely
    RUN pip install pandas
    COPY app/* /app/
    CMD ["python3", "detector.py"]`

    When I build the container using this command:

      docker build . -t container --build-arg axisecp --build-arg armv7hf --build-arg arm32v7/ubuntu:20.04

    the container fails to build; the output is attached.

    The container builds successfully with the other two packages I added, requests and jsonpickle. It looks like it fails to install numpy, but there is already a version of numpy copied from /axis/python-numpy. Is there any way to successfully build the container with these packages?

    Thanks! (output.txt attached)
