An EDA toolchain for integrated core-memory interval thermal simulations of 2D, 2.5D, and 3D multi-/many-core processors

CoMeT: An Integrated Interval Thermal Simulation Toolchain for 2D, 2.5D, and 3D Processor-Memory Systems

With the growing power density in both cores and memories (especially 3D), thermal issues significantly impact performance and reliability. Researchers are therefore increasingly interested in understanding the performance, power, and thermal effects of proposed hardware and software changes. CoMeT is an integrated Core and Memory Thermal simulation toolchain that reports performance, power, and temperature parameters at regular intervals (epochs) for both cores and memory. It enables computer architects to evaluate various core and main-memory integration options (3D, 2.5D, 2D) and analyze runtime management policies.

CoMeT extends the Sniper multicore performance simulator's source code to provide DRAM access information per memory bank at regular intervals. It emits the access counts for reads and writes separately, which is helpful for memories with asymmetric read/write energy and delay (e.g., NVM). Periodically, the core and memory power are computed using McPAT and CACTI and fed to HotSpot for temperature-dependent, leakage-power-aware thermal analysis. A thermal management policy monitors the temperature and, if a core or memory overheats, redistributes or reduces power before the performance simulation resumes.
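The alternation between performance, power, and thermal simulation described above can be sketched as a simple loop. This is a minimal illustration only; the function names below are hypothetical placeholders, not CoMeT's real API.

```python
# A minimal sketch of CoMeT's interval (epoch) simulation loop. The callback
# names (perf_sim, power_model, thermal_model, dtm_policy) are illustrative
# placeholders, not the toolchain's real interfaces.
EPOCH_US = 1000  # default epoch length: 1 ms

def run_epoch_loop(num_epochs, perf_sim, power_model, thermal_model, dtm_policy):
    """Alternate performance, power, and thermal simulation every epoch."""
    temps = []
    for epoch in range(num_epochs):
        stats = perf_sim(epoch)          # core stats + per-bank DRAM access counts
        power = power_model(stats)       # McPAT/CACTI-style power estimates
        temps = thermal_model(power)     # HotSpot-style thermal step
        dtm_policy(temps, power)         # redistribute/reduce power if too hot
    return temps
```

Each iteration corresponds to one epoch of the interval simulation: the performance simulator pauses, power and temperature are updated, the management policy reacts, and performance simulation resumes.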


Following are the salient features:

  1. Supports various main memory types and their integration with cores (2D off-chip DDR, 3D off-chip memory, 2.5D integration, and 3D stacking of core and memory).
  2. Has a built-in temperature video generation tool, namely HeatView, which supports all core-memory configurations. Additionally, for 3D architectures, a video with a layer-wise 2D view is generated.
  3. A default thermal management policy with an OnDemand governor and open scheduler is included to quick-start the design process. Designers can easily modify the default policy and evaluate different thermal management approaches.
  4. To ease user development and reduce debugging, CoMeT provides an automatic build verification test suite (smoke testing) that checks critical functionality across various architectures. Users can easily add test cases to the smoke tests.
  5. Provides an automated grid-based floorplan generator (floorplanlib), which supports the generation of 2D, 2.5D, and 3D floorplans.
  6. Supports PARSEC, SPLASH-2, and SPEC CPU2017 benchmark suites. Users can also run their benchmarks.
  7. Using the SimulationControl feature, users can run simulations in batch mode, taking the list of workloads (mixes of benchmarks) and configurations as input. Further, to enable detailed output analysis, SimulationControl generates additional outputs, such as performance, power, temperature variation (versus time) graphs, and detailed CPI bar charts.

1 - Getting Started (Installation)

Installing Basic Tools

sudo apt install git make python gcc

Cloning the repo

git clone


Download and extract Pinplay 3.2 to the root CoMeT directory as pin_kit

tar xf pinplay-drdebug-3.2-pin-3.2-81205-gcc-linux.tar.gz
mv pinplay-drdebug-3.2-pin-3.2-81205-gcc-linux pin_kit


CoMeT compiles and runs inside a Docker container, so Docker must be downloaded and installed first. See the Docker documentation for installation instructions.

Running a Docker image

After installing Docker, let us now create a container using the Dockerfile.

cd docker
make # build the Docker image
make run # starts running the Docker image. Please ignore "docker groups: cannot find name for group id 1000"
cd .. # return to the base Sniper directory (while running inside of Docker)

Compiling Sniper

make # or use 'make -j N' where N is the number of cores in your machine to use parallel make

Compiling HotSpot

Let us compile the HotSpot simulator, which ships with CoMeT.

cd hotspot_tool/
make # compile the HotSpot tool shipped with CoMeT
cd ..

2 - Running an Application

cd test/thermal_example
make run | tee logfile # Runs application, displays DRAM bank accesses, outputs temperature files
  • The output of make run displays the epoch (time interval, in µs) in which DRAM accesses were made, the number of reads and writes, and the number of DRAM accesses directed to each bank. Detailed power and temperature traces are also generated at the epoch level.

  • To enable the above performance, power, and temperature outputs, we have added -s memTherm_core and -c gainestown_3D to the Sniper run command (see the Makefile). These flags can be used to enable CoMeT simulation for any Sniper-compatible executable.

  • Sample output: apart from the usual Sniper messages, we see a detailed bank-level trace of DRAM accesses. The terminal output below uses the default epoch of 1 ms (= 1000 µs).

    Time    #READs  #WRITEs #Access Address     #BANK   Bank Counters

@&  1000    10455   8710    19165       144, 132, 151, 162, 149, 160, 144, 130, 145, 140, 143, 164, 147, 158, 145, 133, 142, 131, 148, 156, 144, 155, 140, 134, 147, 129, 143, 162, 147, 167, 139, 129, 140, 130, 156, 155, 144, 153, 144, 138, 156, 137, 155, 157, 150, 169, 145, 142, 152, 137, 156, 157, 144, 156, 138, 136, 147, 127, 142, 160, 147, 160, 142, 129, 138, 133, 151, 156, 145, 155, 143, 135, 145, 129, 144, 157, 143, 162, 143, 130, 144, 129, 149, 170, 147, 164, 144, 128, 145, 132, 144, 155, 149, 164, 146, 133, 275, 254, 280, 282, 143, 163, 150, 134, 152, 125, 146, 166, 141, 164, 143, 126, 142, 130, 146, 153, 139, 156, 144, 136, 150, 126, 139, 156, 148, 165, 148, 130, 

@&  2000    15742   12212   27954       206, 188, 225, 249, 240, 267, 197, 164, 229, 219, 201, 225, 193, 196, 244, 235, 205, 191, 226, 246, 241, 264, 196, 167, 229, 217, 202, 220, 193, 196, 244, 235, 205, 191, 226, 246, 241, 264, 196, 167, 236, 218, 208, 225, 196, 205, 248, 240, 212, 193, 233, 251, 241, 267, 197, 165, 230, 215, 202, 223, 193, 199, 245, 233, 206, 189, 226, 249, 241, 267, 197, 165, 230, 220, 202, 218, 188, 202, 250, 230, 211, 196, 223, 251, 241, 265, 200, 170, 229, 222, 203, 216, 190, 203, 255, 236, 215, 193, 231, 250, 244, 264, 199, 167, 234, 215, 197, 229, 194, 196, 244, 236, 204, 191, 228, 247, 242, 264, 196, 168, 233, 211, 199, 227, 196, 200, 249, 239, 
Total number of DRAM read requests = 48989 

Total number of DRAM write requests = 32774
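The trace lines above can be post-processed easily. The helper below is a sketch that assumes the format shown in the sample output (an `@&` marker, then time, reads, writes, total, and comma-separated per-bank counters); it is not part of CoMeT itself.

```python
# Parse one '@&' DRAM-access line from the epoch trace shown above.
# Assumed layout: '@&' marker, time (µs), #reads, #writes, #accesses,
# then comma-separated per-bank access counters.
def parse_access_line(line):
    fields = line.lstrip("@& \t").split(None, 4)
    time_us, reads, writes, total = (int(x) for x in fields[:4])
    banks = [int(tok) for tok in fields[4].replace(",", " ").split()]
    return time_us, reads, writes, total, banks
```

For the first sample line, this yields time 1000 µs, 10455 reads, and 8710 writes, and the reads and writes sum to the reported access total.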
  • The sum of DRAM read and write requests equals num dram accesses in the sim.out file.

    • You can also specify the --roi flag in the config file to obtain the DRAM access trace for a region of interest only.
  • Selected useful files: multiple files containing simulation outputs are generated (sim.cfg, sim.out, etc.); the most useful ones are described below. Files with a _mem or _core suffix (instead of the combined_ prefix) indicate whether they belong to the memory or the core temperature simulation:

    • combined_temperature.trace - the temperature trace of core and memory at periodic intervals combined together.
    • combined_power.trace - the power trace of core and memory at periodic intervals combined together.
    • full_temperature.trace (core and mem) - the temperature trace at periodic intervals for the banks and logic cores in the 3D memory. A separate core trace is not generated for 2.5D and 3D architectures.
    • logfile - the simulation output from the terminal. bank_access_counter lists the access counts for different banks.
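The temperature traces can be analyzed with a few lines of Python. The snippet below assumes a HotSpot-style layout (one header row of unit names, then one whitespace-separated row of temperatures per epoch); verify against your generated trace before relying on it.

```python
# Report the peak temperature per unit from a combined_temperature.trace-style
# file. Assumed layout: a header line of unit names, then one line of
# whitespace-separated temperatures per epoch (HotSpot-style).
def peak_temperatures(trace_path):
    with open(trace_path) as f:
        units = f.readline().split()
        peaks = {u: float("-inf") for u in units}
        for line in f:
            for u, v in zip(units, line.split()):
                peaks[u] = max(peaks[u], float(v))
    return peaks
```

This gives a quick sanity check on hotspots before generating full videos with HeatView.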

If you are able to verify this, then you have successfully run an application.

3 - CoMeT Features

3.1 Support for various Core-Memory Integrations


CoMeT can be configured for various memory and core configurations.

Below we change the input configuration from stacked (core + 3D memory) to off-chip 3D memory for the thermal_example test case.

#Change to appropriate working directory
cd test/thermal_example

#Change configuration from gainestown_3D to gainestown_3Dmem. Can be done in a text editor also.
sed -i 's/-c gainestown_3D/-c gainestown_3Dmem/g' Makefile

#Running CoMeT
make run > logfile
  • Setting up the input configuration: open the Makefile and change the config file used (specified with the -c option in the sniper command). The options are as follows:

    • gainestown_DDR - 2x2 core and an external 4x4 bank DDR main memory (2D memory).
    • gainestown_3Dmem - 2x2 core and an external 4x4x8 banks 3D main memory.
    • gainestown_2_5D - 2x2 core and a 4x4x8 banks 3D main memory integrated on the same die (2.5D architecture).
    • gainestown_3D - 2x2 core on top of a 4x4x8 banks 3D main memory.

3.2 HeatView: A temperature video generation tool

  • To generate the thermal trace video (for a stacked 4-core and 3D, 8-layer, 128-bank memory architecture), please run python3 ../../../scripts/ --cores_in_x 2 --cores_in_y 2 --cores_in_z 1 --banks_in_x 4 --banks_in_y 4 --banks_in_z 8 --arch_type 3D --traceFile combined_temperature.trace --output maps. The video is generated as an .avi file in the maps folder from combined_temperature.trace. Detailed command-line arguments for HeatView are given below.
Usage: python3 arguments
Switches and command-line arguments: 
     --cores_in_x: Number of cores in x dimension (default 4)
     --cores_in_y: Number of cores in y dimension (default 4)
     --cores_in_z: Number of cores in z dimension (default 1)
     --banks_in_x: Number of memory banks in x dimension (default 4)
     --banks_in_y: Number of memory banks in y dimension (default 4)
     --banks_in_z: Number of memory banks in z dimension (default 8)
     --arch_type: Architecture type = 3D or no3D (default no3D)
     --plot_type: Generated view = 3D or 2D (default 3D)
     --layer_to_view: Layer number to view in 3D plot (starting from 0) (default 0)
     --type_to_view: Layer type to view in 3D plot (CORE or MEMORY) (default MEMORY)
     --verbose (or -v): Enable verbose output
     --inverted_view (or -i): Enable inverted view (heat sink on bottom)
     --debug: Enable debug printing
     --tmin: Minimum temperature to use for scale (default 65 deg C)
     --tmax: Maximum temperature to use for scale (default 81 deg C)
     --samplingRate (or -s): Sampling rate, specify an integer (default 1)
     --traceFile (or -t): Input trace file (no default value)
     --output (or -o): output directory (default maps)
     --clean (or -c): Clean if directory exists
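The --tmin/--tmax switches define the color scale. A plausible interpretation (an assumption; HeatView's internal color mapping may differ) is to clamp each temperature to [tmin, tmax] and map it linearly onto [0, 1]:

```python
# Clamp a temperature to [tmin, tmax] and map it linearly to [0, 1] for a
# color scale. Defaults mirror HeatView's --tmin/--tmax defaults above; the
# exact mapping inside HeatView is an assumption.
def normalize_temperature(t, tmin=65.0, tmax=81.0):
    t = min(max(t, tmin), tmax)
    return (t - tmin) / (tmax - tmin)
```

Temperatures outside the range saturate at the scale ends, which is why choosing tmin/tmax close to your workload's actual range gives better visual contrast.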

3.3 Dynamic Thermal Management


Open Scheduler

  • features
    • random arrival times of workloads (open system)
    • API for application mapping and DVFS policies
  • enable with type=open in base.cfg

Configuration Help for Open Scheduler

  • task arrival times: use the config parameters in scheduler/open in base.cfg
  • mapping: select logic with scheduler/open/logic and configure with additional parameters (core_mask, preferred_core)
  • DVFS: select logic with scheduler/open/dvfs/logic and configure accordingly

These policies are implemented in common/scheduler/policies. Mapping policies derive from MappingPolicy, DVFS policies derive from DVFSPolicy. After implementing your policy, instantiate it in SchedulerOpen::initMappingPolicy / SchedulerOpen::initDVFSPolicy.
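CoMeT's real policies are C++ classes as described above; the Python sketch below only illustrates the kind of decision rule a simple threshold-based DVFS policy implements. The frequencies and threshold are made-up values, not CoMeT defaults.

```python
# Illustrative threshold-based DVFS decision rule. CoMeT's actual DVFS
# policies are C++ classes deriving from DVFSPolicy in
# common/scheduler/policies; this sketch only shows the logic, with
# hypothetical frequency and threshold values.
def threshold_dvfs(core_temps, t_crit=80.0, f_max=3.0, f_throttle=2.0):
    """Return one frequency (GHz) per core: throttle hot cores, else run at f_max."""
    return [f_throttle if t > t_crit else f_max for t in core_temps]
```

A real policy would additionally hold per-core state (e.g., hysteresis to avoid oscillating around the threshold) and be instantiated in SchedulerOpen::initDVFSPolicy.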

3.4 Build verification test suite

  • Run the automated test suite to verify that the different features of CoMeT work:
cd test/test-installation
make run
  • As each system configuration is successfully simulated, you will see messages like the following:

    • Running test case with configuration gainestown_3D
    • Finished running test case with configuration gainestown_3D.cfg
    • Test case passed for configuration gainestown_3D.cfg
    • OR Test case failed for configuration gainestown_3D.cfg. Please check test/test-installation/comet_results/gainestown_3D/error_log for details.
    • Video for gainestown_3D saved in test/test-installation/comet_results/gainestown_3D/maps
    • OR Video generation failed for configuration gainestown_3D.cfg. Check test/test-installation/comet_results/gainestown_3D/video_gen_error.log for details.
    • Result saved in test/test-installation/comet_results/gainestown_3D
    • make clean
  • After the test finishes successfully, a folder "comet_results" will be created in the same directory

    • It contains one sub-folder per system configuration (DDR, 3Dmem, 3D, and 2_5D)
    • Each sub-folder contains the architecture simulation files and thermal simulation files for the test case
    • For the per-epoch DRAM access trace and the Sniper log of a test case, refer to the simulation_log file
    • For thermal simulation results, refer to the full_temperature.trace file and other related files
  • Video generation

    • If the simulation for a configuration finishes successfully and the prerequisites for generating videos are installed on your host machine, the video is generated inside the "video" folder of that configuration.
    • If the simulation for a configuration crashes, no video is generated. Further, an error_log is generated for that configuration stating why the simulation failed.
    • If the simulation finishes successfully but the prerequisites for generating videos are not met, a file named video_gen_error.log is generated to report the error for that configuration.
  • Test summary

    • The complete summary of running the test suite is written to a file named test_summary.
    • Some logs are also printed during the execution of the test suite.

3.5 Automated floorplan generator (floorplanlib)


General Usage

The floorplan creation helpers are an optional tool; you can also use your own custom floorplans instead. Usage:

  • create floorplans (plus layer configuration files and HotSpot configuration files)
  • change the configuration to reference the created files (for an example, see gainestown_*)


off-chip 2D
python3 floorplanlib/ \
    --mode DDR \
    --cores 4x4 --corex 1mm --corey 1mm \
    --banks 8x8 --bankx 0.9mm --banky 0.9mm \
    --out my_2d_floorplan
off-chip 3D memory
python3 floorplanlib/ \
    --mode 3Dmem \
    --cores 4x4 --corex 1mm --corey 1mm \
    --banks 8x8x2 --bankx 0.9mm --banky 0.9mm \
    --out my_3d_oc_floorplan
2.5D (3D memory and 2D core on the same interposer)
python3 floorplanlib/ \
    --mode 2.5D \
    --cores 4x4 --corex 1mm --corey 1mm \
    --banks 8x8x2 --bankx 0.9mm --banky 0.9mm \
    --core_mem_distance 7mm \
    --out my_2.5d_floorplan
3D (fully-integrated 3D stack of cores and memory)
python3 floorplanlib/ \
    --mode 3D \
    --cores 4x4 --corex 0.9mm --corey 0.9mm \
    --banks 8x8x4 --bankx 0.45mm --banky 0.45mm \
    --out my_3d_floorplan
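The core of what floorplanlib generates can be sketched in a few lines. The snippet below lays out a uniform grid as HotSpot-style .flp lines (unit name, width, height, left-x, bottom-y, all in meters); it is a simplification and omits the layer and HotSpot configuration files that the real tool also produces.

```python
# Simplified sketch of grid floorplan generation: emit HotSpot-style .flp
# lines (name, width, height, left-x, bottom-y in meters) for an nx x ny
# grid of equally sized units. floorplanlib itself also generates layer and
# HotSpot configuration files.
def grid_floorplan(prefix, nx, ny, unit_x_mm, unit_y_mm):
    w, h = unit_x_mm * 1e-3, unit_y_mm * 1e-3
    lines = []
    for j in range(ny):
        for i in range(nx):
            name = f"{prefix}_{j * nx + i}"
            lines.append(f"{name}\t{w:.6g}\t{h:.6g}\t{i * w:.6g}\t{j * h:.6g}")
    return lines
```

For example, a 4x4 core grid with 1 mm x 1 mm cores (as in the commands above) produces 16 units tiling a 4 mm x 4 mm die.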

3.6 Supports PARSEC, SPLASH-2, SPEC 2017

  • Compiling the Benchmarks:
#setting $GRAPHITE_ROOT to CoMeT's root directory
export GRAPHITE_ROOT=$(pwd)
cd benchmarks
#setting $BENCHMARKS_ROOT to the benchmarks directory
export BENCHMARKS_ROOT=$(pwd)
#compiling the benchmarks
#Running the benchmarks
make run
  • Compilation passes only for the PARSEC and SPLASH benchmarks and fails for the SPEC benchmarks. Ignore the failed compilation of the SPEC benchmarks.
  • For the SPEC 2017 benchmarks,
    cd test/SPEC
    ../../../../../run-sniper -v -s memTherm_core -c gainestown_3Dmem -n 4 --pinballs $SIM_PATH,$SIM_PATH,$SIM_PATH,$SIM_PATH
    • $SIM_PATH represents the path to the .address file of a specific SPEC benchmark pinball

3.7 Simulation Control

  • features
    • batch run many simulations with different configurations
      • annotate configuration options in config files (e.g., in base.cfg or gainestown_3D.cfg) with tags following the format # cfg:
      • specify list of tags per run in Only the associated configuration options will be enabled
      • for an example: see example function in and scheduler/open/dvfs/constFreq in base.cfg to run an application at different frequencies
      • IMPORTANT: make sure that all your configuration options have a match in base.cfg
    • create plots of temperature, power, etc. over time
    • create video of temperature (with HeatView)
    • API to automatically parse finished runs (resultlib)
  • usage
    • configure basic settings in simulationcontrol/
    • specify your runs in simulationcontrol/
    • python3
    • print overview of finished simulations: python3

Quickly list the finished simulations:

cd simulationcontrol

Each run is stored in a separate directory in the results directory (see 4). For quick visual check, many plots are automatically generated for you (IPS, power, etc).

To do your own (automated) evaluations, see the simulationcontrol.resultlib package for a set of helper functions to parse the results. See the source code of for a few examples.
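For simple checks you can also walk the results directory yourself. The snippet below assumes each run lives in its own sub-directory and that a sim.out file marks a completed performance simulation; these layout assumptions should be verified against your results directory, and the simulationcontrol.resultlib package remains the toolchain's own way to do this.

```python
import os

# List run directories that contain a sim.out file, i.e. runs whose
# performance simulation completed. The one-directory-per-run layout and
# sim.out-as-completion-marker are assumptions; resultlib provides the
# toolchain's own parsing helpers.
def finished_runs(results_dir):
    runs = []
    for name in sorted(os.listdir(results_dir)):
        path = os.path.join(results_dir, name)
        if os.path.isdir(path) and os.path.exists(os.path.join(path, "sim.out")):
            runs.append(name)
    return runs
```

This is handy for quickly spotting crashed or still-running simulations in a large batch before doing detailed analysis.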

Code Acknowledgements







  • Error in Step 3.4

    Error in Step 3.4 "Build verification test suite"

    Sorry to disturb. I'm following your steps to learn the CoMeT features, but in step 3.4 "Build verification test suite" all the cases failed, and one of the error messages is

    `[SNIPER] Warning: Unable to use physical addresses for shared memory simulation. [SNIPER] Running ['/home/dzx/CoMeT/record-trace', '-o', '/tmp/tmp8Pk2EE/run_benchmarks', '-v', '-e', '1', '-s', '0', '-r', '1', '--follow', '--routine-tracing', '--', '/home/dzx/CoMeT/test/test-installation/test'] [SNIPER] Start [SNIPER] Running ['bash', '-c', '/home/dzx/CoMeT/lib/sniper -c /home/dzx/CoMeT/config/base.cfg --general/total_cores=4 --general/output_dir=/home/dzx/CoMeT/test/test-installation --config=/home/dzx/CoMeT/config/nehalem.cfg --config=/home/dzx/CoMeT/config/gainestown_3D.cfg -g --traceinput/stop_with_first_app=true -g --traceinput/restart_apps=false -g --hooks/numscripts=1 -g --hooks/script0name=/home/dzx/CoMeT/test/test-installation/ -g --hooks/script0args= -g --traceinput/stop_with_first_app=false -g --traceinput/enabled=true -g --traceinput/emulate_syscalls=true -g --traceinput/num_apps=1 -g --traceinput/trace_prefix=/tmp/tmp8Pk2EE/run_benchmarks']

    *** Configuration error *** Configuration value traceinput/benchmarks not found.

    [SNIPER] End [SNIPER] Elapsed time: 0.15 seconds [RECORD-TRACE] Using the Pin frontend (sift/recorder) [SIFT_RECORDER] Running /home/dzx/CoMeT/pin_kit/pin -mt -injection child -xyzzy -ifeellucky -follow_execv -t /home/dzx/CoMeT/sift/recorder/obj-intel64/sift_recorder -verbose 1 -debug 0 -roi 0 -roi-mpi 0 -f 0 -d 0 -b 0 -o /tmp/tmp8Pk2EE/run_benchmarks -e 1 -s 0 -r 1 -pa 0 -rtntrace 1 -stop 0 -- /home/dzx/CoMeT/test/test-installation/test [SIFT_RECORDER:0:0] Output = [/tmp/tmp8Pk2EE/run_benchmarks.app0.th0.sift] [SIFT_RECORDER:0:0] Response = [/tmp/tmp8Pk2EE/run_benchmarks_response.app0.th0.sift] [SIFT_RECORDER:0:0] Error: Unable to open the output file /tmp/tmp8Pk2EE/run_benchmarks.app0.th0.sift`

    It seems that some temp files cannot be generated properly. Is it related to the virtual machine I am using? I would appreciate it if you can give me some help.

  • Some issues occurred during the installation of CoMeT

    Some issues occurred during the installation of CoMeT

    Dear authors, I met some issues concerning installing CoMeT and the major issues are shown below.

    1. The python-matplotlib package is deprecated for Python 2, as shown in the installation phase.

    I tried to install python-matplotlib using the command sudo apt-get install python-matplotlib, but I failed many times and it always showed the error below.

    When I change this to python3-matplotlib, it works. But I am not sure whether python3-matplotlib is compatible with the whole project.

    2. The hotspot_c_tool folder is missing from the source code, as shown in the compile phase.

    Your guide says we should compile hotspot_c_tool for core temperature estimation. However, there is no such folder in the source code.

    3. The unclear configuration instruction as shown in Compile phase

    I do not know which config file I should search and edit when I follow the guide "Configure the path of the hotspot tool and config directory in the configure file (search for tool_path and config_path variables)". I am not sure what it takes to configure the path of the hotspot tool and the config directory in the config file. I used the commands grep -r "tool_path" and grep -r "config_path" to find the relevant content, but I still don't know what I am supposed to do next.

    Consequently, the demo didn't work, and it gave me hints that there were some configuration issues in the config.cpp file. I searched the relevant contents in that config.cpp file and assume there are some issues in the configuration process.

    I am looking forward to your reply.

    Best wishes Yixian

  • Thermal model of Core-3D NUCA System

    Thermal model of Core-3D NUCA System

    Dear Sir, I have been using your EDA tool for a while. I would like to ask whether CoMeT supports thermal models of core-3D S-NUCA systems in addition to 3D processor-memory systems? Hoping for your reply soon, thanks! Yours sincerely, Caspar

  • malloc():corrupted top size

    malloc():corrupted top size

    Hi! I followed the Readme tutorial to the last step, "Running an Application", but when I use make run | tee logfile, the expected result does not appear; instead, malloc(): corrupted top size appears. What is the reason for this? Hoping for your reply soon, thanks!

  • I want to know something about Memory information

    I want to know something about Memory information

    Dear Professors: I plan to write a custom memory-bank DTM policy using CoMeT. After looking at the CoMeT User Manual, I have some questions about the memory in the 3D architecture in CoMeT; would you please help me with these problems? 1. In the 3D architecture, is the memory access time uniform when the same core accesses different memory banks? 2. What is the default (no DRAM policy) mapping relationship between cores and memory banks? 3. Does CoMeT provide an interface to make a core access specific memory banks through algorithms? 4. I want to know what full_map means, as well as limited_no_broadcast and limitless, as shown below. I would appreciate your help. Yours sincerely, Walter

  • can't build docker image

    can't build docker image

    Dear sir or madam: My Ubuntu version is 18.04, and after I installed Docker I couldn't use "make" to build the Docker image; it reported a gcc++ version problem. Another thing that makes me wonder: there is already a version 3.7 pin_kit in the CoMeT folder, so why should I download the 3.2 version of pin_kit to replace it again? And there is a pin3.15 compressed package in the CoMeT root folder. Looking forward to your reply soon! Thank you.

  • Very low power consumption of memory banks compared to processor's cores

    Very low power consumption of memory banks compared to processor's cores

    Dear CoMeT Developers,

    I've conducted a few experiments using CoMeT. Looking at the combined_power.trace shown below, I wonder why the power consumption of the memory banks is so much lower than the cores' power consumption, and why some values are even zero. I see that some extra leakage power, but not dynamic power, is calculated for each memory bank with respect to its temperature in HotSpot. But then, considering the power numbers in the trace below, where does the dynamic power consumption (above 0.3 W) shown in figure 4 of your article published on arXiv come from?

    Best Regards, Sobhan Niknam combined_power.txt

  • low power mode

    low power mode

    Under the supervision of Anuj Pathania, I implemented a low power mode for memory banks. The low power mode can be used in a memory DTM policy, for which I also wrote support. Please let me know what you think. I can provide you with my thesis, which explains how the code works, and a manual for writing a memory DTM policy.

  • [SIFT:1] Error: Success

    [SIFT:1] Error: Success

    Dear Sir: I got the following error when I used simulationcontrol to simulate, and I was stuck at this point every time (so I had to use Ctrl+C to interrupt the program), and the corresponding simulation results did not appear in the results folder.

    Some of my related configuration files are as follows (other than these, I haven't changed any other files).

    I am very confused about this, because I have tried other configuration parameters and different workloads, and I can get the corresponding results.

    Hoping to get your reply soon, thanks.

  • Level 3 cache

    Level 3 cache

    Under the supervision of Anuj Pathania, I added level 3 cache support. This change adds stacked and non-stacked L3 cache to all architecture types (2D, 3Dmem, 2.5D & 3D).

    8 new configurations are added based on the gainestown config, but with stacked or non-stacked L3. Other configurations with L3 cache can also be made via new arguments in floorplanlib. HeatView and simulationcontrol are both updated to support the L3 cache.

    My thesis contains a part about how to use my newly added code. I hope this looks good, if any changes or clarifications are needed let me know.
