KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software


KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++ and comes with an extensive Python interface. More in the Overview.

Kratos is free under the BSD-4 license and can be used as-is even in commercial software. Many of its main applications are also free and BSD-4 licensed, but each derived application can have its own proprietary license.

Main Features

Kratos is multiplatform and available for Windows, Linux (several distros) and macOS.

Kratos is OpenMP and MPI parallel and scalable up to thousands of cores.

Kratos provides a core that defines the common framework and several applications that work like plug-ins, extending it to diverse fields.

Its main applications are:

Some main modules are:

Documentation

Here you can find the basic documentation of the project:

Getting Started

Tutorials

More documentation

Wiki

Examples of use

Kratos has been used for the simulation of many different problems in a wide variety of disciplines, ranging from wind flow around singular buildings to granular domain dynamics. Some examples and validation benchmarks simulated by Kratos can be found here

Barcelona Wind Simulation

Contributors

Organizations contributing to Kratos:



  • International Center for Numerical Methods in Engineering
  • Chair of Structural Analysis, Technical University of Munich
  • Altair Engineering
  • Deltares

Our Users

Some users of the technologies developed in Kratos are:

  • Airbus Defence and Space, Stress Methods & Optimisation Department
  • Siemens AG, Corporate Technology
  • ONERA, The French Aerospace Lab, Applied Aerodynamics Department

Looking forward to seeing your logo here!

Special Thanks To

In Kratos Core:

  • Boost for uBLAS
  • pybind11 for exposing C++ to Python
  • GidPost for providing output to GiD
  • AMGCL for its highly scalable multigrid solver
  • JSON for Modern C++
  • filesystem, a header-only single-file std::filesystem-compatible helper library based on the C++17 specs
  • ZLib, the compression library

In applications:

How to cite Kratos?

Please use the following references when citing Kratos in your work.

  • Dadvand, P., Rossi, R. & Oñate, E. An Object-oriented Environment for Developing Finite Element Codes for Multi-disciplinary Applications. Arch Computat Methods Eng 17, 253–297 (2010). https://doi.org/10.1007/s11831-010-9045-2
  • Dadvand, P., Rossi, R., Gil, M., Martorell, X., Cotela, J., Juanpere, E., Idelsohn, S., Oñate, E. (2013). Migration of a generic multi-physics framework to HPC environments. Computers & Fluids 80, 301–309. https://doi.org/10.1016/j.compfluid.2012.02.004
  • Mataix Ferrándiz, V., Bucher, P., Rossi, R., Cotela, J., Carbonell, J. M., Zorrilla, R., … Tosi, R. (2020, November 27). KratosMultiphysics (Version 8.1). Zenodo. https://doi.org/10.5281/zenodo.3234644
Comments
  • [Core] Adding Subproperties

    [Core] Adding Subproperties

    Fixes #2414

    My only concern is that GetProperties() is the only method in the model part that does not convert from PropertiesWithSubProperties to Properties in Python.

    We defined a new class called PropertiesWithSubProperties, derived from Properties (we do this to avoid overloading the Properties object further).
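    A minimal Python sketch of the design described above (class names follow the PR; the methods are illustrative, not the actual Kratos API):

```python
# Illustrative sketch: a class derived from Properties that adds a container
# of sub-properties, leaving the base Properties interface untouched.
# Names follow the PR discussion; the real Kratos classes differ in detail.

class Properties:
    def __init__(self, properties_id):
        self.Id = properties_id
        self._data = {}

    def SetValue(self, variable, value):
        self._data[variable] = value

    def GetValue(self, variable):
        return self._data[variable]


class PropertiesWithSubProperties(Properties):
    def __init__(self, properties_id):
        super().__init__(properties_id)
        self._sub_properties = {}  # sub-properties indexed by their Id

    def AddSubProperties(self, sub_properties):
        self._sub_properties[sub_properties.Id] = sub_properties

    def GetSubProperties(self, sub_properties_id):
        return self._sub_properties[sub_properties_id]


# Because of the inheritance, a PropertiesWithSubProperties can be passed
# wherever a plain Properties is expected (the GetProperties() concern above).
parent = PropertiesWithSubProperties(1)
child = Properties(2)
child.SetValue("YOUNG_MODULUS", 2.1e11)
parent.AddSubProperties(child)
```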

  • [Core][Not to merge right now] Proposal for solving strategies factories

    [Core][Not to merge right now] Proposal for solving strategies factories

    @frastellini and I are interested in implementing a proper adaptive NR strategy. To do this properly, we need to take into account the recomputation of the processes. @RiccardoRossi told me to create factories for the solving strategies, similar to the linear solver ones, in order to implement this consistently with the design of the analysis.

    This PR adds factories to:

    • Convergence criteria
    • Builder and solver
    • Strategies -> This one uses the other factories
    • Schemes

    I used the parameter keys already used in the solvers in order to reduce conflicts with the current implementations. Anyway, further changes will be necessary.

    Right now it only affects the core; the idea is to adapt the applications' factories once this is merged and approved.

    I was thinking of moving the factories to a different folder, in order to reduce the size of the includes folder (which is huge).

    This PR includes changes from:

    #3178 and #3179; these changes will disappear once merged into master.

    I was thinking about how to implement the tests of these factories. Maybe using the Info method of the different classes and expecting a certain output depending on the parameters. What do you say @pooyan-dadvand?
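    The factory layout proposed above can be sketched as follows (a hypothetical registry-based design in Python; names and parameter keys are illustrative, not the actual Kratos implementation):

```python
# Illustrative sketch of string-keyed component factories: each component
# type gets a registry, and the strategy factory composes the other
# factories from a parameters dict, mirroring how the linear solver
# factories select their type from a settings key.

SCHEME_REGISTRY = {}
CRITERIA_REGISTRY = {}

def register(registry, name):
    """Decorator registering a class under a string key."""
    def decorator(cls):
        registry[name] = cls
        return cls
    return decorator

@register(SCHEME_REGISTRY, "static_scheme")
class StaticScheme:
    pass

@register(CRITERIA_REGISTRY, "displacement_criteria")
class DisplacementCriteria:
    def __init__(self, tolerance):
        self.tolerance = tolerance

class StrategyFactory:
    """Builds a strategy's parts by delegating to the other factories."""
    def create(self, parameters):
        scheme = SCHEME_REGISTRY[parameters["scheme_type"]]()
        criteria_cls = CRITERIA_REGISTRY[parameters["convergence_criterion"]]
        criteria = criteria_cls(parameters.get("tolerance", 1e-6))
        return scheme, criteria

scheme, criteria = StrategyFactory().create({
    "scheme_type": "static_scheme",
    "convergence_criterion": "displacement_criteria",
    "tolerance": 1e-9,
})
```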

  • SIMP element for topology optimization

    SIMP element for topology optimization

    Hi everyone,

    I am busy with the reactivation of the Topology Optimization Application, but I am having trouble implementing the SIMP element based on the small-displacement solid element (small_displacement.cpp) of the Structural Mechanics Application. In the legacy TopOpt code, the SIMP element was based on the Solid Mechanics Application, and it seems that the implementation "philosophy" of these elements is different. I am specifically having a problem with the Calculate function.

    I am developing this in the following repository: MyRepository

    The test example that I am using: examples/01_Small_Cantilever_Hexahedra/

    I am not able to get to the root of this problem; does anyone have an idea and can help me with this?

    Thanks in advance!

  • Release 5.2

    Release 5.2

    The next release has been scheduled for the end of November. So at the end of this month we will create the release branch from master, to be used for interface refinement and bug fixing.

    It is very important to add your application to the Linux and Windows binaries so that they are available and downloadable for the next three months for other people, without the need to compile the code. (This is useful for collaborating with users who only work with Python.) The code at the moment of creating the release branch should contain all features, and by release time it should be as stable as possible. I would like to emphasize that the effort of creating a release considerably increases the quality of an application and improves its maintainability, so I would strongly recommend application developers to make this effort.

    I have changed the previous release project to a new one, maintaining the same structure and also the applications which were included.

    I have also created milestones for better organization.

    Steps to take

    I would encourage all @KratosMultiphysics/team-maintainers and also developers to:

    1. Revise the project and add their application if they want to join this release.
    2. Revise their corresponding issues and assign them to the release 5.2 milestone.

    I would kindly ask for all your collaboration during this release period.

    Update: I have realized that an issue cannot be added to two milestones. So I have removed one milestone to keep the organization easier.

  • Model v3 - third iteration of the model redesign

    Model v3 - third iteration of the model redesign

    This is the third iteration of the model. It is now neither registered in the Kernel nor a global object.

    I would say that the design is pretty clean now (there is nothing strange in the model object, in the sense that it is NOT a singleton any longer).

    The problem is that this change is backward incompatible in that it hides the ModelPart. A ModelPart can now ONLY be created through the Model interface.

    As of now, I have ported all of the core, to the point at which all the tests pass (both Python and C++). If we go for this design I will need help in porting all of the applications.
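    A minimal sketch of the ownership model described above (illustrative Python, not the actual Kratos interface): the Model is a plain object rather than a singleton, and ModelParts can only be obtained through it.

```python
# Illustrative sketch: the Model owns all ModelParts and is the only way to
# create one. Two Model instances share no global state.

class ModelPart:
    def __init__(self, name):
        self.Name = name

class Model:
    def __init__(self):
        self._parts = {}

    def CreateModelPart(self, name):
        if name in self._parts:
            raise RuntimeError(f"ModelPart '{name}' already exists")
        part = ModelPart(name)
        self._parts[name] = part
        return part

    def GetModelPart(self, name):
        return self._parts[name]

# Two independent models: nothing is shared, nothing is global.
model_a = Model()
model_b = Model()
part = model_a.CreateModelPart("Structure")
```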

  • [core] Reduce node and dof size stage 1

    [core] Reduce node and dof size stage 1

    This PR is the first stage in reducing the size of Dof and Node:

    • The sizeof(Dof<double>) has been reduced from 64 to 32 bytes.
    • The sizeof(Node<3>) has been reduced from 240 to 224 bytes. (For the record, the real occupation of an empty node with its allocations was about 290 bytes before these changes.)

    The reduction has been made by:

    • Rearranging the Dof data to be more coherent and reducing the data sizes using C++ bit fields.
    • Dof is no longer derived from IndexedObject, to avoid the virtual table pointer allocation.
    • A new NodalData class has been created to hold all data stored in Node, reducing the number of pointers.
    • Dof has a pointer to NodalData, which also holds the Id of the Node, so the copy of the Node Id is removed.

    This PR changes the API by not deriving from IndexedObject (so Node and Dof are no longer IndexedObjects), and Dof::SetId is removed. As far as I know this should not affect backward compatibility, as this relation was not exploited in the code; meanwhile the behaviour is the same. I would suggest all @KratosMultiphysics/team-maintainers test their application with this branch before merging it to master.
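    The pointer layout described above can be sketched like this (illustrative Python; the real implementation is C++ with raw pointers and bit fields):

```python
# Illustrative sketch: the node's Id and data live in a single NodalData
# object, and each Dof keeps a reference to it instead of storing its own
# copy of the node Id. Names are illustrative, not the Kratos classes.

class NodalData:
    __slots__ = ("Id", "solution_values")
    def __init__(self, node_id):
        self.Id = node_id
        self.solution_values = {}

class Dof:
    __slots__ = ("_nodal_data", "variable")
    def __init__(self, nodal_data, variable):
        self._nodal_data = nodal_data  # shared, not copied
        self.variable = variable

    @property
    def Id(self):
        # The node Id is read through NodalData: no duplicate copy in Dof.
        return self._nodal_data.Id

data = NodalData(42)
dof_x = Dof(data, "DISPLACEMENT_X")
dof_y = Dof(data, "DISPLACEMENT_Y")
```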

  • [Structural] Adding initial stress and strain capability

    [Structural] Adding initial stress and strain capability

    In this PR I'll be adding to all the implemented CLs the capability of imposing an initial strain or stress. Only the linear elastic 3D law has been improved so far, so you can see how it works.

    In order to apply this initial strain/stress we have used the mdpa feature of

    Begin ElementalData INITIAL_STRESS_VECTOR
        1 [6] (0,0,1e6,0,0,0)
        2 [6] (0,0,1e6,0,0,0)
        3 [6] (0,0,1e6,0,0,0)
        4 [6] (0,0,1e6,0,0,0)
    End ElementalData

    Begin ElementalData INITIAL_STRAIN_VECTOR
        1 [6] (0.01,0.01,0.01,0,0,0)
        2 [6] (0.01,0.01,0.01,0,0,0)
        3 [6] (0.01,0.01,0.01,0,0,0)
        4 [6] (0.01,0.01,0.01,0,0,0)
    End ElementalData

    or by using the python process inside the json:

    {
        "python_module" : "set_initial_state_process",
        "kratos_module" : "KratosMultiphysics",
        "process_name"  : "set_initial_state_process",
        "Parameters"    : {
            "mesh_id"         : 0,
            "model_part_name" : "Structure",
            "dimension"       : 2,
            "imposed_strain"  : [0.0,0.00,0],
            "imposed_stress"  : [1000000,0,0],
            "imposed_deformation_gradient"  : [[1,0],[0,1]],
            "interval"        : [0.0, 1e30]
        }
    }

    The method (inside the linear elastic CL) checks whether the geometry has this initial stress/strain; otherwise it is a ZeroVector:

        /**
         * @brief Adds the initial stress vector if it is defined in the InitialState
         */
    void AddInitialStressVectorContribution(Vector& rStressVector, Parameters& rParameterValues)
    {
        const auto p_initial_state = pGetInitialState();
            if (p_initial_state) {
                noalias(rStressVector) += p_initial_state->GetInitialStressVector();
            }
        }
    

    Additionally, I can add the capability of retrieving the initial strains and stresses from the material properties.

  • [Structural] adding prebuckling solver

    [Structural] adding prebuckling solver

    The prebuckling solver computes the critical load multiplier for a given load set at which the structure will buckle. It always refers to the initial, user-defined load. The implementation does not, as usual, solve the "classical buckling eigenvalue problem" (K_mat + lambda*K_geo)*phi = 0, but relies only on the total stiffness matrix of two consecutive load steps:

    (K_current + lambda*K_dot)*phi = 0, with K_dot = (K_next(lambda + h) - K_current) / h,

    where h is a small change in the load factor. Therefore it is not required to compute K_mat and K_geo separately, which would require major changes in the current element implementations.

    To follow the entire prebuckling load path, the simulation is conducted iteratively while the applied loads are modified towards the computed buckling load. We differentiate between a "small" and a "big" load step. The underlying theory of the approximation of K_dot requires a small change in the load factor (the small step). After the small step we analyse the eigenvalue problem and find a new load factor. During the big step we push the loads closer to the actual buckling load, e.g. to half the value of the computed buckling load. Then another small step and eigenvalue analysis follow, and so on. This procedure is repeated until the load factors converge.

    In case one wants to compute the linear/theoretical buckling load, the simulation can be stopped after the first small load step. The solver only converges when the initial load is smaller than the actual buckling load; in general it is recommended to apply a very small load.
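    The finite-difference eigenvalue problem above can be illustrated numerically on a toy system where K(mu) = K_mat + mu*K_geo is known, so the exact smallest buckling multiplier (2.0 here) is available for comparison. This is a sketch with assumed matrices, not the Kratos solver:

```python
import numpy as np

# Toy 2-DOF system: total stiffness is linear in the load factor mu.
K_mat = np.diag([2.0, 5.0])    # assumed material stiffness
K_geo = np.diag([-1.0, -1.0])  # assumed geometric stiffness

def total_stiffness(mu):
    """Assembled stiffness at load factor mu (what a real code provides)."""
    return K_mat + mu * K_geo

mu0 = 0.5  # current load factor
h = 0.01   # small load-factor increment (the "small step")

# The solver never needs K_mat and K_geo separately, only two assembled
# stiffness matrices at consecutive load factors:
K_current = total_stiffness(mu0)
K_dot = (total_stiffness(mu0 + h) - K_current) / h

# (K_current + lambda*K_dot)*phi = 0  <=>  lambda = eig of (-K_dot)^-1 K_current
lambdas = np.linalg.eigvals(np.linalg.solve(-K_dot, K_current))
critical_multiplier = mu0 + lambdas.real.min()
# For this toy system the result is (numerically) the exact multiplier 2.0,
# since K is exactly linear in mu and the finite difference is exact.
```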

  • explicit mpcs slave-master relation

    explicit mpcs slave-master relation

    Hi, I have the following setup: [image] The master nodes are on the light blue line, and two slave nodes are at the connection between the dark and the light blue lines. A force is applied at the lower right node, and as you can see in the video above, I can achieve sliding on the light blue line by coupling DISP_Y and DISP_Z between master and slave, plus searching for new neighbour nodes in each iteration (not completely correct sliding, but I will improve this...). This uses the implicit dynamic scheme.

    I now wanted to try the new explicit MPCs, and this is the result: [image] One can see that the master line does not deform, but the constraints are properly set.

    I think the problem is that in the current implementation of the explicit MPCs the load is not transferred to the master line. My guess would be that we have to couple the residuals of the slave and the master DOFs in void ExplicitCentralDifferencesScheme::Update, because they are in a certain relation which is not considered at the moment.
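    The suggested coupling can be sketched with a toy master-slave relation u_s = T u_m, where consistency requires mapping the slave residual back to the masters with T^T before the explicit update (illustrative numbers, not the Kratos scheme):

```python
import numpy as np

# One slave DOF interpolated from two master DOFs via the constraint matrix T.
T = np.array([[0.5, 0.5]])       # assumed relation u_slave = T @ u_master
r_master = np.array([1.0, 1.0])  # residual assembled on the master DOFs
r_slave = np.array([2.0])        # residual assembled on the slave DOF

# Transfer the slave contribution to the masters (the step suspected to be
# missing in the current explicit update):
r_master = r_master + T.T @ r_slave

# After the masters are updated, the slave follows from the constraint:
u_master = 0.1 * r_master  # stand-in for the explicit time-integration update
u_slave = T @ u_master
```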

    I would be happy about any suggestions.

  • Defining the local coordinate system of elements (beams, shells)

    Defining the local coordinate system of elements (beams, shells)

    Hi together,

    With this post, the discussion on the definition of the local coordinate system for elements, which is especially crucial for beam elements, is opened. Feel free to add other members of the team who might be interested.

    Here is our suggestion:

    The local x-axis is the vector spanning from the starting point to the end point of the beam. The local y-axis is then calculated with the help of the cross product globalZ(0,0,1) × localX. Another cross product, localX × localY, yields the local z-axis. All local axes are normalized. One exception: in case the beam axis is parallel to the global Z-axis, nX = (0, 0, ±1); nY = (0, 1, 0); nZ = (∓1, 0, 0).
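    The convention above can be sketched as follows (a hedged illustration of the proposal, including the parallel-to-Z exception; `beam_local_axes` is a hypothetical helper, not a Kratos function):

```python
import numpy as np

def beam_local_axes(start, end, tol=1e-8):
    """Local axes per the proposed convention: x along the beam,
    y = globalZ x localX, z = localX x localY, all normalized."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    nx = end - start
    nx /= np.linalg.norm(nx)
    global_z = np.array([0.0, 0.0, 1.0])
    ny = np.cross(global_z, nx)
    if np.linalg.norm(ny) < tol:  # exception: beam parallel to global Z
        ny = np.array([0.0, 1.0, 0.0])
        return nx, ny, np.cross(nx, ny)
    ny /= np.linalg.norm(ny)
    nz = np.cross(nx, ny)
    return nx, ny, nz

nx, ny, nz = beam_local_axes([0, 0, 0], [2, 0, 0])  # horizontal beam
vx, vy, vz = beam_local_axes([0, 0, 0], [0, 0, 3])  # vertical beam (exception)
```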

    Looking forward to your comments.

    Andreas

  • [Core] Transition #3185 with only explicit strategies

    [Core] Transition #3185 with only explicit strategies

    Description This is a transition PR for #3185, as @RiccardoRossi suggested. Only explicit strategies are included here (there are not many, so it is simpler). This way, how #3185 works can be appreciated in a simpler setting.

    Changelog

    • Added BaseFactory
    • Added factory for explicit builder
    • Added factory for explicit strategy
    • Added tests (cpp/python)
    • Added to Kratos components and registered
  • [Core] Make Properties' Double methods virtual

    [Core] Make Properties' Double methods virtual

    📝 Description As stated in the title, this PR makes the relevant double methods of Properties virtual. This is required to create a way to compute sensitivities w.r.t. material properties, which is needed to perform system identification, optimization, etc. for material properties.

    Since these won't be used with derived classes in general (only in specific optimization cases), AFAIK this won't add a significant cost to the simulations.

    🆕 Changelog

    • Make double methods virtual
  • Opt app/improve multi mat thick

    Opt app/improve multi mat thick

    This PR improves and cleans implementations of multi-material and multi-thickness optimization. Some bug fixes, as well as test updates, are also done.

  • [Fluid] Apply wall law process

    [Fluid] Apply wall law process

    📝 Description Following #10571 and #10585, this adds the possibility to use wall models from the input settings. As for the inlet, outlet, etc., a custom process, potentially deriving from this one, must be created for the two-phase case (this will be done in a future PR).

    @jginternational once this is merged, we must work on adding this to the GUI.

    #10619 needs to be merged first.

  • [Core] Adding output process with controller

    [Core] Adding output process with controller

    📝 Description This is one of the possible additions we discussed in the last Kratos Workshop from Altair developments.

    The idea is that output processes currently have two responsibilities: deciding when to print and deciding what to print. Whenever you want more complex control over when to print (e.g. every time 1% of a part is filled), this is very inconvenient, and separating the process into two different objects makes things much easier. So the point is to separate the responsibilities between:

    • controller: decides when to print; it only needs to implement IsOutputStep.
    • print process: controls what to print and in which format. It implements the rest of the functions, especially the PrintOutput function. Note that any current output process (like gid_output_process) can be used here; its IsOutputStep function will just never be called.
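    The separation of responsibilities can be sketched like this (illustrative Python; the class names and the fill-fraction criterion are assumptions, not the Kratos implementation):

```python
# Illustrative sketch: a controller that only decides *when* to print,
# and an output process that only knows *what* to print.

class FillFractionController:
    """Triggers output every time the filled fraction grows by `step`."""
    def __init__(self, step=0.01):
        self.step = step
        self._next_threshold = step

    def IsOutputStep(self, filled_fraction):
        if filled_fraction >= self._next_threshold:
            self._next_threshold += self.step
            return True
        return False

class RecordingOutputProcess:
    """Stands in for e.g. gid_output_process; only PrintOutput is called."""
    def __init__(self):
        self.printed = []

    def PrintOutput(self, filled_fraction):
        self.printed.append(filled_fraction)

controller = FillFractionController(step=0.25)
output = RecordingOutputProcess()
for fraction in [0.1, 0.3, 0.4, 0.6, 0.9, 1.0]:
    if controller.IsOutputStep(fraction):
        output.PrintOutput(fraction)
# output.printed now holds the fractions at which output was triggered:
# [0.3, 0.6, 0.9, 1.0]
```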
  • Geo heat transfer 10372

    Geo heat transfer 10372

    📝 Description Added thermal element to GeoMechanics Application.

    🆕 Changelog
    [1] Define thermal element with element classes
    [2] Add the input variables needed for heat
    [3] Define solver
    [4] Define strategy
    [5] Add dispersion matrix/law
    [6] Build matrices: stiffness, right-hand side, mass, ...
    [7] Add boundary conditions (Neumann)
    [10] Give output for GiD
    [11] Fix bugs
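    Step [6] above (building the matrices) can be illustrated with a toy 1D conduction assembly. This is an assumed 2-node linear heat-transfer element, not the GeoMechanics implementation:

```python
import numpy as np

def element_conduction_matrix(conductivity, length):
    """Local matrix of a 2-node 1D conduction element."""
    k = conductivity / length
    return k * np.array([[1.0, -1.0], [-1.0, 1.0]])

def assemble(n_elements, conductivity, element_length):
    """Assemble the global conduction matrix for a chain of elements."""
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):
        Ke = element_conduction_matrix(conductivity, element_length)
        K[e:e + 2, e:e + 2] += Ke  # scatter the local matrix to global DOFs
    return K

# Two elements, unit conductivity and length: the classic tridiagonal pattern.
K = assemble(2, 1.0, 1.0)
```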
