pyLife User guide¶
This document aims to briefly describe the overall design of pyLife and how to use it inside your own applications, for example Jupyter Notebooks.
Overview¶
pyLife provides facilities to perform different kinds of tasks. They can be roughly grouped as follows:
Fitting material data¶
This is about extracting material parameters from experimental data. As of now this is a versatile set of classes to fit Wöhler curve (aka SN curve) parameters from experimental fatigue data. In the mid term we would like to add a module to fit tensile test data and the like.
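As an illustration of the underlying idea, the slope of a Wöhler (SN) curve can be estimated by a linear least-squares fit in log-log space. This is a plain-numpy sketch with made-up data points, not the pylife fitting API:

```python
import numpy as np

# Hypothetical SN data points (load amplitude vs. cycles to failure);
# the values are made up for illustration only.
amplitude = np.array([400.0, 350.0, 300.0, 250.0])
cycles = np.array([5e4, 1.2e5, 3.5e5, 1.1e6])

# The Basquin relation N = ND * (S / SD)^-k is linear in log-log space,
# so fitting log N over log S yields the slope -k.
slope, intercept = np.polyfit(np.log10(amplitude), np.log10(cycles), 1)
k = -slope  # Wöhler exponent, positive for decreasing SN curves
```

Real fatigue data scatters considerably, which is why pyLife's fitting classes go beyond a simple least-squares fit.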
Predicting material behavior¶
These modules use material parameters, e.g. those fitted by the data fitting modules described above, to predict material behavior. As of now these are:
Functions to calculate the true stress and true strain, see
pylife.materiallaws.true_stress_strain
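The standard relations behind true stress and true strain can be sketched in plain numpy (this is not the pylife API, just the textbook formulas, valid up to necking):

```python
import numpy as np

# Engineering (technical) stress/strain values, made up for illustration.
eng_strain = np.array([0.01, 0.05, 0.10])
eng_stress = np.array([200.0, 350.0, 420.0])  # MPa

# true strain = ln(1 + engineering strain)
# true stress = engineering stress * (1 + engineering strain)
true_strain = np.log1p(eng_strain)
true_stress = eng_stress * (1.0 + eng_strain)
```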
Analyzing load collectives and stresses¶
These modules perform basic operations on time signals as well as more complex ones such as rainflow counting.
pylife.stress.collective – facilities to handle load collectives
pylife.stress.rainflow – a versatile module for rainflow counting
pylife.stress.equistress – for equivalent stress calculations from stress tensors
pylife.stress.timesignal – for operations on time signals
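Rainflow counting operates on the sequence of turning points (local peaks and valleys) of a load signal. The preprocessing step can be sketched in plain numpy; this is an illustrative sketch, not the pylife rainflow API:

```python
import numpy as np

def turning_points(signal):
    """Reduce a 1-D load signal to its local extrema (peaks and valleys).

    Rainflow counting then pairs these turning points into closed
    hysteresis loops.
    """
    signal = np.asarray(signal, dtype=float)
    slope_sign = np.sign(np.diff(signal))
    # an interior point is a turning point where the slope changes sign
    sign_change = np.diff(slope_sign) != 0
    keep = np.concatenate(([True], sign_change, [True]))
    return signal[keep]

turning_points([0, 1, 2, 1, 0, 3])  # → array([0., 2., 0., 3.])
```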
Lifetime assessment of components¶
Calculate lifetime, failure probabilities, nominal endurance limits of components based on load collective and material data.
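The basic principle behind such an assessment can be sketched with the Palmgren-Miner rule in plain numpy. All parameter values are made up for illustration; pyLife's assessment modules offer considerably more than this elementary form:

```python
import numpy as np

# Made-up Wöhler parameters: endurance limit, knee point cycles, slope
SD, ND, k = 180.0, 1e6, 5.0

# A made-up load collective: amplitude levels and their cycle counts
amplitudes = np.array([360.0, 270.0, 200.0])
counts = np.array([1e3, 1e4, 1e5])

# Basquin: allowable cycles N(S) = ND * (S / SD)^-k
# (elementary Miner, i.e. the line is extended below the endurance limit)
allowable = ND * (amplitudes / SD) ** (-k)

# Palmgren-Miner damage sum; failure is expected when D reaches 1
damage = np.sum(counts / allowable)  # ≈ 0.28 for these numbers
```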
Mesh operations¶
For operations on FEM meshes
pylife.mesh.meshsignal – accessor classes for general mesh operations
pylife.mesh.HotSpot – for hotspot detection
pylife.mesh.Gradient – to calculate gradients of scalar values along a mesh
pylife.mesh.Meshmapper – to map one mesh to another of the same geometry by interpolating
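To give a flavor of how mesh data looks in pandas, here is an illustrative frame with node coordinates indexed by element and node id. The exact layout is an assumption for this sketch; the Data Model page is the authoritative description:

```python
import pandas as pd

# A single quad element with four nodes; coordinates are made up.
mesh = pd.DataFrame(
    {"x": [0.0, 1.0, 1.0, 0.0], "y": [0.0, 0.0, 1.0, 1.0], "z": 0.0},
    index=pd.MultiIndex.from_product(
        [[1], [1, 2, 3, 4]], names=["element_id", "node_id"]
    ),
)
```

Keeping element and node ids in the index rather than in columns lets pandas group, join, and broadcast over the mesh efficiently.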
VMAP interface¶
Import and export mesh data from/to VMAP files.
Utilities¶
Some mathematical helper functions that are useful throughout the code base.
General Concepts¶
pyLife aims to provide a toolbox of calculation tools that can be plugged together in order to perform complex operations. We try to make using the existing modules, as well as writing custom ones, as easy as possible while at the same time performing well on larger amounts of data. Moreover, we try to keep data that belongs together in self-explaining data structures rather than in individual variables.
In order to achieve all that, we make extensive use of pandas and numpy. In this guide we assume that you have a basic understanding of these libraries and the data structures and concepts they use.
The page Data Model describes the way how data should be stored in pandas objects.
The page Signal API describes how mathematical operations are to be performed on those pandas objects.
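pandas itself provides an extension mechanism for attaching domain-specific operations to its objects, which is the kind of machinery a signal API can build on. Here is a minimal generic sketch using pandas' public registration API; the accessor name `ampl` and its method are invented for this example:

```python
import pandas as pd

@pd.api.extensions.register_series_accessor("ampl")
class AmplitudeAccessor:
    """Toy accessor exposing the amplitude (half the peak-to-peak range)."""

    def __init__(self, obj):
        self._obj = obj

    def amplitude(self):
        return (self._obj.max() - self._obj.min()) / 2.0

signal = pd.Series([-1.0, 0.5, 3.0, -2.0])
signal.ampl.amplitude()  # half of (3.0 - (-2.0)) = 2.5
```

The benefit is that the operation travels with the pandas object, so `signal.ampl.amplitude()` reads naturally alongside built-in methods like `signal.mean()`.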