Provided by: python3-xrstools_0.15.0+git20210910+c147919d-2build1_amd64

NAME

       xrstools - XRStools Documentation

       Contents:

INSTALLATION

          • If you install from a Debian package you can skip the following points: simply
            install the package and go directly to the code invocation section.

          • Using Git, sources can be retrieved with the following command

                git clone https://gitlab.esrf.fr/ixstools/xrstools

          • For a local installation you can use

                python setup.py install --prefix=~/packages

            then, before running the code, set the following (adjust the Python version in the
            path to the one of your installation):

                export PYTHONPATH=/home/yourname/packages/lib/python2.7/site-packages
                export PATH=/home/yourname/bin:$PATH

          • To install by creating a virtual environment

                export MYPREFIX=/REPLACE/WITH/YOUR/TARGET
                cd ${MYPREFIX}
                python3 -m venv myenv
                source ${MYPREFIX}/myenv/bin/activate
                pip install pip --upgrade
                pip install setuptools --upgrade

                git clone https://gitlab.esrf.fr/ixstools/xrstools

                cd ${MYPREFIX}/xrstools/
                pip install -r requirements.txt
                python setup.py install

          • Examples can be found in the nonregression directory.

          • For the ROI selection tool you need a recent version of PyMca installed on your
            system.

CODE INVOCATION

          • Some of the XRStools capabilities can be accessed by invoking the XRS_swissknife
            script, providing as input a file in YAML format.

          • To use the wizard the suggested instruction is

                XRS_wizard  --wroot ~/software/XRStoolsSuperResolution/XRStools/WIZARD/methods/

            The wroot argument tells the wizard where extra workflows can be found. In the above
            instruction we point it at the workflows in a source checkout under the home
            directory. This is practical because the wizard allows editing them online, so the
            modifications remain in the sources; it can also be used to access extra workflows
            that do not come with the main distribution.

          • Depending on the details of your installation, the XRS_swissknife script will be
            installed in some directory on your system. Check the Installation page to see how to
            set PYTHONPATH and PATH in case of a local installation.

            The  following documentation has been generated automatically from the comments found
            in the code.

   GENERALITIES about XRS_swissknife
   Super Resolution
    To fit the optical responses of all the analysers (those you selected a ROI for) and the
        pixel response, based on a foil scan
       embedded doc :

    To extrapolate the ROIs and the foil scan to a larger extent, so as to cover a larger sample
       embedded doc :

    To calculate the scalar product between a foil scan and a sample, for further use in the
        inversion problem
       embedded doc :

   Other features
       e_rois

ASSORTED EXAMPLES

   xrstools imaging example

VIDEOS

        • A tool to clean the spectra of the Compton profile and absorption edge

        • A tool to define ROIs by using NNMF in the spectral and spatial domains

DEVELOPERS CORNER

   XRStools.roifinder_and_gui Module
   XRStools.xrs_utilities Module
       XRStools.xrs_utilities.Chi(chi, degrees=True)
               Rotation around (1,0,0), positive sense.

       XRStools.xrs_utilities.HRcorrect(pzprofile, occupation, q)
              Returns the first order correction to filled 1s, 2s, and 2p Compton profiles.

              Implementation after Holm and Ribberfors (citation ...).

              Args:

                     • pzprofile (np.array): Compton profile (e.g. tabulated from  Biggs)  to  be
                       corrected (2D matrix).

                     • occupation (list): electron configuration.

                     • q (float or np.array): momentum transfer in [a.u.].

              Returns:
                     asymmetry   (np.array):   asymmetries  to  be  added  to  the  raw  profiles
                     (normalized to the number of electrons on pz scale)

       XRStools.xrs_utilities.NNMFcost(x, A, F, C, F_up, C_up, n, k, m)
              NNMFcost Returns cost and gradient for NNMF with constraints.

       XRStools.xrs_utilities.NNMFcost_der(x, A, F, C, F_up, C_up, n, k, m)

       XRStools.xrs_utilities.NNMFcost_old(x, A, W, H, W_up, H_up)
              NNMFcost Returns cost and gradient for NNMF with constraints.

       XRStools.xrs_utilities.Omega(omega, degrees=True)
               Rotation around (0,0,1), positive sense.

       XRStools.xrs_utilities.Phi(phi, degrees=True)
               Rotation around (0,1,0), negative sense.

       XRStools.xrs_utilities.Rx(chi, degrees=True)
              Rx Rotation matrix for vector rotations around the [1,0,0]-direction.

              Args:

                     • chi   (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.

       XRStools.xrs_utilities.Ry(phi, degrees=True)
              Ry Rotation matrix for vector rotations around the [0,1,0]-direction.

              Args:

                     • phi   (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.

       XRStools.xrs_utilities.Rz(omega, degrees=True)
              Rz Rotation matrix for vector rotations around the [0,0,1]-direction.

              Args:

                     • omega (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.
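
               For illustration, a minimal numpy sketch (not taken from the package) of such a
               rotation matrix and one possible way to compose them; the composition order shown
               in the comment is an assumption of this example.

                  import numpy as np

                  def rot_x(chi, degrees=True):
                      # standard right-handed rotation matrix around [1, 0, 0]
                      a = np.radians(chi) if degrees else chi
                      return np.array([[1.0, 0.0, 0.0],
                                       [0.0, np.cos(a), -np.sin(a)],
                                       [0.0, np.sin(a),  np.cos(a)]])

                  # Rx, Ry, Rz can be combined into one sample rotation, e.g.
                  # R = rot_z(omega) @ rot_y(phi) @ rot_x(chi)   (order chosen for illustration)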

        XRStools.xrs_utilities.TTsolver1D(el_energy, hkl=[6, 6, 0], crystal='Si', R=1.0,
        dev=array([-50., -49., ..., 148., 149.]), alpha=0.0,
        chitable_prefix='/home/christoph/sources/XRStools/data/chitables/chitable_')
              TTsolver Solves the Takagi-Taupin equation for a bent crystal.

              This function is based on a Matlab implementation by  S.  Huotari  of  M.  Krisch's
              Fortran programs.

              Args:

                     • el_energy (float): Fixed nominal (working) energy in keV.

                     • hkl (array): Reflection order vector, e.g. [6, 6, 0]

                     • crystal (str): Crystal used (can be silicon 'Si' or 'Ge')

                     • R (float): Crystal bending radius in m.

                     • dev  (np.array):  Deviation  parameter  (in  arc.  seconds)  for which the
                       reflectivity curve should be calculated.

                      • alpha (float): Crystal asymmetry angle.

              Returns:

                     • refl (np.array): Reflectivity curve.

                     • e (np.array): Deviation from Bragg angle in meV.

                     • dev (np.array): Deviation from Bragg angle in microrad.

       XRStools.xrs_utilities.absCorrection(mu1,      mu2,      alpha,      beta,       samthick,
       geometry='transmission')
              absCorrection

              Calculates  absorption  correction  for  given  mu1 and mu2.  Multiply the measured
              spectrum with this correction factor.  This is a translation of Keijo  Hamalainen's
              Matlab function (KH 30.05.96).

              Args

                     • mu1 : np.array  Absorption coefficient for the incident energy in [1/cm].

                     • mu2 : np.array Absorption coefficient for the scattered energy in [1/cm].

                     • alpha : float Incident angle relative to plane normal in [deg].

                     • beta : float  Exit angle relative to plane normal [deg].

                     • samthick : float  Sample thickness in [cm].

                     • geometry  :  string,  optional  Key  word  for different sample geometries
                       ('transmission', 'reflection', 'sphere').  If geometry is set to 'sphere',
                       no angular dependence is assumed.

              Returns

                     • ac  :  np.array  Absorption  correction  factor.  Multiply  this with your
                       measured spectrum.

       XRStools.xrs_utilities.abscorr2(mu1, mu2, alpha, beta, samthick)
              Calculates absorption correction for given mu1  and  mu2.   Multiply  the  measured
              spectrum with this correction factor.

              This is a translation of Keijo Hamalainen's Matlab function (KH 30.05.96).

              Args:

                     • mu1 (np.array): absorption coefficient for the incident energy in [1/cm].

                     • mu2 (np.array): absorption coefficient for the scattered energy in [1/cm].

                     • alpha (float): incident angle relative to plane normal in [deg].

                     • beta  (float): exit angle relative to plane normal [deg] (for transmission
                       geometry use beta < 0).

                     • samthick (float): sample thickness in [cm].

              Returns:

                     • ac (np.array): absorption  correction  factor.  Multiply  this  with  your
                       measured spectrum.

       XRStools.xrs_utilities.addch(xold, yold, n, n0=0, errors=None)
               ADDCH: Adds the contents of given adjacent channels together.
               [x2,y2] = addch(x,y,n,n0)
               x  = original x-scale (row or column vector)
               y  = original y-values (row or column vector)
               n  = number of channels to be summed up
               n0 = offset for adding, default is 0
               x2 = new x-scale
               y2 = new y-values
               KH 17.09.1990, modified 29.05.1995 to include offset

       XRStools.xrs_utilities.bidiag_reduction(A)
               function [U,B,V] = bidiag_reduction(A)
               Algorithm 6.5-1 in Golub & Van Loan, Matrix Computations, Johns Hopkins University
               Press. Finds an upper bidiagonal matrix B so that A = U*B*V' with U, V orthogonal.
               A is an m x n matrix.

       XRStools.xrs_utilities.bootstrapCNNMF(A, F_ini, C_ini, F_up, C_up, Niter)
              bootstrapCNNMF Constrained non-negative matrix factorization with bootstrapping for
              error estimates.

       XRStools.xrs_utilities.bootstrapCNNMF_old(A, k, Aerr, F_ini, C_ini, F_up, C_up, Niter=100)
              bootstrapCNNMF Constrained non-negative matrix factorization with bootstrapping for
              error estimates.

       XRStools.xrs_utilities.bragg(hkl, e, xtal='Si')
               BRAGG: Calculates the Bragg angle for a given reflection, in RAD.
               output = bangle(hkl, e, xtal)
               hkl  can be a matrix, e.g. hkl = [1,0,0 ; 1,1,1]
               e    = energy in keV
               xtal = 'Si', 'Ge', etc. (check dspace.m) or d0 (Si default)
               KH 28.09.93
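
               For orientation, a stand-alone Python sketch of the underlying relation (Bragg's
               law for a cubic crystal); the silicon lattice constant below is an assumption of
               this example, whereas bragg()/dspace() in the package handle further materials and
               input forms.

                  import numpy as np

                  def bragg_angle_deg(hkl, energy_keV, a_lattice=5.4309):
                      # d-spacing of a cubic crystal: d = a / sqrt(h^2 + k^2 + l^2)
                      d = a_lattice / np.sqrt(np.sum(np.asarray(hkl, dtype=float)**2))
                      # photon wavelength in Angstrom: lambda = 12.3984 / E[keV]
                      lam = 12.398419 / energy_keV
                      # Bragg's law: lambda = 2 d sin(theta)
                      return np.degrees(np.arcsin(lam / (2.0 * d)))

                  # e.g. bragg_angle_deg([6, 6, 0], 9.69) for the Si(660) reflection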

       class XRStools.xrs_utilities.bragg_refl(crystal, hkl, alpha=0.0)
              Bases: object

              Dynamical theory of diffraction.

              get_chi(energy, crystal=None, hkl=None)

              get_nff(nff_path=None)

              get_polarization_factor(tth, case='sigma')
                     Calculate polarization factor.

              get_reflectivity(energy, delta_theta, case='sigma')

              get_reflectivity_bent(energy, delta_theta, R)

       XRStools.xrs_utilities.braggd(hkl, e, xtal='Si')
               BRAGGD: Calculates the Bragg angle for a given reflection, in deg (calls BRAGG.M).
               output = bangle(hkl, e, xtal)
               hkl  can be a matrix, e.g. hkl = [1,0,0 ; 1,1,1]
               e    = energy in keV
               xtal = 'Si', 'Ge', etc. (check dspace.m) or d0 (Si default)
               KH 28.09.93

       XRStools.xrs_utilities.cNNMF_chris(A, W_fixed, W_free, maxIter=100, verbose=True)

       XRStools.xrs_utilities.cixsUBfind(x, G, Q_sample, wi, wo, lambdai, lambdao)
              cixsUBfind

       XRStools.xrs_utilities.cixsUBgetAngles_primo(Q)

       XRStools.xrs_utilities.cixsUBgetAngles_secondo(Q)

       XRStools.xrs_utilities.cixsUBgetAngles_terzo(Q)

       XRStools.xrs_utilities.cixsUBgetQ_primo(tthv, tthh, psi)
               Returns Q0 given the detector position (tthv, tthh) and the crystal orientation.
               This orientation is calculated considering:

                  the Bragg condition and the rotation around the G vector:
                         this rotation is defined by psi, which is a rotation around G.

       XRStools.xrs_utilities.cixsUBgetQ_secondo(tthv, tthh, psi)

       XRStools.xrs_utilities.cixsUBgetQ_terzo(tthv, tthh, psi)

       XRStools.xrs_utilities.cixs_primo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_primo

       XRStools.xrs_utilities.cixs_secondo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_secondo

       XRStools.xrs_utilities.cixs_terzo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_terzo

       XRStools.xrs_utilities.compute_matrix_elements(R1, R2, k, r)

       XRStools.xrs_utilities.con2mat(x, W, H, W_up, H_up)

       XRStools.xrs_utilities.constrained_mf(A, W_ini, W_up, coeff_ini,  coeff_up,  maxIter=1000,
       tol=1e-08, maxIter_power=1000)
              cfactorizeOffDiaMatrix  constrained  version  of factorizeOffDiaMatrix Returns main
              components from an off-diagonal Matrix (energy-loss x angular-departure).

       XRStools.xrs_utilities.constrained_svd(M,  U_ini,  S_ini,  VT_ini,  U_up,  max_iter=10000,
       verbose=False)
              constrained_nnmf Approximate singular value decomposition with constraints.

              function                 [U,                 S,                 V]                =
              constrained_svd(M,U_ini,S_ini,V_ini,U_up,max_iter=10000,verbose=False)

       XRStools.xrs_utilities.convertSplitEDF2EDF(foldername)
              converts the old style EDF files (one  image  for  horizontal  and  one  image  for
              vertical chambers) to the new style EDF (one single image).

              Arg:

                     foldername (str): Path to folder with all the EDF-files to be
                            converted.

       XRStools.xrs_utilities.convg(x, y, fwhm)
               Convolution with a Gaussian.
               x    = x-vector
               y    = y-vector
               fwhm = full width at half maximum of the Gaussian with which y is convoluted

       XRStools.xrs_utilities.convtoprim(hklconv)
              convtoprim converts diamond structure reciprocal lattice expressed in  conventional
              lattice vectors to primitive one (Helsinki -> Palaiseau conversion) from S. Huotari

       XRStools.xrs_utilities.cshift(w1, th)
              cshift Calculates Compton peak position.

              Args:

                     • w1 (float, array): Incident energy in [keV].

                     • th (float): Scattering angle in [deg].

              Returns:

                      • w2 (float, array): Energy of Compton peak in [keV].

               Function adapted from Keijo Hamalainen.
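
               A minimal sketch of the Compton formula that such a routine evaluates
               (illustrative only, not necessarily identical to cshift):

                  import numpy as np

                  def compton_peak_keV(w1, th_deg):
                      # w2 = w1 / (1 + (w1/m_e c^2) * (1 - cos(theta))), with m_e c^2 ~ 511 keV
                      mec2 = 510.998950
                      return w1 / (1.0 + (w1 / mec2) * (1.0 - np.cos(np.radians(th_deg))))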

       XRStools.xrs_utilities.delE_JohannAberration(E, A, R, Theta)
              Calculates the Johann aberration of a spherical analyzer crystal.

               Args:  E     (float): Working energy in [eV].
                      A     (float): Analyzer aperture [mm].
                      R     (float): Radius of the Rowland circle [mm].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                      Johann aberration in [eV].

       XRStools.xrs_utilities.delE_dicedAnalyzerIntrinsic(E, Dw, Theta)
              Calculates the intrinsic energy resolution of a diced crystal analyzer.

               Args:  E     (float): Working energy in [eV].
                      Dw    (float): Darwin width of the used reflection [microRad].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Intrinsic energy resolution of a perfect analyzer crystal.

       XRStools.xrs_utilities.delE_offRowland(E, z, A, R, Theta)
              Calculates the off-Rowland contribution of a spherical analyzer crystal.

               Args:  E     (float): Working energy in [eV].
                      z     (float): Off-Rowland distance [mm].
                      A     (float): Analyzer aperture [mm].
                      R     (float): Radius of the Rowland circle [mm].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Off-Rowland contribution in [eV] to the energy resolution.

       XRStools.xrs_utilities.delE_pixelSize(E, p, R, Theta)
              Calculates the pixel size contribution  to  the  resolution  function  of  a  diced
              analyzer crystal.

               Args:  E     (float): Working energy in [eV].
                      p     (float): Pixel size in [mm].
                      R     (float): Radius of the Rowland circle [mm].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Pixel  size  contribution  in  [eV]  to  the  energy  resolution for a diced
                     analyzer crystal.

       XRStools.xrs_utilities.delE_sourceSize(E, s, R, Theta)
              Calculates the source size contribution to the resolution function.

               Args:  E     (float): Working energy in [eV].
                      s     (float): Source size in [mm].
                      R     (float): Radius of the Rowland circle [mm].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Source size contribution in [eV] to the energy resolution.

       XRStools.xrs_utilities.delE_stressedCrystal(E, t, v, R, Theta)
               Calculates the stress-induced contribution to the resolution function of a
               spherically bent crystal analyzer.

               Args:  E     (float): Working energy in [eV].
                      t     (float): Absorption length in the analyzer material [mm].
                      v     (float): Poisson ratio of the analyzer material.
                      R     (float): Radius of the Rowland circle [mm].
                      Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Stress-induced contribution in [eV] to the energy resolution.

       XRStools.xrs_utilities.diode(current, energy, thickness=0.03)
              diode Calculates the number of photons incident for a Si PIPS diode.

              Args:

                     • current (float): Diode current in [pA].

                     • energy (float): Photon energy in [keV].

                     • thickness (float): Thickness of Si active layer in [cm].

              Returns:

                     • flux (float): Number of photons per second.

              Function adapted from Matlab function by S. Huotari.

       XRStools.xrs_utilities.dspace(hkl=[6, 6, 0], xtal='Si')
               DSPACE: Gives the d-spacing for a given xtal.
               d = dspace(hkl, xtal)
               hkl  can be a matrix, e.g. hkl = [1,0,0 ; 1,1,1]
               xtal = 'Si', 'Ge', 'LiF', 'InSb', 'C', 'Dia', 'Li' (case insensitive)
               if xtal is a number, it is used as d0
               KH 28.09.93, SH 2005

        class XRStools.xrs_utilities.dtxrd(hkl, energy, crystal='Si', asym_angle=0.0,
        angular_range=[-0.0005, 0.0005], angular_step=1e-08)
              Bases: object

              class to hold all things dynamic theory of diffraction.

              get_anomalous_absorption(energy=None)

              get_eta(angular_range, angular_step=1e-08)

              get_extinction_length(energy=None)

              get_reflection_width()

              get_reflectivity(angular_range=None, angular_step=None)

              set_asymmetry(alpha)
                     negative alpha -> more grazing incidence

              set_energy(energy)

              set_hkl(hkl)

        XRStools.xrs_utilities.dtxrd_anomalous_absorption(energy, hkl, alpha=0.0, crystal='Si',
        angular_range=array([-0.0005]))

       XRStools.xrs_utilities.dtxrd_extinction_length(energy, hkl, alpha=0.0, crystal='Si')

        XRStools.xrs_utilities.dtxrd_reflectivity(energy, hkl, alpha=0.0, crystal='Si',
        angular_range=array([-0.0005]))

       XRStools.xrs_utilities.e2pz(w1, w2, th)
              Calculates the momentum scale and the relativistic Compton cross section correction
              according to P. Holm, PRA 37, 3706 (1988).

              This  function  is  translated  from  Keijo  Hamalainen's Matlab implementation (KH
              29.05.96).

              Args:

                     • w1 (float or np.array): incident energy in [keV]

                     • w2 (float or np.array): scattered energy in [keV]

                     • th (float): scattering angle two theta in [deg]

              returns:

                     • pz (float or np.array): momentum scale in [a.u.]

                     • cf (float or np.array): cross section correction factor such that: J(pz) =
                       cf * d^2(sigma)/d(w2)*d(Omega) [barn/atom/keV/srad]

       XRStools.xrs_utilities.edfread(filename)
              reads edf-file with filename "filename" OUTPUT:    data = 256x256 numpy array

       XRStools.xrs_utilities.edfread_test(filename)
              reads edf-file with filename "filename" OUTPUT:    data = 256x256 numpy array

               Here is how the HH data was opened:

                  data = np.fromfile(f, np.int32)
                  image = np.reshape(data, (dim, dim))

       XRStools.xrs_utilities.element(z)
              Converts atomic number into string of the element symbol and vice versa.

               Returns the atomic number of the given element if z is a string of the element
               symbol, or the string of the element symbol if z is the atomic number.

              Args:

                     • z (string or int): string of the element symbol or atomic number.

              Returns:

                     • Z (string or int): string of the element symbol or atomic number.
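
               Hypothetical usage, following the description above:

                  from XRStools import xrs_utilities as xu

                  xu.element('Si')   # expected to return the atomic number 14
                  xu.element(14)     # expected to return the symbol 'Si'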

       XRStools.xrs_utilities.energy(d, ba)
               ENERGY: Calculates the energy corresponding to the Bragg angle for a given
               d-spacing.
               e = energy(dspace, bragg_angle)
               dspace      = d-spacing of the reflection
               bragg_angle = Bragg angle in DEG
               KH 28.09.93

       XRStools.xrs_utilities.energy_monoangle(angle, d=1.6374176589984608)
               ENERGY: Calculates the energy corresponding to the Bragg angle for a given
               d-spacing.
               e = energy(dspace, bragg_angle)
               dspace      = d-spacing of the reflection (default is for the Si(311) reflection)
               bragg_angle = Bragg angle in DEG
               KH 28.09.93

       XRStools.xrs_utilities.fermi(rs)
              fermi  Calculates  the plasmon energy (in eV), Fermi energy (in eV), Fermi momentum
              (in a.u.), and critical plasmon cut-off vector (in a.u.).

              Args:

                     • rs (float): electron separation parameter

              Returns:

                     • wp (float): plasmon energy (in eV)

                     • ef (float): Fermi energy (in eV)

                     • kf (float): Fermi momentum (in a.u.)

                     • kc (float): critical plasmon cut-off vector (in a.u.)

              Based on Matlab function from A. Soininen.
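
               For reference, a minimal sketch of the standard jellium relations behind the first
               three quantities (the cut-off vector kc is omitted); an illustration only, not the
               package code.

                  import numpy as np

                  HARTREE_EV = 27.211386

                  def jellium_parameters(rs):
                      # Fermi momentum in a.u.: kf = (9*pi/4)**(1/3) / rs
                      kf = (9.0 * np.pi / 4.0)**(1.0 / 3.0) / rs
                      # Fermi energy: ef = kf**2 / 2 (Hartree), converted to eV
                      ef = 0.5 * kf**2 * HARTREE_EV
                      # plasmon energy: wp = sqrt(3 / rs**3) (Hartree), converted to eV
                      wp = np.sqrt(3.0 / rs**3) * HARTREE_EV
                      return wp, ef, kf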

       XRStools.xrs_utilities.find_center_of_mass(x, y)
              Returns the center of mass (first moment) for the given curve y(x)

       XRStools.xrs_utilities.find_diag_angles(q, x0,  U,  B,  Lab,  beam_in,  lambdai,  lambdao,
       tol=1e-08, method='BFGS')
              find_diag_angles Finds the FOURC spectrometer and sample angles for a desired q.

              Args:

                     • q (array): Desired momentum transfer in Lab coordinates.

                     • x0 (list): Guesses for the angles (tthv, tthh, chi, phi, omega).

                     • U (array): 3x3 U-matrix Lab-to-sample transformation.

                     • B   (array):   3x3   B-matrix   reciprocal   lattice   to  absolute  units
                       transformation.

                     • lambdai (float): Incident x-ray wavelength in Angstrom.

                     • lambdao (float): Scattered x-ray wavelength in Angstrom.

                      • tol (float): Tolerance for minimization (see scipy.optimize.minimize)

                     • method (str): Method for minimization (see scipy.optimize.minimize)

              Returns:

                     • ans (array): tthv, tthh, phi, chi, omega

       XRStools.xrs_utilities.fwhm(x, y)
               Finds the full width at half maximum of the curve y vs. x.
               Returns:
                      f  = FWHM
                      x0 = position of the maximum

       XRStools.xrs_utilities.gauss(x, x0, fwhm)

       XRStools.xrs_utilities.get_UB_Q(tthv, tthh, phi, chi, omega, **kwargs)
              get_UB_Q  Returns  the  momentum  transfer  and  scattering vectors for given FOURC
              spectrometer and sample angles. U-, B-matrices  and  incident/scattered  wavelength
              are passed as keyword-arguments.

              Args:

                     • tthv (float): Spectrometer vertical 2Theta angle.

                     • tthh (float): Spectrometer horizontal 2Theta angle.

                     • chi (float): Sample rotation around x-direction.

                     • phi (float): Sample rotation around y-direction.

                     • omega (float): Sample rotation around z-direction.

                     •

                       kwargs (dict): Dictionary with key-word arguments:

                              • kwargs['U'] (array): 3x3 U-matrix Lab-to-sample transformation.

                              • kwargs['B']  (array): 3x3 B-matrix reciprocal lattice to absolute
                                units transformation.

                              • kwargs['lambdai'] (float): Incident x-ray wavelength in Angstrom.

                              • kwargs['lambdao']  (float):   Scattered   x-ray   wavelength   in
                                Angstrom.

              Returns:

                     • Q_sample  (array): Momentum transfer in sample coordinates.

                     • Ki_sample (array): Incident beam direction in sample coordinates.

                     • Ko_sample (array): Scattered beam direction in sample coordinates.

       XRStools.xrs_utilities.get_gnuplot_rgb(start=None, end=None, length=None)
              get_gnuplot_rgb Prints out a progression of RGB hex-keys to use in Gnuplot.

              Args:

                     • start (array): RGB code to start from (must be numbers out of [0,1]).

                     • end   (array): RGB code to end at (must be numbers out of [0,1]).

                     • length  (int): How many colors to print out.

       XRStools.xrs_utilities.get_num_of_MD_steps(time_ps, time_step)
              Calculates  the  number of steps in an MD simulation for a desired time (in ps) and
              given step size (in a.u.)

               Args:  time_ps   (float): Desired time span (ps).
                      time_step (float): Chosen time step (a.u.).

              Returns:
                     The number of steps required to span the desired time span.

       XRStools.xrs_utilities.getpenetrationdepth(energy, formulas, concentrations, densities)
              returns  the  penetration  depth  of  a  mixture  of chemical formulas with certain
              concentrations and densities

       XRStools.xrs_utilities.gettransmission(energy,   formulas,   concentrations,    densities,
       thickness)
               returns the transmission through a sample composed of chemical formulas with given
               densities, mixed at given concentrations, for a given thickness
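
               For a single homogeneous material this reduces to the Beer-Lambert law; a minimal
               sketch (the units assumed here are mu/rho in [cm^2/g], density in [g/cm^3] and
               thickness in [cm]):

                  import numpy as np

                  def transmission(mu_over_rho, rho, thickness_cm):
                      # Beer-Lambert: T = exp(-(mu/rho) * rho * t)
                      return np.exp(-mu_over_rho * rho * thickness_cm)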

       XRStools.xrs_utilities.hex2rgb(hex_val)

       XRStools.xrs_utilities.hlike_Rwfn(n, l, r, Z)
              hlike_Rwfn Returns an array with the radial part of a hydrogen-like wave function.

              Args:

                     • n (integer): main quantum number n

                      • l (integer): orbital quantum number l

                     • r (array): vector of radii on which the function should be evaluated

                     • Z (float): effective nuclear charge

       XRStools.xrs_utilities.householder(b, k)
               function H = householder(b, k)
               Atkinson, Section 9.3, p. 611. b is a column vector, k an index < length(b).
               Constructs a matrix H that annihilates entries in the product H*b below index k.

               $Id: householder.m,v 1.1 2008-01-16 15:33:30 mike Exp $, M. M. Sussman

       XRStools.xrs_utilities.interpolate_M(xc, xi, yi, i0)
                 Linear interpolation scheme after Martin Sundermann that conserves the  absolute
                 number of counts.

                 ONLY WORKS FOR EQUALLY/EVENLY SPACED XC, XI!

                  Args:  xc (np.array): The x-coordinates of the interpolated values.
                         xi (np.array): The x-coordinates of the data points, must be increasing.
                         yi (np.array): The y-coordinates of the data points, same length as xi.
                         i0 (np.array): Normalization values for the data points, same length as
                                        xi.

                 Returns:
                        ic (np.array): The interpolated and normalized data points.

               from scipy.interpolate import Rbf
               import numpy as np
               x = np.arange(20); d = np.zeros(len(x)); d[10] = 1
               xc = np.arange(0.5, 19.5)
               rbfi = Rbf(x, d); di = rbfi(xc)

       XRStools.xrs_utilities.is_allowed_refl_fcc(H)
              is_allowed_refl_fcc Check if given reflection is allowed for a FCC lattice.

              Args:

                     • H (array, list, tuple): H=[h,k,l]

              Returns:

                     • boolean
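
               The FCC selection rule itself is simple (h, k, l must be all even or all odd); a
               minimal stand-alone sketch, which may differ from the package routine in how edge
               cases are handled:

                  def fcc_reflection_allowed(H):
                      # allowed if all Miller indices are even or all are odd
                      h, k, l = (int(round(x)) for x in H)
                      return len({h % 2, k % 2, l % 2}) == 1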

       XRStools.xrs_utilities.lindhard_pol(q, w, rs=3.93, use_corr=False, lifetime=0.28)
              lindhard_pol  Calculates  the  Lindhard polarizability function (RPA) for certain q
              (a.u.), w (a.u.) and rs (a.u.).

              Args:

                     • q (float): momentum transfer (in a.u.)

                     • w (float): energy (in a.u.)

                     • rs (float): electron parameter

                     • use_corr (boolean): if True, uses Bernardo's calculation for n(k)  instead
                       of the Fermi function.

                     • lifetime (float): life time (default is 0.28 eV for Na).

              Based on Matlab function by S. Huotari.

       XRStools.xrs_utilities.makeprofile(element,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35.0, correctasym=None)
               takes the profiles from 'makepzprofile()', converts them onto an energy-loss scale
               and normalizes them to S(q,w) [1/eV]
               input:   element  = element symbol (e.g. 'Si', 'Al', etc.)
                        filename = path and filename of the tabulated profiles
                        E0       = scattering energy [keV]
                        tth      = scattering angle [deg]
               returns: enscale = energy loss scale
                        J = total CP
                        C = only core contribution to CP
                        V = only valence contribution to CP
                        q = momentum transfer [a.u.]

       XRStools.xrs_utilities.makeprofile_comp(formula,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35, correctasym=None)
               returns the Compton profile of a chemical compound with formula 'formula'
               input:   formula  = string of a chemical formula (e.g. 'SiO2', 'Ba8Si46', etc.)
                        filename = path and filename of the tabulated profiles
                        E0       = scattering energy [keV]
                        tth      = scattering angle [deg]
               returns: eloss = energy loss scale
                        J = total CP
                        C = only core contribution to CP
                        V = only valence contribution to CP
                        q = momentum transfer [a.u.]

       XRStools.xrs_utilities.makeprofile_compds(formulas,                   concentrations=None,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35.0, correctasym=None)
               returns the sum of Compton profiles from a list of chemical compounds, weighted by
               the given concentrations

       XRStools.xrs_utilities.makepzprofile(element,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat')
              constructs  compton  profiles of element 'element' on pz-scale (-100:100 a.u.) from
              the Biggs tables provided in 'filename'

              input:

                     • element   = element symbol (e.g. 'Si', 'Al', etc.)

                     • filename  = path and filename to tabulated profiles

              returns:

                      • pzprofile  = numpy array of the CP: 1st column: pz-scale; 2nd ... nth
                        columns: Compton profile of the nth shell

                      • binden     = binding energies of the shells

                      • occupation = number of electrons in the according shells

       XRStools.xrs_utilities.mat2con(W, H, W_up, H_up)

       XRStools.xrs_utilities.mat2vec(F, C, F_up, C_up, n, k, m)

       class XRStools.xrs_utilities.maxipix_det(name, spot_arrangement)
              Bases: object

              Class to store some useful values from the detectors used. To be used for arranging
              the ROIs.

              get_det_name()

              get_pixel_range()

       XRStools.xrs_utilities.momtrans_au(e1, e2, tth)
               Calculates the momentum transfer in atomic units.
               input:   e1  = incident energy [keV]
                        e2  = scattered energy [keV]
                        tth = scattering angle [deg]
               returns: q   = momentum transfer [a.u.] (corresponding to sin(th)/lambda)

       XRStools.xrs_utilities.momtrans_inva(e1, e2, tth)
               Calculates the momentum transfer in inverse Angstrom.
               input:   e1  = incident energy [keV]
                        e2  = scattered energy [keV]
                        tth = scattering angle [deg]
               returns: q   = momentum transfer [1/A] (corresponding to sin(th)/lambda)

       XRStools.xrs_utilities.mpr(energy, compound)
               Calculates the photoelectric, elastic, and inelastic absorption of a chemical
               compound.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • compound (string): chemical sum formula (e.g. 'SiO2')

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.mpr_compds(energy, formulas, concentrations, E0, rho_formu)
              Calculates  the  photoelectric,  elastic,  and  inelastic  absorption  of  a mix of
              compounds.

              Returns the photoelectric absorption for a sum of different chemical compounds.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • formulas (list of strings): list of chemical sum formulas

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.myprho(energy,                                                   Z,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
               Calculates the photoelectric, elastic, and inelastic absorption of an element Z.
               Z can be the atomic number or the element symbol.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • Z (string or int): atomic number or string of element symbol.

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.nonzeroavg(y=None)

       XRStools.xrs_utilities.odefctn(y, t, abb0, abb1, abb7, abb8, lex, sgbeta, y0, c1)
               [T,Y] = ODE23(ODEFUN,TSPAN,Y0,OPTIONS,P1,P2,...) passes the additional parameters
               P1,P2,... to the ODE function as ODEFUN(T,Y,P1,P2...), and to all functions
               specified in OPTIONS. Use OPTIONS = [] as a place holder if no options are set.

       XRStools.xrs_utilities.odefctn_CN(yCN, t, abb0, abb1, abb7, abb8N, lex, sgbeta, y0, c1)

       XRStools.xrs_utilities.parseformula(formula)
              Parses a chemical sum formula.

               Parses the constituent elements and stoichiometries from a given chemical sum
               formula.

              Args:

                     • formula (string): string of a chemical formula  (e.g.  'SiO2',  'Ba8Si46',
                       etc.)

              Returns:

                     • elements (list): list of strings of constituting elemental symbols.

                      • stoichiometries (list): list of corresponding stoichiometries in the same
                        order as 'elements'.
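
               A minimal regular-expression sketch of this kind of parsing (illustrative; the
               package implementation may differ in detail, e.g. for bracketed groups):

                  import re

                  def parse_formula(formula):
                      # split e.g. 'Ba8Si46' into (['Ba', 'Si'], [8.0, 46.0])
                      elements, stoichiometries = [], []
                      for symbol, count in re.findall(r'([A-Z][a-z]?)(\d*\.?\d*)', formula):
                          elements.append(symbol)
                          stoichiometries.append(float(count) if count else 1.0)
                      return elements, stoichiometries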

       XRStools.xrs_utilities.plotpenetrationdepth(energy, formulas, concentrations, densities)
              opens a plot window of the penetration depth of a mixture of chemical formulas with
              certain concentrations and densities plotted along the given energy vector

       XRStools.xrs_utilities.plottransmission(energy,   formulas,   concentrations,   densities,
       thickness)
              opens a plot with the transmission plotted along the given energy vector

       XRStools.xrs_utilities.primtoconv(hklprim)
              primtoconv converts diamond structure reciprocal  lattice  expressed  in  primitive
              basis to the conventional basis (Palaiseau -> Helsinki conversion) from S. Huotari

       XRStools.xrs_utilities.pz2e1(w2, pz, th)
              Calculates the incident energy for a specific scattered photon and momentum value.

              Returns  the  incident energy for a given photon energy and scattering angle.  This
              function is translated from Keijo Hamalainen's Matlab implementation (KH 29.05.96).

              Args:

                     • w2 (float): scattered photon energy in [keV]

                     • pz (np.array): pz scale in [a.u.]

                     • th (float): scattering angle two theta in [deg]

              Returns:

                     • w1 (np.array): incident energy in [keV]

       XRStools.xrs_utilities.read_dft_wfn(element,          n,           l,           spin=None,
       directory='/usr/lib/python3/dist-packages/XRStools/resources/data')
              read_dft_wfn Parses radial parts of wavefunctions.

              Args:

                     • element (str): Element symbol.

                     • n (int): Main quantum number.

                     • l (int): Orbital quantum number.

                     • spin (str): Which spin channel, default is average over up and down.

                     • directory (str): Path to directory where the wavefunctions can be found.

              Returns:

                     • r (np.array): radius

                     • wfn (np.array):

       XRStools.xrs_utilities.readbiggsdata(filename, element)
               Reads the Hartree-Fock profile of element 'element' from values tabulated by Biggs
               et al. (Atomic Data and Nuclear Data Tables 16, 201-309 (1975)), as provided by the
               DABAX library
               (http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/ComptonProfiles.dat).
               input:   filename = path to the ComptonProfiles.dat file (the file should be
                                   distributed with this package)
                        element  = string of the element name
               returns:

                 •

                   data = the data for the according element as in the file:

                          • #UD  Columns:

                          • #UD  col1: pz in atomic units

                           • #UD  col2: Total Compton profile (sum over the atomic electrons)

                          • #UD  col3,...coln: Compton profile for the individual sub-shells

                 • occupation = occupation number of the according shells

                  • bindingen  = binding energies of the according shells

                 • colnames   = strings of column names as used in the file

       XRStools.xrs_utilities.readfio(prefix, scannumber, repnumber=0)
               if repnumber = 0: reads a spectra-file (name: prefix_scannumber.fio)
               if repnumber > 1: reads a spectra-file (name: prefix_scannumber_rrepnumber.fio)

       XRStools.xrs_utilities.readp01image(filename)
              reads a detector file from PetraIII beamline P01

       XRStools.xrs_utilities.readp01scan(prefix, scannumber)
              reads a whole scan from PetraIII beamline P01 (experimental)

       XRStools.xrs_utilities.readp01scan_rep(prefix, scannumber, repetition)
               reads a whole scan with repetitions from PetraIII beamline P01 (experimental)

       XRStools.xrs_utilities.savitzky_golay(y, window_size, order, deriv=0, rate=1)
              Smooth  (and  optionally  differentiate)  data  with  a Savitzky-Golay filter.  The
              Savitzky-Golay filter removes high frequency noise from data.  It has the advantage
              of preserving the original shape and features of the signal better than other types
              of filtering approaches, such as moving averages techniques.

              Parameters:

                     • y : array_like, shape (N,) the values of the time history of the signal.

                     • window_size : int the length of the window. Must be an odd integer number.

                      • order : int the order of the polynomial used in the filtering.  Must be
                        less than window_size - 1.

                     • deriv:  int the order of the derivative to compute (default = 0 means only
                       smoothing)

              Returns

                      • ys : ndarray, shape (N) the smoothed signal (or its n-th derivative).

               Notes: The Savitzky-Golay filter is a type of low-pass filter, particularly suited
                      for smoothing noisy data. The main idea behind this approach is to make, for
                      each point, a least-squares fit with a high-order polynomial over an
                      odd-sized window centered at the point.

              Examples

                  import numpy as np
                  import matplotlib.pyplot as plt
                  from XRStools.xrs_utilities import savitzky_golay

                  t = np.linspace(-4, 4, 500)
                  y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
                  ysg = savitzky_golay(y, window_size=31, order=4)
                  plt.plot(t, y, label='Noisy signal')
                  plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
                  plt.plot(t, ysg, 'r', label='Filtered signal')
                  plt.legend()
                  plt.show()

              References ::

              [1]  A.  Savitzky,  M.  J.  E.  Golay,  Smoothing  and  Differentiation  of Data by
                   Simplified Least Squares Procedures. Analytical Chemistry, 1964,  36  (8),  pp
                   1627-1639.

              [2]  Numerical  Recipes  3rd  Edition:  The Art of Scientific Computing W.H. Press,
                   S.A. Teukolsky, W.T. Vetterling,  B.P.  Flannery  Cambridge  University  Press
                   ISBN-13: 9780521880688

       XRStools.xrs_utilities.sgolay2d(z, window_size, order, derivative=None)

       XRStools.xrs_utilities.sigmainc(Z,                                                 energy,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
              sigmainc Calculates the Incoherent Scattering Cross Section in cm^2/g using Log-Log
              Fit.

              Args:

                     • z (int or string): Element number or elements symbol.

                     • energy (float or array): Energy (can be number or vector)

              Returns:

                      • tau (float or array): Incoherent scattering cross section in [cm**2/g]

              Adapted from original Matlab function of Keijo Hamalainen.

       XRStools.xrs_utilities.specread(filename, nscan)
              reads scan "nscan" from SPEC-file "filename"

              INPUT:

                     • filename = string with the SPEC-file name

                     • nscan    = number (int) of desired scan

              OUTPUT:

                     • data     =

                     • motors   =

                     • counters = dictionary

       XRStools.xrs_utilities.spline2(x, y, x2)
               Extrapolates the smaller and larger values as a constant

       XRStools.xrs_utilities.split_hdf5_address(dataadress)

       XRStools.xrs_utilities.stiff_compl_matrix_Si(e1, e2, e3, ansys=False)
               stiff_compl_matrix_Si Returns the stiffness and compliance tensors of Si for a
               given orientation.

              Args:

                     • e1 (np.array): unit vector normal to crystal surface

                      • e2 (np.array): unit vector in the crystal surface

                     • e3 (np.array): unit vector orthogonal to e2

              Returns:

                     • S (np.array): compliance tensor in new coordinate system

                      • C (np.array): stiffness tensor in new coordinate system

                     • E (np.array): Young's modulus in [GPa]

                     • G (np.array): shear modulus in [GPa]

                     • nu (np.array): Poisson ratio

              Copied from S.I. of L. Zhang et al. "Anisotropic  elasticity  of  silicon  and  its
              application  to  the  modelling  of  X-ray  optics."  J. Synchrotron Rad. 21, no. 3
              (2014): 507-517.

       XRStools.xrs_utilities.sumx(A)
               Short-hand command to sum over the 1st dimension of an N-D matrix (N>2) and to
               squeeze it to an (N-1)-D matrix.

       XRStools.xrs_utilities.svd_my(M, maxiter=100, eta=0.1)

        XRStools.xrs_utilities.taupgen(e, hkl=[6, 6, 0], crystals='Si', R=1.0,
        dev=array([-50., -49., ..., 148., 149.]), alpha=0.0)
               TAUPGEN: Calculates the reflectivity curves of bent crystals.
               function [refl,e,dev] = taupgen_new(e,hkl,crystals,R,dev,alpha);
               e        = fixed nominal energy in keV
               hkl      = reflection order vector, e.g. [1 1 1]
               crystals = crystal string, e.g. 'si' or 'ge'
               R        = bending radius in meters
               dev      = deviation parameter for which the curve will be calculated (vector)
                          (optional)
               alpha    = asymmetry angle
               Based on a FORTRAN program of Michael Krisch. Transliterated to Matlab by Simo
               Huotari 2006, 2007. Is far from being good Matlab writing - mostly copy&paste from
               the Fortran routines. Frankly, my dear, I don't give a damn. Complaints ->
               /dev/null

        XRStools.xrs_utilities.taupgen_amplitude(e, hkl=[6, 6, 0], crystals='Si', R=1.0,
        dev=array([-50., -49., ..., 148., 149.]), alpha=0.0)
               TAUPGEN: Calculates the reflectivity curves of bent crystals.
               function [refl,e,dev] = taupgen_new(e,hkl,crystals,R,dev,alpha);
               e        = fixed nominal energy in keV
               hkl      = reflection order vector, e.g. [1 1 1]
               crystals = crystal string, e.g. 'si' or 'ge'
               R        = bending radius in meters
               dev      = deviation parameter for which the curve will be calculated (vector)
                          (optional)
               alpha    = asymmetry angle
               Based on a FORTRAN program of Michael Krisch. Transliterated to Matlab by Simo
               Huotari 2006, 2007. Is far from being good Matlab writing - mostly copy&paste from
               the Fortran routines. Frankly, my dear, I don't give a damn. Complaints ->
               /dev/null

       XRStools.xrs_utilities.tauphoto(Z,                                                 energy,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
              tauphoto Calculates Photoelectric Cross Section in cm^2/g using Log-Log Fit.

              Args:

                     • z (int or string): Element number or elements symbol.

                     • energy (float or array): Energy (can be number or vector)

              Returns:

                     • tau (float or array): Photoelectric cross section in [cm**2/g]

              Adapted from original Matlab function of Keijo Hamalainen.

       XRStools.xrs_utilities.unconstrained_mf(A, numComp=3, maxIter=1000, tol=1e-08)
              unconstrained_mf Returns main components from an off-diagonal Matrix (energy-loss x
              angular-departure),  using  the  power  method  iteratively  on  the different main
              components.

       XRStools.xrs_utilities.vangle(v1, v2)
              vangle Calculates the angle between two cartesian vectors v1 and v2 in degrees.

              Args:

                     • v1 (np.array): first vector.

                     • v2 (np.array): second vector.

              Returns:

                     • th (float): angle between first and second vector.

               Function by S. Huotari, adapted for Python.
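
               A minimal equivalent sketch based on the normalized dot product:

                  import numpy as np

                  def angle_deg(v1, v2):
                      # angle between two Cartesian vectors, clipped against rounding errors
                      c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
                      return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))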

       XRStools.xrs_utilities.vec2mat(x, F, C, F_up, C_up, n, k, m)

       XRStools.xrs_utilities.vrot(v, vaxis, phi)
              vrot Rotates a vector around a given axis.

              Args:

                     • v (np.array): vector to be rotated

                     • vaxis (np.array): rotation axis

                     • phi (float): angle [deg] respecting the right-hand rule

              Returns:

                     • v2 (np.array): new rotated vector

               Function by S. Huotari (2007), adapted to Python.
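
               A minimal sketch of the Rodrigues rotation formula that such a routine typically
               implements (illustrative, not necessarily identical to vrot):

                  import numpy as np

                  def rotate_about_axis(v, axis, phi_deg):
                      # v' = v cos(phi) + (k x v) sin(phi) + k (k.v) (1 - cos(phi))
                      k = np.asarray(axis, dtype=float)
                      k = k / np.linalg.norm(k)
                      v = np.asarray(v, dtype=float)
                      a = np.radians(phi_deg)
                      return (v * np.cos(a) + np.cross(k, v) * np.sin(a)
                              + k * np.dot(k, v) * (1.0 - np.cos(a)))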

       XRStools.xrs_utilities.vrot2(vector1, vector2, angle)
              rotMatrix Rotate vector1 around vector2 by an angle.

       XRStools.xrs_utilities.xas_fluo_correct(ene,  mu,  formula,  fluo_ene,  edge_ene,   angin,
       angout)
              xas_fluo_correct  Fluorescence yield over-absorption correction as in Larch/Athena.
              see: https://www3.aps.anl.gov/haskel/FLUO/Fluo-manual.pdf

              Args:

                     • ene (np.array): energy axis in [keV]

                     • mu (np.array): measured fluorescence spectrum

                     • formula (str): chemical sum formulas (e.g. 'SiO2')

                     • fluo_ene (float): energy in keV of main fluorescence line

                     • edge_ene (float): edge energy in [keV]

                     • angin (float): incidence angle (relative to sample normal) [deg.]

                     • angout (float): exit angle (relative to sample normal) [deg.]

              Returns:

                     • ene (np.array): energy axis in [keV]

                     • mu_corr (np.array): corrected fluorescence spectrum

   XRStools.XRStool Package
   XRStools.xrs_calctools Module
       XRStools.xrs_calctools.alterGROatomNames(filename, oldName, newName)

       XRStools.xrs_calctools.axsfTrajParser(filename)
              axsfTrajParser

       XRStools.xrs_calctools.beta(a, b, size=None)
              Draw samples from a Beta distribution.

              The Beta distribution is a special case  of  the  Dirichlet  distribution,  and  is
              related to the Gamma distribution.  It has the probability distribution function

                      f(x; a, b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},

               where the normalization, B, is the beta function,

                      B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.

              It is often seen in Bayesian inference and order statistics.

              NOTE:
                 New  code should use the beta method of a default_rng() instance instead; please
                 see the random-quick-start.

              a      float or array_like of floats Alpha, positive (>0).

              b      float or array_like of floats Beta, positive (>0).

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single  value  is  returned  if  a  and  b  are  both  scalars.   Otherwise,
                     np.broadcast(a, b).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized beta distribution.

              Generator.beta: which should be used for new code.

       XRStools.xrs_calctools.binomial(n, p, size=None)
              Draw samples from a binomial distribution.

              Samples  are drawn from a binomial distribution with specified parameters, n trials
               and p probability of success, where n is an integer >= 0 and p is in the interval
              [0,1]. (n may be input as a float, but it is truncated to an integer in use)

              NOTE:
                 New  code  should  use  the binomial method of a default_rng() instance instead;
                 please see the random-quick-start.

              n      int or array_like of ints Parameter of the distribution, >=  0.  Floats  are
                     also accepted, but they will be truncated to integers.

              p      float or array_like of floats Parameter of the distribution, >= 0 and <=1.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  n  and  p  are  both  scalars.   Otherwise,
                     np.broadcast(n, p).size samples are drawn.

              out    ndarray  or  scalar  Drawn   samples   from   the   parameterized   binomial
                     distribution, where each sample is equal to the number of successes over the
                     n trials.

              scipy.stats.binom
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.binomial: which should be used for new code.

              The probability density for the binomial distribution is

                                      P(N) = \binom{n}{N}p^N(1-p)^{n-N},

              where  n  is  the  number  of trials, p is the probability of success, and N is the
              number of successes.

              When estimating the standard error of a proportion  in  a  population  by  using  a
              random sample, the normal distribution works well unless the product p*n <=5, where
              p = population proportion estimate, and n = number of samples, in  which  case  the
              binomial  distribution  is used instead. For example, a sample of 15 people shows 4
              who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4,
              so the binomial distribution should be used in this case.

       [1]  Dalgaard, Peter, "Introductory Statistics with R", Springer-Verlag, 2002.

       [2]  Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill, Fifth Edition, 2002.

       [3]  Lentner, Marvin, "Elementary Applied Statistics", Bogden and Quigley, 1972.

       [4]  Weisstein,  Eric  W. "Binomial Distribution." From MathWorld--A Wolfram Web Resource.
            http://mathworld.wolfram.com/BinomialDistribution.html

       [5]  Wikipedia,                          "Binomial                          distribution",
            https://en.wikipedia.org/wiki/Binomial_distribution

            Draw samples from the distribution:

            >>> n, p = 10, .5  # number of trials, probability of each trial
            >>> s = np.random.binomial(n, p, 1000)
            # result of flipping a coin 10 times, tested 1000 times.

            A real world example. A company drills 9 wild-cat oil exploration wells, each with an
            estimated probability of success of 0.1. All nine wells fail. What is the probability
            of that happening?

            Let's do 20,000 trials of the model, and count the number that generate zero positive
            results.

            >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000.
            # answer = 0.38885, or 38%.

       XRStools.xrs_calctools.boxParser(filename)
              parseXYZfile Reads an xyz-style file.

       XRStools.xrs_calctools.broaden_diagram(e,   s,   params=[1.0,    1.0,    537.5,    540.0],
       npoints=1000)
               function [e2,s2] = broaden_diagram2(e,s,params,npoints)

               BROADEN_DIAGRAM2: broaden a StoBe line diagram.

               [ENE2,SQW2] = BROADEN_DIAGRAM2(ENE,SQW,PARAMS,NPOINTS) gives the broadened
               spectrum SQW2(ENE2) of the line spectrum SQW(ENE). Each line is substituted
               with a Gaussian peak, the FWHM of which is determined by PARAMS. ENE2 is a
               linear scale of length NPOINTS (default 1000).

               PARAMS = [f_min f_max e_min e_max]

               For ENE <= e_min, FWHM = f_min; for ENE >= e_max, FWHM = f_max; the FWHM
               increases linearly from f_min to f_max between e_min and e_max.

               T Pylkkanen @ 2008-04-18 [17:37]

       XRStools.xrs_calctools.broaden_linear(spec, params=[0.8, 8, 537.5, 550], npoints=1000)
               Broadens a spectrum with a Gaussian of width params[0] below params[2] and
               width params[1] above params[3]; the width increases linearly in between.
               Returns a two-column numpy array of length npoints with the energy and the
               broadened spectrum.
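
               A small sketch (line positions and intensities are made up):

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import broaden_linear
               >>> spec = np.array([[536.0, 1.0], [540.0, 0.5], [545.0, 0.2]])
               >>> broadened = broaden_linear(spec, params=[0.8, 8, 537.5, 550], npoints=1000)
               >>> broadened.shape   # expected: (1000, 2), energy and broadened intensity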

       XRStools.xrs_calctools.calculateCOMlist(atomList)
              calculateCOMlist Calculates center of mass for a list of atoms.

       XRStools.xrs_calctools.calculateRIJhist(atoms, boxLength, DELR=0.01, MAXBIN=1000)

       XRStools.xrs_calctools.calculateRIJhist2_arb(atoms1,   atoms2,    lattice,    lattice_inv,
       DELR=0.01, MAXBIN=1000)

       XRStools.xrs_calctools.calculateRIJhist_arb(atoms1,    atoms2,    lattice,    lattice_inv,
       DELR=0.01, MAXBIN=1000)

       XRStools.xrs_calctools.changeOHBondLength(h2oMol,  fraction,  boxLength=None,   oName='O',
       hName='H')

       XRStools.xrs_calctools.chisquare(df, size=None)
              Draw samples from a chi-square distribution.

              When df independent random variables, each with standard normal distributions (mean
              0, variance 1), are squared and summed, the resulting  distribution  is  chi-square
              (see Notes).  This distribution is often used in hypothesis testing.

              NOTE:
                 New  code  should  use the chisquare method of a default_rng() instance instead;
                 please see the random-quick-start.

              df     float or array_like of floats Number of degrees of freedom, must be > 0.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if df is a  scalar.   Otherwise,  np.array(df).size
                     samples are drawn.

              out    ndarray   or   scalar   Drawn  samples  from  the  parameterized  chi-square
                     distribution.

              ValueError
                     When df <= 0 or when an inappropriate size (e.g. size=-1) is given.

              Generator.chisquare: which should be used for new code.

              The variable obtained by summing the squares of df independent,  standard  normally
              distributed random variables:

                                      Q = \sum_{i=0}^{\mathtt{df}} X^2_i

              is chi-square distributed, denoted

                                               Q \sim \chi^2_k.

              The probability density function of the chi-squared distribution is

                           p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},

              where \Gamma is the gamma function,

                                \Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.

       [1]  NIST                 "Engineering                 Statistics                Handbook"
            https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm

            >>> np.random.chisquare(2,4)
            array([ 1.89920014,  9.00867716,  3.13710533,  5.62318272]) # random

       XRStools.xrs_calctools.choice(a, size=None, replace=True, p=None)
              Generates a random sample from a given 1-D array

              New in version 1.7.0.

              NOTE:
                 New code should use the choice  method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              a      1-D  array-like  or int If an ndarray, a random sample is generated from its
                     elements.  If an  int,  the  random  sample  is  generated  as  if  it  were
                     np.arange(a)

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              replace
                     boolean, optional Whether the sample is with or without replacement. Default
                     is True, meaning that a value of a can be selected multiple times.

              p      1-D array-like, optional The probabilities associated with each entry in  a.
                     If  not given, the sample assumes a uniform distribution over all entries in
                     a.

              samples
                     single item or ndarray The generated random samples

              ValueError
                     If a is an int and less than zero, if a or p are not 1-dimensional, if a  is
                     an  array-like  of size 0, if p is not a vector of probabilities, if a and p
                     have different lengths, or if replace=False and the sample size  is  greater
                     than the population size

               randint, shuffle, permutation

               Generator.choice: which should be used in new code

              Setting  user-specified  probabilities  through  p  uses  a  more  general but less
              efficient sampler than the default. The general sampler produces a different sample
              than the optimized sampler even if each element of p is 1 / len(a).

              Sampling  random  rows  from a 2-D array is not possible with this function, but is
              possible with Generator.choice through its axis keyword.

              Generate a uniform random sample from np.arange(5) of size 3:

              >>> np.random.choice(5, 3)
              array([0, 3, 4]) # random
              >>> #This is equivalent to np.random.randint(0,5,3)

              Generate a non-uniform random sample from np.arange(5) of size 3:

              >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
              array([3, 3, 0]) # random

              Generate a uniform random sample from np.arange(5) of size 3 without replacement:

              >>> np.random.choice(5, 3, replace=False)
              array([3,1,0]) # random
              >>> #This is equivalent to np.random.permutation(np.arange(5))[:3]

              Generate  a  non-uniform  random  sample  from  np.arange(5)  of  size  3   without
              replacement:

              >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
              array([2, 3, 0]) # random

              Any  of  the  above  can  be  repeated with an arbitrary array-like instead of just
              integers. For instance:

              >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
              >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
              array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
                    dtype='<U11')

       XRStools.xrs_calctools.convg(x, y, fwhm)
              Convolution with Gaussian
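
               A minimal sketch, assuming x is the abscissa, y the signal, and fwhm is given
               in the same units as x:

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import convg
               >>> x = np.linspace(-10.0, 10.0, 501)
               >>> y = np.zeros_like(x)
               >>> y[250] = 1.0                        # a single sharp line at x = 0
               >>> y_broad = convg(x, y, 1.5)          # smeared into a Gaussian of 1.5 FWHM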

       XRStools.xrs_calctools.countHbonds(mol1, mol2, Roocut=3.6, Rohcut=2.4, Aoooh=30.0)

       XRStools.xrs_calctools.countHbonds_orig(mol1, mol2, Roocut=3.6, Rohcut=2.4, Aoooh=30.0)

       XRStools.xrs_calctools.countHbonds_pbc(mol1,  mol2,  boxLength,  Roocut=3.6,   Rohcut=2.4,
       Aoooh=30.0)

       XRStools.xrs_calctools.count_HBonds_pbc_arb(mol1,  mol2, lattice, lattice_inv, Roocut=3.6,
       Rohcut=2.4, Aoooh=30.0)

       XRStools.xrs_calctools.count_OO_neighbors(list_of_o_atoms, Roocut, boxLength=None)

       XRStools.xrs_calctools.count_OO_neighbors_pbc(list_of_o_atoms,     Roocut,      boxLength,
       numbershells=1)

       XRStools.xrs_calctools.cut_spec(spec, emin=None, emax=None)
               deletes rows of a matrix whose first-column value is smaller than emin or
               larger than emax
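
               For example (values are made up):

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import cut_spec
               >>> spec = np.array([[530.0, 0.1], [535.0, 0.4], [540.0, 0.9], [545.0, 0.3]])
               >>> cut_spec(spec, emin=533.0, emax=542.0)   # keeps only the 535.0 and 540.0 rows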

       XRStools.xrs_calctools.dirichlet(alpha, size=None)
              Draw samples from the Dirichlet distribution.

              Draw   size   samples   of   dimension   k   from   a   Dirichlet  distribution.  A
              Dirichlet-distributed random variable can be seen as a multivariate  generalization
              of  a  Beta  distribution.  The  Dirichlet  distribution  is a conjugate prior of a
              multinomial distribution in Bayesian inference.

              NOTE:
                 New code should use the dirichlet method of a  default_rng()  instance  instead;
                 please see the random-quick-start.

              alpha  sequence  of  floats,  length  k Parameter of the distribution (length k for
                     sample of length k).

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n), then m * n * k samples are drawn.  Default is None, in which case a
                     vector of length k is returned.

              samples
                     ndarray, The drawn samples, of shape (size, k).

              ValueError
                     If any value in alpha is less than or equal to zero

              Generator.dirichlet: which should be used for new code.

              The Dirichlet distribution is  a  distribution  over  vectors  x  that  fulfil  the
              conditions x_i>0 and \sum_{i=1}^k x_i = 1.

              The  probability  density  function p of a Dirichlet-distributed random vector X is
              proportional to

                                p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},

              where \alpha is a vector containing the positive concentration parameters.

              The method uses the following property for computation: let Y be  a  random  vector
              which  has  components  that  follow  a  standard  gamma  distribution,  then  X  =
              \frac{1}{\sum_{i=1}^k{Y_i}} Y is Dirichlet-distributed

        [1]  David MacKay, "Information Theory, Inference and Learning Algorithms," chapter 23,
            http://www.inference.org.uk/mackay/itila/

       [2]  Wikipedia,                          "Dirichlet                         distribution",
            https://en.wikipedia.org/wiki/Dirichlet_distribution

            Taking an example cited in Wikipedia, this distribution can be used if one wanted  to
            cut  strings (each of initial length 1.0) into K pieces with different lengths, where
            each piece had, on average, a designated average length, but allowing some  variation
            in the relative sizes of the pieces.

            >>> s = np.random.dirichlet((10, 5, 3), 20).transpose()

            >>> import matplotlib.pyplot as plt
            >>> plt.barh(range(20), s[0])
            >>> plt.barh(range(20), s[1], left=s[0], color='g')
            >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
            >>> plt.title("Lengths of Strings")

       class   XRStools.xrs_calctools.erkale(prefix,   postfix,   fromnumber,   tonumber,   step,
       stepformat=2)
              Bases: object

              class to analyze ERKALE XRS results.

              broaden_lin(params=[0.8, 8, 537.5, 550], npoints=1000)

              cut_broadspecs(emin=None, emax=None)

              cut_rawspecs(emin=None, emax=None)

              norm_area(emin=None, emax=None)

              norm_max()

              plot_spec()

              sum_specs()
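
               A possible workflow sketch (the file pattern and the order of the processing
               steps are illustrative, not prescriptive):

               >>> from XRStools.xrs_calctools import erkale
               >>> calc = erkale('calc_', '.dat', 1, 10, 1, stepformat=2)  # hypothetical files
               >>> calc.cut_rawspecs(emin=530.0, emax=560.0)
               >>> calc.broaden_lin(params=[0.8, 8, 537.5, 550], npoints=1000)
               >>> calc.sum_specs()
               >>> calc.norm_area(emin=530.0, emax=560.0)
               >>> calc.plot_spec()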

       XRStools.xrs_calctools.exponential(scale=1.0, size=None)
              Draw samples from an exponential distribution.

              Its probability density function is

                        f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),

              for x > 0 and 0 elsewhere. \beta is the scale parameter, which is  the  inverse  of
              the rate parameter \lambda = 1/\beta.  The rate parameter is an alternative, widely
               used parameterization of the exponential distribution [3].

              The  exponential  distribution  is  a  continuous   analogue   of   the   geometric
              distribution.   It  describes many common situations, such as the size of raindrops
               measured over many rainstorms [1], or the time between page requests to
               Wikipedia [2].

              NOTE:
                 New code should use the exponential method of a default_rng() instance  instead;
                 please see the random-quick-start.

              scale  float  or  array_like of floats The scale parameter, \beta = 1/\lambda. Must
                     be non-negative.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single   value   is   returned   if   scale   is   a   scalar.    Otherwise,
                     np.array(scale).size samples are drawn.

              out    ndarray   or   scalar  Drawn  samples  from  the  parameterized  exponential
                     distribution.

              Generator.exponential: which should be used for new code.

       [1]  Peyton Z. Peebles Jr., "Probability, Random Variables and Random Signal  Principles",
            4th ed, 2001, p. 57.

       [2]  Wikipedia, "Poisson process", https://en.wikipedia.org/wiki/Poisson_process

       [3]  Wikipedia,                         "Exponential                        distribution",
            https://en.wikipedia.org/wiki/Exponential_distribution

       XRStools.xrs_calctools.f(dfnum, dfden, size=None)
              Draw samples from an F distribution.

              Samples are drawn from an F distribution with specified parameters, dfnum  (degrees
              of  freedom in numerator) and dfden (degrees of freedom in denominator), where both
              parameters must be greater than zero.

              The random variate of the F distribution (also known as the Fisher distribution) is
              a  continuous probability distribution that arises in ANOVA tests, and is the ratio
              of two chi-square variates.

              NOTE:
                 New code should use the f method of a default_rng() instance instead; please see
                 the random-quick-start.

              dfnum  float or array_like of floats Degrees of freedom in numerator, must be > 0.

              dfden  float or array_like of float Degrees of freedom in denominator, must be > 0.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned if dfnum and dfden are both scalars.  Otherwise,
                     np.broadcast(dfnum, dfden).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Fisher distribution.

              scipy.stats.f
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.f: which should be used for new code.

              The  F  statistic is used to compare in-group variances to between-group variances.
              Calculating the distribution depends on the sampling, and so it is  a  function  of
              the respective degrees of freedom in the problem.  The variable dfnum is the number
              of samples minus one, the between-groups degrees of freedom,  while  dfden  is  the
              within-groups  degrees  of  freedom, the sum of the number of samples in each group
              minus the number of groups.

       [1]  Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill, Fifth Edition, 2002.

       [2]  Wikipedia, "F-distribution", https://en.wikipedia.org/wiki/F-distribution

             An example from Glantz [1], pp. 47-40:

            Two groups, children of diabetics  (25  people)  and  children  from  people  without
            diabetes  (25  controls).  Fasting  blood glucose was measured, case group had a mean
            value of 86.1, controls had a mean value of 82.2. Standard deviations were  2.09  and
            2.49  respectively.  Are  these  data  consistent  with  the null hypothesis that the
             parents' diabetic status does not affect their children's blood glucose levels?
            Calculating the F statistic from the data gives a value of 36.01.

            Draw samples from the distribution:

            >>> dfnum = 1. # between group degrees of freedom
            >>> dfden = 48. # within groups degrees of freedom
            >>> s = np.random.f(dfnum, dfden, 1000)

             The lower bound for the top 1% of the samples is:

            >>> np.sort(s)[-10]
            7.61988120985 # random

             So there is about a 1% chance that the F statistic will exceed 7.62; the
             measured value is 36, so the null hypothesis is rejected at the 1% level.

       XRStools.xrs_calctools.findAllWaters(point, waterMols, o_name, cutoff)

       XRStools.xrs_calctools.findHexaneMolecules(box, c_atoms, CC_cut=1.7, CH_cut=1.2)

       XRStools.xrs_calctools.findMethanolMolecules(box, CO_cut=1.6, CH_cut=1.2, OH_cut=1.2)

       XRStools.xrs_calctools.findMolecule(xyzAtoms, molAtomList)

       XRStools.xrs_calctools.find_H2O_molecules(o_atoms, h_atoms, boxLength=None)

       XRStools.xrs_calctools.find_H2O_molecules_PBC_arb(o_atoms, h_atoms, lattice,  lattice_inv,
       OH_cutoff=1.5)

       XRStools.xrs_calctools.gamma(shape, scale=1.0, size=None)
              Draw samples from a Gamma distribution.

              Samples  are  drawn  from  a  Gamma  distribution  with specified parameters, shape
              (sometimes designated "k") and scale (sometimes  designated  "theta"),  where  both
              parameters are > 0.

              NOTE:
                 New code should use the gamma method of a default_rng() instance instead; please
                 see the random-quick-start.

              shape  float or array_like of floats The shape of the gamma distribution.  Must  be
                     non-negative.

              scale  float or array_like of floats, optional The scale of the gamma distribution.
                     Must be non-negative.  Default is equal to 1.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if shape and scale are  both  scalars.   Otherwise,
                     np.broadcast(shape, scale).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized gamma distribution.

              scipy.stats.gamma
                     probability  density  function, distribution or cumulative density function,
                     etc.

              Generator.gamma: which should be used for new code.

              The probability density for the Gamma distribution is

                            p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},

              where k is the shape and \theta the scale, and \Gamma is the Gamma function.

              The Gamma distribution is often used to model the times to  failure  of  electronic
              components,  and  arises naturally in processes for which the waiting times between
              Poisson distributed events are relevant.

       [1]  Weisstein, Eric W. "Gamma Distribution."  From  MathWorld--A  Wolfram  Web  Resource.
            http://mathworld.wolfram.com/GammaDistribution.html

       [2]  Wikipedia, "Gamma distribution", https://en.wikipedia.org/wiki/Gamma_distribution

            Draw samples from the distribution:

            >>> shape, scale = 2., 2.  # mean=4, std=2*sqrt(2)
            >>> s = np.random.gamma(shape, scale, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> import scipy.special as sps
            >>> count, bins, ignored = plt.hist(s, 50, density=True)
            >>> y = bins**(shape-1)*(np.exp(-bins/scale) /
            ...                      (sps.gamma(shape)*scale**shape))
            >>> plt.plot(bins, y, linewidth=2, color='r')
            >>> plt.show()

       XRStools.xrs_calctools.gauss(x, x0, fwhm)

       XRStools.xrs_calctools.gauss1(x, x0, fwhm)
               returns a Gaussian with peak value normalized to unity; x0 is the peak
               position and fwhm the full width at half maximum

       XRStools.xrs_calctools.gauss_areanorm(x, x0, fwhm)
               area-normalized Gaussian
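
               A quick numerical sanity check of the two normalizations:

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import gauss1, gauss_areanorm
               >>> x = np.linspace(-5.0, 5.0, 1001)
               >>> gauss1(x, 0.0, 2.0).max()                 # expected: 1.0
               >>> np.trapz(gauss_areanorm(x, 0.0, 2.0), x)  # expected: close to 1.0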

       XRStools.xrs_calctools.geometric(p, size=None)
              Draw samples from the geometric distribution.

              Bernoulli trials are experiments with one of two outcomes: success or  failure  (an
              example  of  such  an  experiment  is flipping a coin).  The geometric distribution
              models the number of trials that must be run in order to achieve  success.   It  is
              therefore supported on the positive integers, k = 1, 2, ....

              The probability mass function of the geometric distribution is

                                           f(k) = (1 - p)^{k - 1} p

              where p is the probability of success of an individual trial.

              NOTE:
                 New  code  should  use the geometric method of a default_rng() instance instead;
                 please see the random-quick-start.

              p      float or array_like of floats The probability of success  of  an  individual
                     trial.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  p is a scalar.  Otherwise, np.array(p).size
                     samples are drawn.

              out    ndarray  or  scalar  Drawn  samples   from   the   parameterized   geometric
                     distribution.

              Generator.geometric: which should be used for new code.

              Draw  ten  thousand values from the geometric distribution, with the probability of
              an individual success equal to 0.35:

              >>> z = np.random.geometric(p=0.35, size=10000)

              How many trials succeeded after a single run?

              >>> (z == 1).sum() / 10000.
              0.34889999999999999 #random

       XRStools.xrs_calctools.getDistVector(atom1, atom2)

       XRStools.xrs_calctools.getDistVectorPBC_arb(atom1, atom2, lattice, lattice_inv)
              getDistVectorPBC_arb

              Calculates the distance vector between two atoms from an arbitrary  simulation  box
              using the minimum image convention.

               Args:

                      • atom1 (obj): Instance of the xyzAtom class.

                      • atom2 (obj): Instance of the xyzAtom class.

                      • lattice (np.array): Array with lattice vectors as columns.

                      • lattice_inv (np.array): Inverse of lattice.

              Returns:
                     The distance vector between the two atoms (np.array).

       XRStools.xrs_calctools.getDistVectorPbc(atom1, atom2, boxLength)

       XRStools.xrs_calctools.getDistance(atom1, atom2)

       XRStools.xrs_calctools.getDistancePBC_arb(atom1, atom2, lattice, lattice_inv)
              getDistancePBC_arb

              Calculates  the  distance  of  two atoms from an arbitrary simulation box using the
              minimum image convention.

               Args:

                      • atom1 (obj): Instance of the xyzAtom class.

                      • atom2 (obj): Instance of the xyzAtom class.

                      • lattice (np.array): Array with lattice vectors as columns.

                      • lattice_inv (np.array): Inverse of lattice.

              Returns:
                     The distance between the two atoms.
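
               For reference, the minimum image convention in an arbitrary cell can be written
               with the lattice and its inverse as below; this illustrates the general
               technique, not necessarily the exact implementation:

               >>> import numpy as np
               >>> lattice = np.array([[10.0,  0.0,  0.0],
               ...                     [ 0.0, 10.0,  0.0],
               ...                     [ 0.0,  0.0, 10.0]])   # lattice vectors as columns
               >>> lattice_inv = np.linalg.inv(lattice)
               >>> r1, r2 = np.array([0.5, 0.5, 0.5]), np.array([9.5, 0.5, 0.5])
               >>> frac = lattice_inv @ (r2 - r1)             # fractional separation
               >>> frac -= np.round(frac)                     # wrap into [-0.5, 0.5)
               >>> np.linalg.norm(lattice @ frac)             # expected: 1.0, not 9.0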

       XRStools.xrs_calctools.getDistancePbc(atom1, atom2, boxLength)

       XRStools.xrs_calctools.getDistsFromMolecule(point, listOfMolecules, o_name=None)

       XRStools.xrs_calctools.getPeriodicTestBox(xyzAtoms, boxLength, numbershells=1)

       XRStools.xrs_calctools.getPeriodicTestBox_arb(xyzAtoms, lattice, lattice_inv, lx=[- 1, 1],
       ly=[- 1, 1], lz=[- 1, 1])

       XRStools.xrs_calctools.getPeriodicTestBox_molecules(Molecules, boxLength, numbershells=1)

       XRStools.xrs_calctools.getTetraParameter(o_atoms, boxLength=None)
              according to NATURE, VOL 409, 18 JANUARY 2001

       XRStools.xrs_calctools.getTranslVec(atom1, atom2, boxLength)
              getTranslVec  Returns  the  translation vector that brings atom2 closer to atom1 in
              case atom2 is further than boxLength away.

       XRStools.xrs_calctools.getTranslVec_geocen(mol1COM, mol2COM, boxLength)
              getTranslVec_geocen

       XRStools.xrs_calctools.get_state()
              Return a tuple representing the internal state of the generator.

              For more details, see set_state.

              legacy bool, optional Flag indicating to return  a  legacy  tuple  state  when  the
                     BitGenerator is MT19937, instead of a dict.

              out    {tuple(str, ndarray of 624 uints, int, int, float), dict} The returned tuple
                     has the following items:

                     1. the string 'MT19937'.

                     2. a 1-D array of 624 unsigned integer keys.

                     3. an integer pos.

                     4. an integer has_gauss.

                     5. a float cached_gaussian.

                     If legacy is False, or the  BitGenerator  is  not  MT19937,  then  state  is
                     returned as a dictionary.

              set_state

              set_state and get_state are not needed to work with any of the random distributions
              in NumPy. If the internal state is manually altered, the user should  know  exactly
              what he/she is doing.

       XRStools.xrs_calctools.groBoxParser(filename, nanoMeter=True)
               groBoxParser Parses a Gromacs GRO-style file for the xyzBox class.

       XRStools.xrs_calctools.groTrajecParser(filename, nanoMeter=True)
               groTrajecParser Parses a Gromacs GRO-style trajectory file for the xyzBox class.

       XRStools.xrs_calctools.gumbel(loc=0.0, scale=1.0, size=None)
              Draw samples from a Gumbel distribution.

              Draw  samples  from  a  Gumbel distribution with specified location and scale.  For
              more information on the Gumbel distribution, see Notes and References below.

              NOTE:
                 New code should use the gumbel  method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              loc    float  or  array_like  of  floats,  optional The location of the mode of the
                     distribution. Default is 0.

              scale  float  or  array_like  of  floats,  optional  The  scale  parameter  of  the
                      distribution. Default is 1. Must be non-negative.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  loc and scale are both scalars.  Otherwise,
                     np.broadcast(loc, scale).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Gumbel distribution.

               scipy.stats.gumbel_l, scipy.stats.gumbel_r, scipy.stats.genextreme, weibull

               Generator.gumbel: which should be used for new code.

              The  Gumbel  (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I)
              distribution is one of a class of Generalized  Extreme  Value  (GEV)  distributions
              used  in  modeling  extreme  value  problems.   The Gumbel is a special case of the
              Extreme  Value  Type  I  distribution  for   maximums   from   distributions   with
              "exponential-like" tails.

              The probability density for the Gumbel distribution is

                    p(x) = \frac{e^{-(x - \mu)/\beta}}{\beta} e^{-e^{-(x - \mu)/\beta}},

              where \mu is the mode, a location parameter, and \beta is the scale parameter.

              The  Gumbel (named for German mathematician Emil Julius Gumbel) was used very early
              in the hydrology literature, for modeling the occurrence of  flood  events.  It  is
              also used for modeling maximum wind speed and rainfall rates.  It is a "fat-tailed"
              distribution - the probability of an event in  the  tail  of  the  distribution  is
              larger  than  if one used a Gaussian, hence the surprisingly frequent occurrence of
              100-year floods. Floods  were  initially  modeled  as  a  Gaussian  process,  which
              underestimated the frequency of extreme events.

              It  is one of a class of extreme value distributions, the Generalized Extreme Value
              (GEV) distributions, which also includes the Weibull and Frechet.

              The  function  has  a  mean   of   \mu   +   0.57721\beta   and   a   variance   of
              \frac{\pi^2}{6}\beta^2.

       [1]  Gumbel, E. J., "Statistics of Extremes," New York: Columbia University Press, 1958.

       [2]  Reiss,  R.-D. and Thomas, M., "Statistical Analysis of Extreme Values from Insurance,
            Finance, Hydrology and Other Fields," Basel: Birkhauser Verlag, 2001.

            Draw samples from the distribution:

            >>> mu, beta = 0, 0.1 # location and scale
            >>> s = np.random.gumbel(mu, beta, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, 30, density=True)
            >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
            ...          * np.exp( -np.exp( -(bins - mu) /beta) ),
            ...          linewidth=2, color='r')
            >>> plt.show()

            Show how an extreme value distribution can arise from a Gaussian process and  compare
            to a Gaussian:

            >>> means = []
            >>> maxima = []
            >>> for i in range(0,1000) :
            ...    a = np.random.normal(mu, beta, 1000)
            ...    means.append(a.mean())
            ...    maxima.append(a.max())
            >>> count, bins, ignored = plt.hist(maxima, 30, density=True)
            >>> beta = np.std(maxima) * np.sqrt(6) / np.pi
            >>> mu = np.mean(maxima) - 0.57721*beta
            >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
            ...          * np.exp(-np.exp(-(bins - mu)/beta)),
            ...          linewidth=2, color='r')
            >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))
            ...          * np.exp(-(bins - mu)**2 / (2 * beta**2)),
            ...          linewidth=2, color='g')
            >>> plt.show()

       XRStools.xrs_calctools.hypergeometric(ngood, nbad, nsample, size=None)
              Draw samples from a Hypergeometric distribution.

              Samples  are  drawn  from  a hypergeometric distribution with specified parameters,
              ngood (ways to make a good selection), nbad (ways to make  a  bad  selection),  and
              nsample  (number  of  items sampled, which is less than or equal to the sum ngood +
              nbad).

              NOTE:
                 New code should use  the  hypergeometric  method  of  a  default_rng()  instance
                 instead; please see the random-quick-start.

              ngood  int  or array_like of ints Number of ways to make a good selection.  Must be
                     nonnegative.

              nbad   int or array_like of ints Number of ways to make a bad selection.   Must  be
                     nonnegative.

              nsample
                     int  or  array_like of ints Number of items sampled.  Must be at least 1 and
                     at most ngood + nbad.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if  ngood,  nbad,  and  nsample  are  all  scalars.
                     Otherwise, np.broadcast(ngood, nbad, nsample).size samples are drawn.

              out    ndarray  or  scalar  Drawn  samples  from  the  parameterized hypergeometric
                     distribution. Each sample is the number of  good  items  within  a  randomly
                     selected  subset  of  size  nsample taken from a set of ngood good items and
                     nbad bad items.

              scipy.stats.hypergeom
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.hypergeometric: which should be used for new code.

              The probability density for the Hypergeometric distribution is

                           P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},

              where 0 \le x \le n and n-b \le x \le g

              for  P(x)  the  probability  of  x good results in the drawn sample, g = ngood, b =
              nbad, and n = nsample.

              Consider an urn with black and white marbles in it, ngood of  them  are  black  and
              nbad   are  white.  If  you  draw  nsample  balls  without  replacement,  then  the
              hypergeometric distribution describes the distribution of black balls in the  drawn
              sample.

              Note  that  this  distribution is very similar to the binomial distribution, except
              that in this case, samples are drawn without replacement, whereas in  the  Binomial
              case  samples  are drawn with replacement (or the sample space is infinite). As the
              sample space becomes large, this distribution approaches the binomial.

       [1]  Lentner, Marvin, "Elementary Applied Statistics", Bogden and Quigley, 1972.

       [2]  Weisstein, Eric W.  "Hypergeometric  Distribution."  From  MathWorld--A  Wolfram  Web
            Resource.  http://mathworld.wolfram.com/HypergeometricDistribution.html

       [3]  Wikipedia,                       "Hypergeometric                       distribution",
            https://en.wikipedia.org/wiki/Hypergeometric_distribution

            Draw samples from the distribution:

            >>> ngood, nbad, nsamp = 100, 2, 10
            # number of good, number of bad, and number of samples
            >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000)
            >>> from matplotlib.pyplot import hist
            >>> hist(s)
            #   note that it is very unlikely to grab both bad items

            Suppose you have an urn with 15 white and 15 black marbles.  If you pull  15  marbles
            at random, how likely is it that 12 or more of them are one color?

            >>> s = np.random.hypergeometric(15, 15, 15, 100000)
            >>> sum(s>=12)/100000. + sum(s<=3)/100000.
            #   answer = 0.003 ... pretty unlikely!

       XRStools.xrs_calctools.keithBoxParser(cell_fname, coord_fname)
              keithBoxParser

              Reads structure files from Keith's SiO2 simulations.

       XRStools.xrs_calctools.laplace(loc=0.0, scale=1.0, size=None)
              Draw  samples  from  the  Laplace or double exponential distribution with specified
              location (or mean) and scale (decay).

              The Laplace distribution is similar to the  Gaussian/normal  distribution,  but  is
              sharper  at the peak and has fatter tails. It represents the difference between two
              independent, identically distributed exponential random variables.

              NOTE:
                 New code should use the laplace method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              loc    float   or  array_like  of  floats,  optional  The  position,  \mu,  of  the
                     distribution peak. Default is 0.

              scale  float or array_like of floats,  optional  \lambda,  the  exponential  decay.
                      Default is 1. Must be non-negative.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  loc and scale are both scalars.  Otherwise,
                     np.broadcast(loc, scale).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Laplace distribution.

              Generator.laplace: which should be used for new code.

              It has the probability density function

                      f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).

              The first law of Laplace, from 1774, states that the frequency of an error  can  be
              expressed  as an exponential function of the absolute magnitude of the error, which
              leads to the Laplace distribution.  For  many  problems  in  economics  and  health
              sciences,  this  distribution  seems  to  model  the  data better than the standard
              Gaussian distribution.

       [1]  Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook  of  Mathematical  Functions  with
            Formulas, Graphs, and Mathematical Tables, 9th printing," New York: Dover, 1972.

       [2]  Kotz,  Samuel,  et.  al. "The Laplace Distribution and Generalizations, " Birkhauser,
            2001.

       [3]  Weisstein, Eric W. "Laplace Distribution."  From MathWorld--A Wolfram  Web  Resource.
            http://mathworld.wolfram.com/LaplaceDistribution.html

       [4]  Wikipedia, "Laplace distribution", https://en.wikipedia.org/wiki/Laplace_distribution

            Draw samples from the distribution

            >>> loc, scale = 0., 1.
            >>> s = np.random.laplace(loc, scale, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, 30, density=True)
            >>> x = np.arange(-8., 8., .01)
            >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
            >>> plt.plot(x, pdf)

            Plot Gaussian for comparison:

            >>> g = (1/(scale * np.sqrt(2 * np.pi)) *
            ...      np.exp(-(x - loc)**2 / (2 * scale**2)))
            >>> plt.plot(x,g)

       XRStools.xrs_calctools.load_erkale_spec(filename)
              returns an erkale spectrum

       XRStools.xrs_calctools.load_erkale_specs(prefix,   postfix,  fromnumber,  tonumber,  step,
       stepformat=2)
              returns a list of erkale spectra

       XRStools.xrs_calctools.load_stobe_specs(prefix,  postfix,  fromnumber,   tonumber,   step,
       stepformat=2)
               loads a bunch of StoBe calculations whose filenames are made up of the prefix,
               the postfix, and a counter in between that runs from 'fromnumber' to 'tonumber'
               in steps of 'step' (the number of digits is 'stepformat')
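
               For example, with stepformat=3 the counter is zero-padded to three digits, so a
               call like the one below would look for files such as 'water_001.out' through
               'water_010.out' (file names are hypothetical):

               >>> from XRStools.xrs_calctools import load_stobe_specs
               >>> specs = load_stobe_specs('water_', '.out', 1, 10, 1, stepformat=3)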

       XRStools.xrs_calctools.logistic(loc=0.0, scale=1.0, size=None)
              Draw samples from a logistic distribution.

              Samples  are  drawn  from  a  logistic  distribution with specified parameters, loc
              (location or mean, also median), and scale (>0).

              NOTE:
                 New code should use the logistic method of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              loc    float  or  array_like  of  floats,  optional  Parameter of the distribution.
                     Default is 0.

              scale  float or array_like of floats, optional Parameter of the distribution.  Must
                     be non-negative.  Default is 1.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  loc and scale are both scalars.  Otherwise,
                     np.broadcast(loc, scale).size samples are drawn.

              out    ndarray  or  scalar  Drawn   samples   from   the   parameterized   logistic
                     distribution.

              scipy.stats.logistic
                     probability  density  function, distribution or cumulative density function,
                     etc.

              Generator.logistic: which should be used for new code.

              The probability density for the Logistic distribution is

                           P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},

              where \mu = location and s = scale.

              The Logistic distribution is used in Extreme Value problems where it can act  as  a
              mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation
              (FIDE) where it is used in the Elo ranking system, assuming the performance of each
              player is a logistically distributed random variable.

       [1]  Reiss,  R.-D.  and  Thomas  M.  (2001), "Statistical Analysis of Extreme Values, from
            Insurance, Finance,  Hydrology  and  Other  Fields,"  Birkhauser  Verlag,  Basel,  pp
            132-133.

       [2]  Weisstein,  Eric  W. "Logistic Distribution." From MathWorld--A Wolfram Web Resource.
            http://mathworld.wolfram.com/LogisticDistribution.html

       [3]  Wikipedia,                                                   "Logistic-distribution",
            https://en.wikipedia.org/wiki/Logistic_distribution

            Draw samples from the distribution:

            >>> loc, scale = 10, 1
            >>> s = np.random.logistic(loc, scale, 10000)
            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, bins=50)

            #   plot against distribution

            >>> def logist(x, loc, scale):
            ...     return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
            >>> lgst_val = logist(bins, loc, scale)
            >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max())
            >>> plt.show()

       XRStools.xrs_calctools.lognormal(mean=0.0, sigma=1.0, size=None)
              Draw samples from a log-normal distribution.

              Draw   samples  from  a  log-normal  distribution  with  specified  mean,  standard
              deviation, and array shape.  Note that the mean and standard deviation are not  the
              values for the distribution itself, but of the underlying normal distribution it is
              derived from.

              NOTE:
                 New code should use the lognormal method of a  default_rng()  instance  instead;
                 please see the random-quick-start.

              mean   float  or array_like of floats, optional Mean value of the underlying normal
                     distribution. Default is 0.

              sigma  float or array_like of floats, optional Standard deviation of the underlying
                     normal distribution. Must be non-negative. Default is 1.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if mean and sigma are both scalars.  Otherwise,
                     np.broadcast(mean, sigma).size samples are drawn.

              out    ndarray  or  scalar  Drawn  samples  from   the   parameterized   log-normal
                     distribution.

              scipy.stats.lognorm
                     probability  density  function,  distribution,  cumulative density function,
                     etc.

              Generator.lognormal: which should be used for new code.

              A variable x has a log-normal distribution if log(x) is normally distributed.   The
              probability density function for the log-normal distribution is:

                      p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}}

              where  \mu  is  the  mean  and  \sigma  is  the  standard deviation of the normally
              distributed logarithm of the variable.  A  log-normal  distribution  results  if  a
              random   variable   is   the   product   of   a   large   number   of  independent,
              identically-distributed variables in  the  same  way  that  a  normal  distribution
              results   if   the   variable  is  the  sum  of  a  large  number  of  independent,
              identically-distributed variables.

       [1]  Limpert, E., Stahel, W. A.,  and  Abbt,  M.,  "Log-normal  Distributions  across  the
            Sciences:   Keys   and   Clues,"   BioScience,   Vol.   51,   No.   5,   May,   2001.
            https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf

       [2]  Reiss, R.D.  and  Thomas,  M.,  "Statistical  Analysis  of  Extreme  Values,"  Basel:
            Birkhauser Verlag, 2001, pp. 31-32.

            Draw samples from the distribution:

            >>> mu, sigma = 3., 1. # mean and standard deviation
            >>> s = np.random.lognormal(mu, sigma, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid')

            >>> x = np.linspace(min(bins), max(bins), 10000)
            >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
            ...        / (x * sigma * np.sqrt(2 * np.pi)))

            >>> plt.plot(x, pdf, linewidth=2, color='r')
            >>> plt.axis('tight')
            >>> plt.show()

            Demonstrate  that  taking  the products of random samples from a uniform distribution
            can be fit well by a log-normal probability density function.

            >>> # Generate a thousand samples: each is the product of 100 random
            >>> # values, drawn from a normal distribution.
            >>> b = []
            >>> for i in range(1000):
            ...    a = 10. + np.random.standard_normal(100)
            ...    b.append(np.product(a))

            >>> b = np.array(b) / np.min(b) # scale values to be positive
            >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid')
            >>> sigma = np.std(np.log(b))
            >>> mu = np.mean(np.log(b))

            >>> x = np.linspace(min(bins), max(bins), 10000)
            >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
            ...        / (x * sigma * np.sqrt(2 * np.pi)))

            >>> plt.plot(x, pdf, color='r', linewidth=2)
            >>> plt.show()

       XRStools.xrs_calctools.logseries(p, size=None)
              Draw samples from a logarithmic series distribution.

              Samples are drawn from a log series distribution with specified shape parameter,  0
              < p < 1.

              NOTE:
                 New  code  should  use the logseries method of a default_rng() instance instead;
                 please see the random-quick-start.

              p      float or array_like of floats Shape parameter for the distribution.  Must be
                     in the range (0, 1).

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  p is a scalar.  Otherwise, np.array(p).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the  parameterized  logarithmic  series
                     distribution.

              scipy.stats.logser
                     probability  density  function, distribution or cumulative density function,
                     etc.

              Generator.logseries: which should be used for new code.

              The probability density for the Log Series distribution is

                                        P(k) = \frac{-p^k}{k \ln(1-p)},

              where p = probability.

              The log series distribution is frequently used to represent  species  richness  and
              occurrence,  first  proposed  by  Fisher, Corbet, and Williams in 1943 [2].  It may
              also be used to model the numbers of occupants seen in cars [3].

       [1]  Buzas, Martin A.; Culver,  Stephen  J.,   Understanding  regional  species  diversity
            through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity &
            Distributions, Volume 5, Number 5, September 1999 , pp. 187-195(9).

       [2]  Fisher, R.A,, A.S. Corbet, and C.B. Williams. 1943. The relation between  the  number
            of  species and the number of individuals in a random sample of an animal population.
            Journal of Animal Ecology, 12:42-58.

       [3]  D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small Data Sets, CRC Press,
            1994.

       [4]  Wikipedia,                         "Logarithmic                        distribution",
            https://en.wikipedia.org/wiki/Logarithmic_distribution

            Draw samples from the distribution:

            >>> a = .6
            >>> s = np.random.logseries(a, 10000)
            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s)

            #   plot against distribution

            >>> def logseries(k, p):
            ...     return -p**k/(k*np.log(1-p))
            >>> plt.plot(bins, logseries(bins, a)*count.max()/
            ...          logseries(bins, a).max(), 'r')
            >>> plt.show()

       XRStools.xrs_calctools.multinomial(n, pvals, size=None)
              Draw samples from a multinomial distribution.

              The multinomial distribution is  a  multivariate  generalization  of  the  binomial
              distribution.   Take  an experiment with one of p possible outcomes.  An example of
               such an experiment is throwing a die, where the outcome can be 1 through 6. Each
              sample  drawn from the distribution represents n such experiments.  Its values, X_i
              = [X_0, X_1, ..., X_p], represent the number of times the outcome was i.

              NOTE:
                 New code should use the multinomial method of a default_rng() instance  instead;
                 please see the random-quick-start.

              n      int Number of experiments.

              pvals  sequence  of  floats,  length  p  Probabilities  of  each of the p different
                     outcomes.  These must sum to 1 (however, the last element is always  assumed
                     to account for the remaining probability, as long as sum(pvals[:-1]) <= 1).

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              out    ndarray The drawn samples, of shape size, if that was provided.  If not, the
                     shape is (N,).

                     In other words, each entry out[i,j,...,:] is an  N-dimensional  value  drawn
                     from the distribution.

              Generator.multinomial: which should be used for new code.

               Throw a die 20 times:

              >>> np.random.multinomial(20, [1/6.]*6, size=1)
              array([[4, 1, 7, 5, 2, 1]]) # random

              It landed 4 times on 1, once on 2, etc.

               Now, throw the die 20 times, and 20 times again:

              >>> np.random.multinomial(20, [1/6.]*6, size=2)
              array([[3, 4, 3, 3, 4, 3], # random
                     [2, 4, 3, 4, 0, 7]])

               For the first run, we rolled a 1 three times, a 2 four times, and so on.  For the
               second, we rolled a 1 twice, a 2 four times, and so on.

              A loaded die is more likely to land on number 6:

              >>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
              array([11, 16, 14, 17, 16, 26]) # random

              The probability inputs should be normalized. As an implementation detail, the value
              of  the last entry is ignored and assumed to take up any leftover probability mass,
              but this should not be relied on.  A biased coin which has twice as much weight  on
              one side as on the other should be sampled like so:

              >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3])  # RIGHT
              array([38, 62]) # random

              not like:

              >>> np.random.multinomial(100, [1.0, 2.0])  # WRONG
              Traceback (most recent call last):
              ValueError: pvals < 0, pvals > 1 or pvals contains NaNs

       XRStools.xrs_calctools.multivariate_normal(mean,   cov,   size=None,   check_valid='warn',
       tol=1e-8)
              Draw random samples from a multivariate normal distribution.

              The multivariate normal, multinormal or Gaussian distribution is  a  generalization
              of   the   one-dimensional  normal  distribution  to  higher  dimensions.   Such  a
              distribution is specified by its mean and covariance matrix.  These parameters  are
              analogous  to  the  mean (average or "center") and variance (standard deviation, or
              "width," squared) of the one-dimensional normal distribution.

              NOTE:
                 New code should use the multivariate_normal method of a  default_rng()  instance
                 instead; please see the random-quick-start.

              mean   1-D array_like, of length N Mean of the N-dimensional distribution.

              cov    2-D  array_like,  of  shape (N, N) Covariance matrix of the distribution. It
                     must be symmetric and positive-semidefinite for proper sampling.

              size   int or tuple of ints, optional Given a shape of, for example, (m,n,k), m*n*k
                     samples  are  generated,  and packed in an m-by-n-by-k arrangement.  Because
                     each sample is N-dimensional, the output shape is (m,n,k,N).  If no shape is
                     specified, a single (N-D) sample is returned.

              check_valid
                     {  'warn', 'raise', 'ignore' }, optional Behavior when the covariance matrix
                     is not positive semidefinite.

              tol    float, optional Tolerance when checking the singular  values  in  covariance
                     matrix.  cov is cast to double before the check.

              out    ndarray The drawn samples, of shape size, if that was provided.  If not, the
                     shape is (N,).

                     In other words, each entry out[i,j,...,:] is an  N-dimensional  value  drawn
                     from the distribution.

              Generator.multivariate_normal: which should be used for new code.

              The  mean  is  a  coordinate  in N-dimensional space, which represents the location
              where samples are most likely to be generated.  This is analogous to  the  peak  of
              the bell curve for the one-dimensional or univariate normal distribution.

              Covariance  indicates  the  level  to  which two variables vary together.  From the
              multivariate normal distribution, we draw N-dimensional samples, X = [x_1, x_2, ...
              x_N].   The covariance matrix element C_{ij} is the covariance of x_i and x_j.  The
              element C_{ii} is the variance of x_i (i.e. its "spread").

              Instead of specifying the full covariance matrix, popular approximations include:

                 • Spherical covariance (cov is a multiple of the identity matrix)

                 • Diagonal covariance (cov has non-negative elements, and only on the diagonal)

              This geometrical property can be seen  in  two  dimensions  by  plotting  generated
              data-points:

              >>> mean = [0, 0]
              >>> cov = [[1, 0], [0, 100]]  # diagonal covariance

              Diagonal covariance means that points are oriented along x or y-axis:

              >>> import matplotlib.pyplot as plt
              >>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
              >>> plt.plot(x, y, 'x')
              >>> plt.axis('equal')
              >>> plt.show()

              Note   that   the   covariance   matrix   must  be  positive  semidefinite  (a.k.a.
              nonnegative-definite). Otherwise, the behavior of  this  method  is  undefined  and
              backwards compatibility is not guaranteed.

       [1]  Papoulis, A., "Probability, Random Variables, and Stochastic Processes," 3rd ed., New
            York: McGraw-Hill, 1991.

       [2]  Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern Classification,"  2nd  ed.,  New
            York: Wiley, 2001.

            >>> mean = (1, 2)
            >>> cov = [[1, 0], [0, 1]]
            >>> x = np.random.multivariate_normal(mean, cov, (3, 3))
            >>> x.shape
            (3, 3, 2)

             The following is more likely than not to be true: each component has standard
             deviation 1, so a single draw falls below its mean plus 0.6 about 73% of the time,
             and both components do so together roughly half of the time:

            >>> list((x[0,0,:] - mean) < 0.6)
            [True, True] # random

       XRStools.xrs_calctools.negative_binomial(n, p, size=None)
              Draw samples from a negative binomial distribution.

              Samples are drawn from a negative binomial distribution with specified  parameters,
              n  successes  and  p probability of success where n is > 0 and p is in the interval
              [0, 1].

              NOTE:
                 New code should use the negative_binomial method  of  a  default_rng()  instance
                 instead; please see the random-quick-start.

              n      float or array_like of floats Parameter of the distribution, > 0.

              p      float or array_like of floats Parameter of the distribution, >= 0 and <=1.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  n  and  p  are  both  scalars.   Otherwise,
                     np.broadcast(n, p).size samples are drawn.

              out    ndarray or scalar Drawn samples from  the  parameterized  negative  binomial
                     distribution,  where  each sample is equal to N, the number of failures that
                     occurred before a total of n successes was reached.

              Generator.negative_binomial: which should be used for new code.

              The probability mass function of the negative binomial distribution is

                           P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},

              where n is the number of successes, p is the probability of  success,  N+n  is  the
              number  of  trials,  and  \Gamma  is  the  gamma  function.  When  n is an integer,
               \frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}, which is the more common form
               of this term in the pmf.  The negative binomial distribution gives the
              probability of N failures given n successes, with a success on the last trial.

              If one throws a die repeatedly until  the  third  time  a  "1"  appears,  then  the
              probability distribution of the number of non-"1"s that appear before the third "1"
              is a negative binomial distribution.

       [1]  Weisstein, Eric W. "Negative Binomial Distribution." From  MathWorld--A  Wolfram  Web
            Resource.  http://mathworld.wolfram.com/NegativeBinomialDistribution.html

       [2]  Wikipedia,               "Negative               binomial              distribution",
            https://en.wikipedia.org/wiki/Negative_binomial_distribution

            Draw samples from the distribution:

             A real-world example: a company drills wild-cat oil exploration wells, each with an
             estimated probability of success of 0.1.  What is the probability that the first
             success has occurred by the time 5 wells have been drilled, by 6 wells, and so on?

            >>> s = np.random.negative_binomial(1, 0.1, 100000)
            >>> for i in range(1, 11):
            ...    probability = sum(s<i) / 100000.
            ...    print(i, "wells drilled, probability of one success =", probability)

       XRStools.xrs_calctools.noncentral_chisquare(df, nonc, size=None)
              Draw samples from a noncentral chi-square distribution.

              The noncentral \chi^2 distribution is a generalization of the \chi^2 distribution.

              NOTE:
                 New  code should use the noncentral_chisquare method of a default_rng() instance
                 instead; please see the random-quick-start.

              df     float or array_like of floats Degrees of freedom, must be > 0.

                      Changed in version 1.10.0: Earlier NumPy versions required df > 1.

              nonc   float or array_like of floats Non-centrality, must be non-negative.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if  df  and  nonc  are  both  scalars.   Otherwise,
                     np.broadcast(df, nonc).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized noncentral chi-square
                     distribution.

              Generator.noncentral_chisquare: which should be used for new code.

              The probability density function for the noncentral Chi-square distribution is

                      P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),

              where Y_{q} is the Chi-square with q degrees of freedom.

       [1]  Wikipedia,             "Noncentral             chi-squared              distribution"
            https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution

            Draw values from the distribution and plot the histogram

            >>> import matplotlib.pyplot as plt
            >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
            ...                   bins=200, density=True)
            >>> plt.show()

            Draw values from a noncentral chisquare with very small noncentrality, and compare to
            a chisquare.

            >>> plt.figure()
            >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000),
            ...                   bins=np.arange(0., 25, .1), density=True)
            >>> values2 = plt.hist(np.random.chisquare(3, 100000),
            ...                    bins=np.arange(0., 25, .1), density=True)
            >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
            >>> plt.show()

            Demonstrate how large values of non-centrality lead to a more symmetric distribution.

            >>> plt.figure()
            >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
            ...                   bins=200, density=True)
            >>> plt.show()

       XRStools.xrs_calctools.noncentral_f(dfnum, dfden, nonc, size=None)
              Draw samples from the noncentral F distribution.

               Samples are drawn from an F distribution with specified parameters, dfnum (degrees
               of freedom in numerator) and dfden (degrees of freedom in denominator), where both
               parameters must be > 0.  nonc is the non-centrality parameter.

              NOTE:
                 New code should use the noncentral_f method of a default_rng() instance instead;
                 please see the random-quick-start.

              dfnum  float or array_like of floats Numerator degrees of freedom, must be > 0.

                     Changed in version 1.14.0: Earlier NumPy versions required dfnum > 1.

              dfden  float or array_like of floats Denominator degrees of freedom, must be > 0.

              nonc   float  or  array_like  of  floats  Non-centrality  parameter, the sum of the
                     squares of the numerator means, must be >= 0.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is  returned  if  dfnum,  dfden,  and  nonc  are  all  scalars.
                     Otherwise, np.broadcast(dfnum, dfden, nonc).size samples are drawn.

              out    ndarray  or  scalar  Drawn  samples from the parameterized noncentral Fisher
                     distribution.

              Generator.noncentral_f: which should be used for new code.

              When calculating the power of an experiment (power = probability of  rejecting  the
              null  hypothesis  when  a specific alternative is true) the non-central F statistic
              becomes important.  When the null hypothesis is true, the  F  statistic  follows  a
              central  F  distribution.  When  the null hypothesis is not true, then it follows a
              non-central F statistic.

       [1]  Weisstein, Eric  W.  "Noncentral  F-Distribution."   From  MathWorld--A  Wolfram  Web
            Resource.  http://mathworld.wolfram.com/NoncentralF-Distribution.html

       [2]  Wikipedia,                        "Noncentral                        F-distribution",
            https://en.wikipedia.org/wiki/Noncentral_F-distribution

            In a study, testing for a specific alternative to the null hypothesis requires use of
            the  Noncentral  F  distribution.  We  need  to calculate the area in the tail of the
            distribution that exceeds the value of the F distribution for  the  null  hypothesis.
            We'll plot the two probability distributions for comparison.

            >>> dfnum = 3 # between group deg of freedom
            >>> dfden = 20 # within groups degrees of freedom
            >>> nonc = 3.0
            >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000)
            >>> NF = np.histogram(nc_vals, bins=50, density=True)
            >>> c_vals = np.random.f(dfnum, dfden, 1000000)
            >>> F = np.histogram(c_vals, bins=50, density=True)
            >>> import matplotlib.pyplot as plt
            >>> plt.plot(F[1][1:], F[0])
            >>> plt.plot(NF[1][1:], NF[0])
            >>> plt.show()

       XRStools.xrs_calctools.normal(loc=0.0, scale=1.0, size=None)
              Draw random samples from a normal (Gaussian) distribution.

               The probability density function of the normal distribution, first derived by De
               Moivre and 200 years later by both Gauss and Laplace independently [2], is often
               called the bell curve because of its characteristic shape (see the example below).

               The normal distribution occurs often in nature.  For example, it describes the
               commonly occurring distribution of samples influenced by a large number of tiny,
               random disturbances, each with its own unique distribution [2].

              NOTE:
                 New  code  should  use  the  normal  method of a default_rng() instance instead;
                 please see the random-quick-start.

              loc    float or array_like of floats Mean ("centre") of the distribution.

              scale  float or array_like of floats Standard deviation (spread or "width") of  the
                     distribution. Must be non-negative.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  loc and scale are both scalars.  Otherwise,
                     np.broadcast(loc, scale).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized normal distribution.

              scipy.stats.norm
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.normal: which should be used for new code.

              The probability density for the Gaussian distribution is

                    p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },

              where \mu is the mean and \sigma the standard deviation. The square of the standard
              deviation, \sigma^2, is called the variance.

               The function has its peak at the mean, and its "spread" increases with the
               standard deviation (the function reaches 0.607 times its maximum at x + \sigma
               and x - \sigma [2]).  This implies that normal is more likely to return samples
               lying close to the mean, rather than those far away.

       [1]  Wikipedia, "Normal distribution", https://en.wikipedia.org/wiki/Normal_distribution

       [2]  P.  R.  Peebles  Jr.,  "Central  Limit Theorem" in "Probability, Random Variables and
            Random Signal Principles", 4th ed., 2001, pp. 51, 51, 125.

            Draw samples from the distribution:

            >>> mu, sigma = 0, 0.1 # mean and standard deviation
            >>> s = np.random.normal(mu, sigma, 1000)

            Verify the mean and the variance:

            >>> abs(mu - np.mean(s))
            0.0  # may vary

            >>> abs(sigma - np.std(s, ddof=1))
            0.1  # may vary

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, 30, density=True)
            >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
            ...                np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
            ...          linewidth=2, color='r')
            >>> plt.show()

            Two-by-four array of samples from N(3, 6.25):

            >>> np.random.normal(3, 2.5, size=(2, 4))
            array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],   # random
                   [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]])  # random

       XRStools.xrs_calctools.pareto(a, size=None)
              Draw samples from a Pareto II or Lomax distribution with specified shape.

              The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical
              Pareto  distribution  can  be  obtained from the Lomax distribution by adding 1 and
              multiplying by the scale parameter m (see Notes).  The smallest value of the  Lomax
              distribution  is  zero  while for the classical Pareto distribution it is mu, where
              the standard Pareto distribution has location mu = 1.  Lomax can also be considered
              as  a  simplified  version  of  the  Generalized  Pareto distribution (available in
              SciPy), with the scale set to one and the location set to zero.

               The Pareto distribution takes only values greater than zero and is unbounded
               above.  It is also known as the "80-20 rule".  In this distribution, 80 percent
               of the weights are in the lowest 20 percent of the range, while the other 20
               percent fill the remaining 80 percent of the range.

              NOTE:
                 New  code  should  use  the  pareto  method of a default_rng() instance instead;
                 please see the random-quick-start.

              a      float or array_like of floats Shape of the distribution. Must be positive.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if a  is  a  scalar.   Otherwise,  np.array(a).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Pareto distribution.

              scipy.stats.lomax
                     probability  density  function, distribution or cumulative density function,
                     etc.

              scipy.stats.genpareto
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.pareto: which should be used for new code.

              The probability density for the Pareto distribution is

                                          p(x) = \frac{am^a}{x^{a+1}}

              where a is the shape and m the scale.

              The  Pareto  distribution,  named after the Italian economist Vilfredo Pareto, is a
              power law probability distribution useful in many real world problems.  Outside the
              field of economics it is generally referred to as the Bradford distribution. Pareto
              developed the distribution to describe the distribution of wealth  in  an  economy.
               It has also found use in insurance, web page access statistics, oil field sizes,
               and many other problems, including the download frequency for projects in
               Sourceforge [1].  It is one of the so-called "fat-tailed" distributions.

       [1]  Francis Hunt and Paul Johnson, On the Pareto Distribution of Sourceforge projects.

       [2]  Pareto, V. (1896). Course of Political Economy. Lausanne.

       [3]  Reiss,  R.D.,  Thomas,  M.(2001),  Statistical Analysis of Extreme Values, Birkhauser
            Verlag, Basel, pp 23-30.

       [4]  Wikipedia, "Pareto distribution", https://en.wikipedia.org/wiki/Pareto_distribution

            Draw samples from the distribution:

            >>> a, m = 3., 2.  # shape and mode
            >>> s = (np.random.pareto(a, 1000) + 1) * m

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, _ = plt.hist(s, 100, density=True)
            >>> fit = a*m**a / bins**(a+1)
            >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')
            >>> plt.show()

       XRStools.xrs_calctools.parseOCEANinputFile(fname)
              parseOCEANinputFile

              Parses an OCEAN input file and returns lattice vectors, atom  names,  and  relative
              atom positions.

              Args:

                     • fname (str): Absolute filename of OCEAN input file.

                     • atoms  (list):  List of elemental symbols in the same order as they appear
                       in the input file.

              Returns:

                     • lattice (np.array): Array of lattice vectors.

                     • rel_coords (np.array): Array of relative atomic coordinates.

                     • oceaatoms (list): List of atomic names.
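
               A minimal usage sketch, assuming only what is stated above (the file name is a
               placeholder and the unpacking order follows the Returns list):

               >>> from XRStools import xrs_calctools
               >>> lattice, rel_coords, atoms = xrs_calctools.parseOCEANinputFile('ocean.in')
               >>> print(lattice)     # np.array of lattice vectors
               >>> print(atoms)       # list of atomic names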

       XRStools.xrs_calctools.parsePwscfFile(fname)
              parsePwscfFile

              Parses a PWSCF file and returns a xyzBox object.

               Args:  fname (str): Absolute filename of PWSCF input file.

              Returns:
                     xyzBox object

       XRStools.xrs_calctools.parseVaspFile(fname)
              parseVaspFile

               Parses a VASP file and returns a xyzBox object.

              Args:  fname (str): Absolute filename of VASP file.

              Returns:
                     xyzBox object

       XRStools.xrs_calctools.parseXYZfile(filename)
              parseXYZfile Reads an xyz-style file.
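
               A short, hedged sketch of how the structure parsers above can be combined (the
               file names are placeholders; parsePwscfFile and parseVaspFile both return xyzBox
               objects, as documented above, while parseXYZfile simply reads an xyz-style file):

               >>> from XRStools import xrs_calctools
               >>> box = xrs_calctools.parsePwscfFile('pwscf.in')         # placeholder file name
               >>> box2 = xrs_calctools.parseVaspFile('POSCAR')           # placeholder file name
               >>> snapshot = xrs_calctools.parseXYZfile('snapshot.xyz')  # placeholder file name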

       XRStools.xrs_calctools.permutation(x)
              Randomly permute a sequence, or return a permuted range.

              If x is a multi-dimensional array, it is only shuffled along its first index.

              NOTE:
                 New code should use the permutation method of a default_rng() instance  instead;
                 please see the random-quick-start.

              x      int  or  array_like If x is an integer, randomly permute np.arange(x).  If x
                     is an array, make a copy and shuffle the elements randomly.

              out    ndarray Permuted sequence or array range.

              Generator.permutation: which should be used for new code.

              >>> np.random.permutation(10)
              array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random

              >>> np.random.permutation([1, 4, 9, 12, 15])
              array([15,  1,  9,  4, 12]) # random

              >>> arr = np.arange(9).reshape((3, 3))
              >>> np.random.permutation(arr)
              array([[6, 7, 8], # random
                     [0, 1, 2],
                     [3, 4, 5]])

       XRStools.xrs_calctools.poisson(lam=1.0, size=None)
              Draw samples from a Poisson distribution.

              The Poisson distribution is the limit of the binomial distribution for large N.

              NOTE:
                 New code should use the poisson method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              lam    float  or  array_like  of  floats  Expected  number of events occurring in a
                     fixed-time interval, must be >= 0. A sequence must be broadcastable over the
                     requested size.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is returned if lam is a scalar. Otherwise, np.array(lam).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Poisson distribution.

              Generator.poisson: which should be used for new code.

              The Poisson distribution

                                f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}

              For events with an  expected  separation  \lambda  the  Poisson  distribution  f(k;
              \lambda)  describes  the  probability  of  k  events  occurring within the observed
              interval \lambda.

              Because the output is limited to the range of the C int64  type,  a  ValueError  is
              raised when lam is within 10 sigma of the maximum representable value.

       [1]  Weisstein,  Eric  W. "Poisson Distribution."  From MathWorld--A Wolfram Web Resource.
            http://mathworld.wolfram.com/PoissonDistribution.html

       [2]  Wikipedia, "Poisson distribution", https://en.wikipedia.org/wiki/Poisson_distribution

            Draw samples from the distribution:

            >>> import numpy as np
            >>> s = np.random.poisson(5, 10000)

            Display histogram of the sample:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, 14, density=True)
            >>> plt.show()

            Draw each 100 values for lambda 100 and 500:

            >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2))

       XRStools.xrs_calctools.power(a, size=None)
              Draws samples in [0, 1] from a power distribution with positive exponent a - 1.

              Also known as the power function distribution.

              NOTE:
                 New code should use the power method of a default_rng() instance instead; please
                 see the random-quick-start.

              a      float  or  array_like  of  floats  Parameter  of  the  distribution. Must be
                     non-negative.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if a  is  a  scalar.   Otherwise,  np.array(a).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized power distribution.

              ValueError
                     If a < 1.

              Generator.power: which should be used for new code.

              The probability density function is

                                    P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.

              The  power function distribution is just the inverse of the Pareto distribution. It
              may also be seen as a special case of the Beta distribution.

              It is used, for example, in modeling the over-reporting of insurance claims.

       [1]  Christian Kleiber, Samuel Kotz, "Statistical  size  distributions  in  economics  and
            actuarial sciences", Wiley, 2003.

       [2]  Heckert,  N. A. and Filliben, James J. "NIST Handbook 148: Dataplot Reference Manual,
            Volume 2: Let Subcommands and Library Functions", National Institute of Standards and
            Technology             Handbook             Series,             June            2003.
            https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf

            Draw samples from the distribution:

            >>> a = 5. # shape
            >>> samples = 1000
            >>> s = np.random.power(a, samples)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> count, bins, ignored = plt.hist(s, bins=30)
            >>> x = np.linspace(0, 1, 100)
            >>> y = a*x**(a-1.)
            >>> normed_y = samples*np.diff(bins)[0]*y
            >>> plt.plot(x, normed_y)
            >>> plt.show()

            Compare the power function distribution to the inverse of the Pareto.

            >>> from scipy import stats
            >>> rvs = np.random.power(5, 1000000)
            >>> rvsp = np.random.pareto(5, 1000000)
            >>> xx = np.linspace(0,1,100)
            >>> powpdf = stats.powerlaw.pdf(xx,5)

            >>> plt.figure()
            >>> plt.hist(rvs, bins=50, density=True)
            >>> plt.plot(xx,powpdf,'r-')
            >>> plt.title('np.random.power(5)')

            >>> plt.figure()
            >>> plt.hist(1./(1.+rvsp), bins=50, density=True)
            >>> plt.plot(xx,powpdf,'r-')
            >>> plt.title('inverse of 1 + np.random.pareto(5)')

             >>> rvsp2 = stats.pareto.rvs(5, size=1000000)
             >>> plt.figure()
             >>> plt.hist(1./rvsp2, bins=50, density=True)
             >>> plt.plot(xx,powpdf,'r-')
             >>> plt.title('inverse of stats.pareto(5)')

       XRStools.xrs_calctools.rand(d0, d1, ..., dn)
              Random values in a given shape.

              NOTE:
                 This is a convenience function for users porting code  from  Matlab,  and  wraps
                 random_sample.  That  function  takes a tuple to specify the size of the output,
                 which is consistent with other NumPy functions like numpy.zeros and numpy.ones.

              Create an array of the given shape and populate  it  with  random  samples  from  a
              uniform distribution over [0, 1).

              d0, d1, ..., dn
                     int,  optional  The  dimensions of the returned array, must be non-negative.
                     If no argument is given a single Python float is returned.

              out    ndarray, shape (d0, d1, ..., dn) Random values.

              random

              >>> np.random.rand(3,2)
              array([[ 0.14022471,  0.96360618],  #random
                     [ 0.37601032,  0.25528411],  #random
                     [ 0.49313049,  0.94909878]]) #random

       XRStools.xrs_calctools.randint(low, high=None, size=None, dtype=int)
              Return random integers from low (inclusive) to high (exclusive).

              Return random integers from the "discrete uniform" distribution  of  the  specified
              dtype  in the "half-open" interval [low, high). If high is None (the default), then
              results are from [0, low).

              NOTE:
                 New code should use the integers method of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              low    int  or  array-like  of  ints  Lowest (signed) integers to be drawn from the
                     distribution (unless high=None, in which case this parameter  is  one  above
                     the highest such integer).

              high   int  or  array-like  of  ints,  optional  If provided, one above the largest
                     (signed) integer to be drawn from the distribution (see above  for  behavior
                     if high=None).  If array-like, must contain integer values

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              dtype  dtype,  optional Desired dtype of the result. Byteorder must be native.  The
                     default value is int.

                     New in version 1.11.0.

              out    int or ndarray of  ints  size-shaped  array  of  random  integers  from  the
                     appropriate distribution, or a single such random int if size not provided.

              random_integers
                     similar  to  randint, only for the closed interval [low, high], and 1 is the
                     lowest value if high is omitted.

              Generator.integers: which should be used for new code.

              >>> np.random.randint(2, size=10)
              array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random
              >>> np.random.randint(1, size=10)
              array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

              Generate a 2 x 4 array of ints between 0 and 4, inclusive:

              >>> np.random.randint(5, size=(2, 4))
              array([[4, 0, 2, 1], # random
                     [3, 2, 2, 0]])

              Generate a 1 x 3 array with 3 different upper bounds

              >>> np.random.randint(1, [3, 5, 10])
              array([2, 2, 9]) # random

              Generate a 1 by 3 array with 3 different lower bounds

              >>> np.random.randint([1, 5, 7], 10)
              array([9, 8, 7]) # random

              Generate a 2 by 4 array using broadcasting with dtype of uint8

              >>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
              array([[ 8,  6,  9,  7], # random
                     [ 1, 16,  9, 12]], dtype=uint8)

       XRStools.xrs_calctools.randn(d0, d1, ..., dn)
              Return a sample (or samples) from the "standard normal" distribution.

              NOTE:
                 This is a convenience function for users porting code  from  Matlab,  and  wraps
                 standard_normal.  That function takes a tuple to specify the size of the output,
                 which is consistent with other NumPy functions like numpy.zeros and numpy.ones.

              NOTE:
                 New code should use the  standard_normal  method  of  a  default_rng()  instance
                 instead; please see the random-quick-start.

              If positive int_like arguments are provided, randn generates an array of shape (d0,
              d1, ..., dn),  filled  with  random  floats  sampled  from  a  univariate  "normal"
              (Gaussian)  distribution  of mean 0 and variance 1. A single float randomly sampled
              from the distribution is returned if no argument is provided.

              d0, d1, ..., dn
                     int, optional The dimensions of the returned array,  must  be  non-negative.
                     If no argument is given a single Python float is returned.

              Z      ndarray  or float A (d0, d1, ..., dn)-shaped array of floating-point samples
                     from the standard  normal  distribution,  or  a  single  such  float  if  no
                     parameters were supplied.

              standard_normal  :  Similar,  but  takes  a  tuple  as its argument.  normal : Also
              accepts mu and sigma arguments.  Generator.standard_normal: which  should  be  used
              for new code.

              For random samples from N(\mu, \sigma^2), use:

              sigma * np.random.randn(...) + mu

              >>> np.random.randn()
              2.1923875335537315  # random

              Two-by-four array of samples from N(3, 6.25):

              >>> 3 + 2.5 * np.random.randn(2, 4)
              array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],   # random
                     [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]])  # random

       XRStools.xrs_calctools.random(size=None)
              Return  random floats in the half-open interval [0.0, 1.0). Alias for random_sample
              to ease forward-porting to the new random API.
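
               A minimal check of the alias (sketch; outputs are marked random):

               >>> np.random.random()          # a single float in [0.0, 1.0)
               0.47108547995356098 # random
               >>> np.random.random((2, 3)).shape
               (2, 3)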

       XRStools.xrs_calctools.random_integers(low, high=None, size=None)
              Random integers of type np.int_ between low and high, inclusive.

              Return random integers of type np.int_ from the "discrete uniform" distribution  in
              the  closed  interval [low, high].  If high is None (the default), then results are
              from [1, low]. The np.int_ type translates to the  C  long  integer  type  and  its
              precision is platform dependent.

              This function has been deprecated. Use randint instead.

              Deprecated since version 1.11.0.

              low    int  Lowest  (signed)  integer  to  be  drawn  from the distribution (unless
                     high=None, in which case this parameter is the highest such integer).

              high   int, optional If provided, the largest (signed) integer to be drawn from the
                     distribution (see above for behavior if high=None).

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              out    int  or  ndarray  of  ints  size-shaped  array  of  random integers from the
                     appropriate distribution, or a single such random int if size not provided.

              randint
                     Similar to random_integers, only for the half-open interval [low, high), and
                     0 is the lowest value if high is omitted.

              To sample from N evenly spaced floating-point numbers between a and b, use:

                 a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)

              >>> np.random.random_integers(5)
              4 # random
              >>> type(np.random.random_integers(5))
              <class 'numpy.int64'>
              >>> np.random.random_integers(5, size=(3,2))
              array([[5, 4], # random
                     [3, 3],
                     [4, 5]])

              Choose five random numbers from the set of five evenly-spaced numbers between 0 and
              2.5, inclusive (i.e., from the set {0, 5/8, 10/8, 15/8, 20/8}):

              >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
              array([ 0.625,  1.25 ,  0.625,  0.625,  2.5  ]) # random

              Roll two six sided dice 1000 times and sum the results:

              >>> d1 = np.random.random_integers(1, 6, 1000)
              >>> d2 = np.random.random_integers(1, 6, 1000)
              >>> dsums = d1 + d2

              Display results as a histogram:

              >>> import matplotlib.pyplot as plt
              >>> count, bins, ignored = plt.hist(dsums, 11, density=True)
              >>> plt.show()

       XRStools.xrs_calctools.random_sample(size=None)
              Return random floats in the half-open interval [0.0, 1.0).

              Results are from the "continuous uniform" distribution over  the  stated  interval.
              To  sample  Unif[a, b), b > a multiply the output of random_sample by (b-a) and add
              a:

                 (b - a) * random_sample() + a

              NOTE:
                 New code should use the random  method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              out    float  or  ndarray  of  floats  Array of random floats of shape size (unless
                     size=None, in which case a single float is returned).

              Generator.random: which should be used for new code.

              >>> np.random.random_sample()
              0.47108547995356098 # random
              >>> type(np.random.random_sample())
              <class 'float'>
              >>> np.random.random_sample((5,))
              array([ 0.30220482,  0.86820401,  0.1654503 ,  0.11659149,  0.54323428]) # random

              Three-by-two array of random numbers from [-5, 0):

              >>> 5 * np.random.random_sample((3, 2)) - 5
              array([[-3.99149989, -0.52338984], # random
                     [-2.99091858, -0.79479508],
                     [-1.23204345, -1.75224494]])

       XRStools.xrs_calctools.rayleigh(scale=1.0, size=None)
              Draw samples from a Rayleigh distribution.

              The \chi and Weibull distributions are generalizations of the Rayleigh.

              NOTE:
                 New code should use the rayleigh method of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              scale  float or array_like of floats, optional Scale, also equals the mode. Must be
                     non-negative. Default is 1.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single   value   is   returned   if   scale   is   a   scalar.    Otherwise,
                     np.array(scale).size samples are drawn.

              out    ndarray   or   scalar   Drawn   samples   from  the  parameterized  Rayleigh
                     distribution.

              Generator.rayleigh: which should be used for new code.

              The probability density function for the Rayleigh distribution is

                        P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}

              The Rayleigh  distribution  would  arise,  for  example,  if  the  East  and  North
              components  of  the  wind  velocity had identical zero-mean Gaussian distributions.
              Then the wind speed would have a Rayleigh distribution.

       [1]  Brighton           Webs           Ltd.,           "Rayleigh            Distribution,"
            https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp

       [2]  Wikipedia,                          "Rayleigh                           distribution"
            https://en.wikipedia.org/wiki/Rayleigh_distribution

            Draw values from the distribution and plot the histogram

            >>> from matplotlib.pyplot import hist
            >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True)

            Wave  heights  tend  to  follow a Rayleigh distribution. If the mean wave height is 1
            meter, what fraction of waves are likely to be larger than 3 meters?

            >>> meanvalue = 1
            >>> modevalue = np.sqrt(2 / np.pi) * meanvalue
            >>> s = np.random.rayleigh(modevalue, 1000000)

            The percentage of waves larger than 3 meters is:

            >>> 100.*sum(s>3)/1000000.
            0.087300000000000003 # random

       XRStools.xrs_calctools.readxas(filename)
               function output = readxas(filename)  %  [e,p,s,px,py,pz] = readxas(filename)

               READSTF: loads StoBe fort.11 (XAS output) data,
               [E,P,S,PX,PY,PZ] = READXAS(FILENAME), where

                  E    energy transfer [eV]
                  P    dipole transition intensity
                  S    r^2 transition intensity
                  PX   dipole transition intensity along x
                  PY   dipole transition intensity along y
                  PZ   dipole transition intensity along z

               The transitions can be displayed as line diagrams.  (T Pylkkanen @ 2011-10-17)
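
               A hedged usage sketch (the file path is a placeholder, and the assumption that a
               single array with the column order listed above is returned is ours, not the
               docstring's):

               >>> from XRStools import xrs_calctools
               >>> xas = xrs_calctools.readxas('fort.11')   # placeholder path to a StoBe fort.11 file
               >>> import matplotlib.pyplot as plt
               >>> plt.plot(xas[:, 0], xas[:, 1])           # dipole intensity vs. energy transfer [eV]
               >>> plt.show()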

       XRStools.xrs_calctools.repair_h2o_molecules_pbc(h2o_mols, boxLength)

       XRStools.xrs_calctools.seed(self, seed=None)
              Reseed a legacy MT19937 BitGenerator

              This is a convenience, legacy function.

              The  best  practice  is to not reseed a BitGenerator, rather to recreate a new one.
              This method is here for legacy reasons.  This example demonstrates best practice.

              >>> from numpy.random import MT19937
              >>> from numpy.random import RandomState, SeedSequence
              >>> rs = RandomState(MT19937(SeedSequence(123456789)))
              # Later, you want to restart the stream
              >>> rs = RandomState(MT19937(SeedSequence(987654321)))

       XRStools.xrs_calctools.set_state(state)
              Set the internal state of the generator from a tuple.

              For use if one has reason to manually  (re-)set  the  internal  state  of  the  bit
               generator used by the RandomState instance. By default, RandomState uses the
               "Mersenne Twister" [1] pseudo-random number generating algorithm.

              state  {tuple(str, ndarray of 624 uints, int, int, float), dict}  The  state  tuple
                     has the following items:

                     1. the string 'MT19937', specifying the Mersenne Twister algorithm.

                     2. a 1-D array of 624 unsigned integers keys.

                     3. an integer pos.

                     4. an integer has_gauss.

                     5. a float cached_gaussian.

                     If  state  is a dictionary, it is directly set using the BitGenerators state
                     property.

              out    None Returns 'None' on success.

              get_state

              set_state and get_state are not needed to work with any of the random distributions
              in  NumPy.  If the internal state is manually altered, the user should know exactly
              what he/she is doing.

              For backwards compatibility, the form (str,  array  of  624  uints,  int)  is  also
              accepted  although  it is missing some information about the cached Gaussian value:
              state = ('MT19937', keys, pos).

       [1]  M. Matsumoto and T. Nishimura, "Mersenne Twister: A 623-dimensionally equidistributed
            uniform   pseudorandom  number  generator,"  ACM  Trans.  on  Modeling  and  Computer
            Simulation, Vol. 8, No. 1, pp. 3-30, Jan. 1998.
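
             A small round-trip sketch using the companion get_state (legacy API; the calls
             below are standard numpy.random functions):

             >>> state = np.random.get_state()     # save the current MT19937 state
             >>> first = np.random.random(3)
             >>> np.random.set_state(state)        # restore the saved state
             >>> second = np.random.random(3)
             >>> np.all(first == second)
             True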

       XRStools.xrs_calctools.shuffle(x)
              Modify a sequence in-place by shuffling its contents.

              This function only shuffles the array along the first axis of  a  multi-dimensional
              array. The order of sub-arrays is changed but their contents remains the same.

              NOTE:
                 New  code  should  use  the  shuffle method of a default_rng() instance instead;
                 please see the random-quick-start.

              x      ndarray or MutableSequence  The  array,  list  or  mutable  sequence  to  be
                     shuffled.

              None

              Generator.shuffle: which should be used for new code.

              >>> arr = np.arange(10)
              >>> np.random.shuffle(arr)
              >>> arr
              [1 7 5 2 9 4 3 6 0 8] # random

              Multi-dimensional arrays are only shuffled along the first axis:

              >>> arr = np.arange(9).reshape((3, 3))
              >>> np.random.shuffle(arr)
              >>> arr
              array([[3, 4, 5], # random
                     [6, 7, 8],
                     [0, 1, 2]])

       XRStools.xrs_calctools.sorter(elem)

       XRStools.xrs_calctools.spline2(x, y, x2)
               Extrapolates the smaller and larger values as a constant.
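
               A hedged sketch of the described behaviour only (constant extrapolation outside
               the data range); the helper name below is hypothetical and uses plain linear
               interpolation rather than the spline that spline2 presumably uses:

               >>> import numpy as np
               >>> def constant_extrap_interp(x, y, x2):
               ...     # np.interp clamps to y[0] / y[-1] outside the x range, i.e. it
               ...     # extrapolates the smallest and largest values as constants
               ...     return np.interp(x2, x, y)
               >>> x = np.array([1.0, 2.0, 3.0])
               >>> y = np.array([10.0, 20.0, 30.0])
               >>> constant_extrap_interp(x, y, np.array([0.0, 2.5, 4.0]))
               array([10., 25., 30.])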

       XRStools.xrs_calctools.standard_cauchy(size=None)
              Draw samples from a standard Cauchy distribution with mode = 0.

              Also known as the Lorentz distribution.

              NOTE:
                 New  code  should  use  the  standard_cauchy  method of a default_rng() instance
                 instead; please see the random-quick-start.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              samples
                     ndarray or scalar The drawn samples.

              Generator.standard_cauchy: which should be used for new code.

              The probability density function for the full Cauchy distribution is

                 P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }

              and the Standard Cauchy distribution just sets x_0=0 and \gamma=1

              The Cauchy distribution arises in the solution to the  driven  harmonic  oscillator
              problem,  and  also  describes  spectral  line  broadening.  It  also describes the
              distribution of values at which a line tilted at a random  angle  will  cut  the  x
              axis.

              When  studying hypothesis tests that assume normality, seeing how the tests perform
              on data from a Cauchy distribution is a good indicator of their  sensitivity  to  a
              heavy-tailed  distribution,  since  the  Cauchy  looks  very  much  like a Gaussian
              distribution, but with heavier tails.

       [1]  NIST/SEMATECH   e-Handbook   of   Statistical   Methods,    "Cauchy    Distribution",
            https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm

       [2]  Weisstein,  Eric  W.  "Cauchy  Distribution." From MathWorld--A Wolfram Web Resource.
            http://mathworld.wolfram.com/CauchyDistribution.html

       [3]  Wikipedia, "Cauchy distribution" https://en.wikipedia.org/wiki/Cauchy_distribution

            Draw samples and plot the distribution:

            >>> import matplotlib.pyplot as plt
            >>> s = np.random.standard_cauchy(1000000)
            >>> s = s[(s>-25) & (s<25)]  # truncate distribution so it plots well
            >>> plt.hist(s, bins=100)
            >>> plt.show()

       XRStools.xrs_calctools.standard_exponential(size=None)
              Draw samples from the standard exponential distribution.

              standard_exponential is identical to the  exponential  distribution  with  a  scale
              parameter of 1.

              NOTE:
                 New  code should use the standard_exponential method of a default_rng() instance
                 instead; please see the random-quick-start.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              out    float or ndarray Drawn samples.

              Generator.standard_exponential: which should be used for new code.

              Output a 3x8000 array:

              >>> n = np.random.standard_exponential((3, 8000))

       XRStools.xrs_calctools.standard_gamma(shape, size=None)
              Draw samples from a standard Gamma distribution.

              Samples are drawn from  a  Gamma  distribution  with  specified  parameters,  shape
              (sometimes designated "k") and scale=1.

              NOTE:
                 New  code  should  use  the  standard_gamma  method  of a default_rng() instance
                 instead; please see the random-quick-start.

              shape  float or array_like of floats Parameter, must be non-negative.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single   value   is   returned   if   shape   is   a   scalar.    Otherwise,
                     np.array(shape).size samples are drawn.

              out    ndarray  or  scalar  Drawn  samples  from  the  parameterized standard gamma
                     distribution.

              scipy.stats.gamma
                     probability density function, distribution or cumulative  density  function,
                     etc.

              Generator.standard_gamma: which should be used for new code.

              The probability density for the Gamma distribution is

                            p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},

              where k is the shape and \theta the scale, and \Gamma is the Gamma function.

              The  Gamma  distribution  is often used to model the times to failure of electronic
              components, and arises naturally in processes for which the waiting  times  between
              Poisson distributed events are relevant.

       [1]  Weisstein,  Eric  W.  "Gamma  Distribution."  From MathWorld--A Wolfram Web Resource.
            http://mathworld.wolfram.com/GammaDistribution.html

       [2]  Wikipedia, "Gamma distribution", https://en.wikipedia.org/wiki/Gamma_distribution

            Draw samples from the distribution:

            >>> shape, scale = 2., 1. # mean and width
            >>> s = np.random.standard_gamma(shape, 1000000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> import scipy.special as sps
            >>> count, bins, ignored = plt.hist(s, 50, density=True)
            >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/
            ...                       (sps.gamma(shape) * scale**shape))
            >>> plt.plot(bins, y, linewidth=2, color='r')
            >>> plt.show()

       XRStools.xrs_calctools.standard_normal(size=None)
              Draw samples from a standard Normal distribution (mean=0, stdev=1).

              NOTE:
                 New code should use the  standard_normal  method  of  a  default_rng()  instance
                 instead; please see the random-quick-start.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  Default is None, in which case
                     a single value is returned.

              out    float or ndarray A floating-point array of shape size of drawn samples, or a
                     single sample if size was not specified.

              normal :
                     Equivalent function with additional loc and scale arguments for setting  the
                     mean and standard deviation.

              Generator.standard_normal: which should be used for new code.

              For random samples from N(\mu, \sigma^2), use one of:

                 mu + sigma * np.random.standard_normal(size=...)
                 np.random.normal(mu, sigma, size=...)

              >>> np.random.standard_normal()
              2.1923875335537315 #random

              >>> s = np.random.standard_normal(8000)
              >>> s
              array([ 0.6888893 ,  0.78096262, -0.89086505, ...,  0.49876311,  # random
                     -0.38672696, -0.4685006 ])                                # random
              >>> s.shape
              (8000,)
              >>> s = np.random.standard_normal(size=(3, 4, 2))
              >>> s.shape
              (3, 4, 2)

              Two-by-four array of samples from N(3, 6.25):

              >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4))
              array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],   # random
                     [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]])  # random

       XRStools.xrs_calctools.standard_t(df, size=None)
              Draw samples from a standard Student's t distribution with df degrees of freedom.

              A  special  case  of  the  hyperbolic  distribution.   As df gets large, the result
              resembles that of the standard normal distribution (standard_normal).

              NOTE:
                 New code should use the standard_t method of a default_rng()  instance  instead;
                 please see the random-quick-start.

              df     float or array_like of floats Degrees of freedom, must be > 0.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned if df is a scalar.  Otherwise, np.array(df).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized standard Student's  t
                     distribution.

              Generator.standard_t: which should be used for new code.

              The probability density function for the t distribution is

                             P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df}\,\Gamma(\frac{df}{2})}\Bigl(1+\frac{x^2}{df}\Bigr)^{-(df+1)/2}

              The t test is based on an assumption that the data come from a Normal distribution.
              The t test provides a way to test  whether  the  sample  mean  (that  is  the  mean
              calculated from the data) is a good estimate of the true mean.

              The  derivation of the t-distribution was first published in 1908 by William Gosset
              while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had
              to publish under a pseudonym, and so he used the name Student.

       [1]  Dalgaard, Peter, "Introductory Statistics With R", Springer, 2002.

       [2]  Wikipedia,                         "Student's                         t-distribution"
            https://en.wikipedia.org/wiki/Student's_t-distribution

             From Dalgaard page 83 [1], suppose the daily energy intake for 11 women in
             kilojoules (kJ) is:

            >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \
            ...                    7515, 8230, 8770])

            Does their energy intake deviate systematically from the recommended  value  of  7725
            kJ?  Our  null  hypothesis  will  be  the  absence  of  deviation,  and the alternate
            hypothesis will be the presence of  an  effect  that  could  be  either  positive  or
            negative, hence making our test 2-tailed.

            Because  we  are  estimating  the mean and we have N=11 values in our sample, we have
             N-1=10 degrees of freedom. We set our significance level to 5% (95% confidence) and compute the t
            statistic using the empirical mean and empirical standard deviation of our intake. We
            use a ddof of 1 to base the computation of our empirical  standard  deviation  on  an
            unbiased  estimate  of  the variance (note: the final estimate is not unbiased due to
            the concave nature of the square root).

            >>> np.mean(intake)
            6753.636363636364
            >>> intake.std(ddof=1)
            1142.1232221373727
            >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
            >>> t
            -2.8207540608310198

            We draw 1000000 samples from Student's t distribution with the  adequate  degrees  of
            freedom.

            >>> import matplotlib.pyplot as plt
            >>> s = np.random.standard_t(10, size=1000000)
            >>> h = plt.hist(s, bins=100, density=True)

            Does  our  t statistic land in one of the two critical regions found at both tails of
            the distribution?

            >>> np.sum(np.abs(t) < np.abs(s)) / float(len(s))
            0.018318  #random < 0.05, statistic is in critical region

            The probability value for this 2-tailed test is about 1.83%, which is lower than  the
            5% pre-determined significance threshold.

            Therefore, the probability of observing values as extreme as our intake conditionally
            on the null hypothesis being true is too low, and we reject the null hypothesis of no
            deviation.

       class    XRStools.xrs_calctools.stobe(prefix,   postfix,   fromnumber,   tonumber,   step,
       stepformat=2)
              Bases: object

              class to analyze StoBe results

              broaden_lin(params=[0.8, 8, 537.5, 550], npoints=1000)

              cut_rawspecs(emin=None, emax=None)

              norm_area(emin, emax)

              sum_specs()

       XRStools.xrs_calctools.translateOcean2FDMNES_p1(ocean_in, fdmnes_out, header_file)

       XRStools.xrs_calctools.triangular(left, mode, right, size=None)
              Draw samples from the triangular distribution over the interval [left, right].

              The triangular distribution is a continuous  probability  distribution  with  lower
              limit  left,  peak  at mode, and upper limit right. Unlike the other distributions,
              these parameters directly define the shape of the pdf.

              NOTE:
                 New code should use the triangular method of a default_rng()  instance  instead;
                 please see the random-quick-start.

              left   float or array_like of floats Lower limit.

              mode   float  or  array_like of floats The value where the peak of the distribution
                     occurs.  The value must fulfill the condition left <= mode <= right.

              right  float or array_like of floats Upper limit, must be larger than left.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value  is  returned  if  left,  mode,  and  right  are  all  scalars.
                     Otherwise, np.broadcast(left, mode, right).size samples are drawn.

              out    ndarray   or   scalar   Drawn  samples  from  the  parameterized  triangular
                     distribution.

              Generator.triangular: which should be used for new code.

              The probability density function for the triangular distribution is

                             P(x;l, m, r) = \begin{cases}
                                 \frac{2(x-l)}{(r-l)(m-l)} & \text{for } l \leq x \leq m,\\
                                 \frac{2(r-x)}{(r-l)(r-m)} & \text{for } m \leq x \leq r,\\
                                 0                         & \text{otherwise}.
                             \end{cases}

              The triangular distribution  is  often  used  in  ill-defined  problems  where  the
              underlying  distribution  is  not  known, but some knowledge of the limits and mode
              exists. Often it is used in simulations.

       [1]  Wikipedia,                         "Triangular                          distribution"
            https://en.wikipedia.org/wiki/Triangular_distribution

            Draw values from the distribution and plot the histogram:

            >>> import matplotlib.pyplot as plt
            >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200,
            ...              density=True)
            >>> plt.show()

       XRStools.xrs_calctools.uniform(low=0.0, high=1.0, size=None)
              Draw samples from a uniform distribution.

              Samples are uniformly distributed over the half-open interval [low, high) (includes
              low, but excludes high).  In other words, any value within the  given  interval  is
              equally likely to be drawn by uniform.

              NOTE:
                 New  code  should  use  the  uniform method of a default_rng() instance instead;
                 please see the random-quick-start.

              low    float or array_like  of  floats,  optional  Lower  boundary  of  the  output
                     interval.   All  values generated will be greater than or equal to low.  The
                     default value is 0.

              high   float or array_like of floats Upper boundary of the  output  interval.   All
                     values  generated  will be less than or equal to high.  The default value is
                     1.0.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if low  and  high  are  both  scalars.   Otherwise,
                     np.broadcast(low, high).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized uniform distribution.

               randint :
                      Discrete uniform distribution, yielding integers.

               random_integers :
                      Discrete uniform distribution over the closed interval [low, high].

               random_sample :
                      Floats uniformly distributed over [0, 1).

               random :
                      Alias for random_sample.

               rand :
                      Convenience function that accepts dimensions as input, e.g., rand(2,2)
                      would generate a 2-by-2 array of floats, uniformly distributed over [0, 1).

              Generator.uniform: which should be used for new code.

              The probability density function of the uniform distribution is

                                            p(x) = \frac{1}{b - a}

              anywhere within the interval [a, b), and zero elsewhere.

              When high == low, values of low will be returned.  If high < low, the  results  are
              officially  undefined  and  may eventually raise an error, i.e. do not rely on this
              function to behave when passed arguments satisfying that inequality condition.  The
              high  limit  may  be included in the returned array of floats due to floating-point
              rounding in the equation low + (high-low) * random_sample(). For example:

              >>> x = np.float32(5*0.99999999)
              >>> x
              5.0

              Draw samples from the distribution:

              >>> s = np.random.uniform(-1,0,1000)

              All values are within the given interval:

              >>> np.all(s >= -1)
              True
              >>> np.all(s < 0)
              True

              Display the histogram of the samples, along with the probability density function:

              >>> import matplotlib.pyplot as plt
              >>> count, bins, ignored = plt.hist(s, 15, density=True)
              >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
              >>> plt.show()

       XRStools.xrs_calctools.vaspBoxParser(filename)
               groTrajecParser Parses a gromacs GRO-style file for the xyzBox class.

       XRStools.xrs_calctools.vaspTrajecParser(filename, min_boxes=0, max_boxes=1000)
               groTrajecParser Parses a gromacs GRO-style file for the xyzBox class.

       XRStools.xrs_calctools.vonmises(mu, kappa, size=None)
              Draw samples from a von Mises distribution.

              Samples are drawn from a von  Mises  distribution  with  specified  mode  (mu)  and
              dispersion (kappa), on the interval [-pi, pi].

              The  von  Mises  distribution (also known as the circular normal distribution) is a
              continuous probability distribution on the unit circle.  It may be  thought  of  as
              the circular analogue of the normal distribution.

              NOTE:
                 New  code  should  use  the vonmises method of a default_rng() instance instead;
                 please see the random-quick-start.

              mu     float or array_like of floats Mode ("center") of the distribution.

              kappa  float or array_like of floats Dispersion of the distribution, has to be >=0.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if mu  and  kappa  are  both  scalars.   Otherwise,
                     np.broadcast(mu, kappa).size samples are drawn.

              out    ndarray   or   scalar   Drawn  samples  from  the  parameterized  von  Mises
                     distribution.

              scipy.stats.vonmises
                     probability density function, distribution, or cumulative density  function,
                     etc.

              Generator.vonmises: which should be used for new code.

              The probability density for the von Mises distribution is

                            p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},

              where  \mu  is  the mode and \kappa the dispersion, and I_0(\kappa) is the modified
              Bessel function of order 0.

              The  von  Mises  is  named  for  Richard  Edler  von  Mises,  who   was   born   in
              Austria-Hungary,  in what is now the Ukraine.  He fled to the United States in 1939
              and became a professor at Harvard.  He worked in probability theory,  aerodynamics,
              fluid mechanics, and philosophy of science.

       [1]  Abramowitz,  M.  and  Stegun,  I. A. (Eds.). "Handbook of Mathematical Functions with
            Formulas, Graphs, and Mathematical Tables, 9th printing," New York: Dover, 1972.

       [2]  von Mises, R.,  "Mathematical  Theory  of  Probability  and  Statistics",  New  York:
            Academic Press, 1964.

            Draw samples from the distribution:

            >>> mu, kappa = 0.0, 4.0 # mean and dispersion
            >>> s = np.random.vonmises(mu, kappa, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> from scipy.special import i0
            >>> plt.hist(s, 50, density=True)
            >>> x = np.linspace(-np.pi, np.pi, num=51)
            >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
            >>> plt.plot(x, y, linewidth=2, color='r')
            >>> plt.show()

       XRStools.xrs_calctools.wald(mean, scale, size=None)
              Draw samples from a Wald, or inverse Gaussian, distribution.

              As  the  scale  approaches infinity, the distribution becomes more like a Gaussian.
              Some references claim that the Wald is an inverse Gaussian with mean  equal  to  1,
              but this is by no means universal.

              The  inverse  Gaussian  distribution  was first studied in relationship to Brownian
              motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian because there  is  an
              inverse relationship between the time to cover a unit distance and distance covered
              in unit time.

              NOTE:
                 New code should use the wald method of a default_rng() instance instead;  please
                 see the random-quick-start.

              mean   float or array_like of floats Distribution mean, must be > 0.

              scale  float or array_like of floats Scale parameter, must be > 0.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if mean and scale are both scalars.  Otherwise,
                     np.broadcast(mean, scale).size samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Wald distribution.

              Generator.wald: which should be used for new code.

              The probability density function for the Wald distribution is

                             P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}\, e^{\frac{-scale(x-mean)^2}{2\cdot mean^2 x}}

               As noted above, the inverse Gaussian distribution first arose from attempts to model
              Brownian  motion.  It  is  also  a competitor to the Weibull for use in reliability
              modeling and modeling stock returns and interest rate processes.

       [1]  Brighton            Webs             Ltd.,             Wald             Distribution,
            https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp

        [2]  Chhikara, Raj S., and Folks, J. Leroy, "The Inverse  Gaussian  Distribution:  Theory,
             Methodology, and Applications", CRC Press, 1988.

       [3]  Wikipedia,                "Inverse               Gaussian               distribution"
            https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution

            Draw values from the distribution and plot the histogram:

            >>> import matplotlib.pyplot as plt
            >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)
            >>> plt.show()

       XRStools.xrs_calctools.weibull(a, size=None)
              Draw samples from a Weibull distribution.

              Draw samples from a 1-parameter Weibull distribution with the given shape parameter
              a.

                                              X = (-ln(U))^{1/a}

              Here, U is drawn from the uniform distribution over (0,1].

              The  more common 2-parameter Weibull, including a scale parameter \lambda is just X
              = \lambda(-ln(U))^{1/a}.

              NOTE:
                 New code should use the weibull method  of  a  default_rng()  instance  instead;
                 please see the random-quick-start.

              a      float  or array_like of floats Shape parameter of the distribution.  Must be
                     nonnegative.

              size   int or tuple of ints, optional Output shape.  If the given shape  is,  e.g.,
                     (m,  n,  k), then m * n * k samples are drawn.  If size is None (default), a
                     single value is returned if a  is  a  scalar.   Otherwise,  np.array(a).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Weibull distribution.

               scipy.stats.weibull_max, scipy.stats.weibull_min, scipy.stats.genextreme, gumbel

               Generator.weibull: which should be used for new code.

              The Weibull (or Type III asymptotic extreme value distribution for smallest values,
              SEV  Type  III,  or  Rosin-Rammler  distribution)  is one of a class of Generalized
              Extreme Value (GEV) distributions used in modeling extreme  value  problems.   This
              class includes the Gumbel and Frechet distributions.

              The probability density for the Weibull distribution is

                                 p(x) = \frac{a}{\lambda}\left(\frac{x}{\lambda}\right)^{a-1} e^{-(x/\lambda)^a},

              where a is the shape and \lambda the scale.

              The function has its peak (the mode) at \lambda(\frac{a-1}{a})^{1/a}.

              When a = 1, the Weibull distribution reduces to the exponential distribution.

       [1]  Waloddi Weibull, Royal Technical University, Stockholm, 1939 "A Statistical Theory Of
            The Strength Of Materials", Ingeniorsvetenskapsakademiens Handlingar  Nr  151,  1939,
            Generalstabens Litografiska Anstalts Forlag, Stockholm.

       [2]  Waloddi Weibull, "A Statistical Distribution Function of Wide Applicability", Journal
            Of Applied Mechanics ASME Paper 1951.

       [3]  Wikipedia, "Weibull distribution", https://en.wikipedia.org/wiki/Weibull_distribution

            Draw samples from the distribution:

            >>> a = 5. # shape
            >>> s = np.random.weibull(a, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> x = np.arange(1,100.)/50.
            >>> def weib(x,n,a):
            ...     return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)

            >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000))
            >>> x = np.arange(1,100.)/50.
            >>> scale = count.max()/weib(x, 1., 5.).max()
            >>> plt.plot(x, weib(x, 1., 5.)*scale)
            >>> plt.show()

       XRStools.xrs_calctools.writeFDMNESinput_file(xyzAtoms, fname, Filout, Range, Radius, Edge,
       NRIXS, Absorber, Green=False, SCF=False)
              writeFDMNESinput_file Writes an input file to be used for FDMNES.

       XRStools.xrs_calctools.writeFEFFinput_arb(fname, headerfile, xyzBox, exatom, edge)
              writeFEFFinput_arb

       XRStools.xrs_calctools.writeMD1Input(fname, box, headerfile, exatomNo=0)
               writeWFN1input Writes an input for cp.x by Quantum ESPRESSO for electronic wave
               function minimization.

       XRStools.xrs_calctools.writeOCEAN_XESInput(fname, box, headerfile, exatomNo=0)
               writeOCEAN_XESInput Writes an input for an OCEAN XES calculation for 17-molecule
               water boxes.

       XRStools.xrs_calctools.writeOCEANinput(fname, headerfile, xyzBox, exatom, edge, subshell)
              writeOCEANinput

       XRStools.xrs_calctools.writeOCEANinput_arb(fname,   headerfile,   xyzBox,   exatom,  edge,
       subshell)
              writeOCEANinput

       XRStools.xrs_calctools.writeOCEANinput_full(fname, xyzBox, exatom, edge, subshell)
              Writes a complete OCEAN input file.

              Args:

                     • fname     (str): Filename for the input file to be written.

                     • xyzBox (xyzBox): Instance of the xyzBox class  to  be  converted  into  an
                       OCEAN input file.

                     • exatom    (str): Atomic symbol for the excited atom.

                     • edge       (int):  Integer  defining  which  shell  to  excite (e.g. 0 for
                       K-shell, 1 for L, etc.).

                     • subshell  (int): Integer defining which sub-shell to excite ( e.g.  0  for
                       s, 1 for p, etc.).
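
               A minimal usage sketch (the three-atom water geometry, box length and output
               file name below are only illustrative assumptions; edge=0 and subshell=0 select
               the K-shell/s sub-shell as documented above):

               >>> import numpy as np
               >>> from XRStools import xrs_calctools
               >>> # build a small, hypothetical water box from xyzAtom instances
               >>> atoms = [xrs_calctools.xyzAtom('O', np.array([0.00, 0.00, 0.00]), 0),
               ...          xrs_calctools.xyzAtom('H', np.array([0.96, 0.00, 0.00]), 1),
               ...          xrs_calctools.xyzAtom('H', np.array([-0.24, 0.93, 0.00]), 2)]
               >>> box = xrs_calctools.xyzBox(atoms, boxLength=9.86)
               >>> # excite the oxygen K-shell (edge=0, subshell=0)
               >>> xrs_calctools.writeOCEANinput_full('ocean.in', box, 'O', 0, 0)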

       XRStools.xrs_calctools.writeOCEANinput_new(fname,   headerfile,   xyzBox,   exatom,  edge,
       subshell)
              writeOCEANinput

       XRStools.xrs_calctools.writePWinuptFile(fname, box, param_dict)
              writePWinuptFile

       XRStools.xrs_calctools.writeRelXYZfile(filename,  n_atoms,  boxLength,  title,   xyzAtoms,
       inclAtomNames=True)

       XRStools.xrs_calctools.writeWFN1waterInput(fname, box, headerfile, exatomNo=0)
               writeWFN1input Writes an input for cp.x by Quantum ESPRESSO for electronic wave
               function minimization.

       XRStools.xrs_calctools.writeXYZfile(filename, numberOfAtoms, title, list_of_xyzAtoms)

       XRStools.xrs_calctools.writeXYZtrajectory(filename, boxes)

       class XRStools.xrs_calctools.xyzAtom(name, coordinates, number)
              Bases: object

              xyzAtom

              Class to hold information about and manipulate a single atom in xyz-style format.

              Args. :

                     • name (str): Atomic symbol.

                     • coordinates (np.array): Array of xyz-coordinates.

                     • number (int): Integer, e.g. number of atom in a cluster.

              getAnglePBCarb(atom2, atom3, lattice, lattice_inv, degrees=True)
                     get_angle Return angle between the three given atoms (as seen from atom2).

              getCoordinates()

              getDist(atom)

              getDistPBCarb(atom, lattice, lattice_inv)

              getNorm()

              load_spectrum(file_name)

              load_spectrum_all_pol(prefix, num_pols, printing=False)

              normalize_spectrum(normrange)

              translateSelf(vector)

              translateSelf_arb(lattice, lattice_inv, vector)
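
               A minimal sketch of creating and manipulating single atoms (the coordinates are
               arbitrary, and getDist is assumed here to return the Euclidean distance in the
               same units as the input coordinates):

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import xyzAtom
               >>> o_atom = xyzAtom('O', np.array([0.00, 0.00, 0.00]), 0)
               >>> h_atom = xyzAtom('H', np.array([0.96, 0.00, 0.00]), 1)
               >>> d = o_atom.getDist(h_atom)                       # O-H separation, here 0.96
               >>> h_atom.translateSelf(np.array([0.0, 0.5, 0.0]))  # shift the H atom by a vector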

       class XRStools.xrs_calctools.xyzBox(xyzAtoms, boxLength=None, title=None)
              Bases: object

              xyzBox

              Class to hold information about and manipulate a xyz-periodic cubic box.

              Args.:

                      • xyzAtoms (list): List of instances of the xyzAtom class that make up  the
                        box.

                     • boxLength (float): Box length.

              changeOHBondlength(fraction, oName='O', hName='H')
                     changeOHBondlength  Changes all OH covalent bond lengths inside the box by a
                     fraction.

              count_contact_pairs(name_1, name_2, cutoff, counter_name='contact_pair')

              count_hbonds(Roocut=3.6,   Rohcut=2.4,   Aoooh=30.0,    counter_name='num_H_bonds',
              counter_name2='H_bond_angles')
                      count_hbonds Counts the number of hydrogen bonds around all oxygen atoms and
                      sets that number as an attribute on the corresponding xyzAtom.

              count_neighbors(name1,        name2,        cutoff_low=0.0,        cutoff_high=2.0,
              counter_name='num_OO_shell')
                     count_neighbors

                     Counts number of neighbors (of name2) around atom of name1.

                     Args:

                            • name1         (str): Name of first type of atom.

                            • name2         (str): Name of second type of atom.

                            • cutoff_low  (float): Lower cutoff (Angstrom).

                            • cutoff_high (float): Upper cutoff (Angstrom).

                             • counter_name  (str): Attribute name under which the  result  should
                               be saved.

              deleteTip4pCOM()
                      deleteTip4pCOM Deletes the fictitious atoms used in the TIP4P water model.

              findMethAndHexMolecules(CO_cut=1.6, CH_cut=1.2, OH_cut=1.2, CC_cut=1.7)
                     CH3OH

              findMethanolMolecules(CO_cut=1.6, CH_cut=1.2, OH_cut=1.2)
                     CH3OH

              find_hydroniums(OH_cutoff=1.5)
                     find_hydroniums Returns a list of hydronium molecules.

              find_hydroxides(OH_cutoff=1.5)
                     find_hydroxides Returns a list of hydroxide molecules.

              find_tmao_molecules_arb(CH_cut=1.2, CN_cut=1.6, NO_cut=1.5, CC_cut=2.5)
                     find_tmao_molecules Returns a list of TMAO molecules.

              find_urea_molecules_arb(NH_cut=1.2, CN_cut=1.6, CO_cut=1.5)
                     find_urea_molecules Returns a list of Urea molecules.

              getCoordinates()
                     getCoordinates Return coordinates of all atoms in the cluster.

              getDistVectorPBC_arb(atom1, atom2)
                     getDistVectorPBC_arb

                     Calculates  the  distance  vector  between  two  atoms  from  an   arbitrary
                     simulation box using the minimum image convention.

                      Args:  atom1 (obj): Instance of the xyzAtom class.  atom2 (obj): Instance of
                             the xyzAtom class.

                     Returns:
                            The distance vector between the two atoms (np.array).

              getDistancePBC_arb(atom1, atom2)
                     getDistancePBC_arb Calculates the distance of two atoms  from  an  arbitrary
                     simulation box using the minimum image convention.

                      Args:  atom1 (obj): Instance of the xyzAtom class.  atom2 (obj): Instance of
                             the xyzAtom class.

                     Returns:
                            The distance between the two atoms.

              getTetraParameter()
                      getTetraParameter Returns a list of tetrahedrality parameters, according  to
                      NATURE, VOL 409, 18 JANUARY (2001).

                     UNTESTED!!!

              get_OO_neighbors(Roocut=3.6)
                      get_OO_neighbors Returns a list of numbers of nearest oxygen neighbors within
                      radius 'Roocut'.

              get_OO_neighbors_pbc(Roocut=3.6)
                     get_OO_neighbors_pbc Returns a list of numbers of nearest oxygen atoms, uses
                     periodic boundary conditions.

              get_angle(atom1, atom2, atom3, degrees=True)
                     get_angle Return angle between the three given atoms (as seen from atom2).

              get_angle_arb(atom1, atom2, atom3, degrees=True)
                     get_angle Return angle between the three given atoms (as seen from atom2).

              get_atoms_by_name(name)
                     get_atoms_by_name Return a list of all xyzAtoms of a given name 'name'.

              get_atoms_from_molecules()
                     get_atoms_from_molecules  Parses  all  atoms  inside  self.xyzMolecules into
                     self.xyzAtoms (useful for turning an xyzMolecule into an xyzBox).

              get_h2o_molecules(o_name='O', h_name='H')
                     get_h2o_molecules Finds all water molecules inside the box and collects them
                     inside the self.xyzMolecules attribute.

              get_h2o_molecules_arb(o_name='O', h_name='H')

              get_hbonds(Roocut=3.6, Rohcut=2.4, Aoooh=30.0)
                     get_hbonds  Counts  the hydrogen bonds inside the box, returns the number of
                     H-bond donors and H-bond acceptors.

              multiplyBoxPBC(numShells)
                     multiplyBoxPBC Applies the periodic boundary conditions and  multiplies  the
                     box in shells around the original.

              multiplyBoxPBC_arb(lx=[- 1, 1], ly=[- 1, 1], lz=[- 1, 1])
                     multiplyBoxPBC_arb  Applies  the periodic boundary conditions and multiplies
                     the box in shells around the original. Works with arbitrary lattices.

              normalize_arb_spectrum(normrange, attribute)

              normalize_spectrum(normrange)

              scatterPlot()
                     scatterPlot Opens a plot window with a scatter-plot of  all  coordinates  of
                     the box.

              setBoxLength(boxLength, angstrom=True)
                     setBoxLength Set the box length.

              translateAtomsMinimumImage(lattice, lattice_inv)
                     translateAtomsMinimumImage

                     Brings  back  all  atoms  into  the  original  box  using  periodic boundary
                     conditions and minimal image convention.

              writeBox(filename)
                     writeBox Creates an xyz-style text file with all coordinates of the box.

              writeClusters(cenatom_name, number, cutoff, prefix, postfix='.xyz')
                     writeXYZclusters Write water clusters into files.

              writeClusters_arb(cenatom_name,    number,    cutoff,    prefix,    postfix='.xyz',
              test_box_multiplyer=1)
                     writeXYZclusters Write water clusters into files.

              writeFDMNESinput(fname, Filout, Range, Radius, Edge, NRIXS, Absorber)
                     writeFDMNESinput   Creates   an  input  file  to  be  used  for  q-dependent
                     calculations with FDMNES.

              writeH2Oclusters(cutoff, prefix, postfix='.xyz', o_name='O', h_name='H')
                     writeXYZclusters Write water clusters into files.

              writeMoleculeCluster(molAtomList, fname, cutoff=None, numH2Omols=None,  o_name='O',
              h_name='H', mol_center=None)
                     writeMoleculeCluster  Careful,  this  works  only  for  a single molecule in
                     water.

              writeOCEANinput(fname, headerfile, exatom, edge, subshell)
                     writeOCEANinput Creates an OCEAN input file based on the headerfile.

              writeRelBox(filename, inclAtomNames=True)
                     writeRelBox Writes all relative atom coordinates into a text file (useful as
                     OCEAN input).
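
               A minimal workflow sketch for a box, assuming 'atoms' is a list of xyzAtom
               instances prepared beforehand (e.g. returned by one of the parser functions in
               this module); the box length, cutoff values and file name are placeholders:

               >>> from XRStools.xrs_calctools import xyzBox
               >>> box = xyzBox(atoms, boxLength=12.4)            # 'atoms': list of xyzAtom objects
               >>> box.get_h2o_molecules(o_name='O', h_name='H')  # collect the water molecules
               >>> box.count_hbonds(Roocut=3.6, Rohcut=2.4, Aoooh=30.0)
               >>> box.writeBox('snapshot.xyz')                   # dump the box to an xyz file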

       class XRStools.xrs_calctools.xyzMolecule(xyzAtoms, title=None)
              Bases: object

              xyzMolecule

              Class to hold information about and manipulate an xyz-style molecule.

              Args.:

                      • xyzAtoms (list): List of instances of the xyzAtom class that make  up  the
                        molecule.

              appendAtom(Atom)
                      appendAtom Add an xyzAtom to the molecule.

              getCoordinates()
                     getCoordinates Return coordinates of all atoms in the cluster.

              getCoordinates_name(name)
                      getCoordinates_name Return coordinates of all atoms with 'name'.

              getGeometricCenter()
                     getGeometricCenter Return the geometric center of the xyz-molecule.

              getGeometricCenter_arb(lattice, lattice_inv)

              get_atoms_by_name(name)
                     get_atoms_by_name Return a list of all xyzAtoms of a given name 'name'.

              popAtom(xyzAtom)
                     popAtom Delete an xyzAtom from the molecule.

              scatterPlot()
                     scatterPlot Opens a plot window with a scatter-plot of  all  coordinates  of
                     the molecule.

              translateAtomsMinimumImage(lattice, lattice_inv, center=array([0., 0., 0.]))
                     translateAtomsMinimumImage

                     Brings  back  all  atoms  into  the  original  box  using  periodic boundary
                     conditions and minimal image convention.

              translateSelf(vector)
                     translateSelf Translate all atoms of the molecule by a vector 'vector'.

              writeXYZfile(fname)
                     writeXYZfile Creates an xyz-style text file  with  all  coordinates  of  the
                     molecule.
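
               A minimal sketch for a single molecule (a hypothetical CO molecule is used here;
               getGeometricCenter is assumed to return an np.array so that it can be negated for
               the translation):

               >>> import numpy as np
               >>> from XRStools.xrs_calctools import xyzAtom, xyzMolecule
               >>> mol = xyzMolecule([xyzAtom('C', np.array([0.00, 0.0, 0.0]), 0),
               ...                    xyzAtom('O', np.array([1.13, 0.0, 0.0]), 1)], title='CO')
               >>> center = mol.getGeometricCenter()   # assumed to be an np.array
               >>> mol.translateSelf(-center)          # move the geometric center to the origin
               >>> mol.writeXYZfile('co.xyz')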

       XRStools.xrs_calctools.xyzTrajecParser(filename, boxLength, firstBox=0, lastBox=- 1)
              Parses a Trajectory of xyz-files.

              Args:  filename (str): Filename of the xyz Trajectory file.

              Returns:
                      A list of xyzBoxes.
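
               A minimal sketch (the trajectory file name, box length and snapshot range are
               placeholders):

               >>> from XRStools.xrs_calctools import xyzTrajecParser
               >>> boxes = xyzTrajecParser('trajectory.xyz', boxLength=12.4, firstBox=0, lastBox=10)
               >>> n_snapshots = len(boxes)            # number of parsed snapshots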

       class XRStools.xrs_calctools.xyzTrajectory(xyzBoxes)
              Bases: object

              getRDF(atom1='O', atom2='O', MAXBIN=1000, DELR=0.01, RHO=1.0)

              getRDF2_arb(atom1='O', atom2='O', MAXBIN=1000, DELR=0.01, RHO=1.0)

              getRDF_arb(atom1='O', atom2='O', MAXBIN=1000, DELR=0.01, RHO=1.0)

              loadAXSFtraj(filename)

              writeRandBox(filename)

              writeXYZtraj(filename)

       XRStools.xrs_calctools.zipf(a, size=None)
              Draw samples from a Zipf distribution.

              Samples are drawn from a Zipf distribution with specified parameter a > 1.

               The Zipf distribution (also known as the zeta  distribution)  is  a  discrete
               probability distribution that satisfies Zipf's law: the frequency of an item is
               inversely proportional to its rank in a frequency table.

              NOTE:
                 New  code should use the zipf method of a default_rng() instance instead; please
                 see the random-quick-start.

              a      float or array_like of floats Distribution parameter. Must be  greater  than
                     1.

              size   int  or  tuple of ints, optional Output shape.  If the given shape is, e.g.,
                     (m, n, k), then m * n * k samples are drawn.  If size is None  (default),  a
                     single  value  is  returned  if  a  is a scalar. Otherwise, np.array(a).size
                     samples are drawn.

              out    ndarray or scalar Drawn samples from the parameterized Zipf distribution.

              scipy.stats.zipf
                     probability density function, distribution, or cumulative density  function,
                     etc.

              Generator.zipf: which should be used for new code.

              The probability density for the Zipf distribution is

                                        p(x) = \frac{x^{-a}}{\zeta(a)},

              where \zeta is the Riemann Zeta function.

              It  is  named  for  the  American linguist George Kingsley Zipf, who noted that the
              frequency of any word in a sample of a language is inversely  proportional  to  its
              rank in the frequency table.

       [1]  Zipf,  G.  K., "Selected Studies of the Principle of Relative Frequency in Language,"
            Cambridge, MA: Harvard Univ. Press, 1932.

            Draw samples from the distribution:

            >>> a = 2. # parameter
            >>> s = np.random.zipf(a, 1000)

            Display the histogram of the samples, along with the probability density function:

            >>> import matplotlib.pyplot as plt
            >>> from scipy import special

            Truncate s values at 50 so plot is interesting:

            >>> count, bins, ignored = plt.hist(s[s<50], 50, density=True)
            >>> x = np.arange(1., 50.)
            >>> y = x**(-a) / special.zetac(a)
            >>> plt.plot(x, y/max(y), linewidth=2, color='r')
            >>> plt.show()

   XRStools.xrs_extraction Module
       class XRStools.xrs_extraction.HF_dataset(data, formulas, stoich_weights, edges)
              Bases: object

              dataset A class to hold all information  from  HF  Compton  profiles  necessary  to
              subtract background from the experiment.

              get_C_edges_av(element, edge, columns)

              get_C_total(columns)

              get_J_total_av(columns)

       class  XRStools.xrs_extraction.edge_extraction(exp_data,  formulas, stoich_weights, edges,
       prenormrange=[5, inf])
              Bases: object

               edge_extraction Class to distill core edge  spectra  from  x-ray  Raman  scattering
               experiments. A minimal usage sketch follows the method descriptions below.

              analyzerAverage(roi_numbers, errorweighing=True)
                     analyzerAverage  Averages  signals  from  several crystals before background
                     subtraction.

                     Args:

                        •

                          roi_numbers
                                 list, str list of ROI numbers to average  over  of  keyword  for
                                 analyzer chamber (e.g. 'VD','VU','VB','HR','HL','HB')

                        •

                          errorweighing
                                 boolean  (True  by  default) keyword if error weighing should be
                                 used for the averaging or not

              removeCorePearsonAv(element,    edge,    range1,    range2,     weights=[2,     1],
              HFcore_shift=0.0,      guess=None,      scaling=None,      return_background=False,
              show_plots=True)
                     removeCorePearsonAv

                     guess (list): [position, FWHM, shape, intensity, ax, b, scale  ]

              removeCorePearsonAv_new(element,   edge,    range1,    range2,    HFcore_shift=0.0,
              guess=None, scaling=None, return_background=False, reg_lam=10)
                     removeCorePearsonAv_new

              removePearsonAv(element,  edge,  range1,  range2=None,  weights=[2, 1], guess=None,
              scale=1.0, HFcore_shift=0.0)
                     removePearsonAv

              removePolyCoreAv(element, edge, range1, range2, weights=[1,  1],  guess=[1.0,  0.0,
              0.0], ewindow=100.0)
                     removePolyCoreAv  Subtract  a polynomial from averaged data guided by the HF
                     core Compton profile.

                     Args

                        • element : str String (e.g. 'Si') for the element you want to work on.

                        • edge: str String (e.g. 'K' or 'L23') for the edge to extract.

                        • range1 : list List with start and end value for fit-region 1.

                        • range2 : list List with start and end value for fit-region 2.

                         • weights : list of ints List with weights for the respective fit-regions
                           1 and 2. Default is [1,1].

                         • guess : list List of starting values for the fit.  Default  is
                           [1.0,0.0,0.0] (i.e. a quadratic function). Change the number of  guess
                           values to get other degrees of polynomials (i.e.  [1.0,  0.0]  for  a
                           constant, [1.0,0.0,0.0,0.0] for a cubic, etc.). The first guess  value
                           passed is for scaling of the experimental data to the HF core  Compton
                           profile.

                        • ewindow: float Width of energy window used  in  the  plot.  Default  is
                          100.0.

              save_average_Sqw(filename, emin=None, emax=None, normrange=None)
                      save_average_Sqw Save the S(q,w) into an ASCII file  (energy  loss,  S(q,w),
                      Poisson errors).

                     Args:

                            • filename : str Filename for the ascii file.

                            • emin : float Use this to save only part of the spectrum.

                            • emax : float Use this to save only part of the spectrum.

                            • normrange  :  list  of  floats  E_start  and  E_end  for   possible
                              area-normalization before saving.
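
               A minimal end-to-end sketch for this class, assuming exp_data (an experiment
               object, e.g. prepared with the xrs_read module) and edges (the edge specification
               expected by the constructor) already exist; the chemical formula, ROI keyword
               'VD', fit ranges and output file name are illustrative placeholders only:

               >>> from XRStools import xrs_extraction
               >>> extr = xrs_extraction.edge_extraction(exp_data, ['H2O'], [1.0], edges)
               >>> extr.analyzerAverage('VD', errorweighing=True)   # average one analyzer chamber
               >>> extr.removePolyCoreAv('O', 'K', [500.0, 532.0], [550.0, 600.0], weights=[1, 1])
               >>> extr.save_average_Sqw('O_Kedge_sqw.dat', emin=500.0, emax=600.0)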

       class XRStools.xrs_extraction.functorObjectV(y, eloss, hfcore, lam)
              Bases: object

              funct(a, eloss)

       XRStools.xrs_extraction.map_chamber_names(name)
              map_chamber_names Maps names of chambers to range of ROI numbers.

       class XRStools.xrs_extraction.valence_CP
              Bases: object

              valence_CP  Class  to  organize  information  about  extracted experimental valence
              Compton profiles.

              get_asymmetry()

              get_pzscale()

   XRStools.xrs_imaging Module
   XRStools.xrs_read Module
   XRStools.xrs_scans Module
   XRStools.xrs_ComptonProfiles Module
       class XRStools.xrs_ComptonProfiles.AtomProfile(element, filename, stoichiometry=1.0)
              Bases: object

              AtomProfile

               Class to construct and handle the Hartree-Fock atomic Compton  Profile  of  a
               single atom.

              Attributes:

                     • filename : string Path and filename to the HF profile table.

                     • element : string Element symbol as in the periodic table.

                     • elementNr : int Number of the element as in the periodic table.

                     • shells : list of strings Names of the shells.

                     • edges : list List of edge onsets (eV).

                     • C_total : np.array Total core Compton profile.

                     • J_total : np.array Total Compton profile.

                     • V_total : np.array Total valence Compton profile.

                     • CperShell : dict. of np.arrays Core Compton profile per electron shell.

                     • JperShell : dict. of np.arrays Total Compton profile per electron shell.

                     • VperShell : dict. of np.arrays Valence Compton profile per electron shell.

                     • stoichiometry : float, optional Stoichiometric weight (default is 1.0).

                     • atomic_weight : float Atomic weight.

                     • atomic_density : float Density (g/cm**3).

                     • twotheta : float Scattering angle 2Th (degrees).

                     • alpha : float Incident angle (degrees).

                     • beta : float Exit angle (degrees).

                     • thickness : float Sample thickness (cm).

              absorptionCorrectProfiles(alpha, thickness, geometry='transmission')
                     absorptionCorrectProfiles

                     Apply absorption correction to the Compton profiles on energy loss scale.

                     Args:

                            • alpha :float Angle of incidence (degrees).

                            • beta  :  float  Exit  angle  for the scattered x-rays (degrees). If
                              'beta' is negative, transmission geometry is assumed, if 'beta'  is
                              positive, reflection geometry.

                            • thickness : float Sample thickness.

              get_elossProfiles(E0, twotheta, correctasym=None, valence_cutoff=20.0)
                      get_elossProfiles Convert the HF Compton profile onto the energy loss scale.

                      Args: E0 : float
                         Analyzer energy, energy of the scattered x-rays.

                     twotheta
                            float or list of floats Scattering angle 2Th.

                     correctasym
                            float, optional Scaling factor to be multiplied to the asymmetry.

                     valence_cutoff
                            float,  optional Energy cut off as to what is considered the boundary
                            between core and valence.

              get_stoichiometry()
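
               A minimal sketch (the path to the HF profile table, the analyzer energy and the
               scattering angle are placeholders; E0 is assumed to be in keV and twotheta in
               degrees, as for the elossProfile function below):

               >>> from XRStools import xrs_ComptonProfiles
               >>> ap = xrs_ComptonProfiles.AtomProfile('O', '/path/to/ComptonProfiles.dat')
               >>> ap.get_elossProfiles(E0=9.7, twotheta=120.0)
               >>> profiles = (ap.J_total, ap.C_total, ap.V_total)  # total, core, valence profiles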

       class XRStools.xrs_ComptonProfiles.ComptonProfiles(element)
              Bases: object

              Class for multiple HF Compton profiles.

              This class should hold one or more instances of the ComptonProfile class  and  have
              methods  to  return profiles from single atoms, single shells, all atoms. It should
              be able to apply corrections etc. on those...

              Attributes:

                     • element (string): Element symbol as in the periodic table.

                     • elementNr (int) : Number of the element as in the periodic table.

                     • shells (list)   :

                     • edges (list)    :

                     • C (np.array)    :

                     • J (np.array)    :

                     • V (np.array)    :

                     • CperShell (dict. of np.arrays):

                     • JperShell (dict. of np.arrays):

                     • VperShell (dict. of np.arrays):

       class XRStools.xrs_ComptonProfiles.FormulaProfile(formula, filename, weight=1)
              Bases: object

              FormulaProfile

              Class to construct and handle Hartree-Fock  atomic  Compton  Profile  of  a  single
              chemical compound.

              Attributes

                     • filename : string Path and filename to Biggs database.

                     • formula  :  string Chemical sum formula for the compound of interest (e.g.
                       'SiO2' or 'H2O').

                     • elements : list of strings  List  of  atomic  symbols  that  make  up  the
                       chemical sum formula.

                      • stoichiometries : list of integers List of the stoichiometric weights  for
                        each of the elements in the list elements.

                     • element_Nrs : list of integers List of atomic numbers for each element  in
                       the elements list.

                     • AtomProfiles  : list of AtomProfiles List of instances of the AtomProfiles
                       class for each element in the list.

                     • eloss : np.ndarray Energy loss scale for the Compton profiles.

                     • C_total : np.ndarray Core HF Compton profile (one column per 2Th).

                     • J_total : np.ndarray Total HF Compton profile (one column per 2Th).

                     • V_total :np.ndarray Valence HF Compton profile (one column per 2Th).

                     • E0 : float Analyzer energy (keV).

                     • twotheta : float, list, or np.ndarray  Value  or  list/np.ndarray  of  the
                       scattering angle.

              get_correctecProfiles(densities, alpha, beta, samthick)

              get_elossProfiles(E0, twotheta, correctasym=None, valence_cutoff=20.0)

              get_stoichWeight()
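
               A minimal sketch for a chemical compound (the table path, analyzer energy and
               scattering angles are placeholders; passing a list of angles is assumed to yield
               one column per angle in the profile arrays, as described above):

               >>> from XRStools import xrs_ComptonProfiles
               >>> fp = xrs_ComptonProfiles.FormulaProfile('SiO2', '/path/to/ComptonProfiles.dat')
               >>> fp.get_elossProfiles(E0=9.7, twotheta=[35.0, 60.0, 120.0])
               >>> n_angles = fp.J_total.shape[1]       # one column per scattering angle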

       class XRStools.xrs_ComptonProfiles.HFProfile(formulas, stoich_weights, filename)
              Bases: object

              HFProfile

              Class  to  construct  and  handle  Hartree-Fock  atomic  Compton  Profile of sample
              composed of several chemical compounds.

              Attributes

              get_elossProfiles(E0, twotheta, correctasym=None, valence_cutoff=20.0)

       XRStools.xrs_ComptonProfiles.HRcorrect(pzprofile, occupation, q)
              Returns the first order correction to filled 1s, 2s, and 2p Compton profiles.

              Implementation after Holm and Ribberfors (citation ...).

              Args:

                     • pzprofile (np.array): Compton profile (e.g. tabulated from  Biggs)  to  be
                       corrected (2D matrix).

                     • occupation (list): electron configuration.

                     • q (float or np.array): momentum transfer in [a.u.].

              Returns:

                     • asymmetry  (np.array):   asymmetries  to  be  added  to  the  raw profiles
                       (normalized to the number of electrons on pz scale)

       XRStools.xrs_ComptonProfiles.PzProfile(element, filename)
               Returns tabulated HF Compton profiles.

              Reads in tabulated HF Compton profiles from the Biggs paper, interpolates them, and
              normalizes them to the # of electrons in the shell.

              Args:

                     • element (string):  element symbol (e.g. 'Si', 'Al', etc.)

                     • filename (string): absolute path and filename to tabulated profiles

              Returns:

                     • CP_profile (np.array): Matrix of the Compton profile * 1. column: pz-scale
                       * 2. ... n. columns: Compton profile of nth shell

                     • binding_energy (list): binding energies of shells

                     • occupation_num (list): number of electrons in the according shells
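
               A minimal sketch combining PzProfile with the HRcorrect function  documented
               above (the table path and the momentum transfer are placeholders; the returned
               profile matrix is assumed to be usable directly as the pzprofile argument):

               >>> from XRStools import xrs_ComptonProfiles
               >>> CP, binding_e, occupation = xrs_ComptonProfiles.PzProfile(
               ...     'Si', '/path/to/ComptonProfiles.dat')
               >>> asym = xrs_ComptonProfiles.HRcorrect(CP, occupation, q=6.0)   # q in [a.u.]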

       class XRStools.xrs_ComptonProfiles.SqwPredict
              Bases: object

              Class to build a S(q,w) prediction based on HF Compton Profiles.

              Attributes:

                 • sampleStr (list of strings): one string per compound (e.g. ['C','SiO2'])

                 • concentrations (list  of  floats):  relative  compositional  weight  for  each
                   compound

       XRStools.xrs_ComptonProfiles.elossProfile(element,  filename,  E0,  tth, correctasym=None,
       valence_cutoff=20.0)
              Returns HF Compton profiles on energy loss scale.

               Uses the PzProfile function to read in Biggs HF profiles and  convert  them  onto
               the energy loss scale. The profiles are cut at the respective  electron  binding
              energies and are normalized to the f-sum rule (i.e. S(q,w) is in units of [1/eV]).

              Args:

                     • element (string): element symbol.

                     • filename  (string):  absolute  path  and  filename  to  tabulated  Compton
                       profiles.

                     • E0 (float): analyzer energy in [keV].

                     • tth (float): scattering angle two theta in [deg].

                     • correctasym (np.array): vector of scaling factors to be applied.

                     • valence_cutoff  (float):  energy value below which edges are considered as
                       valence

              Returns:

                     • enScale (np.array): energy loss scale in [eV]

                     • J_total (np.array): total S(q,w) in [1/eV]

                     • C_total (np.array): core contribution to S(q,w) in [1/eV]

                     • V_total (np.array): valence contribution to S(q,w) in [1/eV], the  valence
                       is defined by valence_cutoff

                     • q (np.array): momentum transfer in [a.u]

                      • J_shell (dict of np.arrays): dictionary of contributions for each  shell,
                        the keys are defined as in the Biggs table.

                     • C_shell (dict of np.arrays): same as J_shell for core contribution

                     • V_shell (dict of np.arrays): same as J_shell for valence contribution
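
               A minimal sketch (the table path, analyzer energy and scattering angle are
               placeholders; the return values are assumed to come in the order listed above):

               >>> from XRStools import xrs_ComptonProfiles
               >>> (enScale, J_total, C_total, V_total,
               ...  q, J_shell, C_shell, V_shell) = xrs_ComptonProfiles.elossProfile(
               ...     'O', '/path/to/ComptonProfiles.dat', E0=9.7, tth=120.0)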

       XRStools.xrs_ComptonProfiles.getAtomicDensity(Z)
              Returns the atomic density.

       XRStools.xrs_ComptonProfiles.getAtomicWeight(Z)
              Returns the atomic weight.

       XRStools.xrs_ComptonProfiles.list_duplicates(seq)

       XRStools.xrs_ComptonProfiles.mapShellNames(shell_str, atomicNumber)
              mapShellNames

              Translates to and from spectroscopic edge notation and the convention of the  Biggs
              database.

              Args:

                     • shell_str  : string Spectroscopic symbol to be converted to Biggs database
                       convention.

                     • atomicNumber : int Z for the atom in question.

       XRStools.xrs_ComptonProfiles.parseChemFormula(ChemFormula)

       XRStools.xrs_ComptonProfiles.trapz_weights(x)

   XRStools.xrs_fileIO Module
       XRStools.xrs_fileIO.EdfRead(fname)

       XRStools.xrs_fileIO.FabioEdfRead(fname)
              Returns the EDF-data using FabIO.

       XRStools.xrs_fileIO.PrepareEdfMatrix(scan_length, num_pix_x, num_pix_y)
              Returns np.zeros of the shape of the detector.

       XRStools.xrs_fileIO.PrepareEdfMatrix_TwoImages(scan_length, num_pix_x, num_pix_y)
              Returns np.zeros for old data (horizontal and vertical Maxipix images in  different
              files).

       XRStools.xrs_fileIO.PyMcaEdfRead(fname)
              Returns the EDF-data using PyMCA.

       XRStools.xrs_fileIO.PyMcaSpecRead(filename, nscan)
              Returns data, counter-names, and EDF-files using PyMCA.

       XRStools.xrs_fileIO.PyMcaSpecRead_my(filename, nscan)
              Returns data, counter-names, and EDF-files using PyMCA.

       XRStools.xrs_fileIO.ReadEdfImages(ccdcounter,   num_pix_x,   num_pix_y,  path,  EdfPrefix,
       EdfName, EdfPostfix)
               Reads a series of EDF-images and returns them in a 3D Numpy  array  (horizontal  and
              vertical Maxipix images in different files).

       XRStools.xrs_fileIO.ReadEdfImages_PyMca(ccdcounter, path, EdfPrefix, EdfName, EdfPostfix)
               Reads a series of EDF-images and returns them in a 3D Numpy array  (horizontal
               and vertical Maxipix images in different files).

       XRStools.xrs_fileIO.ReadEdfImages_TwoImages(ccdcounter,   num_pix_x,   num_pix_y,    path,
       EdfPrefix_h, EdfPrefix_v, EdfNmae, EdfPostfix)
               Reads a series of EDF-images and returns them in a 3D Numpy array  (horizontal
               and vertical Maxipix images in different files).

       XRStools.xrs_fileIO.ReadEdfImages_my(ccdcounter, path, EdfPrefix, EdfName, EdfPostfix)
               Reads a series of EDF-images and returns them in a 3D Numpy array  (horizontal
               and vertical Maxipix images in different files).

       XRStools.xrs_fileIO.ReadEdf_justFirstImage(ccdcounter,     path,    EdfPrefix,    EdfName,
       EdfPostfix)

       XRStools.xrs_fileIO.ReadScanFromFile(fname)
              Returns a scan stored in a Numpy archive.

       XRStools.xrs_fileIO.SilxSpecRead(filename, nscan)
              Returns data, motors, counter-names, and labels using Silx.

       XRStools.xrs_fileIO.SpecRead(filename, nscan)
              Parses a SPEC file and returns a specified scan.

              Args:

                      • filename (string): SPEC file name (incl. path)

                     • nscan (int): Number of the desired scan.

              Returns:

                     • data (np.array): array of the data from the specified scan.

                     • motors (list): list  of  all  motor  positions  from  the  header  of  the
                       specified scan.

                     • counters  (dict):  all  counters in a dictionary with the counter names as
                       keys.
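
               A minimal usage sketch, assuming the three documented return values come back
               in the listed order (the file name and scan number below are placeholders):

                  from XRStools import xrs_fileIO

                  # read scan 5 from a SPEC file; returns the data block, the motor
                  # positions from the scan header, and a dictionary of counters
                  data, motors, counters = xrs_fileIO.SpecRead('sample.spec', 5)
                  print(data.shape, len(motors), list(counters.keys())[:5])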

       XRStools.xrs_fileIO.WriteScanToFile(fname, data, motors, counters, edfmats)
              Writes a scan into a Numpy archive.

       XRStools.xrs_fileIO.dump_on_file_list(filename)

       XRStools.xrs_fileIO.myEdfRead(filename)
              Returns EDF-data, if PyMCA is not installed (this is slow).

       XRStools.xrs_fileIO.readbiggsdata(filename, element)
               Reads the Hartree-Fock profile of element 'element' from the values  tabulated
               by Biggs et al. (Atomic Data and Nuclear Data Tables 16, 201-309 (1975)) as
               provided by the DABAX library
               (http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/ComptonProfiles.dat).

               input:

                      • filename = path to the ComptonProfiles.dat file (the file should  be
                        distributed with this package)

                      • element  = string of the element name

               returns:

                      • data = the data for the according element as in the file:

                             • #UD  Columns:

                             • #UD  col1: pz in atomic units

                             • #UD  col2: Total Compton profile (sum over the atomic
                               electrons)

                             • #UD  col3,...coln: Compton profile for the individual
                               sub-shells

                      • occupation = occupation numbers of the according shells

                      • bindingen  = binding energies of the according shells

                      • colnames   = strings of column names as used in the file

   XRStools.xrs_prediction Module
   XRStools.xrs_rois Module
   XRStools.xrs_utilities Module
       XRStools.xrs_utilities.Chi(chi, degrees=True)
              rotation around (1,0,0), pos sense

       XRStools.xrs_utilities.HRcorrect(pzprofile, occupation, q)
              Returns the first order correction to filled 1s, 2s, and 2p Compton profiles.

              Implementation after Holm and Ribberfors (citation ...).

              Args:

                     • pzprofile (np.array): Compton profile (e.g. tabulated from  Biggs)  to  be
                       corrected (2D matrix).

                     • occupation (list): electron configuration.

                     • q (float or np.array): momentum transfer in [a.u.].

              Returns:
                     asymmetry   (np.array):   asymmetries  to  be  added  to  the  raw  profiles
                     (normalized to the number of electrons on pz scale)

       XRStools.xrs_utilities.NNMFcost(x, A, F, C, F_up, C_up, n, k, m)
              NNMFcost Returns cost and gradient for NNMF with constraints.

       XRStools.xrs_utilities.NNMFcost_der(x, A, F, C, F_up, C_up, n, k, m)

       XRStools.xrs_utilities.NNMFcost_old(x, A, W, H, W_up, H_up)
              NNMFcost Returns cost and gradient for NNMF with constraints.

       XRStools.xrs_utilities.Omega(omega, degrees=True)
              rotation around (0,0,1), pos sense

       XRStools.xrs_utilities.Phi(phi, degrees=True)
              rotation around (0,1,0), neg sense

       XRStools.xrs_utilities.Rx(chi, degrees=True)
              Rx Rotation matrix for vector rotations around the [1,0,0]-direction.

              Args:

                     • chi   (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.

       XRStools.xrs_utilities.Ry(phi, degrees=True)
              Ry Rotation matrix for vector rotations around the [0,1,0]-direction.

              Args:

                     • phi   (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.

       XRStools.xrs_utilities.Rz(omega, degrees=True)
              Rz Rotation matrix for vector rotations around the [0,0,1]-direction.

              Args:

                     • omega (float) : Angle of rotation.

                     • degrees(bool) : Angle given in radians or degrees.

              Returns:

                     • 3x3 rotation matrix.
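
               Rx, Ry and Rz can be chained to build an arbitrary rotation; a short sketch
               with arbitrarily chosen angles, assuming the matrices are returned as NumPy
               arrays:

                  import numpy as np
                  from XRStools import xrs_utilities

                  # rotate a vector by 5 deg around x, 10 deg around y, 30 deg around z
                  R = xrs_utilities.Rz(30.0) @ xrs_utilities.Ry(10.0) @ xrs_utilities.Rx(5.0)
                  v_rot = R @ np.array([1.0, 0.0, 0.0])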

       XRStools.xrs_utilities.TTsolver1D(el_energy,  hkl=[6, 6, 0],  crystal='Si',  R=1.0,
       dev=array([-50., -49., -48., ..., 148., 149.]), alpha=0.0,
       chitable_prefix='/home/christoph/sources/XRStools/data/chitables/chitable_')
              TTsolver Solves the Takagi-Taupin equation for a bent crystal.

              This function is based on a Matlab implementation by  S.  Huotari  of  M.  Krisch's
              Fortran programs.

              Args:

                     • el_energy (float): Fixed nominal (working) energy in keV.

                     • hkl (array): Reflection order vector, e.g. [6, 6, 0]

                     • crystal (str): Crystal used (can be silicon 'Si' or 'Ge')

                     • R (float): Crystal bending radius in m.

                     • dev  (np.array):  Deviation  parameter  (in  arc.  seconds)  for which the
                       reflectivity curve should be calculated.

                      • alpha (float): Crystal asymmetry angle.

              Returns:

                     • refl (np.array): Reflectivity curve.

                     • e (np.array): Deviation from Bragg angle in meV.

                     • dev (np.array): Deviation from Bragg angle in microrad.
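
               A hedged example call, assuming the documented return values come back in the
               listed order; the default chitable_prefix points to the author's home
               directory, so a local chitables path (a placeholder below) would normally be
               passed explicitly:

                  from XRStools import xrs_utilities

                  # reflectivity of a Si(660) analyzer bent to R = 1 m at 9.7 keV
                  refl, e, dev = xrs_utilities.TTsolver1D(
                      9.7, hkl=[6, 6, 0], crystal='Si', R=1.0,
                      chitable_prefix='/path/to/chitables/chitable_')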

       XRStools.xrs_utilities.absCorrection(mu1,      mu2,      alpha,      beta,       samthick,
       geometry='transmission')
              absCorrection

              Calculates  absorption  correction  for  given  mu1 and mu2.  Multiply the measured
              spectrum with this correction factor.  This is a translation of Keijo  Hamalainen's
              Matlab function (KH 30.05.96).

              Args

                     • mu1 : np.array  Absorption coefficient for the incident energy in [1/cm].

                     • mu2 : np.array Absorption coefficient for the scattered energy in [1/cm].

                     • alpha : float Incident angle relative to plane normal in [deg].

                     • beta : float  Exit angle relative to plane normal [deg].

                     • samthick : float  Sample thickness in [cm].

                     • geometry  :  string,  optional  Key  word  for different sample geometries
                       ('transmission', 'reflection', 'sphere').  If geometry is set to 'sphere',
                       no angular dependence is assumed.

              Returns

                     • ac  :  np.array  Absorption  correction  factor.  Multiply  this with your
                       measured spectrum.
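
               A minimal sketch with made-up absorption coefficients (the numbers below are
               placeholders, not tabulated values):

                  import numpy as np
                  from XRStools import xrs_utilities

                  mu1 = np.array([10.0, 10.5, 11.0])   # incident-energy absorption [1/cm]
                  mu2 = np.array([ 9.0,  9.2,  9.4])   # scattered-energy absorption [1/cm]
                  ac = xrs_utilities.absCorrection(mu1, mu2, alpha=45.0, beta=45.0,
                                                   samthick=0.1, geometry='transmission')
                  # multiply the measured spectrum by ac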

       XRStools.xrs_utilities.abscorr2(mu1, mu2, alpha, beta, samthick)
              Calculates absorption correction for given mu1  and  mu2.   Multiply  the  measured
              spectrum with this correction factor.

              This is a translation of Keijo Hamalainen's Matlab function (KH 30.05.96).

              Args:

                     • mu1 (np.array): absorption coefficient for the incident energy in [1/cm].

                     • mu2 (np.array): absorption coefficient for the scattered energy in [1/cm].

                     • alpha (float): incident angle relative to plane normal in [deg].

                     • beta  (float): exit angle relative to plane normal [deg] (for transmission
                       geometry use beta < 0).

                     • samthick (float): sample thickness in [cm].

              Returns:

                     • ac (np.array): absorption  correction  factor.  Multiply  this  with  your
                       measured spectrum.

       XRStools.xrs_utilities.addch(xold, yold, n, n0=0, errors=None)
               # ADDCH  Adds contents of given adjacent channels together
               #
               #        [x2,y2] = addch(x,y,n,n0)
               #        x  = original x-scale (row or column vector)
               #        y  = original y-values (row or column vector)
               #        n  = number of channels to be summed up
               #        n0 = offset for adding, default is 0
               #        x2 = new x-scale
               #        y2 = new y-values
               #
               #        KH 17.09.1990, modified 29.05.1995 to include offset

       XRStools.xrs_utilities.bidiag_reduction(A)
               function [U,B,V] = bidiag_reduction(A)
               % [U B V] = bidiag_reduction(A)
               % Algorithm 6.5-1 in Golub & Van Loan, Matrix Computations,
               % Johns Hopkins University Press
               % Finds an upper bidiagonal matrix B so that A = U*B*V' with U, V orthogonal.
               % A is an m x n matrix.

       XRStools.xrs_utilities.bootstrapCNNMF(A, F_ini, C_ini, F_up, C_up, Niter)
              bootstrapCNNMF Constrained non-negative matrix factorization with bootstrapping for
              error estimates.

       XRStools.xrs_utilities.bootstrapCNNMF_old(A, k, Aerr, F_ini, C_ini, F_up, C_up, Niter=100)
              bootstrapCNNMF Constrained non-negative matrix factorization with bootstrapping for
              error estimates.

       XRStools.xrs_utilities.bragg(hkl, e, xtal='Si')
               % BRAGG  Calculates the Bragg angle for a given reflection in RAD
               %        output = bangle(hkl, e, xtal)
               %        hkl can be a matrix, e.g. hkl=[1,0,0 ; 1,1,1]
               %        e    = energy in keV
               %        xtal = 'Si', 'Ge', etc. (check dspace.m) or d0 (Si default)
               %
               %        KH 28.09.93

       class XRStools.xrs_utilities.bragg_refl(crystal, hkl, alpha=0.0)
              Bases: object

              Dynamical theory of diffraction.

              get_chi(energy, crystal=None, hkl=None)

              get_nff(nff_path=None)

              get_polarization_factor(tth, case='sigma')
                     Calculate polarization factor.

              get_reflectivity(energy, delta_theta, case='sigma')

              get_reflectivity_bent(energy, delta_theta, R)

       XRStools.xrs_utilities.braggd(hkl, e, xtal='Si')
               # BRAGGD  Calculates the Bragg angle for a given reflection in deg
               #         Call BRAGG.M
               #         output = bangle(hkl, e, xtal)
               #         hkl can be a matrix, e.g. hkl=[1,0,0 ; 1,1,1]
               #         e    = energy in keV
               #         xtal = 'Si', 'Ge', etc. (check dspace.m) or d0 (Si default)
               #
               #         KH 28.09.93
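
               For example, the Bragg angle of the Si(660) reflection at 9.7 keV (a short
               sketch, values chosen arbitrarily):

                  from XRStools import xrs_utilities

                  theta_deg = xrs_utilities.braggd([6, 6, 0], 9.7)   # angle in deg
                  theta_rad = xrs_utilities.bragg([6, 6, 0], 9.7)    # angle in rad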

       XRStools.xrs_utilities.cNNMF_chris(A, W_fixed, W_free, maxIter=100, verbose=True)

       XRStools.xrs_utilities.cixsUBfind(x, G, Q_sample, wi, wo, lambdai, lambdao)
              cixsUBfind

       XRStools.xrs_utilities.cixsUBgetAngles_primo(Q)

       XRStools.xrs_utilities.cixsUBgetAngles_secondo(Q)

       XRStools.xrs_utilities.cixsUBgetAngles_terzo(Q)

       XRStools.xrs_utilities.cixsUBgetQ_primo(tthv, tthh, psi)
               Returns Q0 given the detector position (tthv, tthh) and the crystal
               orientation. This orientation is calculated considering:

                  the Bragg condition and the rotation around the G vector:
                         this rotation is defined by psi, which is a rotation around G.

       XRStools.xrs_utilities.cixsUBgetQ_secondo(tthv, tthh, psi)

       XRStools.xrs_utilities.cixsUBgetQ_terzo(tthv, tthh, psi)

       XRStools.xrs_utilities.cixs_primo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_primo

       XRStools.xrs_utilities.cixs_secondo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_secondo

       XRStools.xrs_utilities.cixs_terzo(tthv, tthh, psi, anal_braggd=86.5)
              cixs_terzo

       XRStools.xrs_utilities.compute_matrix_elements(R1, R2, k, r)

       XRStools.xrs_utilities.con2mat(x, W, H, W_up, H_up)

       XRStools.xrs_utilities.constrained_mf(A, W_ini, W_up, coeff_ini,  coeff_up,  maxIter=1000,
       tol=1e-08, maxIter_power=1000)
               cfactorizeOffDiaMatrix: constrained version of factorizeOffDiaMatrix.  Returns
               the main components of an off-diagonal matrix (energy-loss x angular-departure).

       XRStools.xrs_utilities.constrained_svd(M,  U_ini,  S_ini,  VT_ini,  U_up,  max_iter=10000,
       verbose=False)
              constrained_nnmf Approximate singular value decomposition with constraints.

              function                 [U,                 S,                 V]                =
              constrained_svd(M,U_ini,S_ini,V_ini,U_up,max_iter=10000,verbose=False)

       XRStools.xrs_utilities.convertSplitEDF2EDF(foldername)
              converts the old style EDF files (one  image  for  horizontal  and  one  image  for
              vertical chambers) to the new style EDF (one single image).

              Arg:

                     foldername (str): Path to folder with all the EDF-files to be
                            converted.

       XRStools.xrs_utilities.convg(x, y, fwhm)
               Convolution with a Gaussian.
                  x    = x-vector
                  y    = y-vector
                  fwhm = full width at half maximum of the Gaussian with which y is convoluted

       XRStools.xrs_utilities.convtoprim(hklconv)
              convtoprim converts diamond structure reciprocal lattice expressed in  conventional
              lattice vectors to primitive one (Helsinki -> Palaiseau conversion) from S. Huotari

       XRStools.xrs_utilities.cshift(w1, th)
              cshift Calculates Compton peak position.

              Args:

                     • w1 (float, array): Incident energy in [keV].

                     • th (float): Scattering angle in [deg].

              Returns:

                      • w2 (float, array): Energy of the Compton peak in [keV].

               Function adapted from Keijo Hamalainen.
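
               A one-line sketch (incident energy and scattering angle chosen arbitrarily):

                  from XRStools import xrs_utilities

                  # Compton peak position for 9.7 keV incident energy at 2theta = 120 deg
                  w2 = xrs_utilities.cshift(9.7, 120.0)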

       XRStools.xrs_utilities.delE_JohannAberration(E, A, R, Theta)
              Calculates the Johann aberration of a spherical analyzer crystal.

               Args:

                      • E     (float): Working energy in [eV].

                      • A     (float): Analyzer aperture [mm].

                      • R     (float): Radius of the Rowland circle [mm].

                      • Theta (float): Analyzer Bragg angle [degree].

               Returns:
                      Johann aberration in [eV].

       XRStools.xrs_utilities.delE_dicedAnalyzerIntrinsic(E, Dw, Theta)
              Calculates the intrinsic energy resolution of a diced crystal analyzer.

               Args:

                      • E     (float): Working energy in [eV].

                      • Dw    (float): Darwin width of the used reflection [microRad].

                      • Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Intrinsic energy resolution of a perfect analyzer crystal.

       XRStools.xrs_utilities.delE_offRowland(E, z, A, R, Theta)
              Calculates the off-Rowland contribution of a spherical analyzer crystal.

               Args:

                      • E     (float): Working energy in [eV].

                      • z     (float): Off-Rowland distance [mm].

                      • A     (float): Analyzer aperture [mm].

                      • R     (float): Radius of the Rowland circle [mm].

                      • Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Off-Rowland contribution in [eV] to the energy resolution.

       XRStools.xrs_utilities.delE_pixelSize(E, p, R, Theta)
              Calculates the pixel size contribution  to  the  resolution  function  of  a  diced
              analyzer crystal.

               Args:

                      • E     (float): Working energy in [eV].

                      • p     (float): Pixel size in [mm].

                      • R     (float): Radius of the Rowland circle [mm].

                      • Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Pixel  size  contribution  in  [eV]  to  the  energy  resolution for a diced
                     analyzer crystal.

       XRStools.xrs_utilities.delE_sourceSize(E, s, R, Theta)
              Calculates the source size contribution to the resolution function.

               Args:

                      • E     (float): Working energy in [eV].

                      • s     (float): Source size in [mm].

                      • R     (float): Radius of the Rowland circle [mm].

                      • Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Source size contribution in [eV] to the energy resolution.

       XRStools.xrs_utilities.delE_stressedCrystal(E, t, v, R, Theta)
               Calculates the stress-induced contribution to the resolution  function  of  a
               spherically bent crystal analyzer.

               Args:

                      • E     (float): Working energy in [eV].

                      • t     (float): Absorption length in the analyzer material [mm].

                      • v     (float): Poisson ratio of the analyzer material.

                      • R     (float): Radius of the Rowland circle [mm].

                      • Theta (float): Analyzer Bragg angle [degree].

              Returns:
                     Stress-induced contribution in [eV] to the energy resolution.
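
               The individual delE_* contributions are often combined in quadrature to
               estimate the total energy resolution; this summation, and all numerical
               values below, are assumptions of the sketch, not prescribed by the module:

                  import numpy as np
                  from XRStools import xrs_utilities

                  E, R, Theta = 9690.0, 1000.0, 88.0        # [eV], [mm], [deg]
                  contributions = np.array([
                      xrs_utilities.delE_JohannAberration(E, 100.0, R, Theta),
                      xrs_utilities.delE_sourceSize(E, 0.1, R, Theta),
                      xrs_utilities.delE_pixelSize(E, 0.055, R, Theta),
                  ])
                  delE_total = np.sqrt(np.sum(contributions**2))  # quadrature sum (assumption)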

       XRStools.xrs_utilities.diode(current, energy, thickness=0.03)
              diode Calculates the number of photons incident for a Si PIPS diode.

              Args:

                     • current (float): Diode current in [pA].

                     • energy (float): Photon energy in [keV].

                     • thickness (float): Thickness of Si active layer in [cm].

              Returns:

                     • flux (float): Number of photons per second.

              Function adapted from Matlab function by S. Huotari.
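
               A one-line sketch (current and energy chosen arbitrarily):

                  from XRStools import xrs_utilities

                  # photon flux corresponding to 100 pA of diode current at 9.7 keV
                  flux = xrs_utilities.diode(100.0, 9.7)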

       XRStools.xrs_utilities.dspace(hkl=[6, 6, 0], xtal='Si')
               % DSPACE  Gives the d-spacing for a given crystal
               %         d = dspace(hkl, xtal)
               %         hkl can be a matrix, e.g. hkl=[1,0,0 ; 1,1,1]
               %         xtal = 'Si','Ge','LiF','InSb','C','Dia','Li' (case insensitive)
               %         if xtal is a number, it is used as d0
               %
               %         KH 28.09.93, SH 2005

       class     XRStools.xrs_utilities.dtxrd(hkl,    energy,    crystal='Si',    asym_angle=0.0,
       angular_range=[- 0.0005, 0.0005], angular_step=1e-08)
              Bases: object

               Class to hold all things related to the dynamical theory of diffraction.

              get_anomalous_absorption(energy=None)

              get_eta(angular_range, angular_step=1e-08)

              get_extinction_length(energy=None)

              get_reflection_width()

              get_reflectivity(angular_range=None, angular_step=None)

              set_asymmetry(alpha)
                     negative alpha -> more grazing incidence

              set_energy(energy)

              set_hkl(hkl)

       XRStools.xrs_utilities.dtxrd_anomalous_absorption(energy,  hkl,  alpha=0.0,  crystal='Si',
       angular_range=array([- 0.0005]))

       XRStools.xrs_utilities.dtxrd_extinction_length(energy, hkl, alpha=0.0, crystal='Si')

       XRStools.xrs_utilities.dtxrd_reflectivity(energy,     hkl,     alpha=0.0,    crystal='Si',
       angular_range=array([- 0.0005]))

       XRStools.xrs_utilities.e2pz(w1, w2, th)
              Calculates the momentum scale and the relativistic Compton cross section correction
              according to P. Holm, PRA 37, 3706 (1988).

              This  function  is  translated  from  Keijo  Hamalainen's Matlab implementation (KH
              29.05.96).

              Args:

                     • w1 (float or np.array): incident energy in [keV]

                     • w2 (float or np.array): scattered energy in [keV]

                     • th (float): scattering angle two theta in [deg]

              returns:

                     • pz (float or np.array): momentum scale in [a.u.]

                     • cf (float or np.array): cross section correction factor such that: J(pz) =
                       cf * d^2(sigma)/d(w2)*d(Omega) [barn/atom/keV/srad]
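
               A minimal sketch converting a scattered-energy scale to a pz scale (the
               numbers are placeholders):

                  import numpy as np
                  from XRStools import xrs_utilities

                  w1 = 9.7                               # incident energy [keV]
                  w2 = np.linspace(8.0, 9.6, 5)          # scattered energies [keV]
                  pz, cf = xrs_utilities.e2pz(w1, w2, 120.0)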

       XRStools.xrs_utilities.edfread(filename)
               Reads the EDF file with filename "filename".  OUTPUT:  data = 256x256 numpy
               array

       XRStools.xrs_utilities.edfread_test(filename)
               Reads the EDF file with filename "filename".  OUTPUT:  data = 256x256 numpy
               array

               Here is how the HH data was opened:

                  data = np.fromfile(f, np.int32)
                  image = np.reshape(data, (dim, dim))

       XRStools.xrs_utilities.element(z)
              Converts atomic number into string of the element symbol and vice versa.

              Returns atomic number of given element, if z is a string of the element  symbol  or
              string of element symbol of given atomic number z.

              Args:

                     • z (string or int): string of the element symbol or atomic number.

              Returns:

                     • Z (string or int): string of the element symbol or atomic number.
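
               For example:

                  from XRStools import xrs_utilities

                  xrs_utilities.element('Si')   # -> 14
                  xrs_utilities.element(14)     # -> 'Si'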

       XRStools.xrs_utilities.energy(d, ba)
               % ENERGY  Calculates the energy corresponding to a Bragg angle for a given
               %         d-spacing
               %         e = energy(dspace, bragg_angle)
               %         dspace for reflection
               %         bragg_angle in DEG
               %
               %         KH 28.09.93

       XRStools.xrs_utilities.energy_monoangle(angle, d=1.6374176589984608)
               % ENERGY  Calculates the energy corresponding to a Bragg angle for a given
               %         d-spacing
               %         e = energy(dspace, bragg_angle)
               %         dspace for reflection (default is for the Si(311) reflection)
               %         bragg_angle in DEG
               %
               %         KH 28.09.93

       XRStools.xrs_utilities.fermi(rs)
              fermi  Calculates  the plasmon energy (in eV), Fermi energy (in eV), Fermi momentum
              (in a.u.), and critical plasmon cut-off vector (in a.u.).

              Args:

                     • rs (float): electron separation parameter

              Returns:

                     • wp (float): plasmon energy (in eV)

                     • ef (float): Fermi energy (in eV)

                     • kf (float): Fermi momentum (in a.u.)

                     • kc (float): critical plasmon cut-off vector (in a.u.)

              Based on Matlab function from A. Soininen.
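
               A one-line sketch using the rs value quoted elsewhere in this module (the
               lindhard_pol default for Na):

                  from XRStools import xrs_utilities

                  wp, ef, kf, kc = xrs_utilities.fermi(3.93)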

       XRStools.xrs_utilities.find_center_of_mass(x, y)
              Returns the center of mass (first moment) for the given curve y(x)

       XRStools.xrs_utilities.find_diag_angles(q, x0,  U,  B,  Lab,  beam_in,  lambdai,  lambdao,
       tol=1e-08, method='BFGS')
              find_diag_angles Finds the FOURC spectrometer and sample angles for a desired q.

              Args:

                     • q (array): Desired momentum transfer in Lab coordinates.

                     • x0 (list): Guesses for the angles (tthv, tthh, chi, phi, omega).

                     • U (array): 3x3 U-matrix Lab-to-sample transformation.

                     • B   (array):   3x3   B-matrix   reciprocal   lattice   to  absolute  units
                       transformation.

                     • lambdai (float): Incident x-ray wavelength in Angstrom.

                     • lambdao (float): Scattered x-ray wavelength in Angstrom.

                      • tol (float): Tolerance for the minimization (see
                        scipy.optimize.minimize)

                     • method (str): Method for minimization (see scipy.optimize.minimize)

              Returns:

                     • ans (array): tthv, tthh, phi, chi, omega

       XRStools.xrs_utilities.fwhm(x, y)
               Finds the full width at half maximum of the curve y vs. x.

               Returns:
                  f  = FWHM
                  x0 = position of the maximum

       XRStools.xrs_utilities.gauss(x, x0, fwhm)

       XRStools.xrs_utilities.get_UB_Q(tthv, tthh, phi, chi, omega, **kwargs)
              get_UB_Q  Returns  the  momentum  transfer  and  scattering vectors for given FOURC
              spectrometer and sample angles. U-, B-matrices  and  incident/scattered  wavelength
              are passed as keyword-arguments.

              Args:

                     • tthv (float): Spectrometer vertical 2Theta angle.

                     • tthh (float): Spectrometer horizontal 2Theta angle.

                     • chi (float): Sample rotation around x-direction.

                     • phi (float): Sample rotation around y-direction.

                     • omega (float): Sample rotation around z-direction.

                     •

                       kwargs (dict): Dictionary with key-word arguments:

                              • kwargs['U'] (array): 3x3 U-matrix Lab-to-sample transformation.

                              • kwargs['B']  (array): 3x3 B-matrix reciprocal lattice to absolute
                                units transformation.

                              • kwargs['lambdai'] (float): Incident x-ray wavelength in Angstrom.

                              • kwargs['lambdao']  (float):   Scattered   x-ray   wavelength   in
                                Angstrom.

              Returns:

                     • Q_sample  (array): Momentum transfer in sample coordinates.

                     • Ki_sample (array): Incident beam direction in sample coordinates.

                     • Ko_sample (array): Scattered beam direction in sample coordinates.

       XRStools.xrs_utilities.get_gnuplot_rgb(start=None, end=None, length=None)
              get_gnuplot_rgb Prints out a progression of RGB hex-keys to use in Gnuplot.

              Args:

                      • start (array): RGB code to start from (numbers in [0,1]).

                      • end   (array): RGB code to end at (numbers in [0,1]).

                     • length  (int): How many colors to print out.

       XRStools.xrs_utilities.get_num_of_MD_steps(time_ps, time_step)
              Calculates  the  number of steps in an MD simulation for a desired time (in ps) and
              given step size (in a.u.)

               Args:

                      • time_ps   (float): Desired time span (ps).

                      • time_step (float): Chosen time step (a.u.).

              Returns:
                     The number of steps required to span the desired time span.

       XRStools.xrs_utilities.getpenetrationdepth(energy, formulas, concentrations, densities)
              returns  the  penetration  depth  of  a  mixture  of chemical formulas with certain
              concentrations and densities

       XRStools.xrs_utilities.gettransmission(energy,   formulas,   concentrations,    densities,
       thickness)
               Returns the transmission through a sample of given thickness, composed  of
               chemical formulas with certain densities mixed at certain concentrations.
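
               A hedged sketch; the compounds, concentrations, densities and thickness below
               are placeholders chosen only for illustration:

                  import numpy as np
                  from XRStools import xrs_utilities

                  energy = np.linspace(9.0, 10.0, 11)       # [keV]
                  T = xrs_utilities.gettransmission(energy,
                                                    ['SiO2', 'H2O'],   # formulas
                                                    [0.3, 0.7],        # concentrations
                                                    [2.65, 1.0],       # densities
                                                    0.1)               # thickness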

       XRStools.xrs_utilities.hex2rgb(hex_val)

       XRStools.xrs_utilities.hlike_Rwfn(n, l, r, Z)
              hlike_Rwfn Returns an array with the radial part of a hydrogen-like wave function.

              Args:

                     • n (integer): main quantum number n

                      • l (integer): orbital quantum number l

                     • r (array): vector of radii on which the function should be evaluated

                     • Z (float): effective nuclear charge

       XRStools.xrs_utilities.householder(b, k)
               function H = householder(b, k)
               % H = householder(b, k)
               % Atkinson, Section 9.3, p. 611
               % b is a column vector, k an index < length(b)
               % Constructs a matrix H that annihilates entries
               % in the product H*b below index k

               % $Id: householder.m,v 1.1 2008-01-16 15:33:30 mike Exp $
               % M. M. Sussman

       XRStools.xrs_utilities.interpolate_M(xc, xi, yi, i0)
                 Linear interpolation scheme after Martin Sundermann that conserves the  absolute
                 number of counts.

                 ONLY WORKS FOR EQUALLY/EVENLY SPACED XC, XI!

                  Args:

                         • xc (np.array): The x-coordinates of the interpolated values.

                         • xi (np.array): The x-coordinates of the data points, must be
                           increasing.

                         • yi (np.array): The y-coordinates of the data points, same length
                           as xi.

                         • i0 (np.array): Normalization values for the data points, same
                           length as xi.

                  Returns:
                         ic (np.array): The interpolated and normalized data points.

               from scipy.interpolate import Rbf
               x = arange(20)
               d = zeros(len(x))
               d[10] = 1
               xc = arange(0.5, 19.5)
               rbfi = Rbf(x, d)
               di = rbfi(xc)

       XRStools.xrs_utilities.is_allowed_refl_fcc(H)
              is_allowed_refl_fcc Check if given reflection is allowed for a FCC lattice.

              Args:

                     • H (array, list, tuple): H=[h,k,l]

              Returns:

                     • boolean

       XRStools.xrs_utilities.lindhard_pol(q, w, rs=3.93, use_corr=False, lifetime=0.28)
              lindhard_pol  Calculates  the  Lindhard polarizability function (RPA) for certain q
              (a.u.), w (a.u.) and rs (a.u.).

              Args:

                     • q (float): momentum transfer (in a.u.)

                     • w (float): energy (in a.u.)

                      • rs (float): electron separation parameter (in a.u.)

                     • use_corr (boolean): if True, uses Bernardo's calculation for n(k)  instead
                       of the Fermi function.

                     • lifetime (float): life time (default is 0.28 eV for Na).

              Based on Matlab function by S. Huotari.

       XRStools.xrs_utilities.makeprofile(element,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35.0, correctasym=None)
               Takes the profiles from 'makepzprofile()', converts them onto the  energy-loss
               scale and normalizes them to S(q,w) [1/eV].

               input:
                  element  = element symbol (e.g. 'Si', 'Al', etc.)
                  filename = path and filename to tabulated profiles
                  E0       = scattering energy [keV]
                  tth      = scattering angle [deg]

               returns:
                  enscale = energy loss scale
                  J       = total CP
                  C       = only core contribution to CP
                  V       = only valence contribution to CP
                  q       = momentum transfer [a.u.]

       XRStools.xrs_utilities.makeprofile_comp(formula,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35, correctasym=None)
               Returns the Compton profile of a chemical compound with formula 'formula'.

               input:
                  formula  = string of a chemical formula (e.g. 'SiO2', 'Ba8Si46', etc.)
                  filename = path and filename to tabulated profiles
                  E0       = scattering energy [keV]
                  tth      = scattering angle [deg]

               returns:
                  eloss = energy loss scale
                  J     = total CP
                  C     = only core contribution to CP
                  V     = only valence contribution to CP
                  q     = momentum transfer [a.u.]
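
               A minimal sketch, assuming the five documented return values come back in the
               listed order (scattering conditions chosen arbitrarily, default tabulation
               file):

                  from XRStools import xrs_utilities

                  eloss, J, C, V, q = xrs_utilities.makeprofile_comp('SiO2',
                                                                     E0=9.69, tth=35.0)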

       XRStools.xrs_utilities.makeprofile_compds(formulas,                   concentrations=None,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat',
       E0=9.69, tth=35.0, correctasym=None)
               Returns the sum of Compton profiles from a list of chemical  compounds,
               weighted by the given concentrations.

       XRStools.xrs_utilities.makepzprofile(element,
       filename='/usr/lib/python3/dist-packages/XRStools/resources/data/ComptonProfiles.dat')
              constructs  compton  profiles of element 'element' on pz-scale (-100:100 a.u.) from
              the Biggs tables provided in 'filename'

              input:

                     • element   = element symbol (e.g. 'Si', 'Al', etc.)

                     • filename  = path and filename to tabulated profiles

              returns:

                      • pzprofile = numpy array of the CP:
                           1st column: pz-scale
                           2nd ... nth columns: Compton profile of the nth shell

                      • binden     = binding energies of the shells

                      • occupation = number of electrons in the according shells

       XRStools.xrs_utilities.mat2con(W, H, W_up, H_up)

       XRStools.xrs_utilities.mat2vec(F, C, F_up, C_up, n, k, m)

       class XRStools.xrs_utilities.maxipix_det(name, spot_arrangement)
              Bases: object

              Class to store some useful values from the detectors used. To be used for arranging
              the ROIs.

              get_det_name()

              get_pixel_range()

       XRStools.xrs_utilities.momtrans_au(e1, e2, tth)
               Calculates the momentum transfer in atomic units.

               input:
                  e1  = incident energy [keV]
                  e2  = scattered energy [keV]
                  tth = scattering angle [deg]

               returns:
                  q = momentum transfer [a.u.] (corresponding to sin(th)/lambda)
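
               For example (energies and angle chosen arbitrarily):

                  from XRStools import xrs_utilities

                  q_au   = xrs_utilities.momtrans_au(9.7, 9.0, 120.0)    # atomic units
                  q_invA = xrs_utilities.momtrans_inva(9.7, 9.0, 120.0)  # inverse angstrom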

       XRStools.xrs_utilities.momtrans_inva(e1, e2, tth)
               Calculates the momentum transfer in inverse angstrom.

               input:
                  e1  = incident energy [keV]
                  e2  = scattered energy [keV]
                  tth = scattering angle [deg]

               returns:
                  q = momentum transfer [1/A] (corresponding to sin(th)/lambda)

       XRStools.xrs_utilities.mpr(energy, compound)
               Calculates the photoelectric, elastic, and inelastic absorption of a  chemical
               compound.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • compound (string): chemical sum formula (e.g. 'SiO2')

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.mpr_compds(energy, formulas, concentrations, E0, rho_formu)
              Calculates  the  photoelectric,  elastic,  and  inelastic  absorption  of  a mix of
              compounds.

              Returns the photoelectric absorption for a sum of different chemical compounds.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • formulas (list of strings): list of chemical sum formulas

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.myprho(energy,                                                   Z,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
               Calculates the photoelectric, elastic, and inelastic absorption of an  element
               Z.  Z can be the atomic number or the element symbol.

              Args:

                     • energy (np.array): energy scale in [keV].

                     • Z (string or int): atomic number or string of element symbol.

              Returns:

                     • murho (np.array): absorption coefficient normalized by the density.

                     • rho (float): density in UNITS?

                     • m (float): atomic mass in UNITS?

       XRStools.xrs_utilities.nonzeroavg(y=None)

       XRStools.xrs_utilities.odefctn(y, t, abb0, abb1, abb7, abb8, lex, sgbeta, y0, c1)
               #% [T,Y] = ODE23(ODEFUN,TSPAN,Y0,OPTIONS,P1,P2,...) passes the additional
               #% parameters P1,P2,... to the ODE function as ODEFUN(T,Y,P1,P2...), and to
               #% all functions specified in OPTIONS. Use OPTIONS = [] as a place holder if
               #% no options are set.

       XRStools.xrs_utilities.odefctn_CN(yCN, t, abb0, abb1, abb7, abb8N, lex, sgbeta, y0, c1)

       XRStools.xrs_utilities.parseformula(formula)
              Parses a chemical sum formula.

               Parses the constituent elements and stoichiometries from a given chemical  sum
               formula.

              Args:

                     • formula (string): string of a chemical formula  (e.g.  'SiO2',  'Ba8Si46',
                       etc.)

              Returns:

                     • elements (list): list of strings of constituting elemental symbols.

                     • stoichiometries  (list):  list  of  according  stoichiometries in the same
                       order as 'elements'.
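
               For example (the ordering of the returned lists follows the parser; the
               values in the comment are shown only for illustration):

                  from XRStools import xrs_utilities

                  elements, stoichiometries = xrs_utilities.parseformula('Ba8Si46')
                  # e.g. elements -> ['Ba', 'Si'], stoichiometries -> [8.0, 46.0]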

       XRStools.xrs_utilities.plotpenetrationdepth(energy, formulas, concentrations, densities)
              opens a plot window of the penetration depth of a mixture of chemical formulas with
              certain concentrations and densities plotted along the given energy vector

       XRStools.xrs_utilities.plottransmission(energy,   formulas,   concentrations,   densities,
       thickness)
              opens a plot with the transmission plotted along the given energy vector

       XRStools.xrs_utilities.primtoconv(hklprim)
              primtoconv converts diamond structure reciprocal  lattice  expressed  in  primitive
              basis to the conventional basis (Palaiseau -> Helsinki conversion) from S. Huotari

       XRStools.xrs_utilities.pz2e1(w2, pz, th)
              Calculates the incident energy for a specific scattered photon and momentum value.

              Returns  the  incident energy for a given photon energy and scattering angle.  This
              function is translated from Keijo Hamalainen's Matlab implementation (KH 29.05.96).

              Args:

                     • w2 (float): scattered photon energy in [keV]

                     • pz (np.array): pz scale in [a.u.]

                     • th (float): scattering angle two theta in [deg]

              Returns:

                     • w1 (np.array): incident energy in [keV]
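
               A minimal sketch (scattered energy, pz range, and angle are placeholders):

                  import numpy as np
                  from XRStools import xrs_utilities

                  pz = np.linspace(-10.0, 10.0, 201)        # [a.u.]
                  w1 = xrs_utilities.pz2e1(9.69, pz, 120.0) # incident energies [keV]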

       XRStools.xrs_utilities.read_dft_wfn(element,          n,           l,           spin=None,
       directory='/usr/lib/python3/dist-packages/XRStools/resources/data')
              read_dft_wfn Parses radial parts of wavefunctions.

              Args:

                     • element (str): Element symbol.

                     • n (int): Main quantum number.

                     • l (int): Orbital quantum number.

                     • spin (str): Which spin channel, default is average over up and down.

                     • directory (str): Path to directory where the wavefunctions can be found.

              Returns:

                     • r (np.array): radius

                     • wfn (np.array):

       XRStools.xrs_utilities.readbiggsdata(filename, element)
              Reads  Hartree-Fock  Profile of element 'element' from values tabulated by Biggs et
              al. (Atomic Data and Nuclear Data Tables 16, 201-309 (1975))  as  provided  by  the
              DABAX                                   library                                  (‐
              http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/ComptonProfiles.dat).       input:
              filename  =  path  to  the ComptonProfiles.dat file (the file should be distributed
              with this package) element  = string of element name returns:

                 •

                   data = the data for the according element as in the file:

                          • #UD  Columns:

                          • #UD  col1: pz in atomic units

                           • #UD  col2: Total Compton profile (sum over the atomic
                             electrons)

                          • #UD  col3,...coln: Compton profile for the individual sub-shells

                 • occupation = occupation number of the according shells

                  • bindingen  = binding energies of the according shells

                 • colnames   = strings of column names as used in the file

       XRStools.xrs_utilities.readfio(prefix, scannumber, repnumber=0)
               If repnumber = 0: reads a spectra-file (name: prefix_scannumber.fio).
               If repnumber > 1: reads a spectra-file (name: prefix_scannumber_rrepnumber.fio).

       XRStools.xrs_utilities.readp01image(filename)
              reads a detector file from PetraIII beamline P01

       XRStools.xrs_utilities.readp01scan(prefix, scannumber)
              reads a whole scan from PetraIII beamline P01 (experimental)

       XRStools.xrs_utilities.readp01scan_rep(prefix, scannumber, repetition)
               reads a whole scan with repetitions from PetraIII beamline P01 (experimental)

       XRStools.xrs_utilities.savitzky_golay(y, window_size, order, deriv=0, rate=1)
               Smooth (and optionally differentiate) data with a Savitzky-Golay  filter.  The
               Savitzky-Golay filter removes high-frequency noise from data.  It has the
               advantage of preserving the original shape and features of the signal better
               than other types of filtering approaches, such as moving-average techniques.

              Parameters:

                     • y : array_like, shape (N,) the values of the time history of the signal.

                     • window_size : int the length of the window. Must be an odd integer number.

                      • order : int the order of the polynomial used in the filtering.   Must
                        be less than window_size - 1.

                     • deriv:  int the order of the derivative to compute (default = 0 means only
                       smoothing)

              Returns

                      • ys : ndarray, shape (N) the smoothed signal (or its n-th derivative).

               Notes: The Savitzky-Golay filter is a type of low-pass  filter,  particularly
                      suited for smoothing noisy data. The main idea behind this approach is
                      to make, for each point, a least-squares fit with a high-order
                      polynomial over an odd-sized window centered at the point.

              Examples

                  import numpy as np
                  import matplotlib.pyplot as plt
                  from XRStools.xrs_utilities import savitzky_golay

                  t = np.linspace(-4, 4, 500)
                  y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
                  ysg = savitzky_golay(y, window_size=31, order=4)
                  plt.plot(t, y, label='Noisy signal')
                  plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
                  plt.plot(t, ysg, 'r', label='Filtered signal')
                  plt.legend()
                  plt.show()

              References ::

              [1]  A.  Savitzky,  M.  J.  E.  Golay,  Smoothing  and  Differentiation  of Data by
                   Simplified Least Squares Procedures. Analytical Chemistry, 1964,  36  (8),  pp
                   1627-1639.

              [2]  Numerical  Recipes  3rd  Edition:  The Art of Scientific Computing W.H. Press,
                   S.A. Teukolsky, W.T. Vetterling,  B.P.  Flannery  Cambridge  University  Press
                   ISBN-13: 9780521880688

       XRStools.xrs_utilities.sgolay2d(z, window_size, order, derivative=None)

       XRStools.xrs_utilities.sigmainc(Z,                                                 energy,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
              sigmainc Calculates the Incoherent Scattering Cross Section in cm^2/g using Log-Log
              Fit.

              Args:

                      • Z (int or string): Element number or element symbol.

                     • energy (float or array): Energy (can be number or vector)

              Returns:

                      • tau (float or array): Incoherent scattering cross section in
                        [cm**2/g]

              Adapted from original Matlab function of Keijo Hamalainen.

       XRStools.xrs_utilities.specread(filename, nscan)
              reads scan "nscan" from SPEC-file "filename"

              INPUT:

                     • filename = string with the SPEC-file name

                     • nscan    = number (int) of desired scan

              OUTPUT:

                      • data     = np.array of the scan data

                      • motors   = list of the motor positions from the scan header

                      • counters = dictionary with the counter names as keys

       XRStools.xrs_utilities.spline2(x, y, x2)
               Extrapolates the smaller and larger values as a constant.

       XRStools.xrs_utilities.split_hdf5_address(dataadress)

       XRStools.xrs_utilities.stiff_compl_matrix_Si(e1, e2, e3, ansys=False)
               stiff_compl_matrix_Si Returns the stiffness and compliance tensors of Si for a
               given orientation.

              Args:

                     • e1 (np.array): unit vector normal to crystal surface

                      • e2 (np.array): unit vector in the crystal surface

                     • e3 (np.array): unit vector orthogonal to e2

              Returns:

                     • S (np.array): compliance tensor in new coordinate system

                      • C (np.array): stiffness tensor in new coordinate system

                     • E (np.array): Young's modulus in [GPa]

                     • G (np.array): shear modulus in [GPa]

                     • nu (np.array): Poisson ratio

              Copied from S.I. of L. Zhang et al. "Anisotropic  elasticity  of  silicon  and  its
              application  to  the  modelling  of  X-ray  optics."  J. Synchrotron Rad. 21, no. 3
              (2014): 507-517.

       XRStools.xrs_utilities.sumx(A)
               Short-hand command to sum over the 1st dimension of an N-D matrix  (N>2)  and
               to squeeze it to an (N-1)-D matrix.

       XRStools.xrs_utilities.svd_my(M, maxiter=100, eta=0.1)

        XRStools.xrs_utilities.taupgen(e,  hkl=[6, 6, 0],  crystals='Si',  R=1.0,
        dev=array([-50., -49., -48., ..., 148., 149.]), alpha=0.0)
               % TAUPGEN  Calculates the reflectivity curves of bent crystals
               %
               % function [refl,e,dev] = taupgen_new(e,hkl,crystals,R,dev,alpha);
               %
               %          e        = fixed nominal energy in keV
               %          hkl      = reflection order vector, e.g. [1 1 1]
               %          crystals = crystal string, e.g. 'si' or 'ge'
               %          R        = bending radius in meters
               %          dev      = deviation parameter for which the curve will be
               %                     calculated (vector) (optional)
               %          alpha    = asymmetry angle
               %
               % based on a FORTRAN program of Michael Krisch
               % Transliterated to Matlab by Simo Huotari 2006, 2007
               % Is far away from being good Matlab writing - mostly copy&paste from
               % the Fortran routines. Frankly, my dear, I don't give a damn.
               % Complaints -> /dev/null

        XRStools.xrs_utilities.taupgen_amplitude(e,  hkl=[6, 6, 0],  crystals='Si',  R=1.0,
        dev=array([-50., -49., -48., ..., 148., 149.]), alpha=0.0)
               % TAUPGEN  Calculates the reflectivity curves of bent crystals
               %
               % function [refl,e,dev] = taupgen_new(e,hkl,crystals,R,dev,alpha);
               %
               %          e        = fixed nominal energy in keV
               %          hkl      = reflection order vector, e.g. [1 1 1]
               %          crystals = crystal string, e.g. 'si' or 'ge'
               %          R        = bending radius in meters
               %          dev      = deviation parameter for which the curve will be
               %                     calculated (vector) (optional)
               %          alpha    = asymmetry angle
               %
               % based on a FORTRAN program of Michael Krisch
               % Transliterated to Matlab by Simo Huotari 2006, 2007
               % Is far away from being good Matlab writing - mostly copy&paste from
               % the Fortran routines. Frankly, my dear, I don't give a damn.
               % Complaints -> /dev/null

       XRStools.xrs_utilities.tauphoto(Z,                                                 energy,
       logtablefile='/usr/lib/python3/dist-packages/XRStools/resources/data/logtable.dat')
              tauphoto Calculates Photoelectric Cross Section in cm^2/g using Log-Log Fit.

              Args:

                      • Z (int or string): Element number or element symbol.

                     • energy (float or array): Energy (can be number or vector)

              Returns:

                     • tau (float or array): Photoelectric cross section in [cm**2/g]

              Adapted from original Matlab function of Keijo Hamalainen.
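
               A short sketch (element and energies chosen arbitrarily):

                  import numpy as np
                  from XRStools import xrs_utilities

                  energy = np.array([9.0, 9.5, 10.0])              # [keV]
                  tau    = xrs_utilities.tauphoto('Si', energy)    # photoelectric [cm**2/g]
                  sigma  = xrs_utilities.sigmainc('Si', energy)    # incoherent [cm**2/g]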

       XRStools.xrs_utilities.unconstrained_mf(A, numComp=3, maxIter=1000, tol=1e-08)
              unconstrained_mf Returns main components from an off-diagonal Matrix (energy-loss x
              angular-departure),  using  the  power  method  iteratively  on  the different main
              components.

       XRStools.xrs_utilities.vangle(v1, v2)
              vangle Calculates the angle between two cartesian vectors v1 and v2 in degrees.

              Args:

                     • v1 (np.array): first vector.

                     • v2 (np.array): second vector.

              Returns:

                     • th (float): angle between first and second vector.

               Function by S. Huotari, adapted for Python.

       XRStools.xrs_utilities.vec2mat(x, F, C, F_up, C_up, n, k, m)

       XRStools.xrs_utilities.vrot(v, vaxis, phi)
              vrot Rotates a vector around a given axis.

              Args:

                     • v (np.array): vector to be rotated

                     • vaxis (np.array): rotation axis

                     • phi (float): angle [deg] respecting the right-hand rule

              Returns:

                     • v2 (np.array): new rotated vector

               Function by S. Huotari (2007), adapted to Python.

       XRStools.xrs_utilities.vrot2(vector1, vector2, angle)
              rotMatrix Rotate vector1 around vector2 by an angle.

       XRStools.xrs_utilities.xas_fluo_correct(ene,  mu,  formula,  fluo_ene,  edge_ene,   angin,
       angout)
              xas_fluo_correct  Fluorescence yield over-absorption correction as in Larch/Athena.
              see: https://www3.aps.anl.gov/haskel/FLUO/Fluo-manual.pdf

              Args:

                     • ene (np.array): energy axis in [keV]

                     • mu (np.array): measured fluorescence spectrum

                     • formula (str): chemical sum formulas (e.g. 'SiO2')

                     • fluo_ene (float): energy in keV of main fluorescence line

                     • edge_ene (float): edge energy in [keV]

                     • angin (float): incidence angle (relative to sample normal) [deg.]

                     • angout (float): exit angle (relative to sample normal) [deg.]

              Returns:

                     • ene (np.array): energy axis in [keV]

                     • mu_corr (np.array): corrected fluorescence spectrum

   XRStools.roifinder_and_gui Module

AUTHOR

       Christoph Sahle, Alessandro Mirone

COPYRIGHT

       2022, Christoph Sahle, Alessandro Mirone